The document Situational Awareness by Leopold Aschenbrenner is a high-level forecast and strategic outlook on the trajectory of artificial intelligence from GPT-4 to artificial general intelligence (AGI), and then to superintelligence—all within the next few years, possibly by 2027.
1. The AGI Race Has Begun
We're in a full-scale AI arms race, both economic and geopolitical.
Industrial-scale investment is ramping up (trillion-dollar compute clusters, energy grid mobilization).
Aschenbrenner argues that AGI—AI as smart or smarter than humans—is highly plausible by 2027, following trendlines in compute, algorithmic improvements, and “unhobbling” (turning raw models into agents/co-workers).
2. Counting the OOMs (Orders of Magnitude)
Progress is tracked via exponential growth in effective compute, which combines hardware scale-up with software (algorithmic) efficiency gains.
From GPT-2 (~preschooler-level) to GPT-4 (~smart high-schooler) took about 4 years. A similar qualitative leap is expected by 2027, potentially reaching PhD-level AI researchers.
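The OOM bookkeeping can be made concrete with a minimal sketch. The per-year rates below are illustrative assumptions for the sake of the arithmetic, not figures taken from the document:

```python
# Toy sketch of "counting the OOMs": effective compute grows from both
# physical compute scale-up and algorithmic efficiency gains.
# The per-year OOM rates here are illustrative assumptions only.

OOMS_PER_YEAR = {
    "physical_compute": 0.5,        # assumed hardware/cluster scale-up
    "algorithmic_efficiency": 0.5,  # assumed software efficiency gains
}

def effective_compute_ooms(years: float) -> float:
    """Total orders of magnitude of effective compute gained over `years`."""
    return years * sum(OOMS_PER_YEAR.values())

def multiplier(years: float) -> float:
    """Effective-compute multiplier implied by those OOMs (10 ** OOMs)."""
    return 10 ** effective_compute_ooms(years)

# e.g. effective_compute_ooms(4) -> 4.0 OOMs, multiplier(4) -> 10000.0
```

Under these assumed rates, four years yields four OOMs, i.e. a ~10,000x effective-compute multiplier, which is the scale of jump the GPT-2-to-GPT-4 comparison gestures at.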
3. From AGI to Superintelligence
Once AI can improve itself (automate AI R&D), we’ll enter an intelligence explosion—compressing decades of progress into months or even weeks.
This could give rise to vast superintelligence, akin to the leap from atomic bombs to hydrogen bombs—profoundly powerful and destabilizing.
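The "decades compressed into months" claim can be illustrated with a toy compounding model, assuming automated AI researchers speed up each successive R&D cycle (all parameters here are hypothetical, chosen only to show the compounding dynamic):

```python
# Toy model of an intelligence explosion: once AI automates AI R&D,
# each improvement cycle accelerates the next. All parameters are
# illustrative assumptions, not figures from the document.

def years_of_progress(cycles: int,
                      base_rate: float = 1.0,
                      speedup_per_cycle: float = 2.0,
                      cycle_length_years: float = 0.25) -> float:
    """Human-equivalent years of research produced over `cycles` cycles,
    if each cycle runs `speedup_per_cycle` times faster than the last."""
    total = 0.0
    rate = base_rate  # research output per calendar year, human-equivalent
    for _ in range(cycles):
        total += rate * cycle_length_years
        rate *= speedup_per_cycle
    return total

# e.g. years_of_progress(6) -> 15.75
```

With these assumed parameters, six quarter-year cycles (1.5 calendar years) produce roughly 15.75 human-equivalent years of research, showing how compounding self-improvement compresses decades of progress into a short span.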
4. National Security & Global Stakes
The U.S. is likely to treat AGI like the Manhattan Project—eventually nationalizing or heavily securing its development.
The “free world” vs. authoritarian powers framing is emphasized; the stakes are likened to existential Cold War-era risks.
5. Technical & Strategic Bottlenecks
Challenges include: compute availability, securing AI labs from espionage (especially the CCP), solving alignment (controlling superintelligence), and preparing governance and infrastructure.
6. The Project
As AGI nears, a major U.S. government initiative will likely emerge—“The Project”—to handle the implications and steer development toward national objectives.
7. Uncertainty and Acceleration
The document acknowledges large error bars in the timeline but asserts that if current trendlines hold, AGI and superintelligence are imminent.
There’s potential for a sudden commercial and societal transformation—what he calls the “sonic boom” effect—once agents are capable enough to be drop-in replacements for knowledge workers.