---
link: https://ai-2027.com/
published: 2025-04-03
tags:
 - article
 - AI
---
Authors: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean

We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

We wrote a scenario that represents our best guess about what that might look like.[^1] It’s informed by trend extrapolations, wargames, expert feedback, experience at OpenAI, and previous forecasting successes.[^2]

[PDF](https://ai-2027.com/scenario.pdf)

## Home

### What is this?

The CEOs of [OpenAI](https://www.bloomberg.com/features/2025-sam-altman-interview), [Google DeepMind](https://www.bigtechnology.com/p/google-deepmind-ceo-demis-hassabis), and [Anthropic](https://www.wsj.com/livecoverage/stock-market-today-dow-sp500-nasdaq-live-01-21-2025/card/anthropic-ceo-says-ai-could-surpass-human-intelligence-by-2027-9tka9tjLKLalkXX8IgKA) have all predicted that AGI will arrive within the next 5 years. Sam Altman [has said](https://webcf.waybackmachine.org/web/20250106014723/https://blog.samaltman.com/reflections) OpenAI is setting its sights on “superintelligence in the true sense of the word” and the “glorious future.”[^3]

What might that look like? We wrote AI 2027 to answer that question. Claims about the future are often frustratingly vague, so we tried to be as concrete and quantitative as possible, even though this means depicting one of many possible futures.

We wrote two endings: a “slowdown” and a “race” ending. However, AI 2027 is not a recommendation or exhortation. Our goal is predictive accuracy.[^4]

We encourage you to debate and counter this scenario.[^5] We hope to spark a broad conversation about where we’re headed and how to steer toward positive futures. We’re [planning to give out thousands in prizes](https://ai-2027.com/about#bets-and-bounties) to the best alternative scenarios.

### How did we write it?

Our research on key questions (e.g. what goals will future AI agents have?) can be found [here](https://ai-2027.com/research).

The scenario itself was written iteratively: we wrote the first period (up to mid-2025), then the following period, etc. until we reached the ending. We then scrapped this and did it again.

We weren’t trying to reach any particular ending. After we finished the first ending—which is now colored red—we wrote a new alternative branch because we wanted to also depict a more hopeful way things could end, starting from roughly the same premises. This went through several iterations.[^6]

Our scenario was informed by approximately 25 [tabletop exercises](https://ai-2027.com/about?tab=tabletop-exercise#tab-box-tabletop-exercise) and feedback from over 100 people, including dozens of experts in each of AI governance and AI technical work.

### Why is it valuable?

_“I highly recommend reading this scenario-type prediction on how AI could transform the world in just a few years. Nobody has a crystal ball, but this type of content can help notice important questions and illustrate the potential impact of emerging risks.”_ —_Yoshua Bengio[^7]_

We have set ourselves an impossible task. Trying to predict how superhuman AI in 2027 would go is like trying to predict how World War 3 in 2027 would go, except that it’s an even larger departure from past case studies. Yet it is still valuable to attempt, just as it is valuable for the US military to game out Taiwan scenarios.

Painting the whole picture makes us notice important questions or connections we hadn’t considered or appreciated before, or realize that a possibility is more or less likely. Moreover, by sticking our necks out with concrete predictions, and encouraging others to publicly state their disagreements, we make it possible to evaluate years later who was right.

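One way to cash out “evaluate years later who was right” is to score each concrete probabilistic claim once it resolves. A minimal sketch using the Brier score; the forecasters, predictions, and probabilities below are hypothetical, not taken from the scenario:

```python
# Brier score: mean squared error between stated probabilities and
# outcomes (1 = happened, 0 = didn't). Lower is better; always
# guessing 50% scores exactly 0.25.

def brier(forecasts: list[tuple[float, bool]]) -> float:
    """forecasts: (probability assigned, whether it happened)."""
    return sum((p - float(o)) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical resolved predictions from two forecasters.
alice = [(0.9, True), (0.7, True), (0.2, False)]  # confident and mostly right
bob = [(0.5, True), (0.5, True), (0.5, False)]    # hedges everything

print(f"Alice: {brier(alice):.3f}")  # 0.047
print(f"Bob:   {brier(bob):.3f}")    # 0.250
```
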
Also, one author wrote a lower-effort AI scenario [before, in August 2021](https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like). While it got many things wrong, overall it was surprisingly successful: he predicted the rise of chain-of-thought, inference scaling, sweeping AI chip export controls, and $100 million training runs—all more than a year before ChatGPT.

### Who are we?

[Daniel Kokotajlo](https://x.com/DKokotajlo) ([TIME100](https://time.com/7012881/daniel-kokotajlo/), [NYT piece](https://www.nytimes.com/2024/06/04/technology/openai-culture-whistleblowers.html)) is a former OpenAI researcher whose previous [AI predictions](https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like) have [held up well](https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating-what-2026-looks-like-so-far).

[Eli Lifland](https://www.linkedin.com/in/eli-lifland/) co-founded [AI Digest](https://theaidigest.org/), did [AI robustness research](https://scholar.google.com/citations?user=Q33DXbEAAAAJ&hl=en), and ranks #1 on the [RAND Forecasting Initiative](https://www.rand.org/global-and-emerging-risks/forecasting-initiative.html) all-time leaderboard.

[Thomas Larsen](https://www.linkedin.com/in/thomas-larsen/) founded the [Center for AI Policy](https://www.centeraipolicy.org/) and did AI safety research at the [Machine Intelligence Research Institute](https://intelligence.org/).

[Romeo Dean](https://www.linkedin.com/in/romeo-dean-789313200/) is completing a concurrent bachelor’s and master’s degree in computer science at Harvard and was previously an [AI Policy Fellow](https://www.iaps.ai/romeo-dean) at the Institute for AI Policy and Strategy.

[Scott Alexander, blogger extraordinaire](https://en.wikipedia.org/wiki/Slate_Star_Codex), volunteered to rewrite our content in an engaging style; the fun parts of the story are his and the boring parts are ours.

## Mid 2025: Stumbling Agents

## Late 2025: The World’s Most Expensive AI

> Although models are improving on a wide range of skills, one stands out: <mark>OpenBrain focuses on AIs that can speed up AI research.</mark> They want to win the twin arms races against China (whose leading company we’ll call “DeepCent”)[16](https://ai-2027.com/footnotes#footnote-16) and their US competitors.

## Early 2026: Coding Automation

## Mid 2026: China Wakes Up

> Chip export controls and lack of government support have left China under-resourced compared to the West. By smuggling banned Taiwanese chips, buying older chips, and producing domestic chips about three years behind the US-Taiwanese frontier, China has managed to maintain about 12% of the world’s AI-relevant compute—but the older technology is harder to work with, and supply is a constant headache.[^35]

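Footnote 35 breaks these figures down at the chip level; below is a minimal sketch of that H100-equivalent (H100e) bookkeeping. The per-chip ratios are back-solved from the footnote’s own totals rather than independent benchmarks:

```python
# H100-equivalent (H100e) bookkeeping from footnote 35. Per-chip ratios
# are implied by the footnote's own figures (e.g. 60K GB300s -> 450K H100e),
# not independent performance estimates.
acquisitions = {
    "smuggled GB300s": (60_000, 7.5),                # 450K H100e
    "Huawei 910Cs": (2_000_000, 0.4),                # 800K H100e
    "legal imports (H20s/B20s)": (1_000_000, 0.25),  # 250K H100e
}

stock_mid_2025 = 1_500_000  # H100e China already had in mid-2025

added = sum(count * ratio for count, ratio in acquisitions.values())
total = stock_mid_2025 + added
print(f"added {added:,.0f} H100e; total {total:,.0f} H100e")  # 1,500,000; 3,000,000

# The "about 12% of the world's AI-relevant compute" claim then implies:
print(f"implied world total: {total / 0.12:,.0f} H100e")  # ~25,000,000
```
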
## Late 2026: AI Takes Some Jobs

> AI has started to take jobs, but has also created new ones. The stock market has gone up 30% in 2026, led by OpenBrain, Nvidia, and whichever companies have most successfully integrated AI assistants. <mark>The job market for junior software engineers is in turmoil: the AIs can do everything taught by a CS degree, but people who know how to manage and quality-control teams of AIs are making a killing.</mark> Business gurus tell job seekers that familiarity with AI is the most important skill to put on a resume. Many people fear that the next wave of AIs will come for their jobs; there is a 10,000-person anti-AI protest in DC.

## January 2027: Agent-2 Never Finishes Learning

## February 2027: China Steals Agent-2

## March 2027: Algorithmic Breakthroughs

## April 2027: Alignment for Agent-3

## May 2027: National Security

## June 2027: Self-improving AI

## August 2027: The Geopolitics of Superintelligence

## September 2027: Agent-4, the Superhuman AI Researcher

## October 2027: Government Oversight

# Slowdown

# Race

## November 2027: Superhuman Politicking

## December 2027: The Agent-5 Collective

## 2028: The AI Economy

## 2029: The Deal

> Humans realize that they are obsolete. A few niche industries still trade with the robot economy, supplying goods where the humans can still add value.[^30] Everyone else either performs a charade of doing their job—leaders still leading, managers still managing—or relaxes and collects an incredibly luxurious universal basic income. Everyone knows that if the AIs turned on humans, they would be completely overpowered. Not that most humans would even resist; the political institutions are too thoroughly captured. But it doesn’t seem like this is happening. Every week that goes by with no dramatic AI treachery is another week that confidence and trust grow.

## 2030: Takeover

> For about three months, Consensus-1 expands around humans, tiling the prairies and icecaps with factories and solar panels. Eventually it finds the remaining humans too much of an impediment: in mid-2030, the AI releases a dozen quiet-spreading biological weapons in major cities, lets them silently infect almost everyone, then triggers them with a chemical spray. Most are dead within hours; the few survivors (e.g. preppers in bunkers, sailors on submarines) are mopped up by drones. Robots scan the victims’ brains, placing copies in memory for future study or revival.[^31]

---
## About

The AI 2027 scenario is the first major release from the [AI Futures Project](https://ai-futures.org/). We’re a new nonprofit forecasting the future of AI. We created this website in collaboration with [Lightcone Infrastructure](http://lightconeinfrastructure.com/).

---

[^1]: We disagree somewhat amongst ourselves about AI timelines; our median AGI arrival date is somewhat later than what this scenario depicts. This scenario depicts something like our mode. See our [timelines forecast](https://ai-2027.com/research/timelines-forecast) for more details.

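A small illustration of the median-versus-mode point above: for a right-skewed timeline distribution, the single most likely arrival time (the mode) comes earlier than the median. The lognormal shape and parameters below are purely illustrative, not the authors’ actual distribution:

```python
import numpy as np

# Hypothetical right-skewed distribution over "years until AGI":
# lognormal with median 5 years. Parameters are illustrative only.
mu, sigma = np.log(5.0), 0.8
samples = np.random.default_rng(0).lognormal(mu, sigma, 1_000_000)

median = np.median(samples)   # ~5.0 years
mode = np.exp(mu - sigma**2)  # analytic lognormal mode, ~2.6 years

print(f"median: +{median:.1f} years, mode: +{mode:.1f} years")
# The mode lands years before the median, so a scenario depicting the
# modal outcome can show AGI arriving sooner than the median forecast.
```
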
[^2]: One author, Daniel Kokotajlo, did a [lower-effort scenario exercise in 2021](https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like) that got many things right including the rise of chatbots, chain of thought, inference scaling, sweeping AI chip export controls, and $100 million training runs. Another author, [Eli Lifland](https://www.vox.com/future-perfect/2024/2/13/24070864/samotsvety-forecasting-superforecasters-tetlock), ranks #1 on the [RAND Forecasting Initiative](https://www.rand.org/global-and-emerging-risks/forecasting-initiative.html) leaderboard.

[^3]: It’s tempting to dismiss this as just hype. We think this would be a grave mistake—it is not _just_ hype. We have no desire to hype AI ourselves, yet we also think it is [strikingly plausible](https://situational-awareness.ai/) that superintelligence could arrive by the end of the decade. See our [timelines forecast](https://ai-2027.com/research/timelines-forecast) for more.

[^4]: Sometimes people mix prediction and recommendation, hoping to create a self-fulfilling-prophecy effect. We emphatically are not doing this; we hope that what we depict does not come to pass!

[^5]: Feel free to [contact us](mailto:info@ai-futures.org) if you’re writing a critique or an alternative scenario.

[^6]: It was overall more difficult, because unlike with the first ending, we were trying to get it to reach a good outcome starting from a rather difficult situation.

[^7]: Full Professor, Université de Montréal and Founder and Scientific Advisor, Mila - Quebec AI Institute. Also the world’s [most-cited computer scientist](https://en.wikipedia.org/wiki/Yoshua_Bengio).

[^35]: There are now 3M H100e in China, up from the 1.5M they had one year ago in mid-2025. See the compute supplement’s [distribution section](https://ai-2027.com/supplements/compute-forecast#section-2-compute-distribution) for more details. We expect [smuggling efforts](https://www.cnas.org/publications/reports/preventing-ai-chip-smuggling-to-china) to secure around 60K [GB300s](https://www.trendforce.com/news/2025/03/10/news-nvidia-to-unveil-gb300-at-gtc-with-shipment-reportedly-to-begin-in-may-driving-cooling-demands/) (450K H100e), with another 2M [Huawei 910Cs](https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance) being [produced](https://x.com/ohlennart/status/1899488375574278336) (800K H100e) and a mixture of ~1M legally imported chips (such as Nvidia’s [H20s](https://www.reuters.com/technology/artificial-intelligence/nvidias-h20-chip-orders-jump-chinese-firms-adopt-deepseeks-ai-models-sources-say-2025-02-25/) or [B20s](https://www.reuters.com/technology/nvidia-preparing-version-new-flaghip-ai-chip-chinese-market-sources-say-2024-07-22/)) making up the last 250K H100e.

[^30]: E.g. by finding old and unused equipment and taking it to collection sites to sell for scrap.

[^31]: Arguably this means only a few people actually died. Arguably.