Came across this fuckin disaster on Ye Olde LinkedIn by 'Caroline Jeanmaire at AI Governance at The Future Society'
"I've just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027.
Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it's a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.
What makes this forecast exceptionally credible:
- One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed
- The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio
- It makes concrete, testable predictions rather than vague statements that cannot be evaluated
The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.
As the authors state: "It would be a grave mistake to dismiss this as mere hype."
For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let's at least take a look inside for some of their deep quantitative reasoning...

....hmmmm....

O_O

The answer may surprise you!
Is this the corresponding lesswrong post: https://www.lesswrong.com/posts/TpSFoqoG2M5MAAesg/ai-2027-what-superintelligence-looks-like-1 ?
Committing to a hard timeline at least means that making fun of them and explaining to laymen how stupid they are will be a lot easier in two years. I doubt the complete failure of this timeline will actually shake the true believers, though. And the more experienced ~~grifters~~ forecasters know to keep things vague so they can retroactively reinterpret their predictions as correct.
Every competent apocalyptic cult leader knows that committing to hard dates is wrong because if the grift survives that long, you'll need to come up with a new story.
Luckily, these folks have spicy autocomplete to do their thinking!
I was going to make a comparison to Elron, but... oh, too late.
I think Eliezer has still avoided hard dates? In the TED talk, I distinctly recall he used the term "0-2 paradigm shifts" so he can claim prediction success for stuff LLMs do, and "paradigm shift" is vague enough that he could still claim success if it's been another decade or two and there has only been one more big paradigm shift in AI (that still fails to make it AGI).
Late reply, but yes, Eliezer has avoided hard dates because "predictions are hard."
The closest he's gotten is his standing bet with Bryan Caplan that it'll happen before 2030. (When I looked into this bet, Eliezer himself said he made it so he could "exploit Bryan's amazing bet-winning ability and my amazing bet-losing ability" to ensure AGI doesn't wipe everyone out before 2030.) He said in a 2024 interview that if you put a gun to his head and forced him to make probabilities, "it would look closer to 5 years than 50" (unhelpfully vague, since it puts the ballpark at like 2-27 years), but he did say in a more recent interview that he thinks 20 years feels like it's starting to push it (possible, but he doesn't think so).
So basically, no hard dates but “sooner rather than later” vagueness