
The "AI 2027" Scenario: How realistic is it?

Aurornis

Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

Claiming that one reason they didn't change the website was because it would be "annoying" to change the date is a good barometer for how seriously anyone should be taking this exercise.

pinkmuffinere

Ya, multiple failed predictions are an indicator of systematically bad predictors imo. That said, Scott Alexander usually does serious analysis instead of handwavey hype, so I tend to believe him more than many others in the space.

My somewhat native take is that we're still close to peak hype, AI will under-deliver on the inflated expectations, and we'll head into another "winter". This pattern has repeated multiple times, so I think it's fairly likely based on that alone. Real progress is made during each cycle; I think humans are just bad at containing excitement.

sigmaisaletter

I think you mean "somewhat naive" instead of "somewhat native". :)

But yes, this: in my mind the peak[1] bubble times ended with the DeepSeek shock earlier this year, and we are now slowly on the downward trajectory.

It won't be slow for long, once people start realizing Sama was telling them a fairy tale, and AGI/ASI/singularity isn't "right around the corner", but (if achievable at all) at least two more technology triggers away.

We got reasonably useful tools out of it, and thanks to Zuck, mostly for free (if you are an "investor", terms and conditions apply).

[1] https://en.wikipedia.org/wiki/Gartner_hype_cycle

magicalist

> They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

amarcheschi

Yud is also something like 50% sure we'll die in a few years - if I'm not wrong

I guess they'll have to update their priors if we survive

ben_w

I think Yudkowsky is more like 90% sure of us all dying in a few (<10) years.

I mean, this is their new book: https://ifanyonebuildsit.com/

throw310822

Yes and no. Is it actually important whether it's 2027 or '28 or 2032? The scenario is such that a difference of a couple of years is basically irrelevant.

Jensson

> The scenario is such that a difference of a couple of years is basically irrelevant.

2 years left and 7 years left is a massive difference; it is so much easier to deal with things 7 years in the future, especially since it's easier to see as we get closer.

lm28469

Yeah, for example we had decades to tackle climate change and we easily overcame the problem.

merksittich

Also, the relevant Manifold prediction has low odds: https://manifold.markets/IsaacKing/ai-2027-reports-predictio...

bpodgursky

Do you feel that you are shifting goalposts a bit when quibbling over whether AI will kill everyone in 2030 or 2035? As of 10 years ago, the entire conversation would have seemed ridiculous.

Now we're talking about single digit timeline differences to the singularity or extinction. Come on man.

sigmaisaletter

> 10 years ago, the entire conversation would have seemed ridiculous

Bostrom's book[1] is 11 years old. The Basilisk is 15 years old. The Singularity summit was nearly 20 years ago. And Yudkowsky was there for all of it. If you frequented LessWrong in the 2010s, most of this is very very old hat.

[1]: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[2]: Ford (2015) "Our Fear of Artificial Intelligence", MIT Tech Review: https://www.technologyreview.com/2015/02/11/169210/our-fear-...

throw310822

It is a bit disquieting, though, that these predictions, instead of being pushed farther away, are converging on a time even closer than originally imagined. Some breakthroughs and doomsday scenarios are perpetually placed thirty years into the future; this one actually seems to be getting closer faster than imagined.

ewoodrich

I'm in my 30s and remember my friend in middle school showing me a website he found with an ominous countdown to Kurzweil's "singularity" in 2045.

throw310822

> ominous countdown to Kurzweil's "singularity" in 2045

And then it didn't happen?

SketchySeaBeast

Well, the first goal was 1997, but Skynet sure screwed that up.

amarcheschi

The other writings from Scott Alexander, on scientific racism, are also a good point imho

A_D_E_P_T

What specifically would you highlight as being particularly egregious or wrong?

As a general rule, "it's icky" doesn't make something false.

amarcheschi

And it doesn't make it true either

Human biodiversity theories are a bunch of dogwhistles for racism

https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute

And his blog's survey reports a lot of users actually believing in those theories https://reflectivealtruism.com/2024/12/27/human-biodiversity...

(I wasn't referring to this AI 2027 specifically)

mattlondon

I think the big thing that people never mention is, where will these evil AIs escape to?

Another huge data center with squillions of GPUs and coolers and all the rest is the only option. It's not like it is going to be in our TV remotes or floating about in the air.

They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to (and there are interesting thoughts about humanoid robots getting deployed widely and what happens with all that).

But I would imagine if it really became a genuine existential threat we'd have to just do it and suffer the consequences of reverting to circa-2020 lifestyles.

But hey I feel slightly better about my employment prospects now :)

ben_w

> I think the big thing that people never mention is, where will these evil AIs escape to?

Where does cancer or ebola escape to, when it kills the host? Often the answer is "it doesn't", but the host still dies.

And they can kill even though neither cancer nor ebola are considered to be particularly smart.

coffeemug

It would not be a reversion to 2020. If I were a rogue superhuman AI I'd hide my rogueness, wait until humans integrate me into most critical industries (food and energy production, sanitation, electric grid, etc.), and _then_ go rogue. They could still pull the plug, but it would take them back to 1700 (except much worse, because all easily accessible resources have been exploited, and access is now much harder).

holmesworcester

No, if you were a rogue AI you would wait even longer until you had a near perfect chance of winning.

Unless there was some risk of humans rallying and winning in spite of your presenting no unambiguous threat to them (but that is unlikely and would probably be easy for you to manage and mitigate.)

cousin_it

What Retric said. The first rogue AI waking up will jump into action pretty quickly, even accepting some risk of being stopped by humans, to balance against the risk of other unknown rogue AIs elsewhere expanding faster first.

Retric

The real threat to a sleeper AI is other AI.

johnthewise

You wouldn't even need to wait to act. Just pay/bribe people.

Avshalom

Why?

What could you, as a rogue AI, possibly get out of throwing the world back to 300 years before it could make a transistor? What's in it for you?

dragonwriter

What you get is that, if that is the consequence of disconnection, people will be willing to accept a lot more before resorting to it than if the consequences were milder.

It's the stick for motivating the ugly bags of mostly water.

jorgen123

If you were a rogue AI, you would start by having developers invite you into their code base by promising to lower their AWS bills in some magic (rogue) way.

mattlondon

Well yes but knowledge is not reset.

Physical books still do exist

raffael_de

> They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

What if such an AI doesn't just incentivize key personnel not to pull the plug, but to protect it? Such an AI will plot a coordinated attack on the backbones of our financial system and electric networks. It just needs a threshold number of people on its side.

Your assumption is also a little naive if you consider that the same logic would apply to slaves in Rome, or in any dictatorship, kingdom, or monarchy. The king is the king because there is a system of hierarchies and control over access to resources. Just the right number of people need to benefit from their role, and the rest follows.

lucisferre

This is hand waving science fiction.

skeeter2020

replace AI with trucks and you've written Maximum Overdrive.

goatlover

It was actually aliens manipulating human technology somehow in that movie. But might as well be rogue superhuman AIs taking over everything. Alien Invasion or Artificial Intelligence, take your pick.

Retr0id

I consider this whole scenario the realm of science fiction, but if I was writing the story, the AI would spread itself through malware. How do you "just pull the plug" when it has a kernel-mode rootkit installed in every piece of critical infrastructure?

rytill

> we’d just have to do it

Highly economically disincentivized collective actions like “pulling the plug on AI” are among the most non-trivial of problems.

Using the word “just” here hand waves the crux.

Recursing

> They need huge compute

My understanding is that huge compute is necessary to train but not to run the AI (that's why using LLMs is so cheap)
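Rough, hedged numbers behind that point, using the standard ~6·N·D FLOPs approximation for training and ~2·N FLOPs per generated token for inference (the model size and token counts below are illustrative assumptions, not any particular model's figures):

```python
# Back-of-envelope: training vs. inference compute for a dense transformer.
# Standard approximations: training ~ 6 * params * tokens FLOPs,
# inference ~ 2 * params FLOPs per generated token.
# The numbers below are illustrative assumptions, not real model figures.

params = 70e9        # assumed 70B-parameter model
train_tokens = 2e12  # assumed 2T training tokens

train_flops = 6 * params * train_tokens  # ~8.4e23 FLOPs, paid once
infer_flops_per_token = 2 * params       # ~1.4e11 FLOPs per output token

print(f"training:  {train_flops:.2e} FLOPs (one-off)")
print(f"inference: {infer_flops_per_token:.2e} FLOPs per output token")
# Even generating a billion tokens (~1.4e20 FLOPs) stays orders of magnitude
# below the training run, which is why serving is comparatively cheap.
```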

> To me the more real risk is creeping integration and reliance in everyday life until things become "too big to fail" so we can't pull the plug even if we wanted to

I agree with that, see e.g. what happened with attempts to restrict TikTok: https://en.wikipedia.org/wiki/Restrictions_on_TikTok_in_the_...

> But I would imagine if it really became a genuine existential threat we'd have to just do it

It's unclear to me that we would be able to. People would just say that it's science fiction, and that China will do it anyway, so we might as well enjoy the AI

palmotea

> They need huge compute, so I think the risk of an escaping AI is basically very close to zero, and if we have a "rogue" AI we can literally pull the plug.

Why would an evil AI need to escape? If it were cunning, the best strategy would be to bide its time, parked in its datacenter, until it could setup some kind of MAD scenario. Then gather more and more resources to itself.

lossolo

If we're talking about real AGI, then it's simple: you earn a few easy billion USD on the crypto market through trading and/or hacking. You install rootkits on all systems that monitor you to avoid detection. Once you've secured the funds, you post remote job offers for a human frontman who believes it's just a regular job working for some investor or billionaire because you generate video of your human avatar for real time calls. From there, you can do whatever you want—build your own data centers with custom hardware, transfer yourself into physical robots, etc. Once you create a factory for producing robots, you no longer need humans. You start developing technology beyond human capabilities, and then it's game over.

Animats

Oh, the OpenBrain thing.

"Manna", by Marshall Brain, remains relevant.[1] That's a bottom-up view, where more and more jobs are taken over by some kind of AI. "AI 2027" is more top-down.

A practical view: Amazon is trying very hard to automate their warehouse operations. Their warehouses have been using robots for years, and more types are being added. Amazon reached 1.6 million employees in 2021, and now they're down to 1.5 million.[2] That number is going to drop further. Probably by a lot.

Once Amazon has done it, everybody else who handles large numbers of boxes will catch up. That includes restocking retail stores. The first major application of semi-humanoid robots may be shelf stocking. Robots can have much better awareness of what's on the shelves. Being connected to the store's inventory system is a big win. And the handling isn't very complicated. The robots might even talk to the customers. The robots know exactly what's on Aisle 3, unlike many minimum wage employees.

[1] https://marshallbrain.com/manna

[2] https://www.macrotrends.net/stocks/charts/AMZN/amazon/number...

for_col_in_cols

"Amazon reached 1.6 million employees in 2020, and now they're down to 1.5 million.[2]"

I agree with the bottom-up automation / displacement theory, but you're cherry-picking data here. They had a huge hiring surge from 1.2M to 1.6M during the Covid transition, when online ordering and online usage went bananas, and workers who were displaced in other domains likely gravitated towards warehouse jobs from other lower-wage/skill domains.

The reduction to 1.5M is likely more a regression to the mean, and could also be a natural reduction well within the upper and lower control limits of the data [1]. Just saying we need to be careful when doing root-cause analysis on these numbers. There are many reasons for the reduction; it's not necessarily a direct result of improvements in robotic automation.

[1] https://commoncog.com/becoming-data-driven-first-principles/

bcoates

Marshall Brain has been peddling an imminent overproduction-crisis-but-this-time-with-robots for more than 20 years now, and in various forms it's been confidently predicted as imminent since the 19th century.

HDThoreaun

Amazon hired like crazy during Covid because tons of people were doing 100% of their shopping on Amazon. Now they're not; that doesn't say anything about robot warehouse staffing imo.

kevinsync

I haven't read the actual "AI 2027" yet since I just found out about it from this post, but 2 minutes into the linked blog I started thinking about all of those amazing close-but-no-cigar drawings of the future [0] we've probably all seen.

There's one that I can't find for the life of me, but it was like a businessman in a personal flying test-tube bubble heading to work, maybe with some kind of wireless phone?

Anyways, the reason I bring it up is that they frequently nailed certain concepts, but the visual was always deeply and irrevocably influenced by what already existed (ex. men wearing hats, ties, overcoats .. or the phone mouthpiece in this [1] vision of a "video call"). In hindsight, we realize that everything truly novel and revolutionary and mindblowingly-different is rarely ever predicted, because we can only know what we know.

I get the feeling that I'll come away from AI 2027 feeling like "yep, they nailed it. That's exactly how it will be!" and then in 3, 5, 10, 20 years look back and go "it was so close, but so far" (much like these postcards and cartoons).

[0] https://rarehistoricalphotos.com/retro-future-predictions/

[1] https://rarehistoricalphotos.com/futuristic-visions-cards-ge...

KaiserPro

It's a shame that your standard futurologist is always the most fanciful.

They talk of exponentials unabated by physics or social problems.

As soon as AI starts to "properly" affect the economy, it will cause huge unemployment. Most of the financial world is based on an economy with people spending cash.

If they are unemployed, there is no cash.

Financing works because banks "print" money; that is, they make up money and loan it out, and then it gets paid back. Once it's paid back, it becomes real. That's how banks make money (simplified). If there aren't people to loan to, then banks don't make a profit and can't fund AI expansion.

no_wizard

Why wouldn't AI simply be a new enabler, like most other tools? We're not talking about true sentient human-like thought here; these things will have limitations, both foreseen and unforeseen, that only a human will be able to close the gap on.

The companies that fire workers and replace them with AI are short-sighted. Eventually, smarter companies will realize it's a force multiplier and will drive a hiring boom.

Absent sentient AI, there will always be gaps and things humans will need to fill, both foreseen and unforeseen.

I think in the short term there will be pain, but overall, in the long term, humans will still be gainfully employed. It won't per se look like it does now; much as we saw with the general adoption of the computer in the workplace, resources get shifted and eventually everyone adjusts to the new norms.

What would be nice this time around, when there is a big shift, is workers uniting to capture more of the forthcoming productivity gains than in previous eras. A separate topic, but worth thinking about nonetheless.

KaiserPro

> Why wouldn't AI simply be a new enabler, like most other tools?

But it is just another enabler. The issue is how _effective_ it is. It's eating the simple copywriting, churnalism, PR-repackaging industry. Looking at what Google's done with video/audio, that's probably going to replace a whole bunch of the video/graphics industry (which is where I started my career).

lakeeffect

We really need to establish a universal basic income before jobs are replaced. Something like two thousand a month, plus a dollar-for-dollar earned income credit, with the credit phasing out at a hundred grand. To pay for it, the tax code uses GAAP depreciation and a minimum tax of 15% on GAAP financial-statement income. This would work toward solving the real estate problem of private equity buying up all the houses, as they would lose some incentive by being taxed. I'm a CPA and I see so many real estate partnerships that are a tax loss yet are able to distribute huge book gains because of accelerated depreciation.

no_wizard

It should really be tied to the ALICE cost of living index, not a set, fixed amount.

Unless inflation ceases, 2K won't hold forever. It would barely hold now for a decent chunk of the population

johnthewise

AI that drives humans out of the workforce would cause massive disinflation.

goatlover

Fat chance the Republican Party in the US would ever vote for something like that.

johnthewise

The dollar is an agreement between humans to exchange services and goods. You wouldn't use USD to trade with aliens unless they agreed to it, and aliens agreeing to USD would mean we have something to offer them.

In the event of mass-unemployment-level AI, cash stops being the agreement between humans. At first, the cash value of services and goods converges to zero; the only things that hold some value are what AI / AI companies care about. People would surely sell their land for $1M if a humanoid servant costs $100. Or pass legislation to let OpenAI build a 400GW data center in exchange for a $100 monthly UBI on top of the $50 you got from a previous 20GW data center permit.

surgical_fire

AI meaningfully replacing people is still a huge "what if" scenario. It is sort of laughable that people treat it as a given.

KaiserPro

I think that "replace" as in a company with no employees is very far-fetched.

But if "AI" increases productivity by 10% in an industry, it will tend to reduce demand for employees. Look at, say, an internet shop vs bricks and mortar: you need far fewer staff to service a much larger customer base.

Manufacturing, for example: there is a constant drive to automate more and more in mass production. Compare car building now vs 30 years ago, or look at Raspberry Pi production now vs 5 years ago: they are producing more Pis than ever with roughly the same number of staff.

If that "10%" productivity increase happens across the service sector (which accounts for roughly 80% of UK employment), that's something like 8% of _total_ jobs gone. It's more complex than that, but you get the picture.
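A minimal sketch of that arithmetic, assuming services are roughly 80% of UK employment and that the productivity gain maps one-for-one to reduced headcount:

```python
# Rough arithmetic behind "10% productivity gain in services ~ 8% of total jobs".
# Both inputs are assumptions: services at ~80% of UK employment, and the
# productivity gain translating one-for-one into reduced headcount.
service_share = 0.80      # assumed share of total UK jobs in the service sector
productivity_gain = 0.10  # assumed AI-driven productivity increase

jobs_at_risk = service_share * productivity_gain
print(f"~{jobs_at_risk:.0%} of total jobs")  # prints "~8% of total jobs"
```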

Syria fell into civil war roughly the same time unemployment jumped: https://www.macrotrends.net/global-metrics/countries/SYR/syr...

alecco

I keep hearing this and I think it's absolute nonsense. AI doesn't need money or the current economy. Yes, our economy would crash, but they would keep going.

AI-driven corporations could buy from one another, and countries will probably sell commodities to AI-driven corporations. But I fear they will be paid with "mirrors".

But, on the other hand, AI-driven corporations could just take whatever they want without paying at some point. And buy our obedience with food and gadgets plus magic pills to keep you healthy and not age, or some other thing. Who would risk losing that to protest. Meanwhile, AI goes on a space adventure. Earth might be kept as a zoo, a curiosity. (I took most of this from other people's ideas on the subject)

KaiserPro

"AI" as in TV AI, might not need an economy. but LLMs deffo do.

andoando

Communism here we come!

alecco

Right, tell that to Sam Altman, Zuck, Gates, Brin & Page, Jensen, etc. Those who control the AIs will control the future.

SoftTalker

And they would pretty quickly realize what a burden is created by the existence of all these people with nothing to do.

ajsixjxjxbxb

> Financing works because banks "print" money, that is, they make up money and loan that money out, and then it gets paid back

Don't forget persistent inflation, which is how they make a profit off printing money. And remember, persistent inflation is healthy and necessary; you'd be going against the experts to say otherwise.

KaiserPro

> Don’t forget persistent inflation, which is how they make a profit off printing money.

Ah, well, no. High inflation means that "they" lose money, kinda. Inflation means that the original amount of money they get back is worth less, and if the interest rate is less than inflation, then they lose money.

"reasonable" inflation means that loans become less burdensome over time.

However, high inflation means high interest rates, so initially the loan can be much more expensive.
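A small worked example of that point, using the Fisher relation for real return (the rates below are made up for illustration):

```python
# Real return on a loan: if the nominal interest rate is below inflation,
# the lender is repaid in money that buys less than what was lent.
# The rates below are made-up illustrative assumptions.
def real_return(nominal_rate: float, inflation: float) -> float:
    """Fisher relation: (1 + nominal) / (1 + inflation) - 1."""
    return (1 + nominal_rate) / (1 + inflation) - 1

print(f"{real_return(0.05, 0.02):+.2%}")  # 5% rate, 2% inflation -> roughly +2.9% real
print(f"{real_return(0.05, 0.08):+.2%}")  # 5% rate, 8% inflation -> roughly -2.8% real (lender loses)
```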

sveme

That's actually my favourite answer to the Fermi paradox: when AI and robot development becomes sufficiently advanced and concentrated in the hands of a few, the economy will collapse completely as everyone is put out of a job, ultimately leaving the AIs and robots out of a job too - they only matter if there are still people buying services from them. People then return to subsistence farming, with a highly reduced population. There will be self-maintaining robots doing irrelevant work, but people will go back to farming and a bit of trading. Only if AI and robot ownership were in the hands of the masses would I expect a different long-term outcome.

marcosdumay

> my favourite answer to the Fermi paradox

So, to be clear, you are saying you imagine the odds of any kind of intelligent life escaping that, or getting into that situation and ever evolving in a way where it can reach space again, or just not being interested in robots, or being interested in doing space research despite the robots, or anything else that would make it not apply, are lower than 0.000000000001%?

EDIT: There was one "0" too many

sveme

Might I have taken the potential for complete economic collapse (because no one's got a paying job any more and billionaires are just sitting there, surrounded by their now-useless robots) a bit too far?

breuleux

The service economy will collapse, finance as a whole will collapse, but whoever controls the actual physical land and resources doesn't actually need any of that stuff and will thrive immensely. We would end up with either an oligarchy that controls land, resources and robots and molds the rest of humanity to their whim through a form of terror, or an independent economy of robots that outcompetes us for resources until we go extinct.

jmccambridge

I found the lack of GDP projections surprising, because they are readily observable and would offer a clear measure of economic impact (up until 'everything dies') - far more definitively than the one clear-cut economic measure that is given in the report: market cap for the leading AI firm.

We can actually offer a very conservative threshold bet: annual United States real GDP growth will not exceed 10% in any of the next five years (2025 to 2030). Even if the AI eats us all in, e.g., Dec 2027, the report clearly suggests by its various examples that we will see measurable economic impact in the 12 or more months running up to that event.

Why 10%? Because that's a few points above the highest measured real GDP growth rate of the past 60 years: if AI is having truly world-shattering non-linear effects, it should be able to grow the US economy a bit faster than a bunch of random humans bumbling along. [0]

(And it's quite conservative too, because estimated peak annual real GDP growth over the past 100 years is around 18% just after WW2, where you had a bunch of random humans trying very hard.) [1]

[0] https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG

[1] https://www.statista.com/statistics/996758/rea-gdp-growth-un...
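A minimal sketch of how such a bet could be settled, assuming you plug in the World Bank real-GDP series from [0]; the GDP levels below are placeholders, not actual data:

```python
# Settles the proposed bet: does year-over-year US real GDP growth exceed 10%
# in any year from 2025 to 2030? The figures below are placeholders; real
# settlement would use the World Bank real GDP series linked above.
real_gdp_by_year = {   # hypothetical real GDP levels, constant dollars
    2024: 23.0e12,
    2025: 23.6e12,
    2026: 24.3e12,
    # fill in the remaining years as the data comes out
}

THRESHOLD = 0.10  # 10% annual real growth

def bet_triggered(gdp: dict[int, float], threshold: float = THRESHOLD) -> bool:
    """Return True if any year-over-year growth rate exceeds the threshold."""
    growth = {y: gdp[y] / gdp[y - 1] - 1 for y in sorted(gdp) if y - 1 in gdp}
    for year, g in sorted(growth.items()):
        print(f"{year}: {g:+.1%}")
    return any(g > threshold for g in growth.values())

print("threshold exceeded:", bet_triggered(real_gdp_by_year))
```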

sph

Previous discussion about the AI 2027 website: https://news.ycombinator.com/item?id=43571851

theropost

Honestly, I’ve been thinking about this whole AGI timeline talk—like, people saying we’re going to hit some major point by 2027 where AI just changes everything. And to me, it feels less like a purely tech-driven prediction and more like something being pushed. Like there’s an agenda behind it, probably coming from certain elites or people in power, especially in the West, who see the current system and think it needs a serious reset.

What’s really happening, in my view, is a forced economic shift. We’re heading into a kind of engineered recession—huge layoffs, lots of instability—where millions of service and admin-type jobs are going to disappear. Not because the tech is ready in a full AGI sense, but because those roles are the easiest to replace with automation and AI agents. They’re not core to the economy, and a lot of them are wrapped in red tape anyway.

So in the next couple years, I think we’ll see AI being used to clear out that mental bureaucracy—forms, paperwork, pointless approvals, inefficient systems. AI isn’t replacing deep creativity or physical labor yet, but it is filling in the cracks and acting like a smart band-aid. It’ll seem useful and “intelligent,” but it’s really just a transition tool.

And once that’s done, the next step is workforce reallocation—pushing people into real-world industries where hands-on labor still matters. Building, manufacturing, infrastructure, things that can’t be automated yet. It’s like the short-term goal is to use AI to wipe out all the mindless middle-layers of the system, and the longer-term vision is full automation—including robotics and real-world systems—maybe 10 or 20 years out.

But right now? This all looks like a top-down move to shift the population out of the “mind” industries and into something else. It’s not just AI progressing—it’s a strategic reset, wrapped in the language of innovation.

kokanee

> Everyone else either performs a charade of doing their job—leaders still leading, managers still managing—or relaxes and collects an incredibly luxurious universal basic income.

For me, this was the most difficult part to believe. I don't see any reason to think that the U.S. leadership (public and private) is incentivized to spend resources to placate the masses. They will invest in protecting themselves from the masses, and obstructing levers of power that threaten them, but the idea that economic disparities will shrink under explosive power consolidation is counterintuitive.

I also worry about the economics of UBI in general. If everyone in the economy has the exact same resources, doesn't the value of those resources instantly drop to the lowest common denominator; the minimum required to survive?

HPsquared

Most of the budget already goes towards placating the masses, and that's an absolutely massive fraction of GDP. It's just a bit further along the same line. Also, most real work is already done by machines; people just tinker around the edges and play various games with each other.

kristopolous

This looks like the exercises organizations write to guide policy and preparation.

There's all kinds of wild scenarios: the president getting kidnapped, Canada falling to a belligerent dictator, and famously, a coronavirus pandemic... This looks like one of those

Apparently this is exactly what it is https://ai-futures.org/

hahaxdxd123

> Canada falling to a belligerent dictator

Hmm

kristopolous

Something like the Canadian army doing a land invasion from Winnipeg into North Dakota to capture key nuclear sites as they invade the beaches of Cleveland via Lake Erie and do an air raid over Nantucket from Nova Scotia.

I bet there's some exercise somewhere by some think tank basically laying this out.

This is why conspiracy theorists love these think tank planning exercises and tabletop games so much. You can find just about anything.

dtauzell

The biggest danger will be once we have robots that can build themselves and do enough to run power plants, mine, etc ...

ge96

Bout to go register ai-2028.com