
The Karpathy Interview, 6 Months After AI 2027

smokel

If you enjoy this kind of trend forecasting, you might also be interested in seeing which colors are expected to dominate clothing stores this winter:

https://www.pantone.com/articles/fashion-color-trend-report/...

0_____0

I looked down and saw that my new trousers are, in fact, very close to a color that's in the forecast.

HarHarVeryFunny

The whole thesis that AGI will come about through advances in coding agents makes zero sense. This isn't a coding problem - it's first a matter of defining the goal ("AGI" means nothing), then considering architectures, learning algorithms, etc. capable of achieving it. What's needed isn't agentic coding ability but rather creativity and the ability to design new learning algorithms that are not to be found in the training data.

Coding is the least of the problems, and I'd guess today's Claude Code, etc. is well capable of doing the drudgework.

andy_ppp

I find this article quite poorly constructed. It does not even explain why Karpathy said getting to https://ai-2027.com is unlikely. It also does not clearly define what AI 2027 is.

The following paragraph is almost complete gibberish:

"For AI experts, Karpathy's view is a better counterargument to short timelines than ours. But for non-AI-experts, we think the practical considerations we raised are worth reflecting on with 6 more months of evidence. As forecasters, this is more of an "outside view" - regardless of how exactly AI improves, what problems might slow down an R&D-based takeoff scenario?"

Why would Karpathy's view be different for AI experts and non-AI-experts?

Did they use AI to write the article?

ddp26

Hi, author here, sorry I was unclear. This article makes more sense if you've listened to the Dwarkesh podcast and read AI 2027, both linked.

I realize now that it was presumptuous to assume people had done both of these things.

ddp26

And to actually answer your question:

> Why would Karpathy's view be different for AI experts and non-AI-experts?

People who understand AI can engage with the substance of his claims about reinforcement learning, continuous learning, and his points about the 9s of reliability.

For people who don't, the article suggests thinking about AI as some black-box technology, and asking questions about base rates: how long does adoption normally take? What do the companies developing the technology normally do?

> It does not even explain why Karpathy said getting to https://ai-2027.com is unlikely.

That's the substance of the podcast; Karpathy justifies his views fairly well and at length.

> It also does not clearly define what AI 2027 is.

Dwarkesh covered AI 2027 when it came out, but for those who don't know, it's a deeply researched scenario of runaway AI that effectively destroys humanity within 2-3 years of publication. This is what I mean by "short timelines".

runako

I hate to lean on credentialism and experience, I really do. But is Karpathy the only one of these forecasters who is a) an engineer and b) over the age of 30?

Why are these relevant? Engineer, because we are talking about technologies that are engineering projects, and there is no substitute for hands-on experience with systems. An engineer has also likely taken at least one course covering some history of AI, giving a sense of the time scales involved in getting from the perceptron to Sonnet 4.5.

Over 30 primarily because that's roughly old enough to have seen at least one tech hype cycle through which to filter the AI hype cycle. (Some people are old enough to remember the predictions that nobody would use screens in 2025 because everything would be a voice interface. Or how economics had fundamentally changed and companies didn't need to make money in the New Economy. Or how Tesla would for sure have 1 million robotaxis on the road by 2020. Etc.)

IMHO it's a bearish sign that boosters are not looking to experienced engineers for this kind of analysis.

827a

Another thing age often brings: There's a ton of young people (20s-early 30s) in SV right now who didn't get to materially participate in the first (1990-99) or second (2010-19) tech revolutions. There must be a third, because FOMO.

thaumasiotes

What was the tech revolution of the 2010s?

runako

The sibling comments focus on the money side, but the tech drivers were mobile & cloud. The boom kicked off right around 2008 when the App Store launched.

827a

The decade with the largest stock market growth in the history of humanity, primarily led by the growth of mega-tech conglomerates and venture capital-fueled exits? Maybe you missed it?

ajkjk

It was... the 2010s? the era where tech came back and everyone got rich again? startups and VCs and big goog and facebook and AI? all that?

uvaursi

Not a snark comment, genuinely curious because I'm confused: is Scott Alexander a psychiatrist or a clinical psychologist of some kind? Why would he be involved in something like this?

Unless I’m confusing Slate Star Codex and this is a different S.A.

ajkjk

he is a psychiatrist and also a blogger at astralcodexten.com

ajkjk

I feel like it's important to keep in mind that everyone who has predicted a concrete near-future timeline at all is completely full of shit.

skywhopper

So much delusion on display here. The folks talking about replacing 95% of remote jobs by 2030 in particular seem to have no idea what people actually do in their work, and how decisions get made, not to mention the actual current state of generative AI.

consumer451

There is a lot to be said about Zuck, but in his last Dwarkesh interview I remember him being a realist about fast take-off. It was something like "If I look at team xyz, I don't see the bottleneck being a lack of smart devs."

However, to be fair, since that podcast he has spent insane amounts of money on ML researchers...

recursivecaveat

It's a tale as old as time: "Everyone else's job can be easily automated. They are basically just mindlessly copy pasting from PDF to spreadsheet to word doc all day. My job on the other hand requires some subtle nuance and human judgement that won't be automated anytime soon."

mnky9800n

Yes, but it gets them views, which brings ad revenue payouts from YouTube, TikTok, Instagram, etc.

bgwalter

Yes, these people have never written anything useful, so they don't know. At best an "AI" researcher writes some plagiarized version of the transformer architecture, which is then used to plagiarize other people's hard-earned code.

The total absence of guilt in these "academics" is notable. They are complete psychopaths.

belter

I honestly thought this was meant as humour… then I realised it's intended as serious. Surreal bubble...