
The Hater's Guide to the AI Bubble

94 comments · July 22, 2025

wulfstan

In July 2023, I wrote this to a friend:

"...being entirely blunt, I am an AI skeptic. I think AI and LLM are somewhat interesting but a bit like self-driving cars 5 years ago - at the peak of a VC-driven hype cycle and heading for a spectacular deflation.

My main interest in technology is making innovation useful to people and as it stands I just can't conceive of a use of this which is beneficial beyond a marginal improvement in content consumption. What it does best is produce plausible content, but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject. If a factory produced widgets with the same defect rate as ChatGPT has when producing content, it would be closed down tomorrow. We already have a problem with large volumes of bad (and deceptive!) content on the internet, and something that automatically produces more of it sounds like a waking nightmare.

Add to that the (presumed, but reasonably certain) fact that common training datasets being used contain vast quantities of content lifted from original authors without permission, and we have systems producing well-crafted lies derived from the sweat of countless creators without recompense or attribution. Yuck!"

I'll be interested to see how long it takes for this "spectacular deflation" to come to pass, but having lived through 3 or so major technology bubbles in my working life, my antennae tell me that it's not far off now...

whywhywhywhy

> but everything it produces needs careful checking for errors, mistakes and 'hallucinations' by someone with some level of expertise in a subject

Nah, you just post it. If people point out the mistakes, the comment is treated as positive engagement by the algorithm anyway - unfortunately for anyone who cares.

K0balt

I too am deeply skeptical of the current economic allocation, but it’s typical of frontier expansions in general.

Somehow, in AI, people lost sight of the fact that transformer architecture AI is a fundamentally extractive process for identifying and mining the semantic relationships in large data sets.

Because human cultural data contains a huge amount of inferred information not overtly apparent in the data set, many smart people confused the results with a generative rather than an extractive mechanism.

…to the point that the entire field is known as “generative” AI, when fundamentally it is not in any way generative. It merely extracts often unseen or uncharacterized semantics and uses them to extrapolate from a seed.

There are, however, many uses for such a mechanism. There are many, many examples of labor where there is no need to generate any new meaning or “story”.

All of this labor can be automated through the application of existing semantic patterns to the data being presented, and to do so we suddenly do not need to fully characterize or elaborate the required algorithm to achieve that goal.

We have a universal algorithm, a sonic screwdriver if you will, with which we can solve any fully solved problem set by merely presenting the problems and enough known solutions so that the hidden algorithms can be teased out into the model parameters.

But it only works on the class of fully solved problems. Insofar as an unsolved problem can be recast as a solved process of generating and testing hypotheses, we may potentially also assail unsolved problems with this tool.
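(To make that "problems plus known solutions" idea concrete, here is a minimal sketch of my own - not K0balt's, and using a deliberately trivial linear rule rather than a transformer. The hidden rule is never shown to the learner; gradient descent teases it out of example pairs and into the parameters.)

    import numpy as np

    rng = np.random.default_rng(0)

    # Hidden rule we pretend not to know: y = 3*x0 - 2*x1 + 1
    def hidden_rule(X):
        return 3 * X[:, 0] - 2 * X[:, 1] + 1

    # "Problems" and their "known solutions"
    X = rng.normal(size=(1000, 2))
    y = hidden_rule(X)

    # Parameters start with no knowledge of the rule
    w = np.zeros(2)
    b = 0.0

    # Plain gradient descent on mean squared error
    lr = 0.1
    for _ in range(500):
        err = X @ w + b - y
        w -= lr * (X.T @ err) / len(y)
        b -= lr * err.mean()

    print(w, b)  # converges toward [3, -2] and 1: the rule now lives in the parameters

With enough examples the same recipe applies to rules nobody has explicitly characterized, which is the point of the comment above.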

frithsun

I believe this is a "good" bubble in the sense that the 19th-century railroad bubble and the original dot com bubble both left behind infrastructure that created immense value.

That said, all of these LLMs are interchangeable, there are no moats, and the profit will almost entirely be in the "last mile," in local subject matter experts applying this technology to their bespoke business processes.

dinkblam

> "good" bubble in the sense

how can massive purchases of hardware that will have to be thrown away in a few years be a "good" bubble in the sense of a lasting infrastructure investment?

falcor84

Why would that hardware "have to be thrown away"? I've seen quite old GPUs still in use; given the current demand, I expect the vast majority of hardware used in these data centers to see a lot more extended use than most other types of electronics around the world (e.g. phones).

entropi

I am pretty optimistic that as long as hardware capacity exists, people will find ways of using it. Whether it will be profitable or not is another story of course.

kevindamm

Rivers overflowing with legacy hardware, villages incinerating boards for their metals, and the caustic effects this has on people and their environment are already happening. The hardware capacity exists only as long as it is operational, and only for a few hardware generations. Perhaps we should be careful before building Manhattan-sized data centers.

Up to a point it is better than having additional compute sitting idle at the edge, economies of scale and all that, but after some point it becomes excess and wasteful, even if people figure out ways to entertain themselves with it.

And if people don't want to pay what it costs to improve and maintain these city-sized electronic brains? Then it all becomes waste, or the majority gets transformed into office or warehouse space or something else.

Proceeding with combined budgets on the order of 1% of US GDP despite this risk being the elephant in the room is what makes it a bubble.

h3lp

One large but forgotten effect of the dotcom bubble was an excess of fiber capacity that allowed the smooth growth of the internet over the following 25 years - average internet speed in the US is 200 Mbps, and a significant number of households are on a gigabit uplink. I take your point that GPU hardware amortizes away faster than fiber, but that's true of all computing hardware: the average lifecycle of a server is around five years.

benterix

Prices are falling. I do a lot of machine learning and sometimes work with large datasets. The ability to (1) put all the data in VRAM and (2) get results in hours or days instead of weeks is amazing - and in the past that wouldn't have been easy for a normal researcher like me. Now I can get access to these beefy machines, do my research and publish the results without taking out a loan from my bank.
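(As an illustration only - not benterix's actual setup, and assuming PyTorch on a machine with a CUDA GPU - "putting all the data in VRAM" usually just means transferring the whole working set to the device once and keeping every subsequent operation there:)

    import torch

    device = "cuda" if torch.cuda.is_available() else "cpu"

    # Hypothetical dataset: 1M rows of 256 float32 features (~1 GB)
    data = torch.randn(1_000_000, 256)
    data = data.to(device)            # one-time transfer into VRAM

    # Example workload: similarity search against a batch of query vectors,
    # computed entirely on the GPU with no host round-trips
    queries = torch.randn(256, 256, device=device)
    scores = queries @ data.T         # (256, 1_000_000), stays on the device
    top = scores.topk(10, dim=1)

    print(top.indices.shape)          # torch.Size([256, 10])

The sizes here are placeholders; the point is the single .to(device) transfer followed by purely on-device math.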

schnable

The models themselves, and methods and knowledge used to build and use them, are part of the "infrastructure" being built.

miltonlost

You're redefining infrastructure. A supply and demand model is not infrastructure. A Taylor expansion method is not infrastructure.

illiac786

Completely agree. I would ask also what “infrastructure” the dotcom bubble created?

schnable

Data centers and fiber optic connections across the world.

wulfstan

I keep saying to people - "if you have a good idea that can make use of large amounts of really really cheap GPUs to do something genuinely useful - get ready for a massive glut of spare capacity". I still haven't thought of anything, unfortunately...

bee_rider

These are sort of compute-focused GPUs, right? I bet a lot of university labs would like them.

I wonder if ubiquitous, user-friendly finite element analysis tools could become a boon for 3D printers.

variadix

Hopefully they can be repurposed for something like cheap drug discovery rather than shitcoin mining.

wulfstan

If that ends up being the case then we could all genuinely agree that good has eventually emerged from the compute/inference infrastructure that LLMs paid for. I hope that comes to pass.

miltonlost

Ok, but what's the infrastructure that will remain after the AI bubble that can be retooled like railroads or dot com?

jakobnissen

I think the author's take is overly bleak. Yes, he supports his claim that AI businesses are currently money pits and unsustainable. But I don't think it's reasonable to claim that AI can't be profitable. This whole thing is moving extremely fast. Models are getting better by the month. Costs are rapidly coming down. Broadly speaking, we still don't know how to apply AI. I think it's hubris to claim that, in the wake of this whole bubble, no one will figure out how to use AI to provide value and no one will be profitable.

cobertos

"Cost is rapidly coming down" but capital expenditures are still high. They'll have to charge for this eventually, no?

appreciatorBus

Not necessarily. The people and firms making the capital expenditures can go bankrupt, for instance. The world will carry on without them, while the infrastructure they built with those expenditures continues to provide value, just to someone else, and now at a dramatically lower capital cost.

We could compare it to the railroad boom and the telecom boom - in both cases vast sums of capital expenditure were made, and reasonable people might have concluded that eventually those expenses would have to be recouped through higher prices. However, in both cases many firms simply went bankrupt, and all that excess infrastructure went on to serve humanity for decades at lower cost.

hiAndrewQuinn

I am so, so glad you brought up what should be the obvious conclusion here. "B-but they spent all that money, how do they get it back!?" "That's the fun part, they don't."

Creative destruction is a woefully underappreciated force in capitalism. Shareholders can lose everything. Debt can be restructured or sold for pennies on the dollar. Debt can go unsold and unpaid, and the creditors can lose everything.

I think it has to be mentioned here that bankruptcy in the United States actually works very differently from bankruptcy in the European Union, where creditors have a lot more legal means at their disposal to haunt you if you try risky plays like taking on more debt to moonshot your way out of your current debt. In a funny way, a country's bankruptcy laws are its most important ones when it comes to wealth transfer.

bdelmas

“The world will carry on without them”. Sure, but at the end of the day, it's not as if debts magically disappear just because companies can go bankrupt. They still impact other companies.

nicce

It is not about profitability alone, but whether benefits are net positive for society over the long term. Profitability is easy with current standards. Get the users. Make them dependent. Increase the price. Make AI mandatory. List goes on.

jsnell

From a quick skim, at least 90% of the article is about profitability. The remaining 10% is mostly bragging.

troupo

> Profitability is easy with current standards. Get the users. Make them dependent. Increase the price. Make AI mandatory. List goes on.

"Easy". "Just" get more users and "just" increase prices to somehow cover hundreds of billions of invested dollars and hundreds of millions of running costs.

It's that easy. I'm surprised none of the companies mentioned in the article thought of that.

hopelite

What is "no one"? You sure put a lot of confidence in it.

42lux

We are pretty much plateauing in base model performance since gpt4. It's mostly tooling and integration now. The target is also AGI, so no matter your product, you will get measured on your progress towards it. With new "SOTA" models popping up left and right, you also have no good way of retaining users, because the user is mostly interested in the model's performance, not the funny meme generator you added. Looking at you, OpenAI...

"They called me bubble boy..." - some dude at Deutsche.

impossiblefork

So, how do you feel about the recent IMO stuff? Doesn't it cause a consistency problem for your view that we've plateaued? To me at least, it felt like we were something like two years away from this kind of thing.

Probably very expensive to run of course, probably ridiculously so, but they were able to solve really difficult maths problems.

narrator

The biological brain of the top human IMO guy runs on 20 watts. I wonder how much electricity Google used to match that performance.
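(Nobody outside Google knows, but the shape of the comparison is easy to sketch. Every number below except the ~20 W brain figure from the comment is an invented placeholder, purely to show the arithmetic:)

    # Back-of-envelope only; the cluster numbers are invented placeholders,
    # not figures from Google or the IMO team.
    brain_watts = 20                # rough power draw of a human brain
    contest_hours = 4.5             # length of one IMO sitting
    brain_wh = brain_watts * contest_hours            # ~90 Wh

    gpus = 1_000                    # hypothetical cluster size
    gpu_watts = 700                 # hypothetical per-GPU draw
    cluster_wh = gpus * gpu_watts * contest_hours     # ~3.15 MWh

    print(f"brain ~{brain_wh:.0f} Wh, cluster ~{cluster_wh / 1e6:.2f} MWh, "
          f"ratio ~{cluster_wh / brain_wh:,.0f}x")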

helicalmix

The transformer paper was published in 2017, and within 8 years (less, if I'm being honest) we have bots that passed the Turing test. To people with shorter-term memories, passing the Turing test was a big deal.

My point is that even if things are plateauing, a lot of these advancements happen as step changes. All it takes is one or two good insights to make massive leaps, so the fact that things are plateauing now is a bad predictor of how things will be in the future.

GaggiX

>We are pretty much plateauing in base model performance since gpt4.

Reasoning models didn't even exist at the time, and LLMs were struggling a lot with math; it's completely different now with SOTA models. There have been massive improvements since gpt4.

usrnm

Are we in a bubble that's going to pop and take a large part of the economy with it? Almost certainly. Does that mean AI is a scam? Not really. After all, the Internet did not disappear after the dotcom bust, and, actually, almost everything we were promised by the dotcoms became reality at some point.

Palomides

"doing everything on the internet" definitely worked out, but I don't see why that implies "GPU accelerated LLMs will replace large swathes of human labor" will also be true

illiac786

I disagree on “doing everything on the internet”. Social networking is something we definitely “do” on the internet nowadays, but I wouldn't say that it has “worked out”, and we're only now starting to grasp how badly it's working.

But agreed on the overall meaning of the comment; the promises around LLMs are still exaggerated.

usrnm

That's not what I'm saying. What the dotcoms prove is that a technology can be a bubble and a real technological revolution at the same time; there is no contradiction here. "AI is a bubble and I probably shouldn't invest all my savings in NVDA" is a valid point; "AI is a bubble and therefore stupid and will never work" is not.

cmrdporcupine

If there's anything that can be reliably predicted to be true over multiple decades, it's that capitalism will continually seek to reduce labour costs and automate everything.

You can bet that even if the specific forms attempted in this interval don't take hold, they will eventually.

You and I are too expensive, and have had too much power.

andsoitis

> If there's anything that can be reliably predicted to be true over multiple decades, it's that capitalism will continually seek to reduce labour costs and automate everything.

what about improved life quality? what about an explosion of types of jobs?

> You and I are too expensive, and have had too much power.

do you think the average citizen (or the collective) has MORE power or LESS power than 100 years ago, or 200 years ago?

topaz0

Worth noting that the essay acknowledges that there are ways that people use this stuff and actually like it. Saying it's a scam is about those uses being orders of magnitude less valuable than the companies involved (and credulous media) claim, and even orders of magnitude less than the amount of money that they are actively investing in this stuff. Saying it's a bubble is not a claim that it will go away entirely and never be seen again, it's a claim that reality will eventually manifest and result in massive upheaval as companies go bankrupt, valuations plummet, and associated downstream effects.

skeezyboy

> almost everything we were promised by the dotcoms became reality at some point.

remember the blockchain bubble? used much blockchain lately? are blockchains changing anything?

qsort

See, the problem when making predictions is that the timeframe is effectively the prediction. I don't know what will happen. When I saw GPT-3 I thought it was hot garbage and never took it seriously. As a result I now have large error bars about what the future holds.

What we got from the Internet was some version of the original promises, on a significantly longer timescale, mostly enabled by technology that didn't exist at the time those promises were made. "Directionally correct" is a euphemism for "wrong".

hotpotat

Lots of in-depth analysis, but I think the author is so emotionally invested that they only draw conclusions that justify and support their emotions. I agree that we're in a bubble in the sense that a lot of these companies will go bankrupt, but it won't be Google or Anthropic (unless Google makes a model that's an order of magnitude better or an order of magnitude cheaper with capability parity). Claude is simply too good at coding in well-represented languages like Python and TypeScript not to pay hundreds of dollars a month for (if not thousands, subsidized by employers).

These companies are racing to have the most effective agents and models right now. Once the bottleneck is clearly humans' ability to specify the requirements and context, reducing the cost of the models will be the main competitive edge, and we're not there yet (although even now, the better you are at providing requirements and context, the more effective you are with the models). I think that once cost reduction is the target, Google will win, because they have the hardware capabilities to do so.

danenania

OpenAI was arguably an order of magnitude ahead at one point, and competitors caught up in about a year. So I'm not sure even an advantage like that is insurmountable. Like we saw with Anthropic, you just need a group of key researchers to leave the incumbent and start their own thing - they'll then have a pretty good shot at catching up.

thoroughburro

The bubble will pop, just like the web bubble popped; and that’s going to suck. AI technologies will remain and be genuinely transformative, just like the web remained and was transformative (for good and ill).

troupo

It's a source of constant amusement to me that "arguments" used for AI are indistinguishable from "arguments" used for crypto.

(With a caveat that LLMs actually do have their uses)

jjjggggggg

Keep up the good work, but this could be said with more strength and in far fewer words by removing the indulgent rambling.

bibelo

The irony is that I asked ChatGPT to make a summary in French. However, I'm tired of the AI bubble and of seeing half of my Twitter feed filled with AI announcements and threads.

CharlesXY

Reddit and especially LinkedIn have become a cesspool of generated content; thankfully it's pretty easy to spot and block.

bgwalter

SoftBank is also more cautious and the "$500 billion" Stargate project that was hyped in the White House will just build a single data center by the end of 2025:

https://www.wsj.com/tech/ai/softbank-openai-a3dc57b4

tomjuggler

Best rant I have read in such a long time. Subscribed despite the fact that I am all-in on AI for coding (plus much more) and disagree completely with the author's point of view.

CharlesXY

This is quite refreshing to read. While I would classify myself more in the group of “optimists”, I do believe there is a severe lack of skepticism, and those who share negative or more conservative views are indeed held to a different standard than those who paint themselves as “optimists”. Unlike other trends before, the wave of grifters in the AI space is astounding; anything can be “AI-powered” as long as it's a wrapper/chatbot.