The AI wildfire is coming. It's going to be painful and healthy
198 comments
December 7, 2025
striking
I'm excited for the AI wildfire to come and engulf these AI-written thinkpieces. At this point I'd prefer a set of bullet points over having to sift through more "it's not X (emdash) it's Y" pestilence.
redrix
You’re absolutely right! It’s not just pestilence—it’s the death of the internet as we know it. …I’m sorry, I couldn’t help myself.
Edit: I forgot HN strips emojis.
nick486
> "it's not X (emdash) it's Y" pestilence.
I wonder for how long this will keep working. Can't be too hard to prompt an AI to avoid "tells" like this one...
ben_w
Anyone lazy enough to not check the output is also going to be lazy enough to be easy to spot.
People who put the effort into checking the output aren't necessarily checking more than style, but some of them will, so it will still help.
phantasmish
The trouble is "AI" is waaaaay less of a boost to productivity if you have to actually check the output closely. My wife does a lot with AI-assisted writing and keeps running into companies that think it's going to let them fire a shitload of writers and have the editors do everything... but editing AI slop is way more work than editing the output of a half-decent human writer, let alone a good one.
If you're getting a lot of value out of LLM writing right now, your quality was already garbage and you're just using it to increase volume, or you have let your quality crater.
evanelias
Luckily there are plenty of other obvious tells!
Biggest one in this case, in my opinion: it's an extremely long article with awkward section headers every few paragraphs. I find that any use of "The ___ Problem" or "The ___ Lesson" for a section header is especially glaring. Or more generally, many superfluous section headers of the form "The [oddly-constructed noun phrase]". I mean, googling "The Fire-Retardant Giants" literally only returns this specific article.
Or another one here: the historic stock price data is slightly wrong. For whatever reason, LLMs seem to make mistakes with that often, perhaps due to operating on downsampled data. The initial red-flag here is the first table claims Apple's split-adjusted peak close in 2000 was exactly $1.00.
There are plenty of issues with the accuracy of the written content as well, but it's not worth getting into.
dist-epoch
People are already prompting with "yeah, don't do these things":
ssl-3
"That's such a great observation that highlights an important social issue — let's delve into it!"
I've been prompting the bot to avoid its tics for as long as I've been using it for anything; 3 years or so, now, I'd guess.
It's just a matter of reading and understanding the output, noticing patterns that are repetitious or annoying, and instructing the bot as such: "No. Fucking stop that."
jakeydus
You forgot the italics on Y
clickety_clack
I’m just glad everyone stopped posting their perfect prompting strategies.
PunchyHamster
Even if the bubble burst is massive, the slop factories will not stop using LLMs, because slop is one of the use cases they're genuinely good at.
jen20
It's not like many of those places weren't producing slop beforehand, either.
debo_
It makes me wonder what verbal tics / tells it has in other languages.
anthk
In Spanish: "En resumen..." (in summary...)
Bluestrike2
> it's not X (emdash) it's Y
No, no, no! Stop that! The em dash is a wonderful little punctuation mark that's damned useful when used with purpose. You can't turn it into some scarlet glyph just because normal people finally noticed it exists. LLMs use them because we used them, damn it.
For god's sake, are we supposed to go back to the dark ages of the double hyphen like typographic barbarians in the hopes that a future update won't ruin that, too? After all the work to get text editors to automatically substitute them in the first place?
What's funny is that, when people first started noticing that LLMs tended to like the em dash, I'd mentioned to a friend that I hoped—rather naively—it might lead to a resurgence and people would think to themselves "huh, that looks pretty useful." Needless to say, I got that one wrong. Are we really going to sacrifice the poor em dash just because people can't come up with a better signifier for LLM text?
striking
Oh, no thanks. The emdash is lazy writing, through and through, for the same reason a parenthetical expressed any other way might be. LLMs overuse them the same way humans do: to pack in context where it doesn't belong. I'd happily lay the emdash and all its terrible cousins upon the sacrificial altar to see a renaissance in editing and proper sentence construction.
tjr
I first learned about em dash reading the GNU Texinfo manual in the 1990s. Now I have to wear a red, slightly long horizontal line on my shirt, and passersby shun me.
spwa4
Of course, that's exactly what won't happen. AI as "better spam" is not going away, it's going to wriggle in everywhere.
It's more things like AI delivering pizza that's under threat. You know, the actual value.
8f2ab37a-ed6c
With little growth and hiring happening outside of firms betting the farm on AI—and getting the funding to stay alive and play the lottery—what is a random tech employee supposed to do here?
It seems like right now the most rational move to stay in the industry is to milk the AI wave as much as possible, learn all of the tools, get a big brand name on one's resume, and then land somewhere still-alive once the AI music stops? But ultimately if nothing outside of AI is growing, it's one big game of musical chairs and even that might not save you?
karlgkk
That “rational move” has always been a good move, regardless of AI. This is a boom/bust industry, and the next boom will come in a few years. While we’re at it, if you’re making engineer money, you should be targeting retirement at 50. I’m not saying you have to do that, but it sure helps to have that option.
delfinom
> if you’re making engineer money,
SV & big tech engineer money.
The majority of engineering fields don't pay enough to retire at 50. Comfortable compared to the rest of the country, sure.
karlgkk
I think maybe that was implied, considering the topic of conversation and website we’re on.
That said if you’re making $250k+ a year and not on track to retire by 50, seriously please open a retirement calculator and figure out what you need to do to get there.
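A rough version of that calculator in a few lines of Python (a sketch only; the savings rate, real return, and spending figures are illustrative assumptions, not advice):

    # Back-of-the-envelope: save from age 30 to 50 and test the result
    # against the classic 4% withdrawal-rate target. All inputs are assumptions.
    def retirement_sketch(annual_savings=75_000, real_return=0.05,
                          years=20, annual_spend=100_000, swr=0.04):
        balance = 0.0
        for _ in range(years):
            balance = balance * (1 + real_return) + annual_savings  # grow, then deposit
        target = annual_spend / swr  # nest egg needed at a 4% withdrawal rate
        return balance, target

    balance, target = retirement_sketch()
    print(f"saved: ${balance:,.0f}  target: ${target:,.0f}")
    # roughly $2.48M saved vs. a $2.5M target under these assumed numbers

Tweak the inputs to your own situation; the point is just that the arithmetic is simple enough to check yourself.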
onraglanroad
When you can retire depends on how little you need.
Though, of course, if you're living from investment income you should be aware you're living off the work of other people.
reactordev
This. People act like we’ve gotten $200k+ for more than a decade. Most of us haven’t. It wasn’t until 10 years into my career that I hit $100k, so this is boomer math that doesn’t account for the inflation of everything.
wakawaka28
A majority of software engineers don't make enough money to retire at 50. People who have retired so young tend to be very lucky in both employment and their investments. Most probably stayed unmarried, inherited significant amounts of money, and/or married into even more money. It also helps to be lucky enough to start with a $100k+ job at age 23 and never have any bad luck to set you back. I've met people who check some/all of these boxes, and even they seem to not be retiring at 50.
dspillett
> what is a random tech employee supposed to do here?
My plan, as someone who was thinking of leaving tech anyway (remote work is not for me, practically any new tech job I get will be at least as remote as this one has become, and I want to program, not manage programmers, artificial or otherwise), is to stay where I am and push through to the other side if possible; if not, I'll find myself redundant. At that point I'll end up on a lower wage doing something else from the ground up, but if LLMs are going to be what we are told they are, programming will become a minimum wage job for most anyway. Either way, sticking where I am for now, tightening the purse strings a bit, and saving as much as I can is the best course of action.
swatcoder
If you're a tech employee in a large company with lucrative compensation, you should be aggressively reducing your expenses and banking your excess so you can weather what might be a long period of unemployment, and can adapt more smoothly to employment at more modest compensation when you manage to get back in.
Unless you're working very obviously outside the blast radius of an AI-bubble correction (you'd know if you were) or are a very high-value VIP (again, you'd know), you should assume you'll be spending some time without a job within the next few years. Possibly a long time.
You might get lucky, but it's not really going to be in your control and "milking the AI wave, learning all the tools" isn't going to change your odds much. It really is musical chairs. Whether you lose your job will depend on where you happen to be standing when the music stops. And there are going to be so many other people looking for the same new chair as you, with resumes that look almost exactly like yours, that getting a new job will basically come down to a lottery draw.
If you think the AI stuff is cool, study it and play with it. Otherwise, just save money and start working on the outline for that novel you've been thinking about writing.
torginus
Do you think this has something to do with the current US policy of antagonizing most of the Western world?
Tech and software's investment balance sheet comes down to a largely fixed cost of development vs. a large customer base where every customer has little to no additional cost.
If you manage to burn the bridges or at least scare hundreds of millions of those people into exploring alternatives, that really eats into your total target market in the long run.
pbkompasz
That is a —good— point.
pizlonator
> AI inference demand is directed at improving actual earnings. Companies are deploying intelligence to reduce customer acquisition costs, lower operational expenses, and increase worker productivity. The return is measurable and often immediate, not hypothetical.
Is the return measurable and immediate?
Is it really?
never_inline
It's AI writing. Big words and rule of 3.
khannn
I forgot about the rule of 3 but that's obviously AI writing
pizlonator
Yeah maybe the AI thought leaders will be replaced by AI
But not in the sense of singularity and explosive intelligence, but in the sense of a flaming explosive bubble of slop
com2kid
Yes.
Dentists' offices that only need 1 receptionist instead of 2.
A dramatic reduction in front line tier 1 customer support reps.
Translation teams laid off.
Documentation teams dramatically reduced.
Data entry teams replaced by vision models.
pizlonator
That's a cool dream, but my question is: is it happening?
Out of the things you listed the only ones that seem plausible are translation team and data entry team, though even there, I'd want humans to deslop the output.
com2kid
I'm telling you of what I've either worked on or seen myself.
Just a couple of days ago I scheduled a furnace repair through an AI receptionist on the phone.
Layoffs in tech support and customer service already happened last year.
Entry level sales jobs doing cold calling have been replaced all over the place.
carlosjobim
You are not capable of telling the difference between human translated and AI translated communication.
jakeydus
I think whether it is happening is an important question, but "does the consumer actually want it to happen?" should be equally important. It won't be, because the c-suite will just make the decision for us all, but it ought to be.
emp17344
Historically, this is not how technology that improves productivity has affected the economy. I’d encourage you to learn more about economics and the history of automation.
com2kid
Doing this stuff is literally my job.
Large banks have tens of thousands of call center employees and a large % of calls they handle are perfectly solvable with a good AI bot. They are working very hard to cut call center staff as quickly as possible.
People don't realize how much a call to customer service costs. Back when I was at MSFT, a call to tech support for our product cost $20 just to have someone pick up the phone. Since we were selling low-margin HW, a single call to tech support completely erased the profit from that product's sale.
Layoffs have already happened and they will continue to happen.
One can argue this is a positive: as a customer, if I can push a few buttons and issue a voice command to an AI to fix my problem instead of waiting on hold, that is a net win. Also, the price of goods will drop, since the expected cost of customer service factored into the product price will drop.
E.g. at $30 per support call, with 1 in 10 customers calling support during the lifetime of a product, that's $3 saved per unit; and the way costs are structured, $3 saved in manufacturing can end up as nearly $10 off the final retail price of a product.
(And in competitive markets prices do drop when cost savings are found!)
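Spelled out, that arithmetic looks like this (a sketch; the roughly 3x cost-to-retail multiplier is the figure from the example above, not a general rule):

    # Expected support cost per unit, and its knock-on effect on retail price.
    cost_per_call = 30.0                 # $ per support call
    call_rate = 1 / 10                   # fraction of customers who ever call
    cost_to_retail_multiplier = 10 / 3   # "$3 of unit cost ~ $10 at retail"

    support_cost_per_unit = cost_per_call * call_rate                       # $3.00
    retail_price_impact = support_cost_per_unit * cost_to_retail_multiplier

    print(f"support cost per unit: ${support_cost_per_unit:.2f}")
    print(f"approx. retail impact: ${retail_price_impact:.2f}")             # ~$10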
lm28469
Meanwhile I can't get a hold of my landlord because they removed both their support email and online form in favor of an AI chatbot, which means I can't get them to repair my heaters and have been without heating since Thursday.
They're saving pennies but at what cost?
OptionOfT
And yet, whenever I pick up the phone I do so because I need to do something I cannot do on the website.
The chatbot, acting as my agent, whether on the website or on a call, doesn't have more permissions than I have.
Nevermark
> training compute looks more like an operating expense with a short payback window than a durable capital asset
Today GPUs are functionally durable for longer than they are economically durable. So in a market with less demand, there is no reason their economic payback windows cannot be extended further into their functional lifetimes.
There will be energy cost incentives to replace GPUs. But turnover can respond sensibly to demand as it revives, while older GPUs continue working.
Also, the data centers themselves, and especially any associated increase in power generation, will carry forward as long term functional value.
I doubt any downturn in compute demand lasts long. The underlying trend, aside from AI, was for steady increases in demand. Regardless of bad AI business models, or investment overhangs, a greater focus by more entities on AI product-market fits, along with cheaper compute, will quickly soak up cycles in new and better ways.
The wildflowers will grow fast.
blibble
I don't see how Nvidia comes out of this stronger
their huge customers will be able to produce ASICs that will be faster and cheaper to operate than their GPUs
Jensen has to be the luckiest man in the world: first crypto, now "AI"
jonas21
> their huge customers will be able to produce ASICs that will be faster and cheaper to operate than their GPUs
Why? NVIDIA is better positioned to produce faster and more efficient ML ASICs than any of their huge customers (except possibly Google). And on top of that, the fact that there is a huge library of CUDA code that will run out of the box on NVIDIA hardware is a big advantage.
Arguably, this shift has already happened. Modern NVIDIA datacenter GPUs, like the H100, only bear a passing resemblance to a GPU -- most of the silicon is dedicated to accelerating ML workloads.
throw234234234
I think this is what the "circular financing" is all about actually. While you are in the 'picks and shovels' phase you want to use your high margins to buy up the value chain and become more vertically integrated. Effectively investing when the sun shines to diversify the company.
As a possibility, for example, I can see them transforming from a GPU-based corp into a parent company for many fully or partially owned "subsidiaries". They still manufacture chips to be "vertically integrated", but that becomes bread and butter as an enablement rather than the main story (e.g. Google TPUs). As their margins go down, the value accrues to what they own (the business units/product areas).
vb-8448
Nvidia is the Cisco of the dot-com era... Cisco still exists, and it's doing pretty well.
diamond559
Just took them 26 years to touch their peak dot-com bubble stock price again.
vb-8448
And it will probably be the same for Nvidia, unless they find another business stream apart from selling "shovels".
BTW, stock price is not everything: Cisco survived, grew, and it's the backbone of the internet today.
cr125rider
I forgot they bought Splunk. Enterprises love shoveling money into that fire pit
patapong
> their huge customers will be able to produce ASICs that will be faster and cheaper to operate than their GPUs
Are we sure this will be the case? Perhaps the sweet spot for hardware that can train/run language models is the GPU already, especially with the years of head start Nvidia has?
tim333
They were working on adapting GPUs for machine learning back in 2005. Getting lucky with AI was preceded by a lot of preparation.
malux85
Gaming, then crypto, then AI - all GPU hungry!
flopsamjetsam
And each one requiring an order of magnitude more GPUs than the last!
bdangubic
this trend will always continue with the next big thing
mNovak
Is there an elegant term to describe a severely overburdened metaphor? I'm getting lost in the thick bark of an intertwined canopy root system here..
Sgt_Apone
Glad I wasn't the only one getting lost in that forest. I’ve never seen a metaphor get hammered so much in one piece (AI or otherwise).
dlojudice
The text reminded me of one of Veritasium's latest videos [1] about power laws, self-organized criticality, percolation, etc., and it also has a wildfire simulation.
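For anyone who wants to poke at the idea, here's a minimal sketch of the Drossel-Schwabl forest-fire model, the standard toy model behind the self-organized-criticality and percolation framing (the grid size and probabilities below are arbitrary, and this isn't the simulation from the video):

    import random

    # Forest-fire cellular automaton: empty cells grow trees with probability
    # P_GROW, trees ignite spontaneously with probability P_LIGHTNING, and
    # fire spreads to neighboring trees on each step.
    EMPTY, TREE, FIRE = 0, 1, 2
    N, P_GROW, P_LIGHTNING = 50, 0.05, 0.0001

    def step(grid):
        new = [row[:] for row in grid]
        for i in range(N):
            for j in range(N):
                cell = grid[i][j]
                if cell == FIRE:
                    new[i][j] = EMPTY          # burned cell becomes empty
                elif cell == EMPTY:
                    if random.random() < P_GROW:
                        new[i][j] = TREE       # regrowth
                else:  # TREE
                    neighbors = [grid[(i + di) % N][(j + dj) % N]
                                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                    if FIRE in neighbors or random.random() < P_LIGHTNING:
                        new[i][j] = FIRE       # spread or lightning strike
        return new

    grid = [[EMPTY] * N for _ in range(N)]
    for _ in range(500):
        grid = step(grid)
    print("tree density:", sum(row.count(TREE) for row in grid) / N**2)

Run long enough, fire sizes in this kind of model follow a heavy-tailed, power-law-ish distribution: lots of small burns, rare huge ones.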
daemontus
The metaphor sure seems plausible, but why does the whole thing read like a LinkedIn post that was fed to an LLM to farm attention? :(
lbreakjai
Because it most certainly is.
nubg
Could the author please post the prompt this article was generated with?
dmix
VC is inherently high-risk capital. It's by design that most companies will fail or at most break even via acquisitions/acqui-hires, while a small few make investors massive amounts of money.
The only real difference this time around is all of the datacenters being built. There's real hard asset costs making it much riskier and capital intensive.
saulpw
The big difference this time around is that this 'high risk capital' isn't a small amount, it's 1-10% of the entire economy.
thelastgallon
AI is the only 'technology' where nobody knows what it solves. If it is a fridge, people buy it. If it's a dishwasher, people buy it. The use cases of these technologies are immediately understood. AI is pushed down hard by the 'leaders'; the C-suite is pushing everyone to use AI at most companies. Nobody knows what it's supposed to help with, but a great many people claim 'success' with AI. Every full-text search that was perfectly working before got converted to AI search and is instantly 100x worse. Same with lots of customer-facing FAQs, customer support, etc.
Meanwhile, 67% of my time is spent fixing autocorrect on Apple devices.
lkbm
A million different people: I've used AI in X way and it helped me.
You: No one knows anything AI helps with.
Yeah, okay, if you ignore everything every user says then it is indeed a mystery.
AstroBen
How much astroturfing is happening online? These companies certainly have the funds to do so on a wide scale
The only thing I trust about these right now is my own experience
mattgreenrocks
I’m sure it is. Though I can never tell if it is astroturfing or extremely weird AI maximalists just reminding us that they’re in a cult.
bgwalter
This is how it is done openly with clearly Grok-edited or written comments:
https://xcancel.com/elonmusk/status/1997307084853870793#m
One can only imagine the amount of covert promotion.
diamond559
Are these "million people" in the room with you now? Or are they just the bot and shill accounts you're reading on "X"?
lkbm
They're me, my coworkers, my friends. Talk to people. ChatGPT and the other big LLMs have hundreds of millions of users.
You might not like using LLMs. You might not find them useful. You might think they're bad and harmful (I do). But to claim that no one finds them useful is a completely different position, and one that's about as disconnected as it's possible to be.
credit_guy
There is no wildfire coming. The model providers have narrowed down to four: OpenAI, Anthropic, Google, and xAI. The chip manufacturers are Nvidia and Google (TPUs), and AMD is trying to break in as well. Microsoft is positioned very well as a middleman. All these guys are giants already. Just Nvidia, Microsoft and Google together have a market cap above ten trillion. OpenAI, Anthropic, AMD and xAI probably add one trillion more.
Sure, there might be hundreds or thousands of small startups in the AI game, and some are probably as viable as the fabled Pets.com. But even if they all crash and burn, it's going to be a rounding error compared to the 7 companies I mentioned above. AI will be alive and kicking, and nobody will even notice.