LLM Inevitabilism
619 comments
July 15, 2025
pavlov
Compare these positive introductory experiences with two technologies that were pushed extremely hard by commercial interests in the past decade: crypto/web3 and VR/metaverse.
Neither was ever able to offer this kind of instant usefulness. With crypto, it’s still the case that you create a wallet and then… there’s nothing to do on the platform. You’re expected to send real money to someone so they’ll give you some of the funny money that lets you play the game. (At this point, a lot of people reasonably start thinking of pyramid schemes and multi-level marketing which have the same kind of joining experience.)
With the “metaverse”, you clear out a space around you, strap a heavy thing on your head, and shut yourself into an artificial environment. After the first oohs and aahs, you enter a VR chat room… And realize the thing on your head adds absolutely nothing to the interaction.
vidarh
The day I can put on a pair of AR glasses as lightweight as my current glasses and gain better vision, I'd pay a huge amount for that.
I hate my varifocals because of how constrained they make my vision feel...
And my vision is good enough that the only thing I struggle with without glasses is reading.
To me, that'd be a no-brainer killer app where all of the extra AR possibilities would be just icing.
Once you get something light enough and high-resolution enough, you open up entirely different types of applications like that, which will widen the appeal massively, and I think that is what will then sell other AR/VR capability. I'm not interested enough to buy AR glasses for the sake of AR alone, but if I could ditch my regular glasses (without looking like an idiot), then I'm pretty sure I'd gradually explore what other possibilities it'd add.
xnorswap
I just want the ability to put on a lightweight pair of glasses and have it remind me who people are.
Ideally by consulting a local database, made up of people I already know or have been introduced to.
And while this capability would be life-changing, and has been technically possible for a decade or more, it was one of the first things banned/removed from APIs.
I understand the privacy concerns of facial recognition looking people up against a global database, but I'm not asking for that. I'd be happy to bear the burden of adding names/tags to the hashes myself.
I'd just like to have what other people take for granted: the ability to know whether you've met someone before (sometimes including people you've known for years).
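A minimal sketch of the local-only lookup I have in mind (Python; the embedding model and every name here are hypothetical stand-ins, not any real API):

    import numpy as np

    # Local-only face memory: no global database; I add every entry myself.
    # The embeddings would come from some on-device model that turns a face
    # crop into a fixed-size vector (a hypothetical placeholder here).
    known_faces = {}  # name -> unit-length embedding vector

    def remember(name, embedding):
        known_faces[name] = embedding / np.linalg.norm(embedding)

    def who_is_this(embedding, threshold=0.6):
        """Return the closest known name, or None if we've never met."""
        q = embedding / np.linalg.norm(embedding)
        best_name, best_sim = None, threshold
        for name, known in known_faces.items():
            sim = float(np.dot(q, known))  # cosine similarity
            if sim > best_sim:
                best_name, best_sim = name, sim
        return best_name

Nothing ever leaves the device; the only "enrollment" is me tagging people I've actually met.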
ryanjshaw
Every single HN post on AI or crypto I see this argument and it’s exhausting.
When Eliza was first built, it was seen as a toy. It took many more decades for LLMs to appear.
My favourite example is prime numbers: a bunch of ancient nerds messing around with numbers that today, thousands of years later, allow us to securely buy anything and everything without leaving our homes or opening our mouths.
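To make the prime-number point concrete, here is textbook RSA with toy primes (these are the standard textbook numbers; real keys use ~2048-bit primes plus padding, but the mechanism is the same):

    # Textbook RSA with tiny primes, for illustration only.
    p, q = 61, 53
    n = p * q                  # 3233: the public modulus
    phi = (p - 1) * (q - 1)    # 3120
    e = 17                     # public exponent, coprime with phi
    d = pow(e, -1, phi)        # 2753: private exponent (modular inverse, Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)          # encrypt: 65^17 mod 3233 = 2790
    assert pow(ciphertext, d, n) == message  # decrypt recovers the message

The whole scheme rests on the fact that multiplying two primes is easy but factoring n back into p and q is hard — the "useless" number theory those ancient nerds were playing with.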
You can’t dismiss a technology or discovery just because it’s not useful on an arbitrary timescale. You can dismiss it for other reasons, just not this reason.
Blockchain and related technologies have advanced the state of the art in various areas of computer science and mathematics research (zero knowledge proofs, consensus, smart contracts, etc.). To allege this work will bear no fruit is quite a claim.
pavlov
Research is fine. But when corporations and venture capitalists are asking for your money today in exchange for vague promises of eventual breakthroughs, it's not wrong to question their motives.
dale_glass
> With the “metaverse”, you clear out a space around you, strap a heavy thing on your head, and shut yourself into an artificial environment. After the first oohs and aahs, you enter a VR chat room… And realize the thing on your head adds absolutely nothing to the interaction.
It doesn't if you use it as just a chat room. For some people it does add a lot, though.
The "metaverse" as in Active Worlds, Second Life, VR Chat, our own Overte, etc has been around for a long time and does have an user base that likes using it.
What I'm not too sure about is it having mass appeal, at least just yet. To me it's a bit of a specialized area, like chess. It's of great interest to some and very little to most of the population. That doesn't mean there's anything wrong with places like chess.com existing.
jl6
I don’t have a problem with chess.com existing, but if someone starts shouting loudly about how chess.com is going to be the future of everything, and that I’ll need to buy a bunch of expensive-but-still-kinda-crappy hardware to participate in the inevitable chess.com-based society, and that we need to rearchitect computing from the ground up to treat chess as a fundamental component of UI… well, it just gets a little tiresome.
kozikow
> And realize the thing on your head adds absolutely nothing to the interaction.
There are some nice effects - simulating sword fighting, shooting, etc.
It's just that the benefits still don't outweigh the costs. Getting to "good enough" for most people just isn't possible in the short or medium term.
zorked
> With crypto, it’s still the case that you create a wallet and then… there’s nothing to do on the platform. You’re expected to send real money to someone so they’ll give you some of the funny money that lets you play the game.
This became a problem later, due to governments cracking down on crypto and some terrible technical choices that made transactions expensive just as adoption was ramping up. (Pat yourselves on the back, small blockers.)
My first experience with crypto was buying $5 in bitcoin from a friend. If I hadn't done it that way, I could have gone to a number of websites and bought crypto without opening an account, via credit card or SMS. Today, most of the $5 would be eaten by fees, and buying for cash from an institution requires slow and intrusive KYC.
cornholio
> buying for cash from an institution requires slow and intrusive KYC.
Hello my friend, grab a seat so we can contemplate the wickedness of man. KYC is not some authoritarian or entrenched industry response to fintech upstarts, it's a necessary thing that protects billions of people from crime and corruption.
oytis
Bitcoin seems to be working as a kind of digital gold if you look at price development. It's not that much about technology though.
baxtr
The question I have for your observation (which I think is correct btw) is:
Do you think it's inherent to the technology that the use cases are not useful, or is it our lack of imagination that we haven't come up with something useful yet?
9dev
Solutions in search of a problem just don't tend to be very good solutions, after all.
Maybe the answer isn’t that we’re too dumb/shallow/unimaginative to put it to use, but that the metaverse and web3 are just things that turned out to not work in the end?
jcfrei
Give it some time. Just like LLMs, the first VR headsets were created in the 90s (for example by Nintendo), but it took another 30 years for the hardware to achieve the functionality and comfort that make it a viable consumer product. Apple Vision is starting to get there. And crypto is even younger; it started in early 2009. For people living in countries without proper banking infrastructure, stablecoins are already very helpful. Billions of people live in countries that don't have a well-audited financial sector that respects the rule of law, or an independent central bank that makes sound monetary decisions irrespective of the government. For them, stablecoins and their cheap transactions are huge.
shaky-carrousel
Hours of time saved, and you learned nothing in the process. You are slowly becoming a cog in the LLM process instead of an autonomous programmer. You are losing autonomy and depending more and more on external companies. And one day, with all that power, they'll set whatever price or conditions they want. And you will accept. That's the future. And it's not inevitable.
baxtr
Did you build the house you live in? Did you weave your own clothes or grow your own food?
We all depend on systems others built. Determining when that trade-off is worthwhile and recognizing when convenience turns into dependence are crucial.
shaky-carrousel
Did you write your own letters? Did you write your own arguments? Did you write your own code? I do, and I don't depend on systems others built to do so. And losing the ability to keep doing so is a pretty big trade-off, in my opinion.
Draiken
We're talking about a developer here so this analogy does not apply. If a developer doesn't actually develop anything, what exactly is he?
> We all depend on systems others built. Determining when that trade-off is worthwhile and recognizing when convenience turns into dependence are crucial.
I agree with this and that's exactly what OP is saying: you're now a cog in the LLM pipeline and nothing else.
If we lived in a saner world this would be purely a net positive, but in our current society it simply means we'll get replaced by the cheaper alternative the second it becomes viable, making any dependence on it extremely risky.
And it's not only individuals. What happens when our governments depend on LLMs from these private corporations to function, and those corporations start the enshittification phase?
chii
> and you learned nothing in the process.
Why do you presume the person wanted to learn something, rather than to get the work done ASAP? Maybe they're not interested in learning, or maybe they have something more important to do, and saving this time is a life saver?
> You are losing autonomy and depending more and more on external companies
do you also autonomously produce your own clean water, electricity, gas and food? Or do you rely on external companies to provision all of those things?
shaky-carrousel
The pretty big difference is that I'm not easily able to produce my own electricity or food. But I'm easily able to produce my own code. We are losing autonomy we already have, out of pure laziness, and it will bite us.
77pt77
> Hours of time saved, and you learned nothing in the process
Point and click "engineer" 2.0
We all know this.
Eventually someone has to fix the mess and it won't be him. He will be management by then.
bt1a
these 3090s are mine. hands off!
bambax
The problem with LLM is when they're used for creativity or for thinking.
Just because LLMs are indeed useful in some (even many!) contexts, including coding, especially to get something started or, as in your example, to transcode an existing code base to another platform, doesn't mean they will change everything.
It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).
More like AI is the new VBA. Same promise: everyone can code! Comparable excitement -- although the hype machine is orders of magnitude more efficient today than it was then.
eru
I don't know about VBA, but spreadsheets actually delivered (to a large extent) on the promise that 'everyone can write simple programs'. So much so that people don't see creating a spreadsheet as coding.
Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.
TeMPOraL
Right. Spreadsheets already delivered on their promise (and then some) decades ago, and the irony is, many people - especially software engineers - still don't see it.
> Before spreadsheets you had to beg for months for the IT department to pick your request, and then you'd have to wait a quarter or two for them to implement a buggy version of your idea. After spreadsheets, you can hack together a buggy version of your idea yourself over a weekend.
That is still the refrain of corporate IT. I see plenty of comments, both here and on wider social media, showing that many in our field still just don't get why people resort to building Excel sheets instead of learning to code or asking their software department to make a tool for them.
I guess those who do get it end up working on SaaS products targeting the "shadow IT" market :).
bambax
True, Excel is in the same category, yes.
6510
People know which ingredients to use, the ratios, how long to bake and cook them but the design of the kitchen prevents them from cooking the meal? Professional cooks debate which gas tube to use with which adapter and how to organize all the adapters according to ISO standards while the various tubes lay on the floor all over the building. The stove switches off if you try to use the wrong brand of pots. The cupboard has a retina scanner. Eventually people go to the back of the garden and make a campfire. There is no fridge there and no way to wash dishes. They are even using the wrong utensils. The horror!
mettamage
> everyone can code!
I work directly with marketers, and even if you give them something like n8n, they find it hard to be precise. Programming teaches you a "precise mindset" that you don't have if you aren't really thinking about tech professionally.
I wonder if seasoned UX designers can code now. They do think professionally about software. I wonder if it's at a deep enough granularity such that they can simply use natural language to get something to work.
petra
Can an LLM detect a lack of precision and point it out to you?
TeMPOraL
> It doesn't mean “AI is the new electricity.” (actual quote from Andrew Ng in the post).
I personally agree with Andrew Ng here (and I've literally arrived at the exact same formulation before becoming aware of Ng's words).
I take "new electricity" to mean it'll touch everything people do, become part of every endeavor in some shape or form. Much like electricity. That doesn't mean taking over literally everything; there are plenty of things we don't use electricity for, because alternatives - usually much older alternatives - are still better.
There are still plenty of internal combustion engines on the ground, in the seas, and in the skies, and many of them (mostly at the extremely light and extremely heavy ends of the spectrum) are not going to be replaced by electric motors any time soon. Plenty of manufacturing and construction is still done by means of hydraulic and pneumatic power. We also sometimes sidestep electricity for heating purposes by going straight from sunlight to heat. Etc.
But even there, electricity-based technology is present in some form. The engine may be a humongous diesel-burning colossus, built from heat, metal, and a lot of pneumatics, positioned and held in place by hydraulics - but all the sensors on it are electric, where in the past some would have been hydraulic and the rest wouldn't even exist; it's controlled and operated by an electricity-based computing network; it was designed on computers, and so on.
In this sense, I think "AI is a new electricity" is believable. It's a qualitatively new approach to computing, that's directly or indirectly applicable everywhere, and that people already try to apply to literally everything[0]. And, much like with electricity, time and economics will tell which of those applications make sense, which were dead ends, and which were plain dumb in retrospect.
--
[0] - And they really did try to stuff electricity everywhere back when it was the hot new thing. Same with nuclear energy a few decades later. We still laugh at how people 100 years ago imagined the future would look... in between crying that we got short-changed by reality.
camillomiller
AI is not a fundamental physical element. AI is mostly closed and controlled by people who will inevitably use it to further their power and centralize wealth and control. We acted with this in mind to make electricity a publicly controlled service. There is absolutely no intention nor political strength around to do this with AI in the West.
ben_w
While I'd agree with your first line:
> The problem with LLM is when they're used for creativity or for thinking.
And while I also agree that it's currently closer to "AI is the new VBA", because of the current domain in which consumer AI* is most useful,
I'd nevertheless aver that being useful in simply "many" contexts will make AI "the new electricity". Electricity itself is (or recently was) only about 15% of global primary power, about 3 TW out of about 20 TW: https://en.wikipedia.org/wiki/World_energy_supply_and_consum...
Are LLMs 15% of all labour? Not just coding, but overall? No. The economic impact would be directly noticeable if it was that much.
Currently though, I agree. New VBA. Or new smartphone, in that we ~all have and use them, while society as a whole simultaneously cringes a bit at this.
* Narrower AI such as AlphaFold etc. would, in this analogy, be more like a Steam Age factory which had a massive custom steam engine in the middle distributing motive power to the equipment directly: it's fine at what it does, but you have to make it specifically for your goal and can't easily adapt it for something else later.
informal007
LLMs are helpful for creativity and thinking when you run out of your own ideas.
andybak
I sometimes feel that a lot of people bringing up the topic of creativity have never spent much time thinking, studying and self-reflecting on what "creativity" actually is. It's a complex topic and one that's mixed up with many other complex topics ("originality", "intellectual property", "aesthetic value", "art vs engineering" etc etc)
You see a lot of Motte and Bailey arguments in this discussion as people shift (often subconsciously) between different definitions of key terms and different historical perspectives.
I'd recommend someone tries to gain at least a passing familiarity with art history and the social history of art/design etc. Reading a bit of Edward De Bono and Douglas Hofstadter isn't a bad shout either (although it's many years since I've read the former so I can't guarantee it will stand up as well as my teenage self thought it did)
kazinator
You're discounting the times when it doesn't work. I recently experienced a weird 4X slowdown across multiple VirtualBox VM's on a Windows 10 host. AI led me down rabbit holes that didn't solve the problem.
I finally noticed a configuration problem. For some weird reason, in the Windows Features control panel, the "Virtual Machine Platform" checkbox had become unchecked (spontaneously; I did not touch this).
I mentioned this to the AI, which insisted that I should not flip that option, that this was not it:
> "Virtual Machine Platform" sounds exactly like something that should be checked for virtualization to work, and it's a common area of conflict. However, this is actually a critical clarification that CONFIRMS we were on the right track earlier! "Virtual Machine Platform" being UNCHECKED in Windows Features is actually the desired state for VirtualBox to run optimally.
In fact, it was that problem. I checked the option, rebooted the host OS, and the VMs ran at proper speed.
Not only can AI not be trusted to make deep inferences correctly; it falters on basic associative recall of facts. If you use it as a substitute for web searches, you have to fact-check everything.
LLM AI has no concept of facts. Token prediction is not facts; it's just something that is likely to produce facts, given the right query in relation to the right training data.
baxtr
I am absolutely on board with the LLM inevitablism. It seems inevitable as you describe it. Everyone will use them everyday. Like smartphones.
I am absolutely not on board with AGI inevitablism. Saying “AGI is inevitable because models keep getting better” is an inductive leap that is not guaranteed.
AndyKelley
You speak with a passive voice, as if the future is something that happens to you, rather than something that you participate in.
TeMPOraL
They are not wrong.
The market, meant in a general sense, is stronger than any individual or groups of people. LLMs are here, and already demonstrate enough productive value to make them in high demand for objective reasons (vs. just as a speculation vehicle). They're not going away, nor is larger GenAI. It would take a collapse of technological civilization to turn the tide back now.
suddenlybananas
The market is a group of people.
tankenmate
I have a parallel to suggest; I know it's the rhetorical tool of analogous reasoning, but it deeply matches the psychology of the way most people think. There is a threshold of activated parameters in a model (for some "simple" tasks like summarisation it can be as low as 1.8 billion) beyond which the "emergent" behaviour of "reasonable", "contextual", or "lucid" text appears. In layman's terms, once your model is "large enough" (and this is quite small compared to the largest models currently in daily use by millions), the generated text goes from gibberish to uncanny valley to lucid text quite quickly.
In the same way once a certain threshold is reached in the utility of AI (in a similar vein to the "once I saw the Internet for the first time I knew I would just keep using it") it becomes "inevitable"; it becomes a cheaper option than "the way we've always done it", a better option, or some combination of the two.
So, as is very common in technological innovation / revolution, the question isn't whether it will change the way things are done so much as where it will shift the cost curve. How deeply will it displace "the way we've always done it"? How many hand-woven shirts do you own? Joseph-Marie Jacquard wants to know (and King Cnut has metaphorical clogs to sell to the Luddites).
stillpointlab
There is an old cliché about stopping the tide coming in. I mean, yeah you can get out there and participate in trying to stop it.
This isn't about fatalism or even pessimism. The tide coming in isn't good or bad. It's more like the refrain from Game of Thrones: Winter is coming. You prepare for it. Your time might be better served finding shelter and warm clothing rather than engaging in a futile attempt to prevent it.
OtomotO
The last tide was the blockchain (hype), which was supposed to solve everyone's problems about a decade ago already.
How come there even is anything left to solve for LLMs?
Applejinx
If you believe that there is nobody there inside all this LLM stuff, that it's ultimately hollow, and yet that it'll still get used by the sort of people who'll look at most humans, call 'em non-player characters, and meme at them; if you believe that you're looking at a collapse of civilization because of this hollowness and what it evokes in people… then you'll be doing just that, and I can't blame anybody for engaging in attempts to prevent it.
FeepingCreature
Reminder that the Dutch exist.
imdsm
You can fight against the current of society or you can swim in the direction it's pulling you. If you want to fight against it, you can, but you shouldn't expect others to. For some, they can see that it's inevitable because the strength of the movement is greater than the resistance.
It's fair enough to say "you can change the future", but sometimes you can't. You don't have the resources, and often, the will.
The internet was the future, we saw it, some didn't. Cryptocurrencies are the future, some see it, some don't. And using AI is the future too.
Are LLMs the endpoint? Obviously not. But they'll keep getting better, marginally, until there's a breakthrough, or a change, and they'll advance further.
But they won't be going away.
staunton
I think it's important not to be too sure about what of the future one is "seeing". It's easy to be confidently wrong, and one can find countless examples and quotes where people made this mistake.
Even if you don't think you can change something, you shouldn't be sure about that. If you care about the outcome, you try things also against the odds and also try to organize such efforts with others.
(I'm puzzled by people who don't see it that way but at the same time don't find VC and start-ups insanely weird...)
PeterStuer
The reality for most people is that at a macro level the future is something that happens to them. They try to participate e.g. through voting, but see no change even on issues a significant majority of 'voters' agree on, regardless of who 'wins' the elections.
nradov
What are issues that a significant majority of voters agree on? Polls indicate that everyone wants lower taxes, cleaner environment, higher quality schools, lower crime, etc. But when you dig into the specifics of how to make progress on those issues, suddenly the consensus evaporates.
salviati
Isn't it kind of both?
Did luddites ever have a chance of stopping the industrial revolution?
Yizahi
Luddites weren't stopping industrial revolution. They were fighting against mass layoffs, against dramatic lowering of wages and against replacement of skilled workers with unskilled ones. Now this reminds me of something, hmmm...
StanislavPetrov
Did the Dutch ever have a chance to stop the massive run up in tulip prices?
It's easy to say what was inevitable when you are looking into the past. Much harder to predict what inevitable future awaits us.
bgwalter
No, but software engineers for example have more power, even in an employer's market, than Luddites.
You can simply spend so much time on meticulously documenting that "AI" (unfortunately!) does not work that it will be quietly abandoned.
stiray
Are you sure that the code works correctly? ;)
Now, imagine, what you would do, if you never learned to read the code.
As you were always using only AI.
Anyway, writing code is much simpler than reading someone else's, and I'd rather code it myself than spend time actually reading and studying what the AI has output. In the end, I need to know that the code works.
---
At one point, my former boss explained to me how they were hired by some plane maker to improve the firmware for controlling the rear flaps. They had found some floating-point problem and were flying to a meeting to explain what the issue was. (Edit:) While flying, they figured out that they were on a plane running that exact firmware.
TeMPOraL
Regarding your plane story, I can't help but notice that the fact this plane was in operation, and they were willing to fly on it, implies the problem wasn't that big of an issue.
stiray
It actually was, but no one thought about the plane model until they were in the air. Fair point, though; I should have mentioned it.
(I would love to explain more, but the type of error and the company name were deliberately omitted; anyway, it was fixed a decade ago.)
brulard
Are you sure code from another developer (junior or not) works correctly? Or that it is secure? You have the same need to review the code regardless of the source.
a_wild_dandan
I'm uncertain if MY code works correctly lol. I know many code-illiterate folk; some of them I call "boss" or "client." They get along fine dining on my spaghetti. Likewise, I never touch the wheel/pedals on my car's 45-minute commute to work.
Will someone eventually be scraping me off of the highway? Will my bosses stop printing money with my code? Possibly! But that's life -- our world is built upon trust, not correctness.
thefz
> use a large library that would have required me to dive deep down into the documentation or read its code to tackle my use case
It's all great until it breaks and you have to make changes. Will you be asking the same agent that made the errors in the first place?
lsy
I think two things can be true simultaneously:
1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.
2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.
There are many technologies that have seemed inevitable and seen retreats under the lack of commensurate business return (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram it everywhere.
erlend_sh
Exactly. This is basically the argument of “AI as Normal Technology”.
highfrequency
Thanks for the link. The comparison to electricity is a good one, and this is a nice reflection on why it took time for electricity’s usefulness to show up in productivity stats:
> What eventually allowed gains to be realized was redesigning the entire layout of factories around the logic of production lines. In addition to changes to factory architecture, diffusion also required changes to workplace organization and process control, which could only be developed through experimentation across industries.
eric-burel
Developers haven't even started extracting the value of LLMs with agent architectures yet. Using an LLM UI like OpenAI's is like having just discovered fire and using it to warm your hands (still impressive when you think about it, but not worth the burns), while LLM development is about building car engines (that's where your return on investment is). A rough sketch of what I mean is below.
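(A minimal sketch of the pattern, where llm() is a placeholder for any model call and the "CALL tool arg" / "DONE: answer" protocol is made up for illustration — this is not any particular vendor's API:)

    # Minimal agent loop: the LLM plans, calls tools, and sees the results.
    TOOLS = {
        "search_files": lambda arg: f"(files matching {arg!r})",
        "run_tests": lambda arg: "(test output)",
    }

    def run_agent(task, llm, max_steps=10):
        history = ["Task: " + task]
        for _ in range(max_steps):
            reply = llm("\n".join(history))    # model sees the whole transcript
            if reply.startswith("DONE:"):      # model decided it's finished
                return reply[len("DONE:"):].strip()
            if reply.startswith("CALL "):      # model asked to use a tool
                _, name, arg = (reply.split(" ", 2) + [""])[:3]
                result = TOOLS.get(name, lambda a: "unknown tool")(arg)
                history.append(reply + "\n-> " + str(result))  # feed result back
        return "(gave up after max_steps)"

The point is that the model stops being a chat box and becomes a component wired into your systems; that's where the engineering work, and the value, actually is.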
Jensson
> Developers haven't even started extracting the value of LLMs with agent architectures yet
There are thousands of startups doing exactly that right now. Why do you think this will work, when all evidence points towards it not working? Or why else would it not already have revolutionized everything a year or two ago, when everyone started doing this?
eric-burel
Most of them are a bunch of prompts and don't even have actual developers, for the good reason that there is no training system yet, and the terminology for the people who build these systems isn't even there or clearly defined. Local companies haven't even set up a proper internal LLM, or at least a contract with a provider. I am in France, so probably lagging behind the USA a bit, especially NY/SF, but the term "LLM developer" is only arriving now, mostly under the pressure of isolated developers and companies like mine. This feels really, really early stage.
20k
Every 6 months someone tells me that the latest AI model has revolutionised everything. Every time I try them, they're still terrible.
The issue is that AI fundamentally can never work well for what I'd want it to do, which is code. It's a fundamental mismatch between what AI is good at and the problem being solved. For anything else, I dgaf, because my existing tools are far better - except in niche areas - which isn't enough to sustain the current LLM craze. I've got zero desire to subscribe to proprietary tooling that will be dead within 5 years.
clarinificator
Every booster argument is like this one. $trite_analogy triumphant smile
__loam
3 years into automating all white collar labor in 6 months.
pydry
They're doing it so much it's practically a cliché.
There are underserved areas of the economy, but agentic startups are not one of them.
mns
> Developers haven't even started extracting the value of LLMs with agent architectures yet.
Which is basically what? The infinite monkey theorem? Brute-forcing solutions to problems at huge cost? Somehow people have been tricked into embracing and accepting that they now have to pay subscriptions from $20 to $300 to freaking code. How insane is that? Something that had a very low entry point, something anyone could do, is being turned into a classist system where the future of code is subscriptions you pay to companies run by sociopaths who don't care that the world burns around them, as long as their pockets are full.
eric-burel
I don't have a subscription, not even an OpenAI account (mostly because they messed up their Google account system). You can't extract the value of an LLM by just using the official UI; you just scratch the surface of how they work. And yet there aren't many developers able to build an actual agent architecture that delivers value. I don't include the "thousands" of startups, which clearly suffer from a signaling bias: they don't exist in the economy, and I don't factor them into my reasoning at all. I am talking about actual LLM developers that you can recruit locally, the same way you recruit a web developer today, and who can make sense of "frontier" LLM garbage talk by using proper architectures. These devs are not there yet.
frizlab
I cannot emphasize how much I agree with this comment. Thank you for writing it; I would never have written it as well.
camillomiller
>> Developers haven't even started extracting the value of LLMs with agent architectures yet.
What does this EVEN mean? Do words have any value still, or are we all just starting to treat them as the byproduct of probabilistic tokens?
"Agent architectures". Last time I checked an architecture needs predictability and constraints. Even in software engineering, a field for which the word "engineering" is already quite a stretch in comparison to construction, electronics, mechanics.
Yet we just spew the non-speak "agentic architectures" as if the innate inability of LLMs to manage predictable quantitative operations were not an unsolved issue. As if putting more and more of these things together will automagically solve their fundamental and existential issue (hallucinations) and suddenly make them viable for unchecked and automated integration.
Msurrow
> first signs of pulling back investments
I agree with you, but I’m curious; do you have a link to one or two concrete examples of companies pulling back investments, or rolling back an AI push?
(Yes, it’s just to fuel my confirmation bias, but it still feels nice :-))
0xAFFFF
Most prominent example was this one: https://www.reuters.com/technology/microsoft-pulls-back-more...
fendy3002
LLMs need significant optimization, or we need a significant improvement in computing power at the same energy cost. It's similar to smartphones: at the start they weren't feasible because of computing power, and now we have ones that rival 2000s notebooks.
LLMs are too trivial to be expensive.
EDIT: I worded that badly. What I mean is that the use cases for LLMs are trivial things; they shouldn't be expensive to operate.
killerstorm
An LLM can give you thousands of lines of perfectly working code for less than a dollar. How is that trivial or expensive?
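Back-of-envelope (both figures below are my assumptions for illustration, not quoted prices):

    # Assumed ballpark figures, not quoted rates.
    dollars_per_million_output_tokens = 10.0
    tokens_per_line_of_code = 10

    lines = 2_000
    cost = lines * tokens_per_line_of_code / 1_000_000 * dollars_per_million_output_tokens
    print(f"{lines} lines ~= ${cost:.2f}")  # prints: 2000 lines ~= $0.20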
sgt101
Looking up a project on github, downloading it and using it can give you 10000 lines of perfectly working code for free.
Also, when I use Cursor I have to watch it like a hawk or it deletes random bits of code that are needed or adds in extra code to repair imaginary issues. A good example was that I used it to write a function that inverted the axis on some data that I wanted to present differently, and then added that call into one of the functions generating the data I needed.
Of course, somewhere in the pipeline it added the call into every data generating function. Cue a very confused 20 minutes a week later when I was re-running some experiments.
fendy3002
Well, I presented the statement wrongly. What I mean is that the use cases for LLMs are trivial things; they shouldn't be expensive to operate.
And the one-dollar cost in your case is heavily subsidized; that price won't hold up for long, assuming computing power stays the same.
zwnow
Thousands of lines of perfectly working code? Did you verify that yourself? Last time I tried, it produced slop, and I was extremely detailed in my prompt.
lblume
Imagine telling a person from five years ago that the programs that would basically solve NLP, perform better than experts at many tasks and are hard not to anthropomorphize accidentally are actually "trivial". Good luck with that.
jrflowers
>programs that would basically solve NLP
There is a load-bearing “basically” in this statement about the chat bots that just told me that the number of dogs granted forklift certification in 2023 is 8,472.
clarinificator
Yeah, it solved NLP about 50% of the time, and it also mangles data badly, often in hard-to-detect ways.
Applejinx
"hard not to anthropomorphize accidentally' is a you problem.
I'm unhappy every time I look in my inbox, as it's a constant reminder there are people (increasingly, scripts and LLMs!) prepared to straight-up lie to me if it means they can take my money or get me to click on a link that's a trap.
Are you anthropomorphizing that, too? You're not gonna last a day.
trashchomper
Calling LLMs trivial is a new one. Yea just consume all of the information on the internet and encode it into a statistical model, trivial, child could do it /s
fendy3002
Well, I presented the statement wrongly. What I mean is that the use cases for LLMs are trivial things; they shouldn't be expensive to operate.
hammyhavoc
> all of the information on the internet
Total exaggeration, especially given that Cloudflare provides free tools to block AI, and now tools to charge bots for access to information.
moffkalast
ML models have the nice property of requiring investment only once; they can then be used until the end of history, or until something better replaces them.
Granted the initial investment is immense, and the results are not guaranteed which makes it risky, but it's like building a dam or a bridge. Being in the age where bridge technology evolves massively on a weekly basis is a recipe for being wasteful if you keep starting a new megaproject every other month though. The R&D phase for just about anything always results in a lot of waste. The Apollo programme wasn't profitable either, but without it we wouldn't have the knowledge for modern launch vehicles to be either. Or to even exist.
I'm pretty sure one day we'll have an LLM/LMM/VLA/etc. that's so good that pretraining a new one will seem pointless, and that'll finally be the time we get to (as a society) reap the benefits of our collective investment in the tech. The profitability of a single technology demonstrator model (which is what all current models are) is immaterial from that standpoint.
wincy
Nah, if TSMC got exploded and there was a world war, in 20 years all the LLMs would bit rot.
moffkalast
Eh, I doubt it, tech only got massively better in each world war so far, through unlimited reckless strategic spending. We'd probably get a TSMC-like fab on every continent by the end of it. Maybe even optical computers. Quadrotor UAV are the future of warfare after all, and they require lots of compute.
Adjusted for inflation, it took over $120 billion to build the fleet of Liberty ships during WW2; that's at least 10 TSMC fabs.
keiferski
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)
And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is, and how it will be in the future - and then to adjust their positions because of this awareness.
For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.
SwoopsFromAbove
100%. Not a new phenomenon at all, just the latest bogeyman for the inevitabilists to point to in their predestination arguments.
My aim is only to point it out - people are quite comfortable rejecting predestination arguments coming from, e.g., physics or religion, but are still awed by “AI is inevitable”.
ikr678
It's inevitable not because of any inherent quality of the tech, but because investors are demanding it be so and creating the incentives for 'inevitability'.
I also think EVs are an 'inevitability', but I am much less offended by the EV future, as they still have to outcompete ICEs, there are transitional options (hybrids), there are public transport alternatives, and at least local regulations appear to be keeping pace with the technical change.
AI inevitabilty so far seems to be only inevitable because I can't actually opt out of it when it gets pushed on me.
endymion-light
Techno Calvinists vs Luddite Reformists is a very funny image.
Agree - although it's an interesting view, I think it's far more related to the lack of ideology and writing this has emerged from. I find it more akin to a distorted renaissance. There's such a large population of really intelligent tech people who have zero real care for philosophical or religious thought, but still want to create and make new things.
This leads them down the first path of grafting for more and more money. Soon, a good proportion of them realise the futility of chasing cash beyond a certain point. The problem is the belief that they are beyond these issues that have been dealt with since Mesopotamia.
Which leads to these weird distorted ideologies: creating art from regurgitated art, creating apps that are made to become worse over time. There's a kind of rush to wealth, ignoring the joy of making things to further humanity.
I think LLMs and AI are a genie out of the bottle - inevitable - but more like linear perspective in drawing or the printing press than electricity. Except, because of the current culture we live in, it's as if Leonardo had spent his life trying to sell different variations of a linear perspective tutorial rather than creating, drawing, and making.
theSherwood
I think this is a case of bad pattern matching, to be frank. Two cosmetically similar things don't necessarily have a shared cause. When you see billions in investment to make something happen (AI) because of obvious incentives, it's very reasonable to see that as something that's likely to happen; something you might be foolish to bet against. This is qualitatively different from the kind of predestination present in many religions where adherents have assurance of the predestined outcome often despite human efforts and incentives. A belief in a predestined outcome is very different from extrapolating current trends into the future.
martindbp
Yes, nobody is claiming it's inevitable based on nothing, it's based on first principles thinking: economics, incentives, game theory, human psychology. Trying to recast this in terms of "predestination" gives me strong wordcel vibes.
bonoboTP
It's a bit like pattern matching the Cold War fears of a nuclear exchange and nuclear winter to the flood myths or apocalyptic narratives across the ages, and hence dismissing it as "ah, seen this kind of talk before", totally ignoring that Hiroshima and Nagasaki actually happened, later tests actually happened, etc.
It's indeed a symptom of working in an environment where everything is just discourse about discourse, and prestige is given to some surprising novel packaging or merger of narratives, and all that is produced is words that argue with other words, and it's all about criticizing how one author undermines some other author too much or not enough and so on.
From that point of view, sure, nothing new under the sun.
It's all very well to complain about the boy who cried wolf, but when you see the pack of wolves entering the village, it's no longer just about words.
Now, anyone is of course free to dispute the empirical arguments, but I see many very self-satisfied prestigious thinkers who think they don't have to stoop so low as to actually look at models and how people use them in reality, it can all just be dismissed based on ick factors and name calling like "slop".
Few are saying that these things are eschatological inevitabilities. They are saying that there are incentive gradients that point in a certain direction and it cannot be moved out from that groove without massive and fragile coordination, due to game theoretical reasonings, given a certain material state of the world right now out there, outside the page of the "text".
card_zero
Or historicism generally. Hegel, "inexorable laws of historical destiny", that sort of thing.
guelo
Sorry I don't buy your argument.
(First, I disagree with A Secular Age's thesis that secularism is a new force. Christian and Muslim churches were jailing and killing nonbelievers from the beginning. People weren't dumber than we are today; all the absurdity and self-serving hypocrisy that turns a lot of people off authoritarian religion was as evident to them as it is to us.)
The idea is not that AI is on a pre-planned path; it's just that technological progress will continue, and from our vantage point today, predicting improving AI is a no-brainer. Technology has been accelerating since the invention of fire. Invention is a positive feedback loop where previous inventions enable new inventions at an accelerating pace. Even when large civilizations of the past collapsed and libraries of knowledge were lost and we entered dark ages, human ingenuity did not rest, and eventually the feedback loop started up again. It's just not stoppable. I highly recommend Scott Alexander's essay Meditations On Moloch on why tech will always move forward, even when the results are disastrous to humans.
keiferski
That isn’t the argument of the book, so I don’t think you actually read it, or even the Wikipedia page?
The rest of your comment doesn't really seem related to my argument at all. I didn't say technological progress stops or slows down; I pointed out how the thought patterns are often the same across time, and that the inability or unwillingness to recognize this is psychologically lazy, to oversimplify. And there are indeed examples of technological acceleration or dispersal that were deliberately curtailed - especially with weapons.
TeMPOraL
> I pointed out how the thought patterns are often the same across time, and that the inability or unwillingness to recognize this is psychologically lazy, to oversimplify.
It's not lazy to follow thought patterns that yield correct predictions. And that's the bedrock on which "AI hype" grows and persists - because these tools are actually useful, right now, today, across wide variety of work and life tasks, and we are barely even trying.
> And there are indeed examples of technological acceleration or dispersal which was deliberately curtailed – especially with weapons.
Name three.
(I do expect you to be able to name three, but that should also highlight how unusual that is, and how questionable the effectiveness of that is in practice when you dig into details.)
Also I challenge you to find but one restriction that actually denies countries useful capabilities that they cannot reproduce through other means.
ygritte
> the actor has changed from God to technology
Agreed. You could say that technology has become a god to those people.
isqueiros
This is one of those types of comments to change one's whole world view.
> The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology.
I'm gonna fucking frame that. It goes hard
daliboru
This entire conversation is a masterpiece!
Just picture this convo somewhere in nature, at night, by a fire.
delichon
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
rafaelmn
If you had claimed that AI was inevitable in the 80s and invested, or claimed people would inevitably be moving to VR 10 years ago, you would be shit out of luck. Zuck is still burning billions on it with nothing to show for it and a bad outlook. Even Apple tried it and hilariously missed the demand estimate. The only potential bailout for this tech is AR, but that's still years away from the consumer market and widespread adoption, and will probably have very little to do with the stuff being built for VR, because it's a completely different experience. But I am sure some of the tech/UX will carry over.
Tesla stock has been riding on the self-driving robo-taxi meme for a decade now. How many Teslas are earning passive income while the owner is at work?
Cherry-picking the stuff that worked in retrospect is stupid; plenty of people swore by the inevitability of some tech with billions in investment behind it, and plenty of industry bubbles only look mistimed in hindsight.
gbalduzzi
None of the "failed" innovations you cited were even near the adoption rate of current LLMs.
As much as I don't like it, this is the actual difference. LLMs are already good enough to be a very useful and widespread technology. They can become even better, but even if they don't, there are plenty of use cases for them.
VR/AR, AI in the 80s, and Tesla at the beginning were technologies that some believed could become widespread, but that weren't yet at all.
That's a big difference
alternatex
The other inventions would have had quite the adoption rate too if they had been subsidized the way current AI offerings are. It's hard to compare a business attempting to be financially stable with a business attempting hyper-growth through freebies.
weatherlite
> They can become even better, but even if they don't, there are plenty of use cases for them.
If they don't become better, we are left with a big but not huge change: productivity gains of around 10 to 20 percent in most knowledge work. That's huge for sure, but in my eyes the internet, and the PC revolution before it, were more transformative than that. If LLMs do become better - get so good they replace huge chunks of knowledge workers and then go out into the physical world - then yeah, that would be the fastest transformation of the economy in history, imo.
fzeroracer
> None of the "failed" innovations you cited were even near the adoption rate of current LLMs.
The 'adoption rate' of LLMs is entirely artificial, bolstered by billions of dollars of investment in getting people addicted so that money can be siphoned off them with subscription plans or per-use charges. The worst people you can think of on every C-suite team force-push it down our throats because they use it to write an email every now and then.
The places LLMs have achieved widespread adoption are environments that abuse the addictive tendencies of an advanced stochastic parrot to appeal to lonely and vulnerable individuals (to massive societal damage), true believers who are the worst coders you can imagine shoveling shit into codebases by the truckful, and scammers realizing this is the new gold rush.
ascorbic
The people claiming that AI in the 80s or VR or robotaxis or self-driving cars in the 2010s were inevitable weren't doing it on the basis of the tech available at that point, but on the assumed future developments. Just a little more work and they'd be useful, we promise. You just need to believe hard enough.
With the smartphone in 2009, the web in the late 90s or LLMs now, there's no element of "trust me, bro" needed. You can try them yourself and see how useful they are. You didn't need to be a tech visionary to predict the future when you're buying stuff from Amazon in the 90s, or using YouTube or Uber on your phone in 2009, or using Claude Code today. I'm certainly no visionary, but both the web and the smartphone felt different from everything else at the time, and AI feels like that now.
hammyhavoc
LLM inevitablists definitely assume future developments will improve their current state.
Qwertious
https://www.youtube.com/watch?v=zhr6fHmCJ6k (1min video, 'Elon Musk's broken promises')
Musk's 2014/2015 promises are arguably delivered, here in 2025 (took a little more than '1 month' tho), but the promises starting in 2016 are somewhere between 'undelivered' and 'blatant bullshit'.
rafaelmn
I mean, no argument here - but the insane valuation was at some point based on a fleet of self-driving taxis built on cars they don't even have to own, overtaking Uber. I don't think they are anywhere close to that. (It's hard to keep track of what it is now - robots and AI?) Kudos for hype-chasing all these years, though. Only beaten by Jensen on that front.
NBJack
Ironically, this is exactly the technique for arguing that the blog mentions.
Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as a Segway?
zulban
> Remember ...
No, I don't remember it like that. Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?
You don't. I love the argument ad absurdum more than most but you've taken it a teensy bit too far.
thom
People genuinely did suggest that we were going to redesign our cities because of the Segway. The volume and duration of the hype were smaller (especially once people saw how ugly the thing was) but it was similarly breathless.
Jensson
> Do you have any serious sources from history showing that Segway hype is even remotely comparable to today's AI hype and the half a trillion a year the world is spending on it?
LLMs are more useful than the Segway, but they can still be overhyped, because the hype is so much larger. So it's comparable: as you say, LLMs are much more hyped, but that doesn't mean they can't be overhyped.
HPsquared
1. The Segway had very low market penetration but a lot of PR. LLMs and diffusion models have had massive organic growth.
2. Segways were just ahead of their time: portable lithium-ion powered urban personal transportation is getting pretty big now.
jdiff
Massive, organic, and unprofitable. And as soon as it's no longer free, as soon as the VC funding can no longer sustain it, an enormous fraction of usage and users will all evaporate.
The Segway always had a high barrier to entry. Currently for ChatGPT you don't even need an account, and everyone already has a Google account.
lmm
> LLMs and diffusion models have had massive organic growth.
I haven't seen that at all. I've seen a whole lot of top-down AI usage mandates, and every time what sounds like a sensible positive take comes along, it turns out to have been written by someone who works for an AI company.
DonHopkins
That's funny, I remember seeing "IT" penetrate Mr. Garrison.
https://www.youtube.com/watch?v=SK362RLHXGY
Hey, it still beats what you go through at the airports.
godelski
I think about the Segway a lot. It's a good example. Man, what a wild time. Everyone was so excited and it was held in mystery for so long. People had tried it in secret and raved about it on television. Then... they showed it... and... well...
I got to try one once. It was very underwhelming...
anovikov
The problem with the Segway was that it was made in the USA and thus absurdly, laughably expensive: it cost the same as a good used car, and top versions as much as a basic new car. Once a small bunch of rich people had all bought one, it was over. China simply wasn't in a position yet to copycat and mass-produce it cheaply, and hype cycles usually don't repeat, so by the time it could, it was too late. If it had been invented 10 years later, we'd all be riding $1000-$2000 Segways today.
positron26
I'm going to hold onto the Segway as an actual instance of hype the next time someone calls LLMs "hype".
LLMs have hundreds of millions of users. I just can't stress how insane this was. This wasn't built on the back of Facebook or Instagram's distribution like Threads. The internet consumer has never so readily embraced something so fast.
Calling LLMs "hype" is an example of cope, judging facts based on what is hoped to be true even in the face of overwhelming evidence or even self-evident imminence to the contrary.
I know people calling "hype" are motivated by something. Maybe it is a desire to contain the inevitable harm of any huge rollout, or to slow down the disruption. Maybe it's simply the egotistical instinct to be contrarian and harvest karma while we can still pretend to be debating shadows on the wall. I just want to be up front: it's not hype. Few of the people calling "hype" can actually believe it, and anyone who does simply isn't credible. That won't stop people from jockeying to protect their interests, hoping that some intersubjective truth we manufacture together will work in their favor, but my lord is the "hype" bandwagon being dishonest these days.
haiku2077
> Remember the revolutionary, seemingly inevitable tech that was poised to rewrite how humans thought about transportation? The incredible amounts of hype, the secretive meetings disclosing the device, etc.? That turned out to be the self-balancing scooter known as a Segway?
Counterpoint: That's how I feel about ebikes and escooters right now.
Over the weekend, I needed to go to my parents' place for brunch. I put on my motorcycle gear, grabbed my motorcycle keys, went to my garage, and as I was about to pull out my BMW motorcycle (MSRP ~$17k), I looked at my Ariel ebike (MSRP ~$2k) and decided to ride it instead. For short trips they're a game-changing mode of transport.
withinboredom
Even for longer trips if your city has the infrastructure. I moved to the Netherlands a few years ago, that infrastructure makes all the difference.
conradev
ChatGPT has something like 300 million monthly users after less than three years, and I don't think Segway has sold a million scooters, even though their new product lines are sick.
I can totally go about my life pretending Segway doesn't exist, but I just can't do that with ChatGPT, which is why the author felt compelled to write the post in the first place. They're not writing about Segway, after all.
delichon
I remember the Segway hype well. And I think AI is to Segway as nuke is to wet firecracker.
andsoitis
> AI is to Segway as nuke is to wet firecracker
wet firecracker won’t kill you
antonvs
That was marketing done before the nature of the device was known. The situation with LLMs is very different, really not at all comparable.
ako
Trend vs. single initiative. One company failed, but overall personal electric transportation is booming in cities. AI is the future, but along the way many individual companies doing AI will fail. Cars are here to stay, but many individual car companies have failed and will fail. Same for phones: everyone has a mobile phone, but Nokia still failed…
leoedin
Nobody is riding Segways around any more, but a huge percentage of people are riding e-bikes and scooters. It’s fundamentally changed transportation in cities.
afavour
Feels somewhat like a self fulfilling prophecy though. Big tech companies jam “AI” in every product crevice they can find… “see how widely it’s used? It’s inevitable!”
I agree that AI is inevitable. But there’s such a level of groupthink about it at the moment that everything is manifested as an agentic text box. I’m looking forward to discovering what comes after everyone moves on from that.
XenophileJKO
We have barely begun to extract the value from the current generation of SOTA models. I would estimate less than 0.1% of the possible economic benefit is currently extracted, even if the tech effectively stood still.
That is what I find so wild about the current conversation and debate. I have Claude Code toiling away right now building my personal organization software, which uses LLMs to take unstructured input and create my personal plans/projects/tasks/etc.
WD-42
I keep hearing this over and over: some LLM toiling away coding personal side projects and utilities, source code never shared, usually because it's "too specific to my needs". This is the code version of slop.
When someone uses an agent to increase their productivity by 10x in a real production codebase that people actually get paid to work on, that will start to validate the hype. I don't think we've seen any evidence of it; in fact, we've seen the opposite.
godelski
If you told someone in 1950 that smartphones would dominate they wouldn't have a hard time believing you. Hell, they'd add it to sci-fi books and movies. That's because the utility of it is so clear.
But if you told them about social media, I think the story would be different. Some would think it would be great, some would see it as dystopian, but neither would be right.
We don't have to imagine, though. All three of these things have captured people's imaginations since before the '50s. It's just that AI has always been closer to the imagined concepts of social media than to highly advanced communication devices.
inopinatus
the idea that we could have a stilted and awkward conversation with an overconfident robot would not have surprised a typical mid-century science fiction consumer
godelski
Honestly, I think they'd be surprised that it wasn't better. I mean... who ever heard of that Asimov guy?
tines
> Some would think it would be great, some would see it as dystopian, but neither would be right.
No, the people saying it’s dystopian would be correct by objective measure. Bombs are nothing next to Facebook and TikTok.
godelski
I don't blame people for being optimistic, and we never should. But we should be aware of how optimism, as well as pessimism, can so easily blind us. There's a quote I like by Feynman:
The first principle is that you must not fool yourself and you are the easiest person to fool.
There is something of a balance. Certainly, social media does some good and has the potential to do more. But it certainly has also been abused, maybe so much that it becomes difficult to imagine it ever being good. We need optimism. Optimism gives us hope. It gives us drive.
But we also need pessimism. It lets us be critical. It gives us direction. It tells us what we need to fix.
But unfettered optimism is like going on a drive with no direction. Soon you'll fall off a cliff. And unfettered pessimism won't even get you out the door. What's the point?
You need both if you want to see and explore the world. To build a better future. To live a better life. To... to... just be human. With either extreme, you're just a shell.
ghostofbordiga
You really think that Hiroshima would have been worse if instead of dropping the bomb the USA somehow got people addicted to social media ?
energy123
> But if you told them about social media, I think the story would be different.
It would be utopian, like how people thought of social media in the oughts. It's a common pattern through human history. People lack the imagination to think of unintended side effects. Nuclear physics leading to nuclear weapons. Trains leading to more efficient genocide. Media distribution and printing press leading to new types of propaganda and autocracies. Oil leading to global warming. IT leading to easy surveillance. Communism leading to famine.
Some of that utopianism is wilful, created by the people with a self-interested motive in seeing that narrative become dominant. But most of it is just a lack of imagination. Policymakers taking the path of local least resistance, seeking to locally (in a temporal sense) appease, avoiding high-risk high-reward policy gambits that do not advance their local political ambitions. People being satisfied with easy just-so stories rather than humility and a recognition of the complexity and inherent uncertainty of reality.
AI, and especially ASI, will probably be the same. The material upsides are obvious. The downsides harder to imagine and more speculative. Most likely, society will be presented with a fait accompli at a future date, where once the downsides are crystallized and real, it's already too late.
godelski
> It would be utopian
People wrote about this. We know the answer! I stated this, so I'm caught off guard: it seems you are responding to someone else, but at the same time, to me. London Times, The Naked Sun, Neuromancer, The Shockwave Rider, Stand on Zanzibar, or The Machine Stops: these all have varying degrees of ideas that would remind you of social media today.
Are they all utopian?
You're right, the downsides are harder to imagine. Yet, it has been done. I'd also argue that it is the duty of any engineer. It is so easy to make weapons of destruction while getting caught up in the potential benefits and the interesting problems being solved. Evil is not solely created by evil. Often, evil is created by good men trying to do good. If only doing good was easy, then we'd have so much more good. But we're human. We chose to be engineers, to take on these problems. To take on challenging tasks. We like to gloat about how smart we are? (We all do, let's admit it. I'm not going to deny it) But I'll just leave with a quote: "We choose to go to the Moon in this decade and do the other things not because they are easy, but because they are hard"
cwnyth
All of this is a pretty ignorant take on history. You don't think those who worked on the Manhattan Project knew the deadly potential of the atom bomb? And Communism didn't lead to famine; Soviet and Maoist policies did. Communism was immaterial to that. And it has nothing to do with utopianism. Trains were utopian? Really? It's just that new technology can be used for good things or bad things, and this goes back to when Grog invented the club. It has zero bearing on this discussion.
Your ending sentence is certainly correct: we aren't imagining the effects of AI enough, but all of your examples are not only unconvincing, they're easy ways to ignore what downsides of AI there might be. People can easily point to how trains have done a net positive in the world and walk away from your argument thinking AI is going to do the same.
p0w3n3d
Back in the 1950s, nuclear tech was seen as inevitable. Many people had even bought plates made from uranium glass. They still glow somewhere in my parents' cabinet, or maybe I broke them.
moffkalast
Well there are like 500 nuclear powerplants online today supplying 10% of the world's power, so it wasn't too far off. Granted it's not the Mr. Fusion in every car as they imagined it back then. We probably also won't have ASI taking over the world like some kind of vengeful comic book villain as people imagine it today.
v3xro
While we can't wish it away, we can shun it, educate people about why it shouldn't be used, and sabotage efforts to include it in all parts of society.
bgwalter
Smartphones are different. People really wanted them since the relatively primitive Nokia Communicator.
"AI" was introduced as an impressive parlor trick. People like to play around, so it quickly got popular. Then companies started force-feeding it by integrating it into every existing product, including the gamification and bureaucratization of programming.
Most people except for the gamers and plagiarists don't want it. Games and programming fads can fall out of fashion very fast.
gonzric1
ChatGPT has 800 million weekly active users. That's roughly 10% of the planet.
I get that it's not the panacea some people want us to believe it is, but you don't have to deny reality just because you don't like it.
bgwalter
There are all sorts of numbers floating around:
https://www.theverge.com/openai/640894/chatgpt-has-hit-20-mi...
This one claims 20m paying subscribers, which is not a lot. Mr. Beast has 60m views on a single video.
A lot of weekly active users will use it once a week, and a large part of that may be "hate users" who want to see how bad/boring it is, similar to "hatewatching" on YouTube.
Gigachad
Sure, because it's free. I doubt most users of LLMs would even want to pay $1/month for them.
tsimionescu
> Most people except for the gamers and plagiarists don't want it.
As someone who doesn't actually want or use AI, I think you are extremely wrong here. While people don't necessarily care about the forced integrations of AI into everything, people by and large want AI massively.
Just look at how much it is used to do homework, or how it replaces Wikipedia and Google in day-to-day discussions. How much it is used to "polish" emails (spew better-sounding BS). How much it is used to generate meme images instead of trawling the web for them. AI is very much a regular part of day-to-day life for huge swaths of the population. Not necessarily in economically productive ways, but still very much embedded and unlikely to be removed, especially since its current capabilities are already good enough for these purposes; they don't need smarter AI, just AI that stays cheap enough.
Roark66
Exactly. Anyone who has learned to use these tools to their ultimate advantage (not just a short-term perceived one, but an actual one) knows their value.
This is why I've been extremely suspicious of the monopolisation of LLM services by a single business/country. They may well be losing billions on training huge models now. But once average work performance shifts up enough to leave the "non-AI-enhanced" by the wayside, we will see huge price increases, and access to these AI tools will be used as geopolitical leverage.
Oh, you do not want to accept "the deal" where our country can do anything in your market and you can do nothing? Perhaps we put export controls on GPT-5 against your country. And from then on it's as if they disconnected you from the Internet.
For this reason alone, local AI is extremely important, and certain people will do anything possible to lock it in a datacenter (looking at you, Nvidia).
Animats
There may be an "LLM Winter" as people discover that LLMs can't be trusted to do anything. Look for frantic efforts by companies to offload responsibility for LLM mistakes onto consumers. We've got to have something that has solid "I don't know" and "I don't know how to do this" outputs. We're starting to see reports of LLM usage having negative value for programmers, even though they think it's helping. Too much effort goes into cleaning up LLM messes.
imiric
> Look for frantic efforts by companies to offload responsibility for LLM mistakes onto consumers.
Not just by companies. We see this from enthusiastic consumers as well, on this very forum. Or it might just be astroturfing, it's hard to tell.
The mantra is that in order to extract value from LLMs, the user must have a certain level of knowledge and skill of how to use them. "Prompt engineering", now reframed as "context engineering", has become this practice that separates anyone who feels these tools are wasting their time more than they're helping, and those who feel that it's making them many times more productive. The tools themselves are never the issue. Clearly it's the user who lacks skill.
This narrative permeates blog posts and discussion forums. It was recently reinforced by a misinterpretation of a METR study.
To be clear: using any tool to its full potential does require a certain skill level. What I'm objecting to is the blanket statement that people who don't find LLMs to be a net benefit to their workflow lack the skills to do so. This is insulting to smart and capable engineers with many years of experience working with software. LLMs are not this alien technology that require a degree to use correctly. Understanding how they work, feeding them the right context, and being familiar with the related tools and concepts, does not require an engineering specialization. Anyone claiming it does is trying to sell you something; either LLMs themselves, or the idea that they're more capable than those criticizing this technology.
rgoulter
A couple of typical comments about LLMs would be:
"This LLM is able to capably output useful snippets of code for Python. That's useful."
and
"I tried to get an LLM to perform a niche task with a niche language, it performed terribly."
I think the right synthesis is that there are some tasks the LLMs are useful at, some which they're not useful at; practically, it's useful to be able to know what they're useful for.
Or, if we trust that LLMs are useful for all tasks, then it's practically useful to know what they're not good at.
ygritte
Even if that's true, they are still not reliable. The same question can produce different answers each time.
imiric
> Or, if we trust that LLMs are useful for all tasks, then it's practically useful to know what they're not good at.
The thing is that there's no way to objectively measure this. Benchmarks are often gamed, and like a sibling comment mentioned, the output is not stable.
Also, everyone has different criteria for what constitutes "good". To someone with little to no programming experience, LLMs would feel downright magical. Experienced programmers, or any domain expert for that matter, would be able to gauge the output quality much more accurately. Even among the experienced group, there are different levels of quality criteria. Some might be fine with overlooking certain issues, or not bother checking the output at all, while others have much higher standards of quality.
The problem is when any issues that are pointed out are blamed on the user, instead of the tool. Or even worse: when the issues are acknowledged, but are excused as "this is the way these tools work."[1,2]. It's blatant gaslighting that AI companies love to promote for obvious reasons.
rightbyte
> Or it might just be astroturfing, it's hard to tell.
Compare the hype for commercial SaaS models to say Deepseek. I think there is an insane amount of astroturfing.
cheevly
Unless you have automated fine-tuning pipelines that self-optimize models for your tasks and domains, you are not even close to utilizing LLMs to their potential. But stating that you don't need extensive, specialized skills is enough of a signal for most of us to know that offering you feedback would be fruitless. If you don't have the capacity by now to recognize the barrier to entry, experts are not going to take the time to share their solutions with someone unwilling to understand.
mumbisChungo
The more I learn about prompt engineering the more complex it seems to be, but perhaps I'm an idiot.
ygritte
The sad thing is that it seems to work. Lots of people are falling for the "you're holding it wrong" narrative.
keeda
People can't be trusted to do anything either, which is why we have guardrails and checks and balances and audits. That is why in software, for instance, we have code reviews and tests and monitoring and other best practices. That is probably also why LLMs have made the most headway in software development; we already know how to deal with unreliable workers that are humans and we can simply transfer that knowledge over.
As was discussed on a subthread on HN a few weeks ago, the key to developing successful LLM applications is going to be figuring out how to put in the necessary business-specific guardrails with a fallback to a human-in-the-loop.
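A minimal sketch of that guardrails-plus-human-fallback pattern, in Python. The specific checks, the queue, and the function names are illustrative assumptions, not anything from the thread:

    # Sketch: validate an LLM draft against business-specific guardrails,
    # falling back to a human reviewer when any check fails.
    def handle_request(request: str, llm, human_queue: list) -> str:
        draft = llm(request)

        checks = [
            len(draft.strip()) > 0,                       # non-empty answer
            "REFUND_ALL" not in draft,                    # forbid a high-risk action
            not draft.lower().startswith("i don't know"),
        ]
        if all(checks):
            return draft

        # Human-in-the-loop fallback instead of shipping a bad answer.
        human_queue.append(request)
        return "Your request has been escalated to a human reviewer."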
lmm
> People can't be trusted to do anything either, which is why we have guardrails and checks and balances and audits. That is why in software, for instance, we have code reviews and tests and monitoring and other best practices. That is probably also why LLMs have made the most headway in software development; we already know how to deal with unreliable workers that are humans and we can simply transfer that knowledge over.
The difference is that humans eventually learn. We accept that someone who joins a team will be net-negative for the first few days, weeks, or even months. If they keep making the same mistakes that were picked out in their first code review, as LLMs do, eventually we fire them.
keeda
LLMs may not learn on the fly (yet), but these days they do have some sort of a memory that they automatically bring into their context. It's probably just a summary that's loaded into its context, but I've had dozens of conversations with ChatGPT over the years and it remembers my past discussions, interests and preferences. It has many times connected dots across conversations many months apart to intuit what I had in mind and proactively steered the discussion to where I wanted it to go.
Worst case, if they don't do this automatically, you can simply "teach" them by updating the prompt to watch for a specific mistake (similar to how we often add a test when we catch a bug.)
But it need not even be that cumbersome. Even weaker models do surprisingly well with broad guidelines. Case in point: https://news.ycombinator.com/item?id=42150769
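A rough sketch of that kind of prompt-level "teaching", analogous to adding a regression test when you catch a bug. The rules file and prompt layout are assumptions for illustration:

    # Sketch: persist caught mistakes so every future prompt carries them.
    from pathlib import Path

    RULES_FILE = Path("llm_rules.txt")

    def record_mistake(rule: str) -> None:
        """Append a correction caught in review, like adding a test for a bug."""
        with RULES_FILE.open("a") as f:
            f.write(f"- {rule}\n")

    def build_system_prompt(base: str) -> str:
        # Fold the accumulated rules into the system prompt.
        rules = RULES_FILE.read_text() if RULES_FILE.exists() else ""
        return f"{base}\n\nKnown mistakes to avoid:\n{rules}"

    record_mistake("Do not use mutable default arguments in Python functions.")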
mtlmtlmtlmtl
Yeah, I can't wait for this slop-generation hype circlejerk to end either. But in terms of being used by people who don't care about quality (scammers, spammers, blogspam grifters, people trying to affect elections by poisoning the narrative, people shitting out crappy phone apps, videos, music, and "art" to grift some ad revenue), gen AI is already the perfect product. Once the people who do care wake up and realise gen AI is basically useless to them, the internet will already be dead. We'll be in a post-truth, post-art, post-skill, post-democracy world, and the only people whose lives will have meaningfully improved are some billionaires in California who added some billions to their net worth.
It's so depressing to watch so many smart people spend their considerable talents on the generation of utter garbage and the erosion of the social fabric of society.
ljosifov
I don't think it's inevitable, for very few things really are. However, I find LLMs good and useful: first the chatbots, now the coding agents. Medical consultation, second opinions and the like look to be not far behind; enough people already use them for that. I give my lab test results to ChatGPT. Tbh I can't fault the author for motivated reasoning, which seems to go: this is not a future I want -> therefore it should not happen -> therefore it will not happen. By the same motivated reasoning, for me it is the future I want: to be able to interact with a computer via language, speech and more; for the computer to be smart, instead of dumb as it is now. If the AI can enhance my smarts, my information-processing power, my memory, the way writing lets me off-load from my head onto paper, a calculator lets me manipulate numbers, and a computer toils for days instead of me, then I will probably want the AI to complement and enhance me too.
bemmu
I was going to make an argument that it's inevitable, because at some point compute will get so cheap that someone could just train one at home, and since the knowledge of how to do it is out there, people will do it.
But seeing that a company like Meta is using >100k GPUs to train these models, even at a 25% yearly improvement in GPU price-performance it would still take until the year ~2060 before someone could buy 50 GPUs and have the equivalent power to train one privately. So I suppose if society decided to outlaw LLM training, or a market crash put companies off continuing to do it, it might be possible to put the genie back in the bottle for a few decades.
I wouldn't be surprised however if there are still 10x algorithmic improvements to be found too...
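For what it's worth, the arithmetic above checks out under its own assumptions (100k GPUs needed today, 50 affordable at home, 25% compounding yearly improvement):

    # Years until 50 GPUs match today's 100k-GPU training run,
    # assuming 25% compounding yearly improvement in price-performance.
    import math

    years = math.log(100_000 / 50) / math.log(1.25)
    print(f"~{years:.0f} years")  # ~34 years, i.e. around 2060 from the mid-2020s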
sircastor
The hardest part about inevitabilism here is that the people arguing this is inevitable are the same people shoveling hundreds of millions of dollars into it: into the development, the use, the advertisement. The foxes are building doors into the hen houses and saying there's nothing to be done; foxes are going to get in, so we might as well make it something that works for everyone.
killerstorm
"put your money where your mouth is" is generally a good thing.
Barrin92
It's a good thing in a world where the pot of money is so small that it doesn't influence what it's betting on. It's a bad thing when you're talking about Zuckerberg or Lehman Brothers, because when they decide to put their money on strange financial investments, they just make reality, and no matter how stupid it is in the long run, we're going down with the ship for at least a decade or so.
lmm
"Talking your book" is seen as a bad thing, especially when not properly disclosed.
a_wild_dandan
That's probably why the old saw isn't just "put your money."
rsanek
Is that really a problem? I feel like those working on AI are not shy about it.
globular-toast
Except "the money" in this case is just part of funds distributed around by the super rich. The saying works better when it's about regular people actually taking risks and making sacrifices.
cdrini
How do you differentiate between an effective debater using inevitabilism as a technique to win a debate, and an effective thinker making a convincing argument that something is likely to be inevitable?
How do you differentiate between an effective debater "controlling the framing of a conversation" and an effective thinker providing a new perspective on a shared experience?
How do you differentiate between a good argument and a good idea?
I don't think you can really?
You could say intent plays a part -- that someone with an intent to manipulate can use debating tools as tricks. But still, even if someone with bad intentions makes a good argument, isn't it still a good argument?
xmodem
A thinker might say "LLMs are inevitable, here's why" and then make specific arguments that either convince me to change my mind, or that I can refute.
A tech executive making an inevitablist argument won't back it up with any justification, or if they do it will be so vague as to be unfalsifiable.
keiferski
Easy: good arguments take the form of books, usually, not rapid-fire verbal exchanges. No serious intellectual is interested in winning debates as their primary objective.
trash_cat
This concept is closely related to the "politics of inevitability" coined by Timothy Snyder.
"...the politics of inevitability – a sense that the future is just more of the present, that the laws of progress are known, that there are no alternatives, and therefore nothing really to be done."[0]
[0] https://www.theguardian.com/news/2018/mar/16/vladimir-putin-...
The article in question obviously applies it to the commercial world, but in the end it comes down to language that takes away agency.
ojr
The company name was changed from Facebook to Meta because Mark thought the metaverse was inevitable, it's ironic that you use a quote from him
seunosewa
The true reason was to have a new untainted brand after the election scandal.
dasil003
Two things are very clearly true: 1) LLMs can do a lot of things that previous computing techniques could not do and we need time to figure out how best to harness and utilize those capabilities; but also 2) there is a wide range of powerful people who have tons of incentive to ride the hype wave regardless of where things will actually land.
To the article's point—I don't think it's useful to accept the tech CEO framing and engage on their terms at all. They are mostly talking to the markets anyway. We are the ones who understand how technology works, so we're best positioned to evaluate LLMs more objectively, and we should decide our own framing.
My framing is that LLMs are just another tool in a long line of software tooling improvements. Sure, it feels sort of miraculous and perhaps threatening that LLMs can write working code so easily. But when you think of all the repetitive CRUD and business logic that has been written over the decades to address myriad permutations and subtly varying contexts of the many human organizations that are willing to pay for software to be written, it's not surprising that we could figure out how to make a giant stochastic generator that can do an adequate job generating new permutations based on the right context and prompts.
As a technologist I want to understand what LLMs can do and how they can serve my personal goals. If I don't want to use them I won't, but I also owe it to myself to understand how their capabilities evolve so I can make an informed decision. I am not going to start a crusade against them out of nostalgia or wishful thinking as I can think of nothing so futile as positioning myself in direct opposition to a massive hype tsunami.
SwoopsFromAbove
This is how I approach the tools too. I believe it’s a healthy approach, but who’s to say whether I’m just a naysayer. shrug
In the 90s a friend told me about the internet. And that he knows someone who is in a university and has access to it and can show us. An hour later, we were sitting in front of a computer in that university and watched his friend surfing the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.
Yesterday I wanted to rewrite a program to use a large library that would have required me to dive deep down into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite it using the library. It succeeded at the first attempt. The rewrite itself was small enough that I could read all code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.
PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:
https://www.gibney.org/prompt_coding
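For the curious, a minimal sketch of what that "one prompt, one file" workflow might look like, assuming the openai Python package; the file names and output path are illustrative, and the linked post is the authoritative description:

    # Sketch: paste a whole library and program into one prompt,
    # ask for a single-file rewrite, then review the result by hand.
    from pathlib import Path
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    library_src = Path("library.py").read_text()
    program_src = Path("program.py").read_text()

    prompt = (
        "Rewrite the program below to use the library below. "
        "Output the complete rewritten program as a single file.\n\n"
        f"--- LIBRARY ---\n{library_src}\n\n"
        f"--- PROGRAM ---\n{program_src}\n"
    )

    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": prompt}],
    )

    # No automated code edits: write the result out and review it manually.
    Path("program_rewritten.py").write_text(response.choices[0].message.content)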