What if artificial intelligence is just a "normal" technology?
55 comments · September 8, 2025
jcranmer
Well, for starters, it would make The Economist's recent article on "What if AI made the world's economic growth explode?" [1] look like the product of overly credulous suckers for AI hype.
[1] https://www.economist.com/briefing/2025/07/24/what-if-ai-mad...
marginalia_nu
AI is technology that does not yet exist and can only be speculated about. When AI materializes into existence, it becomes normal technology.
Let's not forget there have been times when if-else statements were considered AI. NLP used to be AI too.
1c2adbc4
Do you have a suggestion for a better name? I care more about the utility of a thing, rather than playing endless word games with AI, AGI, ASI, whatever. Call it what you will, it is what it is.
J_McQuade
Broadly Uneconomical Large Language Systems Holding Investors in Thrall.
OJFord
It will depend on the final form the normal useful tools take, but for now it's 'LLMs', 'coding agents', etc.
marginalia_nu
I don't particularly mind the term, it's a useful shibboleth separating the marketing and sci-fi from the takes grounded in reality.
el_nahual
We have a name: Large Language Models, or "Generative" AI.
It doesn't think, it doesn't reason, and it doesn't listen to instructions, but it does generate pretty good text!
chpatrick
[citation needed]
People constantly assert that LLMs don't think in whatever magic way humans do think, when we don't even have any idea how human thinking works.
exe34
I think it's fine to keep the name; we just have to realise it's like magic. Real magic can't be done. Magic that can be done is just tricks. AI that works is just tricks.
1c2adbc4
I didn't realize that magic was the goal. I'm just trying to process unstructured data. Who's here looking for magic?
lo_zamoyski
Statistics.
A lot of this is marketing bullshit. AFAIK, even "machine learning" was a term made up by AI researchers who, when the AI winter hit, wanted to keep getting a piece of that sweet grant money.
And "neural network" is just a straight up rubbish name. All it does is obscure what's actually happening and leads the proles to think it has something to do with neurons.
michaeldoron
NLP is still AI - LLMs are using Natural Language Processing, and are considered artificial intelligence.
danaris
They still are.
Artificial Intelligence is a whole subfield of Computer Science.
Code built of nothing but if/else statements controlling the behavior of game NPCs is AI.
A* search is AI.
NLP is AI.
ML is AI.
Computer vision models are AI.
LLMs are AI.
None of these are AGI, which is what does not yet exist.
One of the big problems underlying the current hype cycle is the overloading of this term, and the hype-men's refusal to clarify that what we have now is not the same type of thing as what Neo fights in the Matrix. (In some cases, because they have genuinely bought into the idea that it is the same thing, and in all cases because they believe they will benefit from other people believing it.)
ACCount37
"AI" is a wide fucking field. And it occasionally includes systems built entirely on if-else statements.
lo_zamoyski
There is no difference between AI and non-AI save for the model the observer is using to view a particular bit of computation.
OkayPhysicist
Eh, I'd be fairly comfortable delineating between AI and other CS subfields based on the idea of higher-order algorithms. For most things, you have a problem with a fixed set of fixed parameters, and you need a fixed solution (e.g., 1+1=2). In software, we mostly deal with one step up from that: we solve general-case problems over a fixed set of variable parameters, and we produce algorithms that take the parameters as input and produce the desired solution (e.g., f(x,y) = x + y). The field of AI largely concerns itself with algorithms that produce models to solve entire classes of problems, taking the specific problem description itself as input (e.g., SAT solvers, artificial neural networks, etc., where g("x+y") => f(x,y) = x + y). This isn't a perfect definition of the field (it ends up catching some things, like parser generators and compilers, that aren't typically considered "AI"), but it does, IMO, pretty fairly represent a distinct field in CS. A rough sketch of the distinction is below.
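Roughly, in Python (make_solver is just an illustrative toy of "problem description as input", not anyone's actual system; real SAT solvers or neural nets search for or learn the solver rather than parse it):

    # Second-order (ordinary software): variable parameters, one fixed problem.
    def add(x, y):
        return x + y

    # The step AI concerns itself with: take the problem description itself
    # as input and produce a solver for it, g("x + y") => f(x, y) = x + y.
    def make_solver(problem: str):
        def solver(x, y):
            # Toy "model": just evaluate the expression over x and y.
            return eval(problem, {}, {"x": x, "y": y})
        return solver

    f = make_solver("x + y")
    print(f(1, 2))  # 3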
wvbdmp
Okay, so AI isn’t exceptional, but I’m also not exceptional. I run on the same tech base as any old chimpanzee, but at one point our differences in degree turned into one of us remaining “normal” and the other burning the entire planet.
Whether the particular current AI tech is it or not, I have yet to be convinced that the singularity is practically impossible, and as long as things develop in the opposite direction, I get increasingly unnerved.
redwood
I think the "calculator for words" analogy is a good one. It's imperfect since words are inherently ambiguous, but then again so are certain forms of digital numbers (floating point, anyone?).
Through this lens it's way more normal.
sfpotter
Floating point numbers aren't ambiguous in the least. They behave by perfectly deterministic and reliable rules and follow a careful specification.
GMoromisato
So are LLMs. Under the covers they are just deterministic matmul.
mhh__
And at scale you even have a "sampling" of sorts (even if the distribution is very narrow unless you've done something truly unfortunate in your FP code) via scheduling and parallelism.
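(Toy illustration, assuming the "sampling" comes down to reduction order: floating-point addition isn't associative, so summing the same values in a different order, as a parallel or rescheduled reduction might, gives slightly different answers.)

    import random

    # Same values, different summation order, slightly different result.
    values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

    shuffled = values[:]
    random.shuffle(shuffled)

    a = sum(values)
    b = sum(shuffled)
    print(a == b)        # often False
    print(abs(a - b))    # tiny, but usually nonzero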
ranger207
AI being normal technology would be the expected outcome, and it would be nice if it just hurried up and happened so I could stop seeing so much spam about AI actually being something much greater than normal technology.
Kapura
Digital spreadsheets (Excel, etc.) have done much more to change the world than so-called "artificial intelligence," and on the current trajectory it's difficult to see that changing.
thepryz
I don’t know if I would agree.
Spreadsheets don't really have the ability to promote propaganda and manipulate people the way LLM-powered bots already have. Generative AI is also starting to change the way people think, or perhaps not think, as people begin to offload critical thinking and writing tasks to agentic AI.
Swizec
> Spreadsheets don’t really have the ability to promote propaganda and manipulate people
May I introduce you to the magic of "KPI" and "Bonus tied to performance"?
You'd be surprised how much good and bad in the world has come out of some spreadsheet showing a number to a group of promotion-chasing, type-A, otherwise completely normal people.
bilsbie
I’m guessing it will be exactly like the internet. Changes everything and changes nothing.
doc_manhat
https://knightcolumbia.org/content/ai-as-normal-technology
Seems to be the referenced paper?
If so previously discussed here: https://news.ycombinator.com/item?id=43697717
akomtu
Normal? AI is an alien technology to us, and we are being "normalized" to become compatible with it.
aeternum
AI actually seems far less alien than steam engines, trains, submarines, flight, and space travel.
People weren't sure if human bodies could handle moving at >50mph.
pessimizer
I've come to the conclusion that it is a normal, extremely useful, dramatic improvement over web 1.0. It's going to
1) obsolete search engines powered by marketing and SEO, and give us paid search engines whose selling points are how comprehensive they are, how predictably their queries behave (I miss the "grep for the web" they were back when they were useful), and how comprehensive their information sources are.
2) Eliminate the need to call somebody in the Philippines awake in the middle of the night, just for them to read you a script telling you how they can't help you fix the thing they sold you.
3) Allow people to carry local compressed copies of all written knowledge, with 90% fidelity, but with references and access to those paid search engines.
And my favorite part, which is just a footnote I guess, is that everybody can move to a Linux desktop now. The chatbots will tell you how to fix your shit when it breaks, and in a pedagogical way that will gradually give you more control and knowledge of your system than you ever thought you were capable of having. Or you can tell it that you don't care how it works, just fix it. Now's the time to switch.
That's your free business idea for today: LLM Linux support. Train it on everything you can find, tune it to be super-clippy. Charge people $5 a month. The AI that will free you from their AI.
Now we just need to annihilate web 2.0, replace it with peer-to-peer encrypted communications, and we can leave the web to the spammers and the spies.
aredox
The potentially "explosive" part of AI was that it could be self-improving: using AI to improve AI, or AI improving itself, growing exponentially until it becomes super-human. This is what the "Singularity" and the AI "revolution" are based on.
But in the end, despite claims that AI has PhD-level intelligence, the truth is that even AI companies can't get AI to help them improve faster. Anything slower than exponential is proof that their claims aren't true.
https://archive.ph/NOg8I