What AI Is Really For
50 comments · November 19, 2025 · Dilettante_
helterskelter
Honestly, one of the best use cases I've found for it is creating configs. I used to be able to spend a week fiddling around with, say, nvim settings. Now I tell an LLM what I want and it basically gives it to me, without the trial and error, or having to locate some obscure comment from 2005 that tells me what I need to know.
awesome_dude
And bug fixes
"This lump of code is producing this behaviour when I don't want to"
Is a quick way to find/fix bugs (IME)
BUT it requires me to understand the response (sometimes the AI hits the nail on the head, sometimes it says something that makes my brain - that's not it, but now I know exactly what it is
sockgrant
“As a designer…”
IMHO the bleeding edge of what’s working well with LLMs is within software engineering because we’re building for ourselves, first.
Claude Code is incredible. Where I work, there are a huge number of custom agents that integrate with our internal tooling. Many make me very productive and are worthwhile.
I find it hard to buy into non-SWEs' opinions on the uselessness of AI, solely because I think the innovation is lagging in other areas. I don't doubt that they don't yet have compelling AI tooling.
ihaveajob
I'm curious if you could share something about custom agents. I love Claude Code and I'm trying to get it into more places in my workflow, so ideas like that would probably be useful.
verdverm
I've been using Google ADK to create custom agents (fantastic SDK).
With subagents, and A2A more generally, you should be able to hook any of them into your preferred agentic interface.
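Roughly what that looks like (a minimal sketch assuming the google-adk Python package; the model name, tool, and agent names are illustrative, not from any real setup):

```python
# pip install google-adk
from google.adk.agents import Agent

def check_service_status(service: str) -> dict:
    """Stand-in for a tool that would wrap your internal tooling (stubbed here)."""
    return {"service": service, "status": "ok"}

# A subagent specialized for one job, with the tool attached.
status_agent = Agent(
    name="status_agent",
    model="gemini-2.0-flash",
    description="Answers questions about internal service health.",
    instruction="Use check_service_status to answer health questions.",
    tools=[check_service_status],
)

# A root agent that can delegate to its subagents.
root_agent = Agent(
    name="root_agent",
    model="gemini-2.0-flash",
    description="Routes requests to specialized subagents.",
    instruction="Delegate service-health questions to status_agent.",
    sub_agents=[status_agent],
)
```

From there the adk CLI (`adk run`, `adk web`) can drive it locally, and A2A is the piece that lets you expose the same agents to whatever frontend you prefer.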
hollowturtle
Where are the products? This site, and everywhere else on the internet (X, LinkedIn, and so on), is full of crazy claims, and I have yet to see a product that people need and that actually works. What I'm experiencing is a gigantic enshittification everywhere: Windows sucks, web apps are bloated, slow and uninteresting, infrastructure goes down even with "memory safe rust", and millions and millions of compute hours get burned scaffolding stupid stuff. Such a disappointment.
redorb
I think ChatGPT itself is an epic product, and Cursor has insane growth and usage. I also think they are both over-hyped and have too high a valuation.
hagbarth
If you read a little further in the article, the main point is _not_ that AI is useless, but rather that it is a regular technology, not AGI god-building. A valuable one, but not infinite growth.
NitpickLawyer
> but rather that it is a regular technology, not AGI god-building. A valuable one, but not infinite growth.
AGI is a lot of things, a lot of ever-moving targets, but it's never (under any sane definition) "infinite growth". That's already ASI territory, the singularity and all that stuff. I see more and more people mixing the two, arguing against ASI being a thing when they're actually talking about AGI. "Human-level competence" is AGI. Super-human, ever-improving, infinite growth: that's ASI.
Whether and when we reach AGI is left for everyone to decide. I sometimes like to think about it this way: how many decades would you have to go back before people from that time would say that what we have today is "AGI"?
hagbarth
Sam Altman has been beating[1] the ASI drum for a while now. I don't think it's a stretch to say that this is the vision he is selling.
[1] - https://ia.samaltman.com/#:~:text=we%20will%20have-,superint...
xeckr
Once you have AGI, you can presumably automate AI R&D, and it seems to me that the recursive self-improvement that begets ASI isn't that far away from that point.
corry
"The best case scenario is that AI is just not as valuable as those who invest in it, make it, and sell it believe."
This is the crux of the OP's argument, along with the claim that, in the meantime, incumbents and/or bad actors will use AI as a path to intensify their political and economic power.
But to me the article fails to:
(1) actually make the case that AI is not going to be 'valuable enough', which is a sweeping and bold claim (especially given the speed of AI's progress); and
(2) quantify AI's true value versus its crazy overhyped valuation, which is admittedly hard to do, but it matters whether we're talking 10% or 100x overvalued.
If all of my direct evidence (from my own work and life) is that AI is absolutely transformative and multiplies my output substantially, AND I see that that trend seems to be continuing - then it's going to be a hard argument for me to agree with #1 just because image generation isn't great (and OP really cares about that).
Higher Ed is in crisis; VC has bet its entire asset class on AI; non-trivial amounts of code are being written by AI at every startup; tech cos are paying crazy amounts for top AI talent... in other words, just because it can't one-shot some complex visual design workflow does not mean (a) it's limited in its potential, or (b) that we fully understand how valuable it will become given the rate of change.
As for #2 - well, that's the whole rub, isn't it? Knowing how much something is overvalued or undervalued is the whole game. If you believe it's waaaay overvalued with only a limited time before the music stops, then go make your fortune! "The Big Short 2: The AI Boogaloo".
aynyc
A bit of sarcasm, but I think it's porn.
righthand
It’s at least about stimulating you to give richer data. Which isn’t quite porn.
njarboe
Many people use AI as their source of knowledge. Even though it is often wrong or misleading, its advice is better on average than their own judgement or the judgement of the people they know. An AI that is "smarter" than 95% (?) of the population, even if it never reaches superintelligence, will be a very big deal.
apsurd
To me this means AI is rocket fuel for our post-truth reality.
Post-truth is a big deal and it was already happening pre-AI. AGI, post-scarcity, post-humanity are nerd snipes.
Post-truth, on the other hand, is just a mundane and nasty sociological problem that we ran head-first into and don't know how to deal with. I don't have any answers. It seems like it'll get worse before it gets better.
emp17344
How is this different from a less reliable search engine?
xeckr
The AI race is presumably won by whoever can automate AI R&D first, so everyone in an adjacent field will see the incremental benefits sooner than those further away. The further removed, the harder the takeoff once it happens.
block_dagger
> To think that with enough compute we can code consciousness is like thinking that with enough rainbows one of them will have a pot of gold at its end.
What does consciousness have to do with AGI or the point(s) the article is trying to make? This is a distraction imo.
kmnc
It’s a funny analogy, because what’s missing for the rainbows with pots of gold is magic and fairytales... so what’s missing for consciousness is also magic and fairytales? I’ve yet to see a compelling argument for believing that enough compute wouldn’t allow us to code consciousness.
apsurd
Yes, that's just it though: it's a logic argument. "Tell me why we aren't just stochastic parrots!" is more logically sound than "God made us", but that doesn't de facto make it "the correct model of reality".
I'm suspicious of the claim that the world can be modeled linearly. That physical reality is non-linear is also more logically sound, so why is there such a clear straight line from compute to consciousness?
faceball2000
What about surveillance? Lately I've been feeling that that's what it's really for, because our data can be queried in a much more powerful way once it has all been used to train LLMs.
exceptione
I think this is the best part of the essay:
> But then I wonder about the true purpose of AI. As in, is it really for what they say it’s for?
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have. Their billions buy them ownership over what they are told will remake a future world nearly entirely monetized for them. And if not them, someone else. That’s where the fear comes in. It leads to Manhattan Project rationale, where any lingering doubt over the prudence of pursuing this technology is overpowered by the conviction of its inexorability. Someone will make it, so it should be them, because they can trust them.
crazygringo
> There is a vast chasm between what we, the users, and them, the investors, are “sold” in AI. We are told that AI will do our tasks faster and better than we can — that there is no future of work without AI. And that is a huge sell, one I’ve spent the majority of this post deconstructing from my, albeit limited, perspective. But they — the people who commit billions toward AI — are sold something entirely different. They are sold AGI, the idea of a transformative artificial intelligence, an idea so big that it can accommodate any hope or fear a billionaire might have.
> Again, I think that AI is probably just a normal technology, riding a normal hype wave. And here’s where I nurse a particular conspiracy theory: I think the makers of AI know that.
I think those committing billions towards AI know it too. It's not a conspiracy theory. All the talk about AGI is marketing fluff that makes for good quotes. All the investment in data centers and GPUs is for regular AI. It doesn't need AGI to justify it.
I don't know if there's a bubble. Nobody knows. But what if it turns out that normal AI (not AGI) will ultimately provide so much value over the next couple decades that all the data centers being built will be used to max capacity and we need to build even more? A lot of people think the current level of investment is entirely economically rational, without any requirement for AGI at all. Maybe it's overshooting, maybe it's undershooting, but that's just regular resource usage modeling. It's not dependent on "coding consciousness" as the author describes.
dvcoolarun
I believe it’s a bubble. Every app interface is becoming similar to ChatGPT, claiming they’ll “help you automate,” while drifting away from the app’s original purpose.
Most of this feels like people trying to get rich off VC money — and VCs trying to get rich off someone else’s money.
Animats
It's pretty clear that the financialization aspect of AI is a bubble. There's way too much market cap created by trading debt back and forth. How well AI will work remains an open question at this point.
milesskorpen
It's a big number - but still less than tech industry profits.
Octoth0rpe
That is true, but not evenly distributed. Oracle for example: https://arstechnica.com/information-technology/2025/11/oracl...
Also, it may be true that these companies theoretically have the cash flow to cover the spending, but that doesn't mean they will be comfortable with that risk, especially as it becomes more likely amid some kind of mass-extinction event among AI startups. To concretize that a bit: the remote possibility of having to give up all your profits for 2 years to pay off DC investment is fine at a 1% chance of happening, but maybe not so OK at a 40% chance.