Game over. AGI is not imminent, and LLMs are not the royal road to getting there
83 comments · October 18, 2025
ACCount37
an0malous
That sign would be more valuable than the one Sam Altman’s been holding saying “AGI in 2025”
p1esk
Did he say that?
omnicognate
https://youtu.be/xXCBz_8hM9w?si=KjaolnjTJd2Lz82k
46:12
> What are you excited about in 2025? What's to come?
> AGI. Excited for that.
MattRix
Sam Altman never said that, except as a joke. An interviewer asked “what are you excited for in 2025” and he said “AGI”, then said “having a kid” as his real answer.
It’s also worth noting that even when he does talk about AGI, he makes a strong distinction between AGI (human-level) and ASI (superhuman-level) intelligence. Many people in these kinds of discussions seem to conflate the two.
an0malous
He didn’t say it was a joke, and he has benefited to the tune of hundreds of billions of dollars from the prevailing belief that AGI is imminent, so it seems terribly convenient and charitable to interpret it as a joke. Should we have given the same charity to Elizabeth Holmes and Sam Bankman-Fried when they reported their technological capabilities and cash balances? “Oh it’s not fraud that they materially benefited from, it’s just a joke.”
dogma1138
Symbolic AI didn’t die, though; it was just merged with deep learning, either as a complementary component from the get-go (e.g. AlphaGo, which pairs symbolic tree search with a deep neural network) or, more recently, as a post-processing / intervention technique for guiding and optimizing LLM outputs. Human-in-the-loop and MoR are very much symbolic AI techniques.
bbor
Exactly this, well said. Symbolic AI works so well that we don’t really think of it as AI anymore!
I know I for one was shocked to take my first AI course in undergrad and discover that it was mostly graph search algorithms… To say the least, those are still helpful in systems built around LLMs.
Which, of course, is what makes Mr. Marcus so painfully wrong!
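For anyone who skipped that course, here's a toy sketch of the kind of thing I mean: plain best-first graph search. The graph, goal test, and scoring function below are hypothetical hard-coded stand-ins; in an LLM-based system those hooks are often exactly where a model call gets plugged in.

```python
# Toy best-first search (classic AI-course material, nothing LLM-specific).
# The graph, goal test, and scoring function are made-up stand-ins.
import heapq

def best_first_search(start, is_goal, successors, score):
    """Expand the lowest-score frontier node first; return a path to a goal."""
    frontier = [(score(start), [start])]
    seen = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        node = path[-1]
        if is_goal(node):
            return path
        for nxt in successors(node):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), path + [nxt]))
    return None

# Tiny example: find a route from "plan" to "done".
graph = {"plan": ["draft", "search"], "draft": ["review"],
         "search": ["review"], "review": ["done"]}
print(best_first_search("plan",
                        lambda n: n == "done",
                        lambda n: graph.get(n, []),
                        lambda n: len(n)))   # ['plan', 'draft', 'review', 'done']
```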
imtringued
I'm not sure it's true that symbolic AI is dead, but I think Gary Marcus-style symbolic AI is. His "next decade in AI" paper doesn't even mention symbolic regression. For those who don't know: linear regression tries to find a linear fit to a dataset, quadratic regression a quadratic fit, and so on. Symbolic regression tries to find a symbolic expression that fits the data accurately.
Symbolic regression has an extremely obvious advantage over neural networks, which is that it learns parameters and architecture simultaneously. Having the correct architecture means that the generalization power is greater and that the cost of evaluation due to redundant parameters is lower. The crippling downside is that the search space is so vast that it is only applicable to toy problems.
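To make that concrete, here is a toy sketch of my own (not from Marcus's paper or from SymbolNet): a brute-force symbolic regressor over a tiny grammar of expressions of the form g(x) OP h(x). Even at this scale you can see how quickly the search space would blow up as operators and depth are added, which is the crippling downside mentioned above.

```python
# Toy brute-force symbolic regression over a tiny expression grammar.
# Real systems (genetic programming, PySR, etc.) search far larger spaces.
import itertools
import numpy as np

X = np.linspace(0.1, 2.0, 50)
y = X * np.sin(X)                       # hidden ground truth we try to recover

UNARY = {"id": lambda v: v, "sin": np.sin, "cos": np.cos}
BINARY = {"+": np.add, "*": np.multiply}

def candidates():
    """Enumerate every expression of the form g(x) OP h(x)."""
    for (gname, g), (hname, h) in itertools.product(UNARY.items(), repeat=2):
        for opname, op in BINARY.items():
            yield f"{gname}(x) {opname} {hname}(x)", op(g(X), h(X))

# Pick the expression with the lowest mean squared error against the data.
expr, _ = min(candidates(), key=lambda c: np.mean((c[1] - y) ** 2))
print("recovered:", expr)               # e.g. "id(x) * sin(x)"
```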
But Gary Marcus is in favour of hybrid architectures, so what would that look like? In the SymbolNet paper, they essentially decided to keep the overall neural network architecture but replaced the activation functions with functions that take multiple inputs (i.e., symbolic operators). The network can then be pruned down into a symbolic expression.
That in itself is actually a pretty damning blow to Gary Marcus, because now you have most of the benefits of symbolic AI with only a tiny vestige of investment into it.
What this tells us is that fixed activation functions with a single input appear to be a no-go, but nobody ever said that biological neurons implement sigmoid, ReLU, etc. in the first place. It's possible that spike encoding already acts as a mini symbolic regressor and gives each neuron its own activation-function equivalent.
The "Neuronal arithmetic" paper has shown that biological neurons can not only calculate sums (ANNs can do this), but also multiply their inputs, which is something the activation functions in artificial neural networks cannot do. LLMs use gating in their MLPs and attention to explicitly model multiplication.
There is also the fact that biological neurons form loops. The memory cells in LSTMs perform a similar function, but in the brain there can be memory cells everywhere whereas in a fixed architecture like LSTMs they are only where the designer put them.
It seems as if the problem with neural networks is that they're too static and inflexible and contain too much influence from human designers.
stingraycharles
This is not a really valuable article. The Apple paper was widely considered a “well, duh” paper, GPT-5 being underwhelming seems to be mostly a cost-cutting / supply-can't-keep-up issue, and the rest are mainly just expert opinions.
To be clear, I am definitely an AGI skeptic, and I very much believe that our current technique of running neural networks on GPUs is extremely inefficient, but this article doesn't really add a lot to the discussion; it mostly congratulates itself on the insights of a few others.
an0malous
I don’t think either of your first two statements are accurate, what is your support for those claims?
p1esk
GPT5 compared to the original GPT4 is a huge improvement. It exceeded all my expectations from 2 years ago. I really doubted that most GPT4 limitations would be resolved so quickly.
If they manage a similar quality jump with GPT6, it will probably meet most reasonable definitions of AGI.
dns_snek
> GPT5 compared to the original GPT4 is a huge improvement. It exceeded all my expectations from 2 years ago.
Cool story. In my experience they're still on the same order of magnitude of usefulness as the original Copilot.
Every few months I read about these fantastic ground-breaking improvements and fire up whatever the trendiest toolchain is (most recently Claude Code and Cursor) and walk away less disappointed than last time, but ultimately still disappointed.
On simple tasks it doesn't save me any time but on more complex tasks it always makes a huge mess unless I mentor it like a really junior coworker. But if I do that I'm not saving any time and I end up with lower quality, poorly organized code that contains more defects than if I did it myself.
mellosouls
As somebody who used to link to the occasional Marcus essay, this is a really poor "article" by a writer who has really declined to the point of boorishness. The contents here are just a list of talking points already mostly discussed on HN, so nothing new, and his over-familiar soapbox proclamations add nothing to the discourse.
It's not that he's wrong; I probably still have a great deal of sympathy for his position, but his approach is more suited to social media echo chambers than intelligent discussion.
I think it would be useful for him to take an extended break, and perhaps we could also do the same from him here.
hopelite
I’m not sure an ad hominem assault is any different. You make proclamations without any evidence as if what you say has any more credibility than the next person. In fact, this response makes a reasonable person discount you.
Sure, it reads like a biased, coping, possibly even interested or paid hit piece, as if what happens can be changed just by being really negative about LLMs, but maybe consider taking your own advice there, kid; you know, an extended break.
mellosouls
Please give an example of how we might criticise somebody's method of communication and steep decline in useful contributions to debate (of the order of Marcus's) without you complaining about ad hominem.
socketcluster
TBH, I don't think we actually need AGI. It's not a win-win. It's a civilization-altering double-edged sword with unclear consequences.
I'm quite satisfied with current LLM capabilities. Their lack of agency is actually a feature, not a bug.
An AGI would likely end up implementing some kind of global political agenda. IMO, the need to control things and move things in a specific, unified direction is a problem, not a solution.
With full agency, an AI would likely just take over the world and run it in ways which don't benefit anyone.
Agency manifests as thirst for power. Agency is man's inability to sit quietly in a room, by himself. This is a double-edged sword which becomes increasingly harmful once you run out of problems to solve... Then agency demands that new problems be invented.
Agency is not the same as consciousness or awareness. Too much agency can be dangerous.
We can't automate the meaning of life. Technology should facilitate us in pursuing what we individually decide to be meaningful. The individual must be given the freedom to decide their own purpose. If most individuals want to be used to fulfill some greater purpose (i.e. someone else's goals), so be it, but that should not be the compulsory plan for everyone.
dmix
I didn't know serious technical people were taking the AGI thing seriously. I figured it was just an "aim for the stars" goal where you try to get a bunch of smart people and capital invested into an idea, and everyone would still be happy if we got 25% of the way there.
analognoise
If our markets weren’t corrupt, everyone in the AI space would be bankrupt by now, and we could all wander down to the OpenAI fire sale and buy nice servers for pennies on the dollar.
dmix
I'd take this more seriously if I didn't hear the same thing every other time there was a spike in VC investment. The last 5 times were the next dot com booms too.
dzink
Why the patting yourself on the back after a few opinions? You have large tech and government players, and then you have regular people.
1. For large players: AGI is a mission worth pursuing at the cost of all existing profit (you won't pay taxes today, the stock market values you on revenue anyway, and if you succeed you can control all people and means of production).
2. For regular people the current AI capabilities have already led to either life changing skill improvement for those who make things for themselves or life changing likely permanent employment reduction for those who do things for others. If current AI is sufficient to meaningfully reduce the employment market, AGI doesn’t matter much to regular people. Their life is altered and many will be looking for manual work until AI enters that too.
3. The AI vendors are running at tremendous expense right now and the sources of liquidity for billions and trillions are very very few. It is possible a black swan event in the markets causes an abrupt end to liquidity and thus forces AI providers into pricing that excludes many existing lower-end users. That train should not be taken for granted.
4. It is also possible that WebGPU and other similar scale-AI-across-devices efforts succeed and you get much more compute unlocked to replace advertising.
Serious question: Who in HN is actually looking forward to AGI existing?
barrell
I’m not convinced the current capabilities have impacted all that many people. I think the economy is much more responsible for the lack of jobs than “replacement with AI”, and most businesses have not seen returns on AI.
There is a tiny, tiny, tiny fraction of people who I would believe have been seriously impacted by AI.
Most regular people couldn’t care less about it, and the only regular people I know who do care are the ones actively boycotting it.
mapontosevenths
> Serious question: Who in HN is actually looking forward to AGI existing?
I am.
It's the only serious answer to the question of space exploration. Rockets filled with squishy meat were never going to accomplish anything serious, unless we find a way of beating the speed of light.
Further, humanity's greatest weakness is that we can't plan anything long-term. Our flesh decays too rapidly, and our species is one of perpetual noobs. Fields are becoming too complex to master in a single lifetime. A decent superintelligence can not only survive indefinitely, it can plan accordingly, and it can master fields that are too complex to fit inside a single human skull.
Sometimes I wonder if humanity wasn't just evolution's way of building AIs.
card_zero
I don't agree with the point about "perpetual noobs". Fields that are too broad to fit in a single mind in a lifetime need to be made deeper, that is, better explained. If a field only gets more expansive and intricate, we're doing it wrong.
Still, 130+ years of wisdom would have to be worth something; I can't say I dislike the prospect.
ACCount37
It's kind of ironic that this generation of LLMs has worse executive functioning than humans do. Turns out the pre-training data doesn't really teach them that.
But AIs improve, as technology tends to. Humans? Well...
bossyTeacher
> It's the only serious answer to the question of space exploration.
It is. But the world's wealthiest are not pouring in billions so that humanity can develop better space exploration tech. The goal is making more money.
Noaidi
There are a lot of hopeful assumptions in that statement. Who's to say that, if AGI is achieved, it would want us to know how to go faster than the speed of light? You're assuming that your wisdom and your plans would be AGI's wisdom and plans. It might end up just locking us down here on Earth, sending us back to a more balanced, primitive life, and killing off a large share of humanity in order to achieve ecological balance so that humanity can survive without having to leave the planet. Note that that's not something I am advocating; I'm just saying it's a possibility.
card_zero
Well, you're assuming that AGIs would see themselves as belonging to a separate faction, like aliens, instead of seeing themselves as inorganic humans. They'd presumably be educated by human parents in human culture. We do tend to form groups based on trivial details, though, so I'm sure they'd be a subculture. But they wouldn't automatically gain any superior knowledge anyway, even if they did feel resentment toward fleshy types. Being made of silicon (or whatever turns out to work) doesn't grant you knowledge of FTL; why would it?
the_arun
I was content even without AI. I'm good with whatever we have today, as long as we use it to change life in a positive way.
card_zero
I'm looking forward to artificial people existing. I don't see how they'd be a money-spinner, unless mind uploading is also developed and so they can be used for life extension. The LLM vendors have no relevance to AGI.
prox
I am not against AGI, just the method and the players we have getting there. Instead of a curiosity to find intelligence, we just have rabid managers and derailed billionaires funding a race to … what? I don't think even they know, beyond a few hype words in their vocabulary and a buzzword-filled bullshit PowerPoint presentation.
pixl97
This is just the world we live in now, for everything. Remember dot-com? Web3? Crypto? And now AI. Hell, going further back you see dumb shit like this happening with tulips.
We're lucky to have managed to progress in spite of how greedy we are.
pohl
The luck may be coming to an end. The last two have serious misuses: crypto's best killer apps thus far have been rug pulls, evading oversight of criminal financial activity, and corruption. AGI would, today, likely be called into service to tighten authoritarian grips.
brazukadev
> Serious question: Who in HN is actually looking forward to AGI existing?
90% of the last 12 batches of YC founders would love to believe they are pursuing AGI with their crappy ChatGPT wrapper, agent framework, observability platform, etc.
bossyTeacher
> For regular people the current AI capabilities have already led to either life changing skill improvement for those who make things for themselves or life changing likely permanent employment reduction for those who do things for others
This statement sums up the tech-centric bubble HN lives in. Your average farmer, shop assistant, fisherman, or woodworker isn't likely to see significant life improvements from the transformer tech deployed so far.
arnaudsm
Genuine question: why are hyperscalers like OpenAI and Oracle raising hundreds of billions? Isn't their current infra enough?
Naive napkin math: a GB200 NVL72 costs ~$3M and can serve ~7,000 concurrent users of GPT-4o (rumored to be ~1400B parameters with ~200B active), and ChatGPT has ~10M concurrent peak users. That's only ~$4B of infra.
Are they trying to brute-force AGI with larger models, knowing that GPT-4.5 failed at this and that DeepSeek and Qwen3 proved small MoE models can reach frontier performance? Or is my math 2 orders of magnitude off?
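Spelling out the napkin math (same assumed numbers as above, which may well be wrong):

```python
# Napkin math with the figures assumed above (rack cost, users per rack,
# and peak concurrency are all rough guesses, not verified numbers).
peak_users = 10_000_000        # ~10M peak concurrent ChatGPT users
users_per_rack = 7_000         # ~7,000 concurrent users per GB200 NVL72
cost_per_rack = 3_000_000      # ~$3M per rack

racks = peak_users / users_per_rack       # ~1,429 racks
capex = racks * cost_per_rack             # ~$4.3B
print(f"{racks:,.0f} racks, ~${capex / 1e9:.1f}B of inference hardware")
```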
ACCount37
As a rule: inference is very profitable, frontier R&D is the money pit.
They need the money to keep pushing the envelope and building better AIs. And the better their AIs get, the more infra they'll need to keep up with the inference demand.
GPT-4.5's issue was that it wasn't deployable at scale - unlike the more experimental reasoning models, which delivered better task-specific performance without demanding that much more compute.
Scale is inevitable though - we'll see production AIs reach the scale of GPT-4.5 pretty soon. Newer hardware like GB200 enables that kind of thing.
Noaidi
They are raising the money because they can. While these businesses may go bankrupt, many people who ran these businesses will make hundreds of millions of dollars.
Either that, or AGI is not the goal; rather, they want to serve, and profit off of, a surveillance state that might be much more valuable in the short term.
raws
I feel like even a human would fail if given all the data except the results for x and then tested on x, when the function's behavior there differs from everything previously observed. Isn't it more interesting to observe how the model (human or not) incorporates the new data to better match reality, in the case of distribution shift or other irregular distributions?
dsign
I don't think AGI is “few months” imminent, but it is “few decades” imminent. Personally, I don't own stocks in any AI company, though I'll be affected if the bubble bursts because the world economy right now feels fragile.
But I'm hoping something good comes out of the real push to build more compute and to make it cheaper. Maybe a bunch of intrepid aficionados will use it to run biological simulations to make cats immortal, at which point I'll finally get a cat. And then I will be very happy.
erichocean
If by AGI you mean "can do any economic task humans currently do", that is within the range of a "few months," though the rollout will be incremental since each economic task has to be independently taught, there are supply chain issues, AI-first companies will need to be developed to take best advantage of it, etc.
But all of this is much closer than people seem to realize.
dns_snek
How about we start with just one, like software development with its abundance of data? It's going to look far less silly for you when that one doesn't work out either.
_fat_santa
While Gary is very bearish on AI, I think there's some truth to his claims here though I disagree with how he got there. The problem I see with AI and AGI is not so much a technical problem as an economic one.
If we keep going down our current trajectory of pouring billions upon billions into AI, then yes, I think it's plausible that in the next 10-20 years we will have a class of models that are “pseudo-AGI”: we may not achieve true AGI, but the models will be so good that they could well be considered AGI in many use cases.
But the problem I see is that this will require exponential growth and exponential spending and the wheels are already starting to catch fire. Currently we see many circular investments and unfortunately I see it as the beginning of the AI bubble bursting. The root of the issue is simply that these AI companies are spending 10x-100x or more on research than they bring in with revenue, OpenAI is spending ~$300B on AI training and infra while their revenue is ~$12B. At some point the money and patience from investors is going to run out and that is going to happen long before we reach AGI.
And I have to hand it to Sam Altman and others in the space who made the audacious bet that they could get to AGI before the music stops, but from where I'm standing the song is about to come to an end and AGI is still very much in the future. Once the VC dollars dry up, the timeline for AGI will likely get pushed out another 20-30 years, and that's assuming there aren't other insurmountable technical hurdles along the way.
Havoc
The Karpathy interview struck me as fairly upbeat despite the extended 10-year timeline. That's really not a long time for something with "changes everything" potential... as properly working agents would be.
I don't think Gary Marcus has had anything of value to say about AI at any point in, what, the past two decades? You can replace him with a sign that has "the current AI approach is DOOMED" written on it at no loss of function.
Symbolic AI has died a miserable death, and he never recovered from it.