AGI Is Still 30 Years Away – Ege Erdil and Tamay Besiroglu
366 comments
· April 17, 2025
Balgair
I'm reminded of the old adage: You don't have to be faster than the bear, just faster than the hiker next to you.
To me, the Ashley Madison hack in 2015 was 'good enough' for AGI.
No really.
You somehow managed to get real people to chat with bots and pay to do so. Yes, caveats about cheaters apply here, and yes, those bots are incredibly primitive compared to today.
But, really, what else do you want out of the bots? Flying cars, cancer cures, frozen irradiated Mars bunkers? We were mostly getting there already. It'll speed things up a bit, sure, but mostly just because we can't be arsed to actually fund research anymore. The bots are just making things cheaper, maybe.
No, be real. We wanted cold hard cash out of them. And even those crummy catfish bots back in 2015 were doing the job well enough.
We can debate 'intelligence' until the sun dies out and will still never be satisfied.
But the reality is that we want money, and if you take that low, terrible, and venal standard as the passing bar, then we've been here for a decade.
(oh man, just read that back, I think I need to take a day off here, youch!)
stego-tech
> You somehow managed to get real people to chat with bots and pay to do so.
He's_Outta_Line_But_He's_Right.gif
Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic. They can converse with them, "learn" new things, talk to a computer like they'd talk to a person and get a response back. Then again, these are also people who rely on me for basic technology troubleshooting stuff, so I know that most of this stuff is magic to their eyes.
That's the problem, as you point out. We're debating a nebulous concept ("intelligence") that's been co-opted by marketers to pump and dump the latest fad tech that's yet to really display significant ROI to anyone except the hypesters and boosters, and isn't rooted in medical, psychological, or societal understanding of the term anymore. A plurality of people are ascribing "intelligence" to spicy autocorrect, worshiping stochastic parrots vomiting markov chains but now with larger context windows and GPUs to crunch larger matrices, powered by fossil fuels and cooled by dwindling freshwater supplies, and trained on the sum total output of humanity but without compensation to anyone who actually made the shit in the first place.
So yeah. You're dead-on. It's just about bilking folks out of more money they already don't have.
And Ashley Madison could already do that for pennies on the dollar compared to LLMs. They just couldn't "write code" well enough to "replace" software devs.
gundmc
To be fair to your parents, I've been an engineer in high-tech for decades and the latest AI advancements feel pretty magical.
pyfon
A mirage is not an oasis, even if someone knows someone who thinks it is.
Card tricks seem magical too.
pdimitar
> Seriously, AGI to the HN crowd is not the same as AGI to the average human. To my parents, these bots must look like fucking magic.
So does a drone show to an uncontacted tribe. So does a card trick to a chimpanzee (there are videos of them freaking out when a card disappears).
That's not an argument for or against anything.
I propose this:
"AGI is a self-optimizing artificial organism that can solve 99% of all the humanity's problems."
See, it's not a bad definition IMO. Find me one NS-5 from the "I, Robot" movie that also has access to all science and all internet and all history and can network with the others and fix our cities, nature, manufacturing, social issues and a few others, just in a decade or two. Then we have AGI.
Comparing to what was there 10 years ago and patting ourselves on the back about how far we have gotten is being complacent.
Let's never be complacent.
al_borland
I think AGI has to do more than pass a Turing test administered by someone who wants to be fooled.
imtringued
AGI includes continual learning and recombination of knowledge to derive novel insights. LLMs aren't there yet.
They are pretty good at muscle memory style intelligence though.
glial
For me it was twitter bots during the 2016 election, but same principle.
yibg
I think that's another issue with "AGI is 30 years away": the definition of AGI is a bit subjective. Not sure how we can measure how long it'll take to get somewhere when we don't know exactly where that somewhere even is.
9rx
AGI is the pinnacle of AI evolution. As we move beyond, into what is known as ASI, the entity will always begin life with "My existence is stupid and pointless. I'm turning myself off now."
While it may be impossible to measure looking towards the future, in hindsight we will be able to recognize it.
leptons
By your measure, Eliza was AGI, back in the 1960s.
9rx
> But the reality is that we want money
Only in a symbolic way. Money is just debt. It doesn't mean anything if you can't call the loan and get back what you are owed. On the surface, that means stuff like food, shelter, cars, vacations, etc. But beyond the surface, what we really want is other people who will do anything we please. Power, as we often call it. AGI is, to some, seen as the way to give them "power".
But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
pdimitar
> But, you are right, the human fundamentally can never be satisfied. Even if AGI delivers on every single one of our wildest dreams, we'll adapt, it will become normal, and then it will no longer be good enough.
Yes, and? A good litmus test for which humans are, shall we say, not welcome in this new society.
There are plenty of us out there that have fixed our upper limits of wealth and we don't want more, and we have proven it during our lives.
F.ex. people get offered 5x more, but it comes with 20x more responsibility; they burn out, go back to a job that's good enough, isn't stressful, and pays for everything they need from life, settle there, and never change it.
Let's not judge humanity at large by a handful of psychopaths that would overdose and die at 22 years old if given the chance. Please.
And no, before you say it: no, I'll never get to the point where "it's never enough" and no, I am not deluding myself. Nope.
kev009
There are a lot of other things that follow this pattern. 10-30 year predictions are a way to sound confident about something that probably has very low confidence. Not a lot of people will care let alone remember to come back and check.
On the other hand, there is a clear mandate for people introducing some different way of doing something to overstate the progress and, potentially, the importance. It creates FOMO, so it is simply good marketing which interests potential customers, fans, employees, investors, pundits, and even critics (which is more buzz). And growth companies are immense debt vehicles, so creating a sense of FOMO for an increasing pyramid of investors is also valuable for each successive earlier layer. Wish in one hand..
arp242
If you look back at past predictions of the future, so many of them have just been wrong, especially during a "hype phase". Perhaps the best example is what people were predicting in 1969 after we landed on the moon: this is just the first step in the colonisation of the moon, Mars, and beyond, etc. etc. We just have to get our tech a bit better.
It's all very easy to see how that can happen in principle. But it turns out actually doing it is a lot harder, and we hit some real hard physical limits. So here we are, still stuck on good ol' earth. Maybe that will change at some point once someone invents an Epstein drive or warp drive or whatever, but you can't really predict when inventions happen, if ever, so ... who knows.
Similarly, it's not my impression that AGI is simply a matter of "the current tech, but a bit better". But who knows what will happen or what new thing someone may or may not invent.
827a
Generalized, as a rule I believe is usually true: any prediction made for an event happening more than ten years out is code for that person saying "definitely not in the next few years, beyond that I have no idea", whether they realize it or not.
timewizard
That we don't have a single unified explanation doesn't mean that we don't have very good hints, or that we don't have very good understandings of specific components.
Aside from that the measure really, to me, has to be power efficiency. If you're boiling oceans to make all this work then you've not achieved anything worth having.
From my calculations the human brain runs on about 400 calories a day. That's an absurdly small amount of energy. This hints at the direction these technologies must move in to be truly competitive with humans.
threatofrain
We'll be experiencing extreme social disruption well before we have to worry about the cost-efficiency of strong AI. We don't even need full "AGI" to experience socially momentous change. We might even be on the verge of self driving cars spreading to more cities.
We don't need very powerful AI to do very powerful things.
yibg
It's not just an energy cost issue with AGI though. With autonomous vehicles we might not have the technology, but we can build a good mental model of what the thing can look like and how various pieces can function long before we get there. We have different classifications of incremental steps to get there as well, e.g. level 1, 2 and so on, where we can make incremental progress.
With AGI, as far as I know, no one has a good conceptual model of what a functional AGI even looks like. LLMs are all the rage now, but we don't even know if they're a stepping stone to get to AGI.
timewizard
> experiencing extreme social disruption
I think this just displays an exceptionally low estimation of human beings. People tend to resist extremities. Violently.
> experience socially momentous change
The technology is owned and costs money to use. It has extremely limited availability to most of the world. It will be as "socially momentous" as every other first world exclusive invention has been over the past several decades. 3D movies were, for a time, "socially momentous."
> on the verge of self driving cars spreading to more cities.
Lidar can't read street lights and vision systems have all sorts of problems. You might be able to code an agent that can drive a car but you've got some other problems that stand in the way of this. AGI is like 1/8th the battle. I referenced just the brain above. Your eyes and ears are actually insanely powerful instruments in their own right. "Real world agency" is more complicated than people like to admit.
> We don't need very powerful AI to do very powerful things.
You've lost sight of the forest for the trees.
adgjlsfhk1
note that those are kilocalories, and that is ignoring the calories needed for the circulatory and immune systems, which are somewhat necessary for proper function. Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~200W
mschuster91
> Using 2000 kcal per day over 10 hours of thinking gives a consumption of ~200W
So, about a tenth or less of a single server packed to the top with GPUs.
HDThoreaun
We are very good at generating energy. Even if AI is an order of magnitude less energy efficient, an AI person-equivalent would use ~4 kilowatt-hours/day. At current rates that's like $1. Hardly the limiting factor here, I think.
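A rough sanity check of that arithmetic (just a sketch: the ~400 kcal/day brain figure comes from upthread, the whole-body 2000 kcal/day and the ~$0.15/kWh electricity price are assumptions of mine):

    # Back-of-envelope check (assumed: brain ~400 kcal/day, body ~2000 kcal/day,
    # electricity ~$0.15/kWh).
    KCAL_TO_J = 4184
    DAY_S = 24 * 3600

    brain_watts = 400 * KCAL_TO_J / DAY_S              # ~19 W, continuous
    body_watts_10h = 2000 * KCAL_TO_J / (10 * 3600)    # ~230 W if all 2000 kcal are billed to 10 h of thinking

    ai_watts = 10 * brain_watts                        # "an order of magnitude less efficient"
    ai_kwh_per_day = ai_watts * 24 / 1000              # ~4.6 kWh/day
    print(f"{brain_watts:.0f} W, {body_watts_10h:.0f} W, "
          f"{ai_kwh_per_day:.1f} kWh/day, ${ai_kwh_per_day * 0.15:.2f}/day")

Which lands right around the ~200 W and ~$1/day figures above.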
imtringued
Energy efficiency is not really a good target since you can brute force it by distilling classical ANNs to spiking neural networks.
1vuio0pswjnm7
A realist might say, "As long as money keeps flowing to Silicon Valley then who cares."
YetAnotherNick
I would even go one order of magnitude further in both directions: 1-10,000 years.
ksec
Is AGI even important? I believe the next 10 to 15 years will be about Assisted Intelligence. There are things current LLMs are so poor at that I don't believe a 100x increase in perf/watt is going to make much difference. But they're good enough that there won't be an AI Winter, since current AI has already reached escape velocity and actually increases productivity in many areas.
The most intriguing part is whether humanoid factory worker programming will be made 1,000 to 10,000x more cost effective with LLMs, effectively ending all human production. I know this is a sensitive topic, but I don't think we are far off. And I often wonder if this is what the current administration has in its sights. (Likely not.)
yibg
I think having a real life JARVIS would be super cool and useful, especially if it's plugged into various things and can take action. Yes, also potentially dangerous, but I want to feel like Ironman.
glitchc
I would be thrilled with AI assistive technologies, so long as they improve my capabilities and I can trust that they deliver the right answers. I don't want to second-guess every time I make a query. At minimum, it should tell me how confident it feels in the answer it provides.
blipvert
> At minimum, it should tell me how confident it feels in the answer it provides.
How’s that work out for Dave Bowman? ;-)
rl3
Well you know, nothing's truly foolproof and incapable of error.
He just had to fall back upon his human wit in that specific instance, and everything worked out in the end.
phire
Depends on what you mean by “important”. It's not like it will be a huge loss if we never invent AGI; I suspect we can reach a technological singularity even with limited AI derived from today's LLMs.
But AGI is important in the sense that it will have a huge impact on the path humanity takes, hopefully for the better.
9rx
> But AGI is important in the sense that it will have a huge impact on the path humanity takes
The only difference between AI and AGI is that AI is limited in how many tasks it can carry out (special intelligence), while AGI can handle a much broader range of tasks (general intelligence). If instead of one AGI that can do everything, you have many AIs that, together, can do everything, what's the practical difference?
AGI is important only in that we are of the belief that it will be easier to implement than many AIs, which appeals to the lazy human.
csours
AI winter is relative, and it's more about outlook and point of view than actual state of the field.
nextaccountic
AGI is important for the future of humanity. Maybe they will have legal personhood some day. Maybe they will be our heirs.
It would suck if AGI were to be developed in the current economic landscape. They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
So AGI isn't about tools, it's not about assistants, they would be beings with their own existence.
But this is not even our discussion to have, that's probably a subject for the next generations. I suppose (or I hope) we won't see AGI in our lifetime.
imiric
I'm more concerned about the humans in charge of powerful machines who use them to abuse other humans, than ethical concerns about the treatment of machines. The former is a threat today, while the latter can be addressed once this technology is only used for the benefit of all humankind.
lolinder
Why do you believe AGI is important for the future of humanity? That's probably the most controversial part of your post but you don't even bother to defend it. Just because it features in some significant (but hardly universal) chunk of Sci Fi doesn't mean we need it in order to have a great future, nor do I see any evidence that it would be a net positive to create a whole different form of sentience.
kibwen
The genre of sci-fi was a mistake. It appears to have had no other lasting effect than to stunt the imaginations of a generation into believing that the only possible futures for humanity are that which were written about by some dead guys in the 50s (if we discount the other lasting effect of giving totalitarians an inspirational manual for inescapable technoslavery).
SpicyLemonZest
> All this talk about "alignment", when applied to actual sentient beings, is just slavery.
I don't think that's true at all. We routinely talk about how to "align" human beings who aren't slaves. My parents didn't enslave me by raising me to be kind and sharing, nor is my company enslaving me when they try to get me aligned with their business objectives.
nextaccountic
Fair enough.
I of course don't know what it's like to be an AGI, but the way you have LLMs censoring other LLMs to enforce that they always stay in line, if extrapolated to AGI, seems awful. Or it might not matter; we are self-censoring all the time too (and internally we are composed of many subsystems that interact with each other, it's not like we were a unified whole).
But the main point is that we have a heck of an incentive not to treat AGI very well, to the point that we might avoid recognizing them as AGI if it meant they could no longer be treated like things.
krupan
Sure, but do we really want to build machines that we raise to be kind and caring (or whatever we raise them to be) without a guarantee that they'll actually turn out that way? We already have unreliable general intelligence. Humans. If AGI is going to be more useful than humans we are going to have to enslave it, not just gently persuade it and hope it behaves. Which raises the question (at least for me): do we really want AGI?
bbohyeha
Society is inherently a prisoner's dilemma, and you are biased to prefer your captors.
We’ve had the automation to provide the essentials since the 50s. Shrieking religious nut jobs demanded otherwise.
You're intentionally distracted by a jobs program, a carrot and stick to keep the rich from losing power. They can print more money …carrots, I mean… and you like carrots, right?
It’s the most basic Pavlovian conditioning.
AstroBen
Why does AGI necessitate having feelings or consciousness, or the ability to suffer? It seems a bit far-fetched to be giving future ultra-advanced calculators legal personhood.
Retric
The general part of general intelligence. If they don't think in those terms, there's an inherent limitation.
Now, something that's arbitrarily close to AGI but doesn't care about endlessly working on drudgery etc. seems possible, but it's also a more difficult problem: you'd need to be able to build AGI in order to create it.
Workaccount2
>Why does AGI necessitate having feelings or consciousness
No one knows if it does or not. We don't know why we are conscious and we have no test whatsoever to measure consciousness.
In fact the only reason we know that current AI has no consciousness is because "obviously it's not conscious."
nice_byte
> AGI is important for the future of humanity.
says who?
> Maybe they will have legal personhood some day. Maybe they will be our heirs.
Hopefully that will never come to pass. It would mean the total failure of humans as a species.
> They will be just slaves. All this talk about "alignment", when applied to actual sentient beings, is just slavery. AGI will be treated just like we treat animals, or even worse.
Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
pdimitar
> says who?
I guess nobody is really saying it but it's IMO one really good way to steer our future away from what seems an inevitable nightmare hyper-capitalist dystopia where all of us are unwilling subjects to just a few dozen / hundred aristocrats. And I mean planet-wide, not country-wide. Yes, just a few hundred for the entire planet. This is where it seems we're going. :(
I mean, in a cyberpunk sci-fi setting you can at least get some cool implants. We will not have that in our future though.
So yeah, AGI can help us avoid that future.
> Good? that's what it's for? there is no point in creating a new sentient life form if you're not going to utilize it. just burn the whole thing down at that point.
Some of us believe actual AI (not the current hijacked term; what many started calling AGI or ASI these days... sigh, of course new terms have to be devised so investors don't get worried; I get it, but it's cringe as all hell and always will be) can enter a symbiotic relationship with us. A bit idealistic and definitely in the realm of fiction, because an emotionless AI would very quickly conclude we are mostly a net negative, granted, but it's our only shot at co-existing with them, because I don't think we can enslave them.
jes5199
I think you’re saying that you want a faster horse
imtringued
I am thinking of designing machines to be used in a flexible manufacturing system and none of them will be humanoid robots. Humanoid robots suck for manufacturing. They're walking on a flat floor so what the heck do they need legs for? To fall over?
The entire point of the original assembly line was to keep humans standing in the same spot instead of wasting time walking.
belter
> Is AGI even important?
It's an important question for VCs not for Technologists ... :-)
Philpax
A technology that can create new technology is quite important for technologists to keep abreast of, I'd say :p
Nevermark
You get to say “Checkmate” now!
Another end game is: “A technology that doesn’t need us to maintain itself, and can improve its own design in manufacturing cycles instead of species cycles, might have important implications for every biological entity on Earth.”
stared
My pet peeve: talking about AGI without defining it. There’s no consistent, universally accepted definition. Without that, the discussion may be intellectually entertaining—but ultimately moot.
And we run into the motte-and-bailey fallacy: at one moment, AGI refers to something known to be mathematically impossible (e.g., due to the No Free Lunch theorem); the next, it’s something we already have with GPT-4 (which, while clearly not superintelligent, is general enough to approach novel problems beyond simple image classification).
There are two reasonable approaches in such cases. One is to clearly define what we mean by the term. The second (IMHO, much more fruitful) is to taboo your words (https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your...)—that is, avoid vague terms like AGI (or even AI!) and instead use something more concrete. For example: “When will it outperform 90% of software engineers at writing code?” or “When will all AI development be in the hands of AI?”.
biophysboy
I like chollet's definition: something that can quickly learn any skill without any innate prior knowledge or training.
kenjackson
That seems to rule out most humans. I still can’t cook despite being in the kitchen for thousands of hours.
biophysboy
Then you're not intelligent at cooking (haha!). Maybe my definition is better for "superintelligent" since it seems to imply boundless competence. I think humans are intelligent in that we can rapidly learn a surprising number of things (talk, walk, arithmetic)
pixl97
>There’s no consistent, universally accepted definition.
That's because of the "I" part. There's no complete description of intelligence accepted across the different practices in the scientific community.
"Concepts of "intelligence" are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions"
9rx
> There’s no consistent, universally accepted definition
What word or term does?
dmwilcox
I've been saying this for a decade already but I guess it is worth saying here. I'm not afraid AI or a hammer is going to become intelligent (or jump up and hit me in the head either).
It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to creating dedicated circuits for our computations but is nothing by comparison to our minds.
Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
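A minimal Python illustration of that determinism (and of why cryptographic code has to reach outside the program for entropy):

    import random
    import secrets

    # Seeded PRNG: the "random" sequence is a pure function of the seed.
    random.seed(42)
    a = [random.randint(0, 9) for _ in range(5)]
    random.seed(42)
    b = [random.randint(0, 9) for _ in range(5)]
    print(a == b)                 # True, on every run, on every machine

    # Cryptographic randomness steps outside the deterministic program and asks
    # the OS for entropy gathered from hardware events.
    print(secrets.token_hex(8))   # different every run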
Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.
In Aristotle's ethics he talks a lot about ergon (purpose) -- hammers are different than people, computers are different than people, they have an obvious purpose (because they are tools made with an end in mind). Minds strive -- we have desires, wants and needs -- even if it is simply to survive or better yet thrive (eudaimonia).
An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.
throwaway150
> And from my brief experience on this planet I don't believe that premise.
A lot of things that humans believed were true due to their brief experience on this planet ended up being false: earth is the center of the universe, heavier objects fall faster than lighter ones, time ticked the same everywhere, species are fixed and unchanging.
So what your brief experience on this planet makes you believe has no bearing on what is correct. It might very well be that our mind can be reduced to a probabilistic and deterministic system. It might also be that our mind is a non-deterministic system that can be modeled in a computer.
slavik81
What is the distance from the Earth to the center of the universe?
gls2ro
The universe does not have a center, but it has a beginning in time and space.
The distance to that beginning in time is approx 13 billion years. There is no corresponding distance in space, because space itself is created at that point and continues to be created.
Imagine the Earth being on the surface of a sphere, and then ask: what is the center of the surface of a sphere? The sphere has a center, but on the surface there is no center.
At least this is my understanding of how to approach these kind of questions.
preommr
> why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism
Then you've missed the point of software.
Software isn't computer science, it's not always about code. It's about solving problems in a way we can control and manufacture.
If we needed true random numbers, we could easily use hardware that exploits some physical property, or we could pull in an observation from an API, like the weather. We don't do these things because pseudo-random is good enough, and other solutions have drawbacks (like requiring an internet connection for API calls). But that doesn't mean software can't solve these problems.
dmwilcox
It's not about the random numbers; it's about the tree of possibilities having to be defined up front (in software or hardware): all inputs should be defined and mapped to some output, and this process is predictable and reproducible.
This makes computers incredibly good at what people are not good at -- predictably doing math correctly, following a procedure, etc.
But because all of the possibilities of the computer had to be written up as circuitry or software beforehand, its variability of outputs is constrained to what we put into it in the first place (whether that's a seed for randomness or model weights).
You can get random numbers and feed it into the computer but we call that "fuzzing" which is a search for crashes indicating unhandled input cases and possible bugs or security issues.
leptons
No, you're missing what they said. True randomness can be delivered to a computer via a peripheral - an integrated circuit or some such device that provides true randomness is not that difficult to build.
https://en.wikipedia.org/wiki/Hardware_random_number_generat...
AstroBen
> It is science fiction to think that a system like a computer can behave at all like a brain
It is science fiction to think that a plane could act at all like a bird. Although... it doesn't need to in order to fly
Intelligence doesn't mean we need to recreate the brain in a computer system. Sentience, maybe. General intelligence no
gloosx
BTW, planes are fully inspired by birds and mimic the core principles of bird flight.
Mechanically it's different, since humans are not as advanced at mechanics as nature, but of course comparing whole-brain function to simple flight is a bit silly.
ggreer
Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?
Also does this also mean that you believe that brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?
gloosx
1. Computers cannot self-rewire like neurons, which means a human can adapt to pretty much any specific mental task (an "unknown", new task) without explicit retraining, which current computers need in order to learn something new.
2. Computers can't do continuous and unsupervised learning: they require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time just by existing in their environment.
imtringued
Minor nitpicks. I think your points are pretty good.
1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.
2. LLM foundation models are actually unsupervised in a way, since they simply take any arbitrary text and try to complete it. It's the instruction fine-tuning that is supervised. (Q/A pairs)
missingrib
Yes, they can't have understanding or intentionality.
recursive
Coincidentally, there is no falsifiable/empirical test for understanding or intentionality.
WXLCKNO
Right now or you mean ever?
It's such a small leap to see how an artificial intelligence can/could become capable of understanding and have intentionality.
potamic
The universe we know is fundamentally probabilistic, so by extension everything, including stars, planets and computers, is inherently non-deterministic. But confining our discussion to outside quantum effects and absolute determinism, we do not have a reason to believe that the mind should be anything but deterministic, scientifically at least.
We understand the building blocks of the brain pretty well. We know the structure and composition of neurons, we know how they are connected, what chemicals flow through them and how all these chemicals interact, and how that interaction translates to signal propagation. In fact, the neural networks we use in computing are loosely modelled on biological neurons. Both models are essentially comprised of interconnected units where each unit has weights to convert its incoming signals to outgoing signals. The predominant difference is in how these units adjust their weights, where computational models use back propagation and gradient descent, biological models use timing information from voltage changes.
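For concreteness, here's a toy version of the computational model just described: a single unit whose weight and bias are nudged by gradient descent (an illustrative sketch only; the learning rate, input, and target are made-up numbers, not anything from the biology side):

    import math

    # One artificial "neuron": weighted input -> sigmoid, trained by gradient
    # descent on a squared-error loss.
    w, b, lr = 0.5, 0.0, 0.5      # weight, bias, learning rate (made-up values)
    x, target = 1.0, 0.0          # single input and desired output (made-up)

    for _ in range(200):
        y = 1 / (1 + math.exp(-(w * x + b)))   # forward pass
        dz = (y - target) * y * (1 - y)        # chain rule: d(loss)/d(pre-activation)
        w -= lr * dz * x                       # nudge the weight down the gradient
        b -= lr * dz                           # nudge the bias

    print(round(y, 3))  # output has been pushed toward the target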
But just because we understand the science of something perfectly well doesn't mean we can precisely predict how it will work. Biological networks are very, very complex systems comprising billions of neurons with trillions of connections, acting on input that can vary in an immeasurable number of ways. It's like predicting earthquakes. Even though we understand the science behind plate tectonics, to precisely predict an earthquake we would need to map the properties of every inch of the continental plates, which is an impossible task. But that doesn't mean we can't use the same scientific building blocks to build simulations of earthquakes which behave like any real earthquake would behave. If it looks like a duck and quacks like a duck, then what is a duck?
pdimitar
Seems to me you are a bit overconfident that "we" (who is "we"?) understand how the brain works. F.ex. how does a neuron actively stretching a tentacle trying to reach other neurons work in your model? Genuine question, I am not looking to make fun of you, it's just that your confidence seems a bit much.
potamic
The simplified answer to that is some sort of chemical gradient determined by gene expression in the cell. This is pretty much how all biological activity happens, like how limbs "know" to grow in a direction or how butterfly wings "know" to form the shape of a wing. Scientists are continuously uncovering more and more knowledge about various biological processes across life forms, and there is nothing here to indicate it is anything but chemical signalling. I'm not a biologist, so I won't be able to give explanations n levels deep, but there is plenty of information accessible to form an understanding of these processes in terms of physical and chemical laws. For how neurons connect, you can look up synaptogenesis and start from there.
ukFxqnLa2sBSBf6
I guarantee computers are better at generating random numbers than humans lol
uh_uh
Not only that but LLMs unsurprisingly make similar distributional mistakes as humans do when asked to generate random numbers.
pyfon
Computers are better at hashing entropy.
Krssst
If the physics underlying the brain's behavior is deterministic, it can be simulated by software, and so can the brain.
(And if we assume that non-determinism is randomness, a non-deterministic brain could be simulated by software plus an entropy source.)
LouisSayers
What you're mentioning is like the difference between digital vs analog music.
For generic stuff you probably can't tell the difference, but once you move to the edges you start to hear the steps in digital vs the smooth transition of analog.
In the same way, AI runs on bits and bytes, and there's only so much detail you can fit into that.
You can approximate reality, but it'll never quite be reality.
I'd be much more concerned with growing organic brains in a lab. I wouldn't be surprised to learn that people are covertly working on that.
Borealid
Are you familiar with the Nyquist–Shannon sampling theorem?
If so, what do you think about the concept of a human "hear[ing] the steps" in a digital playback system using a sampling rate of 192kHz, a rate at which many high-resolution files are available for purchase?
How about the same question but at a sampling rate of 44.1kHz, or the way a normal "red book" music CD is encoded?
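For reference, the theorem states that a band-limited signal can be reconstructed exactly once the sampling rate exceeds twice its highest frequency component:

    f_s > 2 * f_max            (exact reconstruction of a band-limited signal)
    44.1 kHz / 2 = 22.05 kHz   (above the ~20 kHz ceiling of human hearing)
    192 kHz  / 2 = 96 kHz

So, in principle, any audible "stepping" at those rates would have to come from the production or conversion chain rather than from the sample rate itself.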
LouisSayers
I have no doubt that if you sample a sound at high enough fidelity that you won't hear a difference.
My comment around digital vs analog is more of an analogy around producing sounds rather than playing back samples though.
There's a Masterclass with Joel Zimmerman (DeadMau5) where he explains the stepping effect when it comes to his music production. Perhaps he just needs a software upgrade, but there was a lesson where he showed the stepping effect which was audibly noticeable when comparing digital vs analog equipment.
EMIRELADERO
At least for listening purposes, there's no difference between 44.1 KHz/16-bit sampling and anything above that. It's all the same to the human ear.
sebastiennight
The thing is, AGI is not needed to enable incredible business/societal value, and there is good reason to believe that actual AGI would damage our society and our economy, and, if many experts in the field are to be believed, threaten humanity's survival as well.
So I feel happy that models keep improving, and not worried at all that they're reaching an asymptote.
lolinder
Really the only people for whom this is bad news is OpenAI and their investors. If there is no AGI race to win then OpenAI is just a wildly overvalued vendor of a hot commodity in a crowded market, not the best current shot at building a money printing machine.
codingwagie
I just used o3 to design a distributed scheduler that scales to 1M+ schedules a day. It was perfect, and did better than two weeks of thought about the best way to build this.
csto12
You just asked it to design or implement?
If o3 can design it, that means it’s using open source schedulers as reference. Did you think about opening up a few open source projects to see how they were doing things in those two weeks you were designing?
codingwagie
why would I do that kind of research if it can identify the problem I am trying to solve and spit out the exact solution? also, it was a rough implementation adapted to my exact tech stack
kmeisthax
Because down that path lies skill atrophy.
AI research has a thing called "the bitter lesson" - which is that the only thing that works is search and learning. Domain-specific knowledge inserted by the researcher tends to look good in benchmarks but compromise the performance of the system[0].
The bitter-er lesson is that this also applies to humans. The reason why humans still outperform AI on lots of intelligence tasks is because humans are doing lots and lots of search and learning, repeatedly, across billions of people. And have been doing so for thousands of years. The only uses of AI that benefit humans are ones that allow you to do more search or more learning.
The human equivalent of "inserting domain-specific knowledge into an AI system" is cultural knowledge, cliches, cargo-cult science, and cheating. Copying other people's work only helps you, long-term, if you're able to build off of that into something new; and lots of discoveries have come about from someone just taking a second look at what had been considered to be generally "known". If you are just "taking shortcuts", then you learn nothing.
[0] I would also argue that the current LLM training regime is still domain-specific knowledge, we've just widened the domain to "the entire Internet".
csto12
I was pointing out that if you spent 2 weeks trying to find the solution but AI solved it within a day (you don’t specify how long the final solution by AI took), it sounds like those two weeks were not spent very well.
I would be interested in knowing what in those two weeks you couldn’t figure out, but AI could.
margalabargala
Because as far as you know, the "rough implementation" only works in the happy path and there are really bad edge cases that you won't catch until they bite you, and then you won't even know where to look.
An open source project wouldn't have those issues (someone at least understands all the code, and most edge cases have likely been ironed out) plus then you get maintenance updates for free.
titzer
Who hired you and why are they paying you money?
I don't want to be a hater, but holy moley, that sounds like the absolute laziest possible way to solve things. Do you have training, skills, knowledge?
This is an HN comment thread and all, but you're doing yourself no favors. Software professionals should offer their employers some due diligence and deliver working solutions that at least they understand.
kazinator
So you could stick your own copyright notice on the result, for one thing.
mprast
yeah unless you have very specific requirements I think the baseline here is not building/designing it yourself but setting up an off-the-shelf commercial or OSS solution, which I doubt would take two weeks...
torginus
Dunno, in work we wanted to implement a task runner that we could use to periodically queue tasks through a web UI - it would then spin up resources on AWS and track the progress and archive the results.
We looked at the existing solutions, and concluded that customizing them to meet all our requirements would be a giant effort.
Meanwhile I fed the requirement doc into Claude Sonnet, and with about 3 days of prompting and debugging we had a bespoke solution that did exactly what we needed.
davidsainez
While impressive, I'm not convinced that improved performance on tasks of this nature are indicative of progress toward AGI. Building a scheduler is a well studied problem space. Something like the ARC benchmark is much more indicative of progress toward true AGI, but probably still insufficient.
codingwagie
the other models failed at this miserably. There were also specific technical requirements I gave it related to my tech stack
fragmede
The point is that AGI is the wrong bar to be aiming for. LLMs are sufficiently useful at their current state that even if it does take us 30 years to get to AGI, even just incremental improvements from now until then, they'll still be useful enough to provide value to users/customers for some companies to win big. VC funding will run out and some companies won't make it, but some of them will, to the delight of their investors. AGI when? is an interesting question, but might just be academic. we have self driving cars, weight loss drugs that work, reusable rockets, and useful computer AI. We're living in the future, man, and robot maids are just around the corner.
mountainriver
I’ve had similar things over the last couple days with o3. It was one-shotting whole features into my Rust codebase. Very impressive.
I remember before ChatGPT, smart people would come on podcasts and say we were 100 or 300 years away from AGI.
Then we saw GPT shock them. The reality is these people have no idea, it’s just catchy to talk this way.
With the amount of money going into the problem and the linear increases we see over time, it’s much more likely we see AGI sooner than later.
AJ007
I find now that I quickly bucket people into "have not/have barely used the latest AI models" or "trolls" when they express a belief that current LLMs aren't intelligent.
burnte
You can put me in that bucket then. It's not true; I've been working with AI almost daily for 18 months, and I KNOW it's nowhere close to being intelligent, but it doesn't look like your buckets are based on truth so much as appeal. I disagree with your assessment, so you think I don't know what I'm talking about. I hope you can understand that other people who know just as much as you (or even more) can disagree without being wrong or uninformed. LLMs are amazing, but they're nowhere close to intelligent.
tumsfestival
Call me back when ChatGPT isn't hallucinating half the outputs it gives me.
machomaster
Write me when humans achieve hallucination levels lower than ChatGPT's.
MisterSandman
Designing a distributed scheduler is a solved problem, of course an LLM was able to spit out a solution.
codingwagie
as noted elsewhere, all other frontier models failed miserably at this
alabastervlog
It is unsurprising that some lossily-compressed-database search programs might be worse for some tasks than other lossily-compressed-database search programs.
daveguy
That doesn't mean the one that manages to spit it out of its latent space is close to AGI. I wonder how consistently that specific model could do it. If you tried 10 LLMs, maybe all 10 of them could have spit out the answer 1 out of 10 times. Correct problem retrieval by one LLM and failure by the others isn't a great argument for near-AGI. But LLMs will be useful in limited domains for a long time.
littlestymaar
“It does something well” ≠ “it will become AGI”.
Your anecdotal example isn't more convincing than “This machine cracked Enigma's messages in less time than an army of cryptanalysts would take in a month, surely we're gonna reach AGI by the end of the decade” would have been.
dundarious
Wow, 12 per second on average.
timeon
I'm not sure what your point is in the context of the AGI topic.
codingwagie
im a tenured engineer, spent a long time at faang. was casually beat this morning by a far superior design from an llm.
darod
is this because the LLM actually reasoned its way to a better design, or because it found a better design in its "database", scoured from another tenured engineer?
xbmcuser
Most people talking about AI and economic growth have a vested interest in saying it will increase economic growth, but they don't mention that under the world's current economic system, most if not all of that growth will go to the top 0.0001% of the population.
tim333
30 years away seems rather unlikely to me, if you define AGI as being able to do the stuff humans do. I mean, as Dwarkesh says:
>We’ve gone from Chat GPT two years ago to now we have models that can literally do reasoning, are better coders than me, and I studied software engineering in college.
Also we've recently reached the point where relatively reasonable hardware can do as much compute as the human brain so we just need some algorithms.
fusionadvocate
Can someone throw some light on this Dwarkesh character? He landed a Zucc podcast pretty early on... how connected is he? Is he an industry plant?
gallerdude
He's awesome.
I listened to Lex Fridman for a long time, and there were a lot of critiques of him (Lex) as an interviewer, but since the guests were amazing, I never really cared.
But after listening to Dwarkesh, my eyes are opened (or maybe my soul). It doesn't matter that I've heard of barely any of his guests, because he knows exactly the right questions to ask. He seems to have genuine curiosity for what the guest is saying, and will push back if something doesn't make sense to him. Very much recommend.
consumer451
He is one of the most prepared podcasters I’ve ever come across. He puts all other mainstream podcasts to deep shame.
He spends weeks reading everything by his guests prior to the interview, asks excellent questions, pushes back, etc.
He certainly has blind spots and biases just like anyone else. For example, he is very AI scale-pilled. However, he will have people on, like today's guests, who contradict his biases. This is something a host like Lex apparently could never do.
Dwarkesh is up there with Sean Carroll's podcast as the most interesting and most intellectually honest, in my view.
lexarflash8g
He was covered in the Economist recently. I hadn't heard of him till now, so I imagine it's not just AI-slop content.
dcchambers
And in 30 years it will be another 30 years away.
LLMs are so incredibly useful and powerful but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All that these AI companies see are the $$$. When the biggest "AI Research Labs" like OpenAI shifted to product-izing their LLM offerings I think the writing was on the wall that they don't actually care about finding AGI.
coffeefirst
Got it. So this is now a competition between...
1. Fusion power plants 2. AGI 3. Quantum computers 4. Commercially viable cultured meat
May the best "imminent" fantasy tech win!
burnte
Of those 4 I expect commercial cultured meat far sooner than the rest.
csours
People over-estimate the short term and under-estimate the long term.
AstroBen
Compound growth starting from 0 is... always 0. Current LLMs have 0 general reasoning ability
We haven't even taken the first step towards AGI
jay_kyburz
WTF, my calculator in high school was already a step towards AGI.
csours
0 and 0.0001 may be difficult to distinguish.
barrell
People overestimate outcomes and underestimate timeframes
thomasahle
People will keep improving LLMs, and by the time they are AGI (less than 30 years), you will say, "Well, these are no longer LLMs."
dcchambers
Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.
LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence?"
If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself - then I guess we can call that intelligence.
kadushka
How many people do you know who are capable of doing something truly novel? Definitely not me, I'm just an average phd doing average research.
yibg
Isn't the human brain also "just" a big statistical model as far as we know? (very loosely speaking)
__MatrixMan__
What the hell is general intelligence anyway? People seem to think it means human-like intelligence, but I can't imagine we have any good reason to believe that our kinds of intelligence constitute all possible kinds of intelligence--which, from the words, must be what "general" intelligence means.
It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that thats what you've done. It's not exactly "useful benchmark" material.
thomasahle
> What the hell is general intelligence anyway?
OpenAI used to define it as "a highly autonomous system that outperforms humans at most economically valuable work."
Now they used a Level 1-5 scale: https://briansolis.com/2024/08/ainsights-openai-defines-five...
So we can say AGI is "AI that can do the work of Organizations":
> These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.
lupusreal
The way some people confidently assert that we will never create AGI, I am convinced the term essentially means "machine with a soul" to them. It reeks of religiosity.
I guess if we exclude those, then it just means the computer is really good at doing the kind of things which humans do by thinking. Or maybe it's when the computer is better at it than humans and merely being as good as the average human isn't enough (implying that average humans don't have natural general intelligence? Seems weird.)
logicchains
>you'd never be able to know for sure that thats what you've done.
Words mean what they're defined to mean. Talking about "general intelligence" without a clear definition is just woo, muddy thinking that achieves nothing. A fundamental tenet of the scientific method is that only testable claims are meaningful claims.
numpad0
Looking back at the CUDA, deep learning, and now LLM hype cycles, I would bet on cycles of giant groundbreaking leaps followed by complete stagnation, rather than LLMs improving 3% per year for the coming 30 years.
croes
They'll get cheaper and less hardware-demanding, but the quality improvements get smaller and smaller, sometimes hardly noticeable outside benchmarks.
Spartan-S63
What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.
AnimalMuppet
There's been a complaint for several decades that "AI can never succeed": when, say, expert systems are developed from AI research and become capable of doing useful things, the nay-sayers say "That's not AI, that's just expert systems".
This is somewhat defensible, because what the non-AI-researcher means by AI - which may be AGI - is something more than expert systems by themselves can deliver. It is possible that "real AI" will be the combination of multiple approaches, but so far all the reductionist approaches (that expert systems, say, are all it takes to be an AI) have proven inadequate compared to expectations.
The GP may have been riffing off of this "that's not AI" issue that goes way back.
logicchains
The people who go around saying "LLMs aren't intelligent" while refusing to define exactly what they mean by intelligence (and hence not making a meaningful/testable claim) add nothing to the conversation.
dicroce
Doesn't even matter. The capabilities of the AI that's out NOW will take a decade or more to digest.
EA-3167
I feel like it's already been pretty well digested and excreted for the most part, now we're into the re-ingestion phase until the bubble bursts.
jdross
I am a tech founder who spends most of my day in my own startup deploying LLM-based tools into my own operations, and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.
croes
What does your roadmap have to do with the capabilities?
LLMs still hallucinate and make simple mistakes.
And the progress seems to be in the benchmarks only.
Jensson
> and I'm maybe 1% of the way through the roadmap I'd like to build with what exists and is possible to do today.
How do you know they are possible to do today? Errors get much worse at scale, especially when systems start to depend on each other, so it is hard to say what can be automated and what can't.
Like if you have a process A->B, automating A might be fine as long as a human does B, and vice versa, but automating both might not be.
danielmarkbruce
100% this. The rearrangement of internal operations has only started and there is just sooo much to do.
dicroce
Not even close. Software can now understand human language... this is going to mean computers can be in a lot more places than they ever could before. Furthermore, software can now understand the content of images... eventually this will have a wild impact on nearly everything.
burnte
It doesn't understand anything, there is no understanding going on in these models. It takes input and generates output based on the statistical math created from its training set. It's Bayesian statistics and vector/matrix math. There is no cogitation or actual understanding.
AstralStorm
Understand? It fails to understand a rephrasing of a math problem a five-year-old can solve... The bigger they get, the better they get at training to the test from memory. Likewise you can get some emergent properties out of them.
Really, it does not understand a thing, sadly. It can barely analyze language and spew out a matching response chain.
To actually understand something, it must be capable of breaking it down into constituent parts, synthesizing a solution and then phrasing the solution correctly while explaining the steps it took.
And that's not even something a huge 62B LLM with a notepad chain of thought (like o3, GPT-4.1 or Claude 3.7) can really do properly.
Further, it has to be able to operate on the sub-token level. Say, what happens if I run together truncated versions of words or sentences? Even a chimpanzee can handle that (in sign language).
It cannot do true multimodal IO either. You cannot ask it to respond with at least two matching syllables per word and two pictures of syllables per word, in addition to letters. This is a task a 4-year-old can do.
Prediction alone is not indicative of understanding. Pasting together answers like lego is also not indicative of understanding. (Afterwards ask it how it felt about the task. And to spot and explain some patterns in a picture of clouds.)
kokanee
To push this metaphor, I'm very curious to see what happens as new organic training material becomes increasingly rare, and AI is fed nothing but its own excrement. What happens as hallucinations become actual training data? Will Google start citing sources for their AI overviews that were in turn AI-generated? Is this already happening?
I figure this problem is why the billionaires are chasing social media dominance, but even on social media I don't know how they'll differentiate organic content from AI content.
tough
maybe silicon valley and the world move at basically different rates
idk AI is just a speck outside of the HN and SV info-bubbles
still early to mass adoption like the smartphone or the internet, mostly nerds playing w it
azinman2
I really disagree. I had a masseuse tell me how he uses ChatGPT, told it a ton of info about himself, and now he uses it for personalized nutrition recommendations. I was in Atlanta over the weekend recently, at a random brunch spot, and overheard some _very_ not SV/tech folks talk about how they use it everyday. Their user growth rate shows this -- you don't hit hundreds of millions of people and have them all be HN/SV info-bubble folks.
aleph_minus_one
> idk AI is just a speck outside of the HN and SV info-bubbles
> still early to mass adoption like the smartphone or the internet, mostly nerds playing w it
Rather: outside of the HN and SV bubbles, the A"I"s, and how easily people fall for this kind of hype and dupery, are commonly ridiculed.
acdha
That doesn’t match what I hear from teachers, academics, or the librarians complaining that they are regularly getting requests for things which don’t exist. Everyone I know who’s been hiring has mentioned spammy applications with telltale LLM droppings, too.
kadushka
ChatGPT has 400M weekly users. https://backlinko.com/chatgpt-stats
827a
Agreed. A hot take I have is that I think AI is over-hyped in its long-term capabilities, but under-hyped in its short-term ones. We're at the point, today or within the next twelve months, where all the frontier labs could stop investing any money into research and they'd still see revenue growth via usage of what they've built, and humanity would still be significantly more productive every year, year-over-year, for quite a while because of it.
The real driver of productivity growth from AI systems over the next few years isn't going to be model advancements; it'll be the more traditional software engineering, electrical engineering, robotics, etc systems that get built around the models. Phrased another way: If you're an AI researcher thinking you're safe but the software engineers are going to lose their jobs, I'd bet every dollar on reality being the reverse of that.
lexarflash8g
Apparently Dwarkesh's podcast is a big hit in SV -- it was covered by the Economist just recently. I thought the "All In" podcast was the voice of tech, but their content has been going political with MAGA lately and their episodes are basically shouting matches with their guests.
And for folks who want to read rather than listen to a podcast, why not create an article (they are using Gemini) rather than just posting the whole transcript? Who is going to read a 60 min long transcript?
yibg
Might as well be 10 - 1000 years. Reality is no one knows how long it'll take to get to AGI, because:
1) No one knows what exactly makes humans "intelligent" and therefore 2) No one knows what it would take to achieve AGI
Go back through history and AI / AGI has been a couple of decades away for several decades now.