
I don't think AGI is right around the corner

raspasov

Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. By design they are OK summarizers of text, but they are not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

andyfilms1

Thousands are being laid off, supposedly because they're "being replaced with AI," implying the AI is as good as or better than humans at these jobs. Managers and execs are workers, too--so if the AI really is so good, surely they should recuse themselves and go live a peaceful life with the wealth they've accrued.

I don't know about you, but I can't imagine that ever happening. To me, that alone is a tip off that this tech, while amazing, can't live up to the hype in the long term.

sleepybrett

Every few weeks I give LLMs a chance to code something for me.

Friday I laid out a problem very cleanly: take this data structure and transform it into this other data structure in Terraform, with examples of the data in both formats.

After the seventh round of back and forth, in which it kept giving me code that would not compile or that produced a totally different data structure, despite my giving it more examples and clarifications all the while, I gave up. I gave the problem to a junior and they came back with the answer in about an hour.

Next time an AI bro tells you that AI can 'replace your juniors' tell him to go to hell.

Davidzheng

I agree with the last part but I think that criticism applies to many humans too so I don't find it compelling at all.

I also think that by the original definition (better than the median human at almost all tasks) it's close, and I think in the next 5 years it will be competitive with professionals at all tasks which are non-physical (physical could be 5-10 years, idk). I could be high on my own stories, but not the rest.

LLMs are good at language yes but I think to be good at language requires some level of intelligence. I find this notion that they are bad at spatial reasoning extremely flawed. They are much better than all previous models, some of which are designed for spatial reasoning. Are they worse than humans? Yes but just the fact that you can put newer models on robots and they just work means that they are quite good by AI standards and rapidly improving.

richardw

They’re great at working with the lens on our reality that is our text output. They are not truth seekers, which is necessarily fundamental to every life form from worms to whales. If we get things wrong, we die. If they get them wrong, they earn 1000 generated tokens.

jhanschoo

Why do you say that LLMs are not truth seekers? If I express an informational query poorly, the LLM will infer what I mean and address the well-posed queries I may have intended but did not express well.

Can that not be considered truth-seeking, with the agent-environment boundary being the prompt box?

chychiu

They are not intrinsically truth seekers, and any truth seeking behaviour is mostly tuned during the training process.

Unfortunately it also means it can be easily undone. E.g. just look at Grok in its current lobotomized version

sleepybrett

They keep giving me incorrect answers to verifiable questions. They clearly don't 'seek' anything.

Buttons840

I'll offer a definition of AGI:

An AI (a computer program) that is better at [almost] any task than 5% of the human specialists in that field has achieved AGI.

Or, stated another way, if 5% of humans are incapable of performing any intellectual job better than an AI can, then that AI has achieved AGI.

Note, I am not saying that an AI that is better than humans at one particular thing has achieved AGI, because it is not "general". I'm saying that if a single AI is better at all intellectual tasks than some humans, the AI has achieved AGI.

The 5th percentile of humans deserves the label "intelligent", even if they are not the most intelligent (I'd say all humans deserve the label "intelligent"), and if an AI can perform all intellectual tasks better than such a person, the AI has achieved AGI.
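
Spelled out a bit more formally (treating "performance" as an informal score rather than anything precisely defined here):

    AGI(A)  iff  there exists a set H of humans, |H| >= 5% of all humans,
    such that for every intellectual task t and every h in H:
        performance(A, t) >= performance(h, t)

The quantifier order is the whole point: the same AI has to beat the same people across every task, not different people on different tasks.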

djoldman

I like where this is going.

However, it's not sufficient. The actual tasks have to be written down, tests constructed, and the specialists tested.

A subset of this has been done with some rigor and AI/computers have surpassed this threshold for some tests. Some have then responded by saying that it isn't AGI, and that the tasks aren't sufficiently measuring of "intelligence" or some other word, and that more tests are warranted.

Buttons840

You're saying we need to write down all intellectual tasks? How would that help?

If an AI is better at some tasks (that happen to be written down), it doesn't mean it is better at all tasks.

Actually, I'd lower my threshold even further--I originally said 50%, then 20%, then 5%--but now I'll say if an AI is better than 0.1% of people at all intellectual tasks, then it is AGI, because it is "general" (being able to do all intellectual tasks), and it is "intelligent" (a label we ascribe to all humans).

But the AGI has to be better at all (not just some) intellectual tasks.

aydyn

I think your definition is flawed.

Take the Artificial out of AGI. What is GI, and do the majority of humans have it? If so, then why is your definition of AGI far stricter than the definition of Human GI?

timmg

Interesting. I think the key to what you wrote is "poorly defined".

I find LLMs to be generally intelligent. So I feel like "we are already there" -- by some definition of AGI. At least how I think of it.

Maybe a lot of people think of AGI as "superhuman". And by that definition, we are not there -- and may not get there.

But, for me, we are already at the era of AGI.

Incipient

I would call them "generally applicable". "Intelligence" definitely implies learning - and I'm not sure RAG, fine-tuning, or 6-monthly updates count - to split hairs.

Where I will say we have a massive gap, which makes the average person not consider it AGI, is in context. I can give a person my very modest codebase and ask for a change, and they'll deliver - mostly coherently - to that style, files in the right place, etc. Even today with AI, I get inconsistent design, files in random spots, etc.

apsurd

that's the thing about language. we all kinda gotta agree on the meanings

rf15

There are definitely also people in the futurism and/or doom-and-gloom camps with absolutely no skin in the game who can't resist this topic.

giancarlostoro

It's right around the corner when you prove it as fact. Otherwise, as suggested, it's just hype to sell us on your LLM flavor.

JKCalhoun

Where does Eric Schmidt fit? Selling something?

raspasov

I think he's generally optimistic which is a net positive.

rvz

Already invested in the AI companies selling you something.

dathinab

I _hope_ AGI is not right around the corner. For socio-political reasons we are absolutely not ready for it, and it might push the future of humanity into a dystopian abyss.

but also, just taking what we have now with some major power usage reduction and minor improvements here and there already seems like something which can be very usable/useful in a lot of areas (and to some degree we aren't even really ready for that either, but I guess that's normal with major technological change)

it's just that for those companies creating foundational models it's quite unclear how they can recoup their already spent costs without either a major breakthrough or forcefully (or deceptively) pushing it into a lot more places than it fits into

twelve40

I agree and sincerely hope this bubble pops soon

> Meta Invests $100 Billion into Augmented Reality

that fool controls the board and he seems to be just desperately throwing insane ad money against the wall hoping that something sticks

for Altman there is no backing out either, need to make hay while the sun shines

for the rest of us, i really hope these clowns fail like it's 2000 and never get to their dystopian matrix crap.

pbreit

"that fool" created a $1.8 trillion company.

SoftTalker

He created a company that tracks and profiles people, psychologically manipulates them, and sells ads. And has zero ethical qualms about the massive social harm they have left in their wake.

That doesn't tell me anything about his ability to build "augmented reality" or otherwise use artificial intelligence in any way that people will want to pay for. We'll see.

Ford and GM have a century of experience building cars but they can't seem to figure out EVs despite trying for nearly two decades now.

Tesla hit the ball out of the park with EVs but can't figure out self-driving.

Being good at one thing does not mean you will be good at everything you try.

AaronAPU

I’m always fascinated when someone equates profit with intelligence. There are many very wealthy fools and there always have been. Plenty of ingredients to substitute for intelligence.

Neither necessary nor sufficient.

lowsong

No, the thousands of people working at Facebook did; he just got rich from it.

the_gastropod

Aren't there enough examples of successful people who are complete buffoons to nuke this silly trope from orbit? Success is no proof of wisdom or intelligence or whatever.

twelve40

past performance does not guarantee future results

also, great for Wall Street, mixed bag for us, the people

kulahan

$1.8 trillion in investor hopes and dreams, but of course they make zero dollars in profit, don’t know how to turn a profit, don’t have a product anyone would pay a profitable amount for, and have yet to show any real-world use that isn’t kinda dumb because you can’t trust anything it says anyways.

Davidzheng

I think it's rather easy for them to recoup those costs: if you can disrupt some industry with a full-AI company with almost no employees and outcompete everyone else, that's free money for you.

9cb14c1ec0

When it comes to recouping costs, a lot of people don't consider the insane amount of depreciation expense brought on by the up to $1 trillion (depending on the estimate) that has been invested in AI buildouts. That depreciation expense could easily be more than the combined revenue of all AI companies.
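
For a rough sense of scale, assuming, say, a five-year straight-line schedule on the high-end $1 trillion figure (my assumption for illustration, not a reported schedule):

    $1,000B / 5 years = $200B per year of depreciation expense

That annual figure alone is well above the publicly reported revenue of the foundation-model companies themselves.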

tshaddox

If birth rates are as much a cause for concern as many people seem to think, and we absolutely need to solve it (instead of solving, for instance, the fact that the economy purportedly requires exponential population growth forever), perhaps we should hope that AGI comes soon.

pbreit

Must "AGI" match human intelligence exactly or would outperforming in some functions and underpformin in others qualify?

crooked-v

For me, "AGI" would come in with being able to reliably perform simple open-ended tasks successfully without needing any specialized aid or tooling. Not necessarily very well, just being capable of it in the first place.

For a specific example of what I mean, there's Vending-Bench - even very 'dumb' humans could reliably succeed on that test indefinitely, at least until they got terminally bored of it. Current LLMs, by contrast, are just fundamentally incapable of that, despite seeming very 'smart' if all you pay attention to is their eloquence.

carefulfungi

If someone handed you an envelope containing a hidden question, and your life depended on a correct answer, would you rather pick a random person out of the phone book or an LLM to answer it?

On one hand, LLMs are often idiots. On the other hand, so are people.

saubeidl

Where would you draw the line? Any ol' computer outperforms me in doing basic arithmetic.

kulahan

This is a question of how we quantify intelligence, and there aren't many great answers. Still, basic arithmetic is probably not the right guideline for intelligence. My guess has always been that it lies somewhere in the ability to think critically, which they still have not even attempted, because it doesn't really work with LLMs as they're structured today.

hkt

I'd suggest anything able to match a professional doing knowledge work. Original research from recognisably equivalent cognition, or equal abilities with a skilled practitioner of (eg) medicine.

This sets the bar high, but I think there's something to the idea of being able to pass for human in the workplace. That's the real, consequential outcome here: AGI genuinely replacing humans, without need for supervision. That's what will have consequences. At the moment we aren't there (pre-first-line support doesn't count).

root_axis

At the very least, it needs to be able to collate training data, design, code, train, fine tune and "RLHF" a foundational model from scratch, on its own, and have it show improvements over the current SOTA models before we can even begin to have the conversation about whether we're approaching what could be AGI at some point in the future.

kadushka

I cannot do all that. Am I not generally intelligent?

OJFord

That would be human; I've always understood the General to mean 'as if it's any human', i.e. perhaps not absolute mastery, but trained expertise in any domain.

babuloseo

what social political reasons, can you name some of these? we are 100% ready for AGI.

kadushka

Are you ready to lose your job, permanently?

kiney

looking forward to it

izzydata

Not only do I not think it is right around the corner, I'm not convinced it is even possible, or at the very least I don't think it is possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is an adequate or useful measurement of intelligence. If we ever crack artificial intelligence, it's quite possible that in its first form it is of very low intelligence by human standards, but is truly capable of learning on its own without extra help.

Waterluvian

I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent python API. Maybe it’s more sand-based, though.

knome

>Maybe our first AGI is just a Petri dish brain with a half-decent python API.

https://www.oddee.com/australian-company-launches-worlds-fir...

the entire idea feels rather immoral to me, but it does exist.

Balgair

-- A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Sort of. The main issue is the energy requirements. We could theoretically reproduce a human brain in software today; it's just that it would be a really big energy hog, run very slowly, and probably quickly become insane like any person trapped in a sensory deprivation tank.

The real key development for AI and AGI is down at the metal level of computers: the memristor.

https://en.m.wikipedia.org/wiki/Memristor

The synapse in a brain is essentially a memristive element, and it's a very taxing one on the neuron. The defining equation is (change in flux)/(change in charge). Yes, a flux capacitor, sorta. It's the missing piece in fundamental electronics.
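
For reference, the standard textbook relations (see the linked Wikipedia article), with q the charge and φ the flux linkage:

    R = dv/di   (resistor:  voltage vs. current)
    C = dq/dv   (capacitor: charge vs. voltage)
    L = dφ/di   (inductor:  flux vs. current)
    M = dφ/dq   (memristor: flux vs. charge), giving v(t) = M(q(t)) · i(t)

The memristor fills in the fourth pairing among the two-terminal circuit elements.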

Making simple 2-element memristors is somewhat possible these days, though I've not really been in the space recently. Please, if anyone knows where to buy them - a real one, not one merely claimed to be - let me know. I'm willing to pay good money.

In terms of AI, a memristor would require a total redesign of how we architect computers (goodbye buses and physically separate memory, for one). But you'd get a huge energy and time savings benefit. As in, you could run an LLM on a watch battery or small solar cell and let the environment train it to a degree.

Hopefully AI will accelerate their discovery and facilitate their introduction into cheap processing and construction of chips.

josefx

> and fundamentally immeasurable about humans that leads to our general intelligence

Isn't AGI defined to mean "matches humans in virtually all fields"? I don't think there is a single human capable of this.

andy99

If by "something magical" you mean something we don't understand, that's trivially true. People like to give firm opinions or make completely unsupported statements they feel should be taken seriously ("how do we know humans intelligence doesn't work the same way as next token prediction") about something nobody understand.

Waterluvian

I mean something that’s fundamentally not understandable.

“What we don’t yet understand” is just a horizon.

somewhereoutth

Our silicon machines exist in a countable state space (you can easily assign a unique natural number to any state for a given machine). However, 'standard biological mechanisms' exist in an uncountable state space - you need real numbers to properly describe them. Cantor showed that the uncountable is infinitely more infinite (pardon the word tangle) than the countable. I posit that the 'special sauce' for sentience/intelligence/sapience exists beyond the countable, and so is unreachable with our silicon machines as currently envisaged.

I call this the 'Cardinality Barrier'

bakuninsbart

Cantor talks about countable and uncountable infinities; both computer chips and human brains are finite spaces. The human brain has roughly 100b neurons; even if each of these had an edge to every other, and these edges could individually light up signalling different states of mind, isn't that just on the order of 2^(number of possible edges)? That's roughly as far away from infinity as 1.
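
Working through the arithmetic implied here (assuming roughly 10^11 neurons and one possible binary edge between each pair):

    possible edges: N(N-1)/2 ≈ (10^11)^2 / 2 = 5 × 10^21
    possible states: 2^(5 × 10^21)

Astronomically large, but finite, and therefore countable, which is the point.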

Waterluvian

That’s an interesting thought. It steps beyond my realm of confidence, but I’ll ask in ignorance: can a biological brain really have infinite state space if there’s a minimum divisible Planck length?

Infinite and “finite but very very big” seem like a meaningful distinction here.

I once wondered if digital intelligences might be possible but would require an entire planet’s precious metals and require whole stars to power. That is: the “finite but very very big” case.

But I think your idea is constrained to if we wanted a digital computer, is it not? Humans can make intelligent life by accident. Surely we could hypothetically construct our own biological computer (or borrow one…) and make it more ideal for digital interface?

richk449

It sounds like you are making a distinction between digital (silicon computers) and analog (biological brains).

As far as possible reasons that a computer can’t achieve AGI go, this seems like the best one (assuming computer means digital computer of course).

But in a philosophical sense, a computer obeys the same laws of physics that a brain does, and the transistors are analog devices that are being used to create a digital architecture. So whatever makes your brain have uncountable states would also make a real digital computer have uncountable states. Of course we can claim that only the digital layer on top matters, but why?

layer8

Physically speaking, we don’t know that the universe isn’t fundamentally discrete. But the more pertinent question is whether what the brain does couldn’t be approximated well enough with a finite state space. I’d argue that books, music, speech, video, and the like demonstrate that it could, since those don’t seem qualitatively much different from how other, analog inputs stimulate our intellect. Or otherwise you’d have to explain why an uncountable state space would be needed to deal with discrete finite inputs.

dwaltrip

Please describe in detail how biological mechanisms are uncountable.

And then you need to show how the same logic cannot apply to non-biological systems.

jandrewrogers

> 'standard biological mechanisms' exist in an uncountable state space

Everything in our universe is countable, which naturally includes biology. A bunch of physical laws are predicated on the universe being a countable substrate.

saubeidl

That is a really insightful take, thank you for sharing!

sandworm101

A brain in a jar, with wires so that we can communicate with it, already exists. It's called the internet. My brain is communicating with you now through wires. Replacing my keyboard with implanted electrodes may speed up the connection, but it won't fundamentally change the structure or capabilities of the machine.

Waterluvian

Wait, are we all just Servitors?!

frizlab

> if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence

It’s called a soul for the believers.

agumonkey

Then there's the other side of the issue: if your tool is smarter than you, how do you handle it?

People are joking online that some colleagues use ChatGPT to answer questions from other teammates that were themselves generated by ChatGPT; nobody knows what's going on anymore.

breuleux

I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system has got to be in some way exponential in how complex or chaotic the system is, meaning that the effectiveness of intelligence is intrinsically constrained to simple and orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many factors of variation as possible. That might be the only modality where intelligence actually works well, super or not.
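
One standard way to make the "exponential in how chaotic" intuition concrete (a textbook chaos-theory bound, not something specific to this thread): for a system with largest Lyapunov exponent λ, an initial measurement error δ₀ grows roughly as

    δ(t) ≈ δ₀ · e^(λt)

so keeping the prediction error below a tolerance Δ out to horizon T requires initial precision δ₀ ≈ Δ · e^(-λT). The required resolution 1/δ₀ grows exponentially with the horizon, which is one sense in which modelling chaotic systems gets expensive fast.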

airstrike

What does "scale well" mean here? LLMs right now aren't intelligent so we're not scaling from that point on.

If we had a very inefficient, power-hungry machine that was 1:1 as intelligent as a human being, but could scale it, however inefficiently, to 100:1 a human being, it might still be worth it.

navels

why not?

izzydata

I'm not an expert by any means, but everything I've seen of LLMs / machine learning looks like mathematical computation no different than what computers have always been doing at a fundamental level. If computers weren't AI before, then I don't think they are now just because the math they are doing has changed.

Maybe something like the game of life is more in the right direction. Where you set up a system with just the right set of rules with input and output and then just turn it on and let it go and the AI is an emergent property of the system over time.

hackinthebochs

Why do you have a preconception of what an implementation of AGI should look like? LLMs are composed of the same operations that computers have always done. But they're organized in novel ways that have produced novel capabilities.

paulpauper

I agree. There is no defined or agreed-upon consensus on what AGI even means or implies. Instead, we will continue to see incremental improvements at the sort of things AI is good at, like text and image generation, generating code, etc. The utopian dream of AI solving all of humanity's problems while people just chill on a beach basking in infinite prosperity is unfounded.

colechristensen

>I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence.

Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made the definition easier, because now we can ask a concrete question about computers that are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering the question of what current "AI" technology lacks will make us better able to define intelligence. This assumes that LLMs are the state-of-the-art Million Monkeys, and that intelligence lies on a different path than further optimizing that.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem

baxtr

I think the same.

What do you call people like us? AI doomers? AI boomers?!

izzydata

This article is about being skeptical that what people currently call AI - which is actually LLMs - is going to be a transformative technology.

Myself and many others are skeptical that LLMs are even AI.

LLMs / "AI" may very well be a transformative technology that changes the world forever. But that is a different matter.

paulpauper

There is a middle ground of people who believe AI will lead to improvements in some aspects of life, but will not liberate people from work or anything grandiose like that.

baxtr

I am a big fan of AI tools.

I just don’t see how AGI is possible in the near future.

Mistletoe

Realists.

dinkumthinkum

I think you are very right to be skeptical. It's refreshing to see another such take, as it is so strange to see so many supposedly technical people just roll down the track of assuming this is happening, when there are some fundamental problems with the idea. I understand why non-technical people are ready to marry and worship it or whatever, but for serious people I think we need to think more critically.

datatrashfire

Am I missing something? Predicts AGI through continuous learning in 2032? Feels right around the corner to me.

> But in all the other worlds, even if we stay sober about the current limitations of AI, we have to expect some truly crazy outcomes.

Also expresses the development as a nearly predetermined outcome? A bunch of fanciful handwaving if you ask me.

Nition

Yeah, my suspicion is that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a relatively human level of ability to think and reason. Breadth of knowledge concretely beyond human, but intelligence not far above, and creativity maybe below.

AI companies are predicting next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require an ability to internally regenerate concepts from lower-level primitives. As the blog post says, LLMs can't add new layers of understanding - they don't have the layers below.

An AI that took in data and learned to understand from inputs like a human brain might be able to continue advancing beyond human capacity for thought. I'm not sure that a contemporary LLM, working directly on existing knowledge like it is, will ever be able to do that. Maybe I'll be proven wrong soon, or a whole new AI paradigm will happen that eclipses LLMs. In a way I hope not, because the potential ASI future is pretty scary.

vessenes

Good take from Dwarkesh. And I love hearing his updates on where he’s at. In brief - we need some sort of adaptive learning; he doesn’t see signs of it.

My guess is that frontier labs think that long context is going to solve this: if you had a quality 10M-token context, that would be enough to freeze an agent at a great internal state and still do a lot.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.

Davidzheng

I'm sure we'll have true test-time learning soon (<5 years), but it will be more expensive. AlphaProof (for DeepMind's IMO attempt) already has this.

nicoburns

How long is "long"? Real humans have context windows measured in decades of realtime multimodal input.

kranke155

I believe Demis when he says we are 10 years away from AGI.

He basically made up the field (outside of academia) for a large number of years, and OpenAI was partially founded to counteract his lab and the fear that he would get there first (and only).

So I trust him. Sometime around 2035 he expects there will be AGI which he believes is as good or better than humans in virtually every task.

eikenberry

When someone says 10 years out in tech it means there are several needed breakthroughs that they think could possibly happen if things go just right. Being an expert doesn't make the 10 years more accurate, it makes the 'breakthroughs needed' part more meaningful.

merizian

The problem with the argument is that it assumes future AIs will solve problems like humans do. In this case, it’s that continuous learning is a big missing component.

In practice, continual learning has not been an important component of improvement in deep learning history thus far. Instead, large diverse datasets and scale have proven to work the best. I believe a good argument for continual learning being necessary needs to directly address why the massive cross-task learning paradigm will stop working, and ideally make concrete bets on what skills will be hard for AIs to achieve. I think generally, anthropomorphisms lack predictive power.

I think maybe a big real crux is the amount of acceleration you can achieve once you get very competent programming AIs spinning the RL flywheel. The author mentioned uncertainty about this, which is fair, and I share the uncertainty. But it leaves the rest of the piece feeling too overconfident.

Davidzheng

Well, AlphaProof used test-time training methods to generate similar problems (AlphaZero style) for each question it encountered.

827a

Continuous learning might not have been important in the history of deep learning so far, but that might just be because the deep learning folks are measuring the wrong thing. If you want to build the most intelligent AI ever, based on whatever synthetic benchmark is hot this month, then you'd do exactly what the labs are doing. If you want to build the most productive and helpful AI, intelligence isn't always the best goal. It's usually helpful, but an idiot who learns from his mistakes is often more valuable than an egotistical genius.

Herring

Apparently 54% of American adults read at or below a sixth-grade level nationwide. I’d say AGI is kinda here already.

https://en.wikipedia.org/wiki/Literacy_in_the_United_States

yeasku

Does a country's failed education system have anything to do with AGI?

Davidzheng

Yes, if you measure AGI against the median human.

ch4s3

The stat is skewed wildly by immigration. The literacy level of native-born Americans is higher. The population of foreign-born adults is nearly 20% of the total adult population, and as you can imagine many are actively learning English.

Herring

It’s not skewed much by immigration. This is because the native-born population is much larger.

See: https://www.migrationpolicy.org/sites/default/files/publicat...

51% of native-born adults scored at Level 3 or higher. This is considered the benchmark for being able to manage complex tasks and fully participate in a knowledge-based society. Only 28% of immigrant adults achieved this level. So yes, immigrants are in trouble, but it's still a huge problem, with 49% of native-born adults below Level 3.
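
A rough back-of-the-envelope check using the numbers in this thread (≈80% native-born at 49% below Level 3, ≈20% foreign-born at 72% below Level 3):

    0.8 × 49% + 0.2 × 72% ≈ 39% + 14% ≈ 54% below Level 3 overall

which lines up with the ~54% national figure cited above and is only about five points worse than the native-born rate alone - i.e. the headline number is not primarily an immigration artifact.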

dankwizard

The immigration is actually working to boost literacy levels. Americans have been falling off for a long time.

mopenstein

What percentage of those people could never read above a certain grade level? Could 100% of humans eventually, with infinite resources and time, all be geniuses? Could they read and comprehend all the works produced by mankind?

I'm curious.

kranke155

No but they could probably read better. Just look at the best education systems in the world and propagate that. Generally, all countries should be able to replicate that.

skybrian

From an economics perspective, a more relevant comparison would be to the workers that a business would normally hire to do a particular job.

For example, for a copy-editing job, they probably wouldn't hire people who can't read all that well, and never mind what the national average is. Other jobs require different skills.

Herring

Life is a lot bigger than just economics.

See here for example: https://data.worldhappiness.report/chart

The US economy has never been richer, but overall happiness just keeps dropping. So they vote for populists. Do you think more AI will help?

I think it’s wiser to support improving education.

skybrian

I don’t know whether it will or not. I seems like people are worrying about the economic impact of AI though?

korijn

The ability to read is all it takes to have AGI?

thousand_nights

very cool. now let's see the LLM do the laundry and wash my dishes

yes you're free to give it a physical body in the form of a robot. i don't think that will help.

dinkumthinkum

Yet, those illiterate people can still solve an enormous number of challenges that LLMs cannot.

pu_pe

While most takes here are pessimistic about AI, the author himself suggests he believes there is a 50% chance of AGI being achieved by the early 2030s, and says we should still prepare for the odd possibility of misaligned ASI by 2028. If anything, the author is bullish on AI.

goatlover

How would we prepare for misaligned ASI in 3 years? That happens and all bets are off.

babymetal

I've been confused by the AI discourse for a few years, because it seems to make assertions with strong philosophical implications for the relatively recent (Western) philosophical conversation around personal identity and consciousness.

I no longer think that this is really about what we immediately observe as our individual intellectual existence, and I don't want to criticize whatever it is these folks are talking about.

But FWIW, and in that vein, if we're really talking about artificial intelligence, i.e. "creative" and "spontaneous" thought, that we all as introspective thinkers can immediately observe, here are references I take seriously (Bernard Williams and John Searle from the 20th century):

https://archive.org/details/problemsofselfph0000will/page/n7...

https://archive.org/details/intentionalityes0000sear

Descartes, Hume, Kant and Wittgenstein are older sources that are relevant.

[edit] Clarified that Williams and Searle are 20th century.

A_D_E_P_T

See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

hackinthebochs

> "Shouldn’t we be expecting that kind of stuff?"

https://x.com/robertghrist/status/1841462507543949581

IAmGraydon

This is precisely the question I've been asking, and the lack of an answer makes me think that this entire thing is one very elaborate, very convincing magic trick. LLMs are better thought of as search engines with a very intuitive interface over all existing, publicly available human knowledge, rather than as actually intelligent. I think all of the big players know this, and are feeding the illusion to extract as much cash as possible before the farce becomes obvious.

luckydata

Well, this statement is simply not true. Agent systems based on LLMs have made original discoveries on their own; see the work DeepMind has done on pharmaceutical discovery.

A_D_E_P_T

What results have they delivered?

I recall the recent DeepMind materials science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.

I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing is somewhat different.

vessenes

I think gwern gave a good hot take on this: it's super rare for humans to do this; it might just be moving the goalposts to complain that the AI can't.

IAmGraydon

No, it’s really not that rare. There are new scientific discoveries all time, and all from people who don’t have the advantage of having the entire corpus of human knowledge in their heads.

baobabKoodaa

Hey, we were featured in this article! How cool is that!

> I’m not going to be like one of those spoiled children on Hackernews who could be handed a golden-egg laying goose and still spend all their time complaining about how loud its quacks are.