
Sutskever and LeCun: Scaling LLMs Won't Yield More Useful Results

mvkel

All the frontier houses know this too. They also know it will be extremely difficult to raise more capital if their pitch is "we need to go back to research, which might return nothing at all."

Ilya did also acknowledge that these houses will still generate gobs of revenue, despite being at a dead end, so I'm not sure what the criticism is, exactly.

Everyone knows another breakthrough is required for agi to arrive; sama explicitly said this. Do you wait and sit on your hands until that breakthrough arrives? Or make a lot of money while skating to where the puck will be?

stephc_int13

I would say the issue is that most of the big AI players are burning a lot more cash than they earn, and the main thesis is that they are doing so because their product will be so huge that they will need 10x-100x infrastructure to support it.

But what we're seeing at the moment is a deceleration, not an acceleration.

catigula

Source?

Everyone at anthropic is saying ASI is imminent…

N_Lens

The frenzy around AI is to do with growth-fueled cocaine capitalism seeking 'more', where rational minds can see that we don't have that much runway left with our current mode of operation.

While the tech is useful, the mass amounts of money being shoveled into AI have more to do with the ever-receding mirage of a promised land where there will be an infinite amount of 'more'. For some people that means post-scarcity; for others it means a world-dominating AGI that achieves escape velocity against the current gridlock of geopolitics; for still others it means ejecting the pesky labour class and replacing all labour needs with AI and robots. Varied needs, but all perceived as urgent and inescapable by their vested interests.

smollOrg

so the top dogs state the obvious, again?

every LLM is easily misaligned, "deceived to deceive", and they want to focus on adding MORE ATTACK SURFACE???

This is glorious.

rishabhaiover

While I think there's obvious merit to their skepticism over the race towards AGI, Sutskever's goal doesn't seem practical to me. As Dwarkesh also said, we reach a safe and eventually perfect system by deploying it in public and iterating on it until optimal convergence, dictated by users in a free market. Hence, I trust that Google, OpenAI, or Anthropic will get there, not SSI.

Closi

> we reach a safe and eventually perfect system by deploying it in public and iterating on it until optimal convergence, dictated by users in a free market

Possibly... but a lot of the foundational AI advancements were actually made in skunkworks-like environments, through pure research rather than iterating in front of the public.

It's not 100% clear to me whether the ultimate path to the end is iteration or something completely new.

stephc_int13

Some have been saying this for years now, but the consensus in the AI community and SV has been visibly shifting in recent months.

Social contagion is astonishingly potent around ideas like this one, and this probably explains why the zeitgeist seems immovable for a time and then suddenly changes.

gdiamos

I personally don't think the scaling hypothesis is wrong, but it is running up against real limits.

What high quality data sources are not already tapped?

Where does the next 1000x flops come from?

krackers

>What high quality data sources are not already tapped

Stick a microphone and camera on a robot outdoors and you can get unlimited data of perfect quality (because it is, by definition, the real world, not synthetic). Maybe the "AGI needs to be embodied" people will be right, because that's the only way to get enough coherent multimodal data for things like long-range planning, navigation, game-playing, and visual tasks.

kfarr

Also self-driving cars, which are hoovering up this data already. Both Alphabet and Grok have an unusual advantage with those data sources.

catigula

NSA datasets. Did your eye catch the "genesis" project?

junkaccount

Why is Yann LeCun in the same article as Ilya?

meatmanek

I can't tell if you meant this as a slight on Ilya Sutskever or on Yann LeCun. Both are well-known names in AI.

scarmig

Pretty sure the article is AI slop, so it's kind of connect-the-dots.

Ilya appears to have shifted closer to Yann's position, though: Yann has been on the "scaling LLMs will fail to reach AGI" beat for a long time.