
A bear case: My predictions regarding AI progress

csomar

> LLMs still seem as terrible at this as they'd been in the GPT-3.5 age. Software agents break down once the codebase becomes complex enough, game-playing agents get stuck in loops out of which they break out only by accident, etc.

This has been my observation. I got into GitHub Copilot as soon as it launched, back when GPT-3 was the underlying model. By that time (late 2021), Copilot could already write tests for my Rust functions and simple documentation. This was revolutionary. We haven't had another similar moment since.
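To give a sense of the bar at the time, the kind of completion that felt revolutionary was roughly this (a hypothetical reconstruction, not my actual code): you write the function, and Copilot drafts the test module.

    // Hypothetical reconstruction of a 2021-era Copilot completion:
    // write the function and the #[cfg(test)] header, and the
    // assistant fills in a plausible test body.
    fn add(a: i32, b: i32) -> i32 {
        a + b
    }

    #[cfg(test)]
    mod tests {
        use super::*;

        #[test]
        fn adds_positive_and_negative() {
            assert_eq!(add(2, 3), 5);
            assert_eq!(add(-1, 1), 0);
        }
    }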

The GitHub Copilot vim plugin is always on. As you type, it keeps suggesting the rest of the context in faded text. Because it is always on, I can sort of read into the AI's "mind". The more I coded, the more I realized it's just search with structured results. The results got better with 3.5/4, but after that only slightly, and sometimes not at all (e.g. 4o or o1).

I don't care what anyone says; just yesterday I made a comment about how truth has essentially died: https://news.ycombinator.com/item?id=43308513

If you have a revolutionary intelligence product, why is it not working for me?

roncesvalles

The last line has been my experience as well. I only trust what I've verified firsthand now, because the Internet is so rife with people trying to influence your thoughts in ways that benefit them, rather than sharing the truth in good faith.

I just recently heard this quote from a clip of Jeff Bezos: "When the data and the anecdotes disagree, the anecdotes are usually right.", and I was like... wow. That quote is the zeitgeist.

If it's so revolutionary, it should be immediately obvious to me. I knew Uber, Netflix, Spotify were revolutionary the first time I used them. With LLMs for coding, it's like I'm groping in the dark trying to find what others are seeing, and it's just not there.

roenxi

> I knew Uber, Netflix, Spotify were revolutionary the first time I used them.

Maybe re-tune your revolution sensor. None of those are revolutionary companies. Profitable and well executed, sure, but those turn up all the time.

Uber's entire business model was running over the legal system so quickly that taxi licenses didn't have time to catch up. Other than that it was a pretty obvious idea. It is a taxi service. The innovations they made were almost completely legal ones: figuring out how to skirt employment and taxi law.

Netflix was anticipated online by, and is probably inferior to, YouTube, except for the fact that it has a fairly traditional content-creation lab tacked on the side to make its own programs. And torrenting had been a thing for a long time already, showing how to do online distribution of video content.

roncesvalles

They were revolutionary as product genres, not necessarily as individual companies. Ordering a cab without making a phone call was revolutionary. Netflix, at least with its initial promise of having all the world's movies and TV, was revolutionary, though it didn't live up to that. Spotify, because of how cheap and easy it made access to all the music; this was the era when people were paying 99c per song on iTunes.

I've tried some AI code completion tools and none of them hit me that way. My first reaction was "nobody is actually going to use this stuff" and that opinion hasn't really changed.

And if you think those three companies weren't revolutionary, then AI code completion is even less than that.

csomar

> None of those are revolutionary companies.

Not only were Uber/Grab (and delivery apps generally) revolutionary, they are still revolutionary. I could live without LLMs, and my life would be only slightly impacted when coding. If delivery apps were not available, my life would be severely degraded. The other day I was sick. I got medicine and dinner with Grab, delivered to the condo lobby, which is as far as I could get. That is revolutionary.

kiratp

You’re not using the best tools.

Claude Code, Cline, Cursor… all of them with Claude 3.7.

csomar

Nope. I try the latest models as they come, and I have a self-made custom setup (as in a custom Lua plugin) in Neovim. What I am not is selling AI or AI-driven solutions.

hattmall

Similar experience. I try hard to make AI useful, and there are some decent spots here and there. Overall, though, I see the fundamental problem as this: people need information. Language isn't strictly information; LLMs are very good at language, but they aren't great at information. I think anything beyond the novelty of "talking" to the AI is very overhyped.

There is some usefulness to be had, for sure, but I don't know if it will still be there with the non-subsidized models.

cheevly

Perhaps we could help if you shared some real examples of what walls you’re hitting. But it sounds like you’ve already made up your mind.

colonCapitalDee

Yeah, I'd buy it. I've been using Claude pretty intensively as a coding assistant for the last couple of months, and the limitations are obvious. When the path of least resistance happens to be a good solution, Claude excels. When the best solution is off the beaten track, Claude struggles. When all the good solutions lie off the beaten track, Claude falls flat on its face.

Talking with Claude about design feels like talking with that one coworker who's familiar with every trendy library and framework. Claude knows the general sentiment around each library and has gone through the quickstart, but when you start asking detailed technical questions, Claude just nods along.

I wouldn't bet money on it, but my gut feeling is that LLMs aren't going to be a straight or even curved shot to AGI. We're going to see plenty more development in LLMs, but it'll be just that: better LLMs that remain LLMs. There will be areas where progress is fast and we'll be able to get very high intelligence in certain situations, but there will also be many areas where progress is slow, and the slow areas will cripple the ability of LLMs to reach AGI. I think there's something fundamentally missing, and finding that "something" is going to take us decades.

Paradigma11

I am not so sure about that. Yesterday Claude gave me a correct function that returned an array, but the algorithm it used didn't produce the items sorted in one pass, so it ran a separate sort at the end. The fascinating thing is that it noticed this, commented on it, and went on to return a single-pass function.

That seems a pretty human thought process, and it suggests that fundamental improvements might depend not so much on the quality of the LLM itself as on the cognitive structure it is embedded in.
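For concreteness, here's a hypothetical sketch in Rust of the two versions (illustrative only, not the actual code from the anecdote): one that runs a separate sort after building the array, and one that keeps the output sorted in a single pass over the input.

    // Version 1: build the array, then run a separate sort at the end.
    fn evens_sorted_two_pass(input: &[i32]) -> Vec<i32> {
        let mut out: Vec<i32> = input.iter().copied().filter(|n| n % 2 == 0).collect();
        out.sort(); // separate sort after the first pass
        out
    }

    // Version 2: one pass over the input, keeping the output sorted as it
    // goes (each insert can shift elements, so this trades the trailing
    // sort for per-insert cost).
    fn evens_sorted_one_pass(input: &[i32]) -> Vec<i32> {
        let mut out: Vec<i32> = Vec::with_capacity(input.len());
        for &n in input {
            if n % 2 == 0 {
                let pos = out.binary_search(&n).unwrap_or_else(|p| p);
                out.insert(pos, n);
            }
        }
        out
    }

    fn main() {
        let data = [5, 2, 8, 4, 1, 6];
        assert_eq!(evens_sorted_two_pass(&data), vec![2, 4, 6, 8]);
        assert_eq!(evens_sorted_one_pass(&data), vec![2, 4, 6, 8]);
    }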

randomNumber7

Yes, but on the other hand, I don't understand why people think you can train something on pattern matching and it magically becomes intelligent.

danielbln

A tip: ask Claude to put on a critical hat. I find the output afterwards to be improved.

stego-tech

> At some point there might be massive layoffs due to ostensibly competent AI labor coming onto the scene, perhaps because OpenAI will start heavily propagandizing that these mass layoffs must happen. It will be an overreaction/mistake. The companies that act on that will crash and burn, and will be outcompeted by companies that didn't do the stupid.

We're already seeing this with tech doing RIFs and not backfilling domestically for developer roles (the whole "we're not hiring devs in 202X" schtick), though the not-so-quiet secret is that a lot of those roles just got sent overseas to save on labor costs. The word from my developer friends is that they are sick and tired of having to force an (often junior/outsourced) colleague to explain their PR or code, only to be told "it works" and for management to overrule their concerns; this is embedding AI slopcode into products, which I'm sure won't have any lasting consequences.

My bet is that software devs who've been keeping up with their skills will have another year or two of tough times, then it's back into a cushy Aeron chair with a sparkling new laptop to do what they do best: write readable, functional, maintainable code, albeit in more targeted ways since - and I hate to be that dinosaur - LLMs produce passable code, provided a competent human is there to smooth out its rougher edges and rewrite it to suit the codebase and style guidelines (if any).

dartharva

One could argue that's not strictly "AI labor", just cheap (but real) labor using shortcuts because they're not paid enough to give a damn.

stego-tech

Oh, no, you’re 100% right. One of these days I will pen my essay on the realities of outsourced labor.

Spoiler alert: they give just barely enough to not get fired prematurely, because they know that if you're cheap enough to outsource in the first place, you'll give the contract to whoever is cheapest at renewal anyway.

cglace

The thing I can't wrap my head around is that I work on extremely complex AI agents every day and I know how far they are from actually replacing anyone. But then I step away from my work and I'm constantly bombarded with “agents will replace us”.

I wasted a few days trying to incorporate aider and other tools into my workflow. I had a simple screen I was working on for configuring an AI Agent. I gave screenshots of the expected output. Gave a detailed description of how it should work. Hours later I was trying to tweak the code it came up with. I scrapped everything and did it all myself in an hour.

I just don't know what to believe.

hattmall

There are some fields, though, where they can replace humans in significant capacity. Software development is probably one of the least likely for anything more than entry level, but A LOT of engineering faces a very, very real existential threat. Think about designing buildings. You basically need to know a lot of rules and tables, and how things interact, to know what's possible and what best practice is. A purpose-built AI could develop many systems and back-test them to complete the design. A lot of this is already handled or aided by software, but a main role of the engineer is to interface with non-technical people or other engineers. This is something where an agent could truly interface with the non-engineer to figure out what they want, then develop it and interact with the design software quite autonomously.
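As a toy illustration of that "rules and tables" style of knowledge (hypothetical sections and numbers, not a real building code):

    use std::collections::HashMap;

    fn main() {
        // Hypothetical max allowable spans (meters) for a few beam sections.
        let max_span_m: HashMap<&str, f64> = HashMap::from([
            ("W200x27", 4.8),
            ("W310x39", 7.2),
            ("W460x60", 10.5),
        ]);

        // Check a proposed design against the table.
        let (section, required_span) = ("W310x39", 6.5);
        match max_span_m.get(section) {
            Some(&max) if required_span <= max => {
                println!("{section} OK for a {required_span} m span (limit {max} m)")
            }
            Some(&max) => println!("{section} fails: {required_span} m > limit {max} m"),
            None => println!("{section} not in the table"),
        }
    }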

I think there is a lot of focus on AI agents in software development, though, because that's just an early-adopter market, much like how it's always been possible to find a lot of information about web development on the web!

arkh

> just

In my experience this word means you don't know what you're talking about. "Just" almost always hides a ton of unknown unknowns. After being burned enough times, nowadays when I'm about to use it I try to stop and start asking more questions.

drysine

>a main role of the engineer is to interface with the non-technical persons or other engineers

The main role of the engineer is being responsible for the building not collapsing.

tobr

I keep coming back to this point. Lots of jobs are fundamentally about taking responsibility. Even if AI were to replace most of the work involved, only a human can meaningfully take responsibility for the outcome.

randomNumber7

ChatGPT will probably take more responsibility than Boeing does for its airplane software.

spaceman_2020

You’re biased because if you’re here, you’re likely an A-tier player used to working with other A-tier players.

But the vast majority of the world is not A players. They're B and C players.

I don't think the people evaluating AI tools have ever worked in wholly mediocre organizations, or even know how many mediocre organizations exist.

cheevly

I promise the amount of time, experiments, and novel approaches you've tested is .0001% of what others have running in stealth projects. I've spent an average of 10 hours per day, constantly, since 2022 working on LLMs, and I know that even what I've built pales in comparison to other labs. (And I'm well beyond agents at this point.) Agentic AI is what's popular in the mainstream, but it's going to be trounced by at least 2 new paradigms this year.

cglace

So what is your prediction?

usaar333

The author also made a highly upvoted and controversial comment about o3, in the same vein, that's worth reading: https://www.lesswrong.com/posts/Ao4enANjWNsYiSFqc/o3?comment...

Of course LessWrong, being heavily populated by AI doomers, may be slightly biased against near-term AGI just from motivated reasoning.

Gotta love this part of the post, which no one has yet addressed:

> At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that

demaga

I would expect similar doom predictions from the era when nuclear weapons were invented, but we've survived so far. Why do people assume AGI will be orders of magnitude more dangerous than what we already have?

amoss

Nuclear weapons are not self-improving or self-replicating.

usaar333

More ability to kill everyone. That's harder to do with nukes.

That said, the actual forecast odds on Metaculus are pretty similar for nuclear and AI catastrophes: https://possibleworldstree.com/

randomNumber7

Most people are just ignorant and dumb; don't listen to them.

a-dub

> LLMs are not good in some domains and bad in others. Rather, they are incredibly good at some specific tasks and bad at other tasks. Even if both tasks are in the same domain, even if tasks A and B are very similar, even if any human that can do A will be able to do B.

i think this is true of ai/ml systems in general. we tend to anthropomorphise their capability curves to match the cumulative nature of human capabilities, when often the capability curve of the machine is discontinuous and has surprising gaps.

andsoitis

This poetic statement by the author sums it up for me:

”People are extending LLMs a hand, hoping to pull them up to our level. But there's nothing reaching back.”

blitzar

When you (attempt to) save a person from drowning, there is a ridiculously high chance of them drowning you.

nakedneuron

Haha.

Shame on you for making me laugh. That was very inappropriate.

swazzy

I see no reason to believe the extraordinary progress we've seen recently will stop or even slow down. Personally, I've benefited so much from AI that it feels almost alien to hear people downplaying it. Given the excitement in the field and the sheer number of talented individuals actively pushing it forward, I'm quite optimistic that progress will continue, if not accelerate.

danielbln

I hear you. I feel constantly bewildered by comments like "LLMs haven't really changed since GPT-3.5." I mean, really? It went from an exciting novelty to a core pillar of my daily work; it's allowed me and my entire (granted, quite senior) org to be incredibly more productive and creative with our solutions.

And then I stumble across a comment where some LLM hallucinated a library, which apparently means AI is clearly useless.

spaceman_2020

The impression I get from using all cutting edge AI tools:

1. Sonnet 3.7 is a mid-level web developer at least

2. DeepResearch is about as good an analyst as an MBA from a school ranked 50+ nationally. Not lower than that. EY, not McKinsey

3. Grok 3/GPT-4.5 are good enough as $0.05/word article writers

It's not replacing the A players, but it's good enough to replace B players and is definitely better than C and D players.

id00

I'd expect a mid-level developer to show more understanding and better reasoning. So far it looks like a junior dev who has read a lot of books and is good at copy-pasting from Stack Overflow.

(Based on my everyday experience with Sonnet and Cursor)

tcoff91

A mid-level web developer should do a whole lot more than just respond to chat messages and do exactly what they are told, and nothing more.

danielbln

When I use LLMs, that's what they do: spawn commands, edit files, run tests, evaluate outputs, and iterate on solutions under my guidance.

tibbar

LLMs make it very easy to cheat, both academically and professionally. What this looks like in the workplace is a junior engineer not understanding their task or how to do it, but stuffing everything into the LLM until lint passes. This breaks the trust model: there are many requirements that are a little hard to verify and that an LLM might miss, and the junior engineer can now represent to you that they "did what you asked" without really certifying the work output. I believe that this kind of professional cheating is just as widespread as academic cheating, which is an epidemic.

What we really need is people who can certify that a task was done correctly, who can use LLMs as an aid. LLMs simply cannot be responsible for complex requirements. There is no way to hold them accountable.

roenxi

This seems to be ignoring the major force driving AI right now: hardware improvements. We've barely seen a new hardware generation since ChatGPT was released to the market; we'd certainly expect progress to plateau fairly quickly on fixed hardware. My personal experience of AI models is going to be a series of step changes every time the VRAM on my graphics card doubles. Big companies will probably see something similar each time a new, more powerful product hits the data centre. The algorithms here aren't all that impressive compared to the creeping FLOPS/$ metric.

Bear cases always welcome. This wouldn't be the first time in computing history that progress just falls off the exponential curve suddenly, although I would bet money on there being a few years left on the curve and on AGI being achieved.
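As a rough illustration of why VRAM doublings map to step changes, here's a back-of-the-envelope sketch (illustrative numbers only, counting weights alone; activations and KV cache need more):

    // Weights-only memory needed for a model at a given quantization.
    fn weights_gib(params_billions: f64, bytes_per_param: f64) -> f64 {
        params_billions * 1e9 * bytes_per_param / (1024.0 * 1024.0 * 1024.0)
    }

    fn main() {
        // fp16 = 2 bytes/param, 8-bit = 1, 4-bit = 0.5.
        for &(name, params) in &[("7B", 7.0), ("13B", 13.0), ("70B", 70.0)] {
            println!(
                "{name}: fp16 ~ {:.0} GiB, 8-bit ~ {:.0} GiB, 4-bit ~ {:.0} GiB",
                weights_gib(params, 2.0),
                weights_gib(params, 1.0),
                weights_gib(params, 0.5),
            );
        }
    }

Each doubling of VRAM roughly admits the next model class (or the same class at higher precision), which is why it feels like a step change rather than a smooth improvement.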

notTooFarGone

Hardware improvements don't strike me as the horse to bet on.

LLM progression seems to be linear while the compute needed is exponential, and I don't see exponential hardware improvements coming, barring some new technology (which we should not bet on arriving anytime soon).

bloomingkales

Let's imagine that we all had a trillion dollars. Then we would all sit around and go, "Well dang, we have everything, what should we do?" I think you'll find that just about everyone would agree: "We oughta see how far that LLM thing can go." We could be in nuclear fallout shelters for decades, and I think you'd still see us trying to push the LLM thing underground, under duress. We dream of this, so the bear case is wrong in spirit. There's no bear case when the spirit of the thing is that strong.

mola

Wdym all of us? I certainly would find much better uses for the money.

What about reforming democracy? Use the corrupt system to buy the votes, then abolish all the laws allowing the kinds of donations that make buying votes possible.

I'll litigate the hell out of all the oligarchs now that they can't outpay justice.

This would pay off more than a moon shot. I would give a bit of money for the moon shot, why not, but not all of it.
