A bear case: My predictions regarding AI progress
219 comments
· March 10, 2025 · csomar
_huayra_
Ultimately, every AI thing I've tried in this era seems to want to make me happy, even if it's wrong, instead of helping me.
I describe it like "an eager intern who can summarize a 20-min web search session instantly, but ultimately has insufficient insight to actually help you". (Note to current interns: I'm mostly describing myself some years ago; you may be fantastic so don't take it personally!)
Most of my interactions with it via text prompt or built-in code suggestions go like this:
1. Me: I want to do X in C++. Show me how to do it only using stdlib components (no external libraries).
2. LLM: Gladly! Here is solution X
3. Me: Remove the undefined behavior from foo() and fix the methods that call it
4. LLM: Sure! Here it is (produces solution X again)
5. Me: No you need to remove the use of uninitialized variables as the out parameters.
6. LLM: Oh certainly! Here is the correct solution (produces a completely different solution that also has issues)
7. Me: No go back to the first one
etc
For the ones that suggest code, it can at least suggest some very simple boilerplate very easily (e.g. gtest and gmock stuff for C++), but asking it to do anything more significant is a real gamble. Often I end up spending more time scrutinizing the suggested code than writing a version of it myself.
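To make step 5 concrete, here's a minimal sketch of the kind of bug I mean (a hypothetical illustration, not the actual code from my session):

    #include <iostream>
    #include <string>

    // An "out parameter" that the callee only assigns on one branch. If the
    // caller ignores the return value, it reads an indeterminate int, which
    // is undefined behavior.
    bool parsePort(const std::string& text, int& port /* out */) {
        if (!text.empty()) {
            port = std::stoi(text);  // only assigned on this branch
            return true;
        }
        return false;                // port left untouched
    }

    int main() {
        int port;                    // uninitialized
        parsePort("", port);         // return value ignored
        std::cout << port << "\n";   // reads an indeterminate value: UB
    }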
rchaud
The difference is that interns can learn, and can benefit from reference items like a prior report, whose format and structure they can follow when working on the revisions.
AI is just AI. You can upload a reference file for it to summarize, but it's not going to be able to look at the structure of the file and use that as a template for future reports. You'll still have to spoon-feed it constantly.
yifanl
Step 7 is the worst part about trying to review my coworker's code that I'm 99% confident is Copilot output - and to be clear, I don't really care how someone chooses to write their code, I'll still review it as evenly as I can.
I'll very rarely ask someone to completely rewrite a patch, but so often a few minor comments get addressed with an entire new block of code that forces me to do a full re-review, and I can't get it across to him that that's not what I'm asking for.
red-iron-pine
interns can generally also tell me "tbh i have no damn idea", while AI just talks out of its virtual ass, and I can't read from its voice or behavior that maybe it's not sure.
interns can also be clever and think outside the box. this is mostly not good, but sometimes they will surprise you in a good way. the AI by definition can only copy what someone else has done.
roncesvalles
The last line has been my experience as well. I only trust what I've verified firsthand now because the Internet is just so rife with people trying to influence your thoughts in a way that benefits them, over a good faith sharing of the truth.
I just recently heard this quote from a clip of Jeff Bezos: "When the data and the anecdotes disagree, the anecdotes are usually right.", and I was like... wow. That quote is the zeitgeist.
If it's so revolutionary, it should be immediately obvious to me. I knew Uber, Netflix, Spotify were revolutionary the first time I used them. With LLMs for coding, it's like I'm groping in the dark trying to find what others are seeing, and it's just not there.
roenxi
> I knew Uber, Netflix, Spotify were revolutionary the first time I used them.
Maybe re-tune your revolution sensor. None of those are revolutionary companies. Profitable and well executed, sure, but those turn up all the time.
Uber's entire business model was running over the legal system so quickly that taxi licenses didn't have time to catch up. Other than that it was a pretty obvious idea. It is a taxi service. The innovations they made were almost completely legal ones; figuring out how to skirt employment and taxi law.
Netflix was anticipated online by, and is probably inferior to, YouTube, except for the fact that they have a pretty traditional content creator lab tacked on the side to do their own programs. And torrenting had been a thing for a long time already, showing how to do online distribution of video content.
roncesvalles
They were revolutionary as product genres, not necessarily as individual companies. Ordering a cab without making a phone call was revolutionary. Netflix, at least with its initial promise of having all the world's movies and TV, was revolutionary, but it didn't live up to that. Spotify, because of how cheap and easy it was to have access to all the music; this was the era when people were paying 99c per song on iTunes.
I've tried some AI code completion tools and none of them hit me that way. My first reaction was "nobody is actually going to use this stuff" and that opinion hasn't really changed.
And if you think those 3 companies weren't revolutionary then AI code completion is even less than that.
jimbokun
> The innovations they made were almost completely legal ones; figuring out how to skirt employment and taxi law.
The impact of this was quite revolutionary.
> except for the fact that they have a pretty traditional content creator lab tacked on the side to do their own programs.
The way in which they did this was quite innovative, if not "revolutionary". They used the data they had from the watching habits of their large user base to decide what kinds of content to invest in creating.
Breza
I strongly disagree about Netflix. It came out when I was in high school without a car. Being able to get whatever DVD I wanted without having to bum a ride from my parents--and also never have to pay late fees--was a major game changer.
csomar
> None of those are revolutionary companies.
Not only were Uber/Grab (and delivery apps) revolutionary, they are still revolutionary. I could live without LLMs and my life would be only slightly impacted when coding. If delivery apps were not available, my life would be severely degraded. The other day I was sick. I got medicine and dinner with Grab, delivered to the condo lobby, which was as far as I could get. That is revolutionary.
mlsu
Revolutionary things are things that change how society actually works at a fundamental level. I can think of four technologies of the past 40 years that fit that bill:
the personal computer
the internet
the internet connected phone
social media
those technologies are revolutionary, because they caused fundamental changes to how people behave. People who behaved differently in the "old world" were forced to adapt to a "new world" with those technologies, whether they wanted to or not. Newer, more convenient ways of ordering a taxicab, watching a movie, or listening to music are great consumer product stories, and certainly big money makers. They don't cause complex and not fully understood changes to the way people work, play, interact, self-identify, etc. the way that revolutionary technologies do.
Language models feel like they have the potential to be a full blown sociotechnological phenomenon like the above four. They don't have a convenient consumer product story beyond ChatGPT today. But they are slowly seeping into the fabric of things, especially on social media, and changing the way people apply to jobs, draft emails, do homework, maybe eventually communicate and self-identify at a basic level.
I'd almost say that the lack of a smash bang consumer product story is even more evidence that the technology is diffusing all over the place.
fragmede
> it's just not there
Build the much-maligned Todo app with Aider and Claude for yourself. Give it one sentence and have it spit out working, if imperfect, code. Iterate. Add a graph for completion or something and watch it pick and find a library without you having to know the details of that library. Fine, sure, it's just a Todo app, and it'll never work for a "real" codebase, whatever that means, but holy shit, just how much programming did you need to get down and dirty with to build that "simple" Todo app? Obviously building a Todo app before LLMs was possible, but abstracted out, the fact that it can be generated like that isn't a game changer?
namaria
How is getting an LLM to spit out a clone of a very common starter project evidence of it being able to generate non-trivial and valuable code - as in, not a clone of overabundant codebases - on demand?
grumbel
While I don't disagree with that observation, it falls into the "well, duh!" category for me. The models are built with no mechanism for long-term memory and thus suck at tasks that require long-term memory. There is nothing surprising here. There was never any expectation that LLMs would magically develop long-term memory, as that's impossible given the architecture. They predict the next word, and once the old text moves out of the context window, it's gone. The models neither learn as they work nor can they remember the past.
It's not even like humans are all that different here. Strip a human of their tools (pen&paper, keyboard, monitor, etc.) and have them try solving problems with nothing but the power of their brain and they'll struggle a hell of a lot too, since our memory ain't exactly perfect either. We don't have perfect recall, we look things up when we need to, a large part of our "memory" is out there in the world around us, not in our head.
The open question is how to move forward. But calling AI progress a dead end before we have even started exploring long-term memory, tool use and on-the-fly learning is a tad premature. It's like calling it quits on the development of the car before you put the wheels on.
edanm
> If you have a revolutionary intelligence product, why is it not working for me?
Is programming itself revolutionary? Yes. Does it work for most people? I don't even know how to parse that question, most people aren't programmers and need to spend a lot of effort to be able to harness a tool like programming. Especially in the early days of software dev, when programming was much harder.
Your position of "I'll only trust things I see with my own eyes" is not a very good one, IMO. I mean, for sure the internet is full of hype and tricksters, but your comment yesterday was on a Tweet by Steve Yegge, a famous and influential software developer and software blogger, who some of us have been reading for twenty years and has taught us tons.
He's not a trickster, not a fraud, and if he says "this technology is actually useful for me, in practice" then I believe he has definitely found an actual use for the technology. Whether I can find a similar use for that technology is a question - it's not always immediate. He might be working in a different field, with different constraints, etc. But most likely, he's just doing something he's learned how to do and I haven't, meaning I want to learn it.
bdangubic
> If you have a revolutionary intelligence product, why is it not working for me?
This is kind of like if I had said, when the first dumbbell was invented, “why don’t I look like Arnold Schwarzenegger…”
kiratp
You’re not using the best tools.
Claude Code, Cline, Cursor… all of them with Claude 3.7.
csomar
Nope. I try the latest models as they come, and I have a self-made custom setup (as in, a custom Lua plugin) in Neovim. What I am not doing is selling AI or AI-driven solutions.
hattmall
Similar experience. I try so hard to make AI useful, and there are some decent spots here and there. Overall, though, I see the fundamental problem being that people need information. Language isn't strictly information, and the LLMs are very good at language, but they aren't great at information. I think anything more than the novelty of "talking" to the AI is very overhyped.
There is some usefulness to be had for sure, but I don't know if the usefulness is there with the non-subsidized models.
cheevly
Perhaps we could help if you shared some real examples of what walls you’re hitting. But it sounds like you’ve already made up your mind.
demosthanos
It's worth actually trying Cursor, because it is a valuable step change over previous products and you might find it's better in some ways than your custom setup. The process they use for creating the context seems to be really good. And their autocomplete is far better than Copilot's in ways that could provide inspiration.
That said, you're right that it's not as overwhelmingly revolutionary as the internet would lead you to believe. It's a step change over Copilot.
RamtinJ95
Do you mean that you have successfully managed to get the same experience in cursor but in neovim? I have been looking for something like that to move back to my neovim setup instead of using cursor. Any hints would be greatly appreciated!
kiratp
The entire wrapped package of tested prompts, context management etc. is a whole step change from what you can build yourself.
There is a reason Cursor is the fastest startup to $100M in revenue, ever.
kledru
GitHub Copilot is a bit outdated technology, to be fair...
stego-tech
> At some point there might be massive layoffs due to ostensibly competent AI labor coming onto the scene, perhaps because OpenAI will start heavily propagandizing that these mass layoffs must happen. It will be an overreaction/mistake. The companies that act on that will crash and burn, and will be outcompeted by companies that didn't do the stupid.
We're already seeing this with tech doing RIFs and not backfilling domestically for developer roles (the whole, "we're not hiring devs in 202X" schtick), though the not-so-quiet secret is that a lot of those roles just got sent overseas to save on labor costs. The word from my developer friends is that they are sick and tired of having to force a (often junior/outsourced) colleague to explain their PR or code, only to be told "it works" and for management to overrule their concerns; this is embedding AI slopcode into products, which I'm sure won't have any lasting consequences.
My bet is that software devs who've been keeping up with their skills will have another year or two of tough times, then back into a cushy Aeron chair with a sparkling new laptop to do what they do best: write readable, functional, maintainable code, albeit in more targeted ways since - and I hate to be that dinosaur - LLMs produce passable code, provided a competent human is there to smooth out its rougher edges and rewrite it to suit the codebase and style guidelines (if any).
dartharva
One could argue that's not strictly "AI labor", just cheap (but real) labor using shortcuts because they're not paid enough to give a damn.
stego-tech
Oh, no, you’re 100% right. One of these days I will pen my essay on the realities of outsourced labor.
Spoiler alert: they are giving just barely enough to not get prematurely fired, because they know if you’re cheap enough to outsource in the first place, you’ll give the contract to whoever is cheapest at renewal anyway.
fragmede
What lasting consequences? CrowdStrike and the 2017 Equifax hack that leaked all our data didn't stop them. The CrowdStrike shares I bought after it happened are up more than the S&P 500. Elon went through Twitter and fired everybody, but it hasn't collapsed. A carpenter has a lot of opinions about the woodworking used on cheap IKEA cabinets, but mass manufacturing and plastic mean that building a good, solid, high-quality chair is no longer the craft it used to be.
carlosdp
I'll take that bet, easily.
There's absolutely no way that we're not going to see a massive reduction in the need for "humans writing code" moving forward, given how good LLMs are getting at writing code.
That doesn't mean people won't need devs! I think there's a real case where increased capabilities from LLMs leads to bigger demand for people that know how to direct the tools effectively, of which most would probably be devs. But thinking we're going back to humans "writing readable, functional, maintainable code" in two years is cope.
crote
> There's absolutely no way that we're not going to see a massive reduction in the need for "humans writing code" moving forward, given how good LLMs are getting at writing code.
Sure, but in the same way that Squarespace and Wix killed web development. LLMs are going to replace a decent bunch of low-hanging fruit, but those jobs were always at risk of being outsourced to the lowest bidder over in India anyways.
The real question is, what's going to happen to the interns and the junior developers? If 10 juniors can create the same output as a single average developer equipped with a LLM, who's going to hire the juniors? And if nobody is hiring juniors, how are we supposed to get the next generation of seniors?
Similarly, what's going to happen to outsourcing? Will it be able to compete on quality and price? Will it secretly turn into nothing more than a proxy to some LLM?
TeMPOraL
> And if nobody is hiring juniors, how are we supposed to get the next generation of seniors?
Maybe stop tasking seniors with training juniors, and put them back on writing production code? That will give you one generation and vastly improve products across the board :).
The concern about entry-level jobs is valid, but I think it's good to remember that for years now, almost all coding has been done at entry level, because if you do it long enough to become moderately competent, you tend to get asked to stop doing it and train up a bunch of new hires instead.
rahimnathwani
> increased capabilities from LLMs leads to bigger demand for people that know how to direct the tools effectively
This is the key thing.
torginus
Hate to be the guy to bring it up but Jevons paradox - in my experience, people are much more eager to build software in the LLM age, and projects are getting started (and done!) that were considered 'too expensive to build' or people didn't have the necessary subject matter expertise to build them.
Just a simple CRUD-ish project needs frontend, backend, infra, cloud, and CI/CD experience, and people who could build that as one-man shows were like unicorns - a lot of people had a general idea of how most of this stuff worked, but lacked the hands-on familiarity with it. LLMs made that knowledge easy and accessible. They certainly did for me.
I've shipped more software in the past 1-2 years than in the 5 years before that, and gained tons of experience doing it. LLMs helped me figure out the necessary software and helped me gain a ton of experience. I gained all those skills, and I feel quite confident that I could rebuild all these apps, but this time without the help of these LLMs, so even the fearmongering that LLMs will 'make people forget how to code' doesn't seem to ring true.
namaria
I think the blind spot here is that, while LLMs may decrease the developer-time cost of software, they will increase the lifetime ownership cost. And since this is a time-delayed signal, it will cause a bullwhip effect. If hiring managers were mad at the 2020 market, 2030 will be a doozy. There will be increased liability in the form of over-engineered and hard-to-maintain code bases, and a dearth of talent able to undo the slopcode.
mark_l_watson
I have used neural networks for engineering problems since the 1980s. I say this as context for my opinion: I cringe at most applications of LLMs that attempt mostly autonomous behavior, but I love using LLMs as ‘side kicks’ as I work. If I have a bug in my code, I will add a few printout statements where I think my misunderstanding of my code is, show an LLM my code and output, explain the error: I very often get useful feedback.
I also like practical tools like NotebookLM, where I can pose some questions, upload PDFs, and get a summary based on my questions.
My point is: my brain and experience are often augmented in efficient ways by LLMs.
So far I have addressed practical aspects of LLMs. I am retired so I can spend time on non practical things: currently I am trying to learn how to effectively use code generated by gemini 2.0 flash at runtime; the gemini SDK supports this fairly well so I am just trying to understand what is possible (before this I spent two months experimenting with writing my own tools/functions in Common Lisp and Python.)
I “wasted” close to two decades of my professional life on old fashioned symbolic AI (but I was well paid for the work) but I am interested in probabilistic approaches, such as in a book I bought yesterday “Causal AI” that was just published.
Lastly, I think some of the recent open source implementations of new ideas from China are worth carefully studying.
hangonhn
I'll add this in case it's helpful to anyone else: LLMs are really good at regex and at undoing various encodings/escapings, especially nested ones. I would go so far as to say that they're better than a human at the latter.
I once spent over an hour trying to unescape JSON containing UTF8 values that had been escaped prior to being written to AWS's CloudWatch Logs for MySQL audit logs. It was a horrific level of pain until I just asked ChatGPT to do it, and it figured out the whole series of escapes and encodings immediately and gave me the steps to reverse them all.
LLM as a sidekick has saved me so much time. I don't really use it to generate code but for some odd tasks or API look up, it's a huge time saver.
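For a flavor of what that unescaping looks like, here's a minimal, hypothetical sketch (not the actual CloudWatch data or the exact steps ChatGPT produced): the audit payload ends up with backslash-escaped quotes, and you peel the escaping off one layer at a time.

    #include <iostream>
    #include <string>

    // Undoes one layer of backslash escaping (\" -> ", \\ -> \, \n -> newline).
    // Real audit logs may stack several such layers, plus \uXXXX decoding.
    std::string unescapeOnce(const std::string& s) {
        std::string out;
        for (std::size_t i = 0; i < s.size(); ++i) {
            if (s[i] == '\\' && i + 1 < s.size()) {
                char next = s[++i];
                if (next == 'n')      out += '\n';
                else if (next == 't') out += '\t';
                else                  out += next;  // covers \" and \\ among others
            } else {
                out += s[i];
            }
        }
        return out;
    }

    int main() {
        // Roughly what a doubly-escaped audit line looks like once logged.
        std::string logged = R"({\"user\": \"app\", \"query\": \"SELECT 1\"})";
        std::cout << unescapeOnce(logged) << "\n";  // {"user": "app", "query": "SELECT 1"}
    }

The real pain is that several layers (JSON escaping, \uXXXX sequences, sometimes more) get stacked, and working out which order to undo them in is exactly the tedium described above.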
arrakark
> LLMs are really good at regex
Maybe that's changed recently, but I have struggled to get all but the most basic regex working from GPT-4o-mini
cglace
The thing I can't wrap my head around is that I work on extremely complex AI agents every day and I know how far they are from actually replacing anyone. But then I step away from my work and I'm constantly bombarded with “agents will replace us”.
I wasted a few days trying to incorporate aider and other tools into my workflow. I had a simple screen I was working on for configuring an AI Agent. I gave screenshots of the expected output. Gave a detailed description of how it should work. Hours later I was trying to tweak the code it came up with. I scrapped everything and did it all myself in an hour.
I just don't know what to believe.
aaronbaugher
It kind of reminds me of the Y2K scare. Leading up to that, there were a lot of people in groups like comp.software.year-2000 who claimed to be doing Y2K fixes at places like the IRS and big corporations. They said they were just doing triage on the most critical systems, and that most things wouldn't get fixed, so there would be all sorts of failures. The "experts" who were closest to the situation, working on it in person, turned out to be completely wrong.
I try to keep that in mind when I hear people who work with LLMs, who usually have an emotional investment in AI and often a financial one, speak about them in glowing terms that just don't match up with my own small experiments.
TeMPOraL
I used to believe that until, over a decade later, I read stories from those "experts" who were closest to the situation, and it turns out Y2K was serious and it was a close call.
jmholla
I just want to pile on here. Y2K was avoided due to a Herculean effort across the world to update systems. It was not an imaginary problem. You'll see it again in the lead up to 2038 [0].
spaceman_2020
You’re biased because if you’re here, you’re likely an A-tier player used to working with other A-tier players.
But the vast majority of the world is not A players. They’re B and C players
I don’t think the people evaluating AI tools have ever worked in wholly mediocre organizations - or even know how many mediocre organizations exist
code_for_monkey
wish this didn't resonate with me so much. I'm far from a 10x developer, and I'm in an organization that feels like a giant, half-dead whale. Sometimes people here seem like they work on a different planet.
naasking
> But then I step away from my work and I'm constantly bombarded with “agents will replace us”.
An assembly language programmer might have said the same about C programming at one point. I think the point is that once you depend on a more abstract interface that permits you to ignore certain details, that interface permits decades of improvements to the backend without you having to do anything. People are still experimenting with what this abstract interface is and how it will work with AI, but they've already come leaps and bounds from where they were only a couple of years ago, and it's only going to get better.
hattmall
There are some fields, though, where they can replace humans in a significant capacity. Software development is probably one of the least likely for anything more than entry level, but A LOT of engineering has a very, very real existential threat. Think about designing buildings. You basically just need to know a lot of rules/tables and how things interact to know what's possible and the best practices. A purpose-built AI could develop many systems and back-test them to complete the design. A lot of this is already handled or aided by software, but a main role of the engineer is to interface with the non-technical persons or other engineers. This is something where an agent could truly interface with the non-engineer to figure out what they want, then develop it and interact with the design software quite autonomously.
I think there is a lot of focus on AI agents in software development, though, because that's just an early-adopter market - just like how it's always been possible to find a lot of information on web development on the web!
ForHackernews
Good freaking luck! The inconsistencies of the software world pale in comparison to trying to construct any real world building: http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
seanhunter
> "you basically just need to know a lot of rules..."
This comment commits one of the most common fallacies that I see really often in technical people, which is to assume that any subject you don't know anything about must be really simple. I have no idea where this comment comes from, but my father was a chemical engineer and his father was a mechanical engineer. A family friend is a structural engineer. I don't have a perspective about AI replacing people's jobs in general that is any more valuable than anyone else's, but I can say with a great deal of confidence that in those three engineering disciplines specifically, literally none of their jobs is about knowing a bunch of rules and best practices.
Don't make the mistake of thinking that just because you don't know what someone does, that their job is easy and/or unnecessary or you could pick it up quickly. It may or may not be true but assuming it to be the case is unlikely to take you anywhere good.
hattmall
It's not simple at all, that's a huge reduction to the underlying premise. The complexity is the reason that AI is a threat. That complexity revolves around a tremendous amount of data and how that data interacts. The very nature of the field makes it non-experimental but ripe for advanced automation based on machine learning. The science of engineering from a practical standpoint, where most demand for employees comes from, is very much algorithmic.
arkh
> just
In my experience this word means you don't know whatever you're speaking about. "Just" almost always hides a ton of unknown unknowns. After being burned enough times, nowadays when I'm about to use it I try to stop and start asking more questions.
fragmede
It's a trick of human psychology. Asking "why don't you just..." leads to one reaction, while asking "what are the roadblocks to completing..." leads to a different reaction but the same answer. But thinking "just" is good when you see it as a learning opportunity.
hattmall
I mean, perhaps, but in this case "just" isn't offering any cover. It is only part of the sentence for alliterative purposes, you could "just" remove it and the meaning remains.
drysine
>a main role of the engineer is to interface with the non-technical persons or other engineers
The main role of the engineer is being responsible for the building not collapsing.
tobr
I keep coming back to this point. Lots of jobs are fundamentally about taking responsibility. Even if AI were to replace most of the work involved, only a human can meaningfully take responsibility for the outcome.
hattmall
At a high level yes, but there are multiple levels of teams below that. There are many cases where senior engineers spend all their time reviewing plans from outsourced engineers.
randomNumber7
ChatGPT will probably take more responsibility than Boeing for their airplane software.
gerikson
Most engineering fields are de jure professional, which means they can and probably will enforce limitations on the use of GenAI or its successor tech before giving up that kind of job security. Same goes for the legal profession.
Software development does not have that kind of protection.
hattmall
Sure, and people thought taxi medallions were one of the strongest appreciating asset classes. I'm certain they will try, but market inefficiencies typically only last if they are the most profitable scenario. Private equity is already buying up professional and trade businesses at a record pace to exploit inefficiencies caused by licensing. Dentists, vets, Urgent Care, HVAC, plumbing, pest control, etc. Engineering firms are no exception. Can a licensed engineer stamp one million AI-generated plans a day? That's the person PE will find, and they'll run with that. My neighbor was a licensed HVAC contractor for 18 yrs with a 4-5 person crew. He got bought out and now has 200+ techs operating under his license. Buy some vans, make some shirts, throw up a billboard, advertise during the local news. They can hire anyone as an apprentice; 90% of the calls are change the filter, flip the breaker, check refrigerant, recommend a new unit.
red-iron-pine
for ~3 decades IT could pretend it didn't need unions because wages and opportunities were good. now the pendulum is swinging back -- maybe they do need those kinds of protections.
and professional orgs are more than just union-ish cartels, they exist to ensure standards, and enforce responsibility on their members. you do shitty unethical stuff as a lawyer and you get disbarred; doctors lose medical licenses, etc.
cheevly
I promise the amount of time, experiments and novel approaches you've tested is .0001% of what others have running in stealth projects. I've spent an average of 10 hours per day, constantly, since 2022 working on LLMs, and I know that even what I've built pales in comparison to other labs. (And I'm well beyond agents at this point.) Agentic AI is what's popular in the mainstream, but it's going to be trounced by at least 2 new paradigms this year.
cglace
So what is your prediction?
colonCapitalDee
Yeah, I'd buy it. I've been using Claude pretty intensively as a coding assistant for the last couple months, and the limitations are obvious. When the path of least resistance happens to be a good solution, Claude excels. When the best solution is off the beaten track, Claude struggles. When all the good solutions lay off the beaten track, Claude falls flat on its face.
Talking with Claude about design feels like talking with that one coworker who's familiar with every trendy library and framework. Claude knows the general sentiment around each library and has gone through the quickstart, but when you start asking detailed technical questions Claude just nods along. I wouldn't bet money on it, but my gut feeling is that LLMs aren't going to be a straight or even curved shot to AGI. We're going to see plenty more development in LLMs, but it'll just be that: better LLMs that remain LLMs. There will be areas where progress is fast and we'll be able to get very high intelligence in certain situations, but there will also be many areas where progress is slow, and the slow areas will cripple the ability of LLMs to reach AGI. I think there's something fundamentally missing, and finding what that "something" is is going to take us decades.
randomNumber7
Yes, but on the other hand I don't understand why people think that you can train something on pattern matching and it magically becomes intelligent.
throw4847285
This is the difference between the scientific approach and the engineering approach. Engineers just need results. If humans had to mathematically model gravity first, there would be no pyramids. Plus, look up how many psychiatric medications are demonstrated to be very effective, but the action mechanisms are poorly understood. The flip side is Newton doing alchemy or Tesla claiming to have built an earthquake machine.
Sometimes technology far predates science and other times you need a scientific revolution to develop new technology. In this case, I have serious doubts that we can develop "intelligent" machines without understanding the scientific and even philosophical underpinnings of human intelligence. But sometimes enough messing around yields results. I guess we'll see.
danielbln
We don't know what exactly makes us humans as intelligent as we are. And while I don't think that LLMs will be generally intelligent without some other advancements, I don't get the confident statements that "clearly pattern matching can't lead to intelligence" when we don't really know what leads to intelligence to begin with.
nyrikki
We can't even define what intelligence is.
We know, or have strong hints at, the limits of math/computation related to LLMs + CoT.
Note how PARITY and MEDIAN are hard here:
https://arxiv.org/abs/2502.02393
We also know HALT == open frame == symbol grounding == system identification problems.
AGI is also not well defined, but given the following:
> Strong AI, also called artificial general intelligence, refers to machines possessing generalized intelligence and capabilities on par with human cognition.
We know enough to say that, for any mechanical method on either current machines or even quantum machines, what is needed is impossible under the above definition.
Walter Pitts drank himself to death, in part because of the failure of the perceptron model.
Humans and machines are better at different things, and while ANNs are inspired by biology, they are very different.
There are some hints that the way biological neurons work is incompatible with math as we know it.
https://arxiv.org/abs/2311.00061
Computation and machine learning are incredibly powerful and useful, but they are fundamentally different, and that difference is both a benefit and a limit.
There are dozens of 'no effective procedure', 'no approximation', etc. results that demonstrate that ML as we know it today is incapable of meeting most definitions of AGI.
That is why particular C* types shift the goal post, because we know that the traditional definition of strong AI is equivalent to solving HALT.
https://philarchive.org/rec/DIEEOT-2
There is another path following PAC learning as compression, and NP being about finding parsimonious reductions (P being in NP).
Paradigma11
I am not so sure about that. Using Claude yesterday, it gave me a correct function that returned an array. But the algorithm it used did not return the items sorted in one pass, so it had to run a separate sort at the end. The fascinating thing is that it realized that, commented on it, and went on to return a single-pass function.
That seems like a pretty human thought process, and it shows that fundamental improvements might not depend as much on the quality of the LLM itself as on the cognitive structure it is embedded in.
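As a rough illustration of the two shapes it produced (the actual task isn't important, so this is a made-up stand-in, not what Claude wrote):

    #include <algorithm>
    #include <set>
    #include <vector>

    // First shape: collect the matches, then run a separate sort at the end.
    std::vector<int> collectThenSort(const std::vector<int>& input) {
        std::vector<int> out;
        for (int v : input) {
            if (v % 2 == 0) out.push_back(v);
        }
        std::sort(out.begin(), out.end());  // the extra pass it commented on
        return out;
    }

    // Revised shape: keep the items ordered as they are collected,
    // so no separate sort pass is needed afterwards.
    std::vector<int> collectSorted(const std::vector<int>& input) {
        std::set<int> ordered;
        for (int v : input) {
            if (v % 2 == 0) ordered.insert(v);
        }
        return {ordered.begin(), ordered.end()};
    }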
jemmyw
I've been writing code that implements tournament algorithms for games. You'd think an LLM would excel at this because it can explain the algorithms to me. I've been using cline on lots of other tasks to varying success. But it just totally failed with this one: it kept writing edge cases instead of a generic implementation. It couldn't write coherent enough tests across a whole tournament.
So I wrote tests thinking it could implement the code from the tests, and it couldn't do that either. At one point it went so far with the edge cases that it just imported the test runner into the code so it could check the test name to output the expected result. It's like working with a VW engineer.
Edit: I ended up writing the code and it wasn't that hard, I don't know why it struggled with this one task so badly. I wasted far more time trying to make the LLM work than just doing it myself.
usaar333
Author also made a highly upvoted and controversial comment about o3 in the same vein that's worth reading: https://www.lesswrong.com/posts/Ao4enANjWNsYiSFqc/o3?comment...
Of course LessWrong, being heavily AI doomers, may be slightly biased against near-term AGI just from motivated reasoning.
Gotta love this part of the post no one has yet addressed:
> At some unknown point – probably in 2030s, possibly tomorrow (but likely not tomorrow) – someone will figure out a different approach to AI. Maybe a slight tweak to the LLM architecture, maybe a completely novel neurosymbolic approach. Maybe it will happen in a major AGI lab, maybe in some new startup. By default, everyone will die in <1 year after that
gwern
I never thought I'd see the day that LessWrong would be accused of being biased against near-term AGI forecasts (and for none of the 5 replies to question this description either). But here we are. Indeed do many things come to pass.
TeMPOraL
Yup. I was surprised to see this article on LW in the first place - it goes against what you'd expect there. But to see HN comments dissing an LW article for being biased against near-term AGI forecasts? That made me wonder if I'm not dreaming.
(Sadly, I'm not.)
achierius
> Of course LessWrong, being heavily AI doomers, may be slightly biased against near-term AGI just from motivated reasoning.
LessWrong was predicting AI doom within decades back when people thought it wouldn't happen in our lifetimes; even as recently as 2018~2020, people there were talking about 2030-2040 while the rest of the world laughed at the very idea. I struggle to accept an argument that they're somehow under-estimating the likelihood of doom given all the historical evidence to the contrary.
demaga
I would expect similar doom predictions in the era of nuclear weapon invention, but we've survived so far. Why do people assume AGI will be orders of magnitude more dangerous than what we already have?
amoss
Nuclear weapons are not self-improving or self-replicating.
colonial
Self-improvement (in the "hard takeoff" sense) is hardly a given, and hostile self-replication is nothing special in the software realm (see: worms.)
Any technically competent human knows the foolproof strategy for malware removal - pull the plug, scour the platter clean, and restore from backup. What makes an out-of-control pile of matrix math any different from WannaCry?
AI doom scenarios seem scary, but most are premised on the idea that we can create an uncontainable, undefeatable "god in a box." I reject such premises. The whole idea is silly - Skynet Claude or whatever is not going to last very long once I start taking an axe to the nearest power pole.
kragen
Because they've thought about the question to a deeper extent than just a strained simile.
usaar333
More ability to kill everyone. That's harder to do with nukes.
That said, the actual forecast odds on Metaculus are pretty similar for nuclear and AI catastrophes: https://possibleworldstree.com/
kragen
Prediction markets should not be expected to provide useful results for existential risks, because there is no incentive for human players to bet on human extinction; if they happen to be right, they won't be able to collect their winnings, because they'll personally be too dead.
randomNumber7
Most people are just ignorant and dumb; don't listen to it.
HelloMcFly
Was that comment intended seriously? I thought it was a wry joke.
usaar333
I think so. Thane is aligned with the high p doom folks.
1 year may be slightly exaggerated, but it aligns with his view
spaceman_2020
The impression I get from using all cutting edge AI tools:
1. Sonnet 3.7 is a mid-level web developer at least
2. DeepResearch is about as good an analyst as an MBA from a school ranked 50+ nationally. Not lower than that. EY, not McKinsey
3. Grok 3/GPT-4.5 are good enough as $0.05/word article writers
It's not replacing the A-players, but it's good enough to replace B players and definitely better than C and D players.
id00
I'd expect a mid-level developer to show more understanding and better reasoning. So far it looks like a junior dev who has read a lot of books and is good at copy-pasting from Stack Overflow.
(Based on my everyday experience with Sonnet and Cursor)
tcoff91
A midlevel web developer should do a whole lot more than just respond to chat messages and do exactly what they are told to do and no more.
danielbln
When I use LLMs, that's what they do: spawn commands, edit files, run tests, evaluate outputs, and iterate on solutions under my guidance.
weweersdfsd
The key here is "under your guidance". LLMs are a major productivity boost for many kinds of jobs, but can LLM-based agents be trusted to act fully autonomously for tasks with real-world consequences? I think the answer is still no, and will be for a long time. I wouldn't trust an LLM to even order my groceries without review, let alone push code into production.
To reach anything close to the definition of AGI, LLM agents should be able to independently talk to customers, iteratively develop requirements, produce and test solutions, and push them to production once customers are happy. After that, they should be able to fix any issues arising in production. All this without babysitting/review/guidance from human devs, reliably.
a-dub
> LLMs are not good in some domains and bad in others. Rather, they are incredibly good at some specific tasks and bad at other tasks. Even if both tasks are in the same domain, even if tasks A and B are very similar, even if any human that can do A will be able to do B.
i think this is true of ai/ml systems in general. we tend to anthropomorphise their capability curves to match the cumulative nature of human capabilities, where often times the capability curve of the machine is discontinuous and has surprising gaps.
orangebread
I think the author provides an interesting perspective on the AI hype; however, I think he is really downplaying the effectiveness of what you can do with the current models we have.
If you've been using LLMs effectively to build agents or AI-driven workflows you understand the true power of what these models can do. So in some ways the author is being a little selective with his confirmation bias.
I promise you that if you do your due diligence in exploring the horizon of what LLMs can do, you will understand what I'm saying. If y'all want a more detailed post I can get into the AI systems I have been building. Don't sleep on AI.
aetherson
I don't think he is downplaying the effectiveness of what you can do with the current models. Rather, he's in a milieu (LessWrong), which is laser-focused on "transformative" AI, AGI, and ASI.
Current AI is clearly economically valuable, but if we freeze everything at the capabilities it has today it is also clearly not going to result in mass transformation of the economy from "basically being about humans working" to "humans are irrelevant to the economy." Lots of LW people believe that in the next 2-5 years humans will become irrelevant to the economy. He's arguing against that belief.
mbil
I agree with you. I recently wrote up my perspective here: https://news.ycombinator.com/item?id=43308912
andsoitis
This poetic statement by the author sums it up for me:
”People are extending LLMs a hand, hoping to pull them up to our level. But there's nothing reaching back.”
blitzar
When you (attempt to) save a person from drowning, there is a ridiculously high chance of them drowning you.
nakedneuron
Haha.
Shame on you for making me laugh. That was very inappropriate.
swazzy
I see no reason to believe the extraordinary progress we've seen recently will stop or even slow down. Personally, I've benefited so much from AI that it feels almost alien to hear people downplaying it. Given the excitement in the field and the sheer number of talented individuals actively pushing it forward, I'm quite optimistic that progress will continue, if not accelerate.
Workaccount2
If LLMs are bumpers on a bowling lane, HN is a forum of pro bowlers.
Bumpers are not gonna make you a pro bowler. You aren't going to be hitting tons of strikes. Most pro bowlers won't notice any help from bumpers, except in some edge cases.
If you are an average joe however, and you need to knock over pins with some level of consistency, then those bumpers are a total revolution.
esafak
That is not a good analogy. They are closer to assistants to me. If you know how and what to delegate, you can increase your productivity.
danielbln
I hear you. I feel constantly bewildered by comments like "LLMs haven't changed really since GPT-3.5." I mean, really? It went from an exciting novelty to a core pillar of my daily work; it's allowed me and my entire (granted, quite senior) org to be incredibly more productive and creative with our solutions.
And then I stumble across a comment where some LLM hallucinated a library, which means clearly AI is useless.
dartharva
>At some point there might be massive layoffs due to ostensibly competent AI labor coming onto the scene, perhaps because OpenAI will start heavily propagandizing that these mass layoffs must happen. It will be an overreaction/mistake. The companies that act on that will crash and burn, and will be outcompeted by companies that didn't do the stupid.
(IMO) Apart from programmer assistance (which is already happening), AI agents will find the most use in secretarial, ghostwriting and customer support roles, which generally have a large labor surplus and won't immediately "crash and burn" companies even if there are failures. Perhaps if it's a new startup or a small, unstable business on shaky grounds this could become a "last straw" kind of a factor, but for traditional corporations with good leeway I don't think just a few mistakes about AI deployment can do too much harm. The potential benefits, on the other hand, far outmatch the risk taken.
hattmall
I see engineering - not software, but the other technical areas - as facing the biggest threat. High-paid, knowledge-based fields, but not reliant on interpersonal communication. Secretarial and customer support less so; they aren't terribly high paid, and anything that relies on interacting with people is going to meet a lot of pushback. US-based call centers are already a big selling point for a lot of companies, chat bots have been around for years in customer support, people hate them, and there's a long way to go to change that perception.
csomar
> LLMs still seem as terrible at this as they'd been in the GPT-3.5 age. Software agents break down once the codebase becomes complex enough, game-playing agents get stuck in loops out of which they break out only by accident, etc.
This has been my observation. I got into GitHub Copilot as early as it launched, back when GPT-3 was the model. By that time (late 2021) Copilot could already write tests for my Rust functions, and simple documentation. This was revolutionary. We haven't had another similar moment since then.
The GitHub Copilot vim plugin is always on. As you keep typing, it keeps suggesting the rest of the context in faded text. Because it is always on, I can kind of read into the AI's "mind". The more I coded, the more I realized it's just search with structured results. The results got better with 3.5/4, but after that only slightly, and sometimes not even that (i.e., 4o or o1).
I don't care what anyone says; just yesterday I made a comment that truth has essentially died: https://news.ycombinator.com/item?id=43308513
If you have a revolutionary intelligence product, why is it not working for me?