AI is turning us into glue
59 comments
April 17, 2025
mooreds
Reminds me of these stories (the Asimov one I've posted before):
- "Profession", by Isaac Asimov: http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...
- "Pump Six" by Paolo Bacigalupi (the title story of that collection)
sarreph
On your second point - I don’t agree that humans in general will plateau. I think instead the _gap_ between humans who crave to create and learn, and those who are ostensibly potatoes, will be magnified.
I see it a bit like the creator economy, where you have these maker vs consumer tranches of people.
abdullahkhalids
Humans are fundamentally creators of tools and art and imaginary worlds - this is one of the factors that distinguishes us from animals. For most of human history, most humans spent a lot of their time creating. The very recent phenomenon of a large fraction of humans creating almost nothing in their adult life is caused by modern economic systems, almost all of which force people to work so much every day that all their creative energies are sapped.
Go to a 4 day 24 hour work week and you will find almost everyone creating again.
hnhn34
Throughout all stages of my schooling from kindergarten to college, there were always artists, tinkerers and builders, but they were never the majority.
I can't think of any time in history, under any economic system, where humans who create tools and art were the majority. Most people just want to have families and enjoy life.
Havoc
> eventually the inputs to the LLM become stale.
Seems plausible to me that they could just keep writing Python 3.13 till the end of time.
Take assembly, for example: we didn't stop writing it because it stopped working.
As a functional building block programming seems feature complete
neom
"As a functional building block programming seems feature complete"
This might be one of the more fascinating things I've read in a long time. Care to expand on it? Would be genuinely curious.
Havoc
Afraid there is no deep revelation lurking there.
All I meant is that programming seems reasonably protected against going LLM-stale by virtue of being low level and malleable.
abletonlive
This feels like a "if i say it enough, people will agree and it will be true" kind of comment. Almost none of these propositions check out or even make sense. I literally can't distinguish between reddit commenters and HN commenters. An unoriginal HN complaint but frustrating to witness over time.
1. Plateau != Regress. Why point to regressions as evidence of a plateau? Why only look at a single model and minor version? We are clearly still in AI's infancy; regressions are to be expected from time to time.
2. Where's the evidence of this? Humans are using AI to branch out and dip their toes into things that they wouldn't have fathomed doing before. How would that lead you to "disincentivized"?
> Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways
So in this fantasy everybody is vibe coding resilient code/systems that last 10+ years and everybody stops learning how to code, and after a decade or so, they start breaking and everybody is in trouble? This world you're creating wouldn't stand up to the critique of sci-fi readers.
I'm sorry but if we can vibe code systems that last 10+ years and nobody is learning anything because they are performing so well, then that's a job well done by OpenAI and co. We're probably set as a civilization.
eacapeisfutuile
That’s an uninformed perspective. We can’t be ”set as a civilization” by locking in whatever our current progress is. ML models don’t inherently progress anything. So yes, if we stop doing that, then in 10 years people will not just have stopped learning how to code, but possibly stopped actually thinking for themselves, which in turn would mean progressing neither our ML models nor our civilization.
rglover
Like I said, in the short-term this will sound false, but in the long-term I expect it to be frighteningly accurate.
> I literally can't distinguish between reddit commenters and HN commenters.
No need to condescend. I have a fair amount of experience building with and using these tools daily. I'm not just some "reddit idiot."
> So in this fantasy everybody is vibe coding code that lasts for 10+ years and everybody stops learning how to code
I'm extrapolating. Look at what happened in the wake of the industrial revolution. Most people don't know how to fix or create anything today, and instead, rely on fast-and-cheap products or services made by or offered by other people. Hence the panic over China and tariffs. The AI-ification of everything is just another, modern version of a similar thing.
I could absolutely be wrong (and hope I am). But when you track human laziness over time, it leads to deterioration and incompetence. I view this as a "gradually, then all of a sudden" type of problem, one that will be incredibly difficult to dig ourselves out of later.
abletonlive
> I'm extrapolating. Look at what happened in the wake of the industrial revolution. Most people don't know how to fix or create anything today
Most people didn't know how to fix or create anything back then either. Except now we have more productivity than ever, more people working than ever, more output than ever.
People are fixing and creating out the ass in this society. We might not all be factory workers but people are making a ton of things in general. There is more music being made than ever, more movies being made than ever, more small businesses etc.
There is more information about how to fix things disseminated to the general population now than ever. It's just that what we build now is often so incredibly complex that fixing it is non-trivial or impractical. That's not a regression of society or our abilities or interests. There are videos on tiktok about fixing electric toothbrushes that have over 100k views and over a thousand comments. https://www.tiktok.com/@thetruestreviews/video/7458130570321...
None of what you say checks out and starting off with what is basically "it doesn't make any sense now but i predict in 10 years it will make sense" is a lazy way to defend your point.
> But when you track human laziness over time, it leads to deterioration and incompetence.
Again, none of this tracks with reality. Who is tracking laziness? In your world the general population is lazy and incompetent, yet we are generally producing more and still advancing STEM.
eacapeisfutuile
You are not wrong, and that is a rational and plausible forecast.
7speter
The thing about being an “AI mess fixer” will be that you’ll still need experience that fuels the creativity to solve problems generated by the AI.
rglover
Yup. The people who fit this role well will be the types that do this work for fun anyways, purely out of enjoyment or curiosity. I don't expect those types to completely disappear, they'll just be incredibly rare (i.e., the Pareto Principle or some bastardization of it).
myhf
Why do articles like this always say things like "I've used LLMs to get some stuff done faster" and then go on to describe how LLMs get them to spend more time and money to do a worse job? You don't need LLMs to frustrate you into lowering your standards, the power to do that was within you all along.
arctek
Has anyone actually measured this yet?
Much of this feels like the studies on people who take mushrooms, for example: they feel like they are more productive, but when you actually measure it, they aren't. It's just their perception.
To me the biggest issue is that search has been gutted out and so for many questions the best results come from asking an LLM. But this is far different from using it to generate entire codebases.
eacapeisfutuile
No, there have been a couple of attempts, but no one would call those outcomes conclusive.
Animats
The "glue" comment here reflects a view from someone who does mostly software work. That's been the situation since mechanized production lines were first built. The job of the humans is not direct labor. It's to monitor the machinery, restart it, and fix it.
Power looms were probably the first devices like this. Somebody has to thread the loom, but then it mostly runs by itself.[1] Production lines with lots of stations will have shutdowns, where a drill bit broke or there's dirt on a lens or some consumable ran out. Exceptions are hard to automate, and factory design focuses on minimizing exceptions and bypassing stuck cells.
It's helpful to understand how a factory works when watching how software development is changing. There's commonality.
So the phrase "vibe coding" is only two months old.[2] How widespread will it be in two years?
spacebanana7
AI is unlikely to take away jobs from software engineers. There’s no natural upper bound on the amount of software people can consume - unlike cars, food or houses.
Software engineers ultimately are people with “will to build”. Just as hedge fund people have a “will to trade”. The code or tooling is just a means to an end.
spencerflem
Huh, I have the opposite feeling - that people already have most of the software they want at this point.
spacebanana7
Think about your browsing history over the last year. You’ve probably consumed an obscene amount of React components, maybe millions.
Your car has way more code than a decade ago and so does your TV.
These things might make you miserable but it’s still “demand” in the economic sense of the term. It keeps developers employed.
hnhn34
>Think about your browsing history over the last year. You’ve probably consumed an obscene amount of React components, maybe millions.
Yeah, most of which are rehashes of the same thing, and most of them on the same ~10 websites.
eacapeisfutuile
640k should be enough for anybody
turtlebits
"I like fixing thorny bugs". Not me. Any tool that can get me to the solution faster is always welcome. IME, AI does well handling the boring parts.
minimaxir
It depends on the thorny bug. I like fixing bugs where the solution is to implement something clever and I learn something in the process. I don't like fixing bugs where I forgot a comma or made a subtle off-by-one error.
Most thorny bugs fall into the latter in my experience.
felipeccastro
I’ve been having a different experience. Asking Claude to fix the bug again and again is annoying, so I’m still working on “pull pieces at a time, understanding each” so I do fix the bug myself when it’s faster to do so. In fact, the majority of the time I’ve been using the LLM to build tiny libraries for me to avoid the need for the LLM in the running app. Kind of like StackOverflow on steroids. I don’t feel like the glue, just like I have superior tooling to get the info I need fast.
eximius
I'm still pretty pessimistic on all this. Just today, I had what should have been an obvious win for an LLM coding assistant to help me. I was writing a go function that converts one very long struct into a second very long struct. The transformation was almost entirely wrapping the fields of the first struct in a wrapper in a completely rote way. If FieldA was an int on src, I wanted a dest{ FieldA: Wrapper{ Value: src.FieldA, Ratio: src.FieldA/Constant }, ... }.
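For reference, a minimal sketch of the rote field-wrapping being described (hypothetical Src, Dest, and Wrapper types; the real structs had hundreds of fields, all following the same pattern):

```go
package main

import "fmt"

// Constant is a placeholder for the divisor in the comment's example.
const Constant = 2.0

// Wrapper pairs a raw value with its ratio to a fixed constant.
type Wrapper struct {
	Value float64
	Ratio float64
}

// Src and Dest stand in for the "very long" structs; the real ones
// repeat this same field pattern hundreds of times.
type Src struct {
	FieldA float64
	FieldB float64
}

type Dest struct {
	FieldA Wrapper
	FieldB Wrapper
}

// wrap applies the same mechanical transformation to a single field.
func wrap(v float64) Wrapper {
	return Wrapper{Value: v, Ratio: v / Constant}
}

// convert is the kind of function the LLM was asked to fill in:
// pure, repetitive, one line per field.
func convert(src Src) Dest {
	return Dest{
		FieldA: wrap(src.FieldA),
		FieldB: wrap(src.FieldB),
		// ...repeated for every remaining field.
	}
}

func main() {
	d := convert(Src{FieldA: 10, FieldB: 4})
	fmt.Println(d.FieldA.Value, d.FieldA.Ratio) // 10 5
}
```

The task is trivial for a human with editor macros; the point of the anecdote is that it is exactly the kind of mechanical repetition one would expect an LLM to excel at.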
It couldn't do it. I prefilled all the fields (hundreds) and told it just to populate them, but it tried to hallucinate new fields, or it would fill in one or two, then delete the fields I had added and leave a comment saying 'then do the rest'. I tried a bunch of different prompts.
I can see how some vibe coders could make useful things, but most of my attempts to use LLMs in anything not-from-scratch are exercises in frustration.
dinfinity
Which one?
Can we please make it a convention that whenever anybody posts anything about some LLM experience they had, that they include which model and UI driving it they used?
Parent's post is like saying: I tried to send an email with a new email program and it didn't work.
eximius
VS Code + Github Copilot (Editor Inline Chat) + GPT-4o
m4rtink
There is a story about this by Stanislaw Lem: "Elsewhere Tichy meets a race of aliens (called "Indioci" in the Polish original, "Phools" in the English translation) who, desiring perfect harmony in their lives, entrust themselves to a machine, which converts them into shiny discs to be arranged in pleasant patterns across their planet." - https://en.m.wikipedia.org/wiki/Ijon_Tichy#Stories
(Not glue, but close enough.)
cadamsdotcom
Nothing stopping anyone fixing thorny bugs for fun! And hobby computing is more accessible now than ever.
If you build stuff for others AI (mostly) removes typing and debugging from the equation; that frees you to think harder about what you’re building and how to make it most useful. And because you’re generally done sooner you can get the thing into your users’ hands sooner, increasing the iterations.
It’s win-win.
Havoc
> I don't see a future where a lot of jobs don't cease to exist.
And the complete lack of a game plan at a societal level is starting to get worrying.
If we’re going to UBI this then we’re going to need a bit more of a plan than some toy studies
thomastraum
These well-articulated articles will soon turn into pure despair. It happened to me.
PorterBHall
But I thought it was going to turn us into paper clips.
rglover
Really enjoyed this post.
> Putting aside existential risks, I don't see a future where a lot of jobs don't cease to exist.
I'm personally betting on the plateau effect with LLMs. There are two plateaus I see coming that will require humans to fix no matter what we do:
1. The LLMs themselves plateau. We're already seeing new models get worse, not better, at writing code (e.g., Sonnet 3.5 seems to be better than 3.7 at coding). This could be a temporary fluke or an inherent reality of how LLMs work (which is where I tend to land).
2. Humans will plateau. First, humans themselves will see their skills atrophy as they defer more and more to AI than struggling to solve problems (and by extension, learn new things). Second, humans will be disincentivized to create new forms of programming and write about them, so eventually the inputs to the LLM become stale.
Short-term, this won't appear to be true, but long-term (on the author's 10+ year scale), it will be frightening. Doubly so when systems that were primarily or entirely "vibe coded" start to break in ways that the few remaining humans responsible for maintaining them don't understand (and can't prompt their way out of).
And that's where I think the future work will be: in fixing or replacing systems unintentionally being broken by the use of AI. So, you'll either be an "AI mess fixer" or more entrepreneurial doing "artisan, hand-crafted software."
Either of those I expect to be fairly lucrative.