The copilot delusion
213 comments
May 23, 2025 · padolsey
throwaway314155
I don't mind your rebuttal of the article, but to suggest that this particular article is AI-generated is foolish. The style is vivid, full of powerful imagery and metaphor, and, at times, genuinely funny. More qualitatively, the author sustains a unique identity throughout the entirety of a long-form essay.
All of that is still difficult to get an LLM to do. This isn't AI generated. It's just good writing. Whether you buy the premise or not.
chrismorgan
Yeah, it feels a very different style of unhinged to LLMs. I can’t yet imagine an LLM producing such a beautiful and contextually-appropriate sentence as “They’ll be out there trying to duct-tape horses to an engine block, wondering why it doesn’t fly.”
nick3443
This brought tears to my eyes:
But you—at your most frazzled, sleep-deprived, raccoon-eyed best—you can try. You can squint at the layers of abstraction and see through them. Peel back the nice ergonomic type-safe, pure, lazy, immutable syntactic sugar and imagine the mess of assembly the compiler pukes up.
Amazing
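And it's not just poetry; you really can do it. A trivial sketch of my own (not from the article), just to show the move: write the nice one-liner, then ask the compiler to show you what it pukes up:

    #include <numeric>
    #include <vector>

    // The ergonomic, type-safe sugar...
    long sum(const std::vector<long>& v) {
        return std::accumulate(v.begin(), v.end(), 0L);
    }

    // ...then peel it back: `g++ -O2 -S sum.cpp` dumps the generated
    // assembly into sum.s, loops and pointer arithmetic and all (or
    // paste it into Compiler Explorer at godbolt.org and watch live).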
a0123
> The style the author presents is vivid, uses powerful imagery and metaphor and finally, at times, is genuinely funny. More qualitatively, the author incorporates a unique identity that persists throughout the entirety of a long form essay.
It's incredible you would say that, because you'll never guess what it reads like.
petemir
I'm glad I'm not the only one to have thought that...
queenkjuul
I got slight LLM vibes from the first few paragraphs, ngl. It became clear very fast that it wasn't, though.
yerushalayim
This rebuttal of the rebuttal does feel eerily AI. Perhaps an injection of cynicism?
boxed
Maybe he meant it was long. Some people seem to think that long walls of text are how you spot AI slop.
nozzlegear
And god forbid you use an emdash these days.
queenkjuul
Fundamentally, mission-critical low-level code isn't the kind of software i want to write anyway. I don't find AI tools super useful, for most of the same reasons as the author, but i do kind of get tired of the idea that if you're not writing systems in C you're not really programming.
I like writing front end code. I'm probably never going to have a job where i need or would even want to write a low level graphics library from scratch. Fine, I'm not red-eyed 3am hacker brained, but I'm passionate and good at what i do. I don't think a world where every person working in software has the author's mentality is realistic or desirable.
sgarland
> I like writing front end code. I'm probably never going to have a job where i need or would even want to write a low level graphics library from scratch. Fine, I'm not red-eyed 3am hacker brained, but I'm passionate and good at what i do.
Keep that spirit! All I want from coworkers is genuine interest and curiosity. Not everyone is going to find investigating Linux’s networking stack interesting, just as not everyone is going to find making beautiful pure-CSS animations interesting. I think one of the greatest mistakes the tech industry made was creating “full stack,” as though one person would have interest and skill in frontend, backend, and infra. Bring back specialists; we’re all better for it.
_rutinerad
I think it’s a fine label. I have skill and interest in all of those areas. Some things require specialists; most jobs out there don’t.
Fuhrmanator
I sometimes wonder, even before LLMs, how much human-written code actually survives and matters, like the DNA that has made it through evolution. It's got to be a super low percentage.
The trend with Autocomplete Industrialization (AI) is just speeding up the creation of shanty towns of code, as opposed to architecturally robust foundations. The survivability of code is dropping because of the Copilotz. But perhaps this rapidity of creating crude solutions will increase the chances of something truly significant emerging?
p.p.s. There are too many places where "it's" should be "its" in that blog post for it to have been AI-generated. That's a different kind of irony IMO, especially since that's exactly the tricky English syntax AI is supposed to be good at. Maybe the author used AI to come up with the snarky metaphors. I asked ChatGPT for a sarcastic meaning for AI that starts with Autocomplete :)
agos
from TFA:
> Maybe you’ll never write the code that keeps a plane in the sky. Maybe you’ll never push bits that hold a human life in the balance. Fine. Most don’t. But even if you're just slapping together another CRUD app for some bloated enterprise, you still owe your users respect. You owe them dignity.
Ferret7446
> you still owe your users respect. You owe them dignity.
This is moral grandstanding. You owe your customers a good product at a low cost. If you don't use a tool that can lower costs, you are wronging your users and will go out of business.
Handcrafted CRUD will go the same way as handcrafted anything; an expensive niche hobby.
WarOnPrivacy
> This is moral grandstanding.
I disagree. For the kind of relationships I want with other humans, respect and dignity are my end of the bargain.
jonaustin
> you still owe your users respect. You owe them dignity.
what does that even mean?
Users don't care if code is written by a human or AI; they care that the code gives them what they need, hopefully in a fairly pleasant manner.
anon7000
Yeah, and the article talks about those ways in which AI is useful. Overall, the author doesn’t have a problem with experts using AI to help them. The main argument is that we’re calling AI a copilot, and many newbies may be trusting it or leaning on it too much, when in reality, it’s still a shitty coworker half the time. Real copilots are actually your peers and experts at what they do.
> Now? We’re building a world where that curiosity gets lobotomized at the door. Some poor bastard—born to be great—is going to get told to "review this AI-generated patchset" for eight hours a day, until all that wonder calcifies into apathy. The terminal will become a spreadsheet. The debugger a coffin.
On the other hand, one could argue that AI is just another abstraction. After all, some folks complain that over-reliance on garbage collectors means newbies never learn how to properly manage memory. And while memory management is useful knowledge for most programmers, it rarely comes up in practice in many modern professional tasks. That said, at least knowing about it gives you a deeper level of understanding and mastery of programming. Over time, all those small, rare details add up, and you may become an expert.
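To make the garbage-collector example concrete, here's a toy C++ sketch of my own (nothing from the article):

    #include <memory>

    void leaky() {
        int* buf = new int[1024];
        // Manual management: forget the matching `delete[] buf;` and
        // you leak. This is exactly the class of bug a garbage
        // collector quietly abstracts away from newcomers.
    }

    void safe() {
        auto buf = std::make_unique<int[]>(1024);
        // Freed automatically at scope exit. You can use the
        // abstraction without understanding it, but knowing *why* it
        // works is the deeper mastery being described here.
    }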
I think AI is in a different class because it’s an extremely leaky abstraction.
We use many abstractions every day. A web developer really doesn’t need to know how deeper levels of the stack work — the abstractions are very strong. Sure, you’ll want to know about networking and how browsers work to operate at a very high level, but you can absolutely write very nice, scalable websites and products with more limited knowledge. The key thing is that you know what you’re building on, and you know where to go learn about things if you need to. (Kind of like how a web developer should know the fundamental basics of HTML/CSS/JS before really using a web framework. And that doesn’t take much effort.)
AI is different — you can potentially get away with not knowing the fundamental basics of programming… to a point. You can get away with not knowing where to look for answers and how to learn. After all, AIs would be fucking great at completing basic programming assignments at the college level.
But at some point, the abstraction gets very leaky. Your code will break in unexpected ways. And the core worry for many is that fewer and fewer new developers will be learning the debugging, thinking, and self-learning skills which are honestly CRITICAL to becoming an expert in this field.
You get skills like that by doing things yourself and banging your head against the wall and trying again until it works, and by being exposed to a wide variety of projects and challenges. Honestly, that’s just how learning works — repetition and practice!
But if we’re abstracting away the very act of learning, it is fair to wonder how much that will hurt the long-term skills of many developers.
Of course, I’m not saying AI causes everyone to become clueless. There are still smart, driven people who will pick up core skills along the way. But it seems pretty plausible that the % of people who do that will decrease. You don’t get those skills unless you’re challenged, and with AI, those beginner level “learn how to program” challenges become trivial. Which means people will have to challenge themselves.
And ultimately, the abstraction is just leaky. To a novice, AI might look like it solves your problems for you, but once you see through the mirage, you realize that you cannot abstract away your core programming & debugging skills. You actually have to rely on those skills to fix the issues AI creates for you — so you better be learning them along the way!!
Btw, I say this as someone who does use AI coding assistants. I don’t think it’s all bad or all good. But we can’t just wave away the downsides just because it’s useful
seanmcdirmid
> Btw, I say this as someone who does use AI coding assistants. I don’t think it’s all bad or all good. But we can’t just wave away the downsides just because it’s useful
Isn't this just a rehash of the arguments against interactive terminals in the 60s/70s (no longer any need to think carefully about what you punch into your cards!), debuggers (no more time spent reading code carefully to find bugs!), Intellisense/code completion from the late 90s (no need to remember APIs!), or Stack Overflow from the 00s (no need to dig for answers to questions others have had before!)? I feel like we've been here before and moved on from it (hardly anyone complains about these anymore, and no one is suggesting we go back to programming by rewiring the computer), so I wonder if this time will be any different. Kids will just learn new ways of doing things on top of the new abstractions, just like they've done for the last 70 years of programming history.
sgarland
Interactive terminals didn’t write code for you, and also unlocked entirely new paradigms of programs. Debuggers, if anything, enabled deeper understanding. Intellisense is in fact a plague and should not exist. Stack Overflow, when abused, is nearly as bad as AI.
jgraettinger1
> On the other hand, one could argue that AI is just another abstraction
I, as a user of a library abstraction, get a well-defined boundary and interface contract — plus assurance it’s been put through its paces by others. I can be pretty confident it will honor that contract, freeing me up to not have to know the details myself or second-guess the author.
ivape
It’s funny: I made a few clips I found funny on Google Whisk and figured, hey, why not, let’s make a TikTok. Did you know that TikTok is full of millions of AI-generated clips, or of people just copying each other’s stuff? I really thought there was something to this “original creation” stuff.
We are all so simply reproducible. No one’s making anything special, anywhere, for the most part. If we all uploaded a TikTok video of daily coding, it would be the same fucking app over and over, just like the rest of TikTok.
Elon may have been right all along; there’s literally nothing left to do but goto Mars. Some of us were telling many of you two years ago that the LLMs don’t hallucinate as much as you think, and I think the late-to-the-party crowd needs to hear us again: we humans are not really necessary anymore.
!RemindMe in 2 years
whattheheckheck
There's literally genocide and war going on, solve that
queenkjuul
Don't encourage them. They'll just build an AI to do the genocide faster.
ivape
The genocide is televised; it appears no one cares.
felubra
> The machine is real. The silicon is real. The DRAM, the L1, the false sharing, the branch predictor flipping a coin—it’s all real. And if you care, you can work with it.
This is one of the most beautiful pieces of writing I’ve come across in a while.
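And it's not just rhetoric; false sharing is something you can actually poke at. A rough sketch of my own (assuming the typical 64-byte cache line, not anything from the article):

    #include <atomic>
    #include <thread>

    // Two counters hammered by two threads. Packed together they share
    // a cache line, so every increment invalidates the other core's
    // cached copy: false sharing.
    struct Packed {
        std::atomic<long> a{0};
        std::atomic<long> b{0};
    };

    // alignas(64) gives each counter its own line; the identical loop
    // typically runs severalfold faster.
    struct Padded {
        alignas(64) std::atomic<long> a{0};
        alignas(64) std::atomic<long> b{0};
    };

    template <typename T>
    void hammer() {
        T t;
        std::thread t1([&] { for (int i = 0; i < 10'000'000; ++i) t.a++; });
        std::thread t2([&] { for (int i = 0; i < 10'000'000; ++i) t.b++; });
        t1.join();
        t2.join();
    }

    int main() {
        hammer<Packed>();  // slow: both counters on one cache line
        hammer<Padded>();  // faster: one line each
    }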
bwfan123
Same. The author writes like Dave Barry; I burst out laughing more than once. He was able to articulate, with a lot of humor, exactly what I think of Copilot.
lunarcave
The thing that's most often missed in these discussions is that "writing code" is just the end artefact. That view doesn't take into account the endless tradeoffs made in producing said artefact - the journey to get there.
Just try implementing a feature with a junior in a mildly complex codebase, and you'll catch all the unconscious tradeoffs you make as an experienced developer. AI has some concept of what these tradeoffs are, but mostly by observation.
AI _does_ help with writing code. Keyword there being - "help".
But thinking is the human's job. LLMs can't/don't "think". Figuring out how to get the AI to produce the output you want is also your job. You'd just think less and less as the models get better.
bulhi
Exactly. I think the key point from the article is: "When you outsource the thinking, you outsource the learning."
jboggan
This is the crux of the piece to me:
"We'll enshrine this current bloated, sluggish, over-abstracted hellscape as the pinnacle of software—and the idea of squeezing every last drop of performance out of a system, or building something lean and wild and precise, will sound like folklore."
This somewhat lines up with my concern that the libraries and patterns of pre-2023 get frozen in stone once we pass the event horizon where most new code to train on is generated by LLMs. We aren't innovating; we are going to forever reinforce the screwed-up dependency stack and terrible kludges of the last 30 years of development. JavaScript is going to live forever.
CGamesPlay
This resonates with me, for sure; both the benefits and the drawbacks of copilot. But while I think kids and hackers were artisans, engineers were always just engineers. The amazing technical challenges behind some of the foundational technologies we have today were solved because they had to be solved. Looking at only those and saying "that's how things used to be" is survivorship bias.
gleenn
I feel privileged to be able to say, as a software engineer who's been doing it the hard way for 20+ years, that I relish the hard problems. The CRUD-app updates would be unbearable without the random in-between challenges that bend my mind: the rare recursive algorithm, the application of some esoteric knowledge I actually learned in college, actually having to do big-O estimates. These are the gems of my career that keep me sane. I hope the next flock of AI-driven SWEs appreciates these things even more, given that AI can spout off answers which are sometimes right and sometimes horribly wrong. Challenges like these will always need someone who actually knows what to do when the AIs start hallucinating or the context of the situation exceeds the context window.
hollowsunsets
If someone commits to doing it the hard way, I wonder how much they risk being left behind. And will anyone out there really appreciate what they're trying to do, or their commitment to their integrity? It seems like a dying art. I hope that's not the case.
malfist
I feel this in my bones. Every day I'm getting challenged by leadership that we're not using AI enough, told that I should halve my estimates because "we'll use AI", and being told that there's a new AI tool that I have to adopt because someone is tracking KPIs related to adoption and if our team doesn't adopt enough AI tools we're going to be fired to give more headcount to those that do.
It's like the world has lost its goddamn mind.
AI is always being touted as the tool to replace the other guy's job. But in reality it only appears to do a good job because you don't understand the other guy's job.
Management has an AI shaped hammer and they're hitting everything to see if it's a nail.
bluefirebrand
> Management has an AI shaped hammer and they're hitting everything to see if it's a nail.
I really think we need to figure out how to cut back on management so we can get back to the business of actually doing work
ghaff
Coordinating teams, talking to stakeholders/customers (including spending a lot of time with them), having someone manage individual contributors at some level, etc. is work that can't just be ignored at a company of any size. The only way to avoid (a lot of) it is to be very small and that has its own set of issues.
bluefirebrand
Sure, but do we really need four layers of people to do all of that?
It's really common to see just layers and layers of management at companies that get big enough
lyu07282
I mean don't bite the boot that you can lick amirite
toomuchtodo
Unions. What other way is there?
https://www.epi.org/blog/americans-favor-labor-unions-over-b...
cjbgkagh
Well how hard would it be to replace management with AI? Perhaps a developer could use AI to recreate the other tasks of the company without all of the overhead of actual people.
queenkjuul
Yeah i can't wait to discuss product with a sycophantic chatbot instead of the people who actually have a stake in the product.
Management can and usually does suck but i can reason with a person, for now. And sadly only the product people actually know what they want, usually right when you've built it the way they used to want it lol.
yencabulator
Clearly the answer is to replace them with AI.
sanderjd
Yeah, I mean, this is just the current phase of the hype cycle. It'll settle down. Some of the tools and techniques will have staying power, most won't. If you can figure out which is which and influence others, you'll be in good shape.
nyarlathotep_
> I feel this in my bones. Every day I'm getting challenged by leadership that we're not using AI enough, told that I should halve my estimates because "we'll use AI", and being told that there's a new AI tool that I have to adopt because someone is tracking KPIs related to adoption and if our team doesn't adopt enough AI tools we're going to be fired to give more headcount to those that do.
This--all of this--seems exactly antithetical to computing/development/design/"engineering"/architecture/whatever-the-hell people call this profession as I understood it.
Typically, I labored under the delusion that competent technical decision makers would integrate tooling or choose to use a language, "service", platform, whatever, if they saw benefits and could make a "case" for why it was the correct approach, i.e. how it met some product's needs, addressed some shortcomings, or made things more efficient.
Like "here's my design doc, I chose $THING for caching for $REASON and $DATASTORE as it offers blah blah"
"Please provide feedback and questions"
This is totally alien to that approach.
Ideally, "hey we're going to use CoPilot/other LLM thingy, let us know if it aids your workflow, give us some report in a month and we'll go from there to determine if we want to keep paying for it"
lamename
> AI is always being touted as the tool to replace the other guy's job. But in reality it only appears to do a good job because you don't understand the other guy's job.
This is a well-considered point that not enough of us admit. Yes, many jobs are rote or repetitive, but many more jobs, of all flavors, done well have subtleties that will be lost when they are automated. And no, I do not think "80% done by AI" is good enough, because errors propagate through a system (even if that system is a company or society), AND the people evaluating that "good enough" are not necessarily going to be experienced in the same domain.
ivape
But management is the one to go soon. The other shoe is going to drop, dear brother; this I promise you. Stay strong.
abletonlive
Well when you have a hammer big enough everything is indeed a nail.
Have you considered that, instead of resisting, you should do more to figure out why you're not getting the results that a lot of us are talking about? If nothing has changed in your productivity in the past 2 years, the problem is most likely you. Don't you think it's your responsibility as an engineer to figure out what you're doing wrong when there are a lot of people telling you it's a life-changing tool? Or did you just assume that everybody was lying and you were doing everything correctly?
Sorry to say it. It's an unpopular opinion but I think it's pretty much a harsh truth.
dpistole
> why you're not getting the results that a lot of us are talking about?
IMO the problem occurs when "the results" are hyped-up LinkedIn posts not based in reality. AI is a boon, but it hasn't lived up to the "IDEs are a thing of the past, you're all prompt engineers now" expectations we hear from executives.
MeetingsBrowser
Kind of brutal, but if LLMs drastically improved your productivity I think it speaks more to your baseline productivity than the power of LLMs.
abletonlive
What's more likely:
A) All of this money being funneled into tech to build out trillions of dollars' worth of infrastructure, a month-over-month growing user base buying subscriptions for these LLM services, every company buying seats for LLMs because of the value they provide - these people are wrong
B) Yappers on Hacker News who claim they derive no productivity boost from LLMs, while showing absolutely nothing of their workflow or method, when the interface is basically a chat box with no guardrails - these people are wrong
Sorry, I'm going to bet it's B and you just suck at it.
sanderjd
People keep saying this kind of thing, but sorry, it's nonsense.
Many of my colleagues that I most admire are benefiting greatly and increasingly from LLM tooling.
oh_my_goodness
"Well when you have a hammer big enough everything is indeed a nail."
I think this pretty much speaks for itself.
malfist
Where did you see that I didn't use AI and that _nothing_ has changed for me?
aucisson_masque
I think the difference between A.I.'s fake intelligence and us humans can be summed up by a single quote from Oscar Wilde:
"I have spent most of the day putting in a comma and the rest of the day taking it out."
No A.I. would ever think for more than a millisecond about a comma; it's pure data retrieval for it: "In what percentage of texts is there a comma after this word, and in what percentage isn't there? OK, done."
Glyptodon
My point of comparison of choice is overseas contractors, not pair programming.
Copilot or Cursor or whatnot is basically a better experience because you do not have to get on Zoom calls (after Slack has failed) to ask why some chunk of your system that cares about root nodes has mysteriously gained a function called isChild (not hasChildren) that returns a boolean based on whether the node has children, and not whether it has a parent (sketched below). Or to figure out why a bunch of API parameters that used to accept arrays now don't. Or why an ask to not show a constant literal in a menu resulted in algorithmic derivation of ordinals rather than using i18n.
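To spell out that first one in code (hypothetical names, reconstructing the gist):

    #include <vector>

    struct Node {
        Node* parent = nullptr;
        std::vector<Node*> children;
    };

    // What the root-node logic needed: "is this node a child?",
    // i.e. does it have a parent.
    bool isChild(const Node& n) { return n.parent != nullptr; }

    // What actually shipped under that name: the answer to a different
    // question entirely. That one is hasChildren.
    bool isChildAsDelivered(const Node& n) { return !n.children.empty(); }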
With AI you probably don't have those kinds of things happen, but if you do, you can instantly tell it, sorry, that's wrong, this is why, and have it changed in a minute. Whereas with contractors, you waste a lot of time on things like communication and understanding gaps and language barriers that are mostly gone with AI.
The second you can interact really easily w/ AI from Jira Tickets, most engineers are going to turn into ticket writers and overseers for 80% of their work. (And yes, you'll still need engineers, because Product can't actually write decent engineering tickets, though telling the AI to write engineering tickets will probably get close, and because somebody with a clue needs to be in the loop, though many organizations will try to forget this and have things they don't understand go terribly wrong.)
OnionBlender
The author is clearly a C++ programmer. I've been noticing that these AI tools are worse at C++ than other languages, especially scripting languages. Whenever I try to learn from people that are using these tools successfully, they always seem to be using a scripting language and working on some CRUD app.
sgarland
They seem to be a game dev judging by their other posts. I imagine there’s a lot less content online about that for LLMs to scrape than yet another CRUD app.
heddycrow
I couldn't help but read parts of this in Bertram Gilfoyle's voice.
Someone tell me I'm not alone.
Aziell
I used to work with someone like this. At first, he really wanted to do things properly. Over time, he gave up. Not because he was lazy, but because he felt like effort didn’t really matter.
Copilot’s fine for boilerplate. But lean on it too much, and you stop thinking. Stop thinking long enough, and you stop growing. That’s the real cost.
airstrike
I think all arguments for and against AI assistants for coding should include a preface describing the programming language, the domain of the app, the model being used, and the chosen interface for interacting with the assistant.
Otherwise everyone's just talking past each other.
joshstrange
That’s probably asking for too much but I agree.
Here are some terms/aspects of LLMs that people _regularly_ use, yet 10 people will have 10 different definitions of what they mean (to them):
- Vibe Coding
- Boilerplate
- Copilot
- Cursor/Aider/Claude Code/Codex/OpenHands/etc
- LLM Autocomplete and/or inline code suggestion
- LLM Agent
I’m happy to explain or expand on any of those if it’s not clear what I mean.
> if you want to sculpt the kind of software that gets embedded in pacemakers and missile guidance systems and M1 tanks—you better throw that bot out the airlock and learn.
But the bulk of us aren't doing that... We're making CRUD apps for an endless incoming stream of near-identical user needs, just with slightly different integrations, schemas, and lipstick.
Let's be honest. For most software there is nothing new under the sun. It's been seen before thousands of times, and so why not recall and use those old nuggets? For me coding agents are just code-reuse on steroids.
P.S. Ironically, the article feels AI-generated.