The Bitter Prediction
172 comments · April 12, 2025 · kassner
pards
> code output is rarely the reason why projects that I worked on are delayed
This is very true at large enterprises. The pre-coding tasks [0] and the post-coding tasks [1] account for the majority of elapsed time that it takes for a feature to go from inception to production.
The theory of constraints says that optimizations made to a step that's not the bottleneck will only make the actual bottleneck worse.
AI is no match for a well-established bureaucracy.
[0]: architecture reviews, requirements gathering, story-writing
[1]: infrastructure, multiple phases of testing, ops docs, sign-offs
xen2xen1
Interesting point. Does that mean AI will favor startups or startup-like places? New tools often seem to favor less established and smaller places.
mountainriver
Disagree. It's normally the integration and alignment of systems that takes a long time, e.g. you are forced to use X product, but they're missing a feature you need, so you have to wait on them.
szundi
[dead]
CM30
Yeah, something like 95% of project issues are management and planning issues, not programming or tech ones. So often projects start out without anyone on the team researching the original problem or what their users would actually need, and then the whole thing has to be hastily rejigged midway through development to fix that.
inerte
aka https://en.wikipedia.org/wiki/No_Silver_Bullet
And it's also interesting to think that PMs are also using AI - in my company, for example, we allow users to submit feedback, then an AI summary report is sent to PMs. They then put the report into ChatGPT along with the organizational goals, the key players, and previous meeting transcripts, and ask the AI to weave everything together into a PRD, or even a 10-slide presentation.
doug_durham
I agree with you that traditionally that is the bottleneck. Think about why poor specifications are a problem: it's because software is so costly and time-consuming to create. Many times the stakeholders don't know that something isn't right until they can actually use it. What if it takes 50% less time to create code? Code becomes less precious. Throwing away failed ideas isn't as big an issue. Of course, it is trivially easy to think of cases where this could also lead to never shipping your code.
d0liver
I feel this. As a dev, most of my time is spent thinking and asking questions.
api
For most software jobs, knowing what to build is harder than building it.
I’m working hard on building something right now that I’ve had several false starts on, mostly because it’s taken years for us to totally get our heads around what to build. Code output isn’t the problem.
hedgew
>Why bother playing when I knew there was an easier way to win? This is the exact same feeling I’m left with after a few days of using Claude Code. I don’t enjoy using the tool as much as I enjoy writing code.
My experience has been the opposite. I've enjoyed working on hobby projects more than ever, because so many of the boring and often blocking aspects of programming are sped up. You get to focus more on higher-level choices, overall design, and code quality, rather than searching for specific usages of libraries or applying other minutiae. Learning is accelerated, and the loop of making choices and seeing code generated for them is a bit addictive.
I'm mostly worried that it might not take long for me to be a hindrance in the loop more than anything. For now I still have better overall design sense than AI, but it's already much better than I am at producing code for many common tasks. If AI develops more overall insight and sense, and the ability to handle larger code bases, it's not hard to imagine a world where I no longer even look at or know what code is written.
siffin
Everyone has different objective and subjective experiences, and I suspect some form of selection will promote those who more often feel excited and relieved by using AI over those who more often experience it as a negative, as if it challenges some core aspect of self.
It might challenge us, and maybe those of us who feel challenged in that way need to rise to it, for there are always harder problems to solve.
If this new tool seems to make things so easy it's like "cheating", then make the game harder. Can't cheat reality.
palata
Without AI, I have been in a company where the general mentality was to "ship bad software but quickly". Without going into the debate of whether it was profitable in the long term or not (spoiler: it was not), my problem was the following:
I would try to build something "good" (not "perfect", just "good", like modular or future-proof or just not downright malpractice). But while I was doing this, others would build crap. They would do it so fast I couldn't keep up. So they would "solve" the problems much faster. Except that over the years, they just accumulated legacy and had to redo stuff over and over again (at some point you can't throw crap on top of crap, so you just rebuild from scratch and start with new crap, right?).
All that to say, I don't think that AIs will help with that. If anything, AIs will help more people behave like this and produce a lot of crap very quickly.
palata
The calculator made it less important to be relatively good with arithmetic. Many people just cannot add or subtract two numbers without one. And it feels like they lose intuition, somehow: if numbers don't "speak" to you at all, can you ever realize that 17 is roughly a third of 50? The only way you realise it with a calculator is if you actually look for it. Whereas if you can count, it just appears to you.
Similar with GPS and navigation. When you read a map, you learn how to localise yourself based on landmarks you see. You tend to get an understanding of where you are, where you want to go and how to go there. But if you follow the navigation system that tells you "turn right", "continue straight", "turn right", then again you lose intuition. I have seen people following their navigation system around two blocks to finally end up right next to where they started. The navigation system was inefficient, and with some intuition they could have said "oh actually it's right behind us, this navigation is bad".
Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase. Or that instead of writing a complex task in your codebase, you could contribute a patch to a dependency and it would make it much simpler (e.g. because the dependency already has this logic internally and you could just expose it instead of rewriting it). But it requires an understanding of those dependencies: do you have access to their code in the first place (either because they are open source or belong to your company)?
Those AIs obviously help with writing code. But do they help you get an understanding of the codebase to the point where you build intuition that can be leveraged to improve the project? Not sure.
Is it necessary, though? I don't think so: the tendency is that software becomes more and more profitable by becoming worse and worse. AI may just help writing more profitable worse code, but faster. If we can screw the consumers faster and get more money from them, that's a win, I guess.
nthingtohide
> Back to coding: if you have a deep understanding of your codebases and dependencies, you may end up finding that you could actually extract some part of one codebase into a library and reuse it in another codebase.
I understand the point you are making. But what makes you think refactoring won't be AI's forte? Maybe you could explicitly ask for it. Maybe you could ask it to minify while staying human-understandable, and that would achieve the refactoring objectives you have in mind.
palata
I don't get why you're being downvoted here.
I don't know that AI won't be able to do that, just like I don't know that AGI won't be a thing.
It just feels like it's harder to have the AI detect your dependencies, maybe browse the web for the sources (?) and offer to make a contribution upstream. Or would you envision downloading all the sources of all the dependencies (transitive included) and telling the AI where to find them? And to give it access to all the private repositories of your company?
And then, upstreaming something is a bit "strategic", I would say: you have to be able to say "I think it makes sense to have this logic in the dependency instead of in my project". Not sure if AIs can do that at all.
To me, it feels like it's at the same level of abstraction as something like "I will go with CMake because my coworkers are familiar with it", or "I will use C++ instead of Rust because the community in this field is bigger". Does an AI know that?
fragmede
With Google announcing that they'll let customers run Gemini in their own datacenters, the privacy issue goes away. I'd love it if there was an AI trained on my work's proprietary code.
fallingknife
Perhaps it will, but right now I find it much better at generating code from scratch than refactoring.
vertnerd
I'm a little older now, over 60. I'm writing a spaceflight simulator for fun and (possible) profit. From game assets to coding, it seems like AI could help. But every time I try it out, I just end up feeling drained by the process of guiding it to good outcomes. It's like I have an assistant to work for me, who gets to have all the fun, but needs constant hand holding and guidance. It isn't fun at all, and for me, coding and designing a system architecture is tremendously satisfying.
I also have a large collection of handwritten family letters going back over 100 years. I've scanned many of them, but I want to transcribe them to text. The job is daunting, so I ran them through some GPT apps for handwriting recognition. GPT did an astonishing job and at first blush, I thought the problem was solved. But on deeper inspection I found that while the transcriptions sounded reasonable and accurate, significant portions were hallucinated or missing. Ok, I said, I just have to review each transcription for accuracy. Well, reading two documents side by side while looking for errors is much more draining than just reading the original letter and typing it in. I'm a very fast typist and the process doesn't take long. Plus, I get to read every letter from beginning to end while I'm working. It's fun.
So after several years of periodically experimenting with the latest LLM tools, I still haven't found a use for them in my personal life and hobbies. I'm not sure what the future world of engineering and art will look like, but I suspect it will be very different.
My wife spins wool to make yarn, then knits it into clothing. She doesn't worry much about how the clothing is styled because it's the physical process of working intimately with her hands and the raw materials that she finds satisfying. She is staying close to the fundamental process of building clothing. Now that there are machines for manufacturing fibers, fabrics and garments, her skill isn't required, but our society has grown dependent on the machines and the infrastructure needed to keep them operating. We would be helpless and naked if those were lost.
Likewise, with LLM coding, developers will no longer develop the skills needed to design or "architect" complex information processing systems, just as no one bothers to learn assembly language anymore. But those are things that someone or something must still know about. Relegating that essential role to an LLM seems like a risky move for the future of our technological civilization.
palata
I can relate to that.
Personally, right now I find it difficult to imagine saying "I made this" if I got an AI to generate all the code of a project. If I go to a bookstore, ask for some kind of book ("I want it to be with a hard cover, and talk about X, and be written in language Y, ..."), I don't think that at the end I will feel like I "made the book". I merely chose it, someone else made it (actually it's multiple jobs, between whoever wrote it and whoever actually printed and distributed it).
Now if I can describe a program to an AI and it results in a functioning program, can I say that I made it?
Of course it's more efficient to use knitting machines, but if I actually knit a piece of clothing, then I can say I made it. And that's what I like: I like to make things.
6510
I accidentally questioned out loud whether the daughter had created the video. I assure you, you've made it! If you bring the proverbial PalataOS into existence with a 6-word prompt, we should blame and praise you for it.
thwarted
Editing and proofreading, of code and prose, are work in themselves - work that is often not appreciated enough to be recognized as such. I think this is the basis for the perspective that if you can get the LLM to do the coding/writing, all you need to do is proof the result, as if that's somehow easier because proofing isn't seen as the real work.
OgsyedIE
I think this particular anxiety was explored rather well in the anonymous short story 'The End of Creative Scarcity':
https://www.fictionpress.com/s/3353977/1/The-End-of-Creative...
Some existential objections occur; how sure are we that there isn't an infinite regress of ever deeper games to explore? Can we claim that every game has an enjoyment-nullifying hack yet to discover with no exceptions? If pampered pet animals don't appear to experience the boredom we anticipate is coming for us, is the expectation completely wrong?
nemo1618
Thank you for sharing this :)
bogrollben
This was great - thank you!
01HNNWZ0MV43FF
Loved it, thank you for sharing
zem
thanks, that was wonderful
xg15
As far as hobby projects are concerned, I'd agree: a bit more "thinking like your boss" could be helpful. You can now focus more on the things you want your project to be able to do instead of the specific details of its code structure. (In the end, nothing keeps you from still manually writing/editing parts of the code if you want some things specifically done in a certain way. There are also projects where the code structure legitimately is the feature, i.e. if you want to explore some new style of API or architecture design for its own sake.)
The one part that I believe will still be essential is understanding the code. It's one thing to use Claude as a (self-driving) car, where you delegate the actual driving but still understand the roads being taken. (Both for learning and for validating that the route is in fact correct)
It's another thing to treat it like a teleporter, where you tell it a destination and then are magically beamed to a location that sort of looks like that destination, with no way to understand how you got there or if this is really the right place.
davidanekstein
I think AI is posing a challenge to people like the person in TFA because programming is their hobby and one that they're good at. They aren't used to knowing someone or something can do it better, and knowing that now makes them wonder what the point is. I argue that amateur artists and musicians have dealt with this feeling of "someone can always do it better" for a very long time. You can have fun while knowing someone else can make it better than you, faster, without as much struggle. Programmers aren't as used to this feeling because, even though we know people like John Carmack exist, it doesn't fly in your face quite like a beautiful live performance or painted masterpiece does. Learning to enjoy your own process is what I think is key to continuing what you love. Or, use it as an opportunity to try something else - but you'll eventually discover the same thing no matter what you do. It's very rare to be the best at something.
palata
> can make it better than you, faster, without as much struggle
Still need to prove that AI-generated code is "better", though.
"More profitable", in a world where software generally becomes worse (for the consumers) and more profitable (for the companies), sure.
doug_durham
I don't see that as a likely outcome. I think it will make software better for consumers. There can be more bespoke interfaces, instead of making consumers cram into the solution space dictated by today's expensive-to-change software.
palata
That doesn't make sense: they could already spend more resources to make the software better, but they don't, because not doing so is more profitable.
If AI makes doing the same thing cheaper, why would they suddenly say "actually, instead of increasing our profit, we will invest it into better software"?
dbalatero
I'm both relatively experienced as a musician and software engineer so I kinda see both sides. If musicians want to get better, they have to go to the practice room and work. There's a satisfaction to doing this work and coming out the other side with that hard-won growth.
Prior to AI, this was also true with software engineering. Now, at least for the time being, programmers can increase productivity and output, which seems good on the surface. However, with AI, one trades the hard work and brain cells created by actively practicing and struggling with craft for this productivity gain. In the long run, is this worth it?
To me, this is the bummer.
mjburgess
All articles of this class, whether positive or negative, begin "I was working on a hobby project" or some variation thereof.
The purpose of hobbies is to be a hobby, archetypical tech projects are about self-mastery. You cannot improve your mastery with a "tool" that robs you of most of the minor and major creative and technical decisions of the task. Building IKEA furniture will not make you a better carpenter.
Why be a better carpenter? Because software engineering is not about hobby projects. It's about research and development at the fringes of a business's (or org's, or project's...) requirements -- to evolve their software towards solving them.
Carpentry ("programming craft") will always (modulo 100+ years) be essential here. Powertools do not reduce the essential craft, they increase the "time to craft being required" -- they mean we run into walls of required expertise faster.
AI as applied to non-hobby projects -- R&D programming in the large, where requirements aren't already specified as prior-art programs (of the functional and non-functional variety, etc.) -- just accelerates the time to hitting the wall where you're going to shoot yourself in the foot if you're not an expert.
I have not seen a single "sky is falling" take from an experienced software engineer, i.e., those operating at typical "in the large" programming scales, on typical R&D projects (revisions to legacy, or greenfield -- just the reqs are new).
mnky9800n
I think it also misses the way you can automate non-trivial tasks. For example, I am working on a project where there are tens of thousands of different datasets, each with their own metadata and structure, but the underlying data is mostly the same. Because the metadata and structure are all different, it's really impossible to combine all this data into one big dataset without a team of engineers going through each dataset and meticulously restructuring and conforming its metadata to a new monolithic schema. However, I don't have any money to hire that team of engineers. But I can massage LLMs to do that work for me. These are ideal tasks for AI-type algorithms to solve. It makes me quite excited for the future, as many tasks of this kind could be given to AI agents that would otherwise be impossible to do yourself.
MattJ100
I agree, but only for situations where the probabilistic nature is acceptable. It would be the same if you had a large team of humans doing the same work. Inevitably misclassifications would occur on an ongoing basis.
Compare this to the situation where you have a team develop schemas for your datasets which can be tested and verified, and fixed in the event of errors. You can't really "fix" an LLM or human agent in that way.
So I feel like computing traditionally excelled at many tasks that humans couldn't do - computers are crazy fast and, as a rule, don't make mistakes. LLMs remove this speed and accuracy, becoming something more like scalable humans (their "intelligence" is debatable, and possibly a moving target - I've yet to see an LLM that I would trust more than a very junior developer). LLMs (and ML generally) will always have higher error margins; it's how they can do what they do.
mnky9800n
Yes, but I see it as multiple steps. Perhaps the LLM solution has some probabilistic issues that only get you 80% of the way there, but that has probably already given you some ideas about how to better solve the problem. And in this case the problem is somewhat intractable because of the size and complexity of the way the data is stored. So in my example the first step is LLMs, but the second step is to use what they produce as the structure for building a deterministic pipeline. The problem isn't that there are ten thousand different metadata records, but that the structure of that metadata is diffuse. The LLM pass will first help identify the main points of what needs to be conformed to the monolithic schema. Then I will build more production-ready, deterministic pipelines. At least that is the plan. I'll write a Substack post about it eventually if this plan works, haha.
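To make that concrete, here is a minimal sketch of the first pass in Python. The schema fields and the call_llm helper are made-up placeholders rather than any particular provider's API; the point is just that the LLM does the fuzzy mapping, and a deterministic check catches the misclassifications afterwards.

```python
import json

# Hypothetical monolithic schema every dataset's metadata should conform to.
TARGET_FIELDS = ["title", "instrument", "start_time", "end_time", "units", "license"]

def call_llm(prompt: str) -> str:
    """Placeholder: wire this up to whatever LLM provider you actually use."""
    raise NotImplementedError

def conform_metadata(raw_metadata: dict) -> dict:
    """First pass: ask the LLM to map arbitrary metadata onto the monolithic schema."""
    prompt = (
        "Map the following dataset metadata onto this schema and return JSON only.\n"
        f"Schema fields: {TARGET_FIELDS}\n"
        "Use null for fields you cannot determine; do not invent values.\n\n"
        f"Metadata:\n{json.dumps(raw_metadata, indent=2)}"
    )
    mapped = json.loads(call_llm(prompt))

    # Second pass: deterministic validation, because the LLM output is probabilistic.
    missing = [field for field in TARGET_FIELDS if field not in mapped]
    if missing:
        # Anything that fails the check goes to human review instead of
        # silently entering the combined dataset.
        raise ValueError(f"LLM output missing fields: {missing}")
    return {field: mapped[field] for field in TARGET_FIELDS}
```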
xg15
I'm reminded of the game Factorio: Essentially the entire game loop is "Do a thing manually, then automate it, then do the higher-level thing the automation enables you to do manually, then automate that, etc etc"
So if you want to translate that, there is value in doing a processing step manually to learn how it works - but when you understood that, automation can actually benefit you, because only then are you even able to do larger, higher-level processing steps "manually", that would take an infeasible amount of time and energy otherwise.
Where I'd agree though is that you should never lose the basic understanding and transparency of the lower-level steps if you can avoid that in any way.
skerit
I've used Claude-Code & Roo-Code plenty of times with my hobby projects.
I understand what the article means, but sometimes I've got the broad scopes of a feature in my head, and I just want it to work. Sometimes programming isn't like "solving a puzzle", sometimes it's just a huge grind. And if I can let an LLM do it 10 times faster, I'm quite happy with that.
I've always had to fix up the code one way or another though. And most of the times, the code is quite bad (even from Claude Sonnet 3.7 or Gemini Pro 2.5), but it _did_ point me in the right direction.
About the cost: I'm only using Gemini Pro 2.5 Experimental the past few weeks. I get to retry things so many times for free, it's great. But if I had to actually pay for all the millions upon millions of used tokens, it would have cost me *a lot* of money, and I don't want to pay that. (Though I think token usage can be improved a lot, tools like Roo-Code seem very wasteful on that front)
fhd2
> I have not seen a single "sky is falling" take from an experienced software engineer,
Let me save everybody some time:
1. They're not saying it because they don't want to think of themselves as obsolete.
2. You're not using AI right, programmers who do will take your job.
3. What model/version/prompt did you use? Works For Me.
But seriously: It does not matter _that_ much what experienced engineers think. If the end result looks good enough for laymen and there's no short term negative outcomes, the most idiotic things can build up steam for a long time. There is usually an inevitable correction, but it can take decades. I personally accept that, the world is a bit mad sometimes, but we deal with it.
My personal opinion is pretty chill: I don't know if what I can do will still be needed n years from now. It might be that I need to change my approach, learn something new, or whatever. But I don't spend all that much time worrying about what was, or what will be. I have problems to solve right now, and I solve them with the best options available to me right now.
People spending their days solving problems probably generally don't have much time to create science fiction.
mjburgess
> You're not using AI right
I use AI heavily, it's my field.
fhd2
The part before "But seriously" was sarcasm. I find it very odd to assume that a professional developer (even if it's not what they would describe as their field) is using it wrong. But it's a pretty standard reply to measured comments about LLMs.
exfalso
I'm more and more confident I must be doing something wrong. I (re)tried using Claude about a month ago and simply stopped after about two weeks, because on one hand productivity did not increase (it perhaps even decreased), and on the other hand it made me angry because of the time wasted on its mistakes. I was also mostly using it on Rust code, so I'm even more surprised by the article. What am I doing wrong? I've been mostly using the chat functionality and auto-complete; is there some kind of secret feature I'm missing?
creata
I'd love to watch a video of someone using these tools well, because I am not getting much out of it. They save some time, sometimes, but they're nowhere near the 5x boost that some people claim.
qingcharles
I don't know what everyone is doing. Mine is like a 10X-100X force multiplier. I enjoy coding enormously more now that all the drudgery is removed.
And I might not be the best coder, by far, but I've got over 40 years' experience at this crap in practically every language going.
fragmede
We can quibble about the exact number; 1.2x vs 5x vs 10x, but there's clearly something there.
whiplash451
The thing is: the industry does not need people who are good at (or enjoy) programming, it needs people who are good at (and enjoy) generating value for customers through code.
So the OP was in a bad place without Claude anyways (in industry at least).
This realization is the true bitter one for many engineers.
blackbear_
Productivity at work is well correlated with enjoyment of work, so the industry better look for people who enjoy programming.
The realization that productive workers aren't just replaceable cogs in the machine is also a bitter lesson for businessmen.
xg15
I think the lifelong dream of many businesspeople is to create the perfect "cog in the machine" or ideally run a business without workers at all. (Tony Stark, Elon Musk's role model, is a good example of that. As far as the movies are concerned, he builds all his most important inventions himself, or with the help of AI, no workers involved)
Independent of what AI can do today, I suspect this was a reason why so many resources were poured into its development in the first place. Because this was the ultimate vision behind it.
eru
You say it like it's a bad thing.
constantcrying
>so the industry better look for people who enjoy programming
Why? Both AI and outsourcing provide a much cheaper way to get programming done. Why would you pay someone 100k because he likes doing what an AI or an Indian dev team can do for much less?
xg15
> generating value for customers through code.
Generating value for the shareholders and/or investors, not the customers. I suspect this is the next bitter lesson for developers.
whiplash451
Investors don't make money if the customers don't.
keybored
Yes, there you go. The users are just a propaganda proxy.
The bitter lesson is that making profit is the only directive.
disgruntledphd2
I find it odd that this was ever forgotten.
constantcrying
Writing software will never again be a skill worth 100k a year.
I am sure software developers are here to stay, but nobody who just writes software is worth anywhere close to 100k a year. Either AI or outsourcing is making sure of that.
jannesan
That’s a good point. I do think there still is some space to focus on just the coding as an engineer, but with AI the space is getting smaller.
xg15
A question that came up in discussions recently and that I found interesting: How will new APIs, libraries or tooling be introduced in the future?
The models all have their specific innate knowledge of the programming ecosystem from the point in time where their last training data was collected. However, unlike humans, they cannot update that knowledge unless a new finetuning is performed - and even then, they can only learn about new libraries that are already in widespread use.
So if everyone now shifts to vibe coding, will this mean that software ecosystems effectively become frozen? New libraries cannot gain popularity because AIs won't use them in code, and AIs won't start to use them because they aren't popular.
benoau
I guess the counter-question is does it matter if nobody is building tools optimized for humans, when humans aren't being paid to write software?
I saw a submission earlier today that really illustrated perfectly why AI is eating people who write code:
> You could spend a day debating your architecture: slices, layers, shapes, vegetables, or smalltalk. You could spend several days eliminating the biggest risks by building proofs-of-concept to eliminate unknowns. You could spend a week figuring out how you’ll store, search, and cache data and which third–party integrations you’ll need.
$5k/person/week to have an informed opinion on how to store your data! AI is going to look at the billion times we've already asked these questions and make an instant decision, and the really, really important part is that it doesn't really matter what we choose anyway, because there are dozens of right answers.
mckn1ght
There will still be people who care to go deeper and learn what an API is and how to design a good one. They will be able to build the services and clients faster and go deeper using AI code assistants.
And then, yes, you’ll have the legions of vibe coders living in Plato’s cave and churning out tinker toys.
fragmede
That's it then, isn't it? We are at the level where we're making tinker toys. What is the tinker toy industry like? Instead of an expensive startup or Google office, do I at least get a workshop in the back of the garden? How much does it pay?
mike_hearn
It's not an issue. Claude routinely uses internal APIs and frameworks on one of my projects that aren't public. The context windows are big enough now that it can learn from a mix of summarized docs and surrounding examples and get it nearly right, nearly all the time.
There is an interesting aspect to this whereby there's maybe more incentive to open source stuff now just to get usage examples in the training set. But if context windows keep expanding it may also just not matter.
The trick is to have good docs. If you don't then step one is to work with the model to write some. It can then write its own summaries based on what it found 'surprising' and those can be loaded into the context when needed.
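Roughly what that loop looks like, as a sketch: call_llm and the docs/summaries layout are placeholders, not any particular tool's API. The summaries the model wrote for itself get concatenated into the prompt whenever they're relevant.

```python
from pathlib import Path

def call_llm(prompt: str) -> str:
    """Placeholder for whatever model/SDK you actually use."""
    raise NotImplementedError

def load_doc_summaries(doc_dir: str) -> str:
    """Concatenate the summaries the model previously wrote about the internal APIs."""
    return "\n\n".join(p.read_text() for p in sorted(Path(doc_dir).glob("*.md")))

def ask_with_docs(task: str, doc_dir: str = "docs/summaries") -> str:
    """Prepend the doc summaries so the model can use APIs it was never trained on."""
    prompt = (
        "You are working in a codebase with internal APIs not in your training data.\n"
        "Summaries of the relevant docs follow:\n\n"
        f"{load_doc_summaries(doc_dir)}\n\n"
        f"Task: {task}\n"
        "Use only the APIs described above."
    )
    return call_llm(prompt)
```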
c7b
Not sure this is going to be a big issue in practice. Tools like ChatGPT regularly get new knowledge cutoffs, and those seem to work well in my experience. I haven't tested it with programming features specifically, but you could simply run a small experiment: take the tool of your choice and a programming feature that was introduced after the tool first launched, and see whether you can get it to use the feature correctly.
fragmede
> unless a new finetuning is performed
That's where we're at. The LLM needs to be told about the brand new API by feeding it new docs, which just uses up tokens in its context window.
zkmon
It's not true that coding would no longer be fun because of AI. Arithmetic did not stop being fun because of calculators. Travel did not stop being fun because of cars and planes. Life did not stop being fun because of lack of old challenges.
New challenges will come up. If calculators made arithmetic easy, math challenges moved to the next higher level. If AI does all the thinking and creativity, humans will move to the next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.
fire_lake
> For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.
Well this sounds delightful! Glad to be free of the thinking and creativity!
mckn1ght
When you’re churning out many times more code per unit time, you had better think good and hard about how to organize it.
Everyone wanted to be an architect. Well, here’s our chance!
wizzwizz4
I find legacy systems fun because you're looking at an artefact built over the years by people. I can get a lot of insight into how a system's design and requirements changed over time, by studying legacy code. All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model.
ThrowawayR2
> "All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model."
The fun part, though, is that future coding LLMs will eventually be poisoned by ingesting past LLM-generated slop code if unrestricted. The most valuable codebases for improving LLM quality in the future will be the ones written by humans with high-quality coding skills who are not reliant, or only minimally reliant, on LLMs, making the humans who write them more valuable.
Think about it: a new, even better programming language is created, like Sapphire on Skates or whatever. How does an LLM know how to output high-quality, idiomatically correct code for that hot new language? The answer is that _it doesn't_. Not until 1) somebody writes good code in that language for the LLM to absorb and 2) in a large enough quantity for patterns to emerge that the LLM can reliably identify as idiomatic.
It'll be pretty much like the end of Asimov's "Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power) or his almost exactly LLM relevant novella "Profession" ( https://en.wikipedia.org/wiki/Profession_(novella) ).
eMPee584
Thanks to git repositories stored away in arctic tunnels, our common legacy code heritage might outlast most other human artifacts (unless ASI chooses to erase it, of course).
mckn1ght
That’s fine if you find that fun, but legacy archeology is a means to an end, not an end itself.
wizzwizz4
Legacy archaeology in a 60MiB codebase is far easier than digging through email archives, requirements docs, and old PowerPoint files that Microsoft Office won't even open properly any more (though LibreOffice can, if you're lucky). Handwritten code actually expresses something about the requirements and design decisions, whereas AI slop buries that signal in so much noise and makes "archaeology" almost impossible.
When insight from a long-departed dev is needed right now to explain why these rules work in this precise order, but fail when the order is changed, do you have time to git bisect to get an approximate date, then start trawling through chat logs in the hopes you'll happen to find an explanation?
keybored
> New challenges will come up. If calculators made arithmetic easy, math challenges moved to the next higher level. If AI does all the thinking and creativity, humans will move to the next level. That level could be some menial work which AI can't touch. For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.
You’re gonna work on captcha puzzles and you’re gonna like it.
> I've never been more productive
Maybe it's because my approach is much closer to a Product Engineer than a Software Engineer, but code output is rarely the reason why projects that I worked on are delayed. All my productivity issues can be attributed to poor specifications, or to problems that someone just threw over the wall. Every time I'm blocked, it's because someone didn't make a decision on something, or no one thought far enough ahead to see that the decision was needed.
It irks me so much when I see the managers of adjacent teams pushing for AI coding tools when the only thing the developers know about the project is what was written in the current JIRA ticket.