Firing programmers for AI is a mistake
885 comments
· February 11, 2025 · dham
csmpltn
I think that LLMs are only going to make people with real tech/programming skills much more in demand, as younger programmers skip straight to prompt engineering and never develop themselves technically beyond the bare minimum needed to glue things together.
The gap between people with deep, hands-on experience who understand how a computer works and prompt engineers will become insanely wide.
Somebody needs to write the operating system the LLM runs on. Or your bank's backend system that securely stores your money. Or the mission-critical systems powering the airplane you're flying next week... pretending this will all be handled by LLMs is insanely out of touch with reality.
hombre_fatal
I think those of us already in tech have this gleeful fantasy that new tools impair newcomers in a way that will somehow serve us, the incumbents.
But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates. And then they backfill their knowledge over time.
My first serious foray into software wasn't even Ruby. It was Ruby on Rails. I built some popular services without knowing how anything worked. There was always a gem (lib) for it. And Rails especially insulated the workings of anything.
An S3 avatar upload system was `gem install carrierwave` and then `mount_uploader :avatar, AvatarUploader`. It added an avatar <input type="file"> control to the User form.
But it's not satisfying to stay at that level of ignorance very long, especially once you've built a few things, and you keep learning new things. And you keep wanting to build different things.
Why wouldn't this be the case for people using LLMs, like it was for everyone else?
It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that. You get better, you learn more, and you become the question-answerer. And one day you sheepishly look at your question history in amazement at how far you've come.
lolinder
> Why wouldn't this be the case for people using LLMs, like it was for everyone else?
I feel like it's a bit different this time because LLMs aren't just an abstraction.
To make an analogy: Ruby on Rails serves a similar role as highways—it's a quick path to get where you're going, but once you learn the major highways in a metro area you can very easily break out and explore and learn the surface streets.
LLMs are a GPS, not a highway. They tell you what to do and where to go, and if you follow them blindly you will not learn the layout of the city, you'll just learn how to use the GPS. I find myself unable to navigate a city by myself until I consciously force myself off of Google Maps, and I don't find that having used GPS directions gives me a leg up in understanding the city—I'm starting from scratch no matter how many GPS-assisted trips I've taken.
I think the analogy helps both in that the weaknesses in LLM coding are similar and also that it's not the end of the world. I don't need to know how to navigate most cities by memory, so most of the time Google Maps is exactly what I need. But I need to recognize that leaning on it too much for cities that I really do benefit from knowing by heart is a problem, and intentionally force myself to do it the old-fashioned way in those cases.
askonomm
Difference here being that you actually learned the information about Ruby on Rails, whereas the modern programmer doesn't learn anything. They are but a clipboard-like vessel that passes information from an LLM into a text editor, rarely ever actually reading and understanding the code. And if something doesn't work, they don't debug the code, they debug the LLM for not getting it right. The actual knowledge never gets stored in the brain, making any future learning or evolution impossible.
I've had to work with developers who are over-dependent on LLMs; one didn't even know how to undo code, they had to ask an LLM to undo it. Almost as if the person is a zombie or something. It's scary to witness. And as soon as you ask them to explain the rationale for the solution they came up with: dead silence. They can't, because they never actually _thought_.
thechao
I think you're right; I can see it in the accelerating growth curve of my good Junior devs; I see grandOP's vision in my bad Junior devs. Optimistically, I think this gives more jr devs more runway to advance deeper into more sophisticated tech stacks. I think we're gonna need more SW devs, not fewer, as these tools get better: things that were previously impossible will be possible.
csmpltn
> "But it's not satisfying to stay at that level of ignorance very long"
It's not about satisfaction: it's literally dangerous and can bankrupt your employer, cause immense harm to your customers and people at home, and make you unhirable as an engineer.
Let's take your example of "an S3 avatar upload system", which you consider finished after writing 2 lines of code and installing a couple of packages. What makes sure this can't be abused by an attacker to DDoS your system, leading to massive bills from AWS? What happens after an attacker abuses this system and takes control of your machines? What makes sure those avatars are "safe-for-work" and legal to host in your S3 bucket?
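Even a first hardening pass catches a lot of this. A minimal sketch (all names here are hypothetical, and the in-memory rate limiter is a stand-in for whatever your stack actually provides) of the server-side checks that belong in front of an avatar upload, before a byte reaches S3:

```python
import time
from collections import defaultdict

MAX_BYTES = 2 * 1024 * 1024  # reject oversized uploads before they hit storage
ALLOWED_MAGIC = {b"\x89PNG\r\n\x1a\n": "png", b"\xff\xd8\xff": "jpeg"}
_upload_times = defaultdict(list)  # naive per-user in-memory rate limiter

def validate_avatar(user_id, data, now=None):
    """Return the detected image format, or raise ValueError explaining the rejection."""
    now = now or time.time()
    recent = [t for t in _upload_times[user_id] if now - t < 60]
    if len(recent) >= 5:
        raise ValueError("rate limit: too many uploads this minute")
    if len(data) > MAX_BYTES:
        raise ValueError("file too large")
    # Sniff magic bytes instead of trusting the client's Content-Type header.
    for magic, fmt in ALLOWED_MAGIC.items():
        if data.startswith(magic):
            _upload_times[user_id] = recent + [now]
            return fmt
    raise ValueError("not a recognized image format")
```

Size caps and magic-byte sniffing don't make the endpoint safe, but they cut off the cheapest abuse; NSFW/legality scanning and AWS billing alarms are separate problems again.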
People using LLMs and feeling all confident about it are the equivalent of hobby carpenters after watching a DIY video on YouTube and building a garden shed over the weekend. You're telling me they're now qualified to go build buildings and bridges?
> "It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that."
I meet people like this in job interviews all the time when I'm hiring for a position. I can't tell you how many people with 10+ years of industry experience I've met recently who can't explain how to read data from a local file on the machine's file system.
sterlind
At present, LLMs are basically Stack Overflow with infinite answers on demand... of Stack Overflow quality and relevance. Prompting is the new Googling. It's a critical base skill, but it's not sufficient.
The models I've tried aren't that great at algorithm design. They're abysmal at generating highly specific, correct code (e.g. kernel drivers, consensus protocols, locking constructs.) They're good plumbers. A lot of programming is plumbing, so I'm happy to have the help, but they have trouble doing actual computer science.
And most relevantly, they currently don't scale to large codebases. They're not autonomous enough to pull a work item off the queue, make changes across a 100kloc codebase, debug and iterate, and submit a PR. But they can help a lot with each individual part of that workflow when focused, so we end up in the perverse situation where junior devs act as the machine's secretary, while the model does most of the actual programming.
So we end up de-skilling the junior devs, but the models still can't replace the principal devs and researchers, so where are the principal devs going to come from?
zahlman
>Why wouldn't this be the case for people using LLMs, like it was for everyone else?
Because of the mode of interaction.
When you dive into a framework that provides a ton of scaffolding, and "backfill your knowledge over time" (guilty! using Nikola as a SSG has been my entry point to relearn modern CSS, for example), you're forced to proceed by creating your own loop of experimentation and research.
When you interact with an LLM, and use forums to figure out problems the LLM didn't successfully explain to you (about its own output), you're in chat mode the whole time. Even if people are willing to teach you to fish, they won't voluntarily start the lesson, because you haven't shown any interest in it. And the fish are all over the place - for now - so why would you want to learn?
>It's like presuming that StackOverflow will keep you as a question-asker your whole life when nobody here would relate to that.
Of course nobody on HN would relate to that first-hand. But as someone with extensive experience curating Stack Overflow, I can assure you I have seen it second-hand many times.
unyttigfjelltol
> But in reality pretty much anyone who enters software starts off cutting corners just to build things instead of working their way up from nand gates.
The article is right in a zoomed-in view (fundamental skills will be rare and essential), but in the big picture the critique in the comment is better (folks rarely start on nand gates). Programmers of the future will have less need to know code syntax the same way current programmers don't have to fuss with hardware-specific machine code.
The people who still write hardware-specific code: are they currently in demand? The marketplace is smaller, so results will vary and, as the article suggests, will probably be less satisfactory for the participant with a time-critical need.
geodel
Great points. I look back on my own journey from offshore application support contractor to full-time engineer, learning a lot along the way. Along the way I've also seen folks who held good senior engineering roles just stagnate or move into management.
The industry is now large enough to have all sorts of people: growing, stagnating, moving out, moving in, laid off, retiring early, or just plain retiring.
whynotminot
Isn’t this kind of thing the story of tech though?
Languages like Python and Java come around, and old-school C engineers grouse that the kids these days don’t really understand how things work, because they’re not managing memory.
Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.
I actually sort of agree with the old C hands to some extent. I think people don’t understand how a lot of things actually work. And it also doesn’t really seem to matter 95% of the time.
HarHarVeryFunny
I don't think the value of senior developers is so much in knowing how more things work, but rather that they've learnt (over many projects of increasing complexity) how to design and build larger more complex systems, and this knowledge mostly isn't documented for LLMs to learn from. An LLM can do the LLM thing and copy designs it has seen, but this is cargo-cult behavior - copy the surface form of something without understanding why it was built that way, and when a different design would have been better for a myriad of reasons.
This is really an issue for all jobs, not just software development, where there is a large planning and reasoning component. Most of the artifacts available to train an LLM on are the end result of reasoning, not the reasoning process themselves (the day by day, hour by hour, diary of the thought process of someone exercising their journeyman skills). As far as software is concerned, even the end result of reasoning is going to have very limited availability when it comes to large projects since there are relatively few large projects that are open source (things like Linux, gcc, etc). Most large software projects are commercial and proprietary.
This is really one of the major weaknesses of LLM-as-AGI, or LLM-as-human-worker-replacement - their lack of ability to learn on the job and pick up a skill for themselves as opposed to needing to have been pre-trained on it (with the corresponding need for training data). In-context learning is ephemeral and anyways no substitute for weight updates where new knowledge and capabilities have been integrated with existing knowledge into a consistent whole.
shafyy
Just because there are these abstractions layers that happened in the past does not mean that it will continue to happen that way. For example, many no-code tools promised just that, but they never caught on.
I believe there's an "optimal" level of abstraction, which, for the web, seems to be something like the modern web stack: HTML, JavaScript, and some server-side language like Python, Ruby, Java, or JavaScript.
Now, there might be tools that make a developer's life easier, like a nice IDE, debugging tools, linters, autocomplete and also LLMs to a certain degree (which, for me, still is a fancy autocomplete), but they are not abstraction layers in that sense.
AnthonyMouse
> Modern web-dev comes around and now the old Java hands are annoyed that these new kids are just slamming NPM packages together and polyfills everywhere and no one understands Real Software Design.
The real issue here is that a lot of the modern tech stacks are crap, but won for other reasons, e.g. JavaScript is a terrible language but became popular because it was the only one available in browsers. Then you got a lot of people who knew JavaScript so they started putting it in places outside the browser because they didn't want to learn another language.
You get a similar story with Python. It's essentially a scripting language and poorly suited to large projects, but sometimes large projects start out as small ones, or people (especially e.g. mathematicians in machine learning) choose a language for their initial small projects and then lean on it again because it's what they know even when the project size exceeds what the language is suitable for.
To slay these beasts we need to get languages that are actually good in general but also good at the things that cause languages to become popular, e.g. to get something better than JavaScript to be able to run in browsers, and to make languages with good support for large projects to be easier to use for novices and small ones, so people don't keep starting out in a place they don't want to end up.
commandlinefan
My son is a CS major right now, and since I've been programming my whole adult life, I've been keeping an eye on his curriculum. They do still teach CS majors from the "ground up" - he took system architecture, assembly language and operating systems classes. While I kind of get the sense that most of them memorize enough to pass the tests and get their degree, I have to believe that they end up retaining some of it.
fuy
And also these old C hands don't seem to get paid (significantly) more than a regular web-dev who doesn't care about hardware, memory, performance etc. Go figure.
bee_rider
The real hardcore experts should be writing libraries anyway, to fully take advantage of their expertise in a tiny niche and to amortize the cost of studying their subproblem across many projects. It has never been easier to get people to call your C library, right? As long as somebody can write the Python interface…
Numpy has delivered so many FLOPs for BLAS libraries to work on.
Does anyone really care if you call their optimized library from C or Python? It seems like a sophomoric concern.
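The division of labor is visible in a single line of NumPy. A sketch (assumes NumPy is installed) comparing a naive pure-Python multiply against the `@` operator, which dispatches to whatever optimized BLAS NumPy was built against:

```python
import numpy as np

def naive_matmul(a, b):
    """Triple-loop multiply: what you'd write without the optimized library."""
    n, k, m = len(a), len(b), len(b[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for t in range(k):
                s += a[i][t] * b[t][j]
            out[i][j] = s
    return out

a = np.random.rand(50, 50)
b = np.random.rand(50, 50)
fast = a @ b                    # one line; dispatches to a compiled BLAS gemm
slow = naive_matmul(a.tolist(), b.tolist())
assert np.allclose(fast, slow)  # same math, wildly different speed at scale
```

The caller never sees the C/Fortran underneath, which is rather the point: the niche expertise is amortized across everyone who types `@`.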
bdhcuidbebe
Yeah, every programmer should write at least a CPU emulator in their language of choice. It's such an undervalued exercise that will teach you so much about how stuff really works.
sanderjd
Notably, I don't think there was a mass disemployment of "old C hands". They just work on different things.
weatherlite
There's no need for tens of millions of OS kernel devs; most of us are writing business-logic CRUD apps.
Also, it's not entirely clear to me why LLMs should get extremely good at web app development but not OS development; as far as I can see, it's the amount and quality of training data that counts.
wesselbindt
> as far as I can see it's the amount and quality of training data that counts
Well there's your reason. OS code is not as in demand or prevalent as crud web app code, so there's less relevant data to train your models on.
trod1234
It is far more likely that everything, and not just IT, but everything collapses than we make it to the point you mention.
LLMs replace entry-level people who invested in education. Those people have the beginning knowledge, but there's no way for them to get better, because the opportunities are non-existent: those positions were replaced. It's a sequential pipeline failure of talent development. In the meantime, the mid- and senior-level people cannot pass their knowledge on; they age out and die.
What happens when you hit a criticality point where production, which is dependent on these systems, can no longer continue?
The knowledge implicit in production is lost, the economic incentives have been poisoned. The distribution systems are destroyed.
How do you bootstrap recovery for something that effectively took several centuries to build in the first place, and do it not in centuries but in weeks or months?
If that isn't sufficient to explain the core of the issue, check out the Atari/Nintendo crash, which isn't nearly as large as this but shows the dangers of destroying your distributor networks.
If you pay attention to the details, you'll see Atari's crash was fueled by debt financing, and in the process they destroyed their distributor networks with catastrophic losses. After that crash, Nintendo couldn't get shelf space; no distributor would risk the loss without a guarantee. They couldn't advertise their product as video games. They had to trojan-horse the perception of what they were selling, and guarantee it. There's a documentary on Amazon that covers this, Playing With Power. Check it out.
brightball
One of my first bosses was a big Perl guy. I checked on what he was doing 15 years later and he was one of 3 people at Windstream handling backbone packet management rules.
You just don’t run into many people comfortable with that technology anymore. It’s one of the big reasons I go out of my way to recruit talks on “old” languages to be included at the Carolina Code Conference every year.
SoftTalker
We've been in this world for decades.
Most developers couldn't write an operating system to save their life. Most could not write more than a simple SQL query. They sling code in some opinionated dev stack that abstracts the database and don't think too hard about the low-level details.
aussieguy1234
They'll probably go a step further and use an ORM instead of writing queries.
Since ORMs generally generate poor, unoptimized SQL for all but the simplest of queries, this will lead to performance issues once things scale up.
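The canonical version of this is the N+1 query: a lazy ORM-style loop issues one query per parent row where hand-written SQL issues a single join. A toy sketch with stdlib sqlite3 (hypothetical schema, not from the thread):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'ada'), (2, 'grace');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'hello');
""")

# What a naive ORM loop does: 1 query for users + N queries for their posts.
users = db.execute("SELECT id, name FROM users").fetchall()
n_plus_1 = {name: [t for (t,) in db.execute(
    "SELECT title FROM posts WHERE user_id = ?", (uid,))] for uid, name in users}

# What hand-written SQL does: one JOIN, one round trip.
joined = {}
for name, title in db.execute(
        "SELECT u.name, p.title FROM users u JOIN posts p ON p.user_id = u.id"):
    joined.setdefault(name, []).append(title)

assert n_plus_1 == joined  # same answers; the difference is round trips, not results
```

With 2 users the difference is invisible; with 10,000 users it's 10,001 round trips versus one, which is exactly the kind of thing that only shows up "once things scale up".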
eitally
I agree. It's the current generation's version of what happened with the advent of Javascript frameworks about 15 years ago, when suddenly web devs stopped learning how computers actually work. There will always be high demand for software engineers who actually know what they're doing, can debug complex code bases, and can make appropriate decisions about how to apply technology to business problems.
That said, AI agents are absolutely going to put a bunch of lower end devs out of work in the near term. I wouldn't want to be entering the job market in the next couple of years....
mixmastamyk
> There will always be high demand for software engineers who actually know what they're doing
Unfortunately they won’t be found due to horrible tech interviews focused on “culture” (*-isms), leetcode under the gun, or resume thrown in trash at first sight from lack of full degree. AMHIK.
chasd00
> I wouldn't want to be entering the job market in the next couple of years....
I bet there's a software dev employment boom about 5 years away once it becomes obvious competent people are needed to unwind and rework all the llm generated code.
InDubioProRubio
The "prompt" engineering is also going to create a ton of cargo-cult tips and tricks: endless shell scripts that do nothing but look spectacular, with one or two important commands at the end. Fractal classes that nobody knows why they exist. Endless boilerplate.
And the ai will be trained on this- and thus cluelessness reinforced and baked in. Omnissiah, hear our prayers in the terminal for we are but -h less man (bashes keyboard with a ritual wrench).
lolinder
Part of the problem is that many working developers are still in companies that don't allow experimentation with the bleeding edge of AI on their code base, so their experiences come from headlines and from playing around on personal projects.
And on the first 10,000 lines of code, the best in class tools are actually pretty good. Since they can help define the structure of the code, it ends up shaped in a way that works well for the models, and it still basically all fits in the useful context window.
What developers who can't use it on large warty codebases don't see is how poorly even the best tools do on the kinds of projects that software engineers typically work on for pay. So they're faced with headlines that oversell AI capabilities and positive experiences with their own small projects and they buy the hype.
Jcampuzano2
My company allowed us to use it but most developers around me didn't reach out to the correct people to be able to use it.
Yes I find it incredibly helpful and try to tell them.
But it's only helpful in small contexts, auto completing things, small snippets, generating small functions.
Any large-scale change, like the ones most of these AI companies claim it's capable of, just falls straight on its face. I've tried many times, with every new model. It can't do it well enough to trust in any codebase bigger than a few tens of thousands of lines of code.
nyarlathotep_
I've found it very easy to end up "generating" yourself into a corner with a total mess with no clear control flow that ends up more convoluted than need be, by a mile.
If you're in mostly (or totally) unfamiliar territory, you can end up in a mess, fast.
I was playing around with writing a dead-simple websocket server in go the other evening and it generated some monstrosity with multiple channels (some unused?) and a tangle of goroutines etc.
Quite literally copying the example from Gorilla's source tree and making small changes would have gotten me 90% of the way there, instead I ended up with a mostly opaque pile of code that *looks good* from a distance, but is barely functional.
(This wasn't a serious exercise, I just wanted to see how "far" I could get with Copilot and minimal intervention)
ragle
In a similar situation at my workplace.
What models are you using that you feel comfortable trusting it to understand and operate on 10-20k LOC?
Using the latest and greatest from OpenAI, I've seen output become unreliable with as little as ~300 LOC on a pretty simple personal project. It will drop features as new ones are added, make obvious mistakes, refuse to follow instructions no matter how many different ways I try to tell it to fix a bug, etc.
Tried taking those 300 LOC (generated by o3-mini-high) to cursor and didn't fare much better with the variety of models it offers.
I haven't tried OpenAI's APIs yet - I think I read that they accommodate quite a bit more context than the web interface.
I do find OpenAI's web-based offerings extremely useful for generating short 50-200 LOC support scripts, generating boilerplate, creating short single-purpose functions, etc.
Anything beyond this just hasn't worked all that well for me. Maybe I just need better or different tools though?
menaerus
Did you have to do any preparation steps before you asked the model to make the large-scale change, or were there no steps involved? For example, did you simply ask for the change, or did you give the model a chance to learn about the codebase first? I'm genuinely asking; I'm curious because I haven't had the chance to use these models at work.
throwaway0123_5
Some codebases grown with AI assistance must be getting pretty large now, I think an interesting metric to track would be percent of code that is AI generated over time. Still isn't a perfect proxy for how much work the AI is replacing though, because of course it isn't the case that all lines of code would take the same amount of time to write by hand.
lolinder
Yeah, that would be very helpful to track. Anecdotally, I have found in my own projects that the larger they get the less I can lean on agent/chat models to generate new code that works (without needing enough tweaks that I may as well have just written it myself). Having been written with models does seem to help, but it doesn't get over the problem that eventually you run out of useful context window.
What I have seen is that autocomplete scales fine (and Cursor's autocomplete is amazing), but autocomplete supplements a software engineer, it doesn't replace them. So right now I can see a world where one engineer can do a lot more than before, but it's not clear that that will actually reduce engineering jobs in the long term as opposed to just creating a teller effect.
WillPostForFood
> the kinds of projects that software engineers typically work on for pay
This assumes a typical project is fairly big and complex. Maybe I'm biased the other way, but I'd guess 90% of software engineers are writing boilerplate code today that could be greatly assisted by LLM tools. E.g., PHP is still one of the top languages, which means a lot of basic WordPress stuff that LLMs are great at.
lolinder
The question isn't whether the code is complex algorithmically, the question is whether the code is:
* Too large to fit in the useful context window of the model,
* Filled with bunch of warts and landmines, and
* Connected to external systems that are not self-documenting in the code.
Most stuff that most of us are working on meets all three of these criteria. Even microservices don't help, if anything they make things worse by pulling the necessary context outside of the code altogether.
And note that I'm not saying that the tools aren't useful, I'm saying that they're nowhere near good enough to be threatening to anyone's job.
RivieraKid
I'm surprised to see a huge disconnect between how I perceive things and the vast majority of comments here.
AI is obviously not good enough to replace programmers today. But I'm worried that it will get much better at real-world programming tasks within years or months. If you follow AI closely, how can you be dismissive of this threat? OpenAI will probably release a reasoning-based software engineering agent this year.
We have a system that is similar to top humans at competitive programming. This wasn't true 1 year ago. Who knows what will happen in 1 year.
cejast
Nobody can tell you whether progress will continue at current, faster or slower rates - humans have a pretty terrible track record at extrapolating current events into the future. It's like how movies in the 80's made predictions about where we'll be in 30 years time. Back to the Future promised me hoverboards in 2015 - I'm still waiting!
tmnvdb
Compute power increases and algorithmic efficiency improvements have been rapid and regular. I'm not sure why you thought that Back to the Future was a documentary film.
layer8
When I see stuff like https://news.ycombinator.com/item?id=42994610 (continued in https://news.ycombinator.com/item?id=42996895), I think the field still has fundamental hurdles to overcome.
tmnvdb
Why do you think this is a fundamental hurdle, rather than just one more problem that can be solved? I don't have strong evidence either way, but I've seen a lot of "fundamental insurmountable problems" fall by the wayside over the past few years. So I'm not sure we can be that confident that a problem like this, for which we have very good classic algorithms, is a fundamental issue.
lordswork
This kind of error doesn't really matter in programming, where the output can be verified with a feedback loop.
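Concretely, the loop is: generate a candidate, run it against tests, retry on failure. A sketch where plain Python functions stand in for successive model attempts (no real LLM call; the midpoint task and both attempts are made-up examples):

```python
def verify(candidate, tests):
    """Run (args, expected) test cases against a candidate; return first failure or None."""
    for args, expected in tests:
        try:
            if candidate(*args) != expected:
                return f"wrong answer for {args}"
        except Exception as e:
            return f"raised {type(e).__name__} for {args}"
    return None

# Stand-ins for successive model attempts at "integer midpoint of a and b".
attempts = [
    lambda a, b: (a - b) // 2,       # plausible-looking but wrong
    lambda a, b: a + (b - a) // 2,   # correct
]

tests = [((2, 4), 3), ((0, 0), 0), ((1, 4), 2)]

chosen = None
for attempt in attempts:
    failure = verify(attempt, tests)   # in a real loop, `failure` goes back into the prompt
    if failure is None:
        chosen = attempt
        break

assert chosen is not None and chosen(10, 20) == 15
```

The caveat is that the loop is only as good as the tests: a candidate that passes a weak suite can still be wrong, which is why this filters hallucinations rather than eliminating them.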
johnnyanmac
It's the opposite. I don't think it'll replace programmers legitimately within a decade. I DO think that companies will try a lot in the months and years anyway and that programmers will be the only ones suffering the consequences of such actions.
tmnvdb
People somehow have expectations that are both too high and too low at the same time. They expect (demand) that current language models completely replace a human engineer in any field without making mistakes, which is obviously way too optimistic. At the same time they ignore how rapid the progress has been, and how much these models can now do that seemed impossible just two years ago, delivering huge value when used well, and they assume no further progress, which seems too pessimistic even if progress is not guaranteed to continue at the same rate.
mirsadm
ChatGPT 4 was released 2 years ago. Personally I don't think things have moved on significantly since then.
lurking_swe
It depends on what you work on in the software field. Many of these LLMs have pretty small context windows. In the real world, when my company wants to develop a new feature or change the business logic, that's a cross-cutting change (many repos/services); I work at a large org, for background. No LLM will be automating this for a long time to come, especially if you're in a niche domain.
If your project is very small, and it’s possible to feed your entire code base into an LLM in the near future, then you’re in trouble.
Also the problem is the LLM output is only as good as the prompt. 99% of the time the LLM won’t be thinking of how to make your API change backwards compatible for existing clients, how to help you do a zero-downtime migration, following security best practices, or handling a high volume of API traffic. Etc.
Not to mention, what the product team _thinks_ they want (business logic) is usually not what they really want. Happens ALL THE TIME friend. :) It’s like the offshoring challenge all over again. Communication with humans is hard. Communication with an LLM is even harder. Writing the code is the easiest part of my job!
I think some software development jobs will definitely be at risk in the next 10-15 years. Thinking this will happen in one year's time is myopic, in my opinion.
thiht
> If you follow AI closely, how can you be dismissive of this threat?
Just use a state of the art LLM to write actual code. Not just a PoC or an MVP, actual production ready code on an actual code base.
It’s nowhere close to being useful, let alone replacing developers. I agree with another comment that LLMs don’t cut it, another breakthrough is necessary.
Imanari
We will see, maybe models do get good enough but I think we are underestimating these last few percent of improvement.
nitwit005
It's a bit paradoxical. A smart enough AI, and there is no point in worrying, because almost everyone will be out of a job.
The problem case is the somewhat odd scenario where there is an AI that's excellent at software dev, but not most other work, and we all have to go off and learn some other trade.
bwfan123
A "causal model" is needed to fix bugs ie, to "root-cause" a bug.
LLMs don't yet have a causal model of how something works built in. What they do have is pattern matching over a large index, and generation of plausible answers from that index. (Aside: the plausible snippets are of questionable licensing lineage, as the indexes can contain public code with restrictive licensing.)
Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.
Hence, while they are a great productivity boost as a semantic search engine, and a plausible snippet generator, they are not capable of building (or fixing bugs in) a machine which requires causal modeling.
fiso64
>Causal models require machinery which is symbolic, which is able to generate hypotheses and test and prove statements about a world. LLMs are not yet capable of this and the fundamental architecture of the llm machine is not built for it.
Prove that the human brain does symbolic computation.
bwfan123
We don't know what the human brain does, but we know it can produce symbolic theories or models of abstract worlds (in the case of math) or real worlds (in the case of science). It can also produce the "symbolic" Turing machine, which serves as an abstraction for all the computation we use (CPU/GPU/etc.).
necovek
Agreed, and I haven't yet seen any single instance of a company firing software engineers because AI is replacing them (even if by increasing productivity of another set of software engineers): I've asked this a number of times, and while it's a common refrain, I haven't really seen any concrete news report saying it in so many words.
And to be honest, if any company is firing software engineers hoping AI replaces their production, that is good news since that company will soon stop existing and treating engineers like shit which it probably did :)
pgm8705
Yes. I think part of the problem is how good it is at starting from a blank slate and putting together an MVP type app. As a developer, I have been thoroughly impressed by this. Then non-devs see this and must think software engineers are doomed. What they don't see is how terrible LLMs are at working with complex, mature codebases and the hallucinations and endless feedback loops that go with that.
idle_zealot
The tech to quickly spin up MVP apps has been around for a while now. It gets you from a troubling blank slate to something with structure, something you can shape and build on.
I am of course talking about
npx create-{template name}
Or your language of choice's equivalent (or git clone template-repo).
tiborsaas
Yes, but the LLM-driven MVPs are not just boilerplate but actual functioning apps. The "create-" tooling is somewhat good, but it usually produces throwaway code you redo properly later, while my LLM-made boilerplate is the actual first few steps that get the boring parts done. It also needs refactoring and polishing, but it's an order of magnitude better than the MVP helper tooling that came before.
ge96
A friend of mine reached out with some code ChatGPT wrote for him to trade crypto. It had so much random crap in it and lines would say "AI enhanced trading algo" and it was just an np.randomint line. It was pulling in random deps not even used.
I get it, though. I'm terrible at working with IMUs and I want to just get something going, but I can't; there's a wall I need to overcome, e.g. learning the math behind them. Same with programming: it helps to have the background, knowing how to read code and how it works.
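The failure mode in that anecdote is easy to caricature: an impressive-sounding comment sitting on top of a coin flip, plus imports that are never used. A minimal, hypothetical sketch (using the stdlib `random` in place of numpy; the function name is invented for illustration):

```python
import random

import json     # deliberately unused, mimicking the generated code
import hashlib  # likewise unused

def trading_signal(prices):
    """AI enhanced trading algo (so the generated comment claimed)."""
    # In reality the price history is ignored entirely:
    # the "signal" is just a random 0 (sell) or 1 (buy).
    return random.randint(0, 1)
```

Any backtest of this "strategy" converges on a coin flip, which is the whole point of the anecdote.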
HDThoreaun
I used Claude to help write a crypto trading bot. It helped me push out thousands of lines a day. What would've taken months took a couple of weeks. Obviously you still need experienced pilots, but unless we find an absolute fuckload of new work to do (not unlikely, looking at history), it's hard for me to see anything other than way fewer developers being needed.
DanielHB
The only thing I use it for is for small self-contained snippets of code on problems that require use of APIs I don't quite remember out of the top of my head. The LLM spits out the calls I need to make or attributes/config I need to set and I go check the docs to confirm.
Like "How to truncate text with CSS alone" or "How to set an AWS EC2 instance RAM to 2GB using terraform"
nerder92
This article is entirely built on 2 big and wrong assumptions:
1. AI code ability will be the same as is today
2. Companies will replace people for AI en masse at a given moment in time
Of course both these assumptions are wrong. The quality of code produced by AI will improve dramatically as models evolve. And it's not even just the model itself: the tooling, the agentic capabilities, and the workflows will entirely change to adapt to this. (This is already happening.)
The second assumption is also wrong. Intelligent companies will not lay off en masse to use AI only; they will most likely slow their hiring of devs because their existing AI-enhanced devs will suffice for their coding needs. At the end of the day, product is just one area of company development; building the complete end-to-end solution with zero distribution or marketing will not help.
This article, in my opinion, is just doomerism storytelling for nostalgic programmers who see programming only as some kind of magical artistic craft, and AI as the villain that arrived to remove all the fun from it. You can still switch off Cursor and write donut.c if you enjoy doing it.
y-c-o-m-b
> The second assumption is also wrong, intelligent companies will not layoff en masse to use AI only, they will most likely slow hiring devs because their existing enhanced devs using AI will suffice enough to their coding related needs
After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company. All of them make poor and irrational decisions regularly. I think you over-estimate the intelligence of leadership whilst simultaneously under-estimating their greed and eventual ability to self-destruct.
EDIT: you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.
leovingi
>After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company
It's not even necessarily about intelligence but about the simple concept of unknown unknowns. If everyone had perfect knowledge of the current reality and could perfectly describe what it is that they want immediately, without spending any time investigating, producing proof-of-concept work, iterating on a product, etc. I would agree that it could be feasible for AI to replace a lot of programming work.
As it stands, what I just described above is THE BULK of development. Coding is the last thing that happens and it also happens to be the fastest, easiest and smallest part of the entire process.
Jean-Papoulos
Companies that do not adopt AI whereas their competitors do will eventually (slowly) fall behind.
mulmboy
> After 20 years in tech, I can't think of a single company I've worked for/with that would fit the profile of an "intelligent" company. All of them make poor and irrational decisions regularly. I think you over-estimate the intelligence of leadership whilst simultaneously under-estimating their greed and eventual ability to self-destruct.
Says nothing about companies and everything about you
> you also over-estimate the desire for developers to increase their productivity with AI. I use AI to reduce complexity and give me more breathing room, not to increase my output.
I'm the same. But I expect that once many begin to do this, there will be some who do use it for productivity and they will set the bar. Then people like you and I will either use it for productivity or fall behind.
darkhorse222
How old are you? All it takes is one bad experience to show you the emperor has no clothes. Corporations, executives, and middle managers are almost by definition self-interested and short-sighted.
Just look at the over-hiring during COVID and the methods used to cull that workforce after they realized their mistake: backhanded and inhumane. Executives are more followers than a junior dev is; they just have a lot more terminology to obscure that fact. But they are basically professional bullshitters, like consulting firms.
This excludes executives with vision. But the market and corporate structure bias towards eliminating those leaders, as they are not consistently profitable month over month.
johnnyanmac
I'm happy you've only worked for altruistic, not-for-profit-minded companies that care about employee growth and take pride in their tech stack above all else. I have not had as fortunate an experience.
>I expect that once many begin to do this, there will be some who do use it for productivity and they will set the bar.
Yeah, probably. I've seen companies so fixated on "velocity" instead of quality. I imagine they will definitely expect triple the velocity just because one person "gets so much done", not realizing how much of that illusion rests on others correcting the submissions.
aiono
> the quality of code produced by AI will improve dramatically as model evolves.
That's a very bold claim. We are already seeing a plateau in LLM capabilities in general. And there has been little improvement, since their birth, in the places where they fall short (like making holistic changes in a large codebase); they only improve where they are already good, such as writing small glue programs. Expecting significant breakthroughs from scaling alone, without any fundamental changes to the architecture, seems too optimistic to me.
high_na_euv
>Of course both these assumptions are wrong, the quality of code produced by AI will improve dramatically as model evolves.
How are you so sure?
anothermathbozo
No one has certainty here. It’s an emergent technology and no one knows for certain how far it can be pushed.
It’s reasonable that people explore contingencies where the technology does improve to a point of driving changes in the labor market.
bodegajed
Theranos made bold claims too. According to Wikipedia, Theranos claimed they had devised diagnostics using blood tests that required very small amounts of blood. Then their PR machine drove them to a $9 billion valuation.
RohMin
I do feel that with the rise of the "reasoning" class of models, it's not hard to believe that code quality will improve over time.
high_na_euv
The thing is: how much
0.2x, 2x, 5x, 50x?
croes
Doesn't sound like it's improving dramatically.
dev1ycan
He's not; he's just another delusional venture capitalist who hasn't bothered to look up the counterarguments to his point of view made by mathematicians.
anothermathbozo
It’s an emergent technology and no one knows for certain how far it can be pushed, not even mathematicians.
randmeerkat
> he's not, just another delusional venture capitalist that hasn't bothered to look up the counter arguments to his point of view, done by mathematicians
Don’t hate on it, just spin up some startup with “ai” and LLM hype. Juice that lemon.
nerder92
I'm not sure, it's an observation considering how AI improvement is related to Moore's law.
[1](https://techcrunch.com/2025/01/07/nvidia-ceo-says-his-ai-chi...)
somenameforme
That's an assumption. Most/all neural network based tech faces a similar problem of exponentially diminishing returns. You get from 0 to 80 in no time. A bit of effort and you eventually ramp it up to 85, and it really seems the goal is imminent. Yet suddenly each percent, and then each fraction of a percent starts requiring exponentially more work. And then you can even get really fun things like you double your training time and suddenly the resultant software starts scoring worse on your metrics, usually due to overfitting.
And it seems, more or less, clear that the rate of change in the state of the art has already sharply decreased. So it's likely LLMs have already entered into this window.
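The shape of that curve can be sketched with a toy model (purely illustrative numbers, not measured data): suppose each doubling of training effort closes a fixed fraction of the remaining gap to a perfect score.

```python
# Toy model of diminishing returns: each doubling of training effort
# closes half of the remaining gap to a 100-point metric.
def score_after(doublings, gap_closed_per_doubling=0.5):
    return 100 * (1 - (1 - gap_closed_per_doubling) ** doublings)

# Marginal gain from each successive doubling of effort.
gains = [score_after(d + 1) - score_after(d) for d in range(10)]
# The first doubling gains 50 points; the tenth gains under 0.1 points,
# even though each doubling costs as much as all previous ones combined.
```

This matches the "0 to 80 in no time, then each fraction of a percent gets exponentially more expensive" pattern described above, before even accounting for overfitting.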
high_na_euv
But some say that Moore's Law is dead :)
Anyway, the number of TikTok users correlates with advancements in AI too!
Before TikTok the progress was slower; then when TikTok appeared, it progressed like hell!
kykeonaut
However, an increase in computing quality doesn't necessarily mean an increase in output quality, as you need compute power + data to train these models.
Just increasing compute power will increase the performance/training speed of these models, but you also need to increase the quality of the data that you are training these models on.
Maybe... the reason why these models show a high school level of understanding is because most of the data on the internet that these models have been trained on is of high school graduate quality.
causal
Not to mention I haven't really seen AI replace anyone, except perhaps as a scapegoat for execs who were planning on layoffs anyway.
That said, I do think there is real risk of letting AI hinder the growth of Junior dev talent.
sanderjd
I think I've seen us be able to do more with fewer people than in the past. But that isn't the limiting factor for our hiring. All else equal, we'd like to just do more, when we can afford to hire the people. There isn't a fixed amount of work to be done. We have lots of ideas for products and services to make if we have the capacity.
causal
Agreed, I often see AI discussed as if most companies wouldn't take 10x more developers if they could have them for free
ssimpson
I tend to agree with you. The general pattern when "x tool came along that made work easier" isn't to fire a bunch of folks; it's to make the people who are there do proportionally more work. I.e., if the tool cuts work in half, you'd be expected to do 2x more work. Automation and tools almost never "make our lives easier"; they just remove some of the lower-value-added work. It would be nice to live better and work less, but our overlords won't let that happen. Same output with less work by the individual isn't as good as the same or more output with the same or fewer people.
fragmede
> but our overlords won't let that happen
If you have a job, working for a boss, you're trading your time for money. If you're a contractor and negotiate being paid by the project, you're being paid for results. Trading your time for money is the underlying contract. That's the fundamental nature of a job working for somebody else. You can escape that rat race if you want to.
Someone I know builds websites for clients on a contract basis, and did so without LLMs. Within his market, he knows what a $X,000 website build entails. His clients were paying that rate for a website build out prior to AI-augmented programming, and it would take a week to do that job. With help from LLMs, that same job now takes half as much time. So now he can choose to take on more clients and take home more pay, or not, and be able to take it easy.
So that option is out there, if you can make that leap. (I haven't)
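The contractor math above works out simply. With hypothetical figures (the fee and hours are illustrative, not from the comment):

```python
# Illustrative figures only: a fixed-fee website build.
project_fee = 5000               # the "$X,000" per build (hypothetical)
hours_before = 40                # roughly a week of focused work

hours_after = hours_before / 2   # same job, LLM-assisted

rate_before = project_fee / hours_before  # effective $/hour before
rate_after = project_fee / hours_after    # effective $/hour after

# Halving the hours doubles the effective rate: the contractor can take
# on twice the clients, or work half the time for the same pay.
assert rate_after == 2 * rate_before
```

The key design point is that the contract prices the result, not the time, so the productivity gain accrues to the contractor rather than the employer.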
johnnyanmac
>You can escape that rat race if you want to.
I'm working on it. But it takes money and the overlords definitely are trying to squeeze as of late.
And yes, while I don't think I'm being replaced in months or years, I can see a possibility in a decade or two of the ladder being pulled up on most programming jobs. We'll either be treated as well as artists (assuming we still don't unionize) or we'll have to rely on our own abilities to generate value without corporate overlords.
wincy
Interesting. So it sounds like I need to get into the market and charge half as much and steal all of his customers.
These things eventually always end up as a Red Queen’s race, where you have to run as fast as you can to stay in the same place.
Madmallard
What makes you think (1) will be true?
It is only generating based on training data. In mature codebases there is a massive amount of interconnected state that is not already present in any GitHub repository. The new logic you'd want to add is likely something never done before. As other programmers have stated, it seems to be improving at generating useful boilerplate and making simple websites and the like, related to what's out there en masse on GitHub. But it can't make any meaningful changes in an extensively matured codebase. Even Claude Sonnet is absolutely hopeless at this. And the bar before a codebase counts as "matured" is not very high.
ryandrake
> The new logic you'd want to add is likely something never done before.
99% of software development jobs are not that groundbreaking. It's mostly companies doing exactly what their competitors are doing. Very few places are actually doing things that an LLM has truly never seen while crawling through GitHub. Even new, innovative products generally boil down to the same database fetches, CRUD glue, JSON parsing, and front-end form-filling code.
SpicyLemonZest
Groundbreakingness is different from the type of novelty that's relevant to an LLM. The script I was trying to write yesterday wasn't groundbreaking at all: it just needed to pull some code from a remote repository, edit a specific file to add a hash, then run a command. But it had to do that _within our custom build system_, and there are few examples of that, so our coding assistant couldn't figure out how to do it.
skydhash
> Even new innovative products generally boil down to the same database fetches and CRUD glue and JSON parsing and front end form filling code.
The simplest version of that is some CGI code or a PHP script, which is what everyone should be writing, according to your description. But why have so many books been written on how to do this seemingly simple task? So many frameworks, so many patterns, so many methodologies...
Madmallard
I don't know man
It can't do anything in these random Phaser games I'm making, or even translate my 10,000-line XNA game to Phaser. It is totally hopeless.
Phaser has been out forever now, and XNA used to be too.
nerder92
> It is only generating based on training data
This is not the case anymore, current SOTA CoT models are not just parroting stuff from the training data. And as of today they are not even trained exclusively on publicly (and not so publicly) available stuff, but they massively use synthetic data which the model itself generated or distilled data from other smarter models.
I'm using AI, and I know plenty of people using it, in current "mature" codebases with great results. This doesn't mean it does the work while you sip a coffee (yet).
*NOTE: my evidence for this is that o3 could not break ARC-AGI by parroting, because it's a benchmark made exactly for this reason. Not a coding benchmark per se, but still transposable, imo.
fragmede
Try Devin or OpenHands. OpenHands isn't quite ready for production, but it's informative on where things are going and to watch the LLM go off and "do stuff", kinda on its own, from my prompt (while I drink coffee).
leptons
> their existing enhanced devs using AI will suffice enough to their coding related needs.
Not my experience. I spend as much time reading through and replacing wrong AI generated code as I do writing my own code, so it's really wasting my time more often than helping. It's really hit or miss, and about the only thing the AI gets right most often is writing console.log statements based on the variable I've just assigned, and that isn't really "coding". And even then it gets it right only about 75% of the time. Sure, that saves me some time, but I'm not seeing the supposed acceleration AI is hyped as giving.
bodegajed
> Quality of code produced by AI will improve dramatically as model evolves.
Where are your facts? What is the basis for this prediction of the future? Sure, small code snippets have improved since GPT-2. What about larger applications, with layers and layers of abstraction built on private, company-sensitive data?
> At the end of the day product is just one area of company development, build the complete e2e ultimate solution with 0 distribution or marketing will not help.
This sounds like PR garbage to me. Why say "e2e ultimate solution"? Provide technical details so we can verify what you're saying is true.
dragonwriter
My opinion: tech isn't firing programmers for AI. It is firing programmers because of the financial environment, and waving around AI as a fig leaf to pretend that it is not really cutting back on output.
When the financial environment loosens again, there’ll be a new wave of tech hiring (which is about equally likely to publicly be portrayed as either reversing the AI firing or exploiting new opportunities due to AI, neither of which will be the real fundamental driving force.)
smitelli
I've come to believe it really is this.
Everybody got used to the way things worked when interest rates were near zero. Money was basically free, hiring was on a rampage, and everybody was willing to try reckless moonshots with slim chances for success. This went on for like fifteen years -- a good chunk of the workforce has only ever known that environment.
coolKid721
Most narratives for everything are just an excuse for macro stuff. We had ZIRP for basically the entire period of 2008-2022, and when that stopped there were huge layoffs and less hiring. I see lots of newer/younger devs being really pessimistic about the future of the industry; being mindful of the macro factors is important so people don't buy into the AI narratives (which exist just to bump up stocks).
If people can get a safer return buying bonds they aren't going to invest in expansion and hiring. If there is basically no risk free rate of return you throw your money at hiring/new projects because you need to make a return. Lots of that goes into tech jobs.
cootsnuck
It's this. Companies love a smokescreen to tighten their belts without spooking markets and tanking their stock. And workers are the collateral damage.
johnnyanmac
No one past some very small businesses (aka a single person with contractors) is seriously trying to replace programmers with AI right now. I do feel we will hit that phase sometime down the line (probably in the '30s), so I at least think this is a tale to keep in the back of our minds long term.
jvanderbot
What evidence do we have that AI is actually replacing programmers already? The article treats messaging on this as a forgone conclusion, but I strongly suspect it's all hype-cycle BS to cover layoffs, or a misreading of "Meta pivots to AI" headlines.
chubot
We'll probably never have evidence either way ... Did Google and Stack Overflow "replace" programmers?
Yes, in the sense that I suspect that with the strict counterfactual -- taking them AWAY -- you would have to hire 21 people instead of 20, or 25 instead of 20, to do the same job.
So strictly speaking, you could fire a bunch of people with the new tools.
---
But in the same period, the industry expanded rapidly, and programmer salaries INCREASED
So we didn't really notice or lament the change
I expect that pretty much the same thing will happen. (There will also be some thresholds crossed, producing qualitative changes. e.g. Programmer CEOs became much more common in the 2010's than in the 1990's.)
---
I think you can argue that some portion of the industry "got dumber" with Google/Stack Overflow too. Higher level languages and tech enabled that.
Sometimes we never learn the underlying concepts, and spin our wheels on the surface
Bad JavaScript ate our CPUs, and made the fans spin. Previous generations would never write code like that, because they didn't have the tools to, and the hardware wouldn't tolerate it. (They also wrote a lot of memory safety bugs we're still cleaning up, e.g. in the Expat XML parser)
If I reflect deeply, I don't know a bunch of things that earlier generations did, though hopefully I know some new things :-P
jvanderbot
This is an insightful comment. It smells of the Jevons paradox, right? More productivity leads to increased demand.
I just don't remember anyone saying that SO would replace programmers, because you could just copy-paste code from a website and run it. Yet here we are: GPTs will replace programmers, because you can just copy-paste code from a website and run it.
tguedes
I completely agree that the Jevons paradox is the right way to think about this. Much like ERP and HR software made it so you needed fewer back-office staff to accomplish the same tasks, but they allow these huge multinational companies to exist. I don't think these companies with tens or hundreds of thousands of employees would be possible without ERP and HR software.
I think another way of thinking about this is with low-code/no-code tools. Another comment in this post said they never really took off, and they didn't in the way some people expected. But a lot of large companies use them quite a bit for automating internal processes such as document/data aggregation and manipulation. JP Morgan has multiple job listings right now for RPA developers. Before, this would have needed to be done by actual developers.
I suspect (and hope) AI will follow a similar trajectory. I hope the future is exciting and that we build new, more complex systems that weren't possible before due to lack of capacity.
sanderjd
People definitely said this about SO!
TheOtherHobbes
Google Coding is definitely a real problem. And I can't believe how wrong some of the answers on Stack Overflow are.
But the real problems are managerial. Stonks must go up, and if that means chasing a ridiculous fantasy of replacing your workforce with LLMs then let's do that!!!!111!!
It's all fun and games until you realise you can't run a consumer economy without consumers.
Maybe the CEOs have decided they don't need workers or consumers any more. They're too busy marching into a bold future of AI and robot factories.
Good luck with that.
If there's anyone around a century from now trying to make sense of what's happening today, it's going to look like a collective psychotic episode to them.
supergarfield
> It's all fun and games until you realise you can't run a consumer economy without consumers.
If the issue is that the AI can't code, then yes you shouldn't replace the programmers: not because they're good consumers, just because you still need programmers.
But if the AI can replace programmers, then it's strange to argue that programmers should still get employed just so they can get money to consume, even though they're obsolete. You seem to be arguing that jobs should never be eliminated due to technical advances, because that's removing a consumer from the market?
robertlagrant
I don't think this is anyone's plan. It's the biggest argument against why it won't be the plan: who'll pay for all of it? Unless we can Factorio the world, it seems more likely we just won't do that.
insane_dreamer
It'll happen gradually over time, with more pressure on programmers to "get more done".
I think it's useful to look at what has already happened at another, much smaller profession -- translators -- as a precursor to what will happen with programmers.
1. translation software does a mediocre job, barely useful as a tool; all jobs are safe
2. translation software does a decent job, now expected to be used as time-saving aid, expectations for translators increase, fewer translators needed/employed
3. translation software does a good job, translators now hired to proofread/check the software output rather than translate themselves, allowing them to work 3x to 4x as fast as before, requiring proportionally fewer translators
4. translation software, now driven by LLMs, does an excellent job, only cursory checks required; very few translators required mostly in specialized cases
daveguy
Yes, but in all four of these steps you are describing literally the job transformer LLMs were designed to do. We are at 1 (mediocre job) for LLMs in coding right now, maybe 2 in a few limited cases (e.g. boilerplate). There's no reason to assume LLMs will ever perform at 3 for coding, for the same reason natural-language programming languages like COBOL are no longer used: natural language is not precise.
insane_dreamer
It seems the consensus is that we will reach level 3 pretty quickly given the pace of development in the past 2 years. Not sure about 4 but I’d say in 10 years we’ll be there.
Workaccount2
I actually know a professional translator and while a year ago he was full of worry, he now is much more relaxed about it.
It turns out that like art, many people just want a human doing the translation. There is a strong romantic element to it, and it seems humans just have a strong natural inclination to only want other humans facilitating communication.
insane_dreamer
I’ve done freelance translating (not my day job) for 20 years. What you describe is true for certain types of specialized translations, particularly anything that is literary in nature. But that is a very small segment. The vast majority of translation work is commercial in nature and for that companies don’t care whether a human or machine did it.
arrowsmith
How do they know that a human is doing the translation? What's to stop someone from just c&ping the text into an LLM, giving it a quick proofread, then sending it back to the client and saying "I translated this"?
Sounds like easy money, maybe I should get into the translation business.
aksosnckckd
The hard part of development isn’t converting an idea in human speak to idea in machine speak. It’s formulating that idea in the first place. This spans all the way from high level “tinder for dogs” concepts to low level technical concepts.
Once AI is doing that, most jobs are at risk. It’ll create robots to do manual labor better than humans as well.
insane_dreamer
Right. But it only takes 1 person, or maybe a handful, to formulate an idea that might take 100 people to implement. You will still need that one person but not the 100.
weatherlite
> It'll happen gradually over time
How much time? I totally agree with you but being early is the same as being wrong as someone clever once said. There's a huge difference between it happening in less than 5 years like Zuckerberg and Sam Altman are saying and it taking 20 more years. If the second scenario is what happens me and many people on this thread can probably retire rather comfortably, and humanity possibly has enough time to come up with a working system to handle this mass change. If the first scenario happens it's gonna be very very painful for many people.
jajko
20 years for real replacement, maybe. But the change will be roughly gradual looking at the whole market, even if it's composed of many smaller jumps. Top management at companies are now itching for that promised paradise of minimal IT with just a few experts. Then comes the inevitable sobering up, but the direction is clear.
I wouldn't be considering programming if I were choosing university studies now. With those smarts, many other fields look more stable, albeit the demand curves and how comfy the later years of a career look are very different (maybe lawyers or doctors; for blue-collar workers some trades, though look at the long-term health effects, i.e. back or knee issues).
SirFatty
Zuckerberg said it.
https://www.inc.com/kit-eaton/mark-zuckerberg-plans-to-repla...
swiftcoder
In the ~8 years since I worked there, Zuckerberg announced that we'd all be spending our 8-hour workdays in the Metaverse, and when that didn't work out, he pivoted to cryptocurrency.
He's just trend-chasing, like all the other executives who are afraid of being left behind as their flagship product bleeds users...
65
We gotta put AI Crypto in the Blockchain Metaverse!
cma
Have they bled users?
icepat
Zuckerberg, as always, is well known for making excellent business decisions that lead to greater sector buy in. The Metaverse is going great.
scarface_74
On the other hand, Instagram has been called one of the greatest acquisitions of all time only below the Apple/Next acquisition.
falcor84
Really, that's what you're going with: arguing against the business acumen of the world's second-richest person, and the only one at that scale with individual majority control of their company?
As for the Metaverse, it was always intended as a very long-term play which is very early to be judged, but as an owner of a Quest headset, it's already going great for me.
burkaman
Obviously the people developing AI and spending all of their money on it (https://www.reuters.com/technology/meta-invest-up-65-bln-cap...) are going to say this. It's not a useful signal unless people with no direct stake in AI are making this change (and not just "planning" it). The only such person I've seen is the Gumroad CEO (https://news.ycombinator.com/item?id=42962345), and that was a pretty questionable claim from a tiny company with no full-time employees.
causal
Planning to and succeeding at are very different things
SirFatty
I'd be willing to bet that "planning to" means the plan is being executed.
https://www.msn.com/en-us/money/other/meta-starts-eliminatin...
makerofthings
Part of my work is rapid prototyping of new products and technology to test out new ideas. I have a small team of really great generalists. Two people have left over the last year and I didn't replace them, because the existing team + ChatGPT can easily take up the slack. So that's two people who didn't get hired who would have been without ChatGPT.
ActionHank
There is little evidence that AI is replacing engineers, but there is a whole lot of evidence that shareholders and execs really love the idea and are trying every angle to achieve it.
chubot
The funny thing is that "replacing engineers" is framed as cutting costs
But that doesn't really lead to any market advantage, at least for tech companies.
AI will also enable your competitors to cut costs. Who thinks they are going to have a monopoly on AI, which would be required for a durable advantage?
---
What you want to do is get more of the rare, best programmers -- that's what shareholders and execs should be wondering about
Instead, those programmers will be starting their own companies and competing with you
insane_dreamer
> AI will also enable your competitors to cut costs.
which is why it puts pressure on your own company to cut costs
it's the same reason why nearly all US companies moved their manufacturing offshore; once some companies did it, everyone had to follow suit or be left behind due to higher costs than their competitors
TheOtherHobbes
If this works at all, they'll be telling AIs to start multiple companies and keeping the ones that work best.
But if that works, it won't take long for "starting companies" and "being a CEO" to look like comically dated anachronisms. Instead of visual and content slop we'll have corporate stonk slop.
If ASI becomes a thing, it will be able to understand and manipulate the entirety of human culture - including economics and business - to create ends we can't imagine.
t-writescode
> Instead, those programmers will be starting their own companies and competing with you
If so, then why am I not seeing a lot of new companies starting while we're in this huge downturn in the development world?
Or, is everyone like me and trying to start a business with only their savings, so not enough to hire people?
ryandrake
What's the far future end-state that these shareholders and execs envision? Companies with no staff? Just self-maintaining robots in the factory and AI doing the office jobs and paperwork? And a single CEO sitting in a chair prompting them all? Is that what shareholders see as the future of business? Who has money to buy the company's products? Other CEOs?
reverius42
Just a paperclip maximizer, with all humans reduced to shareholders in the paperclip maximizer, and also possibly future paperclips.
chasd00
> execs really love the idea and are trying every angle to achieve it.
reminds me of the offshoring hype in the early 2000s. Where it worked, it worked well, but it wasn't the final solution for all of software development that many CEOs wanted it to be.
Nasrudith
Yep. It has the same rhyme of the worst case being 'wishes made by fools' too where they don't realize that they themselves don't truly know what to ask for, so getting exactly what they asked for ruins them.
only-one1701
If the latter is the case, then it's only a matter of time. Enshittification, etc.
iainctduncan
I can tell you from personal experience that investors are feeling pressure to magically reduce head count with AI to keep up with the joneses. It's pretty horrifying how little understanding or information some of the folks making these decisions have. (I work in tech diligence on software M&A and talk to investment committees as part of the job)
3s
For a lot of tasks like frontend development I’ve found that a tool like Cursor can get you pretty far without much prior knowledge. IMO (and in my experience) many tasks that previously required hiring a programmer or designer with knowledge of the latest frameworks can now be handled by one motivated “prompt engineer” and some patience
daveguy
The deeper it gets you into code without prior knowledge the deeper it gets you into debug hell.
I assume the "motivated prompt engineer" would have to already be an experienced programmer at this point. Do you think someone who has only had an intro to programming / MBA / etc could do this right now with tools like cursor?
goosejuice
I love cursor, but yeah no way in hell. This is where it chokes the most and I've been leaning on it for non trivial css for a year or more. If I didn't have experience with frontend it would be a shit show. If you replaced a fe/designer with a "prompt engineer" at this stage it would be incredibly irresponsible.
Responsiveness, cohesive design, browser security, accessibility and cross browser compatibility are not easy problems for LLMs right now.
anarticle
Feels like C-suite thinks if they keep saying it, it will happen. Maybe! I think more likely programmers are experiencing a power spike.
I think it's a great time to be small, if you can reap the benefits of these tools to deliver EVEN FASTER than large enterprises. Aider and a couple of Mac minis and you can have a good time!
pyrale
We have fired all our programmers.
However, the AI is hard to work with, it expects specific wording in order to program our code as expected.
We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.
phren0logy
I think people aren't getting your joke.
smitelli
The AI that replaced the people, however, is in stitches.
eimrine
Now we are!
GuB-42
> We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.
Also known as programmers.
The "AI" part is irrelevant. Someone with expertise in transmitting specifications to a computer is a programmer, no matter the language.
EDIT: Yep, I realized that it could be the joke, but reading the other comments, it wasn't obvious.
philipov
whoosh! (that's the joke)
HqatsR
Yes, the best way is to type the real program completely into the AI, so that ClosedAI gets new material to train on, the AI can make some dumb comments but the code works.
And the manager is happy that filthy programmers are "using" AI.
kamaal
>>However, the AI is hard to work with, it expects specific wording in order to program our code as expected.
Speaking English to make something is one thing, but speaking English to modify something complicated is absolutely something else. And I'm pretty sure it involves more or less the same effort as writing the code itself. Of course, regression-testing something like this is not for the faint-hearted.
aleph_minus_one
> We have hired people with expertise in the specific language needed to transmit our specifications to the AI with more precision.
These people are, however, not experts in pretending to be obedient lackeys.
SketchySeaBeast
Hey! I haven't spent a decade of smiling through the pain to be considered an amateur lackey.
markus_zhang
Actually I think that's the near future, or close to it.
1. Humans also need specific wording in order to write the code that stakeholders expect. A lot of people laugh at AI because they think gathering requirements is a human privilege.
2. On the contrary, I don't think people need to hire AI interfacers. Instead, business stakeholders are far more interested in interfacing with the AI themselves, simply because they just want to get things done instead of filing a ticket for us. Some of them are going to be good interfacers with proper integration -- and yes, we programmers are helping them get there.
Side note: I don't think you are going to hear someone shouting that they are going to replace humans with AI. It starts like this: people integrate AI into their workflow, lay off 10%, and see if AI helps fill the gap so they can freeze hiring. Then they lay off 10% more.
And yes, we programmers are helping the business do that, proudly and with a smile.
Good luck.
ImaCake
Your argument depends on LLMs being able to handle the complexity that is currently the MBA -> dev interface. I suspect it won't really solve it, but its ability to facilitate and simplify that interface will be invaluable.
I'm not convinced the people writing specs are capable of writing them well enough that an LLM can replace the human dev.
ryanjshaw
What job title are you thinking of using?
pyrale
Speaker With Expertise
amarcheschi
Soft Waste Enjoyer
yoyohello13
Tech Priest
__MatrixMan__
Technomancer. AI is far more like the undead than like a deity, at least for now.
beepboopboop
AI Whisperer
kayge
Full Prompt Developer
silveraxe93
oftwaresay engineeryay
re-thc
> it expects specific wording in order to program our code as expected
The AI complained that the message did not originate from a programmer and decided not to respond.
bryukh
"Let AI replace programmers" is the new "Let’s outsource everything to <some country>." Short-term cost savings, long-term disaster.
dogibog
[flagged]
isakkeyten
[flagged]
physicsguy
There's nothing discriminatory about it, it's the same if you outsource things within your own country except the price is higher. Contractors have a totally different way of working because they're not really interested in the long term of a project beyond being retained. If they code something in such a way that causes an issue that takes time to fix later then great - more hours we can charge the client for.
Outsourcing abroad is more difficult because of cultural differences though. Having worked with outsourced devs in India, I found that we got a lot of nodding in meetings when asked if they understood, avoiding saying no, and then it became clear when PRs came in that they didn't actually understand or do what they had been asked to do.
philipov
More important than cultural differences is timezone differences. Communication and collaboration are harder when you only have a couple of hours of overlap between your working day and theirs. Much harder if you have no overlap at all. This isn't even a feature of outsourcing - it's a challenge for any globally distributed team.
jmcgough
Certainly there are competent engineers in every country, but I think what they are referencing is that back in the 90s and 2000s there were a lot of fears from US engineers that they would be replaced by less expensive engineers in other countries, which was attempted by some companies. Ultimately a number of these efforts failed to work well for the company, due to communication barriers and time zone differences.
toolz
Every job in the world is discriminatory if you take the less potent definition of the word. That's why we have job interviews, to explicitly discriminate. I presume you mean "discriminate in a bad way" but given the context I have no idea what that "bad way" is. Outsourcing has costs outside of just the up front payments, that isn't a secret and it has very little to do with technical expertise. Most software driven companies don't fall apart because of poorly implemented algorithms, they are more likely to do so because the humans have a difficult time interfacing in efficient ways and understanding and working towards the same goal together.
You can't just expect people from other countries to communicate as effectively as people who grew up right down the street from each other. Yes, it's objectively discriminatory, but not for hostile reasons.
heyoni
It’s not discriminatory at all! Nor is it even the point the OP is trying to make. Taking a significant number of jobs and outsourcing them overnight will quickly exhaust the talent pool in said country. It’s shortsighted and stupid because it assumes that there is an army of developers just sitting around, standing by, waiting for the next western tech company to give them high-paying remote jobs. A large portion of that talent pool is already reserved by the biggest corporations.
Build up to it and foster growth in your overseas teams and you’ll do well. Thinking you can transform your department overnight _is_ a great way to boost your share price, cash out on a fat payday and walk away before your product quality tanks.
bryukh
> So no other country in the world can write code as good as wherever you are from?
I didn't say this -- I think that's your take. Even more -- I'm such an "outsourced" software developer myself, working for US and EU companies. My take is that by overusing outsourcing in the long term, you can lose local education: "we can just hire from ... so why do we need to teach our own?" -- I've seen it already, even at an in-country scale.
Retric
It’s not about the ability to write code, it’s about the ability to communicate ideas back and forth. Even just a few time zones is a real issue let alone any linguistic or cultural issues.
MonkeyClub
GP sounds shortsighted on first take, but consider how outsourcing is good and cheap for the companies, but in the long run creates huge unemployment pools in the original country.
Negative consequences can also be social; it's not just, say, a lowering of product quality.
c03
Modern development is not as much about writing "good code", but just as much about good communication. There is a very real risk of losing good communication when outsourcing.
zoogeny
I'm not sure why people are so sure one way or the other. I mean, we're going to find out. Why pretend you have a crystal ball and can see the future?
A lot of articles like this just want to believe something is true and so they create an elaborate argument as to why that thing is true.
You can wrap yourself up in rationalizations all you want. There is a chance firing all the programmers will work. Evidence beats argument. In 5 years we'll look back and know.
It is actually probably a good idea to hedge your bets either way. Use this moment to trim some fat, force your existing programmers to work in a slightly leaner environment. It doesn't feel nice to be a programmer cut in such an environment but I can see why companies might be using this opportunity.
uh_uh
This. Articles like this are examples of motivated reasoning and seem to be coming from a place of insecurity by programmers who feel their careers threatened.
bigfatkitten
For each programmer who actually spends their time on complex design work or fixing difficult bugs, there are many more doing what amounts to clerical work. Adding a new form here, fiddling with a layout there.
It is the latter class who are in real danger.
awkward
The new form and layout are what the business wants and can easily articulate. What they need is people who understand whether the new form needs to both be stored in the local postgres system or if it should trigger a Kafka event to notify other parts of the business.
The AI only world is still one where the form and layout get done, but what happens to that data afterward?
smeeger
the AI will happily do all that for you…
saalweachter
Layout fiddlers make changes people can see and understand.
If your job is massaging data for nebulous purposes, using nebulous means and getting nebulous results, such that you'd basically need to be another person doing the exact same thing to understand its value, there's going to be a whole lot of management saying "Do we really need all those guys over there doing that? Can't we just have like one guy and a bunch of new AI magic?"
soco
But the question is, are we there yet? I have yet to hear of an AI bot that can eat up a requirement and add said new form in the right place, or fiddle with a layout. Do you know of any? So all those big promises you read right now are outright lies. When will we reach that point? I don't do gambling. But we are not there, regardless of what the salespeople or fancy journalists might be claiming all day long.
fakedang
Cursor already does the latter task pretty well, as I'm sure other AI agents do. AI struggles only when it's something complex, like dealing with a geometric object, plugging together infra, or program logic.
Last year, I built a reasonably complicated e-commerce project wholly with AI, using the zod library and some pretty convoluted e-commerce logic. While it was a struggle, I was able to build it out in a couple of weeks. And I had zero prior experience even building forms in react, forget using zod.
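(For context on what that zod-style work amounts to: zod itself isn't reproduced here, and these field names and rules are hypothetical rather than from the actual project, but the core idea is a parse function that either returns typed data or rejects the input.)

```typescript
// A minimal hand-rolled sketch of schema validation in the spirit of zod.
// Field names ("email", "quantity") and rules are illustrative assumptions.
type Order = { email: string; quantity: number };

function parseOrder(input: unknown): Order | null {
  // Reject anything that isn't a non-null object.
  if (typeof input !== "object" || input === null) return null;
  const o = input as Record<string, unknown>;
  // Email must be a string containing "@" (a deliberately loose check).
  if (typeof o.email !== "string" || !o.email.includes("@")) return null;
  // Quantity must be a positive integer.
  if (
    typeof o.quantity !== "number" ||
    !Number.isInteger(o.quantity) ||
    o.quantity <= 0
  ) return null;
  return { email: o.email, quantity: o.quantity };
}

console.log(parseOrder({ email: "a@b.com", quantity: 2 })); // → an Order object
console.log(parseOrder({ email: "nope", quantity: 0 }));    // → null
```

With zod the same shape would be a `z.object({...})` plus `safeParse`, and the LLM is good at generating exactly this kind of mechanical schema code.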
Now shipping it to production? That's something AI will struggle at, but humans also struggle at that :(
franktankbank
> Now shipping it to production? That's something AI will struggle at, but humans also struggle at that :(
Why? Just because that's where the rubber hits the road? It's a different skillset, but AI can do systems design too, and probably direct a knowledgeable but unpracticed implementer.
bigfatkitten
We're at a point where companies have figured out that they can hire someone on a clerical wage to prompt an AI to do this grunt work, rather than a CS grad on $100k a year.
greentxt
Pizza maker will not be the first job automated away. Nor will janitor. Form fiddlers are cheap and can be blamed. AI fiddlers can be blamed too but are not cheap, yet.
elric
That kind of boring busywork can be eliminated by using better abstractions. At the same time, it's a useful training ground for junior developers.
pentel-0_5
[flagged]
sunami-ai
I agree with the statement in the title.
Using AI to write code does two things:
1. Everything seems to go faster at first, until you have to debug it and the AI can't fix the issue. It's hard enough to debug code you wrote yourself. If you work with code written by others (a team environment), maybe you're used to this, but not being able to quickly debug code you're responsible for will shoot you in the foot.
2. Your brain's neurons in charge of code production will naturally be reassigned to other cognitive tasks. It's not like riding a bicycle or swimming, which once learned are never forgotten. It's more like advanced math, which you can forget if you don't practice.
Short term gain; long term pain.
dkjaudyeqooe
Essentially: people are vastly overestimating AI ability and vastly underestimating HI ability.
Humans are supremely adaptable. That's what our defining attribute as a species is. As a group we can adapt to more or less any reality we find ourselves in.
People with good minds will use whatever tools they have to enhance their natural abilities.
People with less good minds will use whatever tools they have to cover up their inability until they're found out.
smeeger
you are objectively wrong because you dismiss out of hand the possibility that AI will continue to improve. the heuristics are against you and you have no reasoning or evidence behind your assertion that we have already hit the ceiling. you're arrogant. just confront it already
guccihat
When the AI dust settles, I wonder who will be left standing among the groups of developers, testers, scrum masters, project leaders, department managers, compliance officers, and all the other roles in IT.
It seems the general sentiment is that developers are in danger of being replaced entirely. I may be biased, but it seems not to be the most likely outcome in the long term. I can't imagine how such companies will be competitive against developers who replace their boss with an AI.
__MatrixMan__
> I can't imagine how such companies will be competitive against developers who replace their boss with an AI.
Me neither, but I think it'll be a gratifying fight to watch.
phist_mcgee
Please take the scrum masters first.
Havoc
That’s what people said about outsourcing too. The corporate meat grinder keeps rolling forward anyway.
Every single department and person believes the world will stop turning without them but that’s rarely how that plays out.
rossdavidh
You have a point, all people do like to think they're more irreplaceable than they are, but the last round of offshoring of programmers did in fact end up with the companies trying to reverse course a few years later. GM was the most well-known example of this, but I worked at several organizations that found that getting software done on the other side of the planet was a bad idea, and ended up having to reverse course.
The core issue is that the bottleneck step in software development isn't actually the ability to program a specific thing, it's the process of discovering what it is we actually want the program to do. Having your programmers AT THE OFFICE and in close communication with the people who need the software, is the best way to get that done. Having them on the other side of the planet turned out to be the worse way.
This is unintuitive (to programmers as well as the organizations that might employ them), and therefore they have to discover it the hard way. I don't think this is something LLMs will be good at, now or ever. There may come a day when neural networks (or some other ML) will be able to do that, but that day is not near.
marcosdumay
> That’s what people said about outsourcing too.
And they were right... and a lot of companies fully failed because of it.
And the corporate meat grinder kept rolling forward anyway. And the decision makers were all shielded from the consequences of their incompetence anyway.
When the market is completely corrupted, nothing means anything.
beretguy
I once worked for a company that hired 70 developers from Vietnam to work on their product. Then one day they decided to hire ~5 developers from the US. These 5 developers did the job faster, better, and cheaper than the 70 Vietnamese developers. So they fired all the overseas developers and doubled the size of the US team to about a dozen.
rightbyte
When comparing sweatshops to a proper in-house team, I think the ingroup and outgroup can be left unspecified. I fear I'll one day wake up and find I've become a jingoist.
Espressosaurus
I believe AI is the excuse, but that this is just to cover another wave of outsourcing.
netcan
Our ability to predict technological "replacement" is pretty shoddy.
Take banking for example.
ATMs are literally called "teller machines." Internet banking is a way of "automating banking."
Besides those, every administrative aspect of banking went from paper to computer.
Do banks employ fewer people? Is it a smaller industry? No. Banks grew steadily over these decades.
It's actually shocking how little network enabled PCs impacted administrative employment. Universities, for example, employ far more administrative staff than they did before PC automated many of their tasks.
At one point (during and after dotcom), PayPal and suchlike were threatening to "turn billion dollar businesses into million dollar businesses." Reality went in the opposite direction.
We need to stop analogizing everything in the economy to manufacturing. Manufacturing is unique in its long-term tendency toward efficiency. Other industries don't work that way.
lnrd
Is there data about this, or is it just your perception? Because my perception is different: in my country, countless bank branches closed and a lot of banking jobs no longer exist thanks to widespread home banking (which I also know differs from country to country). This also matches the tales of people who had careers in banking and now describe how many fewer banking jobs there are compared to when they joined in the '80s.
I wouldn't be sure that growth as an industry/business is correlated with growth in jobs, either.
Maybe I'm wrong, I would love to see some data about it.
saalweachter
Googling around, it looks like in the US the number of tellers has declined by 28% over the last 10 years, and is forecast to decline another 15% over the next 10. Earlier data was not easy enough to find in the time I'm willing to spend.
jajko
Bank branches for physical contact have decreased everywhere; Covid was the last nail in the coffin for many. In the meantime, back-office jobs rose or even exploded: more and more complex IT, way more regulation from everywhere.
Not sure what the overall numbers look like; I'd expect a slight decrease overall, but IT definitely grew. Those are really not the same type of jobs, although in the public's mind they're all 'bankers', since all are bank employees.
therockhead
> Do banks employ fewer people? Is it a smaller industry? No. Banks grew steadily over these decades.
Profits may have grown, but in Ireland at least, the number of branches has declined drastically.
Draiken
Yes, banks employ fewer people. In my country there are now account managers handling hundreds of clients virtually. Most of the local managers got fired.
I find it easy to say from our privileged position that "tech might replace workers but it'll be fine".
Even if all the replaced people aren't unemployed, salaries go down and standards of living for them fall off a cliff.
Tech innovation destroys lives in our current capitalist society because only the owners get the benefits. That's always been true.
marcosdumay
> Even if all the replaced people aren't unemployed, salaries go down and standards of living for them fall off a cliff.
Salaries of the remaining people tend to go up when that happens. And costs tend to go down for the general public.
Owners are actually supposed to only see a temporary benefit during the change, and then go back to what they had before. If that's not how things are happening around you¹, consult with your local market-competition regulator why they are failing to do their job.
1 - Yeah, I know it's not how things are happening around you. That doesn't change the point.
Draiken
I'm sorry but I have to call BS.
>Salaries of the remaining people tend to go up when that happens.
You're telling me with a straight face that after a company replaces part of its workforce with tech/automation, salaries go up? Really? Please show me some data on that, because every single graph of salaries I've ever seen must be wrong then. We've had an enormous amount of innovation and breakthroughs in recent decades, but weirdly enough salaries have stagnated. If this were true, they should rise every time we offshore some work or gain more efficient technology.
The company can have 50000% growth and salaries will NOT go up. They basically never go up unless the companies want to retain an employee that's at risk of leaving and the replacement cost is high.
The objective of a company is to give money to its owners, nothing else. Salaries are viewed as a cost, so they will never willingly increase their costs unless it's absolutely necessary.
> And costs tend to go down for the general public.
Assuming there aren't monopolies involved and it's a commodity, yes, that sometimes happens. If there's any monopoly involved, unfortunately companies will simply pocket the difference.
aleph_minus_one
> Tech innovation destroys lives in our current capitalist society because only the owners get the benefits.
If you want to become a (partial) owner, buy stocks. :-)
fanatic2pope
The market, in its majestic equality, allows the rich as well as the poor to buy stocks, trade bitcoin, and to own property.
Draiken
Do I really own Intel/Tesla/Microsoft by buying their stock? No I don't.
I can't influence anything on any of these companies unless I was already a billionaire with a real seat at the table.
Even on startups where, in theory, employees have some skin in the game, it's not really how it works is it? You still can't influence almost anything and you're susceptible to all the bad decisions the founders will make to appease investors.
Call me crazy but to say I own something, I have to at least be able to control some of it. Otherwise it's wishful thinking.
There's such a huge disconnect between people reading headlines and developers who are actually trying to use AI day to day in good faith. We know what it is good at and what it's not.
It's incredibly far away from making any significant change in a mature codebase. In fact, trying to use it for this has made me so bearish on the technology that I think it will take some other breakthrough, something beyond LLMs. It just doesn't feel right around the corner. Completing small chunks of mundane code, explaining code, doing very small mundane changes? Very good at that.