
A.I. is prompting an evolution, not extinction, for coders

toprerules

People are absolutely insane with their takes on AI replacement theory. The complexity of our stacks has grown exponentially since the 70s. Very few people actually comprehend how many layers of indirection, performance, caching, etc. are between their CRUD web app and bare metal these days.

AI is going to increase the rate of complexity 10 fold by spitting out enormous amounts of code. This is where the job market is for developers. Unless you 100% solve the problem of feeding it every single third-party monitoring tool, every log, every bit of compiler output, system stats down to the temperature of the RAM - and then make it actually understand how to fix said enormous system (it can't do this even if you did give it the context, by the way) - AI will only increase the number of engineers you need.

hn_throwaway_99

> AI is going to increase the rate of complexity 10 fold by spitting out enormous amounts of code.

This is true, and I am (sadly, I'd say) guilty of it. In the past, for example, I'd be much more wary about having too much duplication. I was working on a Go project where I needed multiple levels of object mapping (e.g. entity objects to DTOs, etc.), and the LLM just spit out the answer in seconds (correct, I'd add), even though it was lots and lots of code. In the past I would have written a more generic solution to avoid writing so much boilerplate.
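
To illustrate the kind of thing I mean, here's a made-up Go example (hypothetical types, not my actual project):

    package main

    import "fmt"

    // Hypothetical types in the spirit of what I described -- the LLM
    // happily emits one of these mapping layers per entity, per level.
    type UserEntity struct {
        ID    int64
        Name  string
        Email string
    }

    type UserDTO struct {
        ID    int64  `json:"id"`
        Name  string `json:"name"`
        Email string `json:"email"`
    }

    // Multiply this by every entity and every mapping level; the old me
    // would have written one generic mapper instead of N of these.
    func toUserDTO(e UserEntity) UserDTO {
        return UserDTO{ID: e.ID, Name: e.Name, Email: e.Email}
    }

    func main() {
        fmt.Printf("%+v\n", toUserDTO(UserEntity{ID: 1, Name: "Ann", Email: "ann@example.com"}))
    }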

I see where the evolution of coding is going, and as a late-middle-aged developer it has made me look for the exits. I don't disagree with the business rationale of the direction, and I certainly have found a ton of value in AI (e.g. I think it makes learning a new language a lot easier). But it makes programming so much less enjoyable for me personally. I feel like it's transformed the job from "author" more to "editor", and for me, the nitty-gritty details of programming were the fun part.

Note I'm not making any broad statement about the profession generally, I'm just stating with some sadness that I don't enjoy where the day-to-day of programming is heading, and I just feel lucky that I've saved up enough in the earlier part of my career to get out now.

aerhardt

I don't always enjoy programming in the small, and I still feel that AIs leave plenty of room for architecture, design, and refactoring. For me it's been an absolute boon; I'm enjoying building more than ever. At any rate, it's undeniably transformative, and I can see many people not enjoying the end state.

outside1234

Really? I sort of feel the opposite. I am mid-career as well and HIGHLY TIRED of writing yet another set of boilerplate to do a thing, or chasing down some syntax error in the code. The fact that AI will now do this for me has given me a lot more energy to focus on the higher-level thinking about how it all fits together.

codr7

So instead of being creative and finding ways to avoid duplication, you look for a way to make copies faster.

That's one way to solve the problem.

Not the way I'm looking for when hiring.

hn_throwaway_99

Right now you're getting downvoted, but I don't disagree with you. It's not hard for me to see why lots of people like how AI helps them code (again, I find it helpful in tons of areas), so I think it's more of a personal-preference kind of thing. It's also probably that I'm older (nearing 50), and I think there is a lot of good research that a fundamental shift happens in most people's brains in their 40s that makes it more difficult to adopt major new ways of doing things (and I've found that in myself).

I think the only thing that perhaps I don't totally agree with is the idea that AI just lets you focus on a higher level of thinking while it "takes care of the details". AI is still the leakiest of abstractions, and while coding LLMs have gotten much better over the past 2 years I can't just trust it, so I still have to review every line that goes to prod. I just find that task much less enjoyable ("editing") than being the author of code. And heck, I'm someone that really enjoys doing code reviews. I think with code reviews my mind is in a state that I'm helping to mentor another human, and I love that aspect of it. I'm not so energetic about helping to train our robot overlords.

voidhorse

I do not look forward to the amount of incompetence and noise that increasing adoption of these tools will usher in. I've already had to deal with a codebase in which it was clear that the author fundamentally misunderstood what a trie data structure was. I was also having a difficult time trying to talk to them about the implementation and their misconceptions. Lo and behold, I eventually found out the reason they chose this data structure was that they had asked ChatGPT what to do; they never actually understood, conceptually, what they were doing or using. This made the whole engagement with the code and the process of fixing things way harder. Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...
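
For reference, the structure itself isn't complicated - a minimal sketch of an actual trie (my own sketch in Go, not their code):

    package trie

    // A node is keyed one rune per level, so words that share a prefix
    // share a path -- that prefix sharing is the whole point of a trie.
    type node struct {
        children map[rune]*node
        terminal bool // a complete word ends at this node
    }

    func newNode() *node {
        return &node{children: map[rune]*node{}}
    }

    // insert walks (and extends) the path for word, rune by rune.
    func (n *node) insert(word string) {
        cur := n
        for _, r := range word {
            next, ok := cur.children[r]
            if !ok {
                next = newNode()
                cur.children[r] = next
            }
            cur = next
        }
        cur.terminal = true
    }

    // hasPrefix reports whether any inserted word starts with prefix --
    // the cheap prefix lookup is the reason to reach for this structure.
    func (n *node) hasPrefix(prefix string) bool {
        cur := n
        for _, r := range prefix {
            next, ok := cur.children[r]
            if !ok {
                return false
            }
            cur = next
        }
        return true
    }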

itsoktocry

>Not only did I now have to fix the bunk code, I also had to spend significant time disabusing the author of their own misunderstandings...

People with your attitude will be the first to be replaced.

Not because your code isn't as good as an AI's; maybe it's even better. But because your personality makes you a bad teammate.

CGamesPlay

So, AI created a job opportunity for you?

voidhorse

I suppose that's one way to look at it. But it's a sort of "bs", unproductive job - fixing up poor outcomes - and overall a less efficient scenario than experts doing it right in the first place. Worse, there was already a readily available implementation that could have been used here rather than a hand-rolled, half-baked AI output. In that respect, the code itself was pure noise and the whole activity was predominantly a waste of my time.

zamalek

The problem is the asinine interviews we are going to have to tolerate in order to screen against AI-kiddies. You think HackerRank is bad? Just you wait...

saaaaaam

That’s called consultancy and you can bill chunky rates by the hour. You should be rubbing your hands with glee!

And then work out how to do code review and fixing using AI, lightly supervised by you so that you can do it all whilst walking the dog or playing croquet or something.

perrygeo

I've yet to see an LLM response or an LLM-generated diff that suggests removing or refactoring code. Every AI solution is additive: new functions, new abstractions added at every step. Increased complexity is all but baked into the system.

Software engineering jobs involve working in a much wider solution space - writing new code is but one intervention among many. I hope the people blindly following LLM advice realize their lack of attention to detail and "throw new code at it" attitude comes across as ignorant and foolish, not hyper-productive.

4b11b4

Ask for a refactor...

Ask for multiple refactors and their trade-offs

plagiarist

I agree they cannot handle a complex codebase at all at this moment in time.

But I think I would rather just end my career instead of transitioning into fixing enormous codebases written by LLMs.

JTyQZSnP3cQGa8B

The complexity has grown but not the quality. We went from writing Ada code with contracts and all sorts of protections, with well-thought-out architectures, to random crap written in ReactJS on websites that now weigh more than a full install of Windows 95.

I’m really ashamed of what SWE has become, and AI will increase that tenfold as you say. We shouldn’t cheer that on, especially since I will have to debug all that crap.

And if it does increase the number of engineers, they won’t be good ones, due to a lack of education (I already experience this at work). But anyway, I don’t believe it: managers will not waste more money on us; that would go against modern capitalism.

toprerules

Oh yes, I'm with you. I didn't say I liked it. I am a low-level munger and I like it that way - the lowest, oldest layers of the stack tend to be the pieces that are well written and stand the test of time. Where I see AI hitting is the upper, devil-may-care layers of the application stack, which will be an absolute hellscape to deal with as a competent engineer.

bigbones

I expect pretty much the opposite to happen: it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI. If a machine can act more reliably given simplified inputs, at a fraction of the cost of the equivalent human labour, the system will adjust to accommodate the machine - it always has.

The most obvious example of this already happening is in how function calling interfaces are defined for existing models. It's not hard to imagine that principle applied more generally, until human intervention to get a desired result is the exception rather than the rule as it is today.
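
For the unfamiliar, a tool definition is already a machine-first interface. Roughly this shape, sketched in Go (field names approximate, in the spirit of OpenAI-style APIs; not any specific SDK):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Approximate shape of a function-calling ("tool") definition; real
    // APIs differ in details, but the idea is the same: the interface
    // handed to the model is a machine-first schema, not prose docs.
    type property struct {
        Type        string `json:"type"`
        Description string `json:"description"`
    }

    type toolDef struct {
        Name        string              `json:"name"`
        Description string              `json:"description"`
        Parameters  map[string]property `json:"parameters"`
    }

    func main() {
        weather := toolDef{
            Name:        "get_weather",
            Description: "Look up the current weather for a city",
            Parameters: map[string]property{
                "city": {Type: "string", Description: "City name, e.g. Berlin"},
            },
        }
        out, _ := json.MarshalIndent(weather, "", "  ")
        fmt.Println(string(out)) // this JSON, not documentation, is the interface
    }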

I spent most of the past 2 years in "AI cope" mode and wouldn't consider myself a maximalist, but it's impossible not to see already from the nascent tooling we have that workflow automation is going to improve at a rapid and steady rate for the foreseeable future.

JumpCrisscross

> it makes sense for languages, stacks and interfaces to become more amenable to interfacing with AI

The theoretical advance we're waiting for in LLMs is auditable determinism. Basically, the ability to take a set of prompts and have a model recreate what it did before.

At that point, the utility of human-readable computer languages sort of goes out the door. The AI prompts become the human-readable code, the model becomes the interpreter and it eventually, ideally, speaks directly to the CPUs' control units.

This is still years--possibly decades--away. But I agree that we'll see computer languages evolving towards auditability by non-programmers and reliability in parsing by AI.

SkiFire13

> The theoretical advance we're waiting for in LLMs is auditable determinism.

Non-determinism in LLMs is currently a feature, introduced consciously. Even if it weren't, you would have to lock yourself to a specific model, since any future update would necessarily be a possibly breaking change.
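
To be concrete, the temperature knob is an explicit choice. A toy sketch in Go (not any real implementation) of how sampling temperature trades determinism for variety:

    package main

    import (
        "fmt"
        "math"
        "math/rand"
    )

    // Toy temperature sampling over token logits. A low temperature
    // sharpens the distribution toward the argmax (near-deterministic);
    // a high one flattens it (more varied output).
    func sample(logits []float64, temp float64, rng *rand.Rand) int {
        weights := make([]float64, len(logits))
        var sum float64
        for i, l := range logits {
            weights[i] = math.Exp(l / temp)
            sum += weights[i]
        }
        r := rng.Float64() * sum
        for i, w := range weights {
            r -= w
            if r < 0 {
                return i
            }
        }
        return len(logits) - 1
    }

    func main() {
        logits := []float64{2.0, 1.0, 0.1}
        rng := rand.New(rand.NewSource(42)) // fixed seed: reproducible here, unlike hosted models
        fmt.Println(sample(logits, 0.1, rng), sample(logits, 2.0, rng))
    }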

> At that point, the utility of human-readable computer languages sort of goes out the door.

Its utility is having a non-ambiguous language to describe your solution in, one you can audit for correctness. You'll never get this with an LLM, because its very premise is using natural language, which is ambiguous.

bigbones

> The theoretical advance we're waiting for in LLMs is auditable determinism

I think this is a manifestation of machine thinking - the majority of buyers and users of software rarely ask for or need this level of perfection. Noise is everywhere in the natural environment, and I expect it to be everywhere in the future of computing too.

toprerules

You're missing the point: there are specific reasons why these stacks have grown in complexity. Even if you introduce "API for AI interface" as a requirement, you still have to balance it with performance, reliability, interfacing with other systems, and providing all of the information necessary to debug when the AI gets it wrong. All of the same things that humans need apply to AI - the claim for AI isn't that it deterministically solves every problem it can comprehend.

So now we're looking at a good several decades of just getting our human-interfacing systems to adapt themselves to AI, while still requiring all the complexity they currently have. The end result is more complexity, not less.


bigbones

Based on what I've seen so far, I'm thinking a timeline more like 5-10 years in which anything frontend-related has all but evaporated. What value is there in a giant app team grinding for 2 years on the perfect Android app when a user can simply ask for the display they want, and 5 variants of it until they are happy, all in a couple of seconds while sitting in the back of a car? What happens to all the hundreds of UI frameworks when a system as widespread as Android adopts a technology approach like this?

Backend is significantly murkier; there are many tasks it seems unlikely an AI will accomplish any time soon (my toy example so far is inventing and finalizing the next video compression standard). But a lot of the complexity in backend derives from supporting human teams with human styles of work, and only exists due to the steady cashflow generated by organizations extracting tremendous premiums to solve problems in their particular style. I have no good way to explain this - what value is a $500 accounting system backend if models get good enough at reliably spitting out bespoke $15 systems with infinite customizations in a few seconds for a non-developer user? And what of all the technologies whose maintenance was supported by the cashflows generated by that $500 system?

dmix

I wonder if AI is going to reduce the number of JS UIs. AI bots can navigate simple HTML forms much more easily than crazy React code with 10 layers of divs for a single input. It's either that, or people create APIs for everything and document how they relate and interact.

baq

Claude is so good at React that the number of UIs will increase.

theGnuMe

I wonder if anyone is applying AI to COBOL…

Blackthorn

There are no readily available Stack Overflow answers for COBOL, so it'll do about as well there as it does with digital signal processing.

stray

I think IBM is using LLMs to rewrite COBOL code into Java.

agentultra

I'd really like to know what the parameters are. I hear claims like "it saves me an hour a day" or "I'm 30% more productive with AI." What do these figures mean? They seem like proxies for fuzzy feelings.

When I see boring, repetitive code that I don't want to look at my instinct isn't to ignore it and keep adding more boring, repetitive code. It's like seeing that the dog left a mess on your carpet and pretending you didn't see it. It's easier than training the dog and someone else will clean it... right?

My instinct is to fix the problem causing there to be boring, repetitive code. Too much of that stuff and you end up with a great surface area for security errors, performance problems, etc. And the fewer programmers that read that code and try to understand it the more likely it becomes that nobody will understand it and why it's there.

The idea that we should just generate more code on top of the code until the problem goes away is alien to me.

Although it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.

So really, is AI a tool to make us more productive or a tool to remove our bargaining power?

qwertox

> So really, is AI a tool to make us more productive or a tool to remove our bargaining power?

Don't you notice how it makes you more productive, that you can solve problems faster? It would be really odd if not.

And regarding the bargaining power: that's not the other side of the scale, it's a different problem. If your code monkey now gets as good as your average developer, the average developer will have lost some relative value, unless he also ups his game by using AI.

If everyone gets better, why would you see this as something bad that makes us lose "bargaining power"? Because you can no longer put in the minimal effort your employer expects from you? Even then: it's not like AI makes things harder; it makes them better. At least for me, software development has become more enjoyable.

While 5 years ago I was asking myself whether I really wanted to do this for the rest of my career, I now know that I do, with this added help, which takes away much of the tedious stuff like looking up solution snippets on Stack Overflow. Plus, I know that I will deal less and less with writing code, and more and more with managing code solutions offered to me.

itsoktocry

>it makes a lot more sense when I probe into why developers feel like they need to adopt AI -- they're afraid they won't be competitive in the job market in X years.

Amazing. You think the only reason people are using AI is that it's being forced on them?

I honestly feel kinda bad for some people in this thread who don't see the freight train coming.

agentultra

That’s the majority of answers I get when I ask folks. It’s not a huge sample size.

There are a few loud people who think AI programming is the best thing since sliced bread.

What’s the freight train?

johnecheck

General artificial intelligence, duh!

It doesn't matter that today's tools fail to live up to their promises; by 2027 we're going to have AI just as smart as humans! They'll totally be able to make a moderately complex change to an existing code base correctly enough that you DON'T need to spend just as long cleaning up after it as it would've taken you to code it yourself.

Source: trust me bro (also buy my AI product)

mschild

Well, to give a concrete example: I use it to write test cases for the CRUD applications that I sometimes have to work on. Some test cases already exist, and I feed the tests and the actual code, plus additional instructions, into a model and get relatively decent output. We also use a code review bot that we feed repository-relevant instructions to, and it produces decent basic PR comments. It even caught an edge case that 3 other developers didn't consider.
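
Something like this, to give a flavor (a sketch - the handler and names are made up, not our actual code):

    package crud

    import (
        "encoding/json"
        "net/http"
        "net/http/httptest"
        "strings"
        "testing"
    )

    // Stand-in for the real handler (made up for this sketch).
    func createUser(w http.ResponseWriter, r *http.Request) {
        var u struct {
            Name  string `json:"name"`
            Email string `json:"email"`
        }
        if err := json.NewDecoder(r.Body).Decode(&u); err != nil || u.Name == "" || u.Email == "" {
            w.WriteHeader(http.StatusBadRequest)
            return
        }
        w.WriteHeader(http.StatusCreated)
    }

    // The kind of table-driven test I let the model draft, then review.
    func TestCreateUser(t *testing.T) {
        cases := []struct {
            name    string
            payload string
            want    int
        }{
            {"valid user", `{"name":"Ann","email":"ann@example.com"}`, http.StatusCreated},
            {"missing email", `{"name":"Ann"}`, http.StatusBadRequest},
            {"empty body", ``, http.StatusBadRequest},
        }
        for _, tc := range cases {
            t.Run(tc.name, func(t *testing.T) {
                req := httptest.NewRequest(http.MethodPost, "/users", strings.NewReader(tc.payload))
                rec := httptest.NewRecorder()
                createUser(rec, req)
                if rec.Code != tc.want {
                    t.Errorf("got status %d, want %d", rec.Code, tc.want)
                }
            })
        }
    }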

I think AI can be yet another tool that takes some repetitive tasks off my hands. I still obviously check all the code it generated.

TrackerFF

I'm a big user of LLM tools.

The problem, so far, is that they're still... quite unreliable, to say the least. Sometimes I can feed the model files, and it will read and parse the data 100 out of 100 times. Other times, the model seems clueless about what to do and just spits out code for doing it manually, with some vague "sorry, I can't seem to read the file", multiple times, only to start working again.

And then you have the cases where the models dig themselves into some sort of terminal state, or oscillate between 2-3 states they can't get out of - until you fire up a new model and transfer the code to it.

Overall they do save me a ton of time, especially with boilerplate stuff, but very routinely even the most SOTA models will have their stupid moments, or keep trying to do the same thing.

codr7

Are you including the time you spend fighting the model?

crmd

You could be describing the performance of me and most of my friends and colleagues over the past five years.

It’s insane how similar non-deterministic software systems already are to biological ones. Maybe I’ve been wrong and consciousness is a computation.

jdashg

I always thought hacking scenes in sci-fi were unrealistic, but if you're cooking up AI-fortified code lasagna at your endpoints, there's going to be a mishmash of vulnerabilities: expert, robust thought will be spread very thin by the velocity that systemic forces push developers toward.

bigtimesink

> Mark Zuckerberg, Meta’s chief executive, stirred alarm among developers last month when he predicted that A.I. technology sometime this year would effectively match the performance of a midlevel software engineer

Either Meta has tools an order of magnitude more powerful than everyone else, or he's drinking his own koolaid.

FredPret

At some point in the past, tools like Wordpress et al made it easy for the average person to roll out their own website.

This probably increased the overall demand for professional website makers and messed-up-Wordpress-fixers.

Now the argument goes that the average business will roll out their own apps using ChatGPT (amusing / scary), or that big software co's will replace engineers with LLMs.

For this last point, I just don't see how any of the current or near-future models could possibly load enough context to do actual engineering as opposed to generating code.

atlantic

I've found that AI has saved me time consulting Stack Overflow. It combines thorough knowledge of the documentation with a lot of practical experience gleaned from online forums.

It has also saved time producing well-defined functions, for very specific tasks. But you have to know how to work with it, going through several increasingly complex iterations, until you get what you want.

Producing full applications still seems a pipedream at this stage.

itsoktocry

>Producing full applications still seems a pipedream at this stage.

Do you mean like: "write me an app that does XYZ?"

Well, it's a pipedream because you probably couldn't even get a room of developers to agree on how to do it. There are a million ways.

But this isn't really how programmers are expecting to use AI, is it?

fhd2

That's kinda still how I get the most out of it - search, more or less. Claude gives me great starting points from which I can do some refining/confirming searches and documentation lookups. _Starting_ with search feels like a drag now. But the information and code I get is unreliable at least 20% of the time (just a guess, frankly; I did no statistics), so I treat the output as things to try or investigate, rather than things to ship.

You'll probably get a few responses from folks who happily tab-complete their software and don't sweat the details. Some get away with that; I'm generally not in a position where it's OK to not fully understand the system I'm building. There's a lot of stuff that's better to find out during development than in a late-night production debugging session.

monicaliu

New to AI-assisted coding, but I'm finding myself spending a lot of time debugging its convincingly wrong output.

notnullorvoid

I've been choosing not to use most of the AI code assistant stuff for a while now, though I try it every now and then. Each time it's the same outcome: it actively reduces my productivity by a fair amount. I suspect this is due to a mix of the majority of my programming being non-trivial (library building, complex-ish algos) and my being a bit of a perfectionist coder who enjoys programming.

LLMs are useful tools for programming as a kind of search engine and squeaking rubber duck. AI as a programmer is worse than a junior: it's the junior that won't actively learn and improve. I think current AI architecture limits it from being much more than that.

jenkstom

It seems like AI will generate opportunities for fixing code, both in reducing internal technical debt ("code maintenance", which is already a specialized skill) and external technical debt (architecture, which is also being built by AI). Eventually AI will be good enough for both of these things as well, and then we may just become the priests of the Temples of Syrinx.

polishdude20

Our great computers fill our hollow halls.

m2spring

Business wants short-term solutions. It doesn't care about the long-term effects, even if they clearly bite it in the ass.