
AI Can Write Your Code. It Can't Do Your Job

_pdp_

To be fair, AI cannot write the code!

It can write some types of code. It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than the fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.

If it could write the code, I do not see why it would not be deployed more effectively to write new types of operating systems and to experiment with new programming languages and programming paradigms. The $3B would be better spent on coming up with truly novel technology that these companies could monopolise with their models. Well, they can't, not yet.

My gut feeling tells me that this might actually be possible at some point, but at an enormous cost that will make it impractical for most intents and purposes. But even if it were possible tomorrow, you would still need people who understand the systems, because without them we are simply doomed.

In fact, I would go as far as saying that the demand for programmers will not plummet but skyrocket, requiring twice as many programmers as we have today. The world simply won't have enough programmers to supply. The reason I think this might actually happen is that the amount of code produced by AI will become so vast over time that even if humans need to handle/understand just 1% of it, that will require more than the 50M developers we have today.

DrewADesign

If you’re writing simple code, it’s often a one-shot. With medium-complexity code, it gets the first 90% done in a snap. Easily faster than I could ever do it. The problem is that the 90% is never the part that sucks up a bunch of time - it’s the final 10%, and in many cases for me, it’s been more hindrance than help. If I’d just taken the driver’s seat, making heavy use of autocomplete, I’d have done better and with less frustration. Having to debug code I didn’t write that’s an integral part of what I’m building is an annoying context switch for anything non-trivial.

noremotefornow

I’m very confused by this, as in my domain I’ve been able to nearly one-shot most coding assignments since this summer (really since Sonnet 3.5) by pointing specific models at well-specified requirements. Things like breaking down a long functional or technical spec document into individual tasks, implementing, testing, deployment and change management. Yes, it’s rather straightforward scripting, like automation on Salesforce. That work is toast, and spec-driven development will surge as people, on average, move away from directly manipulating the symbols that represent machine instructions.

kankerlijer

There is a vast difference between writing glue code and engineering systems. Who will come up with the next Spring Boot, Go, Rust, io_uring, or whatever, once the profession has reduced itself entirely to chasing quick, pleasing outcomes?

zingar

> it can even find bugs

This is one of the harder parts of the job IMHO. What is missing from writing “the code” that is not required for bug fixes?

zingar

Without arguing with your main point:

> (few people like writing unit tests)

The TDD community loves tests and finds writing code without tests more painful than writing tests before code.

Is your point that the TDD community is a minority?

> It does a better job at writing unit tests (not perfect) than the fellow human programmer

I see a lot of very confused tests out of Cursor etc. that neither understand nor communicate intent. Far below the minimum for a decent human programmer.

rhines

I see tests as more of a test of the programmer's understanding of their project than anything. If you deeply understand the project requirements, API surface, failure modes, etc. you will write tests that enforce correct behaviour. If you don't really understand the project, your tests will likely not catch all regressions.

AI can write good test boilerplate, but it cannot understand your project for you. If you just tell it to write tests for some code, it will likely fail you. If you use it to scaffold out mocks or test data or boilerplate code for tests which you already know need to exist, it's fantastic.
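A minimal sketch of that split in pytest (the function and data below are made up for illustration): the fixture and sample-data boilerplate is the kind of thing an assistant scaffolds well, while the expected numbers in the assertions encode the intent only the programmer can supply.

    import pytest

    # Stand-in for the real code under test (hypothetical, for illustration only).
    def region_totals(rows):
        totals = {}
        for row in rows:
            amount = float(row["amount"].replace(",", ""))
            totals[row["region"]] = totals.get(row["region"], 0.0) + amount
        return totals

    # Boilerplate an assistant scaffolds well: fixtures, sample data, mocks.
    @pytest.fixture
    def sample_rows():
        return [
            {"region": "EU", "amount": "1,200.50"},
            {"region": "EU", "amount": "99.50"},
            {"region": "US", "amount": "300"},
        ]

    # The intent only the programmer supplies: what the correct numbers actually are.
    def test_totals_match_manual_calculation(sample_rows):
        totals = region_totals(sample_rows)
        assert totals["EU"] == pytest.approx(1300.00)
        assert totals["US"] == pytest.approx(300.00)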

AndrewKemendo

>It is fascinating that it can bootstrap moderately complex projects from a single shot. It does a better job at writing unit tests (not perfect) than the fellow human programmer (few people like writing unit tests). It can even find bugs and point out and correct broken code. But apart from that, AI cannot, or at least not yet, write the code - the full code.

Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system, and public health ... what have the Romans ever done for us?

IanCal

It doesn’t need to do all of a job to reduce total jobs in an area. Remove the programming part and you can reduce the number of people needed for the same output, and/or bring people who can’t program but can do the other parts into the fold.

> If OpenAI believed GPT could replace software engineers, why wouldn’t they build their own VS Code fork for a fraction of that cost?

Because believing you can replace some or even most engineers still leaves room for hiring the best. It increases the value of the best. And this is all assuming their beliefs right now - they could believe they have tools coming in two years that will replace many more engineers, yet still hire them now.

> You sit in a meeting where someone describes a vague problem, and you’re the one who figures out what they actually need. You look at a codebase and decide which parts to change and which to leave alone. You push back on a feature request because you know it’ll create technical debt that’ll haunt the team for years. You review a colleague’s PR and catch a subtle bug that would’ve broken production. You make a call on whether to ship now or wait for more testing.

These are all things that LLMs are doing with various degrees of success, though. They’re reviewing code, they can (I know because I had this happen with 5.1) push back on certain approaches, and they absolutely can decide which parts of a codebase to change.

And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?

zingar

> And as for turning vague problems into more clear features? Is that not something they’re unbelievably suited for?

I personally find LLMs to be fantastic for taking my thoughts to a more concrete state through robust debate.

I see AI turning many other folks’ thoughts into garbage because it so easily heads in the wrong direction and they don’t understand how to build self checking into their thinking.

seanmcdirmid

I used AI to write some Python code, and some Bazel rules to generate more Python code around it, for a new workflow system I wanted to prototype. It just did it; it would make mistakes, but since I had it running tests, it would fix the code after each run.

The big issue is that I didn’t know the APIs very well, and I’m not much of a Python programmer. I could have done this by hand in around 5 days with a ramp up to get started, but it took less than a day just to tell the AI what I wanted and then to iterate with more features.
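A rough sketch (not the commenter's actual setup) of that "run the tests, let it fix the failures" loop; apply_fix stands in for whatever step hands the failure output back to the assistant:

    import subprocess
    from typing import Callable

    # Run the test suite, and on failure pass the output back to the model
    # for another fix attempt; stop once the suite is green or we give up.
    def iterate_until_green(apply_fix: Callable[[str], None], max_rounds: int = 5) -> bool:
        for _ in range(max_rounds):
            result = subprocess.run(["pytest", "-x", "-q"], capture_output=True, text=True)
            if result.returncode == 0:
                return True  # all tests pass; stop iterating
            apply_fix(result.stdout + result.stderr)  # hand failures back to the assistant
        return False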


chocoboaus3

I used AI to build an app just for myself that parses data (using pandas, Python etc. - not an LLM at runtime, but an LLM coded it) for a report that I need to produce.

It's purely for myself, no one else.

I think this is what AI can do at the moment; in terms of mass-market SaaS vibe coding, it will be harder. Happy to be proven wrong.
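For context, a toy example of the kind of single-user report script being described (the file and column names are made up); the LLM's role is only in writing code like this, pandas does the work at runtime:

    import pandas as pd

    # Hypothetical personal report: monthly totals from an exported CSV.
    df = pd.read_csv("monthly_export.csv", parse_dates=["date"])
    summary = (
        df.groupby(df["date"].dt.to_period("M"))["amount"]
          .agg(["sum", "mean", "count"])
          .rename(columns={"sum": "total", "mean": "average", "count": "rows"})
    )
    summary.to_csv("report_summary.csv")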

PacificSpecific

This is my experience as well. It's been great to make an application that's small in scope, doesn't require access to my main project repo and is basically a nice to have value add for the client.

I was already quite adept in the language and frameworks involved and the risk was very small so it wasn't a big time sink to review the application PR's.

For me the lesson learned with regard to agentic coding is to adjust my expectations relative to the online rhetoric: it can sometimes be useful for small, isolated one-offs.

allovertheworld

AKA it's the next stage of Stack Overflow / Google search.

zingar

What is it that you feel is missing that would take AI from “just for myself” to “mass market SaaS vibes”?

chocoboaus3

Being able to properly deal with scale and security. Being able to be confident that if I am capturing PII data in my application, it's as secure as it can be - as secure as if a principal developer had put the architecture together.

rhines

Mass-market SaaS will generally just use other products to handle this stuff. And if there does happen to be a leak, they just say sorry and move on; there are very few consequences for security failures.

chocoboaus3

Yes, I cross-referenced the data to ensure it was producing the correct numbers vs. my manual methods.

rwaksmunski

It's decent at explaining my code back to me, so I can make sure my intent is visible within code/comments/tracing messages. Not too bad at writing test cases either. I still write my code.

zingar

Are you saying that you literally write the features by yourself and that you only use LLMs to understand old code and write tests?

Or a more meta point that “LLMs are capable of a lot”?

zingar

Reasons why the attempted Cursor acquisition might not be about replicating Cursor (with or without human help): shutting down competition; market share; understanding user behavior; training data.

Sparkyte

Likely won't for a while. The race to buy up all of the memory is likely a squeeze attempt against startups, not consumers; consumers are just a side effect.

We need regulations to prevent such large-scale abuse of economic goods, especially if the final output is mediocre.

nextworddev

The reason there are meetings is existing org layers.

Thus, the root cause of most meetings' existence is BS. That's why you have BS meetings.

The fastest way to drive AI adoption is thus to thin out org layers.

socketcluster

There are many different ways to write code. The more code there is, the more possible versions of the system could have existed to solve that same set of problems; each with different tradeoffs.

The challenge is writing code in such a way that you end up with a system which solves all the problems it needs to solve in an efficient and intuitive way.

The difference between software engineering and programming is that software engineering is more like a discovery process; you are considering a lot of different requirements and constraints and trying to discover an optimal solution for now and for the foreseeable future... Programming is just churning out code without much regard for how everything fits together. There is little to no planning involved.

I remember at university, one of my math lecturers once said "Software engineering? They're not software engineers, they're programmers."

This is so wrong. IMO, software engineering is the essence of engineering. The complexity is insane and the rules of how to approach problems need to be adapted to different situations. A solution which might be optimal in one situation may be completely inappropriate for a slightly different situation due to a large number of reasons.

When I worked on electronics engineering team projects at university, everyone was saying that writing the microcontroller software was the hardest part. It's the part most teams struggled with, more so than PCB design... Yet software engineers are looked down upon as members of an inferior discipline... Often coerced into accepting the lesser title of 'developer'.

I'm certain there will be AIs which can design optimal PCBs, optimal buildings, optimal mechanical parts, long before we have AI which can design optimal software systems.

zingar

Whenever someone gets into Important Reasons why Software Engineering is Different from Programming, I hear a bunch of things that should just be considered “competent programming”.

> Programming is just churning out code without much regard for how everything fits together

What you’re describing is a beginner, not a programmer

> There is little to no planning involved
> trying to discover an optimal solution for now and for the foreseeable future

I spend so much time fighting against plans that attempt to take too much into account and are unrealistic about how little is known before implementation. If the most Important Difference is that software engineers like planning, does that mean that being an SE makes you less effective?

tehjoker

I think it’s undeniably an engineering profession when performance and algorithm choice come into play, or when safely controlling embedded or industrial devices. There are other contexts where it’s engineering too; it’s just engineering in the sense that someone designing commodity headphones is doing electronics engineering.

mikert89

Software engineering will get automated; all of the issues with the current models will get worked out in time. People can beg and wish that it's not true, but it is. We have a few more good years left and then this career is over.

heliumtera

If you are so sure AI is more competent than you at your job, who am I to disagree

bgwalter

So far there are mainly horrible "AI-coded" websites that look like they were produced by the Bootstrap framework in 2014. They use 100% CPU in Firefox despite having no useful functionality.

It is the McDonald's version of programming, except that McDonald's does not steal the food it serves.

mikert89

Like half of engineers are vibe coding full-time in their jobs. Wake up.

skydhash

Why not say that the sky will fall on our heads while you're at it? /s


gaigalas

When we talk about code, you think it's about code, but it's communication _about solving problems_ which happens to use code as a language.

If you don't understand that language, code becomes a mystery, and you don't understand what the problem is we're trying to solve.

It becomes this entity, "the code". A fantasy.

Truth is: we know. We knew it way before you. Now, can you please stop stating the obvious? There are a lot of problems to solve and not enough time to waste.