
Using AI Generated Code Will Make You a Bad Programmer

NitpickLawyer

I am old enough to have heard this before.

C makes you a bad programmer. Real men code in assembler.

IDEs make you a bad programmer. Real men code in a text editor.

Intellisense / autocomplete makes you a bad programmer. Real men RTFM.

Round and round we go...

mywittyname

My opinion is that these are not analogous.

Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.

It's the same as saying that using genAI will make you a bad artist. In the sense that putting hands to medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.

However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.

Even before LLMs, successful and capable were orthogonal traits for most programmers. We had people who made millions churning out a CRUD website over a few months, and others who can build game engines but are stuck in underpaid contracting roles.

HPsquared

I'll always remember a lab we had in university where we hand-wrote machine code to do blinkenlights, and used an array of toggle switches to enter each byte into memory by hand.

saubeidl

All of this is true, but all of the examples that came before were deterministic, so once you understood the abstraction, you still understood the whole thing.

AI is different.

monkaiju

Those are all syntactic changes; AI attempts to be semantic. Totally different.

mohsen1

I'm just enjoying the last few years of this career. Let me have fun!

Joking aside, we have to understand that this is the way software is now being created, and that this is the tool most trivial software (which is what most of us make) will be created with.

I feel like the industry is telling me: Adapt or become irrelevant.

jf22

I already miss the fun heads down days of unraveling complex bugs.

Now I'm just telling AI what to do.

antfarm

I have always found it way easier to write code than to understand code written by someone else. I use Claude for research and architectural discussions, but only allow it to present code snippets, not to change any files. I treat those the same way I treat code from Stack Overflow: I manually adapt them to the coding guidelines at hand and to my aesthetics. Not a recipe for 10x, but it gets roadblocks out of the way quickly.

true2octave

> It's probably fine--unless you care about self-improvement or taking pride in your work.

I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time

darkwater

And you/we will be replaced by an AI that will solve the business problem (the day they get good enough to actually do that, which may or may not happen, but... who knows?)

sallveburrpi

I really really hope an AI will do this work and solve all the “business problems” so I can go and be a goat herder

noman-land

Go herd goats. You don't need to wait for AI to destroy your livelihood.

rybosworld

And the person that hand-writes the code won't be replaced?

darkwater

Yes, as well.

There are probably two ways to see the future of LLMs/AI: they are either going to have the capabilities to replace all white-collar work, or they are not.

If you think they are going to replace us, then you can either surrender or fight back, and personally I read all these anti-AI posts as fighting back, as helping people realize we might be digging our own grave.

If, OTOH, you see AI as a force-multiplier tool that's never going to completely replace a human developer then yes, probably the smartest thing to do is to learn how to master this new tool, but at the same time keep in mind the side effects it might bring, like atrophy.

shams93

I agree. I was always annoyed in projects where these kids thought they were still in school, spinning up incredible levels of over-abstraction that led to some really horrible security problems.

hudon

and a teacher is hired to teach, but some self-improve so they may become headmaster

saubeidl

Maybe you become worse at solving business problems with technology once you let that muscle atrophy?

tomjen3

For me, AI is really powerful autocomplete. Like you said, I wrote the abstraction years ago. Writing the abstraction again now is not required.

A time and place may come when AI is so powerful that I'm not needed. That time is not right now.

I have used Rider for years at this point, and it automatically handles most imports. It's not AI, but it's one of the things I just don't need to think about.

jonas21

You could make the same argument that "Using Open-Source Code Will Make You a Bad Programmer" -- and in fact, a generation ago, many people did.

billy99k

It doesn't make you a bad developer; it just discourages novel and innovative ways of doing things, because the cheaper way is to just use what's free.

shadowgovt

I've also heard similar arguments about "Using stackoverflow instead of RTFM makes you a bad programmer."

These things are all tradeoffs. A junior engineer who goes to the manual is something I encourage, but if they go exclusively to the manual every time, they are going to be slower and produce code that is more disjoint and harder to maintain than that of their peers who have taken advantage of other people's insights into the things the manuals don't say.

bloppe

It has long been understood that programming is more about reading code than writing code. I don't see any issue with having LLMs write code. The real issue arises when you stop bothering to read all the code that the LLM writes.

jf22

The same way the loom made bad weavers.

Anybody know any weavers making > 100k a year?

rbbydotdev

While I agree with much of the sentiment, I believe we will approach a point where the sheer amount of code, and likely its complexity (due to having been written by AI), will require AI to work with and maintain.

monkaiju

But they're worse at navigating nuance and complexity than humans...

kevin42

Is it just me, or does anyone else use AI not just to write code, but to learn? Since I've been using Claude, I've learned a lot about Rust by having it build things for me, then working with that code. I've never been a front-end guy, but I had it write a Chrome plugin for me, then I used that code to learn how it works. It's not a black box to me, but I don't need to look up some CSS stuff I've never used. I can prompt Claude to write it, then look at it and go, "Huh, that's how it works." Better than researching it myself: I can see an example of exactly how it's done, then I learn from that.

I'm doing a lot of new things I never would have done before. Yes, I could have googled APIs and read tutorials, but I learn best by doing, and AI helps me learn a lot faster.

chankstein38

Me too! I got into ESP32s and sensors thanks to AI. I wouldn't have had the time or energy after stressful work all day, but thanks to it I can get firmware written for my projects. Along the way I'm also learning how the firmware has to be written, and finding issues with what the AI wrote and correcting them.
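
To make that concrete, here's a minimal sketch of the kind of firmware involved, in Arduino-style C++ for the ESP32 (the pin assignments are assumptions, not taken from any particular project):

    // Minimal blink-plus-sensor sketch (Arduino core for ESP32).
    // Pin choices are assumptions: GPIO 2 drives the on-board LED on
    // many dev boards, and GPIO 34 is an ADC-capable, input-only pin.
    const int LED_PIN = 2;
    const int SENSOR_PIN = 34;

    void setup() {
      Serial.begin(115200);      // USB serial for printing readings
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      digitalWrite(LED_PIN, HIGH);
      delay(500);
      digitalWrite(LED_PIN, LOW);
      delay(500);
      Serial.println(analogRead(SENSOR_PIN));  // raw 12-bit value, 0-4095
    }

GPIO 34 being input-only is exactly the kind of detail you pick up by reviewing and correcting what the AI writes.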

If people aren't learning from AI, it's their fault. Yeah, AI makes stuff up and hallucinates and can be wrong, but how is that different from a distracted senior dev? AI is available to me 24/7 to answer my questions in minutes or seconds, whereas half the time when I message people I have to wait 30-60 minutes for a response.

People just need to approach things intelligently and actually learn along the way. If you pay attention and do the research you need to understand what's happening, you can pretty quickly get to the point where you're thinking more clearly about a problem than the AI writing your code. They're not as factual as a textbook, but they don't need to be to give you the space to ask the right questions, and they'll frequently provide sources (though I'd heavily recommend checking those; sometimes the sources are a joke).

pdntspa

I second this. It's like having a second brain with domain expertise in pretty much anything I could want to ask questions of. And while factual assertions may still be problematic (hallucinations), I can very quickly run code and see if it does what I want or not. I don't care if it hallucinates if it solves my problem with code that is half decent. Which it does.

striking

I do agree this is where AI shines. If you need a quick rehash of something that's been done a zillion times before or a quick integration between two known good components, AI's great.

But the skills you describe are still skills: reading, researching, and doing your own fact-finding are still important to practice and be good at. Those things only get more important in situations off the beaten path, where AI doesn't always give you trustworthy answers or do trustworthy work.

I'm still going to nurture some of these skills. If I'm trying to learn, I'll stick to using AI only when I'm truly stuck or no longer having fun.

outside2344

I am using AI to learn EVERYTHING. Spanish, code, everything. Honestly, the largest acceleration I am getting is in research towards design docs (which then get used for implementation).

chankstein38

I'm curious how the Spanish is going! Have you used any interesting methods, or are you just kind of talking to it and asking it questions about Spanish?

RationPhantoms

Absolutely. It's a tireless Rubik's cube, one that you can rotate endlessly to digest new material. It doesn't sigh heavily or run out of mental bandwidth to answer. Yes, it shouldn't be trusted with high-precision information, but the world can get by quite well on vibes.

shadowgovt

I have definitely had Claude make recommendations that gave me structural insight into the code that I didn't have on my own, and I integrated that insight.

People who claim "It's not synthesized, it's just other people's work run through a woodchipper" aren't precisely right, but they also aren't precisely wrong... And in this space, having the whole ecosystem of programmers who published code looking over my shoulder as I try to solve problems is a huge boon.

okokwhatever

This is the smartest answer in this polarized thread

mmoll

That may be dangerous. The more obscure the topic, the more likely it is that the AI will come up with a working but needlessly convoluted solution.

focusgroup0

Adapt, accept, or be replaced

billy99k

This is the future of code.

I know plenty of 50-something developers out of work because they stuck to their old ways and the tech world left them behind.