The hidden cost of AI coding
474 comments
April 23, 2025 · TrackerFF
andybak
> Some people love programming
> Other people see all that as a means to an end
I think it's worth pointing out that most people are both these things at different times.
There are things I care about and want a deep understanding of, but there's plenty of tasks I want to just "go away". If I had a junior coder - I'd be delegating these. Instead I use AI when I can.
There's also tasks where I want a jump start. I prefer fixing/improving code over writing from scratch so often a bad AI attempt is still valuable to me.
celsius1414
You likely don’t have a say in the matter, but you should have a junior developer. That’s where senior developers come from.
scarface_74
Why should I have a junior developer who is going to do negative work instead of poaching a mid developer who is probably underpaid since salary compression and inversion are real?
As a manager, say I do hire a junior developer, invest time into them, and they level up. I go to the HR department and tell them that they deserve a 30% raise to bring them in line with the other mid-level developers.
The HR department is going to say that’s out of policy and then the developer jumps ship.
andybak
You presume I work for a company!
yason
Many have said that it's useful to delegate writing boilerplate code to an AI so that you can focus on the interesting bits that you do want to write yourself, for the sake of enjoying writing code.
I recognize that and I kind of agree, but not entirely. Writing the "boring" boilerplate gives me time to think about the hard stuff while still tinkering with something. I think the effect is similar to sleeping on it or taking a walk, but without interrupting the mental crunching that's going on in my brain during a good flow. I piece together something mundane that is as uninteresting as it is mandatory, but at the same time my subconscious is thinking about the real stuff. It's easier that way because the boilerplate, besides being boring, does actually still connect to the real stuff, ultimately.
So you're kind of working on the same problem even if you're just letting your fingers keep typing something easy. That generates nice waves of intensity for my work. In my experience, AI tends to break this sea of subconsciousness: you need to focus on getting the AI to do the right thing, which, unlike typing it yourself, is ancillary to the original problem. Maybe it's just a matter of practice, and at some point I can keep my mind on the domain in question even though I'm working with an AI instead of typing boilerplate myself.
mrtksn
The first time you write the code to accomplish something you get your highs.
IMHO there's no joy in doing the same thing multiple times. DRY doesn't help with that, you end up doing a lot of menial work to adapt or integrate previous code.
Most of the for-profit coding is very boring.
yapyap
> On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea. I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So if someone generates their music with AI to get their idea to music you don’t look down on it?
Personally I do. If you don't have the means to get to the end, you shouldn't get to the end, and that goes double in a professional setting. If you are just generating for your own enjoyment, go off I guess, but if you are publishing or working for someone that'll publish (aka a professional setting), you should be the means to the end, not AI.
_heimdall
Where do you draw that line though?
If you're talking about a person using an LLM, or some other ML system, to help generate their music then the LLM is really just a tool for that person.
I can't run 80 mph but I can drive a car that fast, it's my tool to get the job done. Should I not be allowed to do that professionally if I'm not actually the one achieving that speed or carrying capacity?
Personally, my concerns with LLMs are more related to the unintended consequences and all the unknowns in play, given that we don't really know how they work and aren't spending much effort on solving interpretability. If they only ever end up being a tool, that seems a lot more in line with previous technological advancements.
bluefirebrand
> I can't run 80 mph but I can drive a car that fast
If you drive a car 80mph you don't get to claim you are a good runner
Similarly if you use an LLM to generate 10k lines of code, you don't get to claim you are a good programmer
Regardless of the outcome being the "same"
probably_wrong
> I can't run 80 mph but I can drive a car that fast, it's my tool to get the job done.
Right, but if you use a chess engine to win a chess championship or if you use a motor to win a cycling championship, you would be disqualified because getting the job done is not the point of the exercise.
Art is (or should be) about establishing dialogues and connections between humans. To me, auto-generated art is like choosing between seeing a phone picture of someone's baby and a stock photo of a random one - the second might "get the job done" much better, but if there's no personal connection, then what's the point?
jstummbillig
Why?
What has always held true so far: <new tool x> abstracts challenging parts of a task away. The only people you will outcompete are those who now add little over <new tool x>.
But: if in the future people are just using <new tool x> to create a product that a lot of people can easily produce with <new tool x>, then before long that's not enough to stand out anymore. The floor has risen, and the only way to stand out will always be to use <new tool x> in a way that other people don't.
Workaccount2
People who can't spin pottery shouldn't be allowed to have bowls, especially mass-produced, machine-made ones.
I understand your point, but I think it is ultimately rooted in a romantic view of the world, rather than the practical truth we live in. We all live a life completely inundated with things we have no expertise in, available to us at almost trivial cost. In fact it is so prevalent that just about everyone takes it for granted.
hooverd
Sure, but they also shouldn't claim they're potters because they went to Pottery Barn.
selimthegrim
Sounds like Communist Albania, where everybody who wanted to own a car had to be able to repair it, take it apart, and put it back together.
scarface_74
Your company doesn't care about how you got to the end; they just care about whether you got there and met all of the functional and non-functional requirements.
My entire management chain - manager, director and CTO - are all technical, and my CTO was a senior dev at BigTech less than two years ago. But when I have a conversation with any of them, they mostly care about whether the project I'm working on/leading is done on time/within budget/meets requirements.
As long as those three goals are met, money appears in my account.
One of the most renowned producers in hip hop - Dr. Dre - made a career of reusing old melodies. Are (were) his protégés - Eazy-E, Tupac, Snoop, Eminem, 50 Cent, Kendrick Lamar, etc. - not real musicians?
RHSeeger
> So if someone generates their music with AI to get their idea to music you don’t look down on it?
It depends entirely on how they're using it. AI is a tool, and it can be used to help produce some wonderful things.
- I don't look down on a photographer because they use a tool to take a beautiful picture (that would have taken a painter longer to paint)
- I don't look down on someone using digital art tools to blur/blend/manipulate their work in interesting ways
- I don't look down on musicians that feed their output through a board to change the way it sounds
AI (and lots of other tools) can be used to replace the creative process, which is not great. But it can also be used to enhance the creative process, which _is_ great.
apercu
If they used an algorithm to come up with a cool melody and then did something with it, why look down on it?
Look at popular music for the last 400 years. How is that any different from simply copying the previous generations' stuff and putting your own spin on it?
If you heard a CD in 1986 and then in 2015 wrote a song subconsciously inspired by that tune, should I look down on you?
I mean, I'm not a huge fan of electronic music because the vast majority of it sounds the same to me, but I don't argue that they are not "real musicians".
I do think that some genres of music will age better than others, but that's a totally different topic.
randcraw
I think you don't look down on the product of AI, only on the process that created it. Clearly the craft that created the object has become less creative, less innovative. Now it's just a variation on a theme. Does such work really deserve the same level of recognition as befitted Beethoven for his Ninth or Robert Bolt for his "A Man for All Seasons"?
apercu
I've always distilled this down to people who like the "craft" and those who like the "result".
Of course, everything is on a scale so it's not either/or.
But, like you, how I get there matters to me, not just the destination.
Outside the context of music, a project could be super successful but if the journey was littered with unnecessary stress due to preventable reasons, it will still leave a bad taste in my mouth.
bluefirebrand
> I've always distilled this down to people who like the "craft" and those who like the "result".
I find it very unlikely anyone who only likes the results will ever pick up the craft in the first place
It takes a very specific sort of person to push through learning a craft they dislike (or don't care about) just because they want a result badly enough
ponector
I hate IT and would pick literally anything else to work at, but the money is an issue.
godelski
What's "the result"? Because I don't like how this divide is being stated (it's pretty common).
Seems to me that "the result" is "the money" and not "the product".
Because I'd argue those that care about the product, the thing being built, the tool, take a lot of pride in their work. They don't cut corners. They'll slog through the tough stuff to get things done.
These things align much more with the "loves coding" group than with the "results" group. Frankly, everyone cares about "the result", and I think we should be clear about what is actually meant.
cainxinth
Some writers like to write. Some like to have written.
asdff
The issue with programming is that it isn't like music or really any other skill where you get feedback right away and operate in a well understood environment. And a lot of patterns are not well designed, as they are often based on what a single developer thinks the behavior ought to be, instead of something more deterministic like the laws of physics that influence the chord patterns we use in music.
Nope, your code might look excellent. Why the hell isn't it running though? Three hours later you find you accidentally typed a stray "b" somewhere when you closed your editor, in a way your linter didn't pick up and the traceback isn't clear about; maybe you broke some all-important regex; it doesn't matter. One second, it's fixed, and you just want to throw the laptop out the window and never work on this project again. So god damned stupid.
And other things are frustrating too. Open a space-indented Python file, and god forbid you add a tab without thinking. And what is crazy about that is if the linter is smart enough to say "hey, you put a tab here instead of spaces for indent", then why does it even throw the error and not just accept both spaces and tabs? Just another frustration.
Really I would love to just go at it, write code, type, fly, be in the flow state, like one does building something with the hands or making music or doing anything in the physical world. But no. Constant whack-a-mole. Constantly hitting the brakes. Constant blockers. How long will this take to implement? I have no fucking idea man, could be 5 seconds or 5 weeks, and you often don't know until you spend the 5 seconds and see that it didn't do it yet.
schwartzworld
> like the laws of physics that influence the chord patterns we use in music.
So much of what we think of as law in music is just being used to the conventions. Lots of amazing music would have been considered noise if created in an earlier time.
> The issue with programming is that it isn't like music or really any other skill where you get feedback right away and operate in a well understood environment.
Funny, I feel the opposite about programming. The feedback comes in milliseconds. Ok the build didn’t break, ok the ui is working, now check if the change I made is right, now run the tests, etc. and the environment is fully documented, depending on your tooling of choice and the quality of its docs.
dakiol
I'm in group A and B. I do programming for the sake of it at home. I read tons of technical books for the love of it. At work, though, I do whatever the company wants or whatever they allow me... I just do it for the money.
iamleppert
There's nothing stopping you from coding if you enjoy it. It's not like they have taken away your keyboard. I have found that AI frees me up to focus on the parts of coding I'm actually interested in, which is maybe 5-10% of the project. The rest is the boilerplate, cargo-culted Dockerfile, build system, and bash environment variable passing circle of hell that I couldn't care less about. I care about certain things that I know will make the product better, and achieve its goals in a clever and satisfying way.
Even when I'm stuck in hell, fighting the latest undocumented change in some obscure library or other grey-bearded creation, the LLM, although not always right, is there for me to talk to, when before I'd often have no one. It doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
melvinroest
> The rest is the boilerplate, cargo-culted Dockerfile, build system, and bash environment variable passing circle of hell that I couldn't care less about.
Even more so, I remember making a Chrome extension and feeling intimidated. I knew that I'd be comfortable with most of it given that JS is used but I just didn't know how to start.
With an LLM it is way faster to spin up some default config and get going versus reading a tutorial. What I've noticed in that respect is that I just read what it does and then immediately reason why it's there. "Oh, there's a manifest.json file with permissions and a few other things, fair, makes sense. Oh, so you have the HTML/CSS/JS of the extension, you have the HTML/CSS/JS of the page you're injecting some code into and you have the JS of a background worker. Ah yea, I get that."
And then I just immediately get on with coding.
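For reference, a minimal Manifest V3 manifest.json along those lines might look something like this (just a sketch - the name, permissions, match pattern, and file names are placeholders):

    {
      "manifest_version": 3,
      "name": "My Extension",
      "version": "1.0",
      "permissions": ["activeTab", "storage"],
      "background": { "service_worker": "background.js" },
      "content_scripts": [
        { "matches": ["https://example.com/*"], "js": ["content.js"] }
      ],
      "action": { "default_popup": "popup.html" }
    }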
dxroshan
> What I've noticed in that respect is that I just read what it does and then immediately reason why it's there ....
What if it hallucinates and gives you wrong code and explanations? It is better to read documentation and tutorials first.
doix
> What if it hallucinates and gives you wrong code
Then the code won't compile, or more likely your editor/IDE will say that it's invalid code. If you're using something like Cursor in agent mode, if invalid code is generated then it gets detected and the LLM keeps re-running until something is valid.
> It is better to read documentation and tutorials first.
I "trust" LLMs more than tutorials; there's so much garbage out there. For documentation, if the LLM suggests something, you can see the docstrings in your IDE. A lot of the time that's enough. If not, I usually go read the implementation if I _actually_ care about how something works, because you can't always trust documentation either.
selfhoster11
Do you mean the laconic and incomplete documentation? And the tutorials that range from "here's how you do a hello world" to "draw the rest of the fucking owl" [0], with nothing in between to actually show you how to organise a code base or file structure for a mid-level project?
Hallucinations are a thing. With a competent human on the other end of the screen, they are not such an issue. And the benefits you can reap from having LLMs as a sometimes-mistaken advisory tool in your personal toolbox are immense.
melvinroest
Fair question. So far I've seen two things:
1. Code doesn't compile. What to do in this case is obvious.
2. Code does compile.
I don't work in Cursor; I read the code quickly to see the intent, and when done with that, decide to copy/paste it and test the output.
You can learn a lot by simply reading the code. For example, when I see a `group_by` function call in polars and I didn't know polars could do that, now I know, because I know SQL. Then I need to check the output: if the output corresponds to what I expect a group-by function to do, then I'll move on.
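To make that concrete, the kind of polars call I mean boils down to roughly this (hypothetical data and column names):

    import polars as pl

    df = pl.DataFrame({
        "city": ["Oslo", "Oslo", "Bergen"],
        "sales": [10, 20, 5],
    })

    # The polars equivalent of SQL's GROUP BY city, SUM(sales)
    out = df.group_by("city").agg(pl.col("sales").sum())
    print(out)  # one row per city with summed sales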
There comes a point in time where I need more granularity and more precision. That's the moment where I ditch the AI and start to use things such as documentation and my own mind. This happens one to two hours after bootstrapping a project with AI in a language/library/framework I initially knew nothing about. But now I do, I know a few hours worth of it. That's enough to roughly know where everything is and not be in setup hell and similar things. Moreover, by just reading the code, I get a rough idea on how beginner to intermediate programmers think about the problem space the code is written in as there's always a certain style of writing certain code. This points me into the direction on how to think about it. I see it as a hint, not as the definitive answer. I suspect that experts think differently about it, but given that I'm just a "few hours old" in the particular language/lib/framework, I think knowing all of this is already really amazing.
AI helps with quicker bootstrapping by virtue of reading code. And when it gets actually complicated and/or interesting, then I ditch it :)
gilbetron
What do you do if you "hallucinate" and write the wrong code? Or if the docs/tutorial you read is out of date or incorrect or for a different version than you expect?
That's not a jab, but a serious question. We act like people don't "hallucinate" all the time - modern software engineering devops is all about putting in guardrails to detect such "hallucinations".
Spivak
Even when it hallucinates it still solves most of the unknown unknowns which is good for getting you unblocked. It's probably close enough to get some terms to search for.
brigandish
Most tutorials fail to add meta info like the system they're using and versions of things, that can be a real pain.
apothegm
So much this. The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts) and lets me focus on the interesting bits like what it is I want to build and how the pieces should fit together. And debugging, which I find satisfying.
Sadly, I find it sorely lacking at dealing with build systems and that particular type of boilerplate, mostly because it seems to mix up different versions of things too much and gives you totally broken setups more often than not. I'd just as soon never deal with the hell that is front end build/lint/test config again.
dxroshan
> The AI takes care of the tedious line by line what’s-the-name-of-that-stdlib-function parts (and most of the tedious test-writing parts)
AI generated tests are a bad idea.
simonw
AI generated tests are genuinely fantastic, if you treat them like any other AI generated code and review them thoroughly.
I've been writing Python for 20+ years and I still can't use unittest.mock without looking up the details every time. ChatGPT and Claude are great at that, which means I use it more often because I don't have to deal with the frustration of figuring it out.
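For example, the kind of mock boilerplate I always have to look up is something like this - a sketch with a hypothetical fetch_user function that wraps an HTTP call:

    import requests
    from unittest.mock import Mock, patch

    # Hypothetical function under test: calls an API and returns parsed JSON
    def fetch_user(user_id):
        return requests.get(f"https://api.example.com/users/{user_id}").json()

    # Patch requests.get so the test never touches the network
    @patch("requests.get")
    def test_fetch_user(mock_get):
        mock_get.return_value = Mock(json=lambda: {"id": 1, "name": "Ann"})
        assert fetch_user(1)["name"] == "Ann"
        mock_get.assert_called_once()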
apothegm
Just as with anything else AI, you never accept test code without reviewing it. And often it needs debugging. But it handles about 90% of it correctly and saves a lot of time and aggravation.
otabdeveloper4
Well, maybe they just need X lines of so-called "tests" to satisfy some bullshit-job metrics.
tcfhgj
Aren't stdlib functions the ones you know by heart after a while anyways?
apothegm
Depends on the language. Python for instance has a massive standard library, and there are entire modules I use anywhere from once a year to once a decade - or never at all until some new project needs them.
danielbln
Not everyone works in a single language and/or deep in some singular code base.
wvh
I think the fear, for those of us who love coding, stability and security, is that we are going to be confronted with apples that are rotten on the inside, and that our work, our love, is going to turn (even more so) into pain. The challenge in computing is that the powers that decide have little overview of the actual quality and longevity of any endeavour.
I work as a consultant assessing other people's code, and it's hard not to lose my religion, so to speak.
righthand
They perhaps haven’t taken away your keyboard but anecdotally, a few friends work at places where their boss is requiring them to use the LLMs. So you may not have to code with them but some people are starting to be chained to them.
hermanradtke
Yes, there are bad places to work. There are also places that require detailed time tracking, do not allow any time to write tests, have very long hours, tons of on-call alerts, etc.
7589447636
How long until it becomes the rule because of some arbitrary "productivity" metric? Sure, you may not be forced to use it, but you'll be fired for being "unproductive".
godelski
You write that like the latter is in opposition to the former. Yet the content suggests the latter is the former
godelski
And even when that's not the case, you are still indirectly working with them, because your coworker is, and "somehow" their code has gotten worse.
voidUpdate
> The rest is the boilerplate, cargo-culted Dockerfile, build system, and bash environment variable passing
I keep seeing people saying to use an LLM to write boilerplate, but like... do you not just copy that from another project where you already wrote it?
handzhiev
No, because it's usually a few years old and already obsolete - the frameworks and the language have gone through a gazillion changes and what you did in 2021 suddenly no longer works at all.
moooo99
I mean, the training data also has a cutoff date, and changes beyond that are not reflected in the code suggestions.
Also, I know that people love to joke about modern software and JS in particular. But if you take React code from 2020 and drop it into a new React codebase, it still works. Even class-based components work. Yes, if you jumped on the newest framework bandwagon every time, stuff will break all the time, but AI won't be able to help you with that either. If you went for relatively stable frameworks, you can reuse boilerplate completely or with relatively minimal adjustments.
jay_kyburz
lol, I've been cutting and pasting from the same projects I started in 2010. When you work in vanilla js it doesn't change.
asdff
Ehh most people are good about at least throwing a warning before they break a legacy pattern. And you can also just use old versions of your tools. I'm sure the 2021 tool still does the job. Most people aren't working on the bleeding edge here. Old versions of numpy are fine.
moooo99
I keep seeing that suggestion as well, and the only sensible way I see would be to use one-off boilerplate; anything else does not make sense.
If you keep re-using boilerplate once in a while copying it from elsewhere is fine. If you re-use it all the time, just get a macro setup in your editor of choice. IMHO that is way more efficient than asking AI to produce somewhat consistent boilerplate
pelagicAustral
You know, I have my boilerplate in Rails and it is just a work of art... I simply clone my BP repo, bundle, migrate, run, and I have user management, auth, SMTP client, SMS alerts, and literally everything I need to get started. And it was just this same week that I decided to try a code assistant, and my result was shockingly good: once you provide the assistant with a good clean starting point, and if you are very clear on what you want to build, then the results are just too good to be dismissed.
So yes, boilerplate, but also yes, there is definitely something to be gained from using ai assistants.
jrapdx3
Like many others writing here, I enjoy coding (well, mostly anyway), especially when it requires deep thought and patient experimentation to get anywhere. It's also great to preside over finally wiring together the routines (modules, libraries) that bind a project into a coherent whole.
Haven't much used AI to assist. After all, hard enough finding authentic humans capable and willing to voluntarily review/critique one's code. So far AI doesn't consistently provide that kind of help. OTOH seems almost certain over time AI systems will improve in terms of specific and comprehensive "insights" into the particular types of code one is writing.
I think an issue is that human creativity is hard to measure. Likely enough AI is even tougher to assess. Probably AI will increasingly be assigned tasks like constructing project skeletons, assuring parts can be joined together without undue strain, handling "boilerplate" and other routine chores. To be sure the landscape will look different in 50 years, I'm certain we'd be amazed were we able to see what future systems will be doing.
In any case, we shouldn't hesitate to use tools that genuinely boost our creativity. One badly needed role would be enabling development of higher reliability software. Still that's a far cry from the contributions emanating from the best of human originality, talent and motivation.
2snakes
I read one characterization which is that LLMs don't give new information (except to the user learning) but they reorganize old information.
barrenko
Custodians of human knowledge.
docmechanic
That’s only true if you tokenize words rather than characters. Character tokenization generates new content outside the training vocabulary.
selfhoster11
All major tokenisers have explicit support for encoding arbitrary byte sequences. There's usually a consecutive range of tokens reserved for 0x00 to 0xFF, and you can encode any novel UTF-8 words or structures with it. Including emoji and characters that weren't a part of the model's initial training, if you show it some examples.
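For example, with OpenAI's tiktoken you can watch the byte-level fallback in action (whether a given emoji gets its own learned token depends on the vocabulary):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("🦜")    # typically no single learned token for this emoji
    print(tokens)                # a handful of byte-fallback token ids
    print(enc.decode(tokens))    # round-trips losslessly back to the emoji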
asdff
Why stop there? Just have it spit out the state of the bits on the hardware. English seems like a serious shackle for an LLM.
emaro
Kind of, but character-based tokens make it a lot harder and more expensive to learn semantics.
skydhash
> doesn't judge or sneer at you, or tell you to "RTFM". It's better than any human help, even if it's not always right, because it's at least always more reliable and you don't have to bother some grey beard who probably hates you anyway.
That’s a lot of trauma you’re dealing with.
tptacek
After all, if we lose the joy in our craft, what exactly are we optimizing for?
Solving problems for real people. Isn't the answer here kind of obvious?
Our field has a whole ethos of open-source side projects people do for love and enjoyment. In the same way that you might spend your weekends in a basement woodworking shop without furnishing your entire house by hand, I think the craft of programming will be just fine.
frollogaston
Same as when higher-level languages replaced assembly for a lot of use cases. And btw, at least in places I've worked, better traditional tooling would replace a lot more headcount than AI would.
codr7
Not even close, those were all deterministic, this is probabilistic.
tptacek
The output of the LLM is probabilistic. The code you actually commit or merge is not.
frollogaston
So what? I know most compilers are deterministic, but it really only matters for reproducible builds, not that you're actually going to reason about the output. And the language makes few guarantees about the resulting instructions.
eddd-ddde
Yet the words you chose to use in this comment were entirely modelled inside your brain in a not so different manner.
pjmlp
I already see this happening with low code, SaaS and MACH architectures.
What used to be a project doing a CMS backend, now is spent doing configurations on a SaaS product, and if we are lucky, a few containers/serveless for integrations.
There are already AI based products that can automate those integrations if given enough data samples.
Many believe AI will keep using current programming languages as a translation step, just like those Assembly developers thought compiling via Assembly text generation and feeding it into an assembler would still be around.
achierius
> just like those Assembly developers thought compiling via Assembly text generation and feeding it into an assembler would still be around
Confused by what you mean. Is this not the case?
JohnFen
> Solving problems for real people. Isn't the answer here kind of obvious?
No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
Presumably, the reason for choosing software development as the method of solving problems for people is because software development itself brings joy. Different people find joy in different aspects even of that, though.
For my part, the stuff that AI is promising to automate away is much of the stuff that I enjoy about software development. If I don't get to do that, that would turn my career into miserable drudgery.
Perhaps that's the future, though. I hope not, but if it is, then I need to face up to the truth that there is no role for me in the industry anymore. That would pretty much be a life crisis, as I'd have to find and train for something else.
simonw
"There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method."
Software development is almost unique in the scale that it operates at. I can write code once and have it solve problems for dozens, hundreds, thousands or even millions of people.
If you want your work to solve problems for large numbers of people I have trouble thinking of any other form of work that's this accessible but allows you to help this many others.
Fields like civil engineering are a lot harder to break into!
JodieBenitez
> That would pretty much be a life crisis, as I'd have to find and train for something else.
There's inertia in the industry. It's not like what you're describing could happen in the blink of an eye. You may well be at the end of your career when this prophecy is fulfilled, if it ever comes true. I sure will be at the end of mine and I'll probably work for at least another 20 years.
jll29
The inertia argument is real, and I would compare it to the mistaken belief of some at IBM in the 1970s that SQL would be used by managers to query relational databases directly, so that no programming would be needed anymore.
And what happened? Programmers make the queries and embed them into code that creates dashboards that managers look at. Or managers ask analysts who have to interpret the dashboards for them... It rather created a need for more programmers.
Compare embedded SQL with prompts - SQL queries compared to assembler or FORTRAN code is closer to English prose for sure. Did it take some fun away? Perhaps, if manually traversing a network database is fun to anyone, instead of declaratively specifying what set of data to retrieve. But it sure gave new fun to people who wanted to see results faster (let's call them "designers" rather than "coders"), and it made programming more elegant due to the declarativity of SQL queries (although that is cancelled out again by the ugliness of mixing two languages in the code).
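To make the comparison concrete, embedded SQL in a host language looks roughly like this (a sketch with a hypothetical orders table):

    import sqlite3

    conn = sqlite3.connect("example.db")
    # Declarative: specify *what* set of data to retrieve, not how to
    # traverse the records - with the SQL embedded as a string in the
    # host language, which is the "mixing two languages" ugliness.
    rows = conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region"
    ).fetchall()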
Maybe the question is: Does LLM-based coding enable a new kind of higher level "design flow" to replace "coding flow"? (Maybe it will make a slightly different group of people happy?)
threatofrain
> No. There are a thousand other ways of solving problems for real people, so that doesn't explain why some choose software development as their preferred method.
I don't see why we should seek an explanation if there are thousands of ways to be useful to people. Is being a lawyer particularly better than being an accountant?
fragmede
I'm probably just not as smart or creative as you, but say my problem is I have a ski cabin that I want to rent to strangers for money. Never mind a thousand: what are 100 ways, without using software, that I could do something about that, vs listing it on Airbnb?
JohnFen
I was speaking about solving people's problems generally. It's easy to find specific problems that are best addressed with software, just as it's easy to find specific problems that can't be addressed with software.
ToucanLoucan
> Solving problems for real people. Isn't the answer here kind of obvious?
Look at the majority of the tech sector for the last ten years or so and tell me this answer again.
Like I guess this is kind of true, if "problems for real people" equals "compensating for inefficiencies in our system for people with money" and "solutions" equals "making a poor person do it for them and paying them as little as legally possible."
tptacek
Those of us who write software professionally are literally in a field premised on automating other people's jobs away. There is no profession with less claim to the moral high ground of worker rights than ours.
simonw
I often think about the savage job-destroying nature of the open source community: hundreds of thousands of developers working tirelessly to unemploy as many of their peers as possible by giving away the code they've written for free.
(Interesting how people talk about AI destroying programming jobs all the time, but rarely mention the impact of billions of dollars of code being given away.)
JohnFen
> Those of us who write software professionally are literally in a field premised on automating other people's jobs away.
How true that is depends on what sort of software you write. Very little of what I've accomplished in my career can be fairly described as "automating other people's jobs away".
concats
"Ten year contract you say?"
"Yes, yes... Satellites stay in orbit for a while. What about it?"
"Looks a bit cramped in there."
"Stop complaining, at least it's a real job, now get in, we're about to launch."
Verdex
Speak for yourself.
I've worked in a medical space writing software so that people can automate away the job that their bodies used to do before they broke.
ToucanLoucan
> Those of us who write software professionally are literally in a field premised on automating other people's jobs away.
Depends what you write. What I work on isn't about eliminating jobs at all, if anything it creates them. And like, actual, good jobs that people would want, not, again, paying someone below the poverty line $5 to deliver an overpriced burrito across town.
smj-edison
Bit of a tangent but...
Haven't we been automating jobs away since the industrial revolution? I know AI may be an exception to this trend, but at least with classical programming, demand goes up, GDP per capita goes up, and new industries are born.
I mean, there's three ways to get stuff done: do it yourself, get someone else to do it, or get a machine to do it.
#2 doesn't scale, since someone still has to do it. If we want every person to not be required to do it (washing, growing food, etc), #3 is the only way forward. Automation and specialization have made the unthinkable possible for an average person. We've a long way to go, but I don't see automation as a fundamentally bad thing, as long as there's a simultaneous effort to help (especially those who are poor) transition to a new form of working.
frollogaston
Yeah I see it as fair game
dml2135
It’s not automation that is the problem, it’s that the fruits of that automation disproportionately go to those at the top. Don’t blame software engineering for that, blame capitalism.
EasyMarion
solving real problems is the core of it, but for a lot of people the joy and meaning come from how they solve them too. the shift to AI tools might feel like outsourcing the interesting part, even if the outcome is still useful. side projects will stick around for sure, but i think it's fair to ask what the day-to-day feels like when more of it becomes reviewing and prompting rather than building.
clbrmbr
I have actually had some really great flow evenings lately, the likes of which I have not enjoyed in many years, precisely because of AI-assisted coding. The trick is to break the task down into components that are of moderate complexity so that the AI can handle them (Gemini 2.5 Pro one-shots), and keep your mind on the high-level design, which today's AI cannot coordinate.
What helps me is to think of it like I'm a kid again, learning to code full of ideas but without any pre-conceived notions. Rather than the Microsoft QuickBasic manual in my hands, I've got Gemini & Claude Code. I would be gleefully coding up a storm of games, websites, dubious webcrawlers, robots, and lord knows what else. Plenty of flow to be had.
wolvesechoes
I always wonder what kind of projects we are talking about. I am currently writing a compiler and simulation engine for differential-algebraic equations. I tried a few models, hoping they would help me, but they could not provide any help with small details nor with bigger building blocks.
I guess if you code stuff that has been coded a lot in public repos, it is fine; otherwise AI does not help in any way. Actually, I think I wasted more time trying to make it produce the output I wished for than it took me to do this myself.
Espressosaurus
That's been my experience. If it's been solved a million times, it's helpful. If you're out on the frontier where there's no public code, it's worse than useless.
If you're somewhere in between (where I am now) it's situationally useful for small sub-components but you need to filter it heavily or you'll end up wasting a day or two going down a wrong rabbit-hole either because you don't know the domain well enough to tell when it's bullshitting or going down a wrong path, or don't know the domain well enough to use the right keyword to get it to cough up something useful. I've found domain knowledge essential for deciding when it's doing something obviously wrong instead of saying "I don't know" or "This is the wrong approach to the problem".
For the correct self-contained class or block of code, it is much faster to specify the requirements and go through a round or two of refinement than it is to write it myself. For the wrong block of code it's a complete waste of time. I've experienced both in the last few days.
uludag
I don't even think you have to be on the frontier for LLMs to lose most of their effectiveness. Large legacy codebases with deeply ingrained tribal knowledge and loads of idiosyncrasies and inconsistencies will do the trick. Sad how most software projects end in this state.
Obviously LLMs in this situation will still be insanely helpful, but in the same way that Google searches or stack overflow is insanely helpful.
91bananas
For me it's been toy games built on web languages, which happens to be something I toyed with via my actual raw skills for the past 15 years. LLMs have opened many new doors and options for what I can build because I now technically "know everything" in the world via LLMs. Stuff that I would get stuck wasting hours on is now solved in minutes. But then it ALWAYS reaches a point where the complexity the LLM has generated is too much and the model can no longer iterate on what it's built.
kelsey978126
people seem to forget this type of argument from the article was used against Stack Overflow for years, calling it the destruction of programming. "How can you get into flow when you are just copying and pasting?" Those same people are now all sour grapes about AI-assisted development. There will always be detractors saying that the documentation you are using is wrong, the tools that you are using are wrong, and the methodology you are using is wrong.
AI-assisted development is no different from managing an engineering team. "How can you trust outsourced developers to do anything right? You won't understand the code when it breaks"... "How can you use an IDE, vim is the only correct tool" etc etc etc.
Nothing has changed besides the process. When people started jumping on object orientation they called procedures the devil itself, just as procedures were once hailed as structured programming and came to banish away the goto that was considered harmful. Everything is considered harmful when there's something new around the corner that promises to either make development more productive or developers more interchangeable. These are institutional requirements and will never go away.
Embrace AIOP (AI-oriented programming) to banish copy-and-paste Google-driven development, which is now considered harmful.
halfmatthalfcat
The issue with "AIOP" is that you don't have a litany of others (as is the case with SO) providing counterexamples, opinions, updated best practices, etc. People take the AI output as gospel and suffer for it, without being exposed to the ambiguity that surrounds implementing things.
randcraw
Will an engineering team ever be able to craft a thing of wonder that surprises and delights? I think great software can do that. But I've seen it arise only rarely, and almost always as originating from one enlightened mind, someone who imagined a better way than the well-trod paths taken by so many who went before. I can imagine AI as a means to go only "where man has gone before".
Workaccount2
I'm a classic engineer, so lots of experience with systems and breaking down problems, but probably <150 hours programming experience over 15 years. I know how computers work and "think", but I am awful at communicating with them. Anytime I have needed to program something I gotta crash-course the language for a few days.
Having LLMs like 2.5 now are total game changers. I can basically flow chart a program and have Gemini manifest it. I can break up the program into modules and keep spinning up new instances when context gets too full.
The program I am currently working on is up to ~5500 LOC, probably across 10ish 2.5 instances. It's basically an inventory and BOM management program that takes in bloated Excel BOMs and inventory, puts them in an SQLite database, and has a nice GUI. Absolutely insane how much faster SQLite is for databases than Excel, lol.
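The core of that Excel-to-SQLite move is surprisingly small - roughly this sketch, with hypothetical file, table, and column names (using pandas):

    import sqlite3
    import pandas as pd

    # Read the bloated Excel BOM and load it into SQLite
    bom = pd.read_excel("bom.xlsx")            # hypothetical input file
    conn = sqlite3.connect("inventory.db")
    bom.to_sql("bom", conn, if_exists="replace", index=False)

    # Queries that crawl in Excel come back instantly here
    rows = conn.execute(
        "SELECT part_number, SUM(qty) FROM bom GROUP BY part_number"
    ).fetchall()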
e40
I've heard a _lot_ of stories like this. What I haven't heard is stories about the deployment of said applications and the ability of the human-side author to maintain the application. I guess that's because we're in early days for LLM coding, or the people who did this aren't talking (about their presumed failures... people tend to talk about successes publicly, not the failures).
Workaccount2
At my day job I have 3 programs written by LLM used in production. One written by GPT-4 (in spring 2023) and recently upgraded by gemini 2.5, and the other two by Claude 3.7
One is an automatic electronics test system that runs tests and collects measurements (50k+ readings across 8-12 channels) (GPT-4, now with a GUI and faster DB thanks to 2.5). One is a QC tool to help quickly make QC reports in our company's standard form (3.7). And the last is a GUI CAD tool for rendering and quickly working through ancient manufacturing automation scripts from the 80's/90's to bring them up to compatibility with modern automation tooling (3.7).
I personally think that there is a large gap between what programs are and how each end user ultimately uses them. The programs are made with a vast scope, but often used narrowly by individuals. The proprietary CAD program that we were going to use originally for the old files was something like $12k/yr for a license. And it is a very powerful software package. But we just needed to do one relatively simple thing. So rather than buying the entire buffet, or buying the entire restaurant, Claude was able to just make a simple burger.
Would I put my name on these and sell to other companies? No. Am I confident other LLM junkies could generate similar strongly positive outcomes with bespoke narrow scope programs? Absolutely.
dgs_sgd
Only 150 hours of programming in 15 years? Are you in more of an Architect / Tech Lead role than an IC (individual contributor) role?
Workaccount2
I'm an electrical engineer and work mostly with power electronics.
melvinroest
This is the way. I feel like a kid again too. It's way more fun actually. As a kid I got too frustrated at not being able to install my WAMP stack.
sebstefan
Added joy for me as well, mostly by giving me the relevant API calls I need straight away, from publicly available documentation, instead of having to read the docs myself. "How do I do X in Y"
And if something's not obvious I can always fetch the specifics of any particular calls. But at least I didn't have to find the name of that call in the first place.
cushychicken
I’m right there with you on this.
Thanks for the comment. You articulated how I feel about this situation very well.
ang_cire
This comment section really shows the stark divide between people who love coding and thus hate AI, and people who hate coding and thus love AI.
Honestly, I suspect the people who would prefer to have someone or something else do their coding, are probably the devs who are already outputting the worst code right now.
vendiddy
I love coding but I also love AI.
I don't know if I'm a minority but I'd like to think there are a lot of folks like me out there.
You can compare it to someone who is writing assembly code and now they've been introduced to C. They were happy writing assembly but now they're thrilled they can write things more quickly.
Sure, AI could lead us to write buggier code. Sure, AI could make us dumber because we just have AI write things we don't understand. But neither has to be the case.
With better tools, we'll be able to do more ambitious things.
simonw
I think there are a lot of us, but the people who dislike AI are much more vocal in online conversations about it.
(The hype merchant, LinkedIn influencer, Twitter thread crowd are super noisy but tend to stick to their own echo chambers, it's rare to have them engage in a forum like Hacker News directly.)
amiantos
We're the silent majority, I'm pretty sure. If you love coding, you probably love technology, and if you love technology, you probably love AI, which is inarguably the most interesting tech advancement in this decade.
The others, who are not like us? They've got other priorities. If you hate coding but you love AI, you're probably into software engineering because of the money, not love of technology. If you love coding and you hate AI, you're probably more committed to some sort of ideology than you are the love of technology. If you hate coding and you hate AI, well, I hope you throw your cellphone into the river and find a nice cabin in the woods somewhere to hide in.
dml2135
> If you love coding and you hate AI, you're probably more committed to some sort of ideology than you are the love of technology.
As someone that you may characterize as one of these people, I can share some perspective.
First, I would question the premise that “love of technology” is not itself an ideology.
I do love technology, but not for its own sake. I love solving problems, I love tinkering, and I love craftsmanship and invention.
But technology can also be dangerous, it can set us backwards and not forwards, and its progress is never as inevitable as its evangelists claim. You need to view technology with a critical eye, and remember that tools are tools, and not panaceas.
So I guess I’d ask you — what’s so wrong with choosing to live in a cabin in the woods without a cellphone?
ang_cire
> throw your cellphone into the river and find a nice cabin in the woods somewhere
Would that I could...
square_usual
> I don't know if I'm a minority
No, there's plenty of top-class engineers who love coding with AI. e.g. Antirez.
ang_cire
I love AI as a concept.
I hate the reality of our current AI, which is benefitting corporations over workers, being used for surveillance and censorship (nevermind direct social control via misinformation bots), and is copying the work of millions without compensating them in order to do it.
And the push for coders to use it to increase their output, will likely just end up meaning expectations of more LoC and more features faster, for the same pay.
But FOSS, self-hosted LLMs? Awesome!
senko
How is using Claude over Llama benefitting corporations over workers? I work with AI every day, and the sum total of my token spend across all providers is less than the cost of a single NVIDIA H100 card (from a pretty big corporation!) that I'd have to buy, at the very least, for a comparable self-hosted setup.
How are self-hosted LLMs not copying the work of millions without compensating them for it?
How is the push for more productivity through better technology somehow bad?
I am pro FOSS but can't understand this comment.
RedNifre
Right, just how back in the day, people who loved writing assembly hated high level languages and people who found assembly too tedious loved compilers.
bgwalter
First of all, Lisp, Fortran and COBOL had been around most of the time when assembly was popular. Assembly was used because of resource constraints.
Secondly, you are not writing anything you get from an LLM. You prompt it and it spits out other people's code, stripped of attribution.
This is what children do: Ask someone to fix something for you without understanding the result.
RedNifre
Right, just how the assembler spits out machine code that other people figured out for you, so you don't even understand the result.
lucaspauker
Good artists copy, great artists steal
scarface_74
I very much understand the result of code that it writes. But I have never gotten paid to code. I get paid to use my knowledge of computers and the industry to save the company money or to make the company money.
Do you feel the same way when you delegate assignments to more junior developers and they come back with code?
cushychicken
It’s almost like there’s a big range of different comprehension styles among human beings, and a varying set of preferences that go with those.
aprxi
Can't one enjoy both? After all, coding with AI in practice is still coding, just with a far higher intensity.
ang_cire
It is absolutely possible to enjoy both - I have used LLMs to generate code for ideas about alternate paths to take when I write my code - but prompt generation is not coding, and there are WAY too many people who claim to be coding when they have in fact done nothing of the sort.
> a far higher intensity
I'm not sure what this is supposed to mean. The code that I've gotten is riddled with mistakes and fabrications. If I were to use it directly, it would significantly slow my pace. Likewise, when I use LLMs to offer alternative methods to accomplish something, I have to take the time to sit down and understand what they're proposing, how to actually make it work, and whether that route(s) would be better than my original idea. That is a significant speed reduction.
The only way I can imagine LLMs resulting in "far higher intensity" is if I was just yolo'ing the code into my program, and then doing frantic integration, correction, and bugfix work afterwards.
Sure, that's "higher intensity", but that's just working harder and not smarter.
bluefirebrand
It is not coding the same way riding a bus is not driving
You may get to the same destination, but it is not the same activity
rfoo
What if I prefer to have a clone of me doing my coding, and then I throw my clone under the bus and start to (angrily) hyperfocus on exploring and changing every piece to be beautiful? Does this mean I love coding or I hate coding?
It's definitely a personality thing, but that's so much more productive for me, than convincing myself to do all the work from scratch after I had a design.
I guess this means I hate coding, and I only love the dopamine from designing and polishing my work instead of making things work. I'm not sure though, this feels like the opposite of hate coding.
ang_cire
If you create a sufficiently absurd hypothetical, anything is possible.
Or are you calling an LLM a "clone" of you? In that case, it's more, "if you create a flawed enough starting premise, anything is possible".
rfoo
> flawed enough starting premise
That's where we start to disagree what future looks like, then.
It's not there yet, in that the LLM-clone isn't good enough. But amusingly a not nearly good enough clone of me already made me more productive, in that I'm able to deliver more while maintaining the same level of personal satisfaction with my code.
ookblah
yeah i definitely enjoy the craft and love of writing boilerplate or manually correcting simple errors or looking up functions /s. i hate how it's even divided into "two camps", it's more like a big venn diagram.
skydhash
Who writes boilerplate these days? I just lift the code from the examples in the docs (especially CSS frameworks). And I love looking at function docs, because after doing it a few times, you develop a holistic understanding of the library and your speed increases. Kinda like learning a foreign language: you can use an app to translate everything, or ask for the correct word when the need arises. The latter is a bit frustrating at the beginning, but that's the only way to become fluent.
ang_cire
Seriously, I see this claim thrown around as though everyone writes the same starting template 50 times a week. Like, if you've got a piece of "boilerplate" code you're constantly rewriting... Save It! Put it in a repo or a snippet somewhere that you can just copy-paste when you need it.
You don't need a multi-million dollar LLM to give you slightly different boilerplate snippets when you already have a text editor on your computer to save them.
melvinroest
I replaced "code" for "singing" to make a point.
> This comment section really shows the stark divide between people who love singing and thus hate AI-assisted singing, and people who hate singing and thus love AI-assisted singing.
> Honestly, I suspect the people who would prefer to have someone or something else do their singing, are probably the singers who are already outputting the worst singing right now.
The point is: just because you love something, doesn't mean you're good at it. It is of course positively correlated with it. I am in fact a better singer because I love to sing compared to if I never practiced. But I am not a good singer, I am mediocre at best (I chose this example for a reason, I love singing as well as coding! :-D)
And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
ang_cire
I think the analogy/substitution falls apart in that singing is generally not very stable or lucrative (for 99.999% of singers), so it is pretty rare to find someone singing who hates it. It's much more common to find people working in IT who hate the specific work of their jobs.
And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, nevermind how we'd react if one used AI singing instead of themselves.
Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
Parae
I have been working in IT for 5 years while being a professional musician for 8 years (in France and touring in Europe). I've never met a single singer who told me they hate singing; on the other hand, I can't even count how many of my colleagues have told me how much they hate coding.
Another analogy would be with sound engineering. I've met sound engineers who hate their job, as they would rather play music. They are also the ones whose jobs are likely to be replaced by AI. And I would argue that the argument still stands: AI sound engineers who hate working on sound are often the bad sound engineers.
melvinroest
> I think the analogy/substitution falls apart in that singing is generally not very stable or lucrative (for 99.999% of singers), so it is pretty rare to find someone singing who hates it.
I tried to cover this particular case with:
> And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
---
> Yes, loving something is no guarantee of skill at it, but hating something is very likely to correspond to not being good at it, since skills take time and dedication to hone. Being bad at something is the default state.
I tried to cover this particular case with:
> It is of course positively correlated with it.
---
> Being bad at something is the default state.
Well, skill-wise, yes. But you can be talented at something even when you hate it.
williamcotton
> And I think we do tend to (rightfully) look down on e.g. singers who lip-sync concerts or use autotune to sing at pitches they otherwise can't, never mind how we'd react if one used AI singing instead of themselves.
Autotune is de rigueur for popular music.
In general, I'm not sure that I agree with looking down on people.
cushychicken
I love coding - but I am not very good at it. I can describe what I want in great detail, with great specificity. But I am not personally very good at turning that detailed specification into the proper syntax and incantations.
AI is like jet fuel for me. It’s the translation layer between specs and code I’ve always wanted. It’s a great advisor for implementation strategies. It’s a way to try new strategies in code quickly.
I don’t need to get anyone else to review my code. Most of this is for personal projects.
I don’t really write code professionally, so I don’t have a ton of need for it to manage the realities of software engineering (large codebases, peer reviews, black-box internal systems, etc.). That being said, I do a reasonable amount of embedded Linux work, and AI understands the Linux kernel and device drivers very well.
To extend your metaphor: AI is like a magic microphone that makes all of my singing sound like Tony Rice, my personal favorite singer. I’ve always wanted to sound like him - but I never will. I don’t have the range or the training. But AI allows my coding level to get to that corresponding level with writing software.
I absolutely love it.
ang_cire
This is really interesting to me.
Do you love coding, or do you love creating programs?
It seems like the latter, given that your metaphor is a microphone that makes you sound like you can sing well, i.e. wanting the end state itself rather than the achievement via the process.
"wanted to sound like him" vs "wanted to sing like him"
59nadir
> To extend your metaphor: AI is like a magic microphone that makes all of my singing sound like Tony Rice, my personal favorite singer. I’ve always wanted to sound like him - but I never will. I don’t have the range or the training. But AI allows my coding level to get to that corresponding level with writing software.
My experience with LLMs leads me to believe that it's more likely that the magic microphone in this case makes you sound still very, very bad, but being that you're not a good singer you can't tell the difference between very, very bad and Tony Rice's singing. You sang the song, though, i.e. the solution was reached.
59nadir
> And while it is easier to become good at coding than at singing - for professional purposes at least - I believe that the effect still holds.
I can't reconcile this with my own view of things but I think "for professional purposes at least" is doing a lot of work in your sentence and I get the feeling you intend to say "good enough" by adding that bit.
Most programmers are very bad at programming (and problem solving) and only if they were compared to absolute beginners with zero insight could they be said to be "good" (and most of that comes down to them at least knowing the names of maybe a couple of concepts, etc., which makes for at least a partial map of the knowledge space). Most of them will never become good at what they do either, but will stay middling because they basically just glue libraries together and learn surface level things over and over.
If all you've ever done is learn a language, a backend framework in that language, learn how to use SQL, learn JavaScript, learn a couple of frontend frameworks for JavaScript, you've just basically learned a bunch of trivia that at best could be considered table stakes for a certain part of the industry.
If you're not actually doing free form problem solving, implementing things from scratch, reading code you didn't have to read and building and reinforcing your own fundamentals in other ways you won't ever be a good programmer.
I've worked with people who've spent 10+ years in the industry only to be essentially useless without frameworks and libraries and it regularly showed in how poor their output was. It wasn't any better when they did have frameworks and libraries to use, but they could pretend it was because at least a solution was reached. The truth is that in most of those cases a much better version could've been reached by simply re-implementing only the parts that were needed, but these types of programmers don't have the capability to do so.
scarface_74
I started “coding” in 1986 in assembly on an Apple //e and by the time I graduated from college, I had experience with 4 different processor families - 65C02, 68K, PPC and x86. I spent the first 15 years of my career programming in C and C++ along with other languages.
Coding is just a means to an end - creating enough business value to convince the company I’m working for to give me money that I can exchange for food and shelter.
If AI can help me do that faster, I’m going to use it. Nor do I want to spend months procuring hardware and managing the build-out of a server room (been there, done that) when I can just submit some YAML/HCL and have it done for me in a few minutes.
ThrowawayTestr
I like solving problems, but I hate coding. Wasting 20 minutes because you forgot a semicolon or something is not fun. AI lets me focus on the problem and not bother with the tedious coding bit.
skydhash
That comment makes me deeply suspicious about your debugging skills. And the formatting of your code.
ThrowawayTestr
I write code to solve problems for my own use or for my hobby electronics projects. Asking ChatGPT to write a script is faster than reading the documentation of some Python library.
Just last week it wrote me a whole application and gui to open a webpage at a specific time. Yeah it breaks after the first trigger but it works for what I need.
nairadithya
This doesn't even make sense; forgetting a semicolon is immediately caught by the compiler. What positive benefit does AI provide here?
masfuerte
It depends on the language. JavaScript is fine without semicolons until it isn't. Of course, a linter will solve this more reliably than AI.
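A made-up minimal example of the kind of thing meant by "until it isn't": under automatic semicolon insertion, a line that starts with a bracket gets glued onto the previous statement.

    const a = 1
    const b = 2
    const sum = a + b
    [a, b].forEach(n => console.log(n))
    // TypeError at runtime: with no semicolon after `a + b`, the parser reads
    // this as `const sum = a + b[a, b].forEach(...)`, indexing into `b`
    // instead of starting a new statement. A linter flags this immediately.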
ThrowawayTestr
By knowing about libraries that I don't have to read and learn and being able to glue them together to quickly accomplish my task.
CopyOnWrite
Most comments here surprise me: I am using GitHub Copilot / ChatGPT 4 at work with a code base that mostly implements a basic CRUD service... and outside of small/trivial examples (where the generated code is mostly okay), prompting is more often than not a total waste of time. Now I wonder if I am just unable to write/refine good prompts for the LLM (it works for smaller samples, so I hope I am not too far off), or what else could explain the huge discrepancy in experience. (Just for the record: I would totally not mind if the LLM wrote the code for the stuff I have to do at work.)
To clarify my questions:
- Who here uses LLMs to generate code for bigger projects at work? (>= 20k lines of code)
- If you use LLMs for bigger projects: do you need to change your prompting strategy to get good results?
- What programming languages are you using in your code bases?
- Are there other people here who experience that LLMs are no help for non-trivial problems?
douglasisshiny
I'm in the same boat. I've largely stopped using these tools other than for asking questions about a language I'm less familiar with, or about a complex TypeScript type, where it can (sometimes) be helpful. Otherwise I felt like I was just wasting my time and becoming a lazier/worse developer. I do wonder whether LLMs have hit a wall and we're in a hype cycle.
CopyOnWrite
Yes, I have the same feeling about the wall/hype cycle. Most of my time is spent understanding code and formulating a plan to change it without breaking anything... even if LLMs generated 100% perfect code on the first try, it would not help in a big way.
One thing I forgot to mention is asking LLMs questions from within the IDE instead of doing a web search... this works quite nicely, but again, it is not a crazy productivity boost.
thi2
My employer gives me access to Jetbrains AI; I work on a Vue frontend with a Kotlin Spring Boot backend.
The codebase is not too old and has grown without too much technical debt, but with complex prompts I never had decent success. It's useful for quick "what does this do" checks, but any real functionality seems to be lacking.
Maybe I'm not refining my prompts well enough, but doing so would take longer than implementing it myself.
Recently I tried Jetbrains Junie, which acts like Claude if I understand it correctly.
I had a really refined prompt and ran it three times with adjustments and fine-tuning, but the result was still lacking. So I tossed it and wrote the thing myself. But watching the machine nearly get it right was still impressive.
aitchnyu
Jetbrains AI runs on a "discount LLM" and its ratings were below 2 stars. I tried two others, which played games with me to reduce context and use cheaper models. I then switched to Aider, which leads me to believe a moderate Claude user may need to spend $30 a month; I use Gemini models and didn't exceed $5.
CrimsonRain
You are just bad at prompting, or working with a very obscure language/framework, or a bad coding pattern, or all of the above. I had a talk with a seasoned engineer, who has been coding for 50 years and has created many amazing things over his lifetime, about the really bad results he was getting from the AI tools I had suggested to him. When I use AI for the same purposes in the same repo he's working on, it works nicely. When he does it, the results are never what he wants. It comes down to a combination of him not understanding how to guide the LLM in the right direction, and using a language/framework he's not familiar with, so he can't judge the LLM's output.
It is really important to know what you want and to be able to describe it in short but important points: points that you know the AI will mess up if you don't specify them. You also need to figure out early which direction the AI is heading with the solution and correct it EARLY rather than later, avoid overloading the context/memory with unnecessary things, focus on key areas to improve, and much more.
I'm using AI to get solutions done that I could definitely do myself, but that would take a certain amount of time spent hunting down documentation, API/lib calls, etc. With AI, 1/10th of the time is enough.
I've had massive success with java, js/TS, html css, go, rust, python, bitbucket pipelines/GitHub actions, cdk, docker compose, SQL, flutter/dart, swift etc.
douglasisshiny
I've had the same experience as the person to whom you're responding. After reading your post, I have to ask: if you're putting so much effort into prompting it with specific points, correcting it often, etc., why not just write the code yourself?
Aren't you worried that over time you'll rely on it too much and your offhand knowledge will get worse?
CrimsonRain
I'm still spending less effort/time. A very significant amount.
I do write plenty of things myself. Sometimes, I ignore AI completely and write 100s of lines. Sometimes, I take copilot suggestions every other line, as I'm writing something "common" and copilot can "read" my mind. And sometimes, I write 100s of lines purely by prompting. It is a fine line when to do which; also depends on mood.
I am not worried about that, as I spend hours every day reading. I'm also the type of person who, when something is needed in a document, doesn't search for it with Ctrl+F but manually looks through it. That always takes more time, but I also learn things adjacent to the topic I need.
And I never commit a single line from AI without reading and understanding it myself. So it might come up with a 100-line solution for me, but I probably already knew what I wanted, and on the off chance it came up with something correct in a way I did not know, I read it and learn from it.
Ultimately, to me, the knowledge that I can !reset null in a docker compose override is important. Remembering whether the syntax is !null reset or reset !null or !reset null is not. My offhand knowledge is not getting worse, as I am constantly learning things; I just focus less on specific syntax and API signatures now.
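For reference, the syntax in question looks roughly like this (service and file names are made up, and this assumes a Compose version recent enough to support the !reset tag):

    # compose.override.yaml, assuming a base compose.yaml that sets
    # ports and environment for a hypothetical service called `app`:
    services:
      app:
        ports: !reset []          # drop the ports inherited from the base file
        environment: !reset null  # reset environment back to nothing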
You can apply the same argument to IDEs: almost all developers would fail to write proper JS/TS/Java etc. without IDE help.
CopyOnWrite
I have read somewhere that LLMs are mostly helpful to junior developers.
Is it possible that the person claiming success with all these languages/tools/technologies is just at a junior level and is subjectively correct, but has no point of reference for how fast seniors code and what quality code looks like?
xandrius
Not OP, but it becomes natural and doesn't take a lot of time.
Anyway, if you want to, LLMs can today help with a ton of programming languages and frameworks. If you use any of the top 5 languages and it still doesn't work for you, either you're doing some esoteric work or you're doing it wrong.
CopyOnWrite
I do not rule out that I am just very bad at prompting.
It just surprises me that you report massive successes with "java, js/TS, html css, go, rust, python, bitbucket pipelines/GitHub actions, cdk, docker compose, SQL, flutter/dart, swift etc."; once you include the usual libraries/frameworks and the diverse application areas of these technologies, it seems crazy to me that one could make meaningful contributions to non-trivial code bases in all of them, even with LLM support.
Concerning SQL, I can report another failure with LLMs: in a trivial code base with a handful of entities, the LLM cannot come up with basic window functions.
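By "basic" I mean something on the level of ranking rows within a group; table and column names below are made up for illustration:

    -- latest order per customer via a basic window function:
    SELECT id, customer_id, amount
    FROM (
        SELECT id, customer_id, amount,
               ROW_NUMBER() OVER (PARTITION BY customer_id
                                  ORDER BY created_at DESC) AS rn
        FROM orders
    ) ranked
    WHERE rn = 1;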
I would be very interested if you could write up a blog post or make a YouTube video demonstrating your prompting skills... perhaps showing how to add a non-trivial feature to a bigger open-source project in any of the mentioned languages?
CrimsonRain
Unfortunately, I'm at a stage of personal life where I do not have time to blog. I'd love to but :(
The stuff I work on for company is confidential and even getting authorization to use AI was such a hassle.
Based on some of your replies, I think you have an impression of current-generation AIs that is 100% wrong. I can't blame you, as the impression you have is the one the AI companies want you to have; it's what they are hyping.
In another comment, you mentioned someone should demo how AI can add a non-leaf feature to a non-trivial codebase. This is what AI companies say AI can do, but the truth is that (current-gen) AIs cannot do this except in a few rare cases. I can't demo it to you because I can't do it, and I don't attempt it on day-to-day tasks either.
The issue is context. What you are asking for requires the AI to hold a huge amount of context that it simply is not equipped to handle (at least not right now).
What AIs are really good at is doing a small fragment of a task, given clear enough requirements.
When I want AI to write a handler in my Controller, I don't just ask it to "write a function to handle a POST call for entity E."
I write the Javadoc /* */ comment that defines the signature and explains a little about how the core process of this handler works. I may even copy/paste a similar handler from another controller if I think that will help.
Ultimately, my brain already knows the input, output, and key transformations that need to happen in this function. I just write the minimal amount (especially comments) and get AI to complete the rest.
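As a hedged sketch of what I mean, transposed to TypeScript/Express since I can't share the actual Java (`saveEntity` and the route are hypothetical stand-ins): the doc comment and signature are what I write; the body is the kind of thing the model completes.

    import type { Request, Response } from "express";

    // Hypothetical stand-in for the real persistence call:
    declare function saveEntity(data: { name: string }): Promise<{ id: string; name: string }>;

    /**
     * Handle POST /entities: validate the payload, persist it, and return
     * the created record with a 201. Mirror the error handling used by the
     * sibling handlers in this controller.
     */
    export async function createEntityHandler(req: Request, res: Response): Promise<void> {
      const { name } = req.body;
      if (typeof name !== "string" || name.length === 0) {
        res.status(400).json({ error: "name is required" });
        return;
      }
      const entity = await saveEntity({ name });
      res.status(201).json(entity);
    }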
So if I need to write a non-leaf feature, I break it down into several leaf features, pass those to the AI, and assemble the results manually if needed.
I had to write a 500-LOC bash script to handle a software deployment. This is not the way to do it, but I was forced into it by circumstances created by someone else. Anyway, if I had written the whole thing by hand it would have taken multiple days, as bash syntax is not forgiving and the things I needed to do in the script were quite complex/crazy (and stupid).
I think I wrote about 50+ lines of text describing the whole process, which you can think of as a requirements document.
With a few tries, I was able to get the whole script to near accuracy. My reading revealed some issues; I pointed them out to the AI and it fixed them. Tests revealed some other issues; again, the AI fixed them after I pointed them out. I was able to get the whole thing done in just an hour or so.
thi2
> You are just bad at prompting, or working with a very obscure language/framework, or a bad coding pattern, or all of the above
You just described every existing legacy project^^
manojlds
Play with Cursor or Claude Code a bit and then make a decision. I am not in the "this is going to replace devs" camp, but this has changed the way I code and approach things.
CopyOnWrite
Could you perhaps point me to a YouTube video which demonstrates an experienced prompter sculpting code with Cursor/Claude Code?
In my search I just found trivial examples.
My criticism so far:
- Examples always seem to be about creating a simple application from scratch
- Examples always use super common things (like creating a blog / a simple CRUD website)
What I would love to see (as mentioned elsewhere): adding a non-trivial feature to a bigger code base. Just a YouTube video/demonstration. I don't care about the language/framework etc. ...
kirubakaran
This morning I made this while sipping coffee, and it solves a real problem for my gf: https://github.com/kirubakaran/xmldiffer Sure, it's not enormous, and it was built from scratch, but IMHO it's not a trivial thing either. It would've taken me at least a day or two of full-time work, and I certainly don't have a couple of days to spare on a task like this. Instead, pair programming with AI made it into a fun, relaxing activity.
knlam
Copilot is just plain bad. The results are night and day compared with Cursor + Gemini 2.5 (with good prompting, of course).
merb
Copilot can also use Gemini 2.5 and Sonnet 3.7.
7589447636
> Now I wonder if I am just unable to write/refine good prompts for the LLM (it works for smaller samples, so I hope I am not too far off), or what else could explain the huge discrepancy in experience.
Programming language / stack plays a big role, I presume.
CopyOnWrite
Fair enough. Still, I was out of luck for some fairly simple SQL statements, where the model knows 100% of the DDL statements.
stopyellingatme
Same here. We have a massive codebase with large classes, and the LLMs are not very helpful. Frontend stuff is okay sometimes, but the backend models are too complex at this point, I guess.
pdntspa
Tooling and available context size matters a lot. I'm having decent luck with Gemini 2.5 and Roo Code.
pdimitar
I don't know, man. Maybe prompt most of your work, eyeball it and verify it rigorously (and if you cannot do that, you should absolutely never touch an LLM!), run a script to commit and push after 3 hours, and then... work on whatever code makes you happy without using an LLM?
Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have. I have started deriving a lot of value from the LLMs I choose to interact with by specifying clear boundaries on what's the priority, what can wait for later, and what should be completely ignored due to this or that objective (and a number of other parameters I give them). When you do that well, they are extremely useful.
only-one1701
I see this "prompting is an art" stuff a lot. I gave Claude a list of 10 <Route> objects and asked it to make an adjustment to all of them. It gave me 9 back. When I asked it to try again it gave me 10 but one didn't work. What's "prompt engineering" there, telling it to try again until it gets it right? I'd rather just do it right the first time.
codr7
We used to make fun of and look down on coders who mindlessly copy paste and mash the compile button until the code runs, for good reasons.
pdimitar
Did you skip the "rigorously verify the LLM code" part of my comment on purpose, just to show contempt?
pdimitar
Then don't use it? Nobody is making you.
I am also barely using LLMs at the moment. Even 10% of the time would be generous.
What I was saying is that I have tried different ways of interacting with LLMs and was happy to discover that the way I describe stuff to another senior dev actually works quite fine with an LLM. So I stuck to that.
Again, if an LLM is not up to your task, don't waste your time with it. I am not advocating for "forget everything you knew and just go ask Mr. AI". I am advocating for enabling and productivity-boosting. Some tasks I hate, for some I lack the deeper expertise, others are just verbose and require a ton of typing. If you can prompt the LLM well and vet the code yourself after (something many commenters here deliberately omit so they can happily tear down their straw man) then the LLM will be a net positive.
It's one more tool in the box. That's all there is to it really. No idea why people get so polarizing.
tmpz22
Prompt engineering is just trying the task on a variety of models and prompt variations until you better understand the syntax needed to get the desired outcome, if the desired outcome can be gotten at all.
Honestly, you're trying to prove AI is ineffective by telling us it didn't work with your ineffective protocol. That is not a strong argument.
only-one1701
What should I have done there? Tell it to make sure it gives me back all 10 objects I gave it? Tell it not to put brackets in the wrong place? This is a real question: what would you have done?
FuckButtons
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
Hard disagree, I get to hyperfocus on making magical things that surprise and delight me every day.
NineWillows
Nice. I've got a whole lot of magical things that I need built for my day job. Want to connect so I can hand the work over to you? I'll still collect the paychecks, but you can have the joy. :)
pdimitar
[flagged]
FuckButtons
My work is the magical stuff, I don’t write much code outside of work, I don’t have time with two young kids.
hellisothers
> Let's stop pretending or denying it: most of us would delegate our work code to somebody else or something else if we could.
I don’t think this is the case; if anything, the opposite is true. Most of us would like to do the coding work, but have realized at some point in our careers that you're paid more to abstract yourself away from it and get others to do it, whether in technical leadership or management.
diggan
> I don’t think this is the case; if anything, the opposite is true
I'll be a radical and say that I think it depends and is very subjective.
The author above you seems to enjoy working on code in itself. You seem to have a different motivation. Mine is solving problems I encounter; code just happens to be one of many possible ways to do that. The author of the submitted article seems to love the craft of programming in itself; maybe the problem doesn't even matter. Some people program just for the money, and so on.
pdimitar
Well, it does not help that a lot of work tasks are meaningless drudgery that we collectively should have trivialized and 100% automated at least 20 years ago. That was kind of the core of my point: a lot of work tasks are just plain BS.
elicksaur
> most of us would delegate our work code to somebody else or something else if we could.
Laughably narrow-minded projection of your own perspective on others.
jappgar
We all delegate. Did you knit your own clothes or is that too boring for you?
Enjoying coding/knitting is fine, but we can no longer expect to get paid well to do it.
elicksaur
Each activity we engage in has different use, value, and subjective enjoyment to different people. Some people love knitting! Personally, I do know how to sew small tears, which is more than most people in the US these days.
Just because I utilize the services of others for some things does not mean that it should be expected I want to utilize the service of others for all things.
This is a preposterous generalization and exactly why I said the OP premise is laughable.
Further, you’ve shifted OP’s point from subjective enjoyment of an activity to getting “paid well” - this is an irrelevant tangent to whether “most” people in general would delegate work if they could.
codr7
I wouldn't, I got into software exactly because I enjoy solving problems and writing code. Verifying shitty, mindless, computer generated code is not something I would consider doing for all the money in the world.
pdimitar
1. I work on enjoyable problems after I let the LLM do some of the tasks I have to do for money. The LLM frees me bandwidth for the stuff I truly love. I adore solving problems with code and that's not going to change ever.
2. Some of the modern LLMs generate very impressive code: variables caching values that are reused several times, utility functions, even closure helpers scoped to a single function. I agree that when the quality of the LLM's code falls below a certain threshold, it's better in every way to just write it yourself.
troupo
> Still, prompting LLMs well requires eloquence and expressiveness that many programmers don't have
It requires magical incantations that may or may not work and where a missing comma in a prompt can break the output just as badly as the US waking up and draining compute resources.
Has nothing to do with eloquence
ang_cire
> most of us would delegate our work code to somebody else or something else if we could.
I saw your objections to other comments on the basis that they seemingly don't have a disdainful attitude towards the coding they do for work, specifically.
I absolutely do have tasks, coding included, that I don't want to do, and find no joy in. If I can have my manager assign the task to someone else, great! But using an LLM isn't that, so I'm still on the hook for ensuring all the most boring parts of that task (bugfixing, reworks, integration, tests, etc) get done.
My experience with LLMs is that they simply shift the division of time away from coding, and towards all the other bits.
And it can't possibly just be about prompting. How many hundreds of lines of prompting would you need to get an LLM to understand your coding conventions, security baselines, documentation reqs, logging, tests, allowed libraries, OSS license restrictions (i.e. disallowed libraries), etc? Or are you just refactoring for all that afterwards?
Maybe you work somewhere that doesn't require that level of rigor, but that doesn't strike me as a good thing to be entrenching in the industry by increasing coders' reliance on LLMs.
pdimitar
Super necessary context: I still barely use LLMs at all. Maybe I should have said so, but I figured too much nuance would ruin a top-level comment, so I mostly commented casually on one tradeoff of using or not using LLMs.
Where I use LLMs:
1. Super boring and annoying tasks. Yes, my prompts for those include various coding-style instructions, requests for small clarifying comments where the goal of the code is not obvious, and tests. No OSS license restrictions, though. Libraries I specify myself most of the time (only once did I ask it to suggest a library). Logging and telemetry I add myself. Long story short, I use the LLM to show me a draft of a solution, then mercilessly refactor it to match my practices and guidelines. I don't do 50 exchanges out of laziness, no.
2. Tasks where my expertise is lacking. I recently used an LLM to help me make some `.clone()`-heavy Rust code nearly zero-copy for performance reasons; it is code on a hot path. As much as I love Rust and am fairly good at it (realistically a 7.5/10, IMO), I don't yet know all the lifetime and zero-copy semantics. A long session with an LLM later, I emerged both better educated and with faster code. IMO a win-win.
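For flavor, a toy sketch of the kind of transformation I mean (not the actual code): return a borrowed slice instead of an allocated copy, with a lifetime tying the output to the input.

    // Allocates a new String on every call:
    fn first_word_owned(s: &str) -> String {
        s.split_whitespace().next().unwrap_or("").to_string()
    }

    // Zero-copy: the returned slice borrows from `s` for the lifetime 'a:
    fn first_word<'a>(s: &'a str) -> &'a str {
        s.split_whitespace().next().unwrap_or("")
    }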
ang_cire
That's interesting, especially wrt the Rust example. I actually like LLMs as reference docs, I just don't trust their code as far as I can throw it.
Thanks for the follow-up!
jimbob45
The act of coding preserves your skills for that all-important verification step. No coding and the whole system falls apart.
codr7
Exactly. How are you supposed to verify anything when you don't have any skills left beyond prompting?
pdimitar
You don't. That's why you don't use an LLM most of the time. I was talking about cases where the tasks were either too boring or required expertise that I didn't have at the time.
Thought it was obvious.
pdimitar
Absolutely. That's why I don't give the LLM the reins for long, nor do I tell it to do the whole thing. I want to keep my mind sharp and my abilities honed.
ahamilton454
I’ve been struggling with a very similar feeling. I too am a manager now. Back in the day there was something very fulfilling about fully understanding and comprehending your solution. With AI tools, I find I don’t need to understand a lot, and the job is much less fulfilling.
The funny thing is, I agree with the other comments: it is just kind of like a really good Stack Overflow. It can’t automate the whole job, not even close, and yet I find the tasks it cannot automate (the ones I end up doing) so much more boring.
I envy the people who say AI tools free them up to focus on what they care about. I haven’t been able to achieve this building with AI; if anything, it feels like my competence has decreased because of the tools. I’m fairly certain I know how to use the tools well; I just don’t enjoy how the job has evolved.
Kiro
Can't relate at all. I've never had so much fun programming as I have now. All the boring and tedious parts are gone and I can finally focus on the code I love to write.
drooby
I've been singin' this song for years. We should return to Small Data. Hand picked, locally sourced, data. Data I can buy at a mom and pop shop. Data I can smell, data I can feel, data I can yearn for.
Gone are those days.
smj-edison
I'm guessing you're referencing KRAZAM? https://youtu.be/eDr6_cMtfdA
kristjank
When we outsource the parts of programming that used to demand our complete focus and creativity, do we also outsource the opportunity for satisfaction? Can we find the same fulfillment in prompt engineering that we once found in problem-solving through code?
Most of the AI-generated programming content I use consists of comments/explanations for legacy code, closely followed by tailored "getting started" scripts and iterations on visualisation tasks (for shitty school assignments that want my pyplots to look nice). The rest requires an understanding which AI can help you reach faster (it has read many a book on the topic, so it can recall information a lot like an experienced colleague would), but it can't confer capital-K Knowledge or understanding upon you. Some of the tasks it performs are grueling, take a lot of time to do manually, and provide little mental stimulation. Some may be described as lobotomizing and (in my opinion) may mentally damage you in the "Jack Torrance typewriter" kind of way.
It lets me work on the fun parts of my job, the parts that possess the qualities the article applauds.
lrvick
So long as your experience and skill allow you to produce work of higher quality than the average for your industry, you will always have a job: reviewing that average-quality work and surgically correcting it when it is wrong.
This has always been true in every craft, and it remains true for programmers in a post-LLM world.
Most training data is open source code written by novice-to-average programmers publishing their first attempts at things, and thus LLMs are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience.
Honestly, to programmers early in their career right now, I would suggest spending more time reviewing code and bugfixes than writing code. Review is the skillset the industry needs most now.
But you will need to be above average as a software reviewer to be employable. Go out into FOSSland and find a bunch of CVEs, or contribute perf/stability/compat fixes, proving you can review and improve things better than existing automated tools.
Trust me, there are bugs -everywhere- if you know how to look for them and proving you can find them is the resume you need now.
The days of anyone that can rub two HTML tags together having a high paying job are over.
nottorp
> LLMs are heavily biased to replicate the naive, slow, insecure code largely uninformed by experience
The one time I pasted LLM code without reviewing it, it belonged on Accidentally Quadratic.
It was obvious on first read, but probably not to a beginner. The accidental complexity was hidden behind API calls that weren't wrong, just grossly inefficient.
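A made-up sketch of the pattern (not the actual code): each call looks innocent, but the membership check scans the whole array, so the loop is O(n^2).

    function dedupeSlow(items: string[]): string[] {
      const seen: string[] = [];
      for (const item of items) {
        if (!seen.includes(item)) {  // linear scan on every iteration
          seen.push(item);
        }
      }
      return seen;
    }

    // The linear fix a reviewer would suggest: Set membership checks are O(1).
    function dedupeFast(items: string[]): string[] {
      return [...new Set(items)];
    }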
The problem might be that if you lose the "joy" and the "flow", you'll stop caring about things like that. And software is bloated enough already.
kcexn
The problem with FOSSland is that it is increasingly driven by commercial interests, not by volunteers.
I can't remember the last time I encountered a widely used FOSS project (as opposed to a random hobby project) that wasn't funded and supported by a company, with exceptions maybe only in the GNU software suite, and even there lots of authors are making submissions from company email addresses.
I think it's totally acceptable to not make open-source contributions to those projects unless someone is paying you to.
lrvick
Companies pay with money for specific features they want, just like individuals pay with their even more valuable time to add features they want.
This is fine, so long as the community decides what features it actually wants, creating the "menu" of unfunded objectives that donors can sponsor.
Some people love programming, for the sake of programming itself. They love the CS theory, they love the tooling, they love most everything about it.
Other people see all that as a means to an end, and find no joy in the technical aspects of creating something. They're more interested in the end result / product than in the process itself.
I think that if you're in group A, it can be difficult to understand group B. And vice versa.
I'm a musician, so I love everything about creating music. From the theory, to the mastery of the instrument, to the tens of thousands of hours I've poured into it... finally being able to play something I never thought I'd be able to, just by sheer willpower and practice. Coming up with melodies that make me feel something, or that I can relate to.
On the other hand, I know people that want to jump straight to the end result. They have some melody or idea in their head, and they just want to generate some song that revolves around that idea.
I don't really look down on those people, even though the snobs might argue that they're not "real musicians". I don't understand them, but that's not really something I have to understand either.
So I think there are a lot of devs these days who have been honing their skills and love for the craft for years, and who don't understand why people just want things to be generated with no effort.