
AI Coding assistants provide little value because a programmer's job is to think

kace91

These articles keep popping up, analyzing a hypothetical usage of AI (and guessing it won't be useful) as if it weren't something already being used in practice. It's kinda weird to me.

“It won’t deal with abstractions” -> try asking cursor for potential refactors or patterns that could be useful for a given text.

“It doesn’t understand things beyond the code” -> try giving them an abstract jira ticket or asking what it thinks about certain naming, with enough context

“Reading code and understanding whether it’s wrong will take more time than writing it yourself” -> ask any engineer that saves time with everything from test scaffolding to run-and-forget scripts.

It’s as if I wrote an article today arguing that exercise won’t make you able to lift more weight - every gymgoer would raise an eyebrow, and it’s hard to imagine even the non-gymgoers would be sheltered enough to buy the argument either.

thenaturalist

While I tend to agree with your premise that the linked article is reasoning to extremes from a very small code snippet, I think the core critique the author wants to make stands:

AI agents alone, unbounded, currently cannot provide huge value.

> try asking cursor for potential refactors or patterns that could be useful for a given text.

You, the developer, will be selecting this text.

> try giving them an abstract jira ticket or asking what it thinks about certain naming, with enough context

You still selected a JIRA ticket and provided context.

> ask any engineer that saves time with everything from test scaffolding to run-and-forget scripts.

Yes, that is true, but again, what you are providing as counterexamples are very bounded, aka easy, contexts.

In any case, the industry (the LLM providers as well as tooling builders and devs) is clearly going in the direction of constantly etching out small improvements by refining which context is deemed relevant for a given problem and the most efficient ways to feed it to LLMs.

And let's not kid ourselves, Microsoft, OpenAI, hell Anthropic all have 2027-2029 plans where these things will be significantly more powerful.

Wowfunhappy

Here's an experience I've had with Claude Code several times:

1. I'll tell Claude Code to fix a bug.

2. Claude Code will fail, and after a few rounds of explaining the error and asking it to try again, I'll conclude this issue is outside the AI's ability to handle, and resign myself to fixing it the old-fashioned way.

3. I'll start actually looking into the bug on my own, and develop a slightly deeper understanding of the problem on a technical level. I still don't understand every layer to the point where I could easily code a solution.

4. I'll once again ask Claude Code to fix the bug, this time including the little bit I learned in #3. Claude Code succeeds in one round.

I'd thought I'd discovered a limit to what the AI could do, but just the smallest bit of digging was enough to un-stick the AI, and I still didn't have to actually write the code myself.

(Note that I'm not a professional programmer and all of this is happening on hobby projects.)

sdesol

> I once again ask Claude Code to fix the bug, this time including the little bit I learned in #3. Claude Code fixes the problem in one round.

Context is king, which makes sense since LLM output is based on probability. The more context you can provide it, the more aligned the output will be. It's not like it magically learned something new. Depending on the problem, you may have to explain exactly what you want. If the problem is well understood, a sentence will most likely suffice.

theshrike79

I had Claude go into a loop because I have cat aliased as bat

It wanted to check a config json file, noticed that it had missing commas between items (because bat prettifies the json) and went into a forever loop of changing the json to add the commas (that were already there) and checking the result by 'cat'ing the file (but actually with bat) and again finding out they weren't there. GOTO 10

The actual issue was that Claude had left two overlapping configuration parsing methods in the code: one with Viper (the correct one) and one 1000% idiotic string-search system it decided to use instead of actually unmarshaling the JSON :)

I had to use pretty explicit language to get it to stop fucking with the config file and look for the issue elsewhere. It did remember it, but forgot on the next task, of course. I should've added the fact to the rule file.

(This was a vibe coding experiment, I was being purposefully obtuse about not understanding the code)

tptacek

Why does it matter that you're doing the thinking? Isn't that good news? What we're not doing any more is any of the rote recitation that takes up most of the day when building stuff.

d0liver

I think "AI as a dumb agent for speeding up code editing" is kind of a different angle and not the one I wrote the article to address.

But, if it's editing that's taking most of your time, what part of your workflow are you spending the most time in? If you're typing at 60 WPM for an hour, that's 3,600 words; at roughly ten words per line, that's over 300 lines of code in an hour without any copy and paste, which is pretty solid output if it's all correct.

viraptor

In lots of jobs, the person doing work is not the one selecting text or the JIRA ticket. There's lots of "this is what you're working on next" coding positions that are fully managed.

But even if we ignored those, this feels like goalpost moving. They're not selecting the text - ok, ask LLM what needs refactoring and why. They're not selecting the JIRA ticket with context? Ok, provide MCP to JIRA, git and comms and ask it to select a ticket, then iterate on context until it's solvable. Going with "but someone else does the step above" applies to almost everyone's job as well.

danielschreber

> etching out

Could you explain what you mean by "etching out" small improvements? I've never seen the phrase "etching out" before.

quesera

Not OP, but might be an autocorrection for "eking out"

tomnipotent

I think maybe you have unrealistic expectations.

Yesterday I needed to import a 1GB CSV into ClickHouse. I copied the first 500 lines into Claude and asked it for a CREATE TABLE and CLI to import the file. Previous day I was running into a bug with some throw-away code so I pasted the error and code into Claude and it found the non-obvious mistake instantly. Week prior it saved me hours converting some early prototype code from React to Vue.
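Roughly the shape of that workflow, as a sketch (the table name, columns, and file path here are invented for illustration, not my actual schema):

    # Hypothetical sketch: create the table Claude proposed, then stream the CSV in.
    import subprocess

    ddl = """
    CREATE TABLE IF NOT EXISTS events (
        event_time DateTime,
        user_id    UInt64,
        action     String
    ) ENGINE = MergeTree ORDER BY (user_id, event_time)
    """

    subprocess.run(["clickhouse-client", "--query", ddl], check=True)

    # ClickHouse reads the CSV (with header row) straight from stdin.
    with open("data.csv", "rb") as f:
        subprocess.run(
            ["clickhouse-client", "--query",
             "INSERT INTO events FORMAT CSVWithNames"],
            stdin=f, check=True,
        )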

I do this probably half a dozen times a day, maybe more if I'm working on something unfamiliar. It saves at a minimum an hour a day by pointing me in the right direction - an answer I would have reached myself, but slower.

Over a month, a quarter, a year... this adds up. I don't need "big wins" from my LLM to feel happy and productive with the many little wins it's giving me today. And this is the worst it's ever going to be.

charlie-83

Out of interest, what kind of codebases are you able to get AI to do these things on? Every time I have tried it with even simpler things than these, it has failed spectacularly. Every example I see of people doing this kind of thing seems to be on some kind of web development, so I have a hypothesis that AI might currently be much worse for the kinds of codebases I work on.

kace91

I currently work for a finance-related scaleup. So backend systems, with significant challenges related to domain complexity and scalability, but nothing super low level either.

It does take a bit to understand how to prompt in a way that makes the results useful. Can you share what you tried so far?

charlie-83

I have tried on a lot of different projects.

I have a codebase in Zig and it doesn't understand Zig at all.

I have another which is embedded C using Zephyr RTOS. It doesn't understand Zephyr at all, and even if it could, it can't read the documentation for the different sensors, nor can it plug in cables.

I have a TUI project in Rust using ratatui. The core of the project is dealing with binary files, and the time it takes to explain to it how specific bits of data are organised in the file, and then check it got everything perfectly correct (it never has), is more than the time to just write the code. I expect I could have more success on the actual TUI side of things but haven't tried too much, since I am trying to learn Rust with this project.

I just started an Android app with Flutter/Dart. I get the feeling it will work well for this, but I am yet to verify, since I need to learn enough Flutter to be able to judge it.

My dayjob is a big C++ codebase making a GUI app with Qt. The core of it is all dealing with USB devices and Bluetooth protocols, which it doesn't understand at all. We also have lots of very complicated C++ data structures; I had hoped that the AI would be able to at least explain them to me, but it just makes stuff up every time. This also means that getting it to edit any part of the codebase touching this sort of thing doesn't work. It just rips up any thread safety or allocates memory incorrectly, etc. It also doesn't understand the compiler errors at all; I had a circular dependency and tried to get it to solve it, but I had to give so many clues I basically told it what the problem was.

I really expected it to work very well for the Qt interface, since building UI is what everyone seems to be doing with it. But the amount of hand-holding it requires is insane. Each prompt feels like a monkey's paw. In every experiment I've done it would have been faster to just write it myself. I need to try getting it to write an entirely new piece of UI from scratch, since I've only been editing existing UI so far.

Some of this is clearly a skill issue since I do feel myself getting better at prompting it and getting better results. However, I really do get the feeling that it either doesn't work or doesn't work as well on my code bases as other ones.

doug_durham

I work in Python, Swift, and Objective-C. AI tools work great in all of these environments. It's not just limited to web development.

charlie-83

I suppose saying that I've only seen it in web development is a bit of an exaggeration. It would be more accurate to say that I haven't seen any examples of people using AI on a codebase that looks like one of the ones I work on. Clearly I am biased and just lump all the types of coding I'm not interested in into "web development".

idontwantthis

That’s my experience too. It also fails terribly with ElasticSearch, probably because the documentation doesn’t have a lot of examples. ChatGPT, Copilot, and Claude were all useless for that and gave completely plausible nonsense. I’ve used it with most success for writing unit tests and definitely shell scripts.

tyleo

Agreed. It isn’t like crypto, where proponents proclaimed some value-proving use case that was always on the verge of arriving. AI is useful right now. People are using these tools now and enjoying them.

d0liver

> Observer bias is the tendency of observers to not see what is there, but instead to see what they expect or want to see.

Unfortunately, people enjoying a thing and thinking that it works well doesn't actually mean much on its own.

But, more than that, I suspect that AI is making more people realize that they don't need to write everything themselves; they never needed to, to begin with, and they'd be better off doing the code reuse thing in a different way.

jdiff

I'm not sure that's a convincing argument, given that crypto heads haven't just been enthusiastically chatting about the possibilities in the abstract. They do an awful lot of that (see Web3), but they have also been using crypto.

plsbenice34

Even in 2012 bitcoin could very concretely be used to order drugs. Many people have used it to transact and preserve value in hostile economic environments. Etc etc. Ridiculous comment.

Personally i have still yet to find LLMs useful at all with programming.

tehjoker

bitcoin tracks the stock market

rsynnott

People are using divining rods now and enjoying them: https://en.wikipedia.org/wiki/Dowsing

awesome_dude

I don't (use AI tools). I've tried them and found that they got in the way, made things more confusing, and did not get me to a point where the thing I was trying to create was working (let alone working well / safe to send to prod).

I am /hoping/ that AI will improve, to the point that I can use it like Google or Wikipedia (that is, have some trust in what's being produced)

I don't actually know anyone using AI right now. I know one person on Bluesky who has found it helpful for prototyping things (and I'm kind of jealous of him, because he's found how to get AI to "work" for him).

Oh, I've also seen people pasting AI results into serious discussions to try and prove the experts wrong, only to discover that the AI has produced flawed responses.

tptacek

> I don't actually know anyone using AI right now.

I believe you, but this to me is a wild claim.

plsbenice34

Essentially the same for me. I had one incident where someone was arguing in favor of it and then immediately embarrassed themselves badly because they were misled by a ChatGPT error. I have the feeling that this hype will collapse as this happens more and people see how bad the consequences are when there are errors.

amelius

If AI gives a bad experience 20% of the time, and if there are 10M programmers using it, then about 3000 of them will have a bad experience 5 times in a row. You can't really blame them for giving up and writing about it.
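The back-of-the-envelope math, for anyone checking:

    # Chance of five independent bad experiences in a row, times the user count.
    p_bad = 0.2
    users = 10_000_000
    print(p_bad ** 5 * users)  # 3200.0 -- the "about 3000" above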

verelo

It’s all good to me: let these folks stay in the simple times while you and I arbitrage our efforts against theirs. I agree, there’s massive value in using these tools, and it’s hilarious to me when others don’t see it. My reaction isn’t going to be to convince them they’re wrong; it’s just to find ways to use it to get ahead while leaving them behind.

d0liver

I need some information/advice -> I feed that into an imprecise aggregator/generator of some kind -> I apply my engineering judgement to evaluate the result and save time by reusing someone's existing work

This _is_ something that you can do with AI, but it's something that a search engine is better suited to because the search engine provides context that helps you do the evaluation, and it doesn't smash up results in weird and unpredictable ways.

Y'all think that AI is "thinking" because it's right sometimes, but it ain't thinking.

If I search for "refactor <something> to <something else>" and I get good results, that doesn't make the search engine capable of abstract thought.

andybak

AI is usually a better search engine than a search engine.

dragonwriter

AI alone can't replace a search engine well at all.

AI with access to a search engine may present a more useful solution to some problems than a bare search engine, but the AI isn't replacing a search engine; it is using one.

senordevnyc

This seems like a great example of someone reasoning from first principles that X is impossible, while someone else doing some simple experiments with an open mind can easily see that X is both possible and easily demonstrated to be so.

> Y'all think that AI is "thinking" because it's right sometimes, but it ain't thinking.

I know the principles of how LLMs work, I know the difference between anthropomorphizing them and not. It's not complicated. And yet I still find them wildly useful.

YMMV, but it's just lazy to declare that anyone who sees it differently than you just doesn't understand how LLMs work.

Anyway, I couldn't care less if others avoid coding with LLMs; I'll just keep getting shit done.

skydhash

If you observe it at the right time, a broken clock will appear to be working, because it's right twice a day.

Kapura

Weird metaphor, because a gymgoer practices what they are doing, putting in the reps to increase personal capacity. It's more like you're laughing at people at the gym, saying "don't you know we have forklifts already lifting much more?"

kace91

That’s a completely different argument, however, and a good one to have.

I can buy “if you use the forklift you’ll eventually lose the ability to lift weight by yourself”, but the author is going for “the forklift is actually not able to lift anything” which can trivially be proven wrong.

d0liver

More like, "We had a nice forklift, but the boss got rid of it replaced it with a pack of rabid sled dogs which work sometimes? And sometimes they can also sniff out expiration dates on the food (although the boxes were already labeled?). And, I'm pretty sure one of them, George, understands me when I talk to him because the other day I asked him if he wanted a hotdog and he barked (of course, I was holding a hotdog at the time). But, anyway, we're using the dogs, so they must work? And I used to have to drive the forklift, but the dogs just do stuff without me needing to drive that old forklift"

cgriswald

I see it as almost the opposite. It’s like the pulley has been invented but some people refuse to acknowledge its usefulness and make claims that you’re weaker if you use it. But you can grow quite strong working a pulley all day.

asadotzler

"If you want to be good at lifting, just buy an exoskeleton like me and all my bros have. Never mind that your muscles will atrophy and you'll often get somersaulted down a flight of stairs while the exoskeleton makers all keep trying, and failing, to contain the exoskeleton propensity for tossing people down flights of stairs."


crispinb

It's the barstool economist argument style, on long-expired loan from medieval theology. Responding to clear empirical evidence that X occurs: "X can't happen because [insert 'rational' theory recapitulation]"

bastawhiz

I have not had the same experience as the author. The code I have my tools write is not long. I write a little bit at a time, and I know what I expect it to generate before it generates it. If what it generates isn't what I expect, that's a good hint to me that I haven't been descriptive enough with my comments or naming or method signatures.

I use Cursor not because I want it to think for me, but because I can only type so fast. I get out of it exactly the amount of value that I expect to get out of it. I can tell it to go through a file and perform a purely mechanical reformatting (like converting camel case to snake case) and it's faster to review the results than it is for me to try some clever regexp and screw it up five or six times.
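For reference, the regexp I'd eventually land on after those five or six screwups is something like this (a standard two-pass sketch, not the only way to do it):

    # Convert camelCase / PascalCase identifiers to snake_case.
    import re

    def camel_to_snake(name: str) -> str:
        # Break before a capital followed by lowercase: "HTTPResponse" -> "HTTP_Response"
        s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
        # Break between a lowercase/digit and a capital: "getHTTP" -> "get_HTTP"
        return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()

    print(camel_to_snake("getHTTPResponse"))  # get_http_response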

And quite honestly, for me that's the dream. Reducing the friction of human-machine interaction is exactly the goal of designing good tools. If there was no meaningful value to be had from being able to get my ideas into the machine faster, nobody would buy fancy keyboards or (non-accessibility) dictation software.

theshrike79

I'm like 80% sure people complaining about AI doing a shit job are just plain holding it wrong.

The LLM doesn't magically know stuff you don't tell it. It CAN kinda-sorta fetch new information by reading the code or via MCP, but you still need to have a set of rules and documentation in place so that you don't spend half your credits on the LLM figuring out how to do something in your project.

flowerthoughts

I wanted to build a wire routing for a string of lights on a panel. Looked up TSP and learned of the Christofides heuristic. Asked Claude to implement Christofides. Went on to do stuff I enjoy more than mapping Wikipedia pseudocode to runnable code.

Sure, it would be really bad if everyone just assumes that the current state of the art is the best it will ever be, so we stop using our brains. The thing is, I'm very unlikely to come up with a better approximation to TSP, so I might as well use my limited brain power to focus on domains where I do have a chance to make a difference.
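For scale, the whole delegation can come down to a handful of lines if you lean on networkx, which ships a Christofides implementation (assuming networkx 2.6+; the panel coordinates here are made up):

    # Approximate a wiring order for the lights via Christofides.
    import math
    import networkx as nx
    from networkx.algorithms.approximation import christofides

    points = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]  # hypothetical light positions

    # Christofides needs a complete graph whose weights obey the
    # triangle inequality; Euclidean distances qualify.
    G = nx.complete_graph(len(points))
    for i, j in G.edges:
        G[i][j]["weight"] = math.dist(points[i], points[j])

    print(christofides(G))  # a closed tour visiting every light once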

doug_durham

This is exactly the way I succeed. I ask it to do little bits at a time. I think that people have issues when they point the tools at a large code base and say "make it better". That's not the current sweet spot of the tools. Getting rid of boilerplate has been a game changer for me.

skydhash

I think my current average code-writing speed is 1 keyword per hour or something, as nearly all my time coding is spent either reading the doc (to check my assumptions) or copy-pasting another block of code I have. The very short bursts of writing code like I would write prose happen so rarely I don't even bother remembering them.

I've never written boilerplate. I copy it from old projects (the first time was not boilerplate, it was learning the technology) or other files, and do some fast editing (vim is great for this).

kristopolous

It's the "Day-50" problem.

On Day-0, AI is great, but by Day-50 there are preferences and nuances that aren't captured through textual evidence. The productivity gains mostly vanish.

Ultimately AI coding efficacy is an HCI relationship and you need different relationships (workflows) at different points in time.

That's why, currently, as time progresses you use AI less and less on any feature and fall back to working by hand. Your workflow isn't flexible enough.

So the real problem isn't the Day-0 solution, it's solving the HCI workflow problem to get productivity gains at Day-50.

Smarter AI isn't going to solve this. Large enough code becomes internally contradictory, documentation becomes dated, tickets become invalid, design docs are based on older conceptions. Devin, plandex, aider, goose, claude desktop, openai codex, these are all Day-0 relationships. The best might be a Day-10 solution, but none are Day-50.

Day-50 productivity is ultimately a user-interface problem - a relationship negotiation and a fundamentally dynamic relationship. The future world of GPT-5 and Sonnet-4 still won't read your thoughts.

I talked about what I'm doing to empower new workflows over here: https://news.ycombinator.com/item?id=43814203

Bengalilol

You pinpoint a truly important thing, even though I cannot quite put it into words: I think that getting lost with AI coding assistants is far worse than getting lost as a programmer. It is like doing vanilla code versus trying to make a framework suit your needs.

AI coding assistants provide more value than the good old Google search 90% of the time. Nothing more, nothing less. But I don't use AI to code for me; I just use it to optimize very small fractions (i.e., methods/functions at most).

> The future world of GPT-5 and Sonnet-4 still won't read your thoughts.

Chills ahead. For sure, it will happen some day. And there won't be any reason not to embrace it (although I am, for now, absolutely reluctant to such an idea).

kristopolous

It's why these no-code/vibe-code solutions like bolt, lovable, and replit are great at hackathons, demos, or basic front-ends but there's a giant cliff past there.

Scroll through things like https://www.yourware.so/ which is a no-code gallery of apps.

There's this utility threshold due to a 1967 observation by Melvin Conway:

> [O]rganizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.

https://en.wikipedia.org/wiki/Conway%27s_law

The next step only comes from the next structure.

Lovable's multiplayer mode (https://lovable.dev/blog/lovable-2-0) combined with Agno teams (https://github.com/agno-agi/agno) might be a suitable solution if you can define the roles right. Some can be non or "semi"-human (if you can get the dynamic workflow right)

rsynnott

> It's why these no-code/vibe-code solutions like bolt, lovable, and replit are great at hackathons, demos, or basic front-ends but there's a giant cliff past there.

Back in the day, basically every "getting started in Ruby on Rails" tutorial involved making a Twitter-like thing. This seemed kind of magic at the time. Now, did Rails ultimately end up totally changing the face of webdev, allowing anyone to make Twitter in an afternoon? Well, ah, no, but it made for a good tech demo.

rcarmo

4 lines of JS. A screenful of “reasoning”. Not much I can agree with.

Meanwhile I just asked Gemini in VS Code Agent Mode to build an HTTP-like router using a trie and then refactor it as a Python decorator, and other than a somewhat dumb corner case it failed at, it generated a pretty useful piece of code that saved me a couple of hours (I had actually done this before a few years ago, so I knew exactly what I wanted).
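Something in the spirit of what it generated, from memory (a minimal sketch, not the actual Gemini output):

    # A toy trie-based router where routes are registered via a decorator.
    class Router:
        def __init__(self):
            self.root = {}  # nested dicts keyed by path segment

        def route(self, path):
            def decorator(handler):
                node = self.root
                for segment in path.strip("/").split("/"):
                    node = node.setdefault(segment, {})
                node[""] = handler  # empty key marks a terminal node
                return handler
            return decorator

        def dispatch(self, path):
            node = self.root
            for segment in path.strip("/").split("/"):
                node = node[segment]  # KeyError means no such route
            return node[""]()

    router = Router()

    @router.route("/users/list")
    def list_users():
        return ["alice", "bob"]

    print(router.dispatch("/users/list"))  # ['alice', 'bob']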

Replace programmers? No. Well, except front-end (that kind of code is just too formulaic, transactional and often boring to do), and my experiments with React and Vue were pretty much “just add CSS”.

Add value? Heck yes - although I am still very wary of letting LLM-written code into production without a thorough review.

jdiff

Not even front end, unless it literally is a dumb thin wrapper around a back end. If you are processing anything on that front end, AI is likely to fall flat as quickly as it would on the backend.

permo-w

based on what?

jdiff

My own experience writing a web-based, SVG-based 3D modeler. No traditional back end, but when working on the underlying 3D engine it shits the bed from all the broken assumptions and uncommon conventions used there.

And in the UI (the case I have in mind involved pointer capture and event handling), it chases down phantoms, declaring it's working around behavior that isn't in the spec. I bring it the spec, I bring it minimal examples producing the desired behavior, and it still can't produce working code. It still tries to critique existing details that aren't part of the problem, as evidenced by the fact that it took me 5 minutes to debug and fix myself once I got tired of pruning context. At one point it highlighted a line of code and suggested the problem could be a particular function getting called after that line. That function was called 10 lines above the highlighted line, in a section it re-output in a quote block.

So yes, it's bad for front end work too if your front end isn't just shoveling data into your back end.

AI's fine for well-trodden roads. It's awful if you're beating your own path, and especially bad at treading a new path just alongside a superhighway in the training data.

nottorp

> I had actually done this before a few years ago, so I knew exactly what I wanted

Oh :) LLMs do work sometimes when you already know what you want them to write.

jolt42

> that kind of code is just too formulaic, transactional and often boring to do

No offense, but that sounds like every programmer that hasn't done front-end development to me. Maybe for some class of front-ends (the same stuff that Ruby on Rails could generate), but past that things tend to get not boring real fast.

rcarmo

I do a fair amount of dashboards and data handling stuff. I don’t really want to deal with React/Vue at all, and AI takes most of the annoyance away.

d0liver

> I had actually done this before a few years ago, so I knew exactly what I wanted

Why not just use the one you already wrote?

xnx

> Why not just use the one you already wrote?

Might be owned by a previous employer.

linsomniac

This is a funny opinion, because tools like Claude Code and Aider let the programmer spend more of their time thinking. The more time I spend diddling the keyboard, the less time I have available to be thinking about the high-level concerns.

If I can just think "Implement a web-delivered app that runs in the browser and uses local storage to store state, and then presents a form for this questionnaire, another page that lists results, and another page that graphs the results of the responses over time", and that's ALL I have to think about, I now have time to think about all sorts of other problems.

That's literally all I had to do recently. I have chronic sinusitis, and wanted to start tracking a number of metrics from day to day, using the nicely named "SNOT-22" (Sino-Nasal Outcome Test, I'm not kidding here). In literally 5 minutes I had a tool I could use to track my symptoms from day to day. https://snot-22.linsomniac.com/

I asked a few follow-ups ("make it prettier", "let me delete entries in the history", "remember the graph settings"). I'm not a front-end guy at all, but I've been programming for 40 years.

I love the craft of programming, but I also love having an idea take shape. I'm 5-7 years from retirement (knock on wood), and I'm going to spend as much time thinking, and as little time typing in code, as possible.

I think that's the difference between "software engineer" and "programmer". ;-)

minimaxir

> But AI doesn't think -- it predicts patterns in language.

Boilerplate code is a pattern, and code is a language. That's part of why AI-generated code is especially effective for simple tasks.

It's when you get into more complicated apps that the pros/cons of AI coding start to be more apparent.

permo-w

not even necessarily complicated, but also obscure

jawns

A programmer's JOB is not to think. It's to deliver value to their employer or customers. That's why programmers get paid. Yes, thinking hard about how to deliver that value with software is important, but when it comes to a job, it's not the thought that counts; it's the results.

So if I, with AI augmentation, can deliver the same value as a colleague with 20% less thought and 80% less time, guess whose job is more secure?

I know, I know, AI tools aren't on par with skilled human programmers (yet), but a skilled human programmer who uses AI tools effectively to augment (not entirely replace) their efforts can create value faster while still maintaining quality.

skydhash

The value is in working and shipped features. This value increases when there's no technical debt dragging it down. Do the 20% less thought and 80% less time still hold?

ChrisMarshallNY

I haven't been using AI for coding assistance. I use it like someone I can spin around in my chair, and ask for any ideas.

Like some knucklehead sitting behind me, it has sometimes given me good ideas. Other times ... not so much.

I have to carefully consider the advice and code that I get. Sometimes, it works, but it does not work well. I don't think that I've ever used suggested code verbatim. I always need to modify it; sometimes, heavily.

So I still have to think.

gersh

It seems like the traditional way to develop good judgement is by getting experience with hands-on coding. If that is all automated, how will people get the experience to have good judgement? Will fewer people get the experiences necessary to have good judgement?

nico

Compilers, for the most part, made it unnecessary for programmers to check the assembly code. There are still compiler programmers that do need to deal with that, but most programmers get to benefit from just trusting that the compilers, and by extension the compiler programmers, are doing a good job

We are in a transition period now. But eventually, most programmers will probably just get to trust the AIs and the code they generate, maybe do some debugging here and there at the most. Essentially AIs are becoming the English -> Code compilers

asadotzler

In my experience, compilers are far more predictable and consistent than LLMs, making them suitable for their purpose in important ways that LLMs are not.

permo-w

I honestly think people are massively panicking over nothing with AI. Even wrt graphic design, which I think people are most worried about: the main, central skill of a graphic designer is not the actual graft of sitting down and drawing the design; it's having the taste and skill and knowledge to make design choices that are worthwhile and useful and aesthetically pleasing. I can fart around all day on Stable Diffusion or telling an LLM to design a website, but I don't know shit about UI/UX design or colour theory or simply what appeals to people visually, and I doubt an AI can teach me it to any real degree.

Yes, there are now likely going to be fewer billable hours and perhaps less joy in the work, but at the same time I suspect that managers who decide they can forgo graphic designers and just get programmers to do it are going to lose a competitive advantage.

adocomplete

I disagree. It's all about how you're using them. AI coding assistants make it easy to translate thought to code. So much boilerplate can be given to the assistant to write out while you focus on system design, architecture, etc, and then just guide the AI system to generate the code for you.

beernet

Call it AI, ML, or Data Mining; it does not matter. Truth is, these tools have been disrupting the SWE market and will continue to do so. People working with them will simply be more effective. Until even they are obsolete. Don't hate the player, hate the game.

throw54644532

This is becoming even more of a consensus now; it feels like the tech is somewhat already there, or just about to arrive.

As a software professional, what makes it more interesting is that the "trick" (reasoning RL in models) that unlocked disruption of the software industry isn't really translating to other knowledge-work professions. The disruption of AI is uneven. In my circles I'm not seeing other engineers (e.g. EEs, construction/civil, etc.), lawyers, finance professionals, or anyone else get disrupted as significantly as software developers.

The respect of the profession has significantly gone down as well. From "wow you do that! that's pretty cool..." to "even my X standard job has a future; what are you planning to do instead?" within a 3 year period. I'm not even in SV, NY or any major tech hubs.

protocolture

So there's no value in dealing with the repeatable stuff to free the programmer up to solve new problems? Seems like a stretch.

9rx

There is no new value that we didn't already recognize. We've known for many decades that programming languages can help programmers.