
Engineers who dismiss AI

159 comments · December 19, 2025

plastic-enjoyer

> The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t.

For me, however, there is one issue: how can I utilize AI without degenerating my own abilities? I use AI sparingly because, to be honest, every time I use AI, I feel like I'm getting a little dumber. I fear that excessive use of AI will lead to the loss of important skills on the one hand and create dependencies on the other. Who benefits if we end up with a generation of software developers who can no longer program without AI? Programming is not just writing code, but a process of organizing, understanding, and analyzing. What I want above all is AI that helps me become better at my job and continue to build skills and knowledge, rather than making me dependent on it.

tjchear

If we see ourselves less as programmers and more as software builders, then it doesn't really matter if our programming skills atrophy in the process of adopting this tool, because it lets us build at a higher abstraction level, kind of like how a PM does it. This up-leveling in abstraction has happened over and over in software engineering as our tooling improves over time. I'm sure some excellent software engineers here couldn't write assembly code to save their lives, but are wildly productive and respected for what they do - building excellent software.

That said, as long as there’s the potential for AI to hallucinate, we’ll always need to be vigilant - for that reason I would want to keep my programming skills sharp.

AI assisted software building by day, artisanal coder by night perhaps.

thisissomething

> how can I utilize AI without degenerating my own abilities?

Couldn't the same statement, to some extent, be applied to using a sorting lib instead of writing your own sorting algorithm? Or how about using a language like python instead of manually handling memory allocation and garbage collection in C?
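(As a small, hand-wavy illustration of that trade-off - the built-in call versus exercising the skill yourself:)

  # leaning on the abstraction: one call, no sorting knowledge exercised
  data = [5, 2, 9, 1]
  print(sorted(data))  # [1, 2, 5, 9]

  # exercising the skill: a hand-rolled insertion sort
  def insertion_sort(xs):
      out = []
      for x in xs:
          i = len(out)
          while i > 0 and out[i - 1] > x:
              i -= 1  # walk left until out[i - 1] <= x
          out.insert(i, x)
      return out

  print(insertion_sort(data))  # [1, 2, 5, 9]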

> What I want above all is AI that helps me become better at my job and continue to build skills and knowledge

So far, in my experience, the quality of what AI outputs is directly related to the quality of the input. I've seen some AI projects made by junior devs that have incredibly messy and confusing architecture, despite them using the same language and LLM model that I use. The main difference? My AI work was based on the patterns and architecture that I designed thanks to my knowledge, which also happens to ensure that the AI will produce less buggy software.

duskdozer

>For me, however, there is one issue: how can I utilize AI without degenerating my own abilities?

My cynical view is you can't, and that's the point. How many times before have we seen the pattern of "company operates at staggering losses while eliminating competition or becoming entrenched in enough people's lives, and then clamps down to make massive profits"?

mythz

Do you save time by using a calculator or spreadsheet, or do you try to do all calculations in your head because your ability to do quick calculations degrades the more you rely on tools?

I'm not too worried about degrading abilities since my fundamentals are sound and if I get rusty due to lack of practice, I'm only a prompt away from asking my expert assistant to throw down some knowledge to bring me back up to speed.

While my hands-on programming has decreased, the variety of software I create has increased. I used to avoid writing complex automation scripts in bash because I kept getting blocked trying to remember its archaic syntax, so I'd typically use bun/node for complex scripts. But with AI I've switched back to writing most of my scripts in bash (it's surprising what's possible in bash), and have automated a lot more of my manual workflows since it's so easy to do.

I also avoided Python because the lack of typing and API discovery slowed me down a lot, but with AI autocomplete, whenever I need to know how to do something I'll just write a method stub with comments and AI will complete it for me. I'm now spending lots of time writing Python to create AI tools and agents, ComfyUI custom nodes, image and audio classifiers, PIL/ffmpeg transformations, etc. Things I'd never have considered before AI.
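A minimal sketch of that stub-plus-comment pattern (the function and its body here are a made-up illustration, not from a real project):

  # write the signature and a docstring describing intent; autocomplete fills in the body
  from PIL import Image

  def letterbox(img: Image.Image, target_w: int, target_h: int) -> Image.Image:
      """Resize img to fit within (target_w, target_h), padding with black bars."""
      scale = min(target_w / img.width, target_h / img.height)
      resized = img.resize((round(img.width * scale), round(img.height * scale)))
      canvas = Image.new("RGB", (target_w, target_h))  # black background by default
      canvas.paste(resized, ((target_w - resized.width) // 2,
                             (target_h - resized.height) // 2))
      return canvas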

I also don't worry about its effects, as I view it as inevitable. With the pendulum having swung towards code now being dispensable/cheap to create, what's more important is velocity and being able to execute your ideas quickly; for me that means using AI where I can.

baq

You can’t and that’s the new normal. We’re probably the only generation which was given an opportunity to get properly good at coding. No such luxury will be available in a few years optimistically; pessimistically it’s been taken away with GPT 5.2 and Opus 4.5.

omnicognate

If that's the case (and I'm not convinced it is), shouldn't retaining that skill be the priority for anyone who has already acquired it? I've yet to see any evidence AI can turn someone who can't code into a substitute for someone who can. If the supply of that skill is going to dry up, surely it will only become more valuable. If using AI erodes it, the logical thing would be not to use AI.

aleph_minus_one

> If that's the case [...], shouldn't retaining that skill be the priority for anyone who has already acquired it?

Indeed I believe that, but in my experience these skills get more and more useless in the job market. In other words: retaining such (e.g. low-level coding) skills is an intensively practised hobby that is (currently) of "no use" in the job market.

baq

That's the correct diagnosis IMHO, but getting good at software engineering is ~3 years of serious studying and ~5-10 years of serious work, and that's after you've learned to code, which is easier for some and more difficult for others.

Compare the ROI of that to being able to get kinda the software you need in a few hours of prompting; it's a new paradigm, progress is (still) exponential, and we don't know where exactly things will settle.

Experts will get scarce and very sought after, but once they start to retire in 10-20-30 years... either dark ages or AI overlords await us.

michaellee8

I think CS students should force themselves to learn the real thing and write the code themselves, at least for their assignments. I have seen that a lot of recent CS grads who have had GPT for most of their CS education basically cannot write proper code, with or without AI.

mysterydip

In the same way we make kids learn addition and multiplication even though they have access to calculators

wan23

This is part of the learning curve. When you vibe code you produce something that is as if someone else wrote it. It’s important to learn when that’s appropriate versus using it in a more limited way or not at all.

carlosneves

You can always ask it to nudge you in the right direction instead of giving the solution right away. I suspect this way of using it is not very popular though.

This is not a new problem I think. How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?

If you're not careful and always rely on them as a crutch, they'll remain just that; without actually "incrementing" you.

I think this is a very good question. How should we actually be using our tools such that we're not degenerating, but growing instead?

aleph_minus_one

> How do you use Google, translator, (even dictionaries!), etc without "degenerating" your own abilities?

By writing down every foreign word/phrase that I don't know, and adding a card for it to my cramming card box.

hu3

I worry about that too.

But at this point, it's like refusing to use vehicles to travel long distances for fear of becoming physically unfit. We go to the gym.

nottorp

Some engineers don't dismiss LLMs.

They dismiss the religion-like hype machine.

If you want to market to engineers, stick to provable statements. And address some of their concerns, with something other than "AI is evolving constantly, all your problems will be solved in 6 months, just keep paying us."

Oh by the way, what is the OP trying to sell with these FOMO tactics? Yet another ChatGPT frontend?

cons0le

I think we should label devs overreliant on AI as "Engineers who dismiss themselves"

falcor84

I'll take that, but don't see how it's so different from the intent I've always had of "automating myself out of the job". When I want to do "engineering", I can always spin up Factorio or Turing Complete. But for the rest of the time, I care about the result rather than the process. For example, before starting to implement a tool, I'll always first search online for whether there is already a good tool that would address my need, and if so, I'll generally utilize that.

nottorp

The nondeterminism is what makes LLMs different.

If you download a tool written by a human, you can reasonably expect that it does what the author claims it does. What's more, you can reasonably expect that if it fails, it will fail in the same way under the same conditions.

DonHopkins

Cracktorio! ;) I also love Dyson Sphere Program.

I wrote some Turing Machine programs back in my Philosophy of Computer Science class during the 80's, but since then my Turing Machine programming skills have atrophied, and I let LLMs write them for me now.

squidbeak

Perhaps in that case the critics should direct their ire at the marketing departments, rather than trashing the tech?

Really though, the potential in this tech is unknown at this point. The measures we have suggest there's no slowdown in progress, and it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go, or how we might need to adjust our societies around it.

nottorp

> it isn't unreasonable for any enthusiast or policy maker to speculate about where it could go

What is posted to HN daily is beyond speculation. I suppose a psychologist has a term for what it is; I don't.

Edit: well, guess what? I asked an "AI":

Psychological Drivers of AI Hype:

  Term                   | Driver                | Resulting Behavior
  -----------------------|-----------------------|------------------------------
  ELIZA Effect           | Symbolic projection   | Treating a script like a person.
  Automation Bias        | Cognitive offloading  | Trusting AI over your own eyes.
  Techno-Optimism        | Confirmation bias     | Ignoring risks for "progress."
  Interface Familiarity  | Fluency heuristic     | Friendly UI = "Smart" system.

By the way, the text formatting is done by the "AI" as well. I asked it to make the table look like a table on HN specifically.

jennyholzer2

Idiocy, delusion, propaganda, lies, and manipulation are a few terms I came up with off the top of my head.

hulitu

> You want to market to engineers, stick to provable statements.

And that's where the "AI" is lacking.

"AI can write a testcase". Can it write a _correct_ test case (i.e. one that i only have to review, like i review my colleague work) ?

"AI can write requirements". Now, that i'm still waiting to see.

nottorp

> Can it write a _correct_ test case

And is the test case useful for something? On non-textbook code?

jennyholzer2

spoiler: it is not

AI developers are 0.1x "engineers"

mholm

Is it possible you're not the target audience, given that you're already aware LLMs are impressive and useful, regardless of the inane hype and bubble around them?

jennyholzer2

You have to be particularly gullible to fall for these tactics. Especially when the quality of LLM products has declined over the last 18 months.

Allow me to repeat myself: AI is for idiots.

nottorp

Nah, LLMs have fixed search. For now. I use them daily for that.

Fully expect them to include YouTube levels of advertising in 1-2 years though, just to compensate for the results being somewhat non-spammy.

jennyholzer2

[flagged]

TGower

"AI coding is so much better now that any skepticism from 6 months ago is invalid" has been the refrain for the last 3 years. After the first few cycles of checking it out and realizing that it's still not meeting your quality bar, it's pretty reasonable to dismiss the AI hype crowd.

raincole

Because it has been true for the last 3 years. Just because a saying is repeated a lot doesn't mean it's wrong.

sothatsit

A year ago, I could sometimes get o1-mini to write tests that I would then need to fix. Now I can get Opus 4.5 to do fairly complicated refactors with no mistakes.

These tools are seriously starting to become actually useful, and I’m sorry but people aren’t lying when they say things have changed a lot over the last year.

TGower

It might even be true this time, but there is no real mystery as to why many aren't inclined to invest more time figuring it out for themselves every few months. There's no need for the author of the original article to reach for the "they are protecting their fragile egos" style of explanation.

Tripping5292

The productivity improvements speak for themselves. Over time, those who can use AI well and those who cannot will be rewarded or penalized by the free market accordingly.

jeppester

Meanwhile I'm getting a 5000-line PR with code that's all clearly AI generated.

It's full of bloat: unused HTTP endpoints, lots of small utility functions that could have been inlined (but now come with unit tests!), missing translations, only somewhat correct design...

The quality wasn't perfect before; now it has taken a noticeable dip. And new code is being added faster than ever. There is no way to keep up.

I feel that I can either just give in and stop caring about quality, or spend all of my time fixing everyone else's AI code.

I'm sure my particular colleagues are all just "holding it wrong", but this IS a real experience that I'm having, and it's been getting worse for a couple of years now.

I am also using AI myself, just in a much more controlled way, and I'm sure there's a sweet spot somewhere between "hand-coding" and vibing.

I just feel that as you close in on that sweet spot, the advertised gains slowly wash away, and you are left with a tangible, but not as mind-blowing, improvement.

pwillia7

Just normal Luddite things, which attract those most threatened in their personal identity by the new technology.

You see it obviously with the artists and image/video generators too.

We went through this before with art, too: with Dadaism and Impressionism and photography.

Ultimately, it's just more abstraction that we have to get used to -- art is stuff people create with their human expression.

It is funny to see everyone argue so vehemently without any interest in the same arguments that happened in the past.

Exit Through the Gift Shop is a good movie that explores that topic too, though with near-plagiarized mass production, not LLMs. But I guess that's pretty similar too!

https://daily.jstor.org/when-photography-was-not-art/

https://www.youtube.com/watch?v=IqVXThss1z4

https://en.wikipedia.org/wiki/Dada

harimau777

I mean, Luddites have consistently been correct. Technological advancements have consistently been used to benefit the rich at the expense of regular people.

The early Industrial Revolution that the original Luddites objected to resulted in horrible working conditions and a power shift from artisans to factory owners.

Dadaism was a reaction to WWI, where the aristocracy's greed and petty squabbling led to 17 million deaths.

pwillia7

I don't disagree with that; I just don't think there's anything that can be done about it. Which technology did we successfully roll back? Nukes are the closest I think you can get, and those are very hard to make and still exist in abundance; we just somewhat controlled who can have them.

username223

> Which technology did we successfully roll back?

Quite a few come to mind: chemical and biological weapons, Beanie Babies, NFTs, Garbage Pail Kids... Some take real effort to eradicate, some die out when people get bored and move on.

Today's version of "AI," i.e. large language models for emitting code, is on the level of fast fashion. It's novel and surprising that you can get a shirt for $5, then you realize that it's made in a sweatshop, and it falls apart after a few washings. There will always be a market for low-quality clothes, but they aren't "disrupting non-nudity."

jennyholzer2

This guy's knowledge of art history is the Dada Wikipedia page and the Banksy movie from 20 years ago.

Allow me to repeat myself: AI is for idiots.

pwillia7

Since you're a real established artist, I want to make my point more clear: I am not an artist and while AI image tools let me make fun pictures and not be reliant on artists for projects, it doesn't imbue me with the creativity to create artistic works that _move_ people or comment on our society. AI doesn't give or take that from you, and I argue that is what truly separates art and artists from doodles and doodlers.

pwillia7

gottem Jenny

TimorousBestie

> Just normal Luddite things, which attracts those most threatened in their personal identity by the new technology.

I feel like “Luddite” is a misunderstood term.

https://en.wikipedia.org/wiki/Luddite

> Malcolm I. Thomis argued in his 1970 history The Luddites that machine-breaking was one of the very few tactics that workers could use to increase pressure on employers, undermine lower-paid competing workers, and create solidarity among workers. "These attacks on machines did not imply any necessary hostility to machinery as such; machinery was just a conveniently exposed target against which an attack could be made." [emph. added] Historian Eric Hobsbawm has called their machine wrecking "collective bargaining by riot", which had been a tactic used in Britain since the Restoration, because manufactories were scattered throughout the country, and that made it impractical to hold large-scale strikes. An agricultural variant of Luddism occurred during the widespread Swing Riots of 1830 in southern and eastern England, centring on breaking threshing machines.

Luddites were closer to “class struggle by other means” than “identity politics.”

ericlamb89

> "The engineers refusing to try aren’t protecting themselves; quite the opposite, they’re falling behind. The gap is widening between engineers who’ve integrated these tools and engineers who haven’t. The first group is shipping faster, taking on bigger challenges. The second group is… not."

Honest question for the engineers here. Have you seen this happening at your company? Are strong engineers falling behind when refusing to integrate AI into their workflow?

gordonhart

As before, the big gap I still see is between engineers who set something up the right way and engineers who push code up without considering the bigger picture.

One nice change however is that you can guide the latter towards a total refactor during code review and it takes them a ~day instead of a ~week.

timbaboon

If anything I’ve learnt more because I’m having to go and find bugs in areas that I’m not super clued up on… yet ;)

halfcat

No. The opposite. The people who “move faster” are literally just producing tech debt that they get a quick high five for, then months later we limp along still dealing with it.

A guy will proudly deploy something he vibe coded, or "write the documentation" for some app that a contractor wrote, and then we get someone in the business telling us there's a bug because it doesn't do what the documentation says. Now I'm spending half a day in meetings to explain, and we have a project to overhaul the documentation (meaning we aren't working on other things), all because someone spent 90 seconds having AI generate "documentation" and gave themselves a pat on the back.

I look at what was produced and just lay my head down on the desk. It's all crap. I just see a stream of things to fix: conventions not followed, 20 extra libraries included when 2 would have done, code not organized, a new function that should have gone in a different module, because where it is now creates tight coupling between two modules that were intentionally built not to be coupled before.

It’s a meme at this point to say, ”all code is tech debt”, but that’s all I’ve seen it produce: crap that I have to clean up, and it can produce it way faster than I can clean it up, so we literally have more tech debt and more non-working crap than we would have had if we just wrote it by hand.

We have a ton of internal apps that were working, then someone took a shortcut and 6 months later we’re still paying for the shortcut.

It’s not about moving faster today. It’s about keeping the ship pointed in the right direction. AI is a guy a guy on a jet ski doing backflips, telling is we’re falling behind because our cargo ship hasn’t adopted jet skis.

AI is a guy on his high horse, telling everyone how much faster they could go if they also had a horse. Except the horse takes a dump in the middle of the office and the whole office spends half their day shoveling crap because this one guy thinks he’s going faster.

harimau777

What worries me is how AI impacts neurodivergent programmers. I have ADHD and it simply doesn't work for me to constantly be switching context between the code I'm writing and the AI chat. I am terrified that I will be forced out of the industry if I can't keep up with people who are able to use AI.

hu3

Fellow diagnosed ADHD here. And I know every ADHD is different and people are different.

What helps me is:

- Prefer faster models like VSCode's Copilot Raptor Mini, which, despite the name, is like 80% as capable as Sonnet 4.5, and much faster. It is a fine-tuned GPT-5 mini.

- Start writing the next prompt while the LLM works, or keep pondering the current problem at hand. This helps our chaotic brains stay focused.

yunwal

I find that any additional overhead caused by the separate AI chat is saved 20x over by basically never having to use a browser to look at documentation and Stack Overflow while coding.

harimau777

That makes sense. I do use AI for questions like "what's the best way to flatten a list of lists in Python" or "what is the interface for this library function". I just don't use it the way I see some people do where they have it write the rough draft of their code or identify where a bug is.
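For reference, the two usual answers to that first question, as a quick sketch:

  import itertools

  nested = [[1, 2], [3], [4, 5]]

  # nested list comprehension: outer loop over rows, then inner over items
  flat = [x for row in nested for x in row]           # [1, 2, 3, 4, 5]

  # or the itertools equivalent
  flat = list(itertools.chain.from_iterable(nested))  # [1, 2, 3, 4, 5]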

amrocha

And studies find that that 20x is actually a 0.8x

mmcnl

I'm not an AI fanatic, but I do use ChatGPT often. In my experience, ChatGPT now is only marginally better than it was in 2022. The only real improvement is due to "thinking" abilities, i.e. searching the web and spending more tokens (basically prompting itself). The underlying model still feels largely the same to me.

I feel like I'm living in a different world where, every time a new model comes out, everyone is in awe, and it scores exceptionally well on some benchmark that no one had heard of before the model even launched. And then when I use it, it feels exactly the same as all the models before, and makes the same stupid mistakes as always.

everdrive

This story ends up being relevant in a metaphorical way.

My aunt was born in the 1940s, and was something of an old-fashioned feminist. She didn't know why she wasn't allowed to wear pants, or why she had to wait for the man to make the first move, etc. She tells a story about a man who ditched her at a dance once because she didn't know the "latest dance." Apparently in the 1950s, some idiot was always inventing a new dance that everyone _just had to follow_. The young man was so embarrassed that he left her at the dance.

I still think about this story, and about how awful it would have been to live back then. There has always been social pressure and change, but the "everyone's got to learn new stupid dances all the time" sort of pressure feels especially awful.

This really reminds me of the last 10-20 years in technology. "Hey, some dumb assholes have built some new technology, and you don't really have the choice to ignore it. You either adopt it too, or are left behind."

falcor84

As I see it, this is an inherent part of the tech industry. Unless you expressly choose to focus your career on maintaining legacy code, your value as a dev depends on your ability and willingness to continuously learn new tech.

kksweet

> If you’ve actually tried modern tools and they didn’t work for you, that’s a conversation worth having. But “I tried ChatGPT in 2022” isn’t that conversation.

How many people are actually saying this? Also how does one use modern coding tools in heavily regulated contexts, especially in Europe?

I can't disagree with the article and say that AI has gotten worse, because it truly hasn't, but it still requires a lot of hand-holding. This is especially true when you're 'not allowed' to send the full context of a specific task (like in health care). For now at least.

Fred27

I just don't find it interesting. The only thing less interesting is the constant evangelism about it.

I also find that the actual coding is important. The typing may not be the most interesting bit, but it's one of the steps that helps refine the architecture I had in my head.

halfcat

100% agree. My only super power is weaponized “trying to understand”, spending a Saturday night in an obsessive fever dream of trying to wrap my head around some random idea.

That happens to produce good code as a side effect. And a chat bot is perfect for this.

But my obsession is not with output. Every time I use AI agents, even if they do exactly what I wanted, it's unsatisfying. It's not something I'm ever going to obsess over in my spare time.

samuelknight

It's good to be skeptical of new ideas as long as you don't box yourself in with dogmatism. If you're young you do this by looking at the world with fresh eyes. If you are experienced you do it by identifying assumptions and testing them.