
Will AI Replace Human Thinking? The Case for Writing and Coding Manually

jjallen

I have gone from using Claude Code all day long, since the day it was launched, to only using the separate Claude app. In my mind that is a nice balance: still using it, but not too much, not too fast.

There is the temptation to just let these things run in our codebases, which I think is totally fine for some projects. For most websites I think this would usually be fine, for two reasons: 1) these models have been trained on more websites than probably anything else, and 2) if a div or some text is off by a little bit, there will usually be no huge problems.

But if you're building something mission critical, you have to go super slowly, which again is hard to do because these agents tempt you to go super fast. That is sort of the allure of them: being able to write software super fast.

But as we all know, in some programs you cannot have a single character wrong, or the whole program may not work or have any value. At least that is true of the one I am working on.

I found that I lost the mental map of the codebase I am working on. Claude Code had done too much too fast.

This morning I found a function it had written to validate futures/stocks/FUT-OPT/STK-OPT symbols, and the validation was super basic and terrible. We had implemented some very strong validation against actual symbol data a week or two ago, but that hadn't been fully rolled out everywhere. So now I need to go back and do this.
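A hypothetical sketch of the contrast being described; the function names, the regex, and the symbol set are all illustrative, not from the actual codebase:

```python
import re

# Naive, "super basic" validation of the kind an agent might generate:
# it only checks the string's shape, so a typo like "APPL" passes.
def validate_symbol_naive(symbol: str) -> bool:
    return bool(re.fullmatch(r"[A-Z]{1,6}(-(FUT|OPT))?", symbol))

# Stronger validation against actual symbol reference data: only
# instruments known to the data source are accepted.
KNOWN_SYMBOLS = {"AAPL", "MSFT", "ES-FUT", "CL-FUT"}  # in practice, loaded from a feed

def validate_symbol_strict(symbol: str) -> bool:
    return symbol in KNOWN_SYMBOLS

# The naive check happily accepts a ticker that does not exist:
assert validate_symbol_naive("APPL") is True
assert validate_symbol_strict("APPL") is False
```

The difference only shows up on bad input, which is exactly why it is easy to miss in review.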

Anyways, I think having it find where certain code is written would be helpful for sure, as would having it suggest various ways to solve problems. But the separate GUI apps can do that for us.

So for now I am going to keep using just the separate LLM apps. I will also save lots of money in the meantime (which I would gladly spend for a higher-quality, Claude Code-ish setup).

simianwords

The reality is that you can't have AI do too much for you, or else you completely lose track of what is happening. I find it useful to let it do small, stupid things and to use it for brainstorming.

I don't like having it do complete PRs that span multiple files.

pilooch

Losing the mental map is the number one issue for me. I wonder if there could be a way to keep track of it, even at a high level. Keeping the ability to dig in is crucial.

cesarvarela

You need to spend more time in Plan mode. Ask it to make diagrams or pseudocode of the whats and hows, iterate on that, and only then Accept Edits.

serbuvlad

I think the whole AI vs. non-AI debate is a bit beside the point. Engineers are stuck in the old paradigm of "perfect" algorithms.

I think the image you post at the beginning basically sums it up for me: ChatGPT o3/5 Thinking can one-shot 75% of most reasonably sized tasks I give it without breaking a sweat, but struggles with the tweaks needed to get it to 100%. So I make those tweaks myself, and I have cut my code-writing time to a half or a third of what it was.

ChatGPT also knows more idioms and useful libraries than I do so I generally end up with cleaner code this way.

Ferraris are still hand-assembled, but Ford's assembly lines and machines save human labor, even if the quality of a mass-produced item is less than that of a hand-crafted one. And if everything were hand-crafted, we would have no computers to program at all.

Programming and writing will become niche, and humans will still be used where quality higher than what AI can produce is needed. But most code will be written by minotaur human-AI teams, where the human has a minimal but necessary contribution to keep the AI on track... I mean, it already is.

lallysingh

Hard disagree. We'll be able to use more expressive languages with better LLM support for understanding how to express ourselves and to understand compiler results. LLMs are only good at stuff that better languages don't require you to do. After that they fall off the cliff quickly.

LLMs are a communication technology, with a huge trained context of conversation. They have a long way to go before becoming anything intelligent.

simianwords

This comment captures it.

AI can do 80% of the work. I can review it later. And I spend much less time reviewing than I would have typing up everything manually.

I recently used it to add some logging and exception handling. It had to be done in multiple places.

A simple two-line prompt one-shotted it. Why do I need to waste time writing boring code?
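The sort of boring-but-repetitive code being described might look like a logging-plus-exception-handling decorator applied in many places (an illustrative sketch, not the commenter's actual code):

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

def logged(func):
    """Log entry, exit, and any exception raised by the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.info("calling %s", func.__name__)
        try:
            result = func(*args, **kwargs)
        except Exception:
            log.exception("error in %s", func.__name__)
            raise
        log.info("%s returned", func.__name__)
        return result
    return wrapper

# Applied wherever "it had to be done in multiple places":
@logged
def parse_order(raw: str) -> dict:
    side, qty = raw.split()
    return {"side": side, "qty": int(qty)}

assert parse_order("BUY 100") == {"side": "BUY", "qty": 100}
```

Trivially reviewable, and tedious to type out by hand more than once.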

roblh

Are you still going to have the skills to review it a year from now? Or 5 years from now, when you've become accustomed to writing only <20% of the code? I'm already watching my coworkers' skills degrade because of this, and it's only going to get worse. Programming is a language, and when you don't use it, it fades.

mattmaroon

If that's a real effect, the best programmers absolutely will keep those skills. You could spend 10% of your working time doing exercises and still have double the productivity you used to have.

simianwords

What will happen is that we as developers will move one layer up in abstraction. In the future it will seem a bit nonsensical to focus on individual lines of code and syntax, because AI can more or less deal with them.

We will focus more on higher-level design: which database, where the data flows, which service is used where, and so on. You will just need different skills. Coding as a skill won't be that important.

kazinator

> Why do I need to waste time writing boring code?

The better question is: should that boring code be written? Code should only be non-boring.

The boredom of writing the code is not the only problem. The subsequent continued indefinite existence of that code is also a problem.

qaq

Yes who needs logging :)

ok_dad

> Why do I need to waste time writing boring code?

Some people actually enjoy that, believe it or not.

dingnuts

seriously! excuse me while I cry with the artists about the robot doing the fun part. automate my goddamned dishes, not the coding! I chose this field because I liked that part ;_;

honestly I've largely stopped coding for fun since Claude Code got popular. It's too expensive to use for personal projects, and it does the fun part. I don't want to pay but if I'm not using it, all I can think about is how inefficient doing everything manually is

..

I'm going to get into gardening or something :(


sureglymop

You also shouldn't forget that, while AI may be good at coming up with a "first shot" solution, it may be much worse when you want to change or correct parts of it.

In my experience, AI very often gets into a sort of sunk-cost fallacy (sunk prompt?) and then it is very hard to get it to make significant changes, especially architecturally.

I recently wrote an extension for a popular software product and gave AI the same task. It created a perfectly working version; however, it was 5x the lines of code of my version, because it didn't know the extension API as well, even though I gave it the full documentation. It also hard-coded some stuff/solutions to challenges that we totally don't want hard-coded. A big reason why I arrived at a much better solution was that I used a debugger to step through the code and noted down just the API interactions I needed.

The AI also was convinced that some things were entirely impossible. By stepping through the code I saw that they would be possible using parts of the internal API. In a GitHub issue I suggested a change to make the public API better for my use case, and now it is no longer impossible.

At the end of the day I have to conclude that the amount of time invested in guiding and massaging the AI was too much, and not really worth it. I would've been better off debugging the code right away and then creating my own version. The potential for AI to do the 80% is there. At this time, though, I personally can't accept its results yet, but that may also be due to my personal flavour of perfectionism.

zerocharge

Depends on what you do and what systems you develop for, I would reckon. If it's another TODO app, or some kind of table-plus-form system that's been done to death, AI can probably have a go at creating a barebones minimum viable product. Targeting code that's outside the sweet spot of the training data (the "blurry" area), you'll start to stumble. I've also found agents to be useless in large code bases with distributed logic (where parts are in React, a web back-end, a service system). Slow and unreliable for large systems. Good for small tasks and scaffolding up proofs of concept.

bcrosby95

The analogy seems to fall apart because the quality of an assembly-line-produced car is higher than that of a hand-crafted one. Fords lose out because they are engineered to a price point; a Ferrari doesn't have that "problem". Arguably, the more expensive the better.

vonneumannstan

>The analogy falls apart because the quality of an assembly line produced car is higher than the hand crafted one.

What? So your Jeep Compass is higher quality than a 458?

recursive

Engineering isn't stuck on perfect algorithms; management is. There's lip service for AI code gen, but if my name is on a module, I still have to vouch for its correctness. If it's wrong, that might become my problem. I don't always write perfect code, but I aspire to. If I see evidence that these tools write more correct and reliable code than I do, then I will start to take them more seriously. For some code, it simply matters whether it's robust.

Davidzheng

Why are you assuming the cases where humans can code better than AI will still exist after, say, three years? In some industries today, artisanal products are also not higher quality than machine-made ones.

BriggyDwiggs42

Progress under current paradigms has gotten much slower.

Zanfa

Extraordinary claims require extraordinary evidence, and if there's one thing we've learned, it's that progress in AI is neither linear nor predictable. We've been a few years away from fully self-driving cars for a really long time now.

rustystump

Another hard disagree. The crux here is that if you are not an expert in the given domain, you do not know where that missing 25% is wrong. You think you do, but you don't.

I have seen people bring in thousands of lines of AI-slop OpenCV LUT code because they didn't understand how to interpolate between two colors, and didn't have the experience to know that that was what they needed to do. This is the catch-22 of the AI-expert narrative.
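For reference, interpolating between two colors, and even building a full lookup table from it, is a handful of lines, not thousands (a minimal sketch using plain per-channel linear interpolation):

```python
def lerp_color(c1, c2, t):
    """Linearly interpolate between two RGB colors; t is in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def color_lut(c1, c2, n=256):
    """Build an n-entry lookup table of colors blending c1 into c2."""
    return [lerp_color(c1, c2, i / (n - 1)) for i in range(n)]

# Halfway between black and white is mid-gray:
assert lerp_color((0, 0, 0), (255, 255, 255), 0.5) == (128, 128, 128)

# A 256-entry black-to-red gradient table:
lut = color_lut((0, 0, 0), (255, 0, 0))
assert lut[0] == (0, 0, 0) and lut[-1] == (255, 0, 0)
```

Knowing that this is the whole problem is exactly the domain expertise the comment is talking about.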

The other part is that improvement has massively stagnated in the space. It is painfully obvious too.

A4ET8a8uTh0_v2

> you do not know where that missing 25% is wrong

I think there is something to this line of thinking. I just finished a bigger project and, without going into details, one person from a team supposedly dedicated to providing viable data about the data was producing odd results. Since the data was not making much sense, I asked how it was produced. I was given a SQL script and an "and then we applied some regex" explanation.

Long story short, I dug in and found that the applied regex had messed with dates in an unexpected way, and I caught it because I knew the "shape" the data was expected to have. I corrected it, because we were right around the deadline, but... I noted it.

Anyway, I still see LLMs as a tool, but I think there is some reckoning on the horizon as:

1. Managers push for more use and speed, given the new tool.

2. People get there faster but wronger, because they go along with 1 and do not check the output (or don't know how to check it, or don't know when it's wrong).

It won't end well, because the culture does not reward careful consideration.

rustystump

Exactly. I use AI tools daily and they bite me. Not enough to stop, but enough to know. Recently I was building a WS merger of sorts based on another lib's subprotocol. I wasn't familiar with the language or the protocol, but the AI sure was. However, the AI used a wrong ID when repacking messages. Unless I knew the spec (which I didn't), I never would have known. Eventually, I did read the spec and figured it out.

To be clear, I gave the spec to the AI many times, asking what was off, and it never found the issue.

Once I did get it working, the AI one-shotted converting it from Python to Go, with the exception of the above mistake being added back in again.

You don't know what you don't know. That final 25% or 5% or whatever is where the money is at, not the 80%. Almost doesn't count.


jrm4

I find that all of these discussions are rendered somewhat goofy by our very binary view of "programming" and "not programming."

It's like asking -- "will robots be good for building things?"

Sure, some things. What things?

Personally, I'm hoping for a revival of the idea HyperCard was intended for: yes, let us enable EVERYONE to build little tools for themselves.

citizenpaul

This sounds great in theory, but my experience is that non-tech people make horrible SEs. Even if they don't do the coding and only participate in the spec, they simply don't know what they don't know. Which is why SE exists, and why these types of projects always fail to gain traction.

Of the thousands of non-tech people I've worked with in my life, I can count in the low double digits those who were capable of working in SE without experience or education in it. Even then, they were all up-and-coming, driven people who still often missed what to me were obvious things, because it's not their area of expertise. (They were all brilliant in their own areas.)

jonahx

> yes, let us enable EVERYONE to build little tools for themselves.

It will enable more people, but "everyone" is never, ever going to happen.

This is both because (1) many people don't want to do this, no matter how easy it is -- probably the primary reason and (2) many people won't have the ability to do this in a way that is net profitable for them, because their "tool ideas" just won't be good.

jrm4

Oh sure. But it doesn't need to be everyone; my go-to analogy is how we used to do cars vs. how a lot of us do them now.

50 years ago, you didn't have to be a car guy, but if you knew one, that was all you needed to save a LOT of money and headache.

Today, that kind of works -- unless you own e.g. a Tesla.

msephton

Everyone who wants to. Right now the barrier is too high for some people who want to.

jonahx

> Everyone who wants to.

Yes, and that's a good thing.

Though I'd argue the bar has been on a downward trajectory for decades, is now plummeting with AI, and still we don't see a huge influx of the newly enabled. At the margins, yes. Some small business owners, some curious students, etc, are making things they wouldn't have before. But it doesn't feel like a revolution of self-made tools by the non-technical. And I don't think one is coming. And I don't think it's because it's still too hard.

absolute_unit22

> The case for writing and coding manually

I share much of the same ideas about this as the author.

For a long time, to make coding more natural (before and after LLMs) and avoid having to think about certain keywords or syntax, I would create little Anki decks (10-20 cards) with basic exercises for a particular language or tool I was using. One to two weeks of 5-10 minutes/day of doing these exercises (like how to redirect both stdout and stderr into a file, how to read from a channel in Go, etc.) and I was working without having to pause.

Writing code became less disruptive, and it was much easier to get my ideas into a text editor.
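As an illustration of the first exercise type, here is the Python analogue of the shell's `command > out.log 2>&1` (a sketch of the kind of small idiom such a flashcard would drill):

```python
import contextlib
import sys

# Redirect both stdout and stderr into the same file.
with open("out.log", "w") as f:
    with contextlib.redirect_stdout(f), contextlib.redirect_stderr(f):
        print("normal output")
        print("error output", file=sys.stderr)

# Both streams ended up in the one file:
with open("out.log") as f:
    contents = f.read()

assert "normal output" in contents
assert "error output" in contents
```

Tiny idioms like this are exactly what fades without practice, and exactly what a 10-card deck keeps fresh.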

I’m also the creator of typequicker.com (disclaimer) and we added Code mode as a typing exercise.

At first I thought folks wouldn’t be interested. I was pleasantly surprised though; many people are using it specifically for the same reason that the author is talking about.

OldGreenYodaGPT

Tools like Claude Code and OpenAI’s Codex CLI have boosted my productivity massively. They already handle about 90% of the coding work, and I just step in to finish the last 10%. Every month they get better, maybe in a year it’s 95%, in two years 97%, in three years 98%. We can all see where this is going.

ewf

You're an early adopter, so you're seeing massive gains. But eventually everyone gets the same productivity boost and that becomes the new baseline expectation.

Any clever prompting techniques that give you an edge today will evaporate quickly. People figure out the tricks, models absorb them, and tools automate them away.

There's no substitute for actually coding to learn software development. For new engineers, I'd strongly recommend limiting AI code generation on real work. Use it to explain concepts, not do the work for you. Otherwise you'll never develop the judgment to know what that 10% actually is.

wrs

You are fortunate to have the pre-AI background experience to know how to recognize the 10%. People graduating college right now may never be so fortunate.

zeropointsh

AI can churn out code that works, but usable, secure code is written for humans first: clear, predictable, and safe by design. Until AI can reason like a team lead and think like an attacker, it's still just guessing. Usability and security aren't side effects; they're intentions, and AI doesn't have those yet. Code without security is useless [0].

[0] https://www.npr.org/2025/08/02/nx-s1-5483886/tea-app-breach-...

coffeecoders

I think we’ll need to rethink what it means to be “skilled.” Fewer people will write every line of code themselves, but more will need to understand systems end-to-end, ask the right questions, and guide AIs to meaningful outcomes.

AI isn’t replacing thinking, it’s changing what we think about. Coding skill won’t disappear; it’ll just evolve.

Colony8409

This is the direction it was (and should be) heading in anyways. We're pumping out so many CS grads that pure technical skill is less valued than everything surrounding it.

apparent

It would be good if there were a mode where the AI trained the human operator as it worked, to reduce future reliance. Instead of just writing or editing a document, it would explain in a good amount of detail what it was doing, and tailor the information to the operator's level of understanding. It might even quiz the operator to ensure understanding.

This would take more time in the short run, but in the long run it would result in more well-rounded humans.

When there are power/internet/LLM outages, some people are going to be rendered completely helpless, and others will be more modestly impacted — but will still be able to get some work done.

We should aim to have more people in the latter camp.

Dlanv

The Claude CLI has this; it's called learning mode, and you can make custom modes to tweak it further.

pessimizer

> some people are going to be rendered completely helpless,

This I don't have any hope about. Tech companies have been trying to make their customers ignorant (and terrified) of the functioning of their own computers as a moat to prevent them from touching anything, and to convince them to allow every violation or imposition. They've convinced many that their phones aren't even computers, and must operate by different, more intrusive and corporate-friendly guidelines. Now, they're being taught that their computers are actually phones, that mere customers aren't responsible enough to control those as well, and that the people who should have control are the noble tech CEOs and the government. The companies can not be shamed out of doing this, they genuinely think people are just crops to be harvested. You being dumber means that you have to pay them as a necessity rather than a convenience.

In 1985, they would teach children who could barely tie their shoes what files and directories were on computers, and we'd get to program in LOGO. These days? The panicked look I often see on normal people's faces when I ask them where they saved the file that their life depends on and is missing or broken makes me very sad. "Better to trick them into saving to OneDrive," the witches cackle. "Then if they miss a monthly payment, we can take their files away!"

throwmeaway222

Yes.

I've been learning how to build houses as my escape hatch. A lot of people like to talk about how AI isn't capable of this and that; the ONLY thing you will be complaining about in 5 years is the basic income programs needing to expand quickly, and to reform so people can keep their $1.8M mortgages on $2,000/mo Taco Bell income.

Animats

And get off my lawn.

That paper has far too many graphs which are totally made up.

It's been less than three years since ChatGPT launched. What's this going to be like in a decade or two?

unethical_ban

The argument against relying on AI for everything is that the humans who curate and architect systems learned what they did through their experience at lower levels.

Overutilization of AI is pulling the ladder up and preventing the next generation of software architects and engineers from learning through experience.

ewf

well said

crinkly

At best it churns out mediocre-to-poor code, so it'll produce mediocre-to-poor thinking.

I wonder if some of the proponents know where the line is in the art. I suspect not.