
AI Coding

85 comments · September 13, 2025

bdcravens

I'm almost 50, and have been writing code professionally since the late 90s. I can pretty much see projects in my head, and know exactly what to build. I also get paid pretty well for what I do. You'd think I'd be the prototype for anti-AI.

I'm not.

I can build anything, but often struggle with getting bogged down with all the basic work. I love AI for speed running through all the boring stuff and getting to the good parts.

I liken AI development to working with a developer somewhere between junior and mid-level, someone I can give a paragraph or two of thought-out instructions and have them bang out an hour of work. (The potential for this to stunt the growth of actual juniors into tomorrow's senior developers is a serious concern, but a separate problem to solve.)

onion2k

> I love AI for speed running through all the boring stuff and getting to the good parts.

In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.

Companies need to realise that using AI to go faster is great, but there's still a cognitive impact on the people. A little respite from the hardcore stuff is genuinely useful sometimes. Taking all of that away will be bad for people.

That said, some devs hate the boring easy bits and will thrive. As with everything, individuals need to be managed as individuals.

FeepingCreature

That makes me think of https://store.steampowered.com/app/2262930/Bombe/ which is a version of Minesweeper where instead of clicking on squares you define (parametric!) rules that propagate information around the board automatically. Your own rules skip all the easy parts for you. As a result, every challenge you get is by definition a problem that you've never considered before. It's fun, but also exhausting.
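
For a flavor of the idea, here's a rough Python sketch of that kind of rule propagation (my own toy reconstruction for illustration, not the game's actual rule language):

  from itertools import product

  def neighbors(r, c, rows, cols):
      """Yield in-bounds neighbours of (r, c)."""
      for dr, dc in product((-1, 0, 1), repeat=2):
          if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols:
              yield r + dr, c + dc

  def propagate(numbers, known_mines, known_safe, rows, cols):
      """Apply two classic Minesweeper rules until nothing new is deduced."""
      changed = True
      while changed:
          changed = False
          for (r, c), n in numbers.items():
              nbrs = set(neighbors(r, c, rows, cols))
              unknown = nbrs - known_mines - known_safe - numbers.keys()
              flagged = nbrs & known_mines
              if not unknown:
                  continue
              if len(flagged) == n:  # all mines accounted for: the rest are safe
                  known_safe |= unknown
                  changed = True
              elif len(flagged) + len(unknown) == n:  # every unknown must be a mine
                  known_mines |= unknown
                  changed = True
      return known_mines, known_safe

  # Tiny demo: a "3" in the corner of a 2x2 board forces all three neighbours to be mines.
  mines, safe = propagate({(0, 0): 3}, set(), set(), 2, 2)
  print(mines)  # {(0, 1), (1, 0), (1, 1)}

The fun (and the exhaustion) is that once rules like these run automatically, only the positions they can't solve ever reach you.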

sothatsit

I remember listening to a talk about Candy Crush and how they designed the game to have a few easy levels in between the hard ones, to balance feeling like you're improving while also challenging players. If all the levels get progressively harder, then a lot of people lose motivation to keep playing.

Yoric

Oooohhh....

That looks like plenty of hours of fun! Thanks for the link :)

Yoric

Interesting point.

There's also the fact that, while you're coding the easy stuff, your mind is thinking about the hard stuff, looking things up, seeing how the pieces fit together. If you're spending 100% of your time on hard stuff, you might be crowding out those preliminaries.

bdcravens

> In some cases, especially with the more senior devs in my org, fear of the good parts is why they're against AI. Devs often want the inherent safety of the boring, easy stuff for a while. AI changes the job to be a constant struggle with hard problems. That isn't necessarily a good thing. If you're actually senior by virtue of time rather than skill, you can only take on a limited number of challenging things one after another before you get exhausted.

The issue of senior-juniors has always been a problem; AI simply means they're losing their hiding spots.

raincole

> AI changes the job to be a constant struggle with hard problems.

Very true. I think AI (especially Claude Code) forced me to actually think about the problem at hand before implementing the solution. And more importantly, to write down my thoughts before they flit away from my feeble mind. A discipline I wish I'd had before.

pydry

>AI changes the job to be a constant struggle with hard problems

I find this hilarious. From what I've seen watching people do it, it changes the job from deep thought and figuring out a good design to pulling a lever on a slot machine and hoping something good pops out.

The studies that show diminished critical thinking have matched what I saw anecdotally when pairing with people who vibe coded. It replaced deep critical thinking with a kind of faith-based gambler's mentality ("maybe if I tell it to think really hard it'll do it right next time...").

The only times I've seen a notable productivity improvement were when the task wasn't novel and it didn't particularly matter if what popped out was shit: a proof of concept, an ad hoc app, something that would naturally either work or fail obviously, etc. The buzz people get from these gamblers' highs when it works seems to make them happier than if they hadn't used it at all, though.

bdcravens

Which was my original point. Not that the outcome is shit. So much of what we write is absolutely low-skill and low-impact, but necessary and labor-intensive. Most of it is so basic and boilerplate you really can't look at it and know if it was machine- or human-generated. Why shouldn't that work get cranked out in seconds instead of hours? Then we can do the actual work we're paid to do.

To pair this with the comment you're responding to, the decline in critical thinking is probably a sign that there are many who aren't as senior as their paycheck suggests. AI will likely let us differentiate between who the architects/artisans are and who the assembly-line workers are. Like I said, that's not a new problem; AI just lays that truth bare. That will have an effect generation over generation, but that's been the story of progress in pretty much every industry since time immemorial.

lukaslalinsky

I think there are two kinds of uses for these tools:

1) you try to explain what you want to get done

2) you try to explain what you want to get done and how to get it done

The first one is gambling; the second has a very small failure rate. At worst, the plan it presents shows it isn't converging on the solution you want.

curl-up

Exactly. I tend to like Hotz, but by his description, every developer is also "a compiler", so it's a useless argument.

My life quality (as a startup cofounder wearing many different hats across the whole stack) would drop significantly if Cursor-like tools [1] were taken away from me, because it takes me a lot of mental effort to push myself to do the boring task, which leads to procrastination, which leads to delays, which leads to frustration. Being able to offload such tasks to AI is incredibly valuable, and since I've been in this space from "day 1", I think I have a very good grasp on what type of task I can trust it to do correctly. Here are some examples:

- Add logging throughout some code

- Turn a set of function calls that have gotten too deep into a nice class with clean interfaces (a sketch of this one follows below the list)

- Build a Streamlit dashboard that shows some basic stats from some table in the database

- Rewrite this LLM prompt to fix any typos and inconsistencies - yeah, "compiling" English instructions into English code also works great!

- Write all the "create index" lines for this SQL table, so that <insert a bunch of search use cases> perform well.
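
Here's a hypothetical before/after sketch of the "function calls into a class" item above; every name is invented for illustration:

  # Before: the same options threaded through a deep chain of calls.
  def load(path, encoding, strict):
      errors = "strict" if strict else "replace"
      with open(path, encoding=encoding, errors=errors) as f:
          return f.read()

  def parse(raw, delimiter, strict):
      rows = [line.split(delimiter) for line in raw.splitlines()]
      if strict and len({len(row) for row in rows}) > 1:
          raise ValueError("ragged rows")
      return rows

  def process(path, encoding, delimiter, strict):
      return parse(load(path, encoding, strict), delimiter, strict)

  # After: shared options live on the object; callers see one clean entry point.
  class TableLoader:
      def __init__(self, encoding="utf-8", delimiter=",", strict=True):
          self.encoding = encoding
          self.delimiter = delimiter
          self.strict = strict

      def process(self, path):
          return self._parse(self._load(path))

      def _load(self, path):
          errors = "strict" if self.strict else "replace"
          with open(path, encoding=self.encoding, errors=errors) as f:
              return f.read()

      def _parse(self, raw):
          rows = [line.split(self.delimiter) for line in raw.splitlines()]
          if self.strict and len({len(row) for row in rows}) > 1:
              raise ValueError("ragged rows")
          return rows

It's exactly this kind of mechanical, pattern-heavy transformation that I trust the AI to do correctly.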

[1] I'm actually currently back to Copilot Chat, but it doesn't really matter that much.

wwweston

What’s the tooling you’re using, and the workflow you find yourself drawn to that boosts productivity?

bdcravens

I've used many different ones, and find the results pretty similar. I've used Copilot in VS Code, ChatGPT stand-alone, Warp.dev's baked-in tools, etc. Often it's a matter of what kind of work I'm doing, since it's rarely single-mode.

haute_cuisine

Would love to see a project you built with the help of AI, can you share any links?

bdcravens

Most of my work is for my employer, but the bigger point is that you wouldn't be able to tell my "AI work" from my other work because I primarily use it for the boring stuff that is labor-intensive, while I work on the actual business cases. (Most of my work doesn't fall under the category of "web application", but rather, backend and background-processing intensive work that just happens to have an HTML front-end)

williamcotton

https://github.com/williamcotton/webpipe

Shhh, WIP blog post (on a webpipe-powered blog):

https://williamcotton.com/articles/introducing-web-pipe

Yes, I wrote my own DSL, complete with BDD testing framework, to write my blog with. In Rust!

  GET /hello/:world
    |> jq: `{ world: .params.world }`
    |> handlebars: `<p>hello, {{world}}</p>`

  describe "hello, world"
    it "calls the route"
      when calling GET /hello/world
      then status is 200
      and output equals `<p>hello, world</p>`

My blog source code written in webpipe:

http://github.com/williamcotton/williamcotton.com


demirbey05

I started fully coding with Claude Code. It's not just vibe coding, but rather AI-assisted coding. I've noticed a considerable decrease in my understanding of the whole codebase, even though I'm the only one who has worked on it for the past 2 years. I'm struggling to answer my colleagues' questions.

I'm not arguing that we should drop AI, but we should really measure its effects and act accordingly. It's about more than just getting more productivity.

apercu

I wrote a couple of Python scripts this week to help me with a MIDI integration project (3 devices with different cable types) and for quick debugging if something fails (yes, I know there are tools out there that do this, but I like learning).

I could have used an LLM to assist, but then I wouldn't have learned much.

But I did use an LLM to make a management wrapper to present a menu of options (cli right now) and call the scripts. That probably saved me an hour, easily.

That’s my comfort level for anything even remotely “complicated”.

ionwake

I keep wanting to go back to using Claude Code but I get worried about this issue. How best to use it to complement you, without it rewriting everything behind the scenes? What's the best protocol? Constant commit requests and reviews?

numbers_guy

This is the chief reason I don't use integrations. I just use chat, because I want to physically understand and insert code myself. Otherwise you end up with the code outgrowing your understanding of it.

pmg101

Yes. I'm happy to have a sometimes-wrong expert to hand. Sometimes it provides just what I need; sometimes, as with a human (who is also fallible), it helps to spur my own thinking along, clarify, converge on a solution, think laterally, or deliver other productivity-boosting effects.

zkmon

Of course, there is some truth in what you say. But business is desperate for new tech that can redefine the order (who is big and who is small). There are floating billions chasing short-term returns. Fund managers will be fired if they don't jump on the new fad in town. CIOs and CEOs will be fired if they don't jump on AI. It's just a nuclear arms race. It's good for no one, but the other guy is on it, so you need to be too.

Think about this: before there were cars on roads, people were just as happy. Cars came, cities were redesigned for cars with buildings miles apart, and commuting miles became the new norm. You can no longer say cars are useless, because the context around them has changed to make cars a basic need.

AI does the same thing. It changes the context in which we work. Everyone expects you to use AI (and cars). It becomes a basic need, though a forced one.

To go further, hardly anything produced by science or technology is a basic need for humans. The context got twisted, making them basic needs. Tech solutions create the problems they claim to solve; the problem did not exist before the solution came around. That's the core driving force of business.

sunir

Code has a lot of the bits of information the compiler uses to construct the program. But not all of them, because software needs iteration to get right, both in fixing bugs and in solving the intended problem.

The LLM prompt has even fewer bits of information specifying the system than code does. The model has a lot more bits, but still a finite number. A perfect LLM cannot build a perfect app in one shot.

However AIs can research, inquire, and iterate to gain more bits than when you started.

So the comparison to a compiler is not apt because the compiler can’t fix bugs or ask the user for more information about what the program should be.

Most devs are using AI at the autocomplete level, which is like this compiler analogy; that makes sense in 2025, but it isn't where we will be in 2030.

What we don’t know is how good the technology will be in the future and how cheap and how fast. But it’s already very different than a compiler.

vmg12

I think this gets to a fundamental problem with the way the AI labs have been selling and hyping AI. People keep on saying that the AI is actually thinking and it's not just pattern matching. Well, as someone that uses AI tools and develops AI tools, my tools are much more useful when I treat the AI as a pattern matching next-token predictor than an actual intelligence. If I accidentally slip too many details into the context, all of a sudden the AI fails to generalize. That sounds like pattern matching and next token prediction to me.

> This isn’t to say “AI” technology won’t lead to some extremely good tools. But I argue this comes from increased amounts of search and optimization and patterns to crib from, not from any magic “the AI is doing the coding”

* I can tell Claude Code to crank out some basic CRUD API and it will do it in a minute, saving me an hour or so (a sketch of the sort of boilerplate I mean follows below).

* I need an implementation of an algorithm that has been coded a million times on GitHub; I ask the AI to do it and it cranks out a correct, working implementation.

If I only use the AI in its wheelhouse it works very well, otherwise it sucks.
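
Concretely, here's a minimal sketch of that CRUD boilerplate, assuming Flask (2.0+) and a toy in-memory store; both are illustrative choices, nothing special about them:

  from flask import Flask, abort, jsonify, request

  app = Flask(__name__)
  items = {}     # toy in-memory store: id -> item dict
  next_id = 1

  @app.post("/items")
  def create_item():
      global next_id
      item = {"id": next_id, **request.get_json()}
      items[next_id] = item
      next_id += 1
      return jsonify(item), 201

  @app.get("/items/<int:item_id>")
  def read_item(item_id):
      if item_id not in items:
          abort(404)
      return jsonify(items[item_id])

  @app.put("/items/<int:item_id>")
  def update_item(item_id):
      if item_id not in items:
          abort(404)
      items[item_id].update(request.get_json())
      return jsonify(items[item_id])

  @app.delete("/items/<int:item_id>")
  def delete_item(item_id):
      if items.pop(item_id, None) is None:
          abort(404)
      return "", 204

Nothing here is novel, which is exactly why a model that has seen a million of these can produce it reliably.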

KoolKat23

I think this comes down to levels of intelligence. Not knowledge, I mean intelligence. We often underestimate the amount of thinking/reasoning that goes into a certain task. Sometimes the AI can surprise you and do something very thoughtful; this often feels like magic.

giveita

I have a boring opinion. A cold take, served straight from the freezer.

He is right; however, AI is still darn useful. He hints at why: patterns.

Writing a test suite for a new class when an existing one is in place is a breeze. It can even come up with tests you wouldn't have thought of, or would have been too time-pressed to write.

It also applies to non-test code too. If you have the structure it can knock a new one out.

You could have some Lisp contraption that DRYs up all the WET code so there is zero boilerplate. But in reality we are not crafting these perfect codebases; in our jobs we mostly write readable, low-magic, boilerplatey code.

matt3D

This is a more extreme example of the general Hacker News groupthink about AI.

Geohot is easily a 99.999th-percentile developer, and yet he can't seem to reconcile that the other 99.999 percent are doing something much more basic than he can ever comprehend.

It’s some kind of expert paradox, if everyone was as smart and capable as the experts, then they wouldn’t be experts.

I have come across many developers that behave like the AI. Can’t explain codebases they’ve built, can’t maintain consistency.

It's like an aerospace engineer not believing that the person who designs the toys in a Kinder egg doesn't know how fluid sims work.

joefourier

Vibe coding large projects isn’t feasible yet, but as a developer here’s how I use AI to great effect, to the point where losing the tool greatly decreases my productivity:

- Autocomplete in Cursor. People think of AI agents first when they talk about AI coding, but LLM-powered autocomplete is a huge productivity boost. It merges seamlessly with your existing workflow, prompting is just writing comments, it can edit multiple lines at once or redirect you to the appropriate part of the codebase, and if the output isn't what you need you don't waste much time, because you can just ignore it and write the code as you usually do.

- Generating coding examples from documentation. Hallucination is basically a non-problem with Gemini Pro 2.5 especially if you give it the right context. This gets me up to speed on a new library or framework very quickly. Basically a stack overflow replacement.

- Debugging. Not always guaranteed to work, but when I’m stuck at a problem for too long, it can provide a solution, or give me a fresh new perspective.

- Self-contained scripts. It's ideal for this, like making package installers, cmake configurations, data processing, serverless micro services, etc. (see the sketch after this list).

- Understanding and brainstorming new solutions.

- Vibe coding parts of the codebase that don't need deep integration. E.g. create a web component with X and Y features, a C++ function that serves a well-defined purpose, or a simple file browser. I do wonder if a functional programming paradigm would be better when working with LLMs, since by avoiding side effects you can work around their weaknesses when it comes to large codebases.
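
As an example of the "self-contained scripts" category above, this is the kind of one-file job I'd hand off; the script and its flags are hypothetical, just to show the shape of the task:

  import argparse
  import csv
  import sys

  def main():
      parser = argparse.ArgumentParser(
          description="Keep CSV rows where a numeric column meets a threshold.")
      parser.add_argument("csv_file")
      parser.add_argument("--column", required=True)
      parser.add_argument("--min", type=float, default=0.0)
      args = parser.parse_args()

      with open(args.csv_file, newline="") as f:
          reader = csv.DictReader(f)
          writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
          writer.writeheader()
          for row in reader:
              try:
                  if float(row[args.column]) >= args.min:
                      writer.writerow(row)
              except (KeyError, ValueError):
                  pass  # skip rows missing the column or holding non-numeric values

  if __name__ == "__main__":
      main()

Well-defined inputs, a well-defined output, and an obvious way to check the result: that's the sweet spot.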

piker

Pretty much nailed it. Once you're at about 40k LOC you can just turn off the autocomplete features and use Claude or GPT to evaluate specific high-level issues. My sense is 40k LOC is the point at which the suggestions are offset by the rabbit holes they sometimes send you down and, more importantly, by temporarily obscuring from you the complexity of the thing you're building.

runningmike

Great short read. But this: “It’s why the world wasted $10B+ on self driving car companies that obviously made no sense.”

Not everything should make sense. Playing, trying, and failing is crucial to making our world nicer. Not overthinking is key; see later what works and why.

saejox

> AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this?

True in the long run. Like a car with high acceleration but a low top speed.

AI makes you start fast, but you regret it later because you don't have the top speed.

net01

This is shown in Figure 5 of the paper: https://arxiv.org/pdf/2507.09089

isaacremuant

People repeating articles or papers. I know myself. I know from my own experiences what the good and bad of practice A or database B is. I don't need to read a conclusion by some Muppet.

Chill. Interesting times. Learn stuff, like always. Iterate. Be mindful and intentional and don't just chase mirrors but be practical.

The rest is fluff. You know yourself.

ChrisMarshallNY

> AI makes you feel 20% more productive but in reality makes you 19% slower. How many more billions are we going to waste on this?

Adderall is similar. It makes people feel a lot more productive, but research on its effectiveness[0] seems to show that, at best, we get only a mild improvement in productivity, and marked deterioration of cognitive abilities.

[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC6165228/

joefourier

I’m someone with ADHD who takes prescribed stimulants and they don’t make me work faster or smarter, they just make me work. Without them I’ll languish in an unfocused haze for hours, or zone in on irrelevant details until I realise I have an hour left in the day to get anything done. It could make me 20% less intelligent and it would still be worth it; this is obviously an extreme, but given the choice, I’d rather be an average developer that gets boring, functional code done on time than a dysfunctional genius who keeps missing deadlines and cannot be motivated to work on anything but the most exciting shiny new tech.

ChrisMarshallNY

I have a family member who had ADHD as a kid (they called it “hyperactivity” back then). He is also dyslexic.

The ADHD was caught early, and treated, but the dyslexia was not. He thought he was a moron, for much of his early life, and his peers and employers did nothing to discourage that self-diagnosis.

Since he learned of his dyslexia, and started treating it, he has been an engineer at Intel, for most of his career (not that I envy him, right now).

diarrhea

Note that the study is just n=13 and on subjects without ADHD.

ChrisMarshallNY

That’s the deal.

People without ADHD take it, believing that it makes them “super[wo]men.”

bdcravens

I had a problem client that I ended up firing and giving money back to about 15 years ago. Lots of red flags, but the breaking point was when they offered me Adderall so I could "work faster".

That said, I'll leave the conclusions about whether it's valuable for those with ADHD to the mental health professionals.

gobdovan

Thanks again, diarrhea

luckylion

Research on _13_ people: that's a very important caveat when evaluating something like Adderall.

Eikon

It’s interesting how science can become closer to pseudoscience than proper research through paper-milling.

It seems like with such small groups and effect sizes you could run the same “study” again and again until you get the result you initially desired.

ChrisMarshallNY

So it should be easy to find studies proving that non-ADHD people who take it have dramatically improved productivity.

ChrisMarshallNY

I’m quite sure that there’s a ton more research on it. The drug’s been around for decades. Lots of time for plenty of studies.

If legitimate research had found it to be drastically better, that study would definitely have been published in a big way.

Unscientifically, I personally know quite a number of folks who sincerely believed they couldn't function without it, but have since learned that they do far better on their own. I haven't met a single one whose productivity actually declined after giving up Adderall (after an adjustment period, of course). In fact, I know several whose careers really took off after giving it up.

luckylion

My point is that micro-studies like that, on a tiny random (or even contraindicated, "healthy") selection of the general population, don't tell you much about drugs that do specific things.

"Antibiotics don't improve your life, but can damage your health" would likely be the outcome on 13 randomly selected healthy individuals. But do the same study on 13 people with a bacterial infection susceptible to antibiotics and your results will be vastly different.