AI Coding: A Sober Review

12 comments

September 17, 2025

willahmad

Here's my experience with these tools:

Good: I can prototype things very quickly thanks to these tools

Bad: After a couple of vibe coding iterations, I don't have a mental model of the project.

Good: When I open my past projects where I have very good mental models, I can come up with a nice prompt and build anything quickly again.

Bad: After a couple of iterations I become lazy, and eventually my mental models break.

There's definitely a use for these tools. But be careful: an engineer's job is not only coding but also training their memory to build solutions and bridge real-world problems with software solutions. If you lose this skill of thinking, you will become obsolete quickly.

accrual

This matches my experience as well. When I'm working on a codebase that I started and know well, it feels like magic to chat with an AI and watch patches appear on the screen to accept or deny. I only accept about 50% of the AI patches without tweaks, because it's my project and I care about keeping it on the track I laid out.

When I'm vibe coding something from scratch, I don't have the mental model, I don't always review everything closely, and eventually it becomes an "AI project" that I'm just making requests against in the hope of achieving my goal.

softwaredoug

And when you lose your mental model it’s harder to prompt the LLM for good code.

softwaredoug

This space is filled with personal anecdotes and studies from providers. It's hard to get objective perspectives from independent labs.

shikharbhardwaj

Hi! Author of the blog post here.

I completely agree: getting an objective measure of the developer experience with these various tools is not easy. On one hand, you have a series of benchmarks from the LLM providers; while these reflect some degree of fitness for specific tasks, they often fail to translate to real-world usage. On the other hand, you have the tool providers with their differing features and product claims, plus user anecdotes covering very different use cases.

The attempt with this post was to summarize my experience across some of these tools and highlight specific features that worked better for me than others. Given how quickly things are changing in this space, the primary conclusion is that the best approach right now is to use a tool day-to-day, discover its strengths and deficiencies, and work to eliminate the deficiencies you hit most often.

ozgune

(Disclaimer: Ozgun from Ubicloud)

I agree with you. I feel the challenge is that using AI coding tools is still an art, and not a science. That's why we see many qualitative studies that sometimes conflict with each other.

In this case, we found the following interesting. That's why we nudged Shikhar to blog about his experience and put a disclaimer at the top.

* Our codebase is in Ruby and follows a design pattern that's uncommon in the industry
* We don't have a horse in this game
* I haven't seen an evaluation that covers coding tools across the (a) coding, (b) testing, and (c) debugging dimensions

troupo

It's hard to go beyond anecdotes because it's impossible to measure outcomes objectively.

CuriouslyC

Is it? Tests turning green seems pretty objective, as does time/tokens to green, code delta size, patch performance, etc. Not sure why people have such a hard time with agent evals.

Just remember to keep a holdout test set for validation.
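For illustration, here's a minimal sketch of what collecting those signals could look like. The repo path, test command, and "holdout" test directory are hypothetical, not taken from any particular tool or the article:

```python
# Minimal sketch of an agent-eval harness: run the project's test suite after an
# agent-generated patch and record the objective signals mentioned above
# (tests green, wall-clock time to green, code delta size).
import subprocess
import time


def evaluate_patch(repo_dir: str, test_cmd: list[str]) -> dict:
    """Assumes the agent's patch is already applied in repo_dir."""
    # Diff size against the last commit, as a rough proxy for "code delta".
    diff = subprocess.run(
        ["git", "-C", repo_dir, "diff", "--shortstat", "HEAD"],
        capture_output=True, text=True,
    ).stdout.strip()

    start = time.monotonic()
    result = subprocess.run(test_cmd, cwd=repo_dir, capture_output=True, text=True)
    elapsed = time.monotonic() - start

    return {
        "tests_green": result.returncode == 0,   # objective pass/fail signal
        "seconds_to_green": round(elapsed, 1),
        "code_delta": diff,                      # e.g. "3 files changed, 42 insertions(+)"
    }


if __name__ == "__main__":
    # Hypothetical usage: run a held-out test suite the agent never saw,
    # kept aside for validation as suggested above.
    print(evaluate_patch(".", ["pytest", "tests/holdout", "-q"]))
```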

CuriouslyC

A vibe article on vibe coding.

ExxKA

I am none the wiser. How do I get my 5 minutes back?

GardenLetter27

This reads like an advert for Continue.dev