I stopped using AI code editors
80 comments
April 3, 2025 · wolframhempel
flowerthoughts
I'd classify this as theoretical skills vs tool skills.
Even your engineering principles are probably superior to the ancient Greeks', since you can simulate bridges before laying the first stone. "It worked the last time" is still a viable strategy, but the models we have today mean we can often say "it will work the first time we try."
My point being that theory (and thus what is considered foundational) has progressed as well.
politelemon
> horse-riding, sword fighting or my inability to navigate by the stars.
Some better, more suitable examples would be warranted here; none of these were as widespread or common as you'd assume, so little to no metaphorical scoffing would happen over them. Now, sewing and darning, and subsistence skills, while mundane, are uncommon for many of us.
sshine
For some strange reason, I'm better at sewing than both my wife and mother-in-law. I learned it in public school, when both genders learned both woodworking and sewing, and maintained an interest so that I could wear "grunge" in the 1990s. The teachers I had still remembered, from earlier in their careers, when those classes had been gendered.
codebra
“still far from being able to do the latter” These models have been in wide use for under three years. AI IDEs barely a year. Gemini 2.5 Pro is shockingly good at architecture if you make it into a conversation rather than expecting a one-shot exercise. I share your native skepticism, but the pace of improvement has taken me aback and made me reluctant to stake much on what LLMs can’t do. Give it 6 months.
sceptic123
Taking your SQL example, if you don't properly understand the SQL dialect how can you know that what the AI gives you is correct?
LiKao
I'd say it's because, psychologically (and also based on CS theory), creating something and verifying it draw on related but distinct skills.
It's like NP: solving an NP-complete problem is, as far as we know, very hard, while verifying that a proposed solution is correct is easy.
You might not know the statements required, but once the AI reminds you which statements are available, you can check that the logic using those statements makes sense.
Yes, there is the pitfall of being lazy and forgetting to verify the output. That's where a lot of vibe-coding problems come from, in my opinion.
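To make the create-versus-verify asymmetry concrete, here's a toy sketch in JavaScript (my own illustration; the function names and numbers are made up): brute-forcing subset sum means trying exponentially many subsets, while checking a proposed subset is a single pass.

```javascript
// Toy illustration: solving subset sum by brute force vs. verifying a proposed answer.

// Solving: try every subset; the loop is exponential in the number of items.
function solveSubsetSum(numbers, target) {
  for (let mask = 0; mask < (1 << numbers.length); mask++) {
    const subset = numbers.filter((_, i) => mask & (1 << i));
    if (subset.reduce((sum, x) => sum + x, 0) === target) return subset;
  }
  return null; // no subset adds up to the target
}

// Verifying: given a candidate subset, one pass over the data is enough.
function verifySubsetSum(numbers, target, candidate) {
  const pool = [...numbers];
  for (const x of candidate) {
    const i = pool.indexOf(x);
    if (i === -1) return false; // candidate uses a number that isn't available
    pool.splice(i, 1);
  }
  return candidate.reduce((sum, x) => sum + x, 0) === target;
}

console.log(solveSubsetSum([3, 34, 4, 12, 5, 2], 9));          // e.g. [4, 5]
console.log(verifySubsetSum([3, 34, 4, 12, 5, 2], 9, [4, 5])); // true
```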
sceptic123
The biggest problem with LLMs is that they are very good at presenting something that looks like a correct solution without having the required knowledge to confirm if it is indeed correct.
So my concern is more "do you know how to verify" rather than "did you forget to verify".
globular-toast
This is a great comment and says what I've been thinking but hadn't put into words yet.
Too many people think what I do is "write code". That is incorrect. What I do is listen, read, watch and think. If code needs writing then it already basically writes itself because at that point I've already done the thinking. The typing part is an inconvenience that I'd happily give up if I could get my thoughts into the computer directly somehow.
AI tools make the easy stuff easier. They don't help much with hard stuff. The most useful thing I've found them for is getting an initial orientation in a completely unfamiliar area. But after that, when I need hard details, it's books, manuals, blogs etc just like before. I find juniors are already lacking in their ability to find and assimilate knowledge and I feel like having AI isn't going to help here.
namaria
Abstracting away the software paraphernalia makes this more clear in my view: our job is to understand and specify abstract symbolic systems. Making them work with the current computer architectures is incidental.
This is why I don't see LLM assisted coding as revolutionary. At best I think it's a marginal improvement on indexing, search and code completion as they have existed for at least a decade now.
NLP is a poor medium for specifying abstract symbolic systems. And LLMs work by finding patterns in latent space, I think. But the latent space doesn't represent reality, it represents language as recorded in the training data. It's easy to underestimate just how much training data were used for the current state-of-the-art foundational models. And it's easy to overestimate the ability these tools have to weave language and by induction attribute reasoning abilities to them.
The intuition I have about these LLM-driven tools is that we're adding degrees of freedom to the levers we use. When you're near an attractor congruent with your goals it feels like magic. But I think this is overfitting: the things we do now are closely mirrored by the data used to train these models. But as we move forward in terms of tooling, domains, technology, culture, etc., the data available will become increasingly obsolete, and relevant data increasingly scarce.
Besides, there's the problem of unknown unknowns: lots of people using these tools assume that the attractors they see pulling on their outcome are adequate, because they can only see some arbitrary surface of them. And since they don't know what geometries lie beneath, they end up creating and exposing systems with several unknown issues that might have implications for security, legality, morality, etc. And since there's a time delay between their feeling of accomplishment and the surfacing of issues, and they will likely use the same approach again, we might be heading for one hell of a bullwhip effect across dimensions we can't anticipate at all.
satvikpendem
I do the same now: I don't use Cursor or similar edit-level AI tools anymore. I just use inline text completions and chat to talk through a problem, and then I'll copy-paste anything needed (or rather type it in manually, just to have more control).
I literally felt myself getting AI brain rot, as one Ask HN put it recently, where it felt like I started losing brain cells, depended too much on the AI over my own thinking, and felt my skills atrophy. At the end of the day, I sense there will be a much wider gap in the future between those who truly know how to code and those who, well, don't, due to such over-reliance on AI.
greyman
I also stopped using Cline, as well as Claude Desktop + MCPs. Gemini, for example, is rushing forward, and Google is surely putting huge resources into developing it. If, within a matter of months, AI will be able to implement an additional feature by itself in zero-shot fashion, why bother with an IDE?
satvikpendem
And what will you do when that zero-shot attempt doesn't work, and continues not to work? It will always be necessary to dig in and manually change things, hence an editor or IDE will continue to be needed.
greyman
Yes, this happens. Then I use a "dumb" IDE like GoLand... or rather, I never stopped using it. My point is that I currently do not invest my time into learning an "agentic IDE" like Cursor, since I am not sure it will be something useful in the future.
mentalgear
I also do most of my coding artisanal, but use LLMs for semantic search, to enrich the research part.
Definitely never trust an LLM to write entire files for you, at least if you don't want to spend more time on code review than on writing, or if you expect to maintain it.
Also, a good quote regarding the AI tools market:
> A lot of companies are creating FOMO as a sales tactic to get more customers, to show traction to their investors, to get another round of funding, to generate the next model that will definitely revolutionize everything.
alfiedotwtf
> I also do most of my coding artisanal
Off-topic, but I just wanted to say I love this as a statement!
nesk_
I've recently disabled code completions; it's too much mental workload to read all those suggestions for so little quality in return.
I still use the chat whenever I need it.
specproc
Nicholas Carr has a nice book on the dynamic the author is describing [0], i.e. that our skills atrophy the more we rely on automation.
Like a lot of others in the thread, I've also turned off Copilot and have been using chat a lot less during coding sessions.
There are two reasons for this decision, actually. Firstly, as noted above, in the original post and throughout this thread, it's making my already fair-to-middling skills worse.
The more important thing is that coding feels less fun. I think there are two reasons for this:
- Firstly, I'm not doing so much of the thinking for myself, and you know what? I really like thinking.
- Secondly, as a corollary to the skill loss, I really enjoy improving. I got back into coding again later in life, and it's been a really fun journey. It's so satisfying feeling an incremental improvement with each project.
Writing code "on my own" again has been a little slower (line by line), but it's been a much more pleasant experience.
acron0
This feels similar to articles with titles such as "Why every developer should learn Assembly" or "Relying on NPM packages considered harmful". I appreciate the core of truth inside the sentiment, and the author isn't _wrong_, but it won't matter over time. AI coding ability will improve, whether it's writing, debugging or planning. It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will. And yes, it won't be optimal or totally secure, or the abstractions might be questionable but...how is that really different than most real software projects anyway?
fieldcny
Software is the connective tissue of the world. Generating mediocre-quality results (which will be the best outcome if you don't really understand what you are looking at) is not just lazy, it can be dangerous. Do the world's best engineers make mistakes? Of course they do, but that's why building high-quality software is a collaborative process: you have to work with others to build better systems. If you aren't, you are wasting your time.
As of now (and this could change, but that doesn't change the moral and ethical obligations), software engineers are richly rewarded specifically because they should be able to write and understand high-quality code; the code they write is the foundation of how our entire modern world is built.
tauchunfall
>It will be good enough to produce 90% of the solution with very little input, and 90% is more than enough to go to market, so it will.
What backs up this claim? And when will it reach that point?
We could very well have reached a plateau right now, which would mean that looking at previous trends in improvement does not allow us to predict future improvements, if I understand it correctly.
yapyap
That is a hellish look toward the future. To be clear I don’t think you’re wrong, if companies can squeeze more out of devs by forcing them to use AI I bet they will, move fast and break stuff and all that, but it’s still quite the bummer.
futuraperdita
I'd argue it's a hell many other people see daily, and we've been privileged to have the space to care about craft. Corporations have never cared about the craft. The business is paying me to make things, and the moment they can get my level of quality and expertise from someone much cheaper, or from a machine itself, I'm gone. That dystopia has always been present; we just haven't had to stare it down as much as some other industries have.
satvikpendem
I don't think it's really any different from how most products are made currently; do you think most startups care about security and other things that would slow down their initial release? All the rest is tech debt that can be solved once product market fit is solved.
The only thing I'd worry about is when no one knows how to solve these when everyone relies on AI.
mentalgear
> All the rest is tech debt that can be solved once product market fit is solved.
Even then, it's mostly never.
ghaff
I don't have a real opinion on the value at this point, but to the degree that there are significant productivity-enhancing tools available for developers (or many other functions) and they refuse to use them, companies should properly mark those folks down as low performers, with the associated consequences.
"I don't want to use the web."
reshlo
“It would enhance productivity” is not a sufficient justification for requiring someone to do something. Ignoring safety regulations would often enhance productivity, but I’m sure you understand why we shouldn’t do that.
rahkiin
I only use the line-completion AI that comes with Rider. I think it is a reasonable mix of classic code completion with a bit more smarts to it, like suggesting a string for a Console.Write. But it does not write new lines, as described by the author.
didip
Why? Clearly AI tools make life easier.
I could drive a manual car, but why? Automatic transmission is so much more convenient. Furthermore, for some use-cases FSD is even more convenient.
Another example: I don't want to think about the gait movement of my robots, I just want them to move from A to B.
With programming, same thing: I don't want to waste time typing `if err != nil {}`, I want to think about the real problem. Ditto for happy-case unit tests. I don't want to waste my carpal-tunnel-prone wrists on those.
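To be concrete, this is the kind of rote happy-case test I mean (a throwaway JavaScript sketch; add() is just a stand-in for whatever function is actually under test):

```javascript
// Illustrative happy-path tests: repetitive, predictable, easy to delegate to a completion tool.
const assert = require("assert");

// Stand-in for whatever function is actually under test.
function add(a, b) {
  return a + b;
}

assert.strictEqual(add(2, 3), 5);
assert.strictEqual(add(-1, 1), 0);
assert.strictEqual(add(0, 0), 0);
console.log("happy-path tests passed");
```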
So on and so forth. Technology exists to make life more convenient. So why reject technology?
alfiedotwtf
The fun factor: as an example, I specifically bought a manual car because automatics are boring to drive :)
jamez1
Skill loss works both ways. You might miss out on forming early skills in using LLMs effectively, and end up playing catch-up 3-5 years from now if LLMs render all the skills you hold void.
It is also likely that LLMs will change programming languages; we will probably move to more formal, type-safe languages that LLMs can work with better. You might be good at your language but find that the world shifts to a new one where everyone has to use LLMs to be effective.
sceptic123
Is there really that much skill involved in using LLMs effectively? Most of the criticisms I see end up being countered with something along the lines of "you're not using the right model". That implies that much of the skill people talk about is less important than picking the correct model to use (which tends to be one of the more expensive ones).
And in your LLM future, who will maintain all of the legacy systems that are written in languages the LLMs don't end up assimilating? It's reasonably safe to assume there will be plenty of work left there.
comrade1234
It's replaced Google search for me when trying to look up a specific problem. I'd actually rather use Google, because the results from AI are too long and wordy and give too many answers/options, but something has happened to Google that has made it useless. It started when they began putting Reddit at the top of the results, and it's just getting worse over time.
paroneayea
I mean, isn't what happened to Google (the thing that's leading people to use generative AI tools instead of searching) that web searches started filling up with generative AI garbage, both in terms of web content and in terms of Google itself generating it?
shmichael
Inventing the automobile has clearly made humanity less fit. Should we stop driving?
No. Going back to the stone age is not the solution. For the majority of us, commuting without a vehicle is impractical. So, increasingly, is coding without AI, especially as it improves.
To retain human competency, we will have to find a novel solution. For walking, we created concentrated practice time: gyms and outdoor runs. Some evolution of leetcode, or even AI-guided training, might be the solution for preserving coding skill.
hyperjeff
> Inventing the automobile has clearly made humanity less fit. Should we stop driving?
Perhaps an apt analogy. One could argue that the lure of convenience of automobiles led to one of the worst decisions of the 20th century, to restructure society around automobiles, causing a self-perpetuating reliance feedback loop with many destructive side-effects (physical, environmental and cultural). We should pause a bit and not rush head-long into AI without trying to think the path forward through. It's a decision that we will all make together as a culture. There are many current troubles with AI already, even if they make no mistakes at all.
tasuki
> Inventing the automobile has clearly made humanity less fit. Should we stop driving?
Yes, pretty please.
I live in a town of 400 thousand; it's basically 10 kilometers across. Very easily walkable. Why does everyone drive? I'm about as fast on foot as they are when they're stuck in morning traffic. I'm also enjoying my time more than the people stuck in traffic. (And I'd enjoy it even more if there weren't so many cars around!)
I don't understand people who drive to the gym to walk there. They could just walk to the gym and back, instead of going to the gym...
greyman
I also stopped using AI code editors, but for different reasons. I realized that, with advances like Gemini 2.5 Pro, AI will soon be able to implement whole features given the correct prompt. So the real skill is how to prompt the AI and maintain the overall architecture of the project. I wonder if IDEs like Cursor or Cline will even be needed in the future; as for myself, I stopped investing in learning them. I currently use 2.5 Pro + the Repo Prompt app, which prepares the prompt and then applies the result to the codebase semi-automatically.
I believe there are two kinds of skill: standalone and foundational.
Over the centuries we’ve lost and gained a lot of standalone skills. Most people throughout history would scoff at my poor horse-riding, sword fighting or my inability to navigate by the stars.
My logic, reasoning and oratory abilities, on the other hand, as well as my understanding of fundamental mechanics and engineering principles, would probably hold up quite well (language barrier notwithstanding) back in ancient Greece or in 18th century France.
I believe AI is fine to use for standalone skills in programming. Writing isolated bits of logic, e.g. a getRandomHexColor() function in JavaScript or a query in an SQL dialect you're not deeply familiar with, is a great help and timesaver.
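For illustration, here is one minimal way such an isolated helper might look (a sketch; any number of implementations would do equally well):

```javascript
// One possible getRandomHexColor(): returns a CSS-style color string such as "#a3f01c".
function getRandomHexColor() {
  const n = Math.floor(Math.random() * 0x1000000); // 0 .. 0xFFFFFF
  return "#" + n.toString(16).padStart(6, "0");
}

console.log(getRandomHexColor()); // e.g. "#07c4d9"
```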
On the other hand, handing over the fundamental architecture of your project to an AI will erode your foundational problem solving and software design abilities.
Fortunately, AI is quite good at the former, but still far from being able to do the latter. So, to me at least, AI-based code editors are helpful without the risk of long-term skill degradation.