People Who Hype Cursor Usually Lack Technical Skills
33 comments
· May 10, 2025 · tom_m
phreno
I look forward to your fascinating work on phrenology next.
It’s always amazing how smart people like you seem to forget your own statistical irrelevance.
“Oh no, the sounds don’t come out just right.” What, is a giant foot going to squish us? Setting aside the anxiety about being smitten, you’re projecting some bizarre obligation onto your own lived experience.
Why don’t you say things in old Latin? Right, because we move on from written and spoken traditions; you just need to properly apply physical statistics to the real engineering. The stupid babble in a YT video is far removed from actually building a bridge, or any other context where the physics matters more than the Anglicized circumlocution.
yoyohello13
Please read the article first. This is not “AI bad” despite what the title may imply.
I do think we are in a transitional period now. Eventually all editors will have the same agentic capability. That's why editor-agnostic tools like Claude Code and aider are much more exciting to me.
mattnewton
I read the article and was more confused. The author seemed to make a lot of assumptions about cursor without actually trying it and then used those assumptions to justify not trying it.
tough
People who criticize something without knowing or having first hand experience lack common sense
tough
So the problem is the hype, not the AI
ko_pivot
> In fact, Cursor’s code completion isn’t much better than GitHub Copilot’s. They both use the same underlying models
Not sure that this is true. Cursor's agent mode is different from its code completion, and the code completion is, I believe, a legitimately novel model.
tough
According to this article, Cursor's tab completion is SOTA, and was built by a genius who also created some other important foundational AI libraries and worked briefly at OpenAI
https://www.coplay.dev/blog/a-brief-history-of-cursor-s-tab-...
tom_m
You can have Copilot use Claude now, but I'm not sure it's the default. I found the latest Gemini 2.5 Pro to be much better than Claude Sonnet anyway... but yes, these are all more or less the same at this point.
One thing people don't realize is that there's also randomness in the answers. Also, some of the editors allow you to tweak temperature and others do not. This is why I found the Roo Code extension to be better.
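To make the temperature point concrete, here is a minimal, self-contained sketch of what the temperature knob does during sampling (purely illustrative; not any editor's actual implementation):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Sample a token index from logits after temperature scaling.

    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random answers).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                               # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()                              # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i
    return len(probs) - 1

logits = [2.0, 1.0, 0.1]
# Near-zero temperature: almost always picks the highest-logit token.
cold = [sample_with_temperature(logits, 0.01, random.Random(s)) for s in range(100)]
# High temperature: choices spread across all tokens.
hot = [sample_with_temperature(logits, 5.0, random.Random(s)) for s in range(100)]
print(set(cold), sorted(set(hot)))
```

With the temperature near zero the same prompt gives the same token every time; cranked up, the model's answers vary run to run, which is the "randomness" people miss.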
alook
They definitely do train their own models, the founders have described this in several interviews.
I was surprised to learn this, but they made some interesting choices (like using sparse mixture-of-experts models for their tab completion model, to get high throughput/low latency).
Originally I think they used frontier models for their chat feature, but I believe they've recently replaced that with something custom for their agent feature.
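For anyone unfamiliar with the sparse mixture-of-experts idea mentioned above, here's a toy sketch of top-k expert routing (purely illustrative; Cursor's actual model architecture is not public):

```python
import math
import random

random.seed(0)

def matvec(m, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def moe_forward(x, gate, experts, k=2):
    """Sparse mixture-of-experts: route the input through its top-k experts only.

    Only k experts run per token, so per-token compute stays low even as
    total parameter count grows -- the property that makes sparse MoE
    attractive for a high-throughput, low-latency completion model.
    """
    scores = matvec(gate, x)                                  # one routing score per expert
    top = sorted(range(len(scores)), key=scores.__getitem__)[-k:]
    exps = [math.exp(scores[i]) for i in top]
    total = sum(exps)
    weights = [e / total for e in exps]                       # softmax over selected experts
    outs = [matvec(experts[i], x) for i in top]               # only k experts execute
    return [sum(w * o[j] for w, o in zip(weights, outs)) for j in range(len(x))]

d, n_experts = 4, 3
rand = lambda r, c: [[random.uniform(-1, 1) for _ in range(c)] for _ in range(r)]
gate = rand(n_experts, d)
experts = [rand(d, d) for _ in range(n_experts)]
x = [random.uniform(-1, 1) for _ in range(d)]
y = moe_forward(x, gate, experts, k=2)
print(len(y))
```

The gating network picks which experts fire; the rest of the parameters sit idle for that token, which is where the throughput/latency win comes from.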
varsketiz
I'm really curious what problems the codebases of today's startups will have in a few years. The internet is already full of memes about working with legacy code. What will the legacy codebases look like when half the code was generated with AI tools?
tom_m
I think about this a lot. Then I realize many are already bad to begin with. So they may not be much worse as it turns out. We'll have to see.
One thing is certain though - it's an amazing time for security professionals. We already have careless developers without AI, but now? Oh boy oh boy.
tough
and 99% of them end up in the startup cemetery anyway, does it matter much if those codebases stink a lil more?
trowawee
They'll be trash, but after a decade bouncing around startups, that's not exactly a problem unique to LLMs. There's probably going to be more startups with more trash than there used to be, but hey: that's job security.
plandis
Any claims about functionality with coding assistants should be looked upon skeptically without including the prompts, IMO.
It could very well be that Cursor isn’t very helpful. It could also be the case that the person prompting is not providing enough context for the problem at hand.
It’s impossible to tell without the full chat history.
mystraline
People who hype IDEs usually lack technical skills.
People who hype autocomplete usually lack technical skills.
People who hype memory-safe languages usually lack technical skills.
People who hype compilers usually lack technical skills.
There was some wordplay there, which is just attacking whatever current technology makes technology more attainable by more people. However, with LLMs there is one major worry: they encourage de-skilling and getting addicted to having an LLM think for us. Now, if you can run your own LLMs, you're resilient to that. But the really bad side is when the LLM companies put price tags on using the 'think-for-you' machine. That represents a great de-skilling and is anti-critical-thought.
I'm not saying "don't use LLMs". I am saying to run them yourselves, and learn how to work with them as an interactive encyclopedia, and also not to let them have core control over intellectual thought.
yladiz
> However with LLMs, there is one major worry - in that it encourages de-skilling and getting addicted to having an LLM think for us. Now, if you can run your own LLMs, you're resilient to that.
Not sure what you mean. Can't a local LLM get you addicted in the same way a cloud one can?
mystraline
Using a local LLM lays all the guarded secrets bare.
For example, you can run multiple LLMs and compare their outputs against each other.
You can issue system messages (commands) to make the model take specific actions, like ignoring arbitrary moralities, or processing first- and second-derivative results.
By running and commanding an LLM locally, you become an actual tool user, rather than a service user at the whim of whatever the company (ChatGPT, etc.) wishes.
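As a sketch of what "issuing system messages" looks like against a self-hosted model — the endpoint URL and model name below are assumptions about a local llama.cpp/Ollama-style server exposing an OpenAI-compatible API, not any specific product:

```python
import json
import urllib.request

def build_chat_request(system_msg, user_msg, model="local-model", temperature=0.2):
    """Build an OpenAI-style chat payload with an explicit system message.

    Running the model yourself means the system prompt is fully under your
    control, rather than fixed by a hosted service.
    """
    return {
        "model": model,
        "temperature": temperature,
        "messages": [
            {"role": "system", "content": system_msg},
            {"role": "user", "content": user_msg},
        ],
    }

payload = build_chat_request(
    "You are a terse interactive encyclopedia. Answer in one sentence.",
    "What does temperature do in LLM sampling?",
)

# Hypothetical local endpoint -- adjust host/port to your own server setup.
req = urllib.request.Request(
    "http://localhost:8080/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(req)  # uncomment with a local server running
print(payload["messages"][0]["role"])
```

Because you own the server, nothing stops you from swapping the system message, the temperature, or the model itself on every request.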
cjoelrun
Useful LLM architectures for working on complex codebases seem to be out the reach of consumer hardware.
win32lover
[dead]
mellosouls
Ironically, considering the title, this seems like a junior dev/young man take.
A little too sure of itself, incautious and overly generalising.
mattnewton
Cursor does actually train their own models, most importantly models that apply edits to files and models that assemble context for the LLM agents. Has the author actually used the tools they are writing about?
Cursor/Windsurf/Roo/Kilo/Zed are about smoothing over the rough edges of actually getting work done with agentic models. And somewhat surprisingly, the details matter a lot in unlocking value.
rattlesnakedave
Low quality clickbait article. “I like my editor because I’m used to it” ok man, do you want an award? The claims about the limitations of vscode and cursor’s code navigation abilities aren’t even accurate. The author just doesn’t know how to use them. There’s a reason it’s popular, and it’s not “everyone is dumber and less talented than me.”
rexarex
Cursor with Gemini 2.5 Pro has been really great.
tom_m
Roo Code with Gemini 2.5 Pro has been really great and FREE. I'm super curious to see how this landscape changes. I'm still surprised Windsurf managed to be acquired for $3B too. Give it a few years and there won't be a point in paying for these editors. I don't think there is currently.
tmpz22
Big fan of Gemini 2.5 Pro + Cursor, but it's far from a panacea.
After using Cursor heavily the past few weeks, I agree with the author's points. The ability to work outside of Cursor/AI is paramount within small software teams, because you will periodically run into things it can't do - or worse, it will lead you in a direction that wastes a lot of developer time.
Cursor will get better at this over time, and so will the underlying models, but the executive vision here is absolutely broken, and at this point I can only laugh at the problems this generation of startups will inevitably go through when they realize their teams no longer have the expertise to solve things in more traditional ways.
hliyan
I installed and played around with Cursor for all of perhaps two hours before giving up in disgust. The first few generations were generally quite good. After that, problems started to compound so quickly that I found myself rubbing my temples. You would probably have the same reaction if you could read the generated code. I decided it's better to stick to my current approach of using LLMs as an expert system: helping me figure out which functions, libraries, algorithms, data structures, or patterns to use, and occasionally asking them to write a standalone function.
Very true. You can easily tell from the YouTubers. When you listen to them, it's clear they don't know what they're talking about: half the time they pronounce things incorrectly or reference something incorrectly.