People Who Hype Cursor Usually Lack Technical Skills
52 comments
May 10, 2025 · yoyohello13
mattnewton
I read the article and was more confused. The author seemed to make a lot of assumptions about cursor without actually trying it and then used those assumptions to justify not trying it.
lubujackson
There is a lot of definitive "o1 is better" etc. Lots of strawmen and sweeping statements - not sure who is upvoting this beyond the "AI sucks" crowd who didn't RTFA.
tough
People who criticize something without knowing or having first hand experience lack common sense
varsketiz
I'm really curious what problems the codebases of today's startups will have in a few years. The internet is already full of memes about working with legacy code. What will legacy codebases look like when half the code was generated with AI tools?
Quekid5
Sooooo much boilerplate and pointless generated tests is my prediction.
EDIT: Oh, and as a sibling poster mentioned: A huge number of security vulnerabilities -- except now they can be purposefully injected by just posting random subtly-wrong code on the interweb. Not that you couldn't do that before, but reach would be much more limited unless you got your 'seemingly correct' code posted on SO.
tom_m
I think about this a lot. Then I realize many are already bad to begin with. So they may not be much worse as it turns out. We'll have to see.
One thing is certain though - it's an amazing time for security professionals. We already have careless developers without AI, but now? Oh boy oh boy.
trowawee
They'll be trash, but after a decade bouncing around startups, that's not exactly a problem unique to LLMs. There's probably going to be more startups with more trash than there used to be, but hey: that's job security.
ko_pivot
> In fact, Cursor’s code completion isn’t much better than GitHub Copilot’s. They both use the same underlying models
Not sure that this is true. Cursor's agent mode is different from Cursor's code completion, and the code completion is a legitimately novel model, I believe.
alook
They definitely do train their own models, the founders have described this in several interviews.
I was surprised to learn this, but they made some interesting choices (like using sparse mixture-of-experts models for their tab completion model, to get high throughput/low latency).
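For intuition, "sparse" MoE means each token only activates a few experts, which is where the throughput/latency win comes from. A toy sketch of the routing idea, purely illustrative and not Cursor's actual model:

    # Toy sparse mixture-of-experts layer: a router scores experts per token
    # and only the top-k experts are evaluated, so per-token compute stays
    # small even though total parameter count is large. Illustrative only.
    import numpy as np

    d_model, n_experts, top_k = 64, 8, 2
    rng = np.random.default_rng(0)
    router_w = rng.standard_normal((d_model, n_experts)) * 0.02
    expert_w = rng.standard_normal((n_experts, d_model, d_model)) * 0.02

    def moe_layer(x):
        logits = x @ router_w                    # score every expert
        chosen = np.argsort(logits)[-top_k:]     # keep only the top-k
        gates = np.exp(logits[chosen])
        gates /= gates.sum()                     # softmax over the chosen experts
        # The other n_experts - top_k experts are never evaluated.
        return sum(g * (x @ expert_w[i]) for g, i in zip(gates, chosen))

    print(moe_layer(rng.standard_normal(d_model)).shape)  # (64,)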
Originally I think they used frontier models for their chat feature, but I believe they've recently replaced that with something custom for their agent feature.
tom_m
You can have Copilot use Claude now but I'm not sure it's the default. I found the latest Gemini Pro 2.5 to be much better than Claude Sonnet anyway...but yes, these are all more or less the same at this point.
One thing people don't realize is there's also randomness in the answers. Also, some of the editors allow you to tweak temperature and others do not. This is why I found Roo Code extension to be better.
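For instance, temperature is just a request parameter on most OpenAI-compatible chat APIs. A rough sketch (the local endpoint and model tag here are placeholders for whatever you actually run):

    # Same prompt at two temperatures against an OpenAI-compatible endpoint.
    # base_url and model are placeholders; temperature=0 is near-deterministic,
    # higher values add randomness to the sampling.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

    for temp in (0.0, 1.0):
        reply = client.chat.completions.create(
            model="qwen3:14b",  # whatever model your server exposes
            messages=[{"role": "user", "content": "Name one sorting algorithm."}],
            temperature=temp,
        )
        print(temp, reply.choices[0].message.content)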
tough
According to this article, Cursor's tab completion is SOTA, and was built by a genius who also created some other important foundational AI libraries and worked briefly at OpenAI
https://www.coplay.dev/blog/a-brief-history-of-cursor-s-tab-...
rattlesnakedave
Low quality clickbait article. “I like my editor because I’m used to it” ok man, do you want an award? The claims about the limitations of vscode and cursor’s code navigation abilities aren’t even accurate. The author just doesn’t know how to use them. There’s a reason it’s popular, and it’s not “everyone is dumber and less talented than me.”
mystraline
People who hype IDEs usually lack technical skills.
People who hype autocomplete usually lack technical skills.
People who hype memory-safe languages usually lack technical skills.
People who hype compilers usually lack technical skills.
There was some wordplay there, which is just attacking the current technology that makes technology more attainable by more people. However, with LLMs there is one major worry: they encourage de-skilling and getting addicted to having an LLM think for us. Now, if you can run your own LLMs, you're resilient to that. But the really bad side is when the LLM companies put price tags on using the 'think-for-you' machine. And that represents a great de-skilling and anti-critical-thought.
I'm not saying "don't use LLMs". I am saying to run them yourselves, and learn how to work with them as an interactive encyclopedia, and also not to let them have core control over intellectual thought.
yladiz
> However, with LLMs there is one major worry: they encourage de-skilling and getting addicted to having an LLM think for us. Now, if you can run your own LLMs, you're resilient to that.
Not sure what you mean. Can't a local LLM get you addicted in just the same way as a cloud one?
mystraline
Using a local LLM lays all the guarded secrets bare.
For example, you can run multiple LLMs and compare each one's outputs.
You can issue system messages (commands) to do specific actions, like ignoring arbitrary moralities, or processing first- and second-derivative results.
By running and commanding an LLM locally, you become an actual tool user, rather than a service user at the whim of whatever the company (ChatGPT, etc.) wishes.
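As a concrete sketch, assuming a local server (e.g. Ollama) exposing an OpenAI-compatible endpoint; the URL and model tag are whatever your own setup uses:

    # Driving a locally hosted model with your own system message.
    # No provider sits between you and the system prompt.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="qwen3:14b",  # placeholder: any model you've pulled locally
        messages=[
            {"role": "system", "content": "Answer tersely and flag anything you are unsure about."},
            {"role": "user", "content": "Compare the tradeoffs of local vs hosted LLMs."},
        ],
    )
    print(resp.choices[0].message.content)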
fuzzzerd
Where do you recommend someone start the journey from being an LLM service user to an LLM tool user?
What kind of hardware is needed to get something useful running locally?
cjoelrun
Useful LLM architectures for working on complex codebases seem to be out the reach of consumer hardware.
v3ss0n
Just try Qwen3, even at 14B. You can turn off the internet.
win32lover
[dead]
mattnewton
Cursor does actually train their own models, most importantly models that apply edits to files and models that assemble context for the LLM agents. Has the author actually used the tools they are writing about?
Cursor/Windsurf/Roo/Kilo/Zed are about smoothing over the rough edges in actually getting work done with agentic models. And somewhat surprisingly, the details matter a lot in unlocking value.
plandis
Any claims about functionality with coding assistants should be looked upon skeptically without including the prompts, IMO.
It could very well be that Cursor isn’t very helpful. It could also be the case that the person prompting is not providing enough context for the problem at hand.
It’s impossible to tell without the full chat history.
tom_m
Very true. You can easily tell with the YouTubers. When you listen to them, it's clear they don't know what they're talking about, because they pronounce half the terms incorrectly or reference things incorrectly.
satanfirst
> pronounce incorrectly
Knowing the correct pronunciation means you watch videos or have a social group that discusses it; it doesn't mean you are necessarily competent like someone who reads primary sources.
tom_m
Or you worked in an office. Or remote on Zoom. Or you read (projects and articles often explain how to pronounce).
So you know... experience in the industry.
But I'm not even talking about the weird names for things either.
synt4xoverload
Agreed. Especially when so much jargon in software these days is made up marketing speak that has little relevance to managing electromagnetic geometry in the machine.
It’s often a very intellectually dishonest industry.
phreno
I look forward to your fascinating work on phrenology next.
It’s always amazing such smurt people like you always seem to forget your own statistical irrelevance.
“Oh no, the sound forms don't come out just right.” What, a giant foot is going to squish us? Minus the anxiety of being smitten, you're projecting some bizarre obligation onto your lived experience.
Why don't you say things in the old Latin language? Yeah, that's right, we move on from written and spoken traditions; you just need to properly apply physical statistics to the real engineering. The stupid babble in a YT video is far removed from actually building a bridge, or another context where the physics matters more than the Anglicized circumlocution.
tom_m
It's not a big deal, it's just ONE piece of evidence that the YouTuber has no experience and is just being a shill for something. You have to be aware of being marketed to. This is how it works these days. No more TV ads.
skeledrew
> either Claude or GPT
Just a bit miffed at the usage here, given Claude is a GPT model, but OpenAI names their models in such a way that it's strange to say "ChatGPT model" to specify that category.
hamburglar
I get it. I’m supposed to not take advantage of a very powerful (yet often flawed) tool because I am insecure about my technical skills cred. Gotcha.
n_ary
To me, it sounds like: don't use the new prototype chainsaw; while it'll take down a tree real quick and many hobbyists use it for their first time chopping a tree, over time it heats up too much and the blade may break open and sever your hand or face, whichever is closer.
hamburglar
Well I’ve done quite a bit of AI coding and the worst downside is that occasionally you waste as much time as you save. Equating it to chopping your hand off is rather dramatic.
rexarex
Cursor with Gemini 2.5 pro has been really great.
tmpz22
Big fan of Gemini 2.5 + Cursor but its far from a panacea.
After using Cursor heavily the past few weeks I agree with the author's points. The ability to work outside of Cursor/AI is paramount within small software teams, because you will periodically run into things it can't do - or worse, it will lead you in a direction that wastes a lot of developer time.
Cursor, and the underlying models, will get better at this over time, but the executive vision of this is absolutely broken, and at this point I can only laugh at the problems this generation of startups will inevitably go through when they realize their teams no longer have the expertise to solve things in more traditional manners.
nicce
Cursor might not have core extensions soon. The current versions are full of CVEs, and Microsoft is trying to block them from updating.
tom_m
Roo Code with Gemini 2.5 Pro has been really great and FREE. I'm super curious to see how this landscape changes. I'm still surprised Windsurf managed to be acquired for $3B too. Give it a few years and there won't be a point in paying for these editors. I don't think there is currently.
mellosouls
Ironically, considering the title, this seems like a junior dev/young man take.
A little too sure of itself, incautious and overly generalising.
Please read the article first. This is not “AI bad” despite what the title may imply.
I do think we are in a transitional period now. Eventually all editors will have the same agentic capability. That's why editor-agnostic tools like Claude Code and Aider are much more exciting to me.