
Ask HN: Cursor or Windsurf?

102 comments · May 12, 2025

Things are changing so fast with these VSCode forks that I'm barely able to keep up. Which one are you guys using currently? How does the autocomplete, etc., compare between the two?

welder

Neither? I'm surprised nobody has said it yet. I turned off AI autocomplete, and sometimes use the chat to debug or generate simple code but only when I prompt it to. Continuous autocomplete is just annoying and slows me down.

kristopolous

Agreed. You may like the arms-length stuff over here: https://github.com/day50-dev/llmehelp . The shell-hook.zsh and screen-query have been the most life-changing things for me.

It's because I always forget the syntax for things. Let's say ssh port forwarding. Now I can just describe it.

$ ssh (take my local port 80 and forward it to 8080 on the machine betsy) user@betsy

Then I just do Ctrl+X X and it replaces my prompt with the suggestion. It's been a total game changer. I use it for ffmpeg, jq, and a bunch of other things that have forgettable syntax.
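
For that ssh prompt, the suggestion it drops in is presumably the standard local-forward form, something like:

$ ssh -L 80:localhost:8080 user@betsy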

For the more involved stuff I just use screen-query and Ctrl+A H. Funky git states, strange terminal errors, weird config scripts: it allows a joint investigation of all of them. I've usually got multiple sessions open at any given time.

It intentionally doesn't run things or write code. These are tightly integrated reference and investigation tools.

raverbashing

Yeah

AI autocomplete is a feature, not a product (to paraphrase SJ)

I can understand Windsurf getting the valuation as they had their own Codeium model

$B for a VSCode fork? Lol

victorbjorklund

I'm with Cursor for the simple reason that it is, in practice, unlimited. Honestly, the slow requests after 500 per month are fast enough. Will I stay with Cursor? No, I'll switch the second something better comes along.

rvnx

Cursor is acceptable because, for the price, it's unbeatable. Free, unlimited requests are great. But by itself, Cursor is not anything special. It's only interesting because they pay for Claude or Gemini out of their own pockets.

Ideally, things like RooCode + Claude are much better, but you need the infinite money glitch.

herbst

On weekends the slow requests are regularly faster than the paid requests.

danpalmer

Zed. They've upped their game on the AI integration and so far it's the best one I've seen (outside of work). Cursor and VSCode+Copilot always felt slow and janky; Zed is much less janky, feels like pretty mature software, and I can just plug in my Gemini API key and use that for free/cheap instead of paying for the editor's own integration.

vimota

I gave Zed an in-depth trial this week and wrote about it here: https://x.com/vimota/status/1921270079054049476

Overall Zed is super nice and the opposite of janky, but I still found a few defaults were off and Python support was still missing in a few key ways for my daily workflow.

submeta

It consumes lots of resources on an M4 MacBook. I'd love to test it though, if it didn't freeze my MacBook.

_bin_

I'll second the Zed recommendation, sent from my M4 MacBook. I don't know exactly why it's doing this for you, but mine idles at ~500 MB RAM (about as little as you can get with a reasonably sized Rust codebase and a language server) and 0% CPU.

wellthisisgreat

Does it have Cursor’s “tab” feature?

eadz

It would be great if there were an easy way to run their open model (https://huggingface.co/zed-industries/zeta) locally (for latency reasons).

I don't think Zeta is quite up to Windsurf's completion quality/speed.

I get that this would go against their business model, but maybe people would pay for this - it could in theory be the fastest completion since it would run locally.
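
If someone wanted to experiment, a rough sketch is to serve the open weights with something like vLLM (assuming your hardware can run it; whether Zed's completions can point at a local endpoint is a separate question):

$ pip install vllm
$ vllm serve zed-industries/zeta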

khwhahn

I wish your own coding were just augmented, like somebody looking over your shoulder. The problem with current AI coding is that you don't know your code base anymore. Basically, I want something that helps you figure stuff out faster, update documentation, etc.

fastball

For the agentic stuff I think every solution can be hit or miss. I've tried Claude Code, Aider, Cline, Cursor, Zed, Roo, Windsurf, etc. To me it's more about using the right models for the job, which is also constantly in flux because the big players keep updating their models, and sometimes that's good and sometimes it's bad.

But I daily-drive Cursor because the main LLM feature I use is tab-complete, and here Cursor blows the competition out of the water. It understands what I want to do next about 95% of the time when I'm in the middle of something, including comprehensive multi-line/multi-file changes. GitHub Copilot, Zed, Windsurf, and Cody aren't at the same level imo.

solumunus

If we're talking purely autocomplete, I think Supermaven does it best.

fastball

Cursor bought Supermaven last year.

pembrook

For a time Windsurf was way ahead of Cursor in full agentic coding, but now I hear Cursor has caught up. I have yet to switch back to try Cursor again, but I'm starting to get frustrated with Windsurf being restricted to gathering context only 100-200 lines at a time.

So many of the bugs and poor results it introduces are simply due to improper context. When you forcibly give it the necessary context, you can clearly see it's not a model problem but a problem with the approach of gathering disparate 100-line snippets at a time.

Also, it struggles with files over 800-ish lines, which is extremely annoying.

We need some smart DeepSeek-like innovation in context gathering, since hardware and the cost of tokens are the real bottleneck here.

unsupp0rted

Asking HN this is like asking which smartphone to use. You'll get suggestions for obscure Linux-based modular phones that weigh 6 kilos and lack a clock app or wifi. But they're better because they're open source or fully configurable or whatever. Or a smartphone that a fellow HNer created in his basement and plans to sell soon.

Cursor and Windsurf are both good, but do what most people do and use Cursor for a month to start with.

joelthelion

Aider! Use the editor of your choice and keep your coding assistant separate. Plus, it's open source and will stay that way, so there's no risk of it suddenly becoming expensive or disappearing.

Oreb

Approximately how much does it cost in practice to use Aider? My understanding is that Aider itself is free, but you pay per token when using an API key for your LLM of choice. I can look up the prices of the various LLMs myself, but that doesn't help much, since I have no intuition whatsoever about how many tokens I am likely to consume. The attraction of something like Zed or Cursor for me is that I just have a fixed monthly cost to worry about. I'd love to try Aider, as I suspect it suits my style of work better, but without any idea how much it would cost me, I'm afraid to try.

anotheryou

Depends entirely on the API.

With DeepSeek: ~nothing.
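
Ballpark arithmetic, treating the prices as hypothetical (check DeepSeek's current rate card): a heavy day of, say, 2M input tokens and 200k output tokens at ~$0.27/M in and ~$1.10/M out comes to about $0.54 + $0.22, i.e. well under a dollar.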

tuyguntn

Is DeepSeek fast enough for you? For me the API is very slow, sometimes unusably so.

mbanerjeepalmer

I used to be religiously pro-Aider. But after a while, the little frictions of flicking back and forth between the terminal and VS Code, and adding and dropping files from the context myself, have worn down my appetite to use it. The `--watch` mode is a neat solution, but it harms performance: the LLM gets distracted by deleting its own comment.
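
For anyone who hasn't tried it, watch mode picks up specially marked comments in your source files, roughly like this (flag and trigger marker from memory, so check the Aider docs):

$ aider --watch-files
  # then, in any file in the repo, a comment ending in the marker requests an edit:
  # make this endpoint return JSON instead of HTML AI!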

Roo is less solid but better-integrated.

Hopefully I'll switch back soon.

fragmede

I suspect that if you're a vim user those friction points are a bit different. For me, Aider's git auto-commit and /undo command are what sell it at this current juncture of technology. OpenHands looks promising, though rather complex.

movq

The (relative) simplicity is what sells Aider for me (it also helps that I use Neovim in tmux).

It was easy to figure out exactly what it's sending to the LLM, and I like that it does one thing at a time. I want to babysit my LLMs, and those "agentic" tools that go off and do dozens of things in a loop make me feel out of control.

aitchnyu

Yup, choose your model and pay as you go, like a commodity, the way you buy rice or water. The others played games with me to minimize context and push cheaper models (three modes, daily credits, gating the most expensive models, and so on).

Also, the --watch mode is the most productive interface for using your editor: no need for extra textboxes with robot faces.

fragmede

FWIW, Gemini-*, which is available in Aider, isn't pay-as-you-go (PAYG) but postpaid, which means you get a bill at the end of the month rather than the OpenAI-style model of charging up credits before you can use the service.

camkego

I guess this is a good reason to consider things like OpenRouter, which turns it into a prepaid service.
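
Something along these lines should work with Aider (the exact model slug may differ; check OpenRouter's model list):

$ export OPENROUTER_API_KEY=...
$ aider --model openrouter/deepseek/deepseek-chat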

SafeDusk

I am betting on myself.

I built a minimal agentic framework (with editing capability) that works for a lot of my tasks with just seven tools: read, write, diff, browse, command, ask and think.

One thing I'm proud of is that you can make it more proactive in making changes and taking the next action just by disabling the `ask` tool.

I won't say it is better than any of the VSCode forks, but it works for 70% of my tasks in an understandable way. For the remaining stuff, I can always use Cursor/Windsurf as a complement.

It is open; have a look at https://github.com/aperoc/toolkami if it interests you.

suninsight

https://nonbios.ai - [Disclosure: I am working on this.]

- We are in public beta and free for now.

- Fully agentic, controllable, and transparent. The agent does all the work but keeps you in the loop; you can take back control anytime and guide it.

- Not an IDE, so it doesn't compete with the VSCode forks. The interface is just a chatbox.

- More like Replit, but full-stack focused. You can build backend services.

- Videos are up at youtube.com/@nonbios

octocop

Use vim

dkaleta

Since this topic is closely related to my new project, I’d love to hear your opinion on it.

I'm thinking of building an AI IDE that helps engineers write production-quality code quickly when working with AI. The core idea is to introduce a new kind of collaboration workflow.

You start with the same kind of prompt, like "I want to build this feature...", but instead of the model making changes right away, it proposes an architecture for what it plans to do, shown from a bird's-eye view on a 2D canvas.

You collaborate with the AI on this architecture to ensure everything is built the way you want. You’re setting up data flows, structure, and validation checks. Once you’re satisfied with the design, you hit play, and the model writes the code.

Website (in progress): https://skylinevision.ai

YC video showing the prototype I just finished yesterday: https://www.youtube.com/watch?v=DXlHNJPQRtk

Karpathy’s post that talks about this: https://x.com/karpathy/status/1917920257257459899

Thoughts? Do you think this workflow has a chance of being adopted?

michuk

Looks like an antidote to "vibe coding"; I like it. When are you planning to release something that can be tried? Is this open source?

dkaleta

I believe we can have a beta release in September, and yes, we plan to open-source the editor.

PS. I’m stealing the ‘antidote to “vibe coding”’ phrase :)

rkuodys

I quite liked the video. I hope you get to launch the product so I can try it out some day.

The only thing I kept thinking about was this: if a correction is needed, you have to make it fully by hand, finding and mapping everything yourself. If the first try was way off, I would like to enter the correction I want from a "midpoint", so instead of fixing 50%, I would be left with maybe 10 or 20. I don't know if you get what I mean.

dkaleta

Yes, the idea is to ‘speak/write’ to the local model to fix those little things so you don’t have to do them by hand. I actually already have a fine-tuned Qwen model running on Apple’s MLX to handle some of that, but given the hard YC deadline, it didn’t make it into the demo.
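
For a sense of scale, running a small fine-tuned Qwen locally through mlx-lm looks roughly like this (the model name is a placeholder and the flags are from memory, so check the mlx-lm docs):

$ pip install mlx-lm
$ mlx_lm.generate --model mlx-community/Qwen2.5-1.5B-Instruct-4bit --prompt "add a TopicsController layer between these two modules"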

Eventually, you’d say, ‘add an additional layer, TopicsController, between those two files,’ and the local model would do it quickly without a problem, since it doesn’t involve complicated code generation. You’d only use powerful remote models at the end.

ciaranmca

Just watched the demo video and thought it was a very interesting approach to development; I will definitely be following this project. Good luck.