An update to our pricing
55 comments
April 21, 2025
rudedogg
graeme
Oh, that's not great. Cursor has a privacy mode where you can avoid this.
>If you enable "Privacy Mode" in Cursor's settings: zero data retention will be enabled, and none of your code will ever be stored or trained on by us or any third-party.
simonw
Yeah that's a bad look. If I have an API key visible in my code does that get packaged up as a "prompt" automatically? Could it be spat out to some other user of a model in the future?
(I assume that there's a reason that wouldn't happen, but it would be nice to know what that reason is.)
Workaccount2
Gemini doesn't use paid API prompts for training.[1]
I believe it's just for free usage and the web app.
rudedogg
Yeah, I was referring to their webapp/Chat, aka Gemini Advanced. It uses your prompts for training unless you turn off chat history completely, or are in their “Workspace” enterprise version.
https://support.google.com/gemini/answer/13594961?hl=en
> What data is collected and how it’s used
> Google collects your chats (including recordings of your Gemini Live interactions), what you share with Gemini Apps (like files, images, and screens), related product usage information, your feedback, and info about your location. Info about your location includes the general area from your device, IP address, or Home or Work addresses in your Google Account. Learn more about location data at g.co/privacypolicy/location.
Google uses this data, consistent with our Privacy Policy, to provide, improve, and develop Google products and services and machine-learning technologies, including Google’s enterprise products such as Google Cloud.
Gemini Apps Activity is on by default if you are 18 or older. Users under 18 can choose to turn it on. If your Gemini Apps Activity setting is on, Google stores your Gemini Apps activity with your Google Account for up to 18 months. You can change this to 3 or 36 months in your Gemini Apps Activity setting.
Alifatisk
That's what I thought
kmeisthax
Without exception, every AI company is a play for your data. AI requires a continuing supply of new data to train on, it does not "get better" merely by using the existing trainsets with more compute.
Furthermore, synthetic data is a flawed concept. At a minimum, it tends to propagate and amplify biases in the model generating the data. If you ignore that, there's also the fundamental issue that data doesn't exist purely to run more gradient descent, but to provide new information that isn't already compressed into the existing model. Providing additional copies of the same information cannot help.
kadushka
> it does not "get better" merely by using the existing trainsets with more compute.
Pretty sure it does - that's the whole point of using more test-time compute. Also, a lot of research effort goes into improving data efficiency.
amelius
Windsurf: where the users provide the wind and they do all the surfing.
Alifatisk
No way Gemini Advanced user content is also being used for training?
blibble
it's the reason they bought it...
parliament32
> Same with Gemini Advanced (paid) training on your prompts
I'm not sure if this is true.
> 17. Training Restriction. Google will not use Customer Data to train or fine-tune any AI/ML models without Customer's prior permission or instruction.
https://cloud.google.com/terms/service-terms
> This Generative AI for Google Workspace Privacy Hub covers... the Gemini app on web (i.e. gemini.google.com) and mobile (Android and iOS).
> Your content is not used for any other customers. Your content is not human reviewed or used for Generative AI model training outside your domain without permission.
> The prompts that a user enters when interacting with features available in Gemini are not used beyond the context of the user trust boundary. Prompt content is not used for training generative AI models outside of your domain without your permission.
> Does Google use my data (including prompts) to train generative AI models? No. User prompts are considered customer data under the Cloud Data Processing Addendum.
simonw
Right, it's the free Gemini that has this: https://ai.google.dev/gemini-api/terms#unpaid-services
> When you use Unpaid Services, including, for example, Google AI Studio and the unpaid quota on Gemini API, Google uses the content you submit to the Services and any generated responses to provide, improve, and develop Google products and services and machine learning technologies, including Google's enterprise features, products, and services, consistent with our Privacy Policy.
rudedogg
That’s for Google Cloud APIs.
See my post here about Gemini Advanced (the web chat app) https://news.ycombinator.com/item?id=43756269
amiantos
Cursor and Windsurf pricing really turned me off. I prefer Claude Code's direct API costs, because it feels more quantifiable to me cost wise. I can load up Claude Code, implement a feature, and close it, and I get a solid dollar value of how much that cost me. It makes it easier for me to mentally write off the low cost of wasteful requests when the AI gets something wrong or starts to spin its wheels.
With Cursor/Windsurf, you make requests, your allowed credit quantity ticks down (which creates anxiety about running out), and you're trying to do some mental math to figure out what those requests actually cost you. It feels like a method to obfuscate the real cost to the user and also create an incentive for the user to not use the product very much because of the rapidly approaching limits during a focus/flow coding session. I spent about an hour using Cursor Pro and had used up over 30% of my monthly credits on something relatively small, which made me realize their $20/mo plan likely was not going to meet my needs, and how much it would really cost me seemed like an unanswerable question.
I just don't like it as a customer and it makes me very suspicious of the business model as a result. I spent about $50 on a week with Claude Code, and could easily spend more I bet. The idea that Cursor and Windsurf are suggesting a $20/mo plan could be a good fit for someone like me, in the face of that $50 in one week figure, further illustrates that there is something that doesn't quite match up with these 'credit' based billing systems.
adamgordonbell
I use windsurf, have the largest plan and still need to top up quite a bit.
So for me it has a price per task, sort of, because you top it up by paying another 10 dollars at a time as you run out.
The plans aren't the right size for professional work, but maybe they wanted to keep the price points low?
janpaul123
Yeah, my colleague just wrote about this exact problem of incentive misalignment with Cursor and Windsurf https://blog.kilocode.ai/p/why-cursors-flat-fee-pricing-will
The economist in me says "just show the prices", though the psychologist in me says "that's hella stressful". ;)
ricksunny
Why does the psychologist in you say "that's hella stressful"? Stressful for whom? What is the source of their stress?
newlisp
I haven't used either but reading Cursor's website, they let you add your own Claude API key, do they still fiddle with your requests using your own key?
amiantos
When you go to add your own API key into Cursor, you get a warning message that several Cursor features cannot be used if you plug in your own API key. I would totally have done that if not for that message.
lherron
I chalk it up to VC subsidized pricing. I use my monthly Cursor quota, then switch to Claude Code when I run out.
bogtog
> I spent about an hour using Cursor Pro and had used up over 30% of my monthly credits
Sorry, but how is this possible? They give 500 credits in a month for the "premium" queries. I don't even think I'd be able to ask more than one question per minute even with tiny requests. I haven't tried the Agent mode. Does that burn through queries?
adamgordonbell
With an agent, one request could be 20 or more iterations
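Under the old flow-action billing described elsewhere in the thread, each of those iterations could be metered separately. A rough simulation of why one prompt burns many credits (the loop structure and the 5% stop chance are hypothetical illustrations, not Windsurf's actual mechanics):

```python
import random

def run_agent(max_iterations: int = 25) -> int:
    """Simulate one agent request; each tool call costs one flow action."""
    flow_actions = 0
    for _ in range(max_iterations):
        flow_actions += 1           # reading a file, editing, running tests...
        if random.random() < 0.05:  # stand-in for "task finished"
            break
    return flow_actions

# A single user prompt can easily consume 20+ flow actions this way.
print(run_agent())
```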
amiantos
I had to do a little digging to respond to this properly.
I was on the "Pro Trial" where you get 150 premium requests, and I had very quickly used 34 of them, which admittedly is 22% and not 30%. Their pricing page says that the Free plan includes a "Pro two-week trial", but they do not explain that on the Pro trial you only get 150 premium requests, while on the real Pro plan you get 500. So you're correct to be skeptical: I did not use 30% of 500 requests on the Pro plan; I used 22% of the 150 requests you get on the trial.
And yes, I think the agent mode can burn through credits pretty quickly.
asdsadasdasd123
How do you run out of premium requests, I've peeked at the stats across my company and never seen this happen.
SquareWheel
I tried Windsurf for the first time last week, and I had pretty mixed results. On the positive side, sometimes the tool would figure out exactly what I was doing, and it truly helped me. This was especially the case when performing a repetitive action, like renaming something, or making the same change across multiple related files.
Most of the time though, it just got in the way. I'd press tab to indent a line, and it'd instead jump half-way down the file to delete some random code. On more than one occasion I'd be typing happily, and I'd see it had gone off and completely mangled some unrelated section without my noticing. I felt like I needed to be extremely attentive when reviewing commits to make sure nothing was astray.
Most of its suggestions seemed hyper-fixated on changing my indent levels, adding braces where they weren't supposed to go, or deleting random comments. I also found it broke common shortcuts, like tab (as above), and ctrl+delete.
The editor experience also felt visually very noisy. It was constantly popping up overlays, highlighting things, and generally distracting me while I was trying to write code. I really wished for a "please shut up" button.
The chat feature also seemed iffy. It was actually able to identify one bug for me, though many of the times I'd ask it to investigate something, it'd get stuck scanning through files endlessly until it just terminated the task with no output. I was using the unlimited GPT-4.1 model, so maybe I needed to switch to a model with more context length? I would have expected some kind of error, at least.
So I don't know. Is anyone else having this experience with Windsurf? Am I just "holding it wrong"? I see people being pretty impressed with this and Cursor, but it hasn't clicked for me yet. How do you get it to behave right?
mediaman
I find Cursor annoying too, with dumb suggestions getting in the way of my attempts to tab-indent. They should make shift-tab the default way to accept its suggestion, instead of tab, or at least let shift-tab indent without accepting anything if they really want to keep tab as default autocomplete.
jawns
I find it's very model-dependent. You would think that the more powerful models would work the best, but that hasn't been the case for me. Claude Sonnet tends to do the best job of understanding my intent and not screwing things up.
I've also found that test-driven development is even more critical for these tools than for human devs. Fortunately, it's also far less of a chore.
jawns
As an avid Windsurf user, I support this simplification of the pricing.
In nearly all cases, I don't care how many individual steps the model needs to take to accomplish the task. I just want it to do what I've asked it to do.
It is curious, however, that this move is coinciding with rumors of OpenAI attempting to acquire Windsurf. If an acquisition were imminent, it would seem strange to mess with the pricing structure soon beforehand.
Jcampuzano2
Curious - what makes you pick Windsurf over other editors? I currently use Cursor but have seen more news about Windsurf, especially after the recent news with respect to OpenAI. Do you find it better, worse, etc.? And are there things it does better for you than other editors?
leobuskin
I don't like the "vibe" term nowadays, but when you mix two pretty abstract domains (AI and development), it's all about vibes and aura. Some model/agent works perfectly for one of us (let's keep in mind, we have a bunch of factors, from language to the complexity of the implementation), and does everything wrong for others.
You just can't measure it properly, outside of experiments and building your own assessment within your context. All the recommendations here just don't work. "Try all of them, stick with one for a while, don't forget to retry the others on a regular basis" - that's my motto today.
Cursor (as an agent/orchestrator) didn't work for me at all (Python, low-level, no frameworks, not webdev). I fell in love with Windsurf ($60 tier initially). Switched entirely to JetBrains AI a few days ago (vscode is not friendly for me, PyCharm rocks), so happy about the price drop.
jelling
Clarifying the pricing would make it easier to value the revenue from current and future users. And naturally they rounded up to leave themselves literal margin for error.
braza
Honest question: what does this new "credits" pricing, which every product has today, actually mean?
Sounds quite opaque, given that you will need to track utilisation all the time for something like prompting.
Does anyone have insight into why the old flat pricing or usage-based pricing isn't offered in these new AI products, where we instead get this abstract concept of a credit?
ariejan
With Windsurf I'm able to pick any of the premium language models. I.e. Claude 3.7 Sonnet costs 1 credit / prompt, whereas the thinking model costs 1.25 credits, and o3 costs a whopping 7.5 credits.
It's simply passing on the respective models' costs, I think. I can imagine it's hard to come up with an affordable / interesting flat rate _and_ support all those differently priced models.
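To make that concrete, here is a quick sketch of how the quoted per-prompt rates (1, 1.25, and 7.5 credits) translate into prompts per plan; the 500-credit plan size matches the figure mentioned elsewhere in the thread, and the rates are just the ones from this comment:

```python
# Per-prompt credit rates as quoted above (subject to change by Windsurf).
CREDIT_RATES = {
    "claude-3.7-sonnet": 1.0,
    "claude-3.7-sonnet-thinking": 1.25,
    "o3": 7.5,
}

def prompts_available(plan_credits: float, model: str) -> int:
    """How many prompts a credit allowance covers for a given model."""
    return int(plan_credits // CREDIT_RATES[model])

for model in CREDIT_RATES:
    print(f"{model}: {prompts_available(500, model)} prompts")
```

So the same 500 credits buy 500 Sonnet prompts but only 66 o3 prompts.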
croes
dang
Thanks! Macroexpanded:
Why is OpenAI buying Windsurf? - https://news.ycombinator.com/item?id=43743993 - April 2025 (218 comments)
OpenAI looked at buying Cursor creator before turning to Windsurf - https://news.ycombinator.com/item?id=43716856 - April 2025 (115 comments)
OpenAI in Talks to Buy Windsurf for About $3B - https://news.ycombinator.com/item?id=43708725 - April 2025 (44 comments)
elashri
I was a user on one of their pro plans with some discount. I remember getting confused between the two token limits (flow actions and the other one). I am puzzled now trying to figure out whether the change amounts to an effective decrease or increase in pricing. They have eliminated flow actions now. To be honest, it might just be that I couldn't understand the change.
The only thing I notice is that for 250 additional credits you pay $10, and at that point it is cheaper and better to get another $15 subscription, which gives you another 500 credits, instead of paying $20 for the same amount. That is, if you think you will need that many.
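The per-credit arithmetic behind that comparison, using only the figures from the comment above ($10 for a 250-credit top-up vs. a second $15 subscription that includes 500 credits):

```python
topup_per_credit = 10 / 250         # $0.04 per credit
subscription_per_credit = 15 / 500  # $0.03 per credit

print(f"top-up:       ${topup_per_credit:.3f} per credit")
print(f"subscription: ${subscription_per_credit:.3f} per credit")
# Two $10 top-ups ($20) buy the same 500 credits a $15 subscription includes.
```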
rohanphadte
The previous two token limits were:
- user prompt credits (500)
- flow action credits (1500)
For power users, flow actions would deplete much more quickly (every time the LLM analyzed a file, edited, etc.), so Windsurf removed the flow action limit. Now you're only charged for 500 messages to the AI, which is strictly better for the user.
sunaookami
How does Windsurf compare to Cursor? Does anybody have enough experience with both? I'm only using Cursor right now but it seems Windsurf is now a bit cheaper?
Abde-Notte
i’ve used both a bit. cursor has more features and feels more polished, but windsurf is faster and cheaper. windsurf also has a cleaner UI. if you don’t need all the extra stuff in cursor, windsurf’s a pretty solid option.
Jcampuzano2
With all the new news about Windsurf and it being thrown into the spotlight, the allure of the lower price than Cursor is definitely there. But does it actually work on par or better?
Anybody use Windsurf as their daily driver and have experience with other editors who can chime in, for those of us who are considering it as an alternative?
dockercompost
This is a welcome change. It feels bad when your 250 line doc eats up 3 credits just being read and analyzed.
cirrus3
Anything that requires me to use a different IDE is a non-starter for me.
I can imagine it is a lot easier to develop these things as a custom version of VSCode instead of plugins/extensions for a handful of the popular existing IDEs, but is that really a good long term plan? Is the future going to be littered with a bunch of one-off custom IDEs? Does anyone want that future?
ramesh31
>Anything the requires me to use a different IDE is a non-starter for me.
Windsurf is, ultimately, just an IDE extension. They shipped a forked VSCode with their branding for... some reason. But the extension is available in practically every IDE/editor.
newlisp
The reason for forking is the restrictions of the vscode API, so no, the extension and the fork are not the same.
> To train, develop, and improve the artificial intelligence, machine learning, and models that we use to support our Services. We may use your Log and Usage Information and Prompts and Outputs Information for this purpose.
https://windsurf.com/privacy-policy
Am I the only one bothered by this? Same with Gemini Advanced (paid) training on your prompts. It feels like I’m paying with money, but also handing over my entire codebase to improve your products. Can’t you do synthetic training data generation at this point, along with the massive amount of Q/A online to not require this?