
Cursor Composer: Building a fast frontier model with RL

solarkraft

People on here love to be contrarian about Cursor, but I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline) and found Cursor’s overall experience to just be unmatched. A big part of that is its speed, another its reliability.

It’s the only coding agent I’m actually really motivated to use out of the box because it really does make me feel more productive while the others keep messing up the project, from way too large changes I didn’t ask for all the way to constant syntax and request errors.

It’s the only coding agent I’ve used that feels serious about being a product rather than a prototype. Their effort in improving their stack is totally paying off.

pqdbr

I dropped Cursor for precisely the reason you mention: reliability.

Countless times my requests in the AI chat would just hang there for 30+ seconds before I could retry them.

When I decided to give Claude Code a try (I thought I didn't need it because I used Claude in Cursor), I couldn't believe how much faster it was, and literally 100% reliable.

EDIT: given today's release, I decided to give it a go. The Composer1 model _is_ fast, but on the second new agent I started I got this:

> Connection failed. If the problem persists, please check your internet connection or VPN

infecto

Sounds like you have a network problem. Did you try the network diagnostic in settings? Cursor defaults to HTTP/2, which can throw a wrench into some corporate networks.

I would be willing to bet money your issue is on your side. I am a daily user since the beginning and cannot recall when I have had issues like you describe unless it was related to my corp network.

cleak

This is the exact reason I left Cursor for Claude Code. Night and day difference in reliability. The Windows experience might be especially bad: Cursor would constantly hang or otherwise fail when trying to run commands. I also had to babysit it and tell it to continue on mid-sized tasks.

jonasnelle

They've improved performance dramatically in the last few weeks, might have fixed your issues.

chasebank

I use Cursor daily; my business partner uses CC. Without a doubt, CC is better. I'm just not willing to let go of the flow I spent the last year fine-tuning. I'll probably make the leap after we finish the latest release.

davidgomes

A lot of progress is being made on the Cursor side; I encourage you to try it again.

(Cursor dev)

infecto

I too have tried them all and have settled on Cursor as the best. That said, I see the current space split between folks like me, who generally know what they want built and appreciate a tool that helps them get to the goal quicker, and, on the other side of the spectrum, folks who want the tool to orchestrate most of the engineering. I have no opinion on which is better, but I sit in the first camp, and in that camp Cursor is by far the best tool.

psygn89

Yep, it just works seamlessly. Sure, it hangs sometimes, but their UI allows you to retry or undo changes to an earlier point in the conversation easily. The autocompletion is nice as well and pretty satisfying to tab through the small and menial things when refactoring.

rtfeldman

> I’ve tried all the popular alternatives (Copilot, Claude Code, Codex, Gemini CLI, Cline)

Can't help but notice you haven't tried Zed!

ramon156

You tried Claude and still prefer cursor?

solarkraft

Absolutely. CC can be tuned to not do too much crap on its own, but even with the new extension its IDE integration and multi thread management are still significantly worse, as is its status reporting, which I find to be very important.

Also, somehow magically, I’ve found Cursor’s Auto mode to be significantly faster than the specific models I’ve tried, Claude being among them.

infecto

Auto is pretty amazing and I think most folks that have issues or complain about cost are simply not using Auto.

infecto

Absolutely. I actually don’t understand the preference folks have for Claude code. I don’t find it that powerful. That said, I think some of it comes down to preference and work context.

neuronexmachina

For anyone else who was wondering, it looks like the within-Cursor model pricing for Cursor Composer is identical to gemini-2.5-pro, gpt-5, and gpt-5-codex: https://cursor.com/docs/models#model-pricing

($1.25 input, $1.25 cache write, $0.13 cache read, and $10 output per million tokens)
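Those per-million-token rates make per-request costs easy to estimate. A hypothetical back-of-envelope helper (the function and rate names are illustrative, not from any Cursor API):

```python
# Hypothetical helper: estimate a request's dollar cost from the
# per-million-token rates quoted above.
RATES_PER_MTOK = {
    "input": 1.25,        # $ per million input tokens
    "cache_write": 1.25,  # $ per million cache-write tokens
    "cache_read": 0.13,   # $ per million cache-read tokens
    "output": 10.00,      # $ per million output tokens
}

def request_cost(tokens: dict) -> float:
    """Dollar cost of one request, given token counts per category."""
    return sum(RATES_PER_MTOK[kind] * n / 1_000_000 for kind, n in tokens.items())

# e.g. 100k tokens read from cache plus 2k generated tokens:
print(f"${request_cost({'cache_read': 100_000, 'output': 2_000}):.3f}")
```

The asymmetry is the interesting part: cache reads are roughly a tenth the price of fresh input, so agentic sessions that reuse a long context are dominated by output cost.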

jonasnelle

Cursor has the best Tab model, and I feel like their lead there has kept growing - they're doing some really cool things there. https://cursor.com/blog/tab-rl

I wonder how much of the methods/systems/data transfers; if they can pull off the same with their agentic coding model, that would be exciting.

vidarh

I feel like that's like having a lead in producing better buggy whips.

I run Claude Code in the background near constantly for a variety of projects, with --dangerously-skip-permissions, and review progress periodically. Tabbing is only relevant when it's totally failing to make progress and I have to manually intervene, and that to me is a failure scenario that is happening less and less often.

srush

We also are big Tab users here at Cursor. In the blog we talk about how the motivation for this project came from thinking about a Tab-like agent.

dagss

It's great. BUT: I wish they had selected another shortcut, like shift+tab.

Every time I write code myself I find myself racing the AI to get an indentation in before the AI is done... gets annoying

RosalieCodes

You can change the key bind, I personally set it to ctrl+tab

enraged_camel

Tab model is fantastic but I wish it was somehow aware of the conversation happening in the currently active AI chat session.

srush

Hi everyone,

I am an ML researcher at Cursor and worked on this project. Would love to hear any feedback you may have on the model, and I can answer questions about the blog post.

MysticFear

There is a youtube livestreamer building with it now, if you are looking for direct feedback: https://www.youtube.com/watch?v=1bDPMVq69ac

chaidhat

Which model did you distill it from? Great work! PS getting a few scenarios where it doesn't follow rules as well as sonnet 4.5

srush

The blog talks about the training process. Specifically we trained with RL post-training on coding examples.

chis

Makes sense, but what model was used for the base? Is it some open-source model, and you're not at liberty to disclose?

WanderPanda

Why did you stop training shy of the frontier models? From the log plot it seems like you would only need ~50% more compute to reach frontier capability

srush

We did a lot of internal testing and thought this model was already quite useful for release.

WanderPanda

Makes sense! I like that you guys are more open about it. The other labs just drop stuff from the ivory tower. I think your style matches better with engineers who are used to datasheets etc. and usually don't like poking a black box

alyxya

Is the new model trained from scratch? What training data went into it?

dfltr

Is it true that Cheetah is Grok Code Fast 2? Does this mean that the new Cursor model is also based on Grok?

srush

Cheetah was an earlier (and dumber) version of this model that we used to test production speed. They are both developed in-house. If you liked Cheetah, give this model a try.

carlosbaraza

This is nice. I liked Cheetah for grunt work that I want to get out quickly and is not too hard. The speed is really awesome. A model that would run at even higher speeds like the OSS models at groq/cerebras would really be workflow changing, because the slowness of SOTA models really breaks the flow. I find myself taking a ton of breaks and getting distracted while I wait for a model to complete a task (e.g. just now).

dfltr

Awesome, thanks for the clarification. So are the rumors around Cheetah being based on a Grok model just straight up untrue? I want to try Composer but have a pretty strict no X/Grok policy.

carlosbaraza

How do you work with multiple agents?

pdeva1

is Composer a fine tune of an existing open source base model?

srush

Our primary focus is on RL post-training. We think that is the best way to get the model to be a strong interactive agent.

comex

So, yes, but you won’t say what the base model is? :)

OsrsNeedsf2P

One thing no competitor is serious about is average response completion time. Cursor has lapped everyone there.

srush

There are lots of good models we like here. But we agree that getting the right point on the smart+fast graph can make agentic coding feel really good.

(Cursor researcher)

stared

While I am excited to see a new model, I am skeptical when there is so much vagueness - charts with "frontier models" without actually spelling out which ones, charts with no numbers (time axis, or in one chart - entirely).

srush

There is a footnote that should help with the models. Training is a harder thing to report on, but roughly our finding here is that RL scales.

jasonjmcghee

Maybe I'm an outlier but Sonnet 4.5 quality is about as low as I'm willing to go.

Its generation speed is not the problem or the time sink.

It's wrestling with it to get the right output.

---

And just to clarify, as maybe I misunderstood again: people here are comparing Cursor to Claude Code and Codex etc., but isn't this whole article all Cursor, just using different models?

alyxya

There are two different kinds of users: on one side, people who are more hands-off and want the model to autonomously handle longer tasks with minimal guidance; on the other, users who want to interactively collaborate with the model to produce the desired results. Speed matters much more in the second case, where you know what you want and just want the model to implement it as quickly as possible. Intelligence/ability matters more in the first case, when you don't have a full understanding of all the code. For me it's context dependent, and more serious work tends to be more interactive. The intelligence of a model doesn't make up for issues caused by lack of context.

jasonjmcghee

I'm very solidly in the second group - but I review all the code. If it writes faster than I can read, that's fast enough.

swyx

> Sonnet 4.5 quality is about as low as I'm willing to go.

literally a 30 day old model and you've moved the "low" goalpost all the way there haha. funny how humans work

jasonjmcghee

Yup - just like the sibling comment said - my "low bar" is going to be whatever the best model is that isn't unreasonably costly.

Speed of model just isn't the bottleneck for me.

Before it I used Opus 4.1, and before that Opus 4.0 and before that Sonnet 4.0 - which each have been getting slightly better. It's not like Sonnet 4.5 is some crazy step function improvement (but the speed over Opus is definitely nice)

vidarh

Yes? Because why should we settle for less now that it is available?

swyx

because engineering is the art of "good enough" and composer is clearly "good enough but a lot faster" which makes up for intelligence gaps in interesting ways

solarkraft

The reason I pulled out the comparison is to highlight how serious they are about all the important parts that make or break the AI coding experience - speed being very important to me. I’d rather catch my model doing the wrong thing quickly than having a higher chance of one-shotting it at the cost of having to do a lot of specification upfront.

srush

Agree that Sonnet 4.5 is an excellent model. Would be curious to hear your experience using Composer though, it's quite good.

jasonjmcghee

I'll try it out! I haven't yet - just generally conveying my opinion that I personally weigh "better model" much more important than speed, assuming some "fast enough"

Also, didn't realize you worked at Cursor - I'm a fan of your work - they're lucky to have you!

srush

Thanks! Yeah, I've been working here for 9 months now. Fascinated by agentic coding both as a researcher and a user.

Totally agree that "smart model" is the table stakes for usefulness these days.

nu11ptr

I love Cursor. I've tried Copilot/Claude/etc. but keep coming back to Cursor. I just want to work, and Cursor tab complete is dang accurate, esp. for refactoring tasks.

Sammi

I tried going back to VS Code + Copilot a month ago. I only lasted 4 days because it was too bad. It was super slow and gave poor suggestions, but mostly it just flat out did not suggest anything. Cursor feels snappy in comparison, and the suggestions are more often than not useful. The most annoying thing about Cursor tab complete is that it is so fast that when I am doing something unusual, it will keep jumping in with useless suggestions. They have a snooze function for this though.

WanderPanda

Damn, TIL. I always used > Cursor: disable completions and forgot to turn it back on. I need to try snooze then!

carlosbaraza

Cursor 2.0 keeps crashing on me while having an agent running and opening the IDE part of the application. I might have to rollback.

amilich

Hey - really sorry to hear this - could you email me andrew@cursor.com? Here are 3 suggestions to try:

1. Reset your settings.json - if shared with VS Code, settings can sometimes cause perf regressions.

2. Try cmd-shift-p -> "capture and send debugging data" - this sends us profiling data to debug.

3. As a last resort, clear your user data (this will delete chats): cmd-shift-p -> "reveal user data", close the app, delete that folder, then restart the app.

carlosbaraza

Could anyone explain how to use multiple agents and subagents in Cursor, Claude Code, or others? It is already challenging to me taming one model doing work, let alone synchronizing multiple parallel workers.

Do you have to split the plan in parallelizable tasks that could be worked in parallel in one codebase without breaking and confusing the other agents?

asdev

You can use git worktrees and just have multiple Claude Code terminal instances, each working on its own worktree. That way they don't clash; just delete the worktree when the task is done.
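The worktree flow described above can be sketched as follows (a minimal demo; the repo, branch, and path names are illustrative):

```shell
# Demo repo to work in.
set -e
git init -q demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -q -m "init"

# One worktree (and branch) per agent: each is a full checkout sharing
# the same object store, so parallel agents never touch each other's files.
git worktree add ../agent-a -b feature-a
git worktree add ../agent-b -b feature-b
git worktree list

# When a task lands, merge its branch, then drop its worktree:
git worktree remove ../agent-a
git branch -d feature-a
```

Each agent then gets launched with its worktree directory as its working directory, and cleanup is just `git worktree remove` once the branch has been merged.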

carlosbaraza

I have never leveraged git worktrees... That is such a crazy useful tool that I am almost ashamed of not having researched it before. Git is such a beautiful piece of software.

swyx

my very small nit is... why is the model called Composer?? of all things?? when there was already a Cursor Composer from 2024.

Cursor Cheetah wouldve been amazing. reusing the Composer name feels like the reverse OpenAI Codex move haha

srush

We like the name Composer and were sad to see it go. Excited to bring it back. (Agree Cheetah is a cool name too.)

kilroy123

What I can't stand about cursor is the constantly changing and confusing billing and usage.

I think competition in the space is a good thing, but I'm very skeptical their model will outperform Claude.