
Tracking Copilot vs. Codex vs. Cursor vs. Devin PR Performance

lukehoban

(Disclaimer: I work on coding agents at GitHub)

This data is great, and it is exciting to see the rapid growth of autonomous coding agents across GitHub.

One thing to keep in mind regarding merge rates is that each of these products creates the PR at a different phase of the work. So just tracking PR create to PR merge tells a different story for each product.

In some cases, the work to iterate on the AI generated code (and potentially abandon it if not sufficiently good) is done in private, and only pushed to a GitHub PR once the user decides they are ready to share/merge. This is the case for Codex for example. The merge rates for product experiences like this will look good in the stats presented here, even if many AI generated code changes are being abandoned privately.

For other product experiences, the Draft PR is generated immediately when a task is assigned, and users can iterate on this “in the open” with the coding agent. This creates more transparency into both the success and failure cases (including logs of the agent sessions for both). This is the case for GitHub Copilot coding agent for example. We believe this “learning in the open” is valuable for individuals, teams, and the industry. But it does lead to the merge rates reported here appearing worse - even if logically they are the same as “task assignment to merged PR” success rates for other tools.

We’re looking forward to continuing to evolve the notion of Draft PR to be even more natural for these use cases. And to enabling all of these coding agents to benefit from open collaboration on GitHub.

soamv

This is a great point! But there's an important tradeoff here between human engineering time and the "learning in the open" benefits: a PR discarded privately consumes no human engineering time, a fact that the humans involved might appreciate. How do you balance that tradeoff? Is there such a thing as a diff that's "too bad" to iterate on with a human?

ambicapter

Do people where you work spend time reviewing draft PRs? I wouldn’t do that unless asked to by the author.

drawnwren

It’s hard enough for me to get time to review actual PRs; who are these engineers trawling through the drafts?

osigurdson

I've been underwhelmed with dedicated tools like Windsurf and Cursor, in the sense that they are usually more annoying than just using ChatGPT. They have their niche, but they are so incredibly flow-destroying that it is hard to use them for long periods of time.

I just started using Codex casually a few days ago, though, and already have 3 PRs. While different tools for different purposes make sense, Codex's fully async nature is so much nicer. It does simple things like improving consistency and making small improvements quite well, which is really nice. Finally we have something that operates more like an appliance for certain classes of problems. Previously it felt more like a teenager with a learner's license.

elliotec

Have you tried Claude Code? I’m surprised it’s not in this analysis, but in my personal experience the competition doesn’t even touch it. I’ve tried them all in earnest. My toolkit has been (neo)vim and tmux for at least a decade now, so I understand the apprehension from less terminal-inclined folks who prefer other stuff, but it’s my jam and it just crushes it.

cap11235

Right, after the Sonnet 4 release it was the first time I could tell an agent something and just let it run comfortably. As for the tool itself, I think a large part of its ability comes from how it writes recursive todo-lists for itself, which are shown to the user, so you can intervene early on the occasions it goes full Monkey's Paw.

deadbabe

You can just use Cursor as a chat assistant if you want.

threeseed

But then you're paying far more than just using Claude web, which can be used for tasks other than coding.

tmvnty

Merge rate is definitely a useful signal, but there are certainly other factors we need to consider (small vs. large PR edits, refactors vs. dependency upgrades, direct merges, follow-up PRs correcting merged mistakes, how easy it is to set up these AI agents, marketing, usage fees, etc.). Similar to how NPM downloads alone don’t necessarily reflect a package’s true success or quality.

osigurdson

I suspect most are pretty small. But hey, that is fine as long as they are making code bases a bit better.

behnamoh

How about Google Jules?

Also, of course OpenAI Codex would perform well, because the tool is heavily tailored to this type of task, whereas Cursor is a more general-purpose (within the programming domain) tool/app.

dimitri-vs

This might be an obvious question, but why is Claude Code not included?

a_bonobo

I think the OP's page works because these coding agents identify themselves as the PR author, so the creator can just search GitHub's issue tracker for things like is:pr+head:copilot or is:pr+head:codex.

It seems like Claude Code doesn't do that? Some preliminary searching reveals that PRs generated by people using Claude Code are opened from the user's own account but may note that they used Claude; for example: https://github.com/anthropics/claude-code/pull/1732
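For anyone who wants to reproduce those counts, here's a minimal sketch against the GitHub search API, assuming it accepts the same is:pr/head: qualifiers as the web search the site appears to rely on (the branch names are just the ones mentioned above):

```
# Count agent PRs by head branch name; total_count is the number of matching PRs.
curl -s "https://api.github.com/search/issues?q=is:pr+head:copilot" | jq '.total_count'
# Add is:merged to count only the merged ones.
curl -s "https://api.github.com/search/issues?q=is:pr+is:merged+head:copilot" | jq '.total_count'
```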

cap11235

Claude does credit itself in the commit messages, e.g.:

```
feat: add progress bar for token probability calculation

- Add optional progress_cb parameter to get_token_probs function
- Integrate rich progress bar in CLI showing real-time token processing progress
- Add comprehensive tests for progress callback functionality
- Maintain backward compatibility with optional parameter

Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
```

a_bonobo

OK then OP can slightly change their site by using a different search term:

https://github.com/search?q=is:pr+is:merged+Co-Authored-By:+...

Instead of looking at the author of the PR, look for that 'Co-Authored-By: Claude' text.

That way I get 753 closed PRs and '1k' PRs in total; that's a pretty good acceptance rate.
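A rough sketch of that acceptance-rate calculation via the search API, assuming (as the web search above does) that the 'Co-Authored-By: Claude' text also appears in the PR title or body rather than only in commit messages:

```
# Merged PRs mentioning the Claude Code co-author trailer vs. all such PRs.
merged=$(curl -s 'https://api.github.com/search/issues?q=is:pr+is:merged+"Co-Authored-By:+Claude"' | jq '.total_count')
total=$(curl -s 'https://api.github.com/search/issues?q=is:pr+"Co-Authored-By:+Claude"' | jq '.total_count')
echo "merged=$merged total=$total rate=$(echo "scale=2; $merged / $total" | bc)"
```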

csallen

I believe these are all "background" agents that, by default, are meant to write code and issue pull requests without you watching/babysitting/guiding the process. I haven't used Claude Code in a while, but from what I recall, it's not that.

cap11235

If you enable it in permissions, Claude is very happy to do so. For personal fun/experimental projects (usually I give it arXiv papers to implement), I generally have a couple of Claude instances (on different projects) just chugging along all day.

I have them write really detailed plans at the start (50-100 steps in the implementation plan, plus actual specifications for project structure, dev practices, and what the actual goals are). I iterate on these plan documents by having Claude write QUESTIONS.md, which has dev questions for me to clarify; I fill it out with answers and then instruct Claude to update the plan docs accordingly.

From there, most of my interaction throughout the day is just saying something like "P18" to implement implementation plan step #18. I instruct it in CLAUDE.md to stop after each step and output what automated tests have been written for that step's features, and I require that the LLM write a demo script I can run that shows the features, using real APIs. I'm having a great time with it.
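To make that workflow concrete, here's a hypothetical sketch of the kind of CLAUDE.md instructions being described; the file names (PLAN.md, scripts/demo_p<N>.sh) are illustrative, not the commenter's actual setup:

```
# CLAUDE.md (illustrative sketch)

- The implementation plan lives in PLAN.md; steps are numbered P1..Pn.
- When I say "P<N>", implement only that step, then stop.
- After each step, list the automated tests written for that step's features.
- Also write a runnable demo script (scripts/demo_p<N>.sh) that exercises the
  new feature against real APIs.
- If anything in the plan is unclear, add questions to QUESTIONS.md instead of
  guessing; I will answer there and then ask you to update the plan docs.
```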

ilteris

How much do you pay monthly? What kind of service do you use? Thanks.

koakuma-chan

Claude Code can run in the background, and I don't see why it wouldn't be able to create pull requests if you gave it such a tool.

cap11235

The prompts in Claude Code have specific instructions on doing pull requests.

```
grep 'gh pr ' ~/.claude/local/node_modules/@anthropic-ai/claude-code/cli.js

- Create PR using gh pr create with the format below. Use a HEREDOC to pass the body to ensure correct formatting.
  gh pr create --title "the pr title" --body "$(cat <<'EOF'
1. Use \`gh pr view --json number,headRepository\` to get the PR number and repository info
1. If no PR number is provided in the args, use ${O4.name}("gh pr list") to show open PRs
2. If a PR number is provided, use ${O4.name}("gh pr view <number>") to get PR details
3. Use ${O4.name}("gh pr diff <number>") to get the diff
```

pryelluw

Is it just me, or are there a lot of documentation-related PRs? Not a majority, but enough to mask the impact of agent code.

throwaway314155

Is this data not somewhat tainted by the fact that there's really zero way to identify how much a human was or wasn't "in the loop" before the PR was created?

thorum

With Jules, I almost always end up making significant changes before approving the PR. So “successful merge” is not a great indicator of how well the model did in my case. I’ve merged PRs that were initially terrible, after going in and fixing all the mistakes.

tptacek

I kind of wondered about that re: Devin vs. Cursor, because the people I know that happen to use Devin are also very hands-on with the code they end up merging.

But you could probably filter this a bit by looking at PR commit counts?
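As a rough sketch of that filter (owner, repo, and PR number below are placeholders): the GitHub pulls endpoint reports a commit count per PR, so single-commit merges could be separated from PRs that needed follow-up pushes before merging.

```
# More than one commit on a PR often means a human (or a re-prompted agent)
# pushed follow-up work before it was merged.
curl -s "https://api.github.com/repos/OWNER/REPO/pulls/123" | jq '{commits, additions, deletions, changed_files}'
```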

zekone

thanks for posting my project bradda

cjbarber

Seems like the high-order bit impacting results here might be how difficult the PR is?

zachlatta

Wow, this is an amazing project. Great work!

frognumber

Missing data: I don't make a Codex PR if it's nonsense.

Poor data: If I do make one, I either:

a) Merge it (success)

b) Modify it (sometimes success, sometimes not). In one case, Codex made the wrong changes in all the right places, but it was still easier to work from that by hand.

c) Pick ideas from it (partial success)

So simple merge rates don't say much.

osigurdson

It isn't so much "poor" data as a fairly high bar for value generation. If it gets merged, that's a fairly clear indicator that some value was created. If it doesn't get merged, it may still be adding some value, or it may not.

TZubiri

Why are there 170k PRs for a product released last month, but 700 for a product that has been around for like 6 months and was so popular it got acquired for $3B?

simoncion

It might be the case that "number of PRs" is roughly as good a metric as "number of lines of code produced".