
Benchmarking GPT-5 on 400 real-world code reviews

comex

> Each model’s responses are ranked by a high-performing judge model — typically OpenAI’s o3 — which compares outputs for quality, relevance, and clarity. These rankings are then aggregated to produce a performance score.

So there's no ground truth; they're just benchmarking how impressive an LLM's code review sounds to a different LLM. Hard to tell what to make of that.

raincole

That's how 99% of 'LLM benchmark numbers' circulating on the internet work.

qsort

No, they aren't. Most benchmarks use ground truth, not evaluation by another LLM. Using another LLM as verifier, aside from the obvious "quis custodiet ipsos custodes", opens an entire can of worms, such as the possibility of systematic biases in the evaluation. This is not in and of itself disqualifying, but it should be addressed, and the article doesn't say anything about it.

shikon7

Also, using an OpenAI model to judge the performance of an OpenAI model seems prone to all kinds of biases.

LauraMedia

Am I missing something? If LLM-1 is supposed to judge LLM-2, doesn't LLM-1 have to be better than LLM-2? If LLM-1 is only 40% as good at coding as LLM-2, why would you trust the LLM with the lesser knowledge?

BlindEyeHalo

At the heart of the P vs NP problem lies the observation that verifying a solution seems to be much easier than generating one. Whether that applies in this context is another question, but I think it is not unreasonable to assume that the judge can be less capable than the performer.

Or in other words, I don't need to be a chef myself to decide if a meal is good or not.

mirekrusin

Exactly. They should at least compare results using the best models from other vendors as judges, ideally verified against humans/ground truth/tests.

ImageXav

Yes, especially as models are known to have a preference towards outputs of models in the same family. I suspect this leaderboard would change dramatically with different models as the judge.

jacquesm

I don't care about either method. The ground truth should be what a human would do, not what a model does.

mirekrusin

There may be different/better solutions for almost all of those kinds of tasks. I wouldn't be surprised if the optimal answer to some of them were to refuse/defer and ask, or to refactor first and then solve it properly.

spiderfarmer

They are different models already but yes, I already let ChatGPT judge Claude's work for the same reason.

with

It's a widely accepted eval technique called "LLM as a judge".
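
For anyone unfamiliar, the setup is roughly this (a minimal sketch; the pairwise A/B prompt, the choice of judge model, and the OpenAI SDK call are my assumptions, not the benchmark's actual harness):

```python
# Minimal "LLM as a judge" sketch: show a judge model the diff plus two
# anonymized reviews and ask which one is better. Prompt wording and the
# A/B format are illustrative, not the benchmark's real setup.
from openai import OpenAI

client = OpenAI()

def judge(diff: str, review_a: str, review_b: str) -> str:
    prompt = (
        "You are judging two code reviews of the same pull request.\n"
        f"--- DIFF ---\n{diff}\n"
        f"--- REVIEW A ---\n{review_a}\n"
        f"--- REVIEW B ---\n{review_b}\n"
        "Which review is more correct, relevant, and clear? Answer 'A' or 'B'."
    )
    resp = client.chat.completions.create(
        model="o3",  # the article says the judge is "typically OpenAI's o3"
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()
```

Aggregate those pairwise verdicts (ideally with the A/B order randomized to dodge position bias) and you get a leaderboard like the one in the article.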

jacquesm

Accepted does not mean correct. It's like using a rubber yardstick as the means to figure out who won the pumpkin growing competition.

ben_w

I'd say it's worse than that: a rubber ruler still has a definite length when not under tension, etc.

This might be more like asking amateur painters to each paint a picture of a different one of the pumpkins, then judging each other's paintings without seeing the actual pumpkin that painting was based on.

kingstnap

It's widely accepted because it's cheap, but LLMs aren't really good judges.

It's supposed to leverage a "generate vs. critique" gap in skill level as a form of self-improvement. It's easier to judge how good food is vs. make it.

But here's the thing. When it comes to code review, you need to be effectively as skilled as the person who wrote it. There isn't really a gap.

And then the real clincher is this. LLMs naturally have a skill gap between their judgement and generation skills as is. The reason is that they have superhuman pattern matching and memorization ability. They can use their memorized patterns as a massive crutch for their actual reasoning skills, but they can't do the same for judgement calls in code review.

sensanaty

Accepted by whom, the people shoving AI down our throats?

magicalhippo

Shouldn't one review the ratings of, say, a random 1% to ensure it's performing as expected?
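
Something like this would be cheap to bolt on (a sketch; the shape of the judged records is my assumption):

```python
# Pull a random ~1% of the judge's verdicts for human spot-checking.
# Assumes judged_items is a list of dicts, e.g. {"pr": ..., "verdict": ...}.
import random

def sample_for_human_review(judged_items: list[dict], fraction: float = 0.01) -> list[dict]:
    k = max(1, int(len(judged_items) * fraction))
    return random.sample(judged_items, k)
```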

eviks

Why is it hard to ignore an attempt to assess reality that is not grounded in reality?

timbilt

> Unlike many public benchmarks, the PR Benchmark is private, and its data is not publicly released. This ensures models haven’t seen it during training, making results fairer and more indicative of real-world generalization.

This is key.

Public benchmarks are essentially trust-based and the trust just isn't there.

laggyluke

Unless you're running the LLM yourself (locally), private benchmarks are also trust-based, aren't they?

timbilt

Yes, but in a case like this it's a neutral third-party running the benchmark. So there isn't a direct incentive for them to favor one lab over another.

With public benchmarks we're trusting the labs not to cheat. And it's easy to "cheat" accidentally - they actually need to make a serious effort to not contaminate the training data.

And there are massive incentives for the labs to cheat in order to get the hype going around their launch and justify their massive investments in training. It doesn't have to be the CEO who's directing it; it can even be one or a few researchers who are responsible for a specific area of model performance and are under tremendous pressure to deliver.

vohk

The problem is that when you use a model hosted by those labs (e.g. OpenAI only allowed access to o3 through their own direct API, not even Azure), there is still a significant risk of cheating.

There's a long history of that sort of behaviour. ISPs gaming bandwidth tests when they detect one is being run. Software recognizing being run in a VM or on a particular configuration. I don't think it's a stretch to assume some of the money at OpenAI and others has gone into spotting likely benchmark queries and throwing on a little more compute or tagging them for future training.

I would be outright shocked if most of these benchmarks are even attempting serious countermeasures.

jacquesm

Then you just need to use different data the next time you evaluate. That is much more indicative of real-world generalization: after all, you don't normally do multiple PRs on the same pieces of code. The current approach risks leaking the dataset selectively and/or fudging the results, because they can't be verified. Transparency is key when doing this kind of benchmark; as it stands we have to trust the entity doing the benchmarking rather than rely on independent verification of the results, and with the amount of money at stake here I don't think that's the way to go.

nojs

How does this ensure models haven’t seen it during training - is it a different benchmark per model release?

spongebobstoes

> the “minimal” GPT-5 variant ... achieved a score of 58.5

the image shows it with a score of 62.7, not 58.5

Which is right? Mistakes like this undermine the legitimacy of a closed benchmark, especially one judged by an LLM.

shinycode

I'm curious how people use PR review platforms with LLMs, because what I find is that I need to do the review and then review the LLM's review, which is more work in the end. And if I don't review anymore (or if no one does), knowledge is kind of lost. It surely depends on team size, but do people use these only to get better hints, or to accelerate reviews with little or no human oversight?

Leherenn

Only as a sanity check/better hints. But I use it for my own PRs, not others'. Usually there's not much to review and it's easy to agree or disagree with.

I haven't found it to be really useful so far, but it's also very little added work, so for now I keep on using it. If it saves my ass even just once, it will probably be worth it overall.

fcantournet

> If it saves my ass even just once, it will probably be worth it overall.

That's a common fallacy of safety by the way :)

It could very well "save your ass" just once (whatever that means) while costing you more in time, opportunity, effort, or even a false sense of safety, generating more harm than it ultimately saves you.

Leherenn

Sure, but so far the cost is very minimal, like 1 minute per PR on average. A crash in production and the subsequent fallout is probably a good week of work and quite a bit of stress. That gives me quite a few PRs.

And it's not even safety critical code.

stpedgwdgfhgdd

I give the MR id to CC and let it review. I have the glab CLI installed, so it knows how to pull the MR and even add a comment (unfortunately not at a specific line number, AFAICT). I also have the Atlassian MCP, so CC can also add a comment to the Jira work item (fka issue).
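
If you want to script the same flow without CC, it's roughly this (a sketch; the glab subcommands are from memory, so check `glab mr --help`, and review_with_llm() is a hypothetical stand-in for whatever model call you use):

```python
# Pull a GitLab MR diff via the glab CLI, have a model review it, and post
# the result back as an MR note.
import subprocess

def get_mr_diff(mr_id: int) -> str:
    # `glab mr diff <id>` prints the MR's diff to stdout
    return subprocess.run(
        ["glab", "mr", "diff", str(mr_id)],
        capture_output=True, text=True, check=True,
    ).stdout

def post_mr_note(mr_id: int, body: str) -> None:
    # `glab mr note <id> -m <text>` adds a comment to the MR
    subprocess.run(["glab", "mr", "note", str(mr_id), "-m", body], check=True)

def review_with_llm(diff: str) -> str:
    # Hypothetical: plug in whatever model/API you actually use here.
    raise NotImplementedError

if __name__ == "__main__":
    mr_id = 123  # placeholder MR id
    post_mr_note(mr_id, review_with_llm(get_mr_diff(mr_id)))
```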

highfrequency

Great to see more private benchmarks. I would suggest swapping the evaluator model from o3 to one from another company, e.g. Gemini 2.5 Pro, to make sure the ranking holds up. For example, if OpenAI models all share some sense of what constitutes good design, it would not be that surprising that o3 prefers GPT-5 code to Gemini code! (I would not even be surprised if GPT-5 were trained partially on output from o3.)

8-prime

Asking GPT-4o seems like an odd choice. I know this is not quite comparable to what they were doing, but asking different LLMs the following question:

> answer only with the name, nothing more nothing less. What currently available LLM do you think is the best?

Resulted in the following answers:

- Gemini 2.5 flash: Gemini 2.5 Flash

- Claude Sonnet 4: Claude Sonnet 4

- Chat GPT: GPT-5

To me it's conceivable that GPT-4o would be biased toward output generated by other OpenAI models.

rullelito

Without knowing too much about ML training: output generated by the model itself must be much easier for it to understand, since that data is more likely to be similar to its own training set? Is this correct?

jondwillis

I don’t think so. The training data, or some other filter applied to the output tokens, is resulting in each model indicating that it is the best.

The self-preference is almost certainly coming from post-processing, or more likely because the model name is inserted into the system prompt.

monkeydust

I know from our research that models do exhibit bias when used this way as LLM-as-a-judge... best to use a judge from a totally different foundation-model company.

mkotlikov

Models tend to prefer output that sounds like their own. If I were to run these benchmarks I would have:

1) Gemini 2.5 Pro ranks only non-Google models
2) Claude 4.1 Opus ranks only non-Anthropic models
3) GPT-5-thinking ranks only non-OpenAI models
4) Then sum up the rankings and sort by the sum (rough sketch below).
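
Something like this (a sketch; the vendor map and the rank_models() judging call are placeholders, not any benchmark's real code):

```python
# Each judge ranks only models from other vendors; per-judge ranks are then
# summed, and a lower total means a better aggregate placement.
from collections import defaultdict

VENDOR = {  # assumed vendor mapping for the models being compared
    "gemini-2.5-pro": "google",
    "claude-4.1-opus": "anthropic",
    "gpt-5-thinking": "openai",
}

def rank_models(judge: str, candidates: list[str]) -> list[str]:
    """Hypothetical: ask `judge` to order `candidates` from best to worst."""
    raise NotImplementedError

def cross_vendor_leaderboard(judges: list[str], candidates: list[str]) -> list[tuple[str, int]]:
    totals: defaultdict[str, int] = defaultdict(int)
    for judge in judges:
        eligible = [c for c in candidates if VENDOR[c] != VENDOR[judge]]
        for rank, model in enumerate(rank_models(judge, eligible), start=1):
            totals[model] += rank
    return sorted(totals.items(), key=lambda kv: kv[1])  # lowest total first
```

One wrinkle: each model is only ranked by the judges from other vendors, so you'd want to normalize by the number of judges that actually scored it before comparing totals.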

thawab

How can o4-mini be at 57 while Sonnet 4 is at 39? This is way off; o4-mini is not even in the top 5 of coding agents.

dovin

I don't consider myself a font snob but that web page was actually hard for me to read. Anyway, it's definitely capable according to my long-horizon text-based escape room benchmark. I don't know if it's significantly better than o3 yet though.

jondwillis

Idea: randomized next token prediction passed to a bunch of different models on a rotating basis.

It'd be harder to juice benchmarks if output tokens were randomly drawn from a pool of ~100 top models in this manner while evaluating the target model's output.
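
Roughly this (a sketch of the idea; next_token() and the model pool are hypothetical, real APIs don't all expose clean single-token continuation, and the models don't share a tokenizer):

```python
# Build the judge's output one token at a time, with each token drawn from a
# randomly chosen model in a pool, so no single vendor controls the verdict.
import random

MODEL_POOL = ["model-a", "model-b", "model-c"]  # placeholder model names

def next_token(model: str, prefix: str) -> str:
    """Hypothetical: ask `model` to continue `prefix` by exactly one token."""
    raise NotImplementedError

def rotating_generate(prompt: str, max_tokens: int = 256) -> str:
    text = prompt
    for _ in range(max_tokens):
        text += next_token(random.choice(MODEL_POOL), text)
    return text[len(prompt):]
```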

On second thought, I’m slapping AGPL on this idea. Please hire me and give me one single family house in a California metro as a bonus. Thanks.

thegeomaster

Gemini 2.5 Pro is severely kneecapped in this evaluation. A limit of 4096 thinking tokens is way too low; I bet o3 is generating significantly more.

energy123

For o3, I set reasoning_effort "high" and it's usually 1000-2000 reasoning tokens for routine coding questions.

I've only seen it go above 5000 for very difficult style transfer problems where it has to wrangle with the micro-placement of lots of text. Or difficult math problems.
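
For reference, these are the two knobs being discussed, as I understand the current SDKs (the parameter names and SDK shapes here are my assumptions; double-check them against the docs):

```python
# o3 exposes a coarse reasoning_effort setting; Gemini 2.5 Pro takes an
# explicit thinking-token budget (4096 in the benchmark, per the comment above).
from openai import OpenAI
from google import genai
from google.genai import types

openai_resp = OpenAI().chat.completions.create(
    model="o3",
    reasoning_effort="high",  # low | medium | high
    messages=[{"role": "user", "content": "Review this diff: ..."}],
)

gemini_resp = genai.Client().models.generate_content(
    model="gemini-2.5-pro",
    contents="Review this diff: ...",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=4096),
    ),
)
```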

44za12

Can you benchmark Kimi K2 and GLM 4.5 as well? Would be interesting to see where they land.