Skip to content(if available)orjump to list(if available)

Top model scores may be skewed by Git history leaks in SWE-bench

ofirpress

[I'm on the SWE-bench team] Multiple people have looked into this, for example right in that thread: https://github.com/SWE-bench/SWE-bench/issues/465#issuecomme...

This issue had affected a tiny fraction of existing agents in a tiny fraction of their runs. And we've now issued a fix.

This is a natural part of running a benchmark, I'm sure tiny things like this will keep on getting discovered and we'll keep on fixing them. This doesn't change the overall picture or trends at all.

piskov

Not “may be”: just look how swe-bench scores drop to single digits once it in C#

https://arxiv.org/html/2506.12286v3

fine_tune

I was going to argue "LLM's need code samples to-do well on languages and if we are honest C# is a language mostly held in private repo's" but Github's 2024 report[0] says its the 5th most used language (I'm to lazy to check if this report includes private repo's but I'll assume it doesn't).

So kinda neat to see this paper!

[0]https://github.blog/news-insights/octoverse/octoverse-2024/#...

yieldcrv

5th most used language based on private repos that the group making the report has the exclusive direct access to seeing

I don't see that contradicting your assumption

stefan_

So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

I don't get it, who is so opposed to doing the bare minimum of manual work and check what these models are doing? At least back in the day grad students doing an easy meta-paper understood it meant doing some repetitive manual work. Now we got benchmarks by hype vendors who think they can use the thing they are benchmarking to .. mark the bench.

yorwba

The "Verified" part of "SWE-Bench Verified" means that there was plain "SWE-Bench" before it, which had actually not been verified at all and included a lot of tasks that didn't really make sense for use as a benchmark: https://openai.com/index/introducing-swe-bench-verified/#ada...

Data contamination stemming from the fact that it's based on already-solved problems in public repositories is a different issue that cannot be addressed by verifying the benchmark questions harder, but only by putting stricter limits on the model under test.

sebzim4500

The verified refers to the fact that the benchmark problems were verified by human experts to be reasonable.

It says nothing about data contamination, which would depend on the model and would not be the fault of the benchmark.

jsheard

> So the "Verified" part of "SWE Bench Verified" means.. not "Verified" at all.

Seems on-brand for an LLM-related thing to claim that it has verified something without actually checking.

geekymartian

that was my exact thought. how fitting

teaearlgraycold

Personally I don't look at or respect LLM benchmarks at all. I've seen SOTA models fail in incredibly shocking ways even recently. Those moments immediately bring me out of the delusion that LLMs have thinking capacity or an understanding of code.

mustaphah

I speculate something similar (or even worse) is going on with Terminal-Bench [1].

Like, seriously, how come all these agents are beating Claude Code? In practice, they are shitty and not even close. Yes. I tried them.

[1] https://www.tbench.ai/leaderboard

slacktivism123

Fascinating case showing how LLM promoters will happily take "verified" benchmarks at their word.

It's easy to publish "$NEWMODEL received an X% bump in SWE-Bench Verified!!!!".

Proper research means interrogating the traces, like these researchers did (the Gist shows Claude 4 Sonnet): https://gist.github.com/jacobkahn/bd77c69d34040a9e9b10d56baa...

Commentary: https://x.com/bwasti/status/1963288443452051582, https://x.com/tmkadamcz/status/1963996138044096969

mbowcut2

I'm not surprised. People really thought the models just kept getting better and better?

guerrilla

Maybe. How would I know?

OtherShrezzing

That the answers have been available to them in the environment, and they’re still not hitting 100% on this benchmark is a damning indictment of SOTA model performance.

raincole

It really isn't. Do you expect SOTA models to answer any answered question on the internet with 100% accuracy? Congrats you just compressed the whole internet (at least a few zettabytes) into a model (a few TB at most?).

aurareturn

Are you going to rail on humans for making this mistake in the first place?

themafia

No because that's the baseline. It's what you do when you have no other choice. Railing against that would be pointless.

jasonjmcghee

Very interested to see the updated results. This could really shake up the leaderboard.

macawfish

I hope it does. These coding benchmarks have often seemed frustratingly out of touch with my experience.

3abiton

Because I would argue there is no benchmark to rule them all. It highly depends on individual use cases.

zaptheimpaler

It's honestly ridiculous they left git history lying around during a benchmark, and this benchmark made to ICLR in Jan 2024 and no one has detected this issue until now. I don't really trust any benchmarking or tools or claims from this space when they can make such huge basic errors.

Nijikokun

There was a lot of speculation whether or not the model would use them or even if it would attempt to use them and they noted this months ago. Now they have clear evidence of them doing so. Seems reasonable.

epolanski

This is beyond sad.

Traster

Man I feel so dumb. Why haven't I been doing this in my job, if I could just see the commit that fixed my issue this would all be so easy.

Noumenon72

Someone did comment that it's actually smart to check if something is fixed on the unstable branch, or I suppose in your coworkers' branches. A good task for an LLM.

belter

In the meawhile, Oracle stock went up 40% in one one day, based on what Wall Street thinks AI might be...in 4 years...Not a bubble at all...

candiddevmike

I think Oracle's stock mostly popped due to a delayed reaction with the US GSA contract it secured in July and the revenue guidance probably related to it:

https://www.oracle.com/news/announcement/blog/oracle-cloud-c...

ksherlock

The real bubble will come once interest rates start dropping.