Gemini-2.5-pro-preview-06-05

232 comments

·June 5, 2025

johnfn

Impressive seeing Google notch up another ~25 ELO on lmarena, on top of the previous #1, which was also Gemini!

That being said, I'm starting to doubt the leaderboards as an accurate representation of model ability. While I do think Gemini is a good model, having used both Gemini and Claude Opus 4 extensively in the last couple of weeks I think Opus is in another league entirely. I've been dealing with a number of gnarly TypeScript issues, and after a bit Gemini would spin in circles or actually (I've never seen this before!) give up and say it can't do it. Opus solved the same problems with no sweat. I know that that's a fairly isolated anecdote and not necessarily fully indicative of overall performance, but my experience with Gemini is that it would really want to kludge on code in order to make things work, where I found Opus would tend to find cleaner approaches to the problem. Additionally, Opus just seemed to have a greater imagination? Or perhaps it has been tailored to work better in agentic scenarios? I saw it do things like dump the DOM and inspect it for issues after a particular interaction by writing a one-off playwright script, which I found particularly remarkable. My experience with Gemini is that it tries to solve bugs by reading the code really really hard, which is naturally more limited.

Again, I think Gemini is a great model, I'm very impressed with what Google has put out, and until Claude 4 came out I would have said it was the best.

joshmlewis

o3 is still my favorite over even Opus 4 in most cases. I've spent hundreds of dollars on AI code gen tools in the last month alone and my ranking is:

1. o3 - it's just really damn good at nuance, getting to the core of the goal, and writing the closest thing to quality production-level code. The only negatives are its cutoff window and cost, especially with its love of tools. That's not usually a big deal for the Rails projects I work on, but sometimes it is.

2. Opus 4 via Claude Code - also really good and is my daily driver because o3 is so expensive. I will often have Opus 4 come up with the plan and first pass and then let o3 critique and make a list of feedback to make it really good.

3. Gemini 2.5 Pro - haven't tested this latest release but this was my prior #2 before last week. Now I'd say it's tied or slightly better than Sonnet 4. Depends on the situation.

4. Sonnet 4 via Claude Code - it's not bad but needs a lot of coaching and oversight to produce really good code. It will definitely produce a lot of code if you just let it go do its thing, but it won't be the quality, concise, and thoughtful code you get with more specific prompting and revisions.

I'm also extremely picky and a bit OCD with code quality and organization in projects down to little details with naming, reusability, etc. I accept only 33% of suggested code based on my Cursor stats from last month. I will often revert and go back to refine the prompt before accepting and going down a less than optimal path.

spaceman_2020

I use o3 a lot for basic research and analysis. I also find the deep research tool really useful for even basic shopping research

Like just today, it made a list of toys for my toddler that fit her developmental stage and play style. Would have taken me 1-2 hrs of browsing multiple websites otherwise

jml78

Gemini deep research runs circles around OpenAI deep research. It goes way deeper and uses way more sources.

vendiddy

I find o3 to be the clearest thinker as well.

If I'm working on a complex problem and want to go back and forth on software architecture, I like having o3 research prior art and have a back and forth on trade-offs.

If o3 was faster and cheaper I'd use it a lot more.

I'm curious what your workflows are!

monkpit

Have you used Cline with opus+sonnet? Do you have opinions about Claude code vs cline+api? Curious to hear your thoughts!

jonplackett

How do you find o3 vs o4-mini?

joshmlewis

For coding at least, I don't bother with anything less than the top thinking models. They do have their place for some tasks in agentic systems, but time is money and I don't want to waste time trying to corral less skilled models when there are more powerful ones available.

throwaway314155

It's interesting you say that because o3, while being a considerable improvement over OpenAI's other models, still doesn't match the performance of Opus 4 and Gemini 2.5 Pro by a long shot for me.

However, o3 resides in the ChatGPT app, which is still superior to the other chat apps in many ways; the internet search implementation in particular works very well.

svachalek

If you're coding through chat apps you're really behind the times. Try an agent IDE or plugin.

jorvi

What's most annoying about Gemini 2.5 is that it is obnoxiously verbose compared to Opus 4, both in explaining the code it wrote and in the number of lines and comments it adds, to the point where the output is often 2-3x longer than Opus 4's.

You can obviously alleviate this by asking it to be more concise but even then it bleeds through sometimes.

joshmlewis

What languages and IDE do you use it with? I use it in Cursor, mainly with Max reasoning on. I spent around $300 on token-based usage for o3 alone in May, while still only accepting around 33% of suggestions. I made a post on X about this the other day, but I expect that amount of rejections will go down significantly by the end of this year at the rate things are going.

pqdbr

How do you choose which model to use with Claude Code?

joshmlewis

I have the Max $200 plan so I set it to Opus until it limits me to Sonnet 4 which has only happened in two out of a few dozen sessions so far. My rule of thumb in Cursor is it's worth paying for the Max reasoning models for pretty much every request unless it's stupid simple because it produces the best code each time without any funny business you get with cheaper models.

jasonjmcghee

In case you're asking for the literal command...

/model

VeejayRampay

we need to stop it with the anecdotal evidence presented by one random dude


batrat

What I like about Gemini is the search function, which is very, very good compared to others. I was blown away when I asked it to compose an email to a company that was sending spam to our domain. It literally searched and found not only the abuse email of the hosting company but all the info about the domain and the host (MX servers, IP owners, datacenters, etc.). Also, if you want to convert a research paper into a podcast, it does it instantly and it's fun to listen to.

baq

I’ve been giving the same tasks to claude 4 and gemini 2.5 this week and gemini provided correct solutions and claude didn’t. These weren’t hard tasks either, they were e.g. comparing sql queries before/after rewrite - Gemini found legitimate issues where claude said all is ok.

Szpadel

In my experience this highly depends on the case. For some problems Gemini crushed it, but on the next one it got stuck and couldn't figure out a simple bug.

The same goes for o3 and Sonnet (I haven't tested 4.0 enough yet to have an opinion).

I feel that we need better parallel evaluation support, where you could run all the top models and decide which one provided the best solution.

varunneal

Have you tried o3 on those problems? I've found o3 to be much more impressive than Opus 4 for all of my use cases.

johnfn

To be honest, I haven't, because the "This model is extremely expensive" popup on Cursor makes me a bit anxious - but given the accolades here I'll have to give it a shot.

cwbriscoe

I haven't tried all of the favorites, just what is available with Jetbrains AI, but I can say that Gemini 2.5 is very good with Go. I guess that makes sense in a way.

zamadatix

I think the only way to be particularly impressed with new leading models lately is to hold the opinion that all of the benchmarks are inaccurate and/or irrelevant, and that it's in vibes/anecdotes where the model is really light-years ahead. Otherwise you look at the numbers on e.g. lmarena and see it's claiming a ~16% preference win rate for gpt-3.5-turbo from November of 2023 over this new world-leading model from Google.

johnfn

Not sure I follow - Gemini has ELO 1470, GPT3.5-turbo is 1206, which is an 86% win rate. https://chatgpt.com/share/6841f69d-b2ec-800c-9f8c-3e802ebbc0...

zamadatix

gpt-3.5-turbo-1106 from November 2023 was 1170, 1206 is for the March variant.

Change that and you get ~84%; flip the order and the win rate of GPT-3.5 is ~16%. I.e., the point is that a two-year-old model still wins far too often to be excited about each new top model over the last two years, not that the two-year-old model is better.
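For reference, here's a minimal sketch of the standard Elo expected-score formula these win-rate figures appear to come from (lmarena's exact methodology may differ, so treat the outputs as approximate):

    # Standard Elo expected score: probability that player A beats player B
    def elo_win_probability(rating_a: float, rating_b: float) -> float:
        return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

    # Gemini (~1470) vs. gpt-3.5-turbo-1106 from November 2023 (~1170)
    print(elo_win_probability(1470, 1170))  # ~0.85: the newer model's expected win rate
    print(elo_win_probability(1170, 1470))  # ~0.15: the older model's expected win rate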

Workaccount2

People can ask whatever they want on LMArena, so a question like "List some good snacks to bring to work" might elicit a win for an old/tiny/deprecated model simply because it lists the snack the user liked more.

AstroBen

are you saying that's a bad way to judge a model? Not sure why we'd want ones that choose bad snacks


chollida1

I'd start to worry about OpenAI, from a valuation standpoint. The company has some serious competition now and is arguably no longer the leader.

It's going to be interesting to see how easily they can raise more money. Their valuation is already in the $300B range. How much larger can it get, given their relatively paltry revenue at the moment and the rising costs of hardware and electricity?

If the next generation of LLMs needs new data sources, then Facebook and Google seem well positioned there. OpenAI, on the other hand, seems likely to lose the race for proprietary data sets since, unlike those other two, it doesn't have another business that generates such data.

When they were the leader in both research and in user facing applications they certainly deserved their lofty valuation.

What is new money coming into OpenAI getting now?

At even a $300B valuation, a typical Wall Street analyst would want to value them at 2x sales, which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

Or at an extremely lofty P/E ratio of, say, 100, that would be $3B in annual earnings that analysts would have to expect to double each year for the next 10ish years, à la AMZN in the 2000s, to justify this valuation.

They seem to have boxed themselves into a corner where it will be painful to go public, assuming they can ever figure out the nonprofit/profit issue their company has.

Congrats to Google here, they have done great work and look like they'll be one of the biggest winners of the AI race.

jstummbillig

There is some serious confusion about the strength of OpenAI's position.

"chatgpt" is a verb. People have no idea what claude or gemini are, and they will not be interested in it, unless something absolutely fantastic happens. Being a little better will do absolutely nothing to convince normal people to change product (the little moat that ChatGPT has simply by virtue of chat history is probably enough from a convenience standpoint, add memories and no super obvious path to export/import either and you are done here).

All that OpenAI would have to do, to easily be worth their valuation eventually, is to optimize and not become offensively bad to their, what, 500 million active users. And, if we assume the current paradigm that everyone is working with is here to stay, why would they? Instead of leading (as they have done so far, for the most part) they can at any point simply do what others have resorted to successfully and copy with a slight delay. People won't care.

aeyes

Google has a text input box on google.com, as soon as this gives similar responses there is no need for the average user to use ChatGPT anymore.

I already see lots of normal people share screenshots of the AI Overview responses.

jstummbillig

You are skipping over the part where you need to bring normal people, especially young normal people, back to google.com for them to see anything at all on google.com. Hundreds of millions of them don't go there anymore.

paxys

> as soon as this gives similar responses

And when is that going to be? Google clearly has the ability to convert google.com into a ChatGPT clone today if they wanted to. They already have a state of the art model. They have a dozen different AI assistants that no one uses. They have a pointless AI summary on top of search results that returns garbage data 99% of the time. It's been 3+ years and it is clear now that the company is simply too scared to rock the boat and disrupt its search revenue. There is zero appetite for risk, and soon it'll be too late to act.

askafriend

As the other poster mentioned, young people are not going there. What happens when they grow up?

candiddevmike

ChatGPT is going to be Kleenex'd. They wasted their first mover advantage. Replace ChatGPT's interface with any other LLM and most users won't be able to tell the difference.

ComplexSystems

"People have no idea what claude or gemini are"

One well-placed ad campaign could easily change all that. Doesn't hurt that Google can bundle Gemini into Android.

jstummbillig

If it were that simple to sway markets through marketing, we would see Pepsi/Coca-Cola or McDonalds/BurgerKing swing like crazy all the time from "one well-placed ad campaign" to the next. We do not.

chollida1

Chatgpt has no moat of any kind though.

I can switch tomorrow to use gemini or grok or any other llm, and I have, with zero switching cost.

That means one stumble on the next foundational model and their market share drops in half in like 2 months.

Now the same is true for the other llms as well.

potatolicious

I think this pretty substantially overstates ChatGPT's stickiness. Just because something is widely (if not universally) known doesn't mean it's universally used, or that such usage is sticky.

For example, I had occasion to chat with a relative who's still in high school recently, and was curious what the situation was in their classrooms re: AI.

tl;dr: LLM use is basically universal, but ChatGPT is not the favored tool. The favored tools are LLMs/apps specifically marketed as study/homework aids.

It seems like the market is fine with seeking specific LLMs for specific kinds of tasks, as opposed to some omni-LLM one-stop shop that does everything. The market has already, and rapidly, moved beyond ChatGPT.

Not to mention I am willing to bet that Gemini has radically more usage than OpenAI's models simply by virtue of being plugged into Google Search. There are distribution effects, I just don't think OpenAI has the strongest position!

I think OpenAI has some first-mover advantage, I just don't think it's anywhere near as durable (nor as large) as you're making it out to be.

lizardking

Xerox was a verb too

PantaloonFlames

> At even a $300B valuation, a typical Wall Street analyst would want to value them at 2x sales, which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

Oops, I think you may have flipped the numerator and the denominator there, if I'm understanding you. A valuation of 300B, if 2x sales, would imply 150B in sales.

Probably your point still stands.

jadbox

Currently I only find OpenAI to be clearly better for image generation: like illustrations, comics, or photo editing for home project ideation.

bufferoverflow

And open-source Flux.1 Kontext is already better than it.

energy123

Even if they're winning the AI race, their search business is still going to be cannibalized, and it's unclear if they'll be able to extract any economic rents from AI thanks to market competition. Of course they have no choice but to compete, but they probably would have preferred the pre-AI status quo of unquestioned monopoly and eyeballs on ads.

xmprt

Historically, every company has failed by not adapting to new technologies and trying to protect their core business (eg. Kodak, Blockbuster, Blackberry, Intel, etc). I applaud Google for going against their instincts and actively trying to disrupt their cash cow in order to gain an advantage in the AI race.

orionsbelt

I think it’s too early to say they are not the leader given they have o3 pro and GPT 5 coming out within the next month or two. Only if those are not impressive would I start to consider that they have lost their edge.

Although it does feel likely that at minimum, they are neck and neck with Google and others.

ed_mercer

Source for gpt 5 coming out soon?

orionsbelt

https://www.reddit.com/r/singularity/comments/1l1fi7a/gpt5_i...

There’s been other stuff from Sam Altman that puts it around this summer, so even if it gets delayed past July, it seems pretty clear it’s coming within the next few months.

sebzim4500

> At even a $300B valuation, a typical Wall Street analyst would want to value them at 2x sales, which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

What? Apple has a revenue of 400B and a market cap of 3T

Rudybega

I think OpenAI has projected 12.7B in revenue this year and 29.4B in 2026.

Edit: I am dumb, ignore the second half of my post.

eamag

isn't P/E about earnings, not revenue?

Rudybega

You are correct. I need some coffee.

raincole

> At even a $300B valuation, a typical Wall Street analyst would want to value them at 2x sales, which would mean they'd expect OpenAI to have $600B in annual sales to account for this valuation when they go public.

Even Google doesn't have $600B revenue. Sorry, it sounds like numbers pulled from someone's rear.

vthallam

As if 3 different preview versions of the same model weren't confusing enough, the last two dates are 05-06 and 06-05. They could have held off for a day :)

tomComb

Since those days are ambiguous anyway, they would have had to hold off until the 13th.

In Canada, a third of the dates we see are British, and another third are American, so it’s really confusing. Thankfully y-m-d is now a legal format and seems to be gaining ground.

layer8

> they would have had to hold off until the 13th.

06-06 is unambiguously after 05-06 regardless of date format.

Sammi

The problem is that I mentally just panic and abort without even trying when I see 06-06 and 05-06. The ambiguity just flips my brain off.

dist-epoch

> the last two dates are 05-06 and 06-05

they are clearly trolling OpenAI's 4o and o4 models.

oezi

Don't repeat the same mistake if you want to troll somebody.

It makes you look even more stupid.

fragmede

ChatGPT itself suggests better names than that!

UncleOxidant

At what point will they move from Gemini 2.5 pro to Gemini 2.6 pro? I'd guess Gemini 3 will be a larger model.


declan_roberts

Engineers are surprisingly bad at naming things!

jacob019

I rather like date codes as versions.

wiradikusuma

I have two issues with Gemini that I don't experience with Claude: 1. It RENAMES VARIABLES even in places I don't tell it to change (I pass them just as context), and 2. sometimes it's missing closing square brackets.

Sure, I'm a lazy bum: I call the variable "json" instead of "jsonStringForX", but it's contextual (within a closure or function), and while I appreciate the feedback, it makes reviewing the changes difficult (too much noise).

xtracto

I have a very clear example of Gemini getting it wrong:

For code like this, it keeps changing processing_class=tokenizer to tokenizer=tokenizer, even though the parameter was renamed, and even after adding the all-caps comment.

    # Set up the SFTTrainer
    print("Setting up SFTTrainer...")
    trainer = SFTTrainer(
        model=model,
        train_dataset=train_dataset,
        args=sft_config,
        processing_class=tokenizer,  # DO NOT CHANGE. THIS IS NOW THE CORRECT PROPERTY NAME
    )
    print("SFTTrainer ready.")
I haven't tried with this latest version, but the 05-06 pro still did it wrong.

diggan

Do you have an instruction in the system prompt to not edit lines that have comments saying not to edit them? That happened to me too, where code comments were ignored, and adding an instruction about actually following code comments helped. But different models, so YMMV.
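If it helps, here's a rough sketch (my own, with made-up prompt wording) of how such an instruction could be baked into the system prompt when calling a chat-style API; the message shape is the common OpenAI-style list, so adapt it to whatever SDK you actually use:

    # Hypothetical system prompt telling the model to respect "DO NOT CHANGE" comments
    SYSTEM_PROMPT = (
        "You are a code assistant. Treat any line containing a comment such as "
        "'DO NOT CHANGE' as read-only context: never rename, rewrite, or remove it."
    )

    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Refactor the training script, keeping the SFTTrainer call intact."},
    ]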

AaronAPU

I find o1-pro, which nobody ever mentions, is in the top spot along with Gemini. But Gemini is an absolute mess to work with because it constantly adds tons of comments and changes unrelated code.

It is worth it sometimes, but usually I use Gemini to explore ideas and then have o1-pro spit out a perfect solution, ready to diff, test, and merge.

danielbln

Gemini loves to add idiotic non-functional inline comments.

"# Added this function" "# Changed this to fix the issue"

No, I know, I was there! This is what commit messages are for, not comments that are only relevant in one PR.

macNchz

I love when I ask it to remove things and it doesn't want to truly let go, so it leaves a comment instead:

   # Removed iterMod variable here because it is no longer needed.
It's like it spent too much time hanging out with an engineer who doesn't trust version control and prefers to just comment everything out.

Still enjoying Gemini 2.5 Pro more than Claude Sonnet these days, though, purely on vibes.

oezi

And it sure loves removing your carefully inserted comments for human readers.

sweetjuly

It feels like I'm negotiating with a toddler. If I say nothing, it adds useless comments everywhere. If I tell it not to add comments, it deletes all of my comments. Tell it to put the comments back, and it still throws away half of my comments and rewrites the rest in a less precise way.

Workaccount2

I think it is likely that the comments are more for the model than for the user. I would not be even slightly surprised if verbosely commented output outperformed lightly commented output.

xmprt

On the other hand, I'm skeptical if that has any impact because these models have thinking tokens where they can put all those comments and attention shouldn't care about how close the tokens are as long as they're within the context window.

PantaloonFlames

Have you tried modifying the system instructions to get it to stop doing that?

93po

I've noticed with ChatGPT that it will 100% ignore certain instructions, and I wonder if it's just an LLM thing. For example, I can scream and yell in caps at ChatGPT to not use em or en dashes, and if anything that makes it use them even more. I've literally never once made it successfully not use them, even when it ignored it the first time and my follow-up is "output the same thing again but NO EM or EN DASHES!"

I've not tested this thoroughly; it's just my anecdotal experience over like a dozen attempts.

creesch

There are some things so ubiquitous in the training data that it is really difficult to tell models not to do them, simply because it is so ingrained in their core training. Em dashes are apparently one of those things.

It's something I read a little while ago in a larger article, but I can't remember which article it was.

tacotime

I wonder if using the character itself in the directions, instead of the name for the character, might help with this.

Something like, "Forbidden character list: [—, –]" or "Do NOT use the characters '—' or '–' in any of your output"

EnPissant

I have had a 95% success rate telling it not to use em dashes or semicolons.

hu3

I pay for both ChatGPT Plus and Gemini Pro.

I'm thinking of cancelling my ChatGPT subscription because I keep hitting rate limits.

Meanwhile I have yet to hit any rate limit with Gemini/AI Studio.

HenriNext

AI Studio uses your API account behind the scenes, and it is subject to normal API limits. When you sign up for AI Studio, it creates a Google Cloud free-tier project with a "gen-lang-client-" prefix behind the scenes. You can link a billing account at the bottom of the "Get an API key" page.

Also note that AI studio via default free tier API access doesn't seem to fall within "commercial use" in Google's terms of service, which would mean that your prompts can be reviewed by humans and used for training. All info AFAIK.

sysoleg

> AI Studio uses your API account behind the scenes

This is not true for the Gemini 2.5 Pro Preview model, at least. Although this model API is not available on the Free Tier [1], you can still use it on AI Studio.

[1] https://ai.google.dev/gemini-api/docs/pricing

PantaloonFlames

> AI studio via default free tier API access doesn't seem to fall within "commercial use" in Google's terms of service, which would mean that your prompts can be reviewed by humans and used for training. All info AFAIK.

Seconded.

oofbaroomf

I think AI Studio uses the API, so rate limits are extremely high and almost impossible for a normal human to reach if using the paid preview model.

staticman2

As far as I know AI Studio is always free, even on paid accounts, and you can definitely hit the rate limit.

Squarex

I much prefer Gemini over ChatGPT, but they recently introduced a limit of 100 messages a day on the Pro plan :( AI Studio is probably still fine.

MisterPea

I've heard it's only on mobile? I was using Gemini on desktop for work for at least 6 hours yesterday (definitely over 100 back-and-forths) and did not get hit with any rate limits.

Either way, Google's transparency with this is very poor - I saw the limits from a VP's tweet

fermentation

Is there a reason not to just use the API through openrouter or something?

abraxas

I found all the previous Gemini models somewhat inferior even compared to Claude 3.7 Sonnet (and much worse than 4) as my coding assistants. I'm keeping an open mind but also not rushing to try this one until some evaluations roll in. I'm actually baffled that the internet at large seems to be very pumped about Gemini but it's not reflective of my personal experience. Not to be that tinfoil hat guy but I smell at least a bit of astroturf activity around Gemini.

verall

I think it's just very dependent on what you're doing. Claude 3.5/3.7 Sonnet (thinking or not) were just absolutely terrible at almost anything I asked of it (C/C++/Make/CMake). Like constantly giving wrong facts, generating code that could never work, hallucinating syntax and APIs, thinking about something then concluding the opposite, etc. Gemini 2.5-pro and o3 (even old o1-preview, o1-mini) were miles better. I haven't used Claude 4 yet.

But everyone is using them for different things and it doesn't always generalize. Maybe Claude was great at typescript or ruby or something else I don't do. But for some of us, it definitely was not astroturf for Gemini. My whole team was talking about how much better it was.

bachmeier

> I'm actually baffled that the internet at large seems to be very pumped about Gemini but it's not reflective of my personal experience. Not to be that tinfoil hat guy but I smell at least a bit of astroturf activity around Gemini.

I haven't used Claude, but Gemini has always returned better answers to general questions relative to ChatGPT or Copilot. My impression, which could be wrong, is that Gemini is better in situations that are a substitute for search. How do I do this on the command line, tell me about this product, etc. all give better results, sometimes much better, on Gemini.

praveer13

I’ve honestly had consistently the opposite experiences for general questions. Also for images, Gemini just hallucinates crazily. ChatGPT even on free tier is giving perfectly correct answers, and I’m on Gemini pro. I canceled it yesterday because of this

dist-epoch

You should try Grok then. It's by far the best when searching is required, especially if you enable DeepSearch.

Take8435

I don't really want to use the X platform. What's the best alternative? Claude?

strobe

I'm switching a lot between Sonnet and Gemini in Aider - for some reason, some of my coding problems only one of the models is capable of solving, and I don't see any pattern that would tell me upfront which one I should use for a specific need.

3abiton

> I found all the previous Gemini models somewhat inferior even compared to Claude 3.7 Sonnet (and much worse than 4) as my coding assistants.

What are your usecases? Really not my experience, Claude disappoints in Data Science and complex ETL requests in python. O3 on the other hand really is phenomenal.

abraxas

Backend Python code, Postgres database. Front end: React/Next.js. A very common stack in 2025. Using LLMs in assist mode (not as agents) for enhancing an existing code base that weighs in at under 1MM LoC. So not a greenfield project anymore, but not a huge amount of legacy cruft either.

3abiton

I still have the Claude subscription, so I will take a look again and see.

Fergusonb

I think they are fairly interchangeable. In Roo Code, Claude uses the tools better, but I prefer Gemini's coding style and brevity (except for comments; it loves to write comments). Sometimes I mix and match if one fails or pursues a path I don't like.

vikramkr

I mean, they're cheaper models, they aren't as much of a pain about rate limiting as Claude was, and they have a pretty solid deep research offering without restrictive usage limits. IDK how it is for long-running agentic stuff (I would be surprised if it was anywhere near the other models), but for a general ChatGPT competitor it doesn't matter if it's not as good as Opus 4 if it's way cheaper and won't use up your usage limit.

nprateem

Gemini sucks for its stupid comment verbosity like others have mentioned but wins on price to value.

tiahura

As a lawyer, Claude 4 is the best writer, and usually, but not always, the leader in legal reasoning. That said, o3 often grinds out the best response, and Gemini seems to be the most exhaustive researcher.

unpwn

I feel like instead of constantly releasing these preview versions with different dates attached they should just add a patch version and bump that.

impulser_

They can't because if someone has built something around that version they don't want to replace that model with a new model that could provide different results.

jfoster

In what way are dates better than integers at preventing that kind of mistake?

dist-epoch

Except Google did exactly that with the previous release, where they silently redirected 03-25 requests to 05-06.

nsriv

Looking at you, Anthropic. 4.0 is markedly different from 3.7 in my experience.

Aeolun

The model name is completely different? How do you accidentally switch from 3.7 to 4.0?

jcuenod

82.2 on Aider

Still actually falling behind the official scores for o3 high. https://aider.chat/docs/leaderboards/

sottol

Does 82.2 correspond to the "Percent correct" of the other models?

Not sure if OpenAI has updated o3, but it looks like "pure" o3 (high) has a score of 79.6% in the linked table, while the "o3 (high) + gpt-4.1" combo has the highest score of 82.7%.

The previous Gemini 2.5 Pro Preview 05-06 (yea, not current 06-05!) was at 76.9%.

That looks like a pretty nice bump!

But either way, these Aider benchmarks seem to be the most useful/trustworthy benchmarks currently, and really the only ones I'm paying attention to.

vessenes

But so.much.cheaper.and.faster. Pretty amazing.

hobofan

That's the older 05-06 preview, not the new one from today.

energy123

They knew that. The 82.2 comes from the new benchmarks in the OP not from the aider url. The aider url was supplied for comparison.

hobofan

Ah, thanks for clearing that up!

Workaccount2

Apparently 06-05 bridges the gap that people were feeling between the 03-25 and 05-06 release[1]

[1]https://nitter.net/OfficialLoganK/status/1930657743251349854...

unsupp0rted

Curious to see how this compares to Claude 4 Sonnet in code.

This table seems to indicate it's markedly worse?

https://blog.google/products/gemini/gemini-2-5-pro-latest-pr...

gundmc

Almost all of those benchmarks are coding related. It looks like SWE-Bench is the only one where Claude is higher. Hard to say which benchmark is most representative of actual work. The community seems to like Aider Polyglot from what I've seen

Alifatisk

Finally Google is advertising their AI Studio; it's a shame they didn't push that beautiful app before.

zone411

Improves on the Extended NYT Connections benchmark compared to both Gemini 2.5 Pro Exp (03-25) and Gemini 2.5 Pro Preview (05-06), scoring 58.7. The decline observed between 03-25 and 05-06 has been reversed - https://github.com/lechmazur/nyt-connections/.