
Gemini 3 Flash: frontier intelligence built for speed

Obertr

At this point I'm starting to believe OAI is very much behind in the model race, and it can't be reversed.

The image model they released is much worse than Nano Banana Pro; the Ghibli moment did not happen.

Their GPT 5.2 is obviously overfit on benchmarks; that's the consensus among the developers and friends I know. So Opus 4.5 stays on top when it comes to coding.

The weight of Google's ad money, plus the company's general direction and Brin's founder instincts, brought the massive giant back to life. None of my companies' workflows run on OAI's GPT right now. Even though we love their agent SDK, after the Claude Agent SDK it feels like peanuts.

avazhi

"At this point in time I start to believe OAI is very much behind on the models race and it can't be reversed"

This has been true for at least 4 months and yeah, based on how these things scale and also Google's capital + in-house hardware advantages, it's probably insurmountable.

mmaunder

Yeah, the only thing standing in Google's way is Google. And it's the easy stuff: sensible billing models, easy-to-use docs and consoles that make sense and don't require 20 hours to learn and navigate, and the slew of basic usability and model-API-interaction bugs in the Gemini CLI. The only differentiator OpenAI still has is polish.

dieortin

Is there anything pointing to Brin having anything to do with Google's turnaround in AI? I hear a lot of people saying this, but no one explaining why they believe it.

random9749832

This is obviously trained on 3 Pro outputs for benchmaxxing.

samyok

Don’t let the “flash” name fool you, this is an amazing model.

I have been playing with it for the past few weeks, and it's genuinely my new favorite. It's so fast, and it has such vast world knowledge, that it's more performant than Claude Opus 4.5 or GPT 5.2 extra high, for a fraction (basically an order of magnitude less!!) of the inference time and price.

thecupisblue

Oh wow. I recently tried the 3 Pro preview and it was too slow for me.

After reading your comment I ran my product benchmark against 2.5 Flash, 2.5 Pro and 3.0 Flash.

The results are better AND the response times have stayed the same. What an insane gain, especially considering the price compared to 2.5 Pro. I'm about to get much better results for a third of the price. Not sure what magic Google did here, but I would love to read a more technical deep dive comparing what they do differently in the Pro and Flash models to achieve such performance.

Also wondering: how did you get early access? I use the Gemini API quite a lot and have quite a nice internal benchmark suite for it, so I'd love to toy with the new models as they come out.
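
For anyone who wants to run something similar, a minimal version of such a timing harness with the google-genai SDK might look like this (just a sketch; the model IDs and prompt are illustrative placeholders, not my actual suite):

    # Minimal sketch of a latency harness using the google-genai SDK.
    # Model IDs and the prompt are illustrative placeholders.
    import time
    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    MODELS = ["gemini-2.5-flash", "gemini-2.5-pro", "gemini-3-flash-preview"]
    PROMPT = "Summarize the tradeoffs between TCP and UDP in three bullets."

    for model in MODELS:
        start = time.perf_counter()
        response = client.models.generate_content(model=model, contents=PROMPT)
        elapsed = time.perf_counter() - start
        print(f"{model}: {elapsed:.2f}s, {len(response.text or '')} chars")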

mmaunder

Thanks. I'm having it walk a hardcore SDR signal chain right now... oh damn, it just finished. The blog post makes it clear this isn't just some 'lite' model: you get low latency and cognitive performance. Really appreciate you amplifying that.

unsupp0rted

How good is it for coding, relative to recent frontier models like GPT 5.x, Sonnet 4.x, etc?

esafak

What are you using it for and what were you using before?

epolanski

Gemini 2.0 Flash was already good for some of my tasks a long time ago.

freedomben

Cool! I've been using 2.5 Flash and it is pretty bad: one out of five answers it gives will be a lie. Hopefully 3 is better.

samyok

Did you try with the grounding tool? Turning it on solved this problem for me.
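
For reference, grounding is a one-line config change in the google-genai SDK. A sketch (the model ID is a placeholder):

    # Sketch: enabling the Google Search grounding tool so answers are
    # backed by live search results instead of parametric memory alone.
    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.5-flash",  # placeholder; any grounding-capable model
        contents="Who won the most recent Tour de France?",
        config=types.GenerateContentConfig(
            tools=[types.Tool(google_search=types.GoogleSearch())]
        ),
    )
    print(response.text)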

Davidzheng

What if the lie is a logical-deduction error, not a fact-retrieval error?

jauntywundrkind

Just to point this out: many of these frontier models' costs aren't far from two orders of magnitude more than what DeepSeek charges. It doesn't compare the same, no, but with coaxing I find it to be a pretty capable, competent coding model, able to answer a lot of general queries quite satisfactorily (though if it's a short session, why economize?). It's $0.28/M in, $0.42/M out. Opus 4.5 is $5/$25 (17x/60x).

I've been playing around with other models recently (Kimi, GPT Codex, Qwen, others) to try to better appreciate the difference. I knew there was a big price difference, but watching myself feed dollars into the machine rather than nickels has also instilled in me quite the reverse appreciation.

I only assume "if you're not getting charged, you are the product" has to be somewhat in play here. But when working on open source code, I don't mind.

happyopossum

Two orders of magnitude would imply that these models cost $28/m in and $42/m out. Nothing is even close to that.
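
Sanity-checking with just the prices quoted above:

    # Ratios from the prices quoted in the parent comments, per million tokens.
    deepseek_in, deepseek_out = 0.28, 0.42
    opus_in, opus_out = 5.00, 25.00
    print(opus_in / deepseek_in)    # ~17.9x on input
    print(opus_out / deepseek_out)  # ~59.5x on output
    # A full two orders of magnitude would be $28/M in and $42/M out.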

Sincere6066

[flagged]

meetpateltech

simonw

For anyone from the Gemini team reading this: these links should all be prominent in the announcement posts. I always have to hunt around for them!

meetpateltech

Google actually does something similar for major releases: they publish a dedicated collection page with all the related links.

For example, the Gemini 3 Pro collection: https://blog.google/products/gemini/gemini-3-collection/

But having everything linked at the bottom of the announcement post itself would be really great too!

minimaxir

Documentation for Gemini 3 Flash in particular: https://ai.google.dev/gemini-api/docs/gemini-3

__jl__

This is awesome. No preview release either, which is great for production.

They are pushing the prices higher with each release, though: API pricing is up to $0.50/M for input and $3.00/M for output.

For comparison:

Gemini 3.0 Flash: $0.50/M for input and $3.00/M for output

Gemini 2.5 Flash: $0.30/M for input and $2.50/M for output

Gemini 2.0 Flash: $0.15/M for input and $0.60/M for output

Gemini 1.5 Flash: $0.075/M for input and $0.30/M for output (after price drop)

Gemini 3.0 Pro: $2.00/M for input and $12/M for output

Gemini 2.5 Pro: $1.25/M for input and $10/M for output

Gemini 1.5 Pro: $1.25/M for input and $5/M for output

I think image input pricing went up even more.

Correction: It is a preview model...
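
To make that list concrete, a rough request-cost helper (prices hardcoded from this comment; treat them as a snapshot, not a live source):

    # Back-of-envelope request cost from the per-million-token prices above.
    # Prices are a snapshot from this thread, not a live source of truth.
    PRICES = {  # model: (input $/M, output $/M)
        "gemini-3-flash": (0.50, 3.00),
        "gemini-2.5-flash": (0.30, 2.50),
        "gemini-3-pro": (2.00, 12.00),
        "gemini-2.5-pro": (1.25, 10.00),
    }

    def cost(model: str, input_tokens: int, output_tokens: int) -> float:
        in_price, out_price = PRICES[model]
        return (input_tokens * in_price + output_tokens * out_price) / 1e6

    for model in PRICES:  # e.g. a 50k-in / 2k-out request on each tier
        print(f"{model}: ${cost(model, 50_000, 2_000):.4f}")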

sunaookami

This is a preview release.

srameshc

Thanks, that was a great breakdown of the costs. I had just assumed it was the same pricing. The pricing probably comes from the confidence and buzz around Gemini 3.0 as one of the best-performing models. But competition is hot in this area, and we're not far from getting similarly performing models at a cheaper price.

mips_avatar

I'm more curious how Gemini 3 Flash Lite performs and is priced when it comes out, because it may be that for most non-coding tasks the distinction isn't between Pro and Flash but between Flash and Flash Lite.

uluyol

Are these the current prices or the prices at the time the models were released?

__jl__

Mostly at the time of release except for 1.5 Flash which got a price drop in Aug 2024.

Google has been discontinuing older models after a transition period of several months, so I would expect the same for the 2.5 models. But that process only starts when the release versions of the 3 models are out (Pro and Flash are in preview right now).

YetAnotherNick

For comparison, GPT-5 mini is $0.25/M for input and $2.00/M for output, so double the price for input and 50% higher for output.

AuthError

Flash is closer to Sonnet than to the GPT minis, though.

fariszr

These flash models keep getting more expensive with every release.

Is there an OSS model that's better than 2.0 Flash, with similar pricing and speed and a 1M context window?

Edit: this is not the typical Flash model; it's actually insane value if the benchmarks match real-world usage.

> Gemini 3 Flash achieves a score of 78%, outperforming not only the 2.5 series, but also Gemini 3 Pro. It strikes an ideal balance for agentic coding, production-ready systems and responsive interactive applications.

The replacement for the old Flash models will probably be 3.0 Flash Lite, then.

aoeusnth1

I think it's good: they're raising the size (and price) of Flash a bit and trying to position it as an actually useful coding/reasoning model. There's always Lite for people who want dirt-cheap prices and don't care about quality at all.

thecupisblue

Yes, but 3.0 Flash is cheaper, faster and better than 2.5 Pro.

So if 2.5 Pro was good for your use case, you just got a better model for about a third of the price. It might hurt the wallet a bit more if you currently use 2.5 Flash and want an upgrade, which is fair tbh.

fullstackwife

The cost of end-to-end task resolution should be lower: even if the single-inference cost is higher, you need fewer loops to solve a problem now.

fariszr

Sure, but for simple tasks that require a large context window, aka the typical use case for 2.0 Flash, it's still significantly more expensive.

simonsarris

Even before this release, the tools (for me: Claude Code, and Gemini for other stuff) had reached a "good enough" plateau, which means any other company is going to have a hard time making me (and, I think, soon most users) want to switch. Unless a new release from a different company delivers a real paradigm shift, the incumbents are simply sufficient. This was not true in 2023/2024 IMO.

With this release the "good enough" and "cheap enough" intersect so hard that I wonder if this is an existential threat to those other companies.

bgirard

Why wouldn't you switch? The cost to switch is near zero for me. Some tools have built-in model selectors, and the direct CLI/IDE plug-ins have practically the same UI.

azuanrb

Not OP, but I feel the same way. Cost is just one of the factors. I'm used to the Claude Code UX, and my CLAUDE.md works well with my workflow too. Unless there's a significant improvement, changing to new models every few months is going to hurt me more.

bgirard

I used to think this way. But I moved to AGENTS.md. Now I use the different UIs as mental context separation: Codex is working on Feature A, Gemini on Feature B, Claude on Feature C. It has become a feature.

theLiminator

For me, the last wave of models finally started delivering on their agentic coding promises.

orourke

This has been my experience exactly. Even over just the last few weeks I’ve noticed a dramatic drop in having to undo what the agents have done.

calflegal

I asked a similar question yesterday:

https://news.ycombinator.com/item?id=46290797

catigula

Correct. Opus 4.5 'solved' software engineering. What more do I need? Businesses need uncapped intelligence, and that is a very high bar. Individuals often don't.

gaigalas

If Opus is one-size-fits-all, then why does Claude keep the other series? (Rhetorical.)

Opus and Sonnet are slower than Haiku. For lots of less sophisticated tasks, you benefit from the speed.

All vendors do this. You need smaller models that you can rapid-fire for lots of other reasons than vibe coding.

Personally, I actually use the smaller models more than the sophisticated ones. Lots of small automations.

nprateem

But for me, the previous models were routinely wrong time-wasters that added no overall speed increase once you took into account the lottery of whether they'd be correct.

szundi

[dead]

mmaunder

I think about what would be most terrifying to Anthropic and OpenAI, i.e., the absolute scariest thing that Google could do. I think this is it: release low-latency, low-priced models with high cognitive performance and a big context window, especially in the coding space, because that is direct, immediate, very high ROI for the customer.

Now, imagine for a moment they had also vertically integrated the hardware to do this.

avazhi

"Now, imagine for a moment they had also vertically integrated the hardware to do this."

Then you realise you aren't imagining it.

zhyder

Glad to see the big improvement on the SimpleQA Verified benchmark (28% -> 69%), which is meant to measure factuality (built-in, i.e. without adding grounding resources). That's one benchmark where all models seemed to have low scores until recently. Can't wait to see a model go over 90%... then it will be years until the competition is over the number of 9s on such a factuality benchmark, but that'd be glorious.

caminanteblanco

Does anyone else understand what the difference is between Gemini 3 'Thinking' and 'Pro'? Thinking "Solves complex problems" and Pro "Thinks longer for advanced math & code".

I assume these are just different reasoning levels for Gemini 3, but I can't find any mention of there being two versions anywhere, and the API doesn't even mention the Thinking/Pro dichotomy.

flakiness

It seems:

   - "Thinking" is Gemini 3 Flash with higher "thinking_level"
   - Prop is Gemini 3 Pro. It doesn't mention "thinking_level" but I assume it is set to high-ish.
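
If that mapping is right, the API-side knob would look something like this (a sketch based on the Gemini 3 docs linked elsewhere in the thread; exact field names may differ across SDK versions, and the model ID is a placeholder):

    # Sketch: dialing reasoning depth via thinking_level, per the
    # Gemini 3 API docs; field names may vary across SDK versions.
    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-3-flash-preview",  # placeholder model ID
        contents="Prove that the sum of two odd integers is even.",
        config=types.GenerateContentConfig(
            thinking_config=types.ThinkingConfig(thinking_level="high")  # or "low"
        ),
    )
    print(response.text)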

peheje

I think:

Fast = Gemini 3 Flash without thinking (or a very low thinking budget)

Thinking = Gemini 3 Flash with a high thinking budget

Pro = Gemini 3 Pro with thinking

sunaookami

It's this, yes: https://x.com/joshwoodward/status/2001350002975850520

>Fast = 3 Flash

>Thinking = 3 Flash (with thinking)

>Pro = 3 Pro (with thinking)

lysace

Really stupid question: How is Gemini-like 'thinking' separate from artificial general intelligence (AGI)?

When I ask Gemini 3 Flash this question, the answer is vague but agency comes up a lot. Gemini thinking is always triggered by a query.

This seems like a higher-level programming issue to me. Turn it into a loop. Keep the context. Those two things make it costly.
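
A minimal sketch of that "loop + kept context" idea (purely illustrative; the chat object carries the context between turns, and a fixed step budget stands in for a real stopping rule):

    # Sketch of "turn it into a loop, keep the context": re-prompt the model
    # with its own prior output until some budget or stopping rule is hit.
    from google import genai

    client = genai.Client()
    chat = client.chats.create(model="gemini-2.5-flash")  # chat keeps history

    message = "Plan a weekend study schedule for learning Rust."
    for step in range(3):  # fixed budget instead of a real stopping criterion
        reply = chat.send_message(message)
        print(f"step {step}: {(reply.text or '')[:80]}...")
        message = "Critique your last answer and improve it."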

primaprashant

Pricing is $0.50 / $3.00 per million input/output tokens; 2.5 Flash was $0.30 / $2.50. That's a 66% increase in input-token pricing and a 20% increase in output-token pricing.

For comparison, from 2.5 Pro ($1.25 / $10) to 3 Pro ($2 / $12), there was a 60% increase in input-token and a 20% increase in output-token pricing.

simonw

Calculating price increases is made more complex by the difference in token usage. From https://blog.google/products/gemini/gemini-3-flash/ :

> Gemini 3 Flash is able to modulate how much it thinks. It may think longer for more complex use cases, but it also uses 30% fewer tokens on average than 2.5 Pro.
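
A quick worked example of that effect, using the list prices quoted elsewhere in this thread:

    # If a task that produced 1M output tokens on 2.5 Pro needs ~30% fewer
    # tokens on 3 Flash (per the quoted blog post), the output-side cost is:
    cost_25_pro = 1_000_000 * 10.00 / 1e6  # $10.00 at $10/M
    cost_3_flash = 700_000 * 3.00 / 1e6    # $2.10 at $3/M
    print(cost_25_pro, cost_3_flash)       # 10.0 2.1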

outside2344

I don't want to say OpenAI is toast for general chat AI, but it sure looks like they are toast.

JumpCrisscross

Kara Swisher recently compared OpenAI to Netscape. It’s starting to look apt.