
Making 2.5 Flash and 2.5 Pro GA, and introducing Gemini 2.5 Flash-Lite

simonw

They don't mention it in the post, but it looks like this includes a price increase for the Gemini 2.5 Flash model.

For 2.5 Flash Preview https://web.archive.org/web/20250616024644/https://ai.google...

$0.15/million input text / image / video

$1.00/million audio

Output: $0.60/million non-thinking, $3.50/million thinking

The new prices for Gemini 2.5 Flash ditch the difference between thinking and non-thinking and are now: https://ai.google.dev/gemini-api/docs/pricing

$0.30/million input text / image / video (2x more)

$1.00/million audio (same)

$2.50/million output - significantly more than the old non-thinking price, less than the old thinking price.
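
For a concrete sense of the change, here's a quick back-of-the-envelope comparison in Python (a hypothetical request with 10k input / 2k output text tokens; prices per million tokens as listed above):

    # Hypothetical request: 10k input tokens, 2k output tokens.
    def cost(in_tok, out_tok, in_price, out_price):
        return in_tok / 1e6 * in_price + out_tok / 1e6 * out_price

    print(f"old non-thinking: ${cost(10_000, 2_000, 0.15, 0.60):.5f}")  # $0.00270
    print(f"old thinking:     ${cost(10_000, 2_000, 0.15, 3.50):.5f}")  # $0.00850
    print(f"new unified:      ${cost(10_000, 2_000, 0.30, 2.50):.5f}")  # $0.00800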

Workaccount2

The blog post has more info about the pricing changes

https://developers.googleblog.com/en/gemini-2-5-thinking-mod...

jjani

The real news is that non-thinking output is now 4x more expensive, which they of course carefully avoid mentioning in the blog, only comparing the thinking prices.

How cute they are with their phrasing:

> $2.50 / 1M output tokens (*down from $3.50 output)

Which should be "up from $0.60 (non-thinking)/down from $3.50 (thinking)"

amazingamazing

Is it even possible to get non-thinking output now, though? If not, why would the comparison matter, since the old non-thinking price would be irrelevant?

drift_code

They seem to have just rebranded the non-thinking model as Flash-Lite, so it's less expensive than before.

dangoodmanUT

Good catch, that's a pretty notable change considering this was about to be the GOAT of audio-to-audio

irthomasthomas

"Soon, AI too cheap to meter" "Meantime, price go up".

skybrian

There are a lot more price drops, though.

llm_nerd

Not too long ago, Google was a bit of a joke in AI and their offerings were uncompetitive. For a while, a lot of their preview/beta models had a price of $0.00; they were literally giving them away for free to get people to consider their offerings when building solutions.

As they've become legitimately competitive they have moved towards the pricing of their competitors.

nicce

We have likely seen the cheapest prices already. Once we can’t function without them anymore - go as high as you can!

nico

Hopefully we get more competition and someone willing to undercut the more expensive options

tekno45

"will be too cheap to meter" means we're definitely metering it now.

rudedogg

A cool 2x+ price increase.

And Gemini 2.0 Flash was $0.10/$0.40.

__jl__

1.5 -> 2.0 was a price increase as well (double, I think, and something like 4x for image input)

Now 2.0 -> 2.5 is another hefty price increase.

jjani

4x price increase over preview output for non-thinking.

k8sToGo

You can also see this difference on OpenRouter.

But why is there only a thinking Flash now?

Tiberium

It might be a bit confusing, but there's no "only thinking flash" - it's a single model, and you can turn off thinking if you set thinking budget to 0 in the API request. Previously 2.5 Flash Preview was much cheaper with the thinking budget set to 0, now the price is the same. Of course, with thinking enabled the model will still use far more output tokens than the non-thinking mode.
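
For reference, a minimal sketch of disabling thinking via the thinking budget, using the google-genai Python SDK (field names follow the current docs; treat them as assumptions if the SDK has since changed):

    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment

    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Classify this review as positive or negative: ...",
        config=types.GenerateContentConfig(
            # a budget of 0 turns thinking off entirely for 2.5 Flash
            thinking_config=types.ThinkingConfig(thinking_budget=0),
        ),
    )
    print(response.text)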

hnuser123456

Apparently you can ask 2.5 Flash not to use thinking, but it will still sometimes do it anyway. This has been an issue for months and hasn't been fixed by model updates: https://github.com/google-gemini/cookbook/issues/722

varun_chopra

At one point, when they made Gemini Pro free on AI Studio, Gemini was the model of choice for many people, I believe.

Somehow it's gotten worse since then, and I'm back to using Claude for serious work.

Gemini is like that guy who keeps talking but has no idea what he's actually talking about.

I still use Gemini for brainstorming, though I take its suggestions with several grains of salt. It's also useful for generating prompts that I can then refine and use with Claude.

therealmarv

Not according to the Aider leaderboard: https://aider.chat/docs/leaderboards/

I use only the APIs directly with Aider (so no experience with AI Studio).

My feeling with Claude is that it still performs well with weak prompts; the "taste" is maybe a little better when the direction is somewhat unknown to the prompter.

When the direction is known, I see Gemini 2.5 Pro (with thinking) ahead of Claude, with code that doesn't break. And with o4-mini and o3 I see more "smart" thinking (as if there is a little bit of brain inside these models) at the expense of producing unstable code (Gemini produces more stable code).

I see problems with Claude when complexity increases, and I would put it behind Gemini and o3 in my personal ranking.

So far I've had no reason to go back to Claude since o3-mini was released.

macNchz

Using all of the popular coding models pretty extensively over the past year, I've been having great success with Gemini 2.5 Pro as far as getting working code the first time, instruction following around architectural decisions, and staying on-task. I use Aider and write mostly Python, JS, and shell scripts. I've spent hundreds of dollars on the Claude API over time but have switched almost entirely to Gemini. The API itself is also much more reliable.

My only complaint about 2.5 Pro is around the inane comments it leaves in the code (// Deleted varName here).

ZeWaka

If you use one of the AI static-instructions methods (e.g., .github/copilot-instructions.md) and tell it not to leave useless comments, that seems to solve the issue.

stavros

I just spent $35 for Opus to solve a problem with a hardware side-project (I'm turning an old rotary phone into a meeting handset so I can quit meetings by hanging up, if you must know). It didn't solve the problem, it churned and churned and spent a ton of money.

I was much more satisfied with o3 and Aider. I haven't tried them on this specific problem, but I did quite a bit of work on the same project with them last night. I think I'm being a bit unfair, because what Claude got stuck on seems to be a hard problem, but I don't like how they'll happily consume all my money trying the same things over and over, and never say "yeah, I give up".

alecco

Give them feedback.

willseth

Same experience here. I even built a Gem with an elaborate prompt instructing it how to be concise, but it still gives annoyingly long-winded responses and frequently expands the scope of its answer far beyond the prompt.

theturtletalks

I feel like this is part of the AI playbook now. Launch a really strong, capable model (with expensive inference), and once users think it's SOTA, neuter it so inference is cheaper; most users won't notice.

The same happened with GPT-3.5. It was so good early on and got worse as OpenAI began to cut costs. I feel like when GPT-4.1 was cloaked as Optimus on OpenRouter it was really good, but once it launched, it also got worse.

carlos22

That's the capitalism playbook all along. It's just much faster here because it's just software. But they do it for everything, all the time.

unshavedyak

Yeah, I had similar experiences. At first it felt like it solved complex problems really well, but then I realized I was having trouble steering it for simple things. It was also very verbose.

Overall though my primary concern is the UX, and Claude Code is the UX of choice for me currently.

jasonjmcghee

I have no inside information, but it feels like they quantized it. I've seen patterns that I usually only see in quantized models, like getting stuck repeating a single character indefinitely.

huevosabio

They made it talk like BuzzFeed articles for every single interaction. It's absolutely horrible.

chrismustcode

When I ask it to do something in Cursor, it goes full Sherlock, thinking through every possible outcome.

Claude 4 Sonnet with thinking just has a quick think, then does it.

UncleOxidant

I used to be able to use Gemini Pro free in Cline. Now the API limits are so low that you immediately get messages about needing to top up your wallet, and API queries just don't go through. So I'm back to using DeepSeek R1 free in Cline (though even that eventually stops after a few hours, and you have to wait until the next day for it to work again). It's starting to look like I need to set up a local LLM for coding, which means it's time to seriously upgrade my PC (well, it's been about 10 years, so it was getting to be time anyway).

Workaccount2

By the time you break even on whatever you spend on a decent LLM-capable build, your hardware will be too far behind to run whatever is best locally by then. It feels cheaper, but given the pace of things, unless you are churning an insane number of tokens it probably doesn't make sense. Never mind that local models running on 24 or 48 GB are maybe around Flash-Lite in ability while being slower than SOTA models.

Local models are mostly for hobby and privacy, not really efficiency.
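
A rough sketch of the break-even math (all numbers are assumptions: a hypothetical $3,000 local build against the GA 2.5 Flash output price from this thread, ignoring electricity and input tokens entirely):

    build_cost = 3_000.0     # hypothetical GPU workstation
    api_output_price = 2.50  # $/M output tokens (GA 2.5 Flash)

    tokens = build_cost / api_output_price * 1e6
    print(f"{tokens / 1e9:.1f}B output tokens to break even")  # 1.2B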

FirmwareBurner

I now find Gemini terrible for coding. I gave it my code blocks and told it what to change, and it added tonnes and tonnes of needless extra code plus endless comments. It turned tight code into a papyrus.

ChatGPT is better but tends to be too agreeable, never trying to disagree with what you say even if it's stupid, so you end up shooting yourself in the foot.

Claude seems like the best compromise.

Just my two kopecks.

lvl155

I am very impressed with Gemini and stopped using OpenAI. Sometimes, I ping all three major models on OpenRouter but 90% is on Gemini now. Compare that to 90% ChatGPT last year.

codingwagie

I love to hate on Google, but yeah, their models are really good. The larger context window is huge.

aatd86

Same. For now I have canceled my Claude subscription. Gemini has been catching up.

jbellis

Love to see it, this takes Flash Lite from "don't bother" territory for writing code to potentially useful. (Besides being inexpensive, Flash Lite is fast -- almost always sub-second, to as low as 200ms. Median around 400ms IME.)

Brokk (https://brokk.ai/) currently uses Flash 2.0 (non-Lite) for Quick Edits, we'll evaluate 2.5 Lite now.

ETA: I don't have a use case for a thinking model that is dumber than Flash 2.5, since thinking negates the big speed advantage of small models. Curious what other people use that for.

sethkim

I run a batch inference/LLM data processing service and we do a lot of work around cost and performance profiling of (open-weight) models.

One odd disconnect that still exists in LLM pricing is that providers charge linearly with respect to token consumption, but serving costs actually grow quadratically with sequence length, since self-attention compute scales with the square of the context.
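
A toy model of that disconnect (the constants are illustrative assumptions, not real provider numbers): billed cost grows linearly with tokens, while attention FLOPs grow with n^2 * d, so long requests are disproportionately expensive to serve.

    def billed_cost(tokens, price_per_million=0.30):
        # linear pricing: what the API charges for input tokens
        return tokens / 1e6 * price_per_million

    def attention_compute_cost(tokens, d_model=4096, dollars_per_flop=1e-17):
        # rough compute model: self-attention FLOPs scale with n^2 * d
        return tokens ** 2 * d_model * dollars_per_flop

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens: billed ${billed_cost(n):.2e}, "
              f"attention ${attention_compute_cost(n):.2e}")
    # billed cost rises 10x per step; attention compute rises 100x per step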

At this point, since a lot of models have converged around the same model architecture, inference algorithms, and hardware - the chosen costs are likely due to a historical, statistical analysis of the shape of customer requests. In other words, I'm not surprised to see costs increase as providers gather more data about real-world user consumption patterns.

candiddevmike

Curious to hear what folks are doing with Gemini outside of the coding space and why you chose it. Are you building your app so you can swap the underlying GenAI easily? Do you "load balance" your usage across other providers for redundancy or cost savings? What would happen if there was ever some kind of spot market for LLMs?

thimabi

In my experience, Gemini 2.5 Pro really shines in some non-coding use cases such as translation and summarization via Canvas. The gigantic context window and large usage limits help in this regard.

I also believe Gemini is much better than ChatGPT in generating deep research reports. Google has an edge in web search and it shows. Gemini’s reports draw on a vast number of sources, thus tend to be more accurate. In general, I even prefer its writing style, and I like the possibility of exporting reports to Google Docs.

One thing that I don’t like about Gemini is its UI, which is miles behind the competition. Custom instructions, projects, temporary chats… these things either have no equivalent in Gemini or are underdeveloped.

hnuser123456

If you're a power user, you should probably be using Gemini through AI studio rather than the "basic user" version. That allows you to set system instructions, temperature, structured output, etc. There's also NotebookLM. Google seems to be trying to make a bunch of side projects based on Gemini and seeing what sticks, and the generic gemini app/webchat is just one of those.

thimabi

My complaint is that any data within AI Studio can be kept by Google and used for training purposes — even if using the paid tier of the API, as far as I know. Because of that, I end up only using it rarely, when I don’t care about the fate of the data.

VeejayRampay

For translation you'll still be limited by the 65K output limit for longer texts, though, I suppose?

thimabi

Yes. I haven't had problems with the output limit so far, as I do translations iteratively, over each section of longer texts.

What I like most about translating with Gemini is that its default performance is already good enough, and it can be improved via the one-million-token context window. I load my private databases of idiomatic translations into the context, separated by language pairs and subject areas. After doing that, the need to manually review Gemini's translations is greatly diminished.
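
As an illustration of that workflow, a minimal sketch using the google-genai Python SDK; the glossary file, input file, model name, and target language are all hypothetical:

    from google import genai

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    glossary = open("idiomatic_en_de.txt").read()  # hypothetical glossary database

    def translate_section(section: str) -> str:
        prompt = (
            "Use the glossary of preferred idiomatic translations below.\n\n"
            f"GLOSSARY:\n{glossary}\n\n"
            f"Translate this section into German:\n\n{section}"
        )
        return client.models.generate_content(
            model="gemini-2.5-pro", contents=prompt).text

    # split a long document into sections and translate iteratively
    sections = open("long_text.txt").read().split("\n\n")
    translated = "\n\n".join(translate_section(s) for s in sections)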

ttul

I can throw a pile of NDAs at it and it neatly pulls out relevant stuff from them within a few seconds. The huge context window and excellent needle in a haystack performance is great for this kind of task.

spmurrayzzz

The NIAH performance is a misleading indicator for performance on the tasks people really want the long context for. It's great as a smoke/regression test. If you're bad on NIAH, you're not gonna do well on the more holistic evals.

But the long context eval they used (MRCR) is limited. It's multi-needle, so that's a start, but it's not evaluating long-range dependency resolution or topic modeling, which are the things you actually care about beyond raw retrieval for downstream tasks. Better than nothing, but not great for just throwing a pile of text at it and hoping for the best, particularly for out-of-distribution token sequences.

I do give google some credit though, they didn't try to hide how poorly they did on that eval. But there's a reason you don't see them adding RULER, HELMET, or LongProc to this. The performance is abysmal after ~32k.

EDIT: I still love using 2.5 Pro for a ton of different tasks. I just tend to have all my custom agents compress the context aggressively for any long context or long horizon tasks.

NitpickLawyer

> The performance is abysmal after ~32k.

Huh, we've not seen this in real-world use. 2.5 Pro has been the only model where you can throw a bunch of docs into it, give it a "template" document (report, proposal, etc.), even some example material from other projects, and tell it to gather all relevant context from each file and produce the template, and it does surprisingly well. We couldn't reproduce this with any other top-tier model at this level of quality.

sync

I use it extensively for https://lexikon.ai - in particular one part of what Lexikon does involves processing large amounts of images, and the way Google charges for vision is vastly cheaper compared to the big alternatives (OpenAI, Anthropic)

mrtesthah

Wow, if I knew that someone was using your product on my conversation with them I'd probably have to block them.

extr

Gemini Flash 2.0 is an absolute workhorse of a model at extremely low cost. It's obviously not going to measure up to frontier models in terms of intelligence but the combination of low cost, extreme speed, and highly reliable structured output generation make it really pleasant to develop with. I'll probably test against 2.5 Lite for an upgrade here.
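
The structured-output pattern mentioned here, sketched with the google-genai Python SDK and a Pydantic schema (the schema, model name, and example text are placeholders):

    from pydantic import BaseModel
    from google import genai

    class Product(BaseModel):
        name: str
        price: float
        in_stock: bool

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.0-flash",
        contents="Extract: the WidgetMax 3000 sells for $49.99 and is in stock.",
        config={
            "response_mime_type": "application/json",
            "response_schema": Product,  # constrains output to match the schema
        },
    )
    print(response.parsed)  # a Product instance, not free-form text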

bradly

I've yet to run out of free image gen credits with Gemini, so I use it for any low-effort image gen like when my kids want to play with it or for testing prompts before committing my o4 tokens for better quality results.

k8sToGo

I use Gemini 2.5 Flash (non thinking) as a thought partner. It helps me organize my thoughts or maybe even give some new input I didn't think of before.

I really like to use it also for self reflection where I just input my thoughts and maybe concerns and just see what it has to say.

androng

I use it for https://toolong.link Youtube summaries with images because only Gemini has easy access to YouTube and it has a gigantic context window
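
For anyone curious, the Gemini API accepts public YouTube URLs directly as file parts; a minimal sketch with the google-genai Python SDK (the URL is a placeholder, and this assumes the documented YouTube support still works as described):

    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents=types.Content(parts=[
            types.Part(file_data=types.FileData(
                file_uri="https://www.youtube.com/watch?v=VIDEO_ID")),
            types.Part(text="Summarize this video with timestamps."),
        ]),
    )
    print(response.text)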

crowcroft

Simple unstructured to structured data transformation.

I find Flash and Flash Lite are more consistent than others as well as being really fast and cheap.

I could swap to other providers fairly easily, but don't intend to at this point. I don't operate at a large scale.

dinesh2609

A roughly 6.7x increase in the price of audio processing compared to 2.0 Flash-Lite:

Gemini 2.5 Flash-Lite (audio input): $0.50/million tokens

Gemini 2.0 Flash-Lite (audio input): $0.075/million tokens

I wonder what led to such a high bump in audio token pricing.

zurfer

For anyone who was expecting more news: the GA models benchmark basically the same as the last preview models. It's really just Google telling us that we'll get fewer API errors and that this model will be a stable checkpoint for longer.

zzleeper

Good luck using 2.5 for anything non-trivial.

I have about 500,000 news articles to parse. OpenAI models work well, but I found Gemini made fewer mistakes.

Problem is, they give me a terrible 10k RPD (requests per day) limit. To increase to the next tier, they require a minimum amount of spending, but I can't reach that amount even when maxing out the RPD limit for multiple days in a row.

I emailed them twice and completed their forms, but everyone knows how this works. So now I'm back at OpenAI, with a model that makes a few more mistakes but won't 403 me after half an hour of use because of their limits.

eldenring

I'm guessing now that it is GA this won't be a problem.

zzleeper

I wish! The tier-based limits are still the same!

At least it's more expensive now so I guess I will be able to hop to the next tier sooner? ¯\_(ツ)_/¯

serjester

I'm glad that they standardized pricing for the thinking vs non-thinking variant. A couple weeks ago I accidentally spent thousands of extra dollars by forgetting to set the thinking budget to zero. Forgetting a single config parameter should not automatically raise the model cost 5X.

[edit] I'm less excited about this because it looks like their solution was to dramatically raise the base price on the non-thinking variant.

zelias

Not sure where else to post this, but when attempting to use any of the Gemini 2.5 models via API, I receive an "empty content" response about 50% of the time. To be clear, the API responds successfully, but the `content` returned by the LLM is just an empty string.

Has anyone here had any luck working around this problem?

Tiberium

What finish reason are you getting? Perhaps your code sets a low max_tokens, so the generation stops while the model is still thinking, without giving any actual output.
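
A sketch of that debugging step with the google-genai Python SDK: check the finish reason and make sure max_output_tokens leaves headroom for thinking tokens on top of the visible answer (the budget numbers are illustrative):

    from google import genai
    from google.genai import types

    client = genai.Client()
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="Summarize the following article: ...",
        config=types.GenerateContentConfig(
            # must cover thinking tokens *plus* the visible answer
            max_output_tokens=4096,
            thinking_config=types.ThinkingConfig(thinking_budget=1024),
        ),
    )
    print(response.candidates[0].finish_reason)  # MAX_TOKENS => raise the cap
    print(response.text or "<empty content>")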

zelias

The finish reason is `length`. I have tried setting minimal token budgets, really small prompts, and max lengths from 100 to 4000, and nothing seems to make a consistent dent in the behavior.

heliophobicdude

I wish they'd release the Gemini Diffusion model. It would quickly replace the default model for Aider.

vessenes

It feels to me like, properly instrumented, these diffusion models are going to be really powerful coding tools. Imagine a "smart" model carving out a certain number of tokens in a response for each category of response output, then diffusing the categories.

causal

Why do you think so? I've played with the Diffusion model a bit and it makes a lot of mistakes