
Gemini 2.0 is now available to everyone

singhrac

What is the model I get at gemini.google.com (i.e. through my Workspace subscription)? It says "Gemini Advanced" but there are no other details. No model selection option.

I find the lack of clarity very frustrating. If I want to try Google's "best" model, should I be purchasing something? AI Studio seems focused on building an LLM wrapper app, but I just want something to answer my questions.

Edit: what I've learned through Googling:

(1) If you search "is gemini advanced included with workspace" you get an AI Overview answer that seems to be incorrect, since they now include Gemini Advanced (?) with every Workspace subscription.

(2) A page exists telling you to buy the add-on (Gemini for Google Workspace), but clicking on it says it's no longer available because of the above.

(3) gemini.google.com says "Gemini Advanced" (no idea which model) at the top, but gemini.google.com/advanced redirects me to what I've deduced is the consumer site (?), which tells me Gemini Advanced is another $20/month.

The problem, Google PMs if you're reading this, is that the gemini.google.com page does not have ANY information about what is going on. What model is this? What are the limits? Do I get access to "Deep Research"? Does this subscription give me something in aistudio? What about code artifacts? The settings option tells me I can change to dark mode (thanks!).

Edit 2: I decided to use aistudio.google.com since it has a dropdown for me on my workspace plan.

miyuru

Changes must be rolling out now; I can see 3 Gemini 2.0 models in the dropdown, with blue "new" badges.

screenshot: https://beeimg.com/images/g25051981724.png

singhrac

This works on my personal Google account, but not on my workspace one. So I guess there's no access to 2.0 Pro then? I'm OK trying out Flash for now to see if it fixes the mistakes I ran into yesterday.

Edit: it does not. It continues to miss the fact that I'm (incorrectly) passing in a scaled query tensor to scaled_dot_product_attention. o3-mini-high gets this right.
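For anyone curious, here's a minimal PyTorch sketch of that bug (the tensor shapes are invented for illustration). F.scaled_dot_product_attention already scales the attention scores by 1/sqrt(head_dim), so pre-scaling the query applies the factor twice:

  import torch
  import torch.nn.functional as F

  # Shapes are invented for illustration: (batch, heads, seq_len, head_dim)
  q = torch.randn(1, 8, 16, 64)
  k = torch.randn(1, 8, 16, 64)
  v = torch.randn(1, 8, 16, 64)

  scale = q.size(-1) ** -0.5  # 1 / sqrt(head_dim)

  # BUG: F.scaled_dot_product_attention applies this scale internally,
  # so scaling q here divides the scores by sqrt(head_dim) twice.
  buggy = F.scaled_dot_product_attention(q * scale, k, v)
  fixed = F.scaled_dot_product_attention(q, k, v)

  print(torch.allclose(buggy, fixed))  # False: the outputs differ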

ysofunny

Hmm, did you try clicking where it says 'Gemini Advanced'? I find it opens a dropdown.

singhrac

I just tried it but nothing happens when I click on that. You're talking about the thing on the upper left next to the open/close menu button?

easychris

Yes, very frustrating for me as well. I'm now considering purchasing Gemini Advanced with another non-Workspace account. :-(

I also found this [1]: “Important:

A chat can only use one model. If you switch between models in an existing chat, it automatically starts a new chat. If you’re using Gemini Apps with a work or school Google Account, you can’t switch between models. Learn more about using Gemini Apps with a work or school account.”

I have no idea why workspace accounts are so restricted.

[1] https://support.google.com/gemini/answer/14517446?hl=en&co=G...

rickette

"what model are you using, exact name please" is usually the first prompt I enter when trying out something.

mynameisvlad

Gemini 2.0 Flash Thinking responds with

> I am currently running on the Gemini model.

Gemini 1.5 Flash responds with

> I'm using Gemini 2.0 Flash.

It's not even going out on a limb to say that question isn't going to give you an accurate response.

lxgr

You'd be surprised at how confused some models are about who they are.

freedomben

Indeed, asking the model which model it is might be one of the worst ways to find that information out

simonw

I upgraded my llm-gemini plugin to handle this, and shared the results of my "Generate an SVG of a pelican riding a bicycle" benchmark here: https://simonwillison.net/2025/Feb/5/gemini-2/

The pricing is interesting: Gemini 2.0 Flash-Lite is 7.5c/million input tokens and 30c/million output tokens - half the price of OpenAI's GPT-4o mini (15c/60c).

Gemini 2.0 Flash isn't much more: 10c/million for text/image input, 70c/million for audio input, 40c/million for output. Again, cheaper than GPT-4o mini.
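To make the difference concrete, here's a quick back-of-the-envelope calculation at the quoted prices (the workload numbers are invented for illustration):

  # USD per million tokens (text input, output), from the prices above.
  PRICES = {
      "gemini-2.0-flash-lite": (0.075, 0.30),
      "gemini-2.0-flash": (0.10, 0.40),
      "gpt-4o-mini": (0.15, 0.60),
  }

  def cost(model, input_millions, output_millions):
      inp, out = PRICES[model]
      return input_millions * inp + output_millions * out

  # e.g. a month with 500M input tokens and 50M output tokens:
  for model in PRICES:
      print(f"{model}: ${cost(model, 500, 50):.2f}")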

mohsen1

> available via the Gemini API in Google AI Studio and Vertex AI.

> Gemini 2.0, 2.0 Pro and 2.0 Pro Experimental, Gemini 2.0 Flash, Gemini 2.0 Flash Lite

3 different ways of accessing the API, more than 5 different but extremely similarly named models. Benchmarks only comparing to their own models.

Can't be more "Googley"!

justanotheratom

They actually have two "studios"

Google AI Studio and Google Cloud Vertex AI Studio

And both have their own documentation and different ways of "tuning" the model.

Talk about shipping the org chart.

seanhunter

I don't know why you're finding it confusing. There's Duff, Duff Lite and now there's also all-new Duff Dry.

vdfs

- Experimental™

- Preview™

- Coming soon™

raverbashing

Honestly, naming conventions in the AI world have been appalling, regardless of the company.

jug

Google is the least confusing to me. Old-school version numbers, and Pro is better than Flash, which is fast and for simple stuff.

OpenAI is crazy. There may come a day when we have o5 that is reasoning and 5o that is not, and they belong to different generations too.

belval

Google isn't even the worst in my opinion. Off the top of my head:

Anthropic:

Claude 1, Claude Instant 1, Claude 2, Claude Haiku 3, Claude Sonnet 3, Claude Opus 3, Claude Haiku 3.5, Claude Sonnet 3.5, Claude Sonnet 3.5v2

OpenAI:

GPT-3.5, GPT-4, GPT-4o-2024-08-06, GPT-4o, GPT-4o-mini, o1, o3-mini, o1-mini

Fun times when you try to set up throughput provisioning.

jorvi

I don't understand why, if they're gonna use shorthands to make the tech seem cooler, they can't at least make the shorthands mnemonic.

Imagine if it went like this:

  Mnemonics: m(ini), r(easoning), t(echnical)

  Claude 3m
  Claude 3mr
  Claude 3mt
  Claude 3mtr
  Claude 3r
  Claude 3t
  Claude 3tr


llm_trw

You missed the first sentence of the release:

>In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash

I guess I wasn't building AI agents in February last year.

pmayrgundter

I tried voice chat. It's very good, except for the politics

We started talking about my plans for the day, and I said I was making chili. G asked if I had a recipe or if I needed one. I said I started with Obama's recipe many years ago and have worked on it from there.

G gave me a form response that it can't talk politics.

Oh, I'm not talking politics, I'm talking chili.

G then repeated the form response and tried to change the conversation, and as long as I didn't use the O word, we were allowed to proceed. Phew.

xnorswap

I find it horrifying and dystopian that the part where it "can't talk politics" is just accepted, and your complaint is that it interrupts your ability to talk chili.

"Go back to bed America." "You are free, to do as we tell you"

https://youtu.be/TNPeYflsMdg?t=143

falcor84

Hear, hear!

There has to be a better way to go about it. As I see it, to be productive, AI agents have to be able to talk about politics, because at the end of the day politics is everywhere. So, following up on what they already do, they'll have to define a model's political stance (whatever it is) and have it hold its ground, voicing an opinion or abstaining from voicing one, but continuing the conversation, as a person would (at least those of us who don't rage-quit a conversation on hearing something slightly controversial).

xnorswap

Indeed, you can facilitate talking politics without having a set opinion.

It's a fine line, but it's something the BBC managed to do for a very long time. The BBC does not itself present an opinion on politics, yet it facilitates political discussion through shows like Newsnight and The Daily Politics (RIP).

freedomben

There aren't many monocultures as strong as Silicon Valley politics. Where this intersects with my beliefs I love it, but where it doesn't, it's maddening. I suspect that's how most people feel.

But anyway, when one is rarely or never challenged on their beliefs, they become rusty. Do you trust them to do a good job training their own views into the model, let alone training in the views of someone on the opposite side of the spectrum?

freedomben

I agree it's ridiculous that the mere mention of a politician triggers the block, so it feels overly tightened (which is the story of existence for Gemini), but the alternative is that the model will have the politics of its creators/trainers. Is that preferable to you? (I suppose that depends on how well your politics align with Silicon Valley's.)

silvajoao

Try out the new models at https://aistudio.google.com.

It's a great way to experiment with all the Gemini models that are also available via the API.

If you haven't yet, try also Live mode at https://aistudio.google.com/live.

You can have a live conversation with Gemini and have the model see the world via your phone camera (or see your desktop via screenshare on the web), and talk about it. It's quite a cool experience! It made me feel the joy of programming and using computers that I had had so many times before.

jbarrow

I've been very impressed by Gemini 2.0 Flash for multimodal tasks, including object detection and localization[1], plus document tasks. But the 15-requests-per-minute limit was a severe constraint while it was experimental. I'm really excited to be able to actually _do_ things with the model.

In my experience, I'd reach for Gemini 2.0 Flash over 4o in a lot of multimodal/document use cases. Especially given the differences in price ($0.10/million input and $0.40/million output versus $2.50/million input and $10.00/million output).

That being said, Qwen2.5 VL 72B and 7B seem even better at document image tasks and localization.

[1] https://notes.penpusher.app/Misc/Google+Gemini+101+-+Object+...
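For a sense of what those multimodal calls look like, here's a rough sketch using the google-generativeai Python SDK; the prompt, file name, and output format are my own assumptions, not taken from the linked notes:

  import google.generativeai as genai
  import PIL.Image

  genai.configure(api_key="YOUR_API_KEY")
  model = genai.GenerativeModel("gemini-2.0-flash")

  # Any document image works here; the file name is hypothetical.
  image = PIL.Image.open("scanned_invoice.png")
  response = model.generate_content([
      "Detect every table and signature in this document. Return JSON "
      "with a label and a [ymin, xmin, ymax, xmax] bounding box "
      "(normalized to 0-1000) for each detection.",
      image,
  ])
  print(response.text)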

Alifatisk

> In my experience, I'd reach for Gemini 2.0 Flash over 4o

Why not use o1-mini?

msuvakov

Gemini 2.0 works great with large context. A few hours ago, I posted a Show HN about parsing an entire book in a single prompt. The goal was to extract characters, relationships, and descriptions that could then be used for image generation:

https://news.ycombinator.com/item?id=42946317
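As a minimal sketch of that single-prompt approach (google-generativeai SDK; the prompt and output schema are assumptions, not taken from the Show HN post):

  import google.generativeai as genai

  genai.configure(api_key="YOUR_API_KEY")
  model = genai.GenerativeModel("gemini-2.0-flash")

  # An entire novel fits comfortably in a 1M-token context window.
  book = open("book.txt", encoding="utf-8").read()
  response = model.generate_content(
      "Extract every character in the novel below as a JSON list with "
      "name, physical description, and relationships to other characters."
      "\n\n" + book
  )
  print(response.text)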

Alifatisk

Which Gemini model is NotebookLM using at the moment? Have they switched yet?

gwern

2.0 Pro Experimental seems like the big news here?

> Today, we’re releasing an experimental version of Gemini 2.0 Pro that responds to that feedback. It has the strongest coding performance and ability to handle complex prompts, with better understanding and reasoning of world knowledge, than any model we’ve released so far. It comes with our largest context window at 2 million tokens, which enables it to comprehensively analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution.
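For what it's worth, a minimal sketch of calling that model with the built-in code-execution tool the announcement mentions (google-generativeai SDK; treat the exact model id and tool flag as assumptions):

  import google.generativeai as genai

  genai.configure(api_key="YOUR_API_KEY")

  # Built-in code execution: the model writes and runs Python server-side.
  model = genai.GenerativeModel(
      "gemini-2.0-pro-exp-02-05",
      tools="code_execution",
  )
  response = model.generate_content(
      "Run Python to compute the sum of the first 50 prime numbers."
  )
  print(response.text)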

Tiberium

It's not that big a piece of news because they already had gemini-exp-1206 on the API - they just didn't say it was Gemini 2.0 Pro until today. Now AI Studio marks it as 2.0 Pro Experimental - basically an older snapshot; the newer one is gemini-2.0-pro-exp-02-05.

Alifatisk

Oh, so the previous model gemini-exp-1206 is now gemini-2.0-pro-experimental in AI Studio? Is it better than gemini-2.0-flash-thinking-exp?

butlike

Flash is back, baby.

Next release should be called Gemini Macromedia

sho_hn

How about Gemini Director for the next agentic stuff.

ChocolateGod

This is going to send shockwaves through the industry.

VenturingVole

Or perhaps it'll help people weave their dreams together, and so it should be called... ahh, I feel old all of a sudden.

sbruchmann

Google Frontpage?

VenturingVole

I feel seen!

Also just had to explain to the better half why I suddenly shuddered and pulled such a face of despair.

benob

You made me feel old ;)

VenturingVole

Everyone old is new again!

On a serious note - LLMs have actually brought me a lot of joy lately and elevated my productivity substantially within the domains in which I choose to use them. When witnessing the less experienced more readily accept outputs without understanding the nuances, there's definitely additional value in being... experienced.

drewda

Google Gemini MX 2026

leonidasv

That 1M-token context window alone is going to kill a lot of RAG use cases. Crazy to see how we went from 4K-token context windows (ChatGPT-3.5 in 2023) to 1M in less than two years.

Alifatisk

Gemini can in theory handle 10M tokens; I remember them saying it in one of their presentations.

leetharris

These names are unbelievably bad. Flash, Flash-Lite? How do these AI companies keep doing this?

Sonnet 3.5 v2

o3-mini-high

Gemini Flash-Lite

It's like a competition to see who can make the goofiest naming conventions.

Regarding model quality, we experiment with Google models constantly at Rev, and they are consistently the worst of all the major players. They always benchmark well and consistently fail at real tasks. If this is just a small update to the gemini-exp-1206 model, then I think they will still be in last place.

falcor84

> It's like a competition to see who can make the goofiest naming conventions.

I'm still waiting for one of them to overflow from version 360 down to One.

cheeze

Just wait for One X, S, Series X, Series X Pro, Series X Pro with Super Fast Charging 2.0

Ninjinka

Pricing is CRAZY.

Audio input is $0.70 per million tokens on 2.0 Flash, $0.075 for 2.0 Flash-Lite and 1.5 Flash.

For gpt-4o-mini-audio-preview, it's $10 per million tokens of audio input.
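Back-of-the-envelope, at those prices and assuming Gemini's documented rate of roughly 32 tokens per second of audio:

  # ~32 audio tokens/second is the rate Gemini's docs describe.
  TOKENS_PER_HOUR = 32 * 3600  # ~115,200 tokens per hour of audio

  for name, usd_per_million in [("Gemini 2.0 Flash", 0.70),
                                ("Gemini 2.0 Flash-Lite", 0.075)]:
      cost = TOKENS_PER_HOUR / 1e6 * usd_per_million
      print(f"{name}: ${cost:.4f} per hour of audio input")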

sunaookami

Sadly: "Gemini can only infer responses to English-language speech."

https://ai.google.dev/gemini-api/docs/audio?lang=rest#techni...