
Gemini CLI

iandanforth

I love how fragmented Google's Gemini offerings are. I'm a Pro subscriber, but I now learn I should be a "Gemini Code Assist Standard or Enterprise" user to get additional usage. I didn't even know that existed! As a run of the mill Google user I get a generous usage tier but paying them specifically for "Gemini" doesn't get me anything when it comes to "Gemini CLI". Delightful!

diegof79

Google suffers from Microsoft's issues: it has products for almost everything, but its confusing product messaging dilutes all the good things it does.

I like Gemini 2.5 Pro, too, and recently, I tried different AI products (including the Gemini Pro plan) because I wanted a good AI chat assistant for everyday use. But I also wanted to reduce my spending and have fewer subscriptions.

The Gemini Pro subscription is included with Google One, which is very convenient if you use Google Drive. But I already have an iCloud subscription tightly integrated with iOS, so switching to Drive and losing access to other iCloud functionality (like passwords) wasn’t in my plans.

Then there is the Gemini chat UI, which is light years behind the OpenAI ChatGPT client for macOS.

NotebookLM is good at summarizing documents, but the experience isn’t integrated with the Gemini chat, so it’s like constantly switching between Google products without a good integrated experience.

The result is that I end up paying a subscription to Raycast AI because the chat app is very well integrated with other Raycast functions, and I can try out models. I don’t get the latest model immediately, but it has an integrated experience with my workflow.

My point in this long description is that by being spread across many products, Google is losing on the UX side compared to OpenAI (for general tasks) or Anthropic (for coding). In just a few months, Google tried to catch up with v0 (Google Stitch), GH Copilot/Cursor (with that half-baked VSCode plugin), and now Claude Code. But all the attempts look like side-projects that will be killed soon.

behnamoh

Actually, that's the reason a lot of startups and solo developers prefer non-Google solutions, even though the quality of Gemini 2.5 Pro is insanely high. The Google Cloud Dashboard is a mess, and they haven't fixed it in years. They have Vertex, which is supposed to host some of their models, but I don't understand the difference between that and their own cloud. And then you have two different APIs depending on the level of your project. This is literally the opposite of what you'd expect from an AI provider, where you start small and don't face obstacles regardless of how your project scales. So essentially, Google has built an API solution that does not scale, because as soon as your project gets bigger, you have to switch from the Google AI Studio API to the Vertex API. And I find it ridiculous because their OpenAI-compatible API does not work all the time, and a lot of tools that rely on it actually don't work.

Google's AI offerings that should be simplified/consolidated:

- Jules vs Gemini CLI?

- Vertex API (requires a Google Cloud Account) vs Google AI Studio API

Also, since Vertex depends on Google Cloud, projects get more complicated because you have to modify these in your app [1]:

```
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
```

[1]: https://cloud.google.com/vertex-ai/generative-ai/docs/start/...

irthomasthomas

I just use gemini-pro via openrouter API. No painful clicking around on the cloud to find the billing history.

behnamoh

but you won't get the full API capabilities of Gemini (like setting the safety level).

tarvaina

It took me a while but I think the difference between Vertex and Gemini APIs is that Vertex is meant for existing GCP users and Gemini API for everyone else. If you are already using GCP then Vertex API works like everything else there. If you are not, then Gemini API is much easier. But they really should spell it out, currently it's really confusing.

Also they should make it clearer which SDKs, documents, pricing, SLAs etc apply to each. I still get confused when I google up some detail and end up reading the wrong document.
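The split shows up in the raw endpoints themselves. A rough sketch of the two call shapes, with placeholder values (the model name, region, and request body here are illustrative, not taken from the thread):

```shell
# Gemini API (AI Studio): a single API key, no GCP project required
curl -s -X POST \
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=$GEMINI_API_KEY" \
  -H 'Content-Type: application/json' \
  -d '{"contents":[{"parts":[{"text":"Hello"}]}]}'

# Vertex AI: OAuth token, project ID, and region all required
curl -s -X POST \
  "https://us-central1-aiplatform.googleapis.com/v1/projects/$GOOGLE_CLOUD_PROJECT/locations/us-central1/publishers/google/models/gemini-2.5-pro:generateContent" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H 'Content-Type: application/json' \
  -d '{"contents":[{"role":"user","parts":[{"text":"Hello"}]}]}'
```

Same model, two auth schemes, two hostnames, two sets of docs, which is the confusion in a nutshell.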

fooster

The other difference is that reliability for the gemini api is garbage, whereas for vertex ai it is fantastic.

cperry

@sachinag is afk but wanted me to flag that he's on point for fixing the Cloud Dashboard - it's WIP!

sachinag

Thanks Chris!

"The Google Cloud Dashboard is a mess, and they haven't fixed it in years." Tell me what you want, and I'll do my best to make it happen.

In the interim, I would also suggest checking out Cloud Hub - https://console.cloud.google.com/cloud-hub/ - this is us really rethinking the level of abstraction to be higher than the base infrastructure. You can read more about the philosophy and approach here: https://cloud.google.com/blog/products/application-developme...

WXLCKNO

You guys should try my AGI test.

It's easy, you just ask the best Google Model to create a script that outputs the number of API calls made to the Gemini API in a GCP account.

100% fail rate so far.

coredog64

At least a bunch of people got promotions for demonstrating scope via the release of a top-level AI product.

bachmeier

I had a conversation with Copilot about Copilot offerings. Here's what they told me:

If I Could Talk to Satya...

I'd say:

“Hey Satya, love the Copilots—but maybe we need a Copilot for Copilots to help people figure out which one they need!”

Then I had them print out a table of Copilot plans:

- Microsoft Copilot Free
- GitHub Copilot Free
- GitHub Copilot Pro
- GitHub Copilot Pro+
- Microsoft Copilot Pro (can only be purchased for personal accounts)
- Microsoft 365 Copilot (can't be used with personal accounts and can only be purchased by an organization)

bayindirh

There's also the $300/mo AI Ultra membership. It's interesting: even the Google One membership pages can't detail what "extra features" I get, because the list possibly changes every hour or so.

SecretDreams

Maybe their products team is also just run by Gemini, and it's changing its mind every day?

I also just got the email for Gemini Ultra, and I couldn't even figure out what was being offered compared to Pro, apart from 30 TB of storage vs 2 TB!

ethbr1

> Maybe their products team is also just run by Gemini, and it's changing its mind every day?

Never ascribe to AI, that which is capable of being borked by human PMs.

Keyframe

> There's also $300/mo AI ULTRA membership

Not if you're in EU though. Even though I have zero or less AI use so far, I tinker with it. I'm more than happy to pay $200+tax for Max 20x. I'd be happy to pay same-ish for Gemini Pro.. if I knew how and where to have Gemini CLI like I do with Claude code. I have Google One. WHERE DO I SIGN UP, HOW DO I PAY AND USE IT GOOGLE? Only thing I have managed so far is through openrouter via API and credits which would amount to thousands a month if I were to use it as such, which I won't do.

What I do now is occasionally I go to AI Studio and use it for free.

GardenLetter27

Google is fumbling the bag so badly with the pricing.

Gemini 2.5 Pro is the best model I've used (even better than o3 IMO) and yet there's no simple Claude/Cursor like subscription to just get full access.

Nevermind Enterprise users too, where OpenAI has it locked up.

bachmeier

> Google is fumbling the bag so badly with the pricing.

In certain areas, perhaps, but Google Workspace at $14/month not only gives you Gemini Pro, but 2 TB of storage, full privacy, email with a custom domain, and whatever else. College students get the AI pro plan for free. I recently looked over all the options for folks like me and my family. Google is obviously the right choice, and it's not particularly close.

weird-eye-issue

And yet there were still some AI features that were unavailable to workspace users for a few months and you had to use a personal account. I think it's mostly fixed now but that was quite annoying since it was their main AI product (Gemini Studio or whatever, I don't remember for sure)

llm_nerd

I wouldn't dream of thinking anyone has anything "locked up". Certainly not OpenAI, which increasingly seems to be fighting an uphill battle against competitors (including Microsoft, who, even though they're a partner, are also a competitor) with other inroads.

Not sure what you mean by "full access", as none of the providers offer unrestricted usage. Pro gets you 2.5 Pro with usage limits. Ultra gets you higher limits + deep research + Veo 3. And of course you can use the API usage-billed model.

tmoertel

The Gemini Pro subscription includes Deep Research and Veo 3; you don't need the pricey Ultra subscription: https://gemini.google/subscriptions/

gavinray

I actually had this exact same question when I read the docs, made an issue about it:

https://github.com/google-gemini/gemini-cli/issues/1427

__MatrixMan__

Anthropic is the same. Unless it has changed within the last few months, you can subscribe to Claude, but if you want to use Claude Code it'll come out of your "API usage" bucket, which is billed separately from the subscription.

Some jerk has learned that we prefer CLI things and has come to the conclusion that we should therefore pay extra for them.

Workaround is to use their GUI with some MCPs but I dislike it because window navigation is just clunky compared to terminal multiplexer navigation.

carefulfungi

This is down voted I guess because the circumstances have changed - but boy is it still confusing. All these platforms have chat subscriptions, api pay-as-you-go, CLI subscriptions like "claude code" ... built-in offers via Github enterprise or Google Workspace enterprise ...

It's a frigg'n mess. Everyone at our little startup has spent time trying to understand what the actual offerings are; what the current set of entitlements are for different products; and what API keys might be tied to what entitlements.

I'm with __MatrixMan__ -- it's super confusing and needs some serious improvements in clarity.

justincormack

And claude code can now be connected to either an API sub or a chat sub apparently.

gnur

This has changed actually, since this month you can use claude code if you have a cloud pro subscription.

__MatrixMan__

Great news, thanks.

unshavedyak

In addition to others mentioning subscriptions being better in Claude Code, i wanted to compare the two so i tried to find a Claude Max equivalent license... i have no clue how. In their blog post they mention `Gemini Code Assist Standard or Enterprise license` but they don't even link to it.. lol.

Some googling lands me to a guide: https://cloud.google.com/gemini/docs/discover/set-up-gemini#...

I stopped there because I didn't want to sign up, I just wanted to review it, and I don't have an admin panel anyway.

It feels insane to me that there's a readme on how to give them money. Claude's Max purchase was just as easy as Pro, fwiw.

Workaccount2

I think it is pretty clear that these $20/subs are loss leaders, and really only meant to get regular people to really start leaning on LLMs. Once they are hooked, we will see what the actual price of using so much compute is. I would imagine right now they are pricing their APIs either at cost or slightly below.

stpedgwdgfhgdd

When using a single terminal, Pro is good enough (even with a medium-large code base). When I started working in two terminals on two different issues at the same time, I hit the credit limit.

ethbr1

Or they're planning on the next wave of optimized hardware cutting inference costs.

trostaft

AFAIK, Claude code operates on your subscription, no? That's what this support page says

https://support.anthropic.com/en/articles/11145838-using-cla...

Could have changed recently. I'm not a user so I can't verify.

re5i5tor

In recent research (relying on Claude so bear that in mind), connecting CC via Anthropic Console account / API key ends up being less expensive.

kissgyorgy

This is simply not true. All personal paid packages include Claude Code now.

indigodaddy

Are you using CC for your python framework?

3abiton

And they say our scale-up is siloed. Leave it to Google to show 'em.

cperry

Hi - I work on this. Uptake is a steep curve right now, spare a thought for the TPUs today.

Appreciate all the takes so far, the team is reading this thread for feedback. Feel free to pile on with bugs or feature requests we'll all be reading.

bsenftner

Thank you for your work on this. I spent the afternoon yesterday trying to convert an algorithm written in ruby (which I do not know) to vanilla JavaScript. It was a comedy of failing nonsense as I tried to get gpt-4.1 to help, and it just led me down pointless rabbit holes. I installed Gemini CLI out of curiosity, pointed it at the Ruby project, and it did the conversion from a single request, total time from "think I'll try this" to it working was 5 minutes. Impressed.

cperry

<3 love to hear it!

ebiester

So, as a member of an organization who pays for google workspace with gemini, I get the message `GOOGLE_CLOUD_PROJECT environment variable not found. Add that to your .env and try again, no reload needed!`

At the very least, we need better documentation on how to get that environment variable, as we are not on GCP and this is not immediately obvious how to do so. At the worst, it means that your users paying for gemini don't have access to this where your general google users do.
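For what it's worth, the variable itself is just a plain GCP project ID. A minimal sketch of the `.env` the error message is asking for (the project ID is a placeholder; you still need an actual Google Cloud project, which is exactly the complaint):

```shell
# .env in the directory where you launch gemini
GOOGLE_CLOUD_PROJECT=my-project-id
```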

thimabi

I believe Workspace users have to pay a separate subscription to use the Gemini CLI, the so-called “Gemini for Google Cloud”, which starts at an additional 19 dollars per month [^1]. If that’s really the case, it’s very disappointing to me. I expected access to Gemini CLI to be included in the normal Workspace subscription.

[^1]: https://console.cloud.google.com/marketplace/product/google/...

cperry

[edit] all lies - I got my wires crossed, free tier for Workspace isn't yet supported. sorry. you need to set the project and pay. this is WIP.

Workspace users [edit: cperry was wrong] can get the free tier as well, just choose "More" and "Google for Work" in the login flow.

It has been a struggle to get a simple flow that works for all users, happy to hear suggestions!

827a

Having played with the gemini-cli here for 30 minutes, so I have no idea but best guess: I believe that if you auth with a Workspace account it routes all the requests through the GCP Vertex API, which is why it needs a GOOGLE_CLOUD_PROJECT env set, and that also means usage-based billing. I don't think it will leverage any subscriptions the workspace account might have (are there still gemini subscriptions for workspace? I have no idea. I thought they just raised everyone's bill and bundled it in by default. What's Gemini Code Assist Standard or Enterprise? I have no idea).

cperry

Maxious

I'd echo that having to get the IT section involved to create a Google Cloud project is not great UX when I already have access to NotebookLM Pro and Gemini for Workspace.

Also this doco says GOOGLE_CLOUD_PROJECT_ID but the actual tool wants GOOGLE_CLOUD_PROJECT

ebiester

While I get my organization's IT department involved, I do wonder why this is built in a way that requires more work for people already paying google money than a free user.

taupi

Right now authentication doesn't work if you're working on a remote machine and try to authenticate with Google, FYI. You need an alternate auth flow that gives the user a link and lets them paste a key in (this is how Claude Code does it).

GenerWork

I'm just a hobbyist, but I keep getting the error "The code change produced by Gemini cannot be automatically applied. You can manually apply the change or ask Gemini to try again". I assume this is because the service is being slammed?

Edit: I should mention that I'm accessing this through Gemini Code Assist, so this may be something out of your wheelhouse.

cperry

odd, haven't seen that one - you might file an issue https://github.com/google-gemini/gemini-cli/issues

I don't think that's capacity, you should see error codes.

mkagenius

Hi - I integrated Apple Container on M1 to run[1] the code generated by Gemini CLI. It works great!

1. CodeRunner - https://github.com/BandarLabs/coderunner/tree/main?tab=readm...

cperry

<3 amazing

conception

- Google Gemini
- Google Gemini Ultra
- AI Studio
- Vertex AI
- NotebookLM
- Jules

All different products doing the sameish thing. I don’t know where to send users to do anything. They are all licensed differently. Bonkers town.

elashri

Hi, Thanks for this work.

Currently it seems these are the CLI tools available. Is it possible to extend them, or actually disable some of these tools (for various reasons)?

> Available Gemini CLI tools:

    - ReadFolder
    - ReadFile
    - SearchText
    - FindFiles
    - Edit
    - WriteFile
    - WebFetch
    - ReadManyFiles
    - Shell
    - Save Memory
    - GoogleSearch

cperry

I had to ask Gemini CLI to remind myself ;) but you can add this into settings.json:

    {
      "excludeTools": ["run_shell_command", "write_file"]
    }

but if you ask Gemini CLI to do this it'll guide you!

bdmorgan

I also work on the product :-)

You can also extend with the Extensions feature - https://github.com/google-gemini/gemini-cli/blob/main/docs/e...

_ryanjsalva

I also work on the product. You can extend the tools with MCP. https://github.com/google-gemini/gemini-cli/blob/main/docs/t...

silverlake

I tried to get Gemini CLI to update its own settings using the MCP settings format from Claude. It went off the rails. I then fed it the link you provided and it correctly updated its settings file. You might mention the settings.json file in the README.

danavar

Is there a way to instantly, quickly prompt it in the terminal, without loading the full UI? Just to get a short response without filling the terminal page.

Like just getting a short response for simple things, e.g. "what's an nm and grep command to find this symbol in these 3 folders". I already use Gemini a lot for this type of thing.

Or would that have to be a custom prompt I write?

peterldowns

I use `mods` for this https://github.com/charmbracelet/mods

other people use simon willison's `llm` tool https://github.com/simonw/llm

Both allow you to switch between models, send short prompts from a CLI, optionally attach some context. I prefer mods because it's an easier install and I never need to worry about Python envs and other insanity.
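Both are one-shot friendly from the shell. Hypothetical invocations (the prompts and file names are made up, and each tool has to be installed and configured with an API key first):

```shell
# mods: prompt straight from the command line, no TUI
mods "nm and grep one-liner to find symbol foo under ./src"

# llm: same idea; context can be piped in from stdin
cat build.log | llm "why did this link step fail?"
```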

indigodaddy

Didn't know about mods, looks awesome.

cperry

-p is your friend

nojito

https://github.com/google-gemini/gemini-cli/blob/main/docs/c...

Looks like there is a non-interactive mode!

    echo "What is fine tuning?" | gemini

    gemini -p "What is fine tuning?"
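Since it reads stdin, it composes with ordinary shell pipelines too. An illustrative (untested) example:

```shell
git diff HEAD~1 | gemini -p "Summarize these changes in one sentence"
```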

wohoef

A few days ago I tested Claude Code by completely vibe coding a simple stock tracker web app in streamlit python. It worked incredibly well, until it didn't. Seems like there is a critical project size where it just can't fix bugs anymore. Just tried this with Gemini CLI and the critical project size it works well for seems to be quite a bit bigger. Where claude code started to get lost, I simply told Gemini CLI to "Analyze the codebase and fix all bugs". And after telling it to fix a few more bugs, the application simply works.

We really are living in the future

tvshtr

Yeah, and it's variable; it can happen at 250k, 500k, or later. When you interrogate it, the issue usually comes down to it being laser-focused or stuck on one specific problem, and it's very hard to turn it around. For lack of a better comparison, it feels like the AI is on a spectrum...

AJ007

Current best practice for Claude Code is to have heavy lifting done by Gemini Pro 2.5 or o3/o3pro. There are ways to do this pretty seamlessly now because of MCP support (see Repo Prompt as an example.) Sometimes you can also just use Claude but it requires iterations of planning, integration while logging everything, then repeat.

I haven't looked at this Gemini CLI thing yet, but if it's open source it seems like any model can be plugged in here?

I can see a pathway where LLMs are commodities. Every big tech company right now both wants their LLM to be the winner and the others to die, but they also really, really would prefer a commodity world to one where a competitor is the winner.

If the future use looks more like CLI agents, I'm not sure how some fancy UI wrapper is going to result in a winner take all. OpenAI is winning right now with user count by pure brand name with ChatGPT, but ChatGPT clearly is an inferior UI for real work.

sysmax

I think, there are different niches. AI works extremely well for Web prototyping because a lot of that work is superficial. Back in the 90s we had Delphi where you could make GUI applications with a few clicks as opposed to writing tons of things by hand. The only reason we don't have that for Web is the decentralized nature of it: every framework vendor has their own vision and their own plan for future updates, so a lot of the work is figuring out how to marry the latest version of component X with the specific version of component Y because it is required by component Z. LLMs can do that in a breeze.

But in many other niches (say embedded), the workflow is different. You add a feature, you get weird readings. You start modelling in your head, how the timing would work, doing some combination of tracing and breakpoints to narrow down your hypotheses, then try them out, and figure out what works the best. I can't see the CLI agents do that kind of work. Depends too much on the hunch.

Sort of like autonomous driving: most highway driving is extremely repetitive and easy to automate, so it got automated. But going on a mountain road in heavy rain, while using your judgment to back off when other drivers start doing dangerous stuff, is still purely up to humans.

dawnofdusk

I feel like you get more mileage out of prompt engineering and being specific... not sure if "fix all the bugs" is an effective real-world use case.

ZeroCool2u

Ugh, I really wish this had been written in Go or Rust. Just something that produces a single binary executable and doesn't require you to install a runtime like Node.

qsort

Projects like this have to update frequently, having a mechanism like npm or pip or whatever to automatically handle that is probably easier. It's not like the program is doing heavy lifting anyway, unless you're committing outright programming felonies there shouldn't be any issues on modern hardware.

It's the only argument I can think of, something like Go would be goated for this use case in principle.

masklinn

> having a mechanism like npm or pip or whatever to automatically handle that is probably easier

Re-running `cargo install <crate>` will do that. Or install `cargo-update`, then you can bulk update everything.

And it works hella better than using pip in a global python install (you really want pipx/uvx if you're installing python utilities globally).

IIRC you can install Go stuff with `go install`, dunno if you can update via that tho.

StochasticLi

This whole thread is a great example of the developer vs. user convenience trade-off.

A single, pre-compiled binary is convenient for the user's first install only.

mpeg

You'd think that, but a globally installed npm package is annoying to update, as you have to do it manually and I very rarely need to update other npm global packages so at least personally I always forget to do it.

ZeroCool2u

I feel like Cargo or Go Modules can absolutely do the same thing as the mess of build scripts they have in this repo perfectly well and arguably better.

koakuma-chan

If you use Node.js your program is automatically too slow for a CLI, no matter what it actually does.

fhinkel

Ask Gemini CLI to re-write itself in your preferred language

ZeroCool2u

Unironically, not a bad idea.

AJ007

Contest between Claude Code and Gemini CLI, who rewrites it faster/cheaper/better?

i_love_retros

This isn't about quality products, it's about being able to say you have a CLI tool because the other ai companies have one

clbrmbr

Fast following is a reasonable strategy. Anthropic provided the existence proof. It’s an immensely useful form factor for AI.

mike_hearn

The question is whether what makes it useful is actually being in the terminal (limited, glitchy, awkward interaction) or whether it's being able to run next to files on a remote system. I suspect the latter.

closewith

Yeah, it would be absurd to avoid a course of action proven productive by a competitor.

behnamoh

> This isn't about quality products, it's about being able to say you have a CLI tool because the other ai companies have one

Anthropic's Claude Code is also installed using npm/npx.

rs186

Eh, I can't see how your comment is relevant to the parent thread. Creating a CLI in Go is barely more complicated than in JS. Rust, maybe, but people aren't asking for that.

iainmerrick

Looks like you could make a standalone executable with Bun and/or Deno:

https://bun.sh/docs/bundler/executables

https://docs.deno.com/runtime/reference/cli/compile/

Note, I haven't checked that this actually works, although if it's straightforward Node code without any weird extensions it should work in Bun at least. I'd be curious to see how the exe size compares to Go and Rust!

tln

A Bun "hello world" is 58 MB.

Claude also requires npm, FWIW.

buildfocus

You can also do this natively with Node, since v18: https://nodejs.org/api/single-executable-applications.html#s...

JimDabell

I was going to say the same thing, but they couldn’t resist turning the project into a mess of build scripts that hop around all over the place manually executing node.

ZeroCool2u

Yeah, this just seems like a pain in the ass that could've been easily avoided.

iainmerrick

From my perspective, I'm totally happy to use pnpm to install and manage this. Even if it were a native tool, NPM might be a decent distribution mechanism (see e.g. esbuild).

Obviously everybody's requirements differ, but Node seems like a pretty reasonable platform for this.

jstummbillig

It feels like you are creating a considerable fraction of the pain by taking offense at simply using npm.

buildfocus

Node can also produce a single binary executable: https://nodejs.org/api/single-executable-applications.html

ur-whale

> and doesn't require you to install a runtime like Node.

My exact same reaction when I read the install notes.

Even python would have been better.

Having to install that Javascript cancer on my laptop just to be able to try this, is a huge no.

geodel

My thoughts exactly. Neither Rust nor Go, not even C/C++, which I could accept if there were some native OS dependencies. Maybe this is a hint at who its main audience could be.

ur-whale

> Maybe this is a hint on who could be its main audience.

Or a hint about the background of the folks who built the tool.

lazarie

"Failed to login. Ensure your Google account is not a Workspace account."

Is your vision for Gemini CLI to be geared only towards non-commercial users? I have had a Workspace account since GSuite and have been constantly punished for it across Google's offerings. All I wanted was Gmail with a custom domain, and I've lost all my YouTube data and all my Fitbit data; I can't select different versions of some of your subscriptions (seemingly completely random across your services from an end-user perspective); and now, as a Workspace account, I can't use Gemini CLI for my work, which is software development. This approach strikes me as actively hostile towards your loyal paying users...

GlebOt

Have you checked the https://github.com/google-gemini/gemini-cli/blob/main/docs/c... ? It has a section for workspace accounts.

zxspectrum1982

Same here.

raincole

It seems that you need to set up an env variable called GOOGLE_CLOUD_PROJECT https://github.com/google-gemini/gemini-cli/issues/1434

... and other stuff.

asadm

I have been using this for about a month and it’s a beast, mostly thanks to 2.5pro being SOTA and also how it leverages that huge 1M context window. Other tools either preemptively compress context or try to read files partially.

I have thrown very large codebases at this and it has been able to navigate and learn them effortlessly.

zackify

When I was using it in cursor recently, I found it would break imports in large python files. Claude never did this. Do you have any weird issues using Gemini? I’m excited to try the cli today

asadm

not at all. these new models mostly write compiling code.

tvshtr

Depends on the language. It has some bugs where it replaces some words with Unicode symbols like ©, and it's completely oblivious to this even when pointed out.

_zoltan_

what's your workflow?

ed_mercer

> That’s why we’re introducing Gemini CLI

Definitely not because of Claude Code eating our lunch!

jstummbillig

I find it hard to imagine that any of the major model vendors are suffering from demand shortages right now (if that's what you mean?)

If you mean: This is "inspired" by the success of Claude Code. Sure, I guess, but it's also not like Claude Code brought anything entirely new to the table. There is a lot of copying from each other and continually improving upon that, and it's great for the users and model providers alike.

unshavedyak

Yea, i'm not even really interested in Gemini atm because last i tried 2.5 Pro it was really difficult to shape behavior. It would be too wordy, or offer too many comments, etc - i couldn't seem to change some base behaviors, get it to focus on just one thing.

Which is surprising because at first i was ready to re-up my Google life. I've been very anti-Google for ages, but at first 2.5 Pro looked so good that i felt it was a huge winner. It just wasn't enjoyable to use because i was often at war with it.

Sonnet/Opus via Claude Code are definitely less intelligent than my early tests of 2.5 Pro, but they're reasonable, listen, stay on task and etc.

I'm sure i'll retry eventually though. Though the subscription complexity with Gemini sounds annoying.

sirn

I've found that Gemini 2.5 Pro is pretty good at analyzing existing code, but really bad at generating a new code. When I use Gemini with Aider, my session usually went like:

    Me: build a plan to build X
    Gemini: I'll do A, B, and C to achieve X
    Me: that sounds really good, please do
    Gemini: <do A, D, E>
    Me: no, please do B and C.
    Gemini: I apologize. <do A', C, F>
    Me: no! A was already correct, please revert. Also do B and C.
    Gemini: <revert the code to A, D, E>
Whereas Sonnet/Opus on average took me more tries to get it to the implementation plan that I'm satisfied with, but it's so much easier to steer to make it produce the code that I want.

ur-whale

> It would be too wordy, or offer too many comments

Wholeheartedly agree.

Both when chatting in text mode or when asking it to produce code.

The verbosity of the code is the worst: comments often longer than the actual code, every nook and cranny of an algorithm unrolled over hundreds of lines, most of them unnecessary.

Feels like typical code a mediocre Java developer would produce in the early 2000's

porridgeraisin

> Feels like typical code a mediocre Java developer would produce in the early 2000's

So, google's codebase


troupo

And since they have essentially unlimited money they can offer a lot for free/cheaply, until all competitors die out, and then they can crank up the prices

pzo

Yeah, we already saw this with Gemini 2.5 Flash. Gemini 2.0 Flash is such a workhorse of an API model at a great price. Gemini 2.5 Flash-Lite is the same price but isn't as good, except at math and coding (a very niche use case for API users).

meetpateltech

Key highlights from blog post and GitHub repo:

- Open-source (Apache 2.0, same as OpenAI Codex)

- 1M token context window

- Free tier: 60 requests per minute and 1,000 requests per day (requires Google account authentication)

- Higher limits via Gemini API or Vertex AI

- Google Search grounding support

- Plugin and script support (MCP servers)

- Gemini.md file for memory instruction

- VS Code integration (Gemini Code Assist)
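For anyone who wants to try it, the install is a one-liner per the repo's README (package name taken from the project's docs and subject to change):

```shell
# Requires Node.js 18+; authenticates with a Google account on first run
npm install -g @google/gemini-cli
gemini
```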

Mond_

Oh hey, afaik all of this LLM traffic goes through my service!

Set up not too long ago, and afaik pretty load-bearing for this. Feels great, just don’t ask me any product-level questions. I’m not part of the Gemini CLI team, so I’ll try to keep my mouth shut.

Not going to lie, I’m pretty anxious this will fall over as traffic keeps climbing up and up.

asadm

do you mean the genai endpoints?

rbren

If you're looking for a fully open source, LLM-agnostic alternative to Claude Code and Gemini CLI, check out OpenHands: https://docs.all-hands.dev/usage/how-to/cli-mode

spiffytech

I've had a good experience with https://kilocode.ai

It integrates with VS Code, which suits my workflow better. And buying credits through them (at cost) means I can use any model I want without juggling top-ups across several different billing profiles.

joelthelion

Or aider. In any case, while top llms will likely remain proprietary for some time, there is no reason for these tools to be closed source or tied to a particular llm vendor.

iddan

This is awesome! We recently started using Xander (https://xander.bot). We've found it's even better to assign PMs to Xander on Linear comments and get a PR. Then, the PM can validate the implementation in a preview environment, and engineers (or another AI) can review the code.