
Gemma3 – The current strongest model that fits on a single GPU

archerx

I have tried a lot of local models. I have 656GB of them on my computer so I have experience with a diverse array of LLMs. Gemma has been nothing to write home about and has been disappointing every single time I have used it.

Models that are worth writing home about are:

EXAONE-3.5-7.8B-Instruct - It was excellent at taking podcast transcriptions and generating show notes and summaries.

Rocinante-12B-v2i - Fun for stories and D&D

Qwen2.5-Coder-14B-Instruct - Good for simple coding tasks

OpenThinker-7B - Good and fast reasoning

The DeepSeek distills - Able to handle more complex tasks while still being fast

DeepHermes-3-Llama-3-8B - A really good vLLM

Medical-Llama3-v2 - Very interesting but be careful

Plus more but not Gemma.

sieve

The Gemma 2 Instruct models are quite good (9 & 27B) for writing. The 27B is good at following instructions. I also like DeepSeek R1 Distill Llama 70B.

The Gemma 3 Instruct 4B model that was released today matches the output of the larger models for some of the stuff I am trying.

Recently, I compared 13 different online and local LLMs in a test where they tried to recreate Saki's "The Open Window" from a prompt.[1] Claude wins hands down IMO, but the other models are not bad.

[1] Variations on a Theme of Saki (https://gist.github.com/s-i-e-v-e/b4d696bfb08488aeb893cce3a4...)

smahs

[dead]

anon373839

From the limited testing I've done, Gemma 3 27B appears to be an incredibly strong model. But I'm not seeing the same performance in Ollama as I'm seeing on aistudio.google.com. So, I'd recommend trying it from the source before you draw any conclusions.

One of the downsides of open models is that there are a gazillion little parameters at inference time (sampling strategy, prompt template, etc.) that can easily impair a model's performance. It takes some time for the community to iron out the wrinkles.

moffkalast

At the end of the day it doesn't matter how good it is: it has no system prompt, which means no steerability; a sliding window that makes inference incredibly slow compared to similar-sized models, because the mechanism is too niche and most inference systems have high-overhead implementations of it; and Google's psychotic instruct tuning that made Gemma 2 an inconsistent and unreliable glass cannon.

I mean hell, even Mistral added system prompts in their last release, Google are the only ones that don't seem to bother with it by now.

hnfong

If you actually look at Gemma 3, you'll see that it does support system prompts.

I’ve never seen a case where putting the system prompt in the user prompt would lead to significantly different outcomes though. Would like to see some examples.

(edit: my bad. i stand corrected. it seems the code just prepends the system prompts to the first user prompt.)

mythz

Concur with Gemma 2 being underwhelming; I dismissed it pretty quickly, but gemma3:27b is looking pretty good atm.

BTW mistral-small:24b is also worth mentioning (IMO best local model) and phi4:14b is also pretty strong for its size.

mistral-small was my previous local goto model, testing now to see if gemma3 can replace it.

InsideOutSanta

One more vote for Mistral for local models. The 7B model is extremely fast and still good enough for many prompts.

zacksiri

You should try Mistral Small 24b. It's been my daily companion for a while and has continued to impress me. I've heard good things about QwQ 32b, which just came out, too.

jrm4

Nice, I think you're nailing the important thing -- which is "what exactly are they good FOR?"

I see a lot of talk about good and not good here, but (and a question for everyone) what are people using the non-local big boys for that the locals CAN'T do? I mean, IRL tasks?

usef-

To clarify, are you basing this comment on experience with previous Gemma releases, or the one from today?

xnx

Let us know when you've evaluated Gemma 3. Just as with the switch between ChatGPT 3.5 and ChatGPT 4, old versions don't tell you much about the current version.

rpastuszak

Thanks for the overview.

> Qwen2.5-Coder-14B-Instruct - Good for simple coding tasks

> OpenThinker-7B - Good and fast reasoning

Any chance you could be more specific, ie give an example of a concrete coding task or reasoning problem you used them for?

miroljub

Qwen2.5-Coder:32B is the best open source coding model. I use it daily, and I don't notice that it lags much behind Claude 3.5.

I would actually be happy to see an R1-distilled version; it might perform better with less resource usage.

rpastuszak

Thanks! Do you use it with Aider/terminal/a web GUI?

danielhanchen

I wrote a mini guide on running Gemma 3 at https://docs.unsloth.ai/basics/tutorial-how-to-run-gemma-3-e...!

The recommended settings according to the Gemma team are:

temperature = 0.95

top_p = 0.95

top_k = 64

Also beware of double BOS tokens! You can run my uploaded GGUFs with the recommended chat template and settings via ollama run hf.co/unsloth/gemma-3-27b-it-GGUF:Q4_K_M
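
For anyone scripting this, here is a minimal sketch of applying those settings through Ollama's REST API. This is just an illustration under assumptions: a local Ollama server on the default port, the GGUF tag pulled with the command above, and a placeholder prompt.

    # Minimal sketch: apply the recommended Gemma 3 sampling settings via
    # Ollama's REST API. Assumes a local Ollama server on the default port
    # and the GGUF pulled with the command above; the prompt is a placeholder.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "hf.co/unsloth/gemma-3-27b-it-GGUF:Q4_K_M",
            "prompt": "Summarize the Gemma 3 technical report in three bullet points.",
            "stream": False,
            "options": {"temperature": 0.95, "top_p": 0.95, "top_k": 64},
        },
    )
    print(resp.json()["response"])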

iamgopal

Small models should be trained on specific problems in specific languages, and should be built one upon another, the way containers work. I see a future where a factory or home has a local AI server hosting many highly specific models, continuously trained by a super-large LLM on the web and connected via the network to all instruments and computers to basically control the whole factory. I also see a future where all machinery comes with an AI-readable language for its own functioning: an HTTP-like AI protocol for two-way communication between a machine and an AI. Lots of possibilities.

swores

See the other HN submission (for the Gemma3 technical report doc) for a more active discussion thread - 50 comments at time of writing this.

https://news.ycombinator.com/item?id=43340491

antirez

After reading the technical report, make the effort to download the model and run it against a few prompts. In 5 minutes you'll understand how broken LLM benchmarking is.

archerx

That's why I like giving it a real-world test. For example, take a podcast transcription and ask it to make show notes and a summary. With a temperature of 0, different models will tackle the problem in different ways, and you can infer whether they really understood the transcript. Usually the transcripts I give it come from about an hour of audio of two or more people talking. A minimal sketch of that kind of test is below.
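
    # Minimal sketch of the transcript test: feed a podcast transcript to a
    # local model at temperature 0 and ask for show notes plus a summary.
    # Assumes a local Ollama server; "exaone3.5:7.8b" and transcript.txt are
    # placeholder names, not anything specific from this thread.
    import requests

    with open("transcript.txt") as f:
        transcript = f.read()

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "exaone3.5:7.8b",
            "prompt": "Write show notes and a summary for this podcast transcript:\n\n" + transcript,
            "stream": False,
            "options": {"temperature": 0},
        },
    )
    print(resp.json()["response"])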

amelius

Aren't there any "blind" benchmarks?

nathanasmith

Unfortunately that wouldn't help as much as you think since talented AI labs can just watch the public leaderboard and note what models move up and down to deduce and target whatever the hidden benchmark is testing.

toinewx

can you expand a bit?

antirez

The model performs very poorly in practice, while in the benchmarks it is shown to be at DeepSeek V3 level. It's not terrible, but it's on a different level from the models it appears very close to (a bit better / a bit worse) in the benchmarks.

anon373839

I’d recommend trying it on Google AI Studio (aistudio.google.com). I am getting exceptional results on a handful of novel problems that require deep domain knowledge and structured reasoning. I’m not able to replicate this performance with Ollama, so I suspect something is a bit off.

alekandreev

Hey, Gemma engineer here. Can you please share reports on the type of prompts and the implementation you used?

tarruda

In my experience, Gemma models were always bad at coding (but good at other tasks).

bearjaws

Prompt adherence is pretty bad from what I can tell.

elif

Good job Google. It is kinda hilarious that 'open'AI seems to be the big player least likely to release any of their models.

amelius

lyingAI

mythz

Not sure if anyone else experiences this, but ollama downloads start off strong and then the last few MBs take forever.

Finally just finished downloading (gemma3:27b). Requires the latest version of Ollama to use, but now working, getting about 21 tok/s on my local 2x A4000.

From my few test prompts looks like a quality model, going to run more tests to compare against mistral-small:24b to see if it's going to become my new local model.

Patrick_Devine

There are some fixes coming to uniformly speed up pulls. We've been testing that out but there are a lot of moving pieces with the new engine so it's not here quite yet.

dizhn

It might not be downloading but converting the model. Or, if it's already downloading a properly formatted model file, it could be deduping on disk, which I hear it does. This also makes its model files on disk useless for other frontends.

squeakywhite

I experienced this just now. The download slowed down to approx 500 kB/s for the last 1% or so. When this happens, you can Ctrl+C to cancel and then start the download again. It will continue from where it left off, but at regular (fast) download speed.

singularity2001

How does it compare to OlympicCoder 7B [0], which allegedly beats Claude Sonnet 3.7 in the International Olympiad in Informatics [1]?

[0] https://huggingface.co/open-r1/OlympicCoder-7B?local-app=vll...

[1] https://pbs.twimg.com/media/GlyjSTtXYAAR188?format=jpg&name=...

smcleod

No mention of how well it's claimed to perform with tool calling?

The Gemma series of models has historically been pretty poor when it comes to coding and tool calling - two things that are very important to agentic systems, so it will be interesting to see how 3 does in this regard.

tarruda

My usual non-scientific benchmark is asking it to implement the game Tetris in python, and then iterating with the LLM to fix/tweak it.

My prompt to Gemma 27b (q4) on open webui + ollama: "Can you create the game tetris in python?"

It immediately starts writing code. After the code is finished, I noticed something very strange: it starts a paragraph like this:

" Key improvements and explanations:

     Clearer Code Structure:  The code is now organized into a Tetris class, making it much more maintainable and readable.  This is essential for any non-trivial game.
"

Followed by a bunch of fixes/improvements, as if this was not the first iteration of the script.

I also notice a very obvious error: In the `if __name__ == '__main__':` block, it tries to instantiate a `Tetris` class, when the name of the class it created was "TetrisGame".

Nevertheless, I try to run it and paste the `NameError: name 'Tetris' is not defined` error along with stack trace specifying the line. Gemma then gives me this response:

"The error message "NameError: name 'Tetris' is not defined" means that the Python interpreter cannot find a class or function named Tetris. This usually happens when:"

Then continues with a generic explanation with how to fix this error in arbitrary programs. It seems like it completely ignored the code it just wrote.

tarruda

I ran the same prompt on Google AI Studio, and it had the same behavior of talking about improvements as if the code it wrote was not the first version.

Other than that, the experience was completely different:

- The game worked on first try

- I iterated with the model making enhancements. The first version worked but didn't show scores, levels or next piece, so I asked it to implement those features. It then produced a new version which almost worked: The only problem was that levels were increasing whenever a piece fell, and I didn't notice any increase in falling speed.

- So I reported the problems with level tracking and falling speed and it produced a new version which crashed immediately. I pasted the error and it was able to fix it in the next version

- I kept iterating with the model, fixing issues until it finally produced a perfectly working tetris game which I played and eventually lost due to high falling speed.

- As a final request, I asked it to port the latest working version of the game to JS/HTML with the implementation self contained in a file. It produced a broken implementation, but I was able to fix it after tweaking it a little bit.

Gemma 3 27b on Google AI studio is easily one of the best LLMs I've used for coding.

Unfortunately, I can't seem to reproduce the same results in ollama/open webui, even when running the full fp16 version.

whbrown

Those sound like the sort of issues which could be caused by your server silently truncating the middle of your prompts.

By default, Ollama uses a context window size of 2048 tokens.
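
For reference, a minimal sketch of raising it per request (assumes a local Ollama server; option names follow Ollama's REST API, and the model tag is whatever you have pulled locally):

    # Minimal sketch: raise num_ctx above the 2048-token default so long
    # conversations aren't silently truncated. Assumes a local Ollama server.
    import requests

    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "gemma3:27b",
            "messages": [{"role": "user", "content": "Can you create the game tetris in python?"}],
            "stream": False,
            "options": {"num_ctx": 8192},
        },
    )
    print(resp.json()["message"]["content"])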

tarruda

I checked this, the whole conversation was about 1000 tokens.

I suspect the Ollama version might have wrong default settings, such as conversation delimiters. The experience of Gemma 3 in AI studio is completely different.

whiplash451

Why did this get downvoted? Asking genuinely

sigmoid10

These bar charts are getting more disingenuous every day. This one makes it seem like Gemma3 ranks as nr. 2 on the arena just behind the full DeepSeek R1. But they just cut out everything that ranks higher. In reality, R1 currently ranks as nr. 6 in terms of Elo. It's still impressive for such a small model to compete with much bigger models, but at this point you can't trust any publication by anyone who has any skin in model development.

swores

The chart isn't claiming to be an overview of the best ranking models - it's an evaluation of this particular model, which wouldn't be helped at all by having loads more unrelated models in the chart, even if that would have helped you avoid misunderstanding the point of the chart.

sigmoid10

How are better ranking models unrelated? They are explicitly comparing open and closed, small and large foundation models. Leaving the best ones out is just plain disingenuous. There's no way to sugarcoat this.

antirez

The most disturbing thing is that in the chart it ranks higher than V3. Test a few prompts against DeepSeek V3 and Gemma 3. They are at two totally different levels: one is a SOTA model, the other is a small LLM that can perhaps be useful for certain vertical tasks.

pzo

The Open LLM Leaderboard [0] is probably a good way to compare open-weights models on many different benchmarks. I wish they also included some closed-source ones, just to see the relative ranking of the best open-weights models against closed-source ones. They haven't updated it for Gemma 3 yet, though.

[0] https://huggingface.co/spaces/open-llm-leaderboard/open_llm_...

sigmoid10

Beware that they use very narrow metrics, which is also why you only see fine-tunes over there gaming narrow aspects. If your edge case fits into one of those - great. If not, and you just want a good general-purpose model, you'll have to look elsewhere.

null

[deleted]