Qwen3-Coder: Agentic coding in the world
144 comments · July 22, 2025
danielhanchen
I'm currently making 2bit to 8bit GGUFs for local deployment! Will be up in an hour or so at https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruc...
Also docs on running it on a 24GB GPU + 128 to 256GB of RAM here: https://docs.unsloth.ai/basics/qwen3-coder
Abishek_Muthian
Thank you for your work. Does Qwen3-Coder offer a significant advantage over Qwen2.5-Coder for non-agentic tasks like plain autocomplete and chat?
danielhanchen
Oh it should be better, especially since the model was specifically designed for coding tasks! You can disable the tool calling parts of the model!
mathrawka
Looks like the docs have a typo:
Recommended context: 65,536 tokens (can be increased)
That should be the recommended output length, as shown in the official docs: "Adequate Output Length: We recommend using an output length of 65,536 tokens for most queries, which is adequate for instruct models."
danielhanchen
Oh thanks - so the output can be any length you like - I'm actually also making 1 million context length GGUFs as well! https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruc...
andai
I've been reading about your dynamic quants, very cool. Does your library let me produce these, or only run them? I'm new to this stuff.
danielhanchen
Thank you! Oh currently not sadly - we might publish some stuff on it in the future!
Jayakumark
What will be the approximate prompt processing and generation speed (tokens/s) with this setup on an RTX 4090?
danielhanchen
I also just made IQ1_M, which needs 160GB! If you have 160 - 24 = ~136GB of RAM as well, then you should get around 3 to 5 tokens per second.
If you don't have enough RAM, then < 1 token/s.
jdright
Any idea if there is a way to run it on 256GB RAM + 16GB VRAM with usable performance, even if barely?
danielhanchen
Yes! 3-bit, maybe even 4-bit, can also fit! llama.cpp has MoE offloading, so your GPU holds the active experts and non-MoE layers; thus you only need 16GB to 24GB of VRAM! I wrote about how to do it in this section: https://docs.unsloth.ai/basics/qwen3-coder#improving-generat...
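For anyone who wants to see the shape of that setup, here's a minimal, hypothetical launcher sketch. It assumes a llama.cpp build that provides llama-server and its --override-tensor (-ot) flag for keeping MoE expert tensors in system RAM; the model filename, context size, and thread count below are placeholders, and the exact flags in the linked docs take precedence.

```python
# Hypothetical launcher: run llama-server with MoE experts kept in system RAM.
# Filenames and values are placeholders; check the linked docs for the exact flags.
import subprocess

cmd = [
    "llama-server",
    "--model", "Qwen3-Coder-480B-A35B-Instruct-UD-Q2_K_XL-00001-of-00006.gguf",  # placeholder shard name
    "--ctx-size", "32768",        # shrink or grow to fit your RAM budget
    "--n-gpu-layers", "99",       # put every layer on the GPU by default...
    "-ot", ".ffn_.*_exps.=CPU",   # ...then override: MoE expert tensors stay in system RAM
    "--threads", "16",
]
subprocess.run(cmd, check=True)
```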
sgammon
Cool, thanks! I'd like to try it
danielhanchen
It just got uploaded! I made some docs as well on how to run it at https://docs.unsloth.ai/basics/qwen3-coder
pxc
> Qwen3-Coder is available in multiple sizes, but we’re excited to introduce its most powerful variant first
I'm most excited for the smaller sizes because I'm interested in locally-runnable models that can sometimes write passable code, and I think we're getting close. But since for the foreseeable future, I'll probably sometimes want to "call in" a bigger model that I can't realistically or affordably host on my own computer, I love having the option of high-quality open-weight models for this, and I also like the idea of "paying in" for the smaller open-weight models I play around with by renting access to their larger counterparts.
Congrats to the Qwen team on this release! I'm excited to try it out.
segmondy
Small models can never match bigger models; the bigger models just know more and are smarter. The smaller models can get smarter, but as they do, the bigger models get smarter too. HN is weird because at one point this was the place where I found the most technical folks, and now for LLMs I find them on Reddit. Tons of folks are running huge models; get to researching and you will find out you can realistically host your own.
pxc
> Small models can never match bigger models; the bigger models just know more and are smarter.
They don't need to match bigger models, though. They just need to be good enough for a specific task!
This is more obvious when you look at the things language models are best at, like translation. You just don't need a super huge model for translation, and in fact you might sometimes prefer a smaller one because being able to do something in real-time, or being able to run on a mobile device, is more important than marginal accuracy gains for some applications.
I'll also say that due to the hallucination problem, beyond whatever knowledge is required for being more or less coherent and "knowing" what to write in web search queries, I'm not sure I find more "knowledgeable" LLMs very valuable. Even with proprietary SOTA models hosted on someone else's cloud hardware, I basically never want an LLM to answer "off the dome"; IME it's almost always wrong! (Maybe this is less true for others whose work focuses on the absolute most popular libraries and languages, idk.) And if an LLM I use is always going to be consulting documentation at runtime, maybe that knowledge difference isn't quite so vital— summarization is one of those things that seems much, much easier for language models than writing code or "reasoning".
All of that is to say:
Sure, bigger is better! But for some tasks, my needs are still below the ceiling of the capabilities of a smaller model, and that's where I'm focusing on local usage. For now that's mostly language-focused tasks entirely apart from coding (translation, transcription, TTS, maybe summarization). It may also include simple coding tasks today (e.g., fancy auto-complete, "ghost-text" style). I think it's reasonable to hope that it will eventually include more substantial programming tasks— even if larger models are still preferable for more sophisticated tasks (like "vibe coding", maybe).
If I end up having a lot of fun, in a year or two I'll probably try to put together a machine that can indeed run larger models. :)
saurik
> Even with proprietary SOTA models hosted on someone else's cloud hardware, I basically never want an LLM to answer "off the dome"; IME it's almost always wrong! (Maybe this is less true for others whose work focuses on the absolute most popular libraries and languages, idk.)
I feel like I'm the exact opposite here (despite heavily mistrusting these models in general): if I came to the model to ask it a question, and it decides to do a Google search, it pisses me off as I not only could do that, I did do that, and if that had worked out I wouldn't be bothering to ask the model.
FWIW, I do imagine we are doing very different things, though: most of the time, when I'm working with a model, I'm trying to do something so complex that I also asked my human friends and they didn't know the answer either, and my attempts to search for the answer are failing as I don't even know the terminology.
bredren
>you might sometimes prefer a smaller one because being able to do something in real-time, or being able to run on a mobile device, is more important than marginal accuracy gains for some applications.
This reminds me of the "the best camera is the one you have with you" idea.
Though large models are an HTTP request away, there are plenty of reasons to want to run one locally. Not the least of which is getting useful results in the absence of internet.
conradkay
For coding though it seems like people are willing to pay a lot more for a slightly better model.
Eggpants
The large models are using tools/functions to make them useful. Sooner or later open source will provide a good set of tools/functions for coding as well.
mlyle
> HN is weird because at one point this was the location where I found the most technically folks, and now for LLM I find them at reddit.
Is this an effort to chastise the viewpoint advanced? Because his viewpoint makes sense to me: I can run biggish models on my 128GB MacBook but not huge ones -- even 2-bit quantized ones suck up too many resources.
So I run a combination of local stuff and remote stuff depending upon various factors (cost, sensitivity of information, convenience / whether I'm at home, amount of battery left, etc.) ;)
Yes, bigger models are better, but often smaller is good enough.
y1n0
I'd be interested in smaller models that were less general, with a training corpus more concentrated. A bash scripting model, or a clojure model, or a zig model, etc.
ants_everywhere
There's a niche for small-and-cheap, especially if they're fast.
I was surprised in the AlphaEvolve paper how much they relied on the flash model because they were optimizing for speed of generating ideas.
BriggyDwiggs42
The small model only needs to get as good as the big model is today, not as the big model is in the future.
otabdeveloper4
> Small models can never match bigger models; the bigger models just know more and are smarter.
Untrue. The big important issue for LLMs is hallucination, and making your model bigger does little to solve it.
Increasing model size is a technological dead end. The advanced LLM of the future is not that.
ActorNightly
Not really true. Gemma from Google with quantization-aware training does an amazing job.
Under the hood, the way it works is that when you have the final probabilities, it really doesn't matter if the most likely token is selected with 59% or 75% probability - in either case it gets selected. If the 59% case gets there with a smaller amount of compute, and that holds across the board for the training set, the model will have similar performance.
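A toy numerical illustration of that point (not Gemma's actual QAT pipeline, just a sketch of why greedy decoding tolerates perturbed logits):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
logits = np.array([2.1, 1.3, 0.2, -0.5])          # "full-precision" logits for 4 candidate tokens
noisy = logits + rng.normal(scale=0.15, size=4)    # pretend quantization error nudges them

p_full, p_quant = softmax(logits), softmax(noisy)
print(p_full.round(2), p_quant.round(2))           # the probabilities shift a bit...
print(p_full.argmax() == p_quant.argmax())         # ...but the greedy pick stays the same here
```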
In theory, it should be possible to shrink models down even further while matching the performance of big models, because I really doubt that you need transformers for every single forward pass. There are probably plenty of shortcuts you can take in terms of compute for sets of tokens in the context. For example, coding structure is much more deterministic than natural text, so you probably don't need as much compute to generate accurate code.
You do need a big model first to train a small model though.
As for running huge models locally, it's not enough to run them; you need good throughput as well. If you spend $2k on a graphics card, that is way more expensive than realistic usage with a paid API, and you get slower output as well.
flakiness
The "qwen-code" app seems to be a gemini-cli fork.
https://github.com/QwenLM/qwen-code https://github.com/QwenLM/qwen-code/blob/main/LICENSE
I hope these OSS CC clones converge at some point.
Actually, it is mentioned on the page:
> we're also open-sourcing a command-line tool for agentic coding: Qwen Code. Forked from Gemini Code
rapind
I currently use claude-code as the director, basically, but outsource heavy thinking to OpenAI and Gemini Pro via Zen MCP. I could instead use gemini-cli as it's also supported by Zen. I would imagine it's trivial to add qwen-coder support if it's based on gemini-cli.
bredren
How was your experience using Gemini via Zen?
I've instead used Gemini via plain ol' chat, first building a competitive, larger context than Claude can hold, then manually bringing detailed plans and patches to Gemini for feedback, with excellent results.
I presumed MCP wouldn't give me the focused results I get from completely controlling Gemini.
And that making CC interface via MCP would also use up context on that side.
apwell23
What is the benefit of outsourcing to other models? Do you see any noticeable differences?
bredren
There are big gains to be had by having one top tier model review the work of another.
For example, you can drive one model to a very good point through several turns, and then have the second “red team” the result of the first.
Then return that to the first model with all of its built up context.
This is particularly useful in big plans doing work on complex systems.
Even with a detailed plan, it is not unusual for Claude Code to get "stuck", which can look like trying the same thing repeatedly.
You can just stop that, ask CC to summarize the current problem and attempted solutions into a “detailed technical briefing.”
Have CC then list all files related to the problem, including tests, then provide the briefing and all of the files to the second LLM.
This is particularly good for large contexts that might take multiple turns to get into Gemini.
You can have the consulted model wait to provide any feedback until you've said you're done adding context.
And then boom, you get a detailed solution without even having to directly focus on whatever minor step CC is stuck on. You stay high level.
In general, CC is immediately cured and will finish its task. This is a great time to flip it into planning mode and get plan alignment.
Get Claude to output an update on its detailed plan, including what has already been accomplished, then again ship it to the consulting model.
If you did a detailed system specification in advance (which CC hopefully was originally also working from), you can then ask the consulting model to review the work done and the planned next steps.
Inevitably the consulting model will have suggestions to improve CC’s work so far and plans. Send it on back and you’re getting outstanding results.
mrbonner
They also support Claude Code. But my understanding is Claude Code is closed source and only supports the Claude API endpoint. How do they make it work?
alwillis
> But my understanding is Claude Code is closed source and only supports the Claude API endpoint. How do they make it work?
You set the environment variable ANTHROPIC_BASE_URL to an OpenAI-compatible endpoint and ANTHROPIC_AUTH_TOKEN to the API token for the service.
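A minimal sketch of that wiring, assuming a `claude` CLI on your PATH; the endpoint URL and the key variable name below are placeholders for whatever your provider gives you, not any specific service's values:

```python
# Launch Claude Code against a third-party backend via the environment
# variables mentioned above. URL and key source are placeholders.
import os
import subprocess

env = os.environ.copy()
env["ANTHROPIC_BASE_URL"] = "https://api.example-provider.com/anthropic"      # placeholder endpoint
env["ANTHROPIC_AUTH_TOKEN"] = os.environ.get("EXAMPLE_PROVIDER_API_KEY", "")   # your provider's key

subprocess.run(["claude"], env=env, check=False)  # start Claude Code with the overridden backend
```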
I used Kimi-K2 on Moonshot [1] with Claude Code with no issues.
There's also Claude Code Router and similar apps for routing CC to a bunch of different models [2].
mrbonner
That makes sense, thanks. Do you know if this works with AWS Bedrock as well? Or do I need to sort out the proxy approach?
Zacharias030
How good is it in comparison? This is an interesting apples-to-apples situation :)
vtail
Claude uses OpenAI-compatible APIs, and Claude Code respects environment variables that change the base url/token.
segmondy
No it doesn't; Claude uses the Anthropic API. You need to run an Anthropic-to-OpenAI proxy.
Imanari
You can use any model from openrouter with CC via https://github.com/musistudio/claude-code-router
ai-christianson
We shipped RA.Aid, an agentic evolution of what aider started, back in late '24, well before CC shipped.
Our main focuses were to be 1) CLI-first and 2) truly an open source community. We have 5 independent maintainers with full commit access -- they aren't from the same org or entity (disclaimer: one has joined me at my startup Gobii, where we're working on web browsing agents).
I'd love someone to do a comparison with CC, but IME we hold our own against Cursor, Windsurf, and other agentic coding solutions.
But yes, there really needs to be a canonical FOSS solution that is not tied to any specific large company or model.
danenania
I’ll throw out a mention for my project Plandex[1], which predates Claude Code and combines models from multiple providers (Anthropic, Google, and OpenAI by default). It can also use open source and local models.
It focuses especially on large context and longer tasks with many steps.
esafak
Have you measured and compared your agent's efficiency and success rate against anything? I am curious. It would help people decide; there are many coding agents now.
sunaookami
Thank god I already made an Alibaba Cloud account last year, because this interface sucks big time. At least you get 1 million tokens free (once?). Bit confusing that they forked the Gemini CLI but you still have to set environment variables for OpenAI?
nnx
This suggests adding a `QWEN.md` in the repo for agent instructions. Where are we with `AGENTS.md`? In a team repo it's getting ridiculous to have a duplicate markdown file for every agent out there.
sunaookami
I just make a file ".llmrules" and symlink these files to it. It clutters the repo root, yes...
singhrac
I just symlink to AGENTS.md, the instructions are all the same (and gitignore the model-specific version).
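For anyone setting up the same thing, a throwaway sketch of that symlink approach (file names taken from this thread; adjust to whichever agents you actually use):

```python
# Keep one canonical AGENTS.md and expose it under each agent's expected filename.
from pathlib import Path

canonical = Path("AGENTS.md")
for name in ("CLAUDE.md", "GEMINI.md", "QWEN.md"):
    link = Path(name)
    if not link.exists():
        link.symlink_to(canonical)  # each tool reads "its" file; the content lives in one place
```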
drewbitt
CLAUDE.md MISTRAL.md GEMINI.md QWEN.md GROK.md .cursorrules .windsurfrules .copilot-instructions
Saw a repo recently with probably 80% of those
mattigames
Maybe there could be an agent that is in charge of this, trained to automatically create a file for any new agent. It could even temporarily delete local copies of MD files that no agents are currently using, to reduce the visual clutter when navigating the repo.
indigodaddy
How does one keep up with all this change? I wish we could fast-forward like 2-3 years to see if an actual winner has landed by then. I feel like at that point there will be THE tool, with no one thinking twice about using anything else.
segmondy
One keeps up with it by keeping up with it. Folks keep up with the latest social media gossip, the news, TV shows, or whatever interests them. You just stay on it. Over the weekend I got to running Kimi K2; the last 2 days I have been driving Ernie4.5-300B. I just finished downloading the latest Qwen3-235B this morning and started using it this evening. Tonight I'll start downloading this 480B; it might take 2-3 days with my crappy internet, and then I'll get to it.
Obsession?
Zacharias030
what kind of hardware do you run it on?
Sabinus
Do you write about your assessments of model capabilities and the results of your experiments?
SchemaLoad
Just ignore it until something looks useful. There's no reason to keep up; it's not like it takes 3 years of experience to type in a prompt box.
blibble
don't bother at all
assuming it doesn't all implode due to a lack of profitability, it should be obvious
int_19h
Why do you believe so? The leaderboard is highly unstable right now and there are no signs of that subsiding. I would expect the same situation 2-3 years forward, just possibly with somewhat different players.
lizardking
It's hard to avoid if you frequent HN
jasonthorsness
What sort of hardware will run Qwen3-Coder-480B-A35B-Instruct?
With the performance apparently comparable to Sonnet some of the heavy Claude Code users could be interested in running it locally. They have instructions for configuring it for use by Claude Code. Huge bills for usage are regularly shared on X, so maybe it could even be economical (like for a team of 6 or something sharing a local instance).
danielhanchen
I'm currently trying to make dynamic GGUF quants for them! It should use 24GB of VRAM + 128GB of RAM for dynamic 2bit or so - they should be up in an hour or so: https://huggingface.co/unsloth/Qwen3-Coder-480B-A35B-Instruc.... On running them locally, I do have docs as well: https://docs.unsloth.ai/basics/qwen3-coder
zettabomb
Any significant benefits at 3 or 4 bit? I have access to twice that much VRAM and system RAM but of course that could potentially be better used for KV cache.
danielhanchen
So dynamic quants like what I upload are not actually 4bit! It's a mixture of 4bit to 8bit with important layers being in higher precision! I wrote about our method here: https://docs.unsloth.ai/basics/unsloth-dynamic-2.0-ggufs
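To make the general idea concrete, here is an illustrative toy of mixed-precision assignment; this is not Unsloth's actual selection logic or scoring, just the shape of the idea: score each layer by some sensitivity proxy and keep the most sensitive ones at a higher bit width.

```python
# Toy mixed-precision plan: keep the most "sensitive" layers at 8-bit, the rest at 4-bit.
# The sensitivity proxy and the layer names here are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
layers = {f"blk.{i}.ffn": rng.normal(size=(64, 64)) for i in range(10)}

def sensitivity(w: np.ndarray) -> float:
    # Placeholder proxy: heavier-tailed weight distributions score higher.
    return float(np.abs(w).max() / np.abs(w).mean())

scores = {name: sensitivity(w) for name, w in layers.items()}
keep_high = set(sorted(scores, key=scores.get, reverse=True)[:3])  # top 3 stay at 8-bit
plan = {name: (8 if name in keep_high else 4) for name in scores}
print(plan)
```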
sourcecodeplz
For coding you want more precision, so the higher the quant, the better. But there is discussion about whether a smaller model in a higher quant is better than a larger one in a lower quant. You need to test for yourself with your use cases, I'm afraid.
Edit: They did announce that smaller variants will be released.
fzzzy
I would say that three or four bit are likely to be significantly better. But that’s just from my previous experience with quants. Personally, I try not to use anything smaller than a Q4.
simonw
There's a 4bit version here that uses around 272GB of RAM on a 512GB M3 Mac Studio: https://huggingface.co/mlx-community/Qwen3-Coder-480B-A35B-I... - see video: https://x.com/awnihannun/status/1947771502058672219
That machine will set you back around $10,000.
jychang
You can get similar performance on an Azure HX vm:
https://learn.microsoft.com/en-us/azure/virtual-machines/siz...
osti
How? These don't even have GPUs, right?
kentonv
Ugh, why is Apple the only one shipping consumer GPUs with tons of RAM?
I would totally buy a device like this for $10k if it were designed to run Linux.
jauntywundrkind
Intel already has a great value GPU. Everyone wants them to disrupt the game, destroy the product niches. Its general purpose compute performance is quite ass, alas, but maybe that doesn't matter for AI?
I'm not sure if there are higher-capacity GDDR6 and GDDR7 RAM chips to buy. I semi-doubt you can add more without more channels, to some degree, but also, AMD just shipped the R9700, based on the RX 9070 but with double the RAM. But something like Strix Halo, an APU with more LPDDR channels, could work. Word is that Strix Halo's 2027 successor Medusa Halo will go to 6 channels, and it's hard to see a significant advantage without that win; the processing is already throughput-constrained-ish, and a leap in memory bandwidth will definitely be required. Dual-channel 128-bit isn't enough!
There's also the MRDIMM standard, which multiplexes multiple chips. That promises a doubling of both capacity and throughput.
Apple's definitely done two brilliant, costly things: putting very wide (but not really fast) memory on package (Intel had dabbled in doing similar with regular-width RAM in the consumer space a while ago with Lakefield), and then tiling multiple cores together, making it so that if they had four perfect chips next to each other they could ship it as one. Incredibly brilliant maneuver to get fantastic yields, and to scale very big.
sbrother
You can buy a RTX 6000 Pro Blackwell for $8000-ish which has 96GB VRAM and is much faster than the Apple integrated GPU.
sagarm
You can get 128GB @ ~500GB/s now for ~$2k: https://a.co/d/bjoreRm
It has 8 channels of DDR5-8000.
ilaksh
To run the real version with the benchmarks they give, it would be a non-quantized, non-distilled version. So I am guessing that is a cluster of 8 H200s if you want to be more or less up to date. They have B200s now, which are much faster but also much more expensive. $300,000+
You will see people making quantized distilled versions but they never give benchmark results.
danielhanchen
Oh you can run the Q8_0 / Q8_K_XL, which is nearly equivalent to FP8 (maybe off by 0.01% or less) -> you will need 500GB of VRAM + RAM + disk space. Via MoE layer offloading, it should function OK.
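Rough back-of-envelope for where figures like that come from, assuming roughly a byte per weight at 8-bit; the overhead number for KV cache and runtime buffers is a guess, and real GGUF sizes vary with the exact quant mix:

```python
# Back-of-envelope memory estimate for a ~480B-parameter model.
# Overhead for KV cache / buffers is a rough guess, not a measurement.
PARAMS = 480e9

def est_gb(bits_per_weight: float, overhead_gb: float = 20.0) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9 + overhead_gb

print(f"~8.5 bpw (Q8_0-ish): {est_gb(8.5):.0f} GB")   # ~530 GB, the ballpark of the ~500GB above
print(f"~4.5 bpw (4-bit-ish): {est_gb(4.5):.0f} GB")  # ~290 GB, same ballpark as the ~250GB 4-bit figure below
```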
summarity
This should work well for MLX Distributed. The low activation MoE is great for multi node inference.
ilaksh
1. What hardware for that. 2. Can you do a benchmark?
sourcecodeplz
With RAM you would need at least 500GB to load it, plus some 100-200GB more for context too. Pair it with a 24GB GPU and the speed will be 10 t/s at least, I estimate.
danielhanchen
Oh yes, for the FP8 you will need 500GB-ish. 4-bit is around 250GB - offloading MoE experts / layers to RAM will definitely help - as you mentioned, a 24GB card should be enough!
vFunct
Do we know if the full model is FP8 or FP16/BF16? The hugging face page says BF16: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
So likely it needs 2x the memory.
Avlin67
A Xeon 6980P, which now costs €6K instead of €17K.
btian
No need to be super fancy. Just an RTX Pro 6000 and 256GB of RAM.
rapind
I just checked and it's up on OpenRouter. (not affiliated) https://openrouter.ai/qwen/qwen3-coder
rbren
Glad to see everyone centering on using OpenHands [1] as the scaffold! Nothing more frustrating than seeing "private scaffold" on a public benchmark report.
swyx
More info on AllHands from Robert (above): https://youtu.be/o_hhkJtlbSs
KaoruAoiShiho
How is Cognition so incompetent? They got hundreds of millions of dollars, but now they're not just supplanted by Cursor and Claude Code but also by their literal clone, an outfit that was originally called "OpenDevin".
samrus
The AI space is attracting a lot of grifters. Even the initial announcement for Devin was reeking of Elon Musk-style overpromising.
I'm sure the engineers are doing the best work they can. I just don't think leadership is as interested in making a good product as they are in creating a nice exit down the line.
mohsen1
Open-weight models matching Claude 4 is exciting! It's really possible to run this locally since it's MoE.
ilaksh
Where do you put the 480 GB to run it at any kind of speed? You have that much RAM?
Cheer2171
You can get a used 5 year old Xeon Dell or Lenovo Workstation and 8x64GB of ECC DDR4 RAM for about $1500-$2000.
Or you can rent a newer one for $300/mo on the cloud
sourcecodeplz
Everyone keeps saying this but it is not really useful. Without a dedicated GPU & VRAM, you are waiting overnight for a response... The MoE models are great but they need dedicated GPU & VRAM to work fast.
binarymax
You rent an 8x A100 or higher and pay $10k a month in costs, which will work well if you have a whole team using it and you have the cash. I've seen people spending $200-500 per day on Claude Code. So if this model is comparable to Opus, then it's worth it.
jychang
If you're running it for personal use, you don't need to put all of it onto GPU vram. Cheap DDR5 ram is fine. You just need a GPU in the system to do compute for the prompt processing and to hold the common tensors that run for every token.
For reference, a RTX 3090 has about 900GB/sec memory bandwidth, and a Mac Studio 512GB has 819GB/sec memory bandwidth.
So you just need a workstation with 8-channel DDR5 memory and 8 sticks of RAM, and stick a 3090 GPU inside of it. Should be cheaper than $5000 for 512GB of DDR5-6400 that runs at a memory bandwidth of 409GB/sec, plus an RTX 3090.
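Sanity-checking those numbers with rough arithmetic (an idealized upper bound that ignores real-world efficiency losses): bandwidth is channels x transfer rate x 8 bytes, and decode speed is roughly bounded by bandwidth divided by the bytes read per token, which for an MoE is about the active parameter count times bytes per weight.

```python
# Rough bandwidth and token-rate arithmetic for an MoE like Qwen3-Coder-480B-A35B.
channels, transfers_per_s, bytes_per_transfer = 8, 6400e6, 8
bandwidth = channels * transfers_per_s * bytes_per_transfer   # bytes/sec
print(f"{bandwidth / 1e9:.0f} GB/s")                          # ~410 GB/s, matching the figure above

active_params = 35e9       # ~35B parameters active per token (the "A35B" part)
bytes_per_weight = 0.5     # ~4-bit quant; adjust for other quants
print(f"~{bandwidth / (active_params * bytes_per_weight):.0f} tok/s upper bound (real-world is lower)")
```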
ac29
> So if this model is comparable to Opus then it’s worth it.
Qwen says this is similar in coding performance to Sonnet 4, not Opus.
danielhanchen
You don't actually need 480GB of RAM, but if you want at least 3 tokens/s, it's a must.
If you have 500GB of SSD, llama.cpp does disk offloading -> it'll be slow though, less than 1 token/s.
UncleOxidant
> but if you want at least 3 tokens / s
3 t/s isn't going to be a lot of fun to use.
teaearlgraycold
As far as inference costs go 480GB of RAM is cheap.
danielhanchen
Ye! Super excited for Coder!!
jddj
Odd to see this languishing at the bottom of /new. Looks very interesting.
Open, small, Sonnet 4-ish if the benchmarks are to be believed, tool use?
stuartjohnson12
Qwen has previously engaged in deceptive benchmark hacking. They claimed SOTA coding performance back in January, and there's a good reason that no software engineer you know was writing code with Qwen 2.5.
https://winbuzzer.com/2025/01/29/alibabas-new-qwen-2-5-max-m...
Alibaba is not a company whose culture is conducive to earnest acknowledgement that they are behind SOTA.
daemonologist
Maybe not the big general purpose models, but Qwen 2.5 Coder was quite popular. Aside from people using it directly I believe Zed's Zeta was a fine-tune of the base model.
sourcecodeplz
Benchmarks are one thing but the people really using these models, do it for a reason. Qwen team is top in open models, esp. for coding.
swyx
> there's a good reason that no software engineer you know was writing code with Qwen 2.5.
This is disingenuous. There are a bunch of hurdles to using open models over closed models, and you know them as well as the rest of us.
omneity
Also dishonest since the reason Qwen 2.5 got so popular is not so much paper performance.
danielhanchen
Ye the model looks extremely powerful! I think they're also maybe making a small variant as well, but unsure yet!
sourcecodeplz
Yes they are:
"Today, we're announcing Qwen3-Coder, our most agentic code model to date. Qwen3-Coder is available in multiple sizes, but we're excited to introduce its most powerful variant first: Qwen3-Coder-480B-A35B-Instruct."
danielhanchen
Oh yes fantastic! Excited for them!
fotcorn
It says that there are multiple sizes in the second sentence of the huggingface page: https://huggingface.co/Qwen/Qwen3-Coder-480B-A35B-Instruct
You won't be out of work creating ggufs anytime soon :)
I've been using it all day, it rips. I had to bump up the tool-calling limit in Cline to 100 and it just went through the app with no issues, got the mobile app built, fixed through the linter errors... I wasn't even hosting it with the tool-call template on with the vLLM nightly; with just stock vLLM it understood the tool-call instructions just fine.