Qwen3-Next
38 comments · September 12, 2025 · jychang
puilp0502
What kind of benefit does Multi-Token Prediction bring to the inference side? Is it only relevant in pretraining efficiency?
jychang
Speculative decoding! It makes inference a LOT faster.
Instead of generating tokens one at a time, the model drafts the second token as well, and speculative decoding then verifies that draft (instead of having it produced by a separate draft model like Qwen 0.6B). If the check passes, the 2nd token gets generated MUCH faster.
If it's wrong, you have to generate it again the normal way (a lot slower than just checking it). Usually, it's correct, so inference is a lot faster.
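Roughly, the verify/accept loop looks like this (a minimal sketch; `model.forward` is a hypothetical interface, not the actual Qwen3-Next API, and a real implementation would batch the verification and reuse the KV cache):
```python
def generate(model, tokens, n_new):
    """Greedy speculative decoding with a depth-1 MTP draft (sketch).

    Assumed interface: model.forward(seq) -> (preds, draft), where preds[i]
    is the greedy next-token prediction after seq[:i+1] and draft is the MTP
    head's guess for the token *after* preds[-1]."""
    produced = 0
    preds, draft = model.forward(tokens)
    tokens.append(preds[-1]); produced += 1
    while produced < n_new:
        # One forward pass over the sequence *plus the unverified draft token*.
        preds, next_draft = model.forward(tokens + [draft])
        true_next = preds[-2]                 # what the model actually wants next
        tokens.append(true_next); produced += 1
        if true_next == draft:
            # Draft accepted: preds[-1] was computed on a correct prefix,
            # so we get a second token out of the same forward pass.
            tokens.append(preds[-1]); produced += 1
            draft = next_draft
        else:
            # Draft rejected (the slow path): re-draft from the corrected prefix.
            _, draft = model.forward(tokens)
    return tokens
```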
rfoo
It could be a better speculative model than separately trained EAGLE etc for speculative decoding.
syntaxing
The craziest part is how far MoE has come thanks to Qwen. This beats all those 72B dense models we've had before, and runs faster than a 14B model depending on how you offload between VRAM and CPU. That's insane.
mynti
For anyone curious about what the Gated Delta Network is: https://arxiv.org/pdf/2412.06464
yorwba
Also, Gated Attention: https://arxiv.org/abs/2505.06708
irthomasthomas
llm -m qwen3-next-80b-a3b-thinking "An ASCII of spongebob"
Here's a classic ASCII art representation of SpongeBob SquarePants:
.------.
/ o o \
| |
| \___/ |
\_______/
llm -m chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 \
"An ASCII of spongebob"
Here's an ASCII art of SpongeBob SquarePants:
.--..--..--..--..--..--.
.' \ (`._ (_) _ \
.' | '._) (_) |
\ _.')\ .----..--.' /
|(_.' | / .-\-. \---.
\ 0| | ( O| O) | |
| _ | .--.____.'._.-. |
\ (_) | o -` .-` |
| \ |`-._ _ _ _ _\ /
\ | | `. |_||_| |
| o | \_ \ | -. .-.
|.-. \ `--..-' O | `.`-' .'
_.' .' | `-.-' /-.__ ' .-'
.' `-.` '.|='=.='=.='=.='=|._/_ `-'.'
`-._ `. |________/\_____| `-.'
.' ).| '=' '='\/ '=' |
`._.` '---------------'
//___\ //___\
|| ||
||_.-. ||_.-.
(_.--__) (_.--__)
Meta: I generated a few dozen spongebobs last night on the same model and NONE were as good as this. Most started well but collapsed into decoherence at the end - missing the legs off. Then this morning the very same prompt to the same model API produced a perfect bob on the first attempt. Can utilization affect response quality, if all else remains constant? Or was it just random luck?
Edit: Ok, the very next attempt, a few minutes later, failed, so I guess it is just random, and you have about a 1 in 20 chance of getting a perfect spongebob from qwen3-coder, and ~0 chance with qwen3-next.
dev_hugepages
irthomasthomas
Naturally. That's how LLMs work. During training you measure the loss (the difference between the model output and the ground truth) and try to minimize it. We prize models for their ability to learn. Here we can see that the large model does a great job at learning, while the small model performs poorly.
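(The loss being minimized is just next-token cross-entropy; a toy illustration:)
```python
import math

def cross_entropy(probs, target_index):
    # Low when the model puts high probability on the true next token.
    return -math.log(probs[target_index])

probs = [0.05, 0.80, 0.15]         # model's distribution over a toy 3-token vocab
print(cross_entropy(probs, 1))     # ~0.22: confident and correct -> small loss
print(cross_entropy(probs, 2))     # ~1.90: true token given little mass -> large loss
```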
endymion-light
I'd argue that, actually, the smaller model is doing a better job at "learning", in that it includes the key characteristics of the ASCII image even though the result is poor.
The larger model already has it in the training corpus, so it's not a particularly good measure. I'd much rather see the capabilities of a model in trying to represent in ASCII something that it's unlikely to have in its training.
Maybe a pelican riding a bike as ascii for both?
ginko
Conveniently removed the artist's signature though.
irthomasthomas
Yes - they all do that. Actually, most attempts start well but unravel toward the end.
llm -m chutes/Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8 \
"An ASCII of spongebob"
Here's an ASCII art of SpongeBob SquarePants:
```
.--..--..--..--..--..--.
.' \ (`._ (_) _ \
.' | '._) (_) |
\ _.')\ .----..--. /
|(_.' | / .-\-. \
\ 0| | ( O| O) |
| _ | .--.____.'._.-.
/.' ) | (_.' .-'"`-. _.-._.-.--.-.
/ .''. | .' `-. .-'-. .-'"`-.`-._)
.'.' | | | | | | | | | |
.'.' | | | | | | | | | |
.'.' | | | | | | | | | |
.'.' | | | | | | | | | |
.'.' | | | | | | | | | |
.'.' | | | | | | | | | |
```
eurekin
Certainly not defending LLMs here, don't take it that way.
Humans do it too. I have given up on my country's information sources, because I could recognize original sources that were being deliberately omitted. There's a satirical webpage that is basically a Reddit scrape. Most users don't notice, and those who do don't seem to care.
slimebot80
Complete newbie here - some questions, if I may!
This stuff can run on a local machine without internet access, correct?
And it can pretty much match Nano Banana? https://github.com/PicoTrex/Awesome-Nano-Banana-images/blob/...
Also -- what are the specs for a machine to run it (even if slowly!)
NitpickLawyer
This model can be run completely offline, yes. You'll need anywhere from 60-200 GB of RAM (either VRAM for high speeds, or a combination of VRAM and RAM, or just CPU+RAM). The active params are really low (3B) so it'll likely run fine even on CPU. You should get 10-15+ t/s even on old DDR4 systems. Offload some experts to a GPU (can be as low as 8-16 GB) and you'll see greater speeds.
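A minimal loading sketch with Hugging Face transformers (assuming a transformers version that includes Qwen3-Next support; the weights need to be downloaded once, after which no internet access is required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Next-80B-A3B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's native dtype
    device_map="auto",    # spread across GPU VRAM and spill the rest to system RAM
)

inputs = tokenizer("Hello, world", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```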
This has nothing to do with nano banana, or image generation. For that you want the qwen image edit[1] models.
prawel
What you mean is Qwen Image and Qwen Image Edit; you can run those on a local machine, using the Draw Things application for example.
The model discussed here is a text model, similar to ChatGPT. You can also run it on your local machine, but not yet, as apps need to be updated with Qwen3-Next support (llama.cpp, Ollama, etc.).
dragonwriter
> This stuff can run on a local machine without internet access, correct?
Yes.
> And it can pretty much match Nano Banana?
No, Qwen3-Next is not a multimodal model, it has no image generation function.
Davidzheng
Isn't this one a text model?
slimebot80
Ah, maybe! I am lost reading this page with all the terminology
arcanemachiner
You'll get used to it.
Make sure to lurk on r/LocalLlama.
Jgoauh
Seems impressive. I believe better architectures are really the path forward; taking this model and what GPT-OSS-120B can achieve, I don't think you need more than 100B params.
NitpickLawyer
New arch seems cool, and it's amazing that we have these published in the open.
That being said, Qwen models are extremely overfit. They can do some things well, but they are very limited in generalisation, compared to closed models. I don't know if it's simply scale, or training recipes, or regimes. But if you test them out of distribution (OOD), the models utterly fail to deliver, where the closed models still provide value.
vintermann
Could you give some practical examples? I don't know what Qwen's 36T-token training set is like, so I don't know what it's overfitting to...
NitpickLawyer
Take math and coding for example:
- in math, if they can solve a problem, or a class of problems, they'll solve it. If you use a "thinking" model + maj@x (sketched after these examples), you'll get strong results. But if you try, for example, to have the model consider a particular way or method of exploring a problem, it'll default to "solving" mode. It's near impossible to have it do something else with a math problem, other than solving it. Say "explore this part, in this way, using this method". Can't do it. It'll maybe play a bit, but then enter "solving" mode and continue to solve it as it was trained.
In practice, this means that "massive parallel" test time compute becomes harder to do with these models, because you can't "guide" them towards certain aspects of a problem. They are extremely "stubborn".
- in coding it's even more obvious. Ask them to produce any zero-shot, often-tested and often-shown thing (SPA, game, visualisation, etc.) and they do it. Convincingly.
But ask them to look at a piece of code and extract meaning, and they fail. Or ask them to reverse an implementation. Figure out what a function does and reverse its use, or make it do something else, and they fail.
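As referenced above, maj@x just means sampling x independent attempts and majority-voting the final answer; a minimal sketch, with a hypothetical `solve` callable standing in for the model call:
```python
from collections import Counter

def maj_at_x(solve, problem, x=8):
    # Sample x independent solutions and keep the most common final answer.
    answers = [solve(problem) for _ in range(x)]
    return Counter(answers).most_common(1)[0][0]
```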
pveierland
> "The content loading failed."
It's amazing how far and how short we've come with software architectures.
yekanchi
How much VRAM does it require?
NitpickLawyer
A good rule of thumb is to think that one param is one unit of storage. The "default" unit of storage these days is bf16 (i.e. 16 bits for 1 weight). So for an 80B model that'll be ~160GB of weights. Then you have quantisation, usually in 8bit and 4bit. That means each weight is "stored" in 8 bits or 4 bits. So for an 80B model that'll be ~80GB in fp8 and ~40GB in fp4/int4.
But in practice you need a bit more than that. You also need some space for context, and then for kv cache, potentially a model graph, etc.
So you'll see in practice that you need 20-50% more RAM than this rule of thumb.
For this model, you'll need anywhere from 50GB (tight) to 200GB (full) RAM. But it also depends how you run it. With MoE models, you can selectively load some experts (parts of the model) in VRAM, while offloading some in RAM. Or you could run it fully on CPU+RAM, since the active parameters are low - 3B. This should work pretty well even on older systems (DDR4).
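As a quick illustration of that rule of thumb (the ~20% overhead is an assumed fudge factor for context/KV cache, not a measured number):
```python
def model_memory_gb(params_billions, bits_per_weight, overhead=0.20):
    weights_gb = params_billions * bits_per_weight / 8   # billions of params -> GB
    return weights_gb * (1 + overhead)

for name, bits in [("bf16", 16), ("fp8", 8), ("int4", 4)]:
    print(f"80B @ {name}: ~{model_memory_gb(80, bits):.0f} GB")
# 80B @ bf16: ~192 GB
# 80B @ fp8: ~96 GB
# 80B @ int4: ~48 GB
```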
DiabloD3
That's not a meaningful question. Models can be quantized to fit into much smaller memory requirements, and not all MoE layers (in MoE models) have to be offloaded to VRAM to maintain performance.
keyle
For a model that can run offline, they've nailed how the website can too.
And it appears like it's thinking about it! /s
croemer
ERR_NAME_NOT_RESOLVED
Coolest part of Qwen3-Next, in my opinion, is that they do MTP without adding another un-embedding matrix.
Deepseek R1 also has an MTP layer (layer 61): https://huggingface.co/deepseek-ai/DeepSeek-R1/blob/main/mod...
But Deepseek R1 adds embed_tokens and shared_head.head tensors, which are [129280, 7168] each, or about 2GB combined at FP8.
Qwen3-Next doesn't have that: https://huggingface.co/Qwen/Qwen3-Next-80B-A3B-Instruct/blob...
So it saves a few GB in active parameters for MTP, which is a Big Deal.
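A quick check of those numbers:
```python
vocab, hidden = 129280, 7168
params_per_tensor = vocab * hidden                    # ~0.93B parameters each
bytes_per_weight_fp8 = 1                              # one byte per weight at FP8
total_gb = 2 * params_per_tensor * bytes_per_weight_fp8 / 1e9   # embed_tokens + shared_head.head
print(f"{total_gb:.2f} GB")                           # ~1.85 GB, i.e. "about 2GB"
```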