
QwQ-32B: Embracing the Power of Reinforcement Learning

antirez

Note the massive context length (130k tokens). That makes sense: it would be kinda pointless to generate a long CoT without enough context to contain it and the reply.

EDIT: Here we are. My first prompt created a CoT so long that it catastrophically forgot the task (but I don't believe I was near 130k -- using ollama with the fp16 model). I asked one of my test coding questions, totally unrelated to what the model ends up talking about:

<QwQ output> But the problem is in this question. Wait perhaps I'm getting ahead of myself.

Wait the user hasn't actually provided a specific task yet. Let me check again.

The initial instruction says:

"Please act as an AI agent that can perform tasks... When responding, first output a YAML data structure with your proposed action, then wait for feedback before proceeding."

But perhaps this is part of a system prompt? Wait the user input here seems to be just "You will be given a problem. Please reason step by step..." followed by a possible task? </QwQ>

Note: Ollama "/show info" shows that the context size set is correct.

anon373839

> Note: Ollama "/show info" shows that the context size set is correct.

That's not what Ollama's `/show info` is telling you. It actually just means that the model is capable of processing the context size displayed.

Ollama's behavior around context length is very misleading. There is a default context length limit parameter unrelated to the model's capacity, and I believe that default is a mere 2,048 tokens. Worse, when the prompt exceeds it, there is no error -- Ollama just silently truncates it!

If you want to use the model's full context window, you'll have to execute `/set parameter num_ctx 131072` in Ollama chat mode, or if using the API or an app that uses the API, set the `num_ctx` parameter in your API request.
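
For the API route, here's a minimal Python sketch (assuming a local Ollama server on the default port and a pulled "qwq" model; the prompt is just a placeholder). The `options.num_ctx` field is what overrides the small default:

    import requests

    # Hypothetical example: send one chat request with a per-request context
    # window of 131072 tokens instead of Ollama's small default.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "qwq",
            "messages": [{"role": "user", "content": "Summarize the plan below ..."}],
            "options": {"num_ctx": 131072},  # overrides the default context length
            "stream": False,
        },
    )
    print(resp.json()["message"]["content"])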

antirez

Ok, this explains why QwQ is working great on their chat. Btw, I've seen this multiple times: ollama inference, for one reason or another, even without quantization, somehow had issues with the actual model performance. In one instance the same model at the same quantization level was great when run with MLX, and I got terrible results with ollama. The point here is not ollama itself, but that there is no testing at all for these models.

I believe that models should be released with test vectors at t=0, providing the expected output for a given prompt at full precision and at different quantization levels. And also, for specific prompts, the full output logits for a few tokens, so that it's possible to compute the error due to quantization or inference errors.
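
Just to illustrate, one possible shape for such a test vector might be something like this (the field names and values are made up, not an existing format):

    # Purely illustrative sketch: released per precision / quant level, it would
    # let anyone diff their local inference stack against the reference numerically.
    test_vector = {
        "model": "Qwen/QwQ-32B",
        "precision": "bf16",                # or "q4_K_M", "q8_0", ...
        "temperature": 0.0,
        "prompt": "What is 12 * 13?",
        "expected_output": "<exact greedy completion at this precision>",
        "logit_samples": [                  # top logits for the first few generated tokens
            {"step": 0, "top_tokens": {"151": -0.12, "7943": -2.31}},
            {"step": 1, "top_tokens": {"23": -0.05, "198": -3.87}},
        ],
    }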

svachalek

Yeah the state of the art is pretty awful. There have been multiple incidents where a model has been dropped on ollama with the wrong chat template, resulting in it seeming to work but with greatly degraded performance. And I think it's always been a user that notices, not the ollama team or the model team.

anon373839

The test vectors idea is pretty interesting! That's a good one.

I haven't been able to try out QwQ locally yet. There seems to be something wrong with this model on Ollama / my MacBook Pro. The text generation speed is glacial (much, much slower than, say, Qwen 72B at the same quant). I also don't see any MLX versions on LM Studio yet.

wizee

Ollama defaults to a context of 2048 regardless of model unless you override it with /set parameter num_ctx [your context length]. This is because long contexts make inference slower. In my experiments, QwQ tends to overthink and question itself a lot and generate massive chains of thought for even simple questions, so I'd recommend setting num_ctx to at least 32768.

In my experiments with a couple of mechanical engineering problems, it did fairly well on the final answers, correctly solving problems that even DeepSeek R1 (full size) and GPT-4o got wrong in my tests. However, the chain of thought was absurdly long, convoluted, circular, and all over the place. This also made it very slow, maybe 30x slower than comparably sized non-thinking models.

I used a num_ctx of 32768, top_k of 30, temperature of 0.6, and top_p of 0.95. These parameters (other than context length) were recommended by the developers on Hugging Face.

zamadatix

I always see:

  /set parameter num_ctx <value>
Explained but never the follow-up:

  /save <custom-name>
So you don't have to do the parameter change every load. Is there a better way or is it kind of like setting num_ctx in that "you're just supposed to know"?

sReinwald

You can also set

    OLLAMA_CONTEXT_LENGTH=<tokens>
as an environment variable to change ollama's default context length.

flutetornado

My understanding is that top_k and top_p are two different methods of filtering candidate tokens during inference. top_k=30 considers only the 30 most likely tokens when selecting the next token to generate, and top_p=0.95 considers the smallest set of tokens whose cumulative probability reaches 0.95. You should only need to select one.

https://github.com/ollama/ollama/blob/main/docs/modelfile.md...

Edit: Looks like both work together. "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)"

Not quite sure how this is implemented - maybe one is preferred over the other when there are enough interesting tokens!

nodja

They both work on a list of tokens sorted by probability. top_k selects a fixed number of tokens; top_p selects the top tokens until the sum of their probabilities passes the threshold p. So for example if the top 2 tokens have a .5 and .4 probability, then a 0.9 top_p would stop selecting there.

Both can be chained together, and some inference engines let you change the order of the token filtering, so you can do p before k, etc. (along with all the other sampling parameters, like repetition penalty, removing the top token, DRY, etc.). Each filtering step readjusts the probabilities so they always sum to 1.
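
A toy sketch of that chained filtering, assuming k is applied before p (illustrative only, not any particular engine's actual code):

    import numpy as np

    def top_k_top_p_filter(probs, k=30, p=0.95):
        """probs: 1-D array of next-token probabilities summing to 1."""
        order = np.argsort(probs)[::-1]            # token ids, most probable first
        sorted_p = probs[order]

        k = min(k, len(probs))                     # top-k: restrict to the k most likely tokens

        # top-p: within that set, keep tokens until their cumulative probability reaches p
        cum = np.cumsum(sorted_p[:k])
        cutoff = int(np.searchsorted(cum, p)) + 1  # include the token that crosses p
        kept = order[:min(k, cutoff)]

        # renormalize the survivors so they sum to 1 again
        filtered = np.zeros_like(probs)
        filtered[kept] = probs[kept]
        return filtered / filtered.sum()

    # The example from above: the top two tokens at .5 and .4 already reach p=0.9,
    # so sampling is restricted to just those two.
    print(top_k_top_p_filter(np.array([0.5, 0.4, 0.05, 0.03, 0.02]), k=30, p=0.9))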

hbbio

"My first prompt created a CoT so long that it catastrophically forgot the task"

Many humans would do that

AustinDev

I tried the 'Strawberry' question which generated nearly 70k words of CoT.

moffkalast

I think you guys might be using too low of a temperature, it never goes beyond like 1k thinking tokens for me.

nicman23

lol did it at least get it right?

nkozyra

It's a hard problem, that's a lot to ask.

ignorantguy

Yeah, it did the same in my case too. It did all the work in the <think> tokens but did not spit out the actual answer. I was not even close to 100K tokens.

freehorse

If you did not change the context length, it is certain that it is not 2k or so. In "/show info" there is a field "context length" which is about the model in general, while "num_ctx" under "parameters" is the context length for the specific chat.

I use modelfiles because I only use ollama for its easy integration with other stuff, e.g. with Zed, so this way I can easily choose models with a set context size directly.

Here nothing fancy, just

    FROM qwq
    PARAMETER num_ctx 100000
You save this somewhere as a text file, then you run

    ollama create qwq-100k -f path/to/that/modelfile
and you now have "qwq-100k" in your list of models.

smallerize

From https://huggingface.co/Qwen/QwQ-32B

Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.

GTP

Sorry, could you please explain what this means? I'm not into machine learning, so I don't get the jargon.

smallerize

Well I can't be positive, but it looks like some of the factors that support a long context length might be set wrong. https://blog.eleuther.ai/yarn/

tsunego

Can't wait to see if my memory can even accommodate this context.

gagan2020

China's strategy is to open-source the software part and earn on the robotics part. And they are already ahead of everyone in that game.

These things are pretty interesting as they develop. What will the US do to retain its power?

BTW I am Indian and we are not even in the race as a country. :(

nazgulsenpai

If I had to guess, more tariffs and sanctions that increase the competing nation's self-reliance and harm domestic consumers. Perhaps my peabrain just can't comprehend the wisdom of policymakers on the sanctions front, but it just seems like all it does is empower the target long-term.

h0l0cube

The tariffs are for the US to build its own domestic capabilities, but this will ultimately shift the rest of the world's trade away from the US and toward each other. It's a trade-off – no pun intended – between local jobs/national security and downgrading their own economy/geo-political standing/currency. Anyone who's been making financial bets on business as usual for globalization is going to see a bit of a speed bump over the next few years, but in the long term it's the US taking an L to undo decades of undermining their own people's prospects by offshoring their entire manufacturing capability. Their trump card - still no pun intended - is their military capability, which the world will have to wean itself off first.

whatshisface

Tariffs don't create local jobs; they shut down exporting industries (other countries buy our exports with the dollars we pay them for our imports), and some of those people may over time transition to non-export industries.

Here's an analysis indicating how many jobs would be destroyed in total over several scenarios: https://taxfoundation.org/research/all/federal/trump-tariffs...

pstuart

The tariffs are seen as "free money" that will allow for cutting taxes on the wealthy. Note that the current messaging is "we spend too much money" and there's nothing about "we need to invest in _foo_"

bugglebeetle

Unitree just open-sourced their robot designs:

https://sc.mp/sr30f

China’s strategy is to prevent any one bloc from achieving dominance and cutting off the others, while being the sole locus for the killer combination of industrial capacity + advanced research.

aurareturn

  China’s strategy is to prevent any one bloc from achieving dominance and cutting off the others, while being the sole locus for the killer combination of industrial capacity + advanced research.
You're acting like these startups are controlled by the Chinese government. In reality, they're just like any other American startup. They make decisions on how to make the most money - not what the Chinese government wants.

esalman

What if aligning with Chinese interests becomes the best way to make money? What's stopping the Chinese government from providing better incentives to businesses and academics?

asadm

Not really. It seems unitree didn't open source anything. Not anything useful.

rcdwealth

[dead]

dtquad

>BTW I am Indian and we are not even in the race as a country

Why are you surprised?

India was on a per capita basis poorer than sub-Saharan Africa until 2004.

The only reason India is no longer poorer than Africa is because the West (the IMF and World Bank) forced India to do structural reforms in 1991 that stopped the downward trajectory of the Indian economy since its 1947 independence.

aurareturn

  The only reason India is no longer poorer than Africa is because the West (the IMF and World Bank) forced India to do structural reforms in 1991 that stopped the downward trajectory of the Indian economy since its 1947 independence.
India had the world's largest GDP at some point in its history. Why did India lose its status?

null

[deleted]

holoduke

Also part of their culture/identity. A good thing, I believe.

dcreater

India is absolutely embarrassing. Could have been an extremely important 3rd party that obviates the moronic US vs China, us or them, fReEdOm vs communism narrative with all the talent it has.

esalman

Turns out conservatism and far right demagoguery is not great for progress.

dr_dshiv

I love that emphasizing math learning and coding leads to general reasoning skills. Probably works the same in humans, too.

20x smaller than DeepSeek! How small can these go? What kind of hardware can run this?

daemonologist

It needs about 22 GB of memory after 4 bit AWQ quantization. So top end consumer cards like Nvidia's 3090 - 5090 or AMD's 7900 XTX will run it.

be_erik

Just ran this on a 4000-series RTX with 24 GB of VRAM and it struggles to load, but it's very fast once the model loads.

Ey7NFZ3P0nzAe

A mathematician once told me that this might be because math teaches you to have different representations of the same thing; you then have to manipulate those abstractions and wander through their hierarchy until you find an objective answer.

samstave

>I love that emphasizing math learning and coding leads to general reasoning skills

It's only logical.

Leary

To test: https://chat.qwen.ai/ and select Qwen2.5-plus, then toggle QWQ.

bangaladore

They baited me into putting in a query and then asked me to sign up to submit it. They even have a "Stay Logged Out" button that I thought would bypass it, but no.

I get running these models is not cheap, but they just lost a potential customer / user.

zamadatix

Running this model is dirt cheap, they're just not chasing that type of customer.

mrshu

You can also try the HuggingFace Space at https://huggingface.co/spaces/Qwen/QwQ-32B-Demo (though it seems to be fully utilized at the moment)

null

[deleted]

doublerabbit

Check out venice.ai

They're pretty up to date with the latest models. $20 a month.

Alifatisk

They have an option specifically for QwQ-32B now.

cubefox

How do you know this model is the same as in the blog post?

Leary

One of the people on the Qwen team tweeted this instruction.

cubefox

Thanks. I just saw they also link to https://chat.qwen.ai/?models=Qwen2.5-Plus in the blog post.

attentive

It's on Groq now for super fast inference.

fsndz

Super impressive. We won't need that many GPUs in the future if we can get the performance of DeepSeek R1 with even fewer parameters. NVIDIA is in trouble. We are moving towards a world of very cheap compute: https://medium.com/thoughts-on-machine-learning/a-future-of-...

holoduke

Have you heard of the Jevons paradox? It says that whenever new tech is used to make something more efficient, the tech is just scaled up to make the product quality higher. Same here. DeepSeek has some algorithmic improvements that reduce the resources needed for the same output quality. But increasing resources (which are available) will increase the quality. There will always be a need for more compute. Nvidia is not in trouble. They have a monopoly on high-performing AI chips, for which demand will rise by at least a factor of 1000 in the upcoming years (my personal opinion).

UncleOxidant

I agree that the Jevons paradox can apply here. However, there have been several "breakthroughs" in the last couple of months (R1, diffusion LLMs, this) that really push down the amount of GPU compute needed, such that I think it's going to be problematic for companies that went out and bought boatloads of GPUs (like OpenAI, for example). So while it might not be bad news for Nvidia (given Jevons), it does seem to be bad news for OpenAI.

fsndz

Yeah, sure, I guess the investors selling NVIDIA's stock like crazy know nothing about Jevons.

pzo

Surprisingly, those open models might be a savior for Apple and a gift for Qualcomm too. They can fine-tune them to their liking, catch up to the competition, and also sell more of their devices in the future. Long term, even better models for Vision will have a problem competing with the latency of smaller models that are good enough but have very low latency. This will be important in robotics; it's the reason Figure AI dumped OpenAI and started using their own AI models based on open source (the founder mentioned this recently in an interview).

daemonologist

It says "wait" (as in "wait, no, I should do X") so much while reasoning it's almost comical. I also ran into the "catastrophic forgetting" issue that others have reported - it sometimes loses the plot after producing a lot of reasoning tokens.

Overall though quite impressive if you're not in a hurry.

huseyinkeles

I read somewhere, though I can't find it now, that the reasoning models were trained heavily to keep saying "wait" so they keep reasoning and don't return early.

rahimnathwani

Is the model using budget forcing?

Szpadel

I do not understand why they force a "wait" when the model wants to output </think>.

Why not just decrease the </think> probability? If the model really wants to finish, it could maybe overpower it in cases where it's a really simple question, and it would definitely allow the model to express the next thought more freely.

rahimnathwani

  why not just decrease </think> probability?
Huggingface's transformers library supports something similar to this. You set a minimum length, and until that length is reached, the end of sequence token has no chance of being output.

https://github.com/huggingface/transformers/blob/51ed61e2f05...

S1 does something similar to put a lower limit on its reasoning output. End of thinking is represented with the <|im_start|> token, followed by the word 'answer'. IIRC the code dynamically adds/removes <|im_start|> to the list of suppressed tokens.

Both of these approaches set the probability to zero, not something small like you were suggesting.
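
For example, a minimal sketch with transformers' generate() (the small Qwen checkpoint and the lengths here are just stand-ins): min_new_tokens installs a logits processor that gives the EOS token zero probability until the minimum length is reached.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Illustrative only: until min_new_tokens is reached, generate() forces the
    # EOS token's probability to zero, so the model cannot stop early.
    tok = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
    model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

    inputs = tok("How many r's are in 'strawberry'?", return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=512,
        min_new_tokens=128,   # EOS suppressed until 128 new tokens exist
        do_sample=False,
    )
    print(tok.decode(out[0], skip_special_tokens=True))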

rosspackard

I have a suspicion it does use budget forcing. The word "alternatively" also frequently shows up, and it happens at points where a </think> tag could logically have been placed.

manmal

I guess I won’t be needing that 512GB M3 Ultra after all.

UncleOxidant

I think the Framework AI PC will run this quite nicely.

Tepix

I think you want a lot of speed to make up for the fact that it's so chatty. Two 24GB GPUs (so you have room for context) will probably be great.

seanmcdirmid

A Max with 64 GB of RAM should be able to run this (I hope). I have to wait until an MLX model is available to really evaluate its speed, though.

pickettd

seanmcdirmid

I downloaded the 8 bit quant last night but haven't had a chance to play with it yet.

mettamage

Yep, it does that. I have 64 GB and was actually running 40 GB of other stuff.

rpastuszak

How much VRAM do you need to run this model? Is 48 GB of unified memory enough?

zamalek

39 GB if you use an fp8-quantized model.[1] Remember that your OS might be using some of that itself.

As far as I recall, Ollama/llama.cpp recently added a feature to page-in parameters - so you'll be able to go arbitrarily large soon enough (at a performance cost). Obviously more in RAM = more speed = more better.

[1]: https://token-calculator.net/llm-memory-calculator
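
For a rough back-of-envelope check of those figures (weights only, assuming ~32.5B parameters; KV cache and runtime overhead come on top):

    # Approximate weight memory per precision for a ~32.5B-parameter model.
    params = 32.5e9
    for name, bytes_per_param in {"fp16": 2, "fp8/int8": 1, "int4": 0.5}.items():
        print(f"{name}: ~{params * bytes_per_param / 1e9:.0f} GB")
    # fp16: ~65 GB, fp8/int8: ~33 GB, int4: ~16 GB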

dulakian

I am using the Q6_K_L quant and it's running at about 40 GB of VRAM with the KV cache.

Device 1 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||20.170Gi/23.988Gi]

Device 2 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||19.945Gi/23.988Gi]

lostmsu

What's the context length?

brandall10

It's enough for 6 bit quant with a somewhat restricted context length.

Though based on the responses here, it needs sizable context to work, so we may be limited to 4 bit (I'm on an M3 Max w/ 48gb as well).

daemonologist

The quantized model fits in about 20 GB, so 32 would probably be sufficient unless you want to use the full context length (long inputs and/or lots of reasoning). 48 should be plenty.

manmal

I‘ve tried the very early Q4 mlx release on an M1 Max 32GB (LM Studio @ default settings), and have run into severe issues. For the coding tasks I gave it, it froze before it was done with reasoning. I guess I should limit context size. I do love what I‘m seeing though, the output reads very similar to R1, and I mostly agree with its conclusions. The Q8 version has to be way better even.

iamronaldo

This is insane. Matching DeepSeek but 20x smaller?

Imnimo

I wonder if having a big mixture of experts isn't all that valuable for the type of tasks in math and coding benchmarks. Like my intuition is that you need all the extra experts because models store fuzzy knowledge in their feed-forward layers, and having a lot of feed-forward weights lets you store a longer tail of knowledge. Math and coding benchmarks do sometimes require highly specialized knowledge, but if we believe the story that the experts specialize to their own domains, it might be that you only really need a few of them if all you're doing is math and coding. So you can get away with a non-mixture model that's basically just your math-and-coding experts glued together (which comes out to about 32B parameters in R1's case).

mirekrusin

MoE is likely a temporary, local optimum that resembles the bitter-lesson path. With time we'll likely distill what's important, shrink it, and keep it always active. There may be some dynamic retrieval of knowledge (but not intelligence) in the future, but it probably won't be anything close to MoE.

mirekrusin

...let me expand a bit.

It would be interesting if research teams tried to collapse a trained MoE into a JoaT (Jack of all Trades - why not?).

With the MoE architecture it should be efficient to backpropagate the other expert layers to align with the result of the selected one, in the end turning multiple experts into multiple Jacks.

Having N Jacks at the end is interesting in itself, as you could try to do something with the commonalities that are present across completely different networks that produce the same results.

littlestymaar

> but if we believe the story that the experts specialize to their own domains

I don't think we should believe anything like that.

7734128

It has roughly the same number of active parameters as R1, since R1 is a mixture-of-experts model. Still extremely impressive, but not unbelievable.

kmacdough

I understand the principles of MOE, but clearly not enough to make full sense of this.

Does each expert within R1 have 37B parameters? If so, is QwQ only truly competing against one expert in this particular benchmark?

Generally I don't think I follow how MOE "selects" a model during training or usage.

Imnimo

I had a similar confusion previously, so maybe I can help. I used to think that a mixture of experts model meant that you had like 8 separate parallel models, and you would decide at inference time which one to route to. This is not the case, the mixture happens at a much smaller scale.

Instead, the mixture of experts exists within individual layers. Suppose we want to have a big feed-forward layer that takes as input a 1024-element vector, has a hidden size of 8192, and an output size of 1024. We carve up that 8192-wide hidden layer into 8 chunks of size 1024 (this does not have to be the same size as the input). Whenever an input arrives at this layer, a routing function determines which of those 1024-sized chunks should serve as the hidden layer. Every token within a single prompt/response can choose a different chunk when it is processed by this layer, and every layer can have a different routing decision. So if I have 100 layers, each of which has 8 experts, there are 8^100 possible different paths that an individual token could take through the network.
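
To make the routing concrete, here's a toy top-1 MoE feed-forward layer with made-up dimensions (an illustration of the idea, not DeepSeek's actual implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_chunk, n_experts = 1024, 1024, 8

    # One MoE feed-forward layer: 8 expert FFN chunks plus a tiny router.
    W_router = rng.standard_normal((d_model, n_experts)) * 0.02
    W_in = rng.standard_normal((n_experts, d_model, d_chunk)) * 0.02
    W_out = rng.standard_normal((n_experts, d_chunk, d_model)) * 0.02

    def moe_ffn(x):
        """x: (n_tokens, d_model). Each token is routed to exactly one expert chunk."""
        expert = (x @ W_router).argmax(axis=-1)      # per-token routing decision
        y = np.empty_like(x)
        for e in range(n_experts):
            idx = np.where(expert == e)[0]
            if idx.size:
                h = np.maximum(x[idx] @ W_in[e], 0)  # that expert's hidden chunk + ReLU
                y[idx] = h @ W_out[e]
        return y

    tokens = rng.standard_normal((5, d_model))
    print(moe_ffn(tokens).shape)  # (5, 1024); different tokens may hit different experts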

WiSaGaN

I think it will be more akin to o1-mini/o3-mini rather than R1. It is a very focused reasoning model good at math and code, but it probably would not be better than R1 at things like general world knowledge.

nycdatasci

Wasn't this released in Nov 2024 as a "preview" with similarly impressive performance? https://qwenlm.github.io/blog/qwq-32b-preview/

yorwba

The benchmark scores in the new announcement are significantly higher than for the preview model.

samus

That's good news, I was highly impressed already by what that model could do, even under heavy quantization.

rvz

The AI race to zero continues to accelerate, with downloadable free AI models that have already won the race and are destroying closed-source frontier AI models.

The closed-source models are once again getting squeezed in the middle, and this is even before Meta releases Llama 4.

freehorse

How does it compare to qwen32b-r1-distill? Which is probably the most directly comparable model.

pzo

I'm wondering as well. On the Open LLM Leaderboard there is only the preview. It scores better than deepseek-ai/DeepSeek-R1-Distill-Qwen-32B but surprisingly worse than deepseek-ai/DeepSeek-R1-Distill-Qwen-14B.

Overall on the Open LLM Leaderboard this model is ranked quite low, at 660: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_...

wbakst

Actually insane how small the model is. They are only going to get better AND smaller. Wild times.

dulakian

My informal testing puts it just under Deepseek-R1. Very impressive for 32B. It maybe thinks a bit too much for my taste. In some of my tests the thinking tokens were 10x the size of the final answer. I am eager to test it with function calling over the weekend.