
QwQ-32B: Embracing the Power of Reinforcement Learning

antirez

Note the massive context length (130k tokens). Also because it would be kinda pointless to generate a long CoT without enough context to contain it and the reply.

EDIT: Here we are. My first prompt created a CoT so long that it catastrophically forgot the task (but I don't believe I was near 130k -- using Ollama with the fp16 model). I asked one of my test coding questions, totally unrelated to what it says here:

<QwQ output> But the problem is in this question. Wait perhaps I'm getting ahead of myself.

Wait the user hasn't actually provided a specific task yet. Let me check again.

The initial instruction says:

"Please act as an AI agent that can perform tasks... When responding, first output a YAML data structure with your proposed action, then wait for feedback before proceeding."

But perhaps this is part of a system prompt? Wait the user input here seems to be just "You will be given a problem. Please reason step by step..." followed by a possible task? </QwQ>

Note: Ollama "/show info" shows that the context size set is correct.

wizee

Ollama defaults to a context of 2048 regardless of model unless you override it with /set parameter num_ctx [your context length]. This is because long contexts make inference slower. In my experiments, QwQ tends to overthink and question itself a lot and generate massive chains of thought for even simple questions, so I'd recommend setting num_ctx to at least 32768.

In my experiments with a couple of mechanical engineering problems, it did fairly well on the final answers, correctly solving problems that even DeepSeek R1 (full size) and GPT-4o got wrong in my tests. However, the chain of thought was absurdly long, convoluted, circular, and all over the place. This also made it very slow, maybe 30x slower than comparably sized non-thinking models.

I used a num_ctx of 32768, top_k of 30, temperature of 0.6, and top_p of 0.95. These parameters (other than context length) were recommended by the developers on Hugging Face.

zamadatix

I always see:

  /set parameter num_ctx <value>
explained, but never the follow-up:

  /save <custom-name>
so you don't have to redo the parameter change on every load. Is there a better way, or is it kind of like setting num_ctx in that "you're just supposed to know"?
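
For now I've been baking the settings into a Modelfile instead. A rough sketch -- the qwq tag and the parameter values are just the ones mentioned upthread, so adjust to whatever you actually pulled:

  # Modelfile: persist context/sampling settings across loads
  FROM qwq
  PARAMETER num_ctx 32768
  PARAMETER temperature 0.6
  PARAMETER top_k 30
  PARAMETER top_p 0.95

Then "ollama create qwq-32k -f Modelfile" once, and "ollama run qwq-32k" picks up those settings every time.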

flutetornado

My understanding is that top_k and top_p are two different methods of filtering tokens during inference. top_k=30 considers only the 30 most likely tokens when selecting the next token to generate, and top_p=0.95 considers the smallest set of top tokens whose cumulative probability reaches 95%. You should only need to select one.

https://github.com/ollama/ollama/blob/main/docs/modelfile.md...

Edit: Looks like both work together. "Works together with top-k. A higher value (e.g., 0.95) will lead to more diverse text, while a lower value (e.g., 0.5) will generate more focused and conservative text. (Default: 0.9)"

Not quite sure how this is implemented - maybe one is preferred over the other when there are enough interesting tokens!

nodja

They both work on a sorted list of tokens by probability. top_k selects a fixed amount of tokens, top_p selects the top tokens until the sum of probabilities passes the threshold p. So for example if the top 2 tokens have a .5 and .4 probability, then a 0.9 top_p would stop selecting there.

Both can be chained together, and some inference engines let you change the order of the token filtering, so you can do p before k, etc. (among all the other sampling parameters, like repetition penalty, removing the top token, DRY, etc.). Each filtering step readjusts the probabilities so they always sum to 1.
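
A minimal sketch of the two filters in plain Python/NumPy, illustrative only -- not how any particular engine implements it:

  import numpy as np

  def top_k_filter(probs, k):
      # keep only the k highest-probability tokens, renormalize to sum to 1
      order = np.argsort(probs)[::-1]
      keep = order[:k]
      out = np.zeros_like(probs)
      out[keep] = probs[keep]
      return out / out.sum()

  def top_p_filter(probs, p):
      # keep the smallest set of top tokens whose cumulative probability >= p
      order = np.argsort(probs)[::-1]
      cum = np.cumsum(probs[order])
      cutoff = int(np.searchsorted(cum, p)) + 1  # include the token that crosses p
      keep = order[:cutoff]
      out = np.zeros_like(probs)
      out[keep] = probs[keep]
      return out / out.sum()

  # chained k-then-p; as in the example above, only the 0.5 and 0.4 tokens survive a 0.9 top_p
  probs = np.array([0.5, 0.4, 0.05, 0.03, 0.02])
  print(top_p_filter(top_k_filter(probs, k=30), p=0.9))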

AustinDev

I tried the 'Strawberry' question which generated nearly 70k words of CoT.

nicman23

lol did it at least get it right?

hbbio

"My first prompt created a CoT so long that it catastrophically forgot the task"

Many humans would do that

ignorantguy

Yeah, it did the same in my case too. It did all the work in the <think> tokens but did not spit out the actual answer. I was not even close to 100K tokens.

smallerize

From https://huggingface.co/Qwen/QwQ-32B

Presently, vLLM only supports static YARN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required.
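
If I'm reading the card right, the rope_scaling it refers to is a YaRN block added to config.json, roughly like this (the factor-4.0-on-32k values are what I recall from Qwen's docs, so treat them as an assumption and double-check against the card):

  {
    ...,
    "rope_scaling": {
      "factor": 4.0,
      "original_max_position_embeddings": 32768,
      "type": "yarn"
    }
  }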

codelion

That's interesting... I've been noticing similar issues with long context windows & forgetting. Are you seeing that the model drifts more towards the beginning of the context, or is it seemingly random?

I've also been experimenting with different chunking strategies to see if that helps maintain coherence over larger contexts. It's a tricky problem.

orbital-decay

Neither lost-in-the-middle nor long-context performance has seen much improvement in the past year. It's not easy to generate long training examples that also stay meaningful, and all existing models still become significantly dumber after 20-30k tokens, particularly on hard tasks.

Reasoning models probably need some optimization constraint put on the length of the CoT, and also some priority constraint (only reason about things that need it).

dr_dshiv

I love that emphasizing math learning and coding leads to general reasoning skills. Probably works the same in humans, too.

20x smaller than Deep Seek! How small can these go? What kind of hardware can run this?

daemonologist

It needs about 22 GB of memory after 4-bit AWQ quantization, so top-end consumer cards like Nvidia's 3090 - 5090 or AMD's 7900 XTX will run it.
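
Back-of-the-envelope, if anyone wants to redo the math for other quant levels (a rough sketch; the overhead number is a guess, and real usage also depends on KV cache and context length):

  def est_vram_gb(params_b, bits_per_weight, overhead_gb=4.0):
      # weights: params * bits / 8 bytes, plus a rough allowance for
      # KV cache, activations and runtime overhead (overhead_gb is a guess)
      weight_gb = params_b * 1e9 * bits_per_weight / 8 / 1024**3
      return weight_gb + overhead_gb

  print(est_vram_gb(32, 4))   # ~18.9 -> in the ballpark of the ~22 GB figure above
  print(est_vram_gb(32, 8))   # ~33.8
  print(est_vram_gb(32, 16))  # ~63.6 -> fp16 won't fit on a single 24 GB card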

be_erik

Just ran this on a 4000RTX with 24 GB of VRAM and it struggles to load, but it's very fast once the model loads.

samstave

>I love that emphasizing math learning and coding leads to general reasoning skills

It's only logical.

mohsen1

It gets really stuck with my query, which R1 figures out after some thinking:

      First 3 odd numbers without e in their spelling

Imustaskforhelp

Doesn't every odd number have an e? One, three, five, seven, nine.

Is this a riddle which has no answer, or what? Why are people on the internet saying its answer is one, huh?

manmal

I guess I won’t be needing that 512GB M3 Ultra after all.

UncleOxidant

I think the Framework AI PC will run this quite nicely.

rpastuszak

How much VRAM do you need to run this model? Is 48 GB of unified memory enough?

dulakian

I am using the Q6_K_L quant and it's running at about 40 GB of VRAM with the KV cache.

Device 1 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||20.170Gi/23.988Gi]

Device 2 [NVIDIA GeForce RTX 4090] MEM[||||||||||||||||||19.945Gi/23.988Gi]

lostmsu

What's the context length?

zamalek

39 GB if you use an fp8-quantized model.[1] Remember that your OS might be using some of that itself.

As far as I recall, Ollama/llama.cpp recently added a feature to page-in parameters - so you'll be able to go arbitrarily large soon enough (at a performance cost). Obviously more in RAM = more speed = more better.

[1]: https://token-calculator.net/llm-memory-calculator

brandall10

It's enough for a 6-bit quant with a somewhat restricted context length.

Though based on the responses here, it needs sizable context to work, so we may be limited to 4-bit (I'm on an M3 Max w/ 48 GB as well).

daemonologist

The quantized model fits in about 20 GB, so 32 would probably be sufficient unless you want to use the full context length (long inputs and/or lots of reasoning). 48 should be plenty.

manmal

I've tried the very early Q4 mlx release on an M1 Max 32GB (LM Studio @ default settings), and have run into severe issues. For the coding tasks I gave it, it froze before it was done with reasoning. I guess I should limit context size. I do love what I'm seeing though, the output reads very similar to R1, and I mostly agree with its conclusions. The Q8 version has to be way better even.

Leary

To test: https://chat.qwen.ai/ and select Qwen2.5-plus, then toggle QWQ.

bangaladore

They baited me into putting in a query and then asked me to sign up to submit it. They even have a "Stay Logged Out" button that I thought would bypass it, but no.

I get running these models is not cheap, but they just lost a potential customer / user.

zamadatix

Running this model is dirt cheap, they're just not chasing that type of customer.

mrshu

You can also try the HuggingFace Space at https://huggingface.co/spaces/Qwen/QwQ-32B-Demo (though it seems to be fully utilized at the moment)

fsndz

Super impressive. We won't need that many GPUs in the future if we can get the performance of DeepSeek R1 with even fewer parameters. NVIDIA is in trouble. We are moving towards a world of very cheap compute: https://medium.com/thoughts-on-machine-learning/a-future-of-...

holoduke

Have you heard of the Jevons paradox? It says that whenever new tech is used to make something more efficient, the tech is just scaled up to make the product quality higher. Same here. DeepSeek has some algorithmic improvements that reduce resources for the same output quality, but increasing resources (which are available) will increase the quality. There will always be a need for more compute. Nvidia is not in trouble. They have a monopoly on high-performing AI chips, for which demand will rise by at least a factor of 1000 in the upcoming years (my personal opinion).

UncleOxidant

I agree that the Jevons paradox can apply here. However, there have been several "breakthroughs" in the last couple of months (R1, diffusion LLMs, this) that push the amount of GPU compute needed down so much that I think it's going to be problematic for companies that went out and bought boatloads of GPUs (like OpenAI, for example). So while it might not be bad news for Nvidia (given Jevons), it does seem to be bad news for OpenAI.

cubefox

How do you know this model is the same as in the blog post?

Leary

One of the people on the Qwen team tweeted this instruction.

iamronaldo

This is insane: matching DeepSeek but 20x smaller?

Imnimo

I wonder if having a big mixture of experts isn't all that valuable for the type of tasks in math and coding benchmarks. Like my intuition is that you need all the extra experts because models store fuzzy knowledge in their feed-forward layers, and having a lot of feed-forward weights lets you store a longer tail of knowledge. Math and coding benchmarks do sometimes require highly specialized knowledge, but if we believe the story that the experts specialize to their own domains, it might be that you only really need a few of them if all you're doing is math and coding. So you can get away with a non-mixture model that's basically just your math-and-coding experts glued together (which comes out to about 32B parameters in R1's case).

mirekrusin

MoE is likely a temporary local optimum, one that resembles the bitter-lesson path. With time we'll likely distill what's important, shrink it, and keep it always active. There may be some dynamic retrieval of knowledge (but not intelligence) in the future, but it probably won't be anything close to MoE.

littlestymaar

> but if we believe the story that the experts specialize to their own domains

I don't think we should believe anything like that.

7734128

It has roughly the same number of active parameters as R1, which is a mixture-of-experts model. Still extremely impressive, but not unbelievable.

kmacdough

I understand the principles of MOE, but clearly not enough to make full sense of this.

Does each expert within R1 have 37B parameters? If so, is QwQ only truly competing against one expert in this particular benchmark?

Generally I don't think I follow how MOE "selects" a model during training or usage.

Imnimo

I had a similar confusion previously, so maybe I can help. I used to think that a mixture of experts model meant that you had like 8 separate parallel models, and you would decide at inference time which one to route to. This is not the case, the mixture happens at a much smaller scale.

Instead, the mixture of experts exists within individual layers. Suppose we want to have a big feed-forward layer that takes as input a 1024-element vector, has a hidden size of 8192, and an output size of 1024. We carve up that 8192-wide hidden layer into 8 1024-sized chunks (these do not have to be the same size as the input). Whenever an input arrives at this layer, a routing function determines which of those 1024-sized chunks should serve as the hidden layer. Every token within a single prompt/response can choose a different chunk when it is processed by this layer, and every layer can have a different routing decision. So if I have 100 layers, each of which has 8 experts, there are 8^100 possible different paths that an individual token could take through the network.
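
Here's a toy version of that routing in Python/NumPy, just to make it concrete. Sizes match the example above; real routers are learned jointly with the experts, usually pick the top few experts instead of one, and add load-balancing tricks, so treat this as the shape of the idea rather than how R1 actually does it:

  import numpy as np

  rng = np.random.default_rng(0)
  d_model, d_hidden, n_experts = 1024, 8192, 8
  d_expert = d_hidden // n_experts                  # each expert is a 1024-wide chunk

  # every "expert" is just its own slice of the feed-forward weights
  W_in = rng.standard_normal((n_experts, d_model, d_expert)) * 0.02
  W_out = rng.standard_normal((n_experts, d_expert, d_model)) * 0.02
  W_router = rng.standard_normal((d_model, n_experts)) * 0.02

  def moe_ffn(x):
      # x: (d_model,) activation for one token at one layer
      scores = x @ W_router                         # router score per expert
      e = int(np.argmax(scores))                    # top-1 routing for simplicity
      h = np.maximum(x @ W_in[e], 0.0)              # ReLU hidden of the chosen chunk
      return h @ W_out[e], e

  token = rng.standard_normal(d_model)
  out, expert = moe_ffn(token)
  print(out.shape, "routed through expert", expert)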

WiSaGaN

I think it will be more akin to o1-mini/o3-mini instead of r1. It is a very focused reasoning model good at math and code, but probably would not be better than r1 at things like general world knowledge or others.

daemonologist

It says "wait" (as in "wait, no, I should do X") so much while reasoning it's almost comical. I also ran into the "catastrophic forgetting" issue that others have reported - it sometimes loses the plot after producing a lot of reasoning tokens.

Overall though quite impressive if you're not in a hurry.

rahimnathwani

Is the model using budget forcing?

Imustaskforhelp

Dude, it's so crazy. In my personal experience, I gave it: can you read what I have written backwards and answer that query?

ip fo eulav si tahw

Qwen2.5-Max (11:22 am, thinking completed):

Okay, let me try to figure this out. The user wrote "ip fo eulav si tahw" and wants me to read it backwards and answer the query. Hmm, first, I need to reverse the entire string. Let's see, reversing "ip fo eulav si tahw" would be "what is value of pi". Oh, right! Pi is the mathematical constant approximately equal to 3.14159. So the user is asking for the value of pi. Let me confirm that by checking each part. "ip" reversed is "pi", "fo" is "of", "eulav" is "value", "si" is "is", and "tahw" is "what". Yep, that makes sense. The question is "What is the value of pi?" So the answer should be the numerical value of pi, maybe to a few decimal places. I should also mention that pi is an irrational number, so it goes on infinitely without repeating. Let me make sure I don't make any mistakes here. Pi is approximately 3.1415926535... and so on. Alright, I think that's it. Just need to present it clearly.

The reversed query is "What is the value of pi?"

Answer: The value of π (pi) is approximately 3.14159 . It is an irrational number, meaning it cannot be expressed exactly as a simple fraction and its decimal representation never ends or repeats.

It's nuts. It did this instantly, btw. I am not sure, but my experience is so overwhelmingly positive right now.

dulakian

My informal testing puts it just under Deepseek-R1. Very impressive for 32B. It maybe thinks a bit too much for my taste. In some of my tests the thinking tokens were 10x the size of the final answer. I am eager to test it with function calling over the weekend.

wbakst

Actually insane how small the model is. They are only going to get better AND smaller. Wild times.

nycdatasci

Wasn't this released in Nov 2024 as a "preview", with similarly impressive performance? https://qwenlm.github.io/blog/qwq-32b-preview/

yorwba

The benchmark scores in the new announcement are significantly higher than for the preview model.

kelsey98765431

First thoughts: wow, this is a real reasoning model, not just a Llama variant with SFT. The chain of thought actually will go on for a very long time on a seemingly simple question like writing a pi calculation in C. Very interesting.

Imustaskforhelp

I tried it for something basic like 2+2 and it was very simple. But I might try your pi calculation idea as well.

Dude, I gotta be honest, the fact that I can run it at all, even at low speed, is still impressive. I can wait, y'know, if I own my data.

I wonder if Nvidia will plummet again. Or maybe the whole American market.