Skip to content(if available)orjump to list(if available)

DeepSeekMath-V2: Towards Self-Verifiable Mathematical Reasoning

victorbuilds

Notable: they open-sourced the weights under Apache 2.0, unlike OpenAI and DeepMind whose IMO gold models are still proprietary.

PunchyHamster

I think we should treat copyright for the weights the same way the AI companies treat source material ;)

littlestymaar

We don't even have to do that: weights being entirely machine generated without human intervention, they are likely not copyrightable in the first place.

In fact, we should collectively refuse to abide to these fantasy license before weight copyrightability gets created out of thin air because it's been commonplace for long enough.

SilverElfin

If they open source just weights and not the training code and data, then it’s still proprietary.

very_illiterate

Stop kvetching and read the submission title.

mips_avatar

Yeah but you can distill

littlestymaar

You can distill closed weights models as well. (Just not logit-distillation)

amelius

Is that the equivalent of decompile?

falcor84

Isn't that a bit like saying that if I open source a tool, but not a full compendium of all the code that I had read, which led me to develop it, then it's not really open source?

KaiserPro

No its like releasing a binary. I can hook into it and its API and make it do other things. But I can't rebuild it from scratch.

nextaccountic

No, it's like saying that if you release under Apache license, it's not open source even though it's under an open source license

For something to be open source it needs to have sources released. Sources are the things in the preferred format to be edited. So the code used for training is obviously source (people can edit the training code to change something about the released weights). Also the training data, under the same rationale: people can select which data is used for training to change the weights

exe34

"open source" as a verb is doing too much work here. are you proposing to release the human readable code or the object/machine code?

if it's the latter, it's not the source. it's free as in beer. not freedom.

nurettin

Is this a troll? They don't want to reproduce your open source code, they want to reproduce the weights.

fragmede

No. In that case, you're providing two things, a binary version of your tool, and the tool's source. That tool's source is available to inspect and build their own copy. However, given just the weights, we don't have the source, and can't inspect what alignment went into it. In the case of DeepSeek, we know they had to purposefully cause their model to consider Tiananmen Square something it shouldn't discuss. But without the source used to create the model, we don't know what else is lurking around inside the model.

amelius

True. But the headline says open weights.

ekianjo

It's just open weights, the source has no place in this expression

jimmydoe

you are absolutely right. I'd rather use true closed models, not fake open source ones from China.

yorwba

Previous discussion: https://news.ycombinator.com/item?id=46072786 218 points 3 days ago, 48 comments

victorbuilds

Ah, missed that one. Thanks for the link.

ilmj8426

It's impressive to see how fast open-weights models are catching up in specialized domains like math and reasoning. I'm curious if anyone has tested this model for complex logic tasks in coding? Sometimes strong math performance correlates well with debugging or algorithm generation.

alansaber

It makes complete sense to me: highly-specific models don't have much commercial value, and at-scale llm training favours generalism.

stingraycharles

kimi-k2 is pretty decent at coding but it’s nowhere near the SOTA models of Anthropic/OpenAI/Google.

tripplyons

Are you referring to the new reasoning version of Kimi K2?

simianwords

A bit important that this model is not general purpose whereas the ones Google and OpenAI used were general purpose.

yorwba

Both OpenAI and Google used models made specifically for the task, not their general-purpose products.

OpenAI: https://xcancel.com/alexwei_/status/1946477756738629827#m "we are releasing GPT-5 soon, and we’re excited for you to try it. But just to be clear: the IMO gold LLM is an experimental research model. We don’t plan to release anything with this level of math capability for several months."

DeepMind: https://deepmind.google/blog/advanced-version-of-gemini-with... "we additionally trained this version of Gemini on novel reinforcement learning techniques that can leverage more multi-step reasoning, problem-solving and theorem-proving data. We also provided Gemini with access to a curated corpus of high-quality solutions to mathematics problems, and added some general hints and tips on how to approach IMO problems to its instructions."

simianwords

https://x.com/sama/status/1946569252296929727

>we achieved gold medal level performance on the 2025 IMO competition with a general-purpose reasoning system! to emphasize, this is an LLM doing math and not a specific formal math system; it is part of our main push towards general intelligence.

asterisks mine

yorwba

DeepSeekMath-V2 is also an LLM doing math and not a specific formal math system. What interpretation of "general purpose" were you using where one of them is "general purpose" and the other isn't?

simianwords

Not true

mangolie

andy12_

Do note that that is a different model. The one we are talking about here, DeepSeekMath-V2, is indeed overcooked with math RL. It's so eager to solve math problems, that it even comes up with random ones if you prompt it with "Hello".

https://x.com/AlpinDale/status/1994324943559852326?s=20

simianwords

Oh you may be correct. Are these models general purpose or fine tuned for mathematics?

terespuwash

Why isn’t OpenAI’s gold medal-winning model available to the public yet?

esafak

'coz it was for advertisement. They'll roll their lessons into the next general purpose model.

H8crilA

How do you run this kind of a model at home? On a CPU on a machine that has about 1TB of RAM?

pixelpoet

Wow, it's 690GB of downloaded data, so yeah, 1TB sounds about right. Not even my two Strix Halo machines paired can do this, damn.

bertili

Two 512GB Mac Studios connected with thunderbolt 5.

Gracana

You can do it slowly with ik_llama.cpp, lots of RAM, and one good GPU. Also regular llama.cpp, but the ik fork has some enhancements that make this sort of thing more tolerable.

letmetweakit

Does anyone know if this will become available on OpenRouter?

sschueller

How is OpenAI going to be able to serve ads in chatgpt without everyone immediately jumping ship to another model?

Coffeewine

I suppose the hope is that they don’t, and we wind up with commodity frontier models from multiple providers at market rates.

miroljub

I don't care about OpenAI even if they don't serve ads.

I can't trust any of their output until they become honest enough to change their name to CloseAI.

PunchyHamster

by having datacenters with GPUs and API everyone uses.

So they are either earning money directly or on the API calls.

Now, competition can come and compete on that, but they will probably still be the first choice for foreseeable future

KeplerBoy

Google served ads for decades and no one ever jumped ship to another search engine.

sschueller

Because Google gave the best results for a long time.

PunchyHamster

and now, when they are not, everyone else's results are also pretty terrible...

bootsmann

They pay $30bn (more than OpenAIs lifetime revenue) each year to make sure noone does.

KeplerBoy

What are you referring to?

dist-epoch

The same way people stayed on Google despite DuckDuckGo existing.