Nvidia Launches Family of Open Reasoning AI Models: OpenReasoning Nemotron
21 comments
July 21, 2025 · righthand
Is this another repurposing and bastardization of "Open" or are these actually open? Should I even be asking?
chii
It is in Nvidia's interest to commoditize its complement. These models make owning and using Nvidia hardware more attractive - being open and all.
There's no incentive to "hide" the sauce.
jonas21
It's under a Creative Commons Attribution license (CC-BY). That's about as open as it gets.
hkt
For any actual openness in released models, the dataset they're trained on would have to be released as well, so yeah, it is a bastardisation.
kristianp
NVIDIA unveiled OpenReasoning-Nemotron, a quartet of distilled reasoning models with 1.5B, 7B, 14B, and 32B parameters, all derived from the 671B-parameter DeepSeek R1 0528. [1]
[1] https://www.techpowerup.com/339089/nvidia-brings-reasoning-m...
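For anyone who wants to poke at them, here's a minimal sketch of loading one of the checkpoints with Hugging Face transformers. The repo id and generation settings are my assumptions rather than anything from the announcement, so check the actual model cards for the exact names.

    # Minimal sketch: load an OpenReasoning-Nemotron checkpoint with transformers.
    # The repo id and generation settings below are assumptions for illustration.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "nvidia/OpenReasoning-Nemotron-7B"  # assumed repo id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, device_map="auto", torch_dtype="auto"
    )

    messages = [{"role": "user", "content": "Prove that sqrt(2) is irrational."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    outputs = model.generate(inputs, max_new_tokens=1024)
    # Print only the newly generated tokens, skipping the prompt.
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))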
lordofgibbons
This is from March. Why is it being re-posted?
kristianp
That nvidia announcement was from March, but I think something has been released recently, as there were a bunch of news stories about a week ago. This blog post [1] was released on July 18th, for example.
[1] https://huggingface.co/blog/nvidia/openreasoning-nemotron
jasonjmcghee
Crazy how a few months make such a difference.
SV_BubbleTime
AI is in 5x historical-tech-speed-mode. So, this is kind of like posting about an iPhone 15.
behole
lol! Claude told me last night, in response to a question about MCP confusion, that I was experiencing "AI dog years"!
amirhirsch
lol!! but also driving this is the fact that each new thing speeds up the development of the next thing, so 5x is more like (e^t)x
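A toy way to see the difference (my own made-up numbers, nothing measured): a fixed 5x pace grows linearly, while compounding speedups, where each generation shortens the next cycle, grow exponentially.

    # Illustrative only: constant 5x pace vs. compounding speedups.
    import math

    for t in range(6):
        linear = 5 * t            # progress at a fixed 5x pace
        compounding = math.exp(t)  # each advance accelerates the next
        print(f"t={t}: 5x pace -> {linear:5.1f}, compounding -> {compounding:7.1f}")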
mdaniel
The model card, and the "you have to be authed" image pull instructions: https://build.nvidia.com/nvidia/llama-3_1-nemotron-70b-instr...
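If you'd rather not deal with the image pull, the hosted endpoint can be hit with the standard openai client. The base_url and model id below are assumptions based on how NVIDIA's API catalog usually exposes these models, and you still need an API key either way.

    # Sketch of calling the hosted model via NVIDIA's OpenAI-compatible endpoint.
    # base_url and model id are assumptions; an NVIDIA/NGC API key is required.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://integrate.api.nvidia.com/v1",  # assumed endpoint
        api_key=os.environ["NVIDIA_API_KEY"],
    )
    resp = client.chat.completions.create(
        model="nvidia/llama-3.1-nemotron-70b-instruct",  # assumed model id
        messages=[{"role": "user", "content": "Summarize CC-BY in one sentence."}],
    )
    print(resp.choices[0].message.content)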
Alifatisk
March 18, 2025?