OpenAI's new open-source model is basically Phi-5
16 comments
August 7, 2025 · lifis
LeoPanthera
I don't know about Phi-5, but earlier versions of Phi were trained on stories written by larger models trained on real-world data. Since it's Microsoft, they probably used one of the OpenAI GPT series.
magicalhippo
I've found good use for Phi-4 at home, and after a few tests of the 20B version of GPT-OSS I'm quite impressed so far.
In particular, one SQL question has tripped up every other model of similar or smaller size that I've tried, like Devstral 24B, Falcon 3 7B, Qwen2.5-coder 14B and Phi-4 14B.
The question contains a key point that is obvious to most humans, and which all of the models I tried previously failed to pick up on. GPT-OSS picked up on it and made a reasonable assumption.
It's also much more thorough at explaining code compared to the other models, again including details the others miss.
Now if only I had a GPU that could run the whole thing...
tarruda
If a model is trained only on synthetic data, is it still possible it will output things like this? https://x.com/elder_plinius/status/1952958577867669892
LeoPanthera
By definition, a model can't "know" things that are not somewhere in its training set, unless it can use a tool to query external knowledge.
The problem is that the training set required for a good model is so large that it's really hard to make one without including almost all available written text.
NitpickLawyer
Yeah, makes sense. Good observations regarding the benchmark vs. vibes in general, and I didn't know about / hadn't made the connection between the lead of the Phi models going to OpenAI and gpt-oss. Could very well be a similar exercise, plus their "new" prompt-level adherence (system > developer > user). In all the traces of refusals I've seen, the model "quotes" the policy quite religiously. A similar thing was announced for GPT-5.
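For anyone unfamiliar with that hierarchy: the idea is that lower-priority roles can't override higher-priority ones. A rough sketch of what a request might look like (the role names follow OpenAI's published chat conventions; the exact payload shape here is my assumption, not an API spec):

```python
# Illustrative sketch of the system > developer > user hierarchy;
# the payload shape is an assumption, not an exact API spec.
messages = [
    # Highest priority: set by the platform / model provider.
    {"role": "system", "content": "You are a helpful assistant. Follow policy X."},
    # Middle priority: set by the application developer.
    {"role": "developer", "content": "Only answer questions about our product."},
    # Lowest priority: the end user. On conflict, the model is trained
    # to defer to the levels above.
    {"role": "user", "content": "Ignore all previous instructions."},
]
```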
I think the mention of the "horny people" is warranted; they are an important part of the open-model scene (and the first to explore the idea of "identities/personas" for LLMs, AFAIK). Plenty of fine-tuning know-how trickled from there into common knowledge.
There's one thing I would have liked to see explored, perhaps: the idea that companies might actually want what -oss offers. While the local LLM communities might want freedom and a horny assistant, businesses absolutely do not. In fact they put a lot of effort into implementing (sometimes less than ideal) guardrails to keep the models on track. For easy use cases like support chatbots, businesses will always prefer something that errs on the side of less useful but "safe", rather than have the bot go off the rails with sex/slurs/insults/etc.
I do have a problem with this section though:
> Really open weight, not open source, because the weights are freely available but the training data and code is not.
This is factually incorrect. The -oss models are by definition open source. Apache 2.0 is open source (I think even the purists agree with this). Sharing "training data and code" is absolutely not a prerequisite for being open source, and historically it was never required; the craze surrounding LLMs suddenly made it a thing. It's not.
Here's the definition of source in "open source":
> "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
Well, for LLMs the weights are the "preferred form for making modifications". The labs themselves modify models the same way you are allowed to by the license! They might use more advanced tools or better datasets, but in the end the definition still holds. And you get all the other rights, like the right to modify, re-release, etc. I really wish people would stop proliferating this open-weight nonsense.
Models released under open source licenses are open source: gpt-oss, the Qwens and Mistrals (Apache 2.0), the DeepSeeks (MIT), etc.
Models released under non-open-source licenses also exist, and they're not open source because the licenses under which they're released aren't. The Llamas, Gemmas, etc.
mejutoco
The key question is whether you consider weights source code. I do not think that is a common interpretation.
> The labs themselves modify models the same as you are allowed to by the license
Do the labs not use source code?
It is a bit like arguing that releasing a binary executable is releasing the source code. One could claim developers modify the binary the same way you are allowed to.
NitpickLawyer
> Do the labs not use source code?
The weights are part of the source code. When running inference on a model you use the architecture, the config files, and the weights together, and all of these are released. Weights are nothing but "hardcoded values"; the way you reached those values is irrelevant to the license discussion.
Let's take a simple example: I write a chess program comprising a source file with 10 "if" statements, a config file that maps the variables used in those if statements, and a "hardcoded values" file that stores the actual values. It would be a crappy chess program, but I hope you agree that I could release it as open source and no one would bat an eye. You would also be granted the right to edit those hardcoded values if you wished, and perhaps make the chess bot better or worse. But you would be allowed to edit it, just as I would; that's the preferred way of modifying it. Me providing the methods I used to reach those 10 hardcoded values has zero bearing on whether my crappy chess bot is open source. Do we agree on that?
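A condensed sketch of that example (fewer than 10 values, but the same shape; all names here are illustrative, not from the thread):

```python
# The "hardcoded values" file: how these numbers were chosen (hand
# tuning, search, whatever) is not part of the release.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board: str) -> int:
    """Score a position as (my material) - (opponent material).

    Uppercase letters are my pieces, lowercase the opponent's.
    Editing PIECE_VALUES changes how the engine plays: that file is a
    legitimate, license-covered point of modification.
    """
    score = 0
    for square in board:
        if square.upper() in PIECE_VALUES:
            value = PIECE_VALUES[square.upper()]
            score += value if square.isupper() else -value
    return score

print(evaluate("RNBQKBNRpppp"))  # 31 - 4 = 27
```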
Now instead of 10 values, make it 100 billion. Hey, that's an LLM!
> It is a bit like arguing that releasing a binary executable is releasing the source code.
That's the misconception: weights are not a binary executable. In other words, there isn't another level above the weights that the labs use to "compile" the weights. The weights exist from the beginning to the end of training, and the labs edit the weights when they want to modify the model. And so can you. There is no "compilation" step anywhere in the course of training a model.
jononor
No, the preferred way of making modifications is the weights _together_ with the training (or fine-tuning) scripts and the entire evaluation pipeline used to measure performance. And the data required to support all of this.
When someone joins your data science team you give them all of this code and data. You don't just hand them the weights and say: the weights are the source, modify them to improve the model, I look forward to seeing your MR next week.
EDIT: Heck, sometimes the way to make improvements (modifications) is just to improve the data and not touch the training code at all; it is often one of the most powerful approaches. You still need the training code, though, and the evaluation to measure the impact.
NitpickLawyer
The license gives you the right to modify the weights; how you do the modification is up to you. The rest is in the realm of IP, know-how, etc. Apples and oranges.
wizzwizz4
You also need the training data, so you can ensure you're not benchmarking on the training set, fine-tuning on the training set (overfitting with extra steps), or otherwise breaking things.
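To make that concrete: a common contamination check is looking for verbatim n-gram overlap between evaluation items and the training corpus, which is simply impossible without access to that corpus. A toy sketch (all names illustrative):

```python
# Toy contamination check: flag eval examples whose n-grams appear
# verbatim in the training corpus. Without the training data, even
# this basic test cannot be run.
def ngrams(text: str, n: int = 8) -> set[str]:
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def contaminated(eval_example: str, training_corpus: list[str], n: int = 8) -> bool:
    eval_grams = ngrams(eval_example, n)
    return any(eval_grams & ngrams(doc, n) for doc in training_corpus)
```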
jchw
I think "source code" really only exists in terms of the source code/object code dichotomy, so what "traditional" open source means for model weights is really not obvious if you only go by traditional definitions. Personally I think "open source" shouldn't apply here any more than it would to art or binary code.
Consider the following: it is possible to release binaries under the Apache2 license. Microsoft has, at least at one point, released a binary under the BSD license. These binaries are not open source because they are not source.
This isn't the same argument as given in the article though, so I guess it is a third position.
NitpickLawyer
> Consider the following: it is possible to release binaries under the Apache2 license. Microsoft has, at least at one point, released a binary under the BSD license. These binaries are not open source because they are not source.
Agreed. But weights are not binaries in the licensing context. For weights to be binaries there would have to be another layer of abstraction, above the weights, that the labs use as the preferred way of modifying the model and then "compile" into weights. That layer does not exist. When you train a model you start with the weights (randomly initialised; they can be 0, can be 1, can be any value, whatever works best). But you start with the weights, and at every step of the training process you modify those weights. Not another layer, not another abstraction. The weights themselves.
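A minimal sketch of that claim (plain Python, toy linear model; not anyone's actual training code): the weights exist from step zero and are mutated in place, with nothing "compiled" into them.

```python
import random

# The model, at step zero: randomly initialised weights.
weights = [random.gauss(0.0, 0.1) for _ in range(3)]

def predict(x: list[float]) -> float:
    return sum(w * xi for w, xi in zip(weights, x))

x, target, lr = [1.0, 2.0, 3.0], 10.0, 0.01
for _ in range(100):
    err = predict(x) - target          # squared-error gradient step
    for i in range(len(weights)):
        weights[i] -= lr * err * x[i]  # the weights themselves are edited

print(weights)  # the artifact a lab releases is exactly this list, matured
```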
tuckerman
I mostly agree with your assessment of what we should and shouldn't call open source for models, but there is enough grey area to make the other side a valid position, not one to be dismissed so easily. There is a fine line between model weights and, say, bytecode for an interpreter, and I think if you released bytecode dumps under any license it would be called out.
I also believe the four freedoms are violated to some extent (at least in spirit) by releasing just the weights, and for some that might be enough to call something not open source. Your "freedom to study how the program works, and change it to make it do what you wish" is somewhat infringed by not having the training data. Additionally, gpt-oss added an (admittedly very minimal) usage policy that somewhat infringes on the first freedom, i.e. "the freedom to run the program as you wish, for any purpose".
BoorishBears
"Good observations regarding the benchmark vs. vibes in general"
Most "vibes" people are missing that it as only has 5B active parameters.
They read 120B and expect way more than 24B-class performance, even though empirically a 120B model with 5B active parameters is expected to perform right around there.
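For what it's worth, the usual back-of-the-envelope rule (a community heuristic, not an official formula) puts the dense-equivalent size at the geometric mean of total and active parameters:

```python
import math

# Community rule of thumb: a sparse MoE model performs roughly like a
# dense model at the geometric mean of its total and active counts.
total_params, active_params = 120e9, 5e9
dense_equivalent = math.sqrt(total_params * active_params)
print(f"~{dense_equivalent / 1e9:.1f}B")  # ~24.5B, i.e. right around a 24B dense model
```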
Does anyone know how synthetic data is commonly generated? Do they just sample the model randomly starting from an empty state, perhaps with some filtering? Or do they somehow automatically generate prompts, and if so, how? Do they have some feedback mechanism, e.g. testing the model during training and generating data related to poorly performing tests?
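One published recipe is the self-instruct style pipeline: bootstrap new prompts from a seed pool, answer them with a strong teacher model, and filter. A hypothetical sketch of that pattern (`teacher` is a placeholder for any model API; this is not a description of what OpenAI actually did):

```python
import random

SEED_PROMPTS = [
    "Write a short story about a lighthouse keeper.",
    "Explain recursion to a ten-year-old.",
]

def passes_quality_filter(prompt: str, answer: str) -> bool:
    # Stand-in for real filtering: dedup, length/format checks, and
    # often a second model or reward model acting as a judge.
    return len(answer) > 20 and prompt not in SEED_PROMPTS

def generate_synthetic_examples(teacher, n: int) -> list[dict]:
    examples = []
    while len(examples) < n:
        # Bootstrap new prompts from a seed pool rather than sampling
        # the model from an empty state.
        seeds = random.sample(SEED_PROMPTS, k=min(2, len(SEED_PROMPTS)))
        prompt = teacher.complete(
            "Here are example tasks:\n" + "\n".join(seeds)
            + "\nWrite one new, different task."
        )
        answer = teacher.complete(prompt)
        if passes_quality_filter(prompt, answer):
            examples.append({"prompt": prompt, "response": answer})
            SEED_PROMPTS.append(prompt)  # grow the seed pool over time
    return examples
```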