OLMo 2 - a family of fully-open language models
4 comments
July 9, 2025
tripplyons
Nice to see how open it is! However, if you are just looking for the best model, Mistral Small 3.2 appears to be stronger than OLMo 2 32B despite having fewer parameters. It would be interesting to see how close these "fully open" models can get to their "open weight" counterparts.
real0mar
The inconvenient truth might be that the other models score higher than OLMo because they aren't restricted to purely "open and accessible" training data. Who knows what private or ethically dubious data went into training Mistral or Llama, for example.
erlend_sh
Exactly. If we really wanted to benchmark the various models on the merits of their individual implementations, we would have to compare them all on the same open dataset.
This model is from Nov/2024.
https://lifearchitect.ai/models-table/