OpenAI: Our new model GPT-5-Codex-Mini – a more cost-efficient GPT-5-Codex
4 comments
· November 8, 2025
vessenes
Looks like a leak: https://platform.openai.com/docs/models does not list it, and codex-mini-latest says that it's based on 4o. I wonder if it will be faster than codex; gpt-5-nano and -mini are still very slow for me on API, surprisingly so.
simonw
They announced it on Twitter yesterday: https://x.com/OpenAIDevs/status/1986861734619947305 and https://x.com/OpenAIDevs/status/1986861736041853368
> GPT-5-Codex-Mini allows roughly 4x more usage than GPT-5-Codex, at a slight capability tradeoff due to the more compact model.
> Available in the CLI and IDE extension when you sign in with ChatGPT, with API support coming soon.
ChadNauseam
I noticed the same thing with -mini. It can be even slower than the full-fat version. I'm guessing their infra for it is heavily cost-optimized, which helps them offer it at such a low price.
If any OpenAI devs are reading this comment section: is it possible for us to get API access at runable.com?