Phi-4 Reasoning Models

23 comments · May 1, 2025

danielhanchen

We uploaded GGUFs for anyone who wants to run them locally.

[EDIT] - I fixed all chat templates, so there's no need for --jinja as of 10:00PM SF time.

Phi-4-mini-reasoning GGUF: https://huggingface.co/unsloth/Phi-4-mini-reasoning-GGUF

Phi-4-reasoning-plus-GGUF: https://huggingface.co/unsloth/Phi-4-reasoning-plus-GGUF

danielhanchen

Final edit 11:40PM PT/SF time: I noticed that not using --jinja doesn't work, so unfortunately you'll have to use it.

Also, it looks very very very verbose. The recommended settings are --temp 0.8 --top-p 0.95.
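
If you'd rather call the GGUF from Python than the CLI, here's a minimal sketch with llama-cpp-python using those sampling settings. The model path is a placeholder for whichever quant you downloaded; recent llama-cpp-python versions pick up the GGUF's embedded chat template for chat completions.

```python
# Minimal sketch with llama-cpp-python (pip install llama-cpp-python).
# The model path is an assumption -- point it at whichever quant you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-4-mini-reasoning-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,                                     # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    temperature=0.8,  # recommended settings from above
    top_p=0.95,
)
print(out["choices"][0]["message"]["content"])
```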

gthompson512

Sorry if this comment is outdated or ill-informed, but it is hard to follow the current news. Do the Phi models still have issues with training on the test set, or have they fixed that?

mrbonner

The example prompt for reasoning models that never fails to amuse me: "How many letter 'r's in the word 'strrawberrry'?"

Phi-4-mini-reasoning: thought for 2 min 3 sec

<think> Okay, let's see here. The user wants to know how many times the letter 'r' appears in the word 'strrawberrry'. Hmm, first I need to make sure I'm spelling the word correctly. Let me write it out slowly: S-T-R-A-W-B-E-R-R-Y. ...

But I'm overcomplicating this. Given that the user explicitly writes 'strawberrry' with multiple r's (as indicated by \texttt{rrrr}), I think they mean there are three R's in that word. Therefore, counting them would give an answer of 3. </think>

The word "strrawberrry" contains *5* occurrences of the letter 'r'.

To determine this:

1. Break down the spelling: S-T-R-A-W-B-E-R-R-Y.
2. Identify each 'R':
   - The first R is in the "straw" part (position 3).
   - Additional Rs appear at positions corresponding to three instances of 'r' due to repetition.

*Answer:* \boxed{5}

I have no idea how it came to that conclusion with those thoughts lol.
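
For what it's worth, the boxed answer is actually correct despite the muddled reasoning; a one-liner confirms it:

```python
# 'strrawberrry' = s-t-r-r-a-w-b-e-r-r-r-y: five r's in total
print("strrawberrry".count("r"))  # -> 5
```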

The full transcript: https://pastebin.com/tyF5aSb9

gitroom

Honestly, the Phi-4 stuff is starting to get really interesting for me. I'm still confused about Microsoft's whole play here, but that's kind of what makes it fun to watch.

csdvrx

Read https://blogs.windows.com/windowsexperience/2024/12/06/phi-s...

I submitted it, as it gives a better picture of what Microsoft is trying to do: both the hardware and the software.

Phi is small not just for show, but so it can run locally on the hardware they're planning for it to run on: the Copilot-branded devices.

behnamoh

What's Microsoft's plan here? On one hand they have a huge share in OpenAI and Mistral, and on the other they release open-weight models.

changoplatanero

Building their own models gives them more leverage for negotiating favorable terms with OpenAI. The key to a good negotiation is having a strong BATNA (best alternative to a negotiated agreement).

OneDeuxTriSeiGo

Microsoft's long-term push seems to be toward local-first SLMs. That's the end goal.

csdvrx

Yes: they are building both the software and the hardware for that: https://blogs.windows.com/windowsexperience/2024/12/06/phi-s...

csdvrx

Is anyone here using phi-4 multimodal for image-to-text tasks?

The Phi models often punch above their weight, and I got curious about the vision models after reading the finetuning stories at https://unsloth.ai/blog/phi4.

Since lmarena.ai only has the phi-4 text model, I've tried "phi-4 multimodal instruct" from openrouter.ai.

However, the results I get are far below what I would have expected.

Is there any "Microsoft validated" source (like https://chat.qwen.ai/c/guest for qwen) to easily try phi4 vision?
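
For reference, OpenRouter exposes an OpenAI-compatible chat API, so an image-to-text sketch might look like the following. The model ID and image URL here are assumptions; check openrouter.ai for the exact listing before relying on them.

```python
# Hedged sketch: query Phi-4-multimodal on OpenRouter via the OpenAI client.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

resp = client.chat.completions.create(
    model="microsoft/phi-4-multimodal-instruct",  # assumed OpenRouter ID
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image."},
            {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```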

refulgentis

These look quite incredible. I work on a llama.cpp GUI wrapper, and it's quite surprising to see how well Microsoft's Phi-4 releases set Microsoft apart as the only competition below ~7B. It'll probably take a year for the FOSS community to implement and digest them completely (they can do multimodal! TTS! STT! Conversation!)

thenameless7741

> it'll probably take a year for the FOSS community to implement and digest it completely

The local community seems to have converged on a few wrappers: Open WebUI (general-purpose), LM Studio (proprietary), and SillyTavern (for role-playing). Now that llama.cpp has an OpenAI-compatible server (llama-server), there are a lot more options to choose from.

I've noticed there really aren't many active FOSS wrappers these days - most have either been abandoned or aren't seeing releases at the frequency we saw when the OpenAI API first launched. So it would be awesome if you could share your wrapper with us at some point.
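
Since llama-server speaks the OpenAI protocol, any OpenAI client already works as a thin wrapper; a minimal sketch, assuming the default port (8080) and no API key configured:

```python
# Talk to a local `llama-server` through the standard OpenAI client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key")

resp = client.chat.completions.create(
    model="phi-4-reasoning-plus",  # llama-server serves whatever model it loaded
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```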

pzo

I think OP means that the FOSS ecosystem hasn't digested many of phi4-mini-multimodal's modalities, such as audio input (STT) and audio output (TTS); image input also isn't well supported in much FOSS tooling.

thenameless7741

AFAIK, Phi-4-multimodal doesn't support TTS, but I understand OP's point.

Qwen's recent release is an excellent example of a model provider collaborating with the local community (which includes inference engine developers and model quantizers). It would be nice if this collaboration extended to wrapper developers as well, so that end-users can enjoy a great UX from day one of any model release.

loufe

Kobold.cpp is still my preference for a GUI. A single portable exe, with plenty of configuration flexibility if you want it and no need for it if you don't.

trc001

Oobabooga is still good as a Swiss Army knife sort of wrapper for a single user trying out new models.

wd776g5

The linked article says 14B parameters. Edit: and I guess the "plus" model is 21B?

refulgentis

grep "As seen above, Phi-4-mini-reasoning with 3.8B parameters outperforms models of over twice its size."

re: reasoning plus, "Phi-4-reasoning-plus builds upon Phi-4-reasoning capabilities, further trained with reinforcement learning to utilize more inference-time compute, using 1.5x more tokens than Phi-4-reasoning, to deliver higher accuracy.", presumably also 14B

justanotheratom

Is there a well-established toolchain for finetuning these models?

idonotknowwhy

Unsloth. Check their Colab notebooks.
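
For anyone who wants the shape of it before opening a notebook, here's a hedged sketch of an Unsloth LoRA finetune. The model repo ID and dataset are assumptions, and the arguments mirror the older trl API style used in their notebooks; the pinned versions in the Colab notebooks are the safer reference.

```python
# Hedged sketch of a LoRA finetune with Unsloth (pip install unsloth).
# The model repo ID and dataset are assumptions, not vetted names.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-4-mini-reasoning",  # assumed repo ID
    max_seq_length=2048,
    load_in_4bit=True,  # QLoRA-style 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Example dataset; map it into a single "text" column for SFT.
dataset = load_dataset("yahma/alpaca-cleaned", split="train")
dataset = dataset.map(lambda ex: {
    "text": f"### Instruction:\n{ex['instruction']}\n\n### Response:\n{ex['output']}"
})

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,          # older trl API, as in Unsloth's notebooks
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        max_steps=60,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```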