Qwen 3 now supports ARM and MLX
6 comments · September 13, 2025 · ukuina
p0w3n3d
Yeah, I've been using Qwen3 on MLX since July already.
NiekvdMaas
Old post indeed: https://x.com/Alibaba_Qwen/status/1934517774635991412
littlestymaar
The unfortunate thing about their Qwen3-Next naming is that it doesn't reflect the fact that the architecture is completely different from Qwen3. The gap is even bigger than the one between Qwen2 and Qwen3.
So support is likely to take quite some time, because it's not just regular transformer blocks stacked on top of each other but a brand-new hybrid architecture using SSM layers.
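For a rough sense of what "hybrid" means here, a toy PyTorch sketch: a stack where most layers are an SSM-style gated linear recurrence and only every few layers are ordinary causal self-attention. The block design, sizes, and 3:1 interleaving ratio below are illustrative assumptions, not the actual Qwen3-Next implementation; the point is only to show why such a stack needs more than the standard transformer code path.

```python
# Illustrative sketch only: a toy hybrid decoder stack interleaving SSM-style
# recurrent layers with ordinary self-attention. Sizes, the 3:1 ratio, and the
# recurrence are assumptions for illustration, NOT the real Qwen3-Next code.
import torch
import torch.nn as nn


class ToySSMBlock(nn.Module):
    """A minimal gated linear recurrence standing in for an SSM layer."""

    def __init__(self, dim: int):
        super().__init__()
        self.in_proj = nn.Linear(dim, 2 * dim)
        self.decay = nn.Parameter(torch.full((dim,), -1.0))
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, seq, dim)
        u, gate = self.in_proj(x).chunk(2, dim=-1)
        a = torch.sigmoid(self.decay)           # per-channel decay in (0, 1)
        state = torch.zeros_like(u[:, 0])       # recurrent state, not a KV cache
        outputs = []
        for t in range(u.shape[1]):             # sequential scan over time
            state = a * state + (1 - a) * u[:, t]
            outputs.append(state)
        h = torch.stack(outputs, dim=1) * torch.sigmoid(gate)
        return x + self.out_proj(h)             # residual connection


class ToyAttentionBlock(nn.Module):
    """A plain causal self-attention layer for the non-SSM slots."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        seq = x.shape[1]
        mask = torch.triu(
            torch.ones(seq, seq, dtype=torch.bool, device=x.device), diagonal=1
        )
        h, _ = self.attn(x, x, x, attn_mask=mask)
        return self.norm(x + h)


class ToyHybridStack(nn.Module):
    """Interleave SSM-style and attention blocks (here 3 SSM : 1 attention)."""

    def __init__(self, dim: int = 64, layers: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [ToyAttentionBlock(dim) if i % 4 == 3 else ToySSMBlock(dim)
             for i in range(layers)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


if __name__ == "__main__":
    model = ToyHybridStack()
    print(model(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```

The sequential scan in the SSM-style block is the part an inference engine has to handle differently from KV-cache attention (it caches a recurrent state instead of keys and values), which is part of why llama.cpp support isn't a quick patch.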
NiekvdMaas
From https://github.com/ggml-org/llama.cpp/issues/15940#issuecomm...:
> This is a massive task, likely 2-3 months of full-time work for a highly specialized engineer. Until the Qwen team contributes the implementation, there are no quick fixes.
veber-alex
It's already supported in vLLM, SGLang and MLX.
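For anyone wanting to try one of those backends, a minimal mlx-lm sketch. The checkpoint id is an assumption (any MLX-converted Qwen3-Next quant should follow the same pattern), so substitute whatever converted weights you actually use.

```python
# Minimal sketch of running a Qwen3-Next checkpoint via mlx-lm on Apple Silicon.
# The repo id below is an assumption; swap in your own MLX-converted weights.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Qwen3-Next-80B-A3B-Instruct-4bit")

prompt = "Summarize the Qwen3-Next hybrid architecture in two sentences."
print(generate(model, tokenizer, prompt=prompt, max_tokens=128))
```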