Microsoft has released a new lineup of small language models in the Phi-4 series, featuring three reasoning models of varying sizes:
- Phi-4-reasoning: a medium-sized, 14B-parameter model trained on outputs from o3-mini.
- Phi-4-reasoning-plus: an enhanced 14B-parameter version of Phi-4-reasoning, further trained with reinforcement learning; it outperforms o1-mini and DeepSeek-R1-Distill-Llama-70B despite its smaller size.
- Phi-4-mini-reasoning: a compact 3.8B-parameter model designed for resource-constrained environments, emphasizing speed and mathematical reasoning, and suitable for embedding in other applications or running on mobile devices.
Microsoft states that these models are already deployed on Copilot+ PCs, running on the CPU and GPU, with plans to optimize them for the NPU using the same customization techniques as Phi Silica.
All three models are available on Azure AI Foundry and published on Hugging Face: Phi-4-reasoning, Phi-4-reasoning-plus, and Phi-4-mini-reasoning.
[Image: the Phi family of models. Source: Microsoft]
[Image: benchmark score comparison for Phi-4-reasoning]
In short, Microsoft's latest Phi-4 series adds reasoning models tailored to a range of deployment targets, from cloud to on-device, and shows that compact models can match or beat much larger ones on reasoning tasks.