The second new model that Microsoft released today, Phi-4-multimodal, is an upgraded version of Phi-4-mini with 5.6 billion parameters. It can process not only text but also images, audio and video.
Microsoft has revealed its impressive new Phi-3 artificial intelligence model. This is a tiny model compared to the likes of GPT-4, Gemini, or Llama 3, but it packs a punch for its size.
The new small language model can help developers build multimodal AI applications for lightweight computing devices, ...
According to Microsoft, the new model handily outperforms its predecessor, the small Phi-2 model, and is on par with larger models like Llama 2. In fact, the company says Phi-3 Mini provides responses close ...
Microsoft's Phi-4 Series delivers cutting-edge multimodal AI with compact design, local deployment, and advanced ...
Phi-4-multimodal and Phi-4-mini are Microsoft’s latest small language models, both launching on Azure AI Foundry and Hugging Face today. Phi-4-multimodal improves speech recognition, translation ...
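Because both models are published on Hugging Face, a developer can try them with the standard transformers workflow. The following is a minimal sketch, assuming the repo id microsoft/Phi-4-mini-instruct, a single GPU, and illustrative prompt and generation settings that are not taken from the article.

```python
# Minimal sketch: loading Phi-4-mini from Hugging Face with transformers.
# The repo id, dtype, and generation settings are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-instruct"  # assumed Hugging Face repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 3.8B parameters fits on a single modern GPU
    device_map="auto",
    trust_remote_code=True,       # may be required depending on transformers version
)

# The instruct variant expects a chat-formatted prompt.
messages = [
    {"role": "user", "content": "Summarize what a small language model is in one sentence."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=100)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The multimodal variant would follow the same general pattern, but image and audio inputs are prepared through the model's processor rather than the tokenizer alone.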
Phi-4-multimodal, a model with just 5.6 billion parameters, and Phi-4-mini, with 3.8 billion parameters, outperform similarly sized competitors and on certain tasks even match or exceed ...
Microsoft is expanding its Phi line of open-source language models with two new entries optimized for multimodal ...
Today, Microsoft introduced Phi-4, a 14B-parameter state-of-the-art small language model (SLM) that beats even OpenAI's GPT-4 large language model on the MATH and GPQA AI benchmarks. Microsoft claims ...