Intel has validated and optimised its AI product portfolio across client, edge, and data centre for Microsoft’s Phi-3 family of open models. The Phi-3 family of small, open models can be quickly fine-tuned to meet individual needs, run on less powerful hardware, and let developers build applications that execute locally. For data centre applications, Intel supports Intel Xeon processors and Intel Gaudi AI accelerators; for client applications, Intel Core Ultra processors and Intel Arc graphics are available.
Using the latest AI models and software available, Intel offers developers and customers robust AI solutions. Close collaboration with other leaders in the AI software ecosystem, such as Microsoft, is key to Intel’s goal of bringing AI everywhere. Intel is pleased to work closely with Microsoft to ensure that the new Phi-3 models are actively supported across Intel hardware, spanning the data centre, edge, and client.
Why This Is Important: Working with AI pioneers and innovators, Intel consistently invests in the AI software ecosystem as part of its commitment to bring AI everywhere.
Intel collaborated with Microsoft to enable Phi-3 model support on launch day for its central processing units (CPUs), graphics processing units (GPUs), and Intel Gaudi accelerators. Intel also co-designed DeepSpeed, an easy-to-use software suite for deep learning optimisation, and extended Hugging Face’s automatic tensor parallelism support to Phi-3 and other models.
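As a rough illustration of what that tensor parallelism looks like in practice, the sketch below shards a Phi-3 checkpoint across two GPUs with DeepSpeed’s automatic tensor parallelism (AutoTP). This is a minimal sketch, not code from the announcement; the model ID, GPU count, and prompt are illustrative.

```python
# Minimal AutoTP sketch (assumes 2 GPUs; launch with:
#   deepspeed --num_gpus 2 phi3_autotp.py
# ). Model ID and prompt are illustrative, not from the announcement.
import os
import torch
import deepspeed
from transformers import AutoModelForCausalLM, AutoTokenizer

local_rank = int(os.getenv("LOCAL_RANK", "0"))  # set by the deepspeed launcher
model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)

# With kernel injection off and a tensor-parallel size set, DeepSpeed's AutoTP
# path shards the model's linear layers across the available GPUs.
model = deepspeed.init_inference(
    model,
    tensor_parallel={"tp_size": 2},
    dtype=torch.float16,
    replace_with_kernel_inject=False,
)

inputs = tokenizer("Explain tensor parallelism in one sentence.",
                   return_tensors="pt").to(f"cuda:{local_rank}")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```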
The compact size of the Phi-3 models makes them ideal for on-device inference and for lightweight model work, such as fine-tuning or customisation, on AI PCs and edge devices. Comprehensive software frameworks and tools accelerate development on Intel client hardware: PyTorch and the Intel Extension for PyTorch for local research and development, and the OpenVINO Toolkit for model deployment and inference.
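On the deployment side, for example, a Phi-3 checkpoint can be exported to OpenVINO and run through the Optimum Intel integration. The sketch below is a minimal illustration assuming the optimum-intel package (installable as `pip install optimum[openvino]`); the model ID and prompt are placeholders.

```python
# Minimal OpenVINO deployment sketch via Optimum Intel
# (pip install optimum[openvino]). Model ID and prompt are illustrative.
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer, pipeline

model_id = "microsoft/Phi-3-mini-4k-instruct"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly;
# the converted model then runs on Intel hardware via the OpenVINO Runtime.
model = OVModelForCausalLM.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)
print(generator("What makes small language models useful on client devices?",
                max_new_tokens=60)[0]["generated_text"])
```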
What’s Next: Intel is committed to meeting the generative AI needs of its enterprise customers and will continue to improve and support its software for Phi-3 and other cutting-edge language models.
Microsoft Phi-3 Medium
The previously announced Phi-3-small and Phi-3-medium models are now available on Microsoft Azure, giving developers the ability to build generative AI systems that must handle latency-bound scenarios, strong reasoning, and limited compute. Together with the already available Phi-3-mini, customers can now get started with these models quickly and simply through Azure AI.
The Phi-3 family
Phi-3 models outperform models of the same size and the next size up across a variety of language, reasoning, coding, and math benchmarks, making them the most capable and cost-effective small language models (SLMs) on the market. As detailed in “Tiny but mighty: The Phi-3 small language models with big potential”, they are trained on high-quality training data. With the Phi-3 models available, Azure customers have access to a wider range of excellent models, giving them more practical options for building generative AI applications.
The Phi-3 model family consists of four models, all ready to use out of the box, each carefully tuned and built in line with Microsoft’s responsible AI, safety, and security standards.
- Phi-3-vision is a 4.2B-parameter multimodal model that combines language and vision capabilities.
- Phi-3-mini is a 3.8B-parameter language model, available in two context lengths (128K and 4K).
- Phi-3-small is a 7B-parameter language model, available in two context lengths (128K and 8K).
- Phi-3-medium is a 14B-parameter language model, available in two context lengths (128K and 4K).
Discover every Phi-3 model on Hugging Face and Azure AI.
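As a quick-start illustration (not from the announcement), the sketch below pulls the published Phi-3-mini card from Hugging Face and runs a single chat turn with the transformers library; the prompt is a placeholder.

```python
# Minimal Hugging Face quick-start sketch for a Phi-3 checkpoint.
# The model ID is the published Phi-3-mini card; the prompt is illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",       # pick bfloat16/float16 where the hardware supports it
    trust_remote_code=True,   # the launch-day checkpoints shipped custom modeling code
)

messages = [{"role": "user", "content": "In two sentences, what is a small language model?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt")
output_ids = model.generate(input_ids, max_new_tokens=80)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```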
The Phi-3 models are designed to run well across a wide range of hardware. Optimised variants with ONNX Runtime and DirectML support are available, giving developers compatibility across a variety of platforms and devices, including web and mobile deployments. The Phi-3 models can also be deployed anywhere and are optimised for inference on NVIDIA GPUs and Intel accelerators; they are offered as NVIDIA NIM inference microservices with a standard API interface.
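For the ONNX-optimised variants, Microsoft’s onnxruntime-genai package exposes a small generation API. The sketch below is a rough illustration matching that package’s early releases; the local model path is a placeholder for a downloaded Phi-3 ONNX variant, and the exact API may differ between versions.

```python
# Rough sketch with onnxruntime-genai (pip install onnxruntime-genai), as exposed
# by its early releases; the model path is a placeholder for a downloaded
# Phi-3 ONNX variant, and the API may differ in newer versions.
import onnxruntime_genai as og

model = og.Model("./Phi-3-mini-4k-instruct-onnx/cpu-int4")  # hypothetical local path
tokenizer = og.Tokenizer(model)

prompt = "<|user|>\nWhat is an SLM?<|end|>\n<|assistant|>\n"  # Phi-3 chat format
params = og.GeneratorParams(model)
params.set_search_options(max_length=200)
params.input_ids = tokenizer.encode(prompt)

# Token-by-token generation loop
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```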
Adding multimodality to Phi-3
Phi-3-vision, the first multimodal model in the Phi-3 family, combines text and images, with the ability to reason over real-world images and to extract and reason over text within images. It has also been optimised for chart and diagram understanding and can be used to generate insights and answer questions. Building on Phi-3-mini’s language capabilities, Phi-3-vision packs strong language and image reasoning quality into a compact model.
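As a rough sketch of what that looks like in code, the example below follows the pattern of the published Phi-3-vision model card: the image is referenced in the prompt with an <|image_1|> placeholder and passed alongside the text. The image URL is hypothetical, and a CUDA device is assumed.

```python
# Rough Phi-3-vision sketch, following the pattern of the published model card.
# The image URL is hypothetical; a CUDA device is assumed.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto",
                                             device_map="cuda", trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

url = "https://example.com/sales_chart.png"  # hypothetical chart image
image = Image.open(requests.get(url, stream=True).raw)

# The <|image_1|> tag tells the model where the image sits in the prompt.
messages = [{"role": "user", "content": "<|image_1|>\nWhat trend does this chart show?"}]
prompt = processor.tokenizer.apply_chat_template(messages, tokenize=False,
                                                 add_generation_prompt=True)
inputs = processor(prompt, [image], return_tensors="pt").to("cuda")

output_ids = model.generate(**inputs, max_new_tokens=100)
answer = processor.batch_decode(output_ids[:, inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)[0]
print(answer)
```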
Revolutionary performance in a compact package
As noted above, Phi-3-small and Phi-3-medium outperform both language models of the same size and considerably larger ones.
- With just 7B parameters, Phi-3-small surpasses GPT-3.5 Turbo across a range of language, reasoning, coding, and math benchmarks.
- Continuing the trend, Phi-3-medium, with 14B parameters, outperforms Gemini 1.0 Pro.
- With just 4.2B parameters, Phi-3-vision keeps the trend going, outperforming larger models such as Claude-3 Haiku and Gemini 1.0 Pro V on general visual reasoning, OCR, and table and chart interpretation tasks.
To ensure comparability, all reported numbers are produced with the same evaluation pipeline. These figures may therefore differ slightly from other published numbers owing to differences in evaluation methodology. More details on the benchmarks are available in Microsoft’s technical paper.
Selecting the appropriate model
Given the changing landscape of available models, customers increasingly want to use different models in their applications depending on the use case and business requirements. Selecting the appropriate model comes down to the needs of the particular use case.
Small language models are more accessible and easier to operate for organisations with limited resources, and they can be fine-tuned more quickly to meet specific needs. They are optimised to perform well on simpler tasks and work effectively in applications that must run locally on a device, where a task needs only a fast response and no complex reasoning.
The decision between Phi-3-mini, Phi-3-small, and Phi-3-medium comes down to the available compute resources and the complexity of the task. The models can be used for a wide range of language understanding and generation activities, including question answering, summarisation, content creation, and sentiment analysis. Their strong reasoning and logic capabilities, which go beyond standard language tasks, also make them well suited to analytical jobs. The long context window available across all the models makes it possible to take in and reason over large text content: documents, web pages, code, and more.
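One concrete, if simplified, way to act on the context-window point is to count the prompt’s tokens first and pick the 4K or 128K variant of the same model accordingly. The sketch below is illustrative only; the checkpoint IDs are the published Hugging Face cards, and the headroom value is an assumption.

```python
# Simplified selection sketch: route to the 4K or 128K context variant
# based on prompt length. Checkpoint IDs are the published HF cards.
from transformers import AutoTokenizer

SHORT_CTX = "microsoft/Phi-3-mini-4k-instruct"    # 4K context window
LONG_CTX = "microsoft/Phi-3-mini-128k-instruct"   # 128K context window

tokenizer = AutoTokenizer.from_pretrained(SHORT_CTX)

def pick_checkpoint(prompt: str, short_limit: int = 4096) -> str:
    """Return the long-context checkpoint if the prompt would overflow 4K."""
    # Leave some headroom for the generated tokens as well (512 is an assumption).
    n_tokens = len(tokenizer.encode(prompt))
    return LONG_CTX if n_tokens > short_limit - 512 else SHORT_CTX

print(pick_checkpoint("A short question fits comfortably in the 4K variant."))
```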
Phi-3-vision works well for problems requiring both textual and visual reasoning. It is particularly strong on OCR-related jobs, such as analysing charts, diagrams, and tables, and on reasoning and question answering over extracted text.