Microsoft has introduced a more affordable AI model called Phi-3, aimed at users who may find the cost of its larger counterparts prohibitive.
In a statement on Tuesday, Microsoft said the Phi-3-mini outperforms models twice its size across a range of language, coding, and math benchmarks.
Smaller AI models like Phi-3 are tailored for simpler tasks, making them accessible to companies with limited resources. Microsoft cited document summarization and extracting insights from market research reports as potential applications.
Eric Boyd, Corporate Vice President of Microsoft Azure AI Platform, said Phi-3-mini matches the capabilities of larger models such as GPT-3.5, just in a smaller package.
He explained that developers trained Phi-3 with a “curriculum,” an approach inspired by how children learn from the simpler language of children’s books.
Boyd noted that Phi-3 builds on the lessons learned from previous iterations, including Phi-2, which was released in December. While Phi-2 performed comparably to much larger models, Phi-3 goes further, producing responses close to those of models ten times its size.
Smaller AI models require less computational power, allowing tech providers to offer them at lower cost. Microsoft expects that this affordability will bring AI to use cases previously ruled out by the expense of larger models such as GPT-4. Microsoft says the new models will be “substantially cheaper” than their larger counterparts, though it has not provided exact pricing.
Competitors such as Google and Meta offer their own smaller AI models aimed at specific tasks like document summarization and coding assistance. Google’s Gemma 2B and 7B target simple chatbots and language-related work, while Meta’s Llama 3 8B is positioned for chatbots and coding assistance.