Microsoft Corp. has begun deploying its second-generation artificial intelligence chip as part of efforts to boost efficiency across its services and reduce reliance on Nvidia Corp.’s hardware.
The new Maia 200 chip, manufactured by Taiwan Semiconductor Manufacturing Co., is already being installed in Microsoft’s data centres in Iowa, United States, with further rollouts planned for the Phoenix area.
On Monday, Microsoft opened access to Maia’s control software for developers, though the company has yet to announce when Azure cloud customers will be able to run workloads on servers powered by the chip.
Some of the initial units will be allocated to Microsoft’s superintelligence team to generate data that will help refine future AI models, cloud and AI chief Scott Guthrie said in a blog post.
The chips will also support the Copilot assistant for businesses and run advanced AI models, including OpenAI’s latest, that Microsoft offers to its cloud customers.
Microsoft said the chip offers stronger performance on certain AI workloads compared with similar semiconductors developed by Google and Amazon Web Services.
“Maia 200 is also the most efficient inference system Microsoft has ever deployed,” Guthrie said, referring to the process of using AI models to generate responses to queries.
Microsoft’s move into custom chip development came years after Amazon.com Inc. and Alphabet Inc.’s Google launched their own in-house designs.
All three companies share the same goal: building cost-efficient processors that integrate smoothly into data centres while delivering savings and improved performance for cloud customers.
Rising costs and limited availability of Nvidia’s top-tier chips have intensified the race to secure alternative sources of computing power.
