Microsoft has unveiled the second generation of its Maia artificial intelligence chips, aiming to compete with chipmakers like Nvidia.
The Maia 200 chip performs 30% better per dollar than current-generation Microsoft hardware, the company said. It will be used both for Microsoft’s own AI models and for third-party models like OpenAI’s GPT-5.2.
“Today, we’re proud to introduce Maia 200, a breakthrough inference accelerator engineered to dramatically improve the economics of AI token generation,” wrote Scott Guthrie, Microsoft’s executive vice president for Cloud and AI.
“Maia 200 is part of our heterogeneous AI infrastructure and will serve multiple models, including the latest GPT-5.2 models from OpenAI, bringing performance per dollar advantage to Microsoft Foundry and Microsoft 365 Copilot. The Microsoft Superintelligence team will use Maia 200 for synthetic data generation and reinforcement learning to improve next-generation in-house models.”
The chips were built on Taiwan Semiconductor Manufacturing Co. (TSMC)’s 3nm process, the same node as Nvidia’s forthcoming Vera Rubin chips, announced earlier in January.
Microsoft said the chips deliver three times the FP4 tensor core performance of Amazon’s Trainium chips, and higher FP8 performance than Google’s seventh-generation TPU machine learning units.
Maia 200 has already been deployed at Microsoft’s data centres in Iowa, and the company plans to add the chips to its Arizona data centres next. It has invited developers to begin using the Maia 200 software development kit.
Microsoft’s Maia 100 chip was announced in 2023, but was not released widely to cloud customers.
Microsoft’s share price moved 0.9% to close at US$470.28, before rising 0.2% after-hours. Its market capitalisation stands at US$3.50 trillion.