TrendForce News operates independently from our research team, curating key semiconductor and tech updates to support timely, informed decisions.
According to a report from Wccftech, citing The Information, Microsoft is reportedly pushing back its AI chip roadmap and plans to introduce an interim chip in 2027, likely named Maia 280, which is expected to combine two Braga chips for improved performance.
The report notes that the purported delay of Microsoft’s Braga AI chip has raised concerns that its successors, Braga-R and Clea, could face similar setbacks. These delays have reportedly sparked doubts about whether Microsoft’s chips can remain competitive with NVIDIA’s latest offerings.
As a result, Microsoft has reportedly decided to pursue a middle-ground solution: launching an AI chip in 2027 that falls between Braga and Braga-R in performance. Microsoft executives believe the new chip could deliver up to 30% better performance than NVIDIA’s expected 2027 offerings, the report adds.
According to an earlier report from The Information, mass production of Microsoft’s Braga AI chip has reportedly been delayed from 2025 to 2026. As noted by Tom’s Hardware, citing The Information, sources indicate the chip is expected to fall well short of the performance of NVIDIA’s flagship Blackwell chip, which was released last year.
As noted by Wccftech, Microsoft’s executives plan to eventually manufacture hundreds of thousands of in-house AI chips each year. The report points out that as major tech firms look to reduce their reliance on NVIDIA, the growing demand for custom chips has also driven increased orders for chip designers like Broadcom and Marvell.
Major U.S. CSPs Accelerate AI Infrastructure and In-House Chip Development
TrendForce points out that Microsoft still primarily relies on NVIDIA GPU-based solutions for its AI infrastructure, while progress on its in-house ASIC development has remained relatively slow. Its next-generation Maia chips are expected to begin ramping up in 2026.
Meanwhile, according to TrendForce, Meta is actively expanding both its AI server infrastructure and internal ASIC development, with shipment volumes of its MTIA chips projected to double by 2026.
Google—already known for its relatively high adoption of self-developed chips—has begun mass deployment of its inference-focused TPU v6e chips, which became mainstream in the first half of 2025, TrendForce notes.
TrendForce also highlights that AWS is expected to double shipments of its self-developed ASICs in 2025, leading among U.S. CSPs.
(Photo credit: Microsoft)