[News] Google Unveils 7th-Gen TPU Ironwood with 9,216-Chip Superpod, Taking Aim at NVIDIA

Google is advancing its lead in the custom silicon race with its Tensor Processing Units (TPUs). According to CNBC, the company’s seventh-generation TPU, known as Ironwood, is set to hit the market for public use in the coming weeks. In its press release, Google said Ironwood is designed to handle everything from large-scale model training and complex reinforcement learning (RL) to AI inference. The new TPU delivers up to 10× higher peak performance than TPU v5p and achieves over 4× better per-chip efficiency for both training and inference compared with its predecessor, TPU v6e (Trillium).

Tom’s Hardware adds that Ironwood delivers 4,614 FP8 TFLOPS of performance and comes with 192 GB of HBM3E memory. According to Economic Daily News, its memory capacity is six times that of Trillium, enabling the processing of larger models and datasets while reducing the need for frequent data transfers.
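To put those per-chip figures in perspective, here is a minimal back-of-the-envelope sketch in Python; the Trillium (TPU v6e) capacity is inferred from the reported six-fold memory ratio rather than stated in the article, so treat it as an illustrative assumption.

```python
# Per-chip Ironwood figures cited above (Tom's Hardware / Economic Daily News).
IRONWOOD_FP8_TFLOPS = 4_614      # peak FP8 performance per chip, in TFLOPS
IRONWOOD_HBM_GB = 192            # HBM3E capacity per chip, in GB
MEMORY_RATIO_VS_TRILLIUM = 6     # reported Ironwood-to-Trillium memory ratio

# Assumption: back-calculate Trillium's per-chip HBM from the 6x ratio.
implied_trillium_hbm_gb = IRONWOOD_HBM_GB / MEMORY_RATIO_VS_TRILLIUM
print(f"Implied Trillium (TPU v6e) HBM per chip: ~{implied_trillium_hbm_gb:.0f} GB")
```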

Ironwood Pods: Massive 9,216-Chip Scale Drives Next-Level AI Performance

As its press release indicates, Ironwood can scale up to 9,216 chips within a superpod, connected through a 9.6 Tb/s Inter-Chip Interconnect (ICI) network. This breakthrough architecture helps eliminate data bottlenecks, allowing it to efficiently handle even the most demanding AI models.

Ironwood pods deliver 42.5 FP8 ExaFLOPS of compute for both training and inference, far exceeding the roughly 0.36 ExaFLOPS of NVIDIA’s GB300 NVL72 system, Tom’s Hardware notes. Each pod is interconnected through Google’s proprietary 9.6 Tb/s Inter-Chip Interconnect and carries about 1.77 PB of HBM3E memory, again outpacing NVIDIA’s competing platform.
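Those pod-level totals follow almost directly from the per-chip numbers, assuming they scale linearly across a full 9,216-chip superpod. A quick sketch using only the figures cited above:

```python
# Back-of-the-envelope check of the pod-level figures, assuming linear scaling
# of the per-chip specs across a 9,216-chip Ironwood superpod. Illustrative only.
CHIPS_PER_POD = 9_216
TFLOPS_PER_CHIP = 4_614          # FP8 TFLOPS per Ironwood chip
HBM_PER_CHIP_GB = 192            # GB of HBM3E per chip
GB300_NVL72_EXAFLOPS = 0.36      # NVIDIA comparison figure cited above

pod_exaflops = CHIPS_PER_POD * TFLOPS_PER_CHIP / 1e6   # 1 ExaFLOPS = 1e6 TFLOPS
pod_hbm_pb = CHIPS_PER_POD * HBM_PER_CHIP_GB / 1e6     # 1 PB = 1e6 GB (decimal)

print(f"Pod compute: ~{pod_exaflops:.1f} FP8 ExaFLOPS")   # ~42.5
print(f"Pod HBM3E:   ~{pod_hbm_pb:.2f} PB")               # ~1.77
print(f"Ratio vs. GB300 NVL72: ~{pod_exaflops / GB300_NVL72_EXAFLOPS:.0f}x")
```

The close match between these multiplied-out values and the reported 42.5 ExaFLOPS and roughly 1.77 PB suggests the pod totals are simply the per-chip specs scaled across 9,216 chips.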

Several companies are already adopting Google’s Ironwood-based platform. Anthropic plans to use up to one million TPUs to scale its Claude models, while Lightricks has begun leveraging Ironwood to train and serve its LTX-2 multimodal system.

According to TrendForce, Google is partnering with Broadcom on Ironwood, which is set to expand in 2026 and succeed the TPU v6e (Trillium). TrendForce expects Google’s TPU shipments to remain the highest among CSPs, with over 40% annual growth in 2026.

Google Axion: Armv9 Efficiency for Modern AI Workloads

In addition to the Ironwood TPU, Google has introduced its first Armv9-based general-purpose processor, called Axion, Tom’s Hardware notes. Google highlights Axion’s strong energy efficiency, positioning it as a key component for modern AI workflows. While specialized accelerators like Ironwood tackle intensive model serving, Axion is designed to power the underlying operations, from large-scale data preparation and ingestion to running application servers.

(Photo credit: Google)

Please note that this article cites information from CNBC, Google, Tom’s Hardware, and Economic Daily News.

