This issue highlights server ODMs, CPUs, GPUs, and key changes in the thermal supply chain. AI servers remain the focus: CSPs continue placing orders, ODMs are preparing for new platforms, Meta and Google are pushing in-house CPUs, and cooling requirements keep rising. Geopolitical risks continue to cloud the outlook.
The AI ASIC market is expanding rapidly, with TPUs shifting from internal use to external commercialization thanks to their energy efficiency and customizability. Leading cloud and tech firms are actively developing in-house ASICs to cut costs and reduce supply risks, pushing AI hardware toward high performance, low power, and diverse applications and making ASICs the leading accelerator category after GPUs.
MRDIMM addresses the needs of high-core-count CPUs and AI workloads by enhancing memory bandwidth and efficiency, but standards, cost, and ecosystem support still limit adoption.
In 2025, global AI chip demand centers on high-end HBM memory. NVIDIA’s new Blackwell platform drives growth amid geopolitical restrictions and steady AI server demand, while HBM technology evolves rapidly toward HBM4 in 2026.
3Q25 memory prices are rising above forecasts as supply tightens. DDR4 surges, new-generation products rise moderately, and NAND Flash increases only in the enterprise segment.
DDR4 prices are rising on end-of-life (EOL) effects and restocking demand, with gains concentrated among certain suppliers. DDR5 sees modest growth as CSP orders return, with stable pricing. AMD's growing penetration of the server market benefits high-speed DRAM and new process nodes, raising profit potential for manufacturers.
North American CSPs and OEMs drive AI market growth, and shipments of the new Blackwell platform will expand. China's market faces uncertainty as geopolitics affects the supply of AI solutions. Overall AI server shipments are expected to maintain double-digit growth.
Cloud giants are accelerating self-developed AI ASICs, boosting both market scale and project counts. Driven by internal demand and geopolitics, the ASIC share will rise, with next-generation ASICs from major cloud providers expected to ramp in 2026, a key growth year.
HBM4 commands a price premium over HBM3e due to its greater manufacturing and design complexity and cost.
Amazon is rapidly expanding its self-developed AI ASIC servers to strengthen AWS's competitiveness in cloud AI training, with the new-generation Trainium chips offering diversified designs for varied training needs. On the NVIDIA platform, Delta leads as the main power supplier, advancing high-power AI server development.