This report shows Google and AWS driving strong HBM demand, with capacity expanding across generations; Meta and Huawei join as large buyers.
The global AI server market stays strong as major cloud providers expand infrastructure and platforms; GPU and self-developed ASIC demand rises amid shifting regional dynamics.
This report analyzes the capital expenditure structure of a typical hyperscale AI data center, breaking down spending into physical infrastructure, IT computing facilities, and networking equipment. Of these, investment in computing systems (the servers themselves) is the largest. As higher-end, higher-power AI servers make up a growing share of new projects, the cost share of computing systems is expected to rise further.
The server market report identifies North American cloud providers as the growth engine, with enterprise OEMs remaining cautious; AI servers and efficient cooling are the priorities. Key players such as Compal and Supermicro focus on AI servers and new cabinet offerings, eyeing Middle East opportunities and regional capacity, while shipment timing depends on when regional plants come online.
AI server and chip demand rises; GPUs and in-house ASICs advance. Liquid cooling expands, and storage demand shifts toward higher capacities.
September 2025 server DRAM contract prices surged on persistent CSP demand, with both DDR5 and DDR4 prices rising. Adoption of 8000 MT/s modules will accelerate. Strong 2026 demand and tight supply are set to extend the price increases.
Cloud providers lift AI capex to new highs; rack-scale GPU systems and custom ASICs gain share, and liquid cooling spreads.
The report highlights AI data center expansion by U.S. and Chinese CSPs: U.S. firms scale globally while investing more at home, and Chinese firms expand with self-developed chips; both prioritize energy stability going forward.
Robust AI server demand, driven by NVIDIA, AMD, and CSP custom ASICs, propels market growth. DDR5 is mainstream, yet DDR4 sees extended life and a price rebound driven by storage needs. AMD and Arm-based CPUs challenge Intel. Server DRAM revenue rose in 2Q25, with high-capacity modules lifting average prices; strong AI server growth is forecast to continue.
Analyst summary: Samsung has completed NVIDIA certification for HBM3e and begun limited shipments; HBM3e will remain the near-term mainstream. Supplier competition is intensifying, while HBM4 upgrades are expected to raise costs and premiums; certification timing will shape pricing.