This report analyzes the capital expenditure structure of a typical hyperscale AI data center, breaking down spending into physical infrastructure, IT computing equipment, and networking equipment. Among these, investment in computing systems (servers) is the most significant. As the proportion of higher-end, higher-power AI servers in new projects continues to rise, the cost share of computing systems is expected to increase further.
AI is sweeping across the world at an unprecedented pace. From training LLMs to powering ubiquitous generative AI applications, data has become the lifeblood of this technological revolution. The massive wave of data generated by AI is putting immense pressure on global data center infrastructure, especially in the storage domain, where a profound transformation is quietly taking place.
Traditionally, nearline HDDs have been the backbone of large-scale data storage. However, they are now facing an unprecedented supply crisis. At the same time, SSDs, long regarded as high-performance but high-cost solutions, are being thrust into the spotlight, with high-capacity QLC SSDs drawing particular attention.
This chain reaction, triggered by HDD shortages, is not merely a supply chain issue; it is a catalyst accelerating the rewriting of the storage market's rules and signaling explosive growth for QLC SSDs in the coming years. This report delves into the causes, challenges, and future dynamics of this trend.
2026 NAND Flash: AI drives explosive enterprise SSD demand, offsetting weak consumer markets. HDD shortages compel a shift to high-capacity QLC SSDs. Despite bit output growth, supply of these drives remains structurally tight, ensuring sustained price increases and a significant market pivot.
The US policy of repatriating semiconductor manufacturing is spurring suppliers to increase investment in the US and accelerate plans for plant construction. Given high labor costs and a scarcity of technical talent in the US market, suppliers are striving to introduce automated semiconductor production solutions. AI and digital twin technology are becoming the core of smart manufacturing by improving production efficiency and yield. Moreover, as smart logistics equipment gains broader coverage and greater deployment flexibility, it is expected to evolve from standalone machines into a multi-equipment collaborative ecosystem. Taiwanese suppliers such as Techman and KENMEC are actively seizing opportunities in the US semiconductor automation market through their technological prowess and system integration experience.
Cloud providers lift AI capex to new highs; rack-scale GPU systems and custom ASICs rise, and liquid cooling spreads.
The report highlights AI data center expansion by U.S. and Chinese CSPs. U.S. firms are scaling globally while investing more at home, and Chinese firms are expanding with self-developed chips; both prioritize energy stability going forward.
Analyst summary: Samsung completed NVIDIA certification for HBM3e and began limited shipments; HBM3e will be the near‑term mainstream. Supplier competition is intensifying while HBM4 upgrades are expected to raise costs and premiums; certification timing will shape prices.
AI's surge is transforming data center storage. HDDs face high costs and long lead times due to new technology hurdles, creating a significant supply gap. This opens a key opportunity for QLC nearline SSDs. NAND suppliers are accelerating advanced process transitions to dominate the AI-driven high-capacity storage market, positioning nearline SSDs as the future mainstream.
TrendForce notes that surging AI inference is driving demand for high-capacity storage. The HDD shortage is pushing CSPs toward nearline SSDs, and NAND Flash vendors are accelerating QLC validation and capacity expansion. SSD adoption in AI workloads is expected to widen, boosting enterprise SSD demand, while NAND Flash extends into AI training and diversifies its technologies. The supply chain faces bottlenecks that will require both expansion and innovation.
Geopolitical constraints and import controls curb RTX PRO sales in China, with a special edition awaiting broader demand. CPX adopts GDDR7 memory with increased capacity, complementing Rubin GPUs that rely on HBM, while near-term momentum remains with current solutions.