AI's evolving demands shift storage from HDDs (high latency) to SSDs (speed, low latency). SSDs offer superior performance and TCO benefits, accelerating their adoption in AI infrastructure.
AI drives surging memory demand, prompting capex revisions. However, limited cleanroom space and a shift to advanced tech over raw capacity will constrain future bit output growth. Equipment vendors are optimistic, yet memory tech hurdles rise.
North American cloud giants are substantially increasing capex, signaling an investment peak in AI infrastructure over the next two years. Shifting to large-scale, long-term strategies, they are focusing on high-end GPU racks and accelerating in-house AI ASIC development to secure market leadership and drive rapid AI server growth.
The 2024 SSD market faced conservative demand; Kingston led while Chinese brands rose, intensifying the "big get bigger" trend. Module makers differentiated and targeted new markets, with AI set to drive future demand for high-capacity SSDs.
AI-driven demand is pushing chiplet-based packaging; EMIB costs less than CoWoS, signaling a shift toward modular, efficient interconnects.
A clear uptrend is taking shape for 2026, with tighter DRAM supply and broad-based price increases now firmly in sight. The primary growth catalyst is CSPs, which are accelerating data center expansion to support AI workloads. This is not only driving higher global server shipments but also a notable increase in memory content per server.
In the NAND Flash market, enterprise demand will serve as the core growth engine in 2026, while the consumer segment is expected to remain muted until a more visible economic recovery boosts purchasing power and revitalizes demand.
Looking ahead to 2026, continued strength in AI server demand—combined with suppliers' profitability-first strategy—will keep both DRAM and NAND Flash prices on an upward trajectory, reinforcing a structural pricing shift across the memory industry.
Tariffs and policy changes kept early demand cautious, but cloud expansion and AI investment have lifted memory demand and pricing, tightening DRAM supply. The DDR5 uptrend is established, with DDR5 profitability likely to surpass that of HBM3e; vendors' capacity allocation and pricing strategies will reshape the market landscape.
AI-driven data centers evolve from single-chip to heterogeneous multi-GPU architectures. High-speed optical interconnects enable scalability, while silicon photonics and co-packaged optics boost bandwidth and energy efficiency amid modular, ecosystem-based competition.
This report analyzes the capital expenditure structure of a typical hyperscale AI data center, breaking down spending into physical infrastructure, IT computing equipment, and networking equipment. Among these, the investment in computing systems, namely servers, is the most significant. As the proportion of higher-end, higher-power AI servers in new projects continues to rise, the cost share of computing systems is expected to increase further.
AI is sweeping across the world at an unprecedented pace. From training LLMs to powering ubiquitous generative AI applications, data has become the lifeblood of this technological revolution. The massive wave of data generated by AI is putting immense pressure on global data center infrastructure, especially in the storage domain, where a profound transformation is quietly taking place.
Traditionally, nearline HDDs have been the backbone of large-scale data storage. However, they are now facing an unprecedented supply crisis. At the same time, SSDs, long regarded as high-performance but high-cost solutions, are being thrust into the spotlight, with high-capacity QLC SSDs drawing particular attention.
This chain reaction, triggered by HDD shortages, is not merely a supply chain issue; it is a catalyst accelerating the rewriting of the storage market's rules and signaling explosive growth for QLC SSDs in the coming years. This report delves into the causes, challenges, and future dynamics of this trend.