Research Reports

Crossing the AI Memory Wall: Storage-Layer Reallocation and HBF Analysis


Last Modified: 2026-04-23


Update Frequency: Not regular


Format: PDF


In AI inference, MoE architectures and long-context processing have sharply increased memory-capacity requirements for model weights and the KV cache, shifting the bottleneck from insufficient compute to limited memory capacity. As warm data grows rapidly, this will drive a restructuring of the storage hierarchy: HBM will handle hot data, while HBF carries warm data to optimize cost–performance. However, commercializing HBF still requires overcoming challenges in advanced packaging processes and the inherent characteristics of NAND flash.
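The capacity pressure from the KV cache can be made concrete with a back-of-the-envelope sketch. The model parameters below are illustrative (a hypothetical 70B-class, Llama-style model with grouped-query attention), not figures taken from the report:

```python
# Back-of-the-envelope KV-cache sizing, illustrating why long contexts make
# memory capacity, rather than compute, the binding constraint in inference.
# All model parameters used here are hypothetical, Llama-style values.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, batch_size: int = 1,
                   bytes_per_elem: int = 2) -> int:
    """Total bytes of K and V tensors cached across all layers (FP16 by default).

    The leading factor of 2 accounts for storing both K and V.
    """
    return (2 * num_layers * num_kv_heads * head_dim
            * seq_len * batch_size * bytes_per_elem)

# Hypothetical config: 80 layers, 8 KV heads of dimension 128 (GQA),
# FP16 precision, a single request at a 128K-token context.
size = kv_cache_bytes(num_layers=80, num_kv_heads=8, head_dim=128,
                      seq_len=128 * 1024)
print(f"{size / 2**30:.0f} GiB per request")  # prints "40 GiB per request"
```

At roughly 40 GiB of KV cache for a single 128K-context request, a GPU with 80 GB of HBM fills up after only a couple of concurrent requests, which is the capacity pressure that motivates tiering warm KV data out to HBF.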

Key Highlights

  • Bottleneck: AI advancements have shifted the bottleneck from compute power to memory capacity.
  • Hierarchy: Surging warm data demands tiered storage: HBM for hot data and HBF for warm data, maximizing cost-efficiency.
  • HBF Hurdles: Commercialization requires overcoming challenges in advanced packaging and the inherent limitations of NAND flash.

Table of Contents

  1. Development Bottlenecks of LLMs: The Impact of Model-Architecture Transformation on Computing Structures
    • Figure 1: Features of MoE
    • Figure 2: Deployment Strategies among AI Storage Vendors
  2. From Computing Bottlenecks to Restructuring of Storage Layers
    • Figure 3: Hot, Warm, and Cold Architectures of Storage Layers
    • Figure 4: “H³” Architecture
    • Table 1: Comparison between HBM and HBF
  3. TRI’s View

<Total Pages: 13>

[Figure preview: Hot, Warm, and Cold Architectures of Storage Layers]
USD 2,000

Membership

  • Selected Topics
  • Selected Topics-182_Crossing Over AI Memory Wall: Reallocation of Storage Layers and Analysis on HBF
