[News] “Father of HBM” Predicts Memory-Centric AI Shift as HBF Emerges, Reshaping GPUs’ Role


2026-04-01 Semiconductors editor

Please note that this article cites information from Aju News, Money Today Broadcasting, Hankyung and SK hynix.

As memory giants race to bet on post-HBM technologies like HBF (High Bandwidth Flash), the man who is widely credited as the “father of HBM” is making a bold call. KAIST professor Joungho Kim told Aju News that today’s GPU-centric paradigm led by NVIDIA will eventually flip into a memory-centric architecture.

As the AI landscape pivots from generative to agentic models, memory constraints are emerging as a critical choke point. According to Aju News, Kim described this shift as the rise of “context engineering,” where vast volumes of documents, videos, and multimodal data are processed simultaneously. To keep pace, he stressed that both memory bandwidth and capacity must scale by up to 1,000× to ensure speed and accuracy.

Citing Kim, Money Today Broadcasting reported earlier that a 100× to 1,000× surge in input scale could trigger an exponential jump in memory demand—potentially ballooning by as much as 1 million times.

Against this backdrop, Kim told Aju News that HBM—which dominates today’s AI accelerator memory by vertically stacking DRAM for ultra-high speed—will fall short in the agentic AI era. As a next-generation path, he highlighted HBF, which replaces DRAM with stacked NAND to create a “massive bookshelf” of long-term memory, dramatically expanding capacity beyond current limits.

Building on this, Kim told Aju News that the balance of power in AI could ultimately tilt away from GPUs toward memory. While GPUs and CPUs anchor today’s compute architecture, he argued that future systems will be built around ultra-high-capacity memory such as HBM and HBF—with processors effectively embedded within them.

The idea can be more concretely visualized through a research paper published by SK hynix. As reported by Hankyung in February, the company introduced an “H3” architecture in a paper published through the Institute of Electrical and Electronics Engineers (IEEE).

According to Hankyung, under H3, HBM and HBF are deployed side by side with the GPU handling computation—marking a departure from current AI chip designs, where only HBM sits adjacent to the processor.

A turning point may be imminent. Aju News, citing Kim, reports that HBF engineering samples are expected to emerge around 2027, with Google, NVIDIA, or AMD potentially adopting the technology as early as 2028.

SK hynix, Samsung Betting on HBF

Notably, Kim also sees the emerging HBF race echoing the competitive playbook of HBM, with Samsung Electronics and SK hynix once again set to go head-to-head.

In February, SK hynix partnered with SanDisk to launch an HBF standardization consortium, aiming to secure early ecosystem leadership. Samsung Electronics, meanwhile, is focusing on next-generation HBM such as HBM4E, while also investing in NAND architectures aligned with the emerging HBF concept, according to Aju News.


(Photo credit: SanDisk)
