About TrendForce News

TrendForce News operates independently from our research team, curating key semiconductor and tech updates to support timely, informed decisions.

[News] Beyond HBM: Samsung, SK hynix Reportedly Explore Next-Gen AI Memory That Could Challenge NVIDIA


2026-03-10 Semiconductors editor

NVIDIA’s GPUs and SK hynix’s HBM currently dominate the AI computing landscape. But the industry is already looking beyond HBM as new memory technologies emerge to address the growing bottleneck of data movement. According to South Korean outlet Global Economic News, as large-scale AI models expand, the key constraint is no longer computing speed but the memory “highway” that transports data. Industry experts say more than 80% of the time and energy consumed during complex AI workflows can be wasted on data transfer. The following are five emerging technologies that could help address this challenge, as noted by the report.

PIM: Bringing AI Computing Inside Memory

As the report indicates, PIM (Processing in Memory) embeds computing units directly within memory chips, allowing the memory itself to perform matrix calculations — a core component of AI workloads — and send only the results back to the GPU. Industry experts cited by the report say adopting PIM could improve energy efficiency by dozens of times compared with conventional architectures. The report also notes that SK hynix has already deployed its PIM-based accelerator, AiM (Accelerator in Memory), in real-world applications, demonstrating strong computing efficiency.

CXL: Connecting the Entire Data Center

CXL (Compute Express Link) enables memory to be pooled and shared across systems, effectively expanding the capacity available to AI servers. The report notes that one of the biggest limitations of current AI servers is capacity: the amount of HBM that can be attached to a single GPU remains limited. CXL aims to remove this barrier by allowing memory to be shared like a network resource, potentially reducing the cost of training massive AI models. South Korean companies are also advancing in this area. As the report highlights, Samsung Electronics has developed the industry’s first CXL 2.0 DRAM and is investing heavily in the commercialization of CXL 3.0.

MRAM: A Low-Power Alternative to Conventional Memory

MRAM (Magnetoresistive RAM) is a leading candidate among next-generation non-volatile memories. As the report highlights, MRAM stores data using magnetic states rather than electrical charges, allowing information to remain intact even when power is turned off while maintaining speeds comparable to DRAM. The report also indicates that MRAM consumes extremely low power.

ReRAM: Enabling Ultra-High Density Memory

ReRAM (Resistive RAM) records data by changing electrical resistance, and its simple structure makes it highly suitable for stacking memory at extremely high densities. Because its behavior resembles that of biological neural networks, it can carry out complex neural network computations with significantly lower energy consumption.

As the report highlights, Samsung and SK hynix both hold leading patents and prototypes in next-generation memory technologies such as MRAM and ReRAM.

SoIC: NVIDIA and TSMC’s 3D Chip Integration Strategy

Meanwhile, SoIC (System on Integrated Chips) is also emerging as a critical technology. As the report notes, the architecture — developed through collaboration between NVIDIA and TSMC — vertically stacks computing units and memory instead of placing them side by side, connecting them as if they were a single chip. Compared with conventional HBM approaches, the design can shorten data travel distances by hundreds of times while also mitigating heat generated during chip stacking, the report adds.



Please note that this article cites information from Global Economic News.
