[News] CXL May Emerge as Post-HBM Battleground; Samsung Reportedly Presents System with 10× Performance Gain
Competition among global memory makers for leadership in next-generation memory technology, Compute Express Link (CXL), is intensifying. According to Hankyung, sources say CXL could emerge as the next key battleground after HBM, with Samsung recently unveiling a new system delivering up to 10× the performance of its predecessor from four years ago.
As the report indicates, Samsung presented a paper on its CXL-based memory system, “Pangea v2,” at the IEEE. The system delivers up to 10.2× higher data transfer performance than conventional interconnect methods such as RDMA, while reducing bottlenecks, a persistent challenge in traditional memory architectures, by as much as 96%, the company said. To develop the system, Samsung collaborated with global semiconductor design firm Marvell and AI infrastructure company Liquid AI, the report adds.
Pangea v2 is based on the CXL 2.0 standard introduced in 2020 by global semiconductor companies, including Intel and NVIDIA, the report notes. The system connects 22 CXL DRAM modules (CMM-D) into a single shared memory pool, enabling multiple servers to access up to 5.5TB of memory.
Meanwhile, the standard has since progressed to CXL 3.2, and Samsung said it plans to unveil “Pangea v3,” based on the latest specification, in 2026, as Hankyung highlights.
Memory Leaders Accelerate CXL Development
SK hynix is also accelerating its CXL push. According to the report, the company introduced its first CXL DRAM in 2022, followed by a CXL 2.0-compatible product in 2023, and continues to advance development. The report cites Park Joon-deok, head of DRAM marketing, as saying the company aims to sustain its leadership with second-generation products supporting CXL 3.0, building on its first-generation CXL 2.0-based memory, which completed customer validation in 2025.
Aside from Korean players, Micron also joined the race with its own CXL memory module unveiled in 2024, the report adds.
CXL Enables Shared Memory Pooling for AI Workloads
CXL is a next-generation technology that allows multiple GPUs and CPUs to access a large, centralized memory pool, Hankyung notes, and is widely seen as a breakthrough for improving AI server performance. Currently, each GPU and CPU in an AI server relies on dedicated memory, but CXL enables them to dynamically share and access a common memory pool as needed.
Because GPUs and CPUs typically require large memory capacity only at certain times, conventional systems utilize just 20% to 30% of installed memory under normal conditions, leading to inefficiencies, the report adds.
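The utilization gap the report describes can be illustrated with a short sketch. This is not Samsung's or any vendor's implementation; all capacities and demand figures below are hypothetical, chosen only to show why a shared pool serves bursty demand with far less installed memory than fixed per-device provisioning.

```python
# Illustrative sketch (hypothetical numbers): static per-device memory
# provisioning vs. a CXL-style shared memory pool.

def static_utilization(demands, per_device_capacity):
    """Each device owns a fixed slice; capacity idle on one device
    cannot serve demand on another, so it is stranded."""
    served = sum(min(d, per_device_capacity) for d in demands)
    installed = per_device_capacity * len(demands)
    return served / installed

def pooled_utilization(demands, pool_capacity):
    """Devices draw from one shared pool as needed, so only the
    aggregate demand matters, not its distribution."""
    served = min(sum(demands), pool_capacity)
    return served / pool_capacity

# Eight devices, but only two are memory-hungry at this instant (GB).
demands = [400, 350, 40, 30, 50, 45, 35, 50]

print(f"static: {static_utilization(demands, 400):.0%}")   # low utilization
print(f"pooled: {pooled_utilization(demands, 1200):.0%}")  # higher utilization
```

With each device sized for its peak (400 GB), most installed memory sits idle whenever demand is uneven; one pool sized near aggregate demand serves the same workload at much higher utilization, which is the inefficiency the 20% to 30% figure points at.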
Notably, according to Digital Today, citing a March report by The Information, sources say Google has begun deploying CXL technology in its data centers. The company is reported to be installing CXL controllers to manage traffic between server CPUs and large external memory pools.
In addition, according to Digital Today, NVIDIA plans to support CXL 3.1 in its Vera CPU, expected to launch later this year. The report adds that NVIDIA’s move could represent the largest real-world test of CXL to date.
Read more
- [News] Samsung’s May Strike Seen Disrupting Up to 4% of DRAM Output, With Weeks-Long Recovery Risk
- [News] Samsung Reportedly Produces Sub-10nm 10a DRAM Working Die Using 4F Square, VCT, Targets 2028 Production
(Photo credit: Samsung)