
[News] AI Memory Showdown: SK hynix Unveils 48 Gb/s GDDR7 and LPDDR6; Samsung Showcases HBM4


2025-11-27 Semiconductors editor

The memory race is accelerating in the AI era, with South Korea’s chip giants preparing new technologies to meet soaring AI demand. According to ZDNet, Samsung Electronics and SK hynix are set to showcase a broad lineup of next-generation DRAM solutions at the 2026 International Solid-State Circuits Conference (ISSCC). SK hynix will present its latest GDDR7 and LPDDR6 for graphics and mobile applications, while Samsung Electronics will unveil HBM4.

SK hynix Unveils Next-Gen GDDR7 and LPDDR6

As the report indicates, SK hynix will introduce a GDDR7 chip with 48 Gb/s per-pin speed and 24 Gb density. The chip adopts a symmetric dual-channel design, targeting high-bandwidth applications such as GPUs, edge AI inference, and gaming.

The key point of interest is the nonstandard specifications of the GDDR7 DRAM SK hynix plans to introduce. While the industry had expected next-generation GDDR7 to peak at roughly 32–37 Gbps, SK hynix will present an ISSCC paper demonstrating 48 Gbps operation at 24 Gb density, indicating a clear technological lead, as Global Economic News highlights.

The Global Economic News report notes that this represents a jump of more than 70% in transfer speed over today's 28 Gbps GDDR7. Per-chip bandwidth reaches 192 GB/s, up from roughly 112 GB/s for existing 28 Gbps products—a technological breakthrough that meaningfully reshapes the performance paradigm of graphics DRAM.
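The per-chip figures above follow directly from the per-pin rates. A minimal sketch of the arithmetic, assuming the common 32-bit (x32) interface per GDDR7 chip (the article states only the resulting per-chip numbers, not the interface width):

```python
def gddr7_chip_bandwidth_gbps(pin_rate_gbps: float, interface_bits: int = 32) -> float:
    """Per-chip bandwidth in GB/s: pin rate (Gb/s) x pins / 8 bits per byte."""
    return pin_rate_gbps * interface_bits / 8

current = gddr7_chip_bandwidth_gbps(28)   # 112.0 GB/s, matching today's 28 Gbps parts
next_gen = gddr7_chip_bandwidth_gbps(48)  # 192.0 GB/s for the 48 Gbps parts
speedup = (48 - 28) / 28                  # ~0.71, i.e. the "more than 70%" jump
print(current, next_gen, round(speedup * 100))
```

The roughly 71% gain in per-pin speed carries through one-to-one to per-chip bandwidth, since the interface width stays the same.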

ZDNet further notes that the company will also unveil 14.4 Gb/s LPDDR6 for the first time. With a 50% bandwidth increase over LPDDR5 (9.6 Gb/s), it is positioned as a mobile DRAM solution optimized for high-performance smartphones, AI PCs, and edge devices with generative-AI capabilities.

Samsung Unveils Next-Gen HBM4 for AI Accelerators

Meanwhile, ZDNet says Samsung Electronics will unveil its next-generation HBM4, delivering 36 GB of capacity and 3.3 TB/s of bandwidth. Built on a 1c DRAM process, HBM4 enhances its TSV (Through-Silicon Via) architecture to reduce inter-channel signal delays and provide the ultra-high bandwidth and low-power data transfer needed for upcoming AI accelerators.

As ZDNet notes, Samsung’s HBM4 delivers a significant bandwidth improvement over previous generations. Importantly, it meets the 3 TB/s-plus throughput required by leading GPU and AI ASIC makers and is expected to see broad adoption in AI server accelerators beginning next year.
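As a back-of-envelope check on the 3.3 TB/s figure, the implied per-pin data rate can be estimated from the 2048-bit interface width JEDEC defines for HBM4. The article does not state a per-pin speed, so this is an inference under that assumption, not a quoted spec:

```python
def implied_pin_rate_gbps(stack_bw_tbps: float, interface_bits: int = 2048) -> float:
    """Implied per-pin rate in Gb/s from aggregate stack bandwidth in TB/s."""
    return stack_bw_tbps * 1000 * 8 / interface_bits  # TB/s -> GB/s -> Gb/s, spread over pins

rate = implied_pin_rate_gbps(3.3)
print(round(rate, 1))  # ~12.9 Gb/s per pin
```

A per-pin rate near 13 Gb/s would sit well above HBM3E-class speeds, consistent with the article's framing of HBM4 as a significant generational jump.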

Samsung is currently negotiating with NVIDIA on next year’s HBM4 pricing. According to Dealsite, citing sources, NVIDIA brought Samsung Electronics to the negotiating table just a week after finalizing its HBM4 supply contract with SK hynix. The report adds that with HBM4 demand exceeding supply and no strong incentive to reduce unit pricing, Samsung Electronics is internally aiming for the same pricing level as SK hynix for its 12-layer HBM4 memory.


(Photo credit: SK hynix)

Please note that this article cites information from ZDNet, Global Economic News, and Dealsite.
