About TrendForce News

TrendForce News operates independently from our research team, curating key semiconductor and tech updates to support timely, informed decisions.

[News] Samsung in Talks to Supply HBM4 to NVIDIA for 50K-GPU AI Mega Fab; HBM3e/GDDR7/SOCAMM2 Also Included


2025-10-31 Semiconductors editor

Shortly after NVIDIA CEO Jensen Huang met with Samsung Electronics Chairman Jay Y. Lee and Hyundai Motor Group Executive Chair Euisun Chung over fried chicken and beer, NVIDIA announced a major collaboration with the South Korean government and tech industry, led by Samsung.

According to Samsung, it has formed a strategic partnership with NVIDIA to establish an “AI Megafactory” designed to supercharge Samsung’s semiconductor production. The new facility will be equipped with more than 50,000 of NVIDIA’s most advanced GPUs, embedding AI throughout every stage of chip manufacturing.

Notably, the deal also opens a major business opportunity for the South Korean memory giant. Alongside the factory project, Samsung will supply NVIDIA with its latest memory solutions and foundry services—including HBM3e, HBM4, GDDR7, and SOCAMM2.

ZDNet highlights that Samsung and NVIDIA have maintained a partnership spanning more than 25 years, starting with graphics DRAM supply in the late 1990s. Their new AI factory project now reportedly marks the evolution of that long-standing collaboration into a full-fledged AI semiconductor alliance.

Samsung: All-in on HBM4

In a notable shift from its traditionally low-key approach, Samsung openly revealed that it is in close talks with NVIDIA to supply HBM4—on top of the products it already provides. The company plans to proactively invest in facilities to seamlessly meet rapidly increasing customer demand for HBM4, as per its press release.

Samsung said its HBM4 combines a 4nm logic base with sixth-generation 10nm-class (1c) DRAM, achieving speeds of over 11Gbps—surpassing both the JEDEC standard of 8Gbps and customer expectations.
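As a rough illustration of what those pin-speed figures imply, the per-stack bandwidth can be estimated from the per-pin data rate. The sketch below assumes the 2048-bit per-stack interface width defined in JEDEC's HBM4 standard (a figure not stated in the article itself):

```python
# Back-of-the-envelope per-stack HBM4 bandwidth from per-pin speed.
# Assumption: 2048-bit interface width per stack (JEDEC HBM4 spec);
# this width is not mentioned in the article.
JEDEC_PIN_SPEED_GBPS = 8      # JEDEC baseline per-pin data rate cited above
SAMSUNG_PIN_SPEED_GBPS = 11   # the 11+ Gbps speed Samsung reports
INTERFACE_WIDTH_BITS = 2048   # assumed HBM4 I/O width per stack

def stack_bandwidth_tbps(pin_speed_gbps: float) -> float:
    """Aggregate per-stack bandwidth in TB/s (Gb/s per pin * pins / 8 bits per byte)."""
    return pin_speed_gbps * INTERFACE_WIDTH_BITS / 8 / 1000

print(f"JEDEC baseline: {stack_bandwidth_tbps(JEDEC_PIN_SPEED_GBPS):.3f} TB/s")
print(f"At 11 Gbps:     {stack_bandwidth_tbps(SAMSUNG_PIN_SPEED_GBPS):.3f} TB/s")
```

Under that assumption, moving from the 8 Gbps baseline to 11 Gbps lifts per-stack bandwidth from roughly 2.0 TB/s to about 2.8 TB/s.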

In addition, Samsung noted that it is currently supplying HBM3e to all of its global clients and has completed HBM4 sample shipments to every customer that requested them, with mass-production shipments being prepared in line with customer schedules.

SK hynix Also Joins Forces

Meanwhile, Samsung’s archrival SK hynix announced a similar form of cooperation. As NVIDIA stated in its press release, the two parties are deepening their collaboration to advance SK hynix’s HBM and next-generation memory solutions for GPUs, chip manufacturing, and telecommunications. The effort centers on a massive AI factory featuring over 50,000 NVIDIA GPUs, with the first phase slated for completion by late 2027.

Once operational, the facility is expected to become one of Korea’s largest AI factories, supporting SK subsidiaries—including SK hynix and SK Telecom—as well as external organizations via a GPU-as-a-service model, as per the announcement.

260K NVIDIA GPUs to be Supplied Across Korea

The partnerships with Samsung and SK hynix are just the beginning. According to ZDNet, NVIDIA plans to supply 260,000 GPUs to the Korean government and local companies, expanding collaboration across AI infrastructure, semiconductors, robotics, telecommunications, and data centers.

Currently, about 65,000 NVIDIA Hopper GPUs are deployed among domestic firms and institutions in Korea, the report suggests. Under the new plan, NVIDIA will reportedly deliver a total of 260,000 Blackwell GPUs: up to 50,000 each to the Korean government, Samsung Electronics, SK Group, and Hyundai Motor Group, and 60,000 units to Naver Cloud.

ZDNet, citing an NVIDIA official, notes that the Blackwell GPUs supplied in Korea will include both GB200 servers and RTX Pro 6000 workstations. Once fully deployed, the company’s GPU footprint in the country is expected to nearly quintuple, rising from 65,000 to 320,000 units, the report adds.

Memory Shortages Could Intensify

As TrendForce observes, the HBM market is facing a severe supply crunch and persistently high prices, driven by surging demand from SK hynix and Samsung’s 100,000+ GPU AI factories on top of existing server needs. TrendForce notes that this is accelerating the HBM4 arms race, as companies rush to invest in HBM-focused CapEx, potentially diverting resources away from standard DDR5 DRAM. At the same time, AI is expanding beyond the cloud into manufacturing, robotics, and enterprise applications, creating a long-term, broad-based boost in memory demand.


(Photo credit: NVIDIA GeForce’s X)

Please note that this article cites information from Samsung, NVIDIA and ZDNet.

