
[News] Marvell Develops Custom HBM with Micron, Samsung, and SK hynix


2024-12-12 Semiconductors editor

Marvell has announced a new custom HBM compute architecture that enables XPUs to achieve greater compute and memory density. According to its press release, the new AI accelerator (XPU) architecture enables up to 25% more compute and 33% greater memory capacity while improving power efficiency.

Notably, the new custom XPU architecture is being developed in collaboration with leading memory makers Micron, Samsung, and SK hynix, as Marvell's press release indicates.

Marvell’s new custom HBM compute architecture delivers interfaces designed specifically for cloud data center operators. According to its press release, the architecture enables XPUs to scale more efficiently, helping cloud providers build robust infrastructure for the growing demands of artificial intelligence.

Its press release indicates that Marvell’s custom HBM compute architecture improves XPU performance through innovative interface design. By serializing and accelerating the I/O interfaces between the AI compute accelerator silicon dies and the HBM base dies, the architecture reduces power consumption by up to 70% compared with standard HBM interfaces.

Moreover, this interface design reduces the silicon real estate required in each die, enabling HBM support logic to be integrated onto the base die. Its press release points out that these real-estate savings of up to 25% allow for enhanced compute capabilities and support for up to 33% more HBM stacks, substantially increasing memory capacity per XPU.
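As a rough illustration of what the quoted 33% figure means for capacity, the sketch below works through a back-of-envelope example. The baseline stack count and per-stack capacity are hypothetical assumptions for illustration only; they do not come from Marvell's press release.

```python
# Back-of-envelope illustration of the "up to 33% more HBM stacks" claim.
# Baseline values (6 stacks of 24 GB) are hypothetical, not from Marvell.

def scaled_memory_gb(stacks: int, gb_per_stack: int, extra_stacks_pct: float = 0.33) -> int:
    """Total memory capacity after adding up to `extra_stacks_pct` more HBM stacks."""
    new_stacks = int(stacks * (1 + extra_stacks_pct))  # stacks are whole units
    return new_stacks * gb_per_stack

baseline_gb = 6 * 24                      # hypothetical baseline: 144 GB
upgraded_gb = scaled_memory_gb(6, 24)     # 6 stacks -> 7 stacks, 168 GB
print(baseline_gb, upgraded_gb)
```

Under these assumed numbers, one extra stack of headroom translates into 24 GB of additional capacity per XPU.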

According to a report from Tom’s Hardware, Marvell’s collaboration with major memory manufacturers Micron, Samsung, and SK hynix is essential to custom HBM’s successful implementation, as it enables broader market availability of custom HBM.


(Photo credit: Marvell)

Please note that this article cites information from Marvell and Tom’s Hardware.
