TrendForce News operates independently from our research team, curating key semiconductor and tech updates to support timely, informed decisions.
While NVIDIA flexed its muscle with giga-scale AI data center solutions and an expanding NVLink Fusion ecosystem, rival AMD took the stage at the 2025 OCP Global Summit with a slightly different approach. According to TechNews, CTO and EVP Mark Papermaster, speaking on “A Fully Open and Collaborative AI Ecosystem,” emphasized that as AI advances at breakneck speed, openness and collaboration aren’t just advantages—they’re essentials. Here are the key announcements AMD has made, as well as its progress on open collaboration.
AMD on x86 Progress and NVIDIA-Intel Deal
On October 13, AMD and Intel celebrated the first anniversary of the x86 advisory group. The report, citing Papermaster, notes that over the past year, AMD and Intel, together with hyperscale providers, OEMs, and OS leaders, have made significant progress. According to Papermaster, industry input has helped drive alignment on interrupt models, ISA extensions for vector and matrix operations, and new memory safety features, TechNews adds.
Notably, Papermaster said that the recent NVIDIA–Intel announcement highlights the massive x86 installed base and the ecosystem’s strength, adding that the advisory group ensures code remains universally accessible, avoiding fragmentation across vendors.
AMD Focuses on Open Collaboration with ROCm and ESUN
Turning to the theme of openness, Papermaster argued that every breakthrough shaping the industry shares a common foundation: open collaboration. Linux unlocked the modern software era, he noted, while open standards like TCP/IP and HTTP powered networking. Like those earlier waves, AI will require open ecosystems to keep pace, and that is the vital role OCP plays, he said.
Notably, Papermaster announced at the OCP Global Summit that AMD has also joined ESUN (Ethernet for Scale-Up Networking), which will provide a common abstraction point so system designers can leverage shared Ethernet infrastructure while choosing the most suitable transport protocols. This will result in a more robust and diverse Ethernet ecosystem, he said.
The report, citing Papermaster, highlights that AMD’s open AI stack, ROCm, along with open interconnect technologies like UCIe for chiplets, CXL for memory expansion, UALink for scale-up, and UEC for scale-out, all play a role in scaling the AI ecosystem aggressively and sustainably.
TechNews notes that in 2025, AMD has focused on opening the ROCm stack and building a vibrant developer ecosystem. With 2026 marking the 10th anniversary of the ROCm AI stack, Papermaster reportedly said that the stack, initially aimed at HPC applications, has since matured into production-ready software running on the world’s largest supercomputers.
AMD Pushes Scale-Up and Scale-Out Networking
Meanwhile, for rack-scale systems, AMD is committed to standards that offer choice for scale-up and strengthen Ethernet to better handle scale-out demands with congestion management, packet spraying, and more. As a founding member of the Ultra Ethernet Consortium (UEC), AMD is focused on addressing scale-out networking needs for both HPC and AI, the report suggests.
For scale-up, AMD is also a founding member of UALink, having donated its battle-hardened Infinity Fabric IP to help the consortium grow rapidly across the industry, the report adds.
For AMD’s own scale-up implementations, TechNews explains that the plan is to offer choice: native UALink optimized for direct GPU-to-GPU connections, with UALink-capable switches in development, as well as the UALink standard running over Ethernet.
Helios Rack: Open Standards in Action
Citing Papermaster, the report notes that open standards such as OCP DC-MHS, UALink, and others are driving the modularity and interoperability of the Helios rack. A look inside a single tray reveals how many standards are integrated across the system: EPYC CPU-connected peripherals leverage PCIe 6.0, memory expansion is handled via CXL, and CPU–GPU connectivity uses Infinity Fabric. The Instinct GPU links to the AI network through UALink, with scale-up implemented over UALink on Ethernet. The DC-SCM management module completes the modular setup.
(Photo credit: TrendForce at 2025 OCP Global Summit)