TrendForce News operates independently from our research team, curating key semiconductor and tech updates to support timely, informed decisions.
With NVIDIA GTC 2026 set to kick off in just a few hours, all eyes may be on an unexpected star: CPUs. While NVIDIA rose to fame on GPUs, Intel is stepping into the spotlight this year, not just attending but helping steer NVIDIA’s computing future, according to Team Blue’s post on X.
According to Wccftech, the conference is likely to detail the NVIDIA‑Intel partnership announced last September, with enterprise-class CPUs expected to take center stage. As the report notes, the two companies reached a $5 billion cooperation agreement at the time to jointly advance CPU development, spanning x86 processors for both consumer and enterprise segments.
Wccftech reports that Intel may reveal plans to embed Xeon CPUs into NVIDIA’s AI racks, a move hinted at by its role in the NVLink Fusion initiative.
The key question now is which Xeon generation will be adopted, with the latest 6th-gen lineup—Sierra Forest and Granite Rapids—seen as potential candidates, the report notes.
Notably, hyperscalers such as Meta and AI labs like OpenAI have recently signed CPU-focused supply deals with NVIDIA, underscoring the rising importance of CPUs within rack-scale AI infrastructure, as per Wccftech.
Meanwhile, though the two parties are also reportedly developing a joint x86 laptop-oriented SoC incorporating RTX GPU chiplets, the project is unlikely to surface at GTC, as the conference typically focuses on enterprise and data center technologies rather than consumer launches, Wccftech reports.
Agentic AI Drives NVIDIA’s Spotlight on CPUs
Meanwhile, Reuters highlights that with the surge of agentic AI, the true bottleneck has shifted to agent orchestration, a role handled by CPUs. This reportedly marks a departure from traditional AI training, where labs link multiple NVIDIA GPUs into massive systems to process enormous datasets and fine-tune models.
CNBC further explains that while GPUs excel at training and running AI models, CPUs handle sequential, general-purpose tasks with fewer but more powerful cores. As agentic AI workloads grow, so does the need for this type of compute. These systems move massive amounts of data and coordinate workflows across many agents, making CPUs just as essential as GPUs in modern AI infrastructure, the report notes.
Against this backdrop, an analyst cited by Reuters expects NVIDIA to showcase servers running solely on its own CPUs, a concept CEO Jensen Huang highlighted during a recent earnings call.
According to CNBC, the U.S. chip giant first introduced its data-center CPU, Grace, in 2021. Its successor, Vera, has since entered production. These processors are typically paired with Hopper, Blackwell, or the upcoming Rubin GPUs in full rack-scale AI systems, as highlighted by CNBC.
(Photo credit: NVIDIA’s blog)