NVIDIA’s next-generation AI compute rack architecture signals that future GPU designs will increasingly prioritize higher chip-to-chip interconnect density and faster data transmission, according to TrendForce’s latest research on the high-speed interconnect market. Intra-rack chip interconnects (scale-up) and large-scale interconnects across racks (scale-out) will become central considerations in data center design as AI clusters continue to scale.
The global NAND Flash industry continued to benefit from AI infrastructure build-outs in 4Q25, according to TrendForce’s latest research. Overall, combined revenue for the top five NAND Flash suppliers rose sharply by 23.8% QoQ to US$21.17 billion.
TrendForce’s latest findings reveal that the expansion of AI applications from LLM training to inference has prompted cloud service providers (CSPs) to broaden data center build-outs beyond AI servers to include general-purpose servers.
Global CSPs are accelerating investment in AI servers and infrastructure to support expanding AI deployment and upgrades, according to TrendForce’s latest findings on the AI server market. Combined capital expenditures by the world’s eight leading CSPs (Google, AWS, Meta, Microsoft, Oracle, Tencent, Alibaba, and Baidu) are projected to exceed US$710 billion in 2026, representing approximately 61% YoY growth.
TrendForce’s latest analysis of the HBM industry reveals that as AI infrastructure expansion continues to fuel GPU demand, NVIDIA’s upcoming Rubin platform is expected to become a major catalyst for HBM4 adoption once mass production begins. The three leading memory suppliers (Samsung, SK hynix, and Micron) are now in the final stages of HBM4 validation, with completion anticipated by 2Q26.