server


2023-09-12

[News] Wistron Reportedly Enters Intel AI Server Supply Chain After NVIDIA and AMD

According to a report by Taiwanese media Money DJ, after establishing a stable position as a major supplier of NVIDIA GPU baseboards, Wistron has secured orders for AMD MI300 baseboards. Reliable sources indicate that Wistron has expanded its involvement beyond AMD baseboards and entered the module assembly segment.

In addition to NVIDIA and AMD, Wistron has also entered the Intel AI chip module and baseboard supply chain, giving it orders from all three major AI chip manufacturers.

The NVIDIA AI server supply chain includes GPU modules, GPU baseboards, motherboards, server systems, complete server cabinets, and more. Wistron holds a significant share in GPU baseboard supply and is also involved in server system assembly.

Currently, NVIDIA commands a 70% market share in AI chips, but other chip manufacturers are eager to compete, and both AMD and Intel have introduced corresponding solutions. Wistron, previously rumored to have entered AMD baseboard supply, has reportedly also ventured into AMD GPU module assembly as the sole source, according to reliable sources.

Regarding the reports of its involvement with AMD and Intel chips, Wistron has declined to respond to market rumors.

(Photo credit: Google)

2023-09-05

[News] Taking NVIDIA Server Orders, Inventec Expands Production in Thailand

According to Taiwan’s Liberty Times, in response to global supply chain restructuring, electronics manufacturers have been implementing a “China + N” strategy in recent years, serving customers in different regions from multiple production sites. Among them, Inventec continues to strengthen its server production line in Thailand and plans to enter the NVIDIA B200 AI server segment in the second half of next year.

Currently, Inventec’s overall server production capacity is distributed as follows: Taiwan 25%, China 25%, and the Czech Republic 15%, while Mexico, with new capacity coming online this quarter, is expected to reach 35%. Next year’s capital expenditure is anticipated to increase by 25% to NTD 10 billion, primarily allocated to expanding the server production line in Thailand. The company has already started receiving orders for NVIDIA’s B100 AI server water-cooling project and plans to enter the B200 product segment in the second half of next year.

Inventec’s statistics show that its server motherboard shipments account for 20% of the global total. This year, the focus has been on shipping H100- and A100-based training AI servers, while next year the emphasis will shift to L40S-based inference AI servers. The overall project volume for next year is expected to surpass this year’s.

(Photo credit: Google)

2023-09-04

[News] Wistron Secures Both NVIDIA and AMD AI Server Orders

According to a report by Taiwan’s Commercial Times, Wistron’s AI server orders are surging. Following its acquisition of orders for GPU baseboards for NVIDIA’s next-generation DGX/HGX H100 series AI servers, industry sources suggest that Wistron has also secured orders for baseboards for AMD’s next-generation MI300 series AI servers, with the earliest shipments expected before the end of the year. This would make Wistron the first company to win orders from both major AI chip providers. Wistron has refrained from commenting on specific products and individual customers.

The global AI server market is experiencing rapid growth, with industry estimates putting global production at 500,000 units next year and the market value at over NT$1 trillion. NVIDIA still holds the dominant position in the AI chip market with a share of over 90%, but with the launch of AMD’s new products, AMD is poised to capture nearly 10% of the market.

There have been recent reports of production yield issues with AMD’s MI300 series, potentially delaying the originally planned fourth-quarter shipments. Nevertheless, supply chain sources reveal that Wistron has secured exclusive large orders for MI300 series GPU baseboards and will begin supplying AMD in the fourth quarter. Meanwhile, on the NVIDIA L10 (server system assembly) side, Wistron has recently received an urgent order from a non-U.S. CSP (cloud service provider) for at least 3,000 AI servers, expected to be delivered in February of next year.

Supply chain analysts note that, despite the tight timeline, Wistron is not billing its customers for NRE (non-recurring engineering) charges, indicating its confidence in order visibility and customer demand growth. The company aims to boost revenue and profit contributions through a volume-based approach.

On another front, Wistron is accelerating shipments not only of H100 GPU baseboards for NVIDIA’s DGX/HGX architectures, but also of exclusive supply orders for NVIDIA’s DGX architecture and of front-end L6 motherboard (SMT PCBA) orders for both NVIDIA- and AMD-based AI servers under the Dell brand. These orders have been steadily building Wistron’s shipment momentum since the third quarter.

(Photo credit: NVIDIA)

2023-09-01

[News] Inventec’s AI Strategy to Boost Growth of Both NVIDIA’s and AMD’s AI Server Chips

According to Liberty Times Net, Inventec, a prominent player in the realm of digital technology, is making significant strides in research and development across various domains, including artificial intelligence, automotive electronics, 5G, and the metaverse. The company has recently introduced a new all-aluminum liquid-cooled module for its general-purpose graphics processing units (GPGPU) powered by NVIDIA’s A100 chips. Additionally, this innovative technology is being applied to AI server products featuring AMD’s 4th Gen EPYC dual processors, marking a significant step towards the AI revolution.

Inventec has announced that its Rhyperior general-purpose graphics processing system previously offered two cooling solutions: air cooling and hybrid air-liquid cooling. The new all-aluminum liquid-cooled module reduces material costs by more than 30% compared with traditional copper cold plates. The system carries 8 GPUs and 6 NVIDIA NVSwitch nodes, and its open-loop cooling design eliminates the need for external refrigeration units while cutting fan power consumption by approximately 50%.

Moreover, Inventec’s K885G6 AI server, equipped with dual AMD 4th Gen EPYC processors, has demonstrated a reduction in data center air-conditioning energy consumption of approximately 40% after implementing the new cooling solution. The use of water as a coolant, rather than costlier and environmentally damaging chemical fluids, further enhances the product’s appeal, and the system can support a variety of hardware configurations to meet the diverse needs of AI customers.

Inventec’s new facility in Mexico has commenced mass production and plans to begin supplying motherboards for NVIDIA’s high-end H100 AI chips in September, with a further production ramp in the fourth quarter. In the coming year, the company is set to release more ASIC (application-specific integrated circuit) products alongside new offerings based on NVIDIA and AMD chips. Orders for server system assembly (L11) from U.S. customers are steadily growing. The management team plans to showcase its innovations at the Taiwan Excellence Exhibition in Dongguan, China, starting on October 7, as it continues to deepen collaboration with international customers.

(Source: https://ec.ltn.com.tw/article/breakingnews/4412765)

2023-09-01

[News] Rumored AI Chip Demand Spurs Price Hikes at TSMC, UMC, ASE

TSMC’s shortage of CoWoS advanced packaging capacity is limiting NVIDIA’s AI chip output. Reports are emerging that NVIDIA is willing to pay a premium for manufacturing capacity outside of TSMC, setting off a surge of overflow orders. UMC, a supplier of interposers used in CoWoS, has reportedly raised prices for super hot runs and initiated plans to double its production capacity to meet client demand. ASE, an advanced packaging provider, is also seeing movement in its pricing.

In response to this, both UMC and ASE declined to comment on pricing and market rumors. In addressing the CoWoS advanced packaging capacity issue, NVIDIA previously confirmed during its financial report conference that it had certified other CoWoS packaging suppliers for capacity support and would collaborate with them to increase production, with industry speculation pointing towards ASE and other professional packaging factories.

TSMC CEO C.C. Wei has openly stated that the company’s advanced packaging capacity is fully utilized and that, as it actively expands capacity, it will also outsource to professional packaging and testing factories.

It’s understood that the overflow effect from the inadequate CoWoS advanced packaging capacity at TSMC is gradually spreading. As the semiconductor industry as a whole adjusts its inventory, advanced packaging has become a market favorite.

Industry insiders point out that the interposer, which serves as the communication medium between chiplets, is a critical material in advanced packaging. With a broad uptick in demand for advanced packaging, the market for interposers is growing in parallel. Faced with high demand and limited supply, UMC has raised prices for super-hot-run interposer orders.

UMC revealed that it has a comprehensive solution in the interposer field, including carriers, custom ASICs, and memory, and that cooperation with multiple partner factories gives it a substantial advantage. Competitors entering this space now may lack UMC’s quick responsiveness and abundant peripheral resources.

UMC emphasized that, compared with competitors, its competitive advantage in the interposer field lies in its open architecture. UMC’s interposer production currently takes place primarily at its Singapore plant, with a capacity of about 3,000 units, which it aims to roughly double to six or seven thousand to meet customer demand.

Industry analysts attribute TSMC’s tight CoWoS advanced packaging capacity to a sudden surge in NVIDIA’s orders. TSMC’s CoWoS packaging had primarily catered to long-term partners, with production schedules already set, leaving it unable to provide NVIDIA with additional capacity. Moreover, even with tight capacity, TSMC will not arbitrarily raise prices, as doing so would disrupt existing clients’ production schedules. Therefore, NVIDIA’s move to secure additional capacity at a premium likely involves temporary outsourced partners.

(Photo credit: NVIDIA)
