

[Tech Recap and Glimpse 5-5] Changes in the Landscape of the AI Chip Market After a Year of NVIDIA’s Dominance

Demand for AI servers from major cloud service providers (CSPs) is expected to keep rising over the next two years. TrendForce's latest projections indicate global shipments of approximately 1.18 million AI servers in 2023, a year-on-year growth of 34.5%. The trend is expected to persist into 2024, with estimated annual growth of around 40.2%, taking AI servers to more than 12% of total server shipments.
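As a quick back-of-envelope check (our arithmetic on the cited figures, not TrendForce's own model), the projections imply roughly the following shipment levels:

```python
# Back-of-envelope check on the AI server shipment figures cited above.
# Article figures: ~1.18M units shipped in 2023, +34.5% YoY,
# with ~40.2% growth projected for 2024.
shipments_2023_m = 1.18   # million units
yoy_2023 = 0.345          # 34.5% growth over 2022
yoy_2024 = 0.402          # projected growth for 2024

implied_2022_m = shipments_2023_m / (1 + yoy_2023)
projected_2024_m = shipments_2023_m * (1 + yoy_2024)

print(f"implied 2022 shipments:   ~{implied_2022_m:.2f}M units")
print(f"projected 2024 shipments: ~{projected_2024_m:.2f}M units")
```

This puts 2024 shipments at roughly 1.65 million units, in line with the ~1.676 million figure TrendForce cites elsewhere in this roundup.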

NVIDIA, whose key products include AI-accelerating GPUs and the HGX AI server reference architecture, currently holds the largest share of the AI chip market. However, it is worth watching CSPs that are developing their own chips and, in the case of Chinese companies restricted by U.S. sanctions, expanding investment in self-developed ASICs and general-purpose AI chips.

According to TrendForce data, AI servers equipped with NVIDIA GPUs account for approximately 65.1% of shipments this year, projected to decline to 63.5% next year. In contrast, the shares of servers featuring AMD GPUs and CSP self-developed chips are expected to rise to 8.2% and 25.4%, respectively, in the coming year.

Another critical component, HBM (High Bandwidth Memory), is supplied primarily by Samsung, SK Hynix, and Micron, whose market shares this year are approximately 47.5%, 47.5%, and 5.0%, respectively. As HBM is priced 5 to 8 times higher than DDR4/DDR5, this premium is expected to contribute to a staggering 172% year-on-year revenue growth in the HBM market in 2024.

The three major manufacturers are expected to complete HBM3e verification in the first quarter of 2024, and the results will determine how NVIDIA allocates its 2024 procurement among HBM suppliers. As the verifications are still underway, the 2024 HBM market shares remain to be seen.


(Photo credit: NVIDIA)


[News] AI Server Makers Wistron and Wiwynn Stay Hot in Q4, Fueled by AI Shipment Surges

Wistron saw shipments slow in October for product lines such as PCs and displays, following the earlier demand surge, but its GPU-related AI server products remain on their growth trajectory. Meanwhile, Wiwynn, a Wistron subsidiary, posted a remarkable 20% month-over-month revenue increase on the back of rising AI server-related project shipments, its third-highest monthly revenue on record, as reported by CTEE.

Both Wistron and Wiwynn hold an optimistic outlook for their AI server products, expecting the growth momentum to extend into next year. They also foresee non-AI general-purpose servers and cloud data center servers returning to growth next year, while AI server growth remains notably strong.

Wistron plays a pivotal role in the AI server supply chain and remains unaffected by high-end GPU shortages and U.S. export restrictions. Q4 shipments continue to grow month over month, and the anticipated second-half peak remains on track. Moreover, there are signs of a slight seasonal uptick in general-purpose servers in Q4.

In a recent earnings call, Wiwynn maintained an optimistic stance on Q4 and the coming year. Given the evident growth momentum from AI servers, the company anticipates that developments in AI-related projects will drive continuous improvement in AI server product shipments.

Furthermore, Wiwynn’s third-largest customer and its AI server business each contributed more than 10% of revenue in the third quarter, marking a significant milestone for the company. Back in October, Wiwynn set up a server plant in Malaysia to meet the surging demand for AI servers.

According to TrendForce’s forecast, shipments of AI servers (including those equipped with GPUs, FPGAs, ASICs, etc.) are expected to exceed 1.2 million units in 2023, a year-on-year increase of 37.7%, accounting for 9% of total server shipments. In 2024, shipments are projected to grow by more than 38% to approximately 1.676 million units, lifting the AI server share above 12%.
(Image: Wistron)



TrendForce Foresees China’s Share of Mature Wafer Processes Expanding to 33% by 2027, While Japan Secures Advanced Processes

The research institution TrendForce held its Annual Forecast 2024 Seminar on November 3, delving into global wafer foundry trends, AI applications, AI server dynamics, and demand for High Bandwidth Memory (HBM).

Joanne Chiao, an analyst at TrendForce, observed that while AI servers have seen robust growth over the past two years, AI chips account for just 4% of wafer consumption, limiting their impact on the overall wafer industry. Nevertheless, both advanced and mature processes offer business opportunities: the former benefits from the desire of companies such as CSPs to develop customized chips, leading them to seek assistance from design service providers, while the latter can consider venturing into sectors such as power management ICs and I/O solutions.

Persistent U.S. export restrictions continue to affect China’s foundries, delaying their expansion plans. Furthermore, the regionalization of wafer foundry services is exacerbating the uneven distribution of resources.

Due to lackluster end-market demand and fierce competition, the capacity utilization rate of 8-inch wafer foundries is expected to keep declining until the first quarter of next year. Inventory adjustments are underway in industrial control and automotive electronics. Chinese foundries, more willing to offer competitive prices, are outperforming their Taiwanese and Korean counterparts in winning orders.

In 12-inch wafer foundry services, success relies on technological leadership and exclusivity, and competition is less intense than for 8-inch wafers. A moderate recovery is expected in the latter part of this year, driven by inventory replenishment, demand for the iPhone 15 and select Android smartphone brands, and the need for AI chips.

TrendForce indicates that, as capacity expansion shifts toward processes more advanced than 28nm, mature processes are expected to account for less than 70% of top-ten foundry capacity by 2027. Under pressure to transition toward mature processes, China is anticipated to account for 33% of mature-process capacity by 2027, with the possibility of further increases.

It’s noteworthy that Japan is actively promoting the revival of its semiconductor industry and, through incentives for foreign companies establishing fabs, may secure 3% of advanced process capacity.

TrendForce analyst Frank Kung predicts that shipments of NVIDIA’s high-end GPUs will exceed 1.5 million units this year, a YoY growth rate of over 70%, with growth expected to reach 90% in 2024. From the latter half of this year, NVIDIA’s high-end GPU mix will shift primarily to the H100. As for AMD, its high-end AI solutions mainly target CSPs and supercomputers, and shipments of AI servers equipped with the MI300 are expected to expand significantly in the latter half of this year.

In the 2023-2024 period, major CSPs are poised to become the primary drivers of AI server demand, with Microsoft, Google, and AWS ranking among the top three. Additionally, the robust demand for cloud-based AI training is expected to propel the growth of advanced AI chips, which may, in turn, stimulate growth in power management or high-speed transmission-related ICs in the future.

Lastly, concerning HBM, TrendForce’s senior research vice president, Avril Wu, noted that as NVIDIA’s H100 gains momentum, HBM3 is set to become the industry standard in the latter half of this year. With the launch of the B100 next year, HBM3e is poised to replace HBM3 as the mainstream memory in the latter half of the following year. Overall, HBM plays a pivotal role in DRAM revenue: its share is expected to rise from 9% in 2023 to 18% in 2024, potentially pushing DRAM prices higher in the coming year.
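As a rough consistency check (our own arithmetic, with the simplifying assumption that the two TrendForce figures describe the same revenue base), the doubling of HBM's DRAM revenue share can be combined with the 172% HBM revenue growth cited earlier in this roundup to back out the implied growth of total DRAM revenue:

```python
# If HBM revenue grows by 172% (a 2.72x multiplier) while its share of
# total DRAM revenue doubles from 9% to 18%, then total DRAM revenue
# must grow by:  dram_mult = hbm_mult * (share_2023 / share_2024)
hbm_mult = 1 + 1.72               # 172% YoY growth -> 2.72x revenue
share_2023, share_2024 = 0.09, 0.18

dram_mult = hbm_mult * (share_2023 / share_2024)
print(f"implied total DRAM revenue growth in 2024: {dram_mult - 1:+.0%}")
```

Taken together, the two figures imply total DRAM revenue growing by roughly a third in 2024, consistent with the expectation of higher DRAM prices.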
(Image: TechNews)


[News] GIGABYTE Aims for Over NT$100 Billion Revenue, Doubles Server Performance This Year  

GIGABYTE held an online earnings call on November 1st, during which General Manager Etay Lee expressed optimism about the company’s performance. Growth momentum in the server and motherboard sectors remains robust, potentially allowing GIGABYTE to reach the milestone of NT$100 billion in annual revenue ahead of schedule. The company is also increasing its server revenue contribution this year, aiming for remarkable double-digit growth.

As reported by Anue, Lee noted that server revenue was impressive in the third quarter and that this momentum is expected to continue into the fourth. The company is poised for high double-digit server revenue growth this year, with the ambition to challenge triple-digit growth; these developments have prompted an upward revision of the annual revenue target.

Etay Lee emphasized the current high demand for AI servers, with most shipped as complete units or racks integrating high-quality networking, high-efficiency storage, and high-performance computing (HPC). The greater component content of AI server systems has boosted revenue and gross profit, although the gross profit margin has dipped slightly.

Regarding the expanded chip export controls imposed by the United States, Lee clarified that GIGABYTE’s AI server products have a limited presence in the Chinese market, minimizing the impact of the restrictions. In regions such as the Middle East and Vietnam where approvals are required, the company will submit the necessary applications; the overall impact is minimal.

In graphics cards, GIGABYTE reported that inventory adjustments are complete and channels have returned to normal levels. This, coupled with competitive pricing for its main products, the 4060 Ti and 4070, has generated strong demand since late in the third quarter. Notably, Europe and the Americas have seen a resurgence in growth, with demand surpassing the Asia-Pacific region.


[Insights] Quanta, Wiwynn, and Major Manufacturers Scale Up to Meet Rising Demand for AI Servers

In October 2023, Quanta revealed plans to open three new factories in California, USA, with the goal of creating state-of-the-art assembly lines for AI servers. Around the same time, Wiwynn shared its intentions to launch a server cabinet assembly plant in Johor, Malaysia, featuring advanced liquid cooling technology. Additionally, server contract manufacturing giants, Foxconn and Inventec, are strategically positioning their AI server manufacturing facilities both domestically and internationally to meet the expected demand for AI server orders in the 2024 market.

TrendForce’s Insights:

  1. Wiwynn and Quanta Open New AI Server Facilities, Enhancing Orders from Major U.S. Cloud Service Providers

Both Wiwynn and Quanta are contract manufacturers for cloud service giants such as Meta, Microsoft, and AWS; these three providers accounted for nearly 50% of global server procurement in 2023. The new facilities are intended to keep up with the growing demand for AI servers, which accelerated in the latter part of 2023 on the back of applications like ChatGPT. Major cloud service providers have allocated a significant share of their global server orders to these manufacturers, giving AI servers priority over regular servers.

Wiwynn, in particular, has set up shop in Malaysia to meet the surging demand for AI servers. Due to factors like the trade tensions between China and the U.S. and tariff avoidance measures, they are shifting their manufacturing capacity and equipment from their Guangdong factory in China to locations in Taiwan and Malaysia. This transition is expected to be completed between the end of 2023 and early 2024, making it easier to manage resources on a global scale.

Quanta’s move to open new assembly plants near major U.S. cloud service providers allows it to deliver quickly to data centers in the U.S. and Europe, saving on transportation costs and ensuring speedy deliveries. Both major Taiwanese manufacturers are optimistic about their AI server orders, and are expanding manufacturing capacity in existing and new locations to strengthen partnerships with the big cloud players.

  2. Leading Taiwanese Server Manufacturers Expand Production at Home and Abroad, Anticipating a Multi-Fold Growth in AI Server Shipments by 2024

Major companies like Foxconn and Inventec are actively expanding their production facilities, both in their home country and abroad, to prepare for the expected increase in orders for AI servers from leading cloud service providers in 2024.

Fii and Ingrasys, two key Foxconn subsidiaries, are dedicated to handling server orders and manufacturing. They operate their own server assembly plants in China, the United States, Europe, Vietnam, and Taiwan, following an integrated supply-chain model that runs from motherboard production to the assembly of complete server cabinets, after which products ship to cloud service providers’ data centers. To meet the high-end AI server orders anticipated in 2024, Ingrasys added new production lines in the second quarter of 2023.

Inventec, a manufacturer that specializes in making server motherboards, expects a steady demand for AI servers in 2024-2025. With this expectation, they started construction at their factory in Thailand in Q3 of 2023. By Q4 of 2024, the factory will undergo production line testing, and mass production could begin as early as the first quarter of 2025 to meet the needs of major manufacturers. The new factory in Mexico is expected to match the production capacity of their Chinese facility. It has already started limited production and is expected to be in full operation by Q4 of 2024.

The four major Taiwanese contract manufacturers specializing in server production are all adding production lines to existing facilities or building new factories overseas between Q2 and Q4 of 2023, underscoring their confidence in 2024 AI server shipments. As AI server adoption continues to grow, market demand is expected to increase significantly year by year, likely bringing substantial revenue, profit, and production advantages to these contract manufacturers.
