[Insights] Immersion Liquid Cooling Emerges as Key Technology for AI Server Heat Dissipation, Drawing Industry Focus

Intel is collaborating with Foxconn, Microloops, and Inventec to introduce a new cooling technology that boasts superior performance over conventional liquid cooling. Alongside Gigabyte and Wiwynn, these companies aim to enter the server immersion liquid cooling market and position themselves for potential orders.

TrendForce’s Insights:

  1. Intel Teams Up with Taiwanese Manufacturers to Pave the Way for Liquid Cooling Technologies, Targeting Immersion-Cooled AI Server Orders

Intel, in collaboration with Foxconn, Microloops, and Inventec, has introduced a new liquid cooling solution. Capable of managing a Thermal Design Power (TDP) exceeding 1500W, the technology uses physical pressurization to drive rapid coolant flow for efficient heat dissipation.

The solution's cold plate circulates water to dissipate heat and delivers roughly three times the performance of conventional liquid cooling. Future developments include non-conductive coolants to mitigate leakage risks.

Simultaneously, Gigabyte’s subsidiary GIGA Computing Technology has partnered with liquid cooling specialists CoolIT and Motivair to unveil cutting-edge liquid cooling solutions. The strategic focus on liquid and immersion cooling aims to enhance the sustainability and energy efficiency of data centers.

Wiwynn, also keenly interested in liquid cooling, has secured substantial orders from Middle Eastern clients by prioritizing two-phase immersion cooling technology; rumors put the order’s value at approximately USD 4 billion.

The advantages of two-phase immersion cooling include fanless operation, rapid heat dissipation, noiseless and vibration-free performance, and higher cooling energy efficiency. It is also less susceptible to environmental influences, ensuring stable operation.

Meanwhile, Taiwanese manufacturers AURAS Technology and AVC (Asia Vital Components) are actively developing and mass-producing open-loop liquid cooling and immersion liquid cooling technologies.

Because immersion cooling is a relatively new technology, current market demand visibility is low. However, major Taiwanese contract manufacturers are increasingly focusing on developing and introducing immersion cooling solutions, driven primarily by growing demand for high-end AI servers, which perform intensive computation and therefore consume considerable power.

At present, only immersion liquid cooling can handle thermal loads surpassing 1500W, meeting the requirements of large-scale data centers. Consequently, numerous Taiwanese manufacturers are actively developing this technology to seize the opportunities presented by the first wave of immersion cooling products.

  2. Short-Term Dominance of 3D VC Air Cooling and Open Liquid Cooling Technologies in Heat Dissipation Solutions

Because immersion liquid cooling requires adjustments to facility structure, including the layout of cooling spaces, it carries higher construction costs, and deployments in large-scale data centers remain relatively rare.

In consideration of cost, many enterprise users still opt for 3D VC (Vapor Chamber) air cooling technology to establish their data centers, aiming to save on the significant costs associated with facility modifications.

3D VC air cooling and open liquid cooling technologies are the current primary options for heat dissipation solutions. The retrofitting cost for 3D VC is twice that of traditional cooling modes, while the cost for liquid cooling solutions can be up to 10 times higher than traditional modes.

Therefore, enterprise users need to tailor their server deployments based on specific requirements, taking into account the Thermal Design Power (TDP) to choose the corresponding heat dissipation solution.

For instance, AI servers with power consumption ranging from 1000 to 1500W can utilize open liquid cooling solutions, while those below 1000W may adopt the 3D VC cooling approach.
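The selection rule described above can be sketched as a simple decision function. This is only an illustration using the thresholds cited in this article (below 1000W: 3D VC air cooling; 1000–1500W: open liquid cooling; above 1500W: immersion liquid cooling); real deployments also weigh retrofit cost, facility constraints, and vendor support.

```python
def pick_cooling_solution(tdp_watts: float) -> str:
    """Map a server's thermal design power (W) to a cooling approach,
    using the illustrative thresholds cited in the article."""
    if tdp_watts > 1500:
        # Per the article, only immersion cooling handles loads above 1500W.
        return "immersion liquid cooling"
    if tdp_watts >= 1000:
        # 1000-1500W range: open-loop (cold plate) liquid cooling.
        return "open liquid cooling"
    # Below 1000W, 3D VC air cooling is generally sufficient.
    return "3D VC air cooling"

print(pick_cooling_solution(700))   # 3D VC air cooling
print(pick_cooling_solution(1200))  # open liquid cooling
print(pick_cooling_solution(1600))  # immersion liquid cooling
```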

With the target of achieving net-zero emissions by 2050, both China and Europe impose restrictions on the Power Usage Effectiveness (PUE) of data centers, limiting it to no higher than 1.4.
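PUE is defined as total facility energy divided by the energy delivered to IT equipment, so a value of 1.0 would mean every watt goes to computing. A minimal check against the 1.4 cap mentioned above might look like this (the example figures are hypothetical):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1300 kW total facility draw powering a 1000 kW IT load.
ratio = pue(1300, 1000)
print(f"PUE = {ratio:.2f}, within 1.4 cap: {ratio <= 1.4}")
```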

In anticipation of this, server OEMs and cooling solution providers are expected to expand their development and offerings of liquid and immersion cooling products, providing data centers with customized solutions and services that align with evolving energy efficiency requirements.


[Insights] Quanta, Wiwynn, and Major Manufacturers Scale Up to Meet Rising Demand for AI Servers

In October 2023, Quanta revealed plans to open three new factories in California, USA, with the goal of creating state-of-the-art assembly lines for AI servers. Around the same time, Wiwynn shared its intentions to launch a server cabinet assembly plant in Johor, Malaysia, featuring advanced liquid cooling technology. Additionally, server contract manufacturing giants, Foxconn and Inventec, are strategically positioning their AI server manufacturing facilities both domestically and internationally to meet the expected demand for AI server orders in the 2024 market.

TrendForce’s Insights:

  1. Wiwynn and Quanta Open New AI Server Facilities, Enhancing Orders from Major U.S. Cloud Service Providers

Both Wiwynn and Quanta are contract manufacturers for cloud service giants such as Meta, Microsoft, and AWS; these three cloud service providers accounted for nearly 50% of global server procurement in 2023. To keep up with the surge in AI server demand in the latter part of 2023, driven by applications like ChatGPT, these major cloud service providers have allocated a significant share of their global server orders to these manufacturers, prioritizing AI servers over general-purpose servers.

Wiwynn, in particular, has set up shop in Malaysia to meet the surging demand for AI servers. Due to factors like the trade tensions between China and the U.S. and tariff avoidance measures, they are shifting their manufacturing capacity and equipment from their Guangdong factory in China to locations in Taiwan and Malaysia. This transition is expected to be completed between the end of 2023 and early 2024, making it easier to manage resources on a global scale.

Quanta’s move to open new assembly plants near major U.S. cloud service providers allows it to deliver quickly to data centers in Europe and the U.S., saving on transportation costs and ensuring speedy deliveries. Both major Taiwanese manufacturers are optimistic about their AI server orders and are expanding manufacturing capacity at existing and new locations to strengthen partnerships with the big cloud players.

  2. Leading Taiwanese Server Manufacturers Expand Production at Home and Abroad, Anticipating a Multi-Fold Growth in AI Server Shipments by 2024

Major companies like Foxconn and Inventec are actively expanding their production facilities, both in their home country and abroad, to prepare for the expected increase in orders for AI servers from leading cloud service providers in 2024.

Fii and Ingrasys Inc., two key Foxconn subsidiaries, are dedicated to handling server orders and manufacturing. They operate their own server assembly plants in various locations, including China, the United States, Europe, Vietnam, and Taiwan, and follow an integrated supply chain model spanning motherboard production through complete server cabinet assembly; finished products are shipped directly to the data centers of cloud service providers. To meet anticipated high-end AI server orders in 2024, Ingrasys Inc. added new production lines in the second quarter of 2023.

Inventec, a manufacturer specializing in server motherboards, expects steady AI server demand through 2024–2025. With this in view, it began construction at its factory in Thailand in Q3 2023; the factory will undergo production line testing in Q4 2024, and mass production could begin as early as Q1 2025 to meet the needs of major manufacturers. The new factory in Mexico, expected to match the production capacity of Inventec’s Chinese facility, has already started limited production and should be in full operation by Q4 2024.

The four major Taiwanese contract manufacturers specializing in server production are either adding production lines to existing facilities or building new factories overseas between Q2 and Q4 2023, clearly signaling their confidence in AI server shipments for 2024. As AI server adoption continues to grow, market demand is expected to increase significantly year over year, likely bringing substantial revenue, profit, and production advantages to these contract manufacturers.


[News] Taking NVIDIA Server Orders, Inventec Expands Production in Thailand

According to Taiwan’s Liberty Times, in response to global supply chain restructuring, electronics manufacturers have in recent years been implementing a “China+N” strategy, tailoring shipments to customers in different regions. Among them, Inventec continues to strengthen its server production line in Thailand and plans to enter the NVIDIA B200 AI server sector in the second half of next year.

Currently, Inventec’s overall server production capacity is distributed as follows: Taiwan 25%, China 25%, Czech Republic 15%, and Mexico, after opening new capacity this quarter, is expected to reach 35%. It is anticipated that next year’s capital expenditure will increase by 25%, reaching 10 billion NTD, primarily allocated for expanding the server production line in Thailand. The company has already started receiving orders for the B100 AI server water-cooling project from NVIDIA and plans to enter the B200 product segment in the second half of next year.

Inventec’s statistics show that its server motherboard shipments account for 20% of the global total. This year, the focus has been on shipping H100 and A100 training-type AI servers, while next year, the emphasis will shift to the L40S inference-type AI servers. The overall project quantity for next year is expected to surpass this year’s.



[News] Inventec’s AI Strategy to Boost Growth of Both NVIDIA’s and AMD’s AI Server Chips

According to Liberty Times Net, Inventec, a prominent player in the realm of digital technology, is making significant strides in research and development across various domains, including artificial intelligence, automotive electronics, 5G, and the metaverse. The company has recently introduced a new all-aluminum liquid-cooled module for its general-purpose graphics processing units (GPGPU) powered by NVIDIA’s A100 chips. Additionally, this innovative technology is being applied to AI server products featuring AMD’s 4th Gen EPYC dual processors, marking a significant step towards the AI revolution.

Inventec has announced that its Rhyperior general-purpose graphics processing platform previously offered two cooling options: pure air cooling and hybrid air-liquid cooling. The new all-aluminum liquid-cooled module not only reduces material costs by more than 30% compared to traditional copper cold plates, but also supports 8 graphics processors (GPUs) and includes 6 NVIDIA NVSwitch nodes. This open-loop cooling system eliminates the need for external refrigeration units and reduces fan power consumption by approximately 50%.

Moreover, Inventec’s AI server product, the K885G6, equipped with AMD’s 4th Gen EPYC dual processors, has demonstrated a significant reduction in data center air conditioning energy consumption of approximately 40% after implementing this new cooling solution. The use of water as a coolant, rather than environmentally damaging and costlier chemical fluids, further enhances the product’s appeal, as it can support a variety of hardware configurations to meet the diverse needs of AI customers.

Inventec’s new facility in Mexico has commenced mass production, with plans to begin supplying high-end NVIDIA AI chips, specifically the H100 motherboards, in September. They are poised to increase production further in the fourth quarter. Additionally, in the coming year, the company is set to release more Application-Specific Integrated Circuit (ASIC) products, alongside new offerings from NVIDIA and AMD. Orders for server system assembly from U.S. customers (L11 assembly line) are steadily growing. The management team anticipates showcasing their innovations at the Taiwan Excellence Exhibition in Dongguan, China, starting on October 7th, as they continue to deepen their collaboration with international customers.


Server Supply Chain Becomes Fragmented, ODM’s Southeast Asia SMT Capacity Expected to Account for 23% in 2023, Says TrendForce

US-based CSPs have been establishing SMT production lines in Southeast Asia since late 2022 to mitigate geopolitical risks and supply chain disruptions. TrendForce reports that Taiwan-based server ODMs, including Quanta, Foxconn, Wistron (including Wiwynn), and Inventec, have set up production bases in countries like Thailand, Vietnam, and Malaysia. It’s projected that by 2023, the production capacity from these regions will account for 23%, and by 2026, it will approach 50%.

TrendForce reveals that Quanta, due to its geographical ties, has established several production lines in its Thai facilities centered around Google and Celestica, aiming for optimal positioning to foster customer loyalty. Meanwhile, Foxconn has renovated its existing facilities in Hanoi, Vietnam, and uses its Wisconsin plant to accommodate customer needs. Both Wistron and Wiwynn are progressively establishing assembly plants and SMT production lines in Malaysia. Inventec’s current strategy mirrors that of Quanta, with plans to build SMT production lines in Thailand by 2024 and commence server production in late 2024.

CSPs aim to control the core supply chain, AI server supply chain trends toward decentralization

TrendForce suggests that changes in the supply chain aren’t just about circumventing geopolitical risks—equally vital is increased control over key high-cost components, including CPUs, GPUs, and other critical materials. With rising demand for next-generation AI and Large Language Models, supply chain stockpiling grows each quarter. Accompanied by a surge in demand in 1H23, CSPs will become especially cautious in their supply chain management.

Google, with its in-house developed TPU machines, possesses both the core R&D and supply chain leadership. Moreover, its production stronghold primarily revolves around its own manufacturing sites in Thailand. However, Google still relies on cooperative ODMs for human resource allocation and production scheduling, while managing other materials internally. To avoid disruptions in the supply chain, companies like Microsoft, Meta, and AWS are not only aiming for flexibility in supply chain management but are also integrating system integrators into ODM production. This approach allows for more dispersed and meticulous coordination and execution of projects.

Initially, Meta relied heavily on direct purchases of complete server systems, with Intel’s Habana system being one of the first to be integrated into Meta’s infrastructure. This made sense, since the CPUs for their web-type servers were often semi-custom versions from Intel. Based on system optimization levels, Meta found Habana to be the most direct and seamless solution. Notably, it was only last year that Meta began to delegate parts of its Metaverse project to ODMs. This year, as part of its push into generative AI, Meta has also started adopting NVIDIA’s solutions extensively.
