AI server


[News] Asus AI Servers Swiftly Seize Business Opportunities

According to Chinatimes, Asus, a prominent technology company, announced on the 30th of this month the release of AI servers equipped with NVIDIA’s L40S GPUs; these servers are now available for order. NVIDIA introduced the L40S GPU in August to address the shortage of H100 and A100 GPUs. Remarkably, Asus responded by unveiling AI server products in less than two weeks, signaling its optimism about an imminent surge in AI applications and its eagerness to seize the opportunity.

Solid AI Capabilities of Asus Group

Apart from being among the first manufacturers to introduce the NVIDIA OVX server system, Asus has leveraged resources from its subsidiaries, such as TaiSmart and Asus Cloud, to establish a formidable AI infrastructure. This involves not only in-house innovations such as large language model (LLM) technology but also the provision of AI computing power and enterprise-grade generative AI applications. These strengths position Asus as one of the few all-encompassing providers of generative AI solutions.

Projected Surge in Server Business

Regarding server business performance, Asus envisions a compound annual growth rate (CAGR) of at least 40% through 2027, with a goal of achieving fivefold growth over five years. In particular, the data center server business, which caters primarily to cloud service providers (CSPs), anticipates tenfold growth within the same timeframe, driven by the adoption of AI server products.
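As a quick sanity check on these stated targets (the growth figures come from the article; the arithmetic itself is only illustrative), a 40% CAGR sustained for five years does slightly exceed fivefold growth:

```python
# Quick check of the cited growth targets (illustrative arithmetic only)
cagr = 0.40   # 40% compound annual growth rate, as stated
years = 5

multiple = (1 + cagr) ** years
print(f"Growth multiple after {years} years at {cagr:.0%} CAGR: {multiple:.2f}x")
# 1.4**5 = 5.38, slightly above the stated fivefold goal

implied_cagr = 5 ** (1 / years) - 1   # CAGR implied by exactly 5x in 5 years
print(f"CAGR implied by 5x over {years} years: {implied_cagr:.1%}")
# about 38.0%
```

In other words, the two figures are mutually consistent: fivefold growth over five years corresponds to a CAGR of roughly 38%, just under the stated 40% floor.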

Asus’s CEO recently emphasized that the company’s foray into AI server development was prompt and involved collaboration with NVIDIA from the outset. While its product lineup may be more streamlined than those of other OEM/ODM manufacturers, Asus had secured numerous GPU orders ahead of the AI server demand surge. The company is optimistic about shipping momentum and order visibility for the new generation of AI servers in the latter half of the year.

Embracing NVIDIA’s Versatile L40S GPU

The NVIDIA L40S GPU, built on the Ada Lovelace architecture, stands out as one of the most powerful general-purpose GPUs in data centers. It offers groundbreaking multi-workload computations for large language model inference, training, graphics, and image processing. Not only does it facilitate rapid hardware solution deployment, but it also holds significance due to the current scarcity of higher-tier H100 and A100 GPUs, which have reached allocation stages. Consequently, businesses seeking to repurpose idle data centers are anticipated to shift their focus toward AI servers featuring the L40S GPU.

Asus’s newly introduced L40S GPU servers include the ESC8000-E11/ESC4000-E11 models with Intel Xeon processors, as well as the ESC8000A-E12/ESC4000A-E12 models with AMD EPYC processors. Depending on the model, these servers can be configured with up to four or eight NVIDIA L40S GPUs. This configuration helps enterprises accelerate training, fine-tuning, and inference workloads, facilitating AI model creation, and establishes Asus’s platforms as a preferred choice for multi-modal generative AI applications.


[News] Taiwanese Computer Brand Manufacturers Rush into the AI Server Market

According to a report by Taiwan’s Economic Daily, a trend is taking shape as computer brand manufacturers venture into the AI server market. Notably swift on this path are Taiwan’s ASUS, Gigabyte, MSI, and MITAC. All four companies hold a positive outlook on the potential of AI server-related business, with expectations of reaping benefits starting in the latter half of this year and further enhancing their business contributions next year.

Presently, significant bulk orders for AI servers are stemming from large-scale cloud service providers (CSPs), which has also presented substantial opportunities for major electronic manufacturing services (EMS) players like Wistron and Quanta that have an early foothold in server manufacturing. As the popularity of generative AI surges, other internet-based enterprises, medical institutions, academic bodies, and more are intensifying their procurement of AI servers, opening doors for brand server manufacturers to tap into this burgeoning market.

ASUS asserts that with the sustained growth of data center/CSP server operations in recent years, the company’s internal production capacity is primed for action, with AI server business projected to at least double in growth by next year. Having established a small assembly plant in California, USA, and repurposing their Czech Republic facility from a repair center to a PC manufacturing or server assembly line, ASUS is actively expanding its production capabilities.

In Taiwan, investments are also being made to bolster server manufacturing capabilities. ASUS’s Shulin factory has set up a dedicated server assembly line, while the Luzhu plant in Taoyuan is slated for reconstruction to produce low-volume, high-complexity servers and IoT devices, expected to come online in 2024.

Gigabyte covers the spectrum of server products from L6 to L10, with a focus this year on driving growth in HPC and AI servers. Gigabyte previously stated that servers contribute to around 25% of the company’s revenue, with AI servers already in delivery and an estimated penetration rate of approximately 30% for AI servers equipped with GPUs.

MSI’s server revenue stands at around NT$5 billion, roughly 2.7% of the company’s total revenue. While MSI primarily targets small and medium-sized customers with security and networking servers, the company has entered the AI server market with servers equipped with GPUs such as the NVIDIA RTX 4080/4090. In response to surging demand for NVIDIA A100 and H100 AI chips, MSI plans to invest further resources; server revenue is expected to grow 20% to NT$6 billion in 2024, with AI servers contributing 10% of server revenue.

MITAC’s server business encompasses both OEM work and its own brand. With MITAC’s takeover of Intel’s Data Center Solutions Group (DSG) business in July, the company inherited numerous small and medium-sized clients previously managed by Intel.

(Photo credit: ASUS)


[News] TSMC Partners with ASE and Siliconware to Boost CoWoS Packaging Capacities

According to Liberty Times Net, NVIDIA’s Q2 financials and Q3 forecasts have astounded the market, driven by substantial growth in its AI-centric data center operations. To address CoWoS packaging supply constraints, NVIDIA is working with additional suppliers to boost future capacity and meet demand. This move is echoed in South Korea’s pursuit of advanced packaging strategies.

South Korea’s Swift Pursuit on Advanced Packaging

The semiconductor industry notes that the rapid development of generative AI has outpaced expectations, causing a shortage of advanced packaging production capacity. Faced with this supply-demand gap, TSMC has outsourced some of its capacity, with silicon interposer production shared by facilities under United Microelectronics Corporation (UMC) and Siliconware Precision Industries (SPIL). UMC has also strategically partnered with SPIL, and Amkor’s Korean facilities have joined the ranks of suppliers to augment production capacity.

Due to equipment limitations, TSMC’s monthly CoWoS advanced packaging capacity is expected to increase from 10,000 units to a maximum of 12,000 units by the end of this year. Meanwhile, other suppliers could potentially raise their CoWoS monthly capacity to 3,000 units. TSMC aims to boost its capacity to 25,000 units by the end of next year, while other suppliers might elevate theirs to 5,000 units.
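Tallying the capacity figures cited above (the numbers are from the article; the computation is only an illustrative summary), industry-wide CoWoS monthly capacity would double within a year:

```python
# Monthly CoWoS capacity figures cited in the article (units per month)
capacity = {
    "end of 2023": {"TSMC": 12_000, "other suppliers": 3_000},
    "end of 2024": {"TSMC": 25_000, "other suppliers": 5_000},
}

totals = {period: sum(suppliers.values()) for period, suppliers in capacity.items()}
for period, total in totals.items():
    print(f"{period}: {total:,} units/month")
# end of 2023: 15,000 units/month
# end of 2024: 30,000 units/month

growth = totals["end of 2024"] / totals["end of 2023"] - 1
print(f"Implied capacity growth over the year: {growth:.0%}")
# 100%, i.e. a doubling of combined CoWoS capacity
```

The implied doubling comes mostly from TSMC’s own expansion (12,000 to 25,000 units), with other suppliers contributing a smaller but growing share.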

According to South Korean media, Samsung has entered the scene, competing for NVIDIA’s advanced packaging orders. South Korea has also initiated a strategic research project to narrow the gap in packaging technology within five to seven years, targeting leaders like TSMC, Amkor, and China’s JCET.


[News] TSMC Faces Capacity Shortage, Samsung May Provide Advanced Packaging and HBM Services to AMD

According to the Korea Economic Daily, Samsung Electronics’ HBM3 and packaging services have passed AMD’s quality tests. AMD’s upcoming Instinct MI300 series AI chips are planned to incorporate Samsung’s HBM3 and packaging services. These chips, which combine central processing units (CPUs), graphics processing units (GPUs), and HBM3, are expected to be released in the fourth quarter of this year.

Samsung is noted as the sole provider capable of offering advanced packaging solutions and HBM products simultaneously. Originally considering TSMC’s advanced packaging services, AMD had to alter its plans due to capacity constraints.

The surge in demand for high-performance GPUs within the AI landscape benefits not only GPU manufacturers like NVIDIA and AMD, but also propels the development of HBM and advanced packaging.

In the backdrop of the AI trend, training and inference for AIGC (AI-generated content) models require the deployment of AI servers. These servers typically use mid-to-high-end GPUs, among which HBM penetration is nearing 100%.

Presently, Samsung, SK Hynix, and Micron are the primary HBM manufacturers. According to the latest research by TrendForce, driven by the expansion efforts of these original manufacturers, the estimated annual growth rate of HBM supply in 2024 is projected to reach 105%.

In terms of competitive dynamics, SK Hynix leads with its HBM3 products, serving as the primary supplier for NVIDIA’s Server GPUs. Samsung, on the other hand, focuses on fulfilling orders from other cloud service providers. With added orders from customers, the gap in market share between Samsung and SK Hynix is expected to narrow significantly this year. The estimated HBM market share for both companies is about 95% for 2023 to 2024. However, variations in customer composition might lead to sequential variations in bit shipments.

In the realm of advanced packaging capacity, TSMC’s CoWoS packaging technology dominates as the main choice for AI server chip suppliers. Amidst strong demand for high-end AI chips and HBM, TrendForce estimates that TSMC’s CoWoS monthly capacity could reach 12K by the end of 2023.

With strong demand driven by NVIDIA’s A100 and H100 AI Server requirements, demand for CoWoS capacity is expected to rise by nearly 50% compared to the beginning of the year. Coupled with the growth in high-end AI chip demand from companies like AMD and Google, the latter half of the year could experience tighter CoWoS capacity. This robust demand is expected to continue into 2024, potentially leading to a 30-40% increase in advanced packaging capacity, contingent on equipment readiness.

(Photo credit: Samsung)


Server Supply Chain Becomes Fragmented, ODM’s Southeast Asia SMT Capacity Expected to Account for 23% in 2023, Says TrendForce

US-based CSPs have been establishing SMT production lines in Southeast Asia since late 2022 to mitigate geopolitical risks and supply chain disruptions. TrendForce reports that Taiwan-based server ODMs, including Quanta, Foxconn, Wistron (including Wiwynn), and Inventec, have set up production bases in countries like Thailand, Vietnam, and Malaysia. It’s projected that by 2023, the production capacity from these regions will account for 23%, and by 2026, it will approach 50%.

TrendForce reveals that Quanta, due to its geographical ties, has established several production lines in its Thai facilities centered around Google and Celestica, aiming for optimal positioning to foster customer loyalty. Meanwhile, Foxconn has renovated its existing facilities in Hanoi, Vietnam, and uses its Wisconsin plant to accommodate customer needs. Both Wistron and Wiwynn are progressively establishing assembly plants and SMT production lines in Malaysia. Inventec’s current strategy mirrors that of Quanta, with plans to build SMT production lines in Thailand by 2024 and commence server production in late 2024.

CSPs aim to control the core supply chain, AI server supply chain trends toward decentralization

TrendForce suggests that changes in the supply chain aren’t just about circumventing geopolitical risks—equally vital is increased control over key high-cost components, including CPUs, GPUs, and other critical materials. With rising demand for next-generation AI and Large Language Models, supply chain stockpiling grows each quarter. Accompanied by a surge in demand in 1H23, CSPs will become especially cautious in their supply chain management.

Google, with its in-house developed TPU machines, possesses both the core R&D and supply chain leadership. Moreover, its production stronghold primarily revolves around its own manufacturing sites in Thailand. However, Google still relies on cooperative ODMs for human resource allocation and production scheduling, while managing other materials internally. To avoid disruptions in the supply chain, companies like Microsoft, Meta, and AWS are not only aiming for flexibility in supply chain management but are also integrating system integrators into ODM production. This approach allows for more dispersed and meticulous coordination and execution of projects.

Initially, Meta relied heavily on direct purchases of complete server systems, with Intel’s Habana system being one of the first to be integrated into Meta’s infrastructure. This made sense since the CPUs for their web-type servers were often semi-custom versions from Intel. Based on system optimization levels, Meta found Habana to be the most direct and seamless solution. Notably, it was only last year that Meta began to delegate parts of its Metaverse project to ODMs. This year, as part of its push into generative AI, Meta has also started adopting NVIDIA’s solutions extensively.
