
[Sponsored Content] Building Dedicated AI Factories: Confronting the “Power = Compute = National Power” Challenge


2026-03-26 Emerging Technologies editor

Following the 2023 surge in LLM-based automation—sparked by breakout projects like AutoGPT—and Andrew Ng’s advocacy for “Agentic AI,” the flourishing Generative AI (GenAI) sector is undergoing a profound transformation. It is evolving from systems that merely respond to prompts into sophisticated agents capable of autonomously executing tasks.

To support this era of autonomous decision-making, the “AI Factory” concept has emerged as the definitive blueprint for next-generation AI infrastructure. This approach prioritizes a holistic co-design of computing power, memory, interconnects, power supply, and thermal management. Taiwan-based companies, long pivotal to the industry, are well-positioned to seize new opportunities and reach new heights within this emerging landscape.

The rise of GenAI has revolutionized data center operations, shifting the primary workload to AI and triggering a construction boom for dedicated AI data centers (AIDC) in 2025. Major developments include Microsoft’s nearly USD 80 billion investment in AI infrastructure and Amazon’s USD 20 billion project to convert Pennsylvania’s Susquehanna nuclear plant into an AI campus. Global market intelligence firm TrendForce estimates that the top five US cloud service providers (CSPs) will increase capital expenditure by over 50% in 2026, signaling sustained, robust momentum in AI investments.

These investments highlight a strategic shift in the global AI race: the focus has moved from algorithmic superiority to power supply stability and energy security. Microsoft is restarting Unit 1 at the Three Mile Island nuclear plant, Google has signed the first US corporate agreement for small modular reactors (SMRs) with Kairos Power, and Amazon is partnering with Energy Northwest to deploy up to 12 SMRs. The world now faces a new reality defined by the trinity of “Power = Compute = National Power.”

Demand for AI Servers Drives AI Infrastructure Growth, and Liquid Cooling Goes Mainstream

TrendForce Analyst Randy Yang notes that global data center power capacity grew from 84 GW in 2023—the “inaugural year” of AI—to 98 GW in 2024, and then reached 120 GW by 2025. Looking further ahead, capacity is expected to surge to 152 GW this year, marking year-on-year growth of nearly 30%. This trajectory underscores the market’s intensifying demand for AI computing infrastructure.

Crucially, this growth is driven almost entirely by AI servers. TrendForce estimates that the total power capacity of AI servers deployed worldwide will reach 55 GW—a staggering 74% YoY increase that accounts for over half (52%) of the total power capacity of all servers deployed worldwide. This marks a major industry watershed: for the first time, power demand for AI servers exceeds that of general-purpose servers. Consequently, global energy allocation and hardware spending will increasingly pivot toward AI infrastructure.
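The 55 GW figure, the 74% YoY growth, and the 52% share quoted above jointly imply two numbers the text does not state explicitly; a quick arithmetic sketch (the derived values are simply what the cited figures imply, not separate TrendForce estimates):

```python
ai_capacity = 55.0       # GW of AI-server power capacity (from the text)
yoy_growth = 0.74        # 74% year-on-year increase (from the text)
share_of_total = 0.52    # AI servers' share of all server power capacity

prior_year = ai_capacity / (1 + yoy_growth)   # implied prior-year AI capacity
total_servers = ai_capacity / share_of_total  # implied total server capacity

print(f"Implied prior-year AI server capacity: {prior_year:.1f} GW")   # ~31.6 GW
print(f"Implied total server power capacity:  {total_servers:.1f} GW")  # ~105.8 GW
```

The ~105.8 GW implied total for all servers is, as expected, well below the 120–152 GW data center capacity figures above, since data centers also power networking, storage, and cooling.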

TrendForce Analyst Fion Chiu adds that alongside rising power consumption, the demand for liquid cooling is accelerating due to the drastically increased power density of server racks. Estimates suggest that over 50% of thermal solutions for AI servers will utilize liquid cooling technology this year.

Mirroring global trends, Taiwan’s AI-related electricity demand is climbing sharply. The Ministry of Economic Affairs forecasts an eightfold increase in AI-related power consumption, rising from 0.24 GW in 2023 to 2.24 GW by 2028. Currently, the Taiwan Research Institute reports 36 data centers across the island with a total load of 60 MW (or 0.16% of the island’s power consumption). However, the National Science and Technology Council of Taiwan projects that by 2029, the scale of private and state-owned AI computing centers on the island will reach 450 MW, representing a 6.5-fold surge in a short three-year span. Clearly, the rapid expansion of AI development poses a critical test of whether Taiwan’s power supply can keep pace.

When AI Agents Meet LLMs: Unveiling a New Era of Autonomous Decision-Making

The convergence of long-established AI agents with large language models (LLMs) has ushered in a new era of autonomous decision-making. The evolution from GenAI to Agentic AI marks a pivotal shift: AI has graduated from merely speaking to actively executing tasks. Almost overnight, Agentic AI has become the industry’s hottest trend. Defined as systems capable of autonomously setting goals, formulating plans, executing actions, and adjusting strategies based on feedback, Agentic AI essentially represents the next generation of LLM-based agents.

Market forecasts underscore this potential. Grand View Research estimates the enterprise-grade Agentic AI market reached USD 3.67 billion in 2025 and will surge to USD 24.5 billion by 2030 at a CAGR of 46.2%.

Omdia offers an even more bullish outlook: while estimating the 2025 market at approximately USD 1.5 billion, it projects a leap to USD 41.8 billion by 2030. This implies a massive 2024-2029 CAGR of 175%, nearly double that of the GenAI market (90%). Hence, Agentic AI is expected to expand at a much faster pace than GenAI overall.

At the GPU Technology Conference (GTC) 2025, NVIDIA CEO Jensen Huang emphasized that the rise of Agentic AI and advanced reasoning capabilities will push computing power requirements 100 times higher than previously expected. Consequently, global data centers are racing to meet this explosive new demand.

As “Cloud + Edge” Hybrid Architecture Turns Mainstream, Energy Efficiency Also Becomes Top Priority

To minimize latency, enhance privacy, and enable real-time decision-making, nearly two-thirds of computing workloads are expected to shift to the edge in the coming years. This entails running AI models directly on devices where data is generated—such as smartphones, autonomous vehicles, local servers, and edge AI computers equipped with NVIDIA Jetson—rather than in remote data centers. These drivers, combined with the rise of Agentic AI, are set to catalyze the rapid development of Edge AI.

TrendForce Senior Research Manager PK Tseng estimates that the global Edge AI market reached around USD 36 billion in 2025 and will expand to USD 84 billion by 2029. This represents a CAGR of 23.5% from 2025 to 2029 (see table below).

Category                        2025(E)  2026(F)  2027(F)  2028(F)  2029(F)
Scale of Entire Edge AI Market    36       45       56       67       84
Hardware Portion                  30       36       42       48       54
Software Portion                   1.8      2.4      3.1      4.1      5.4
(Unit: USD billion)

▲ Global Edge AI Market Size Forecast (2025-2029)
(Source: TrendForce. Note: (E) denotes estimated values, indicating the year is near completion or based on preliminary statistics; (F) denotes forecasted values, based on long-term trend outlooks.)

Likewise, Precedence Research values the global Edge AI market at USD 25.65 billion in 2025, forecasting it to reach approximately USD 143.06 billion by 2034 with a CAGR of 21.04%.
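Both growth rates above follow from the standard compound annual growth rate formula, CAGR = (end/start)^(1/years) − 1; a minimal sketch verifying them against the cited start and end values:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# TrendForce Edge AI forecast: USD 36B (2025) -> USD 84B (2029), 4 years
print(f"TrendForce:  {cagr(36, 84, 4):.1%}")         # ~23.6%, matching the ~23.5% cited

# Precedence Research: USD 25.65B (2025) -> USD 143.06B (2034), 9 years
print(f"Precedence:  {cagr(25.65, 143.06, 9):.1%}")  # ~21.0%, matching the 21.04% cited
```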

While Edge AI will decentralize a significant portion of computing, Yang argues this will not result in a zero-sum scenario where “the edge grows and the cloud shrinks.” Instead, the “Cloud + Edge” hybrid architecture will drive an overall increase in total computing power. While the edge handles more real-time inference and low-latency tasks, the cloud remains indispensable for model training, iteration, cross-domain collaboration, and long-context reasoning.

In other words, the “Cloud + Edge” hybrid architecture is poised to become the mainstream model for future AI development, pushing total computing requirements to new heights. Consequently, energy efficiency optimization has become a top priority, particularly for data centers. Since reliance on semiconductor manufacturing process advancements alone can no longer fully address power consumption challenges, the focus is shifting toward optimizing upstream power distribution architectures—such as high-voltage direct current (HVDC). This approach is now critical for minimizing power conversion losses and improving overall power usage effectiveness (PUE).
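PUE is defined as total facility power divided by IT equipment power, so shaving conversion losses in the power-delivery chain lowers it directly. A rough illustration (all loads and loss fractions below are hypothetical, chosen only to show the mechanism; real loss accounting is far more granular):

```python
def pue(it_kw: float, cooling_kw: float, conversion_loss_frac: float, other_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power.
    conversion_loss_frac lumps together losses across the power-delivery chain."""
    conversion_kw = it_kw * conversion_loss_frac
    return (it_kw + cooling_kw + conversion_kw + other_kw) / it_kw

# Hypothetical 1 MW IT load: a legacy AC distribution chain vs. an
# HVDC-style chain with fewer conversion stages (loss figures illustrative).
print(f"AC chain:   PUE = {pue(1000, 300, 0.10, 50):.2f}")  # 1.45
print(f"HVDC chain: PUE = {pue(1000, 300, 0.04, 50):.2f}")  # 1.39
```

Even a few points of conversion efficiency translate into megawatts at AI-factory scale, which is why upstream distribution architecture has become a design priority.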

The “AI Factory” Boom is Here! Triggering a Wave of Customized AIDC Construction

Driven by the rise of Agentic AI and Edge AI—and the need to tackle critical challenges regarding power, computing, energy consumption, and thermal management—NVIDIA CEO Jensen Huang has tirelessly promoted the concept of the “AI Factory” at major global venues such as GTC, CES, COMPUTEX, and Davos. Consequently, the term has become the hottest buzzword in the AI sector and serves as the reference architecture for building future AIDCs.

According to NVIDIA’s official website, an AI Factory is defined as “a dedicated computing infrastructure designed to manage the entire AI lifecycle—from data ingestion, training, and fine-tuning to large-scale AI inference—thereby creating value from data.” The definition further notes that its primary product is intelligence, measured in token throughput, which drives decision-making, automation, and new AI solutions.

Yang points out that “AI Factory” is essentially a synonym for the next generation of AIDCs. Although NVIDIA has proposed a reference architecture, major CSPs are actively developing customized AIDC designs based on their own business strategies, resulting in a diverse and evolving technological landscape.

The most critical difference between an AI Factory and a traditional AIDC lies in the emphasis on holistic planning and co-design. Its core objective is to maximize system-level efficiency across three major subsystems: computing, power, and switching. This involves optimizing computing systems to achieve higher compute density per watt; reshaping power architectures to minimize conversion losses at every stage from grid to chip; and innovating switching systems through optical communication to reduce latency.

However, these efficiency improvements will not dampen the aggregate demand for computing power and electricity. The overall demand curve is expected to maintain a steep upward trajectory. The ultimate goal of enhancing efficiency is to pack as much computing power as possible into fixed capacity limits, thereby generating greater model value.

EU Plans to Build 5 AI Gigafactories; Such Facilities Are Being Established Globally and in Taiwan

Regardless of how “AI factories” are defined across the industry, the construction of next-generation AIDCs and AI factories has undeniably become a top priority for governments and corporations alike. In the EU, the European High Performance Computing Joint Undertaking (EuroHPC JU) selected seven countries—including Finland, Germany, and Spain—in December 2024 to host the first batch of AI factories. This was followed in March 2025 by the addition of six more sites in countries such as Austria, France, and Poland. Furthermore, in February 2025, the European Commission pledged to mobilize EUR 20 billion through the InvestAI initiative to establish up to five “AI Gigafactories” across the bloc.

NVIDIA’s official press releases have highlighted three key AI factory investments: US pharmaceutical giant Eli Lilly and Company is constructing the industry’s “most powerful” supercomputer and an AI factory; the Mayo Clinic has established a similar facility dedicated to medical research and digital pathology; and NVIDIA has entered a strategic partnership with Saudi Arabia’s HUMAIN to build AI factories in phases over five years, beginning in 2025. Other notable projects include an Italian telecommunications company announcing the construction of the NeXXt AI Factory, and Dell collaborating with Indian IT service provider NxtGen to build a large-scale dedicated AI factory.

Since the onset of the US-China trade war, the Middle East has emerged as a critical battleground in the ensuing AI rivalry. During a visit to the region in mid-May 2025, US President Donald Trump signed a USD 200 billion agreement with Saudi Arabia and agreed to jointly develop AIDCs with the UAE. The Saudi agreement primarily involves HUMAIN—a startup backed by the country’s Public Investment Fund—signing individual contracts with NVIDIA, AWS, and AMD to develop AI factories, AI parks, and AI infrastructure, respectively. The AIDC initiative in the UAE involves local firm G42 collaborating with several US companies.

Saudi Arabia also announced a USD 1.5 billion investment last February to expand the AIDC operated by AI unicorn Groq in Dammam. Additionally, NVIDIA is partnering with the Saudi Data and AI Authority (SDAIA) to construct a sovereign AI factory.

Taiwan is not falling behind in this global wave of AI factory construction. Big Innovation Company, a subsidiary of Hon Hai Technology Group (Foxconn), is partnering with NVIDIA to build an AI factory powered by 10,000 NVIDIA Blackwell GPUs. Similarly, US GPU cloud service provider GMI Cloud has announced a USD 500 million investment to collaborate with Taiwan Mobile, transforming the latter’s server room in Taoyuan into an AI factory.

Furthermore, in November of last year, a consortium of companies—including INFINITIX, SignalPro, Supermicro, Macnica, Stark Technology, and He Tong Enterprise—joined forces to build the next-generation “SiGTRON” AI factory near the Southern Taiwan Science Park, aiming to jointly promote a neocloud intelligent computing ecosystem.

Taiwan’s AI Supply Chain Mobilizes to Become “AI Infra Integrators”

Fueled by strong market demand for emerging applications such as Agentic AI and Edge AI, alongside the global construction boom of AIDCs and AI factories, Taiwan’s AI supply chain is undergoing rapid expansion and structural upgrading. The insatiable appetite for AI chips has created supply bottlenecks in advanced manufacturing processes and advanced packaging. In response, TSMC and ASE are actively expanding new facilities to boost overall production capacity. This surge has also catalyzed the market for specialized AI accelerators or ASICs, with the potential market scale estimated between USD 50 billion and USD 70 billion—a trend expected to contribute billions of dollars in revenue for MediaTek by 2027.

Taiwanese EMS providers (ODMs) already account for over 80% of global server shipments and more than 90% of global AI server shipments, underscoring their pivotal role in future AI infrastructure. Leading firms such as Foxconn, Quanta, Wistron, Wiwynn, Inventec, MiTAC, and Gigabyte have delivered impressive results amidst this new AI wave. Consequently, their investment focus and delivery capabilities have evolved from simple “production lines and assembly” to “rack-level delivery and data center-level integration.” Furthermore, with the rise of Edge AI, traditional industrial computer manufacturers are pivoting from image and sensor recognition to developing Edge AI systems capable of local inference, tool invocation, and process agency. Key players in this transition include Advantech, Asus, AAEON, IEI, and IBASE.

The rising power density of AI servers is establishing liquid cooling as the mainstream thermal solution for future AIDCs and AI factories. Taiwan-based firms Auras and AVC have achieved significant success with liquid cooling modules, cold plates, and quick disconnects. In the realm of power supply and distribution, major local suppliers—including Delta Electronics, LITE-ON, and AcBel—are not only showcasing higher-power, higher-efficiency AIDC power, thermal, and microgrid solutions but are also actively integrating into the high-voltage ecosystem driven by NVIDIA’s push toward 800 VDC data center power architecture.

AI training and inference clusters demand networks with higher bandwidth and lower latency. To meet this need, the island’s networking companies such as Accton and its subsidiary Edgecore are launching high-performance switch portfolios, collectively helping push a wave of upgrades in AI-related switching technology.

Simultaneously, the demand for AI servers, accelerator cards, and switches is propelling high-end PCBs and ABF substrates into a long-cycle growth period. Manufacturers such as ZDT, GCE, and Tripod, along with the “ABF Trio” (Unimicron, Nan Ya PCB, and Kinsus), have emerged as the strongest drivers of this boom.

Taiwan’s AI Ecosystem
AI Servers: Foxconn, Quanta, Wistron, Wiwynn, Inventec, MiTAC, Gigabyte
Liquid Cooling (Cold Plates, Quick Disconnects, etc.): Auras, AVC
Power and Power Distribution: Delta Electronics, LITE-ON, AcBel
High-Performance Switches: Accton, Alpha Networks, Senao Networks
PCB and ABF Substrates: ZDT, GCE, Tripod, Unimicron, Nan Ya PCB, Kinsus
Edge AI: Advantech, Asus, AAEON, IEI, IBASE

▲ Overview of Players in the Taiwan AI Ecosystem.
(Source: Compiled by TechNews.)

In conclusion, propelled by the latest wave of demand for Agentic AI and AI factories, Taiwan’s AI supply chain is no longer limited to isolated pockets of growth. Instead, the entire ecosystem—from computing power, packaging, and substrates to board and rack assembly, cooling, power, and networking—is experiencing a comprehensive elevation. The robust demand generated by this trend has spilled over from the cloud and data centers to the edge, accelerating the island’s evolution from a “hardware manufacturing hub” into an “AI infra integrator.”

(Header image source: Shutterstock.)

