ASE


2023-09-01

[News] Rumored AI Chip Demand Spurs Price Hikes at TSMC, UMC, ASE

TSMC’s shortage of CoWoS advanced packaging capacity is constraining NVIDIA’s AI chip output. Reports are emerging that NVIDIA is willing to pay a premium for manufacturing capacity outside of TSMC, setting off a wave of overflow orders. UMC, which supplies silicon interposers used in CoWoS-type packaging, has reportedly raised prices for super-hot-run orders and initiated plans to double its production capacity to meet client demand. ASE, an advanced packaging provider, is also seeing movement in its pricing.

In response, both UMC and ASE declined to comment on pricing and market rumors. Regarding the CoWoS capacity crunch, NVIDIA previously confirmed during its earnings call that it had certified additional CoWoS packaging suppliers for capacity support and would work with them to increase production, with industry speculation pointing towards ASE and other professional packaging and testing firms.

TSMC’s CEO, C.C. Wei, has openly stated that the company’s advanced packaging capacity is fully utilized and that, even as it actively expands capacity, it will also outsource to professional packaging and testing factories.

It’s understood that the overflow effect from the inadequate CoWoS advanced packaging capacity at TSMC is gradually spreading. As the semiconductor industry as a whole adjusts its inventory, advanced packaging has become a market favorite.

Industry insiders point out that the interposer, which serves as the communication medium between chiplets, is a critical material in advanced packaging. With a broad uptick in demand for advanced packaging, the market for interposers is growing in parallel. Faced with high demand and limited supply, UMC has raised prices for super-hot-run interposer orders.

UMC revealed that it offers a comprehensive solution in the interposer field, covering carriers, custom ASICs, and memory, with cooperation across multiple fabs forming a substantial advantage. Competitors entering this space now may lack UMC’s quick responsiveness and abundant peripheral resources.

UMC emphasized that, compared to competitors, its advantage in the interposer field lies in its open architecture. UMC’s interposer production currently takes place primarily at its Singapore plant, with a capacity of about 3,000 units, and the company aims to roughly double that to six or seven thousand to meet customer demand.

Industry analysts attribute TSMC’s tight CoWoS advanced packaging capacity to a sudden surge in NVIDIA’s orders. TSMC’s CoWoS packaging had primarily catered to long-term partners, with production schedules already set, making it unable to provide NVIDIA with additional capacity. Moreover, even with tight capacity, TSMC won’t arbitrarily raise prices, as it would disrupt existing client production schedules. Therefore, NVIDIA’s move to secure additional capacity support through a premium likely involves temporary outsourced partners.

(Photo credit: NVIDIA)

2023-08-29

[News] CoWoS Demand Surges: TSMC Raises Rush Order Prices by 20%, Non-TSMC Suppliers Benefit

According to a report from Taiwan’s TechNews, NVIDIA has delivered impressive results in its latest financial report, coupled with an optimistic outlook for its financial projections. This demonstrates that the demand for AI remains robust for the coming quarters. Currently, NVIDIA’s H100 and A100 chips both utilize TSMC’s CoWoS advanced packaging technology, making TSMC’s production capacity a crucial factor.

In the core GPU market, NVIDIA holds a dominant share of about 90%, while AMD accounts for roughly 10%. Other companies may adopt Google’s TPU or develop customized chips, but they currently lack significant operational cost advantages.

In the short term, the shortage of CoWoS has led to tight chip supplies. However, according to a recent report by Morgan Stanley Securities, NVIDIA believes that TSMC’s CoWoS capacity won’t restrict shipments of next quarter’s H100 GPUs, and the company anticipates supply increasing each quarter next year. At the same time, TSMC is raising CoWoS prices by 20% for rush orders, indicating that the anticipated CoWoS bottleneck may ease.

According to industry sources, NVIDIA is actively diversifying its CoWoS supply chain away from TSMC. UMC, ASE, Amkor, and SPIL are significant players in this effort. Currently, UMC is expanding its interposer production capacity, aiming to double its capacity to relieve the tight CoWoS supply situation.

According to Morgan Stanley Securities, TSMC’s monthly CoWoS capacity this year is around 11,000 wafers, projected to reach 25,000 wafers by the end of next year. Non-TSMC CoWoS supply chain’s monthly capacity can reach 3,000 wafers, with a planned increase to 5,000 wafers by the end of next year.
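
For a sense of scale, a quick back-of-the-envelope calculation based only on the Morgan Stanley figures cited above (all of which are monthly wafer capacities) shows how much total CoWoS capacity would grow if both build-outs land on schedule:

```python
# Back-of-the-envelope CoWoS capacity math using only the figures cited above
# (monthly wafer capacity; reported/estimated values, not measurements).

tsmc_now, tsmc_next_year = 11_000, 25_000      # TSMC monthly CoWoS wafers
other_now, other_next_year = 3_000, 5_000      # non-TSMC CoWoS supply chain

total_now = tsmc_now + other_now               # 14,000 wafers/month
total_next = tsmc_next_year + other_next_year  # 30,000 wafers/month

print(f"Total today           : {total_now:,} wafers/month")
print(f"Total end of next year: {total_next:,} wafers/month")
print(f"Implied growth        : {total_next / total_now - 1:.0%}")  # ~114%
```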

(Photo credit: TSMC)

2023-08-22

TSMC’s CoWoS Dominance: Amkor, ASE, JCET’s Response

In response to the demands of high-performance computing, AI, 5G, and other applications, the shift towards chiplets and the incorporation of HBM has become inevitable for advanced chips. As a result, packaging has transitioned from 2D to 2.5D and 3D formats.

As chip manufacturing advances to ever more advanced process nodes, a model has emerged in which wafer foundries package chips directly with their own advanced packaging technologies. This approach, however, means that wafer foundries encroach on parts of traditional assembly and testing, fueling ongoing discussion about the ‘threat’ to traditional assembly and test firms ever since TSMC entered advanced packaging in 2011.

But is this perspective accurate?

In fact, traditional assembly and test firms remain competitively positioned. Firstly, numerous electronic products still rely on their diverse traditional packaging techniques; in particular, with the rapid growth of AIoT, electric vehicles, and drones, many of the required electronic components still use traditional packaging. Secondly, faced with wafer foundries actively entering the advanced packaging domain, traditional assembly and test firms have not been idle, presenting concrete solutions to the challenge.

Advanced Packaging Innovations by Traditional Assembly and Test Firms

Since 2023, AI and AI server trends have rapidly emerged, driving the demand for AI chips. TSMC’s 2.5D advanced packaging technology, known as CoWoS, has played a pivotal role. However, the sudden surge in demand stretched TSMC’s capacity. In response, major traditional assembly and test firms such as ASE and Amkor have demonstrated their technical prowess and have no intention of being absent from this field.

For instance, ASE’s FOCoS technology enables the integration of HBM and ASICs: it reconstitutes multiple dies into a fan-out module, which is then mounted on the substrate. Its FOCoS-Bridge technology, unveiled in May this year, uses silicon bridges to achieve 2.5D packaging, supporting the advanced chips required for applications such as AI, data centers, and servers.

Additionally, SPIL, a subsidiary of ASE, offers FO-EB technology, which integrates logic ICs with HBM. This technology eschews silicon interposers, instead using silicon bridges and redistribution layers (RDL) for connections, and is likewise capable of 2.5D packaging.

Another major player, Amkor, has not only collaborated with Samsung to develop the H-Cube advanced packaging solution but has also long been involved in ‘CoWoS-like’ technology. Using interposers and through-silicon vias (TSV), Amkor can interconnect different chips, giving it 2.5D advanced packaging capabilities as well.

China’s major assembly and test firm, Jiangsu Changjiang Electronics Technology (JCET), employs the XDFOI technology, integrating logic ICs with HBM through TSV, RDL, and microbump techniques, aimed at high-performance computing.

Given the recent surge in demand for high-end GPU chips, TSMC’s CoWoS capacity has fallen short, and NVIDIA is actively seeking support from second or even third suppliers. The ASE Group and Amkor have secured partial orders through their packaging technologies. This clearly illustrates that traditional assembly and test firms, even when faced with the entry of wafer foundries into the advanced packaging domain, still possess the capability to compete.

In terms of product types, wafer foundries focus on advanced packaging for major players like NVIDIA and AMD, while products outside the highest-end category still go to traditional assembly and test firms like ASE, Amkor, and JCET for manufacturing. Overall, with their presence in advanced packaging as well as their hold on the still-growing traditional packaging market, traditional assembly and test firms continue to maintain their market competitiveness.

(Photo credit: Amkor)

2023-07-06

ASE, Amkor, UMC and Samsung Getting a Slice of the CoWoS Market from AI Chips, Challenging TSMC

AI Chips and High-Performance Computing (HPC) have been continuously shaking up the entire supply chain, with CoWoS packaging technology being the latest area to experience the tremors.

In the previous piece, “HBM and 2.5D Packaging: the Essential Backbone Behind AI Server,” we discovered that the leading AI chip players, Nvidia and AMD, have been dedicated users of TSMC’s CoWoS technology. Much of the groundbreaking tech used in their flagship products, such as Nvidia’s A100 and H100 and AMD’s Instinct MI250X and MI300, has its roots in TSMC’s CoWoS technology.

However, with AI’s exponential growth, chip demand has skyrocketed not only from Nvidia and AMD; other giants like Google and Amazon are also catching up in the AI field, bringing a further onslaught of chip demand. The surge of orders is already testing the limits of TSMC’s CoWoS capacity. While TSMC plans to increase production in the latter half of 2023, there is a snag: the lead time of packaging equipment is proving to be a bottleneck, severely curtailing the pace of this necessary capacity expansion.

Nvidia Shakes the Foundation of the CoWoS Supply Chain

In these times of booming demand, maintaining a stable supply is viewed as the primary goal for chipmakers, including Nvidia. While TSMC is struggling to keep up with customer needs, other chipmakers are starting to tweak their outsourcing strategies, moving towards a more diversified supply chain model. This shift is now opening opportunities for other foundries and OSATs.

Interestingly, in this reshuffling of the supply chain, UMC (United Microelectronics Corporation) is reportedly becoming one of Nvidia’s key partners in the interposer sector for the first time, with plans for capacity expansion on the horizon.

From a technical viewpoint, the interposer has always been the cornerstone of TSMC’s CoWoS process and its technology progression. As the interposer area enlarges, more HBM stacks and core components can be integrated. This is crucial for increasingly complex multi-chip designs, underscoring Nvidia’s intention to support UMC as a backup resource to safeguard supply continuity.

Meanwhile, as Nvidia secures production capacity, it is observed that the two leading OSAT companies, Amkor and SPIL (as part of ASE), are establishing themselves in the Chip-on-Wafer (CoW) and Wafer-on-Substrate (WoS) processes.

The ASE Group is no stranger to the 2.5D packaging arena. It unveiled its proprietary 2.5D packaging tech as early as 2017, a technology capable of integrating core computational elements and High Bandwidth Memory (HBM) onto the silicon interposer. This approach was once utilized in AMD’s MI200 series server GPU. Also under the ASE Group umbrella, SPIL boasts unique Fan-Out Embedded Bridge (FO-EB) technology. Bypassing silicon interposers, the platform leverages silicon bridges and redistribution layers (RDL) for integration, which provides ASE another competitive edge.

Could Samsung’s Turnkey Service Break New Ground?

In the shifting landscape of the supply chain, the Samsung Device Solutions division’s turnkey service, spanning from foundry operations to Advanced Package (AVP), stands out as an emerging player that can’t be ignored.

After being spun off in 2018, Samsung Foundry began taking orders beyond System LSI to stabilize its business. In 2023, the AVP department, which initially served Samsung’s memory and foundry businesses, also expanded its reach to external clients.

Our research indicates that Samsung’s AVP division is making aggressive strides into the AI field. Currently in active talks with key customers in the U.S. and China, Samsung is positioning its foundry-to-packaging turnkey solutions and standalone advanced packaging processes as viable, mature options.

In terms of technology roadmap, Samsung has invested significantly in 2.5D packaging R&D. Mirroring TSMC, the company launched two 2.5D packaging technologies in 2021: I-Cube4, capable of integrating four HBM stacks and one core component onto a silicon interposer, and H-Cube, designed to extend the packaging area by placing an HDI PCB beneath the ABF substrate, primarily for designs incorporating six or more HBM stacks.

Besides, recognizing Japan’s dominance in packaging materials and technologies, Samsung recently launched an R&D center there to quickly scale up its AVP business.

Given all these circumstances, it seems to be only a matter of time before Samsung carves out its own significant share in the AI chip market. Despite TSMC’s industry dominance and pivotal role in AI chip advancements, the rising demand for advanced packaging is set to undeniably reshape supply chain dynamics and the future of the semiconductor industry.

(Source: Nvidia)

2023-06-26

HBM and 2.5D Packaging: the Essential Backbone Behind AI Server

With the advancements in AIGC models such as ChatGPT and Midjourney, we are witnessing the rise of more super-sized language models, opening up new possibilities for High-Performance Computing (HPC) platforms.

According to TrendForce, by 2025, the global demand for computational resources in the AIGC industry – assuming 5 super-sized AIGC products equivalent to ChatGPT, 25 medium-sized AIGC products equivalent to Midjourney, and 80 small-sized AIGC products – would be approximately equivalent to 145,600 – 233,700 units of NVIDIA A100 GPUs. This highlights the significant impact of AIGC on computational requirements.
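
To make the structure of such an estimate concrete, the sketch below composes a demand range from the three product tiers. The per-product A100 counts are hypothetical placeholders chosen purely for illustration; TrendForce’s actual per-tier assumptions are not given here, so the output is of the same order as, but not identical to, the published range:

```python
# Illustrative structure of an AIGC GPU-demand estimate.
# The per-tier A100 counts are HYPOTHETICAL placeholders, not TrendForce's figures.

tiers = {
    # name: (number of products, assumed A100s per product: low, high)
    "super-sized (ChatGPT-class)": (5, 20_000, 30_000),
    "medium (Midjourney-class)":   (25, 1_500, 3_000),
    "small":                       (80, 100, 200),
}

low = sum(n * lo for n, lo, hi in tiers.values())
high = sum(n * hi for n, lo, hi in tiers.values())
print(f"Estimated total demand: {low:,} - {high:,} A100-equivalent GPUs")
```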

Additionally, the rapid development of supercomputing, 8K video streaming, and AR/VR will also lead to an increased workload on cloud computing systems. This calls for highly efficient computing platforms that can handle parallel processing of vast amounts of data. However, a critical concern is whether hardware advancements can keep pace with the demands of these emerging applications.

HBM: The Fast Lane to High-Performance Computing

While the performance of core computing components like CPUs, GPUs, and ASICs has improved due to semiconductor advancements, their overall efficiency can be hindered by the limited bandwidth of DDR SDRAM.

For example, from 2014 to 2020, CPU performance increased over threefold, while DDR SDRAM bandwidth only doubled. Additionally, the pursuit of higher transmission performance through technologies like DDR5 or future DDR6 increases power consumption, posing long-term impacts on computing systems’ efficiency.

Recognizing this challenge, major chip manufacturers quickly turned their attention to new solutions. In 2013, AMD and SK Hynix introduced High Bandwidth Memory (HBM), a pioneering technology that stacks DRAM dies and places them alongside the GPU, effectively replacing GDDR SDRAM. It was adopted as an industry standard by JEDEC the same year.

In 2015, AMD introduced Fiji, the first high-end consumer GPU with integrated HBM, followed in 2016 by NVIDIA’s P100, the first AI server GPU with HBM, marking the beginning of a new era of HBM integration in server GPUs.

HBM’s rise as the mainstream technology sought after by key players can be attributed to its exceptional bandwidth and lower power consumption compared to DDR SDRAM. For example, HBM3 delivers 15 times the bandwidth of DDR5 and can further increase total bandwidth by adding more stacked dies. Additionally, at the system level, HBM can effectively manage power consumption by replacing a portion of GDDR SDRAM or DDR SDRAM.
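
For a rough sense of where such a multiple comes from, the sketch below compares commonly cited peak figures, assuming HBM3’s 6.4 Gb/s per-pin rate over a 1,024-bit interface and DDR5-6400 over a standard 64-bit channel; the exact multiple depends on which DDR5 speed grade is taken as the baseline:

```python
# Rough peak-bandwidth comparison behind the HBM3-vs-DDR5 claim.
# Pin rates and bus widths are commonly cited spec values, not figures from the article.

hbm3_pin_gbps = 6.4    # Gb/s per pin (HBM3)
hbm3_bus_bits = 1024   # bits per HBM3 stack
ddr5_pin_gbps = 6.4    # Gb/s per pin (DDR5-6400)
ddr5_bus_bits = 64     # bits per DDR5 channel

hbm3_stack_gbs   = hbm3_pin_gbps * hbm3_bus_bits / 8   # ~819 GB/s per stack
ddr5_channel_gbs = ddr5_pin_gbps * ddr5_bus_bits / 8   # ~51 GB/s per channel

print(f"HBM3 per stack   : {hbm3_stack_gbs:.0f} GB/s")
print(f"DDR5 per channel : {ddr5_channel_gbs:.1f} GB/s")
print(f"Ratio            : ~{hbm3_stack_gbs / ddr5_channel_gbs:.0f}x")
print(f"8 stacks total   : {8 * hbm3_stack_gbs / 1000:.1f} TB/s")
```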

As computing power demands increase, HBM’s exceptional transmission efficiency unlocks the full potential of core computing components. Integrating HBM into server GPUs has become a prominent trend, propelling the global HBM market to grow at a compound annual rate of 40-45% from 2023 to 2025, according to TrendForce.
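
In cumulative terms, a 40-45% compound annual growth rate implies the HBM market roughly doubling between 2023 and 2025, as a quick compounding check shows:

```python
# What a 40-45% CAGR over 2023-2025 implies in cumulative terms
# (pure arithmetic on the cited growth rates; no market-size figures assumed).

for cagr in (0.40, 0.45):
    multiple = (1 + cagr) ** 2   # two years of compounding, 2023 -> 2025
    print(f"CAGR {cagr:.0%}: roughly {multiple:.2f}x the 2023 market size by 2025")
```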

The Crucial Role of 2.5D Packaging

In the midst of this trend, the crucial role of 2.5D packaging technology in enabling such integration cannot be overlooked.

TSMC has been laying the groundwork for 2.5D packaging technology with CoWoS (Chip on Wafer on Substrate) since 2011. This technology enables the integration of logic chips on the same silicon interposer. The third-generation CoWoS technology, introduced in 2016, allowed the integration of logic chips with HBM and was adopted by NVIDIA for its P100 GPU.

With development in CoWoS technology, the interposer area has expanded, accommodating more stacked HBM dies. The 5th-generation CoWoS, launched in 2021, can integrate 8 HBM stacks and 2 core computing components. The upcoming 6th-generation CoWoS, expected in 2023, will support up to 12 HBM stacks, meeting the requirements of HBM3.

TSMC’s CoWoS platform has become the foundation for high-performance computing platforms. While other semiconductor leaders like Samsung, Intel, and ASE are also venturing into 2.5D packaging technology with HBM integration, we think TSMC is poised to be the biggest winner in this emerging field, considering its technological expertise, production capacity, and ability to secure orders.

In conclusion, the remarkable transmission efficiency of HBM, facilitated by the advancements in 2.5D packaging technologies, creates an exciting prospect for the seamless convergence of these innovations. The future holds immense potential for enhanced computing experiences.

 
