The shortage of advanced packaging capacity is expected to ease earlier than anticipated. Industry sources suggest that Samsung's entry into HBM3 production has increased the supply of memory essential for advanced packaging. Coupled with TSMC's strategy of expanding advanced packaging capacity through equipment modifications and partial outsourcing, and with some CSPs adjusting their designs and order placements, the bottleneck in advanced packaging capacity is poised to open up as early as the first quarter of next year, one quarter to half a year sooner than industry had predicted, according to UDN News.
TSMC refrains from commenting on market speculations, while Samsung has already issued a press release signaling the expansion of HBM3 product sales to meet the growing demand for the new interface, concurrently boosting the share of advanced processes.
Industry sources indicate that the previous global shortage of AI chips primarily resulted from inadequate advanced packaging capacity. Now that this shortage is expected to end sooner, the supply of AI chips should improve accordingly.
Samsung, alongside Micron and SK Hynix, is a key partner for TSMC in advanced packaging. In a recent press release, Samsung underscores its close collaboration with TSMC in previous generations and the current high-bandwidth memory (HBM) technology, supporting the compatibility of the CoWoS process and the interconnectivity of HBM. Having joined the TSMC OIP 3DFabric Alliance in 2022, Samsung is set to broaden its scope of work and provide solutions for future generations of HBM.
Industry sources previously pointed out that the earlier shortage of AI chips stemmed from three main factors: insufficient advanced packaging capacity, tight HBM3 memory supply, and duplicate orders placed by some CSPs. These obstacles are now gradually being overcome. In addition to TSMC's and Samsung's commitments to increasing advanced packaging capacity, CSPs are adjusting designs, reducing their use of advanced packaging, and canceling duplicate orders.
TSMC’s ongoing collaboration with OSATs (Outsourced Semiconductor Assembly and Test providers) to expedite WoS capacity expansion is gaining momentum. NVIDIA confirmed during a recent earnings call that it has certified the capacity of other CoWoS advanced packaging suppliers as a backup. Industry speculation suggests that certifying other suppliers' capacity for parts of both front-end and back-end production will help TSMC and its partners reach a monthly CoWoS capacity of approximately 40,000 pieces in the first quarter of next year.
Furthermore, previous challenges in expanding advanced packaging production capacity, especially in obtaining overseas equipment, are gradually being overcome. With equipment optimization, more capacity is being extracted, alleviating the shortage of AI chip capacity.
Please note that this article cites information from UDN News.
According to a report from Economic Daily News, after a prolonged economic downturn, the market has gradually turned optimistic about memory. Effective production cuts by the top five memory manufacturers have led to an increase in memory prices.
This, in turn, has prompted downstream module manufacturers to actively step up procurement, resulting in shortages of certain products. Industry sources indicate that manufacturers including Samsung and Micron are expressing intentions to raise prices.
Memory Manufacturers Keen to Raise Prices, but Future Demand Still Requires Monitoring
On December 7th, Western Digital sent price increase notifications to its customers, stating that it would review hard drive product pricing weekly and anticipating price increases in the first half of the coming year.
Regarding flash memory components, the company expects prices to rise cyclically over the next few quarters, with the cumulative increase likely exceeding 55% of current levels.
It’s worth noting that many in the industry are currently optimistic about NAND chip prices bottoming out and rebounding. For now, however, suppliers are notifying customers of adjusted quotes individually. Against this backdrop, Western Digital has issued a direct price increase notice to customers with a notably large expected increase, marking the industry’s first comprehensive, significant price hike.
Meanwhile, the latest financial reports of many companies in the memory industry chain show significant improvement compared to the previous period.
On December 11th, SSD controller chip manufacturer Phison announced its performance report for November, with consolidated revenue reaching NTD 5.407 billion (approximately USD 171.8 million), representing nearly a 5% monthly growth.
According to Phison, the total shipment volume of SSD controller chips continued to recover in November. Among them, the total shipment volume of PCIe SSD controller ICs is expected to grow by nearly 40% year-on-year, a record high for the period. This further substantiates the news of a significant upturn in the memory market.
In the latest financial report from memory module manufacturer ADATA, the company’s consolidated revenue for October was NTD 3.791 billion (approximately USD 120.4 million), reflecting a monthly increase of 13.43% and a year-on-year increase of 39.59%.
ADATA’s Chairman, Simon Chen, recently mentioned that they anticipate the completion of NAND Flash inventory clearance by the end of this year or the end of January next year. There is an expectation that both DRAM and NAND Flash may face supply shortages next year.
In addition, DRAM manufacturer Nanya Technology observes a price increase in DDR5, while DDR4 prices have stabilized. There is an expectation of a slight improvement in DDR4 and DDR3 prices in the fourth quarter.
NAND Flash spot prices have surged since the end of September, driven by a collective production cut from suppliers. TrendForce analyst Avril Wu recently mentioned that Samsung’s production capacity has reduced by almost half from its peak, indicating that even cost-efficient manufacturers like Samsung can no longer endure losses. It is suggested that the average wafer price has likely passed its lowest point.
From the supply side, recent industry reports indicate that memory manufacturers are employing a “delaying tactic” in the supply of NAND Flash for the fourth quarter. Module manufacturers attempted to finalize orders for millions of chips in September, but memory manufacturers were reluctant to release the products, and even when they were willing, the quantities and prices were unsatisfactory. Meanwhile, Samsung is reportedly pausing quotations and shipments for NAND products.
Looking ahead to the fourth quarter, the estimated average selling price increase for all NAND Flash products is expected to reach 13%, with an overall quarter-over-quarter revenue growth rate of over 20% in the NAND Flash industry.
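As a rough sanity check on how those two estimates relate (an illustrative calculation, not TrendForce's own methodology), revenue growth can be decomposed into price growth times bit-shipment growth, which implies shipments also grew by roughly 6%:

```python
# Illustrative back-of-the-envelope check (assumed decomposition, not
# TrendForce's model): revenue growth = price growth x bit-shipment growth.
asp_growth = 0.13      # estimated Q4 average selling price increase
revenue_growth = 0.20  # estimated Q4 quarter-over-quarter revenue growth

# (1 + revenue) = (1 + price) * (1 + bits)  =>  solve for bit growth
implied_bit_growth = (1 + revenue_growth) / (1 + asp_growth) - 1
print(f"Implied bit-shipment growth: {implied_bit_growth:.1%}")
```

In other words, a 13% ASP increase alone would not deliver 20% revenue growth; shipment volumes would need to rise by about 6% as well.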
It is worth noting that according to TrendForce analyst Avril Wu, with demand not showing explosive growth, the market will be focused on three key considerations. First, after production cuts, the decline in memory manufacturers’ inventory levels has begun, but it remains to be seen whether inventory can continue to shift towards buyers.
Second, it is anticipated that memory manufacturers’ production capacity will slowly increase, and if the market warms up, an early resumption of capacity could lead to supply-demand imbalances again. Lastly, whether end-demand can meet expectations for a recovery, with a particular focus on the sustained orders related to AI, will be crucial.
According to Taiwan’s Business Next, as Moore’s Law gradually reaches its limits, semiconductor manufacturers are transitioning from 2D to 3D chip stacking and packaging to increase transistor counts for improved performance. The final step, “packaging,” has become crucial. In line with this trend, Intel has announced the industry’s first glass-based substrate for advanced packaging, breaking traditional constraints, with mass production expected between 2026 and 2030.
Intel’s glass-based substrate packaging technology has been in development for a decade and was unveiled at the 2023 Innovation Day in Silicon Valley, USA. Intel aims to achieve the goal of accommodating 1 trillion transistors within a single package by 2030 using advanced glass-based packaging.
The rise of the AI wave has driven the demand for accelerated computing, increasing the requirements for chip density. Intel argues that current substrate materials consume more power and are more prone to expansion and warpage compared to glass, which better aligns with future needs. Industry analysts have noted that TSMC also has similar solutions.
According to Intel, glass substrates can tolerate higher temperatures, offer 50% less pattern distortion, and provide ultra-low flatness for improved depth of focus in lithography, along with the dimensional stability needed for extremely tight layer-to-layer interconnect overlay. These distinctive properties make a 10x increase in interconnect density possible on glass substrates, while the improved mechanical properties of glass enable ultra-large form-factor packages with very high assembly yields.
Glass substrates’ tolerance of higher temperatures also gives chip architects flexibility in setting design rules for power delivery and signal routing, because it allows optical interconnects to be seamlessly integrated, and inductors and capacitors to be embedded in the glass, during higher-temperature processing.
According to a report from China’s Changjiang Securities released in May, the application of glass substrates in advanced packaging has been validated, and glass manufacturer Corning has introduced related products.
As per a report from Taiwan’s TechNews, TSMC, Samsung, and Intel have been actively deploying Backside Power Delivery Network (BSPDN) strategies recently, and have announced plans to incorporate BSPDN into their logic chip development roadmaps. For instance, Samsung intends to implement BSPDN technology in its 2-nanometer chips, a move unveiled at the VLSI Symposium in Japan.
According to imec, BSPDN aims to alleviate the congestion issues faced by front-end logic chips in later-stage processes. Through Design Technology Co-Optimization (DTCO), more efficient wire designs are achieved in standard cells, aiding in the downsizing of logic standard cells.
In essence, BSPDN can be seen as a refinement of chiplet design. The conventional approach, where logic circuits and memory modules are integrated, is transformed into a configuration with logic functions on the front and power or signal delivery from the back.
While the traditional method of front-side power delivery achieves its purpose, it leads to decreased power density and compromised performance. However, the new BSPDN technique has not yet been adopted by foundries in mass production.
Samsung claims that, compared to the conventional method, BSPDN reduces area by 14.8%, providing more chip space for additional transistors and improved overall performance. Wire lengths are also cut by 9.2%, reducing resistance, allowing greater current flow, and thereby lowering power consumption while enhancing power transmission efficiency.
In June of this year, Intel also introduced its BSPDN-related innovations under the name ‘PowerVia.’ Team Blue plans to utilize this approach in the Intel 20A process, potentially achieving a chip utilization rate of 90%.
Intel believes PowerVia will address interconnect bottlenecks in silicon architecture, enabling continuous transmission through backside wafer powering. The company anticipates incorporating this novel approach into its Arrow Lake CPUs slated for release in 2024.
Furthermore, according to Taiwan’s supply chain sources, TSMC remains on track to launch its 2-nanometer process in 2025, with mass production expected in the latter half of the year in Hsinchu’s Baoshan. The company’s N2P process, planned for 2026, will feature BSPDN technology.
With the advancements in AIGC models such as ChatGPT and Midjourney, we are witnessing the rise of more super-sized language models, opening up new possibilities for High-Performance Computing (HPC) platforms.
According to TrendForce, by 2025, the global demand for computational resources in the AIGC industry – assuming 5 super-sized AIGC products equivalent to ChatGPT, 25 medium-sized AIGC products equivalent to Midjourney, and 80 small-sized AIGC products – would be approximately equivalent to 145,600 – 233,700 units of NVIDIA A100 GPUs. This highlights the significant impact of AIGC on computational requirements.
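A demand estimate of this kind can be sketched as a simple sum over product tiers. The per-product GPU counts below are illustrative assumptions for the sketch, not TrendForce's actual inputs, so the resulting range only approximates the figures cited above:

```python
# Hypothetical sketch of an A100-equivalent demand estimate.
# The per-product GPU counts are illustrative assumptions only.
product_tiers = {
    # tier: (number of products, low GPUs each, high GPUs each)
    "super-sized (ChatGPT-class)": (5, 12_000, 18_000),
    "medium (Midjourney-class)":   (25, 1_800, 2_600),
    "small":                       (80, 700, 1_000),
}

# Total demand range is the tier-by-tier sum of products x GPUs per product.
low_total = sum(n * lo for n, lo, hi in product_tiers.values())
high_total = sum(n * hi for n, lo, hi in product_tiers.values())
print(f"Estimated demand: {low_total:,} - {high_total:,} A100-equivalent GPUs")
```

The structure of the calculation, rather than the exact inputs, is the point: a handful of very large deployments and a long tail of smaller ones together drive demand into the hundreds of thousands of GPUs.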
Additionally, the rapid development of supercomputing, 8K video streaming, and AR/VR will also lead to an increased workload on cloud computing systems. This calls for highly efficient computing platforms that can handle parallel processing of vast amounts of data.
However, a critical concern is whether hardware advancements can keep pace with the demands of these emerging applications.
HBM: The Fast Lane to High-Performance Computing
While the performance of core computing components like CPUs, GPUs, and ASICs has improved due to semiconductor advancements, their overall efficiency can be hindered by the limited bandwidth of DDR SDRAM.
For example, from 2014 to 2020, CPU performance increased over threefold, while DDR SDRAM bandwidth only doubled. Additionally, the pursuit of higher transmission performance through technologies like DDR5 or future DDR6 increases power consumption, posing long-term impacts on computing systems’ efficiency.
Recognizing this challenge, major chip manufacturers quickly turned their attention to new solutions. In 2013, AMD and SK Hynix made separate debuts with their pioneering products featuring High Bandwidth Memory (HBM), a revolutionary technology that allows for stacking on GPUs and effectively replacing GDDR SDRAM. It was recognized as an industry standard by JEDEC the same year.
In 2015, AMD introduced Fiji, the first high-end consumer GPU with integrated HBM, followed by NVIDIA’s release of the P100, the first AI server GPU with HBM, in 2016, marking the beginning of a new era of HBM integration in server GPUs.
HBM’s rise as the mainstream technology sought after by key players can be attributed to its exceptional bandwidth and lower power consumption compared to DDR SDRAM. For example, HBM3 delivers 15 times the bandwidth of DDR5 and can further increase total bandwidth by adding more stacked dies. Additionally, at the system level, HBM can effectively manage power consumption by replacing a portion of GDDR SDRAM or DDR SDRAM.
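The bandwidth gap comes largely from interface width. A quick calculation (using a nominal 6.4 Gb/s per-pin rate for both an HBM3 stack and a DDR5-6400 channel; shipping parts vary by vendor and speed bin) shows why a single HBM3 stack far outpaces a DDR5 DIMM:

```python
# Peak-bandwidth comparison per memory device, using nominal spec-level
# figures; real products vary by vendor and speed bin.
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

hbm3_stack = bandwidth_gb_s(1024, 6.4)  # 1024-bit interface, 6.4 Gb/s per pin
ddr5_dimm = bandwidth_gb_s(64, 6.4)     # 64-bit channel, DDR5-6400

print(f"HBM3 stack: {hbm3_stack:.1f} GB/s")
print(f"DDR5 DIMM:  {ddr5_dimm:.1f} GB/s")
print(f"Ratio: {hbm3_stack / ddr5_dimm:.0f}x")
```

At equal per-pin rates, the ratio reduces to the 1024-bit vs. 64-bit interface width, which is on the order of the 15x advantage cited above.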
As computing power demands increase, HBM’s exceptional transmission efficiency unlocks the full potential of core computing components. Integrating HBM into server GPUs has become a prominent trend, propelling the global HBM market to grow at a compound annual rate of 40-45% from 2023 to 2025, according to TrendForce.
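To put that growth rate in perspective, compounding the cited 40-45% CAGR over the two-year span (straightforward arithmetic on the quoted figure, with no assumption about absolute market size) implies the HBM market roughly doubles by 2025:

```python
# Compound the cited 40-45% CAGR over the two-year 2023-2025 span.
years = 2
low_multiple = (1 + 0.40) ** years   # lower bound of the CAGR range
high_multiple = (1 + 0.45) ** years  # upper bound of the CAGR range
print(f"Market size multiple by 2025: {low_multiple:.2f}x - {high_multiple:.2f}x")
```

That is, the market ends the period at roughly 1.96x to 2.10x its 2023 size.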
The Crucial Role of 2.5D Packaging
In the midst of this trend, the crucial role of 2.5D packaging technology in enabling such integration cannot be overlooked.
TSMC has been laying the groundwork for 2.5D packaging technology with CoWoS (Chip on Wafer on Substrate) since 2011. This technology enables the integration of logic chips on the same silicon interposer. The third-generation CoWoS technology, introduced in 2016, allowed the integration of logic chips with HBM and was adopted by NVIDIA for its P100 GPU.
With development in CoWoS technology, the interposer area has expanded, accommodating more stacked HBM dies. The 5th-generation CoWoS, launched in 2021, can integrate 8 HBM stacks and 2 core computing components. The upcoming 6th-generation CoWoS, expected in 2023, will support up to 12 HBM stacks, meeting the requirements of HBM3.
TSMC’s CoWoS platform has become the foundation for high-performance computing platforms. While other semiconductor leaders like Samsung, Intel, and ASE are also venturing into 2.5D packaging technology with HBM integration, we think TSMC is poised to be the biggest winner in this emerging field, considering its technological expertise, production capacity, and order pipeline.
In conclusion, the remarkable transmission efficiency of HBM, facilitated by the advancements in 2.5D packaging technologies, creates an exciting prospect for the seamless convergence of these innovations. The future holds immense potential for enhanced computing experiences.