AMD


2024-01-09

[News] NVIDIA and AMD Clash in AI Chip Market, as TSMC Dominates Orders with Strong Momentum in Advanced Processes

In the intense battle over AI chips between NVIDIA and AMD this year, AMD’s MI300 has entered mass production and shipment in 1H24, gaining positive adoption from clients. In response, NVIDIA is gearing up to launch upgraded AI chips. TSMC emerges as the big winner by securing orders from both NVIDIA and AMD.

Industry sources have expressed optimism, as NVIDIA’s AI chip shipments are expected to reach around 3 million units this year, a multi-fold increase compared to 2023.

With the production ramp-up of the AMD MI300 series chips, the total number of AI high-performance computing chips that TSMC produces for NVIDIA and AMD in 2024 is anticipated to reach 3.5 million units. This boost in demand is expected to lift the utilization rate of TSMC’s advanced nodes.

According to a report from the Economic Daily News, TSMC has not commented on rumors regarding customers and orders.

Industry sources have further noted that the global AI boom ignited in 2023, and 2024 continues to be a focal point for the industry. A notable shift from 2023 is that NVIDIA, which has traditionally dominated the field of high-performance computing (HPC) in AI, is now facing a challenge from AMD’s MI300 series products, which have begun shipping, intensifying competition for market share.

Reportedly, the AMD MI300A series products have commenced mass production and shipment this quarter. The central processing unit (CPU) and graphics processing unit (GPU) tiles are manufactured using TSMC’s 5nm process, while the IO tile uses TSMC’s 6nm process.

These chips are integrated through TSMC’s new System-on-Integrated-Chip (SoIC) and Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging technologies. Additionally, AMD’s MI300X, which does not integrate the CPU, is also shipping simultaneously.

Compared to NVIDIA’s GH200, which integrates a CPU and GPU, and the H200, which focuses solely on GPU computation, AMD’s new AI chips have exceeded performance expectations. They offer a lower price and a strong cost-performance advantage, attracting adoption by ODMs.

In response to strong competition from AMD, NVIDIA is upgrading its product line. Apart from its high-demand H200 and GH200, NVIDIA is expected to launch new products such as B100 and GB200, utilizing TSMC’s 3nm process, by the end of the year.


(Photo credit: NVIDIA)

Please note that this article cites information from Economic Daily News

2024-01-03

[News] AMD Eyes CoWoS-like Supply for AI Chips as TSMC Runs at Full Capacity

With TSMC operating at full capacity, AMD is reportedly seeking a CoWoS-like supply chain for its AI chips.

In 2023, NVIDIA led the global AI chip development, and in 2024, the global demand for AI chips is expected to continue to surge due to the expansion of end-user applications such as PCs and mobile phones.

Meanwhile, AMD has not slowed its AI chip development either, with the anticipated MI300 products poised to heat up global AI business opportunities. However, the key to supply lies in advanced packaging, and AMD will reportedly seek outsourced semiconductor assembly and test (OSAT) providers to offer support similar to CoWoS.

According to Taiwan’s Commercial Times, TSMC’s CoWoS capacity has long been fully loaded, and even if it expands production this year, it will mainly be reserved for NVIDIA. Market sources pointed out that TSMC will continue to increase CoWoS capacity to support AMD’s demand, but it takes six to nine months to establish a new production line. Therefore, it is expected that AMD will seek cooperation with other companies with CoWoS-like packaging capabilities. ASE, Amkor, Powertech, and KYEC are the first batch of potential partners.

TSMC has been outsourcing part of its CoWoS operations for some time, mainly for small-volume, high-performance chips. TSMC keeps the front-end Chip-on-Wafer (CoW) step in house, while the back-end Wafer-on-Substrate (WoS) step is handed over to test and assembly houses to improve production efficiency and flexibility. This model is expected to continue into the coming 3D IC generation.

ASE and Amkor both received WoS orders last year. ASE has strengthened the development of advanced packaging technology and has a complete solution for the entire CoWoS process. ASE previously stated that it sees the strong potential of AI and expects related revenue to double in 2024.

According to reports citing market sources, the monthly production capacity of ASE Group’s 2.5D packaging is about 2,000 to 2,500 pieces. Some experts believe that test and assembly houses will maintain the business model in which TSMC or UMC provides the interposer. A significant increase in CoWoS production capacity is therefore expected in 2024.

KYEC is responsible for testing Nvidia AI chips and is expected to benefit from AMD’s search for CoWoS-like capacity. Nvidia is currently KYEC’s second-largest customer.

KYEC’s testing of Nvidia A100 and H100 chips is mainly in the final test (FT), with a market share of up to 70%. KYEC provides comprehensive IC burn-in testing, has self-developed burn-in equipment, and has been in the industry for more than a decade, accumulating many patents and technologies.

AMD stated at the end of 2023 that its AI chip revenue could reach US$2 billion in 2024, excluding other HPC chips. AMD pointed out that the compound annual growth rate of the AI chip market over the next four years will reach 70%, with the market estimated to reach US$400 billion in 2027.

(Image: AMD)

Please note that this article cites information from Commercial Times

2023-12-21

[News] TSMC’s Arizona Plant Rumored for Q1 2024 Trial Production, Securing Orders from Three U.S. Clients

According to a report by TechNews, TSMC’s Arizona-based Fab21, currently in the intensive equipment installation phase, has begun building a small-scale trial production line. With a small amount of equipment expected from multiple supply chain partners by the end of 2023, industry sources suggest that Fab21 is planning to commence trial production in the first quarter of 2024.

The reason behind TSMC’s anticipated trial production in the first quarter of 2024 stems from orders from its U.S. clients. Market reports indicate that, in addition to other major U.S. clients of Fab21, NVIDIA CEO Jensen Huang has not ruled out placing orders with the fab. Furthermore, there are indications that Intel, which plans to outsource its core compute tiles to TSMC’s N3B process, is likely to place orders with Fab21 in the near future.

However, due to cost considerations, despite the start of the small-scale trial production line, the initial capacity ramp for Fab21’s 4-nanometer process will not be accelerated. This situation is expected to persist into the subsequent second phase, which covers the 3-nanometer production line.

Looking back at TSMC’s progress in Arizona, the company announced the construction of the 12-inch wafer Fab21 in Arizona back in 2020, anticipating the commencement of formal equipment installation in the first quarter of 2024 and official mass production before the end of 2024. The initial phase of Fab21 was to use the 5-nanometer process, with a monthly production capacity of 20,000 wafers.

TSMC later upgraded the initial process from 5-nanometer to 4-nanometer. However, due to a shortage of skilled installation workers in the region, TSMC postponed the mass production start date to 2025.

In addition, the second phase of the project is currently slated for mass production in 2026, introducing the 3-nanometer process. The total investment for both phases amounts to $40 billion.

Industry sources also acknowledge that Fab21’s manufacturing costs are high and its capacity cannot compete with TSMC’s fabs in Taiwan, making U.S. client orders primarily a response to U.S. government requirements, with the majority of production still centered in Taiwan.

(Image: TSMC)

Please note that this article cites information from TechNews

2023-12-18

[News] Nvidia vs. AMD, the Expensive Duel as Two Major Buyers Opt for AMD’s Latest AI GPU

AMD has long aspired to gain more favor for its AI chips, aiming to break into Nvidia’s stronghold in the AI chip market. Key players like Meta, OpenAI, and Microsoft, who are major buyers of AI chips, also desire a diversified market with multiple AI chip suppliers to avoid vendor lock-in issues and reduce costs.

With AMD’s latest AI chip, the Instinct MI300X, slated for significant shipments in early 2024, these three major AI chip buyers have publicly announced plans to place orders, as they consider AMD’s solution a more cost-effective alternative.

At the AMD “Advancing AI” event on December 6th, Meta, OpenAI, Microsoft, and Oracle declared their preference for AMD’s latest AI chip, Instinct MI300X. This marks a groundbreaking move by AI tech giants actively seeking alternatives to Nvidia’s expensive GPUs.

For applications like OpenAI’s ChatGPT, Nvidia GPUs have played a crucial role. However, if the AMD MI300X can provide a significant cost advantage, it has the potential to impact Nvidia’s sales performance and challenge its market dominance in AI chips.

AMD’s Three Major Challenges

AMD grapples with three major challenges: convincing enterprises to consider a substitute, competing with Nvidia’s CUDA software, which has become the industry standard, and setting competitive GPU pricing. Lisa Su, AMD’s CEO, highlighted at the event that the new MI300X architecture features 192GB of high-performance HBM3, delivering not only faster data transfer but also meeting the demands of larger AI models. Su emphasized that such a notable performance boost translates directly into an enhanced user experience, enabling quicker responses to complex user queries.

However, AMD is currently facing critical challenges. Companies that heavily rely on Nvidia may hesitate to invest their time and resources in an alternative GPU supplier like AMD. Su believes there is an opportunity to persuade these AI tech giants to adopt AMD GPUs.

Another pivotal concern is that Nvidia has established its CUDA software as the industry standard, resulting in a highly loyal customer base. In response, AMD has made improvements to its ROCm software suite to effectively compete in this space. Lastly, pricing is a crucial issue, as AMD did not disclose the price of the MI300X during the event. Convincing customers to choose AMD over Nvidia, whose chips are priced around USD 40,000 each, will require substantial cost advantages in both the purchase and operation of AMD’s offerings.

The Overall Size of the AI GPU Market is Expected to Reach USD 400 Billion by 2027

AMD has already secured agreements with companies eager for high-performance GPUs to use the MI300X. Meta plans to leverage MI300X GPUs for AI inference tasks such as AI graphics, image editing, and AI assistants. On the other hand, Microsoft’s CTO, Kevin Scott, announced that the company will provide access to the MI300X through its Azure web service.

Additionally, OpenAI has decided to have its GPU programming language Triton, which is dedicated to machine learning algorithm development, support the AMD MI300X. Oracle Cloud Infrastructure (OCI) intends to introduce bare-metal instances based on AMD MI300X GPUs in its high-performance accelerated computing instances for AI.

AMD anticipates that the annual revenue from its GPUs for data centers will reach USD 2 billion by 2024. This projected figure is substantially lower than Nvidia’s most recent quarterly sales related to the data center business (i.e., over USD 14 billion, including sales unrelated to GPUs). AMD emphasizes that with the rising demand for high-end AI chips, the AI GPU market’s overall size is expected to reach USD 400 billion by 2027. This strategic focus on AI GPU products underscores AMD’s optimism about capturing a significant market share. Lisa Su is confident that AMD is poised for success in this endeavor.

Please note that this article cites information from TechNews

(Image: AMD)

2023-12-15

[News] TSMC to Expedite Production of NVIDIA’s Specialized Chips for China

According to a news report from IJIWEI, sources have revealed that NVIDIA has placed urgent orders with TSMC for the production of AI GPUs destined for China. These orders fall under the category of “Super Hot Run” (SHR), with fulfillment planned to begin in the first quarter of 2024.

In response to the United States implementing stricter export controls on the Chinese semiconductor industry, sources cited in the report indicate that NVIDIA plans to provide new “specialized” AI chips with lowered specifications to China, replacing the export-restricted H800, A800, and L40S series.

Insiders suggest that NVIDIA intends to resume supplying the RTX 4090 chip to China in January of next year, but will also release a modified version later to comply with U.S. export restrictions.

On the other hand, NVIDIA continues to increase its orders with TSMC. This move aims to secure TSMC’s manufacturing capacity to meet the demand for the H100. However, due to limitations in CoWoS (Chip-on-Wafer-on-Substrate) production capacity, the H100 GPU is currently facing severe shortages.

It is noted that following NVIDIA, Intel and AMD are also expected to tailor AI chips for China. TSMC, as the primary pure-play foundry partner for these AI chip suppliers, will continue to enjoy a competitive advantage.

According to sources from semiconductor equipment manufacturers, despite TSMC’s efforts to increase CoWoS production capacity, the foundry still cannot meet the growing demand for NVIDIA GPUs. Additionally, AMD’s recently launched MI300 chips are also competing for this production capacity.

Insiders note that TSMC’s ability to expand CoWoS production capacity is limited by the pace of equipment replacement, machine installation, and labor deployment. The new capacity is expected to be ready by the second quarter of 2024.

Equipment is identified as one of the key variables affecting TSMC’s expansion of CoWoS production capacity. Unexpected impacts on production and delivery times from Japanese equipment supplier Shibaura have delayed the development and installation of new capacity across TSMC’s production lines, including those in Longtan and Zhunan.

TSMC Chairman Mark Liu mentioned in a press conference in September that the shortage of CoWoS packaging capacity at TSMC is temporary, and it will be addressed through capacity expansion within the next year and a half to meet the growing demand.

(Photo credit: TSMC)


Please note that this article cites information from IJIWEI.
