AI chip


[News] NVIDIA’s Exclusive Chips for China Now Reported to be Available for Pre-Order, Priced Similar to Huawei Products

NVIDIA has begun accepting pre-orders for its customized artificial intelligence (AI) chips tailored for the Chinese market, according to a report from Reuters. The chips are said to be priced comparably to competitor Huawei’s products.

The H20 graphics card, exclusively designed by NVIDIA for the Chinese market, is the most powerful of the three chips developed for China, although its computing power is lower than that of NVIDIA’s own flagship AI chips, the H100 and H800. The H800, also tailored for China, was banned from export in October last year.

According to industry sources cited in the report, the H20’s specifications are inferior to Huawei’s Ascend 910B in some critical areas. In recent weeks, NVIDIA has quoted Chinese H20 distributors prices between $12,000 and $15,000 per unit.

It is noteworthy that servers provided by distributors, pre-configured with eight of the AI chips, are priced at CNY 1.4 million. In comparison, servers equipped with eight H800 chips were priced at around CNY 2 million when they launched a year ago.
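The figures above admit a quick back-of-the-envelope check. The sketch below is purely illustrative and not part of the report; in particular, the CNY/USD exchange rate of 7.2 is an assumption. It converts the quoted per-unit prices into the implied cost of the eight chips inside a CNY 1.4 million server:

```python
# Rough sanity check (illustrative only) of the reported H20 pricing.
# The CNY/USD exchange rate below is an assumption, not from the report.

USD_TO_CNY = 7.2  # assumed exchange rate

unit_price_usd = (12_000, 15_000)   # reported per-unit quote range
server_price_cny = 1_400_000        # reported price of an 8-chip server
chips_per_server = 8

# Implied cost of the eight chips alone, converted to CNY
chips_cost_cny = tuple(p * chips_per_server * USD_TO_CNY for p in unit_price_usd)
print(f"8 chips alone: CNY {chips_cost_cny[0]:,.0f} - {chips_cost_cny[1]:,.0f}")

# Share of the quoted server price attributable to the chips themselves
share = tuple(c / server_price_cny for c in chips_cost_cny)
print(f"Chips as share of server price: {share[0]:.0%} - {share[1]:.0%}")
```

At the assumed rate, the chips alone would account for roughly half to two-thirds of the quoted server price, with the remainder covering the rest of the system.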

Furthermore, the report adds that distributors have informed customers they will be able to begin small-scale deliveries of H20 products in the first quarter of 2024, with bulk deliveries starting in the second quarter.

In terms of specifications, the H20 appears to lag behind the 910B in FP32 performance, a critical metric that measures the speed at which chips process common tasks, with the H20’s performance being less than half of its competitor’s.

However, according to the source cited in the report, the H20 seems to have an advantage over the 910B in terms of interconnect speed, which measures the speed of data transfer between chips.

The source further indicates that in applications requiring numerous chips to be interconnected and function as a system, the H20 still possesses competitive capabilities compared to the 910B.

NVIDIA reportedly plans to commence mass production of the H20 in the second quarter of this year. Additionally, the company intends to introduce two other chips targeted at the Chinese market, namely the L20 and L2. However, the status of these two chips cannot be confirmed at the moment, as neither the H20, L20, nor L2 are currently listed on NVIDIA’s official website.

TrendForce believes Chinese companies will continue to buy existing AI chips in the short term. NVIDIA’s AI accelerator GPUs remain a top priority, including the H20, L20, and L2, which were designed specifically for the Chinese market following the ban.

At the same time, major Chinese AI firms like Huawei will continue to develop general-purpose AI chips to provide AI solutions for local businesses. Beyond developing AI chips, these companies aim to establish a domestic AI server ecosystem in China.

TrendForce notes that a key success factor will be support from the Chinese government through localized projects, such as those involving Chinese telecom operators, which encourage the adoption of domestic AI chips.


(Photo credit: NVIDIA)

Please note that this article cites information from Reuters.


[News] Expert Insights on NVIDIA’s AI Chip Strategy – Downgraded Version Targeted for China, High-End Versions Aimed Overseas

NVIDIA CEO Jensen Huang has reportedly traveled to Taiwan once again, with reports also suggesting a recent visit to China. Industry sources believe NVIDIA is planning to introduce downgraded AI chips to bypass U.S. restrictions on exporting high-end chips to China, and Huang’s visit to China is seen as an effort to alleviate concerns among customers about adopting the downgraded versions.

Experts indicate that due to the expanded U.S. semiconductor restrictions on China, NVIDIA’s sales in the Chinese market will decline. To counter this, NVIDIA may adjust its product portfolio and expand sales of high-end AI chips outside China.

The export of NVIDIA’s A100 and H100 chips to China and Hong Kong was prohibited in September 2022. Subsequently, the A800 and H800 chips, downgraded versions designed for the Chinese market, were also banned from export to China in October of the previous year.

In November 2023, NVIDIA’s management acknowledged the significant impact of the U.S. restrictions on its China revenue for the fourth quarter of 2023, but expressed confidence that revenue from other regions could offset this impact.

CEO Jensen Huang revealed in December in Singapore that NVIDIA was closely collaborating with the U.S. government to ensure compliance with export restrictions on new chips for the Chinese market.

According to reports from Chinese media outlet The Paper, Jensen Huang recently made a low-profile visit to China. The market is closely watching NVIDIA’s AI chip strategy in China and the company’s subsequent development plans in response to U.S. restrictions. The fate of the H20, L20, and L2, AI chips newly designed to comply with U.S. export regulations, remains uncertain and will be closely observed.

Liu Pei-Chen, a researcher and director at the Taiwan Institute of Economic Research, discussed with a CNA reporter NVIDIA’s active plans to introduce downgraded AI chips in China.

The most urgent task, according to Liu, is to persuade Chinese customers to adopt these downgraded AI chips, as Chinese clients believe there isn’t a significant performance gap between NVIDIA’s downgraded offerings and domestically designed AI chips.

Liu mentioned that this is likely the reason why Jensen Huang visited China. It serves as an opportunity to promote NVIDIA’s downgraded AI chips and alleviate concerns among Chinese customers. 


(Photo credit: NVIDIA)

Please note that this article cites information from CNA.


[News] NVIDIA and AMD Clash in AI Chip Market, as TSMC Dominates Orders with Strong Momentum in Advanced Processes

In this year’s intense AI chip battle between NVIDIA and AMD, AMD’s MI300 has entered mass production and shipment in 1H24, gaining positive adoption from clients. In response, NVIDIA is gearing up to launch upgraded AI chips. TSMC emerges as the big winner, securing orders from both NVIDIA and AMD.

Industry sources are optimistic that NVIDIA’s AI chip shipments will reach around 3 million units this year, representing severalfold growth compared to 2023.

With the production ramp-up of the AMD MI300 series chips, the total number of AI high-performance computing chips from NVIDIA and AMD for TSMC in 2024 is anticipated to reach 3.5 million units. This boost in demand is expected to contribute to the utilization rate of TSMC’s advanced nodes.

According to a report from the Economic Daily News, TSMC has not commented on rumors regarding customers and orders.

Industry sources have further noted that the global AI boom ignited in 2023, and 2024 continues to be a focal point for the industry. A notable shift from 2023 is that NVIDIA, which has traditionally dominated the field of high-performance computing (HPC) in AI, is now facing a challenge from AMD’s MI300 series products, which have begun shipping, intensifying competition for market share.

Reportedly, the AMD MI300A series products have commenced mass production and shipment this quarter. The central processing unit (CPU) and graphics processing unit (GPU) tiles are manufactured on TSMC’s 5nm process, while the I/O tile uses TSMC’s 6nm process.

These chips are integrated through TSMC’s new System-on-Integrated-Chip (SoIC) and Chip-on-Wafer-on-Substrate (CoWoS) advanced packaging technologies. Additionally, AMD’s MI300X, which does not integrate the CPU, is also shipping simultaneously.

Compared to NVIDIA’s GH200, which integrates CPU and GPU, and the H200, which focuses solely on GPU computation, AMD’s new AI chip delivers performance exceeding expectations. Its lower price and strong cost-performance advantage are attracting adoption by ODMs.

In response to strong competition from AMD, NVIDIA is upgrading its product line. Apart from its high-demand H200 and GH200, NVIDIA is expected to launch new products such as B100 and GB200, utilizing TSMC’s 3nm process, by the end of the year.


(Photo credit: NVIDIA)

Please note that this article cites information from Economic Daily News.


[News] Nvidia vs. AMD, the Expensive Duel as Two Major Buyers Opt for AMD’s Latest AI GPU

AMD has long aspired to gain more favor for its AI chips, aiming to break into Nvidia’s stronghold in the AI chip market. Key players like Meta, OpenAI, and Microsoft, who are major buyers of AI chips, also desire a diversified market with multiple AI chip suppliers to avoid vendor lock-in issues and reduce costs.

With AMD’s latest AI chip, the Instinct MI300X, slated for significant shipments in early 2024, these three major AI chip buyers have publicly announced plans to place orders, as they consider AMD’s solution a more cost-effective alternative.

At the AMD “Advancing AI” event on December 6th, Meta, OpenAI, Microsoft, and Oracle declared their preference for AMD’s latest AI chip, Instinct MI300X. This marks a groundbreaking move by AI tech giants actively seeking alternatives to Nvidia’s expensive GPUs.

For applications like OpenAI’s ChatGPT, Nvidia GPUs have played a crucial role. However, if the AMD MI300X can provide a significant cost advantage, it has the potential to impact Nvidia’s sales performance and challenge its market dominance in AI chips.

AMD’s Three Major Challenges

AMD grapples with three major challenges: convincing enterprises to consider a substitute, competing with Nvidia’s CUDA software as the industry standard, and setting competitive GPU pricing. Lisa Su, AMD’s CEO, highlighted at the event that the new MI300X architecture features 192GB of high-performance HBM3, delivering faster data transfer while also meeting the demands of larger AI models. Su emphasized that such a notable performance boost translates directly into an enhanced user experience, enabling quicker responses to complex user queries.

However, AMD currently faces critical challenges. Companies that rely heavily on Nvidia may hesitate to invest their time and resources in an alternative GPU supplier like AMD. Su nonetheless believes there is an opportunity to persuade these AI tech giants to adopt AMD GPUs.

Another pivotal concern is that Nvidia has established its CUDA software as the industry standard, resulting in a highly loyal customer base. In response, AMD has made improvements to its ROCm software suite to effectively compete in this space. Lastly, pricing is a crucial issue, as AMD did not disclose the price of the MI300X during the event. Convincing customers to choose AMD over Nvidia, whose chips are priced around USD 40,000 each, will require substantial cost advantages in both the purchase and operation of AMD’s offerings.

The Overall Size of the AI GPU Market is Expected to Reach USD 400 Billion by 2027

AMD has already secured agreements with companies eager for high-performance GPUs to use the MI300X. Meta plans to leverage MI300X GPUs for AI inference tasks like AI graphics, image editing, and AI assistants. On the other hand, Microsoft’s CTO, Kevin Scott, announced that the company will provide access to the MI300X through its Azure web service.

Additionally, OpenAI has decided to have Triton, its GPU programming language dedicated to machine learning algorithm development, support the AMD MI300X. Oracle Cloud Infrastructure (OCI) intends to introduce bare-metal instances based on AMD MI300X GPUs in its high-performance accelerated computing instances for AI.

AMD anticipates that the annual revenue from its GPUs for data centers will reach USD 2 billion by 2024. This projected figure is substantially lower than Nvidia’s most recent quarterly sales related to the data center business (i.e., over USD 14 billion, including sales unrelated to GPUs). AMD emphasizes that with the rising demand for high-end AI chips, the AI GPU market’s overall size is expected to reach USD 400 billion by 2027. This strategic focus on AI GPU products underscores AMD’s optimism about capturing a significant market share. Lisa Su is confident that AMD is poised for success in this endeavor.

Please note that this article cites information from TechNews.

(Image: AMD)


[News] Samsung Reportedly Organizing Next-Gen Chip Fabrication Team, Aiming to Seize the Initiative in the AI Field

According to the South Korean media The Korea Economic Daily’s report, Samsung Electronics has established a new business unit dedicated to developing next-generation chip processing technology. The aim is to secure a leading position in the field of AI chips and foundry services.

The report indicates that the recently formed research team at Samsung will be led by Hyun Sang-jin, who was promoted to the position of general manager on November 29. He has been assigned the responsibility of ensuring a competitive advantage against competitors like TSMC in the technology landscape.

The team will be placed under Samsung’s chip research center within its Device Solutions (DS) division, which oversees its semiconductor business, as mentioned in the report.

Reportedly, insiders claim that Samsung aims for the latest technology developed by the team to lead the industry for the next decade or two, similar to the gate-all-around (GAA) transistor technology introduced by Samsung last year.

Samsung has previously stated that, compared to the previous-generation process, its 3-nanometer GAA process can deliver a 30% improvement in performance, a 50% reduction in power consumption, and a 45% reduction in chip size. In the report, Samsung also claimed it is more energy-efficient than the FinFET technology used in TSMC’s 3-nanometer process.


(Photo credit: Samsung)
