IC Design


[News] RISC-V Architecture in AI Chips Features “Three Advantages,” Meta’s in-house chip MTIA

In the global landscape of self-developed chips, the industry has predominantly embraced the Arm architecture for IC design. However, Meta’s decision to employ the RISC-V architecture in its self-developed AI chip has become a topic of widespread discussion. The growing preference for RISC-V is reportedly attributed to three key advantages: low power consumption, high openness, and relatively low development costs, according to reports from UDN News.

Notably, Meta deploys its in-house AI chip, “MTIA,” exclusively within its data centers to expedite AI computation and inference. In this highly tailored setting, the choice ensures not only robust computational capabilities but also low power consumption, with anticipated power usage of under 25W per RISC-V core. By strategically combining the RISC-V architecture with GPU accelerators or Arm-based chips, Meta aims to reduce overall power consumption while simultaneously boosting computing power.

Meta’s confirmation that it has adopted RISC-V architecture from Andes Technology Corporation, a CPU IP and platform IP supplier from Taiwan, for AI chip development underscores RISC-V’s capability to support high-speed computational tasks and its suitability for integration into advanced manufacturing processes. This move positions RISC-V to make significant inroads into the AI computing market and to stand as a third computing architecture, joining the ranks of x86 and Arm.

Regarding the development potential of different chip architectures in the AI chip market, TrendForce points out that GPUs (from NVIDIA, AMD, etc.) still dominate the overall AI market, followed by the Arm architecture, which sees active investment from NVIDIA, CSPs, and others, including in major data centers. RISC-V, on the other hand, represents another niche, targeting the open-source AI market and niche enterprise applications.
(Image: Meta)


[Insights] Infinite Opportunities in Automotive Sector as IC Design Companies Compete for Self-Driving SoC

TrendForce’s report on the self-driving System-on-Chip (SoC) market shows rapid growth, with the market anticipated to soar to $28 billion by 2026 at a Compound Annual Growth Rate (CAGR) of approximately 27% from 2022 to 2026.

  1. Rapid Growth in the Self-Driving SoC Market Becomes a Key Global Opportunity for IC Design Companies

In 2022, the global market for self-driving SoCs was approximately $10.8 billion, and it is projected to grow to $12.7 billion in 2023, an 18% YoY increase. Fueled by the rising penetration of autonomous driving, the market is expected to reach $28 billion in 2026, with a CAGR of approximately 27% from 2022 to 2026.
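As a quick sanity check on those figures, the stated growth rates follow directly from the 2022, 2023, and 2026 market sizes. The sketch below (a hypothetical helper, not part of TrendForce’s report) reproduces the arithmetic:

```python
# Hypothetical helper to verify the market-growth figures cited above;
# not part of TrendForce's report.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over `years` periods."""
    return (end_value / start_value) ** (1 / years) - 1

# 2022 -> 2023: $10.8B -> $12.7B
print(f"YoY 2023: {12.7 / 10.8 - 1:.0%}")            # ~18%, as stated
# 2022 -> 2026: $10.8B -> $28B over four years
print(f"CAGR 2022-2026: {cagr(10.8, 28.0, 4):.0%}")  # ~27%, as stated
```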

Given the slowing growth momentum in the consumer electronics market, self-driving SoC has emerged as a crucial global opportunity for IC design companies.

  2. Computing Power Reigns Supreme, with NVIDIA and Qualcomm Leading the Pack

Due to factors such as regulations, technology, costs, and network speed, most automakers currently operate at Level 2 autonomy. In practical terms, computing power exceeding 100 TOPS (INT8) is sufficient. However, as vehicles typically have a lifespan of over 15 years, future upgrades in autonomy levels will rely on Over-The-Air (OTA) updates, necessitating reserved computing power.

Based on the current choices made by automakers, computing power emerges as a primary consideration. Consequently, NVIDIA and Qualcomm are poised to hold a competitive edge. In contrast, Mobileye’s EyeQ Ultra, set to enter mass production in 2025, offers only 176 TOPS, making it susceptible to significant competitive pressure.

  3. Software-Hardware Integration, Decoupling, and Openness as Key Competitive Factors

Seamless integration of software and hardware can maximize the computational power of SoCs. Considering the imperative for automakers to reduce costs and enhance efficiency, the degree of integration becomes a pivotal factor in a company’s competitiveness. However, not only does integration matter, but the ability to decouple software and hardware proves even more critical.

Through a high degree of decoupling, automakers can continually update SoC functionality via Over-The-Air (OTA) updates. The openness of the software ecosystem assists automakers in establishing differentiation, serving as a competitive imperative that IC design firms cannot overlook.


[News] Microsoft First In-House AI Chip “Maia” Produced by TSMC’s 5nm

On the 15th, Microsoft introduced its first in-house AI chip, “Maia.” This move marks the entry of the world’s second-largest cloud service provider (CSP) into the domain of self-developed AI chips. Concurrently, Microsoft introduced the cloud computing processor “Cobalt,” set to be deployed alongside Maia in select Microsoft data centers early next year. Both cutting-edge chips are produced using TSMC’s advanced 5nm process, as reported by UDN News.

Amidst the global AI fervor, the trend of CSPs developing their own AI chips has gained momentum. Key players like Amazon, Google, and Meta have already ventured into this territory. Microsoft, positioned as the second-largest CSP globally, joined the league on the 15th, unveiling its inaugural self-developed AI chip, Maia, at the annual Ignite developer conference.

These AI chips developed by CSPs are not intended for external sale; rather, they are exclusively reserved for in-house use. However, given the commanding presence of the top four CSPs in the global market, a significant business opportunity unfolds. Market analysts anticipate that, with the exception of Google—aligned with Samsung for chip production—other major CSPs will likely turn to TSMC for the production of their AI self-developed chips.

TSMC maintains its consistent policy of not commenting on specific customer products and order details.

TSMC’s recent earnings call disclosed that the 5nm process accounted for 37% of Q3 shipments this year, the largest contribution of any node. Having brought its first 5nm plant into mass production in 2020, TSMC has since introduced technologies such as N4, N4P, N4X, and N5A, continually reinforcing its 5nm family capabilities.

Maia is tailored for processing large language models. According to Microsoft, it will initially serve the company’s own services, such as its $30-per-month AI assistant, “Copilot,” and offers Azure cloud customers a customizable alternative to NVIDIA chips.

Borkar, Corporate VP, Azure Hardware Systems & Infrastructure at Microsoft, revealed that Microsoft has been testing the Maia chip in Bing search engine and Office AI products. Notably, Microsoft has been relying on Nvidia chips for training GPT models in collaboration with OpenAI, and Maia is currently undergoing testing.

Gulia, Executive VP of Microsoft Cloud and AI Group, emphasized that starting next year, Microsoft customers using Bing, Microsoft 365, and Azure OpenAI services will witness the performance capabilities of Maia.

While actively advancing its in-house AI chip development, Microsoft underscores its commitment to offering cloud services to Azure customers utilizing the latest flagship chips from Nvidia and AMD, sustaining existing collaborations.

As for the cloud computing processor Cobalt, a 128-core chip built on the Arm architecture, Microsoft says its capabilities are comparable to offerings from Intel and AMD. Drawing on chip designs from devices like smartphones for enhanced energy efficiency, Cobalt aims to challenge major cloud competitors, including Amazon.
(Image: Microsoft)


[Insights] China Advances In-House AI Chip Development Despite U.S. Controls

On October 17th, the U.S. Department of Commerce announced an expansion of its export controls, further tightening restrictions. In addition to previously restricted products like the NVIDIA A100, H100, and AMD MI200 series, the updated measures now cover a broader range, encompassing the NVIDIA A800, H800, L40S, L40, L42, AMD MI300 series, Intel Gaudi 2/3, and more, barring their import into China. This move is expected to hasten the adoption of domestically developed chips by Chinese cloud service providers (CSPs).

TrendForce’s Insights:

  1. Chinese CSPs Strategically Invest in Both In-House Chip Development and Related Companies

In terms of in-house chip development by Chinese CSPs, Baidu announced the completion of tape-out for its first-generation Kunlun chip, built on its XPU architecture, in 2019. It entered mass production in early 2020, with the second generation in production by 2021 boasting a 2-3x performance improvement; the third generation is expected to be released in 2024. Aside from independent R&D, Baidu has invested in related companies like Nebula-Matrix, Phytium, and Smartnvy. In March 2021, Baidu also established Kunlunxin by spinning off its AI chip business.

Alibaba fully acquired Chinese CPU IP supplier C-Sky in April 2018 and established T-Head Semiconductor in September of the same year. Its first self-developed chip, Hanguang 800, was launched in September 2020. Alibaba has also invested in Chinese memory giant CXMT, AI IC design companies Vastaitech and Cambricon, and others.

Tencent initially adopted an investment strategy, backing Chinese AI chip company Enflame Tech in 2018. In 2020, it established the Cloud and Smart Industries Group (CSIG), focusing on IC design and R&D. In November 2021, Tencent introduced the AI inference chip Zixiao, which uses 2.5D packaging and targets image and video processing, natural language processing, and search recommendation.

Huawei’s HiSilicon unveiled the Ascend 910 in August 2019, accompanied by the AI open-source computing framework MindSpore. However, after Huawei was placed on the U.S. entity list, the Ascend 910 faced production restrictions. In August 2023, Chinese tech company iFLYTEK jointly introduced the “StarDesk AI Workstation” with Huawei, featuring the new Ascend 910B AI chip, likely manufactured on SMIC’s N+2 process, signifying Huawei’s return to self-developed AI chips.

  2. Some Chinese Companies Turn to Purchasing Huawei’s Ascend 910B, Yet It Lags Behind the A800

Huawei’s AI chips are not solely for internal use but are also sold to other Chinese companies. Baidu reportedly ordered 1,600 Ascend 910B chips from Huawei in August, valued at approximately 450 million RMB, to be used in 200 Baidu data center servers. The delivery is expected to be completed by the end of 2023, with over 60% of orders delivered as of October. This indicates Huawei’s capability to sell AI chips to other Chinese companies.

Huawei’s Ascend 910B, released in the second half of 2023, boasts hardware specifications comparable to the NVIDIA A800. According to tests conducted by Chinese companies, its performance is around 80% of the A800’s. In terms of software ecosystem, however, Huawei still faces a significant gap compared to NVIDIA.

Overall, using Ascend 910B for AI training may be less efficient than A800. Yet with the tightening U.S. policies, Chinese companies are compelled to turn to Ascend 910B. As user adoption increases, Huawei’s ecosystem is expected to improve gradually, leading more Chinese companies to adopt its AI chips. Nevertheless, this will be a protracted process.



[News] A battle on 4nm: AMD Teams Up with Samsung, while Google Weighs Supplier Split

Rumors swirl around AMD’s upcoming chip architecture, codenamed “Prometheus,” featuring the Zen 5C core. As reported by TechNews, the chip is poised to leverage both TSMC’s 3nm and Samsung’s 4nm processes simultaneously, marking a shift in the competitive landscape from process nodes, yield, and cost toward factors like capacity, ecosystem, and geopolitics, all depending on customer considerations.

Examining yields, TSMC claims an estimated 80% yield for its 4nm process, while Samsung’s has surged from 50% to an impressive 75%, nearing TSMC’s level and raising the likelihood of chip customers returning. Speculation abounds that major players such as Qualcomm and NVIDIA may reconsider their suppliers, with industry sources suggesting Samsung’s 4nm capacity is roughly half of TSMC’s.

Revegnus, a reputable source on X (formerly Twitter), cited information from high-level Apple meetings indicating a 63% yield for TSMC’s 3nm process at double the price of its 4nm process. In the 4nm realm, Samsung’s yield mirrors TSMC’s, with Samsung’s yield recovery proving faster than expected.

Consequently, with Samsung’s significant improvements in yield and capacity, coupled with TSMC’s decision to raise prices, major clients may explore secondary suppliers to diversify outsourcing orders, factoring in considerations such as cost and geopolitics. Recent reports suggest Samsung is in final negotiations for a 4nm collaboration with AMD, planning to shift some 4nm processor orders from TSMC to Samsung.

Beyond AMD, the Tensor G3 processor in Google’s Pixel 8 series this year adopts Samsung’s 4nm process. Samsung’s new fab in Taylor, Texas, counts Samsung’s own Galaxy smartphone business as its inaugural customer, producing Exynos processors.

Furthermore, Samsung announced that U.S.-based AI solution provider Groq will entrust it to manufacture next-generation AI chips on the 4nm process, with production slated to commence in 2025, marking the first external order for the new Texas plant.

As for TSMC’s 4nm clients, alongside longstanding partners like Apple, NVIDIA, Qualcomm, MediaTek, AMD, and Intel, indications suggest Google may move the Tensor G4 to TSMC’s 4nm process, with the Tensor G5 to be produced on TSMC’s 3nm process, ending Google’s current collaboration with Samsung. TSMC’s debut as Google’s Tensor manufacturer, however, is anticipated to be delayed until 2025.

Last year, rumors circulated about Tesla, the electric vehicle giant, shifting orders for the 5th generation self-driving chip, Hardware 5 (HW 5.0), to TSMC. This decision was prompted by Samsung’s lagging 4nm process yield at that time. However, with Samsung’s improved yield, industry inclination leans towards splitting orders between the two companies.
