ASIC


2023-10-13

[News] TSMC’s Investor Meeting on the 19th with Market’s Attention on Five Key Topics

TSMC is set to conduct an investor meeting on the 19th, with Morgan Stanley, UBS, and Bank of America Securities releasing their latest reports ahead of the event. These reports highlight five main areas of interest:

1. Q4 Operational Outlook
2. Future Gross Margin Trends
3. Potential Adjustments to Full-Year Revenue Estimates and Capital Expenditure
4. Economic and Operational Outlook for the Coming Year
5. 2nm Production Plans

Despite market uncertainties surrounding factors such as end-market demand, the Chinese mainland’s economic trajectory, and semiconductor industry cycles, Morgan Stanley Securities anticipates a 10% QoQ increase in TSMC’s Q4 revenue. They attribute this to strong demand for AI GPUs and ASICs, urgent orders from products like smartphone system-on-chips (SoCs) and PC GPUs, as well as sustained demand for Apple’s iPhones. Additionally, the gross margin is expected to benefit from the depreciation of the New Taiwan Dollar, potentially reaching 53%, surpassing the market consensus of 52.2%.

Bank of America Securities similarly projects a 10% QoQ revenue growth for TSMC in Q4, with a gross margin estimate of 52.7%. UBS Securities, on the other hand, has adjusted its Q4 revenue growth forecast from 10% to 7% while maintaining its expectation of a 10% YoY decline in full-year revenue.

In terms of capital expenditures, Morgan Stanley Securities, taking into account factors such as Intel’s 3nm outsourcing and delays in the U.S. factory expansion, estimates that TSMC’s capital expenditures will remain around $28 billion for both this year and the next. UBS Securities, however, believes that due to a slower short-term business recovery, capital expenditures for this year and the next will be adjusted to $31 billion and $30 billion, respectively.


(Photo credit: TSMC)

2023-06-29

AI and HPC Demand Set to Boost HBM Volume by Almost 60% in 2023, Says TrendForce

High Bandwidth Memory (HBM) is emerging as the preferred solution for overcoming memory transfer speed restrictions due to the bandwidth limitations of DDR SDRAM in high-speed computation. HBM is recognized for its revolutionary transmission efficiency and plays a pivotal role in allowing core computational components to operate at their maximum capacity. Top-tier AI server GPUs have set a new industry standard by primarily using HBM. TrendForce forecasts that global demand for HBM will experience almost 60% growth annually in 2023, reaching 290 million GB, with a further 30% growth in 2024.
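As a quick sanity check, the growth rates above imply the surrounding years' volumes. The 2022 base and 2024 figure below are derived from TrendForce's stated 2023 volume and growth rates, not separately reported:

```python
# Implied HBM demand volumes, back-calculated from TrendForce's figures.
hbm_2023_gb = 290e6   # 290 million GB forecast for 2023
growth_2023 = 0.60    # ~60% YoY growth into 2023
growth_2024 = 0.30    # ~30% YoY growth into 2024

implied_2022 = hbm_2023_gb / (1 + growth_2023)    # implied 2022 base
projected_2024 = hbm_2023_gb * (1 + growth_2024)  # implied 2024 volume

print(f"implied 2022 base: {implied_2022 / 1e6:.0f} million GB")
print(f"projected 2024:    {projected_2024 / 1e6:.0f} million GB")
```

This puts the implied 2022 base at roughly 181 million GB and the 2024 projection at about 377 million GB.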

TrendForce forecasts that by 2025, taking into account five large-scale AIGC products equivalent to ChatGPT, 25 mid-size AIGC products equivalent to Midjourney, and 80 small AIGC products, the minimum computing resources required globally could range from 145,600 to 233,700 Nvidia A100 GPUs. Emerging technologies such as supercomputers, 8K video streaming, and AR/VR, among others, are expected to simultaneously increase the workload on cloud computing systems due to escalating demands for high-speed computing.

HBM is unequivocally a superior solution for building high-speed computing platforms, thanks to its higher bandwidth and lower energy consumption compared to DDR SDRAM. This distinction is clear when comparing DDR4 SDRAM and DDR5 SDRAM, released in 2014 and 2020 respectively, whose bandwidths only differed by a factor of two. Regardless of whether DDR5 or the future DDR6 is used, the quest for higher transmission performance will inevitably lead to an increase in power consumption, which could potentially affect system performance adversely. Taking HBM3 and DDR5 as examples, the former’s bandwidth is 15 times that of the latter and can be further enhanced by adding more stacked chips. Furthermore, HBM can replace a portion of GDDR SDRAM or DDR SDRAM, thus managing power consumption more effectively.
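To illustrate where the bandwidth gap comes from, peak bandwidth can be estimated as bus width times per-pin data rate. The configurations below are representative values chosen for illustration, not figures from the article; the exact multiple (such as the 15x quoted above) depends on which speed grades are being compared:

```python
def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) x per-pin rate (Gb/s) / 8."""
    return bus_width_bits * pin_rate_gbps / 8

# Representative (assumed) configurations:
ddr5 = peak_bandwidth_gbps(64, 4.8)    # one DDR5-4800 channel: 64-bit bus
hbm3 = peak_bandwidth_gbps(1024, 6.4)  # one HBM3 stack: 1024-bit interface

print(f"DDR5 channel: {ddr5:.1f} GB/s")
print(f"HBM3 stack:   {hbm3:.1f} GB/s")
print(f"ratio:        {hbm3 / ddr5:.0f}x")
```

The wide 1024-bit interface, rather than a faster per-pin rate, is what gives HBM its order-of-magnitude bandwidth advantage per device, and stacking more dies multiplies it further.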

TrendForce concludes that the current driving force behind the increasing demand is AI servers equipped with Nvidia A100, H100, AMD MI300, and large CSPs such as Google and AWS, which are developing their own ASICs. It is estimated that the shipment volume of AI servers, including those equipped with GPUs, FPGAs, and ASICs, will reach nearly 1.2 million units in 2023, marking an annual growth rate of almost 38%. TrendForce also anticipates a concurrent surge in the shipment volume of AI chips, with growth potentially exceeding 50%.

2023-06-14

AI Servers: The Savior of the Supply Chain, Examining Key Industries

NVIDIA’s robust financial report reveals the true impact of AI on the technology industry, particularly in the AI server supply chain.

2023-05-22

Beyond the SoC Paradigm: Where Are Next-Gen Mobile AI Chips Going to Land?

The excitement surrounding ChatGPT has sparked a new era in generative AI. This fresh technological whirlwind is revolutionizing everything, from cloud-based AI servers all the way down to edge-computing in smartphones.

Given that generative AI has enormous potential to foster new applications and boost user productivity, smartphones have unsurprisingly become a crucial vehicle for AI tech. Even though the computational power of an end device isn’t on par with the cloud, it has the double benefit of reducing the overall cost of computation and protecting user privacy. This is primarily why smartphone OEMs started using AI chips to explore and implement new features a few years ago.

However, Oppo’s recent decision to shut down its chip design subsidiary, Zheku, has cast doubt on the future of smartphone OEMs’ self-developed chips, bringing the smartphone AI chip market into focus.

Pressing Need to Speed Up AI Chip Iterations

The industry’s current approach to running generative AI models on end devices is two-pronged: on the software side, efforts focus on shrinking models to lessen the burden and energy consumption of chips; on the hardware side, the goal is to increase computational power and optimize energy use through process shrinkage and architectural upgrades.

IC design houses, like Qualcomm with its Snapdragon 8 Gen 2, are now hurrying to develop SoC products capable of running these generative AI base models.
Here’s the tricky part, though: models are evolving at a pace far exceeding the SoC development cycle, with major updates like GPT’s arriving roughly every six months. This gap between hardware iterations and new AI model advancements may only widen, making the rapid expansion of computational requirements the major pain point that hardware solution providers need to address.

Top-Tier OEMs Pioneering Add-on AI Accelerators

It’s clear that in this race for AI computational power, the past reliance on SoCs is being challenged. Top-tier smartphone OEMs are no longer merely depending on standard products from SoC suppliers. Instead, they’re aggressively adopting AI accelerator chips to fill the computational gap.

Both the integrated and the add-on AI accelerator approaches first appeared in 2017:

  • Integrated: This strategy is represented by Huawei’s Kirin 970 and Apple’s A11 Bionic, which incorporated an AI engine within the SoC.
  • Add-on: Initially implemented by the Google Pixel 2, which paired a custom Pixel Visual Core chip with the Snapdragon 835. It wasn’t until the 2021 Pixel 6 series, which introduced Google’s self-developed Tensor SoC, that the acceleration unit was integrated directly into the Tensor.

Clearly, OEMs capable of developing their own SoCs usually embed their models into AI accelerators at the design stage. This hardware-software synergy supplies the required computing power for specific AI scenarios.

New Strategic Models on the Rise

For OEMs without self-development capabilities, the hefty cost of SoC development keeps them reliant on chip manufacturers’ SoC iterations. Yet, they’re also applying new strategies within the supply chain to keep pace with swift changes.

Here’s the interesting part: brands are leveraging simpler specialized chips to boost AI-enabled applications, making standalone ICs like ISPs (image signal processors) pivotal for new photography and display features. Meanwhile, we’re also seeing potential advancements in productivity tools, from voice assistants to photo editing, where the implementation of small-scale ASICs is being seriously considered to fulfill computational demands.

Judging from Xiaomi’s collaboration with Altek and Vivo’s joint effort with Novatek on ISP development, the future looks bright for ASICs, opening up opportunities for small-scale IC design houses and IP service providers.

Responding to the trend, SoC leader MediaTek is embracing an open 5G architecture strategy for market expansion through licensing and custom services. However, there’s speculation about OEMs possibly replacing MediaTek’s standard IP with self-developed ones for deeper product differentiation.

Looking at all this, it’s clear that the battle over AI chips continues, with no single winning strategy for speeding up smartphone AI chip iteration.

Considering the substantial resources required for chip development and the saturation of the smartphone market, maintaining chip-related strategies adds a layer of uncertainty for OEMs. With Oppo’s move to discontinue its chip R&D, other brands like Vivo and Xiaomi are likely reconsidering their game plans. The future, therefore, warrants close watch.

Read more:

AI Sparks a Revolution Up In the Cloud

2023-05-08

China’s Pivot: Tech Giants Seek Self-Sufficiency Amid US Chip Ban

The US ban on Chinese industries has left China struggling with a seemingly severe shortage of chips. However, China’s tech giants refuse to surrender; instead, they’re pivoting quickly to survive the game.

Since 2019, the US Department of Commerce has added leading Chinese companies like Huawei to its entity list. Restrictions were expanded in 2020 to include semiconductor manufacturing, dealing a major blow to SMIC’s advanced processes below 14nm.

Starting in 2021, the US has been intensifying its control by placing more IC design houses on the list, which include Jingjia (GPU), Shenwei (CPU), Loongson Tech (CPU), Cambricon (AI), Wayzim (RF&GPS), and Yangtze (NAND Flash). Furthermore, the export of advanced EDA tools, equipment, CPUs, and GPUs to China has also been banned.

The goal of such measures is to hinder China’s progress in high-tech fields such as 5G/6G, AI, Cloud computing, and autonomous driving by eroding the dominance of its tech giants over time.

China has been aggressively pursuing a policy of domestic substitution in response to the US’s increasing control. As part of this effort, leading domestic IC design companies like Horizon, Cambricon, Enflame, Biren, Gigadevice, and Nations Technologies have been ramping up their efforts for comprehensive chip upgrades in a variety of applications.

Chinese Brands Ramping up for ASICs

There is a particularly intriguing phenomenon in recent years. Since 2019, China’s leading brands have been venturing into chip design to develop highly specialized ASICs (Application Specific Integrated Circuits) at an unprecedented speed. This move is aimed at ensuring a stable supply of chips and also advancing their technical development.

Here is a closer look at how top companies across diverse application fields integrate ASIC chips into their technology roadmaps:

  • AI Cloud computing: Alibaba, Baidu, Tencent

China’s tech giants are leveraging advanced foundry processes, such as TSMC’s 5nm and Samsung’s 7nm, to produce cutting-edge AI chips for high-end applications like cloud computing, image coding, AI computing, and network chips.

Alibaba launched its AI chip, Hanguang 800, and server CPU, Yitian 710, in 2019 and 2021, respectively. Both chips were manufactured on TSMC’s 5nm process and are extensively used on Alibaba’s cloud computing platform.

In December 2019, Baidu released its AI chip, Kunlun Xin, built on Samsung’s 14nm process, followed by a second generation on a 7nm process, for AI and image coding.

  • Smartphone: Xiaomi, Vivo, OPPO

Due to the high technical threshold of the SoC technology used in smartphones, mobile phone brands mainly focus their self-developed chips on optimizing image, audio, and power processing.

In 2021, Xiaomi released the ISP Surge C1, followed by the PMIC Surge P1. Vivo first released the ISP V1 in September 2021, followed by the upgraded V1+ in April 2022 and the V2 in November 2022.
OPPO, on the other hand, unveiled the MariSilicon X NPU in December 2021, which enhances smartphone image processing using TSMC’s 6nm process, and later in 2022 revealed the MariSilicon Y Bluetooth audio SoC, built on TSMC’s 6nm RF process.

  • Home appliance: Konka, Midea, Changhong, Skyworth

The brands are focusing primarily on MCU and PMIC chips that are essential to a wide range of home appliances. They’re also incorporating SoC chips into their smart TVs.

For example, Hisense jumped into the SoC game in January 2022 by releasing an 8K AI image chip for its smart TVs, while Changhong produced an MCU with a RISC-V architecture on a 40nm process in December 2022.

  • Autonomous driving: NIO, Xiaopeng, Li Auto, BYD

The leading companies are developing ISPs and highly complex SoC chips for autonomous driving, and that complexity has made for a slower development process.

In 2020, NIO formed a semiconductor design team for autonomous driving chips and ISPs. Xiaopeng started its autonomous driving and ISP chip R&D project in the first half of 2021. Li Auto established two subsidiaries in 2022, focusing primarily on power semiconductors and ISP chips.

Finally, BYD, which has a long history of working on MCU and power semiconductor components, also announced its entry into the autonomous driving chip market in 2022.

Navigating the US’ Tech Crackdown

So why are these brands investing so heavily in self-developed ASICs?

One reason is to avoid the risks associated with export control policies from the US and its allies. Developing their own chips would mitigate the risk of supply chain disruptions caused by potential blockades, ensuring a stable supply and the sustainability of their technology roadmap.

In addition, there are many internal incentives for these brands – for instance, companies that have self-developed chips will be eligible for more government subsidies, as this aligns with the government’s aggressive policy to foster the semiconductor industry. Brands can also reduce their reliance on external suppliers by using their own ASIC chips, which can further lower the operating costs.

Technology-wise, ASIC chips allow brands to enhance the features they need and enable tighter integration with their software, which can yield efficiency gains at the system level. Similar strategies are being employed by Google and AWS with their AI chips, as well as by Apple with its M1 SoC.

All things considered, we may well see a persistent trend of more self-developed ASIC chips from Chinese brands, which could reshape China’s semiconductor supply chain from the ground up.
