[News] Tech Giants Launch AI Arms Race, Aiming to Spark a Wave of Smartphone and Computer Upgrades

According to a report from CNA, the potential business opportunities in artificial intelligence have spurred major tech giants, with NVIDIA, AMD, Intel, MediaTek, and Qualcomm successively launching products featuring the latest AI capabilities.

This AI arms race has expanded its battleground from servers to smartphones and laptops, as companies hope that the infusion of AI will inject vitality into mature markets.

Generative AI is developing rapidly, with MediaTek calling this year the “Generative AI Year.” The company anticipates a potential paradigm shift in the IC design industry, one that boosts productivity and significantly impacts IC products.

This not only brings forth new applications but also propels the demand for new algorithms and computational processors.

MediaTek and Qualcomm recently introduced their flagship 5G generative AI mobile chips, the Dimensity 9300 and Snapdragon 8 Gen 3, respectively. The Dimensity 9300, with its built-in APU 790, enables faster and more secure edge AI computing, capable of generating images within 1 second.

MediaTek points out that the smartphone industry is experiencing a gradual growth slowdown, and generative AI is expected to provide new services, potentially stimulating a new wave of consumer demand growth. Smartphones equipped with the Dimensity 9300 and Snapdragon 8 Gen 3 are set to be released gradually by the end of this year.

Targeting the AI personal computer (PC) market, Intel is set to launch the Meteor Lake processor on December 14. Two major computer brands, Acer and ASUS, are both customers for Intel’s AI PC processors.

High-speed transmission interface chip manufacturer Parade and network communication chip manufacturer Realtek are optimistic. The integration of AI features into personal computers and laptops is expected to stimulate demand for upgrades, leading to a potential increase in PC shipments next year.

TrendForce’s report on November 8 indicated that the emerging AI PC market does not yet have a clear definition. Due to the high costs of upgrading both software and hardware associated with AI PCs, early development will focus on high-end business users and content creators.

For consumers, current PCs offer a range of cloud AI applications sufficient for daily life and entertainment needs. However, without the emergence of a groundbreaking AI application in the short term to significantly enhance the AI experience, it will be challenging to rapidly boost the adoption of consumer AI PCs.

For the average consumer, with disposable income becoming increasingly tight, the prospect of purchasing an expensive, non-essential computer is likely wishful thinking on the part of suppliers. Nevertheless, looking to the long term, the potential development of more diverse AI tools—along with a price reduction—may still lead to a higher adoption rate of consumer AI PCs.


(Image: Qualcomm)


[Insights] Infinite Opportunities in Automotive Sector as IC Design Companies Compete for Self-Driving SoC

TrendForce’s report on the self-driving System-on-Chip (SoC) market shows rapid growth, with the market anticipated to soar to $28 billion by 2026, a Compound Annual Growth Rate (CAGR) of approximately 27% from 2022 to 2026.

  1. Rapid Growth in the Self-Driving SoC Market Becomes a Key Global Opportunity for IC Design Companies

In 2022, the global market for self-driving SoC is approximately $10.8 billion, and it is projected to grow to $12.7 billion in 2023, representing an 18% YoY increase. Fueled by the rising penetration of autonomous driving, the market is expected to reach $28 billion in 2026, with a CAGR of approximately 27% from 2022 to 2026.
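The growth figures above can be sanity-checked with a quick CAGR calculation (a minimal sketch; the function name is illustrative, not from the report):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# 2022 -> 2023: $10.8B to $12.7B over one year
yoy_2023 = cagr(10.8, 12.7, 1)        # ~0.18, consistent with the ~18% YoY increase

# 2022 -> 2026: $10.8B to $28B over four years
cagr_2022_2026 = cagr(10.8, 28.0, 4)  # ~0.27, consistent with the ~27% CAGR
```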

Given the slowing growth momentum in the consumer electronics market, self-driving SoC has emerged as a crucial global opportunity for IC design companies.

  2. Computing Power Reigns Supreme, with NVIDIA and Qualcomm Leading the Pack

Due to factors such as regulations, technology, costs, and network speed, most automakers currently operate at Level 2 autonomy. In practical terms, computing power exceeding 100 TOPS (INT8) is sufficient. However, as vehicles typically have a lifespan of over 15 years, future upgrades in autonomy levels will rely on Over-The-Air (OTA) updates, necessitating reserved computing power.

Based on the current choices made by automakers, computing power emerges as a primary consideration. Consequently, NVIDIA and Qualcomm are poised to hold a competitive edge. In contrast, Mobileye’s EyeQ Ultra, set to enter mass production in 2025, offers only 176 TOPS, making it susceptible to significant competitive pressure.

  3. Software-Hardware Integration, Decoupling, and Openness as Key Competitive Factors

Seamless integration of software and hardware can maximize the computational power of SoCs. Considering the imperative for automakers to reduce costs and enhance efficiency, the degree of integration becomes a pivotal factor in a company’s competitiveness. However, not only does integration matter, but the ability to decouple software and hardware proves even more critical.

Through a high degree of decoupling, automakers can continually update SoC functionality via Over-The-Air (OTA) updates. The openness of the software ecosystem assists automakers in establishing differentiation, serving as a competitive imperative that IC design firms cannot overlook.


[News] MediaTek Teams Up with Meta to Develop Next-Gen AR Smart Glasses, Edging Out Qualcomm

According to a report from anue, major IC design firm MediaTek recently held its MediaTek 2023 Summit in the United States and announced a new collaboration with Meta. MediaTek will take charge of developing the chip for Ray-Ban Meta smart glasses, replacing competitor Qualcomm’s Snapdragon AR1 Gen 1 chip.

Notably, in October 2023, Meta launched the new generation of Ray-Ban Meta smart glasses. These feature the Qualcomm Snapdragon AR1 Gen 1 chip, a 12-megapixel camera, and five microphones for sending and receiving messages. They are the world’s first smart glasses with Facebook and Instagram live-streaming capabilities, enabling the recording of high-quality videos.

MediaTek has long been dedicated to developing low-power, high-performance SoCs. This collaboration with Meta focuses on jointly creating a custom chip specifically designed for AR smart glasses, meeting the requirements of lightweight and compact devices. The collaborative product, a future generation of Ray-Ban Meta smart glasses, is expected to launch later.

(Photo credit: MediaTek)


[Insights] MediaTek’s New Dimensity 9300 Flagship Chip Impresses in Benchmarks, but Qualcomm Remains a Strong Competitor

On November 6, 2023, MediaTek unveiled its latest flagship smartphone SoC, the Dimensity 9300. Benchmark data reveals that the Dimensity 9300 outperforms Qualcomm’s flagship chips in both CPU (multi-core) and GPU performance.

However, Qualcomm enjoys a strong brand reputation and recognition in the consumer market, making it a challenging task for MediaTek to capture the top market position for high-end SoCs in the Android camp with this chip.

TrendForce’s Insights:

  1. Brand New Design Boosts Dimensity 9300 Benchmark Performance

Diverging from the conventional “big core + small core” architecture, MediaTek introduces a groundbreaking “4+4” design in the Dimensity 9300. This chip comprises 4 Cortex-X4 cores (clocking up to 3.25GHz) and 4 Cortex-A720 cores (running at 2.0GHz).

According to MediaTek’s official data, this new architecture delivers a 40% increase in multi-core peak performance compared to the previous-generation Dimensity 9200. Additionally, it achieves a 33% reduction in power consumption while maintaining the same level of performance.

In addition to its outstanding chip performance and energy efficiency, the Dimensity 9300 achieves a significant breakthrough in the field of AI. According to official data, powered by the seventh-generation AI processor APU 790, the Dimensity 9300 achieves AI edge computing processing speeds up to 8 times faster than the previous generation. In practical terms, this means it can generate image content from text in less than 1 second.

Furthermore, MediaTek’s exclusive memory compression technology, known as NeuroPilot Compression, significantly reduces the memory footprint of AI Large Language Models (LLMs) on end-user devices. This enables smooth on-device operation of LLMs with 1 billion, 7 billion, 13 billion, and even up to 33 billion parameters.
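As a rough illustration of why such compression matters at the edge (a generic back-of-envelope estimate, not MediaTek’s actual NeuroPilot Compression technique), the raw weight footprint of an LLM scales with parameter count and numeric precision:

```python
def weight_footprint_gb(params_billions: float, bits_per_param: float) -> float:
    """Approximate memory needed just to store model weights, in GB.

    Ignores activations, KV cache, and runtime overhead; illustrative only.
    """
    total_bytes = params_billions * 1e9 * bits_per_param / 8
    return total_bytes / 1e9

# A 7B-parameter model at 16-bit precision needs ~14 GB for weights alone,
# well beyond a typical phone's memory budget; quantizing to 4 bits cuts
# that to ~3.5 GB, which is why aggressive compression is a prerequisite
# for running such models on smartphones.
fp16_gb = weight_footprint_gb(7, 16)  # 14.0
int4_gb = weight_footprint_gb(7, 4)   # 3.5
```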

  2. Dimensity 9300 Outperforms, but Qualcomm Remains a Strong Contender

While Dimensity 9300 exhibits superior performance in both CPU (multi-core) and GPU compared to Qualcomm’s concurrent flagship Snapdragon 8 Gen 3 SoC, a consumer survey conducted by a well-known international tech product comparison website tells a different story.

Out of 1,833 valid respondents, even after being informed about Dimensity 9300’s better benchmark results, 946 people (representing 51.6% of respondents) still chose the Qualcomm chip with lower scores.

This is likely because major smartphone manufacturers have historically favored Qualcomm over MediaTek for their flagship devices. Qualcomm has built a stronger brand image and enjoys higher recognition in the market. Therefore, MediaTek faces a formidable challenge in its quest to capture the top spot for Android high-end smartphone SoC market share with Dimensity 9300.

Furthermore, since the current benchmark results are derived from engineering samples, and the ultimate products from smartphone manufacturers have not been officially released, it remains to be seen whether Dimensity 9300 can deliver the expected performance efficiency. The final verdict on its performance will depend on the adjustments and optimizations made by these manufacturers.


(Photo credit: MediaTek)


[News] AI PCs and Smartphones on the Rise as Generative AI Expands to the Edge

The fusion of AI-generated content (AIGC) with end-user devices is highlighting the importance of personalized user experiences, cost efficiency, and faster response times in generative AI applications. Major companies like Lenovo and Xiaomi are ramping up their efforts in edge AI, extending the generative AI wave from the cloud to the edge and end-user devices.

On October 24, Lenovo hosted the ninth Lenovo Tech World, announcing deepening collaborations with companies like Microsoft, NVIDIA, Intel, AMD, and Qualcomm in smart devices, infrastructure, and solutions. At the event, Lenovo also unveiled its first AI-powered PC, which runs a compact AI model designed for end-user applications and offers features such as photo editing, intelligent video editing, document editing, and automatic task-solving based on user thought patterns.

Smartphone manufacturers are also extending their efforts into edge AI. Xiaomi recently announced its first use of the Qualcomm Snapdragon 8 Gen 3, significantly enhancing its ability to handle LLMs on end-user devices. Xiaomi has also embedded AI LLMs into its HyperOS system to enhance user experiences.

During the 2023 vivo Developer Conference on November 1st, vivo introduced their self-developed Blue Heart model, offering five products with parameters ranging from billions to trillions, covering various core scenarios. Major smartphone manufacturers like Huawei, OPPO, and Honor are also actively engaged in developing LLMs.

Speeding up Practical Use of AI Models in Business

While integrating AI models into end-user devices enhances user experiences and boosts the consumer electronics market, it is equally significant for advancing the practical use of AI models. As reported by Jiwei, Jian Luan, head of the AI Lab Big Model Team at Xiaomi, explains that large AI models have gained attention because they effectively drive the production of large-scale informational content, made possible by training on extensive user data, tasks, and parameters. The next step, achieving lightweight models that operate effectively on end-user devices, will be the main focus of industry development.

In fact, combining generative AI with smart terminals offers several advantages:

  1. Personal data will not be uploaded to the cloud, reducing privacy and data security risks.
  2. AI models can connect to end-user databases and personal information, potentially transforming general AI LLMs into personalized small models, offering personalized services to individual users.
  3. By compressing AI LLMs and optimizing end-user hardware and software, edge AI can reduce operating costs, enhance response times, and increase service efficiency.

Users have often complained about the lack of intelligence in AI devices, noting that AI systems reset to a blank state after each interaction. This is a common issue with cloud-based LLMs, and handling such concerns at the end-user device level can simplify the process.

In other words, the expansion of generative AI from the cloud to the edge integrates AI technology with hardware devices like PCs and smartphones. This is becoming a major trend in the commercial application and development of large AI models. It has the potential to enhance or resolve challenges in AI development related to personalization, security and privacy risks, high computing costs, subpar performance, and limited interactivity, thereby accelerating the commercial use of AI models.

Integrated Chips for End-User Devices: CPU+GPU+NPU

The lightweight transformation and localization of AI LLMs rely on advancements in chip technology. Leading manufacturers like Qualcomm, Intel, NVIDIA, AMD, and others have been introducing products in this direction. Qualcomm’s Snapdragon X Elite, the first processor in the Snapdragon X series designed for PCs, integrates a dedicated Neural Processing Unit (NPU) capable of supporting large-scale language models with billions of parameters.

The Snapdragon 8 Gen 3 platform supports over 20 AI LLMs from companies like Microsoft, Meta, OpenAI, Baidu, and others. Intel’s latest Meteor Lake processor integrates an NPU in PC processors for the first time, combining NPU with the processor’s AI capabilities to improve the efficiency of AI functions in PCs. NVIDIA and AMD also plan to launch PC chips based on Arm architecture in 2025 to enter the edge AI market.

Kedar Kondap, Senior Vice President and General Manager of Compute and Gaming Business at Qualcomm, emphasizes the advantages of LLM localization. He envisions highly intelligent PCs that actively understand user thoughts, provide privacy protection, and offer immediate responses. He highlights that addressing these needs at the end-user level provides several advantages compared to solving them in the cloud, such as simplifying complex processes and offering enhanced user experiences.

To meet the increased demand for AI computing when extending LLMs from the cloud to the edge and end-user devices, the integration of CPU+GPU+NPU is expected to be the future of processor development. This underscores the significance of Chiplet technology.

Feng Wu, Chief Engineer of Signal Integrity and Power Integrity at Sanechips/ZTE, explains that by employing Die to Die and Fabric interconnects, it is possible to densely and efficiently connect more computing units, achieving large-scale chip-level hyperscale computing.

Additionally, by connecting the CPU, GPU, and NPU at high speeds in the same system, chip-level heterogeneity enhances data transfer rates, reduces data access power, increases data processing speed, and lowers storage access power to meet the parameter requirements of LLMs.

(Image: Qualcomm)
