Artificial Intelligence


[News] U.S. Department of Commerce Introduces New Regulations to Restrict China from Training AI Using U.S. Cloud Services

U.S. Commerce Secretary Gina Raimondo stated on January 26th that the U.S. government will propose that American cloud computing companies determine whether foreign entities are accessing U.S. data centers to train artificial intelligence models.

The proposed “know your customer” regulation was made available for public inspection on January 26th and is scheduled for publication on January 29th.

According to a report from Reuters, Raimondo stated during her interview that, “We can’t have non-state actors or China or folks who we don’t want accessing our cloud to train their models.”

“We use export controls on chips,” she noted. “Those chips are in American cloud data centers so we also have to think about closing down that avenue for potential malicious activity.”

Raimondo further claimed that the United States is “trying as hard as we can to deny China the compute power that they want to train their own (AI) models, but what good is that if they go around that to use our cloud to train their models?”

After the U.S. government introduced chip export controls targeting China last year, NVIDIA initially designed the downgraded AI chips A800 and H800 for Chinese companies. However, new regulations issued by the U.S. Department of Commerce in October 2023 brought the A800, H800, L40S, and other chips under control.

Raimondo stated that the Commerce Department would not permit NVIDIA to export its most advanced and powerful AI chips, which could facilitate China in developing cutting-edge models.

In addition to the limitations on NVIDIA’s AI chips, the U.S. government has also imposed further restrictions on specific equipment. For example, ASML, a leading provider of advanced semiconductor lithography equipment, announced on January 1st, 2024, that export licenses for some of its DUV equipment had been partially revoked in connection with U.S. government restrictions.


(Photo credit: iStock)

Please note that this article cites information from Reuters.


[News] Samsung Reportedly Organizing Next-Gen Chip Fabrication Team, Aiming to Seize the Initiative in the AI Field

According to a report from the South Korean media outlet The Korea Economic Daily, Samsung Electronics has established a new business unit dedicated to developing next-generation chip processing technology, aiming to secure a leading position in AI chips and foundry services.

The report indicates that the recently formed research team at Samsung will be led by Hyun Sang-jin, who was promoted to the position of general manager on November 29. He has been assigned the responsibility of ensuring a competitive advantage against competitors like TSMC in the technology landscape.

The team will be placed under Samsung’s chip research center within its Device Solutions (DS) division, which oversees its semiconductor business, as mentioned in the report.

Reportedly, insiders claim that Samsung aims for the latest technology developed by the team to lead the industry for the next decade or two, similar to the gate-all-around (GAA) transistor technology introduced by Samsung last year.

Samsung has previously stated that, compared to the previous-generation process, the 3-nanometer GAA process can deliver a 30% improvement in performance, a 50% reduction in power consumption, and a 45% reduction in chip size. In the report, Samsung also claimed that GAA is more energy-efficient than the FinFET technology used in TSMC’s 3-nanometer process.


(Photo credit: Samsung)


[News] AI PCs and Smartphones on the Rise as Generative AI Expands to the Edge

The fusion of AIGC with end-user devices is highlighting the importance of personalized user experiences, cost efficiency, and faster response times in generative AI applications. Major companies like Lenovo and Xiaomi are ramping up their efforts in the development of edge AI, extending the generative AI wave from the cloud to the edge and end-user devices.

On October 24th, Lenovo hosted its 9th Lenovo Tech World 2023, announcing deeper collaborations with companies like Microsoft, NVIDIA, Intel, AMD, and Qualcomm in the areas of smart devices, infrastructure, and solutions. At the event, Lenovo also unveiled its first AI-powered PC, which runs a compact AI model designed for end-user applications and offers features such as photo editing, intelligent video editing, document editing, and automatic task-solving based on user thought patterns.

Smartphone manufacturers are also significantly extending their efforts into edge AI. Xiaomi recently announced its first use of the Qualcomm Snapdragon 8 Gen 3, significantly enhancing its ability to handle LLMs on-device. Xiaomi has also embedded AI LLMs into its HyperOS system to enhance user experiences.

During the 2023 vivo Developer Conference on November 1st, vivo introduced their self-developed Blue Heart model, offering five products with parameters ranging from billions to trillions, covering various core scenarios. Major smartphone manufacturers like Huawei, OPPO, and Honor are also actively engaged in developing LLMs.

Speeding up Practical Use of AI Models in Business

While integrating AI models into end-user devices enhances user experiences and boosts the consumer electronics market, it is equally significant for advancing the practical use of AI models. As reported by Jiwei, Jian Luan, head of the AI Lab Big Model Team at Xiaomi, explains that large AI models have gained attention because they effectively drive the production of large-scale informational content, made possible by the extensive data, tasks, and parameters involved in AI model training. The next step, building lightweight models that can run effectively on end-user devices, will be the main focus of industry development.

In fact, combining generative AI with smart terminals has several advantages:

  1. Personal data will not be uploaded to the cloud, reducing privacy and data security risks.
  2. AI models can connect to end-user databases and personal information, potentially transforming general AI LLMs into personalized small models, offering personalized services to individual users.
  3. By compressing AI LLMs and optimizing end-user hardware and software, edge AI can reduce operating costs, enhance response times, and increase service efficiency.
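Point 3 above hinges on model compression. One common technique is post-training quantization, which stores each weight in 8 bits instead of 32. The sketch below is purely illustrative (the function names are not from any specific framework; real toolchains such as those used for on-device LLMs apply far more sophisticated schemes):

```python
def quantize_int8(weights):
    """Map float weights to int8 values plus a shared scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127 if max_abs else 1.0
    q = [round(w / scale) for w in weights]  # each value fits in [-127, 127]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.31, -1.27, 0.05, 0.98, -0.44]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Storage drops from 32 bits to 8 bits per weight, at the cost of a
# small rounding error in each recovered value.
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, scale, max_err)
```

The trade-off shown here is exactly the one the industry is negotiating: a 4x smaller model that runs within an edge device’s memory budget, in exchange for a bounded loss of precision.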

Users used to complain about the lack of intelligence in AI devices, noting that AI systems would reset to a blank state after each interaction. This is a common issue with cloud-based LLMs; handling such concerns at the end-user device level can simplify the process.

In other words, the expansion of generative AI from the cloud to the edge integrates AI technology with hardware devices like PCs and smartphones. This is becoming a major trend in the commercial application and development of large AI models. It has the potential to enhance or resolve challenges in AI development related to personalization, security and privacy risks, high computing costs, subpar performance, and limited interactivity, thereby accelerating the commercial use of AI models.

Integrated Chips for End-User Devices: CPU+GPU+NPU

The lightweight transformation and localization of AI LLMs rely on advancements in chip technology. Leading manufacturers like Qualcomm, Intel, NVIDIA, AMD, and others have been introducing products in this direction. Qualcomm’s Snapdragon X Elite, the first processor in the Snapdragon X series designed for PCs, integrates a dedicated Neural Processing Unit (NPU) capable of supporting large-scale language models with billions of parameters.

The Snapdragon 8 Gen 3 platform supports over 20 AI LLMs from companies like Microsoft, Meta, OpenAI, Baidu, and others. Intel’s latest Meteor Lake processor integrates an NPU in PC processors for the first time, combining NPU with the processor’s AI capabilities to improve the efficiency of AI functions in PCs. NVIDIA and AMD also plan to launch PC chips based on Arm architecture in 2025 to enter the edge AI market.

Kedar Kondap, Senior Vice President and General Manager of Compute and Gaming Business at Qualcomm, emphasizes the advantages of LLM localization. He envisions highly intelligent PCs that actively understand user thoughts, provide privacy protection, and offer immediate responses. He highlights that addressing these needs at the end-user level provides several advantages compared to solving them in the cloud, such as simplifying complex processes and offering enhanced user experiences.

To meet the increased demand for AI computing when extending LLMs from the cloud to the edge and end-user devices, the integration of CPU+GPU+NPU is expected to be the future of processor development. This underscores the significance of Chiplet technology.

Feng Wu, Chief Engineer of Signal Integrity and Power Integrity at Sanechips/ZTE, explains that by employing die-to-die and fabric interconnects, it is possible to densely and efficiently connect more computing units, achieving chip-level hyperscale computing.

Additionally, by connecting the CPU, GPU, and NPU at high speeds in the same system, chip-level heterogeneity enhances data transfer rates, reduces data access power, increases data processing speed, and lowers storage access power to meet the parameter requirements of LLMs.

(Image: Qualcomm)


[Insights] Apple’s Quiet Pursuit of AI and the Advantage in AI Subscription Models

According to Bloomberg, Apple is quietly catching up with its competitors in the AI field. Looking at Apple’s strategy for AI, in addition to acquiring AI-related companies to gain relevant technology quickly, Apple is now developing its own large language model (LLM).

TrendForce’s insights:

  1. Apple’s Low-Profile Approach to AI: Seizing the Next Growth Opportunity

As the smartphone market matures, brands are not only focusing on hardware upgrades, particularly in camera modules, to stimulate device replacements; many are also keen on introducing new AI functionalities in smartphones. This move is aimed at reigniting the growth potential of smartphones. Some Chinese brands have already achieved notable progress in the AI field, especially in large language models.

For instance, Xiaomi introduced its large language model MiLM-6B, ranking tenth on the C-Eval list (a comprehensive evaluation benchmark for Chinese language models developed in collaboration with Tsinghua University, Shanghai Jiao Tong University, and the University of Edinburgh) and topping its parameter-size category. Meanwhile, vivo has launched the large model VivoLM, with its VivoLM-7B securing second position on the C-Eval ranking.

As for Apple, while it may appear to have played a largely observational role as other Silicon Valley companies like OpenAI release ChatGPT, and Google and Microsoft introduce AI versions of their search engines, the reality is that since 2018, Apple has quietly acquired over 20 AI-related companies. Apple’s approach is characterized by extreme discretion, with only a few of these transactions publicly disclosing their final acquisition prices.

On another front, Apple has been discreetly developing its own large language model, called Ajax. It reportedly spends millions of dollars a day training the model, with the aim of making it even more capable than OpenAI’s ChatGPT 3.5 and Meta’s LLaMA.

  2. Apple’s Advantage in Developing a Paid Subscription Model for Large Language Models Compared to Other Brands

The most common smartphone usage scenarios among general consumers today revolve around activities like taking photos, communication, and information retrieval. While AI could enhance the user experience in some of these functions, none of them currently falls under the category of “essential AI features.”

However, if a killer application involving large language models were to emerge on smartphones in the future, Apple is poised to have an exclusive advantage in establishing such a service as a subscription-based model. This advantage is due to recent shifts in Apple’s revenue composition, notably the increasing contribution of “Service” revenue.

In August 2023, Apple CEO Tim Cook highlighted in Apple’s third-quarter financial report that Apple’s subscription services, which include Apple Arcade, Apple Music, iCloud, AppleCare, and others, had achieved record-breaking revenue and amassed over 1 billion paying subscribers.

In other words, compared to other smartphone brands, Apple is better positioned to monetize a large language model service through subscription due to its already substantial base of paying subscription users. Other smartphone brands may find it challenging to gain consumer favor for a paid subscription service involving large language models, as they lack a similarly extensive base of subscription users.



[News] Lenovo’s AI PC and ‘AI Twins’ Unveiled, Market Entry Expected After September

At its Global Tech World event on October 24th, Lenovo Group’s Chairman and CEO, Yuanqing Yang, presented AI-powered PCs and enterprise-level “AI Twins” (AI assistants) to a global audience, heralding a new dawn for personal computers. He revealed that AI PCs are slated to hit the market no sooner than September of the following year.

Yang said that AI PCs will go through a maturation process: like past innovations, they may start with around a 10% market share but are destined to become the norm, and he envisions a future where every computer is an AI PC.

Regarding foundation models, Yang pointed out that some companies are hastily jumping on the bandwagon. However, he emphasized Lenovo’s commitment to not rush into trends and noted the drawbacks and vulnerabilities in China’s existing public foundation models, including concerns about personal privacy and data security. Lenovo’s focus is on establishing hybrid foundation models.

Given the need to compress models for device deployment, Lenovo is currently concentrating on research related to domain-adaptive model fine-tuning, lightweight model compression, and privacy protection techniques.

Moreover, Yang highlighted Lenovo’s earlier announcement of a US$1 billion investment in AI innovation over the next three years. However, he clarified that this amount falls short of the actual financial demands, since virtually all of Lenovo’s business domains involve AI and fundamental services, requiring substantial financial backing.

Lenovo’s Q1 earnings report from mid-August had unveiled the company’s plan to allocate an additional $1 billion over the next three years for expediting AI technology and applications. This encompasses the development of AI devices, AI infrastructure, and the integration of generative AI technologies like AIGC into industry vertical solutions.

Besides, chip manufacturers like Intel are joining forces to expedite the development of the AI PC ecosystem, with the objective of bringing AI applications to over 100 million personal computers by 2025. This endeavor has piqued the interest of well-known international brands such as Acer, Asus, HP, and Dell, all of which have a positive outlook on the potential of AI-powered PCs. AI PCs are anticipated to be a pivotal factor in revitalizing the PC industry’s annual growth in 2024.

Currently, there are no brands selling AI PCs in the true sense, but leading manufacturers have already revealed their plans for related products. The industry anticipates a substantial release of AI PCs in 2024.

(Image: Lenovo)
