2024-02-05

[News] SK hynix’s HBM4 Reportedly to Enter Mass Production in 2026

During the recent “SEMICON Korea 2024” event in Seoul, Chun-hwan Kim, Vice President of global memory giant SK hynix, revealed that the company’s HBM3e has entered mass production and that large-scale production of HBM4 is planned for 2026, according to a report from Business Korea.

He noted that with the advent of the AI computing era, generative AI is advancing rapidly, and the market is expected to grow at a rate of 35% annually. This rapid growth of the generative AI market requires a large number of higher-performance AI chips, further driving demand for higher-bandwidth memory.
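To put the reported 35% annual growth rate in perspective, a minimal sketch of the compounding it implies is shown below. The base value of 100 is an arbitrary index chosen for illustration, not a figure from the article.

```python
def project_growth(base: float, annual_rate: float, years: int) -> float:
    """Compound a starting value at a fixed annual growth rate."""
    return base * (1 + annual_rate) ** years

# At 35% annual growth, a market index of 100 in year 0 reaches
# 100 * 1.35^2 = 182.25 after two years, i.e. it nearly doubles.
index_now = 100.0
index_in_two_years = project_growth(index_now, 0.35, 2)
print(round(index_in_two_years, 2))  # → 182.25
```

At this pace the market roughly doubles every two to three years, which is consistent with the article’s framing of surging demand for AI chips and high-bandwidth memory.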

He further commented that the semiconductor industry would face intense survival competition this year to meet the increasing demand and customer needs for memory.

Kim also projected that the HBM market would grow by 40% by 2025, and said SK hynix has already positioned itself strategically in the market.

Meanwhile, previous reports have also indicated that SK hynix is expected to establish an advanced packaging facility in the state of Indiana, USA, to meet the demands of American companies, including NVIDIA.

Driven by the wave of AI advancement and demand from China, the Ministry of Trade, Industry and Energy of South Korea recently announced that South Korea’s semiconductor product exports experienced a rebound in 2024. In January, exports reached approximately USD 9.4 billion, marking a year-on-year increase of 56.2% and the largest growth in 73 months.

TrendForce has previously reported on the progress of HBM3e, noting that SK hynix provided its 8hi (24GB) samples to NVIDIA in mid-August.


(Photo credit: SK hynix)

Please note that this article cites information from Business Korea.

2024-02-02

[News] Reports Suggest SK Hynix to Establish Advanced Packaging Facility in the US

According to sources cited by the Financial Times, South Korean chip manufacturer SK Hynix is reportedly planning to establish a packaging facility in Indiana, USA. This move is expected to significantly advance the US government’s efforts to bring more artificial intelligence (AI) chip supply chains into the country.

SK Hynix’s new packaging facility will specialize in stacking standard dynamic random-access memory (DRAM) chips to create high-bandwidth memory (HBM) chips. These chips will then be integrated with NVIDIA’s GPUs for training systems like OpenAI’s ChatGPT.

Per one source close to SK Hynix cited by the report, the increasing demand for HBM from American customers and the need for close collaboration with chip designers have made establishing advanced packaging facilities in the US essential.

Regarding this, SK Hynix reportedly responded, “Our official position is that we are currently considering a possible investment in the US but haven’t made a final decision yet.”

The report quoted Kim Yang-paeng, a researcher at the Korea Institute for Industrial Economics and Trade, as saying, “If SK Hynix establishes an advanced HBM memory packaging facility in the United States, along with TSMC’s factory in Arizona, this means Nvidia can ultimately produce GPUs in the United States.”

The United States was previously reported to be planning to announce substantial chip subsidies by the end of March, aiming to pave the way for chip manufacturers such as TSMC, Samsung, and Intel by providing them with billions of dollars to accelerate the expansion of domestic chip production.

These subsidies are a core component of the US 2022 “CHIPS and Science Act,” which allocates a budget of USD 39 billion to directly subsidize and revitalize American manufacturing.


(Photo credit: SK Hynix)

Please note that this article cites information from Financial Times.

2024-01-29

[News] U.S. Department of Commerce Introduces New Regulations to Restrict China from Training AI Using U.S. Cloud Services

U.S. Commerce Secretary Gina Raimondo stated on January 26th that the U.S. government will propose requiring American cloud computing companies to determine whether foreign entities are accessing U.S. data centers to train artificial intelligence models.

The proposed “know your customer” regulation was made available for public inspection on January 26th and is scheduled for publication on January 29th.

According to a report from Reuters, Raimondo stated during her interview that, “We can’t have non-state actors or China or folks who we don’t want accessing our cloud to train their models.”

“We use export controls on chips,” she noted. “Those chips are in American cloud data centers so we also have to think about closing down that avenue for potential malicious activity.”

Raimondo further claimed that the United States is “trying as hard as we can to deny China the compute power that they want to train their own (AI) models, but what good is that if they go around that to use our cloud to train their models?”

After the U.S. government introduced chip export controls on China last year, NVIDIA initially designed the downgraded A800 and H800 AI chips for Chinese companies. However, new regulations issued by the U.S. Department of Commerce in October 2023 brought the A800, H800, L40S, and other chips under control.

Raimondo stated that the Commerce Department would not permit NVIDIA to export its most advanced and powerful AI chips, which could facilitate China in developing cutting-edge models.

In addition to the limitations on NVIDIA’s AI chips, the U.S. government has also imposed further restrictions on specific equipment. For example, ASML, a leading provider of advanced semiconductor lithography equipment, announced on January 1st, 2024, that export licenses for some of its DUV equipment had been partially revoked in connection with U.S. government restrictions.


(Photo credit: iStock)

Please note that this article cites information from Reuters.

2023-11-20

[News] AI PC Transforms the Digital Landscape with Innovation and Integration

In the dynamic wave of generative AI, AI PCs have emerged as a focal point of the industry’s development. Technological upgrades across the industry chain and the distinctive features of on-device AI, such as security, low latency, and high reliability, are driving their rapid evolution. AI PCs are poised to become a mainstream category within the PC market, converging with the PC replacement trend, as reported by Jiwei.

On-device AI, driven by technologies such as lightweight large language models (LLMs), signifies the next stage in AI development. PC makers aim to propel innovative upgrades in AI PC products by integrating resources both upstream and downstream. The pivotal upgrade lies in the chip, and challenges in hardware-software coordination, data storage, and application development are inevitable. Nevertheless, AI PCs are on track to evolve at an unprecedented pace, transforming into a “hybrid” encompassing terminals, edge computing, and cloud technology.

Is the AI PC the Industry’s Savior?

In the face of consecutive quarters of global PC shipment decline, signs of a gradual easing in the downward trend are emerging. The industry cautiously anticipates a potential recovery, considering challenges such as structural demand cooling and supply imbalances.

Traditionally viewed as a mature industry grappling with long-term growth challenges, the PC industry is witnessing a shift due to the evolution of generative AI technology and the extension of the cloud to the edge. This combination of AI technology with terminal devices like PCs is seen as a trendsetter, with the ascent of AI PCs considered an “industry savior” that could open new avenues for growth in the PC market.

Yuanqing Yang, Chairman and CEO of Lenovo, noted that AIGC is stimulating iterative computing and upgrades in AI-enabled terminals. Since users want to enjoy the benefits of AIGC while safeguarding their privacy, personal devices or home servers are considered the safest options. Lenovo plans to invest approximately 7 billion RMB in the AI field over the next three years.

Analysis from Orient Securities, also known as DFZQ, reveals that the surge in consumer demand from the second half of 2020 to 2021 is expected to trigger a substantial PC replacement cycle from the second half of 2024 to 2025, initiating a new wave of PC upgrades.

Undoubtedly, AI PCs are set to usher in a transformative wave and accelerate development against the backdrop of the PC replacement trend. Guotai Junan Securities said that AI PCs feature processors with enhanced computing capabilities and incorporate multi-modal algorithms. This integration is anticipated to fundamentally reshape the PC experience, positioning AI PCs as a hybrid of terminals, edge computing, and cloud technology to meet the new demands of generative AI workloads.

PC Ecosystem Players Strategically Positioning for Dominance

The AI PC field is experiencing vibrant development, with major PC ecosystem companies actively entering the scene. Companies such as Lenovo, Intel, Qualcomm, and Microsoft have introduced corresponding innovative initiatives: Lenovo showcased the industry’s first AI PC at Lenovo Tech World 2023, Intel launched the AI PC Acceleration Program at its Innovation 2023 event, and Qualcomm introduced the Snapdragon X Elite processor designed specifically for AI at the Snapdragon Summit. Meanwhile, Microsoft is accelerating the optimization of its office software, integrating Bing and ChatGPT into Windows.

While current promotions of AI PC products may outpace actual user experience, the terminals displayed by Lenovo, Intel’s AI PC Acceleration Program, and the deeply integrated collaboration ecosystem with numerous independent software vendors (ISVs) indicate that on-device AI offers advantages the cloud cannot match, including learning individual users’ work habits to provide a personalized, differentiated user experience.

Ablikim Ablimiti, Vice President of Lenovo, highlighted five core features of AI PCs: personal large models, natural language interaction, intelligent hybrid computing, open ecosystems, and real privacy and security. He said that AI large models and PCs are a natural fit, and that terminal makers are leading this innovation by integrating upstream and downstream resources to provide complete intelligent services for AI PCs.

In terms of chips, Intel Core Ultra is considered Intel’s most significant processor architecture change in 40 years. Built on the Meteor Lake architecture, it fully integrates chipset functions into the processor, incorporates an NPU into a PC processor for the first time, and also integrates Intel Arc integrated graphics. This marks a significant milestone in the practical commercial application of AI PCs.

TrendForce: AI PC Demand to Expand from High-End Enterprises           

TrendForce believes that due to the high costs of upgrading both software and hardware associated with AI PCs, early development will be focused on high-end business users and content creators. This group has a strong demand for leveraging AI processing capabilities to improve productivity efficiency and can also benefit immediately from related applications, making them the primary users of the first generation. The emergence of AI PCs is not expected to necessarily stimulate additional PC purchase demand. Instead, most upgrades to AI PC devices will occur naturally as part of the business equipment replacement cycle projected for 2024.

(Image: Qualcomm)


2023-11-09

[News] Alibaba to Open Source China’s Largest AI Model

According to a report from IThome, Alibaba Group CEO Eddie Wu, speaking at the 2023 World Internet Conference Wuzhen Summit, announced that Alibaba is preparing to open-source a massive model with 72 billion parameters. This model is set to become the largest-scale open-source model in China.

Wu expressed that with AI becoming a crucial breakthrough in China’s digital economy innovation, Alibaba aims to evolve into an open technology platform. The goal is to provide foundational infrastructure for AI innovation and transformation across various industries.

It’s reported that Alibaba has so far open-sourced Tongyi Qianwen’s Qwen-14B model with 14 billion parameters and its Qwen-7B model with 7 billion parameters.

According to a report from Guandian, Eddie Wu mentioned at the 2023 World Internet Conference Wuzhen Summit that AI technology will fundamentally transform how knowledge iteration and social collaboration occur, creating a profound impact on productivity, production relationships, the digital world, and the physical world.

He emphasized that society is currently at a turning point from traditional computing to AI computing, with AI eventually taking over all computing resources. The dual drive of AI and cloud computing is the underlying capability that Alibaba Cloud relies on to provide services for the future AI infrastructure.

In addition, on October 31st, Alibaba Cloud announced the release of the large-scale model Tongyi Qianwen 2.0 at the Apsara Conference. On the same day, the Tongyi Qianwen app was officially launched on major mobile application markets. Compared to the 1.0 version released in April, Tongyi Qianwen 2.0 has shown improvements in capabilities such as complex instruction understanding, literary creation, general mathematics, knowledge retention, and hallucination resistance.

(Photo credit: Alibaba)
