Meta


2023-11-20

[News] RISC-V Architecture in AI Chips Features “Three Advantages,” Meta’s in-house chip MTIA

In the global landscape of self-developed chips, the industry has predominantly embraced the Arm architecture for IC design. However, Meta’s decision to employ the RISC-V architecture in its self-developed AI chip has become a topic of widespread discussion. According to reports from UDN News, the growing preference for RISC-V is attributed to three key advantages: low power consumption, high openness, and relatively low development costs.

Notably, Meta deploys its in-house AI chip, “MTIA,” exclusively within its data centers to expedite AI computation and inference. In this highly tailored setting, the choice ensures not only robust computational capability but also low power consumption, with anticipated power usage of under 25W per RISC-V core. By strategically combining the RISC-V architecture with GPU accelerators or Arm-based processors, Meta aims to reduce overall power consumption while simultaneously boosting computing power.

Meta’s confirmation that it has adopted RISC-V architecture from Andes Technology Corporation, a CPU IP and platform IP supplier from Taiwan, for AI chip development underscores RISC-V’s capability to support high-speed computational tasks and its suitability for integration into advanced manufacturing processes. This move positions RISC-V to make significant inroads into the AI computing market, standing as a third computing-architecture opportunity alongside x86 and Arm.

Regarding the development potential of different chip architectures in the AI chip market, TrendForce points out that GPUs (from NVIDIA, AMD, etc.) still dominate the current overall AI market, followed by Arm architecture. This includes major data centers, with active investments from NVIDIA, CSPs, and others in the Arm architecture field. RISC-V, on the other hand, represents another niche market, targeting the open-source AI market or niche enterprise applications.
(Image: Meta)

2023-11-17

[News] MediaTek Teams Up with Meta to Develop Next-Gen AR Smart Glasses, Edging Out Qualcomm

According to a report from Anue, major IC design firm MediaTek announced a new collaboration with Meta during its recent MediaTek 2023 Summit, held in the United States. MediaTek will take charge of developing the chip for Ray-Ban Meta smart glasses, replacing competitor Qualcomm’s Snapdragon AR1 Gen 1 chip.

Notably, in October 2023, Meta launched the new generation of Ray-Ban Meta smart glasses. These feature the Qualcomm Snapdragon AR1 Gen 1 chip, a 12-megapixel camera, and five microphones for sending and receiving messages. They are the world’s first smart glasses with Facebook and Instagram live-streaming capabilities, enabling the recording of high-quality videos.

MediaTek has long been dedicated to developing low-power, high-performance SoCs. This collaboration with Meta focuses on jointly creating a custom chip specifically designed for AR smart glasses, meeting the requirements of lightweight, compact devices. The collaborative product, Ray-Ban Meta smart glasses, is expected to launch in the future.

(Photo credit: MediaTek)

2023-08-08

[News] US Tech Giants Unite for AI Server Domination, Boosting Taiwan Supply Chain

According to the news from Commercial Times, in a recent press conference, the four major American cloud service providers (CSPs) collectively expressed their intention to expand their investment in AI application services. Simultaneously, they are continuing to enhance their cloud infrastructure. Apple has also initiated its foray into AI development, and both Intel and AMD have emphasized the robust demand for AI servers. These developments are expected to provide a significant boost to the post-market prospects of Taiwan’s AI server supply chain.

Industry insiders have highlighted the ongoing growth of the AI spillover effect, benefiting various sectors ranging from GPU modules, substrates, cooling systems, power supplies, chassis, and rails, to PCB manufacturers.

The American CSP players, including Microsoft, Google, Meta, and Amazon, which recently released their financial reports, have demonstrated growth in their cloud computing and AI-related service segments in their latest quarterly performance reports. Microsoft, Google, and Amazon are particularly competitive in the cloud services arena, and all have expressed optimistic outlooks for future operations.

The direct beneficiaries among Taiwan’s cloud data center suppliers are those in Tier 1, who are poised to reap positive effects on their average selling prices (ASP) and gross margins, driven by the strong demand for AI servers from these CSP giants in the latter half of the year.

Among them, ODM manufacturers with over six years of collaboration with NVIDIA in multi-GPU-architecture AI high-performance computing/cloud computing, including Quanta, Wistron, Inventec, Foxconn, and Gigabyte, are expected to see operational benefits further reflected in the latter half of the year. Foxconn and Inventec, the main suppliers of GPU modules and GPU substrates respectively, are likely to see noticeable shipment growth starting in the third quarter.

Furthermore, AI servers not only incorporate multiple GPU modules but also exhibit increases in chassis height, weight, and thermal design power (TDP) compared to standard servers. As a result, cooling solution providers like Asia Vital Components, Auras Technology, and SUNON; power supply companies such as Delta Electronics and Lite-On Technology; chassis manufacturer Chenbro; rail industry players like King Slide; and PCB/CCL manufacturers such as EMC and GCE are also poised to benefit from the increasing demand for AI servers.

(Source: https://ctee.com.tw/news/tech/915830.html)

2023-06-13

Comparison of Meta Quest Pro and Apple Vision Pro

Considering factors such as pricing and the absence of certain essential features, TrendForce anticipates a modest shipment volume of approximately 200,000 units for Apple Vision Pro in 2024. The market’s response will heavily depend on the subsequent introduction of consumer-oriented Apple Vision models and Apple’s ability to offer enticing everyday functionalities that will drive rapid growth of the AR market as a whole.

VR/AR shipments are expected to drop to 7.45 million in 2023

In the meantime, TrendForce forecasts a global downturn in AR and VR device shipments for 2023, predicting a shipment total of roughly 7.45 million units—an 18.2% YoY decrease. VR devices are expected to shoulder the majority of this decline, with projected shipments hovering around 6.67 million units.

Conversely, shipments of AR devices are expected to remain stable, with projected shipments exceeding 780,000 units. While Apple’s latest offerings could stimulate some demand, the high price tags attached to these units continue to pose a significant barrier to broader market growth.

TrendForce posits that the trajectory of the VR and AR device market may encounter certain limitations between 2023 and 2025. While affordable VR devices could pique the interest of mainstream consumers, the prospect of minimal profitability might dissuade manufacturers from substantial investment in the VR market in the immediate future. A shift towards AR devices and their corresponding applications seems more probable.

Nevertheless, the expansion of the AR device market hinges on a broader acceptance of consumer applications. Therefore, TrendForce anticipates that a significant rise in the VR and AR market, potentially nearing a 40% annual increase in shipments, might not be realized until 2025.

2023-04-25

AI Sparks a Revolution Up In the Cloud

OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Bard, and Elon Musk’s latest TruthGPT – what will be the next buzzword for AI? In just under six months, the AI competition has heated up, stirring ripples in the once-calm AI server market as AI-generated content (AIGC) models take center stage.

The unprecedented convenience brought by AIGC has attracted a massive number of users, with OpenAI’s mainstream model, GPT-3, receiving up to 25 million daily visits, often resulting in server overload and disconnection issues.

Given that the evolution of these models has increased training parameters and data volume, making computational power even scarcer, OpenAI has reluctantly adopted measures such as paid access and traffic restrictions to stabilize server load.

High-end Cloud Computing is gaining momentum

According to TrendForce, AI servers currently have a penetration rate of merely 1% in global data centers, which is far from sufficient to cope with the surge in data demand from the usage side. Therefore, besides optimizing software to reduce the computational load, increasing the number of high-end AI servers will be another crucial solution.

Take GPT-3 for instance. The model requires at least 4,750 AI servers, each with 8 GPUs, and every similarly large language model like ChatGPT will need 3,125 to 5,000 units. Considering ChatGPT and Microsoft’s other applications as a whole, the demand for AI servers is estimated to reach some 25,000 units in order to meet basic computing power requirements.
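The arithmetic above can be sketched as a quick back-of-the-envelope calculation. The only sourced figures are the server counts and the 8-GPUs-per-server configuration; the helper function itself is purely illustrative:

```python
# Back-of-the-envelope estimate of AI server demand, using the figures
# cited above: each AI server carries 8 GPUs.

GPUS_PER_SERVER = 8

def servers_needed(total_gpus: int) -> int:
    """Round up the number of 8-GPU AI servers required for a given GPU budget."""
    return -(-total_gpus // GPUS_PER_SERVER)  # ceiling division

# GPT-3 alone: at least 4,750 servers, i.e. 38,000 GPUs.
gpt3_gpus = 4750 * GPUS_PER_SERVER
print(gpt3_gpus)  # 38000

# A ChatGPT-scale model: 3,125 to 5,000 servers.
low_gpus, high_gpus = 3125 * GPUS_PER_SERVER, 5000 * GPUS_PER_SERVER
print(low_gpus, high_gpus)  # 25000 40000
```

Inverting the calculation with `servers_needed(38000)` recovers the 4,750-server figure cited for GPT-3.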

While the emerging applications of AIGC and their vast commercial potential have revealed the technical roadmap moving forward, they have also shed light on bottlenecks in the supply chain.

The down-to-earth problem: cost

Compared to general-purpose servers that use CPUs as their main computational power, AI servers rely heavily on GPUs, with NVIDIA’s DGX A100 and DGX H100, delivering computational performance of up to 5 PetaFLOPS, serving as primary AI server computing power. Given that GPU costs account for over 70% of server costs, the increased adoption of high-end GPUs has made the architecture more expensive.
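To illustrate that cost structure, a minimal sketch is below. The only sourced figure is that GPUs account for over 70% of total server cost; the dollar amounts are hypothetical placeholders chosen purely for the example:

```python
# Illustrative bill-of-materials split for an 8-GPU AI server.
# Dollar figures are hypothetical placeholders, not sourced from the article.

def gpu_cost_share(gpu_cost: float, other_cost: float) -> float:
    """Fraction of total server cost attributable to GPUs."""
    return gpu_cost / (gpu_cost + other_cost)

# Assume 8 GPUs at $25,000 each, plus $60,000 for everything else
# (CPUs, memory, PCBs, cooling, power, chassis).
share = gpu_cost_share(gpu_cost=8 * 25_000, other_cost=60_000)
print(f"{share:.0%}")  # 77%
```

Under these placeholder numbers, GPUs alone account for about 77% of the bill of materials, consistent with the over-70% figure cited above.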

Moreover, a significant amount of data transmission occurs during the operation, which drives up the demand for DDR5 and High Bandwidth Memory (HBM). The high power consumption generated during operation also promotes the upgrade of components such as PCBs and cooling systems, which further raises the overall cost.

Not to mention the technical hurdles posed by the complex design architecture – for example, a new approach for heterogeneous computing architecture is urgently required to enhance the overall computing efficiency.

The high cost and complexity of AI servers have inevitably limited their development to large manufacturers. Two leading companies, HPE and Dell, have taken different strategies to enter the market:

  • HPE has continuously strengthened its cooperation with Google and planned to convert all of its products to as-a-service offerings by 2022. It also acquired the startup Pachyderm in January 2023 to launch cloud-based supercomputing services, making it easier to train and develop large models.
  • In March 2023, Dell launched its latest PowerEdge series servers, which offer options equipped with NVIDIA H100 or A100 Tensor Core GPUs and NVIDIA AI Enterprise. They use 4th Gen Intel Xeon Scalable processors and introduce Dell’s Smart Flow software, catering to demands such as data centers, large public clouds, AI, and edge computing.

With the booming market for AIGC applications, we seem to be one step closer to a future metaverse centered around fully virtualized content. However, it remains unclear whether the hardware infrastructure can keep up with the surge in demand. This persistent challenge will continue to test the capabilities of cloud server manufacturers to balance cost and performance.

(Photo credit: Google)

