Articles


2024-05-06

[Insights] Big Four CSPs Continue to Shine in Q1 2024 Financial Reports, AI Returns Garnering Attention

The four major cloud service providers (CSPs), Google, Microsoft, Amazon, and Meta, each released their financial results for the first quarter of 2024 (January to March 2024) at the end of April.

Each company achieved double-digit revenue growth, and all continued to raise capital expenditures, underscoring AI as their main development focus. As in the previous quarter, the market's attention remains on whether these AI investments can successfully translate into revenue.

TrendForce’s Insights:

1. Strong Financial Performance of Top Four CSPs Driven by AI and Cloud Businesses

Alphabet, the parent company of Google, reported stellar financial results for the first quarter of 2024. Bolstered by growth in its search engine, YouTube, and cloud services, revenue surpassed USD 80 billion, while profit rose 57%. The company also announced its first-ever dividend payout, further boosting its stock price as all metrics exceeded market expectations and pushing its market capitalization past USD 2 trillion for the first time. For Google, the current development strategy revolves around its in-house LLM Gemini, aimed at strengthening its cloud services, search interaction interfaces, and dedicated hardware development.

Microsoft’s financial performance is equally impressive. This quarter, its revenue reached USD 61.9 billion, marking a year-on-year increase of 17%. Among its business segments, the Intelligent Cloud sector saw the highest growth, with a 21% increase in revenue, totaling USD 26.7 billion. Notably, the Azure division experienced a remarkable 31% growth, with Microsoft attributing 7 percentage points of this growth to AI demand.

In other words, the impact of AI on its performance is even more pronounced than in the previous quarter, prompting Microsoft to focus its future strategies more on the anticipated benefits from Copilot, both in software and hardware.

This quarter, Amazon achieved a remarkable revenue milestone, surpassing USD 140 billion, a year-on-year increase of 17% that exceeded market expectations. Furthermore, its profit reached USD 10.4 billion, far exceeding the USD 3.2 billion recorded in the same period of 2023.

The double-digit growth in its advertising business and AWS (Amazon Web Services) drove this performance, with the latter particularly highlighted for its AI-related opportunities. AWS achieved a record-high operating profit margin of 37.6% this quarter, with annual revenue expected to exceed USD 100 billion and short-term plans to invest USD 150 billion in expanding data centers.

On the other hand, Meta reported revenue of USD 36.46 billion this quarter, marking a significant year-on-year growth of 27%, the largest growth rate since 2021. Profit also doubled compared to the same period in 2023, reaching USD 12.37 billion.

Meta’s current strategy focuses on allocating resources to areas such as smart glasses and mixed reality (MR) in the short and medium term. The company continues to leverage AI to enhance the user value of the virtual world.

2. Increased Capital Expenditure to Develop AI is a Common Consensus, Yet Profitability Remains Under Market Scrutiny

Observing the financial reports of the major cloud players, the increase in capital expenditure to solidify their commitment to AI development continues last quarter's trend.

In the first quarter of 2024, Microsoft’s capital expenditure surged by nearly 80% compared to the same period in 2023, reaching USD 14 billion. Google expects its quarterly expenditure to remain above USD 12 billion. Similarly, Meta has raised its capital expenditure guidance for 2024 to the range of USD 35 to USD 40 billion.

Amazon, which regards its USD 14 billion first-quarter expenditure as the minimum for the year, anticipates a significant increase in capital expenditure over the coming year, exceeding the USD 48.4 billion spent in 2023. However, how these increased AI investments will translate into profitability remains a subject of market scrutiny.

While the major cloud players remain steadfast in their focus on AI, market expectations may have shifted. For instance, despite impressive financial reports last quarter, both Google and Microsoft saw their stock prices decline, unlike the significant gains seen this time. The shift can partly be interpreted as the market now pricing in short- to medium-term returns on AI investments from products and services like Gemini and Copilot.

In contrast, Meta, whose financial performance was similarly impressive to that of the other cloud giants, experienced a post-earnings stock drop of over 15%. This may be attributed partly to its conservative financial outlook and partly to the less-than-ideal returns from its focus areas of virtual wearable devices and AI value-added services.

Because Meta’s commercial end-user base is relatively limited compared to those of the other three CSPs, its AI development efforts, such as the practical Llama 3 and the value-added Meta AI virtual assistant for its products, have yet to yield significant benefits. Llama 3 is free and open-source, and Meta AI generates limited direct revenue, so neither currently justifies the development costs.

Therefore, Meta still needs to expand its ecosystem to facilitate the promotion of its AI services, aiming to create a business model that can translate technology into tangible revenue streams.

For example, Meta recently opened up the operating system Horizon OS of its VR device Quest to brands like Lenovo and Asus, allowing them to produce their own branded VR/MR devices. The primary goal is to attract developers to enrich the content database and thereby promote industry development.


2024-05-03

[News] NVIDIA Reportedly Fueling Samsung and SK Hynix Competition, Impacting HBM Pricing?

According to South Korean media outlet BusinessKorea’s report on May 2nd, NVIDIA is reported to be fueling competition between Samsung Electronics and SK Hynix, possibly in an attempt to lower the prices of High Bandwidth Memory (HBM).

The report cited sources indicating that prices of third-generation HBM3 DRAM have soared more than fivefold since 2023. For NVIDIA, such a significant increase in the price of HBM, a critical component, is bound to affect research and development costs.

BusinessKorea thus accused NVIDIA of intentionally leaking information to play current and potential suppliers against each other, aiming to lower HBM prices. On April 25th, SK Group Chairman Chey Tae-won traveled to Silicon Valley to meet with NVIDIA CEO Jensen Huang, a visit potentially related to these strategies.

Although NVIDIA has been testing Samsung’s industry-leading 12-layer stacked HBM3e for over a month, it has yet to indicate a willingness to collaborate. BusinessKorea’s report cited sources suggesting this is a strategic move aimed at motivating Samsung Electronics. Samsung only recently announced that it will commence mass production of 12-layer stacked HBM3e in the second quarter.

SK Hynix CEO Kwak Noh-Jung announced on May 2nd that the company’s HBM capacity for 2024 has already been fully sold out, and 2025’s capacity is also nearly sold out. He mentioned that samples of the 12-layer stacked HBM3e will be sent out in May, with mass production expected to begin in the third quarter.

Kwak Noh-Jung further pointed out that although AI is currently primarily centered around data centers, it is expected to rapidly expand to on-device AI applications in smartphones, PCs, cars, and other end devices in the future. Consequently, the demand for memory specialized for AI, characterized by “ultra-fast, high-capacity and low-power,” is expected to skyrocket.

Kwak Noh-Jung also noted that SK Hynix possesses industry-leading technological capabilities across product areas such as HBM, TSV-based high-capacity DRAM, and high-performance eSSD. Going forward, SK Hynix aims to provide globally top-tier memory solutions tailored to customers’ needs through strategic partnerships with global collaborators.


(Photo credit: SK Hynix)

Please note that this article cites information from BusinessKorea.

2024-05-03

[News] Earthquake Impact Under Control Amid Strong Market Demand, Second Quarter DRAM Prices Expected to Rise by Over 20%

According to a report from TechNews, there has been a significant surge in demand in the memory market, driving prices steadily upward. The surge is reflected in the latest quotations from Korean memory manufacturers, with DDR5 prices set to increase by 13% in May.

Additionally, DDR4 prices are also expected to rise by 10%. As for DDR3, which currently serves as the main supply for Taiwanese memory manufacturers, there is also room for a 10% to 15% increase. Overall, contract prices for the second quarter are anticipated to rise by 20% to 25%.

Following in the wake of the earthquake that struck on April 3rd, TrendForce undertook an in-depth analysis of its effects on the DRAM industry, uncovering a sector that has shown remarkable resilience and faced minimal interruptions. Despite some damage and the necessity for inspections or disposal of wafers among suppliers, the facilities’ strong earthquake preparedness has kept the overall impact to a minimum.

Leading DRAM producers, including Micron, Nanya, PSMC, and Winbond, had all returned to full operational status by April 8th. In particular, Micron’s progression to cutting-edge processes—specifically the 1alpha and 1beta nm technologies—is anticipated to significantly alter the landscape of DRAM bit production. In contrast, other Taiwanese DRAM manufacturers are still working with 38 and 25nm processes, contributing less to total output. TrendForce estimates that the earthquake’s effect on DRAM production for the second quarter will be limited to a manageable 1%.

With the earthquake’s impact under control, each manufacturer responded in turn. Micron temporarily suspended quotations due to the earthquake’s impact on production; after completing its loss assessment, it notified customers of a 25% increase in DRAM and SSD contract prices.

Additionally, Samsung’s production line conversion and early cessation of DDR3 production have prompted many customers to shift procurement orders to Nanya and Winbond, whose products have now completed verification. As a result, Nanya and Winbond have informed customers that second-quarter DDR3 prices are expected to increase by 10-15%.

Per TrendForce’s observations, as the three major memory manufacturers continue to transition their production capacity to highly demanded products such as HBM and DDR5, the production capacity of products like DDR4 and DDR3 has significantly decreased.

Overall, the supply-demand gap is expected to reach 20-30% by the second half of 2024. This situation will lead to a substantial increase in DDR3 prices in the second half of the year, with price hikes reaching 50-100% as current prices remain below costs. This scenario will also be advantageous for the operational performance of various domestic memory manufacturers.


Please note that this article cites information from TechNews.

2024-05-03

[News] PSMC’s New Tongluo Plant Unveiled, CoWoS Packaging Ready to Roll

Powerchip Semiconductor Manufacturing Corporation (PSMC) held the inauguration ceremony for its new Tongluo plant on May 2nd. This investment project, totaling over NTD 300 billion for a 12-inch fab, has completed the installation of its initial equipment and commenced trial production. According to a report from Commercial Times, it will serve as PSMC’s primary platform for advancing process technology and pursuing orders from large international clients.

Additionally, PSMC has ventured into advanced CoWoS packaging, primarily producing Silicon Interposers, with mass production expected in the second half of the year and a monthly capacity of several thousand units.

Frank Huang, Chairman of PSMC, stated that construction of the new Tongluo plant began in March 2021. Despite challenges posed by the pandemic, the plant was completed and commenced operations after a three-year period.

As of now, the investment in this 12-inch fab project has exceeded NTD 80 billion, underscoring the significant time, technology, and financial requirements for establishing new semiconductor production capacity. Fortunately, the company made swift decisions and took action to build the plant; otherwise, with recent international inflation driving up the costs of various raw materials, the construction costs of this new plant would undoubtedly have been even higher.

The land area of PSMC’s Tongluo plant exceeds 110,000 square meters. The first phase of the newly completed plant comprises a cleanroom spanning 28,000 square meters, projected to house 12-inch wafer production lines for the 55nm, 40nm, and 28nm nodes with a monthly capacity of 50,000 wafers. In the future, as the business grows, the company can construct a second phase on the Tongluo site to continue advancing its 2x nanometer technology.

Frank Huang indicated that the first 12-inch fab in Taiwan was established by the Powerchip group. To date, they have built eight 12-inch fabs and plan to construct four more in the future. Some of these fabs will adopt the “Fab IP” technology licensing model. For example, the collaboration with Tata Group in India operates under this model.

According to a previous report from TechNews, Frank Huang believes that IP transfer will also become one of the important sources of revenue in the future. “Up to 7-8 countries have approached PSMC,” including Vietnam, Thailand, India, Saudi Arabia, France, Poland, Lithuania, and others, showing interest in investing in fabs, indicating optimism for PSMC’s future Fab IP operating model.

PSMC’s Fab IP strategy, according to the same report, leverages its long-term accumulated experience in plant construction and semiconductor manufacturing technology to assist other countries, extending from Japan and India to countries in the Middle East and Europe, in building semiconductor plants while earning royalties for technology transfers.

Looking ahead to the second half of the year, Frank Huang indicated that the current issue lies in the less-than-stellar performance of the United States and Chinese economies. While the United States is performing relatively better in AI and technology, China’s performance is not as strong.

Huang believes that after the fourth quarter of this year, there is a chance for accelerated deployment of AI application products such as smartphones, PCs, and notebooks. With the explosive demand brought about by AI, 2025 is expected to be a very good year for the semiconductor industry, and PSMC has already seized the opportunity.

In addition, PSMC also mentioned that since last year, there has been a continuous tight supply of advanced CoWoS packaging. In response to the demands of global chip clients, the company has also ventured into CoWoS-related businesses, primarily providing the Silicon Interposer needed for advanced CoWoS packaging. Currently in the validation stage, mass production is expected to commence in the second half of the year, with an initial monthly capacity of several thousand units.


(Photo credit: PSMC)

Please note that this article cites information from Commercial Times and TechNews.

2024-05-03

[News] TSMC Reportedly Commences Production of Tesla’s Next-Generation Dojo Chips, Anticipates 40x Increase in Computing Power in 3 Years

Tesla’s journey towards autonomous driving necessitates substantial computational power. Earlier today, TSMC confirmed the commencement of production for Tesla’s next-generation Dojo supercomputer training chips, heralding a significant leap in computing power by 2027.

As per a report from TechNews, Elon Musk’s plan reportedly underscores that software is the true key to profitability. Achieving this requires not only efficient algorithms but also robust computing power. On the hardware front, Tesla adopts a dual approach: procuring tens of thousands of NVIDIA H100 GPUs while simultaneously developing its own supercomputing chip, Dojo. TSMC, reportedly responsible for producing the supercomputing chips, has confirmed that production has begun.

As previously reported by TechNews, Tesla primarily focuses on the demand for autonomous driving and has introduced two AI chips to date: the Full Self-Driving (FSD) chip and the Dojo D1 chip. The FSD chip is used in Tesla vehicles’ autonomous driving systems, while the Dojo D1 chip serves as the building block of Tesla’s supercomputer, with many D1 chips assembled into AI training modules that power the Dojo system.

At the North American technology conference, TSMC provided detailed insights into semiconductor technology and advanced packaging, enabling the establishment of system-level integration on the entire wafer and creating ultra-high computing performance. TSMC stated that the next-generation Dojo training modules for Tesla have begun production. By 2027, TSMC is expected to offer more complex wafer-scale systems with computing power exceeding current systems by over 40 times.

The core of Tesla’s designed Dojo supercomputer lies in its training modules, wherein 25 D1 chips are arranged in a 5×5 matrix. Manufactured using a 7-nanometer process, each chip can accommodate 50 billion transistors, providing 362 TFLOPs of processing power. Crucially, it possesses scalability, allowing for continuous stacking and adjustment of computing power and power consumption based on software demands.
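Taking the figures above at face value (25 D1 chips per module, 362 TFLOPS per chip, as the text states), a quick back-of-the-envelope calculation puts the aggregate compute of one training module at roughly 9 PFLOPS:

```python
# Back-of-the-envelope aggregate compute for one Dojo training module,
# assuming the per-chip figures quoted above (25 D1 chips, 362 TFLOPS each).
chips_per_module = 5 * 5      # D1 chips arranged in a 5x5 matrix
tflops_per_chip = 362         # processing power per D1 chip, per the text

module_tflops = chips_per_module * tflops_per_chip
print(f"{module_tflops} TFLOPS = {module_tflops / 1000:.2f} PFLOPS per module")
# prints "9050 TFLOPS = 9.05 PFLOPS per module"
```

This assumes throughput scales linearly across the module, which is an idealization; real aggregate performance depends on interconnect bandwidth and workload.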

Wafer Integration Offers 40x Computing Power

According to TSMC, the approach for Tesla’s new product differs from the wafer-scale systems provided to Cerebras. Essentially, the Dojo training modules (a 5×5 grid of pretested processors) are placed on a single carrier wafer, with all empty spaces filled in. Subsequently, TSMC’s integrated fan-out (InFO) technology is utilized to apply a layer of high-density interconnects. This process significantly enhances the inter-chip data bandwidth, enabling them to function like a single large chip.

By 2027, TSMC predicts comprehensive wafer integration offering 40 times the computing power, incorporating more than 40 reticles’ worth of silicon and accommodating over 60 HBMs.

As per Musk, if NVIDIA could provide enough GPUs, Tesla probably would not need to develop Dojo on its own. Preliminary estimates suggest that this batch of next-generation Dojo supercomputers will become part of Tesla’s new Dojo cluster in New York, with an investment of at least USD 500 million.

Despite substantial computing power investments, Tesla’s AI business still faces challenges. In December last year, two senior engineers responsible for the Dojo project left the company. Now, Tesla is continuously downsizing to cut costs, even as it needs more talent to ensure the timely launch of self-driving taxis and to enhance the Full Self-Driving (FSD) system.

The next-generation Dojo computer for Tesla will be located in New York, while its Gigafactory headquarters in Texas will house a 100MW data center for training self-driving software, with hardware supplied by NVIDIA. Regardless of the location, these chips ultimately come from TSMC’s production lines. Referring to them as AI enablers is no exaggeration at all.


(Photo credit: TSMC)

Please note that this article cites information from TechNews.
