[News] Chinese Scientists Achieve New Breakthrough in Next-Gen Optical Computing Chips


2025-12-30 Semiconductors editor

According to Xinhua News Agency, researchers at Shanghai Jiao Tong University (SJTU) have recently made a major breakthrough in next-generation optical computing chips, successfully realizing—for the first time—an all-optical computing chip capable of supporting large-scale semantic generative models. The research results were published on December 19 in Science.

As deep neural networks and large-scale generative models continue to evolve rapidly, demand for ultra-high computing power and energy efficiency has outpaced the performance growth of conventional chip architectures. Against this backdrop, emerging paradigms such as optical computing have been gaining traction.

“Optical computing can be understood as replacing electrons in transistors with light propagating through a chip, using changes in optical fields to perform computation,” said Chen Yitong, corresponding author of the paper and assistant professor at the School of Integrated Circuits, SJTU. “Light inherently offers advantages in speed and parallelism, making it a promising path to overcoming bottlenecks in computing power and energy consumption.”
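
As a loose illustration of that idea (not the SJTU design; the transfer matrix T, the mode count, and the random values below are assumptions made up for the sketch), propagation through any linear optical system mathematically applies a fixed complex transfer matrix to the input optical field, so a matrix-vector multiply happens simply as the light travels:

```python
import numpy as np

# Minimal sketch: light crossing a linear optical chip picks up a fixed
# complex transfer matrix T, so the computation y = T @ x is performed
# passively by propagation rather than by clocked transistor logic.
# T, n, and x here are illustrative stand-ins, not LightGen's parameters.
rng = np.random.default_rng(0)

n = 8                                  # number of optical modes (waveguides)
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))  # chip's transfer matrix
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)            # input field amplitudes

y = T @ x                              # propagation = matrix-vector multiply
intensity = np.abs(y) ** 2             # photodetectors measure |field|^2

print(intensity)
```

In a real photonic chip the matrix is fixed by the waveguide layout and phase settings, so once light is injected the multiply costs little more than the transit time of the light itself, which is the speed and parallelism advantage Chen describes.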

However, Chen noted that applying optical computing to generative AI is far from straightforward. Existing all-optical computing chips are largely confined to small-scale classification tasks, while opto-electronic cascading or multiplexing significantly undermines the speed advantage of optical computing. Enabling next-generation optical chips to run complex generative models has thus become a widely recognized challenge for the global intelligent computing community.

To address this challenge, Chen’s research group proposed and demonstrated LightGen, an all-optical large-scale semantic generative chip. Experimental results obtained under stringent performance evaluation standards show that, even when paired with comparatively slow input devices, LightGen delivers improvements in computing power and energy efficiency of two orders of magnitude over state-of-the-art digital chips.

The team attributes this performance leap to three key breakthroughs achieved simultaneously on a single chip: the integration of millions of optical neurons on-chip, all-optical dimensional transformation, and a ground-truth-free optical training algorithm for generative models. Together, these advances make end-to-end all-optical implementation for large-scale generative tasks feasible.
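
The article does not explain how the all-optical dimensional transformation works, so the following is only a generic illustration of the kind of one-pass global transform optics can perform: a thin lens, for example, applies a 2D spatial Fourier transform to the entire optical field at once, which in software corresponds to:

```python
import numpy as np

# Generic illustration, not LightGen's mechanism: a single lens
# Fourier-transforms the whole optical field in one pass. The 512x512
# field below is an arbitrary stand-in for an input image.
field = np.random.default_rng(1).standard_normal((512, 512))   # input field

transformed = np.fft.fftshift(np.fft.fft2(field))              # lens-like 2D Fourier transform

# Every output point depends on every input point, yet the optics compute
# it passively as the light propagates, with no multiply-accumulate steps.
print(transformed.shape)
```

Transforms of this kind hint at how large linear operations can be absorbed into the optical path itself rather than executed as sequences of digital instructions.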

According to the researchers, LightGen is capable of completing a full closed loop of “input–understanding–semantic manipulation–generation,” enabling high-resolution (≥512×512) image semantic generation, 3D generation (NeRF), high-definition video generation, and semantic control. It also supports a range of large-scale generative tasks, including denoising as well as local and global feature transfer.

“LightGen blazes a new pathway for next-generation optical computing chips to empower cutting-edge artificial intelligence,” Chen said, “and provides a fresh research direction for exploring faster and more energy-efficient generative intelligent computing.”

(Photo credit: FREEPIK)

Please note that this article cites information from Xinhua News Agency and Science.

