Optical generative models
Shiqi Chen,
Yuhang Li,
Yuntian Wang,
Hanlong Chen and
Aydogan Ozcan
Additional contact information
Shiqi Chen: University of California Los Angeles
Yuhang Li: University of California Los Angeles
Yuntian Wang: University of California Los Angeles
Hanlong Chen: University of California Los Angeles
Aydogan Ozcan: University of California Los Angeles
Nature, 2025, vol. 644, issue 8078, 903-911
Abstract:
Generative models cover various application areas, including image and video synthesis, natural language processing and molecular design, among many others [1–11]. As digital generative models grow larger, fast and energy-efficient inference at scale becomes a challenge [12–14]. Here we present optical generative models inspired by diffusion models [4], in which a shallow and fast digital encoder first maps random noise into phase patterns that serve as optical generative seeds for a desired data distribution; a jointly trained, free-space-based reconfigurable decoder then all-optically processes these generative seeds to create previously unseen images that follow the target data distribution. Apart from the illumination power and the generation of random seeds through the shallow encoder, these optical generative models consume no computing power during image synthesis. We report the optical generation of monochrome and multicolour images of handwritten digits, fashion products, butterflies, human faces and artworks, following the data distributions of the MNIST [15], Fashion-MNIST [16], Butterflies-100 [17] and Celeb-A [18] datasets and of Van Gogh's paintings and drawings [19], achieving overall performance comparable to that of digital neural-network-based generative models. To demonstrate optical generative models experimentally, we used visible light to generate images of handwritten digits and fashion products. In addition, we generated Van Gogh-style artworks using both monochrome and multiwavelength illumination. These optical generative models might pave the way for energy-efficient and scalable inference, further exploiting the potential of optics and photonics for artificial-intelligence-generated content.
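As a conceptual illustration of the two-stage pipeline the abstract describes, the following minimal, untrained Python/NumPy sketch maps a random noise vector through a shallow encoder to a phase pattern ("optical generative seed") and then simulates the free-space optical decoding step. Everything beyond the abstract's two-stage structure is an assumption for illustration: the one-hidden-layer encoder, the angular-spectrum propagation model, the function names (encode, angular_spectrum, generate) and all physical parameters (wavelength, pixel pitch, propagation distance) are illustrative placeholders, not the paper's actual design; the joint training of encoder and decoder reported in the paper is omitted here.

# Minimal sketch, assuming a one-hidden-layer MLP encoder and an
# angular-spectrum model of free-space propagation; parameters are illustrative.
import numpy as np

N = 64                 # phase-pattern / image side length (pixels), assumed
WAVELENGTH = 520e-9    # illustrative visible-light wavelength (m)
PIXEL_PITCH = 8e-6     # illustrative modulator pixel pitch (m)
DISTANCE = 0.05        # illustrative free-space propagation distance (m)

rng = np.random.default_rng(0)

# Shallow encoder: noise vector -> phase pattern. Weights are random here;
# in the paper the encoder is trained jointly with the optical decoder.
W1 = rng.normal(0, 0.1, (256, 100))
W2 = rng.normal(0, 0.1, (N * N, 256))

def encode(noise):
    """Map a noise vector to a 2*pi-wrapped phase pattern (generative seed)."""
    h = np.tanh(W1 @ noise)
    return (W2 @ h).reshape(N, N) % (2 * np.pi)

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field a distance z via the angular-spectrum method."""
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

def generate(noise):
    """Noise -> phase seed -> free-space propagation -> intensity image."""
    phase = encode(noise)
    field = np.exp(1j * phase)            # unit-amplitude, phase-only illumination
    out = angular_spectrum(field, WAVELENGTH, PIXEL_PITCH, DISTANCE)
    return np.abs(out) ** 2               # a camera records intensity

image = generate(rng.normal(size=100))
print(image.shape, image.min(), image.max())

The sketch makes the abstract's efficiency claim concrete: after training, the only per-image digital computation is the shallow encoder forward pass, while the decoding itself is performed by light propagating in free space.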
Date: 2025
Downloads: https://www.nature.com/articles/s41586-025-09446-5 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:nat:nature:v:644:y:2025:i:8078:d:10.1038_s41586-025-09446-5
Ordering information: This journal article can be ordered from https://www.nature.com/
DOI: 10.1038/s41586-025-09446-5
Nature is currently edited by Magdalena Skipper
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.