EconPapers    
 

11 TOPS photonic convolutional accelerator for optical neural networks

Xingyuan Xu, Mengxi Tan, Bill Corcoran, Jiayang Wu, Andreas Boes, Thach G. Nguyen, Sai T. Chu, Brent E. Little, Damien G. Hicks, Roberto Morandotti, Arnan Mitchell and David J. Moss
Additional contact information
Xingyuan Xu: Swinburne University of Technology
Mengxi Tan: Swinburne University of Technology
Bill Corcoran: Monash University
Jiayang Wu: Swinburne University of Technology
Andreas Boes: RMIT University
Thach G. Nguyen: RMIT University
Sai T. Chu: City University of Hong Kong
Brent E. Little: Chinese Academy of Sciences
Damien G. Hicks: Swinburne University of Technology
Roberto Morandotti: Matériaux et Télécommunications
Arnan Mitchell: RMIT University
David J. Moss: Swinburne University of Technology

Nature, 2021, vol. 589, issue 7840, 44-51

Abstract: Convolutional neural networks, inspired by biological visual cortex systems, are a powerful category of artificial neural networks that can extract the hierarchical features of raw data to provide greatly reduced parametric complexity and to enhance the accuracy of prediction. They are of great interest for machine learning tasks such as computer vision, speech recognition, playing board games and medical diagnosis (refs 1–7). Optical neural networks offer the promise of dramatically accelerating computing speed using the broad optical bandwidths available. Here we demonstrate a universal optical vector convolutional accelerator operating at more than ten TOPS (trillions (1012) of operations per second, or tera-ops per second), generating convolutions of images with 250,000 pixels—sufficiently large for facial image recognition. We use the same hardware to sequentially form an optical convolutional neural network with ten output neurons, achieving successful recognition of handwritten digit images at 88 per cent accuracy. Our results are based on simultaneously interleaving temporal, wavelength and spatial dimensions enabled by an integrated microcomb source. This approach is scalable and trainable to much more complex networks for demanding applications such as autonomous vehicles and real-time video recognition.
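As an illustration only (this is a plain software sketch, not the paper's photonic implementation), the vector convolution the accelerator computes optically can be written as a sliding multiply-accumulate, and the "operations" counted in the TOPS figure are those multiplies and adds. The image and kernel sizes below are assumptions chosen for illustration, not the paper's experimental settings.

```python
# Illustrative sketch: the sliding multiply-accumulate underlying a
# convolutional layer (machine-learning convention: kernel not flipped,
# i.e. cross-correlation), computed here in software rather than optics.

def conv1d(signal, kernel):
    """Valid-mode sliding dot product of kernel against signal."""
    n, k = len(signal), len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(n - k + 1)
    ]

def ops_per_image(pixels, kernel_taps):
    """Approximate operation count: each output sample of a k-tap kernel
    costs k multiplies plus (k - 1) adds, i.e. roughly 2k operations."""
    return 2 * kernel_taps * pixels

# Example: a 500x500 image (250,000 pixels, the scale quoted in the
# abstract) convolved with an assumed 3x3 kernel (9 taps):
print(conv1d([1, 2, 3, 4], [1, 0, -1]))   # -> [-2, -2]
print(ops_per_image(500 * 500, 9))        # -> 4500000 operations per image
```

At this assumed kernel size, a throughput above ten tera-ops per second corresponds to millions of such full-image convolutions each second, which is why raw operation rate is the headline metric for the accelerator.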

Date: 2021
Citations: 32 citations tracked in EconPapers

Downloads: (external link)
https://www.nature.com/articles/s41586-020-03063-0 Abstract (text/html)
Access to the full text of the articles in this series is restricted.

Persistent link: https://EconPapers.repec.org/RePEc:nat:nature:v:589:y:2021:i:7840:d:10.1038_s41586-020-03063-0

Ordering information: This journal article can be ordered from
https://www.nature.com/

DOI: 10.1038/s41586-020-03063-0

Nature is currently edited by Magdalena Skipper

Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-03-19
Handle: RePEc:nat:nature:v:589:y:2021:i:7840:d:10.1038_s41586-020-03063-0