High-resolution single-photon imaging with physics-informed deep learning

Liheng Bian, Haoze Song, Lintao Peng, Xuyang Chang, Xi Yang, Roarke Horstmeyer, Lin Ye, Chunli Zhu, Tong Qin, Dezhi Zheng and Jun Zhang
Additional contact information
Liheng Bian: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Haoze Song: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Lintao Peng: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Xuyang Chang: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Xi Yang: Duke University
Roarke Horstmeyer: Duke University
Lin Ye: Beijing Institute of Technology
Chunli Zhu: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Tong Qin: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Dezhi Zheng: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology
Jun Zhang: MIIT Key Laboratory of Complex-field Intelligent Sensing, Beijing Institute of Technology

Nature Communications, 2023, vol. 14, issue 1, 1-13

Abstract: High-resolution single-photon imaging remains challenging due to complex hardware manufacturing and multiple noise disturbances. Here, we introduce deep learning into single-photon avalanche diode (SPAD) imaging, enabling super-resolution single-photon imaging with enhanced bit depth and imaging quality. We first studied the complex photon-flow model of SPAD electronics to accurately characterize multiple physical noise sources, and collected a real SPAD image dataset (64 × 32 pixels, 90 scenes, 10 different bit depths, 3 different illumination fluxes, 2790 images in total) to calibrate the noise model parameters. With this physical noise model, we synthesized a large-scale realistic single-photon image dataset (image pairs at 5 different resolutions up to megapixel scale, 17,250 scenes, 10 different bit depths, 3 different illumination fluxes, 2.6 million images in total) for subsequent network training. To tackle the severe super-resolution challenge of SPAD inputs with low bit depth, low resolution, and heavy noise, we further built a deep transformer network with a content-adaptive self-attention mechanism and gated fusion modules, which mines global contextual features to remove multi-source noise and extract full-frequency details. We applied the technique in a series of experiments including microfluidic inspection, Fourier ptychography, and high-speed imaging. The experiments validate the technique’s state-of-the-art super-resolution SPAD imaging performance.
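The abstract's physics-informed data synthesis rests on standard SPAD photon statistics: photon arrivals are Poisson-distributed, so over one gate the per-pixel detection probability is p = 1 − exp(−(η·Φ + d)·t), and a B-bit measurement accumulates up to 2^B − 1 binary frames. The sketch below is an illustrative simulation of that generic counting model only; the function name and parameter values (photon detection efficiency `eta`, dark count rate `dark_rate`) are hypothetical placeholders, not the paper's calibrated noise model, which includes additional noise sources calibrated from real SPAD data.

```python
import numpy as np

def simulate_spad_counts(flux, bit_depth=8, exposure=1e-3,
                         eta=0.4, dark_rate=100.0, rng=None):
    """Illustrative B-bit SPAD counting model (not the paper's calibrated model).

    flux      : per-pixel photon arrival rate (photons/s), array-like
    bit_depth : output bit depth B; counts accumulate over 2**B - 1 binary frames
    exposure  : per-frame gate time (s)
    eta       : assumed photon detection efficiency
    dark_rate : assumed dark count rate (counts/s)
    """
    rng = np.random.default_rng(rng)
    flux = np.asarray(flux, dtype=float)
    # Poisson arrivals -> per-frame detection probability for a binary SPAD gate
    p = 1.0 - np.exp(-(eta * flux + dark_rate) * exposure)
    n_frames = 2 ** bit_depth - 1  # maximum representable count
    # Each pixel fires independently per frame; the sum is binomial
    return rng.binomial(n_frames, p)

# Example: an 8-bit measurement on a 64 x 32 sensor, as in the real dataset
counts = simulate_spad_counts(np.full((64, 32), 1e4), bit_depth=8, rng=0)
```

Lowering `bit_depth` or `flux` in such a simulation reproduces the low-bit-depth, noise-dominated inputs the abstract identifies as the hard case for super-resolution.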

Date: 2023

Downloads: https://www.nature.com/articles/s41467-023-41597-9 (abstract, text/html)


Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:14:y:2023:i:1:d:10.1038_s41467-023-41597-9

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-023-41597-9


Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

Page updated 2025-03-19
Handle: RePEc:nat:natcom:v:14:y:2023:i:1:d:10.1038_s41467-023-41597-9