Random resistive memory-based deep extreme point learning machine for unified visual processing
Shaocong Wang,
Yizhao Gao,
Yi Li,
Woyu Zhang,
Yifei Yu,
Bo Wang,
Ning Lin,
Hegan Chen,
Yue Zhang,
Yang Jiang,
Dingchen Wang,
Jia Chen,
Peng Dai,
Hao Jiang,
Peng Lin,
Xumeng Zhang,
Xiaojuan Qi,
Xiaoxin Xu,
Hayden So,
Zhongrui Wang,
Dashan Shang,
Qi Liu,
Kwang-Ting Cheng and
Ming Liu
Author affiliations:
Shaocong Wang: The University of Hong Kong
Yizhao Gao: The University of Hong Kong
Yi Li: The University of Hong Kong
Woyu Zhang: Chinese Academy of Sciences
Yifei Yu: The University of Hong Kong
Bo Wang: The University of Hong Kong
Ning Lin: The University of Hong Kong
Hegan Chen: The University of Hong Kong
Yue Zhang: The University of Hong Kong
Yang Jiang: The University of Hong Kong
Dingchen Wang: The University of Hong Kong
Jia Chen: The University of Hong Kong
Peng Dai: The University of Hong Kong
Hao Jiang: Fudan University
Peng Lin: Zhejiang University
Xumeng Zhang: Fudan University
Xiaojuan Qi: The University of Hong Kong
Xiaoxin Xu: Chinese Academy of Sciences
Hayden So: The University of Hong Kong
Zhongrui Wang: Southern University of Science and Technology
Dashan Shang: Chinese Academy of Sciences
Qi Liu: Chinese Academy of Sciences
Kwang-Ting Cheng: Hong Kong Science Park
Ming Liu: Chinese Academy of Sciences
Nature Communications, 2025, vol. 16, issue 1, 1-11
Abstract:
Visual sensors, including 3D light detection and ranging (LiDAR), neuromorphic dynamic vision sensors, and conventional frame cameras, are increasingly integrated into edge-side intelligent machines. However, their data are heterogeneous, complicating system development. Moreover, conventional digital hardware is constrained by the von Neumann bottleneck and the physical limits of transistor scaling, and the computational demands of training ever-growing models further exacerbate these challenges. We propose a hardware-software co-designed random resistive memory-based deep extreme point learning machine. Data-wise, the multi-sensory data are unified as point sets and processed universally. Software-wise, most weights are exempted from training. Hardware-wise, nanoscale resistive memory collocates memory and processing, and its inherent programming stochasticity is leveraged to generate the random weights. The co-designed system is validated on 3D segmentation (ShapeNet), event recognition (DVS128 Gesture), and image classification (Fashion-MNIST) tasks, achieving accuracy comparable to conventional systems while delivering 6.78×/21.04×/15.79× energy efficiency improvements and 70.12%/89.46%/85.61% training cost reductions.
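The "most weights are exempted from training" claim follows the extreme learning machine idea: hidden-layer weights are random and frozen (here sourced from resistive-memory programming stochasticity), and only the linear readout is learned. A minimal NumPy sketch of that training scheme, with hypothetical toy shapes and data rather than the paper's actual architecture:

```python
import numpy as np

# Extreme-learning-machine sketch (toy shapes, not the paper's network):
# the hidden projection is random and never updated; only the linear
# readout is fitted, in closed form via regularized least squares.

rng = np.random.default_rng(0)
n_points, in_dim, hidden_dim, n_classes = 256, 3, 128, 4

# Toy "point set" input: each sample is a 3D point with a class label.
X = rng.standard_normal((n_points, in_dim))
y = rng.integers(0, n_classes, n_points)
T = np.eye(n_classes)[y]                # one-hot targets

# Random projection layer: weights fixed at initialization. In the
# paper's hardware, such weights arise from resistive-memory stochasticity.
W_rand = rng.standard_normal((in_dim, hidden_dim))
H = np.maximum(X @ W_rand, 0.0)         # ReLU hidden features

# Only the readout is trained: ridge-regularized least squares.
lam = 1e-3
W_out = np.linalg.solve(H.T @ H + lam * np.eye(hidden_dim), H.T @ T)

pred = (H @ W_out).argmax(axis=1)       # class predictions, shape (256,)
```

Because the readout solve replaces backpropagation through the random layers, training reduces to one linear-algebra step, which is the source of the training-cost reductions the abstract reports.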
Date: 2025
Downloads: https://www.nature.com/articles/s41467-025-56079-3 (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56079-3
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-025-56079-3
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.