Fully hardware-implemented memristor convolutional neural network
Peng Yao,
Huaqiang Wu,
Bin Gao,
Jianshi Tang,
Qingtian Zhang,
Wenqiang Zhang,
J. Joshua Yang and
He Qian
Additional contact information
Peng Yao: Tsinghua University
Huaqiang Wu: Tsinghua University
Bin Gao: Tsinghua University
Jianshi Tang: Tsinghua University
Qingtian Zhang: Tsinghua University
Wenqiang Zhang: Tsinghua University
J. Joshua Yang: University of Massachusetts
He Qian: Tsinghua University
Nature, 2020, vol. 577, issue 7792, 641-646
Abstract:
Memristor-enabled neuromorphic computing systems provide a fast and energy-efficient approach to training neural networks1–4. However, convolutional neural networks (CNNs)—one of the most important models for image recognition5—have not yet been fully hardware-implemented using memristor crossbars, which are cross-point arrays with a memristor device at each intersection. Moreover, achieving software-comparable results is highly challenging owing to the poor yield, large variation and other non-ideal characteristics of devices6–9. Here we report the fabrication of high-yield, high-performance and uniform memristor crossbar arrays for the implementation of CNNs, which integrate eight 2,048-cell memristor arrays to improve parallel-computing efficiency. In addition, we propose an effective hybrid-training method to adapt to device imperfections and improve the overall system performance. We built a five-layer memristor-based CNN to perform MNIST10 image recognition, and achieved a high accuracy of more than 96 per cent. In addition to parallel convolutions using different kernels with shared inputs, replication of multiple identical kernels in memristor arrays was demonstrated for processing different inputs in parallel. The memristor-based CNN neuromorphic system has an energy efficiency more than two orders of magnitude greater than that of state-of-the-art graphics-processing units, and is shown to be scalable to larger networks, such as residual neural networks. Our results are expected to enable a viable memristor-based non-von Neumann hardware solution for deep neural networks and edge computing.
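The core idea summarized in the abstract is that a convolution can be carried out as an analog vector-matrix multiplication on a memristor crossbar: each kernel is flattened into a column of conductances, an input patch is applied as a voltage vector, and the output currents give the dot products in one step; replicating identical kernel columns allows different patches to be processed in parallel. The sketch below is a minimal software illustration of that mapping, not the authors' implementation; the layer sizes, the 5% conductance-variation figure and all variable names are hypothetical.

```python
# Minimal sketch of convolution-as-crossbar-VMM (illustrative only, not the
# authors' code). Each 3x3 kernel becomes one column of a conductance matrix G;
# an input patch applied as a voltage vector yields output currents patch @ G.
# Device non-ideality is emulated with small Gaussian conductance variation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer: 8 kernels of size 3x3 on a single-channel 28x28 input.
num_kernels, k, img = 8, 3, 28
kernels = rng.standard_normal((num_kernels, k * k))      # software weights

# "Program" the weights as crossbar conductances, with assumed 5% variation.
variation = 0.05 * rng.standard_normal(kernels.shape)
G = (kernels + variation).T                              # shape: (k*k, num_kernels)

def im2col(x, k):
    """Unroll all k x k patches of a 2-D input into rows of a matrix."""
    h = x.shape[0] - k + 1
    return np.array([x[i:i + k, j:j + k].ravel()
                     for i in range(h) for j in range(h)])  # (h*h, k*k)

x = rng.standard_normal((img, img))                      # dummy input image
patches = im2col(x, k)

# One crossbar read per patch: output currents = patch voltages @ conductances.
# Replicating G into several identical crossbar copies would let many patches
# be evaluated simultaneously; here that parallelism is emulated as a batched matmul.
out = patches @ G                                        # (26*26, num_kernels)
feature_maps = out.T.reshape(num_kernels, img - k + 1, img - k + 1)
print(feature_maps.shape)                                # (8, 26, 26)
```

In the paper's hybrid-training scheme, the mismatch introduced by such conductance variation is compensated by retraining only part of the network on-chip after the weights are programmed; the sketch above only models the forward pass.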
Date: 2020
Citations: 58 (tracked in EconPapers)
Downloads: https://www.nature.com/articles/s41586-020-1942-4 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:nat:nature:v:577:y:2020:i:7792:d:10.1038_s41586-020-1942-4
Ordering information: This journal article can be ordered from https://www.nature.com/
DOI: 10.1038/s41586-020-1942-4
Nature is currently edited by Magdalena Skipper
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.