Optimised weight programming for analogue memory-based deep neural networks

Charles Mackin, Malte J. Rasch, An Chen, Jonathan Timcheck, Robert L. Bruce, Ning Li, Pritish Narayanan, Stefano Ambrogio, Manuel Le Gallo, S. R. Nandakumar, Andrea Fasoli, Jose Luquin, Alexander Friz, Abu Sebastian, Hsinyu Tsai and Geoffrey W. Burr
Additional contact information
Charles Mackin: IBM Research–Almaden
Malte J. Rasch: IBM Research–Yorktown Heights
An Chen: IBM Research–Almaden
Jonathan Timcheck: Stanford University
Robert L. Bruce: IBM Research–Yorktown Heights
Ning Li: IBM Research–Yorktown Heights
Pritish Narayanan: IBM Research–Almaden
Stefano Ambrogio: IBM Research–Almaden
Manuel Le Gallo: IBM Research–Zurich
S. R. Nandakumar: IBM Research–Zurich
Andrea Fasoli: IBM Research–Almaden
Jose Luquin: IBM Research–Almaden
Alexander Friz: IBM Research–Almaden
Abu Sebastian: IBM Research–Zurich
Hsinyu Tsai: IBM Research–Almaden
Geoffrey W. Burr: IBM Research–Almaden

Nature Communications, 2022, vol. 13, issue 1, 1-12

Abstract: Analogue memory-based deep neural networks provide energy-efficiency and per-area throughput gains relative to state-of-the-art digital counterparts such as graphics processing units. Recent advances focus largely on hardware-aware algorithmic training and improvements to circuits, architectures, and memory devices. Optimal translation of software-trained weights into analogue hardware weights, given the plethora of complex memory non-idealities, represents an equally important task. We report a generalised computational framework that automates the crafting of complex weight programming strategies to minimise accuracy degradations during inference, particularly over time. The framework is agnostic to network structure and generalises well across recurrent, convolutional, and transformer neural networks. As a highly flexible numerical heuristic, the approach accommodates arbitrary device-level complexity, making it potentially relevant for a variety of analogue memories. By quantifying the limit of achievable inference accuracy, it also enables analogue memory-based deep neural network accelerators to reach their full inference potential.
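Illustrative sketch (not from the article): the abstract refers to translating software-trained weights into analogue conductances in the presence of device non-idealities. The short Python sketch below assumes hypothetical device parameters, simple Gaussian programming noise, and power-law conductance drift rather than the paper's optimisation framework; it shows how a trained weight matrix might be mapped onto differential conductance pairs and how the effective hardware weights then deviate from their software targets over time.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical device parameters (illustrative values only, not from the paper)
G_MAX = 25.0          # maximum programmable conductance (microsiemens)
PROG_NOISE_STD = 0.8  # std. dev. of programming error (microsiemens)
DRIFT_NU = 0.05       # conductance drift exponent
T0 = 1.0              # reference time after programming (seconds)


def program_weights(W, t=T0):
    """Map software weights onto differential conductance pairs (G+ - G-),
    then apply Gaussian programming noise and power-law conductance drift."""
    scale = G_MAX / np.max(np.abs(W))            # map largest |w| to G_MAX
    g_plus = np.clip(W, 0, None) * scale         # positive weights -> G+
    g_minus = np.clip(-W, 0, None) * scale       # negative weights -> G-

    # Programming noise: each device lands near, not exactly at, its target
    g_plus += rng.normal(0, PROG_NOISE_STD, g_plus.shape)
    g_minus += rng.normal(0, PROG_NOISE_STD, g_minus.shape)
    g_plus = np.clip(g_plus, 0, G_MAX)
    g_minus = np.clip(g_minus, 0, G_MAX)

    # Conductance drift: G(t) = G(t0) * (t / t0) ** (-nu)
    decay = (t / T0) ** (-DRIFT_NU)

    # Effective hardware weights, converted back to software weight units
    return (g_plus - g_minus) * decay / scale


# Compare software and hardware weights for a random layer at two time points
W = rng.normal(0, 0.5, size=(64, 64))
for t in (1.0, 3600.0):                          # 1 s and 1 hour after programming
    W_hw = program_weights(W, t=t)
    err = np.linalg.norm(W_hw - W) / np.linalg.norm(W)
    print(f"t = {t:>7.0f} s   relative weight error = {err:.3f}")

Running the sketch prints the relative weight error immediately after programming and one hour later; the framework described in the article aims to choose programming strategies that minimise exactly this kind of time-dependent degradation.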

Date: 2022

Downloads: https://www.nature.com/articles/s41467-022-31405-1 Abstract (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-31405-1

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-022-31405-1

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-31405-1