Experimentally validated memristive memory augmented neural network with efficient hashing and similarity search

Ruibin Mao, Bo Wen, Arman Kazemi, Yahui Zhao, Ann Franchesca Laguna, Rui Lin, Ngai Wong, Michael Niemier, X. Sharon Hu, Xia Sheng, Catherine E. Graves, John Paul Strachan and Can Li
Additional contact information
Ruibin Mao: The University of Hong Kong
Bo Wen: The University of Hong Kong
Arman Kazemi: Hewlett Packard Labs, Hewlett Packard Enterprise
Yahui Zhao: The University of Hong Kong
Ann Franchesca Laguna: University of Notre Dame
Rui Lin: The University of Hong Kong
Ngai Wong: The University of Hong Kong
Michael Niemier: University of Notre Dame
X. Sharon Hu: University of Notre Dame
Xia Sheng: Hewlett Packard Labs, Hewlett Packard Enterprise
Catherine E. Graves: Hewlett Packard Labs, Hewlett Packard Enterprise
John Paul Strachan: Peter Grünberg Institut (PGI-14), Forschungszentrum Jülich GmbH
Can Li: The University of Hong Kong

Nature Communications, 2022, vol. 13, issue 1, 1-13

Abstract: Lifelong on-device learning is a key challenge for machine intelligence, and it requires learning from few, often single, samples. Memory-augmented neural networks have been proposed to achieve this goal, but the memory module must be stored in off-chip memory, which severely limits their practical use. In this work, we experimentally validate that all the distinct structures of a memory-augmented neural network can be implemented on a fully integrated memristive crossbar platform with an accuracy that closely matches digital hardware. The demonstration is supported by new crossbar functions, including crossbar-based content-addressable memory and locality sensitive hashing that exploits the intrinsic stochasticity of memristor devices. Simulations show that such an implementation can be efficiently scaled up for one-shot learning on more complex tasks. This demonstration paves the way for practical on-device lifelong learning and opens possibilities for novel attention-based algorithms that were not possible on conventional hardware.
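As a concrete illustration of the two crossbar primitives named in the abstract, the following minimal Python/NumPy sketch emulates them in software: random-hyperplane locality sensitive hashing (standing in for the hash signatures that the hardware derives from intrinsic memristor stochasticity) and a content-addressable memory (CAM) lookup, emulated here as a Hamming-distance best match. All names, dimensions, and the random-projection stand-in are illustrative assumptions, not the authors' implementation.

    # Software analogue of the two crossbar primitives (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)

    def lsh_encode(x, planes):
        # Random-hyperplane LSH: one bit per hyperplane, the sign of the projection.
        # In the hardware, the projection matrix comes from stochastic memristor
        # conductances rather than a software RNG.
        return (x @ planes > 0).astype(np.uint8)

    def cam_lookup(query_code, memory_codes):
        # Emulate a CAM search: return the row whose stored code has the smallest
        # Hamming distance to the query (the analog CAM finds this in one step).
        distances = np.count_nonzero(memory_codes != query_code, axis=1)
        return int(np.argmin(distances))

    # Toy few-shot memory: 5 stored support vectors, 64-D features, 32-bit codes.
    d_feature, n_bits, n_support = 64, 32, 5
    planes = rng.standard_normal((d_feature, n_bits))
    support = rng.standard_normal((n_support, d_feature))
    memory = lsh_encode(support, planes)  # codes written into the CAM rows

    # A query near support[2] should retrieve entry 2.
    query = support[2] + 0.1 * rng.standard_normal(d_feature)
    print(cam_lookup(lsh_encode(query, planes), memory))  # -> 2 (with high probability)

In the reported hardware, both steps are analog crossbar operations: the matrix-vector product behind the hash is a single read of a memristor array, and the best-match search is performed in place by the crossbar-based CAM rather than by an explicit distance loop.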

Date: 2022
Citations: 1 (as recorded in EconPapers)

Downloads: https://www.nature.com/articles/s41467-022-33629-7 (abstract, text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-33629-7

Ordering information: this journal article can be ordered from https://www.nature.com/ncomms/

DOI: 10.1038/s41467-022-33629-7

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-022-33629-7