Efficient nonlinear function approximation in analog resistive crossbars for recurrent neural networks

Junyi Yang, Ruibin Mao, Mingrui Jiang, Yichuan Cheng, Pao-Sheng Vincent Sun, Shuai Dong, Giacomo Pedretti, Xia Sheng, Jim Ignowski, Haoliang Li, Can Li and Arindam Basu
Additional contact information
Junyi Yang: City University of Hong Kong
Ruibin Mao: The University of Hong Kong
Mingrui Jiang: The University of Hong Kong
Yichuan Cheng: City University of Hong Kong
Pao-Sheng Vincent Sun: City University of Hong Kong
Shuai Dong: City University of Hong Kong
Giacomo Pedretti: Hewlett Packard Enterprise
Xia Sheng: Hewlett Packard Enterprise
Jim Ignowski: Hewlett Packard Enterprise
Haoliang Li: City University of Hong Kong
Can Li: The University of Hong Kong
Arindam Basu: City University of Hong Kong

Nature Communications, 2025, vol. 16, issue 1, 1-15

Abstract: Analog In-memory Computing (IMC) has demonstrated energy-efficient and low latency implementation of convolution and fully-connected layers in deep neural networks (DNN) by using physics for computing in parallel resistive memory arrays. However, recurrent neural networks (RNN) that are widely used for speech recognition and natural language processing have seen limited success with this approach. This can be attributed to the significant time and energy penalties incurred in implementing the nonlinear activation functions that are abundant in such models. In this work, we experimentally demonstrate the implementation of a nonlinear activation function integrated with a ramp analog-to-digital converter (ADC) at the periphery of the memory to improve in-memory implementation of RNNs. Our approach uses an extra column of memristors to produce an appropriately pre-distorted ramp voltage such that the comparator output directly approximates the desired nonlinear function. We experimentally demonstrate programming different nonlinear functions using a memristive array and simulate their incorporation in RNNs to solve keyword spotting and language modelling tasks. Compared to other approaches, we demonstrate a manifold increase in area-efficiency, energy-efficiency and throughput due to the in-memory, programmable ramp generator that removes digital processing overhead.
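The pre-distorted-ramp idea in the abstract can be sketched numerically: if the ramp presented to the comparator follows the inverse of the target activation, the counter step at which the comparator flips directly encodes the activation of the held input. The short NumPy model below is an illustrative sketch of that behaviour only; the function name predistorted_ramp_adc, the 256-step code width and the numerical inversion are hypothetical stand-ins for the paper's hardware, which generates the pre-distorted ramp with an extra memristor column.

import numpy as np

# Illustrative model (not the paper's circuit): a ramp ADC whose ramp is
# pre-distorted by the inverse of the target activation f, so the step at
# which the comparator flips directly encodes f(v_in).
def predistorted_ramp_adc(v_in, f, n_steps=256, v_min=-1.0, v_max=1.0):
    # Evenly spaced digital output levels spanning f over [v_min, v_max]
    codes = np.arange(n_steps)
    f_lo, f_hi = f(v_min), f(v_max)
    targets = f_lo + (f_hi - f_lo) * codes / (n_steps - 1)

    # Pre-distorted ramp: at step k the ramp voltage equals f^{-1}(targets[k]),
    # obtained here by numerical inversion on a dense grid (f must be monotonic)
    grid = np.linspace(v_min, v_max, 4096)
    ramp = np.interp(targets, f(grid), grid)

    # Comparator + counter: the output code is the first step at which the
    # ramp crosses the held input voltage, so the code tracks f(v_in)
    flip = np.clip(np.searchsorted(ramp, v_in), 0, n_steps - 1)
    return targets[flip]

# Example: tanh is approximated as a by-product of the conversion itself
x = np.linspace(-1.0, 1.0, 9)
print(np.round([predistorted_ramp_adc(v, np.tanh) for v in x], 3))
print(np.round(np.tanh(x), 3))

Because the nonlinearity is folded into the ramp, no lookup table or digital post-processing follows the conversion, which is where the area, energy and throughput gains described in the abstract come from.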

Date: 2025

Downloads: https://www.nature.com/articles/s41467-025-56254-6 Abstract (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56254-6

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-56254-6

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-56254-6