EconPapers    

Gradient descent in materia through homodyne gradient extraction

Marcus N. Boon, Lorenzo Cassola, Hans-Christian Ruiz Euler, Tao Chen, Bram van de Ven, Unai Alegre Ibarra, Peter A. Bobbert and Wilfred G. van der Wiel
Additional contact information
Marcus N. Boon: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Lorenzo Cassola: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Hans-Christian Ruiz Euler: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Tao Chen: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Bram van de Ven: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Unai Alegre Ibarra: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Peter A. Bobbert: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing
Wilfred G. van der Wiel: University of Twente, NanoElectronics Group, MESA+ Institute for Nanotechnology and BRAINS Center for Brain-Inspired Computing

Nature Communications, 2025, vol. 16, issue 1, 1-10

Abstract: Deep learning, a multilayered neural-network approach inspired by the brain, has revolutionized machine learning. Its success relies on backpropagation, which computes gradients of a loss function for use in gradient descent. However, digital implementations are energy hungry, with power demands limiting many applications. This has motivated specialized hardware, from neuromorphic CMOS and photonic tensor cores to unconventional material-based systems. Learning in such systems, for example via artificial evolution, equilibrium propagation, or surrogate modelling, is typically complicated and slow. Here, we demonstrate a simple gradient-extraction method based on homodyne detection, enabling gradient descent directly in physical systems without the need for an analytical description. By perturbing parameters with sinusoidal waveforms at distinct frequencies, we robustly obtain gradient information in a scalable manner. We illustrate the method in reconfigurable nonlinear-processing units and argue for broad applicability. Homodyne gradient extraction can in principle be fully implemented in materia, facilitating autonomously learning material systems.
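The gradient-extraction idea in the abstract — perturb each parameter with a small sinusoid at its own frequency, then demodulate the scalar output at each frequency to recover the corresponding partial derivative — can be illustrated numerically. The sketch below is not the authors' implementation; the black-box function `f`, the perturbation amplitude, and the reference frequencies are all illustrative choices.

```python
import numpy as np

def f(theta):
    # Stand-in for a physical system's measured scalar output.
    return theta[0] ** 2 + 3.0 * theta[1]

def homodyne_gradient(f, theta, eps=1e-3, freqs=(3.0, 5.0), n=20000):
    # Sample one full period so the reference sinusoids are orthogonal.
    t = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    refs = np.sin(np.outer(t, np.asarray(freqs)))   # (n, p) reference signals
    y = np.array([f(theta + eps * r) for r in refs])  # measured output trace
    # Lock-in demodulation: project the output onto each reference;
    # sum(sin^2) over a full period is n/2, hence the 2/(eps*n) factor.
    return 2.0 / (eps * n) * (y @ refs)

theta = np.array([1.0, -2.0])
print(homodyne_gradient(f, theta))  # ≈ analytical gradient [2.0, 3.0]
```

Using distinct integer frequencies over a common period keeps the perturbation channels orthogonal, so each parameter's gradient component can be read out independently from a single output trace — the property that makes the scheme scalable.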

Date: 2025

Downloads: (external link)
https://www.nature.com/articles/s41467-025-65155-7 Abstract (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-65155-7

Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-65155-7


Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Page updated 2025-11-23
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-65155-7