Noise-aware training of neuromorphic dynamic device networks
Luca Manneschi,
Ian T. Vidamour,
Kilian D. Stenning,
Charles Swindells,
Guru Venkat,
David Griffin,
Lai Gui,
Daanish Sonawala,
Denis Donskikh,
Dana Hariga,
Elisa Donati,
Susan Stepney,
Will R. Branford,
Jack C. Gartside,
Thomas J. Hayward,
Matthew O. A. Ellis and
Eleni Vasilaki
Additional contact information
Luca Manneschi: University of Sheffield
Ian T. Vidamour: University of Sheffield
Kilian D. Stenning: Imperial College London
Charles Swindells: University of Sheffield
Guru Venkat: University of Sheffield
David Griffin: University of York
Lai Gui: Imperial College London
Daanish Sonawala: Imperial College London
Denis Donskikh: Imperial College London
Dana Hariga: University of Sheffield
Elisa Donati: University of Zurich and ETHZ
Susan Stepney: University of York
Will R. Branford: Imperial College London
Jack C. Gartside: Imperial College London
Thomas J. Hayward: University of Sheffield
Matthew O. A. Ellis: University of Sheffield
Eleni Vasilaki: University of Sheffield
Nature Communications, 2025, vol. 16, issue 1, 1-12
Abstract:
In materio computing offers the potential for widespread embodied intelligence by leveraging the intrinsic dynamics of complex systems for efficient sensing, processing, and interaction. While individual devices offer basic data processing capabilities, networks of interconnected devices can perform more complex and varied tasks. However, designing such networks for dynamic tasks is challenging in the absence of physical models and accurate characterization of device noise. We introduce the Noise-Aware Dynamic Optimization (NADO) framework for training networks of dynamical devices, using Neural Stochastic Differential Equations (Neural-SDEs) as differentiable digital twins to capture both the dynamics and stochasticity of devices with intrinsic memory. Our approach combines backpropagation through time with cascade learning, enabling effective exploitation of the temporal properties of physical devices. We validate this method on networks of spintronic devices across both temporal classification and regression tasks. By decoupling device model training from network connectivity optimization, our framework reduces data requirements and enables robust, gradient-based programming of dynamical devices without requiring analytical descriptions of their behaviour.
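The abstract describes using a Neural-SDE as a differentiable digital twin: learnable drift and diffusion terms capture both the deterministic dynamics and the intrinsic noise of a device with memory. The sketch below illustrates the general idea only, not the paper's actual model: it rolls out a toy surrogate dx = f(x, u) dt + g(x) dW with Euler-Maruyama integration, where the tiny drift and diffusion networks, their sizes, and the random placeholder weights are all assumptions for illustration (in the paper these would be fitted to device measurements and trained by backpropagation through time).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(params, inp):
    """A minimal two-layer network standing in for a learned drift/diffusion term."""
    w1, b1, w2, b2 = params
    h = np.tanh(inp @ w1 + b1)
    return h @ w2 + b2

def init_params(n_in, n_hidden, n_out):
    # Random placeholder weights; a real digital twin would fit these to data.
    return (rng.normal(0, 0.5, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.5, (n_hidden, n_out)), np.zeros(n_out))

state_dim, input_dim, hidden = 2, 1, 8
drift = init_params(state_dim + input_dim, hidden, state_dim)      # f(x, u)
diffusion = init_params(state_dim, hidden, state_dim)              # g(x)

def simulate(u_seq, dt=0.01):
    """Euler-Maruyama rollout of the surrogate driven by an input sequence."""
    x = np.zeros(state_dim)
    traj = []
    for u in u_seq:
        f = mlp(drift, np.concatenate([x, u]))
        g = np.abs(mlp(diffusion, x))        # noise amplitude kept non-negative
        dW = rng.normal(0.0, np.sqrt(dt), state_dim)   # Brownian increment
        x = x + f * dt + g * dW
        traj.append(x)
    return np.stack(traj)

traj = simulate(np.ones((100, 1)))
print(traj.shape)  # (100, 2): 100 time steps of a 2-dimensional device state
```

Because the rollout is built from differentiable operations, gradients of a task loss could in principle flow through it back to the network connectivity being optimized, which is the property the framework exploits.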
Date: 2025
https://www.nature.com/articles/s41467-025-64232-1 Abstract (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-64232-1
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-025-64232-1
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.