Targeting operational regimes of interest in recurrent neural networks
Pierre Ekelmans, Nataliya Kraynyukova and Tatjana Tchumatchenko
PLOS Computational Biology, 2023, vol. 19, issue 5, 1-25
Abstract:
Neural computations emerge from local recurrent neural circuits or computational units such as cortical columns that comprise hundreds to a few thousand neurons. Continuous progress in connectomics, electrophysiology, and calcium imaging requires tractable spiking network models that can consistently incorporate new information about the network structure and reproduce the recorded neural activity features. However, for spiking networks, it is challenging to predict which connectivity configurations and neural properties can generate fundamental operational states and specific experimentally reported nonlinear cortical computations. Theoretical descriptions of the computational state of cortical spiking circuits are diverse, including the balanced state, in which excitatory and inhibitory inputs balance almost perfectly, and the inhibition-stabilized state (ISN), in which the excitatory part of the circuit is unstable. It remains an open question whether these states can coexist with experimentally reported nonlinear computations and whether they can be recovered in biologically realistic implementations of spiking networks. Here, we show how to identify spiking network connectivity patterns underlying diverse nonlinear computations such as XOR, bistability, inhibitory stabilization, supersaturation, and persistent activity. We establish a mapping between the stabilized supralinear network (SSN) and spiking activity that allows us to pinpoint the location in parameter space where these activity regimes occur. Notably, we find that biologically sized spiking networks can exhibit irregular asynchronous activity that does not require strong excitation-inhibition balance or large feedforward input, and we show that the dynamic firing rate trajectories in spiking networks can be precisely targeted without error-driven training algorithms.

Author summary: Biological neural networks must be able to execute diverse nonlinear operations on signals in order to perform complex information processing. While nonlinear transformations have been observed experimentally or in specific theoretical models, a comprehensive theory linking the parameters of a network of spiking neurons to its computations is still lacking. We show that spiking networks can be accurately approximated with a mathematically tractable model, the stabilized supralinear network. Using the mapping we derived between these two frameworks, we show that spiking networks have a rich repertoire of nonlinear regimes at their disposal and link the existence of such regimes to precise conditions on parameters. Notably, we show that classical excitatory-inhibitory networks of leaky integrate-and-fire neurons support nonlinear transformations without the need for synaptic plasticity, intricate wiring diagrams, or a complex system of different cell types. The capacity of a network to reliably perform such operations has profound functional implications, as it can form the basis for the execution of complex computations.
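As an illustration of the rate model referred to above, the following short Python sketch integrates a standard two-population (excitatory-inhibitory) SSN, in which each population rate r obeys tau * dr/dt = -r + k * [W r + h]_+^n with a supralinear exponent n > 1. The parameter values in the sketch (k, n, the weight matrix W, the time constants, and the inputs) are illustrative placeholders chosen for this example, not the values derived or fitted in the article.

# Minimal sketch of a standard two-population (excitatory-inhibitory)
# stabilized supralinear network (SSN) of the kind the article maps onto
# spiking activity. All parameter values below are illustrative
# placeholders, not the values used in the article.
import numpy as np

def ssn_steady_state(h_E, h_I, T=0.5, dt=1e-4):
    """Integrate the E-I SSN rate equations and return the steady-state rates."""
    k, n = 0.3, 2.0                      # supralinear gain and exponent
    tau = np.array([20e-3, 10e-3])       # E and I time constants (s)
    W = np.array([[1.25, -0.65],         # W_EE, -W_EI
                  [1.20, -0.50]])        # W_IE, -W_II
    h = np.array([h_E, h_I])             # feedforward input to E and I
    r = np.zeros(2)                      # population rates, start from rest
    for _ in range(int(T / dt)):
        drive = W @ r + h                # net recurrent + feedforward input
        r_inf = k * np.maximum(drive, 0.0) ** n   # power-law transfer function
        r += dt / tau * (-r + r_inf)     # forward-Euler step of the rate dynamics
    return r

# Print the steady-state rates as the feedforward drive increases.
for h in (1.0, 2.0, 5.0, 10.0):
    r_E, r_I = ssn_steady_state(h, h)
    print(f"h = {h:5.1f}  ->  r_E = {r_E:6.2f}, r_I = {r_I:6.2f}")

With different choices of W, k, and n, these same two equations produce the qualitatively different regimes named in the abstract (for example supersaturation or inhibitory stabilization); the article's contribution is the mapping that identifies which spiking-network parameters correspond to which regime.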
Date: 2023
Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011097 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 11097&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1011097
DOI: 10.1371/journal.pcbi.1011097