Achieving stable dynamics in neural circuits
Leo Kozachkov, Mikael Lundqvist, Jean-Jacques Slotine and Earl K Miller
PLOS Computational Biology, 2020, vol. 16, issue 8, 1-15
Abstract:
The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity must eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity, and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to the stability of fixed geometric objects in state space (e.g., points, lines, planes), but rather addresses the stability of state trajectories, which may be complex and time-varying.

Author summary:
Stability is essential for any complex system including, and perhaps especially, the brain. The brain's neural networks are highly dynamic and noisy. Activity fluctuates from moment to moment and can be highly variable, yet it is critical that these networks reach a consistent state (or sequence of states) for their computations to make sense. Failures of stability have consequences ranging from mild (e.g., incorrect decisions) to severe (disease states). In this paper we use tools from control theory and dynamical systems theory to find mechanisms that produce stability in recurrent neural networks (RNNs). We show that a kind of "unlearning" (inhibitory Hebbian and excitatory anti-Hebbian plasticity), balance of excitation and inhibition, and sparse anatomical connectivity all lead to stability. Crucially, we focus on the stability of neural trajectories. This is different from traditional studies of the stability of fixed points or planes. We do not assess which trajectories our networks will follow but, rather, when those trajectories will all converge towards each other to achieve stability.
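For readers unfamiliar with the method: contraction analysis (Lohmiller & Slotine, 1998) certifies the stability of trajectories rather than of fixed points. As a minimal sketch, assuming a generic firing-rate RNN of the standard form (the article's exact models and metrics may differ):

    \dot{x} = -x + W\,\phi(x) + u(t),
    \qquad
    J(x,t) = -I + W\,\mathrm{diag}\!\big(\phi'(x)\big),

the system is contracting, meaning all trajectories converge exponentially toward one another regardless of initial conditions, whenever the symmetric part of the Jacobian is uniformly negative definite:

    \tfrac{1}{2}\big(J + J^{\top}\big) \preceq -\beta I
    \quad \text{for some } \beta > 0,

which holds, for example, if |\phi'| \le g everywhere and the spectral norm of the weight matrix satisfies \|W\| < 1/g, since then \lambda_{\max}\big(\tfrac{1}{2}(J + J^{\top})\big) \le -1 + g\|W\| < 0. Under such a condition, the effects of noise and perturbations decay exponentially fast, which is the sense in which trajectories are "stable and reproducible" in the abstract above.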
Date: 2020
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007659 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 07659&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1007659
DOI: 10.1371/journal.pcbi.1007659