Structured random receptive fields enable informative sensory encodings
Biraj Pandey, Marius Pachitariu, Bingni W Brunton and Kameron Decker Harris
PLOS Computational Biology, 2022, vol. 18, issue 10, 1-28
Abstract:
Brains must represent the outside world so that animals survive and thrive. In early sensory systems, neural populations have diverse receptive fields structured to detect important features in inputs, yet significant variability has been ignored in classical models of sensory neurons. We model neuronal receptive fields as random, variable samples from parameterized distributions and demonstrate this model in two sensory modalities using data from insect mechanosensors and mammalian primary visual cortex. Our approach leads to a significant theoretical connection between the foundational concepts of receptive fields and random features, a leading theory for understanding artificial neural networks. The modeled neurons perform a randomized wavelet transform on inputs, which removes high-frequency noise and boosts the signal. Further, these random feature neurons enable learning from fewer training samples and with smaller networks in artificial tasks. This structured random model of receptive fields provides a unifying, mathematically tractable framework to understand sensory encodings across both spatial and temporal domains.
Author summary: Evolution has ensured that animal brains are dedicated to extracting useful information from raw sensory stimuli while discarding everything else. Models of sensory neurons are a key part of our theories of how the brain represents the world. In this work, we model the tuning properties of sensory neurons in a way that incorporates randomness and builds a bridge to a leading mathematical theory for understanding how artificial neural networks learn. Our models capture important properties of large populations of real neurons presented with varying stimuli. Moreover, we give a precise mathematical formula for how sensory neurons in two distinct areas, one involving a gyroscopic organ in insects and the other a visual processing center in mammals, transform their inputs. We also find that artificial models imbued with properties from real neurons learn more efficiently, with shorter training time and fewer examples, and our mathematical theory explains some of these findings. This work expands our understanding of sensory representation in large networks with benefits for both the neuroscience and machine learning communities.
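The abstract's central construction, receptive fields drawn at random from a parameterized distribution and used as a random feature map with a simple linear readout, can be sketched as follows. This is a minimal illustration only, not the authors' code or data pipeline; the Gabor-like parameterization, the rectifying nonlinearity, and the least-squares readout are assumptions chosen for concreteness.

```python
# Minimal sketch (hypothetical parameterization, not the paper's implementation):
# structured random receptive fields as a random feature map + linear readout.
import numpy as np

rng = np.random.default_rng(0)

def sample_gabor_rf(n_pixels, freq_scale=0.2, width_scale=0.15):
    """Sample one 1-D Gabor-like receptive field from a random distribution."""
    x = np.linspace(0, 1, n_pixels)
    center = rng.uniform(0, 1)                 # random center location
    width = width_scale * (0.5 + rng.random()) # random envelope width
    freq = rng.normal(0, 1) / freq_scale       # random spatial frequency
    phase = rng.uniform(0, 2 * np.pi)          # random phase
    rf = np.exp(-0.5 * ((x - center) / width) ** 2) * np.cos(freq * x + phase)
    return rf / np.linalg.norm(rf)

def random_feature_map(X, n_neurons=200):
    """Project stimuli X (n_samples, n_pixels) onto random receptive fields
    and rectify, in the spirit of random-feature models of sensory neurons."""
    W = np.stack([sample_gabor_rf(X.shape[1]) for _ in range(n_neurons)])
    return np.maximum(X @ W.T, 0.0)  # ReLU-like rectification

# Usage: encode placeholder stimuli, then fit a linear readout by least squares.
X = rng.normal(size=(500, 64))               # placeholder stimuli
y = np.sin(X[:, :8].sum(axis=1))             # placeholder target
H = random_feature_map(X)
beta = np.linalg.lstsq(H, y, rcond=None)[0]  # linear readout weights
```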
Date: 2022
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1010484 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 10484&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1010484
DOI: 10.1371/journal.pcbi.1010484