Robust encoding of natural stimuli by neuronal response sequences in monkey visual cortex
Yang Yiling,
Katharine Shapcott,
Alina Peter,
Johanna Klon-Lipok,
Huang Xuhui,
Andreea Lazar and
Wolf Singer
Additional contact information
Yang Yiling: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
Katharine Shapcott: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
Alina Peter: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
Johanna Klon-Lipok: Max Planck Institute for Brain Research
Huang Xuhui: Intelligent Science and Technology Academy, China Aerospace Science and Industry Corporation (CASIC)
Andreea Lazar: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
Wolf Singer: Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society
Nature Communications, 2023, vol. 14, issue 1, 1-18
Abstract:
Parallel multisite recordings in the visual cortex of trained monkeys revealed that the responses of spatially distributed neurons to natural scenes are ordered in sequences. The rank order of these sequences is stimulus-specific and is maintained even if the absolute timing of the responses is modified by manipulating stimulus parameters. The stimulus specificity of these sequences was highest when they were evoked by natural stimuli and deteriorated for stimulus versions in which certain statistical regularities were removed. This suggests that the response sequences result from a matching operation between sensory evidence and priors stored in the cortical network. Decoders trained on sequence order performed as well as decoders trained on rate vectors, but the former could decode stimulus identity from considerably shorter response intervals than the latter. A simulated recurrent network reproduced similarly structured stimulus-specific response sequences, particularly once it was familiarized with the stimuli through non-supervised Hebbian learning. We propose that recurrent processing transforms signals from stationary visual scenes into sequential responses whose rank order is the result of a Bayesian matching operation. If this temporal code were used by the visual system, it would allow for ultrafast processing of visual scenes.
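The rank-order decoding the abstract describes can be illustrated with a minimal sketch: represent each trial by the rank order in which the recorded neurons respond, and classify the stimulus by comparing that rank vector against stimulus-specific template sequences. The sketch below is purely illustrative and is not the authors' decoder; the template latencies, the noise model, and the use of Spearman rank correlation as the matching score are all assumptions made here for the example. Note how scaling all latencies (as a stimulus-parameter manipulation might) leaves the rank order, and hence the decoded identity, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 20, 5

# Hypothetical template latencies: each stimulus is assumed to evoke a
# characteristic order of response onsets across the recorded neurons.
templates = rng.random((n_stimuli, n_neurons))

def rank_order(latencies):
    """Rank of each neuron's response latency (0 = earliest responder)."""
    return np.argsort(np.argsort(latencies))

def spearman(r1, r2):
    """Spearman rank correlation between two rank vectors."""
    n = len(r1)
    d = r1.astype(float) - r2.astype(float)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

def decode(latencies, templates):
    """Assign the trial to the template whose rank order matches best."""
    ranks = rank_order(latencies)
    scores = [spearman(ranks, rank_order(t)) for t in templates]
    return int(np.argmax(scores))

# A trial evoked by stimulus 2: absolute timing is rescaled and jittered,
# but the rank order is largely preserved, so decoding still succeeds.
trial = templates[2] * 1.5 + rng.normal(0.0, 0.02, n_neurons)
print(decode(trial, templates))
```

Because the score depends only on ranks, any monotonic change in absolute response timing leaves the decision untouched, which mirrors the abstract's observation that rank order survives manipulations of stimulus parameters that shift absolute latencies.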
Date: 2023
Downloads: https://www.nature.com/articles/s41467-023-38587-2 (abstract, text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:14:y:2023:i:1:d:10.1038_s41467-023-38587-2
Ordering information: This journal article can be ordered from
https://www.nature.com/ncomms/
DOI: 10.1038/s41467-023-38587-2
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
Bibliographic data for this series is maintained by Sonal Shukla and by Springer Nature Abstracting and Indexing.