EconPapers    

Higher-level spatial prediction in natural vision across mouse visual cortex

Micha Heilbron and Floris P de Lange

PLOS Computational Biology, 2026, vol. 22, issue 1, 1-21

Abstract: Theories of predictive processing propose that sensory systems constantly predict incoming signals based on spatial and temporal context. However, evidence for prediction in sensory cortex largely comes from artificial experiments using simple, highly predictable stimuli that arguably encourage prediction. Here, we test for sensory prediction during natural scene perception. Specifically, we use deep generative modelling to quantify the spatial predictability of receptive field (RF) patches in natural images, and compare those predictability estimates to brain responses in the mouse visual cortex, while rigorously accounting for established tuning to a rich set of low-level image features and their local statistical context, in a large-scale survey of high-density recordings from the Allen Institute Brain Observatory. This revealed four insights. First, cortical responses across the mouse visual system are shaped by sensory predictability, with more predictable image patches evoking weaker responses. Second, visual cortical neurons are primarily sensitive to the predictability of higher-level image features, even neurons in the primary visual areas that are preferentially tuned to low-level visual features. Third, unpredictability sensitivity is stronger in the superficial layers of primary visual cortex, in line with predictive coding models. Finally, these spatial prediction effects are independent of recent experience, suggesting that they rely on long-term priors about the structure of the visual world. Together, these results suggest visual cortex might predominantly predict sensory information at higher levels of abstraction, a pattern bearing striking similarities to recent, successful techniques from artificial intelligence for predictive self-supervised learning.

Author summary: How does the brain make sense of the constant stream of visual information? A popular theory suggests the brain is not a passive receiver but an active predictor, constantly generating predictions about incoming sensory input. We tested this idea by analysing neural responses of thousands of brain cells in mice watching natural images. Using an AI model, we quantified how predictable any specific image patch was given its surroundings. As predicted by the theory, we found that brain cells indeed responded less to more predictable parts of an image. This effect appears to be based on long-term knowledge, as it was independent of the animal's recent experience with the images. Strikingly, we discovered that even in the earliest stages of visual processing, the brain is most sensitive to the predictability of complex patterns and textures, not that of simple features like edges. This strategy of predicting specifically the complex (high-level) information mirrors recent breakthroughs in AI, suggesting that both brains and these recent AI systems may learn to understand the visual world through a similar, predictive process.

Date: 2026
References: View complete reference list from CitEc

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1013136 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 13136&type=printable (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Export reference: BibTeX RIS (EndNote, ProCite, RefMan) HTML/Text

Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1013136

DOI: 10.1371/journal.pcbi.1013136

Access Statistics for this article

More articles in PLOS Computational Biology from Public Library of Science
Bibliographic data for series maintained by ploscompbiol ().

Page updated 2026-01-31
Handle: RePEc:plo:pcbi00:1013136