How well do models of visual cortex generalize to out of distribution samples?
Yifei Ren and Pouya Bashivan
PLOS Computational Biology, 2024, vol. 20, issue 5, 1-29
Abstract:
Unit activity in particular deep neural networks (DNNs) is remarkably similar to the neuronal population responses to static images along the primate ventral visual cortex. Linear combinations of DNN unit activities are widely used to build predictive models of neuronal activity in the visual cortex. Nevertheless, prediction performance in these models is often investigated on stimulus sets consisting of everyday objects under naturalistic settings. Recent work has revealed a generalization gap when predicting neuronal responses to synthetically generated out-of-distribution (OOD) stimuli. Here, we investigated how recent progress in improving DNNs’ object recognition generalization, as well as various DNN design choices such as architecture, learning algorithm, and dataset, has impacted the generalization gap in neural predictivity. We came to the surprising conclusion that performance on none of the common computer vision OOD object recognition benchmarks is predictive of OOD neural predictivity. Furthermore, we found that adversarially robust models often yield substantially higher generalization in neural predictivity, although the degree of robustness itself was not predictive of the neural predictivity score. These results suggest that improving object recognition behavior on current benchmarks alone may not lead to more general models of neurons in the primate ventral visual cortex.
Author summary: Inspired by the neural circuits of the brain, deep neural networks (DNNs) have been steadily improving in their ability to perform foundational visual tasks such as object recognition. Whereas early models struggled to generalize to abstract visual domains such as line drawings and cartoons, recent advances have approached near-human recognition capabilities. Moreover, the unit activity in these networks exhibits strong similarities with the activity of single-unit recordings along the primate ventral visual cortex.
This capability of DNNs has provided visual neuroscientists with precise models for exploring the neural underpinnings of object recognition. Our research probes whether improvements in neural networks’ recognition of out-of-distribution objects correlate with improved predictability of brain activity in the visual cortex of monkeys in response to synthetic stimuli. We found that out-of-distribution object recognition performance on natural image datasets is not a reliable measure of neural predictivity. However, DNN models that were trained to be more resilient to adversarially generated noise patterns, as well as DNN ensembles, consistently yielded better generalization in neural predictivity. Altogether, our results suggest that improving object recognition behavior on current benchmarks alone may not lead to more general models of neurons in the primate ventral visual cortex.
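The predictive models discussed above are typically built by linearly mapping DNN unit activations onto recorded neuronal responses and scoring the fit per neuron on held-out stimuli. The following is a minimal sketch of that pipeline using synthetic stand-in data; the variable names, the choice of ridge regression, and the Pearson-correlation score are illustrative common practice, not the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: "DNN activations" X for a set of stimuli, and
# "neuronal responses" Y that are a noisy linear readout of those units.
n_stimuli, n_units, n_neurons = 500, 100, 20
X = rng.standard_normal((n_stimuli, n_units))
W_true = 0.3 * rng.standard_normal((n_units, n_neurons))
Y = X @ W_true + 0.1 * rng.standard_normal((n_stimuli, n_neurons))

# Split stimuli into a fitting set and a held-out evaluation set.
n_train = 400
X_tr, X_te = X[:n_train], X[n_train:]
Y_tr, Y_te = Y[:n_train], Y[n_train:]

# Ridge regression (closed form): linear map from DNN units to each neuron.
alpha = 1.0
W_hat = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(n_units), X_tr.T @ Y_tr)
Y_hat = X_te @ W_hat

def predictivity(y_true, y_pred):
    """Neural predictivity: per-neuron Pearson r on held-out stimuli."""
    return np.array([np.corrcoef(y_true[:, i], y_pred[:, i])[0, 1]
                     for i in range(y_true.shape[1])])

scores = predictivity(Y_te, Y_hat)
print(f"median held-out predictivity: {np.median(scores):.2f}")
```

In an actual experiment, X would hold activations from an intermediate layer of a pretrained network and Y would hold recorded responses; the generalization gap studied here arises when the held-out evaluation stimuli are out-of-distribution (e.g. synthetic) rather than a random split of naturalistic images.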
Date: 2024
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1011145 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 11145&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1011145
DOI: 10.1371/journal.pcbi.1011145