Compression-enabled interpretability of voxelwise encoding models
Fatemeh Kamali, Amir Abolfazl Suratgar, Mohammadbagher Menhaj and Reza Abbasi-Asl
PLOS Computational Biology, 2025, vol. 21, issue 2, 1-20
Abstract:
Voxelwise encoding models based on convolutional neural networks (CNNs) are widely used as predictive models of brain activity evoked by natural movies. Despite their superior predictive performance, the huge number of parameters in CNN-based models makes them difficult to interpret. Here, we investigate whether model compression can build more interpretable and more stable CNN-based voxelwise models while maintaining accuracy. We used multiple compression techniques: pruning of less important CNN filters and connections, a receptive field compression method that selects receptive fields with optimal center and size, and principal component analysis to reduce dimensionality. We demonstrate that model compression improves the accuracy of identifying visual stimuli in a held-out test set. Additionally, compressed models offer a more stable interpretation of voxelwise pattern selectivity than uncompressed models. Finally, the receptive field-compressed models reveal that the optimal model-based population receptive fields become larger and more centralized along the ventral visual pathway. Overall, our findings support using model compression to build more interpretable voxelwise models.

Author summary: In this study, we explored how to simplify complex brain models and investigated whether this simplification improves their interpretability without sacrificing accuracy. We focused on models that predict brain activity while people watch movies, which are usually based on advanced neural networks. These models are powerful, but they are often too complicated to interpret. By using compression techniques to reduce the size and complexity of these models, we found that they not only remained accurate but also became more stable and easier to understand. Our approach involved trimming unnecessary parts of the model and focusing on the most important areas that respond to visual stimuli. This suggests that simplifying models can help us better understand how the brain processes visual information. Our work highlights the potential of model compression as a tool for making complex scientific findings more accessible and easier to understand for both researchers and the general public.
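The abstract describes a compress-then-encode pipeline: prune less important CNN features, reduce dimensionality with PCA, and fit a voxelwise model on the compressed representation. The sketch below illustrates that general idea on synthetic data only; the feature shapes, the magnitude-based channel selection standing in for filter pruning, the pruning threshold, and the ridge regression encoder are all assumptions for illustration, not the authors' actual implementation.

```python
# Minimal sketch of a compress-then-encode pipeline on synthetic data.
# All shapes, thresholds, and model choices here are hypothetical placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical CNN features (n_stimuli x n_filters) and voxel responses.
n_stimuli, n_filters, n_voxels = 500, 256, 50
features = rng.standard_normal((n_stimuli, n_filters))
responses = rng.standard_normal((n_stimuli, n_voxels))

# 1) "Prune" less important filters: keep channels with the largest mean |activation|.
importance = np.abs(features).mean(axis=0)
keep = importance >= np.quantile(importance, 0.5)   # drop the weakest half (assumed threshold)
pruned = features[:, keep]

# 2) Reduce dimensionality of the remaining features with PCA.
compressed = PCA(n_components=20).fit_transform(pruned)

# 3) Fit a regularized voxelwise encoding model on the compressed features.
encoder = Ridge(alpha=1.0).fit(compressed, responses)
print("Explained variance (train):", encoder.score(compressed, responses))
```

In practice, the compressed features would be evaluated on a held-out test set (for example, by identifying which stimulus evoked each response pattern), which is the accuracy criterion the abstract refers to.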
Date: 2025
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1012822 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 12822&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1012822
DOI: 10.1371/journal.pcbi.1012822