Foundation model of neural activity predicts response to new stimulus types
Eric Y. Wang,
Paul G. Fahey,
Zhuokun Ding,
Stelios Papadopoulos,
Kayla Ponder,
Marissa A. Weis,
Andersen Chang,
Taliah Muhammad,
Saumil Patel,
Zhiwei Ding,
Dat Tran,
Jiakun Fu,
Casey M. Schneider-Mizell,
R. Clay Reid,
Forrest Collman,
Nuno Maçarico Costa,
Katrin Franke,
Alexander S. Ecker,
Jacob Reimer,
Xaq Pitkow,
Fabian H. Sinz and
Andreas S. Tolias
Additional contact information
Eric Y. Wang: Baylor College of Medicine
Paul G. Fahey: Baylor College of Medicine
Zhuokun Ding: Baylor College of Medicine
Stelios Papadopoulos: Baylor College of Medicine
Kayla Ponder: Baylor College of Medicine
Marissa A. Weis: University of Göttingen
Andersen Chang: Baylor College of Medicine
Taliah Muhammad: Baylor College of Medicine
Saumil Patel: Baylor College of Medicine
Zhiwei Ding: Baylor College of Medicine
Dat Tran: Baylor College of Medicine
Jiakun Fu: Baylor College of Medicine
Casey M. Schneider-Mizell: Allen Institute for Brain Science
R. Clay Reid: Allen Institute for Brain Science
Forrest Collman: Allen Institute for Brain Science
Nuno Maçarico Costa: Allen Institute for Brain Science
Katrin Franke: Baylor College of Medicine
Alexander S. Ecker: University of Göttingen
Jacob Reimer: Baylor College of Medicine
Xaq Pitkow: Baylor College of Medicine
Fabian H. Sinz: Baylor College of Medicine
Andreas S. Tolias: Baylor College of Medicine
Nature, 2025, vol. 640, issue 8058, 470-477
Abstract:
The complexity of neural circuits makes it challenging to decipher the brain’s algorithms of intelligence. Recent breakthroughs in deep learning have produced models that accurately simulate brain activity, enhancing our understanding of the brain’s computational objectives and neural coding. However, it is difficult for such models to generalize beyond their training distribution, limiting their utility. The emergence of foundation models [1] trained on vast datasets has introduced a new artificial intelligence paradigm with remarkable generalization capabilities. Here we collected large amounts of neural activity from visual cortices of multiple mice and trained a foundation model to accurately predict neuronal responses to arbitrary natural videos. This model generalized to new mice with minimal training and successfully predicted responses across various new stimulus domains, such as coherent motion and noise patterns. Beyond neural response prediction, the model also accurately predicted anatomical cell types, dendritic features and neuronal connectivity within the MICrONS functional connectomics dataset [2]. Our work is a crucial step towards building foundation models of the brain. As neuroscience accumulates larger, multimodal datasets, foundation models will reveal statistical regularities, enable rapid adaptation to new tasks and accelerate research.
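To make the modelling paradigm described in the abstract concrete, the sketch below (Python/PyTorch) illustrates one way a shared "foundation" video encoder can be paired with lightweight per-animal readouts, so that adapting to a new mouse only requires fitting the small readout on a limited amount of that mouse's data. This is not the authors' published code: all module names, layer sizes, the pooling scheme and the Poisson loss are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): a shared-core /
# per-animal-readout model in the spirit of the foundation-model paradigm
# described in the abstract. The core is imagined as trained on natural-video
# responses pooled across many mice; adapting to a new mouse then only
# requires fitting a small readout. All sizes below are placeholders.

import torch
import torch.nn as nn


class SharedCore(nn.Module):
    """Video encoder shared across all animals (the 'foundation' component)."""

    def __init__(self, channels: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=(5, 7, 7), padding=(2, 3, 3)),
            nn.ELU(),
            nn.Conv3d(channels, channels, kernel_size=(5, 3, 3), padding=(2, 1, 1)),
            nn.ELU(),
        )

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, 1, time, height, width) -> (batch, channels, time, H, W)
        return self.features(video)


class NeuronReadout(nn.Module):
    """Per-animal readout mapping shared features to each neuron's firing rate."""

    def __init__(self, channels: int, n_neurons: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d((None, 1, 1))  # collapse space, keep time
        self.weights = nn.Linear(channels, n_neurons)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        x = self.pool(features).squeeze(-1).squeeze(-1)   # (batch, channels, time)
        x = x.transpose(1, 2)                             # (batch, time, channels)
        return torch.exp(self.weights(x))                 # non-negative rates


# Adapting to a new mouse: freeze the shared core and fit only the readout.
core = SharedCore()
for p in core.parameters():
    p.requires_grad = False                               # core stays fixed

new_mouse_readout = NeuronReadout(channels=32, n_neurons=500)
optimizer = torch.optim.Adam(new_mouse_readout.parameters(), lr=1e-3)

video = torch.randn(2, 1, 16, 36, 64)                     # toy natural-video clips
observed = torch.rand(2, 16, 500)                         # toy recorded responses
predicted = new_mouse_readout(core(video))
loss = nn.functional.poisson_nll_loss(predicted, observed, log_input=False)
loss.backward()
optimizer.step()
```

Freezing the shared core while estimating only animal-specific readout parameters is the standard transfer-learning recipe for foundation models, and is one plausible reading of how the model in the abstract "generalized to new mice with minimal training".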
Date: 2025
Downloads: https://www.nature.com/articles/s41586-025-08829-y (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:nat:nature:v:640:y:2025:i:8058:d:10.1038_s41586-025-08829-y
Ordering information: This journal article can be ordered from https://www.nature.com/
DOI: 10.1038/s41586-025-08829-y