Temporal structure of natural language processing in the human brain corresponds to layered hierarchy of large language models

Ariel Goldstein, Eric Ham, Mariano Schain, Samuel A. Nastase, Bobbi Aubrey, Zaid Zada, Avigail Grinstein-Dabush, Harshvardhan Gazula, Amir Feder, Werner Doyle, Sasha Devore, Patricia Dugan, Daniel Friedman, Michael Brenner, Avinatan Hassidim, Yossi Matias, Orrin Devinsky, Noam Siegelman, Adeen Flinker, Omer Levy, Roi Reichart and Uri Hasson
Additional contact information
Ariel Goldstein: Hebrew University, Department of Cognitive and Brain Sciences
Eric Ham: Princeton University, Department of Psychology and the Neuroscience Institute
Mariano Schain: Google Research
Samuel A. Nastase: Princeton University, Department of Psychology and the Neuroscience Institute
Bobbi Aubrey: Princeton University, Department of Psychology and the Neuroscience Institute
Zaid Zada: Princeton University, Department of Psychology and the Neuroscience Institute
Avigail Grinstein-Dabush: Google Research
Harshvardhan Gazula: Princeton University, Department of Psychology and the Neuroscience Institute
Amir Feder: Google Research
Werner Doyle: New York University Grossman School of Medicine
Sasha Devore: New York University Grossman School of Medicine
Patricia Dugan: New York University Grossman School of Medicine
Daniel Friedman: New York University Grossman School of Medicine
Michael Brenner: Google Research
Avinatan Hassidim: Google Research
Yossi Matias: Google Research
Orrin Devinsky: New York University Grossman School of Medicine
Noam Siegelman: Hebrew University, Department of Cognitive and Brain Sciences
Adeen Flinker: New York University Grossman School of Medicine
Omer Levy: Tel-Aviv University, Blavatnik School of Computer Science
Roi Reichart: Technion—Israel Institute of Technology
Uri Hasson: Google Research

Nature Communications, 2025, vol. 16, issue 1, 1-12

Abstract: Large Language Models (LLMs) offer a framework for understanding language processing in the human brain. Unlike traditional models, LLMs represent words and context through layered numerical embeddings. Here, we demonstrate that LLMs’ layer hierarchy aligns with the temporal dynamics of language comprehension in the brain. Using electrocorticography (ECoG) data from participants listening to a 30-minute narrative, we show that deeper LLM layers correspond to later brain activity, particularly in Broca’s area and other language-related regions. We extract contextual embeddings from GPT-2 XL and Llama-2 and use linear models to predict neural responses across time. Our results reveal a strong correlation between model depth and the brain’s temporal receptive window during comprehension. We also compare LLM-based predictions with symbolic approaches, highlighting the advantages of deep learning models in capturing brain dynamics. We release our aligned neural and linguistic dataset as a public benchmark to test competing theories of language processing.
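
To make the described pipeline concrete, here is a minimal sketch of a layer-wise encoding analysis of the kind the abstract outlines: per-layer contextual embeddings are extracted from GPT-2 and a linear (ridge) model predicts neural activity at several lags relative to word onset. This is an illustration under stated assumptions, not the authors' code: it uses the small "gpt2" checkpoint rather than GPT-2 XL, treats tokens as words, substitutes simulated data for the ECoG recordings, and picks arbitrary lags and regularization strength.

```python
# Minimal sketch: layer-wise encoding analysis in the spirit of the abstract.
# NOTE: "gpt2" (small) stands in for GPT-2 XL, the neural data are simulated
# placeholders for ECoG, and lags/alpha are arbitrary illustrative choices.
import numpy as np
import torch
from transformers import GPT2Tokenizer, GPT2Model
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True).eval()

text = "after sleeping for a hundred years the princess woke to a changed world"
enc = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    # Tuple of (n_layers + 1) tensors, each of shape (1, n_tokens, hidden_dim).
    hidden = model(**enc).hidden_states

n_tokens = enc.input_ids.shape[1]
lags_ms = [-200, 0, 200, 400]  # lags relative to word onset (illustrative)
rng = np.random.default_rng(0)
# Placeholder "electrode" response per token at each lag (stands in for ECoG).
neural = rng.standard_normal((n_tokens, len(lags_ms)))

def encoding_r(X, y, n_splits=5):
    """Cross-validated Pearson r between ridge predictions and the signal."""
    preds = np.zeros_like(y)
    for train, test in KFold(n_splits).split(X):
        preds[test] = Ridge(alpha=10.0).fit(X[train], y[train]).predict(X[test])
    return np.corrcoef(preds, y)[0, 1]

# Score each layer at each lag; the paper reports that deeper layers best
# predict neural activity at progressively later lags.
for layer_idx in (1, len(hidden) // 2, len(hidden) - 1):
    X = hidden[layer_idx][0].numpy()  # (n_tokens, hidden_dim) embeddings
    scores = [encoding_r(X, neural[:, j]) for j in range(len(lags_ms))]
    print(f"layer {layer_idx:2d}: " +
          "  ".join(f"{lag:+d}ms r={r:+.2f}" for lag, r in zip(lags_ms, scores)))
```

With simulated data the correlations here hover near zero, as expected; on real aligned ECoG data, the layer-by-lag profile is what exposes the depth-to-time correspondence the paper reports.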

Date: 2025

Downloads: https://www.nature.com/articles/s41467-025-65518-0 Abstract (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-65518-0

Ordering information: This journal article can be ordered from https://www.nature.com/ncomms/

DOI: 10.1038/s41467-025-65518-0

Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie

More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.

 
Handle: RePEc:nat:natcom:v:16:y:2025:i:1:d:10.1038_s41467-025-65518-0