Brain-optimized extraction of complex sound features that drive continuous auditory perception

Julia Berezutskaya, Zachary V Freudenburg, Umut Güçlü, Marcel A J van Gerven and Nick F Ramsey

PLOS Computational Biology, 2020, vol. 16, issue 7, 1-34

Abstract: Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, in which new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties, with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and the involvement of cortical sites outside this region during audiovisual speech perception.

Author summary: Much remains unknown about how the human brain processes sound in a naturalistic setting, for example when talking to a friend or watching a movie. Many theoretical frameworks have been developed in an attempt to explain this process, yet we still lack a comprehensive understanding of the brain mechanisms that support continuous auditory processing. Here we present a new type of framework in which we seek to explain the brain responses to sound while making few theoretical assumptions, instead learning about the brain mechanisms of auditory processing with a ‘data-driven’ approach. Our approach applies a deep artificial neural network directly to the task of predicting the brain responses evoked by the soundtrack of a movie. We show that our framework yields good prediction accuracy for the observed neural activity and performs well on novel brain and audio data. In addition, we show that our model learns interpretable auditory features that link well to the observed neural dynamics, particularly during speech perception. This framework can easily be applied to external audio and brain data and is therefore unique in its potential to address various questions about auditory perception in a completely data-driven way.
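The encoding-model idea described in the abstract (raw soundtrack in, predicted electrocorticography responses out) can be illustrated with a brief sketch. The snippet below is a minimal, hypothetical PyTorch example, not the authors' actual architecture or training pipeline: a small 1-D convolutional network extracts features from raw audio windows, and a linear readout predicts one response value per electrode, fitted with a mean squared error loss. All layer sizes, sampling rates, and variable names are illustrative assumptions.

# Minimal sketch (assumed PyTorch; not the paper's exact architecture) of a
# data-driven encoding model: a 1-D convolutional network maps short windows of
# the raw movie soundtrack to the ECoG response at each electrode.
import torch
import torch.nn as nn

class AudioEncodingModel(nn.Module):
    def __init__(self, n_electrodes: int):
        super().__init__()
        # Stacked temporal convolutions learn "bottom-up" sound features directly
        # from the waveform, without hand-crafted acoustic descriptors.
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=32, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=16, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # A linear readout predicts one response value per electrode
        # for the current audio window.
        self.readout = nn.Linear(128, n_electrodes)

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, 1, samples) raw waveform window
        feats = self.features(audio).squeeze(-1)   # (batch, 128)
        return self.readout(feats)                 # (batch, n_electrodes)

# Hypothetical training step: minimize mean squared error between predicted
# and recorded neural responses (placeholder tensors stand in for real data).
model = AudioEncodingModel(n_electrodes=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

audio_batch = torch.randn(8, 1, 16000)   # placeholder: 8 one-second audio windows
ecog_batch = torch.randn(8, 64)          # placeholder: matching neural responses

optimizer.zero_grad()
loss = loss_fn(model(audio_batch), ecog_batch)
loss.backward()
optimizer.step()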

Date: 2020

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1007992 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 07992&type=printable (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1007992

DOI: 10.1371/journal.pcbi.1007992

Handle: RePEc:plo:pcbi00:1007992