Detecting change in stochastic sound sequences
Benjamin Skerritt-Davis and Mounya Elhilali
PLOS Computational Biology, 2018, vol. 14, issue 5, 1-24
Abstract:
Our ability to parse our acoustic environment relies on the brain’s capacity to extract statistical regularities from surrounding sounds. Previous work in regularity extraction has predominantly focused on the brain’s sensitivity to predictable patterns in sound sequences. However, natural sound environments are rarely completely predictable, often containing some level of randomness, yet the brain is able to effectively interpret its surroundings by extracting useful information from stochastic sounds. It has been previously shown that the brain is sensitive to the marginal lower-order statistics of sound sequences (i.e., mean and variance). In this work, we investigate the brain’s sensitivity to higher-order statistics describing temporal dependencies between sound events through a series of change detection experiments, where listeners are asked to detect changes in randomness in the pitch of tone sequences. Behavioral data indicate listeners collect statistical estimates to process incoming sounds, and a perceptual model based on Bayesian inference shows a capacity in the brain to track higher-order statistics. Further analysis of individual subjects’ behavior indicates an important role of perceptual constraints in listeners’ ability to track these sensory statistics with high fidelity. In addition, the inference model facilitates analysis of neural electroencephalography (EEG) responses, anchoring the analysis relative to the statistics of each stochastic stimulus. This reveals both a deviance response and a change-related disruption in phase of the stimulus-locked response that follow the higher-order statistics. These results shed light on the brain’s ability to process stochastic sound sequences.
Author summary:
To understand our auditory surroundings, the brain extracts invariant representations from sounds over time that are robust to the randomness inherent in real-world sound sources, while staying flexible to adapt to a dynamic environment.
The computational mechanisms used to achieve this in auditory perception are not well understood. Typically, this ability is investigated using predictable patterns in a sequence of sounds, asking: “How does the brain detect the pattern embedded in this sequence?”, which does not generalize well to natural listening. Here, we examine processing of stochastic sounds that contain uncertainty in their interpretation, asking: “How does the brain detect the statistical structure instantiated by this sequence?”. We present human experimental evidence employing a perceptual model for predictive processing to show that the brain collects higher-order statistics about the temporal dependencies between sounds. In addition, the model reveals correlates between task performance and individual differences in perception, as well as deviance effects in the neural response that would be otherwise hidden with conventional, stimulus-driven analyses. This model guides our interpretation of both behavioral and neural responses in the presence of stimulus uncertainty, allowing for the study of perception of more natural stimuli in the laboratory.
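The paper's Bayesian inference model tracks higher-order statistics (temporal dependencies) of tone sequences; as a much-simplified stand-in for the general idea, the toy sketch below detects a change in randomness (variance) in a simulated pitch sequence by maximum-likelihood comparison of changepoint hypotheses. All function names, parameters, and the Gaussian/known-variance assumptions are illustrative and are not taken from the paper.

```python
import math
import random

def gauss_loglik(xs, sigma):
    """Log-likelihood of samples under a zero-mean Gaussian with std dev sigma."""
    n = len(xs)
    return (-0.5 * n * math.log(2 * math.pi * sigma ** 2)
            - sum(x * x for x in xs) / (2 * sigma ** 2))

def detect_change(seq, sigma0, sigma1):
    """Return the changepoint index maximizing the likelihood of a variance
    switch from sigma0 to sigma1, or None if the no-change hypothesis wins."""
    best_c, best_ll = None, gauss_loglik(seq, sigma0)  # no-change baseline
    for c in range(1, len(seq)):
        ll = gauss_loglik(seq[:c], sigma0) + gauss_loglik(seq[c:], sigma1)
        if ll > best_ll:
            best_c, best_ll = c, ll
    return best_c

random.seed(0)
# Simulated tone "pitches": low-variability first half, high-variability second half.
seq = ([random.gauss(0, 1) for _ in range(100)]
       + [random.gauss(0, 4) for _ in range(100)])
print(detect_change(seq, sigma0=1.0, sigma1=4.0))
```

A listener (or model) with access only to the marginal mean would miss this change, since both halves have the same mean; sensitivity to variance, and more generally to statistics of the sequence's temporal structure, is what the experiments probe.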
Date: 2018
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1006162 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 06162&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1006162
DOI: 10.1371/journal.pcbi.1006162