From Birdsong to Human Speech Recognition: Bayesian Inference on a Hierarchy of Nonlinear Dynamical Systems
Izzet B Yildiz, Katharina von Kriegstein and Stefan J Kiebel
PLOS Computational Biology, 2013, vol. 9, issue 9, 1-16
Abstract:
Our knowledge about the computational mechanisms underlying human learning and recognition of sound sequences, especially speech, is still very limited. One difficulty in deciphering the exact means by which humans recognize speech is that experimental findings at the neuronal, microscopic level are scarce. Here, we show that our neuronal-computational understanding of speech learning and recognition may be vastly improved by looking at an animal model, the songbird, which faces the same challenge as humans: to learn and decode complex auditory input in an online fashion. Motivated by striking similarities between the human and songbird neural recognition systems at the macroscopic level, we assumed that the human brain uses the same computational principles at a microscopic level, and translated a birdsong model into a novel model of human sound learning and recognition, with an emphasis on speech. We show that the resulting Bayesian model, built on a hierarchy of nonlinear dynamical systems, can learn speech samples such as words rapidly and recognize them robustly, even in adverse conditions. In addition, we show that recognition succeeds even when words are spoken by different speakers and with different accents, an everyday situation in which current state-of-the-art speech recognition models often fail. The model can also be used to qualitatively explain behavioral data on human speech learning and to derive predictions for future experiments.

Author Summary: Neuroscience still lacks a concrete explanation of how humans recognize speech. Even though neuroimaging techniques are helpful in determining the brain areas involved in speech recognition, they rarely yield mechanistic explanations at the neuronal level. Here, we assume that songbirds and humans solve a very similar task: extracting information from the sound-wave modulations produced by a singing bird or a speaking human. Given strong evidence that humans and songbirds, although genetically very distant, converged on a similar solution, we combined the wealth of neurobiological findings on songbirds with nonlinear dynamical systems theory to develop a hierarchical Bayesian model that explains fundamental functions in the recognition of sound sequences. We found that the resulting model learns and recognizes human speech well. We suggest that this translated model can be used to qualitatively explain or predict experimental data, and that the underlying mechanism can inform improved automatic speech recognition algorithms.
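A minimal sketch may make the abstract's central construct concrete: a slow nonlinear level (Lotka-Volterra winnerless competition, the kind of sequence generator used for the HVC-like level in the birdsong model the paper builds on) sets the parameters of a fast oscillatory level, and recognition runs online by letting the sensory prediction error correct both levels. This is not the authors' implementation: the paper inverts a detailed model with generalised Bayesian filtering (DEM), whereas the dynamics, constants (RHO, FREQS, TAU, K_X, K_V) and the instantaneous-gradient recogniser below are illustrative assumptions only, written in Python/NumPy.

# Toy sketch (assumed, not the authors' code) of a two-level hierarchy of
# nonlinear dynamical systems, inverted online by prediction errors.
import numpy as np

rng = np.random.default_rng(0)
DT, T, N = 1e-3, 4000, 3           # 4 s at 1 ms steps; 3 slow states (assumed)

# Slow level: Lotka-Volterra winnerless competition, producing a repeating
# sequence of dominant "syllable" states.
RHO = np.array([[1.0, 1.5, 0.5],
                [0.5, 1.0, 1.5],
                [1.5, 0.5, 1.0]])  # asymmetric competition matrix (assumed)
TAU = 0.25                         # slow time constant in seconds (assumed)

def f_slow(v):
    return v * (1.0 - RHO @ v) / TAU

# Fast level: a Hopf-style oscillator whose frequency is set by the slow
# state (a stand-in for the RA-like level shaping the sound envelope).
FREQS = 2 * np.pi * np.array([3.0, 6.0, 9.0])  # 3, 6, 9 Hz in rad/s (assumed)
LAM = 5.0                                      # amplitude stabilisation

def f_fast(x, v):
    w = FREQS @ np.clip(v, 0.0, None)
    return np.array([w * x[1], -w * x[0]]) + LAM * (1.0 - x @ x) * x

# Generate a "stimulus" from the model itself.
v, x = np.array([0.9, 0.05, 0.05]), np.array([1.0, 0.0])
y, v_true = np.empty(T), np.empty((T, N))
for t in range(T):
    v = v + DT * f_slow(v)
    x = x + DT * f_fast(x, v)
    v_true[t] = v
    y[t] = x[0] + 0.05 * rng.standard_normal()  # noisy observation

# Online recognition: internal copies of both levels run their own
# dynamics; the sensory prediction error eps corrects the fast state and
# is passed up via the chain rule d(dx0/dt)/dv = FREQS * x1. This crude
# instantaneous gradient stands in for the paper's generalised filtering.
K_X, K_V = 80.0, 0.5               # error gains (assumed)
v_hat, x_hat = np.full(N, 1.0 / N), np.array([1.0, 0.0])
v_est = np.empty((T, N))
for t in range(T):
    eps = y[t] - x_hat[0]
    x_hat = x_hat + DT * (f_fast(x_hat, v_hat)
                          + K_X * eps * np.array([1.0, 0.0]))
    v_hat = v_hat + DT * (f_slow(v_hat) + K_V * eps * x_hat[1] * FREQS)
    v_hat = np.clip(v_hat, 1e-3, None)  # stay in the winnerless regime
    v_est[t] = v_hat

match = np.mean(v_true.argmax(axis=1) == v_est.argmax(axis=1))
print(f"dominant slow state matched {match:.0%} of the time")

The separation of time scales (slow states switching over hundreds of milliseconds, fast states oscillating within a "syllable") mirrors the hierarchy the paper exploits; how robustly this toy gradient scheme tracks, compared with the paper's filtering scheme, is left open.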
Date: 2013
Citations: 1
Downloads:
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003219 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 03219&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1003219
DOI: 10.1371/journal.pcbi.1003219