Imagined speech can be decoded from low- and cross-frequency intracranial EEG features
Timothée Proix,
Jaime Delgado Saa,
Andy Christen,
Stephanie Martin,
Brian N. Pasley,
Robert T. Knight,
Xing Tian,
David Poeppel,
Werner K. Doyle,
Orrin Devinsky,
Luc H. Arnal,
Pierre Mégevand and
Anne-Lise Giraud
Author affiliations:
Timothée Proix: University of Geneva
Jaime Delgado Saa: University of Geneva
Andy Christen: University of Geneva
Stephanie Martin: University of Geneva
Brian N. Pasley: University of California, Berkeley
Robert T. Knight: University of California, Berkeley
Xing Tian: New York University Shanghai
David Poeppel: New York University
Werner K. Doyle: New York University Grossman School of Medicine
Orrin Devinsky: New York University Grossman School of Medicine
Luc H. Arnal: Institut de l’Audition, Institut Pasteur, INSERM
Pierre Mégevand: University of Geneva
Anne-Lise Giraud: University of Geneva
Nature Communications, 2022, vol. 13, issue 1, 1-14
Abstract:
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
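The abstract describes extracting low-frequency power, high-frequency power, and local cross-frequency features from epoched ECoG and assessing how well they discriminate speech items. The sketch below illustrates that general type of analysis only; it is not the authors' pipeline. The sampling rate, band limits, mean-vector-length coupling index, classifier choice, and synthetic data are all illustrative assumptions.

# Minimal sketch (assumed, not the published pipeline): band-power and
# phase-amplitude-coupling features from multichannel ECoG trials, fed to a
# cross-validated classifier. Band limits, FS, and the data are placeholders.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

FS = 512  # sampling rate in Hz (assumption)

def bandpass(x, lo, hi, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter along the last (time) axis."""
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def features(trial):
    """Per-channel low-frequency power, high-gamma power, and a simple
    low-frequency-phase / high-gamma-amplitude coupling index."""
    low = bandpass(trial, 1, 8)      # low-frequency band (assumed limits)
    hga = bandpass(trial, 70, 150)   # high-frequency (broadband gamma) band
    low_pow = np.log(np.mean(low ** 2, axis=-1))
    hga_pow = np.log(np.mean(hga ** 2, axis=-1))
    phase = np.angle(hilbert(low, axis=-1))
    amp = np.abs(hilbert(hga, axis=-1))
    # Normalized mean vector length as a phase-amplitude coupling index.
    pac = np.abs(np.mean(amp * np.exp(1j * phase), axis=-1)) / np.mean(amp, axis=-1)
    return np.concatenate([low_pow, hga_pow, pac])

# Synthetic stand-in for epoched ECoG: (trials, channels, samples) plus labels.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((60, 16, FS))   # 60 one-second trials, 16 channels
y = rng.integers(0, 2, size=60)             # two imagined-speech classes (toy)

X = np.stack([features(tr) for tr in X_raw])
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())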
Date: 2022
Full text (HTML): https://www.nature.com/articles/s41467-021-27725-3
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:13:y:2022:i:1:d:10.1038_s41467-021-27725-3
DOI: 10.1038/s41467-021-27725-3