Representation of internal speech by single neurons in human supramarginal gyrus
Sarah K. Wandelt,
David A. Bjånes,
Kelsie Pejsa,
Brian Lee,
Charles Liu and
Richard A. Andersen
Additional contact information
Sarah K. Wandelt: California Institute of Technology
David A. Bjånes: California Institute of Technology
Kelsie Pejsa: California Institute of Technology
Brian Lee: California Institute of Technology
Charles Liu: California Institute of Technology
Richard A. Andersen: California Institute of Technology
Nature Human Behaviour, 2024, vol. 8, issue 6, 1136-1149
Abstract:
Speech brain–machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost the ability to speak due to disease or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, each implanted with microelectrode arrays in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech in the SMG, at both the single-neuron and population levels. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level, 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy across multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof-of-concept for a high-performance internal speech BMI.
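To make the reported accuracies concrete: with six words and two pseudowords there are eight classes, so a decoder guessing at random is correct 1/8 = 12.5% of the time. The sketch below is not the authors' analysis code; it uses synthetic data and an assumed cross-validated linear discriminant classifier (scikit-learn) purely to illustrate what an offline decoding of word labels from population firing-rate vectors against that chance level could look like. All array sizes, trial counts and the choice of classifier are illustrative assumptions.

# Minimal sketch (not the authors' code): offline decoding of 8 word labels
# (6 words + 2 pseudowords) from per-trial firing-rate vectors, using
# cross-validated linear discriminant analysis on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

n_classes = 8          # 6 words + 2 pseudowords -> chance accuracy = 1/8 = 12.5%
n_trials_per_class = 16
n_neurons = 64         # hypothetical number of recorded SMG units

# Synthetic firing rates: each word gets a slightly different mean population pattern.
means = rng.normal(0.0, 1.0, size=(n_classes, n_neurons))
X = np.vstack([
    means[c] + rng.normal(0.0, 2.0, size=(n_trials_per_class, n_neurons))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_trials_per_class)

# Cross-validated linear decoder applied to the population activity.
cv = StratifiedKFold(n_splits=8, shuffle=True, random_state=0)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)

print(f"Mean decoding accuracy: {scores.mean():.1%} (chance: {1 / n_classes:.1%})")

Accuracies well above the printed chance level indicate that word identity is linearly separable in the (here synthetic) population activity; the study reports analogous above-chance decoding from real SMG recordings.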
Date: 2024
Downloads: https://www.nature.com/articles/s41562-024-01867-y (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:nat:nathum:v:8:y:2024:i:6:d:10.1038_s41562-024-01867-y
Ordering information: This journal article can be ordered from
https://www.nature.com/nathumbehav/
DOI: 10.1038/s41562-024-01867-y
Nature Human Behaviour is currently edited by Stavroula Kousta
Bibliographic data for this series is maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.