Dynamic encoding of face information in the human fusiform gyrus
Avniel Singh Ghuman,
Nicolas M. Brunet,
Yuanning Li,
Roma O. Konecky,
John A. Pyles,
Shawn A. Walls,
Vincent Destefino,
Wei Wang and
R. Mark Richardson
Additional contact information
Avniel Singh Ghuman: University of Pittsburgh School of Medicine, 3550 Terrace St
Nicolas M. Brunet: University of Pittsburgh School of Medicine, 3550 Terrace St
Yuanning Li: University of Pittsburgh School of Medicine, 3550 Terrace St
Roma O. Konecky: University of Pittsburgh School of Medicine, 3550 Terrace St
John A. Pyles: Center for the Neural Basis of Cognition, 4400 Fifth Ave.
Shawn A. Walls: University of Pittsburgh School of Medicine, 3550 Terrace St
Vincent Destefino: University of Pittsburgh School of Medicine, 3550 Terrace St
Wei Wang: University of Pittsburgh School of Medicine, 3550 Terrace St
R. Mark Richardson: University of Pittsburgh School of Medicine, 3550 Terrace St
Nature Communications, 2014, vol. 5, issue 1, 1-10
Abstract:
Humans’ ability to rapidly and accurately detect, identify and classify faces under variable conditions derives from a network of brain regions highly tuned to face information. The fusiform face area (FFA) is thought to be a computational hub for face processing; however, the temporal dynamics of face information processing in the FFA remain unclear. Here we use multivariate pattern classification to decode the temporal dynamics of expression-invariant face information processing using electrodes placed directly on the FFA in humans. Early FFA activity (50–75 ms) contained information regarding whether participants were viewing a face. Activity between 200 and 500 ms contained expression-invariant information about which of 70 faces participants were viewing, along with the individual differences in facial features and their configurations. Long-lasting (500+ ms) broadband gamma frequency activity predicted task performance. These results elucidate the dynamic computational role the FFA plays in multiple face processing stages and indicate what information is used in performing these visual analyses.
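The core method named in the abstract is time-resolved multivariate pattern classification ("decoding") of intracranial recordings. The sketch below is only a rough illustration of that general approach, not the authors' pipeline: it trains a cross-validated linear classifier at each time point of a simulated trials x channels x timepoints array, yielding an accuracy time course that shows when the recorded activity carries class information. All array shapes, labels and parameters are hypothetical.

import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials, 16 recording channels, 100 time points.
n_trials, n_channels, n_times = 200, 16, 100
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)  # e.g. face vs. non-face labels

# Inject a weak class-dependent signal in a late time window so the
# decoder has something to find (purely for demonstration).
X[y == 1, :, 40:60] += 0.3

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))

# Decode at each time point with 5-fold cross-validation; the resulting
# accuracy time course indicates when class information is present.
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])

print("peak decoding accuracy %.2f at time index %d"
      % (accuracy.max(), accuracy.argmax()))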
Date: 2014
Citations: View citations in EconPapers (1)
Downloads: https://www.nature.com/articles/ncomms6672 (abstract, text/html)
Persistent link: https://EconPapers.repec.org/RePEc:nat:natcom:v:5:y:2014:i:1:d:10.1038_ncomms6672
Ordering information: This journal article can be ordered from https://www.nature.com/ncomms/
DOI: 10.1038/ncomms6672
Nature Communications is currently edited by Nathalie Le Bot, Enda Bergin and Fiona Gillespie
More articles in Nature Communications from Nature
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.