“I Look in Your Eyes, Honey”: Internal Face Features Induce Spatial Frequency Preference for Human Face Processing

Matthias S Keil

PLOS Computational Biology, 2009, vol. 5, issue 3, 1-13

Abstract: Numerous psychophysical experiments have found that humans preferentially rely on a narrow band of spatial frequencies for recognizing face identity. A recent theoretical study by the author suggests that this frequency preference reflects an adaptation of the brain's face-processing machinery to this specific stimulus class (i.e., faces). The purpose of the present study is to examine this property in greater detail and, specifically, to elucidate the role of the internal face features (i.e., eyes, mouth, and nose). To this end, I parameterized Gabor filters to match the spatial receptive fields of contrast-sensitive neurons in the primary visual cortex (simple and complex cells). Filter responses to a large number of face images were computed, aligned with respect to the internal face features, and response-equalized ("whitened"). The results demonstrate that the frequency preference is driven by the internal face features. Thus, the psychophysically observed human frequency bias for face processing appears to be caused specifically by the intrinsic spatial frequency content of internal face features.

Author Summary: Imagine a photograph showing your friend's face. Although you might think that every single detail in his face matters for recognizing him, numerous experiments have shown that the brain instead prefers a rather coarse resolution. This means that a small rectangular photograph of about 30 to 40 pixels in width (showing only the face from left ear to right ear) is optimal. But why? To answer this question, I analyzed a large number of male and female face images, with the analysis designed to mimic the way the brain presumably processes them. The analysis was carried out separately for each of the internal face features (left eye, right eye, mouth, and nose), which makes it possible to identify the feature(s) responsible for setting the resolution level; it turns out that the eyes and the mouth set it. Thus, looking at the eyes and the mouth at the mentioned coarse resolution gives the most reliable signals for face recognition, and the brain has built-in knowledge about that. Although a preferred resolution level for face recognition has been observed in numerous experiments, this study offers, for the first time, a plausible explanation.
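The pipeline outlined in the abstract (V1-like Gabor filtering of feature patches, orientation pooling, and response equalization across an image ensemble) can be sketched roughly as follows. This is a hypothetical Python/NumPy illustration, not the author's code: the filter parameters, patch sizes, wavelengths, and the way the equalization baseline is formed are all assumptions made for the example.

import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    # Even-symmetric Gabor patch, a rough stand-in for a V1 simple-cell receptive field.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)            # coordinate along orientation theta
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))  # isotropic Gaussian envelope
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier

def band_energies(patch, wavelengths, n_orient=8):
    # Orientation-pooled response energy per spatial-frequency band (complex-cell-like).
    out = []
    for lam in wavelengths:
        e = 0.0
        for theta in np.linspace(0.0, np.pi, n_orient, endpoint=False):
            k = gabor_kernel(size=31, wavelength=lam, theta=theta, sigma=0.5 * lam)
            # circular cross-correlation of patch and kernel via the FFT
            resp = np.real(np.fft.ifft2(np.fft.fft2(patch) *
                                        np.conj(np.fft.fft2(k, s=patch.shape))))
            e += np.mean(resp ** 2)
        out.append(e / n_orient)
    return np.asarray(out)

# Illustrative stand-ins: random "whole-face" images form the equalization baseline,
# and one aligned feature patch (e.g., an eye region) is analysed against it.
rng = np.random.default_rng(0)
faces = [rng.random((64, 64)) for _ in range(50)]
eye_patch = rng.random((64, 64))
wavelengths = [4, 8, 16, 32]                               # pixels per cycle (assumed values)

baseline = np.mean([band_energies(f, wavelengths) for f in faces], axis=0)
equalized = band_energies(eye_patch, wavelengths) / baseline   # response-equalized ("whitened")
best = wavelengths[int(np.argmax(equalized))]
print("relatively strongest band for the feature patch:", best, "px/cycle")

In the paper itself, the filters are matched to measured receptive-field properties and the responses are aligned to the internal features across a large face database; the sketch above only conveys the overall flow of the analysis.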

Date: 2009
Citations: 4 (as indexed in EconPapers)

Downloads: (external link)
https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1000329 (text/html)
https://journals.plos.org/ploscompbiol/article/fil ... 00329&type=printable (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:plo:pcbi00:1000329

DOI: 10.1371/journal.pcbi.1000329



 
Handle: RePEc:plo:pcbi00:1000329