
An Opportunity to Investigate the Role of Specific Nonverbal Cues and First Impression in Interviews using Deepfake Based Controlled Video Generation

Rahil Satyanarayan Vijay, Kumar Shubham, Laetitia Aurelie Renier, Emmanuelle P. Kleinlogel, Marianne Schmid Mast and Dinesh Babu Jayagopi
Additional contact information
Rahil Satyanarayan Vijay: IIIT-B - International Institute of Information Technology Bangalore
Kumar Shubham: IIIT-B - International Institute of Information Technology Bangalore
Laetitia Aurelie Renier: UNIL - University of Lausanne
Emmanuelle P. Kleinlogel: CEMOI - Centre d'Économie et de Management de l'Océan Indien, Université de La Réunion; UNIL - University of Lausanne
Marianne Schmid Mast: UNIL - University of Lausanne
Dinesh Babu Jayagopi: IIIT-B - International Institute of Information Technology Bangalore

Post-Print from HAL

Abstract: The study of nonverbal cues in dyadic interactions, such as job interviews, mostly relies on videos and does not allow researchers to disentangle the role of specific cues. It is thus not clear whether, for instance, an interviewee who smiles while listening to an interviewer is perceived more favorably than one who only gazes at the interviewer. While a similar analysis in naturalistic situations requires careful curation of interview recordings, it still does not permit disentangling the effect of specific nonverbal cues on first impressions. Deepfake technology offers a way to address this challenge by creating highly standardized videos of interviewees manifesting a determined behavior (i.e., a combination of specific nonverbal cues). Accordingly, we created a set of deepfake videos enabling us to manipulate the occurrence of three classes of nonverbal attributes (eye contact, nodding, and smiling). The deepfake videos showed interviewees manifesting one of four behaviors while listening to the interviewer: eye contact with smiling and nodding, eye contact with nodding only, eye contact only, and looking distracted. We then tested whether these combinations of nonverbal cues influenced how the interviewees were perceived with respect to personality, confidence, and hireability. Our work reveals the potential of deepfake technology for generating behaviorally controlled videos useful for psychology experiments.

Date: 2021-10-18
Published in ICMI '21: INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION, Oct 2021, Montréal, Canada. pp.148-152, ⟨10.1145/3461615.3485397⟩

Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-03665990

DOI: 10.1145/3461615.3485397

Handle: RePEc:hal:journl:hal-03665990