Facial Expressions Based on the Types of Conversation Contents
Kazunori Minetaki and
I-Hsien Ting
Additional contact information
Kazunori Minetaki: Kindai University
I-Hsien Ting: National University of Kaohsiung
The Review of Socionetwork Strategies, 2024, vol. 18, issue 2, 449-489
Abstract:
This study analyzed changes in facial expressions over time to examine how rapport is formed when discussing various topics. Changes in facial expression were captured by action units (AUs) and 55 landmark locations in the 2D eye region. The topics included (a) introduction and greeting, which are conventional conversations; (b) favorite food; (c) journeys and watching movies; and (d) a future in which humans and AI interact. The data used in this study are 15–20-min recorded videos (.MP4 format) of dialogues between 29 human participants and a virtual agent named Hazumi1902, operated by the Wizard-of-Oz method. Multimodal information generated by the participants during the dialogue was recorded using video and a Microsoft Kinect sensor. The AUs and the locations of the 2D eye-region landmarks were detected by OpenFace 2.0. The intensity of the AUs and the locations of the 2D eye-region landmarks were analyzed using the Kruskal–Wallis test, combining conversation type with rapport criterion measurements. This study found that the intensity of the AUs alone was insufficient and that the 2D eye-region landmarks were necessary for analyzing how facial expressions were affected by perceptions of rapport and conversation type. One of the criterion measurements, "cold," was not observed in the intensity changes of the AUs. It was concluded that the AUs were not universal and that the locations of the 2D eye-region landmarks played a crucial role in complementing the analysis of their intensity. Changes in both the intensity of the AUs and the locations of the 2D eye-region landmarks were observed in harmonious conversations. It was discovered that factors hindering rapport appeared in the eyes, whereas those promoting rapport appeared in the AUs. These insights could be invaluable in various fields, from human–computer interaction to non-verbal communication.
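The core statistical step described above, a Kruskal–Wallis test applied to measurements grouped by conversation type, can be sketched as follows. This is a minimal pure-Python illustration with made-up sample values, not the study's data; in practice each group would hold per-frame OpenFace 2.0 outputs (e.g., its AU intensity columns) for one topic, and the function names and numbers below are the author's illustrative assumptions.

```python
# Hedged sketch: a pure-Python Kruskal-Wallis H statistic (no tie
# correction), the kind of test the study applies to AU intensities and
# eye-landmark locations across conversation types. All sample values
# below are synthetic placeholders, not data from the study.

def kruskal_wallis_h(*groups):
    """H statistic for k independent samples, using mid-ranks for ties."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)

    # Assign each distinct value its average (mid) rank.
    ranks = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j

    # H = 12/(N(N+1)) * sum(n_i * mean_rank_i^2) - 3(N+1)
    return 12.0 / (n * (n + 1)) * sum(
        len(g) * (sum(ranks[v] for v in g) / len(g)) ** 2 for g in groups
    ) - 3 * (n + 1)

# Illustrative per-frame AU-intensity samples for three hypothetical topics.
greeting = [0.4, 0.6, 0.5, 0.7, 0.3]   # (a) introduction/greeting
food = [1.1, 0.9, 1.3, 1.0, 1.2]       # (b) favorite food
ai_talk = [0.8, 0.7, 0.9, 0.6, 1.0]    # (d) humans and AI

print(round(kruskal_wallis_h(greeting, food, ai_talk), 2))
```

A large H (compared against a chi-squared distribution with k−1 degrees of freedom) indicates that at least one conversation type's distribution differs, which is the kind of evidence the study uses to relate conversation content to facial-expression changes.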
Keywords: Facial expression; AU (Action Unit); Location of 2D eye region landmarks; OpenFace 2.0
Date: 2024
References: View complete reference list from CitEc
Downloads: (external link)
http://link.springer.com/10.1007/s12626-024-00177-z Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:trosos:v:18:y:2024:i:2:d:10.1007_s12626-024-00177-z
Ordering information: This journal article can be ordered from
https://www.springer ... ystems/journal/12626
DOI: 10.1007/s12626-024-00177-z
The Review of Socionetwork Strategies is currently edited by Katsutoshi Yada, Yasuharu Ukai and Marshall Van Alstyne
More articles in The Review of Socionetwork Strategies from Springer
Bibliographic data for series maintained by Sonal Shukla and Springer Nature Abstracting and Indexing.