Enhancing Human–Agent Interaction via Artificial Agents That Speculate About the Future
Casey C. Bennett,
Young-Ho Bae,
Jun-Hyung Yoon,
Say Young Kim and
Benjamin Weiss
Additional contact information
Casey C. Bennett: School of Computing, DePaul University, Chicago, IL 60604, USA
Young-Ho Bae: Department of Data Science, Hanyang University, Seoul 04763, Republic of Korea
Jun-Hyung Yoon: Department of Data Science, Hanyang University, Seoul 04763, Republic of Korea
Say Young Kim: Department of English Language & Literature, Hanyang University, Seoul 04763, Republic of Korea
Benjamin Weiss: Quality and Usability Lab, Technische Universität Berlin, 10623 Berlin, Germany
Future Internet, 2025, vol. 17, issue 2, 1-21
Abstract:
Human communication in daily life entails not only talking about what we are currently doing or will do, but also speculating about future possibilities that may (or may not) occur, i.e., “anticipatory speech”. Such conversations are central to social cooperation and social cohesion in humans. This suggests that such capabilities may also be critical for developing improved speech systems for artificial agents, e.g., human–agent interaction (HAI) and human–robot interaction (HRI). However, to do so successfully, it is imperative that we understand how anticipatory speech may affect the behavior of human users and, subsequently, the behavior of the agent/robot. Moreover, it is possible that such effects may vary across cultures and languages. To that end, we conducted an experiment where a human and autonomous 3D virtual avatar interacted in a cooperative gameplay environment. The experiment included 40 participants, comparing different languages (20 English, 20 Korean), where the artificial agent had anticipatory speech either enabled or disabled. The results showed that anticipatory speech significantly altered the speech patterns and turn-taking behavior of both the human and the agent, but those effects varied depending on the language spoken. We discuss how the use of such novel communication forms holds potential for enhancing HAI/HRI, as well as the development of mixed reality and virtual reality interactive systems for human users.
Keywords: human–robot interaction; social cognition; virtual avatar; speech system; language differences; virtual reality
JEL-codes: O3
Date: 2025
Downloads:
https://www.mdpi.com/1999-5903/17/2/52/pdf (application/pdf)
https://www.mdpi.com/1999-5903/17/2/52/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:17:y:2025:i:2:p:52-:d:1572554
Future Internet is currently edited by Ms. Grace You
Bibliographic data for series maintained by MDPI Indexing Manager.