EconPapers

Less Artificial, More Intelligent: Understanding Affinity, Trustworthiness, and Preference for Digital Humans

Mike Seymour, Lingyao (Ivy) Yuan, Kai Riemer and Alan R. Dennis
Additional contact information
Mike Seymour: Business School, The University of Sydney, Darlington, New South Wales 2006, Australia
Lingyao (Ivy) Yuan: Debbie & Jerry Ivy College of Business, Iowa State University, Ames, Iowa 50021
Kai Riemer: Business School, The University of Sydney, Darlington, New South Wales 2006, Australia
Alan R. Dennis: Kelley School of Business, Indiana University, Bloomington, Indiana 47405

Information Systems Research, 2025, vol. 36, issue 2, 1096-1128

Abstract: Companies are beginning to deploy highly realistic-looking digital human agents (DHAs), controlled by increasingly sophisticated artificial intelligence (AI), for online customer service tasks often performed by chatbots. We conducted four major experiments to examine users’ perceptions (trustworthiness, affinity, and willingness to work with the agent) and behaviors while using DHAs, via a mixed-method approach with data from quantitative surveys, qualitative interviews, direct observations, and neurophysiological measurements. Four different DHAs were used in our experiments: commercial products from two different vendors (which proved to be immature) and two future-focused ones (in which participants were successfully led to believe that a human-controlled digital human was controlled by AI). The first study compared user perceptions of a DHA, a chatbot, and a human agent from a written description and found few differences between the DHA and the chatbot. The second study compared perceptions after using a commercially available DHA and a chatbot. Most participants reported problems using a current production implementation of a DHA, either finding it uncanny or robotic or having trouble conversing with it. The third and fourth studies used a plausible future-focused “Wizard of Oz” design, informing users that the DHA was controlled by AI when it was actually controlled by a human. Participants still preferred a human agent using video conferencing to the DHA, but after controlling for visual fidelity, we did not find evidence of differences between the human and the DHA. Current DHAs that have communication problems trigger greater affinity than chatbots but are otherwise similar to them. When a DHA’s representation and communication ability match human ability, we failed to find differences between DHAs and human agents for simple customer service tasks. Our results also add to research on algorithm aversion and suggest that the anthropomorphic computer interfaces of DHAs might alleviate algorithm aversion.

Keywords: avatars; artificial intelligence; trustworthiness; mixed-method research; digital humans
Date: 2025

Downloads: http://dx.doi.org/10.1287/isre.2022.0203



Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:36:y:2025:i:2:p:1096-1128

More articles in Information Systems Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.

Page updated 2025-07-05
Handle: RePEc:inm:orisre:v:36:y:2025:i:2:p:1096-1128