The golden zone of AI’s emotional expression in frontline chatbot service failures
Qian Chen,
Yeming Gong,
Yaobin Lu and
Xin Robert Luo
Additional contact information
Qian Chen: HZAU - Huazhong Agricultural University [Wuhan]
Yeming Gong: EM - EMLyon Business School
Yaobin Lu: HUST - Huazhong University of Science and Technology [Wuhan]
Xin Robert Luo: The University of New Mexico [Albuquerque] - NMC - New Mexico Consortium
Post-Print from HAL
Abstract:
Purpose – The purpose of this study is twofold: first, to identify the categories of artificial intelligence (AI) chatbot service failures in frontline service, and second, to examine how the intensity of the AI emotion exhibited affects the effectiveness of the chatbot's autonomous service recovery process.
Design/methodology/approach – We adopt a mixed-methods research approach, beginning with a qualitative study to identify specific categories of AI chatbot service failures. In the second stage, we conduct experiments to investigate the impact of AI chatbot service failures on consumers' psychological perceptions, with a focus on the moderating influence of the chatbot's emotional expression. This sequential approach allowed us to combine qualitative and quantitative evidence for a comprehensive research perspective.
Findings – Analysis of the interview data suggests that AI chatbot service failures fall into four main categories: failure to understand, failure to personalize, lack of competence, and lack of assurance. The results also reveal that AI chatbot service failures positively affect dehumanization and increase customers' perceptions of service failure severity. However, AI chatbots can autonomously remedy service failures through emotional expression of moderate intensity. We identify a golden zone of AI emotional expression in chatbot service failures, indicating that emotional expression that is either extremely weak or extremely strong can be counterproductive.
Originality/value – This study contributes to the burgeoning AI literature by identifying four types of AI service failure, extending dehumanization theory to the context of smart services, and demonstrating the nonlinear effects of AI emotion. The findings also offer valuable insights for organizations that rely on AI chatbots when designing chatbots that effectively address and remediate service failures.
Keywords: AI's emotional expression; AI chatbot; Service failure; Dehumanization (search for similar items in EconPapers)
Date: 2024-08-15
Published in Internet Research, in press, 39 p. ⟨10.1108/INTR-07-2023-0551⟩
There are no downloads for this item; see the EconPapers FAQ for hints about obtaining it.
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-04792333
DOI: 10.1108/INTR-07-2023-0551