Expl(AI)ned: The Impact of Explainable Artificial Intelligence on Users’ Information Processing

Kevin Bauer, Moritz von Zahn and Oliver Hinz
Additional contact information
Kevin Bauer: Information Systems Department, University of Mannheim, 68161 Mannheim, Germany
Moritz von Zahn: Information Systems Department, Goethe University, 60323 Frankfurt am Main, Germany

Information Systems Research, 2023, vol. 34, issue 4, 1582-1602

Abstract: Because of a growing number of initiatives and regulations, predictions of modern artificial intelligence (AI) systems increasingly come with explanations about why they behave the way they do. In this paper, we explore the impact of feature-based explanations on users’ information processing. We designed two complementary empirical studies in which participants made incentivized decisions either on their own, with the aid of opaque predictions, or with explained predictions. In Study 1, laypeople performed a deliberately abstract investment game task. In Study 2, experts from the real estate industry estimated listing prices for real German apartments. Our results indicate that the provision of feature-based explanations paves the way for AI systems to reshape users’ sensemaking of information and understanding of the world around them. Specifically, explanations change users’ situational weighting of available information and evoke mental model adjustments. Crucially, mental model adjustments are subject to confirmation bias, so that misconceptions can persist and even accumulate, possibly leading to suboptimal or biased decisions. Additionally, mental model adjustments create spillover effects that alter user behavior in related yet disparate domains. Overall, this paper provides important insights into potential downstream consequences of the broad employment of modern explainable AI methods. In particular, side effects of mental model adjustments present a potential risk of manipulating user behavior, promoting discriminatory inclinations, and increasing noise in decision making. Our findings may inform current efforts by companies building AI systems and by regulators to mitigate problems associated with the black-box nature of many modern AI systems.

Keywords: explainable artificial intelligence; user behavior; information processing; mental models
Date: 2023

Downloads: http://dx.doi.org/10.1287/isre.2023.1199 (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:34:y:2023:i:4:p:1582-1602


More articles in Information Systems Research from INFORMS.

 
Handle: RePEc:inm:orisre:v:34:y:2023:i:4:p:1582-1602