Augmenting Medical Diagnosis Decisions? An Investigation into Physicians’ Decision-Making Process with Artificial Intelligence
Ekaterina Jussupow,
Kai Spohrer,
Armin Heinzl and
Joshua Gawlitza
Additional contact information
Ekaterina Jussupow: Business School, Area Information Systems, Chair of General Management and Information Systems, University of Mannheim, 68161 Mannheim, Germany
Kai Spohrer: Business School, Area Information Systems, Chair of General Management and Information Systems, University of Mannheim, 68161 Mannheim, Germany
Armin Heinzl: Business School, Area Information Systems, Chair of General Management and Information Systems, University of Mannheim, 68161 Mannheim, Germany
Joshua Gawlitza: Institute of Diagnostic and Interventional Radiology, Thoracic Imaging, University Hospital Rechts der Isar, Technical University Munich, 81675 Munich, Germany
Information Systems Research, 2021, vol. 32, issue 3, 713-735
Abstract:
Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Much research currently aims to improve AI technologies and debates their societal implications. Surprisingly little effort is spent on understanding the cognitive challenges of decision augmentation with AI-based systems, although these systems make it more difficult for decision makers to evaluate the correctness of system advice and to decide whether to reject or accept it. As little is known about the cognitive mechanisms that underlie such evaluations, we take an inductive approach to understand how AI advice influences physicians’ decision-making process. We conducted experiments with a total of 68 novice and 12 experienced physicians who diagnosed patient cases with an AI-based system that provided both correct and incorrect advice. Based on qualitative data from think-aloud protocols, interviews, and questionnaires, we elicit five decision-making patterns and develop a process model of medical diagnosis decision augmentation with AI advice. We show that physicians use second-order cognitive processes, namely metacognitions, to monitor and control their reasoning while assessing AI advice. These metacognitions determine whether physicians are able to reap the full benefits of AI or not. Specifically, wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians make decisions based on beliefs rather than actual data or engage in unsuitably superficial information search. Our findings provide a first perspective on the metacognitive mechanisms that decision makers use to evaluate system advice. Overall, our study sheds light on an overlooked facet of decision augmentation with AI, namely, the crucial role of human actors in compensating for technological errors.
Keywords: decision making; artificial intelligence; decision support; metacognition; healthcare; dual process; advice taking
Date: 2021
Citations: 17 (in EconPapers)
Downloads: http://dx.doi.org/10.1287/isre.2020.0980 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orisre:v:32:y:2021:i:3:p:713-735
More articles in Information Systems Research from INFORMS.
Bibliographic data for series maintained by Chris Asher.