
Knowing (not) to know: Explainable artificial intelligence and human metacognition

Moritz von Zahn, Lena Liebich, Ekaterina Jussupow, Oliver Hinz and Kevin Bauer

No 464, SAFE Working Paper Series from Leibniz Institute for Financial Research SAFE

Abstract: Explainable AI (XAI) methods that render the prediction logic of black-box AI interpretable to humans are becoming increasingly widespread in practice, driven in part by regulatory requirements such as the EU AI Act. Previous research on human-XAI interaction has shown that explainability may help mitigate black-box problems but can also unintentionally alter individuals' cognitive processes, e.g., by distorting their reasoning or inducing information overload. While empirical evidence on the impact of XAI on how individuals "think" is growing, it has been largely overlooked whether XAI can also affect individuals' "thinking about thinking", i.e., metacognition, which theory conceptualizes as monitoring and controlling the thinking processes studied previously. As a first step toward filling this gap, we investigate whether XAI affects confidence calibration at the meta-level of cognition and, thereby, decisions to transfer decision-making responsibility to AI. We conduct two incentivized experiments in which human experts repeatedly perform prediction tasks, with the option to delegate each task to an AI. We exogenously vary whether participants initially receive explanations that reveal the AI's underlying prediction logic. We find that XAI improves individuals' metaknowledge (the alignment between confidence and actual performance) and partially enhances confidence sensitivity (the variation of confidence with task performance). These metacognitive shifts causally increase both the frequency and effectiveness of human-to-AI delegation decisions. Interestingly, these effects only occur when explanations reveal to individuals that the AI's logic diverges from their own, leading to a systematic reduction in confidence. Our findings suggest that XAI can correct overconfidence at the potential cost of lowering confidence even when individuals perform well. Both effects influence decisions to cede responsibility to AI, highlighting metacognition as a central mechanism in human-XAI collaboration.
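
For readers less familiar with these constructs, the sketch below illustrates one simple way metaknowledge (confidence-performance alignment) and confidence sensitivity (how confidence varies with correctness) can be operationalized. The function names, formulas, and example data are illustrative assumptions for exposition, not the measures or data used in the paper.

    # Hypothetical sketch: simple operationalizations of the two metacognitive
    # measures named in the abstract. Formulas are assumptions, not the
    # paper's actual specification.
    import numpy as np

    def metaknowledge(confidence, correct):
        """Alignment of confidence and performance: mean confidence minus
        accuracy. Values near zero indicate good calibration; positive
        values indicate overconfidence."""
        return float(np.mean(confidence) - np.mean(correct))

    def confidence_sensitivity(confidence, correct):
        """Variation of confidence with task performance: correlation
        between per-task confidence and correctness."""
        return float(np.corrcoef(confidence, correct)[0, 1])

    # Example: an expert who is 70% confident on average but right only
    # 60% of the time shows overconfidence of +0.10, while confidence
    # still tracks correctness task by task (r ~ 0.87).
    conf = np.array([0.9, 0.6, 0.8, 0.5, 0.7])
    hit = np.array([1, 0, 1, 0, 1])
    print(metaknowledge(conf, hit))           # 0.1 (overconfident)
    print(confidence_sensitivity(conf, hit))  # ~0.87 (high sensitivity)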

Keywords: Explainable Artificial Intelligence; Metacognition; Metaknowledge; Delegation; Machine Learning; Human-AI Collaboration
Date: 2025
New Economics Papers: this item is included in nep-ain and nep-cbe

Downloads: (external link)
https://www.econstor.eu/bitstream/10419/334511/1/1947737333.pdf (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:zbw:safewp:334511

DOI: 10.2139/ssrn.5383106

More papers in SAFE Working Paper Series from Leibniz Institute for Financial Research SAFE. Contact information at EDIRC.
Bibliographic data for series maintained by ZBW - Leibniz Information Centre for Economics.

 
Handle: RePEc:zbw:safewp:334511