The disagreement dilemma in explainable AI: can bias reduction bridge the gap
Nitanshi Bhardwaj and Gaurav Parashar
Additional contact information
Nitanshi Bhardwaj: SRM Institute of Science and Technology
Gaurav Parashar: KIET Group of Institutions
International Journal of System Assurance Engineering and Management, 2025, vol. 16, issue 6, No 2, 2005-2024
Abstract: Explainable AI (XAI) has become an active field of research as AI has spread across many domains. The opacity and inherent black-box nature of advanced machine learning models make their behaviour hard to scrutinize, which undermines societal acceptance. Growing reliance on AI across diverse sectors demands that the decisions of the many predictive models in use be well understood. XAI strives to close this gap by explaining the decision-making process, promoting trust, supporting regulatory compliance, and cultivating societal approval. Various post-hoc techniques, including well-known methods such as LIME, SHAP, Integrated Gradients, Partial Dependence Plots, and Accumulated Local Effects, have been proposed to decipher the inner workings of complex AI models. Among post-hoc explanation methods, however, a conflict known as the disagreement problem arises: different explanation techniques can provide differing interpretations of the same model. In this study, we investigate whether reducing bias in the dataset leads to XAI explanations that no longer disagree. The study analyzes this problem in depth across several widely recognized explanation methods, comparing the effect of bias on explanations before and after its removal, and situates the effect of bias on the explainability of model predictions within prior work on bias removal and fairness.
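To illustrate the disagreement problem the abstract refers to, the sketch below compares LIME and SHAP attributions for the same model using a top-k feature-agreement score, one common way such disagreement is quantified. The dataset, classifier, value of k, and the helper topk_agreement are illustrative assumptions, not the setup used in the paper.

```python
# Minimal sketch (not the paper's experiment): measure how often LIME and SHAP
# name the same top-k features for the same trained model.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()  # illustrative dataset, assumed for this sketch
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

k = 5  # compare the top-k most important features of each explanation

# SHAP: per-instance attributions from a tree explainer
shap_values = shap.TreeExplainer(model).shap_values(X_test)
if isinstance(shap_values, list):      # older SHAP: list of per-class arrays
    shap_values = shap_values[1]
elif shap_values.ndim == 3:            # newer SHAP: (samples, features, classes)
    shap_values = shap_values[..., 1]

# LIME: local surrogate explanations around each instance
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names), mode="classification")

def topk_agreement(i):
    """Fraction of overlap between the top-k features named by SHAP and LIME."""
    shap_top = set(np.argsort(np.abs(shap_values[i]))[::-1][:k])
    lime_exp = lime_explainer.explain_instance(
        X_test[i], model.predict_proba, num_features=k)
    lime_top = {idx for idx, _ in lime_exp.as_map()[1]}
    return len(shap_top & lime_top) / k

scores = [topk_agreement(i) for i in range(20)]  # a small sample of test points
print(f"mean top-{k} agreement over {len(scores)} instances: {np.mean(scores):.2f}")
```

Under the study's hypothesis, an agreement score computed along these lines would be tracked before and after reducing bias in the training data to see whether the explanations converge.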
Keywords: Explainable AI; Bias; LIME; SHAP
Date: 2025
Downloads: http://link.springer.com/10.1007/s13198-025-02712-9 Abstract (text/html, external link)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:ijsaem:v:16:y:2025:i:6:d:10.1007_s13198-025-02712-9
Ordering information: This journal article can be ordered from http://www.springer.com/engineering/journal/13198
DOI: 10.1007/s13198-025-02712-9
International Journal of System Assurance Engineering and Management is currently edited by P.K. Kapur, A.K. Verma and U. Kumar
More articles in International Journal of System Assurance Engineering and Management from Springer, The Society for Reliability, Engineering Quality and Operations Management (SREQOM), India, and the Division of Operation and Maintenance, Lulea University of Technology, Sweden