Software for dataset-wide XAI: From local explanations to global insights with Zennit, CoRelAy, and ViRelAy
Christopher J Anders, David Neumann, Wojciech Samek, Klaus-Robert Müller and Sebastian Lapuschkin
PLOS ONE, 2026, vol. 21, issue 1, 1-38
Abstract:
The predictive capabilities of Deep Neural Networks (DNNs) are well-established, yet the underlying mechanisms driving these predictions often remain opaque. The advent of Explainable Artificial Intelligence (XAI) has introduced novel methodologies to explore the reasoning behind the predictions of complex models. Among post-hoc attribution methods, Layer-wise Relevance Propagation (LRP) has demonstrated notable adaptability and performance for explaining individual predictions – provided the method is used to its full potential. For deeper dataset-wide and quantitative analyses, however, the manual inspection of individual attribution maps remains unnecessarily labor-intensive and time-consuming. While several approaches for dataset-wide XAI analyses have been proposed, unified and accessible implementations of such tools are still lacking. Furthermore, there is a notable absence of dedicated visualization and analysis software to support stakeholders in interpreting both local and global XAI results effectively. This gap underscores the need for comprehensive software tools that facilitate both granular and holistic understanding of model behavior, as well as ease the adoption of XAI in applications and the sciences. To address these challenges, we present three software packages designed to facilitate the exploration of model reasoning using attribution approaches and beyond: (1) Zennit – a highly customizable and intuitive attribution framework implementing LRP and related methods in PyTorch, (2) CoRelAy – a framework to easily and quickly construct quantitative analysis pipelines for dataset-wide analyses of explanations, and (3) ViRelAy – an interactive web-application for exploring data, attributions, and analysis results. By providing a standardized implementation for XAI, we aim to promote reproducibility in our field and empower scientists and practitioners to uncover the intricacies of complex model behavior.
Date: 2026
Downloads:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0336683 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 36683&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0336683
DOI: 10.1371/journal.pone.0336683