Using Explanations to Estimate the Quality of Computer Vision Models
Filipe Oliveira (INESC TEC),
Davide Carneiro (INESC TEC) and
João Pereira (CIICESI, ESTG, Politécnico do Porto)
A chapter in Human-Centred Technology Management for a Sustainable Future, 2025, pp 293-301 from Springer
Abstract:
Explainable AI (xAI) emerged as one of the ways of addressing the interpretability issues of so-called black-box models. Most of the xAI artifacts proposed so far were designed, as expected, for human users. In this work, we posit that such artifacts can also be used by computer systems. Specifically, we propose a set of metrics derived from LIME explanations that can be used to ascertain the quality of each output of an underlying image classification model. We validate these metrics against quantitative human feedback and identify four potentially interesting metrics for this purpose. This research is particularly useful in concept drift scenarios, in which models are deployed into production with no new labelled data to continuously evaluate them, making it impossible to know the model's current performance.
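The chapter does not list its metrics in this abstract, but the general idea of summarising a LIME image explanation into scalar quality signals can be sketched as follows. The `(superpixel, weight)` pairs below stand in for the output of LIME's local explanation for one classified image, and the metric names (`total_weight`, `concentration`, `positive_ratio`) are illustrative assumptions, not the four metrics proposed by the authors:

```python
def explanation_metrics(weights):
    """Derive simple scalar signals from a list of LIME superpixel weights.

    `weights` is one hypothetical local explanation: the importance weight
    LIME assigns to each superpixel of a single classified image.
    """
    abs_w = sorted((abs(w) for w in weights), reverse=True)
    total = sum(abs_w)
    return {
        # Overall explanation strength across all superpixels.
        "total_weight": total,
        # Share of importance carried by the single most important
        # superpixel: a concentrated explanation may indicate the model
        # relies on a clear, localised pattern.
        "concentration": abs_w[0] / total if total else 0.0,
        # Fraction of superpixels that support (rather than oppose)
        # the predicted class.
        "positive_ratio": sum(w > 0 for w in weights) / len(weights),
    }

# Example: weights for one image, as a LIME explanation might produce them.
m = explanation_metrics([0.42, -0.05, 0.13, 0.02, -0.01])
```

In a deployment with concept drift, such per-prediction scores could be tracked over time without new labels: a sustained shift in their distribution would hint that the model's outputs are degrading.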
Keywords: Machine learning; Computer vision; Explainability
Date: 2025
Persistent link: https://EconPapers.repec.org/RePEc:spr:prbchp:978-3-031-72494-7_29
Ordering information: This item can be ordered from
http://www.springer.com/9783031724947
DOI: 10.1007/978-3-031-72494-7_29
More chapters in Springer Proceedings in Business and Economics from Springer