Explaining black box decisions by Shapley cohort refinement
Masayoshi Mase, Art B. Owen and Benjamin Seiler
Papers from arXiv.org
Abstract:
We introduce a variable importance measure to quantify the impact of individual input variables on a black box function. Our measure is based on the Shapley value from cooperative game theory. Many measures of variable importance operate by changing some predictor values while holding others fixed, potentially creating unlikely or even logically impossible combinations. Our cohort Shapley measure uses only observed data points. Instead of changing the value of a predictor, we include or exclude subjects similar to the target subject on that predictor to form a similarity cohort. We then apply the Shapley value to the cohort averages. We connect variable importance measures from explainable AI to function decompositions from global sensitivity analysis. We introduce a squared cohort Shapley value that splits previously studied Shapley effects over subjects, consistent with a Shapley axiom.
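The cohort construction described in the abstract can be sketched in a few lines: for a target subject, each subset of variables defines a cohort of observed subjects that match the target on those variables, the game value of a subset is the mean prediction over its cohort, and the Shapley value is computed exactly over all subsets. This is a minimal illustrative sketch, not the authors' implementation: the exact-match `similar` predicate, the function name `cohort_shapley`, and the exhaustive subset enumeration (feasible only for a handful of variables) are all assumptions made here for clarity.

```python
from itertools import combinations
from math import factorial

def cohort_shapley(X, yhat, t, similar=lambda a, b: a == b):
    """Cohort Shapley values for target subject t (illustrative sketch).

    X     : list of feature rows (observed data points only)
    yhat  : black-box predictions for those rows
    t     : index of the target subject
    similar : decides whether a subject matches the target on one
              coordinate (exact match here; a tolerance band would be
              a natural choice for continuous predictors)
    """
    n, d = len(X), len(X[0])

    def value(S):
        # Cohort: subjects similar to the target on every variable in S.
        cohort = [i for i in range(n)
                  if all(similar(X[i][j], X[t][j]) for j in S)]
        # The empty set gives the full sample, so the grand mean.
        return sum(yhat[i] for i in cohort) / len(cohort)

    phi = [0.0] * d
    for j in range(d):
        others = [k for k in range(d) if k != j]
        for r in range(d):
            for S in combinations(others, r):
                # Standard Shapley weight |S|!(d-|S|-1)!/d!
                w = factorial(r) * factorial(d - r - 1) / factorial(d)
                phi[j] += w * (value(S + (j,)) - value(S))
    return phi
```

With exact matching and a unique target row, the efficiency axiom makes the values sum to the target's prediction minus the grand mean, so each subject's prediction gap is split across its variables using only observed combinations.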
Date: 2019-11, Revised 2020-10
New Economics Papers: this item is included in nep-gth
Citations: 1
Downloads: http://arxiv.org/pdf/1911.00467 (application/pdf, latest version)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1911.00467