EconPapers    

Interpreting Deep Learning Models with Marginal Attribution by Conditioning on Quantiles

M. Merz, R. Richman, A. Tsanakas and M. V. Wüthrich

Papers from arXiv.org

Abstract: A rapidly growing literature on explaining deep learning models has emerged. This paper contributes to that literature by introducing a global gradient-based model-agnostic method, which we call Marginal Attribution by Conditioning on Quantiles (MACQ). Our approach is based on analyzing the marginal attribution of predictions (outputs) to individual features (inputs). Specifically, we consider variable importance by fixing (global) output levels and, thus, explain how features marginally contribute across different regions of the prediction space. Hence, MACQ can be seen as a marginal attribution counterpart to approaches such as accumulated local effects (ALE), which study the sensitivities of outputs by perturbing inputs. Furthermore, MACQ allows us to separate the marginal attribution of individual features from interaction effects, and to visually illustrate the 3-way relationship between marginal attribution, output level, and feature value.
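To make the idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' exact estimator: gradients of the prediction with respect to each feature (approximated here by central finite differences) are grouped by the quantile level of the prediction, and averaged within each output-quantile bin. The function name `macq_sketch`, the binning scheme, and the toy model are all assumptions made for illustration.

```python
import numpy as np

def macq_sketch(predict, X, n_quantiles=4, eps=1e-5):
    """Illustrative sketch of the MACQ idea (hypothetical helper, not
    the paper's estimator): mean marginal effect per feature, grouped
    by the output-quantile bin of each prediction."""
    n, p = X.shape
    preds = predict(X)
    # Approximate the gradient of the prediction w.r.t. each feature
    # by central finite differences.
    grads = np.empty((n, p))
    for j in range(p):
        Xp, Xm = X.copy(), X.copy()
        Xp[:, j] += eps
        Xm[:, j] -= eps
        grads[:, j] = (predict(Xp) - predict(Xm)) / (2 * eps)
    # Assign each observation to an output-quantile bin.
    edges = np.quantile(preds, np.linspace(0.0, 1.0, n_quantiles + 1))
    bins = np.clip(np.searchsorted(edges, preds, side="right") - 1,
                   0, n_quantiles - 1)
    # Average marginal attribution per feature within each bin.
    return np.array([grads[bins == b].mean(axis=0)
                     for b in range(n_quantiles)])  # (n_quantiles, p)

# Toy usage: feature 0 contributes more strongly at high output levels,
# feature 1 has a constant marginal effect of 0.5.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
f = lambda X: np.exp(X[:, 0]) + 0.5 * X[:, 1]
A = macq_sketch(f, X)
```

In this toy setup the attribution of feature 0 grows across output-quantile bins, while feature 1 stays flat near 0.5, which is the kind of level-dependent variable importance the abstract describes.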

Date: 2021-03
Citations: View citations in EconPapers (1)

Downloads: (external link)
http://arxiv.org/pdf/2103.11706 Latest version (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2103.11706


Bibliographic data for series maintained by arXiv administrators (help@arxiv.org).

 
Page updated 2025-03-19
Handle: RePEc:arx:papers:2103.11706