Why Groups Matter: Necessity of Group Structures in Attributions
Dangxing Chen, Jingfeng Chen, and Weicheng Ye
Papers from arXiv.org
Abstract:
Explainable machine learning methods have developed substantially. Despite their success, existing approaches emphasize general frameworks without incorporating prior domain expertise. High-stakes financial sectors possess extensive domain knowledge about their features, so model explanations are expected to be consistent with that knowledge to ensure conceptual soundness. In this work, we study the group structures of features that naturally arise in financial datasets. Our study shows the importance of considering group structures that conform to regulations. When group structures are present, direct application of explainable machine learning methods, such as Shapley values and Integrated Gradients, may not provide consistent explanations; group versions of the Shapley value, by contrast, can. We include detailed examples to highlight the practical perspective of our framework.
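The group Shapley value mentioned in the abstract can be sketched by treating each feature group as a single player in the cooperative game, rather than each feature individually. The following is a minimal illustration, not the authors' exact method: it assumes a zero baseline for absent features and enumerates all coalitions exactly, which is only feasible for a small number of groups.

```python
from itertools import combinations
from math import factorial

def value(model, x, baseline, feature_idx):
    """Model output with only the features in feature_idx taken from x;
    all other features are set to the baseline."""
    z = list(baseline)
    for i in feature_idx:
        z[i] = x[i]
    return model(z)

def group_shapley(model, x, baseline, groups):
    """Exact Shapley values where each player is a group of features.
    Enumerates every coalition of groups (exponential in len(groups))."""
    n = len(groups)
    phi = [0.0] * n
    for g in range(n):
        others = [h for h in range(n) if h != g]
        for size in range(n):
            for coalition in combinations(others, size):
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                feats = [i for h in coalition for i in groups[h]]
                with_g = value(model, x, baseline, feats + list(groups[g]))
                without_g = value(model, x, baseline, feats)
                phi[g] += weight * (with_g - without_g)
    return phi

# Toy linear model f(x) = 2*x0 + 3*x1 + 5*x2; features 0 and 1 form one
# group (e.g. two correlated credit variables), feature 2 its own group.
model = lambda z: 2 * z[0] + 3 * z[1] + 5 * z[2]
x, baseline = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
attributions = group_shapley(model, x, baseline, [[0, 1], [2]])
# For a linear model with zero baseline, each group's attribution is the
# sum of its coefficients times its inputs: [5.0, 5.0].
```

By construction the group attributions satisfy efficiency: they sum to `model(x) - model(baseline)`, so the correlated features in a group are credited jointly rather than having their contribution split arbitrarily between them.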
Date: 2024-08
New Economics Papers: this item is included in nep-big
Downloads: http://arxiv.org/pdf/2408.05701 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2408.05701