How to address monotonicity for model risk management?
Dangxing Chen and Weicheng Ye
Papers from arXiv.org
Abstract:
In this paper, we study the problem of establishing the accountability and fairness of transparent machine learning models through monotonicity. Although individual monotonicity has been studied extensively, pairwise monotonicity is often overlooked in the existing literature. This paper studies transparent neural networks in the presence of three types of monotonicity: individual monotonicity, weak pairwise monotonicity, and strong pairwise monotonicity. As a means of achieving monotonicity while maintaining transparency, we propose monotonic groves of neural additive models. Through empirical examples, we demonstrate that monotonicity is often violated in practice and that monotonic groves of neural additive models are transparent, accountable, and fair.
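To make the three notions concrete, the sketch below checks them numerically on a toy linear scoring function. The function, grid, and increment are illustrative assumptions for this listing, not the paper's model: individual monotonicity requires the score to be non-decreasing in each feature; strong pairwise monotonicity requires that a given increment applied to the dominant feature raise the score at least as much as the same increment applied to the dominated feature, everywhere; weak pairwise monotonicity requires this comparison only where the two features take equal values.

```python
import itertools

# Illustrative scoring function (an assumption, not the paper's model):
# individually monotone in both features, with feature 0 dominating
# feature 1 because its coefficient is larger.
def f(x0, x1):
    return 2.0 * x0 + 1.0 * x1

grid = [i / 10 for i in range(11)]
delta = 0.1

# Individual monotonicity: f is non-decreasing in each argument.
individual = all(
    f(x0 + delta, x1) >= f(x0, x1) and f(x0, x1 + delta) >= f(x0, x1)
    for x0, x1 in itertools.product(grid, grid)
)

# Strong pairwise monotonicity (feature 0 over feature 1): the same
# increment raises f at least as much via x0 as via x1, at every point.
strong_pairwise = all(
    f(x0 + delta, x1) - f(x0, x1) >= f(x0, x1 + delta) - f(x0, x1)
    for x0, x1 in itertools.product(grid, grid)
)

# Weak pairwise monotonicity: the same comparison, required only on the
# diagonal where the two features are equal (x0 == x1).
weak_pairwise = all(
    f(x + delta, x) - f(x, x) >= f(x, x + delta) - f(x, x)
    for x in grid
)

print(individual, strong_pairwise, weak_pairwise)  # True True True
```

Note that strong pairwise monotonicity implies weak pairwise monotonicity (the diagonal is a subset of the grid), which is why a constraint enforced in the strong sense automatically satisfies the weak one.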
Date: 2023-04, Revised 2023-09
New Economics Papers: this item is included in nep-big, nep-cmp and nep-rmg
Published in Proceedings of the 40th International Conference on Machine Learning, 2023 (Proceedings of Machine Learning Research, Vol. 202), PMLR, pp. 5282-5295
Downloads: http://arxiv.org/pdf/2305.00799 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2305.00799