Recalibrating binary probabilistic classifiers
Dirk Tasche
Papers from arXiv.org
Abstract:
Recalibration of binary probabilistic classifiers to a target prior probability is an important task in areas like credit risk management. However, recalibrating a classifier learned on a training dataset to a target prior on a test dataset is in general not a well-defined problem, because there may be more than one way to transform the original posterior probabilities such that the target is matched. In this paper, recalibration methods are analysed from a distribution shift perspective. Distribution shift assumptions linked to the area under the curve (AUC) of a probabilistic classifier are found to be useful for the design of meaningful recalibration methods. Two new methods, parametric covariate shift with posterior drift (CSPD) and ROC-based quasi moment matching (QMM), are proposed and tested together with some other methods in an example setting. The outcomes of the test suggest that the QMM methods discussed in the paper can provide appropriately conservative results in evaluations with concave functions, for instance the risk weight functions used in credit risk.
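As a point of reference for the recalibration problem described above, the sketch below shows the simplest standard transform of posterior probabilities to a new prior: the Bayes prior-odds correction, which is exact only under a pure label-shift assumption. It is not the CSPD or QMM method proposed in the paper; the function name, default-rate figures and example scores are hypothetical.

```python
import numpy as np

def prior_shift_recalibrate(p, train_prior, target_prior):
    """Adjust posterior probabilities p, produced by a classifier trained at
    class-1 prior `train_prior`, to a new `target_prior`.

    Assumes pure label shift (class-conditional feature distributions are
    unchanged), under which rescaling the posterior odds by the ratio of
    target odds to training odds is the Bayes-consistent correction.
    Illustrative baseline only, not the paper's CSPD/QMM methods.
    """
    p = np.asarray(p, dtype=float)
    # Posterior odds of class 1 for each observation.
    odds = p / (1.0 - p)
    # Ratio of target prior odds to training prior odds.
    prior_ratio = (target_prior / (1.0 - target_prior)) / (
        train_prior / (1.0 - train_prior)
    )
    new_odds = odds * prior_ratio
    # Convert the reweighted odds back to probabilities.
    return new_odds / (1.0 + new_odds)

# Example: scores trained at a 10% default rate, recalibrated to a 4% target prior.
scores = np.array([0.02, 0.10, 0.35, 0.80])
print(prior_shift_recalibrate(scores, train_prior=0.10, target_prior=0.04))
```

Note that this correction matches the target prior on the test dataset only if the label-shift assumption actually holds there; the paper's point is that other distribution shift assumptions lead to different, equally admissible recalibration transforms.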
Date: 2025-05, Revised 2026-01
New Economics Papers: this item is included in nep-ecm and nep-rmg
Published in Journal of Statistics and Computer Science, 2025, 4(2):117-133
Downloads: http://arxiv.org/pdf/2505.19068 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2505.19068