The Fairness of Credit Scoring Models
Christophe Hurlin, Christophe Perignon and Sébastien Saurin
Additional contact information
Christophe Perignon: HEC Paris - Ecole des Hautes Etudes Commerciales
Sébastien Saurin: UO - Université d'Orléans
Working Papers from HAL
Abstract:
In credit markets, screening algorithms discriminate between good-type and bad-type borrowers; this is their raison d'être. However, in doing so, they often also discriminate between individuals sharing a protected attribute (e.g., gender, age, race) and the rest of the population. In this paper, we show how to test (1) whether there exists a statistically significant difference in rejection rates or interest rates, called a lack of fairness, between protected and unprotected groups, and (2) whether this difference is due solely to creditworthiness. When condition (2) is not met, the screening algorithm does not comply with the fair-lending principle and may be deemed illegal. Our framework provides guidance on how algorithmic fairness can be monitored by lenders, controlled by their regulators, and improved for the benefit of protected groups.
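Part (1) of the abstract, testing for a statistically significant gap in rejection rates between the protected group and the rest of the population, can be sketched with a standard two-proportion z-test. This is a minimal illustration with hypothetical counts, not the paper's actual testing framework, which addresses both fairness conditions jointly.

```python
import math

def rejection_rate_gap_test(rej_prot, n_prot, rej_unprot, n_unprot):
    """Two-proportion z-test for a difference in rejection rates.

    rej_*: number of rejected applicants in each group
    n_*:   number of applicants in each group
    Returns the z statistic and a two-sided p-value.
    """
    p1 = rej_prot / n_prot          # rejection rate, protected group
    p2 = rej_unprot / n_unprot      # rejection rate, unprotected group
    p_pool = (rej_prot + rej_unprot) / (n_prot + n_unprot)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_prot + 1 / n_unprot))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical numbers: 300/1000 rejections in the protected group
# versus 250/1000 in the rest of the population.
z, p = rejection_rate_gap_test(300, 1000, 250, 1000)
```

A significant gap alone establishes only a lack of fairness; condition (2) of the paper still asks whether the gap persists once creditworthiness is accounted for.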
Date: 2021-05-19
Citations: 4 (in EconPapers)
Related works:
Working Paper: The Fairness of Credit Scoring Models (2024) 
Working Paper: The Fairness of Credit Scoring Models (2021) 
Working Paper: The Fairness of Credit Scoring Models (2021) 
Persistent link: https://EconPapers.repec.org/RePEc:hal:wpaper:hal-03501452
DOI: 10.2139/ssrn.3785882
Bibliographic data for series maintained by CCSD.