Debiasing classifiers: is reality at variance with expectation?
Ashrya Agrawal,
Florian Pfisterer,
Bernd Bischl,
Francois Buet-Golfouse,
Srijan Sood,
Jiahao Chen,
Sameena Shah and
Sebastian Vollmer
Papers from arXiv.org
Abstract:
We present an empirical study of debiasing methods for classifiers, showing that debiasers often fail in practice to generalize out-of-sample, and can in fact make fairness worse rather than better. A rigorous evaluation of the debiasing treatment effect requires extensive cross-validation beyond what is usually done. We demonstrate that this phenomenon can be explained as a consequence of bias-variance trade-off, with an increase in variance necessitated by imposing a fairness constraint. Follow-up experiments validate the theoretical prediction that the estimation variance depends strongly on the base rates of the protected class. Considering fairness-performance trade-offs justifies the counterintuitive notion that partial debiasing can actually yield better results in practice on out-of-sample data.
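The abstract argues that the effect of a debiasing step should be estimated out-of-sample via repeated cross-validation rather than on the data used for debiasing. Below is a minimal sketch of that evaluation protocol, not the authors' code: it assumes scikit-learn, a synthetic dataset, a synthetic protected attribute, and a simple group-specific-threshold post-processor standing in for the debiasing methods studied in the paper.

```python
# Hedged sketch: estimate the out-of-sample effect of a debiasing step
# with repeated cross-validation. Dataset, protected attribute, debiaser
# and fairness metric are illustrative assumptions, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
# Synthetic protected attribute with unequal base rates across labels.
group = (rng.random(len(y)) < np.where(y == 1, 0.7, 0.3)).astype(int)

def dp_difference(y_pred, group):
    """Demographic parity difference: gap in positive prediction rates."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def group_thresholds(scores, group, target_rate):
    """Per-group score thresholds that equalise positive rates on the
    training fold (a simple post-processing 'debiaser')."""
    return {g: np.quantile(scores[group == g], 1 - target_rate) for g in (0, 1)}

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
gaps_raw, gaps_debiased = [], []
for tr, te in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    s_tr = clf.predict_proba(X[tr])[:, 1]
    s_te = clf.predict_proba(X[te])[:, 1]
    gaps_raw.append(dp_difference((s_te >= 0.5).astype(int), group[te]))
    # Fit the debiaser in-sample, then check whether the fairness gain
    # survives on the held-out fold -- the generalisation question here.
    thr = group_thresholds(s_tr, group[tr], target_rate=(s_tr >= 0.5).mean())
    y_deb = np.array([s >= thr[g] for s, g in zip(s_te, group[te])]).astype(int)
    gaps_debiased.append(dp_difference(y_deb, group[te]))

print(f"raw DP gap:      {np.mean(gaps_raw):.3f} +/- {np.std(gaps_raw):.3f}")
print(f"debiased DP gap: {np.mean(gaps_debiased):.3f} +/- {np.std(gaps_debiased):.3f}")
```

The spread of the per-fold fairness gaps is the point of interest: under a fairness constraint the out-of-sample gap can vary widely across folds, which is the variance effect the paper ties to the base rates of the protected class.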
Date: 2020-11, Revised 2021-05
Downloads: http://arxiv.org/pdf/2011.02407 (application/pdf, latest version)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2011.02407