Can Unbiased Predictive AI Amplify Bias?
Tanvir Ahmed Khan
No 1510, Working Paper from Economics Department, Queen's University
Abstract:
Predictive AI is increasingly used to guide decisions about agents. I show that even a bias-neutral predictive AI can amplify exogenous (human) bias in settings where the predictive AI offers a cost-adjusted precision gain over unbiased predictions and final judgments are made by biased human evaluators. Absent perfect and instantaneous belief updating, expected victims of bias become less likely to be saved by randomness as predictions grow more precise. If this effect dominates, aggregate discrimination can increase. Not accounting for this mechanism may result in AI being unduly blamed for creating bias.
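The "saved by randomness" mechanism can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's model: it assumes agents with normally distributed quality, an AI signal with tunable noise, and an evaluator who applies a fixed bias penalty to a disfavored group. All parameter values (bias b, threshold t, noise levels) are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative sketch (NOT the paper's formal model): a biased evaluator
# screens agents using the posterior mean of quality implied by an AI signal,
# minus a group-based penalty. Parameter values below are hypothetical.
rng = np.random.default_rng(0)
n = 200_000
b = 0.5  # assumed bias penalty applied to the disfavored group
t = 0.5  # assumed approval threshold on the posterior mean of quality

def approval_gap(sigma2):
    """Approval-rate gap (favored minus disfavored) at signal noise sigma2."""
    q = rng.standard_normal(n)                   # true quality ~ N(0, 1)
    s = q + rng.normal(0.0, np.sqrt(sigma2), n)  # AI signal = quality + noise
    post = s / (1.0 + sigma2)                    # Bayesian posterior mean of q
    fav = (post > t).mean()                      # favored group: no penalty
    dis = (post - b > t).mean()                  # disfavored group: penalty b
    return fav - dis

gap_noisy = approval_gap(sigma2=8.0)    # imprecise prediction
gap_precise = approval_gap(sigma2=0.1)  # precise prediction

# Noise compresses posterior means toward the prior, so some disfavored
# agents clear the threshold "by luck"; under these assumptions the realized
# approval gap is larger when the AI signal is more precise.
print(gap_noisy < gap_precise)
```

Under this toy parameterization, a noisier signal shrinks everyone's posterior toward the prior mean, which narrows the realized gap between groups; a more precise signal removes that randomness and lets the evaluator's bias penalty bind more often, consistent with the abstract's amplification mechanism.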
Keywords: artificial intelligence; AI; algorithm; human-machine interactions; discrimination; bias; algorithmic bias; financial institutions
JEL-codes: G2 J15 O33
Pages: 25 pages
Date: 2023-07
New Economics Papers: this item is included in nep-ain, nep-big and nep-cmp
Downloads: (external link)
https://www.econ.queensu.ca/sites/econ.queensu.ca/files/wpaper/qed_wp_1510.pdf First version 2023 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:qed:wpaper:1510