Towards Fair AI: Mitigating Bias in Credit Decisions—A Systematic Literature Review
José Rômulo de Castro Vieira,
Flavio Barboza,
Daniel Cajueiro and
Herbert Kimura
Additional contact information
José Rômulo de Castro Vieira: Department of Business Management, Faculty of Economics, Business Management, Accounting and Public Policy Management (FACE), University of Brasília, Brasília 70910-900, DF, Brazil
Flavio Barboza: Faculty of Management and Business, Federal University of Uberlândia, Uberlândia 38408-100, MG, Brazil
Daniel Cajueiro: Department of Economics, Faculty of Economics, Business Management, Accounting and Public Policy Management (FACE), University of Brasília, Brasília 70910-900, DF, Brazil
Herbert Kimura: Department of Business Management, Faculty of Economics, Business Management, Accounting and Public Policy Management (FACE), University of Brasília, Brasília 70910-900, DF, Brazil
JRFM, 2025, vol. 18, issue 5, 1-30
Abstract:
The increasing adoption of artificial intelligence algorithms is redefining decision-making across various industries. In the financial sector, automated credit granting has undergone profound changes, and this transformation raises concerns about biases perpetuated or introduced by AI systems. This study investigates the methods used to identify and mitigate biases in AI models applied to credit granting. We conducted a systematic literature review using the IEEE, Scopus, Web of Science, and Science Direct databases, covering the period from 1 January 2013 to 1 October 2024. Of the 414 articles identified, 34 were selected for detailed analysis. Most studies are empirical and quantitative, focusing on fairness in outcomes and biases present in datasets. Preprocessing techniques were the dominant bias-mitigation approach, often applied to public academic datasets. Gender and race were the most studied sensitive attributes, and statistical parity was the most commonly used fairness metric. The findings reveal a maturing research landscape that prioritizes fairness in model outcomes and the mitigation of biases embedded in historical data. However, only a quarter of the papers report more than one fairness metric, limiting comparability across approaches. The literature remains largely focused on a narrow set of sensitive attributes, with little attention to intersectionality or alternative sources of bias. Furthermore, no study employed causal inference techniques to identify proxy discrimination. Despite some promising results, in which reported fairness gains exceed 30% with minimal accuracy loss, significant methodological gaps persist, including the lack of standardized metrics, overreliance on legacy data, and insufficient transparency in model pipelines. Future work should prioritize developing advanced bias-mitigation methods, exploring a wider range of sensitive attributes, standardizing fairness metrics, improving model explainability, reducing computational complexity, enhancing synthetic data generation, and addressing the legal and ethical challenges of algorithmic decision-making.
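Note: statistical parity, the fairness metric the abstract identifies as the most common in the reviewed studies, requires that the approval rate be independent of the sensitive attribute, i.e., P(y_hat = 1 | A = 0) = P(y_hat = 1 | A = 1). The Python sketch below shows one way the statistical parity difference might be computed; the function name, variable names, and synthetic data are illustrative assumptions, not taken from any of the reviewed studies.

    import numpy as np

    def statistical_parity_difference(y_pred, sensitive):
        """Difference in approval rates between the privileged (1) and
        unprivileged (0) groups; a value of 0 indicates statistical parity."""
        y_pred = np.asarray(y_pred)
        sensitive = np.asarray(sensitive)
        rate_privileged = y_pred[sensitive == 1].mean()
        rate_unprivileged = y_pred[sensitive == 0].mean()
        return rate_privileged - rate_unprivileged

    # Illustrative synthetic credit decisions with a built-in approval gap.
    rng = np.random.default_rng(0)
    sensitive = rng.integers(0, 2, size=1000)  # hypothetical binary group flag
    y_pred = rng.binomial(1, np.where(sensitive == 1, 0.60, 0.45))  # biased approvals
    print(statistical_parity_difference(y_pred, sensitive))  # approximately 0.15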
Keywords: algorithmic fairness; machine learning; credit scoring; algorithmic bias; artificial intelligence
JEL-codes: C E F2 F3 G
Date: 2025
Downloads:
https://www.mdpi.com/1911-8074/18/5/228/pdf (application/pdf)
https://www.mdpi.com/1911-8074/18/5/228/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jjrfmx:v:18:y:2025:i:5:p:228-:d:1641302
JRFM is currently edited by Ms. Chelthy Cheng