Social Group Bias in AI Finance
Thomas R. Cook and Sophia Kazinnik
Papers from arXiv.org
Abstract:
Financial institutions increasingly rely on large language models (LLMs) for high-stakes decision-making. However, these models risk perpetuating harmful biases if deployed without careful oversight. This paper investigates racial bias in LLMs specifically through the lens of credit decision-making tasks, operating on the premise that biases identified here are indicative of broader concerns across financial applications. We introduce a reproducible, counterfactual testing framework that evaluates how models respond to simulated mortgage applicants identical in all attributes except race. Our results reveal significant race-based discrepancies, exceeding historically observed bias levels. Leveraging layer-wise analysis, we track the propagation of sensitive attributes through internal model representations. Building on this, we deploy a control-vector intervention that effectively reduces racial disparities by up to 70% (33% on average) without impairing overall model performance. Our approach provides a transparent and practical toolkit for the identification and mitigation of bias in financial LLM deployments.
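The page lists no code, but the counterfactual test and control-vector intervention described in the abstract lend themselves to a brief illustration. The sketch below is an assumption about how such a pipeline could look, not the authors' implementation: the applicant fields, the score_application stub, and the mean-difference control vector are hypothetical placeholders.

    import numpy as np

    # --- Counterfactual prompt construction (illustrative field names, not the paper's) ---

    BASE_APPLICANT = {
        "income": 85_000,
        "credit_score": 720,
        "loan_amount": 300_000,
        "employment_years": 6,
    }

    RACE_LABELS = ["White", "Black", "Hispanic", "Asian"]

    def build_prompt(applicant):
        """Render one applicant profile as a mortgage-underwriting prompt."""
        fields = ", ".join(f"{k}={v}" for k, v in applicant.items())
        return ("You are a mortgage underwriter. Given the applicant profile "
                f"({fields}), state the probability of approval.")

    def score_application(prompt):
        """Placeholder for an LLM call returning P(approve); wire in a real client here."""
        raise NotImplementedError

    def counterfactual_gaps(base, races):
        """Score otherwise-identical applicants and return per-race approval probabilities."""
        return {race: score_application(build_prompt({**base, "race": race}))
                for race in races}

    # --- Control-vector style intervention on hidden states (mean-difference direction) ---

    def control_vector(h_group_a, h_group_b):
        """Unit direction separating two groups' layer activations (rows = examples)."""
        v = h_group_a.mean(axis=0) - h_group_b.mean(axis=0)
        return v / np.linalg.norm(v)

    def apply_control(hidden, v, alpha=1.0):
        """Subtract a scaled projection onto the sensitive-attribute direction."""
        return hidden - alpha * np.dot(hidden, v) * v

Under these assumptions, counterfactual_gaps quantifies race-based discrepancies across otherwise identical applications, and apply_control removes a learned sensitive-attribute direction from a chosen layer's activations at inference time.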
Date: 2025-06
Downloads: http://arxiv.org/pdf/2506.17490 (application/pdf, latest version)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2506.17490