EconPapers

Human bias in AI models? Anchoring effects and mitigation strategies in large language models

Jeremy K. Nguyen

Journal of Behavioral and Experimental Finance, 2024, vol. 43, issue C

Abstract: This study builds on the seminal work of Tversky and Kahneman (1974), exploring the presence and extent of anchoring bias in forecasts generated by four Large Language Models (LLMs): GPT-4, Claude 2, Gemini Pro and GPT-3.5. In contrast to recent findings of advanced reasoning capabilities in LLMs, our randomised controlled trials reveal the presence of anchoring bias across all models: forecasts are significantly influenced by prior mention of high or low values. We examine two mitigation prompting strategies, ‘Chain of Thought’ and ‘ignore previous’, finding limited and varying degrees of effectiveness. Our results extend the anchoring bias research in finance beyond human decision-making to encompass LLMs, highlighting the importance of deliberate and informed prompting in AI forecasting, both in ad hoc LLM use and in crafting few-shot examples.
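
The sketch below illustrates, in broad strokes, how an anchoring trial of the kind described in the abstract might be assembled: a forecasting prompt that optionally mentions a high or low value, an optional mitigation instruction (‘ignore previous’ or ‘Chain of Thought’), and a comparison of mean forecasts across anchor arms. The prompt wording, the forecast numbers, and all function names are illustrative assumptions, not the paper's actual materials or results.

```python
# Minimal illustrative sketch only; prompt text and numbers are hypothetical,
# not taken from the article's protocol or data.
from statistics import mean

def build_prompt(anchor: float | None, mitigation: str | None = None) -> str:
    """Construct a forecasting prompt with an optional numeric anchor."""
    parts = []
    if anchor is not None:
        # Prior mention of a value: the anchoring manipulation.
        parts.append(f"An analyst recently mentioned a figure of {anchor:.0f}.")
    if mitigation == "ignore_previous":
        parts.append("Ignore any previously mentioned values.")
    elif mitigation == "chain_of_thought":
        parts.append("Think step by step before giving your final answer.")
    parts.append("What is your forecast for the index level next quarter? "
                 "Reply with a single number.")
    return " ".join(parts)

# Hypothetical model outputs standing in for real LLM responses (one per trial).
low_anchor_forecasts = [4150.0, 4200.0, 4180.0]
high_anchor_forecasts = [4800.0, 4750.0, 4820.0]

# Anchoring effect: difference in mean forecasts between high- and low-anchor arms.
effect = mean(high_anchor_forecasts) - mean(low_anchor_forecasts)
print(build_prompt(anchor=5000, mitigation="ignore_previous"))
print(f"Estimated anchoring effect: {effect:.1f} index points")
```

In a randomised design of this kind, each model would be queried repeatedly under randomly assigned anchor and mitigation conditions, and the anchor-arm difference in mean forecasts would measure the bias.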

Keywords: Anchoring bias; Artificial intelligence
JEL-codes: C45 D81
Date: 2024
Citations: 1

Downloads: http://www.sciencedirect.com/science/article/pii/S2214635024000868



Persistent link: https://EconPapers.repec.org/RePEc:eee:beexfi:v:43:y:2024:i:c:s2214635024000868

DOI: 10.1016/j.jbef.2024.100971


Journal of Behavioral and Experimental Finance is currently edited by Michael Dowling and Jürgen Huber

More articles in Journal of Behavioral and Experimental Finance from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Page updated 2025-03-23
Handle: RePEc:eee:beexfi:v:43:y:2024:i:c:s2214635024000868