EconPapers
Bias-Adjusted LLM Agents for Human-Like Decision-Making via Behavioral Economics

Ayato Kitadai, Yusuke Fukasawa and Nariaki Nishino

Papers from arXiv.org

Abstract: Large language models (LLMs) are increasingly used to simulate human decision-making, but their intrinsic biases often diverge from real human behavior--limiting their ability to reflect population-level diversity. We address this challenge with a persona-based approach that leverages individual-level behavioral data from behavioral economics to adjust model biases. Applying this method to the ultimatum game--a standard but difficult benchmark for LLMs--we observe improved alignment between simulated and empirical behavior, particularly on the responder side. While further refinement of trait representations is needed, our results demonstrate the promise of persona-conditioned LLMs for simulating human-like decision patterns at scale.
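The persona-based approach described in the abstract can be illustrated with a minimal sketch: condition the model on individual-level behavioral traits before eliciting an ultimatum-game decision. This is not the paper's released code; the trait names and the `query_llm` function are hypothetical placeholders for whatever chat-completion API is used.

```python
# Hypothetical sketch of persona-conditioned prompting for the
# ultimatum-game responder role. `query_llm` stands in for any
# chat-completion API; the trait labels are illustrative only.

def build_responder_prompt(persona: dict, offer: int, total: int = 10) -> str:
    """Compose a prompt that conditions the LLM on individual-level traits."""
    traits = "; ".join(f"{k}: {v}" for k, v in persona.items())
    return (
        f"You are a participant with these traits: {traits}. "
        f"In an ultimatum game, the proposer offers you {offer} "
        f"out of {total} units. Reply ACCEPT or REJECT."
    )

# Illustrative trait values drawn from individual-level behavioral data.
persona = {"fairness concern": "high", "risk tolerance": "low"}
prompt = build_responder_prompt(persona, offer=2)
# decision = query_llm(prompt)  # hypothetical call returning "ACCEPT" or "REJECT"
```

Sampling many such personas from empirical trait distributions is what would let the simulated responses reflect population-level diversity rather than the model's single intrinsic bias.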

Date: 2025-08
New Economics Papers: this item is included in nep-ain and nep-evo

Downloads: http://arxiv.org/pdf/2508.18600 (latest version, application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2508.18600



Page updated 2025-09-17
Handle: RePEc:arx:papers:2508.18600