Learning to be Homo Economicus: Can an LLM Learn Preferences from Choice?
Jeongbin Kim, Matthew Kovach, Kyu-Min Lee, Euncheol Shin and Hector Tzavellas
Papers from arXiv.org
Abstract:
This paper explores the use of Large Language Models (LLMs) as decision aids, with a focus on their ability to learn preferences and provide personalized recommendations. To establish a baseline, we replicate standard economic experiments on choice under risk (Choi et al., 2007) with GPT, one of the most prominent LLMs, prompted to respond as (i) a human decision maker or (ii) a recommendation system for customers. With these baselines established, GPT is provided with a sample set of choices and prompted to make recommendations based on the provided data. From the data generated by GPT, we identify its (revealed) preferences and explore its ability to learn from data. Our analysis yields three results. First, GPT's choices are consistent with (expected) utility maximization theory. Second, GPT can align its recommendations with people's risk aversion by recommending less risky portfolios to more risk-averse decision makers, highlighting GPT's potential as a personalized decision aid. Third, however, GPT demonstrates only limited alignment with respect to disappointment aversion.
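In Choi et al. (2007)-style experiments, consistency with utility maximization is typically assessed by testing budget-line choices against the Generalized Axiom of Revealed Preference (GARP). The sketch below is an illustrative, hypothetical helper (`garp_violations`) showing what such a consistency check can look like; it is not the authors' code, and the example data are made up.

```python
import numpy as np
from itertools import product

def garp_violations(prices, bundles, tol=1e-9):
    """Return (i, j) pairs that violate GARP in a set of budget-line choices.

    prices[i] and bundles[i] are the price vector and the bundle chosen
    from budget line i (Choi et al., 2007-style portfolio choices).
    """
    prices = np.asarray(prices, dtype=float)
    bundles = np.asarray(bundles, dtype=float)
    n = len(bundles)

    own = np.einsum("ik,ik->i", prices, bundles)   # p_i . x_i (spending on own bundle)
    cross = prices @ bundles.T                     # cross[i, j] = p_i . x_j

    # Direct revealed preference: i R0 j iff bundle j was affordable at prices i.
    direct = cross <= own[:, None] + tol

    # Transitive closure (Warshall) gives the full revealed-preference relation R.
    revealed = direct.copy()
    for k in range(n):
        revealed |= revealed[:, [k]] & revealed[[k], :]

    # Strict direct preference: j P0 i iff bundle i was strictly cheaper at prices j.
    strict = cross < own[:, None] - tol

    # GARP: if i R j, then j must not be strictly directly preferred to i.
    return [(i, j) for i, j in product(range(n), repeat=2)
            if i != j and revealed[i, j] and strict[j, i]]


# Two hypothetical budget lines with mutually consistent choices: no violations.
prices = [[1.0, 0.5], [0.5, 1.0]]
bundles = [[0.4, 1.2], [1.2, 0.4]]
print(garp_violations(prices, bundles))   # -> []
```

An empty list indicates the choices are consistent with maximizing some well-behaved utility function; nonempty output flags the specific pairs of observations that jointly contradict utility maximization.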
Date: 2024-01
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp, nep-dcm and nep-upt
Downloads: http://arxiv.org/pdf/2401.07345 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2401.07345