Can an LLM Learn Preferences from Choice Data?
Jeongbin Kim, Matthew Kovach, Kyu-Min Lee, Euncheol Shin and Hector Tzavellas
Papers from arXiv.org
Abstract:
Can large language models (LLMs) learn a decision maker's preferences from observed choices and generate preference-consistent recommendations in new situations? We propose a portable Simulate-Recommend-Evaluate framework that tests preference learning from revealed-choice data by comparing LLM recommendations with optimal choices implied by known preference primitives. We apply the framework to choice under uncertainty using the disappointment aversion model. Recommendation accuracy improves as models observe more choices, but learning is heterogeneous across preference types and LLMs: GPT learns risk aversion better than disappointment aversion, Gemini performs best in high disappointment-aversion regions, and Claude shows the broadest effective learning across parameter regions.
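The preference primitive in the application is Gul's (1991) disappointment aversion model, under which outcomes below a lottery's endogenous certainty-equivalent value are "disappointing" and receive inflated decision weight. A minimal sketch of how the model's optimal choices could be computed for the evaluation step is below; the CRRA utility `u(x) = x**alpha`, the fixed-point iteration, and all parameter values are illustrative assumptions, not the paper's implementation.

```python
def da_value(lottery, beta=0.5, alpha=0.9, tol=1e-10, max_iter=1000):
    """Disappointment-averse value of a lottery given as [(prob, outcome), ...].

    Solves Gul's (1991) implicit equation by fixed-point iteration:
    outcomes with u(x) below the value v are disappointing and get
    weight inflated by (1 + beta). beta >= 0 is disappointment aversion;
    u(x) = x**alpha is an illustrative CRRA utility (alpha < 1: risk averse).
    """
    u = lambda x: x ** alpha
    v = sum(p * u(x) for p, x in lottery)  # initialize at expected utility
    for _ in range(max_iter):
        num = sum(p * u(x) * (1 + beta * (u(x) < v)) for p, x in lottery)
        den = 1 + beta * sum(p for p, x in lottery if u(x) < v)
        v_new = num / den
        if abs(v_new - v) < tol:
            break
        v = v_new
    return v


# Hypothetical choice problem: the model's "optimal choice" is the
# lottery with the higher disappointment-averse value.
safe = [(1.0, 50.0)]
risky = [(0.5, 100.0), (0.5, 10.0)]
optimal = max([safe, risky], key=lambda L: da_value(L, beta=0.5, alpha=0.9))
```

With `beta = 0` the value collapses to expected utility, so the same routine spans both preference types the abstract compares: varying `beta` and `alpha` over a grid yields the "parameter regions" in which each LLM's recommendations can be scored against the model-implied optimum.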
Date: 2024-01, Revised 2026-04
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp, nep-dcm and nep-upt
Citations: 5 (tracked in EconPapers)
Downloads: http://arxiv.org/pdf/2401.07345 (latest version, PDF)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2401.07345