Large Language Models for Behavioral Economics: Internal Validity and Elicitation of Mental Models
Brian Jabarian
Papers from arXiv.org
Abstract:
In this article, we explore the transformative potential of integrating generative AI, particularly Large Language Models (LLMs), into behavioral and experimental economics to enhance internal validity. By leveraging AI tools, researchers can improve adherence to key exclusion restrictions and, in particular, ensure the internal validity of measures of mental models, which often require human intervention in the incentive mechanism. We present a case study demonstrating how LLMs can enhance experimental design, participant engagement, and the validity of measuring mental models.
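To make the abstract's central claim concrete, the sketch below shows one way an LLM could be slotted into the incentive mechanism for eliciting mental models: scoring a participant's free-text explanation against a pre-registered rubric and mapping the score to a bonus payment, a step that would otherwise rely on a human rater. This is an illustrative assumption, not the paper's actual design; the model name, rubric wording, and payment scheme are hypothetical.

    # Illustrative sketch (assumed design, not the paper's method): an LLM as an
    # automated rater inside the incentive mechanism for mental-model elicitation.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    RUBRIC = (
        "Score the response from 0 to 10 for how clearly it states a causal "
        "mechanism linking the treatment to the outcome. Reply with the integer only."
    )

    def score_mental_model(response_text: str, model: str = "gpt-4o-mini") -> int:
        """Return a rubric score for a participant's stated mental model."""
        completion = client.chat.completions.create(
            model=model,           # hypothetical model choice
            temperature=0,         # deterministic grading for reproducibility
            messages=[
                {"role": "system", "content": RUBRIC},
                {"role": "user", "content": response_text},
            ],
        )
        return int(completion.choices[0].message.content.strip())

    def bonus_payment(score: int, rate_per_point: float = 0.10) -> float:
        """Map the rubric score to an incentive payment (hypothetical scheme)."""
        return round(score * rate_per_point, 2)

    if __name__ == "__main__":
        answer = "Higher prices reduce demand, so the subsidy raises quantity sold."
        s = score_mental_model(answer)
        print(f"score={s}, bonus=${bonus_payment(s)}")

Because the grading prompt and payment rule are fixed in advance and applied identically to every participant, no human enters the incentive loop, which is the kind of exclusion-restriction adherence the abstract refers to.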
Date: 2024-06
New Economics Papers: this item is included in nep-ain, nep-big, nep-cbe, nep-cmp, nep-dcm, nep-evo, nep-exp and nep-pke
Downloads: http://arxiv.org/pdf/2407.12032 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2407.12032