EconPapers    

The ordinary meaning bot: Simulating human surveys with LLMs

Johannes Kruse (Max Planck Institute for Research on Collective Goods, Bonn)

No 2025_12, Discussion Paper Series of the Max Planck Institute for Research on Collective Goods from Max Planck Institute for Research on Collective Goods

Abstract: This comment shows how large language models (LLMs) can help courts discern the "ordinary meaning" of statutory terms. Instead of relying on expert-heavy corpus-linguistic techniques (Gries 2025), the author simulates a human survey with GPT-4o. Demographically realistic AI agents replicate the 2,835 participants in Tobia's (2020) study on "vehicle" and yield response distributions with no statistically significant difference from the human data (Kolmogorov–Smirnov test, p = 0.915). The paper addresses concerns about hallucinations, reproducibility, data leakage, and explainability, and introduces the locked-prompt "Ordinary Meaning Bot," arguing that LLM-based survey simulation is a practical, accurate alternative to dictionaries, intuition, or complex corpus analysis.
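The paper's headline result rests on a two-sample Kolmogorov–Smirnov test: the maximum gap between the empirical distribution functions of the human and simulated responses. As a rough illustration of what that statistic measures, here is a minimal pure-Python sketch; the Likert-style data below are invented for this example and are not the paper's data.

```python
from bisect import bisect_right

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two samples' empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = bisect_right(a, x) / len(a)  # fraction of sample_a <= x
        cdf_b = bisect_right(b, x) / len(b)  # fraction of sample_b <= x
        d = max(d, abs(cdf_a - cdf_b))
    return d

# Invented 1-7 agreement ratings (e.g., "this item is a vehicle")
human = [1, 2, 2, 3, 4, 4, 5, 5, 6, 7]
simulated = [1, 2, 3, 3, 4, 5, 5, 6, 6, 7]

print(f"D = {ks_statistic(human, simulated):.2f}")
```

A small D (with a correspondingly large p-value, which the paper reports as 0.915) means the test cannot distinguish the simulated response distribution from the human one.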

Keywords: ordinary meaning; large language models; prompt engineering; human survey simulation; alignment
JEL-codes: K1 Z0
Date: 2025-08
New Economics Papers: this item is included in nep-ain

Downloads:
https://www.coll.mpg.de/pdf_dat/2025_12online.pdf (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:mpg:wpaper:2025_12

More papers in Discussion Paper Series of the Max Planck Institute for Research on Collective Goods from Max Planck Institute for Research on Collective Goods Contact information at EDIRC.
Bibliographic data for series maintained by Marc Martin.

 
Page updated 2025-08-28
Handle: RePEc:mpg:wpaper:2025_12