Narratives to Numbers: Large Language Models and Economic Policy Uncertainty
Ethan Hartley
Papers from arXiv.org
Abstract:
This study evaluates large language models as estimable classifiers and clarifies how modeling choices shape downstream measurement error. Revisiting the Economic Policy Uncertainty index, we show that contemporary classifiers substantially outperform dictionary rules, better track human audit assessments, and extend naturally to noisy historical and multilingual news. We use these tools to construct a new nineteenth-century U.S. index from more than 360 million newspaper articles and exploratory cross-country indices with a single multilingual model. Taken together, our results show that LLMs can systematically improve text-derived measures and should be integrated as explicit measurement tools in empirical economics.
Date: 2025-11, Revised 2025-11
New Economics Papers: this item is included in nep-ain
Downloads: http://arxiv.org/pdf/2511.17866 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2511.17866