Economic uncertainty measures, experts and large language models
Maria Bontempi,
Wojciech Charemza and
Svetlana Makarova
Journal of International Money and Finance, 2025, vol. 157, issue C
Abstract:
The paper proposes a randomness-type test for comparing the validity of different measures of economic uncertainty. The test verifies the randomness hypothesis for the match between jumps in an uncertainty index and the dates of uncertainty-generating events identified either by a panel of experts or by large language models (LLMs), artificial-intelligence systems capable of generating human-like text. The test can also be applied to verify whether LLMs provide a reliable selection of uncertainty-generating events. It was initially used to evaluate the quality of three uncertainty indices for Poland and then applied to six uncertainty indices for the US, using monthly data from January 2004 to March 2021 for both countries. The results show that LLMs provide a reasonable alternative for testing when panels of experts are not available.
Keywords: Uncertainty indices; Uncertainty-generating events; Native language and English; Internet searches; Large language models
JEL-codes: C32 C8 D83 E32 E60
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S0261560625001044 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:jimfin:v:157:y:2025:i:c:s0261560625001044
DOI: 10.1016/j.jimonfin.2025.103369
Journal of International Money and Finance is currently edited by J. R. Lothian