A novel content-based approach to measuring monetary policy uncertainty using fine-tuned LLMs
Arata Ito,
Masahiro Sato and
Rui Ota
Finance Research Letters, 2025, vol. 75, issue C
Abstract:
Policy uncertainty can reduce policy effectiveness. Existing studies have measured policy uncertainty by tracking the frequency of specific keywords in newspaper articles. However, this keyword-based approach fails to account for the context of articles or to differentiate among the types of uncertainty that such contexts indicate. This study introduces a new method for measuring different types of policy uncertainty in news content using large language models (LLMs). We fine-tune LLMs to identify different types of uncertainty expressed in newspaper articles based on their context, even when the articles contain no specific keywords indicating uncertainty. Applying this method to Japan’s monetary policy from 2015 to 2016, we demonstrate that our approach successfully captures the dynamics of monetary policy uncertainty, which vary significantly depending on the type of uncertainty examined.
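The keyword-based baseline that the abstract contrasts with the LLM approach can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the keyword lists, the article format, and the monthly-share definition are all assumptions chosen for clarity.

```python
from collections import defaultdict

# Hypothetical keyword lists; the actual lists used in keyword-based
# uncertainty indices differ and are typically much larger.
POLICY_TERMS = {"monetary policy", "bank of japan", "interest rate"}
UNCERTAINTY_TERMS = {"uncertain", "uncertainty", "unclear"}

def keyword_uncertainty_index(articles):
    """articles: iterable of (month, text) pairs.

    Returns {month: share of that month's articles containing both a
    policy term and an uncertainty term} -- the style of index the
    keyword-based literature builds, which ignores context and cannot
    distinguish types of uncertainty.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for month, text in articles:
        t = text.lower()
        totals[month] += 1
        if any(p in t for p in POLICY_TERMS) and any(
            u in t for u in UNCERTAINTY_TERMS
        ):
            hits[month] += 1
    return {m: hits[m] / totals[m] for m in totals}

articles = [
    ("2016-01", "The Bank of Japan's negative rate move left markets uncertain."),
    ("2016-01", "Stocks rallied on strong corporate earnings."),
    ("2016-02", "Monetary policy direction remains unclear, analysts say."),
]
index = keyword_uncertainty_index(articles)
# index["2016-01"] == 0.5, index["2016-02"] == 1.0
```

An article expressing doubt about policy without any listed keyword scores zero here, which is exactly the blind spot the fine-tuned LLM classifier is designed to address.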
Keywords: Bank of Japan; Central bank communication; Generative pre-trained transformer; Large language model; Monetary policy; Policy uncertainty; Text data
JEL-codes: C88 E52 E58
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S1544612325000972 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:finlet:v:75:y:2025:i:c:s1544612325000972
DOI: 10.1016/j.frl.2025.106832