The Memorization Problem: Can We Trust LLMs' Economic Forecasts?

Alejandro Lopez-Lira, Yuehua Tang and Mingyin Zhu

Papers from arXiv.org

Abstract: Large language models (LLMs) cannot be trusted for economic forecasts during periods covered by their training data. Counterfactual forecasting ability is non-identified when the model has seen the realized values: any observed output is consistent with both genuine skill and memorization. Any evidence of memorization represents only a lower bound on encoded knowledge. We demonstrate that LLMs have memorized economic and financial data, recalling exact values before their knowledge cutoff. Instructions to respect historical boundaries fail to prevent recall-level accuracy, and masking fails as LLMs reconstruct entities and dates from minimal context. Post-cutoff, we observe no recall. Memorization extends to embeddings.
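
The identification problem in the abstract (exact recall of realized values pre-cutoff is observationally equivalent to skill) can be illustrated with a minimal recall test. The sketch below is a hypothetical illustration, not the paper's protocol: query_llm is a placeholder for any chat-completion client, and the prompt wording and example observation are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Observation:
        series: str      # e.g., "US unemployment rate (%)"
        date: str        # observation month, ISO format
        realized: float  # value as later published

    def query_llm(prompt: str) -> float:
        """Hypothetical stand-in for an LLM call; wire up a real client.

        Raises so the sketch does not pretend to query a model.
        """
        raise NotImplementedError("replace with a chat-completion request")

    def recall_error(obs: Observation) -> float:
        """Ask the model for a realized value and return the absolute error.

        Near-zero error on pre-cutoff dates is consistent with memorization,
        so it cannot by itself demonstrate forecasting skill.
        """
        prompt = (
            f"What was the exact value of {obs.series} for {obs.date}? "
            "Answer with a single number."
        )
        return abs(query_llm(prompt) - obs.realized)

    # Usage (placeholder value, for illustration only):
    # recall_error(Observation("US unemployment rate (%)", "2019-06", 3.7))

Running the same prompt on dates after the model's knowledge cutoff, where the realized value cannot have been in the training data, is what separates recall from forecasting in the design the abstract describes.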

Date: 2025-04, Revised 2025-12
New Economics Papers: this item is included in nep-ain, nep-big, nep-cmp and nep-for
Citations: 2 (tracked in EconPapers)

Downloads: http://arxiv.org/pdf/2504.14765 (latest version, application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2504.14765

Handle: RePEc:arx:papers:2504.14765