AI’s predictable memory in financial analysis
Antoine Didisheim, Martina Fraschini and Luciano Somoza
Economics Letters, 2025, vol. 256, issue C
Abstract:
Look-ahead bias in Large Language Models (LLMs) arises when information that would not have been available at the time of prediction is included in the training data and inflates prediction performance. This paper proposes a practical methodology to quantify look-ahead bias in financial applications. By prompting LLMs to retrieve historical stock returns without context, we construct a proxy to estimate memorization-driven predictability. We show that the bias varies predictably with data frequency, model size, and aggregation level: smaller models and finer data granularity exhibit negligible bias. Our results help researchers navigate the trade-off between statistical power and bias in LLMs.
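The abstract's core idea, eliciting stock returns from an LLM with a context-free prompt and measuring how well those "recalled" values line up with realized returns, can be sketched as follows. This is an illustrative proxy only: the function name and the choice of a Pearson correlation as the memorization measure are assumptions for exposition, not the paper's exact specification.

```python
# Sketch of a memorization-driven predictability proxy (illustrative, not
# the authors' exact methodology). Inputs: returns an LLM "recalls" when
# prompted without context, and the realized historical returns.
from statistics import mean

def memorization_proxy(recalled, realized):
    """Pearson correlation between LLM-recalled and realized returns.

    A correlation near zero suggests negligible memorization (and hence
    little look-ahead bias); a high correlation suggests the training
    data leaked the outcomes into the model's "memory".
    """
    n = len(recalled)
    mr, mz = mean(recalled), mean(realized)
    cov = sum((a - mr) * (b - mz) for a, b in zip(recalled, realized)) / n
    var_r = sum((a - mr) ** 2 for a in recalled) / n
    var_z = sum((b - mz) ** 2 for b in realized) / n
    if var_r == 0.0 or var_z == 0.0:
        return 0.0  # degenerate series: no measurable memorization signal
    return cov / (var_r ** 0.5 * var_z ** 0.5)
```

In practice one would compute this proxy separately by data frequency, model size, and aggregation level to trace how the bias varies along those dimensions, as the abstract describes.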
Keywords: AI; LLM; Look-ahead bias; Back-testing
Date: 2025
Downloads:
http://www.sciencedirect.com/science/article/pii/S0165176525004392
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:ecolet:v:256:y:2025:i:c:s0165176525004392
DOI: 10.1016/j.econlet.2025.112602
Economics Letters is currently edited by Economics Letters Editorial Office
More articles in Economics Letters from Elsevier