Can AI beat a naive portfolio? An experiment with anonymized data
Marcelo S. Perlin,
Cristian R. Foguesatto,
Fernanda M. Müller and
Marcelo B. Righi
Finance Research Letters, 2025, vol. 78, issue C
Abstract:
Using anonymized data from the United States (U.S.) market, we evaluate the performance of Google's large language model (LLM) Gemini 1.5 Flash in making investment decisions. Unlike other studies, we query the LLM for different investment horizons (1 to 36 months) and types of financial information (financial data, price data, and a combination of both). Running a total of 30,000 simulations for 1,522 companies over 20 years of data, we find that Gemini does not consistently outperform a naive portfolio or the S&P 500 index in terms of returns and Sharpe ratios. Additionally, our findings indicate a decline in risk-adjusted investment performance as the investment horizon extends.
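The abstract's core comparison is between the Sharpe ratio of an LLM-driven selection and that of a naive 1/N portfolio. The following is a minimal Python sketch of such a comparison on simulated monthly returns; the asset count, return distribution, risk-free rate, and the random stand-in for Gemini's picks are illustrative assumptions, not the paper's data or procedure.

```python
# Hypothetical sketch: Sharpe ratio of a naive equal-weight (1/N) portfolio
# versus a portfolio restricted to LLM-selected assets.
# All inputs are simulated and purely illustrative.
import numpy as np

rng = np.random.default_rng(42)

n_assets, n_months = 20, 36               # e.g. a 36-month horizon
monthly_returns = rng.normal(0.008, 0.05, size=(n_months, n_assets))
risk_free = 0.003                          # assumed monthly risk-free rate

def sharpe_ratio(portfolio_returns: np.ndarray, rf: float) -> float:
    """Annualized Sharpe ratio computed from monthly portfolio returns."""
    excess = portfolio_returns - rf
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

# Naive benchmark: equal weight across all assets, rebalanced monthly.
naive_returns = monthly_returns.mean(axis=1)

# Stand-in for the LLM's invest / don't-invest calls: a boolean mask over
# assets (random here; in the study it would come from the model's answers).
llm_picks = rng.random(n_assets) < 0.5
llm_returns = monthly_returns[:, llm_picks].mean(axis=1)

print(f"Naive 1/N Sharpe:  {sharpe_ratio(naive_returns, risk_free):.2f}")
print(f"LLM-picked Sharpe: {sharpe_ratio(llm_returns, risk_free):.2f}")
```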
Keywords: LLM; Generative AI; Artificial intelligence; Gemini; Investments; ChatGPT; Large language models
JEL-codes: G11 G17
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S1544612325003897
Full text for ScienceDirect subscribers only
Persistent link: https://EconPapers.repec.org/RePEc:eee:finlet:v:78:y:2025:i:c:s1544612325003897
DOI: 10.1016/j.frl.2025.107126
Finance Research Letters is currently edited by R. Gençay