Can deep reinforcement learning beat 1/N?
Garvin Kruthof and Sebastian Müller
Finance Research Letters, 2025, vol. 75, issue C
Abstract:
Deep reinforcement learning (DRL) has emerged as a promising tool for portfolio management. However, limited datasets in prior research hinder the generalizability of findings. We conduct a large-scale evaluation of the Soft Actor-Critic (SAC) algorithm across seven diverse equity datasets, spanning over 300 years of out-of-sample data. While SAC demonstrates market timing potential, it does not systematically outperform a 1/N benchmark in a frictionless setting. SAC's high turnover leads to negative net returns under modest transaction costs (0.1%), whereas the 1/N strategy and alternative lower-turnover strategies remain robust. Distribution-based and tail-risk measures do not reveal a consistent advantage for SAC. Our results highlight the practical challenges faced by high-frequency DRL strategies and emphasize the need for future research on cost-aware DRL methods and robust validation protocols.
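As a rough illustration of the turnover mechanism described in the abstract, the sketch below is not taken from the paper; the function name, the simulated return matrix, and the assumption that the 0.1% cost is charged per unit of turnover are illustrative. It computes net period returns after deducting proportional transaction costs and applies this to an equal-weight 1/N rule:

    import numpy as np

    def net_returns(weights, asset_returns, cost_rate=0.001):
        """Net portfolio returns with proportional transaction costs on turnover.

        weights: (T, N) target weights chosen at the start of each period
        asset_returns: (T, N) simple asset returns over each period
        cost_rate: cost per unit of turnover (0.001 = 0.1%)
        """
        T, N = weights.shape
        prev = np.zeros(N)                 # start fully in cash
        out = np.empty(T)
        for t in range(T):
            turnover = np.abs(weights[t] - prev).sum()
            gross = weights[t] @ asset_returns[t]
            out[t] = gross - cost_rate * turnover
            # weights drift with returns and are carried into the next rebalance
            grown = weights[t] * (1.0 + asset_returns[t])
            prev = grown / grown.sum() if grown.sum() != 0 else weights[t]
        return out

    # Illustrative example: 1/N benchmark on simulated data (7 assets, ~10 years daily)
    rng = np.random.default_rng(0)
    rets = rng.normal(0.0004, 0.01, size=(2520, 7))
    one_over_n = np.full_like(rets, 1 / rets.shape[1])   # equal weights each period
    print("1/N mean net return:", net_returns(one_over_n, rets).mean())

Because the 1/N rule trades only enough to undo price drift between rebalances, its turnover term stays small; a policy that reallocates aggressively every period pays the cost rate on a far larger turnover, which is the channel through which modest costs can turn gross outperformance into negative net returns.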
Keywords: Deep reinforcement learning; Portfolio optimization; Diversification; Portfolio management; 1/N
JEL-codes: G11
Date: 2025
Downloads: http://www.sciencedirect.com/science/article/pii/S154461232500131X (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:finlet:v:75:y:2025:i:c:s154461232500131x
DOI: 10.1016/j.frl.2025.106866
Finance Research Letters is currently edited by R. Gençay