Portfolio management based on a reinforcement learning framework
Wu Junfeng, Li Yaoming, Tan Wenqing and Chen Yun
Journal of Forecasting, 2024, vol. 43, issue 7, 2792-2808
Abstract:
Portfolio management is crucial for investors. We propose a dynamic portfolio management framework based on reinforcement learning using the proximal policy optimization algorithm. The two‐part framework consists of a feature extraction network and a fully connected network. First, most previous research on portfolio management based on reinforcement learning has been dedicated to discrete action spaces. We propose a potential solution to the problem of a continuous action space with a constraint (i.e., the portfolio weights sum to 1). Second, we explore different feature extraction networks (i.e., convolutional neural network [CNN], long short‐term memory [LSTM] network, and convolutional LSTM network) combined with our system, and we conduct extensive experiments on six kinds of assets with 16 features. The empirical results show that the CNN performs best on the test set. Last, we discuss the effect of the trading frequency on our trading system and find that the monthly trading frequency yields a higher Sharpe ratio on the test set than other trading frequencies.
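A minimal sketch (PyTorch) of one common way to realize the two-part architecture the abstract describes: a CNN feature extractor followed by a fully connected policy head whose softmax output keeps the continuous action (the portfolio weights) non-negative and summing to 1. The class name, layer sizes, lookback window, and the softmax choice are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only; shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class PortfolioActor(nn.Module):
    def __init__(self, n_assets: int, n_features: int = 16, window: int = 30):
        super().__init__()
        # Feature extraction network: 1D convolutions over the time window.
        self.feature_net = nn.Sequential(
            nn.Conv1d(n_assets * n_features, 32, kernel_size=3),
            nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3),
            nn.ReLU(),
            nn.Flatten(),
        )
        feat_dim = 32 * (window - 4)  # two conv layers with kernel_size=3
        # Fully connected network producing one score per asset.
        self.policy_head = nn.Sequential(
            nn.Linear(feat_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_assets),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, n_assets * n_features, window)
        scores = self.policy_head(self.feature_net(obs))
        # Softmax maps the unconstrained scores onto the probability simplex,
        # so the continuous action (portfolio weights) sums to 1.
        return torch.softmax(scores, dim=-1)

actor = PortfolioActor(n_assets=6)
weights = actor(torch.randn(1, 6 * 16, 30))
print(weights.sum(dim=-1))  # tensor([1.0000])

In a PPO setting, such an actor would typically parameterize a distribution over the simplex (e.g., a Dirichlet) rather than output weights directly; the sketch only illustrates how the sum-to-one constraint can be enforced by construction.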
Date: 2024
Downloads: https://doi.org/10.1002/for.3155
Persistent link: https://EconPapers.repec.org/RePEc:wly:jforec:v:43:y:2024:i:7:p:2792-2808
Journal of Forecasting is currently edited by Derek W. Bunn