Deep Reinforcement Learning for Financial Forecasting in Static and Streaming Cases
Aravilli Atchuta Ram (),
Sandarbh Yadav (),
Yelleti Vivek () and
Vadlamani Ravi
Additional contact information
Aravilli Atchuta Ram: PES University, Bangalore, India
Sandarbh Yadav: ��Centre for Artificial Intelligence and Machine Learning, Institute for Development and Research in Banking Technology, Castle Hills Road #1, Masab Tank, Hyderabad 500076, India
Yelleti Vivek: ��Centre for Artificial Intelligence and Machine Learning, Institute for Development and Research in Banking Technology, Castle Hills Road #1, Masab Tank, Hyderabad 500076, India
Vadlamani Ravi: ��Centre for Artificial Intelligence and Machine Learning, Institute for Development and Research in Banking Technology, Castle Hills Road #1, Masab Tank, Hyderabad 500076, India
Journal of Information & Knowledge Management (JIKM), 2024, vol. 23, issue 06, 1-37
Abstract:
Literature abounds with statistical and machine learning techniques for stock market forecasting. However, Reinforcement Learning (RL) is conspicuous by its absence in this field, little explored despite its potential to address the dynamic and uncertain nature of the stock market. In a first-of-its-kind study, this research bridges this gap by forecasting stock prices with deep RL techniques in both the static and streaming contexts. In the static context, we employed three deep RL algorithms for forecasting stock prices: Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimisation (PPO) and Recurrent Deterministic Policy Gradient (RDPG), and compared their performance with that of the Multi-Layer Perceptron (MLP), Support Vector Regression (SVR) and the General Regression Neural Network (GRNN). In addition, we proposed a generic streaming analytics-based forecasting approach for all six methods, leveraging the real-time processing capabilities of Spark Streaming. This approach employs a sliding window technique for real-time forecasting, or nowcasting, with the above-mentioned algorithms. We demonstrated the effectiveness of the proposed approach on the daily closing prices of four financial time series datasets as well as the Mackey–Glass time series, a benchmark chaotic time series. We evaluated the performance of these methods using three metrics: the Symmetric Mean Absolute Percentage Error (SMAPE), the Directional Symmetry statistic (DS) and Theil's U coefficient. The results are promising for DDPG in the static context, while GRNN turned out to be the best in the streaming context. We performed the Diebold–Mariano (DM) test to assess the statistical significance of the best-performing models.
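The sliding-window nowcasting scheme mentioned in the abstract can be sketched as a simple generator: each step exposes the most recent window of observations, from which a model forecasts the next value as it arrives on the stream. This is an illustrative reconstruction, not the paper's implementation; the function name, window size, and interface are our own assumptions.

```python
from collections import deque

def sliding_windows(stream, window_size):
    # Illustrative sketch (not the paper's code): yield (window, next_value)
    # pairs, where a forecaster would predict next_value from the window of
    # the most recent window_size observations.
    buf = deque(maxlen=window_size)
    for price in stream:
        if len(buf) == window_size:
            yield list(buf), price
        buf.append(price)  # deque drops the oldest observation automatically
```

For example, `sliding_windows([1, 2, 3, 4], 2)` yields `([1, 2], 3)` and then `([2, 3], 4)`; in the streaming setting each yielded pair would correspond to one micro-batch of arriving prices.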
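The three evaluation metrics named in the abstract have standard textbook definitions, sketched below for reference. Note the hedges: Theil's U appears in several variants in the literature, and the U1 (bounded) form used here is an assumption, as is the sign-agreement formulation of the Directional Symmetry statistic; the paper may use different variants.

```python
import numpy as np

def smape(actual, forecast):
    # Symmetric Mean Absolute Percentage Error, in percent.
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f)))

def directional_symmetry(actual, forecast):
    # Percentage of steps where the predicted and actual price changes
    # share the same sign (one common formulation of the DS statistic).
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean((np.diff(a) * np.diff(f)) > 0)

def theils_u(actual, forecast):
    # Theil's U1 coefficient: RMSE of the forecast, normalised so that
    # the result lies in [0, 1] (0 indicates a perfect forecast).
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    num = np.sqrt(np.mean((f - a) ** 2))
    den = np.sqrt(np.mean(a ** 2)) + np.sqrt(np.mean(f ** 2))
    return num / den
```

A perfect forecast gives SMAPE = 0, DS = 100 and U = 0; lower SMAPE and U, and higher DS, indicate better performance.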
Keywords: Stock price forecasting; reinforcement learning; DDPG; PPO; RDPG; GRNN; streaming (search for similar items in EconPapers)
Date: 2024
Downloads: (external link)
http://www.worldscientific.com/doi/abs/10.1142/S0219649224500801
Access to full text is restricted to subscribers
Persistent link: https://EconPapers.repec.org/RePEc:wsi:jikmxx:v:23:y:2024:i:06:n:s0219649224500801
DOI: 10.1142/S0219649224500801
Journal of Information & Knowledge Management (JIKM) is currently edited by Professor Suliman Hawamdeh
More articles in Journal of Information & Knowledge Management (JIKM) from World Scientific Publishing Co. Pte. Ltd.
Bibliographic data for series maintained by Tai Tone Lim ().