On the predictability of long-term stock market returns: Design configuration of deep neural networks

Manfred Herdt and Hermann Schulte-Mattler
Additional contact information
Manfred Herdt: Dortmund University of Applied Sciences and Arts, Emil-Figge-Strasse 44, Germany
Hermann Schulte-Mattler: Professor of Finance, Dortmund University of Applied Sciences and Arts, Germany

Journal of AI, Robotics & Workplace Automation, 2022, vol. 2, issue 1, 70-93

Abstract: In 1998, Robert J. Shiller and John Y. Campbell proposed that long-term stock market returns are not random walks and can be predicted by a valuation measure called the cyclically adjusted price-to-earnings (CAPE) ratio. This paper examines how well deep neural networks can predict long-term stock market returns and traces the impact of the networks' different architectural components. We present three network types, the recurrent neural network (RNN), the long short-term memory (LSTM) neural network and the gated recurrent units (GRU) neural network, to ascertain what impact each network has on predicting long-term stock market returns and whether a parsimonious neural network model (PNNM) can be identified for practical application. These networks have different design features that allow returns to be predicted and the effects of the individual network components to be understood. For our study, we use monthly CAPE ratios and real ten-year annualised excess returns of the S&P 500 from 1881-01 to 2012-06, with data from 1876-06 (real earnings) to 2022-06 (real total return price) needed to construct the two datasets. Our results show improved forecasting accuracy over linear regression for all neural networks analysed. Only a complex trial-and-error procedure, however, yields the network design that minimises the root-mean-squared error (RMSE), an approach usually associated with considerable time and cost. For time series studies of the present type, we therefore propose a parsimonious GRU architecture with low complexity and comparatively low out-of-sample error, which we call ‘GRU-101010’.
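Illustrative sketch (not part of the article): the following Python fragment shows, under stated assumptions, what a parsimonious stacked-GRU forecaster of the kind the abstract describes might look like. It assumes the Keras API, reads ‘GRU-101010’ as three stacked GRU layers of ten units each, and uses an illustrative twelve-month look-back window; the helper names (ten_year_annualised_return, make_windows, build_gru_101010) and all training settings are hypothetical, and random numbers stand in for the Shiller data.

    # A minimal, illustrative sketch, not the authors' code. 'GRU-101010' is
    # interpreted here, as an assumption, as three stacked GRU layers of
    # 10 units each; window length and training settings are illustrative.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models

    WINDOW = 12  # look-back window in months (assumption)

    def ten_year_annualised_return(total_return_price, horizon=120):
        """Annualised real return over the next `horizon` months; subtracting
        a risk-free rate (not shown) would give the excess return."""
        p = np.asarray(total_return_price, dtype=float)
        return (p[horizon:] / p[:-horizon]) ** (12.0 / horizon) - 1.0

    def make_windows(cape, target, window=WINDOW):
        """Slice a monthly CAPE series into (samples, window, 1) inputs
        aligned with scalar forward-return targets."""
        X = [cape[t - window:t] for t in range(window, len(target))]
        y = target[window:len(target)]
        return np.asarray(X)[..., np.newaxis], np.asarray(y)

    def build_gru_101010(window=WINDOW):
        """Three stacked 10-unit GRU layers feeding a linear output head."""
        model = models.Sequential([
            layers.Input(shape=(window, 1)),
            layers.GRU(10, return_sequences=True),
            layers.GRU(10, return_sequences=True),
            layers.GRU(10),
            layers.Dense(1),  # predicted ten-year annualised excess return
        ])
        # RMSE is the error metric reported in the paper; training on MSE
        # and reporting its square root is equivalent.
        model.compile(optimizer="adam", loss="mse",
                      metrics=[tf.keras.metrics.RootMeanSquaredError()])
        return model

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        cape = rng.normal(20.0, 5.0, size=1578)      # stand-in for 1881-01..2012-06
        returns = rng.normal(0.05, 0.03, size=1578)  # stand-in targets
        X, y = make_windows(cape, returns)
        model = build_gru_101010()
        model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
        print("in-sample RMSE:", model.evaluate(X, y, verbose=0)[1])

The ten-year annualised return is computed as (P(t+120)/P(t))^(12/120) − 1 on the real total return price; subtracting the corresponding risk-free return would give the excess-return targets the abstract refers to.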

Keywords: cyclically adjusted price-to-earnings (CAPE) ratio; gated recurrent units (GRU) neural network; long short-term memory (LSTM) neural network; neural network architecture; neural network hyperparameters; recurrent neural network (RNN); time series analysis
JEL-codes: G2 M15
Date: 2022

Downloads: (external link)
https://hstalks.com/article/7363/download/ (application/pdf)
https://hstalks.com/article/7363/ (text/html)
Requires a paid subscription for full access.



Persistent link: https://EconPapers.repec.org/RePEc:aza:airwa0:y:2022:v:2:i:1:p:70-93


More articles in Journal of AI, Robotics & Workplace Automation from Henry Stewart Publications. Bibliographic data for series maintained by Henry Stewart Talks.

 
Handle: RePEc:aza:airwa0:y:2022:v:2:i:1:p:70-93