Recurrent Dictionary Learning for State-Space Models with an Application in Stock Forecasting
Shalini Sharma,
Víctor Elvira,
Emilie Chouzenoux and
Angshul Majumdar
Additional contact information
Shalini Sharma: IIIT-Delhi - Indraprastha Institute of Information Technology [New Delhi]
Víctor Elvira: School of Mathematics - University of Edinburgh - The University of Edinburgh
Emilie Chouzenoux: OPIS - OPtimisation Imagerie et Santé - CVN - Centre de vision numérique - CentraleSupélec - Université Paris-Saclay - Centre Inria de Saclay - Inria - Institut National de Recherche en Informatique et en Automatique
Angshul Majumdar: IIIT-Delhi - Indraprastha Institute of Information Technology [New Delhi]
Post-Print from HAL
Abstract:
In this work, we introduce a new modeling and inferential tool for dynamical processing of time series. The approach is called recurrent dictionary learning (RDL). The proposed model reads as a linear Gaussian Markovian state-space model involving two linear operators, the state evolution and observation matrices, that we assume to be unknown. These two unknown operators (which can be interpreted as dictionaries) and the sequence of hidden states are jointly learnt via an expectation-maximization algorithm. The RDL model combines several advantages, namely online processing, probabilistic inference, and the high model expressiveness typical of neural networks. RDL is particularly well suited for stock forecasting. Its performance is illustrated on two problems: next-day forecasting (a regression problem) and next-day trading (a classification problem), given past stock market observations. Experimental results show that our proposed method outperforms state-of-the-art stock analysis models such as CNN-TA, MFNN, and LSTM.
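The abstract describes a linear Gaussian state-space model whose two operators are learned jointly with the hidden states via expectation-maximization. The sketch below (not the authors' RDL implementation) illustrates the general idea on simulated data: the E-step runs a Kalman filter followed by a Rauch-Tung-Striebel smoother, and a simplified M-step refits the state-evolution and observation matrices by least squares on the smoothed state means, omitting the covariance correction terms of the exact M-step. All variable names and noise settings here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_filter(y, A, H, Q, R, x0, P0):
    """Forward pass: predicted and filtered means/covariances."""
    xs, Ps, xps, Pps = [], [], [], []
    x, P = x0, P0
    for yt in y:
        xp = A @ x                       # predicted mean
        Pp = A @ P @ A.T + Q             # predicted covariance
        S = H @ Pp @ H.T + R             # innovation covariance
        K = Pp @ H.T @ np.linalg.inv(S)  # Kalman gain
        x = xp + K @ (yt - H @ xp)       # filtered mean
        P = Pp - K @ H @ Pp              # filtered covariance
        xps.append(xp); Pps.append(Pp); xs.append(x); Ps.append(P)
    return xs, Ps, xps, Pps

def rts_smoother(A, xs, Ps, xps, Pps):
    """Backward Rauch-Tung-Striebel pass: smoothed means/covariances."""
    n = len(xs)
    xs_s, Ps_s = [None] * n, [None] * n
    xs_s[-1], Ps_s[-1] = xs[-1], Ps[-1]
    for t in range(n - 2, -1, -1):
        G = Ps[t] @ A.T @ np.linalg.inv(Pps[t + 1])
        xs_s[t] = xs[t] + G @ (xs_s[t + 1] - xps[t + 1])
        Ps_s[t] = Ps[t] + G @ (Ps_s[t + 1] - Pps[t + 1]) @ G.T
    return xs_s, Ps_s

# Simulate a toy 2-D state / 2-D observation sequence.
dx, dy, n = 2, 2, 200
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
H_true = np.eye(dy, dx)
Q, R = 0.1 * np.eye(dx), 0.01 * np.eye(dy)
x, obs = np.zeros(dx), []
for _ in range(n):
    x = A_true @ x + rng.multivariate_normal(np.zeros(dx), Q)
    obs.append(H_true @ x + rng.multivariate_normal(np.zeros(dy), R))

# Simplified EM: E-step smooths, M-step refits A and H by least squares
# on the smoothed means (the exact M-step also uses smoothed covariances).
A_hat, H_hat = np.eye(dx), np.eye(dy, dx)
for _ in range(20):
    xs, Ps, xps, Pps = kalman_filter(obs, A_hat, H_hat, Q, R,
                                     np.zeros(dx), np.eye(dx))
    xs_s, _ = rts_smoother(A_hat, xs, Ps, xps, Pps)
    X = np.stack(xs_s)  # smoothed state means, shape (n, dx)
    A_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
    H_hat = np.linalg.lstsq(X, np.stack(obs), rcond=None)[0].T

print(np.round(A_hat, 2))
```

In this toy setting the estimated state-evolution matrix approaches the true one as the smoothed states track the simulated states; the paper's RDL method additionally enforces dictionary-style structure on the learned operators and supports online processing.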
Keywords: Stock forecasting; Recurrent dictionary learning; Kalman filter; Expectation-maximization; Dynamical modeling; Uncertainty quantification (search for similar items in EconPapers)
Date: 2021
New Economics Papers: this item is included in nep-big, nep-cmp, nep-cwa, nep-ecm, nep-ets, nep-for and nep-ore
Note: View the original document on HAL open archive server: https://hal.science/hal-03184841v1
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (1)
Published in Neurocomputing, in press
Downloads: (external link)
https://hal.science/hal-03184841v1/document (application/pdf)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:hal:journl:hal-03184841
Bibliographic data for series maintained by CCSD.