An Improved Reinforcement Learning Model Based on Sentiment Analysis
Yizhuo Li,
Peng Zhou,
Fangyi Li and
Xiao Yang
Papers from arXiv.org
Abstract:
With the development of artificial intelligence technology, quantitative trading systems based on reinforcement learning have emerged in the stock market. The authors combine the deep Q-network (DQN) from reinforcement learning with the sentiment indicator ARBR to build a high-frequency stock trading model. To improve the model's performance, the PCA algorithm is used to reduce the dimensionality of the feature vector while incorporating the influence of market sentiment on the balance of long and short power into the model's state space, and an LSTM layer replaces the fully connected layer to address the traditional DQN's limited capacity for storing experience across time steps. Cumulative return and the Sharpe ratio are used to evaluate the model's performance, with a double-moving-average strategy and other strategies used for comparison. The results show that the proposed improved model far outperforms the comparison models in terms of return, achieving a maximum annualized rate of return of 54.5%, which demonstrates that the approach can significantly improve reinforcement-learning performance in stock trading.
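The abstract itself contains no code; the following is a minimal sketch of the two ingredients it names, assuming the standard 26-period AR/BR definitions and a PyTorch Q-network in which an LSTM layer stands in for the usual fully connected hidden layer. All identifiers here (arbr, LSTMQNetwork, window, hidden) are hypothetical and not taken from the paper.

import pandas as pd
import torch.nn as nn

def arbr(df: pd.DataFrame, window: int = 26) -> pd.DataFrame:
    """Standard AR/BR sentiment indicators over a rolling window.

    AR = 100 * sum(High - Open) / sum(Open - Low)
    BR = 100 * sum(max(0, High - prev Close)) / sum(max(0, prev Close - Low))
    """
    prev_close = df["Close"].shift(1)
    ar = 100 * (df["High"] - df["Open"]).rolling(window).sum() \
             / (df["Open"] - df["Low"]).rolling(window).sum()
    br = 100 * (df["High"] - prev_close).clip(lower=0).rolling(window).sum() \
             / (prev_close - df["Low"]).clip(lower=0).rolling(window).sum()
    return pd.DataFrame({"AR": ar, "BR": br})

class LSTMQNetwork(nn.Module):
    """Q-network where an LSTM replaces the fully connected hidden layer,
    letting the agent carry information across time steps instead of
    relying only on the current state."""
    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, states, hx=None):
        out, hx = self.lstm(states, hx)   # out: (batch, seq, hidden)
        return self.head(out[:, -1]), hx  # Q-values at the last time step

In the paper's pipeline, the ARBR values would be appended to the (PCA-reduced) feature vector before it is fed to the Q-network; a library routine such as sklearn.decomposition.PCA is one plausible way to perform that reduction.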
Date: 2021-11
New Economics Papers: this item is included in nep-big, nep-cmp, nep-fmk and nep-mst
Downloads: http://arxiv.org/pdf/2111.15354 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2111.15354