Using Reinforcement Learning in the Algorithmic Trading Problem

Evgeny Ponomarev, Ivan Oseledets and Andrzej Cichocki

Papers from arXiv.org

Abstract: The development of reinforcement learning methods has extended their application to many areas, including algorithmic trading. In this paper, trading on the stock exchange is framed as a game with the Markov property, consisting of states, actions, and rewards. A system for trading a fixed volume of a financial instrument is proposed and experimentally tested; it is based on the asynchronous advantage actor-critic (A3C) method and uses several neural network architectures. The use of recurrent layers in this approach is investigated. The experiments were performed on real anonymized data. The best architecture produced a trading strategy for RTS Index futures (MOEX:RTSI) with a profitability of 66% per annum, accounting for commission. The project source code is available at the following link: http://github.com/evgps/a3c_trading.
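The abstract's framing of trading as a Markov game of states, actions, and rewards can be sketched as a minimal environment. This is an illustrative toy, not the paper's implementation: the class name, the windowed-return state, the two-action (flat/long) space, and the proportional commission parameter are all assumptions chosen to make the state/action/reward structure concrete.

```python
class TradingEnv:
    """Toy Markov-game view of trading a fixed volume of one instrument.

    State: the last `window` relative price changes plus the current position.
    Action: 0 = go/stay flat, 1 = go/stay long (a fixed volume of one unit).
    Reward: mark-to-market P&L minus a proportional commission on position
    changes, mirroring the "accounting for commission" setup in the abstract.
    All names and parameter values are illustrative, not from the paper.
    """

    def __init__(self, prices, commission=0.0001, window=4):
        self.prices = prices
        self.commission = commission  # proportional fee per position change
        self.window = window
        self.reset()

    def reset(self):
        self.t = self.window
        self.position = 0  # start flat
        return self._state()

    def _state(self):
        # Observation: recent relative price changes plus the position flag.
        changes = [self.prices[i] / self.prices[i - 1] - 1.0
                   for i in range(self.t - self.window + 1, self.t + 1)]
        return changes + [float(self.position)]

    def step(self, action):
        price, next_price = self.prices[self.t], self.prices[self.t + 1]
        fee = self.commission * price if action != self.position else 0.0
        pnl = (next_price - price) if action == 1 else 0.0
        self.position = action
        self.t += 1
        done = self.t >= len(self.prices) - 1
        return self._state(), pnl - fee, done
```

An A3C agent as described in the paper would interact with such an environment through exactly this `reset`/`step` interface, with worker threads collecting (state, action, reward) trajectories asynchronously.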

Date: 2020-02
New Economics Papers: this item is included in nep-big, nep-cmp and nep-fmk
Published in: Journal of Communications Technology and Electronics, 2019, Vol. 64, No. 12, pp. 1450-1457 (ISSN 1064-2269)

Downloads: http://arxiv.org/pdf/2002.11523 Latest version (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2002.11523

