Deep Reinforcement Learning for Trading
Zihao Zhang, Stefan Zohren and Stephen Roberts
Papers from arXiv.org
Abstract:
We adopt Deep Reinforcement Learning algorithms to design trading strategies for continuous futures contracts. Both discrete and continuous action spaces are considered and volatility scaling is incorporated to create reward functions which scale trade positions based on market volatility. We test our algorithms on the 50 most liquid futures contracts from 2011 to 2019, and investigate how performance varies across different asset classes including commodities, equity indices, fixed income and FX markets. We compare our algorithms against classical time series momentum strategies, and show that our method outperforms such baseline models, delivering positive profits despite heavy transaction costs. The experiments show that the proposed algorithms can follow large market trends without changing positions and can also scale down, or hold, through consolidation periods.
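The abstract describes scaling trade positions by market volatility when constructing the reward. Below is a minimal sketch of that idea, not the paper's code: it assumes daily returns, positions in [-1, 1], an annualised volatility target, and proportional transaction costs; all parameter names and values are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' implementation):
# volatility-scaled reward for a trading agent. Assumes daily returns r_t,
# positions a_{t-1} in [-1, 1], an annualised volatility target, and
# proportional transaction costs on changes in the scaled position.
import numpy as np

def volatility_scaled_reward(returns, positions, vol_target=0.15,
                             cost_bp=1e-4, vol_window=60):
    """Reward at t: a_{t-1} * (vol_target / sigma_{t-1}) * r_t
    minus a cost proportional to the change in scaled position."""
    returns = np.asarray(returns, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # Ex-ante volatility from a rolling window, annualised (252 trading days).
    sigma = np.array([
        np.std(returns[max(0, t - vol_window):t]) * np.sqrt(252)
        if t > 1 else vol_target
        for t in range(len(returns))
    ])
    sigma = np.maximum(sigma, 1e-8)          # avoid division by zero
    scaled_pos = positions * vol_target / sigma
    pnl = scaled_pos[:-1] * returns[1:]      # position held over the next period
    costs = cost_bp * np.abs(np.diff(scaled_pos))
    return pnl - costs
```

Under this sketch, a discrete-action agent would emit positions in {-1, 0, 1} and a continuous-action agent any value in [-1, 1]; the volatility scaling then shrinks exposure in turbulent markets and restores it in calm ones, which is also how classical time series momentum baselines size their positions.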
Date: 2019-11
New Economics Papers: this item is included in nep-big, nep-cmp, nep-fmk and nep-mst
Citations: 9 (tracked in EconPapers)
Downloads: http://arxiv.org/pdf/1911.10107 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1911.10107