Agent Inspired Trading Using Recurrent Reinforcement Learning and LSTM Neural Networks
David W. Lu
Papers from arXiv.org
Abstract:
With breakthroughs in computational power and deep neural networks, it has become feasible to explore many areas using techniques that were rigorously researched in the past. In this paper, we walk through possible concepts for achieving robo-like trading or advising. To reach a level of performance and generality similar to that of a human trader, our agents learn for themselves to create successful strategies that lead to human-level long-term rewards. The learning model is implemented with Long Short Term Memory (LSTM) recurrent structures, with Reinforcement Learning or Evolution Strategies acting as agents. The robustness and feasibility of the system are verified on GBPUSD trading.
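To make the "recurrent reinforcement learning" idea concrete, the sketch below illustrates a direct-RL trader in the classic Moody–Saffell style that the abstract's approach builds on: the position is a bounded function of recent price changes and the previous position (the recurrent term), and the reward is the trading return net of transaction costs. This is only an illustration on synthetic data with made-up weights and cost parameter `delta`, not the paper's LSTM implementation.

```python
import numpy as np

def trader_returns(w, prices, delta=0.001):
    """Direct recurrent-RL trader: position F_t = tanh(w . [bias, lagged
    returns, F_{t-1}]); reward R_t is the return net of transaction costs.
    (Illustrative sketch, not the paper's LSTM model.)"""
    r = np.diff(prices)           # one-step price changes
    m = len(w) - 2                # number of lagged returns in the state
    F = np.zeros(len(r) + 1)      # positions in [-1, 1]; F[0] = flat
    R = np.zeros(len(r))
    for t in range(m, len(r)):
        # state: bias term, m lagged price changes, previous position
        x = np.concatenate(([1.0], r[t - m:t], [F[t]]))
        F[t + 1] = np.tanh(np.dot(w, x))
        # profit from the held position minus cost of changing it
        R[t] = F[t] * r[t] - delta * abs(F[t + 1] - F[t])
    return R, F

def sharpe(R):
    """Sharpe-style objective an RL or evolution-strategies agent
    would maximize over the weights w."""
    return R.mean() / (R.std() + 1e-9)

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.0, 1.0, 300)) + 100.0  # synthetic random walk
w = rng.normal(0.0, 0.1, 8)                            # 6 lags + recurrence + bias
R, F = trader_returns(w, prices)
print("Sharpe of untrained weights:", round(float(sharpe(R)), 4))
```

An agent would then adjust `w` (by gradient ascent on the Sharpe ratio, or by an evolution strategy perturbing `w` directly) rather than learning from labeled trades.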
Date: 2017-07
New Economics Papers: this item is included in nep-big and nep-cmp
Citations: 10 (tracked in EconPapers)
Downloads: http://arxiv.org/pdf/1707.07338 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1707.07338