Reinforcement Learning: Prediction, Control and Value Function Approximation
Haoqian Li and Thomas Lau
Papers from arXiv.org
Abstract:
With the increasing power of computers and the rapid development of self-learning methodologies such as machine learning and artificial intelligence, the problem of constructing automatic Financial Trading Systems (FTSs) has become an increasingly attractive research topic. An intuitive way of developing such a trading algorithm is to use Reinforcement Learning (RL) algorithms, which do not require model-building. In this paper, we dive into RL algorithms, illustrate the definitions of the reward function, actions and policy functions in detail, and introduce algorithms that could be applied to FTSs.
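The abstract mentions reward functions, actions and policies, the core ingredients of an RL control algorithm. As a rough illustration of how these pieces fit together, the sketch below runs tabular Q-learning on a toy two-state, two-action MDP. The environment, rewards and hyperparameters are illustrative assumptions made here, not taken from the paper.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative, not from the paper):
# transitions[s, a] -> next state; rewards[s, a] -> immediate reward.
transitions = np.array([[0, 1],
                        [0, 1]])
rewards = np.array([[0.0, 1.0],
                    [2.0, 0.0]])
n_states, n_actions = rewards.shape

alpha, gamma, epsilon = 0.1, 0.9, 0.1  # step size, discount, exploration
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection from the current Q estimates.
    if rng.random() < epsilon:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q[state]))
    next_state = transitions[state, action]
    reward = rewards[state, action]
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print(np.round(Q, 2))
```

For this toy MDP the learned greedy policy alternates between the two states (action 1 in state 0, action 0 in state 1), which is the reward-maximizing cycle.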
Date: 2019-08
New Economics Papers: this item is included in nep-big, nep-cmp and nep-pay
Downloads: http://arxiv.org/pdf/1908.10771 (application/pdf, latest version)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1908.10771