Robust Log-Optimal Strategy with Reinforcement Learning
Yifeng Guo, Xingyu Fu, Yuyan Shi and Mingwen Liu
Papers from arXiv.org
Abstract:
We propose a new portfolio management method termed the Robust Log-Optimal Strategy (RLOS), which improves on the General Log-Optimal Strategy (GLOS) by approximating the traditional objective function with a quadratic Taylor expansion. This avoids GLOS's complex CDF estimation step and thus resists the "butterfly effect" caused by estimation error. RLOS retains GLOS's profitability, and the optimization problem it involves is computationally far more tractable than GLOS's. Further, we combine RLOS with Reinforcement Learning (RL) and propose the Robust Log-Optimal Strategy with Reinforcement Learning (RLOSRL), in which the RL agent receives the analyzed results from RLOS and observes the trading environment to make comprehensive investment decisions. RLOSRL's performance is compared with several traditional strategies in backtests, using randomly chosen constituent stocks of the CSI300 index as assets under management; the test results validate its profitability and stability.
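The quadratic Taylor approximation the abstract describes can be sketched as follows. Expanding log(1 + w·r) to second order around r = 0 gives the surrogate objective w·mu − ½ w'(Sigma + mu mu')w, which can be maximized over long-only, fully invested portfolios without any CDF estimation. This is a minimal illustration under assumed notation, not the paper's exact formulation; the return statistics `mu` and `Sigma` in the toy example are hypothetical.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1} (sort-based method)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def rlos_weights(mu, Sigma, lr=0.5, steps=2000):
    """Maximize the second-order Taylor surrogate of E[log(1 + w.r)]:
    f(w) = w.mu - 0.5 * w'(Sigma + mu mu')w, by projected gradient ascent.
    (A sketch of the quadratic-approximation idea, not the paper's exact solver.)"""
    M = Sigma + np.outer(mu, mu)  # E[r r'] implied by the expansion
    w = np.full(len(mu), 1.0 / len(mu))
    for _ in range(steps):
        grad = mu - M @ w          # gradient of the concave surrogate
        w = project_simplex(w + lr * grad)
    return w

# toy example with hypothetical daily-return statistics
mu = np.array([0.001, 0.0005, 0.0008])
Sigma = np.diag([0.0004, 0.0001, 0.0002])
w = rlos_weights(mu, Sigma)
```

Because the surrogate is a concave quadratic, projected gradient ascent on the simplex converges to the constrained optimum; the full RLOSRL method additionally feeds such analyzed results to an RL agent.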
Date: 2018-05
New Economics Papers: this item is included in nep-cmp and nep-rmg
Citations: 4 (tracked in EconPapers)
Downloads: http://arxiv.org/pdf/1805.00205 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:1805.00205
Bibliographic data for series maintained by arXiv administrators.