The Recurrent Reinforcement Learning Crypto Agent
Gabriel Borrageiro, Nikan Firoozye and Paolo Barucca
Papers from arXiv.org
Abstract:
We demonstrate a novel application of online transfer learning for a digital assets trading agent. This agent uses a powerful feature space representation in the form of an echo state network, the output of which is made available to a direct, recurrent reinforcement learning agent. The agent learns to trade the XBTUSD (Bitcoin versus US Dollars) perpetual swap derivatives contract on BitMEX on an intraday basis. By learning from the multiple sources of impact on the quadratic risk-adjusted utility that it seeks to maximise, the agent avoids excessive over-trading, captures a funding profit, and can predict the market's direction. Overall, our crypto agent realises a total return of 350%, net of transaction costs, over roughly five years, 71% of which is down to funding profit. The annualised information ratio that it achieves is 1.46.
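The pipeline the abstract describes — a fixed random echo state network producing features that feed a direct recurrent trading rule — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the reservoir size, weight scales, and the toy return series are all hypothetical, and the trader weights `w` would in practice be learned by gradient ascent on the risk-adjusted utility rather than left random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual reservoir dimension is not stated here.
n_in, n_res = 1, 50

# Echo state network: fixed random input and reservoir weights, with the
# reservoir's spectral radius scaled below 1 for the echo state property.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

# Direct recurrent trading rule: position F_t = tanh(w . [x_t; F_{t-1}; 1]).
# In direct recurrent reinforcement learning, w is updated online to
# maximise a risk-adjusted utility; here it is random for illustration.
w = rng.uniform(-0.1, 0.1, n_res + 2)

def step(u_t, x_prev, F_prev):
    """One time step: update reservoir state, then emit a position in [-1, 1]."""
    x_t = np.tanh(W_in @ u_t + W @ x_prev)
    F_t = np.tanh(w @ np.concatenate([x_t, [F_prev, 1.0]]))
    return x_t, F_t

# Run the pipeline over a toy intraday return series.
returns = rng.normal(0.0, 0.01, 100)
x, F = np.zeros(n_res), 0.0
positions = []
for r in returns:
    x, F = step(np.array([r]), x, F)
    positions.append(F)
```

The bounded tanh position is what lets the agent scale exposure continuously rather than flip between fixed long/short states, which is one way such a rule can avoid the over-trading the abstract mentions.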
Date: 2022-01, Revised 2022-05
New Economics Papers: this item is included in nep-cmp and nep-pay
Published in IEEE Access, vol. 10, pp. 38590-38599, 2022
Download: http://arxiv.org/pdf/2201.04699 (application/pdf, latest version)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2201.04699