Optimal Market Making by Reinforcement Learning
Matias Selser, Javier Kreiner and Manuel Maurette
Papers from arXiv.org
Abstract:
We apply Reinforcement Learning algorithms to the classic quantitative finance Market Making problem, in which an agent provides liquidity to the market by placing buy and sell orders while maximizing a utility function. The optimal agent must strike a delicate balance between the price risk of her inventory and the profits obtained by capturing the bid-ask spread. We design an environment whose reward function induces the same order relation between policies as the original utility function. When we compare our agents with the optimal solution and a benchmark symmetric agent, we find that the Deep Q-Learning algorithm recovers the optimal agent.
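As a purely illustrative aid (not taken from the paper), the sketch below shows one way such a market-making environment could be coded: a simplified discrete-time market maker whose per-step reward is mark-to-market PnL minus a running inventory penalty, standing in for the inventory-risk term of the utility. All names and parameters (MarketMakingEnv, sigma, fill_k, inv_penalty, the exponential fill model) are assumptions made for illustration, not the authors' specification.

```python
# Minimal sketch, assuming: arithmetic random-walk mid-price, fills whose
# probability decays exponentially with quote distance, and a reward equal to
# the change in mark-to-market wealth minus a quadratic inventory penalty.
import numpy as np

class MarketMakingEnv:
    def __init__(self, sigma=0.01, fill_k=1.5, tick=0.01,
                 inv_penalty=0.01, horizon=200, seed=0):
        self.sigma = sigma            # mid-price volatility per step (assumed)
        self.fill_k = fill_k          # fill-probability decay per tick of distance (assumed)
        self.tick = tick
        self.inv_penalty = inv_penalty  # proxy for the inventory-risk term (assumed)
        self.horizon = horizon
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        self.mid = 100.0
        self.inventory = 0
        self.cash = 0.0
        self.prev_wealth = 0.0
        return self._obs()

    def _obs(self):
        # State: normalized time remaining and current inventory.
        return np.array([1.0 - self.t / self.horizon, self.inventory], dtype=np.float32)

    def step(self, action):
        # action = (bid_offset_ticks, ask_offset_ticks): how far to quote from the mid.
        bid_off, ask_off = action
        bid = self.mid - bid_off * self.tick
        ask = self.mid + ask_off * self.tick

        # Fill probabilities fall off exponentially with distance from the mid.
        if self.rng.random() < np.exp(-self.fill_k * bid_off):
            self.inventory += 1
            self.cash -= bid
        if self.rng.random() < np.exp(-self.fill_k * ask_off):
            self.inventory -= 1
            self.cash += ask

        # Mid-price moves as an arithmetic random walk.
        self.mid += self.sigma * self.rng.standard_normal()
        self.t += 1

        # Reward: change in mark-to-market wealth minus a quadratic inventory penalty.
        wealth = self.cash + self.inventory * self.mid
        reward = (wealth - self.prev_wealth) - self.inv_penalty * self.inventory ** 2
        self.prev_wealth = wealth

        done = self.t >= self.horizon
        return self._obs(), reward, done, {}

# Usage: the symmetric benchmark agent mentioned in the abstract always quotes
# the same offset on both sides; a learned DQN policy could instead skew quotes
# to shed inventory as the horizon approaches.
env = MarketMakingEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, r, done, _ = env.step((1, 1))  # symmetric: one tick on each side
    total += r
```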
Date: 2021-04
New Economics Papers: this item is included in nep-cmp, nep-cwa, nep-mst and nep-upt
Downloads: http://arxiv.org/pdf/2104.04036 (latest version, application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2104.04036