Double Deep Q-Learning for Optimal Execution
Brian Ning, Franco Ho Ting Lin and Sebastian Jaimungal
Applied Mathematical Finance, 2021, vol. 28, issue 4, 361-380
Abstract:
Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution relies on stringent model assumptions and applies continuous-time stochastic control to solve the resulting problem. Here, we instead take a model-free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected neural network trained using experience replay and Double DQN, with input features given by the current state of the limit order book, other trading signals, and the available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most of them, as measured by (i) mean and median outperformance, (ii) probability of outperformance, and (iii) gain-loss ratios.
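The core update the abstract refers to can be sketched briefly. The following is a reader's illustration in PyTorch of a Double DQN step with uniform experience replay; the network architecture, feature dimensions, hyperparameters, and helper names (QNetwork, double_dqn_update, sample_batch) are assumptions made for illustration only and are not taken from the paper.

import random
from collections import deque

import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Fully connected network mapping a state vector (e.g. limit order book
    features, other trading signals, remaining inventory and time) to Q-values
    over the discrete execution actions."""

    def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


def double_dqn_update(online, target, optimizer, batch, gamma=0.999):
    """One gradient step. The online network selects the next action and the
    target network evaluates it; this decoupling is what defines Double DQN."""
    s, a, r, s_next, done = batch                              # done is a 0/1 float mask
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a) for taken actions
    with torch.no_grad():
        next_a = online(s_next).argmax(dim=1, keepdim=True)    # action selection (online net)
        next_q = target(s_next).gather(1, next_a).squeeze(1)   # action evaluation (target net)
        y = r + gamma * (1.0 - done) * next_q                  # bootstrapped target
    loss = nn.functional.smooth_l1_loss(q_sa, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Experience replay: past transitions are stored and sampled uniformly at random,
# breaking the temporal correlation between consecutive order-book states.
replay = deque(maxlen=100_000)

def sample_batch(batch_size: int = 64):
    s, a, r, s_next, done = map(torch.stack, zip(*random.sample(replay, batch_size)))
    return s, a.long(), r, s_next, done

In such a setup the target network's weights would be copied from the online network periodically, and the benchmark comparison described in the abstract would be carried out on the executed prices; those details are outside the scope of this sketch.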
Date: 2021
Citations: 7 (tracked in EconPapers)
Downloads: http://hdl.handle.net/10.1080/1350486X.2022.2077783 (text/html; full text restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:taf:apmtfi:v:28:y:2021:i:4:p:361-380
Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/RAMF20
DOI: 10.1080/1350486X.2022.2077783
Applied Mathematical Finance is currently edited by Professor Ben Hambly and Christoph Reisinger