Reinforcement Learning for Optimal Execution When Liquidity Is Time-Varying
Andrea Macrì and Fabrizio Lillo
Applied Mathematical Finance, 2024, vol. 31, issue 5, 312-342
Abstract:
Optimal execution is an important problem faced by any trader. Most solutions assume constant market impact, whereas liquidity is known to be dynamic. Moreover, models with time-varying liquidity typically assume that it is observable, despite the fact that, in reality, it is latent and hard to measure in real time. In this paper we show that Double Deep Q-learning, a form of Reinforcement Learning based on neural networks, can learn optimal trading policies when liquidity is time-varying. Specifically, we consider an Almgren-Chriss framework with temporary and permanent impact parameters following several deterministic and stochastic dynamics. Using extensive numerical experiments, we show that the trained algorithm learns the optimal policy when the analytical solution is available, and outperforms benchmarks and approximate solutions when it is not.
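As a rough illustration of the setting described in the abstract, the Python sketch below simulates a discrete Almgren-Chriss-style sell programme with time-varying temporary impact and compares the implementation shortfall of a TWAP schedule with a front-loaded one. All parameter values, the liquidity pattern, and the schedules are hypothetical assumptions for illustration only, not the paper's model or results.

```python
import numpy as np

def execution_cost(schedule, S0=100.0, sigma=0.01, theta=None, eta=None, seed=0):
    """Implementation shortfall of a selling schedule (shares sold per step).

    Assumed dynamics (standard discrete Almgren-Chriss form):
      mid-price:       S_k = S_{k-1} - theta_k * v_k + sigma * eps_k   (permanent impact)
      execution price: P_k = S_{k-1} - eta_k * v_k                     (temporary impact)
    """
    rng = np.random.default_rng(seed)
    N = len(schedule)
    theta = np.full(N, 1e-4) if theta is None else theta   # permanent impact per share
    eta = np.full(N, 1e-3) if eta is None else eta          # temporary impact per share
    S, cost = S0, 0.0
    for k, v in enumerate(schedule):
        exec_price = S - eta[k] * v               # temporary impact hits only this trade
        cost += v * (S0 - exec_price)             # shortfall vs. arrival price
        S = S - theta[k] * v + sigma * rng.standard_normal()  # permanent impact + noise
    return cost

X, N = 1_000, 10
twap = np.full(N, X / N)
# Hypothetical time-varying temporary impact: liquidity dips mid-horizon.
eta_t = 1e-3 * (1.0 + 0.5 * np.sin(np.linspace(0, np.pi, N)))
front_loaded = np.linspace(1.5, 0.5, N) * (X / N)   # same total size, traded earlier

print("TWAP cost:        ", execution_cost(twap, eta=eta_t))
print("Front-loaded cost:", execution_cost(front_loaded, eta=eta_t))
```

In the paper's framework the impact parameters are latent and possibly stochastic, so an agent cannot simply plug them into a closed-form schedule; a Double Deep Q-learning agent would instead learn which schedule to follow from repeated interaction with such an environment.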
Date: 2024
Downloads: http://hdl.handle.net/10.1080/1350486X.2025.2490157 (text/html; full-text access is restricted to subscribers)
Persistent link: https://EconPapers.repec.org/RePEc:taf:apmtfi:v:31:y:2024:i:5:p:312-342
Ordering information: This journal article can be ordered from
http://www.tandfonline.com/pricing/journal/RAMF20
DOI: 10.1080/1350486X.2025.2490157