Policy gradient methods for optimal trade execution in limit order books

Michael Giegrich, Roel Oomen and Christoph Reisinger

Journal of Computational Finance

Abstract: We discuss applications of policy gradient methods for the optimal execution of an asset position via limit orders. We study two examples in-depth: a parametric limit order book (LOB) model and a realistic generative adversarial neural network (GAN) LOB model. In the first case, we apply a zeroth-order gradient estimator to a suitable parameterization of candidate policies and propose modifications to lower the variance in the estimate, including conditional sampling and a backward-in-time recursion. In the second case, we adapt a recently published LOB-GAN model to obtain a differentiable map from the parameters to the objective. We then alter a standard policy gradient method with a pathwise gradient estimator to overcome issues with the nonconvexity and roughness of the loss landscape, studying different initializations using inexact dynamic programming and second-order optimization steps, as well as regularization of the learnt policies. In both cases, we are able to learn effective trading strategies.
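
The first example in the abstract applies a zeroth-order gradient estimator to a parameterized execution policy. As a rough illustration of that idea only (not the paper's model or method), the sketch below optimizes a hypothetical softmax-parameterized selling schedule in a toy linear-impact cost model, using a Gaussian-smoothing finite-difference gradient with antithetic perturbations and common random numbers as a simple variance-reduction device; the conditional sampling and backward-in-time recursion described in the abstract are not reproduced. All names, parameters and the cost model are assumptions made for illustration.

import numpy as np

# Hypothetical toy setup: sell q0 shares over T steps; the order book is reduced
# to a linear temporary-impact cost. This is NOT the paper's parametric LOB model,
# only an illustration of the zeroth-order policy gradient estimator it describes.
rng = np.random.default_rng(0)
T, q0, impact, sigma = 10, 1.0, 0.1, 0.05

def execution_cost(theta, noise):
    """Simulated cost of a schedule parameterized by theta (softmax weights)."""
    weights = np.exp(theta) / np.exp(theta).sum()   # fraction of q0 sold per step
    trades = q0 * weights
    prices = 1.0 + sigma * np.cumsum(noise)         # exogenous mid-price path
    # negative revenue plus temporary impact proportional to trade size
    return float(np.sum(trades * (impact * trades - prices)))

def zeroth_order_grad(theta, n_samples=64, mu=0.05):
    """Gaussian-smoothing (zeroth-order) estimate of the gradient of E[cost].

    Antithetic perturbations and common random numbers for the market noise
    serve as a basic variance-reduction device; the paper's conditional
    sampling and backward-in-time recursion are more refined alternatives.
    """
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        u = rng.standard_normal(theta.shape)        # perturbation direction
        noise = rng.standard_normal(T)              # shared across +/- evaluations
        c_plus = execution_cost(theta + mu * u, noise)
        c_minus = execution_cost(theta - mu * u, noise)
        grad += (c_plus - c_minus) / (2.0 * mu) * u
    return grad / n_samples

# Plain stochastic gradient descent on the policy parameters.
theta = np.zeros(T)
for step in range(200):
    theta -= 0.1 * zeroth_order_grad(theta)

print("learned schedule:", np.round(np.exp(theta) / np.exp(theta).sum(), 3))

A pathwise estimator, as in the abstract's second (LOB-GAN) case, would instead backpropagate through a differentiable simulator rather than relying on finite-difference perturbations.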

Downloads: (external link)
https://www.risk.net/journal-of-computational-fina ... in-limit-order-books (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:rsk:journ0:7962850

Handle: RePEc:rsk:journ0:7962850