A Comparison of Reinforcement Learning and Deep Trajectory Based Stochastic Control Agents for Stepwise Mean-Variance Hedging
Ali Fathi and Bernhard Hientzsch
Papers from arXiv.org
Abstract:
We consider two data-driven approaches to hedging, Reinforcement Learning and Deep Trajectory-based Stochastic Optimal Control, under a stepwise mean-variance objective. We compare their performance for a European call option in the presence of transaction costs under discrete trading schedules. The setting is one in which stock prices follow Black-Scholes-Merton dynamics and the "book-keeping" price for the option is given by the Black-Scholes-Merton model with the same parameters. This simulated-data setting provides a "sanitized" lab environment, simple enough that we can conduct a detailed study of the strengths, features, issues, and limitations of the two approaches. The formulation itself, however, is model-free and could accommodate any other setting with available book-keeping prices. We regard this study as a first step toward developing, testing, and validating autonomous hedging agents, and we provide blueprints for such efforts that address various concerns and requirements.
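To make the setting concrete, the sketch below is a minimal, self-contained simulation of the environment the abstract describes: Black-Scholes-Merton stock dynamics, BSM "book-keeping" prices for a European call, proportional transaction costs, and a discrete trading schedule. It is not the authors' code; parameter values, function names, and the baseline delta-hedge policy are illustrative assumptions. A learned RL or stochastic-control agent would replace the hedge policy, trained to trade off the mean against the variance of each step's P&L.

```python
# Minimal sketch (not the authors' code) of the hedging environment described
# in the abstract: BSM stock dynamics, a BSM "book-keeping" call price,
# proportional transaction costs, and a discrete trading schedule.
# All parameter values below are illustrative assumptions.
import numpy as np
from scipy.stats import norm

S0, K, T, sigma, r = 100.0, 100.0, 0.25, 0.2, 0.0  # assumed market parameters
n_steps, cost_rate = 50, 0.001                      # schedule and cost rate

def bsm_call(S, tau):
    """Black-Scholes-Merton call price, used as the book-keeping price."""
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)

def bsm_delta(S, tau):
    """BSM delta, used here as a classical baseline hedge policy."""
    tau = np.maximum(tau, 1e-12)
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def stepwise_pnl(hedge_fn, n_paths=10_000, seed=0):
    """Simulate GBM paths and return per-step hedging P&L increments.

    hedge_fn(S, tau) -> stock position; a trained agent would replace it.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, S0)
    pos = np.zeros(n_paths)
    pnl = np.zeros((n_steps, n_paths))
    for t in range(n_steps):
        tau = T - t * dt
        new_pos = hedge_fn(S, tau)
        cost = cost_rate * S * np.abs(new_pos - pos)  # transaction cost
        S_next = S * np.exp((r - 0.5 * sigma**2) * dt
                            + sigma * np.sqrt(dt) * rng.standard_normal(n_paths))
        # Step P&L: hedge gain minus change in book-keeping price minus costs.
        dV = bsm_call(S_next, tau - dt) - bsm_call(S, tau)
        pnl[t] = new_pos * (S_next - S) - dV - cost
        S, pos = S_next, new_pos
    return pnl

pnl = stepwise_pnl(bsm_delta)
print(f"mean step P&L: {pnl.mean():.5f}, step P&L std: {pnl.std():.5f}")
```

A stepwise mean-variance agent would be trained on the per-step increments pnl[t] rather than only the terminal P&L; a common reward of the form r_t = ΔP_t − (λ/2)(ΔP_t)², with risk-aversion parameter λ, is one such formulation (an assumption here, not taken from this listing).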
Date: 2023-02, Revised 2023-11
New Economics Papers: this item is included in nep-cmp
Citations: 3 (tracked in EconPapers)
Downloads: http://arxiv.org/pdf/2302.07996 (latest version, PDF)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2302.07996