EconPapers    
Solving The Dynamic Volatility Fitting Problem: A Deep Reinforcement Learning Approach

Emmanuel Gnabeyeu, Omar Karkar and Imad Idboufous

Papers from arXiv.org

Abstract: Volatility fitting is one of the core problems in the equity derivatives business. Through a set of deterministic rules, the degrees of freedom in the implied volatility surface encoding (parametrization, density, diffusion) are defined. Whilst very effective, this approach, widespread in the industry, is not natively tailored to learn from shifts in market regimes and discover unsuspected optimal behaviors. In this paper, we change the classical paradigm and apply the latest advances in Deep Reinforcement Learning (DRL) to solve the fitting problem. In particular, we show that variants of Deep Deterministic Policy Gradient (DDPG) and Soft Actor Critic (SAC) can perform at least as well as standard fitting algorithms. Furthermore, we explain why the reinforcement learning framework is appropriate for handling complex objective functions and is natively adapted to online learning.
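As a rough illustration of the framing described in the abstract, the loop below casts volatility fitting as a sequential decision problem: the state is the current surface encoding plus observed market quotes, the action perturbs the encoding's degrees of freedom, and the reward penalizes fitting error. Everything here (the toy quadratic smile, the three-parameter encoding, and the greedy random-search agent standing in for the paper's DDPG/SAC variants) is a hypothetical sketch, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: volatility fitting as an RL-style interaction loop.
# State  : current parametrization + observed market implied vols.
# Action : small adjustment to the parametrization's degrees of freedom.
# Reward : negative fitting error (MSE against a synthetic smile).
# A greedy random-search "agent" stands in for the DDPG/SAC variants
# used in the paper, purely to show the environment formulation.

rng = np.random.default_rng(0)

strikes = np.linspace(0.8, 1.2, 9)               # toy moneyness grid
market_vols = 0.2 + 0.5 * (strikes - 1.0) ** 2   # synthetic smile to fit

def model_vols(params):
    # Toy 3-parameter encoding (level, skew, curvature) standing in
    # for a real implied-volatility surface parametrization.
    a, b, c = params
    return a + b * (strikes - 1.0) + c * (strikes - 1.0) ** 2

def reward(params):
    # Negative mean-squared fitting error; richer objectives
    # (arbitrage penalties, smoothness) would slot in here.
    return -np.mean((model_vols(params) - market_vols) ** 2)

params = np.array([0.3, 0.1, 0.0])   # initial surface parameters
r0 = reward(params)                  # reward before any fitting

for step in range(500):
    action = 0.05 * rng.standard_normal(3)   # exploratory adjustment
    candidate = params + action
    if reward(candidate) > reward(params):   # accept only improvements
        params = candidate
```

A trained actor-critic agent would replace the random proposal with a learned policy conditioned on the state, which is what makes the framework suited to online learning as market regimes shift.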

Date: 2024-10
New Economics Papers: this item is included in nep-big, nep-cmp and nep-rmg

Downloads: (external link)
http://arxiv.org/pdf/2410.11789 Latest version (application/pdf)



Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2410.11789



 
Page updated 2024-12-28
Handle: RePEc:arx:papers:2410.11789