Robust Risk-Aware Reinforcement Learning

Sebastian Jaimungal, Silvana Pesenti, Ye Sheng Wang and Hariom Tatsat

Papers from arXiv.org

Abstract: We present a reinforcement learning (RL) approach for robust optimisation of risk-aware performance criteria. To allow agents to express a wide variety of risk-reward profiles, we assess the value of a policy using rank dependent expected utility (RDEU). RDEU allows the agent to seek gains while simultaneously protecting against downside risk. To robustify optimal policies against model uncertainty, we assess a policy not by its distribution but rather by the worst possible distribution that lies within a Wasserstein ball around it. Thus, our problem formulation may be viewed as an actor/agent choosing a policy (the outer problem) and an adversary then acting to worsen the performance of that strategy (the inner problem). We develop explicit policy gradient formulae for the inner and outer problems, and demonstrate their efficacy on three prototypical financial problems: robust portfolio allocation, optimising a benchmark, and statistical arbitrage.
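For orientation, one standard way to write the two objects named in the abstract (a sketch under common conventions; the symbols u, g, F^pi and epsilon below are illustrative, and the paper's exact definitions may differ): with a utility function u and a probability distortion function g, the RDEU of a random outcome X is the Choquet-style integral

    RDEU(X) = \int_0^{\infty} g\big(\mathbb{P}[u(X) > x]\big)\,\mathrm{d}x + \int_{-\infty}^{0} \Big( g\big(\mathbb{P}[u(X) > x]\big) - 1 \Big)\,\mathrm{d}x,

and the actor/adversary formulation described above is the max-min problem

    \sup_{\pi} \; \inf_{F:\, d_W(F, F^{\pi}) \le \varepsilon} \mathrm{RDEU}(F),

where F^{\pi} is the distribution of the performance criterion under policy \pi, d_W is a Wasserstein distance, and \varepsilon is the radius of the uncertainty ball around F^{\pi}.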

Date: 2021-08, Revised 2021-12
New Economics Papers: this item is included in nep-cmp, nep-isf, nep-rmg and nep-upt

Published in SIAM Journal on Financial Mathematics, forthcoming (2021)

Downloads: http://arxiv.org/pdf/2108.10403 (latest version, PDF)

Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2108.10403

Handle: RePEc:arx:papers:2108.10403