Exploratory Control with Tsallis Entropy for Latent Factor Models
Ryan Donnelly and Sebastian Jaimungal
Papers from arXiv.org
Abstract:
We study optimal control in models with latent factors where the agent controls the distribution over actions, rather than the actions themselves, in both discrete and continuous time. To encourage exploration of the state space, we reward exploration with Tsallis entropy and derive the optimal distribution over states, which we prove is $q$-Gaussian distributed with location characterized through the solution of an FBS$\Delta$E and an FBSDE in discrete and continuous time, respectively. We discuss the relation between the solutions of the optimal exploration problems and the standard dynamic optimal control solution. Finally, we develop the optimal policy in a model-agnostic setting along the lines of soft $Q$-learning. The approach may be applied, for example, to developing more robust statistical arbitrage trading strategies.
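As a small illustration of the exploration reward mentioned in the abstract (not code from the paper), the following sketch computes the Tsallis entropy $S_q(p) = (1 - \sum_i p_i^q)/(q-1)$ of a discrete action distribution; the distribution `p` and the index `q` are hypothetical examples. As $q \to 1$ the quantity recovers the Shannon entropy used in standard (soft) entropy-regularized control.

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p) = (1 - sum_i p_i**q) / (q - 1) of a
    discrete distribution p. As q -> 1 this converges to the Shannon
    entropy -sum_i p_i * log(p_i), which we return directly at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0.0)
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Example: a distribution over three actions (illustrative values only).
p = [0.5, 0.25, 0.25]
print(tsallis_entropy(p, 1.0))     # Shannon entropy in nats
print(tsallis_entropy(p, 1.0001))  # close to the Shannon value
print(tsallis_entropy(p, 2.0))     # q = 2 ("Gini-like") entropy
```

Because the entropy is maximized by more spread-out distributions, adding it to the agent's objective rewards randomized (exploratory) policies; the choice of $q$ governs how heavily the tails are penalized, which is what leads to the $q$-Gaussian form of the optimizer in the paper.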
Date: 2022-11, Revised 2024-01
Published in SIAM J. Financial Mathematics, Forthcoming, 2023
Downloads: http://arxiv.org/pdf/2211.07622 Latest version (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:arx:papers:2211.07622