A Hybrid Deep Reinforcement Learning Method for Insurance Portfolio Management
Xiang Cheng, 
Zhuo Jin, 
Hailiang Yang and 
George Yin
Additional contact information 
Zhuo Jin: Macquarie University
Hailiang Yang: Xi’an Jiaotong-Liverpool University
George Yin: University of Connecticut
Journal of Optimization Theory and Applications, 2026, vol. 208, issue 1, No 34, 42 pages
Abstract:
This paper develops a hybrid deep reinforcement learning approach to managing an insurance portfolio under diffusion models. To address model uncertainty, we adopt the recently developed framework of exploration and exploitation strategies for continuous-time decision making with reinforcement learning. We consider an insurance portfolio management problem in which an entropy-regularized reward function and the corresponding relaxed stochastic controls are formulated. To obtain the optimal relaxed stochastic controls, we develop an iterative deep reinforcement learning algorithm based on Markov chain approximation and stochastic approximation, in which the probability distribution of the optimal stochastic controls is approximated by neural networks. In this hybrid algorithm, both Markov chain approximation and stochastic approximation are used in the learning process: the Markov chain approximation method provides initial guesses, and stochastic approximation is used to estimate the parameters of the neural networks. A convergence analysis of the algorithm is presented, and numerical examples illustrate its performance.
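The following is a minimal, self-contained sketch of the two-stage idea described in the abstract: a coarse Markov chain approximation supplies an initial guess for the control, and a stochastic-approximation (REINFORCE-style policy gradient) update then adjusts the parameters of a Gaussian relaxed policy under an entropy-regularized objective. The dynamics (a Merton-style investment problem), log utility, grids, and the specific update rule are all illustrative assumptions, not the authors' model or implementation, and a two-parameter linear policy stands in for the neural network.

```python
# Hedged sketch: "MCA initial guess + stochastic approximation" for an
# entropy-regularized relaxed control.  All choices below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# --- Illustrative controlled diffusion: dX = (r*X + u*(mu - r)) dt + |u|*sig dW ---
r, mu, sig = 0.02, 0.08, 0.2           # assumed market parameters
T, dt = 1.0, 0.01                       # horizon and Euler step
lam = 0.1                               # entropy-regularization weight
x0 = 1.0

def drift(x, u):  return r * x + u * (mu - r)
def vol(x, u):    return abs(u) * sig
def utility(x):   return np.log(max(x, 1e-6))        # terminal reward

# --- Stage 1: coarse Kushner-style Markov chain approximation for an initial guess ---
def mca_initial_guess():
    h = 0.1                                            # coarse state mesh
    xs = np.arange(0.2, 3.0 + h, h)                    # state grid
    us = np.linspace(0.0, 1.0, 11)                     # control grid
    V = np.array([utility(x) for x in xs])             # terminal values
    best_u = np.zeros(len(xs))
    for _ in range(20):                                # backward value iteration
        V_new = np.empty_like(V)
        for i, x in enumerate(xs):
            vals = []
            for u in us:
                b, s2 = drift(x, u), vol(x, u) ** 2
                Q = s2 + h * abs(b) + 1e-12
                p_up = (s2 / 2 + h * max(b, 0.0)) / Q  # MCA transition probabilities
                p_dn = (s2 / 2 + h * max(-b, 0.0)) / Q
                vu = V[min(i + 1, len(xs) - 1)]
                vd = V[max(i - 1, 0)]
                vals.append(p_up * vu + p_dn * vd + (1 - p_up - p_dn) * V[i])
            k = int(np.argmax(vals))
            V_new[i], best_u[i] = vals[k], us[k]
        V = V_new
    # greedy control near x0 initializes the policy mean
    return best_u[np.argmin(np.abs(xs - x0))]

# --- Stage 2: stochastic approximation on a Gaussian relaxed policy ---
# pi_theta(u | x) = N(theta[0] + theta[1]*x, exp(theta[2])^2); a REINFORCE-style
# update stands in for the paper's scheme.
theta = np.array([mca_initial_guess(), 0.0, np.log(0.3)])

def run_episode(theta):
    x, score, entropy = x0, np.zeros(3), 0.0
    for _ in range(int(T / dt)):
        m, s = theta[0] + theta[1] * x, np.exp(theta[2])
        u = rng.normal(m, s)
        # score function: gradient of log pi(u|x) w.r.t. theta
        score += np.array([(u - m) / s**2,
                           (u - m) / s**2 * x,
                           (u - m) ** 2 / s**2 - 1.0])
        entropy += 0.5 * np.log(2 * np.pi * np.e * s**2) * dt
        x += drift(x, u) * dt + vol(x, u) * np.sqrt(dt) * rng.normal()
        x = max(x, 1e-3)
    G = utility(x) + lam * entropy          # entropy-regularized return
    return G, score

for k in range(2000):
    G, score = run_episode(theta)
    a_k = 0.05 / (1 + 0.01 * k)             # decreasing stochastic-approximation step
    # explicit entropy gradient added separately (the policy std is state-independent)
    theta += a_k * (G * score + lam * np.array([0.0, 0.0, T]))
    theta[2] = np.clip(theta[2], np.log(0.05), np.log(1.0))  # keep exploration bounded

print("learned policy mean at x0:", theta[0] + theta[1] * x0, "std:", np.exp(theta[2]))
```

Running the script prints the learned mean and standard deviation of the Gaussian relaxed control at the initial surplus level; the decreasing step sizes are the standard Robbins-Monro choice for stochastic approximation.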
Keywords: Neural Network; Deep Reinforcement Learning; Markov Chain Approximation; Stochastic Approximation; Insurance Portfolio; 91B06; 91B70; 93E20
Date: 2026
Downloads:
http://link.springer.com/10.1007/s10957-025-02858-3 Abstract (text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:joptap:v:208:y:2026:i:1:d:10.1007_s10957-025-02858-3
Ordering information: This journal article can be ordered from
http://www.springer. ... cs/journal/10957/PS2
DOI: 10.1007/s10957-025-02858-3
Journal of Optimization Theory and Applications is currently edited by Franco Giannessi and David G. Hull