Transfer Reinforcement Learning for Mixed Observability Markov Decision Processes with Time-Varying Interval-Valued Parameters and Its Application in Pandemic Control
Mu Du,
Hongtao Yu and
Nan Kong
Additional contact information
Mu Du: School of Economics and Management, Dalian University of Technology, Dalian 116024, China
Hongtao Yu: School of Economics and Management, Dalian University of Technology, Dalian 116024, China
Nan Kong: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana 47907
INFORMS Journal on Computing, 2025, vol. 37, issue 2, 315-337
Abstract:
We investigate a novel type of online sequential decision problem under uncertainty, namely the mixed observability Markov decision process with time-varying interval-valued parameters (MOMDP-TVIVP). Such data-driven optimization problems with online learning have wide real-world applications (e.g., coordinating surveillance and intervention activities under limited resources for pandemic control). Solving an MOMDP-TVIVP is a great challenge because the unobserved states and time-varying parameters require online system identification and reoptimization based on newly acquired observational data. Moreover, for many practical problems, the action and state spaces are intractably large for online optimization. To address this challenge, we propose a novel transfer reinforcement learning (TRL)-based algorithmic approach that integrates transfer learning (TL) into deep reinforcement learning (DRL) in an offline-online scheme. To accelerate the online reoptimization, we pretrain a collection of promising networks offline and fine-tune them online with newly acquired observational data of the system. The hallmark of our approach is combining the strong approximation ability of neural networks with the high flexibility of TL by efficiently adapting the previously learned policy to changes in system dynamics. A computational study under different uncertainty configurations and problem scales shows that our approach outperforms existing methods in solution optimality, robustness, efficiency, and scalability. We also demonstrate the value of fine-tuning by comparing TRL with DRL: for problem instances with a continuous state-action space of modest dimensionality, TRL with fine-tuning yields at least a 21% solution improvement while spending no more than 0.62% of the pretraining time in each period. A retrospective study of a pandemic control use case in Shanghai, China, shows improved decision making via TRL on several public health metrics. Our approach is the first endeavor to employ intensive neural network training in solving Markov decision processes that require online system identification and reoptimization.
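The offline-online scheme sketched in the abstract (pretrain policy networks offline, then fine-tune them each period on newly observed data) can be illustrated with a minimal code sketch. The snippet below is an assumption-laden illustration, not the authors' implementation: the names PolicyNet, pretrain, and fine_tune are hypothetical, the synthetic tensors stand in for simulated MOMDP-TVIVP trajectories and new observations, and simple supervised regression replaces the actual DRL training.

```python
# Illustrative sketch of offline pretraining followed by online fine-tuning.
# All names and the synthetic data are hypothetical; supervised regression is
# used here only to show the transfer/fine-tuning mechanics.
import torch
import torch.nn as nn


class PolicyNet(nn.Module):
    """Small policy network mapping a belief/state vector to an action vector."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s):
        return self.net(s)


def pretrain(policy, states, targets, epochs=200, lr=1e-3):
    """Offline stage: fit the policy on data generated under sampled parameter
    intervals (replaced here by synthetic tensors)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(states), targets)
        loss.backward()
        opt.step()
    return policy


def fine_tune(policy, new_states, new_targets, steps=20, lr=1e-4):
    """Online stage: a few low-learning-rate gradient steps on newly observed
    data, adapting the pretrained policy to the current system dynamics."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(policy(new_states), new_targets)
        loss.backward()
        opt.step()
    return policy


if __name__ == "__main__":
    torch.manual_seed(0)
    state_dim, action_dim = 8, 3
    policy = PolicyNet(state_dim, action_dim)
    # Synthetic offline data standing in for simulated trajectories.
    s_off, a_off = torch.randn(512, state_dim), torch.randn(512, action_dim)
    pretrain(policy, s_off, a_off)
    # Each decision period: brief fine-tuning on the latest observations.
    s_new, a_new = torch.randn(32, state_dim), torch.randn(32, action_dim)
    fine_tune(policy, s_new, a_new)
```

The design point being illustrated is that the online step reuses the pretrained weights and performs only a small number of inexpensive updates, which is why the per-period adaptation cost can be a tiny fraction of the offline pretraining cost.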
Keywords: online learning and optimization; deep reinforcement learning; transfer learning; MOMDP; time-varying interval-valued parameters
Date: 2025
Downloads: http://dx.doi.org/10.1287/ijoc.2022.0236 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:37:y:2025:i:2:p:315-337