Approximate Dynamic Programming for Military Medical Evacuation Dispatching Policies

Phillip R. Jenkins, Matthew J. Robbins and Brian J. Lunday
Additional contact information
All authors: Department of Operational Sciences, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio 45433

INFORMS Journal on Computing, 2021, vol. 33, issue 1, 2-26

Abstract: Military medical planners must consider how aerial medical evacuation (MEDEVAC) assets will be dispatched when preparing for and supporting high-intensity combat operations. The dispatching authority seeks to dispatch MEDEVAC assets to prioritized requests for service, such that battlefield casualties are effectively and efficiently transported to nearby medical-treatment facilities. We formulate and solve a discounted, infinite-horizon Markov decision process (MDP) model of the MEDEVAC dispatching problem. Because the high dimensionality and uncountable state space of our MDP model render classical dynamic programming solution methods intractable, we instead apply approximate dynamic programming (ADP) solution methods to produce high-quality dispatching policies relative to the currently practiced closest-available dispatching policy. We develop, test, and compare two distinct ADP solution techniques, both of which utilize an approximate policy iteration (API) algorithmic framework. The first algorithm uses least-squares temporal differences (LSTD) learning for policy evaluation, whereas the second algorithm uses neural network (NN) learning. We construct a notional, yet representative planning scenario based on high-intensity combat operations in southern Azerbaijan to demonstrate the applicability of our MDP model and to compare the efficacies of our proposed ADP solution techniques. We generate 30 problem instances via a designed experiment to examine how selected problem features and algorithmic features affect the quality of solutions attained by our ADP policies. Results show that the respective policies determined by the NN-API and LSTD-API algorithms significantly outperform the closest-available benchmark policies in 27 (90%) and 24 (80%) of the problem instances examined. Moreover, the NN-API policies significantly outperform the LSTD-API policies in each of the problem instances examined. Compared with the closest-available policy for the baseline problem instance, the NN-API policy decreases the average response time of important, urgent (i.e., life-threatening) requests by 39 minutes. These research models, methodologies, and results inform the implementation and modification of current and future MEDEVAC tactics, techniques, and procedures, as well as the design and purchase of future aerial MEDEVAC assets.
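To make the LSTD-API idea concrete, the Python sketch below illustrates approximate policy iteration with least-squares temporal differences policy evaluation. It is a minimal sketch only: the state representation, basis functions, action set, reward, and simulator are hypothetical stand-ins, not the paper's MEDEVAC model or the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)
GAMMA = 0.95          # discount factor (assumed value)
N_FEATURES = 8        # length of the basis-function vector (assumed)
ACTIONS = [0, 1, 2]   # e.g., which MEDEVAC unit to dispatch (hypothetical)

def phi(state, action):
    """Hypothetical basis functions over a state-action pair."""
    x = np.zeros(N_FEATURES)
    x[action] = 1.0                      # action indicator features
    x[3:] = np.tanh(state[:5])           # simple nonlinear state features
    return x

def simulate_step(state, action):
    """Stand-in simulator: returns (reward, next_state); not the paper's model."""
    reward = -np.abs(state[action])      # toy penalty for slow response
    next_state = 0.9 * state + rng.normal(0, 0.1, size=state.shape)
    return reward, next_state

def greedy_action(state, w):
    """Policy improvement: pick the action with the highest approximate value."""
    return max(ACTIONS, key=lambda a: phi(state, a) @ w)

def lstd_evaluate(policy_w, n_samples=5000):
    """LSTD policy evaluation: solve A w = b from simulated transitions
    generated while following the current greedy policy."""
    A = 1e-3 * np.eye(N_FEATURES)        # small ridge term for numerical stability
    b = np.zeros(N_FEATURES)
    state = rng.normal(size=5)
    for _ in range(n_samples):
        action = greedy_action(state, policy_w)
        reward, nxt = simulate_step(state, action)
        f = phi(state, action)
        f_next = phi(nxt, greedy_action(nxt, policy_w))
        A += np.outer(f, f - GAMMA * f_next)
        b += reward * f
        state = nxt
    return np.linalg.solve(A, b)

w = np.zeros(N_FEATURES)
for _ in range(10):                      # API outer loop: evaluate, then improve
    w = lstd_evaluate(w)
print("learned weights:", np.round(w, 3))

Replacing lstd_evaluate with a regression step that fits a small neural network to sampled value observations would yield the analogous NN-API variant described in the abstract.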

Keywords: military MEDEVAC; EMS system; Markov decision process; approximate dynamic programming
Date: 2021
Citations: 5 (as tracked in EconPapers)

Downloads: https://doi.org/10.1287/ijoc.2019.0930 (application/pdf)

Persistent link: https://EconPapers.repec.org/RePEc:inm:orijoc:v:33:y:2021:i:1:p:2-26

Handle: RePEc:inm:orijoc:v:33:y:2021:i:1:p:2-26