Approximate dynamic programming for the military aeromedical evacuation dispatching, preemption-rerouting, and redeployment problem

Phillip R. Jenkins, Matthew J. Robbins and Brian J. Lunday

European Journal of Operational Research, 2021, vol. 290, issue 1, 132-143

Abstract: Military medical planners must consider how aeromedical evacuation (MEDEVAC) assets will be utilized when preparing for and supporting combat operations. This research examines the MEDEVAC dispatching, preemption-rerouting, and redeployment (DPR) problem. The intent of this research is to determine high-quality DPR policies that improve the performance of United States Army MEDEVAC systems and ultimately increase the combat casualty survivability rate. A discounted, infinite-horizon Markov decision process (MDP) model of the MEDEVAC DPR problem is formulated and solved via an approximate dynamic programming (ADP) strategy that utilizes a support vector regression value function approximation scheme within an approximate policy iteration algorithmic framework. The objective is to maximize the expected total discounted reward attained by the system. The applicability of the MDP model is examined via a notional, representative planning scenario based on high-intensity combat operations to defend Azerbaijan against a notional aggressor. Computational experimentation is performed to determine how selected problem features and algorithmic features affect the quality of solutions attained by the ADP-generated DPR policies and to assess the efficacy of the proposed solution methodology. The results from the computational experiments indicate the ADP-generated policies significantly outperform the two benchmark policies considered. Moreover, the results reveal that the average service time of high-precedence, time-sensitive requests decreases when an ADP policy is adopted during high-intensity conflicts. As the rate at which requests enter the MEDEVAC system increases, the performance gap between the ADP policy and the first benchmark policy (i.e., the currently practiced, closest-available dispatching policy) increases substantially. Conversely, as the rate at which requests enter the system decreases, the ADP performance improvement over both benchmark policies decreases, indicating the ADP policy provides little-to-no benefit over a myopic approach (e.g., as utilized in the benchmark policies) when the intensity of a conflict is low. Ultimately, this research informs the development and implementation of future tactics, techniques, and procedures for military MEDEVAC operations.
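
As an abstract, the entry names but does not detail the solution strategy. Purely as an illustrative sketch of that general strategy, and not the paper's model, the following Python snippet (assuming numpy and scikit-learn are available) runs approximate policy iteration with a support vector regression (SVR) value function approximation on a small, hypothetical MDP; the state, action, reward, and parameter choices below are invented placeholders and do not represent MEDEVAC operations.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

N_ACTIONS = 2
GAMMA = 0.95   # discount factor for the infinite-horizon objective
HORIZON = 20   # truncation point for simulated discounted returns


def step(state, action):
    """One illustrative transition; returns (next_state, reward)."""
    drift = 0.10 if action == 1 else -0.05
    nxt = float(np.clip(state + drift + rng.normal(scale=0.05), 0.0, 1.0))
    reward = nxt - 0.02 * action  # favor higher "coverage", penalize the costlier action
    return nxt, reward


def features(state):
    """Basis functions fed to the SVR value function approximation."""
    return [state, state ** 2]


def greedy_action(state, vfa):
    """Implicit policy improvement: one-step lookahead against the fitted VFA."""
    best_a, best_q = 0, -np.inf
    for a in range(N_ACTIONS):
        q = 0.0
        for _ in range(5):  # crude Monte Carlo estimate of the expectation
            nxt, r = step(state, a)
            q += r + GAMMA * vfa.predict([features(nxt)])[0]
        if q > best_q:
            best_a, best_q = a, q
    return best_a


def simulated_return(state, vfa):
    """Policy evaluation: discounted return of the current greedy policy from `state`."""
    total, s = 0.0, state
    for t in range(HORIZON):
        s, r = step(s, greedy_action(s, vfa))
        total += (GAMMA ** t) * r
    return total


# Initialize the approximation on zero targets so it can be queried immediately.
init = rng.uniform(0.0, 1.0, size=50)
vfa = SVR(kernel="rbf", C=10.0).fit([features(s) for s in init], np.zeros(50))

# Approximate policy iteration: evaluate the current greedy policy by simulation,
# refit the SVR value function approximation to the sampled returns, and repeat.
for it in range(5):
    states = rng.uniform(0.0, 1.0, size=40)
    targets = [simulated_return(s, vfa) for s in states]
    vfa = SVR(kernel="rbf", C=10.0).fit([features(s) for s in states], targets)
    print(f"iteration {it}: mean sampled return {np.mean(targets):.3f}")

print("greedy action in state 0.3:", greedy_action(0.3, vfa))

The structure mirrors the framework named in the abstract only at a high level: the value function is approximated by an SVR fit to simulated returns of the current greedy policy, and the improved policy is the greedy policy with respect to the refit approximation. The paper's MEDEVAC-specific state space, basis functions, and simulation are not reproduced here.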

Keywords: OR in defense; Approximate dynamic programming; Markov decision process; Support vector regression; Military MEDEVAC
Date: 2021
Citations: 3 (tracked in EconPapers)

Downloads: http://www.sciencedirect.com/science/article/pii/S0377221720306949 (full text for ScienceDirect subscribers only)



Persistent link: https://EconPapers.repec.org/RePEc:eee:ejores:v:290:y:2021:i:1:p:132-143

DOI: 10.1016/j.ejor.2020.08.004


European Journal of Operational Research is currently edited by Roman Slowinski, Jesus Artalejo, Jean-Charles Billaut, Robert Dyson and Lorenzo Peccati


 
Handle: RePEc:eee:ejores:v:290:y:2021:i:1:p:132-143