Improving defensive air battle management by solving a stochastic dynamic assignment problem via approximate dynamic programming
Joseph M. Liles, Matthew J. Robbins and Brian J. Lunday
European Journal of Operational Research, 2023, vol. 305, issue 3, 1435-1449
Abstract:
Military air battle managers face several challenges when directing operations during quickly evolving combat scenarios. These scenarios require rapid assignment decisions to engage moving targets having dynamic flight paths. In defensive operations, the success of a sequence of air battle management decisions is reflected by the friendly force’s ability to maintain air superiority and defend friendly assets. We develop a Markov decision process (MDP) model of a stochastic dynamic assignment problem, named the Air Battle Management Problem (ABMP), wherein a set of unmanned combat aerial vehicles (UCAV) must defend an asset from cruise missiles arriving stochastically over time. Attaining an exact solution using traditional dynamic programming techniques is computationally intractable. Hence, we utilize an approximate dynamic programming (ADP) technique known as approximate policy iteration with least squares temporal differences (API-LSTD) learning to find high-quality solutions to the ABMP. We create a simulation environment in conjunction with a generic yet representative combat scenario to illustrate how the ADP solution compares in quality to a reasonable, closest-intercept benchmark policy. Our API-LSTD policy improves mean success rate by 2.8% compared to the benchmark policy and offers an 81.7% increase in the frequency with which the policy performs perfectly. Moreover, we find the increased success rate of the ADP policy is, on average, equivalent to the success rate attained by the benchmark policy when using a 20% faster UCAV. These results inform military force management and defense acquisition decisions and aid in the development of more effective tactics, techniques, and procedures.
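The abstract names approximate policy iteration with least squares temporal differences (API-LSTD) as the solution technique. The following is a minimal, generic sketch of API-LSTD on an assumed toy Markov decision process; the state space, features, rewards, and sampling scheme are illustrative assumptions and are not the paper's ABMP formulation.

```python
# Minimal API-LSTD sketch on an assumed toy MDP (not the paper's ABMP model).
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, GAMMA = 20, 3, 0.95

# Hypothetical toy dynamics: random transition kernel and reward table.
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_ACTIONS))
R = rng.normal(size=(N_STATES, N_ACTIONS))

def phi(s, a):
    """One-hot state-action basis functions (an assumed feature choice)."""
    v = np.zeros(N_STATES * N_ACTIONS)
    v[s * N_ACTIONS + a] = 1.0
    return v

def lstd_q(policy, n_samples=5000):
    """LSTD evaluation of the state-action value function of `policy`."""
    k = N_STATES * N_ACTIONS
    A, b = np.zeros((k, k)), np.zeros(k)
    for _ in range(n_samples):
        s = rng.integers(N_STATES)
        a = rng.integers(N_ACTIONS)          # exploratory state-action sampling
        s_next = rng.choice(N_STATES, p=P[s, a])
        a_next = policy[s_next]              # next action follows the evaluated policy
        f, f_next = phi(s, a), phi(s_next, a_next)
        A += np.outer(f, f - GAMMA * f_next)
        b += f * R[s, a]
    return np.linalg.lstsq(A, b, rcond=None)[0]

def greedy(w):
    """Policy improvement: act greedily with respect to the fitted Q weights."""
    q = np.array([[w @ phi(s, a) for a in range(N_ACTIONS)]
                  for s in range(N_STATES)])
    return q.argmax(axis=1)

policy = np.zeros(N_STATES, dtype=int)       # arbitrary initial policy
for _ in range(10):                          # approximate policy iteration loop
    new_policy = greedy(lstd_q(policy))
    if np.array_equal(new_policy, policy):
        break
    policy = new_policy
print("converged policy:", policy)
```

In the paper's setting, the basis functions would instead encode features of the evolving air battle state (e.g., UCAV and cruise missile positions), and samples would come from the authors' simulation environment rather than a tabulated transition kernel.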
Keywords: OR in defense; Air battle management; Dynamic assignment problem; Markov decision process; Approximate dynamic programming
Date: 2023
Downloads: http://www.sciencedirect.com/science/article/pii/S0377221722005069 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:ejores:v:305:y:2023:i:3:p:1435-1449
DOI: 10.1016/j.ejor.2022.06.031
European Journal of Operational Research is currently edited by Roman Slowinski, Jesus Artalejo, Jean-Charles Billaut, Robert Dyson and Lorenzo Peccati
More articles in European Journal of Operational Research from Elsevier
Bibliographic data for series maintained by Catherine Liu.