
Reverse Thinking Approach to Deceptive Path Planning Problems

Dejun Chen, Quanjun Yin and Kai Xu
Additional contact information
Dejun Chen: College of Systems Engineering, National University of Defense Technology, Changsha 410073, China
Quanjun Yin: College of Systems Engineering, National University of Defense Technology, Changsha 410073, China
Kai Xu: College of Systems Engineering, National University of Defense Technology, Changsha 410073, China

Mathematics, 2024, vol. 12, issue 16, 1-21

Abstract: Deceptive path planning (DPP) aims to find routes that reduce the chances of observers discovering the real goal before it is reached, which is essential for public safety, strategic path planning, and preserving the confidentiality of logistics routes. Currently, no single metric comprehensively evaluates the performance of deceptive paths. This paper introduces two new metrics, termed “Average Deception Degree” (ADD) and “Average Deception Intensity” (ADI), to measure the overall performance of a path. Unlike traditional methods that plan paths from the start point to the endpoint, we propose a reverse planning approach in which paths are planned from the endpoint back to the start point; inverting such a path yields a feasible DPP solution. Based on this concept, we extend the existing πd1~4 method into a new approach, e_πd1~4, and introduce two novel methods, Endpoint DPP_Q and LDP DPP_Q, based on the existing DPP_Q method. Experimental results demonstrate that e_πd1~4 achieves significant improvements over πd1~4 (an overall average improvement of 8.07%). Furthermore, Endpoint DPP_Q and LDP DPP_Q effectively address the local-optima issue encountered by DPP_Q. Specifically, in scenarios where the real and false goals have distinctive distributions, Endpoint DPP_Q and LDP DPP_Q show notable enhancements over DPP_Q (approximately a 2.71% improvement in batch experiments on 10 × 10 maps). Finally, tests on larger maps from Moving-AI demonstrate that these improvements become more pronounced as map size increases. The introduction of ADD, ADI and the three new methods significantly expands the applicability of πd1~4 and DPP_Q to more complex scenarios.
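
The following is a minimal Python sketch of the reverse-planning idea described in the abstract, assuming a 4-connected grid world. The A* planner, the distance-based observer model, and the simplified per-step deception score below are illustrative placeholders only; they do not reproduce the paper's πd1~4, e_πd1~4, DPP_Q definitions or the exact formulations of ADD and ADI.

```python
# Sketch: plan from the real goal back to the start, then invert the path,
# and score it with a placeholder "average deception degree".
from heapq import heappush, heappop


def astar(grid, start, goal):
    """Shortest path on a 0/1 occupancy grid (1 = obstacle) via A*."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier, came_from, g = [(h(start), start)], {start: None}, {start: 0}
    while frontier:
        _, cur = heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:          # walk parents back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0
                    and g[cur] + 1 < g.get(nxt, float("inf"))):
                g[nxt] = g[cur] + 1
                came_from[nxt] = cur
                heappush(frontier, (g[nxt] + h(nxt), nxt))
    return None


def reverse_plan(grid, start, real_goal):
    """Plan from the endpoint back to the start, then invert the result.

    Inverting an endpoint-to-start path yields a feasible start-to-endpoint
    path, which is the core observation behind the reverse-thinking approach.
    """
    backward = astar(grid, real_goal, start)
    return backward[::-1] if backward else None


def average_deception_degree(path, real_goal, false_goals):
    """Hypothetical per-step deception score averaged along the path.

    A step counts as deceptive when some false goal looks at least as close
    as the real goal under a naive Manhattan-distance observer model. This is
    only a stand-in for the ADD metric introduced in the paper.
    """
    dist = lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1])
    scores = [
        1.0 if min(dist(p, f) for f in false_goals) <= dist(p, real_goal) else 0.0
        for p in path
    ]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    grid = [[0] * 5 for _ in range(5)]                      # empty 5 x 5 map
    path = reverse_plan(grid, start=(0, 0), real_goal=(4, 4))
    print(path)
    print(average_deception_degree(path, real_goal=(4, 4), false_goals=[(0, 4)]))
```

In this toy setting the inverted backward path is simply a feasible forward path; the paper's methods additionally shape the path so that, for as long as possible, an observer's goal-recognition model favors the false goals over the real one.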

Keywords: deception; deceptive path planning; goal recognition; count-based reinforcement learning
JEL-codes: C
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2227-7390/12/16/2540/pdf (application/pdf)
https://www.mdpi.com/2227-7390/12/16/2540/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jmathe:v:12:y:2024:i:16:p:2540-:d:1458332


Mathematics is currently edited by Ms. Emma He

More articles in Mathematics from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jmathe:v:12:y:2024:i:16:p:2540-:d:1458332