The flying sidekick traveling salesman problem with stochastic travel time: A reinforcement learning approach
Zeyu Liu, Xueping Li and Anahita Khojandi
Transportation Research Part E: Logistics and Transportation Review, 2022, vol. 164, issue C
Abstract:
As a novel urban delivery approach, the coordinated operation of a truck–drone pair has gained increasing popularity: the truck follows a traveling salesman route while the drone launches from the truck to deliver packages to nearby customers. Previous studies have referred to this problem as the flying sidekick traveling salesman problem (FSTSP) and have proposed numerous algorithms to solve it. However, few studies have considered the stochasticity of travel times on the road network, caused mainly by traffic congestion and harsh weather conditions, which heavily affects the truck's speed and, in turn, the drone's operations and the overall delivery schedule. In this study, we extend the FSTSP with stochastic travel times and formulate the problem as a Markov decision process (MDP). The model is solved using reinforcement learning (RL) algorithms, namely the deep Q-network (DQN) and the advantage actor-critic (A2C) algorithm, to overcome the curse of dimensionality. Using an artificially generated dataset widely accepted as a benchmark in the literature, we show that the RL algorithms perform well as approximate optimization algorithms, outperforming a mixed integer programming (MIP) model and a local search heuristic on the original FSTSP without stochastic travel times. On the FSTSP with stochastic travel times, the RL algorithms obtain flexible policies that make dynamic decisions based on current traffic conditions, saving up to 28.65% in delivery time compared with the MIP model and a dynamic local search (DLS) algorithm. We also conduct a case study using real-time traffic data, collected via the Google Maps API, for a mid-sized city in the U.S. Compared with a benchmark computed by the DLS, the deep RL approach saves 32.68% of the total delivery time in the case study, showing great potential for practical adoption.
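The abstract describes solving the MDP with DQN and A2C. As an illustration only, the following is a minimal PyTorch sketch of the DQN ingredient: an epsilon-greedy policy over a discrete action set (e.g., the truck's next stop or a drone launch target) and one replay-buffer update step. The state encoding, the sizes STATE_DIM and N_ACTIONS, and the synthetic transitions are hypothetical placeholders, not the paper's actual formulation; feasibility masking and the stochastic travel-time environment are omitted.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# Hypothetical dimensions: the state could encode truck/drone positions and
# per-customer delivery status; actions could be the next truck stop or a
# drone launch target. Values are illustrative, not from the paper.
STATE_DIM, N_ACTIONS = 32, 12

class QNetwork(nn.Module):
    """Small MLP mapping a state vector to one Q-value per action."""
    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

q_net = QNetwork(STATE_DIM, N_ACTIONS)
target_net = QNetwork(STATE_DIM, N_ACTIONS)
target_net.load_state_dict(q_net.state_dict())  # periodic re-sync omitted
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay: deque = deque(maxlen=10_000)  # (state, action, reward, next_state, done)
GAMMA = 0.99  # discount factor on (negative) travel-time rewards

def act(state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action selection (no feasibility masking here)."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax().item())

def update(batch_size: int = 64) -> None:
    """One DQN gradient step on a random minibatch from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s = torch.stack([t[0] for t in batch])
    a = torch.tensor([t[1] for t in batch])
    r = torch.tensor([t[2] for t in batch], dtype=torch.float32)
    s2 = torch.stack([t[3] for t in batch])
    done = torch.tensor([t[4] for t in batch], dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():  # bootstrapped TD target from the target network
        target = r + GAMMA * (1.0 - done) * target_net(s2).max(1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Illustrative usage with synthetic transitions; a real environment with
# stochastic travel times would supply these instead.
for _ in range(200):
    s = torch.randn(STATE_DIM)
    a = act(s, epsilon=0.1)
    replay.append((s, a, -random.random(), torch.randn(STATE_DIM), False))
update()
```

In the paper's setting, the reward would correspond to negative incremental delivery time, infeasible actions (e.g., launching an already airborne drone) would be excluded, and the target network would be synchronized with the online network periodically.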
Keywords: Drone; Dynamic vehicle routing problem; Traveling salesman problem; Markov decision process; Deep reinforcement learning; Artificial neural network
Date: 2022
Full text (ScienceDirect subscribers only): http://www.sciencedirect.com/science/article/pii/S1366554522002034
Persistent link: https://EconPapers.repec.org/RePEc:eee:transe:v:164:y:2022:i:c:s1366554522002034
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/600244/bibliographic
DOI: 10.1016/j.tre.2022.102816
Transportation Research Part E: Logistics and Transportation Review is currently edited by W. Talley