Dynamic Vehicle Allocation Policies for Shared Autonomous Electric Fleets
Yuxuan Dong,
René De Koster,
Debjit Roy and
Yugang Yu
Additional contact information
Yuxuan Dong: Sino-US Global Logistics Institute, Antai College of Economics and Management, Shanghai Jiao Tong University, Shanghai 200030, China; Anhui Province Key Laboratory of Contemporary Logistics and Supply Chain, School of Management, University of Science and Technology of China, Hefei 230026, China
René De Koster: Rotterdam School of Management, Erasmus University, 3062 PA Rotterdam, Netherlands
Debjit Roy: Indian Institute of Management Ahmedabad, Ahmedabad, 380015 Gujarat, India
Yugang Yu: Anhui Province Key Laboratory of Contemporary Logistics and Supply Chain & International Institute of Finance, School of Management, University of Science and Technology of China, Hefei 230026, China
Transportation Science, 2022, vol. 56, issue 5, 1238-1258
Abstract:
In the future, vehicle sharing platforms for passenger transport will be unmanned, autonomous, and electric. These platforms must decide which vehicle should pick up which type of customer based on the vehicle’s battery level and the customer’s travel distance. We design dynamic vehicle allocation policies for matching appropriate vehicles to customers using a Markov decision process model. To obtain the model parameters, we first model the system as a semi-open queuing network (SOQN) with multiple synchronization stations. At these stations, customers with varied battery demands are matched with semi-shared vehicles that hold sufficient remaining battery levels. If a vehicle’s battery level drops below a threshold, it is routed probabilistically to a nearby charging station for charging. We solve the analytical model of the SOQN and obtain approximate system performance measures, which are validated using simulation. With inputs from the SOQN model, the Markov decision process minimizes both customer waiting cost and lost demand and finds a good heuristic vehicle allocation policy. The experiments show that the heuristic policy is near optimal in small-scale networks and outperforms benchmark policies in large-scale realistic scenarios. An interesting finding is that reserving idle vehicles to wait for future short-distance customer arrivals can be beneficial even when long-distance customers are waiting.
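The battery-threshold matching idea in the abstract can be illustrated with a minimal sketch. All names, the greedy matching rule, and the 0.2 charging threshold below are illustrative assumptions for exposition, not the authors' actual policy (which is derived from the SOQN and Markov decision process models):

```python
# Illustrative sketch: match waiting customers to idle vehicles whose
# remaining battery covers the trip plus a reserve, and flag low-battery
# vehicles for charging. Hypothetical names and values throughout.
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: int
    battery: float  # remaining battery as a fraction of full charge

@dataclass
class Customer:
    cid: int
    demand: float  # battery fraction the requested trip would consume

CHARGE_THRESHOLD = 0.2  # assumed reserve level; below it, route to charging

def allocate(vehicles, customers):
    """Greedily serve long-distance customers first, assigning each the
    idle vehicle with the smallest sufficient battery so that
    high-battery vehicles stay available for later long trips."""
    assignments = []
    idle = sorted(vehicles, key=lambda v: v.battery)  # ascending battery
    for c in sorted(customers, key=lambda c: c.demand, reverse=True):
        match = next((v for v in idle
                      if v.battery - c.demand >= CHARGE_THRESHOLD), None)
        if match is not None:
            assignments.append((c.cid, match.vid))
            idle.remove(match)
    # Unassigned vehicles below the threshold are sent to charge.
    to_charge = [v.vid for v in idle if v.battery < CHARGE_THRESHOLD]
    return assignments, to_charge
```

Note that this greedy rule would never hold a vehicle back for a future short-distance arrival while a feasible long-distance customer waits; the paper's finding is precisely that such reservation can be beneficial, which is what the Markov decision process captures and a myopic rule like this one misses.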
Keywords: autonomous electric vehicle sharing; queuing network; vehicle allocation; Markov decision process
Date: 2022
Downloads: http://dx.doi.org/10.1287/trsc.2021.1115 (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:inm:ortrsc:v:56:y:2022:i:5:p:1238-1258
More articles in Transportation Science from INFORMS.
Bibliographic data for series maintained by Chris Asher.