Multiagent Q-Learning Approach for the Recharging Scheduling of Electric Automated Guided Vehicles in Container Terminals

Chenhao Zhou, Aloisius Stephen, Kok Choon Tan, Ek Peng Chew and Loo Hay Lee
Additional contact information
Chenhao Zhou: School of Management, Northwestern Polytechnical University, Xi’an 710072, China; Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore 117576
Aloisius Stephen: Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore 117576
Kok Choon Tan: Department of Analytics & Operations, National University of Singapore, Singapore 119245
Ek Peng Chew: Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore 117576
Loo Hay Lee: Department of Industrial Systems Engineering & Management, National University of Singapore, Singapore 117576

Transportation Science, 2024, vol. 58, issue 3, 664-683

Abstract: In recent years, advancements in battery technology have led to increased adoption of electric automated guided vehicles in container terminals. Because these vehicles are critical to terminal operations, this trend calls for efficient recharging scheduling; the main challenges arise from limited charging station capacity and tight vehicle schedules. Motivated by the dynamic nature of the problem, the recharging scheduling problem for an entire vehicle fleet with capacitated stations is formulated as a Markov decision process model. It is then solved using a multiagent Q-learning (MAQL) approach to produce a recharging schedule that minimizes job delay. Numerical experiments show that, under stochastic vehicle travel times, MAQL explores better schedules by coordinating across the entire vehicle fleet and the charging facilities and outperforms various benchmark approaches, with an additional improvement of 18.8% on average over the best rule-based heuristic and 5.4% over the predetermined approach.
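The abstract describes each vehicle as a learning agent that decides when and where to recharge under station capacity limits, with rewards tied to job delay. The following is a minimal, self-contained sketch of that general idea using independent tabular Q-learning per agent; the state discretization, reward shape, and all parameters (N_AGVS, STATION_CAPACITY, the toy transition function, etc.) are illustrative assumptions and not the authors' formulation in the paper.

```python
# Hypothetical sketch: independent tabular Q-learning agents (one per AGV)
# choosing between working and recharging at a capacitated station.
# Reward is the negative delay incurred in each step (assumed shape).
import random
from collections import defaultdict

N_AGVS = 4            # fleet size (assumed)
N_STATIONS = 2        # charging stations (assumed)
STATION_CAPACITY = 1  # chargers per station (assumed)
ACTIONS = ["work"] + [f"charge_{s}" for s in range(N_STATIONS)]

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

# One Q-table per agent; the state is a coarse (battery_level, jobs_left) pair.
q_tables = [defaultdict(float) for _ in range(N_AGVS)]

def choose_action(agent, state):
    """Epsilon-greedy action selection over the agent's own Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_tables[agent][(state, a)])

def step(states, actions):
    """Toy joint transition: returns next states and per-agent rewards."""
    load = defaultdict(int)
    for a in actions:                      # count AGVs heading to each station
        if a.startswith("charge_"):
            load[a] += 1
    next_states, rewards = [], []
    for (battery, jobs), action in zip(states, actions):
        if action == "work":
            delay = 0 if battery > 0 else 3        # an empty battery stalls the job
            battery = max(battery - 1, 0)
            jobs = max(jobs - 1, 0)
        else:
            blocked = load[action] > STATION_CAPACITY
            delay = 2 if blocked else 1            # queuing at a full station costs more
            battery = min(battery + (0 if blocked else 2), 5)
        next_states.append((battery, jobs))
        rewards.append(-delay)
    return next_states, rewards

def train(episodes=2000, horizon=30):
    for _ in range(episodes):
        states = [(5, 5) for _ in range(N_AGVS)]   # full battery, 5 pending jobs
        for _ in range(horizon):
            actions = [choose_action(i, s) for i, s in enumerate(states)]
            next_states, rewards = step(states, actions)
            for i in range(N_AGVS):
                # Standard Q-learning update, applied independently by each agent.
                best_next = max(q_tables[i][(next_states[i], a)] for a in ACTIONS)
                key = (states[i], actions[i])
                q_tables[i][key] += ALPHA * (rewards[i] + GAMMA * best_next - q_tables[i][key])
            states = next_states

if __name__ == "__main__":
    train()
    # Inspect the greedy recharging decision learned by agent 0 at low battery.
    print(max(ACTIONS, key=lambda a: q_tables[0][((1, 3), a)]))
```

Coordination in this sketch emerges only implicitly through the shared station-capacity penalty; the paper's MAQL approach coordinates the fleet and charging facilities more directly within its Markov decision process model.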

Keywords: recharging scheduling; multiagent Q-learning; automated guided vehicle
Date: 2024

Downloads: http://dx.doi.org/10.1287/trsc.2022.0113 (application/pdf)


Persistent link: https://EconPapers.repec.org/RePEc:inm:ortrsc:v:58:y:2024:i:3:p:664-683

More articles in Transportation Science from INFORMS.
Bibliographic data for this series maintained by Chris Asher.

 
Handle: RePEc:inm:ortrsc:v:58:y:2024:i:3:p:664-683