
Adaptive rescheduling of rail transit services with short-turnings under disruptions via a multi-agent deep reinforcement learning approach

Chengshuo Ying, Andy H.F. Chow, Yimo Yan, Yong-Hong Kuo and Shouyang Wang

Transportation Research Part B: Methodological, 2024, vol. 188, issue C

Abstract: This paper presents a novel multi-agent deep reinforcement learning (MADRL) approach for real-time rescheduling of rail transit services with short-turnings during a complete track blockage on a double-track service corridor. The optimization problem is modeled as a Markov decision process with multiple control agents rescheduling train services on each directional line for system recovery. To ensure computational efficacy, we employ a multi-agent policy optimization solution framework in which each control agent employs a decentralized policy function for deriving local decisions and a centralized value function approximation (VFA) estimating global system state values. Both the policy functions and VFAs are represented by multi-layer artificial neural networks (ANNs). A multi-agent proximal policy optimization gradient algorithm is developed for training the policies and VFAs through iterative simulated system transitions. The proposed framework is implemented and tested with real-world scenarios with data collected from London Underground, UK. Computational results demonstrate the superiority of the developed framework in computational effectiveness compared with previous distributed control algorithms and conventional metaheuristic methods. We also provide managerial implications for train rescheduling during disruptions with different durations, locations, and passenger behaviors. Additional experiments show the scalability of the proposed MADRL framework in managing disruptions with uncertain durations with a generalized model. This study contributes to real-time rail transit management with innovative control and optimization techniques.
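
The architecture described above pairs a decentralized policy function per directional line with a centralized value function approximation over the global system state, trained by a multi-agent proximal policy optimization algorithm. The sketch below, in PyTorch, is only a minimal illustration of that general pattern (decentralized actors, one shared critic, a PPO clipped surrogate loss); it is not the authors' implementation, and every name, dimension, and hyperparameter in it (Actor, CentralCritic, obs_dim, state_dim, n_actions, clip_eps) is a hypothetical placeholder.

```python
# Illustrative sketch only: decentralized actors with a centralized critic,
# updated with a PPO-style clipped objective. Not the authors' code; all
# dimensions, names, and hyperparameters are hypothetical.

import torch
import torch.nn as nn
from torch.distributions import Categorical


class Actor(nn.Module):
    """Decentralized policy: maps a local observation of one directional line
    to a distribution over rescheduling actions (e.g. hold / dispatch / short-turn)."""
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, obs):
        return Categorical(logits=self.net(obs))


class CentralCritic(nn.Module):
    """Centralized value function approximation: scores the global system state
    shared by all control agents during training."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state):
        return self.net(state).squeeze(-1)


def ppo_update(actors, critic, optimizer, batch, clip_eps=0.2, value_coef=0.5):
    """One update step: clipped surrogate loss summed over agents plus a shared
    centralized value loss. `batch` holds per-agent observations, actions and
    old log-probabilities, the global state, advantages, and returns."""
    policy_loss = 0.0
    for i, actor in enumerate(actors):
        dist = actor(batch["obs"][i])
        logp = dist.log_prob(batch["actions"][i])
        ratio = torch.exp(logp - batch["old_logp"][i])
        adv = batch["advantages"]
        clipped = torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv
        policy_loss = policy_loss - torch.min(ratio * adv, clipped).mean()

    value_loss = (critic(batch["state"]) - batch["returns"]).pow(2).mean()
    loss = policy_loss + value_coef * value_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy usage: two directional-line agents and random transition data.
    obs_dim, state_dim, n_actions, T = 8, 16, 3, 32
    actors = [Actor(obs_dim, n_actions) for _ in range(2)]
    critic = CentralCritic(state_dim)
    params = [p for a in actors for p in a.parameters()] + list(critic.parameters())
    optimizer = torch.optim.Adam(params, lr=3e-4)

    batch = {
        "obs": [torch.randn(T, obs_dim) for _ in range(2)],
        "actions": [torch.randint(0, n_actions, (T,)) for _ in range(2)],
        "old_logp": [torch.randn(T) for _ in range(2)],
        "state": torch.randn(T, state_dim),
        "advantages": torch.randn(T),
        "returns": torch.randn(T),
    }
    print("loss:", ppo_update(actors, critic, optimizer, batch))
```

In this arrangement the critic sees the global state only during training, while each actor acts on its own line's local observations, which corresponds to the usual centralized-training, decentralized-execution setup behind multi-agent PPO methods.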

Keywords: Train rescheduling; Short-turning; Markov decision process; Multi-agent deep reinforcement learning; Proximal policy optimization
Date: 2024

Downloads: http://www.sciencedirect.com/science/article/pii/S0191261524001917 (full text for ScienceDirect subscribers only)

Persistent link: https://EconPapers.repec.org/RePEc:eee:transb:v:188:y:2024:i:c:s0191261524001917

Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional

DOI: 10.1016/j.trb.2024.103067

Transportation Research Part B: Methodological is currently edited by Fred Mannering

More articles in Transportation Research Part B: Methodological from Elsevier
Bibliographic data for series maintained by Catherine Liu.

 
Handle: RePEc:eee:transb:v:188:y:2024:i:c:s0191261524001917