Enhanced intelligent train operation algorithms for metro train based on expert system and deep reinforcement learning
Yunhu Huang, Wenzhu Lai, Dewang Chen, Geng Lin and Jiateng Yin
PLOS ONE, 2025, vol. 20, issue 5, 1-27
Abstract:
In recent decades, automatic train operation (ATO) systems have been gradually adopted by many metro systems, primarily due to their cost-effectiveness and practicality. However, a critical examination reveals limitations in computational efficiency, adaptability to unforeseen conditions, and multi-objective balancing that our research aims to address. In this paper, expert knowledge is combined with a deep reinforcement learning algorithm (Proximal Policy Optimization, PPO), and two enhanced intelligent train operation (EITO) algorithms are proposed. The first algorithm, EITOE, is based on an expert system containing expert rules and a heuristic expert inference method. Building on EITOE, we propose the EITOP algorithm, which uses PPO to optimize multiple objectives through the design of reinforcement learning strategies, rewards, and value functions. We also develop a double minimal-time distribution (DMTD) calculation method in the EITO implementation to achieve longer coasting distances and further reduce energy consumption. Compared with previous works, EITO controls continuous train operation without reference to offline speed profiles and optimizes several key performance indicators online. Finally, we conducted comparative tests of manual driving, existing intelligent driving algorithms (ITOR, STON), and the algorithms proposed in this paper (EITO) using real line data from the Yizhuang Line of Beijing Metro (YLBS). The test results show that the EITO algorithms outperform the current intelligent driving algorithms and manual driving in terms of energy consumption and passenger comfort. In addition, we further validated the robustness of EITO on YLBS sections with complex speed limits, gradients, and different running times. Overall, the EITOP algorithm achieves the best performance.
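The abstract describes a PPO-based controller whose reward balances energy consumption, passenger comfort, and punctuality. The exact formulation is not given on this page; the sketch below is a minimal, hypothetical Python illustration of how such a per-step multi-objective reward might be composed, with all weights, term definitions, and function names assumed for illustration rather than taken from the paper.

```python
# Hypothetical per-step reward combining the three objectives named in the
# abstract: energy consumption, passenger comfort (jerk), and punctuality.
# All weights and term definitions below are illustrative assumptions, not
# the paper's actual formulation.
def step_reward(traction_power_kw, accel_now, accel_prev, dt_s,
                time_remaining_s, dist_remaining_m, v_now,
                w_energy=1.0, w_comfort=0.5, w_time=0.5):
    # Energy term: penalize traction energy drawn during this control step (kWh).
    energy_kwh = max(traction_power_kw, 0.0) * dt_s / 3600.0
    r_energy = -w_energy * energy_kwh

    # Comfort term: penalize jerk (rate of change of acceleration, m/s^3).
    jerk = abs(accel_now - accel_prev) / dt_s
    r_comfort = -w_comfort * jerk

    # Punctuality term: penalize the gap between the average speed needed to
    # arrive on time and the current speed.
    v_required = dist_remaining_m / max(time_remaining_s, 1e-3)
    r_time = -w_time * abs(v_required - v_now)

    return r_energy + r_comfort + r_time


# Example: one 1-second step of a train drawing 800 kW while easing acceleration.
print(step_reward(traction_power_kw=800.0, accel_now=0.4, accel_prev=0.6,
                  dt_s=1.0, time_remaining_s=90.0, dist_remaining_m=1200.0,
                  v_now=12.0))
```

In a PPO setup, a reward of this shape would be returned by the environment at each control step, so the policy can trade off coasting (low energy) against schedule adherence and smooth acceleration profiles.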
Date: 2025
Downloads: (external link)
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0323478 (text/html)
https://journals.plos.org/plosone/article/file?id= ... 23478&type=printable (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:plo:pone00:0323478
DOI: 10.1371/journal.pone.0323478