Deep reinforcement learning for dynamic incident-responsive traffic information dissemination
Jiaohong Xie,
Zhenyu Yang,
Xiongfei Lai,
Yang Liu,
Xiao Bo Yang,
Teck-Hou Teng and
Chen-Khong Tham
Transportation Research Part E: Logistics and Transportation Review, 2022, vol. 166, issue C
Abstract:
This study is concerned with the optimal dynamic information dissemination (DID) problem in a transportation network disrupted by traffic incidents. Optimizing system performance with DID after road incidents is challenging because of the uncertainty in traffic flow variation and travelers’ heterogeneous responses to information. To address the problem, we consider a traffic manager who aims to improve system performance by dynamically generating and disseminating information to road users in the period after an incident occurs. We develop a decision tool for the traffic manager, based on double deep Q-learning (DDQL), that searches for an optimal DID strategy. The decision tool is integrated with traffic sensors that collect traffic data in real time. Through advanced traveler information systems, the DID system dynamically sends out various types of information to users according to the current and anticipated traffic states, so as to minimize congestion and enhance road network capacity. In particular, the proposed DDQL method uses a double deep Q-network (DQN) structure to learn the state–action values. To test and evaluate the performance of the decision tool, we develop a microscopic simulation model of a real road network in the Serangoon area of Singapore in PTV VISSIM and calibrate the model with real historical traffic data. We train and compare the DDQL controller with different reward signals, including the weighted sum of the average speed and queue delay, total traffic flow, and average travel time. Numerical experiments demonstrate the effectiveness of the proposed DDQL-based DID approach in reducing congestion and improving other performance metrics of the expressway. The robustness and generalizability of the DDQL agent are also verified by evaluating the algorithm's performance in environments with different traffic demand patterns and driving behavior profiles.
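The double DQN update at the core of the DDQL method can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tiny linear "networks," the state and action dimensions, and all names are hypothetical stand-ins. The key idea shown is the decoupling that distinguishes double DQN from plain DQN: the online network selects the next action, while the target network evaluates it, which reduces overestimation of state–action values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a few traffic-state features and a few
# candidate information messages (actions) the manager can disseminate.
N_STATE, N_ACTION = 4, 3
W_online = rng.normal(size=(N_STATE, N_ACTION))  # online network weights
W_target = rng.normal(size=(N_STATE, N_ACTION))  # target network weights

def q_values(W, state):
    """Linear stand-in for a deep Q-network: Q(s, .) = s @ W."""
    return state @ W

def double_dqn_target(reward, next_state, gamma=0.99, done=False):
    """Double DQN target: the online net picks the greedy next action,
    the target net supplies its value estimate."""
    if done:
        return reward
    a_star = int(np.argmax(q_values(W_online, next_state)))       # action selection
    return reward + gamma * q_values(W_target, next_state)[a_star]  # action evaluation
```

In training, the squared difference between this target and the online network's prediction `q_values(W_online, state)[action]` would be minimized, with `W_target` periodically copied from `W_online`.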
Keywords: Dynamic information dissemination; Deep reinforcement learning; Traffic congestion management; Traffic simulation; Intelligent transportation system; Traffic incident response
Date: 2022
Citations: 1 (in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/S1366554522002514 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:transe:v:166:y:2022:i:c:s1366554522002514
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/600244/bibliographic
DOI: 10.1016/j.tre.2022.102871
Transportation Research Part E: Logistics and Transportation Review is currently edited by W. Talley
Bibliographic data for this series is maintained by Catherine Liu.