Spatial-Temporal Flows-Adaptive Street Layout Control Using Reinforcement Learning
Qiming Ye, Yuxiang Feng, Eduardo Candela, Jose Escribano Macias, Marc Stettler and Panagiotis Angeloudis
Additional contact information
All authors: Department of Civil and Environmental Engineering, Imperial College London, London SW7 2AZ, UK
Sustainability, 2021, vol. 14, issue 1, 1-22
Abstract:
The complete streets scheme makes significant contributions to securing the basic public right-of-way (ROW), improving road safety, and maintaining high traffic efficiency for all modes of commute. However, this popular street design paradigm also faces endogenous pressures, such as calls for a more balanced ROW for non-vehicular users. In addition, the deployment of Autonomous Vehicle (AV) mobility is likely to challenge both the conventional use of street space and this scheme. Previous studies have developed automated control techniques for specific road management issues, such as traffic light control and lane management, but models and algorithms that dynamically calibrate the ROW of road space according to travel demand and place-making requirements remain a research gap. This study proposes a novel optimal control method that allocates road space between driveways and sidewalks in real time. To solve this optimal control task, a reinforcement learning method is introduced that employs a microscopic traffic simulator, SUMO, as its environment. The model was trained for 150 episodes using a four-legged intersection and one day of joint AV-pedestrian travel demand. Results demonstrated the effectiveness of the model in both symmetric and asymmetric road settings. After 150 training episodes, the proposed model significantly increased its composite reward, which combines pedestrian and vehicular traffic efficiency with the sidewalk ratio, by 10.39%. ROW decisions were optimised towards a better balance: 90.16% of the edges decreased their driveway supply and raised sidewalk shares by approximately 9%. Moreover, during 18.22% of the tested time slots, a lane-width-equivalent space was shifted from driveways to sidewalks, minimising travel costs for both an AV fleet and pedestrians. Our study primarily contributes modelling architecture and algorithms for centralised, real-time ROW management.
Prospective applications of this method could facilitate AV mobility-oriented road management and pedestrian-friendly street space design in the near future.
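The control loop the abstract describes, an agent that repeatedly reallocates edge width between driveway and sidewalk and is rewarded on the joint travel efficiency of vehicles and pedestrians, can be sketched as follows. This is an illustrative toy, not the paper's implementation: a simple hill-climbing update stands in for the DDPG agent, a closed-form cost model stands in for the SUMO simulation, and all class names, coefficients, and bounds are assumptions.

```python
import random


class ToyStreetEnv:
    """Toy stand-in for the SUMO environment: a single edge whose
    fixed total width is split between a driveway and a sidewalk.
    Vehicle travel cost falls as driveway width grows; pedestrian
    cost falls as sidewalk width grows. Coefficients are illustrative."""

    def __init__(self, total_width=10.0, veh_demand=1.0, ped_demand=1.0):
        self.total_width = total_width
        self.veh_demand = veh_demand
        self.ped_demand = ped_demand

    def step(self, sidewalk_ratio):
        sidewalk = self.total_width * sidewalk_ratio
        driveway = self.total_width - sidewalk
        veh_cost = self.veh_demand / max(driveway, 0.1)
        ped_cost = self.ped_demand / max(sidewalk, 0.1)
        # Reward trades off both travel costs (higher is better).
        return -(veh_cost + ped_cost)


def optimise_ratio(env, episodes=150, step_size=0.05, seed=0):
    """Hill-climbing placeholder for the paper's DDPG agent: perturb
    the sidewalk ratio and keep any change that raises the reward."""
    rng = random.Random(seed)
    ratio = 0.3  # initial sidewalk share (assumed)
    best_reward = env.step(ratio)
    for _ in range(episodes):
        candidate = min(0.9, max(0.1, ratio + rng.uniform(-step_size, step_size)))
        reward = env.step(candidate)
        if reward > best_reward:
            ratio, best_reward = candidate, reward
    return ratio, best_reward


env = ToyStreetEnv(veh_demand=1.0, ped_demand=1.0)
ratio, reward = optimise_ratio(env)
```

With equal vehicle and pedestrian demand, the reward is maximised near an even split, so the learned sidewalk share drifts from its 0.3 start towards roughly 0.5; the paper's DDPG agent performs the analogous search over per-edge ROW ratios against SUMO-simulated travel costs.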
Keywords: intelligent road infrastructure; Intelligent Transport System; reinforcement learning; Deep Deterministic Policy Gradient (DDPG); urban planning; street design; Autonomous Vehicles
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2021
Downloads:
https://www.mdpi.com/2071-1050/14/1/107/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/1/107/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2021:i:1:p:107-:d:709235
Sustainability is currently edited by Ms. Alexandra Wu
More articles in Sustainability from MDPI