A Q-Learning-Based Approximate Solving Algorithm for Vehicular Route Game

Le Zhang, Lijing Lyu, Shanshui Zheng, Li Ding and Lang Xu
Additional contact information
Le Zhang: School of Transport and Logistics, Guangzhou Railway Polytechnic, Guangzhou 510430, China
Lijing Lyu: School of Management, Guangzhou Huali Science and Technology Vocational College, Guangzhou 511325, China
Shanshui Zheng: School of Transport and Logistics, Guangzhou Railway Polytechnic, Guangzhou 510430, China
Li Ding: School of Physics and Optoelectronics, South China University of Technology, Guangzhou 510630, China
Lang Xu: School of Transport and Communications, Shanghai Maritime University, Shanghai 201306, China

Sustainability, 2022, vol. 14, issue 19, 1-14

Abstract: The route game is recognized as an effective method for alleviating Braess’ paradox, in which new traffic congestion arises because numerous vehicles obey the same guidance from a selfish route-guidance service (such as Google Maps). Conventional route games are symmetric: a vehicle’s payoff depends only on the distribution of selected routes, not on which vehicles chose them, so the exact Nash equilibrium can be solved by constructing a special potential function. However, with the arrival of smart cities, engineers are more concerned with obtaining route schemes in real time than with their absolute optimality in real traffic, and re-constructing the potential functions of route games under dynamic traffic conditions is not an easy task. In this paper, in contrast to the hard-to-solve potential-function-based exact method, a matched Q-learning algorithm is designed to generate an approximate Nash equilibrium of the classic route game for real-time traffic. An experimental study shows that the Nash equilibrium coefficients generated by the Q-learning-based approximate solving algorithm all converge to 1.00, and convergence is retained under different traffic parameters.
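
For illustration only, below is a minimal sketch of the kind of independent, stateless Q-learning on a congestion-style route game that the abstract describes. The latency functions, parameters, and the unilateral-deviation check are illustrative assumptions, not the paper's specification; in particular, the paper's Nash equilibrium coefficient is not reproduced here, and the final check is only a simple stand-in for the same no-profitable-deviation idea.

import random

# Minimal sketch (assumed setup, not the paper's model): N vehicles repeatedly
# choose among parallel routes; travel time on a route grows linearly with its
# load. Each vehicle runs stateless (single-state) Q-learning, so the update
# has no discounted next-state term.

N_VEHICLES = 20
ROUTES = [0, 1, 2]                  # three hypothetical parallel routes
FREE_FLOW = [10.0, 12.0, 15.0]      # assumed free-flow travel times
SLOPE = [1.0, 0.8, 0.5]             # assumed congestion slopes
ALPHA, EPSILON = 0.1, 0.1           # learning rate, exploration rate

def travel_time(route, load):
    """Assumed linear latency: free-flow time plus a load-dependent term."""
    return FREE_FLOW[route] + SLOPE[route] * load

Q = [[0.0] * len(ROUTES) for _ in range(N_VEHICLES)]

for episode in range(5000):
    # Every vehicle picks a route epsilon-greedily from its own Q-table.
    choices = [random.choice(ROUTES) if random.random() < EPSILON
               else max(ROUTES, key=lambda r: Q[v][r])
               for v in range(N_VEHICLES)]
    loads = [choices.count(r) for r in ROUTES]
    for v in range(N_VEHICLES):
        r = choices[v]
        reward = -travel_time(r, loads[r])       # payoff = negative travel time
        Q[v][r] += ALPHA * (reward - Q[v][r])    # stateless Q-update

# Crude equilibrium check: a route profile is approximately Nash when no
# single vehicle can shorten its travel time by unilaterally switching.
greedy = [max(ROUTES, key=lambda r: Q[v][r]) for v in range(N_VEHICLES)]
loads = [greedy.count(r) for r in ROUTES]
worst_gain = 0.0
for v in range(N_VEHICLES):
    current = travel_time(greedy[v], loads[greedy[v]])
    best_alternative = min(
        travel_time(r, loads[r] + (0 if r == greedy[v] else 1))
        for r in ROUTES)
    worst_gain = max(worst_gain, current - best_alternative)
print("max unilateral improvement:", worst_gain)  # near 0 suggests approximate NE

A symmetric congestion game of this form always admits a pure Nash equilibrium, which is why the potential-function approach mentioned in the abstract works when traffic conditions are static; the learning loop above trades that exactness for updates that can track changing parameters.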

Keywords: traffic congestion; Braess’ paradox; route game; Q-learning; approximate Nash equilibrium
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2022
Citations: 1

Downloads: (external link)
https://www.mdpi.com/2071-1050/14/19/12033/pdf (application/pdf)
https://www.mdpi.com/2071-1050/14/19/12033/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:14:y:2022:i:19:p:12033-:d:922987

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Handle: RePEc:gam:jsusta:v:14:y:2022:i:19:p:12033-:d:922987