
Deep reinforcement learning with a particle dynamics environment applied to emergency evacuation of a room with obstacles

Yihao Zhang, Zhaojie Chai and George Lykotrafitis

Physica A: Statistical Mechanics and its Applications, 2021, vol. 571, issue C

Abstract: Efficient emergency evacuation is crucial for survival. A very successful model for simulating emergency evacuation is the social-force model. At the heart of the model is the self-driven force that is applied to an agent and is directed towards the exit. However, it is not clear whether applying this force results in optimal evacuation, especially in complex environments with obstacles. In this paper, we develop a deep reinforcement learning algorithm in association with the social-force model to train agents to find the fastest evacuation path. During training, we penalize every step an agent takes in the room and give zero reward at the exit. We adopt the Dyna-Q learning approach, which combines the model-free Q-learning algorithm with model-based reinforcement learning, to update a deep neural network that approximates the action-value functions. We first show that, in a room without obstacles, the resulting self-driven force points directly towards the exit, as in the social-force model. To quantitatively validate our method, we compare the total time elapsed when agents escape a room with one door and no obstacles under the Dyna-Q model against the corresponding result from the social-force model, and find that the median exit time intervals of the two methods are not significantly different. We confirm that the proposed method yields trajectories that minimize travel time by comparing our results to those generated by geodesics-based adaptive pedestrian dynamics. We then investigate evacuation of a room with one obstacle and one exit. Our method produces results similar to those of the social-force model when the obstacle is convex. However, for concave obstacles, which can act as traps for agents governed purely by the social-force model and prevent complete room evacuation, our approach is clearly advantageous: it derives a policy that achieves obstacle avoidance and complete room evacuation without additional assumptions. We also study evacuation of a room with multiple exits and show that agents evacuate efficiently through the nearest exit using a shared network trained for a single agent. Finally, we test the robustness of the Dyna-Q learning approach in a complex environment with multiple exits and obstacles. Overall, we show that our model, based on the Dyna-Q reinforcement learning approach, can efficiently handle emergency evacuation in complex environments with multiple room exits and obstacles, where an intuitive rule for fast evacuation is difficult to obtain.
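To make the training setup concrete, here is a minimal sketch of the Dyna-Q loop described above: every step inside the room is penalized, reaching the exit yields zero reward, and each real transition is followed by several model-based planning updates drawn from a learned transition model. This is an illustrative reconstruction, not the paper's code: the discretized grid room, the start and exit cells, the -1 step penalty, and all hyperparameters are assumptions, and a tabular Q-table stands in for the paper's deep neural network.

```python
# Minimal Dyna-Q sketch for the evacuation setup described in the abstract.
# Assumptions (not from the paper): a discretized 10x10 grid room, a -1
# penalty per step, zero reward at the exit, and a tabular Q in place of
# the paper's deep network. Hyperparameters are illustrative.
import random
from collections import defaultdict

GRID_W, GRID_H = 10, 10
EXIT = (9, 5)                                 # exit cell (assumed)
ACTIONS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # right, left, up, down

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
PLANNING_STEPS = 20                 # model-based updates per real step

Q = defaultdict(float)              # Q[(state, action)]
model = {}                          # model[(state, action)] = (r, s', done)

def step(state, action):
    """Environment: move within the room; zero reward only at the exit."""
    nx = min(max(state[0] + action[0], 0), GRID_W - 1)
    ny = min(max(state[1] + action[1], 0), GRID_H - 1)
    nxt = (nx, ny)
    if nxt == EXIT:
        return 0.0, nxt, True       # zero reward at the exit, episode ends
    return -1.0, nxt, False         # every step inside the room is penalized

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def choose(state):
    return random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)

def q_update(s, a, r, s2, done):
    target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += ALPHA * (target - Q[(s, a)])

for episode in range(500):
    s = (0, 5)                      # starting position (assumed)
    done = False
    while not done:
        a = choose(s)
        r, s2, done = step(s, a)
        q_update(s, a, r, s2, done)           # model-free Q-learning update
        model[(s, a)] = (r, s2, done)         # learn the transition model
        for _ in range(PLANNING_STEPS):       # model-based planning (Dyna)
            (ps, pa), (pr, ps2, pdone) = random.choice(list(model.items()))
            q_update(ps, pa, pr, ps2, pdone)
        s = s2
```

After training, the greedy action at each state plays the role of the social-force model's self-driven force direction: following it from any cell traces the learned fastest evacuation path, including around obstacles if they are added to the environment.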

Keywords: Dyna-Q learning; Particle dynamics simulation; Social-force model; Pedestrian dynamics
Date: 2021
Citations: 3 citing works tracked in EconPapers

Downloads (external link): http://www.sciencedirect.com/science/article/pii/S0378437121001175
Full text is available to ScienceDirect subscribers only; the journal offers an open-access option on ScienceDirect for a fee of $3,000.

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:eee:phsmap:v:571:y:2021:i:c:s0378437121001175

DOI: 10.1016/j.physa.2021.125845


Physica A: Statistical Mechanics and its Applications is currently edited by K. A. Dawson, J. O. Indekeu, H. E. Stanley and C. Tsallis

More articles in Physica A: Statistical Mechanics and its Applications from Elsevier
Bibliographic data for this series is maintained by Catherine Liu.

 
Handle: RePEc:eee:phsmap:v:571:y:2021:i:c:s0378437121001175