Hybrid multi-agent deep reinforcement learning for multi-type mobile resources dispatching under transportation and power network recovery
Shaohua Sun,
Gengfeng Li,
Zhaohong Bie,
Dingmao Zhang and
Yuxiong Huang
Applied Energy, 2025, vol. 399, issue C, No S0306261925011535
Abstract:
Rainstorm waterlogging and typhoons can cause severe failures in the power network (PN) and also disrupt normal traffic in the transportation network (TN). Equipment faults in the PN interrupt the power supply of critical loads, while the interruption of the TN severely limits the flexible transfer of mobile resources for recovery of the transportation and power networks (TPN). Previous work addresses the dispatching of multi-type mobile resources (MMRs) for power network recovery only under the assumption of an intact TN, which makes the resulting dispatching strategies impractical. To fill this gap, this paper proposes a dispatching model of MMRs for collaborative recovery of the TPN, embedding the dispatching behaviors of road repair crews (RRCs) into road repair constraints. To solve the model, road-island and topology update strategies are first introduced to simplify shortest-path searching for MMR routing. The dispatching model of MMRs is then formulated as a parameterized-action Markov decision process, in which MMRs are modeled as different types of intelligent agents with distinct discrete-continuous dispatching characteristics. Finally, a hybrid multi-agent deep reinforcement learning (HMADRL) method with a master-slave architecture is developed to improve solving efficiency and convergence speed, where the master module describes the recovery of the TN through RRC dispatching, and the slave module recovers the PN based on the path update strategies. Case studies on a 15-node PN (18-node TN), a 33-node PN (45-node TN), and a practical example demonstrate that the approach improves both the practicality of the dispatching strategies and the recovery efficiency of the TPN.
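The abstract relies on two computational ingredients: a parameterized (discrete-continuous) action for each mobile-resource agent, and topology updates of the transportation network so that shortest-path searches for MMR routing stay consistent with newly repaired roads. The sketch below is only a minimal illustration of these two ideas under stated assumptions, not the authors' implementation; the toy graph, edge weights, and helper names (ParameterizedAction, build_tn, repair_road, route) are hypothetical, and networkx is assumed for the shortest-path computation.

```python
# Minimal sketch (not the paper's implementation): a hybrid discrete-continuous
# action for a mobile-resource agent, and shortest-path recomputation on the
# transportation network after a road repair crew restores a damaged road.
from dataclasses import dataclass
import networkx as nx


@dataclass
class ParameterizedAction:
    """Hybrid action of one MMR agent: a discrete routing choice plus a
    continuous dispatch quantity (e.g. output level of a mobile power source)."""
    target_node: int        # discrete part: which TN node to travel to
    dispatch_level: float   # continuous part: normalized output in [0, 1]


def build_tn(intact_roads, damaged_roads):
    """Build the traffic network graph from currently passable roads only."""
    g = nx.Graph()
    g.add_weighted_edges_from(intact_roads)      # weight = travel time
    g.graph["damaged"] = dict(damaged_roads)     # roads awaiting repair
    return g


def repair_road(g, u, v):
    """Topology update once a road repair crew finishes repairing road (u, v)."""
    w = g.graph["damaged"].pop((u, v))
    g.add_edge(u, v, weight=w)


def route(g, src, dst):
    """Shortest feasible route for a mobile resource, or None if unreachable."""
    try:
        return nx.shortest_path(g, src, dst, weight="weight")
    except nx.NetworkXNoPath:
        return None


if __name__ == "__main__":
    intact = [(1, 2, 3.0), (2, 3, 2.0), (3, 4, 4.0)]
    damaged = [((2, 5), 1.5), ((5, 4), 2.5)]
    tn = build_tn(intact, damaged)

    act = ParameterizedAction(target_node=4, dispatch_level=0.8)
    print("before repair:", route(tn, 1, act.target_node))   # [1, 2, 3, 4]

    repair_road(tn, 2, 5)
    repair_road(tn, 5, 4)
    print("after repair: ", route(tn, 1, act.target_node))   # [1, 2, 5, 4]
```

In the paper's framing, the master module would issue RRC repair decisions (here, the calls to repair_road), while slave agents would re-evaluate their routes and continuous dispatch levels on the updated topology.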
Keywords: Road island; Topology update; MMRs; Collaborative recovery; TPN; Deep reinforcement learning
Date: 2025
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S0306261925011535
Full text for ScienceDirect subscribers only
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:eee:appene:v:399:y:2025:i:c:s0306261925011535
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/journaldescription.cws_home/405891/bibliographic
DOI: 10.1016/j.apenergy.2025.126423
Applied Energy is currently edited by J. Yan
More articles in Applied Energy from Elsevier
Bibliographic data for series maintained by Catherine Liu.