Secondary Voltage Collaborative Control of Distributed Energy System via Multi-Agent Reinforcement Learning
Tianhao Wang,
Shiqian Ma,
Na Xu,
Tianchun Xiang,
Xiaoyun Han,
Chaoxu Mu and
Yao Jin
Additional contact information
Tianhao Wang: Electric Power Research Institute, State Grid Tianjin Electric Power Company, No. 8, Haitai Huake 4th Road, Huayuan Industrial Zone, Binhai High Tech Zone, Tianjin 300384, China
Shiqian Ma: Electric Power Research Institute, State Grid Tianjin Electric Power Company, No. 8, Haitai Huake 4th Road, Huayuan Industrial Zone, Binhai High Tech Zone, Tianjin 300384, China
Na Xu: Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin 300072, China
Tianchun Xiang: State Grid Tianjin Electric Power Company, No. 39 Wujing, Guangfu Street, Hebei District, Tianjin 300010, China
Xiaoyun Han: Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin 300072, China
Chaoxu Mu: Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin 300072, China
Yao Jin: State Grid Tianjin Electric Power Company, No. 39 Wujing, Guangfu Street, Hebei District, Tianjin 300010, China
Energies, 2022, vol. 15, issue 19, 1-12
Abstract:
In this paper, a new cooperative voltage control strategy for a distributed power generation system is proposed based on the multi-agent advantage actor-critic (MA2C) algorithm, enabling flexible management and effective control of distributed energy. An attentional actor-critic message processor (AACMP) is integrated into the MA2C method to adaptively select the important messages from all communications and process them efficiently. Trained under a centralized-training, decentralized-execution framework, the cooperative control strategy takes over the secondary control level for voltage restoration in a distributed manner. The attention mechanism reduces the amount of information exchanged and the demands placed on the communication network. Finally, a distributed system with six energy nodes is used to verify the effectiveness of the proposed control strategy.
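A minimal sketch of the idea described above, assuming a PyTorch implementation: each agent scores incoming neighbour messages against its own observation with an attention module and feeds the weighted summary into per-agent actor and critic heads. The class names (MessageAttention, AgentActorCritic), layer sizes, and toy dimensions are illustrative assumptions, not the authors' AACMP code.

```python
# Hedged sketch of attention-weighted message processing for a
# per-agent actor-critic; all names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MessageAttention(nn.Module):
    """Scores neighbour messages against the agent's own observation and
    returns an attention-weighted summary (soft selection of messages)."""

    def __init__(self, obs_dim: int, msg_dim: int, hidden: int = 64):
        super().__init__()
        self.query = nn.Linear(obs_dim, hidden)
        self.key = nn.Linear(msg_dim, hidden)
        self.value = nn.Linear(msg_dim, hidden)

    def forward(self, obs: torch.Tensor, msgs: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim); msgs: (batch, n_neighbours, msg_dim)
        q = self.query(obs).unsqueeze(1)              # (batch, 1, hidden)
        k = self.key(msgs)                            # (batch, n, hidden)
        v = self.value(msgs)                          # (batch, n, hidden)
        scores = (q * k).sum(-1) / k.shape[-1] ** 0.5  # (batch, n)
        weights = F.softmax(scores, dim=-1)           # importance of each message
        return (weights.unsqueeze(-1) * v).sum(1)     # (batch, hidden)


class AgentActorCritic(nn.Module):
    """Per-agent actor-critic whose input is the local observation
    concatenated with the attention-processed messages."""

    def __init__(self, obs_dim: int, msg_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.attn = MessageAttention(obs_dim, msg_dim, hidden)
        self.body = nn.Sequential(nn.Linear(obs_dim + hidden, hidden), nn.ReLU())
        self.actor = nn.Linear(hidden, n_actions)     # policy logits
        self.critic = nn.Linear(hidden, 1)            # state-value estimate

    def forward(self, obs: torch.Tensor, msgs: torch.Tensor):
        z = torch.cat([obs, self.attn(obs, msgs)], dim=-1)
        h = self.body(z)
        return self.actor(h), self.critic(h)


if __name__ == "__main__":
    # Toy forward pass: 6 energy nodes, each receiving 5 neighbour messages.
    agent = AgentActorCritic(obs_dim=4, msg_dim=3, n_actions=5)
    obs = torch.randn(6, 4)
    msgs = torch.randn(6, 5, 3)
    logits, value = agent(obs, msgs)
    print(logits.shape, value.shape)  # torch.Size([6, 5]) torch.Size([6, 1])
```

During centralized training the critics may see joint information, while at execution time each agent only needs its local observation and the attention-filtered messages, which is what reduces the communication requirements noted in the abstract.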
Keywords: distributed energy; deep reinforcement learning; attentional mechanism; nodal voltage; coordination optimization
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2022
Downloads: (external link)
https://www.mdpi.com/1996-1073/15/19/7047/pdf (application/pdf)
https://www.mdpi.com/1996-1073/15/19/7047/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:15:y:2022:i:19:p:7047-:d:924888
Energies is currently edited by Ms. Agatha Cao