Reactive Power Optimization Method of Power Network Based on Deep Reinforcement Learning Considering Topology Characteristics
Tianhua Chen,
Zemei Dai,
Xin Shan,
Zhenghong Li,
Chengming Hu,
Yang Xue and
Ke Xu
Additional contact information
Tianhua Chen: State Key Laboratory of Technology and Equipment for Defense against Power System Operational Risks, Nanjing 211106, China
Zemei Dai: State Key Laboratory of Technology and Equipment for Defense against Power System Operational Risks, Nanjing 211106, China
Xin Shan: State Key Laboratory of Technology and Equipment for Defense against Power System Operational Risks, Nanjing 211106, China
Zhenghong Li: School of Electric Power Engineering, Nanjing Institute of Technology, Nanjing 211167, China
Chengming Hu: School of Electric Power Engineering, Nanjing Institute of Technology, Nanjing 211167, China
Yang Xue: School of Electric Power Engineering, Nanjing Institute of Technology, Nanjing 211167, China
Ke Xu: School of Electric Power Engineering, Nanjing Institute of Technology, Nanjing 211167, China
Energies, 2024, vol. 17, issue 24, 1-16
Abstract:
To address the load fluctuations caused by a high proportion of grid-connected renewable energy, a reactive power optimization method based on deep reinforcement learning (DRL) that accounts for network topology is proposed. The method formulates reactive power optimization as a Markov decision process and models and solves it within a DRL framework. The Dueling Double Deep Q-Network (D3QN) algorithm is adopted to improve computational accuracy and efficiency. Because standard DRL algorithms struggle to capture the topological characteristics of power flow, a Graph Convolutional Dueling Double Deep Q-Network (GCD3QN) algorithm is proposed. A graph convolutional network (GCN) is integrated into the D3QN model, and the graph convolution operator aggregates information across topologically connected nodes, enabling deep learning on non-Euclidean (graph-structured) data and improving the accuracy of reactive power optimization. Simulations on IEEE standard test systems verify the effectiveness of the proposed method.
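To illustrate the idea described in the abstract, the sketch below shows how a GCN can aggregate bus features over the grid adjacency before a dueling value/advantage head produces Q-values, in the spirit of the GCD3QN architecture. This is a minimal PyTorch sketch under stated assumptions, not the authors' implementation: the layer sizes, the 4-bus toy adjacency, the per-bus feature choice, and the flattened readout are all illustrative, and the D3QN training machinery (double Q-learning targets, experience replay) is omitted.

```python
# Minimal sketch of a GCN-based dueling Q-network (GCD3QN-style head).
# All layer sizes, names, and the toy adjacency are illustrative assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn


def normalized_adjacency(adj: torch.Tensor) -> torch.Tensor:
    """Symmetric GCN normalization: A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat H W)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        return torch.relu(a_hat @ self.linear(x))


class GCDuelingQNet(nn.Module):
    """Aggregates bus features over the grid topology, then splits a dueling
    head into state value V(s) and action advantages A(s, a):
    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)."""
    def __init__(self, n_buses: int, feat_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.gcn1 = GCNLayer(feat_dim, hidden)
        self.gcn2 = GCNLayer(hidden, hidden)
        self.value = nn.Sequential(nn.Linear(n_buses * hidden, hidden),
                                   nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(n_buses * hidden, hidden),
                                       nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        h = self.gcn2(self.gcn1(x, a_hat), a_hat)   # node embeddings
        h = h.flatten()                             # single-sample readout
        v, adv = self.value(h), self.advantage(h)
        return v + adv - adv.mean()                 # dueling aggregation


# Toy example: 4-bus system, 3 features per bus (e.g., P, Q, |V|),
# 5 hypothetical discrete reactive-power control actions.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
net = GCDuelingQNet(n_buses=4, feat_dim=3, n_actions=5)
q_values = net(torch.randn(4, 3), normalized_adjacency(adj))
print(q_values.shape)  # torch.Size([5])
```

The graph convolution lets each bus embedding depend on its electrically adjacent buses, which is the stated motivation for combining the GCN with D3QN; the dueling split and double-Q target selection would follow the standard D3QN recipe.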
Keywords: deep reinforcement learning; graph convolution; topological characteristics; reactive power optimization; deep Q-network
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2024
Downloads: (external link)
https://www.mdpi.com/1996-1073/17/24/6454/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/24/6454/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:24:p:6454-:d:1549482
Energies is currently edited by Ms. Agatha Cao