Research on Energy Scheduling Optimization Strategy with Compressed Air Energy Storage

Rui Wang, Zhanqiang Zhang (), Keqilao Meng, Pengbing Lei, Kuo Wang, Wenlu Yang, Yong Liu and Zhihua Lin ()
Additional contact information
Rui Wang: College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China
Zhanqiang Zhang: College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China
Keqilao Meng: College of New Energy, Inner Mongolia University of Technology, Hohhot 010080, China
Pengbing Lei: POWERCHINA Hebei Electric Power Engineering Co., Ltd., Shijiazhuang 050031, China
Kuo Wang: College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China
Wenlu Yang: College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China
Yong Liu: Shandong Energy Group Electric Power Group Co., Ltd., Jinan 250014, China
Zhihua Lin: Science and Technology Research Institute of China Three Gorges Corporation, Beijing 101100, China

Sustainability, 2024, vol. 16, issue 18, 1-18

Abstract: Due to the volatility and intermittency of renewable energy, integrating a large amount of renewable energy into the grid can significantly affect its stability and security. In this paper, we propose a tiered dispatching strategy for compressed air energy storage (CAES) and use it to balance the power output of wind farms, achieving intelligent dispatching of the source–storage–grid system. The CAES energy dispatching problem is formulated as a Markov decision process and solved with the Actor–Critic (AC) algorithm. To address the stability and low sampling-efficiency issues of the AC algorithm in continuous action spaces, we employ the deep deterministic policy gradient (DDPG) algorithm, a model-free deep reinforcement learning algorithm based on a deterministic policy. Furthermore, improving DDPG with Neuroevolution of Augmenting Topologies (NEAT) enhances the algorithm's adaptability in complex environments and improves its performance. The results show that the scheduling accuracy of the DDPG-NEAT algorithm reached 91.97%, which was 15.43% and 31.5% higher than that of the SAC and DDPG algorithms, respectively. The algorithm exhibits excellent performance and stability in CAES energy dispatching.
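
The abstract describes a DDPG-based dispatch agent trained within a Markov decision process formulation. As an illustration of the core DDPG actor–critic update such an agent relies on, a minimal Python/PyTorch sketch follows. The state/action dimensions, network sizes, hyperparameters (GAMMA, TAU, learning rates), and the random minibatch are assumptions made here for illustration only and are not taken from the paper; the NEAT topology evolution that the authors layer on top of DDPG is not shown.

    import copy
    import torch
    import torch.nn as nn

    # Illustrative dimensions/hyperparameters (assumptions, not the paper's values).
    STATE_DIM, ACTION_DIM, GAMMA, TAU = 4, 1, 0.99, 0.005

    # Deterministic policy mu(s) -> action in [-1, 1] (e.g. a normalized charge/discharge rate).
    actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                          nn.Linear(64, ACTION_DIM), nn.Tanh())
    # Action-value function Q(s, a).
    critic = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                           nn.Linear(64, 1))
    actor_target, critic_target = copy.deepcopy(actor), copy.deepcopy(critic)
    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    def ddpg_update(s, a, r, s_next, done):
        """One DDPG step on a minibatch of transitions (s, a, r, s', done)."""
        # Critic: regress Q(s, a) toward the bootstrapped target r + gamma * Q'(s', mu'(s')).
        with torch.no_grad():
            q_next = critic_target(torch.cat([s_next, actor_target(s_next)], dim=1))
            target = r + GAMMA * (1.0 - done) * q_next
        critic_loss = nn.functional.mse_loss(critic(torch.cat([s, a], dim=1)), target)
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        # Actor: deterministic policy gradient, ascend Q(s, mu(s)).
        actor_loss = -critic(torch.cat([s, actor(s)], dim=1)).mean()
        actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

        # Soft (Polyak) update of the target networks.
        for net, tgt in ((actor, actor_target), (critic, critic_target)):
            for p, p_t in zip(net.parameters(), tgt.parameters()):
                p_t.data.mul_(1.0 - TAU).add_(TAU * p.data)

    # Toy usage: random transitions standing in for sampled CAES dispatch experience.
    B = 32
    ddpg_update(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
                torch.randn(B, 1), torch.randn(B, STATE_DIM), torch.zeros(B, 1))

In the full DDPG-NEAT approach the paper proposes, the network topologies above would additionally be evolved by NEAT rather than fixed in advance.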

Keywords: compressed air energy storage; deep deterministic policy gradient; neuroevolution of augmenting topologies; optimal scheduling
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2024

Downloads: (external link)
https://www.mdpi.com/2071-1050/16/18/8008/pdf (application/pdf)
https://www.mdpi.com/2071-1050/16/18/8008/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:16:y:2024:i:18:p:8008-:d:1477377

Sustainability is currently edited by Ms. Alexandra Wu

More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().

 
Handle: RePEc:gam:jsusta:v:16:y:2024:i:18:p:8008-:d:1477377