
Reinforcement learning-based optimization for power scheduling in a renewable energy connected grid

Awol Seid Ebrie and Young Jin Kim

Renewable Energy, 2024, vol. 230, issue C

Abstract: Power scheduling is an NP-hard optimization problem that demands a delicate balance between economic costs and environmental emissions. In response to growing concern over climate change, global environmental policies prioritize decarbonizing the electricity sector by integrating renewable energies (REs) into power grids. While this integration brings economic and environmental benefits, the intermittency of REs amplifies the uncertainty and complexity of power scheduling. Existing optimization approaches are often limited to small numbers of units, overlook critical parameters, and disregard the intermittency of REs. To address these limitations, this article introduces a robust and scalable optimization algorithm for renewable-integrated power scheduling based on reinforcement learning (RL). In the proposed methodology, the power scheduling problem is decomposed into Markov decision processes (MDPs) within a multi-agent simulation environment, and the simulated MDPs are used to train a deep reinforcement learning (DRL) model that solves the optimization. The validity and effectiveness of the proposed method are demonstrated across various test systems, encompassing single- to tri-objective problems with 10–100 generating units. The findings consistently show the superior performance of the proposed DRL algorithm over existing methods, such as the multi-agent immune system-based evolutionary priority list (MAI-EPL), binary real-coded genetic algorithm (BRCGA), teaching-learning-based optimization (TLBO), quasi-oppositional teaching-learning-based algorithm (QOTLBO), hybrid genetic-imperialist competitive algorithm (HGICA), three-stage priority list (TSPL), real-coded grey wolf optimization (RCGWO), multi-objective evolutionary algorithm based on decomposition (MOEA/D), and non-dominated sorting genetic algorithms (NSGA-II and NSGA-III). The experimental results also highlight the value of integrating REs into larger power systems. In a 10-unit system with 2.81% RE penetration, reductions of 3.42%, 4.03%, and 3.10% were observed in costs, CO2 emissions, and SO2 emissions, respectively. Similarly, in a 100-unit system with an RE penetration rate of only 0.28%, reductions of 3.75% in cost, 4.42% in CO2, and 3.34% in SO2 were observed. These findings underscore the effectiveness of RE integration in larger-scale power systems, even at low penetration rates.
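To make the MDP decomposition concrete, the sketch below shows a toy unit-commitment environment in Python. It is an illustrative assumption, not the authors' implementation: the class name UnitCommitmentMDP, the sequential one-unit-per-step action scheme, the objective weights, and the shortfall penalty are all hypothetical stand-ins for the paper's richer multi-agent environment with intermittent RE output.

```python
import numpy as np

class UnitCommitmentMDP:
    """Toy unit-commitment MDP (illustrative only): at step t the agent
    commits (1) or leaves off (0) unit t; the episode ends once every
    unit is decided. The terminal reward is the negative weighted sum of
    fuel cost and CO2 emissions, with a penalty if committed capacity
    cannot cover demand net of renewable output."""

    def __init__(self, capacity, cost_rate, co2_rate, demand, renewable):
        self.capacity = np.asarray(capacity, dtype=float)    # MW per unit
        self.cost_rate = np.asarray(cost_rate, dtype=float)  # $/MWh at full output
        self.co2_rate = np.asarray(co2_rate, dtype=float)    # tCO2/MWh
        self.demand = float(demand)        # system load, MW
        self.renewable = float(renewable)  # intermittent RE output, MW
        self.n = len(self.capacity)

    def reset(self):
        self.t = 0
        self.u = np.zeros(self.n)  # commitment decisions made so far
        return self._state()

    def _state(self):
        # State: normalized decision index plus the partial commitment vector.
        return np.concatenate(([self.t / self.n], self.u))

    def step(self, action):
        self.u[self.t] = action  # action in {0, 1} for unit t
        self.t += 1
        done = self.t == self.n
        reward = 0.0
        if done:
            committed = (self.u * self.capacity).sum()
            net_demand = max(self.demand - self.renewable, 0.0)
            cost = (self.u * self.capacity * self.cost_rate).sum()
            co2 = (self.u * self.capacity * self.co2_rate).sum()
            shortfall = max(net_demand - committed, 0.0)
            # Hypothetical scalarization: weight emissions, penalize shortfall.
            reward = -(cost + 50.0 * co2 + 1e4 * shortfall)
        return self._state(), reward, done

if __name__ == "__main__":
    env = UnitCommitmentMDP(capacity=[455, 130, 80],
                            cost_rate=[16.0, 22.0, 27.0],
                            co2_rate=[0.95, 0.72, 0.61],
                            demand=600, renewable=17)
    state, done = env.reset(), False
    while not done:
        action = 1  # placeholder policy; a DRL agent would learn this choice
        state, reward, done = env.step(action)
    print("episode return:", reward)
```

In the paper's multi-objective setting, a single scalarized reward like this would be replaced or supplemented by Pareto-based handling of the cost, CO2, and SO2 objectives across multiple agents; the weights above (50.0 and 1e4) are arbitrary placeholders.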

Keywords: Deep reinforcement learning; Economic environmental dispatch; Multi-objective optimization; Renewable energy sources; Unit commitment
Date: 2024
Citations: 1 (in EconPapers)

Downloads: http://www.sciencedirect.com/science/article/pii/S0960148124009546 (full text for ScienceDirect subscribers only)

Persistent link: https://EconPapers.repec.org/RePEc:eee:renene:v:230:y:2024:i:c:s0960148124009546

DOI: 10.1016/j.renene.2024.120886

Renewable Energy is currently edited by Soteris A. Kalogirou and Paul Christodoulides

More articles in Renewable Energy from Elsevier
Bibliographic data for this series is maintained by Catherine Liu.

 
Page updated 2025-03-19
Handle: RePEc:eee:renene:v:230:y:2024:i:c:s0960148124009546