Robust Energy Management Policies for Solar Microgrids via Reinforcement Learning

Gerald Jones, Xueping Li and Yulin Sun
Additional contact information
Gerald Jones: Department of Industrial and Systems Engineering, University of Tennessee, Knoxville, TN 37996, USA
Xueping Li: Department of Industrial and Systems Engineering, University of Tennessee, Knoxville, TN 37996, USA
Yulin Sun: School of Accounting, Southwestern University of Finance and Economics, Chengdu 610074, China

Energies, 2024, vol. 17, issue 12, 1-22

Abstract: As the integration of renewable energy expands, effective energy system management becomes increasingly crucial. Distributed renewable generation microgrids offer green energy and resilience. Combining them with energy storage and a suitable energy management system (EMS) is essential due to the variability in renewable energy generation. Reinforcement learning (RL)-based EMSs have shown promising results in handling these complexities. However, concerns about policy robustness arise with the growing number of intermittent grid disruptions or disconnections from the main utility. This study investigates the resilience of RL-based EMSs to unforeseen grid disconnections when trained in grid-connected scenarios. Specifically, we evaluate the resilience of policies derived from advantage actor–critic (A2C) and proximal policy optimization (PPO) networks trained in both grid-connected and uncertain grid-connectivity scenarios. Stochastic models, incorporating solar energy and load uncertainties and utilizing real-world data, are employed in the simulation. Our findings indicate that grid-trained PPO and A2C excel in cost coverage, with PPO performing better. However, in isolated or uncertain connectivity scenarios, the demand coverage performance hierarchy shifts. The disruption-trained A2C model achieves the best demand coverage when islanded, whereas the grid-connected A2C network performs best in an uncertain grid connectivity scenario. This study enhances the understanding of the resilience of RL-based solutions using varied training methods and provides an analysis of the EMS policies generated.
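The simulation setting the abstract describes — stochastic solar generation and load, battery storage, and uncertain grid connectivity — can be illustrated with a toy environment. Everything below (class name, parameters, distributions, the 20%-of-capacity flow limit) is a hypothetical sketch for intuition, not the paper's actual model; a real experiment would train A2C/PPO policies against such an environment rather than the random policy shown here.

```python
import random

class MicrogridEnv:
    """Toy solar-microgrid environment (illustrative only; not the paper's model).

    Each step draws stochastic solar output and load demand; the grid may
    randomly disconnect, which is the "uncertain connectivity" scenario.
    """

    def __init__(self, capacity=10.0, p_disconnect=0.1, seed=0):
        self.capacity = capacity          # battery capacity (arbitrary units)
        self.p_disconnect = p_disconnect  # per-step chance the grid is down
        self.rng = random.Random(seed)
        self.reset()

    def reset(self):
        self.battery = self.capacity / 2  # start half-charged
        return self.battery

    def step(self, action):
        """action in [-1, 1]: share of a fixed flow limit to charge (+) or discharge (-)."""
        solar = max(0.0, self.rng.gauss(3.0, 1.0))  # stochastic generation
        load = max(0.0, self.rng.gauss(4.0, 1.0))   # stochastic demand
        connected = self.rng.random() > self.p_disconnect

        flow = action * self.capacity * 0.2  # flow limit: 20% of capacity per step
        discharge = min(max(0.0, -flow), self.battery)
        self.battery -= discharge
        charge = min(max(0.0, flow), self.capacity - self.battery, solar)
        self.battery += charge

        supply = (solar - charge) + discharge
        if connected:
            supply += max(0.0, load - supply)  # grid covers any shortfall
        covered = min(load, supply) / load if load else 1.0
        return (solar, load, connected), covered  # covered = demand coverage in [0, 1]

def rollout(env, policy, steps=100):
    """Mean demand coverage of a policy, the metric the abstract compares on."""
    env.reset()
    total = 0.0
    for _ in range(steps):
        _, covered = env.step(policy())
        total += covered
    return total / steps

# Example: a random policy in a frequently islanded setting.
env = MicrogridEnv(p_disconnect=0.3, seed=0)
avg_coverage = rollout(env, policy=lambda: random.uniform(-1, 1))
```

With `p_disconnect=0.0` the grid always covers shortfalls, so coverage is 1 regardless of policy; raising `p_disconnect` is what makes the learned battery policy matter, mirroring the paper's grid-connected versus islanded comparison.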

Keywords: distributed generation; microgrid; renewable energy; energy management systems (EMS); reinforcement learning (RL); advantage actor–critic (A2C); proximal policy optimization (PPO)
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2024

Downloads: (external link)
https://www.mdpi.com/1996-1073/17/12/2821/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/12/2821/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.


Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:12:p:2821-:d:1411207


Energies is currently edited by Ms. Agatha Cao

More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jeners:v:17:y:2024:i:12:p:2821-:d:1411207