Reinforcement Learning for Fair and Efficient Charging Coordination for Smart Grid
Amr A. Elshazly,
Mahmoud M. Badr,
Mohamed Mahmoud (),
William Eberle,
Maazen Alsabaan and
Mohamed I. Ibrahem
Additional contact information
Amr A. Elshazly: Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA
Mahmoud M. Badr: Department of Network and Computer Security, College of Engineering, SUNY Polytechnic Institute, Utica, NY 13502, USA
Mohamed Mahmoud: Department of Electrical and Computer Engineering, Tennessee Technological University, Cookeville, TN 38505, USA
William Eberle: Department of Computer Science, Tennessee Technological University, Cookeville, TN 38505, USA
Maazen Alsabaan: Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11451, Saudi Arabia
Mohamed I. Ibrahem: Department of Electrical Engineering, Faculty of Engineering at Shoubra, Benha University, Cairo 11672, Egypt
Energies, 2024, vol. 17, issue 18, 1-28
Abstract:
The integration of renewable energy sources, such as rooftop solar panels, into smart grids poses significant challenges for managing customer-side battery storage. In response, this paper introduces a novel reinforcement learning (RL) approach aimed at optimizing the coordination of these batteries. Our approach uses a single-agent, multi-environment RL system designed to balance power saving, customer satisfaction, and fairness in power distribution. The RL agent dynamically allocates charging power while accounting for individual battery levels and grid constraints, employing an actor–critic algorithm: the actor determines the optimal charging power based on real-time conditions, while the critic iteratively refines the policy to enhance overall performance. The key advantages of our approach include: (1) Adaptive power allocation: the RL agent reduces overall power consumption by optimizing grid power allocation, leading to more efficient energy use. (2) Enhanced customer satisfaction: by increasing the total available power from the grid, the approach significantly reduces instances of battery levels falling below the critical state of charge (SoC). (3) Fair power distribution: the highest fair reward rises by 173.7% across the tested scenarios, demonstrating the method's effectiveness in minimizing discrepancies in power distribution. (4) Improved total reward: the total reward increases by 94.1%, highlighting the efficiency of the RL-based approach. Experimental results using a real-world dataset confirm that our RL approach markedly improves fairness, power efficiency, and customer satisfaction, underscoring its potential for optimizing smart grid operations and energy management systems.
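The abstract does not give the exact reward formulation, but a multi-objective reward balancing power saving, customer satisfaction, and fairness could, as a minimal illustrative sketch, combine a grid-draw saving term, the fraction of customers above the critical SoC, and Jain's fairness index over the allocated charging powers. All constants, weights, and function names below are assumptions for illustration, not the paper's actual design.

```python
import numpy as np

CRITICAL_SOC = 0.2    # hypothetical critical state of charge (fraction of capacity)
GRID_CAPACITY = 50.0  # hypothetical total grid power budget (kW)

def jain_fairness(allocations):
    """Jain's fairness index over per-customer allocations: 1.0 when all equal."""
    a = np.asarray(allocations, dtype=float)
    if np.allclose(a, 0.0):
        return 1.0  # no power allocated to anyone is trivially "fair"
    return a.sum() ** 2 / (len(a) * (a ** 2).sum())

def reward(soc, power, w_save=1.0, w_sat=1.0, w_fair=1.0):
    """Illustrative multi-objective reward for a charging-coordination agent.

    soc:   per-customer battery state of charge, each in [0, 1]
    power: per-customer allocated charging power (kW)
    """
    soc = np.asarray(soc, dtype=float)
    power = np.asarray(power, dtype=float)
    saving = 1.0 - power.sum() / GRID_CAPACITY          # reward lower total grid draw
    satisfaction = float(np.mean(soc >= CRITICAL_SOC))  # fraction above critical SoC
    fairness = jain_fairness(power)                     # equality of the allocation
    return w_save * saving + w_sat * satisfaction + w_fair * fairness
```

In an actor–critic setup of the kind the abstract describes, a scalarized reward like this would be what the critic estimates and the actor's allocation policy is trained to maximize; the relative weights control the trade-off among the three objectives.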
Keywords: charging coordination; deep reinforcement learning (RL); smart power grid; multi-objective optimization; single-agent multi-environment RL
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2024
Citations: 2
Downloads:
https://www.mdpi.com/1996-1073/17/18/4557/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/18/4557/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:18:p:4557-:d:1476145
Energies is currently edited by Ms. Agatha Cao
More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager ().