EconPapers    

Battery and Hydrogen Energy Storage Control in a Smart Energy Network with Flexible Energy Demand Using Deep Reinforcement Learning

Cephas Samende, Zhong Fan, Jun Cao, Renzo Fabián, Gregory N. Baltas and Pedro Rodriguez
Additional contact information
Cephas Samende: Power Networks Demonstration Centre, University of Strathclyde, Glasgow G1 1XQ, UK
Zhong Fan: Engineering Department, University of Exeter, Exeter EX4 4PY, UK
Jun Cao: Environmental Research and Innovation Department, Sustainable Energy Systems Group, Luxembourg Institute of Science and Technology, 4362 Esch-sur-Alzette, Luxembourg
Renzo Fabián: Environmental Research and Innovation Department, Sustainable Energy Systems Group, Luxembourg Institute of Science and Technology, 4362 Esch-sur-Alzette, Luxembourg
Gregory N. Baltas: Environmental Research and Innovation Department, Sustainable Energy Systems Group, Luxembourg Institute of Science and Technology, 4362 Esch-sur-Alzette, Luxembourg
Pedro Rodriguez: Environmental Research and Innovation Department, Sustainable Energy Systems Group, Luxembourg Institute of Science and Technology, 4362 Esch-sur-Alzette, Luxembourg

Energies, 2023, vol. 16, issue 19, 1-20

Abstract: Smart energy networks provide an effective means to accommodate high penetrations of variable renewable energy sources, such as solar and wind, which are key to the deep decarbonisation of energy production. However, given the variability of both renewable generation and energy demand, it is imperative to develop effective control and energy storage schemes that manage the variable energy generation and achieve the desired system economics and environmental goals. In this paper, we introduce a hybrid energy storage system composed of battery and hydrogen energy storage to handle the uncertainties related to electricity prices, renewable energy production, and consumption. We aim to improve renewable energy utilisation and minimise energy costs and carbon emissions while ensuring energy reliability and stability within the network. To achieve this, we propose a multi-agent deep deterministic policy gradient approach, a deep reinforcement learning-based control strategy, to optimise the scheduling of the hybrid energy storage system and energy demand in real time. The proposed approach is model-free and does not require explicit knowledge of, or rigorous mathematical models for, the smart energy network environment. Simulation results based on real-world data show that (i) integration and optimised operation of the hybrid energy storage system and energy demand reduce carbon emissions by 78.69%, improve cost savings by 23.5%, and improve renewable energy utilisation by over 13.2% compared to other baseline models; and (ii) the proposed algorithm outperforms state-of-the-art self-learning algorithms such as the deep Q-network.
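
The control scheme summarised in the abstract, one learning agent per flexible asset trained with the multi-agent deep deterministic policy gradient (MADDPG) algorithm, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the two-agent split (battery agent and hydrogen agent), the observation contents, the network sizes, and the use of PyTorch are assumptions made purely for illustration.

    # Minimal MADDPG-style layout (illustrative sketch only, not the paper's code).
    # Assumptions: two agents (battery, hydrogen), each observing e.g. electricity
    # price, renewable output, demand, and its own state of charge; continuous
    # charge/discharge setpoints in [-1, 1]; a centralised critic used for training.
    import torch
    import torch.nn as nn

    class Actor(nn.Module):
        """Maps an agent's local observation to a continuous setpoint in [-1, 1]."""
        def __init__(self, obs_dim: int, act_dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, act_dim), nn.Tanh(),
            )

        def forward(self, obs: torch.Tensor) -> torch.Tensor:
            return self.net(obs)

    class CentralCritic(nn.Module):
        """Scores the joint observation-action of all agents (centralised training)."""
        def __init__(self, joint_obs_dim: int, joint_act_dim: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(joint_obs_dim + joint_act_dim, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 1),
            )

        def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
            return self.net(torch.cat([joint_obs, joint_act], dim=-1))

    # obs_dim=6 is an assumed observation size, not taken from the paper.
    battery_actor = Actor(obs_dim=6, act_dim=1)
    hydrogen_actor = Actor(obs_dim=6, act_dim=1)
    critic = CentralCritic(joint_obs_dim=12, joint_act_dim=2)

    # Decentralised execution: each actor acts on its own observation only.
    battery_setpoint = battery_actor(torch.randn(1, 6))
    hydrogen_setpoint = hydrogen_actor(torch.randn(1, 6))

During MADDPG training, each actor would be updated with gradients from the centralised critic, which sees all observations and actions, while at run time only the local actors are needed. The reported emission, cost, and utilisation figures come from the paper's full formulation, not from this toy layout.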

Keywords: deep reinforcement learning; multi-agent deep deterministic policy gradient; battery and hydrogen energy storage systems; decarbonisation; renewable energy; carbon emissions; deep Q-network
JEL-codes: Q; Q0; Q4; Q40; Q41; Q42; Q43; Q47; Q48; Q49
Date: 2023
References: View references in EconPapers; view complete reference list from CitEc
Citations: View citations in EconPapers (2)

Downloads: (external link)
https://www.mdpi.com/1996-1073/16/19/6770/pdf (application/pdf)
https://www.mdpi.com/1996-1073/16/19/6770/ (text/html)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:16:y:2023:i:19:p:6770-:d:1245684

Energies is currently edited by Ms. Agatha Cao

More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

 
Page updated 2025-03-19
Handle: RePEc:gam:jeners:v:16:y:2023:i:19:p:6770-:d:1245684