
Reinforcement Learning for Energy-Storage Systems in Grid-Connected Microgrids: An Investigation of Online vs. Offline Implementation

Khawaja Haider Ali, Marvin Sigalo, Saptarshi Das, Enrico Anderlini, Asif Ali Tahir and Mohammad Abusara
Additional contact information
Khawaja Haider Ali: Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK
Marvin Sigalo: Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK
Saptarshi Das: Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK
Enrico Anderlini: Department of Mechanical Engineering, Roberts Building, University College London, London WC1E 7JE, UK
Asif Ali Tahir: Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK
Mohammad Abusara: Penryn Campus, College of Engineering, Mathematics and Physical Sciences, University of Exeter, Cornwall TR10 9FE, UK

Energies, 2021, vol. 14, issue 18, 1-18

Abstract: Grid-connected microgrids consisting of renewable energy sources (RES), battery storage, and load require an appropriate energy management system to control the battery operation. Traditionally, battery operation is optimised offline using 24 h of forecasted load-demand and RES-generation data, with the battery actions (charge/discharge/idle) determined before the start of the day. Reinforcement Learning (RL) has recently been suggested as an alternative to these traditional techniques because it can learn the optimal policy online from real data. Two RL approaches have been proposed in the literature: offline and online. In offline RL, the agent learns the optimum policy from predicted generation and load data; once convergence is achieved, battery commands are dispatched in real time. This method resembles the traditional methods because it relies on forecasted data. In online RL, by contrast, the agent learns the optimum policy by interacting with the system in real time using real data. This paper investigates the effectiveness of both approaches. To validate the method, white Gaussian noise with different standard deviations was added to real data to create synthetic predicted data. In the first approach, the predicted data were used by an offline RL algorithm. In the second approach, the online RL algorithm interacted with real streaming data in real time, and the agent was trained on real data. A comparison of the energy costs of the two approaches showed that online RL outperforms the offline approach when the difference between real and predicted data exceeds 1.6%.
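The abstract does not name the specific RL algorithm, so the online learning loop it describes can only be illustrated under assumptions. The sketch below assumes a simple tabular Q-learning agent whose state is a discretised battery state of charge (SoC), whose actions are charge/idle/discharge as in the abstract, and whose reward penalises energy imported from the grid; the environment, net-load stream, and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

# Illustrative sketch only: tabular Q-learning is an assumption, as the
# abstract does not specify the algorithm. States discretise the battery
# state of charge (SoC); actions are discharge / idle / charge.
rng = np.random.default_rng(0)

N_SOC_BINS = 10            # discretised battery SoC levels (assumption)
ACTIONS = (-1, 0, 1)       # discharge, idle, charge
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration

Q = np.zeros((N_SOC_BINS, len(ACTIONS)))

def step(soc_bin, action, net_load):
    """Hypothetical environment: reward is the negative cost of covering
    net load (load minus renewable generation) from the grid, after the
    battery has charged or discharged by one SoC bin."""
    new_soc = int(np.clip(soc_bin + action, 0, N_SOC_BINS - 1))
    grid_import = max(net_load - (soc_bin - new_soc), 0.0)
    return new_soc, -grid_import   # minimise energy bought from the grid

soc = N_SOC_BINS // 2
for t in range(5000):              # online loop over streaming data
    net_load = rng.uniform(-1, 2)  # synthetic net-load sample (assumption)
    if rng.random() < EPSILON:     # epsilon-greedy exploration
        a_idx = int(rng.integers(len(ACTIONS)))
    else:
        a_idx = int(np.argmax(Q[soc]))
    new_soc, reward = step(soc, ACTIONS[a_idx], net_load)
    # Standard Q-learning temporal-difference update
    Q[soc, a_idx] += ALPHA * (reward + GAMMA * Q[new_soc].max() - Q[soc, a_idx])
    soc = new_soc
```

In the offline variant the same update loop would instead be driven by forecasted generation and load data until convergence, with the learned policy then dispatched in real time, which is the distinction the paper evaluates.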

Keywords: reinforcement learning (RL); microgrid; battery management; offline and online RL; optimisation
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2021
Citations: 5

Downloads:
https://www.mdpi.com/1996-1073/14/18/5688/pdf (application/pdf)
https://www.mdpi.com/1996-1073/14/18/5688/ (text/html)



Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:14:y:2021:i:18:p:5688-:d:632482


Energies is currently edited by Ms. Agatha Cao


Handle: RePEc:gam:jeners:v:14:y:2021:i:18:p:5688-:d:632482