Real-Time Energy Management of a Microgrid Using Deep Reinforcement Learning
Ying Ji,
Jianhui Wang,
Jiacan Xu,
Xiaoke Fang and
Huaguang Zhang
Additional contact information
Ying Ji: Northeastern University, College of Information Science and Engineering, Shenyang 110819, China
Jianhui Wang: Northeastern University, College of Information Science and Engineering, Shenyang 110819, China
Jiacan Xu: Northeastern University, College of Information Science and Engineering, Shenyang 110819, China
Xiaoke Fang: Northeastern University, College of Information Science and Engineering, Shenyang 110819, China
Huaguang Zhang: Northeastern University, College of Information Science and Engineering, Shenyang 110819, China
Energies, 2019, vol. 12, issue 12, 1-21
Abstract:
Driven by the recent advances and applications of smart-grid technologies, our electric power grid is undergoing radical modernization. The microgrid (MG) plays an important role in this modernization by providing a flexible way to integrate distributed renewable energy resources (RES) into the power grid. However, distributed RES, such as solar and wind, can be highly intermittent and stochastic. These uncertain resources, combined with load demand, cause random variations on both the supply and the demand sides, which makes it difficult to operate an MG effectively. To address this problem, this paper proposes a novel energy management approach for real-time scheduling of an MG that accounts for uncertainty in the load demand, renewable energy, and electricity price. Unlike conventional model-based approaches, which require a predictor to estimate the uncertainty, the proposed solution is learning-based and does not require an explicit model of the uncertainty. Specifically, the MG energy management problem is modeled as a Markov Decision Process (MDP) with the objective of minimizing the daily operating cost. A deep reinforcement learning (DRL) approach is developed to solve the MDP: a deep feedforward neural network is designed to approximate the optimal action-value function, and the deep Q-network (DQN) algorithm is used to train the network. The proposed approach takes the state of the MG as input and directly outputs the real-time generation schedule. Finally, case studies using real power-grid data from the California Independent System Operator (CAISO) demonstrate the effectiveness of the proposed approach.
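The abstract outlines the core method: a deep feedforward network approximates the action-value function and is trained with the DQN algorithm, mapping the MG state to a real-time generation schedule. The sketch below illustrates that setup in PyTorch; the state layout (load, renewable output, price, hour of day), the discretization of the generation action, the reward convention, and the network and replay-buffer sizes are illustrative assumptions, not specifics taken from the paper.

```python
# Minimal DQN sketch for the microgrid scheduling problem outlined in the
# abstract. State layout, action discretization, reward convention, and
# network sizes are illustrative assumptions, not details from the paper.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 4      # assumed state: [load, renewable output, electricity price, hour]
N_ACTIONS = 11     # assumed: dispatchable generation set-point discretized into 11 levels
GAMMA = 0.95       # discount factor for the daily-cost objective

class QNetwork(nn.Module):
    """Deep feedforward network approximating the action-value function Q(s, a)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),  # one Q-value per discrete generation level
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)  # experience replay buffer of (s, a, r, s', done)

def select_action(state, epsilon):
    """Epsilon-greedy choice of the real-time generation schedule."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        q_values = q_net(torch.as_tensor(state, dtype=torch.float32))
    return int(q_values.argmax())

def train_step(batch_size=32):
    """One DQN update: minimize the TD error against a frozen target network."""
    if len(replay) < batch_size:
        return
    s, a, r, s_next, done = zip(*random.sample(replay, batch_size))
    s = torch.as_tensor(s, dtype=torch.float32)
    s_next = torch.as_tensor(s_next, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64)
    r = torch.as_tensor(r, dtype=torch.float32)   # reward = negative operating cost
    done = torch.as_tensor(done, dtype=torch.float32)
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r + GAMMA * target_net(s_next).max(dim=1).values * (1.0 - done)
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Periodically copying the online network's weights into the target network (not shown here) keeps the TD targets stable, which is the standard DQN device for avoiding divergence during training.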
Keywords: microgrids; energy management system; model-free; deep reinforcement learning; neural network
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2019
Citations: View citations in EconPapers (45)
Downloads: (external link)
https://www.mdpi.com/1996-1073/12/12/2291/pdf (application/pdf)
https://www.mdpi.com/1996-1073/12/12/2291/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:12:y:2019:i:12:p:2291-:d:240125
Energies is currently edited by Ms. Agatha Cao
More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.