Co-Optimizing Battery Storage for Energy Arbitrage and Frequency Regulation in Real-Time Markets Using Deep Reinforcement Learning
Yushen Miao,
Tianyi Chen,
Shengrong Bu,
Hao Liang and
Zhu Han
Additional contact information
Yushen Miao: James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
Tianyi Chen: James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
Shengrong Bu: Department of Engineering, Brock University, St. Catharines, ON L2S 3A1, Canada
Hao Liang: Department of Electrical and Computer Engineering, University of Alberta, Edmonton, AB T6G 2R3, Canada
Zhu Han: Department of Electrical and Computer Engineering, University of Houston, Houston, TX 77204, USA
Energies, 2021, vol. 14, issue 24, 1-17
Abstract:
Battery energy storage systems (BESSs) play a critical role in mitigating the uncertainty associated with renewable energy generation, maintaining the stability and improving the flexibility of power networks. In this paper, a BESS is used to provide energy arbitrage (EA) and frequency regulation (FR) services simultaneously so as to maximize its total revenue within its physical constraints. Since EA and FR actions are taken at different timescales, the multi-timescale problem is formulated as two nested Markov decision process (MDP) submodels. The problem is a complex decision-making task involving high-dimensional data and uncertainty (e.g., in electricity prices). Therefore, a novel co-optimization scheme is proposed to handle the multi-timescale problem and to coordinate the EA and FR services. A triplet deep deterministic policy gradient with exploration noise decay (TDD–ND) approach is used to obtain the optimal policy at each timescale. Simulations are conducted with real-time electricity price and regulation signal data from the American PJM regulation market. The simulation results show that the proposed approach outperforms the other policies studied in the literature.
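The abstract's "exploration noise decay" refers to gradually shrinking the random perturbation added to a deterministic policy's actions so a DDPG-style agent shifts from exploration to exploitation over training. A minimal illustrative sketch of that idea is below; the Gaussian noise model and the specific scale, decay rate, and floor values are assumptions for illustration, not parameters from the paper.

```python
import numpy as np

class DecayingGaussianNoise:
    """Illustrative exploration noise with exponential decay (hypothetical
    parameters; the paper's TDD-ND agent may use a different schedule)."""

    def __init__(self, action_dim, sigma0=0.2, decay=0.995, sigma_min=0.01):
        self.action_dim = action_dim
        self.sigma = sigma0          # current noise scale
        self.decay = decay           # multiplicative decay per episode
        self.sigma_min = sigma_min   # floor keeps a little exploration

    def sample(self):
        # Perturbation added to the actor's deterministic action.
        return np.random.normal(0.0, self.sigma, self.action_dim)

    def step(self):
        # Called once per episode: shrink the noise scale toward the floor.
        self.sigma = max(self.sigma * self.decay, self.sigma_min)

noise = DecayingGaussianNoise(action_dim=1)
for _ in range(500):
    noise.step()
# After 500 episodes the noise scale has decayed well below its start value.
```

In a DDPG training loop the noisy action would be `a = actor(state) + noise.sample()`, clipped to the feasible action range (e.g., the BESS charge/discharge limits).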
Keywords: battery energy storage; energy arbitrage; frequency regulation; real-time market; deep reinforcement learning
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2021
Citations: 3 (in EconPapers)
Downloads:
https://www.mdpi.com/1996-1073/14/24/8365/pdf (application/pdf)
https://www.mdpi.com/1996-1073/14/24/8365/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:14:y:2021:i:24:p:8365-:d:700404
Energies is currently edited by Ms. Agatha Cao