Dynamic Pricing Based on Demand Response Using Actor–Critic Agent Reinforcement Learning
Ahmed Ismail and
Mustafa Baysal
Additional contact information
Ahmed Ismail: Faculty of Electrical and Electronics Engineering, Yildiz Technical University, Davutpasa Campus, Esenler, 34220 Istanbul, Turkey
Mustafa Baysal: Faculty of Electrical and Electronics Engineering, Yildiz Technical University, Davutpasa Campus, Esenler, 34220 Istanbul, Turkey
Energies, 2023, vol. 16, issue 14, 1-19
Abstract:
Eco-friendly technologies for sustainable energy development require the efficient utilization of energy resources. Real-time pricing (RTP), also known as dynamic pricing, offers advantages over other pricing schemes by enabling demand response (DR) actions. However, existing methods for determining and controlling DR struggle to manage growing demand and to predict future prices. This paper addresses those limitations by proposing a dynamic pricing DR model for efficient energy management based on actor–critic agent reinforcement learning (RL). The model's learning framework was trained on DR and real-time pricing data extracted from the Australian Energy Market Operator (AEMO) spanning 17 years. The efficacy of the RL-based dynamic pricing approach was evaluated on two prediction cases: actual versus predicted demand and actual versus predicted price. Long short-term memory (LSTM) models were first employed to predict price and demand, and their outputs were then refined by the deep RL model. The proposed approach achieved 99% accuracy for 30-min-ahead price prediction, demonstrating the efficiency of the RL-based model in accurately predicting both demand and price for effective energy management.
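As a rough illustration of the two-stage pipeline the abstract describes, the sketch below pairs an LSTM forecaster with a one-step advantage actor–critic update. This is a minimal sketch assuming PyTorch; the class names, layer sizes, Gaussian price action, and placeholder reward are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch, assuming PyTorch. All names and hyperparameters
# (LSTMForecaster, ActorCritic, hidden sizes, reward) are illustrative,
# not taken from the paper.
import torch
import torch.nn as nn

class LSTMForecaster(nn.Module):
    """Stage 1: map a window of past (demand, price) pairs to a
    forecast for the next 30-min interval."""
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # forecast for the next interval

class ActorCritic(nn.Module):
    """Stage 2: the actor proposes a price signal, the critic
    estimates state value for the advantage."""
    def __init__(self, n_state, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_state, hidden), nn.Tanh())
        self.mu = nn.Linear(hidden, 1)        # mean of Gaussian price action
        self.log_std = nn.Parameter(torch.zeros(1))
        self.value = nn.Linear(hidden, 1)     # critic head

    def forward(self, s):
        h = self.trunk(s)
        return self.mu(h), self.log_std.exp(), self.value(h)

def update(net, opt, s, a, r, s_next, gamma=0.99):
    """One-step advantage actor-critic update."""
    mu, std, v = net(s)
    with torch.no_grad():                     # advantage is a constant target
        _, _, v_next = net(s_next)
        advantage = r + gamma * v_next - v
    dist = torch.distributions.Normal(mu, std)
    actor_loss = -(dist.log_prob(a) * advantage).mean()
    critic_loss = (r + gamma * v_next - v).pow(2).mean()
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()

# Hypothetical usage with random placeholder data:
net = ActorCritic(n_state=4)
opt = torch.optim.Adam(net.parameters(), lr=3e-4)
s, s_next = torch.randn(32, 4), torch.randn(32, 4)
mu, std, _ = net(s)
a = torch.distributions.Normal(mu, std).sample()
r = torch.randn(32, 1)                        # placeholder reward (e.g., revenue)
update(net, opt, s, a, r, s_next)
```

In a full training loop, the LSTM's 30-min-ahead demand and price forecasts would form part of the agent's state, and the actor's price signal would be scored against realized demand, corresponding to the refinement step the abstract attributes to the deep RL model.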
Keywords: dynamic pricing; demand response; actor–critic agent; reinforcement learning; real-time pricing (RTP); long short-term memory (LSTM); Australian Energy Market Operator (AEMO); pricing prediction
JEL-codes: Q Q0 Q4 Q40 Q41 Q42 Q43 Q47 Q48 Q49
Date: 2023
Downloads:
https://www.mdpi.com/1996-1073/16/14/5469/pdf (application/pdf)
https://www.mdpi.com/1996-1073/16/14/5469/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:16:y:2023:i:14:p:5469-:d:1197254
Energies is currently edited by Ms. Agatha Cao
More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.