A Q-Learning-Based Demand Response Algorithm for Industrial Processes with Operational Flexibility
Farzaneh Karami,
Manu Lahariya and
Guillaume Crevecoeur
Additional contact information
Farzaneh Karami: Ghent University
Manu Lahariya: Ghent University
Guillaume Crevecoeur: Ghent University
A chapter in Handbook of Smart Energy Systems, 2023, pp 3009-3025 from Springer
Abstract:
This chapter presents a Q-learning reinforcement learning policy for demand response (DR) management of the energy consumption of energy-intensive industrial customers (EICUs). The main idea is to exploit the flexibility offered by a control system equipped with a buffer (storage) system, consuming and storing energy when it is beneficial to do so. This stabilizes the power balance in the grid through efficient energy-flow management, thereby decreasing dependency on energy generated from fossil fuels and reducing carbon emissions. Results confirm that the presented dynamic-pricing DR algorithm can boost service-provider efficiency, lower energy costs for EICUs, and balance energy supply and demand in the electricity market.
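The chapter itself is not reproduced here, but the idea summarized in the abstract (an agent that decides, slot by slot, whether to consume and store energy in a buffer under a dynamic price) can be illustrated with a minimal tabular Q-learning sketch. This is not the authors' implementation: the state discretization, the toy reward in step(), the synthetic price signal, and all hyperparameters (N_LEVELS, ALPHA, GAMMA, EPS) are assumptions made for illustration only.

import numpy as np

# Hypothetical setup: buffer fill level discretized into N_LEVELS states;
# two actions per 15-minute slot: 0 = serve demand from the buffer, 1 = consume & store.
rng = np.random.default_rng(0)

N_LEVELS = 10
ACTIONS = (0, 1)
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # assumed learning rate, discount, exploration rate

Q = np.zeros((N_LEVELS, len(ACTIONS)))

def step(level, action, price):
    """Toy environment: storing is charged at the current price,
    serving demand from the buffer earns a fixed avoided cost."""
    if action == 1 and level < N_LEVELS - 1:
        return level + 1, -price      # pay the current price to store
    if action == 0 and level > 0:
        return level - 1, 1.0         # meet demand from the buffer
    return level, -1.0                # infeasible move: small penalty

for episode in range(500):
    level = int(rng.integers(N_LEVELS))
    for t in range(96):               # one day of 15-minute slots
        price = 0.5 + 0.5 * np.sin(2 * np.pi * t / 96)   # synthetic dynamic price
        if rng.random() < EPS:        # epsilon-greedy exploration
            a = int(rng.integers(len(ACTIONS)))
        else:
            a = int(np.argmax(Q[level]))
        nxt, reward = step(level, a, price)
        # Standard Q-learning update toward the bootstrapped target
        Q[level, a] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[level, a])
        level = nxt

After training, the greedy policy np.argmax(Q, axis=1) tends to store when the price is low and drain the buffer when it is high. A realistic EICU model would replace step() with the actual process dynamics, buffer constraints, and tariff data.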
Keywords: Buffer/storage; Energy flexibility; Q-learning reinforcement learning; Industrial process
Date: 2023
Persistent link: https://EconPapers.repec.org/RePEc:spr:sprchp:978-3-030-97940-9_172
Ordering information: This item can be ordered from
http://www.springer.com/9783030979409
DOI: 10.1007/978-3-030-97940-9_172