Energy Demand Response in a Food-Processing Plant: A Deep Reinforcement Learning Approach

Philipp Wohlgenannt, Sebastian Hegenbart, Elias Eder, Mohan Kolhe and Peter Kepplinger
Additional contact information
Philipp Wohlgenannt: Josef Ressel Centre for Intelligent Thermal Energy Systems, Illwerke vkw Endowed Professorship for Energy Efficiency, Energy Research Centre, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria
Sebastian Hegenbart: Department of Engineering and Technology, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria
Elias Eder: Josef Ressel Centre for Intelligent Thermal Energy Systems, Illwerke vkw Endowed Professorship for Energy Efficiency, Energy Research Centre, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria
Mohan Kolhe: Faculty of Engineering and Science, University of Agder, Jon Lilletuns vei 9, 4879 Grimstad, Norway
Peter Kepplinger: Josef Ressel Centre for Intelligent Thermal Energy Systems, Illwerke vkw Endowed Professorship for Energy Efficiency, Energy Research Centre, Vorarlberg University of Applied Sciences, Hochschulstrasse 1, 6850 Dornbirn, Austria

Energies, 2024, vol. 17, issue 24, 1-19

Abstract: The food industry faces significant challenges in managing operational costs due to its high energy intensity and rising energy prices. Industrial food-processing facilities, with substantial thermal capacities and large cooling and heating demands, offer promising opportunities for demand response (DR) strategies. This study explores deep reinforcement learning (RL) as an innovative, data-driven approach to DR in the food industry. By leveraging the adaptive, self-learning capabilities of RL, energy costs in the investigated plant are effectively reduced. The RL algorithm was compared with the well-established optimization method Mixed Integer Linear Programming (MILP), and both were benchmarked against a reference scenario without DR. The two optimization strategies achieve cost savings of 17.57% (RL) and 18.65% (MILP). Although RL is slightly less effective at reducing costs, it significantly outperforms MILP in computational speed, being approximately 20 times faster. During operation, RL needs only 2 ms per optimization, compared to 19 s for MILP, making it a promising optimization tool for edge computing. Moreover, while MILP’s computation time increases considerably with the number of binary variables, RL efficiently learns the dynamic system behavior and scales to more complex systems without significant performance degradation. These results highlight that deep RL, when applied to DR, offers substantial cost savings and computational efficiency, with broad applicability to energy-management problems in various domains.
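
The keyword list identifies the RL method as double deep Q-learning. As a rough, illustrative sketch only (not the authors' implementation), the Python snippet below shows the target computation that distinguishes double DQN from plain DQN: the online network selects the greedy next action while the target network evaluates it, which reduces overestimation of action values. Lookup tables stand in for the deep networks, and the states, actions, and reward are invented placeholders rather than the plant model or price data from the paper.

import numpy as np

# Double deep Q-learning target, sketched with lookup tables standing in
# for the online and target networks. In a DR setting, a state might encode
# tank temperatures and electricity prices, and an action the on/off status
# of a cooling machine; all values here are illustrative assumptions.
rng = np.random.default_rng(0)
n_states, n_actions, gamma = 4, 2, 0.99

q_online = rng.normal(size=(n_states, n_actions))  # stand-in "online" network
q_target = q_online.copy()                         # stand-in "target" network

def double_dqn_target(reward, next_state, done):
    # Online net picks the action; target net scores it (the double-DQN step).
    best_action = int(np.argmax(q_online[next_state]))
    bootstrap = q_target[next_state, best_action]
    return reward + gamma * bootstrap * (1.0 - float(done))

# One update for a sampled transition (s, a, r, s'); r is the negative
# electricity cost incurred by the chosen action over the time step.
s, a, r, s_next, done = 0, 1, -0.3, 2, False
alpha = 0.1
q_online[s, a] += alpha * (double_dqn_target(r, s_next, done) - q_online[s, a])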

Keywords: industrial energy-management systems; demand response; reinforcement learning; machine learning; double deep Q-learning
JEL-codes: Q; Q0; Q4; Q40; Q41; Q42; Q43; Q47; Q48; Q49
Date: 2024

Downloads: (external link)
https://www.mdpi.com/1996-1073/17/24/6430/pdf (application/pdf)
https://www.mdpi.com/1996-1073/17/24/6430/ (text/html)

Persistent link: https://EconPapers.repec.org/RePEc:gam:jeners:v:17:y:2024:i:24:p:6430-:d:1548622

Energies is currently edited by Ms. Agatha Cao

More articles in Energies from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.

Handle: RePEc:gam:jeners:v:17:y:2024:i:24:p:6430-:d:1548622