Evaluating the Adaptability of Reinforcement Learning Based HVAC Control for Residential Houses
Kuldeep Kurte,
Jeffrey Munk,
Olivera Kotevska,
Kadir Amasyali,
Robert Smith,
Evan McKee,
Yan Du,
Borui Cui,
Teja Kuruganti and
Helia Zandi
Additional contact information
Kuldeep Kurte: Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Jeffrey Munk: Energy and Transportation Science Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Olivera Kotevska: Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Kadir Amasyali: Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Robert Smith: Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Evan McKee: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
Yan Du: Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN 37996, USA
Borui Cui: Energy and Transportation Science Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Teja Kuruganti: Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Helia Zandi: Computational Sciences and Engineering Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
Sustainability, 2020, vol. 12, issue 18, 1-38
Abstract:
Intelligent Heating, Ventilation, and Air Conditioning (HVAC) control using deep reinforcement learning (DRL) has recently gained significant attention due to its ability to optimally control the complex behavior of the HVAC system. However, more exploration is needed to understand the adaptability challenges that a DRL agent could face during the deployment phase. Using online learning for such applications is not realistic due to the long learning period and likely poor comfort control during the learning process. Alternatively, DRL can be pre-trained using a building model prior to deployment. However, developing an accurate building model for every house and deploying a pre-trained DRL model for HVAC control would not be cost-effective. In this study, we focus on evaluating the ability of DRL-based HVAC control to provide cost savings when pre-trained on one building model and deployed on different house models with varying user comfort preferences. We observed around a 30% cost reduction with the pre-trained model over the baseline when validated in a simulation environment, and achieved up to a 21% cost reduction when deployed in a real house. These findings provide experimental evidence that a pre-trained DRL agent has the potential to adapt to different house environments and comfort settings.
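The pre-train-then-deploy workflow described in the abstract can be illustrated with a toy example. The sketch below uses tabular Q-learning (a deliberate simplification of the deep RL used in the paper) on a hypothetical one-resistance, one-capacitance (1R1C) cooling model: an agent is trained against one set of thermal parameters ("house A"), then its fixed greedy policy is evaluated on a house with different parameters ("house B"). All thermal parameters, the comfort band, and the energy price are illustrative assumptions, not values from the paper.

```python
import random

# Toy 1R1C house thermal model (hypothetical parameters, not from the paper).
# dT/dt = (T_out - T_in)/(R*C) - hvac_on * P/C, with a cooling-only HVAC unit.
def step_temp(T_in, T_out, hvac_on, R, C, P, dt=0.25):
    return T_in + dt * ((T_out - T_in) / (R * C) - (P / C if hvac_on else 0.0))

def discretize(T, lo=18.0, hi=30.0, n=24):
    # Map indoor temperature into one of n discrete states.
    i = int((T - lo) / (hi - lo) * n)
    return max(0, min(n - 1, i))

def reward(T_in, hvac_on, comfort=(21.0, 24.0), price=0.12):
    # Penalize energy cost plus any excursion outside the comfort band.
    lo, hi = comfort
    discomfort = max(0.0, lo - T_in) + max(0.0, T_in - hi)
    return -((price if hvac_on else 0.0) + 2.0 * discomfort)

def train(R, C, P, episodes=300, eps=0.2, alpha=0.1, gamma=0.95):
    # Epsilon-greedy tabular Q-learning over simulated days (15-min steps).
    Q = [[0.0, 0.0] for _ in range(24)]
    for _ in range(episodes):
        T = 25.0
        for _ in range(96):
            s = discretize(T)
            a = random.randrange(2) if random.random() < eps else (
                0 if Q[s][0] >= Q[s][1] else 1)
            T2 = step_temp(T, 32.0, a == 1, R, C, P)
            s2 = discretize(T2)
            Q[s][a] += alpha * (reward(T2, a == 1) + gamma * max(Q[s2]) - Q[s][a])
            T = T2
    return Q

def evaluate(Q, R, C, P):
    # Run the fixed greedy policy for one day on a (possibly different) house.
    T, total = 25.0, 0.0
    for _ in range(96):
        a = 0 if Q[discretize(T)][0] >= Q[discretize(T)][1] else 1
        T = step_temp(T, 32.0, a == 1, R, C, P)
        total += reward(T, a == 1)
    return total

random.seed(0)
Q = train(R=5.0, C=2.0, P=3.0)                # pre-train on "house A"
score_b = evaluate(Q, R=4.0, C=2.5, P=3.0)    # deploy on "house B"
```

In the study's actual setting, the tabular agent would be a deep Q-network, the 1R1C model a calibrated building simulation, and "house B" a real residence; the transfer step, however, is the same in spirit: the policy learned on one model is applied unchanged to a different environment.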
Keywords: building energy; demand response; smart grid; optimal HVAC control; deep reinforcement learning; adaptability; building simulation
JEL-codes: O13 Q Q0 Q2 Q3 Q5 Q56
Date: 2020
References: View references in EconPapers View complete reference list from CitEc
Citations: View citations in EconPapers (5)
Downloads: (external link)
https://www.mdpi.com/2071-1050/12/18/7727/pdf (application/pdf)
https://www.mdpi.com/2071-1050/12/18/7727/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jsusta:v:12:y:2020:i:18:p:7727-:d:415558
Sustainability is currently edited by Ms. Alexandra Wu
More articles in Sustainability from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.