Architecture for Enabling Edge Inference via Model Transfer from Cloud Domain in a Kubernetes Environment
Pekka Pääkkönen,
Daniel Pakkala,
Jussi Kiljander and
Roope Sarala
Additional contact information
Pekka Pääkkönen, Daniel Pakkala, Jussi Kiljander and Roope Sarala: VTT Technical Research Centre of Finland, 90571 Oulu, Finland
Future Internet, 2020, vol. 13, issue 1, 1-24
Abstract:
The current approaches to energy consumption optimisation in buildings are mainly reactive or focus on scheduling daily/weekly operation modes in heating. Machine Learning (ML)-based advanced control methods have been demonstrated to improve energy efficiency compared to these traditional methods. However, placing ML-based models close to the buildings is not straightforward. Firstly, edge devices typically have lower processing power, memory, and storage, which may limit execution of ML-based inference at the edge. Secondly, associated building information should be kept private. Thirdly, network access may be limited for serving a large number of edge devices. The contribution of this paper is an architecture that enables training of ML-based models for energy consumption prediction in a private cloud domain, and transfer of the models to edge nodes for prediction in a Kubernetes environment. Additionally, predictors at the edge nodes can be updated automatically without interrupting operation. Performance results with sensor-based devices (Raspberry Pi 4 and Jetson Nano) indicated that a satisfactory prediction latency (~7–9 s) can be achieved within the research context. However, model switching led to an increase in prediction latency (~9–13 s). A partial evaluation of a Reference Architecture for edge computing systems, which was used as a starting point for the architecture design, may be considered an additional contribution of the paper.
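The abstract describes updating predictors at edge nodes without interrupting operation. The paper's actual mechanism is not detailed here; a minimal sketch of one way to achieve such uninterrupted switching, assuming a lock-guarded reference that is atomically replaced when a new model version arrives from the cloud (the class and model stand-ins below are hypothetical, not from the paper):

```python
import threading

class HotSwapPredictor:
    """Serves predictions while allowing the model to be replaced atomically.

    Hypothetical sketch: a new model version transferred from the cloud
    replaces the old one under a lock, so concurrent prediction calls
    always see either the old or the new model, never a broken state.
    """

    def __init__(self, model):
        self._lock = threading.Lock()
        self._model = model

    def predict(self, features):
        # Snapshot the current model reference so a concurrent swap
        # cannot change it mid-call.
        with self._lock:
            model = self._model
        return model(features)

    def swap(self, new_model):
        # Atomically replace the served model, e.g. after a new version
        # has been pulled from the cloud domain.
        with self._lock:
            self._model = new_model

# Usage: serving continues while a v2 model replaces v1.
predictor = HotSwapPredictor(lambda x: x * 2)   # stand-in for a trained model
v1 = predictor.predict(3)                        # -> 6
predictor.swap(lambda x: x * 10)                 # new model arrives
v2 = predictor.predict(3)                        # -> 30
```

In a Kubernetes setting, the same effect can also be obtained at the orchestration level with a rolling update of the predictor Deployment; the in-process swap shown here avoids restarting the serving container.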
Keywords: Rancher; k3s; Docker; reference architecture; ML
JEL-codes: O3
Date: 2020
Downloads:
https://www.mdpi.com/1999-5903/13/1/5/pdf (application/pdf)
https://www.mdpi.com/1999-5903/13/1/5/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jftint:v:13:y:2020:i:1:p:5-:d:470173
Future Internet is currently edited by Ms. Grace You