Transfer learning for hierarchical forecasting: Reducing computational efforts of M5 winning methods
Arnoud P. Wellens, Maxi Udenio and Robert N. Boute
International Journal of Forecasting, 2022, vol. 38, issue 4, 1482-1491
Abstract:
The winning machine learning methods of the M5 Accuracy competition demonstrated high levels of forecast accuracy compared to the top-performing benchmarks in the history of the M-competitions. Yet, large-scale adoption is hampered by the significant computational requirements to model, tune, and train these state-of-the-art algorithms. To overcome this major issue, we discuss the potential of transfer learning (TL) to reduce the computational effort in hierarchical forecasting and provide a proof of concept that TL can be applied to the M5's top-performing methods. We demonstrate our easy-to-use TL framework on the recursive store-level LightGBM models of the M5 winning method and attain similar levels of forecast accuracy with roughly 25% less training time. Our findings provide evidence for a novel application of TL to facilitate the practical applicability of the M5 winning methods in large-scale settings with hierarchically structured data.
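The abstract does not include the authors' code, but to make the idea concrete: a minimal, hypothetical sketch of transfer learning with LightGBM is to train a source booster on data pooled across stores and then warm-start each store-level model from it via the library's init_model argument to lgb.train. The synthetic data, the make_sales_frame helper, the feature names, and the hyperparameters below are illustrative assumptions, not taken from the M5 winning method or from the paper's TL framework.

# Hypothetical sketch: warm-start a store-level LightGBM model from a booster
# trained on pooled data. Data, features, and hyperparameters are placeholders.
import lightgbm as lgb
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def make_sales_frame(n_rows, n_features=10):
    # Toy stand-in for M5-style store-level features and unit sales.
    X = rng.normal(size=(n_rows, n_features))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n_rows)
    return pd.DataFrame(X, columns=[f"f{i}" for i in range(n_features)]), y

params = {"objective": "regression", "metric": "rmse", "verbosity": -1}

# Source model: trained once on data pooled across all stores.
X_pool, y_pool = make_sales_frame(5000)
source_model = lgb.train(params, lgb.Dataset(X_pool, label=y_pool),
                         num_boost_round=200)

# Target model: fine-tuned on one store's (smaller) data, starting from the
# source booster (init_model) instead of from scratch; fewer boosting rounds
# are then needed, which is where the training-time savings would come from.
X_store, y_store = make_sales_frame(500)
store_model = lgb.train(params, lgb.Dataset(X_store, label=y_store),
                        num_boost_round=50,
                        init_model=source_model)

print(store_model.num_trees())  # 200 source trees plus up to 50 fine-tuned trees

How many fine-tuning rounds the warm-started models need, and whether they match from-scratch accuracy, would have to be validated on the hierarchy at hand; for the M5 setup the paper reports similar accuracy at roughly 25% less training time.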
Keywords: M5 Accuracy competition; Computational requirements; Transfer learning; LightGBM; Hierarchical forecasting
Date: 2022
Downloads: http://www.sciencedirect.com/science/article/pii/S0169207021001606 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:intfor:v:38:y:2022:i:4:p:1482-1491
DOI: 10.1016/j.ijforecast.2021.09.011
International Journal of Forecasting is currently edited by R. J. Hyndman
More articles in International Journal of Forecasting from Elsevier
Bibliographic data for series maintained by Catherine Liu.