Fathoming empirical forecasting competitions’ winners
Azzam Alroomi, Georgios Karamatzanis, Konstantinos Nikolopoulos, Anna Tilba and Shujun Xiao
International Journal of Forecasting, 2022, vol. 38, issue 4, 1519-1525
Abstract:
The M5 forecasting competition has provided strong empirical evidence that machine learning methods can outperform statistical methods: in essence, that complex methods can be more accurate than simple ones. This result challenges the flagship empirical finding that has guided the forecasting discipline for the last four decades: keep methods sophisticatedly simple. Nevertheless, this was a first, and we can argue that it will not happen again, as there has been a different winner in each forecasting competition. This inevitably raises the question: can a method win more than once (and should it be expected to)? Furthermore, we argue for the need to elaborate on the merits of the competing methods and on what makes them winners.
Keywords: Forecasting; Competitions; Performance; Machine learning; Benchmarks
Date: 2022
Downloads: http://www.sciencedirect.com/science/article/pii/S0169207022000504 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:intfor:v:38:y:2022:i:4:p:1519-1525
DOI: 10.1016/j.ijforecast.2022.03.010
International Journal of Forecasting is currently edited by R. J. Hyndman