Forecasting Principles from Experience with Forecasting Competitions
Jennifer Castle, Jurgen Doornik and David Hendry
Forecasting, 2021, vol. 3, issue 1, 1-28
Abstract:
Economic forecasting is difficult, largely because of the many sources of nonstationarity influencing observational time series. Forecasting competitions aim to improve the practice of economic forecasting by providing very large data sets on which the efficacy of forecasting methods can be evaluated. We consider the general principles that seem to be the foundation for successful forecasting, and show how these are relevant for methods that did well in the M4 competition. We establish some general properties of the M4 data set, which we use to improve the basic benchmark methods, as well as the Card method that we created for our submission to that competition. A data generation process is proposed that captures the salient features of the annual data in M4.
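The abstract refers to simple benchmark methods and to a data generation process capturing salient features of the annual M4 data. As a purely illustrative sketch (not the paper's actual specification, which is given in the full text), the Python snippet below simulates a toy nonstationary annual series as a random walk with drift in logs subject to occasional location shifts, and applies two standard benchmarks: the naive (random-walk) forecast and a random walk with drift. All parameter values and function names here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_toy_annual(n=60, drift=0.02, sigma=0.05,
                        break_prob=0.05, break_size=0.3):
    """Toy nonstationary annual series: random walk with drift in logs,
    hit by occasional location shifts. Parameters are illustrative
    assumptions, not taken from the paper."""
    eps = rng.normal(0.0, sigma, n)                # i.i.d. shocks
    breaks = rng.random(n) < break_prob            # Bernoulli break dates
    jumps = breaks * rng.normal(0.0, break_size, n)
    log_y = np.cumsum(drift + eps + jumps)         # integrate all shocks
    return np.exp(log_y)

def naive_forecast(y, h=6):
    """Random-walk ('naive') benchmark: repeat the last observation."""
    return np.full(h, y[-1])

def drift_forecast(y, h=6):
    """Random walk with drift: extrapolate the mean first difference."""
    d = (y[-1] - y[0]) / (len(y) - 1)
    return y[-1] + d * np.arange(1, h + 1)

y = simulate_toy_annual()
print("naive :", naive_forecast(y))
print("drift :", drift_forecast(y))
```

Location shifts of this kind are one reason such benchmarks are hard to beat on short annual series: a forecast anchored at the last observation adapts quickly after a break, whereas models fitted to the full pre-break sample do not.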
Keywords: automatic forecasting; calibration; prediction intervals; regression; forecasting competitions; seasonality; software; time series; nonstationarity
JEL-codes: A1 B4 C0 C1 C2 C3 C4 C5 C8 M0 Q2 Q3 Q4
Date: 2021
Downloads:
https://www.mdpi.com/2571-9394/3/1/10/pdf (application/pdf)
https://www.mdpi.com/2571-9394/3/1/10/ (text/html)
Persistent link: https://EconPapers.repec.org/RePEc:gam:jforec:v:3:y:2021:i:1:p:10-165:d:504406