A note on the validity of cross-validation for evaluating autoregressive time series prediction
Christoph Bergmeir, Rob J. Hyndman and Bonsoo Koo
Computational Statistics & Data Analysis, 2018, vol. 120, issue C, 70-83
One of the most widely used procedures for model evaluation in classification and regression is K-fold cross-validation (CV). However, when it comes to time series forecasting, because of the inherent serial correlation and potential non-stationarity of the data, its application is not straightforward, and practitioners often replace it with an out-of-sample (OOS) evaluation. It is shown that for purely autoregressive models, the use of standard K-fold CV is possible provided the models considered have uncorrelated errors. Such a setup occurs, for example, when the models nest a more appropriate model. This is very common when machine learning methods are used for prediction, where CV can control for overfitting the data. Theoretical insights supporting these arguments are presented, along with a simulation study and a real-world example. It is shown empirically that K-fold CV performs favourably compared to both OOS evaluation and other time-series-specific techniques such as non-dependent cross-validation.
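To make the setup concrete, the following is a minimal sketch (not the authors' code) of the procedure the abstract describes: a purely autoregressive series is embedded into a lagged design matrix, after which standard K-fold CV can be applied to the rows as in ordinary regression, with the usual OOS evaluation run alongside for comparison. The AR(2) process, the lag order p, the number of folds, and the use of scikit-learn's LinearRegression are illustrative assumptions.

```python
# Illustrative sketch of K-fold CV vs. OOS evaluation for a purely
# autoregressive prediction problem (assumptions: AR(2) data, p=2, K=5,
# ordinary least squares via scikit-learn).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Simulate an AR(2) series: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + e_t
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

# Embed the series: each row holds the p lagged values, the target is y_t.
p = 2
X = np.column_stack([y[p - j - 1 : n - j - 1] for j in range(p)])
target = y[p:]

# Standard K-fold CV on the embedded data. Under the abstract's condition
# (the fitted model has uncorrelated errors, e.g. it nests the true AR
# process), the rows can be treated like ordinary regression cases.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
cv_mse = []
for train_idx, test_idx in kf.split(X):
    model = LinearRegression().fit(X[train_idx], target[train_idx])
    resid = target[test_idx] - model.predict(X[test_idx])
    cv_mse.append(np.mean(resid ** 2))

# OOS evaluation: fit on the first 80% of the sample, test on the last 20%.
split = int(0.8 * len(target))
oos_model = LinearRegression().fit(X[:split], target[:split])
oos_mse = np.mean((target[split:] - oos_model.predict(X[split:])) ** 2)

print(f"K-fold CV MSE: {np.mean(cv_mse):.3f}")
print(f"OOS MSE:       {oos_mse:.3f}")
```

The CV estimate uses every observation for both fitting and testing, which is the source of its efficiency advantage over the single OOS split in this setting.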
Keywords: Cross-validation; Time series; Autoregression
Persistent link: https://EconPapers.repec.org/RePEc:eee:csdana:v:120:y:2018:i:c:p:70-83