Can out-of-sample forecast comparisons help prevent overfitting?
Todd Clark
Journal of Forecasting, 2004, vol. 23, issue 2, 115-139
Abstract:
This paper shows that out-of-sample forecast comparisons can help prevent data-mining-induced overfitting. The basic results are drawn from simulations of a simple Monte Carlo design and a real data-based design similar to those used in some previous studies. In each simulation, a general-to-specific procedure is used to arrive at a model. If the selected specification includes any of the candidate explanatory variables, forecasts from the model are compared to forecasts from a benchmark model that is nested within the selected model. In particular, the competing forecasts are tested for equal MSE and encompassing. The simulations indicate that most of the post-sample tests are roughly correctly sized. Moreover, the tests have relatively good power, although some are consistently more powerful than others. The paper concludes with an application to modelling quarterly US inflation. Copyright © 2004 John Wiley & Sons, Ltd.
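The abstract names two out-of-sample comparisons: a test of equal MSE and a test of forecast encompassing, both applied to a benchmark nested in the selected model. The sketch below illustrates the general shape of such statistics using simple Diebold-Mariano-style t-ratios on forecast errors; the paper's own tests for nested models have different limiting distributions, and the function names and setup here are illustrative assumptions, not the article's implementation.

```python
import numpy as np

def equal_mse_stat(e_bench, e_model):
    """t-ratio on the mean squared-error loss differential.

    A positive value indicates the selected model's forecasts have
    smaller MSE than the benchmark's. This is a simple Diebold-
    Mariano-style statistic; tests designed for nested models
    (as in the paper) use different limiting distributions.
    """
    d = e_bench**2 - e_model**2          # loss differential per period
    n = len(d)
    return np.sqrt(n) * d.mean() / d.std(ddof=1)

def encompassing_stat(e_bench, e_model):
    """t-ratio for forecast encompassing.

    Based on c_t = e_bench_t * (e_bench_t - e_model_t): a positive
    mean suggests the selected model carries information the
    benchmark forecast does not encompass.
    """
    c = e_bench * (e_bench - e_model)
    n = len(c)
    return np.sqrt(n) * c.mean() / c.std(ddof=1)

# Illustrative use on simulated forecast errors: the benchmark's
# errors are the model's errors plus extra noise, so the model
# should look better on both criteria.
rng = np.random.default_rng(0)
e_model = rng.standard_normal(200)
e_bench = e_model + rng.standard_normal(200)
print(equal_mse_stat(e_bench, e_model), encompassing_stat(e_bench, e_model))
```

Both statistics are positive in this construction, consistent with the benchmark being strictly dominated by the larger model.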
Date: 2004
Citations: 53 (in EconPapers)
Full text: http://hdl.handle.net/10.1002/for.904 (subscription required, text/html)
Related works:
Working Paper: Can out-of-sample forecast comparisons help prevent overfitting? (2000) 
Persistent link: https://EconPapers.repec.org/RePEc:jof:jforec:v:23:y:2004:i:2:p:115-139
DOI: 10.1002/for.904
Journal of Forecasting is currently edited by Derek W. Bunn
More articles in Journal of Forecasting from John Wiley & Sons, Ltd.
Bibliographic data for series maintained by Wiley-Blackwell Digital Licensing and Christopher F. Baum.