In-sample tests of predictability are superior to pseudo-out-of-sample tests, even when data mining
Ian Hunt
International Journal of Forecasting, 2022, vol. 38, issue 3, 872-877
Abstract:
This paper analyses straightforward Bonferroni adjustments to critical values of in-sample tests of predictability, when data mining is used to search across models. Unlike conventional pseudo-out-of-sample tests, these in-sample tests have stable family-wise error rates (FWERs) when searching for models that predict well. Furthermore, when data mining, these in-sample tests have more power than pseudo-out-of-sample tests for identifying true predictability.
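The Bonferroni adjustment described in the abstract can be illustrated with a short sketch. The code below is a generic illustration, not the paper's exact procedure: it computes the per-test critical value that holds the family-wise error rate (FWER) at a chosen level across m candidate models, assuming standard-normal test statistics; the function name, variable names, and placeholder t-statistics are assumptions made for illustration only.

# A minimal sketch (not the paper's exact procedure): Bonferroni-adjusted
# critical values for in-sample predictability t-tests when data mining
# across m candidate models. Standard-normal test statistics are assumed.
import numpy as np
from scipy import stats

def bonferroni_critical_value(alpha, n_models, two_sided=True):
    """Per-test critical value so that the family-wise error rate
    across n_models tests does not exceed alpha."""
    adj_alpha = alpha / n_models
    tail = adj_alpha / 2 if two_sided else adj_alpha
    return stats.norm.ppf(1 - tail)

# Example: searching over 50 candidate predictors at a 5% FWER.
alpha, m = 0.05, 50
c_unadjusted = stats.norm.ppf(1 - alpha / 2)        # about 1.96
c_bonferroni = bonferroni_critical_value(alpha, m)  # about 3.29

# A model is flagged as predictive only if its |t-statistic| exceeds the
# Bonferroni critical value, which keeps the FWER stable under data mining.
t_stats = np.random.default_rng(0).normal(size=m)   # placeholder t-statistics
significant = np.abs(t_stats) > c_bonferroni
print(f"Unadjusted critical value: {c_unadjusted:.2f}")
print(f"Bonferroni critical value (m={m}): {c_bonferroni:.2f}")
print(f"Models flagged as predictive: {significant.sum()}")

The design choice here is the simplest possible multiplicity control: dividing the significance level by the number of models searched, which is what a Bonferroni adjustment does; the paper's analysis of FWER stability and power builds on adjustments of this kind.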
Keywords: Pseudo-out-of-sample tests; Over-fitting; Data mining; FWER; False discovery rate
Date: 2022
Downloads: http://www.sciencedirect.com/science/article/pii/S0169207021000789 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:intfor:v:38:y:2022:i:3:p:872-877
DOI: 10.1016/j.ijforecast.2021.05.006