Measuring the prediction error. A comparison of cross-validation, bootstrap and covariance penalty methods
Simone Borra and
Agostino Di Ciaccio
Computational Statistics & Data Analysis, 2010, vol. 54, issue 12, 2976-2989
Abstract:
The estimators most widely used to evaluate the prediction error of a non-linear regression model are examined. An extensive simulation study compares the performance of these estimators across different non-parametric methods, signal-to-noise ratios, and sample sizes. The estimators considered are based on resampling: leave-one-out, parametric and non-parametric bootstrap, repeated cross-validation, and hold-out. The non-parametric methods used are regression trees, projection pursuit regression, and neural networks. The repeated-corrected 10-fold cross-validation estimator and the parametric bootstrap estimator achieved the best performance in the simulations.
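To illustrate the best-performing class of estimator, the following is a minimal sketch of a repeated 10-fold cross-validation estimate of the mean squared prediction error, assuming Python with scikit-learn; the simulated dataset and all parameter values are illustrative stand-ins for the paper's simulation design, and the bias correction used by the paper's repeated-corrected estimator is not shown.

    import numpy as np
    from sklearn.datasets import make_friedman1
    from sklearn.model_selection import RepeatedKFold
    from sklearn.tree import DecisionTreeRegressor

    # Hypothetical setup: simulated regression data with additive noise,
    # standing in for the paper's simulated populations.
    X, y = make_friedman1(n_samples=200, noise=1.0, random_state=0)

    # Repeat the 10-fold split 20 times with different random partitions.
    rkf = RepeatedKFold(n_splits=10, n_repeats=20, random_state=0)
    squared_errors = []
    for train_idx, test_idx in rkf.split(X):
        # A regression tree, one of the non-parametric methods in the study.
        model = DecisionTreeRegressor(random_state=0)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        squared_errors.extend((y[test_idx] - pred) ** 2)

    # Averaging over all held-out predictions across repeats gives the
    # repeated 10-fold CV estimate of the mean squared prediction error.
    print("Repeated 10-fold CV error estimate:", np.mean(squared_errors))

Averaging over repeated random partitions reduces the variance of the single-split 10-fold estimate, which is the motivation for the repeated variant examined in the paper.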
Keywords: Prediction error; Extra-sample error; In-sample error; Optimism; Cross-validation; Leave-one-out; Bootstrap; Covariance penalty; Regression trees; Projection pursuit regression; Neural networks
Date: 2010
Citations: 13
Downloads: http://www.sciencedirect.com/science/article/pii/S0167-9473(10)00106-4 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:csdana:v:54:y:2010:i:12:p:2976-2989
Computational Statistics & Data Analysis is currently edited by S.P. Azen
More articles in Computational Statistics & Data Analysis from Elsevier
Bibliographic data for series maintained by Catherine Liu.