Robustified L2 boosting
Roman Werner Lutz,
Markus Kalisch and
Peter Bühlmann
Computational Statistics & Data Analysis, 2008, vol. 52, issue 7, 3331-3341
Abstract:
Five robustifications of L2 boosting for linear regression, with various robustness properties, are considered. The first two use the Huber loss as the implementing loss function for boosting, and the second two use robust simple linear regression for the fitting in L2 boosting (i.e., robust base learners). Both concepts can be applied with or without down-weighting of leverage points. Our last method uses robust correlation estimates and appears to be the most robust. Crucial advantages of all methods are that they do not require computing covariance matrices of all covariates and that they do not have to identify multivariate leverage points. When there are no outliers, the robust methods are only slightly worse than L2 boosting; in the contaminated case, however, the robust methods outperform L2 boosting by a large margin. Some of the robustifications are also computationally highly efficient and therefore well suited for truly high-dimensional problems.
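As a rough illustration of two of the ingredients the abstract mentions (componentwise L2 boosting with simple linear base learners, and a Huber-type robustification of the loss), the sketch below shows one generic way these pieces can fit together. It is not the authors' implementation: the function names, the step size nu, the number of steps, and the Huber constant k = 1.345 are all illustrative assumptions, and only the Huber-loss variant of the five robustifications is sketched.

```python
import numpy as np

def huber_psi(r, k=1.345):
    """Huber psi function: clip residuals at +/- k (k = 1.345 is a common default)."""
    return np.clip(r, -k, k)

def componentwise_boost(X, y, n_steps=100, nu=0.1, loss="l2", k=1.345):
    """
    Componentwise boosting for linear regression (illustrative sketch).

    At each step, fit a simple linear regression of the current
    (pseudo-)residuals on each covariate separately and update only the
    coefficient of the best-fitting covariate by a small step nu.

    loss="l2"    : classical L2 boosting (working residuals y - f).
    loss="huber" : gradient boosting with the Huber loss; the negative
                   gradient is the clipped residual huber_psi(y - f, k).
    """
    n, p = X.shape
    beta = np.zeros(p)
    f = np.zeros(n)
    for _ in range(n_steps):
        r = y - f
        if loss == "huber":
            r = huber_psi(r, k)          # negative gradient of the Huber loss
        num = X.T @ r                     # <x_j, r> for each covariate j
        den = np.sum(X ** 2, axis=0)      # ||x_j||^2 for each covariate j
        gamma = num / den                 # simple least-squares slopes
        rss_reduction = gamma * num       # drop in residual sum of squares per covariate
        j = np.argmax(rss_reduction)      # best-fitting covariate
        beta[j] += nu * gamma[j]
        f += nu * gamma[j] * X[:, j]
    return beta

# Small usage example with a few response outliers (hypothetical data):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
y = 2.0 * X[:, 0] + rng.normal(size=100)
y[:5] += 20.0                             # contaminate a few responses
beta_hat = componentwise_boost(X, y, loss="huber")
```

The clipping of residuals bounds the influence of outlying responses on each boosting update; down-weighting of leverage points, robust base learners, and the robust-correlation variant described in the abstract would require additional machinery not shown here.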
Date: 2008
Downloads: http://www.sciencedirect.com/science/article/pii/S0167-9473(07)00441-0 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:csdana:v:52:y:2008:i:7:p:3331-3341