
Post-ℓ1-penalized estimators in high-dimensional linear regression models

Alexandre Belloni and Victor Chernozhukov
Additional contact information
Alexandre Belloni: Institute for Fiscal Studies

No CWP13/10, CeMMAP working papers from Centre for Microdata Methods and Practice, Institute for Fiscal Studies

Abstract:

In this paper we study post-penalized estimators which apply ordinary, unpenalized linear regression to the model selected by first-step penalized estimators, typically LASSO. It is well known that LASSO can estimate the regression function at nearly the oracle rate, and is thus hard to improve upon. We show that post-LASSO performs at least as well as LASSO in terms of the rate of convergence, and has the advantage of a smaller bias. Remarkably, this performance occurs even if the LASSO-based model selection 'fails' in the sense of missing some components of the 'true' regression model. By the 'true' model we mean here the best s-dimensional approximation to the regression function chosen by the oracle. Furthermore, post-LASSO can perform strictly better than LASSO, in the sense of a strictly faster rate of convergence, if the LASSO-based model selection correctly includes all components of the 'true' model as a subset and also achieves a sufficient sparsity. In the extreme case, when LASSO perfectly selects the 'true' model, the post-LASSO estimator becomes the oracle estimator. An important ingredient in our analysis is a new sparsity bound on the dimension of the model selected by LASSO which guarantees that this dimension is at most of the same order as the dimension of the 'true' model. Our rate results are non-asymptotic and hold in both parametric and nonparametric models. Moreover, our analysis is not limited to the LASSO estimator in the first step, but also applies to other estimators, for example, the trimmed LASSO, Dantzig selector, or any other estimator with good rates and good sparsity. Our analysis covers both traditional trimming and a new practical, completely data-driven trimming scheme that induces maximal sparsity subject to maintaining a certain goodness-of-fit. The latter scheme has theoretical guarantees similar to those of LASSO or post-LASSO, but it dominates these procedures as well as traditional trimming in a wide variety of experiments.
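
For readers who want a concrete picture of the two-step procedure described above, here is a minimal, illustrative sketch in Python (using NumPy and scikit-learn). The simulated design, the cross-validated penalty choice, and the 5% goodness-of-fit tolerance used in the trimming step are assumptions made for this example only; it is a rough stand-in for, not the paper's own implementation of, post-LASSO and the data-driven trimming scheme.

```python
# Illustrative sketch only: two-step post-LASSO, plus an assumed version of the
# data-driven trimming idea (maximal sparsity subject to a goodness-of-fit
# constraint). Design, penalty choice and tolerance are invented for the example.
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                     # sample size, dimension, true sparsity
X = rng.standard_normal((n, p))
beta = np.concatenate([np.ones(s), np.zeros(p - s)])
y = X @ beta + rng.standard_normal(n)

def ols_on(support):
    """Unpenalized OLS restricted to the selected regressors."""
    fit = LinearRegression().fit(X[:, support], y)
    b = np.zeros(p)
    b[support] = fit.coef_
    return b, ((y - fit.predict(X[:, support])) ** 2).mean()

# Step 1: first-step penalized estimator (LASSO, penalty chosen by cross-validation).
lasso = LassoCV(cv=5).fit(X, y)
support = np.flatnonzero(lasso.coef_)      # model selected by LASSO

# Step 2: post-LASSO -- refit the selected model by OLS to remove the shrinkage bias.
beta_post, mse_post = ols_on(support)

# Assumed stand-in for data-driven trimming: among thresholded models, keep the
# sparsest one whose OLS refit loses at most 5% in-sample fit relative to post-LASSO.
best_support = support
for t in sorted(np.abs(lasso.coef_[support]), reverse=True):
    cand = np.flatnonzero(np.abs(lasso.coef_) >= t)
    _, mse = ols_on(cand)
    if mse <= 1.05 * mse_post:
        best_support = cand
        break

print("LASSO support size:", support.size, "| trimmed support size:", best_support.size)
print("LASSO error:", np.linalg.norm(lasso.coef_ - beta),
      "| post-LASSO error:", np.linalg.norm(beta_post - beta))
```

In this toy design, LASSO typically selects the five true regressors (possibly with a few extras), and the OLS refit then removes the shrinkage bias, mirroring the extreme case noted in the abstract where perfect selection makes post-LASSO coincide with the oracle estimator.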

Date: 2010-06-03
New Economics Papers: this item is included in nep-ecm
References: Add references at CitEc
Citations: View citations in EconPapers (2)

Downloads: (external link)
http://cemmap.ifs.org.uk/wps/cwp1310.pdf (application/pdf)

Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.

Persistent link: https://EconPapers.repec.org/RePEc:ifs:cemmap:13/10

Ordering information: This working paper can be ordered from
The Institute for Fiscal Studies, 7 Ridgmount Street, London WC1E 7AE

More papers in CeMMAP working papers from the Centre for Microdata Methods and Practice, Institute for Fiscal Studies, 7 Ridgmount Street, London WC1E 7AE. Contact information at EDIRC.
Bibliographic data for series maintained by Emma Hyman.

 
Handle: RePEc:ifs:cemmap:13/10