Fast sparse regression and classification

Jerome H. Friedman

International Journal of Forecasting, 2012, vol. 28, issue 3, 722-738

Abstract: Many present-day applications of statistical learning involve large numbers of predictor variables. Often, that number is much larger than the number of cases or observations available for training the learning algorithm. In such situations, traditional methods fail. Recently, new techniques based on regularization have been developed which can often produce accurate models in these settings. This paper describes the basic principles underlying regularization, and then focuses on those methods that exploit the sparsity of the predicting model. The potential merits of these methods are then explored by example.
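
The setting the abstract describes (many more predictors than observations, with only a few truly relevant) can be illustrated with a short generic sketch. The example below uses scikit-learn's Lasso, one of the penalized methods named in the keywords, on synthetic data; it is purely illustrative and is not the algorithm developed in the paper.

```python
# Illustrative sketch only: l1-penalized (lasso) regression in the p >> n
# setting the abstract describes. Generic scikit-learn usage, not the
# paper's own method.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, p = 50, 500                           # far fewer observations than predictors
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 0.5]   # only 5 predictors truly matter
y = X @ beta + 0.1 * rng.standard_normal(n)

# The l1 penalty shrinks most coefficients exactly to zero, so the fit
# performs variable selection and estimation at the same time.
model = Lasso(alpha=0.1).fit(X, y)
print("nonzero coefficients:", np.count_nonzero(model.coef_))
```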

Keywords: Regression; Classification; Regularization; Sparsity; Variable selection; Bridge regression; Lasso; Elastic net; lp-norm penalization
Date: 2012
Citations: 13 (as tracked in EconPapers)

Downloads: http://www.sciencedirect.com/science/article/pii/S0169207012000490 (external link; full text for ScienceDirect subscribers only)

Persistent link: https://EconPapers.repec.org/RePEc:eee:intfor:v:28:y:2012:i:3:p:722-738

DOI: 10.1016/j.ijforecast.2012.05.001

International Journal of Forecasting is currently edited by R. J. Hyndman

More articles in International Journal of Forecasting from Elsevier
Bibliographic data for this series is maintained by Catherine Liu.

 
Handle: RePEc:eee:intfor:v:28:y:2012:i:3:p:722-738