Sparsity and smoothness via the fused lasso
Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu and Keith Knight
Journal of the Royal Statistical Society Series B, 2005, vol. 67, issue 1, 91-108
The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the 'fused lasso', a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the 'hinge' loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data. Copyright 2005 Royal Statistical Society.
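The criterion described in the abstract, least squares loss plus an L1 penalty on the coefficients and an L1 penalty on their successive differences, can be sketched as follows. This is a minimal illustration of evaluating that objective, not the paper's fitting algorithm; the function name and the pure-Python evaluation are ours.

```python
def fused_lasso_objective(y, X, beta, lam1, lam2):
    """Evaluate the fused-lasso penalized least squares criterion.

    y    : list of N responses
    X    : N x p design matrix, given as a list of rows
    beta : p coefficients, assumed to be in a meaningful order
    lam1 : weight on the L1 norm of the coefficients (sparsity)
    lam2 : weight on the L1 norm of successive differences (local constancy)
    """
    # Residual sum of squares: sum_i (y_i - x_i . beta)^2
    rss = sum(
        (yi - sum(xij * bj for xij, bj in zip(xi, beta))) ** 2
        for yi, xi in zip(y, X)
    )
    # Sparsity penalty on the coefficients themselves
    l1 = lam1 * sum(abs(b) for b in beta)
    # Fusion penalty on successive differences of adjacent coefficients
    fuse = lam2 * sum(abs(beta[j] - beta[j - 1]) for j in range(1, len(beta)))
    return rss + l1 + fuse

# A locally constant coefficient profile (0, 1, 1, 0) pays for only two
# "jumps" in the fusion term, regardless of how long the flat run is.
obj = fused_lasso_objective(
    [0, 1, 1, 0],                       # y
    [[1, 0, 0, 0], [0, 1, 0, 0],
     [0, 0, 1, 0], [0, 0, 0, 1]],       # X = identity
    [0, 1, 1, 0], 1.0, 1.0)             # beta, lam1, lam2
# obj = 0 (perfect fit) + 2 (L1 penalty) + 2 (fusion penalty) = 4
```

Minimizing this objective is a quadratic program and in practice requires a dedicated solver; the sketch above only evaluates the criterion for a given coefficient vector.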
Downloads: (external link)
http://www.blackwell-synergy.com/doi/abs/10.1111/j.1467-9868.2005.00490.x link to full text (text/html)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:bla:jorssb:v:67:y:2005:i:1:p:91-108