When does more regularization imply fewer degrees of freedom? Sufficient conditions and counterexamples
S. Kaufman and S. Rosset
Biometrika, 2014, vol. 101, issue 4, 771-784
Abstract:
Regularization aims to improve prediction performance by trading an increase in training error for better agreement between training and prediction errors; this agreement is often captured through decreased degrees of freedom, which govern the expected gap (optimism) between training and prediction error. In this paper we give examples showing that regularization can increase the degrees of freedom in common models, including the lasso and ridge regression. In such situations both training error and degrees of freedom increase, so the regularization is inherently without merit. Two important scenarios are described in which the expected reduction in degrees of freedom is guaranteed: all symmetric linear smoothers, and convex constrained linear regression models such as ridge regression and the lasso when compared with unconstrained linear regression.
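As a quick illustration of the quantity at stake (a sketch for this listing, not code from the paper): for a linear smoother yhat = H y with noise variance sigma^2, the covariance definition of degrees of freedom, df = (1/sigma^2) * sum_i Cov(yhat_i, y_i), reduces to tr(H). For ridge regression, H_lam = X (X'X + lam I)^{-1} X', so df(lam) = sum_i d_i^2 / (d_i^2 + lam) for the singular values d_i of X, which is monotone decreasing in lam, an instance of the guaranteed-reduction scenario for symmetric linear smoothers. The Python sketch below checks the exact trace formula against a Monte Carlo estimate of the covariance definition; the design matrix, coefficients, noise level, and the function names ridge_df_exact and ridge_df_mc are arbitrary choices for the demo.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative simulation setup (assumed, not from the paper).
n, p = 50, 10
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
sigma = 1.0

def ridge_df_exact(X, lam):
    # df(lam) = tr(H_lam) = sum_i d_i^2 / (d_i^2 + lam),
    # where d_i are the singular values of X.
    d = np.linalg.svd(X, compute_uv=False)
    return np.sum(d**2 / (d**2 + lam))

def ridge_df_mc(X, lam, n_rep=2000):
    # Monte Carlo estimate of the covariance definition
    # df = (1/sigma^2) * sum_i Cov(yhat_i, y_i).
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    mu = X @ beta
    ys = mu + sigma * rng.standard_normal((n_rep, n))
    yhats = ys @ H.T
    # Empirical covariance between fitted and observed values, coordinatewise.
    cov = np.mean((yhats - yhats.mean(0)) * (ys - ys.mean(0)), axis=0)
    return cov.sum() / sigma**2

for lam in [0.0, 1.0, 10.0, 100.0]:
    print(f"lambda={lam:6.1f}  df_exact={ridge_df_exact(X, lam):6.2f}  "
          f"df_mc={ridge_df_mc(X, lam):6.2f}")

Running this shows df falling from p = 10 toward 0 as lam grows, with the Monte Carlo estimate agreeing with the trace formula up to simulation noise; the paper's counterexamples concern settings where no such monotone decrease holds.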
Date: 2014
Downloads: http://hdl.handle.net/10.1093/biomet/asu034 (application/pdf)
Access to full text is restricted to subscribers.
Persistent link: https://EconPapers.repec.org/RePEc:oup:biomet:v:101:y:2014:i:4:p:771-784.