Unraveling Similarities and Differences Between Non-Negative Garrote and Adaptive Lasso: A Simulation Study in Low- and High-Dimensional Data
Edwin Kipruto and
Willi Sauerbrei
Additional contact information
Edwin Kipruto: Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center, University of Freiburg, Stefan-Meier-Street 26, 79104 Freiburg, Germany
Willi Sauerbrei: Institute of Medical Biometry and Statistics, Faculty of Medicine and Medical Center, University of Freiburg, Stefan-Meier-Street 26, 79104 Freiburg, Germany
Stats, 2025, vol. 8, issue 3, 1-33
Abstract:
Penalized regression methods are widely used for variable selection. Non-negative garrote (NNG) was one of the earliest methods to combine variable selection with shrinkage of regression coefficients, followed by lasso. About a decade after the introduction of lasso, adaptive lasso (ALASSO) was proposed to address lasso’s limitations. ALASSO has two tuning parameters (λ and γ), and its penalty resembles that of NNG when γ = 1, though NNG imposes additional constraints. Given ALASSO’s greater flexibility, which may increase instability, this study investigates whether NNG provides any practical benefit or can be replaced by ALASSO. We conducted simulations in both low- and high-dimensional settings to compare selected variables, coefficient estimates, and prediction accuracy. Ordinary least squares and ridge estimates were used as initial estimates. NNG and ALASSO (γ = 1) showed similar performance in low-dimensional settings with low correlation, large samples, and moderate to high R². However, under high correlation, small samples, and low R², their selected variables and estimates differed, though prediction accuracy remained comparable. When γ ≠ 1, the differences between NNG and ALASSO became more pronounced, with ALASSO generally performing better. Assuming linear relationships between predictors and the outcome, the results suggest that NNG may offer no practical advantage over ALASSO. The γ parameter in ALASSO allows for adaptability to model complexity, making ALASSO a more flexible and practical alternative to NNG.
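For context, the standard formulations of the two estimators referred to in the abstract are sketched below (the notation, including the symbol for the initial estimate, is our own and is not taken from the article):

% Non-negative garrote (Breiman, 1995): each initial coefficient estimate is rescaled
% by a non-negative shrinkage factor c_j.
\[
\hat{c} = \arg\min_{c}\;\Big\| y - \sum_{j=1}^{p} c_j \hat{\beta}^{\mathrm{init}}_j x_j \Big\|_2^2
          + \lambda \sum_{j=1}^{p} c_j ,
\qquad c_j \ge 0,
\qquad \hat{\beta}^{\mathrm{NNG}}_j = \hat{c}_j \,\hat{\beta}^{\mathrm{init}}_j .
\]

% Adaptive lasso (Zou, 2006): penalty weights are derived from the initial estimates;
% no sign constraint is imposed.
\[
\hat{\beta}^{\mathrm{ALASSO}} = \arg\min_{\beta}\;\| y - X\beta \|_2^2
          + \lambda \sum_{j=1}^{p} \frac{|\beta_j|}{\,|\hat{\beta}^{\mathrm{init}}_j|^{\gamma}} .
\]

Writing β_j = c_j β̂_j^init in the NNG problem shows that its penalty equals the ALASSO penalty with γ = 1, while the constraint c_j ≥ 0 additionally forces each NNG coefficient to keep the sign of its initial estimate; this is the "additional constraint" distinguishing the two methods in the abstract.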
Keywords: adaptive lasso; non-negative garrote; prediction accuracy; similarity; simulation study; variable selection
JEL-codes: C1 C10 C11 C14 C15 C16
Date: 2025
Downloads: (external link)
https://www.mdpi.com/2571-905X/8/3/70/pdf (application/pdf)
https://www.mdpi.com/2571-905X/8/3/70/ (text/html)
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:gam:jstats:v:8:y:2025:i:3:p:70-:d:1718858
Stats is currently edited by Mrs. Minnie Li
More articles in Stats from MDPI
Bibliographic data for series maintained by MDPI Indexing Manager.