Sub-optimality of some continuous shrinkage priors
Anirban Bhattacharya,
David B. Dunson,
Debdeep Pati and
Natesh S. Pillai
Stochastic Processes and their Applications, 2016, vol. 126, issue 12, 3828-3842
Abstract:
Two-component mixture priors provide a traditional way to induce sparsity in high-dimensional Bayes models. However, several aspects of such a prior, including computational complexities in high-dimensions, interpretation of exact zeros and non-sparse posterior summaries under standard loss functions, have motivated an amazing variety of continuous shrinkage priors, which can be expressed as global–local scale mixtures of Gaussians. Interestingly, we demonstrate that many commonly used shrinkage priors, including the Bayesian Lasso, do not have adequate posterior concentration in high-dimensional settings.
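The global–local scale-mixture representation mentioned in the abstract can be illustrated for the Bayesian Lasso: a Laplace prior on a coefficient is equivalent to a Gaussian whose variance is drawn from an exponential distribution. The sketch below checks this equivalence empirically; the rate parameter `gamma`, the seed, and the sample size are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
gamma = 1.0  # Laplace rate parameter (illustrative choice)

# Local variances: s_j ~ Exponential(rate = gamma^2 / 2).
# Marginalizing N(0, s_j) over this mixing density yields Laplace(rate = gamma).
s = rng.exponential(scale=2.0 / gamma**2, size=n)

# Conditionally Gaussian coefficients: theta_j | s_j ~ N(0, s_j)
theta = rng.normal(0.0, np.sqrt(s))

# Direct Laplace draws for comparison (same rate, scale b = 1/gamma)
lap = rng.laplace(0.0, 1.0 / gamma, size=n)

# Both samples should share the Laplace variance 2 / gamma^2 = 2
print(theta.var(), lap.var())
```

The same template covers other continuous shrinkage priors discussed in the paper: changing the mixing distribution on the local variances (and adding a shared global scale) produces the horseshoe, normal–gamma, and related global–local priors.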
Keywords: Bayesian; Convergence rate; High dimensional; Lasso; ℓ1; Lower bound; Penalized regression; Regularization; Shrinkage prior; Sub-optimal
Date: 2016
Downloads:
http://www.sciencedirect.com/science/article/pii/S030441491630134X
Full text for ScienceDirect subscribers only.
Persistent link: https://EconPapers.repec.org/RePEc:eee:spapps:v:126:y:2016:i:12:p:3828-3842
Ordering information: This journal article can be ordered from
http://www.elsevier.com/wps/find/supportfaq.cws_home/regional
DOI: 10.1016/j.spa.2016.08.007
Stochastic Processes and their Applications is currently edited by T. Mikosch
Bibliographic data for this series is maintained by Catherine Liu.