Dirichlet--Laplace Priors for Optimal Shrinkage
Natesh S. Pillai and
David B. Dunson
Journal of the American Statistical Association, 2015, vol. 110, issue 512, 1479-1490
Penalized regression methods, such as L1 regularization, are routinely used in high-dimensional applications, and there is a rich literature on optimality properties under sparsity assumptions. In the Bayesian paradigm, sparsity is routinely induced through two-component mixture priors having a probability mass at zero, but such priors encounter daunting computational problems in high dimensions. This has motivated continuous shrinkage priors, which can be expressed as global-local scale mixtures of Gaussians, facilitating computation. In contrast to the frequentist literature, little is known about the properties of such priors and the convergence and concentration of the corresponding posterior distribution. In this article, we propose a new class of Dirichlet--Laplace priors, which possess optimal posterior concentration and lead to efficient posterior computation. Finite sample performance of Dirichlet--Laplace priors relative to alternatives is assessed in simulated and real data examples.
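As a rough illustration of the global-local scale-mixture idea the abstract describes, the sketch below simulates draws from a Dirichlet--Laplace-style hierarchy: component scales drawn from a Dirichlet distribution, a global Gamma scale, and a local exponential mixing variable giving each coordinate a conditionally Laplace (double-exponential) distribution. The function name, the default concentration parameter `a`, and the specific Gamma/exponential parameterizations here are illustrative assumptions, not a verbatim transcription of the article's construction.

```python
import numpy as np

def sample_dl_style_prior(n, a=0.5, size=1000, seed=0):
    """Draw `size` vectors of length `n` from a Dirichlet--Laplace-style
    global-local scale mixture of Gaussians (illustrative parameterization)."""
    rng = np.random.default_rng(seed)
    # Global scale: tau ~ Gamma(shape = n*a, scale = 2), i.e. rate 1/2 (assumed).
    tau = rng.gamma(n * a, 2.0, size=size)
    # Local weights: phi ~ Dirichlet(a, ..., a); small `a` concentrates
    # mass near the simplex corners, inducing near-sparsity.
    phi = rng.dirichlet(np.full(n, a), size=size)
    # Exponential mixing: psi_j ~ Exp(rate 1/2), so that marginally
    # theta_j | phi, tau is double exponential with scale phi_j * tau.
    psi = rng.exponential(2.0, size=(size, n))
    # Gaussian draw with per-coordinate scale sqrt(psi_j) * phi_j * tau.
    theta = rng.normal(0.0, np.sqrt(psi) * phi * tau[:, None])
    return theta

draws = sample_dl_style_prior(10, a=0.5, size=500)
```

Plotting a single draw typically shows most coordinates shrunk near zero with a few large entries, the qualitative behavior continuous shrinkage priors are designed to produce.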
Persistent link: https://EconPapers.repec.org/RePEc:taf:jnlasa:v:110:y:2015:i:512:p:1479-1490