A joint convex penalty for inverse covariance matrix estimation
Ashwini Maurya
Computational Statistics & Data Analysis, 2014, vol. 75, issue C, 15-27
Abstract:
The paper proposes a joint convex penalty for estimating the Gaussian inverse covariance matrix. A proximal gradient method is developed to solve the resulting optimization problem with more than one penalty constraint. The analysis shows that imposing a single constraint is not sufficient and that the estimator can be improved by trading off two convex penalties. The developed framework extends to a wide array of constrained convex optimization problems. A simulation study compares the performance of the proposed method to the graphical lasso and the SPICE estimate of the inverse covariance matrix.
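The abstract only outlines the method, but the optimization it describes (proximal gradient applied to the Gaussian likelihood under two convex penalties) can be sketched. The following Python sketch is hypothetical: it assumes the joint penalty combines an l1 penalty with a trace penalty, uses a fixed step size, and floors eigenvalues to keep the iterate positive definite. The paper's exact penalties, step-size rule, and stopping criterion may differ; the function and parameter names are illustrative only.

    import numpy as np

    def soft_threshold(A, tau):
        # Elementwise soft-thresholding: proximal operator of tau * ||A||_1.
        return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

    def joint_penalty_precision(S, lam1=0.1, lam2=0.05, step=0.1,
                                n_iter=500, tol=1e-6):
        # Sketch of a proximal-gradient scheme (not the paper's exact algorithm)
        # for minimizing, over positive-definite Omega,
        #   tr(S Omega) - log det(Omega) + lam1 * ||Omega||_1 + lam2 * tr(Omega).
        p = S.shape[0]
        Omega = np.eye(p)
        for _ in range(n_iter):
            # Gradient of the smooth part tr(S Omega) - log det Omega + lam2 tr(Omega).
            grad = S - np.linalg.inv(Omega) + lam2 * np.eye(p)
            # Gradient step, then the prox of the l1 penalty (soft-thresholding).
            Omega_new = soft_threshold(Omega - step * grad, step * lam1)
            Omega_new = (Omega_new + Omega_new.T) / 2.0  # keep the iterate symmetric
            # Floor small eigenvalues so the iterate stays positive definite.
            w, V = np.linalg.eigh(Omega_new)
            Omega_new = V @ np.diag(np.maximum(w, 1e-8)) @ V.T
            if np.linalg.norm(Omega_new - Omega, ord="fro") < tol:
                Omega = Omega_new
                break
            Omega = Omega_new
        return Omega

    # Usage on simulated data: estimate the precision matrix of a 20-dimensional Gaussian.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 20))
    S = np.cov(X, rowvar=False)
    Omega_hat = joint_penalty_precision(S, lam1=0.1, lam2=0.05)

The trace penalty is linear on the positive-definite cone, so it is folded into the smooth gradient here, leaving only the l1 term to the proximal step; this is one convenient way to handle two penalties jointly, not necessarily the paper's.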
Keywords: Proximal gradient; Joint penalty; Convex optimization; Sparsity
Date: 2014
Citations: 5 (in EconPapers)
Downloads: http://www.sciencedirect.com/science/article/pii/S0167947314000267 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:csdana:v:75:y:2014:i:c:p:15-27
DOI: 10.1016/j.csda.2014.01.015
Computational Statistics & Data Analysis is currently edited by S.P. Azen
More articles in Computational Statistics & Data Analysis from Elsevier
Bibliographic data for series maintained by Catherine Liu.