An efficient ADMM algorithm for high dimensional precision matrix estimation via penalized quadratic loss
Cheng Wang and Binyan Jiang
Computational Statistics & Data Analysis, 2020, vol. 142, issue C
Abstract:
The estimation of high dimensional precision matrices has been a central topic in statistical learning. However, as the number of parameters scales quadratically with the dimension p, many state-of-the-art methods do not scale well to problems with a very large p. In this paper, we propose a very efficient algorithm for precision matrix estimation via penalized quadratic loss functions. Under the high-dimension, low-sample-size setting, the computational complexity of our algorithm is linear in both the sample size and the number of parameters. Such a complexity is in some sense optimal, as it matches the complexity needed to compute the sample covariance matrix. Numerical studies show that our algorithm is much more efficient than other state-of-the-art methods when the dimension p is very large.
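To make the general idea concrete, below is a minimal sketch, in Python, of an ADMM iteration for a penalized quadratic loss of the assumed form min_Omega 0.5*tr(Omega S Omega) - tr(Omega) + lambda*||Omega||_1, with S the sample covariance matrix. This specific loss, the variable splitting, and all names and tuning parameters (admm_quadratic_loss, rho, lam) are illustrative assumptions, not the authors' implementation. In particular, the Omega-update here relies on a full eigendecomposition of S, which costs O(p^3); the paper reports complexity linear in the sample size and the number of parameters, so its actual update must avoid this step. The sketch only illustrates the ADMM splitting between the smooth quadratic loss and the l1 penalty.

# Minimal sketch (assumptions noted above; not the authors' algorithm) of ADMM
# for a penalized quadratic-loss precision matrix estimator:
#     minimize_Omega  0.5 * tr(Omega S Omega) - tr(Omega) + lam * ||Omega||_1
# The Omega-update solves a Sylvester-type equation in the eigenbasis of S;
# the Theta-update is elementwise soft-thresholding.
import numpy as np


def soft_threshold(A, kappa):
    """Elementwise soft-thresholding operator."""
    return np.sign(A) * np.maximum(np.abs(A) - kappa, 0.0)


def admm_quadratic_loss(S, lam, rho=1.0, max_iter=200, tol=1e-5):
    """ADMM for the penalized quadratic loss sketched above (assumed form)."""
    p = S.shape[0]
    # Eigendecomposition of S, computed once and reused in every Omega-update.
    d, V = np.linalg.eigh(S)
    # Denominators for solving (S + rho*I) Omega + Omega (S + rho*I) = C
    # in the eigenbasis of S: entry (i, j) is divided by d_i + d_j + 2*rho.
    denom = d[:, None] + d[None, :] + 2.0 * rho

    Theta = np.eye(p)
    U = np.zeros((p, p))
    for _ in range(max_iter):
        # Omega-update: stationarity of the quadratic subproblem gives
        # (S + rho*I) Omega + Omega (S + rho*I) = 2*I + 2*rho*(Theta - U).
        C = 2.0 * np.eye(p) + 2.0 * rho * (Theta - U)
        Omega = V @ ((V.T @ C @ V) / denom) @ V.T

        # Theta-update: soft-thresholding of the off-diagonals; leaving the
        # diagonal unpenalized is a common convention and an assumption here.
        Theta_old = Theta
        A = Omega + U
        Theta = soft_threshold(A, lam / rho)
        np.fill_diagonal(Theta, np.diag(A))

        # Dual update and a simple relative-change stopping rule.
        U = U + Omega - Theta
        if np.linalg.norm(Theta - Theta_old, "fro") < tol * max(1.0, np.linalg.norm(Theta, "fro")):
            break
    return Theta


if __name__ == "__main__":
    # Toy high-dimension, low-sample-size example with illustrative settings.
    rng = np.random.default_rng(0)
    n, p = 50, 200
    X = rng.standard_normal((n, p))
    S = np.cov(X, rowvar=False)
    Omega_hat = admm_quadratic_loss(S, lam=0.2)
    print("proportion of nonzero entries:", np.mean(Omega_hat != 0))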
Keywords: ADMM; High dimension; Penalized quadratic loss; Precision matrix
Date: 2020
Downloads: http://www.sciencedirect.com/science/article/pii/S0167947319301574 (full text for ScienceDirect subscribers only)
Persistent link: https://EconPapers.repec.org/RePEc:eee:csdana:v:142:y:2020:i:c:s0167947319301574
DOI: 10.1016/j.csda.2019.106812