## Electronic Journal of Statistics

### The graphical lasso: New insights and alternatives

#### Abstract

The graphical lasso [5] is a popular algorithm for learning the structure of an undirected Gaussian graphical model, using $\ell_{1}$ regularization to control the number of zeros in the precision matrix $\boldsymbol{\Theta}=\boldsymbol{\Sigma}^{-1}$ [2, 11]. The R package GLASSO [5] is popular and fast, and allows one to efficiently build a path of models for different values of the tuning parameter. Convergence of GLASSO can be tricky, however: the converged precision matrix might not be the inverse of the estimated covariance matrix, and the algorithm occasionally fails to converge with warm starts. In this paper we explain this behavior and propose new algorithms that appear to outperform GLASSO.

By studying the “normal equations” we see that GLASSO is solving the dual of the graphical lasso penalized likelihood by block coordinate ascent, a result that can also be found in [2]. In this dual, the target of estimation is $\boldsymbol{\Sigma}$, the covariance matrix, rather than the precision matrix $\boldsymbol{\Theta}$. We propose similar primal algorithms, P-GLASSO and DP-GLASSO, that also operate by block coordinate descent, but with $\boldsymbol{\Theta}$ as the optimization target. We study all of these algorithms, and in particular the different approaches to solving their coordinate sub-problems. We conclude that DP-GLASSO is superior from several points of view.
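The penalized likelihood discussed above can be stated concretely: the graphical lasso minimizes $-\log\det\boldsymbol{\Theta} + \mathrm{tr}(\mathbf{S}\boldsymbol{\Theta}) + \lambda\|\boldsymbol{\Theta}\|_{1}$ over positive definite $\boldsymbol{\Theta}$, where $\mathbf{S}$ is the sample covariance matrix. The following is a minimal NumPy sketch of this criterion, not of the GLASSO solver itself; the toy data and function name are illustrative only:

```python
import numpy as np

def graphical_lasso_objective(theta, S, lam):
    """Negative penalized Gaussian log-likelihood:
       -log det(Theta) + tr(S @ Theta) + lam * ||Theta||_1.
    (Here the l1 penalty is applied to all entries of Theta;
    some formulations penalize only the off-diagonal.)"""
    sign, logdet = np.linalg.slogdet(theta)
    assert sign > 0, "Theta must be positive definite"
    return -logdet + np.trace(S @ theta) + lam * np.abs(theta).sum()

# Toy sample covariance matrix (positive definite with high probability).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 4))
S = np.cov(X, rowvar=False)

# With lam = 0 the criterion is the unpenalized negative log-likelihood,
# whose unique minimizer is Theta = S^{-1}.
theta_mle = np.linalg.inv(S)
f_mle = graphical_lasso_objective(theta_mle, S, lam=0.0)
f_other = graphical_lasso_objective(theta_mle + 0.1 * np.eye(4), S, lam=0.0)
assert f_mle < f_other
```

For $\lambda > 0$ the minimizer is no longer $\mathbf{S}^{-1}$; the penalty shrinks entries of $\boldsymbol{\Theta}$ toward zero, which is precisely how sparsity in the estimated graph is induced.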

#### Article information

**Source**
Electron. J. Statist., Volume 6 (2012), 2125–2149.

**Dates**
First available in Project Euclid: 9 November 2012

https://projecteuclid.org/euclid.ejs/1352470831

**Digital Object Identifier**
doi:10.1214/12-EJS740

**Mathematical Reviews number (MathSciNet)**
MR3020259

**Zentralblatt MATH identifier**
1295.62066

#### Citation

Mazumder, Rahul; Hastie, Trevor. The graphical lasso: New insights and alternatives. Electron. J. Statist. 6 (2012), 2125--2149. doi:10.1214/12-EJS740. https://projecteuclid.org/euclid.ejs/1352470831

#### References

• [1] Alon, U., Barkai, N., Notterman, D. A., Gish, K., Ybarra, S., Mack, D. and Levine, A. J. (1999). Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays. *Proceedings of the National Academy of Sciences of the United States of America* 96 6745–6750.
• [2] Banerjee, O., Ghaoui, L. E. and d’Aspremont, A. (2008). Model Selection Through Sparse Maximum Likelihood Estimation for Multivariate Gaussian or Binary Data. *Journal of Machine Learning Research* 9 485–516.
• [3] Beck, A. and Teboulle, M. (2009). A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. *SIAM J. Imaging Sciences* 2 183–202.
• [4] Boyd, S. and Vandenberghe, L. (2004). *Convex Optimization*. Cambridge University Press.
• [5] Friedman, J., Hastie, T. and Tibshirani, R. (2007). Sparse inverse covariance estimation with the graphical lasso. *Biostatistics* 9 432–441.
• [6] Hastie, T., Tibshirani, R. and Friedman, J. (2009). *The Elements of Statistical Learning: Data Mining, Inference, and Prediction*, 2nd ed. Springer Series in Statistics. Springer, New York.
• [7] Mazumder, R. and Hastie, T. (2012). Exact Covariance Thresholding into Connected Components for Large-Scale Graphical Lasso. *Journal of Machine Learning Research* 13 781–794.
• [8] Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. *Annals of Statistics* 34 1436–1462.
• [9] Nesterov, Y. (2007). Gradient methods for minimizing composite objective function. Technical Report 76, Center for Operations Research and Econometrics (CORE), Catholic University of Louvain.
• [10] Rothman, A. J., Bickel, P. J., Levina, E. and Zhu, J. (2008). Sparse Permutation Invariant Covariance Estimation. *Electronic Journal of Statistics* 2 494–515.
• [11] Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. *Biometrika* 94 19–35.