The Annals of Statistics

Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation

T. Tony Cai, Weidong Liu, and Harrison H. Zhou


Abstract

The precision matrix is of significant importance in a wide range of applications in multivariate analysis. This paper considers adaptive minimax estimation of sparse precision matrices in the high-dimensional setting. Optimal rates of convergence are established for a range of matrix norm losses. A fully data-driven estimator based on adaptive constrained $\ell_{1}$ minimization is proposed and its rate of convergence is obtained over a collection of parameter spaces. The estimator, called ACLIME, is easy to implement and performs well numerically.

A major step in establishing the minimax rate of convergence is the derivation of a rate-sharp lower bound. A “two-directional” lower bound technique is applied to obtain the minimax lower bound. The upper and lower bounds together yield the optimal rates of convergence for sparse precision matrix estimation and show that the ACLIME estimator is adaptively minimax rate optimal for a collection of parameter spaces and a range of matrix norm losses simultaneously.
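The adaptive estimator builds on the CLIME approach of Cai, Liu and Luo (2011) in the reference list: each column of the precision matrix is recovered by minimizing its $\ell_{1}$ norm subject to an $\ell_{\infty}$ constraint on the residual, a problem solvable by linear programming. The sketch below implements the basic, non-adaptive CLIME step (ACLIME additionally chooses the constraint level entrywise in a data-driven way, which is not shown); the function name and the use of `scipy.optimize.linprog` are illustrative choices, not part of the paper.

```python
# Sketch of a CLIME-style constrained l1-minimization estimator.
# For each column i, solve  min ||beta||_1  s.t.  ||S beta - e_i||_inf <= lam,
# then symmetrize by keeping the smaller-magnitude of each entry pair.
import numpy as np
from scipy.optimize import linprog

def clime(S, lam):
    p = S.shape[0]
    Omega = np.zeros((p, p))
    c = np.ones(2 * p)                   # minimize sum(u) + sum(v), with beta = u - v
    A = np.block([[S, -S], [-S, S]])     # encodes |S beta - e_i| <= lam componentwise
    for i in range(p):
        e = np.zeros(p)
        e[i] = 1.0
        b = np.concatenate([lam + e, lam - e])
        res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None), method="highs")
        Omega[:, i] = res.x[:p] - res.x[p:]
    # CLIME symmetrization: between omega_ij and omega_ji, keep the smaller in magnitude
    keep = np.abs(Omega) <= np.abs(Omega.T)
    return np.where(keep, Omega, Omega.T)

# With the population covariance and a small lambda, the estimate
# should be close to the true precision matrix inv(S).
S = np.array([[1.0, 0.5], [0.5, 1.0]])
Omega_hat = clime(S, lam=0.01)
```

Each column problem is a standard linear program after splitting $\beta$ into positive and negative parts, so the estimator scales to moderate dimensions with off-the-shelf LP solvers.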

Article information

Source
Ann. Statist. Volume 44, Number 2 (2016), 455-488.

Dates
Received: January 2013
Revised: June 2013
First available in Project Euclid: 17 March 2016

Permanent link to this document
https://projecteuclid.org/euclid.aos/1458245724

Digital Object Identifier
doi:10.1214/13-AOS1171

Mathematical Reviews number (MathSciNet)
MR3476606

Zentralblatt MATH identifier
1341.62115

Subjects
Primary: 62H12: Estimation
Secondary: 62F12: Asymptotic properties of estimators 62G09: Resampling methods

Keywords
Constrained $\ell_{1}$-minimization; covariance matrix; graphical model; minimax lower bound; optimal rate of convergence; precision matrix; sparsity; spectral norm

Citation

Cai, T. Tony; Liu, Weidong; Zhou, Harrison H. Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Ann. Statist. 44 (2016), no. 2, 455--488. doi:10.1214/13-AOS1171. https://projecteuclid.org/euclid.aos/1458245724.



References

  • Bickel, P. J. and Levina, E. (2008a). Regularized estimation of large covariance matrices. Ann. Statist. 36 199–227.
  • Bickel, P. J. and Levina, E. (2008b). Covariance regularization by thresholding. Ann. Statist. 36 2577–2604.
  • Cai, T. T. and Jiang, T. (2011). Limiting laws of coherence of random matrices with applications to testing covariance structure and construction of compressed sensing matrices. Ann. Statist. 39 1496–1525.
  • Cai, T. and Liu, W. (2011). Adaptive thresholding for sparse covariance matrix estimation. J. Amer. Statist. Assoc. 106 672–684.
  • Cai, T., Liu, W. and Luo, X. (2011). A constrained $\ell_{1}$ minimization approach to sparse precision matrix estimation. J. Amer. Statist. Assoc. 106 594–607.
  • Cai, T. T. and Yuan, M. (2012). Adaptive covariance matrix estimation through block thresholding. Ann. Statist. 40 2014–2042.
  • Cai, T. T., Zhang, C.-H. and Zhou, H. H. (2010). Optimal rates of convergence for covariance matrix estimation. Ann. Statist. 38 2118–2144.
  • Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix estimation. Ann. Statist. 40 2389–2420.
  • d’Aspremont, A., Banerjee, O. and El Ghaoui, L. (2008). First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl. 30 56–66.
  • El Karoui, N. (2008). Operator norm consistent estimation of large-dimensional sparse covariance matrices. Ann. Statist. 36 2717–2756.
  • Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432–441.
  • Lam, C. and Fan, J. (2009). Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statist. 37 4254–4278.
  • Lauritzen, S. L. (1996). Graphical Models. Oxford Statistical Science Series 17. Oxford Univ. Press, New York.
  • Liu, H., Lafferty, J. and Wasserman, L. (2009). The nonparanormal: Semiparametric estimation of high dimensional undirected graphs. J. Mach. Learn. Res. 10 2295–2328.
  • Liu, W.-D., Lin, Z. and Shao, Q.-M. (2008). The asymptotic distribution and Berry-Esseen bound of a new test for independence in high dimension with an application to stochastic optimization. Ann. Appl. Probab. 18 2337–2366.
  • Liu, H., Han, F., Yuan, M., Lafferty, J. and Wasserman, L. (2012). High-dimensional semiparametric Gaussian copula graphical models. Ann. Statist. 40 2293–2326.
  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
  • Petrov, V. V. (1975). Sums of Independent Random Variables. Springer, New York. Translated from the Russian by A. A. Brown, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 82.
  • Petrov, V. V. (1995). Limit Theorems of Probability Theory: Sequences of Independent Random Variables. Oxford Studies in Probability 4. Oxford Univ. Press, New York.
  • Ravikumar, P., Wainwright, M. J., Raskutti, G. and Yu, B. (2011). High-dimensional covariance estimation by minimizing $\ell_{1}$-penalized log-determinant divergence. Electron. J. Stat. 5 935–980.
  • Rothman, A. J., Bickel, P. J., Levina, E. and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electron. J. Stat. 2 494–515.
  • Saon, G. and Chien, J. T. (2011). Bayesian sensing hidden Markov models for speech recognition. In 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 5056–5059. IEEE, Prague.
  • Thorin, G. O. (1948). Convexity theorems generalizing those of M. Riesz and Hadamard with some applications. Comm. Sem. Math. Univ. Lund [Medd. Lunds Univ. Mat. Sem.] 9 1–58.
  • Xue, L. and Zou, H. (2012). Regularized rank-based estimation of high-dimensional nonparanormal graphical models. Ann. Statist. 40 2541–2571.
  • Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming. J. Mach. Learn. Res. 11 2261–2286.
  • Yuan, M. and Lin, Y. (2007). Model selection and estimation in the Gaussian graphical model. Biometrika 94 19–35.