
Covariance estimation via sparse Kronecker structures

Chenlei Leng and Guangming Pan



The problem of estimating covariance matrices is central to statistical analysis and has been extensively studied when the data are vectors. This paper studies a novel Kronecker-structured approach for estimating such matrices when the data are matrices or arrays. Focusing on matrix-variate data, we present simple approaches to estimating the row and the column correlation matrices, formulated separately via convex optimization. We also discuss simple thresholding estimators motivated by recent developments in the literature. Non-asymptotic results show that the proposed method greatly outperforms methods that ignore the matrix structure of the data. In particular, our framework allows the dimensionality of the data to grow at an arbitrary order even for a fixed sample size, and it accommodates flexible distributions beyond normality. Simulations and a data analysis further confirm the competitiveness of the method. An extension to general array data is also outlined.
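The abstract does not spell out the estimators' exact form. As a minimal sketch of the general idea only — separate row and column covariance estimates pooled across matrix-valued samples, followed by entrywise soft-thresholding — the following NumPy illustration may help; all function names, the simulation setup, and the tuning constants are our own assumptions, not the paper's method.

```python
import numpy as np

def sample_row_col_cov(X):
    """Separable sample covariances for matrix-variate data.

    X has shape (n, p, q): n observed p x q matrices, assumed centered.
    Returns the p x p row covariance and the q x q column covariance,
    each averaged over the other dimension.
    """
    n, p, q = X.shape
    U = np.einsum('ijk,ilk->jl', X, X) / (n * q)   # sum_i X_i X_i^T / (n q)
    V = np.einsum('ijk,ijl->kl', X, X) / (n * p)   # sum_i X_i^T X_i / (n p)
    return U, V

def soft_threshold(S, lam):
    """Soft-threshold the off-diagonal entries of S, keeping the diagonal."""
    T = np.sign(S) * np.maximum(np.abs(S) - lam, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T

# Illustration with simulated matrix-normal data X_i = U_true^{1/2} Z_i:
# the true row covariance is AR(1) and the column covariance is I_q.
rng = np.random.default_rng(0)
p, q, n = 5, 8, 200
U_true = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
L = np.linalg.cholesky(U_true)
X = np.stack([L @ rng.standard_normal((p, q)) for _ in range(n)])
U_hat, V_hat = sample_row_col_cov(X)
U_sparse = soft_threshold(U_hat, 0.1)   # sparse estimate of the row factor
Sigma_hat = np.kron(soft_threshold(V_hat, 0.1), U_sparse)  # (pq) x (pq)
```

Note that a Kronecker product V ⊗ U only identifies the two factors up to a common scalar, which is why separable estimators are typically reported after a normalization such as fixing the trace of one factor.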

Article information

Bernoulli, Volume 24, Number 4B (2018), 3833-3863.

Received: June 2016
Revised: April 2017
First available in Project Euclid: 18 April 2018


Keywords: covariance matrix; Kronecker structure; matrix data; non-asymptotic bound


Leng, Chenlei; Pan, Guangming. Covariance estimation via sparse Kronecker structures. Bernoulli 24 (2018), no. 4B, 3833--3863. doi:10.3150/17-BEJ980.


