The Annals of Statistics

Covariance and precision matrix estimation for high-dimensional time series

Xiaohui Chen, Mengyu Xu, and Wei Biao Wu

Full-text: Open access


We consider estimation of covariance matrices and their inverses (a.k.a. precision matrices) for high-dimensional stationary and locally stationary time series. In the latter case the covariance matrices evolve smoothly in time, thus forming a covariance matrix function. Using the functional dependence measure of Wu [Proc. Natl. Acad. Sci. USA 102 (2005) 14150–14154 (electronic)], we obtain the rate of convergence of the thresholded estimate and show how the dependence affects that rate. Asymptotic properties are also obtained for the precision matrix estimate, which is based on the graphical Lasso principle. Our theory substantially generalizes earlier results by allowing dependence and nonstationarity and by relaxing the associated moment conditions.
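To fix ideas, the thresholded covariance estimate mentioned in the abstract can be sketched as follows. This is a minimal, hypothetical illustration (not the authors' code): hard-thresholding of the sample covariance matrix in the style of Bickel and Levina (2008a), applied to a simulated multivariate AR(1) series; the threshold level `lam` and the toy data-generating process are assumptions chosen for illustration only.

```python
import numpy as np

def thresholded_cov(X, lam):
    """Hard-thresholded sample covariance matrix.

    X   : (n, p) array, rows are time points of a p-dimensional series.
    lam : threshold level; off-diagonal entries with absolute value
          at most lam are set to zero, inducing sparsity.
    """
    S = np.cov(X, rowvar=False)            # p x p sample covariance
    T = np.where(np.abs(S) > lam, S, 0.0)  # zero out small entries
    np.fill_diagonal(T, np.diag(S))        # keep the diagonal intact
    return T

# Toy data: p coordinate-wise AR(1) series, X_t = 0.5 X_{t-1} + eps_t
rng = np.random.default_rng(0)
n, p = 200, 10
X = np.empty((n, p))
X[0] = rng.standard_normal(p)
for t in range(1, n):
    X[t] = 0.5 * X[t - 1] + rng.standard_normal(p)

Sigma_hat = thresholded_cov(X, lam=0.3)
```

Since the coordinates here are generated independently, most off-diagonal entries of the sample covariance are small and are zeroed out, while the diagonal (the marginal variances, about 1/(1 - 0.25) for this AR(1) model) survives the threshold. The paper's contribution is the convergence-rate analysis of such estimates under temporal dependence, quantified via the functional dependence measure, not the estimator itself.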

Article information

Ann. Statist., Volume 41, Number 6 (2013), 2994-3021.

First available in Project Euclid: 1 January 2014

Primary: 62H12: Estimation
Secondary: 62M10: Time series, auto-correlation, regression, etc. [See also 91B84]

Keywords: High-dimensional inference; sparsity; covariance matrix; precision matrix; thresholding; Lasso; dependence; functional dependence measure; consistency; Nagaev inequality; nonstationary time series; spatial–temporal processes


Chen, Xiaohui; Xu, Mengyu; Wu, Wei Biao. Covariance and precision matrix estimation for high-dimensional time series. Ann. Statist. 41 (2013), no. 6, 2994–3021. doi:10.1214/13-AOS1182.


  • Abrahamsson, R., Selen, Y. and Stoica, P. (2007). Enhanced covariance matrix estimators in adaptive beamforming. In 2007 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 969–972. Honolulu, HI.
  • Adak, S. (1998). Time-dependent spectral analysis of nonstationary time series. J. Amer. Statist. Assoc. 93 1488–1501.
  • Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. Wiley, New York.
  • Banerjee, O., El Ghaoui, L. and d’Aspremont, A. (2008). Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J. Mach. Learn. Res. 9 485–516.
  • Bickel, P. J. and Levina, E. (2004). Some theory of Fisher’s linear discriminant function, “naive Bayes,” and some alternatives when there are many more variables than observations. Bernoulli 10 989–1010.
  • Bickel, P. J. and Levina, E. (2008a). Covariance regularization by thresholding. Ann. Statist. 36 2577–2604.
  • Bickel, P. J. and Levina, E. (2008b). Regularized estimation of large covariance matrices. Ann. Statist. 36 199–227.
  • Cai, T., Liu, W. and Luo, X. (2011). A constrained $\ell_{1}$ minimization approach to sparse precision matrix estimation. J. Amer. Statist. Assoc. 106 594–607.
  • Cai, T. T., Zhang, C.-H. and Zhou, H. H. (2010). Optimal rates of convergence for covariance matrix estimation. Ann. Statist. 38 2118–2144.
  • Cai, T. T. and Zhou, H. H. (2012). Minimax estimation of large covariance matrices under $\ell_{1}$-norm (with discussion). Statist. Sinica 22 1319–1349.
  • Cai, T. T. and Zhou, H. H. (2012). Optimal rates of convergence for sparse covariance matrix estimation. Ann. Statist. 40 2389–2420.
  • Cao, G., Bachega, L. R. and Bouman, C. A. (2011). The sparse matrix transform for covariance estimation and analysis of high-dimensional signals. IEEE Trans. Image Process. 20 625–640.
  • Chen, X., Xu, M. and Wu, W. B. (2013). Supplement to “Covariance and precision matrix estimation for high-dimensional time series.” DOI:10.1214/13-AOS1182SUPP.
  • Dahlhaus, R. (1997). Fitting time series models to nonstationary processes. Ann. Statist. 25 1–37.
  • Draghicescu, D., Guillas, S. and Wu, W. B. (2009). Quantile curve estimation and visualization for nonstationary time series. J. Comput. Graph. Statist. 18 1–20.
  • Fan, J., Feng, Y. and Wu, Y. (2009). Network exploration via the adaptive lasso and SCAD penalties. Ann. Appl. Stat. 3 521–541.
  • Fan, J. and Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. Monographs on Statistics and Applied Probability 66. Chapman & Hall, London.
  • Friedman, J., Hastie, T. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9 432–441.
  • Guerci, J. R. (1999). Theory and application of covariance matrix tapers for robust adaptive beamforming. IEEE Trans. Signal Process. 47 977–985.
  • Huang, J. Z., Liu, N., Pourahmadi, M. and Liu, L. (2006). Covariance matrix selection and estimation via penalised normal likelihood. Biometrika 93 85–98.
  • Jacquier, E., Polson, N. G. and Rossi, P. E. (2004). Bayesian analysis of stochastic volatility models with fat-tails and correlated errors. J. Econometrics 122 185–212.
  • Johnstone, I. M. (2001). On the distribution of the largest eigenvalue in principal components analysis. Ann. Statist. 29 295–327.
  • Johnstone, I. M. and Lu, A. Y. (2009). On consistency and sparsity for principal components analysis in high dimensions. J. Amer. Statist. Assoc. 104 682–693.
  • Kolar, M. and Xing, E. (2011). On time varying undirected graphs. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS) 2011 (JMLR), Vol. 15 407–415. Ft. Lauderdale, FL.
  • Kondrashov, D., Kravtsov, S., Robertson, A. W. and Ghil, M. (2005). A hierarchy of data-based ENSO models. Journal of Climate 18 4425–4444.
  • Lam, C. and Fan, J. (2009). Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statist. 37 4254–4278.
  • Ledoit, O. and Wolf, M. (2003). Improved estimation of the covariance matrix of stock returns with an application to portfolio selection. Journal of Empirical Finance 10 603–621.
  • Li, J., Stoica, P. and Wang, Z. (2003). On robust Capon beamforming and diagonal loading. IEEE Trans. Signal Process. 51 1702–1715.
  • Liu, W. and Luo, X. (2012). High-dimensional sparse precision matrix estimation via sparse column inverse operator. Preprint. Available at arXiv:1203.3896.
  • Liu, W., Xiao, H. and Wu, W. B. (2013). Probability and moment inequalities under dependence. Statist. Sinica. To appear. DOI:10.5705/ss.2011.287.
  • Marčenko, V. A. and Pastur, L. A. (1967). Distribution of eigenvalues in certain sets of random matrices. Mat. Sb. 72 507–536.
  • Meinshausen, N. and Bühlmann, P. (2006). High-dimensional graphs and variable selection with the lasso. Ann. Statist. 34 1436–1462.
  • Rasmussen, C. E. and Williams, C. K. I. (2006). Gaussian Processes for Machine Learning. MIT Press, Cambridge, MA.
  • Ravikumar, P., Wainwright, M. J., Raskutti, G. and Yu, B. (2011). High-dimensional covariance estimation by minimizing $\ell_{1}$-penalized log-determinant divergence. Electron. J. Stat. 5 935–980.
  • Rothman, A. J., Bickel, P. J., Levina, E. and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electron. J. Stat. 2 494–515.
  • Stein, M. L. (1999). Interpolation of Spatial Data: Some Theory for Kriging. Springer, New York.
  • Talih, M. (2003). Markov random fields on time-varying graphs, with an application to portfolio selection. Ph.D. thesis, Yale Univ., ProQuest LLC, Ann Arbor, MI.
  • Ward, J. (1994). Space time adaptive processing for airborne radar. Technical Report 1015, MIT Lincoln Laboratory, Lexington, MA.
  • Wikle, C. K. and Hooten, M. B. (2010). A general science-based framework for dynamical spatio-temporal models. TEST 19 417–451.
  • Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proc. Natl. Acad. Sci. USA 102 14150–14154 (electronic).
  • Wu, W. B. (2007). Strong invariance principles for dependent random variables. Ann. Probab. 35 2294–2320.
  • Wu, W. B. (2011). Asymptotic theory for stationary processes. Stat. Interface 4 207–226.
  • Wu, W. B. and Pourahmadi, M. (2003). Nonparametric estimation of large covariance matrices of longitudinal data. Biometrika 90 831–844.
  • Wu, W. B. and Shao, X. (2004). Limit theorems for iterated random functions. J. Appl. Probab. 41 425–436.
  • Xiao, H. and Wu, W. B. (2012). Covariance matrix estimation for stationary time series. Ann. Statist. 40 466–493.
  • Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming. J. Mach. Learn. Res. 11 2261–2286.
  • Zheng, Y. R., Chen, G. and Blasch, E. (2007). A normalized fractionally lower-order moment algorithm for space–time adaptive processing. In IEEE Military Communications Conference, 2007 (MILCOM 2007) 1–6. Orlando, FL.
  • Zhou, S., Lafferty, J. and Wasserman, L. (2010). Time varying undirected graphs. Mach. Learn. 80 295–319.