Electronic Journal of Statistics

Spectral analysis of high-dimensional time series

Mark Fiecas, Chenlei Leng, Weidong Liu, and Yi Yu

Full-text: Open access


A useful approach to analysing multiple time series is to characterise their spectral density matrix, the frequency-domain analogue of the covariance matrix. When the dimension of the time series is large relative to their length, regularisation-based methods can overcome the curse of dimensionality, but existing methods lack theoretical justification. This paper develops the first non-asymptotic result characterising the difference between the sample and population versions of the spectral density matrix, allowing one to justify a range of high-dimensional models for analysing time series. As a concrete example, we apply this result to establish the convergence of smoothed periodogram estimators and of sparse estimators of the inverses of spectral density matrices, namely precision matrices. These results, novel in frequency-domain time series analysis, are corroborated by simulations and an analysis of the Google Flu Trends data.
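The smoothed periodogram estimator mentioned in the abstract can be sketched in a few lines: compute the raw cross-periodogram matrix at each Fourier frequency and average it over a window of neighbouring frequencies. The sketch below is illustrative only (the function name, the rectangular smoothing kernel, and the span choice are assumptions, not the authors' implementation).

```python
import numpy as np

def smoothed_periodogram(X, span=5):
    """Smoothed periodogram estimate of the p x p spectral density matrix
    of a multivariate time series X (n observations x p components),
    averaged over `span` neighbouring Fourier frequencies (circularly).
    Returns an array of shape (n, p, p), one Hermitian matrix per frequency.
    """
    n, p = X.shape
    # Discrete Fourier transform of the centred series, one column per component.
    Z = np.fft.fft(X - X.mean(axis=0), axis=0) / np.sqrt(2 * np.pi * n)
    # Raw cross-periodogram at frequency k: I_k = Z_k Z_k^* (rank-one, Hermitian).
    I = np.einsum('ka,kb->kab', Z, Z.conj())
    # Rectangular moving average over 2m+1 neighbouring frequencies.
    m = span // 2
    idx = (np.arange(n)[:, None] + np.arange(-m, m + 1)) % n
    return I[idx].mean(axis=1)
```

Because each smoothed matrix is an average of rank-one Hermitian positive semi-definite matrices, the estimate is itself Hermitian positive semi-definite at every frequency, which is what makes it a usable input for the sparse precision (inverse spectral density) estimation step.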

Article information

Electron. J. Statist., Volume 13, Number 2 (2019), 4079-4101.

Received: November 2018
First available in Project Euclid: 9 October 2019


Keywords: frequency domain time series; high dimension; functional dependency; smoothed periodogram; sparse precision matrix estimation

Creative Commons Attribution 4.0 International License.


Fiecas, Mark; Leng, Chenlei; Liu, Weidong; Yu, Yi. Spectral analysis of high-dimensional time series. Electron. J. Statist. 13 (2019), no. 2, 4079--4101. doi:10.1214/19-EJS1621. https://projecteuclid.org/euclid.ejs/1570586483


