The Annals of Applied Probability

Recursive estimation of time-average variance constants

Wei Biao Wu



For statistical inference about the means of stationary processes, one needs to estimate their time-average variance constants (TAVC), or long-run variances. The TAVC of a stationary process is the sum of all its covariances, and it equals a multiple of the spectral density at zero. The classical TAVC estimate, which is based on batched means, does not allow recursive updates, and its memory complexity is O(n). We propose a faster algorithm that computes the TAVC recursively, so that the memory complexity is O(1) and the computational complexity scales linearly in n. Under short-range dependence conditions, we establish moment and almost sure convergence of the recursive TAVC estimate. Convergence rates are also obtained.
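To illustrate the memory trade-off described in the abstract, the following sketch contrasts a classical batch-means TAVC estimate, which needs the whole sample in memory, with a one-pass variant that keeps only O(1) running statistics. The names `batch_means_tavc` and `RecursiveTAVC` and the choice of a fixed block size m are illustrative assumptions, not the paper's construction; Wu's recursive estimator uses blocks of increasing size to obtain its convergence rates.

```python
import numpy as np


def batch_means_tavc(x, m):
    """Classical batch-means TAVC estimate (O(n) memory).

    Splits x into k = floor(n/m) nonoverlapping batches of size m and
    returns m times the sample variance of the batch means.
    """
    x = np.asarray(x, dtype=float)
    k = len(x) // m
    means = x[: k * m].reshape(k, m).mean(axis=1)
    return m * means.var(ddof=1)


class RecursiveTAVC:
    """One-pass TAVC sketch with O(1) memory (fixed block size m).

    A simplification for illustration: only the current partial block
    sum and two running moments of completed block means are stored.
    """

    def __init__(self, m):
        self.m = m
        self.block_sum = 0.0  # partial sum of the current block
        self.count = 0        # observations in the current block
        self.k = 0            # number of completed blocks
        self.s1 = 0.0         # running sum of block means
        self.s2 = 0.0         # running sum of squared block means

    def update(self, xi):
        """Absorb one new observation; cost O(1) per step."""
        self.block_sum += xi
        self.count += 1
        if self.count == self.m:
            b = self.block_sum / self.m
            self.k += 1
            self.s1 += b
            self.s2 += b * b
            self.block_sum = 0.0
            self.count = 0

    def estimate(self):
        """TAVC estimate m * Var(block means) from the running sums."""
        if self.k < 2:
            return float("nan")
        var = (self.s2 - self.s1 ** 2 / self.k) / (self.k - 1)
        return self.m * var
```

With a fixed block size the one-pass estimator reproduces the batch-means value exactly on completed blocks, so the two agree on any sample whose length is a multiple of m.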

Article information

Ann. Appl. Probab., Volume 19, Number 4 (2009), 1529-1552.

First available in Project Euclid: 27 July 2009


Primary: 60F05: Central limit and other weak theorems
Secondary: 60F17: Functional limit theorems; invariance principles

Keywords: Central limit theorem; consistency; linear process; Markov chains; martingale; Monte Carlo; nonlinear time series; recursive estimation; spectral density


Wu, Wei Biao. Recursive estimation of time-average variance constants. Ann. Appl. Probab. 19 (2009), no. 4, 1529--1552. doi:10.1214/08-AAP587.



  • Alexopoulos, C. and Goldsman, D. (2004). To batch or not to batch? ACM Transactions on Modeling and Computer Simulation 14 76–114.
  • Borkar, V. S. (1993). White-noise representations in stochastic realization theory. SIAM J. Control Optim. 31 1093–1102.
  • Box, G. E. P., Jenkins, G. M. and Reinsel, G. C. (1994). Time Series Analysis: Forecasting and Control. Prentice Hall, Englewood Cliffs, NJ.
  • Bradley, R. C. (2007). Introduction to Strong Mixing Conditions. Kendrick Press, Heber City, UT.
  • Brooks, S. P. and Roberts, G. O. (1998). Convergence assessment techniques for Markov chain Monte Carlo. Statist. Comput. 8 319–335.
  • Bühlmann, P. (2002). Bootstraps for time series. Statist. Sci. 17 52–72.
  • Bühlmann, P. and Künsch, H. R. (1999). Block length selection in the bootstrap for time series. Comput. Statist. Data Anal. 31 295–310.
  • Carlstein, E. (1986). The use of subseries values for estimating the variance of a general statistic from a stationary sequence. Ann. Statist. 14 1171–1179.
  • Chauveau, D. and Diebolt, J. (1999). An automated stopping rule for MCMC convergence assessment. Comput. Statist. 14 419–442.
  • Chauveau, D. and Diebolt, J. (2003). Estimation of the asymptotic variance in the CLT for Markov chains. Stoch. Models 19 449–465.
  • Chauveau, D., Diebolt, J. and Robert, C. P. (1998). Control by the central limit theorem. In Discretization and MCMC Convergence Assessment. Lecture Notes in Statistics 135 (C. P. Robert, ed.) 99–126. Springer, New York.
  • Chow, Y. S. and Teicher, H. (1988). Probability Theory, 2nd ed. Springer, New York.
  • Diaconis, P. and Freedman, D. (1999). Iterated random functions. SIAM Rev. 41 45–76.
  • Fishman, G. S. (1996). Monte Carlo: Concepts, Algorithms, and Applications. Springer, New York.
  • Geyer, C. J. (1992). Practical Markov chain Monte Carlo (with discussion). Statist. Sci. 7 473–511.
  • Glynn, P. W. and Whitt, W. (1992). The asymptotic validity of sequential stopping rules for stochastic simulations. Ann. Appl. Probab. 2 180–198.
  • Goodman, J. and Sokal, A. D. (1989). Multigrid Monte Carlo method. Conceptual foundations. Phys. Rev. D 40 2035–2071.
  • Hannan, E. J. (1979). The central limit theorem for time series regression. Stochastic Process. Appl. 9 281–289.
  • Ibragimov, I. A. and Linnik, Y. V. (1971). Independent and Stationary Sequences of Random Variables. Wolters-Noordhoff, Groningen.
  • Jones, G. L., Haran, M., Caffo, B. S. and Neath, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 101 1537–1547.
  • Künsch, H. R. (1989). The jackknife and the bootstrap for general stationary observations. Ann. Statist. 17 1217–1241.
  • Lahiri, S. N. (2003). Resampling Methods for Dependent Data. Springer, New York.
  • Politis, D. N., Romano, J. P. and Wolf, M. (1999). Subsampling. Springer, New York.
  • Priestley, M. B. (1988). Nonlinear and Nonstationary Time Series Analysis. Academic Press, London.
  • Robert, C. P. (1995). Convergence control methods for Markov chain Monte Carlo algorithms. Statist. Sci. 10 231–253.
  • Rosenblatt, M. (1959). Stationary processes as shifts of functions of independent random variables. J. Math. Mech. 8 665–681.
  • Sherman, M. (1998). Other data-based choice of batch size for simulation output analysis. Simulation 71 38–47.
  • Song, W. M. and Schmeiser, B. W. (1995). Optimal mean-squared-error batch sizes. Manage. Sci. 41 110–123.
  • Tong, H. (1990). Nonlinear Time Series: A Dynamical System Approach. Oxford Statistical Science Series 6. Oxford Univ. Press, New York.
  • Volný, D. (1993). Approximating martingales and the central limit theorem for strictly stationary processes. Stochastic Process. Appl. 44 41–74.
  • Wiener, N. (1958). Nonlinear Problems in Random Theory. MIT Press, New York.
  • Wu, W. B. (2005). Nonlinear system theory: Another look at dependence. Proc. Natl. Acad. Sci. USA 102 14150–14154.
  • Wu, W. B. (2007). Strong invariance principles for dependent random variables. Ann. Probab. 35 2294–2320.
  • Wu, W. B. and Shao, X. (2004). Limit theorems for iterated random functions. J. Appl. Probab. 41 425–436.