The Annals of Applied Probability

Which ergodic averages have finite asymptotic variance?

George Deligiannidis and Anthony Lee



We show that the class of $L^{2}$ functions for which ergodic averages of a reversible Markov chain have finite asymptotic variance is determined by the class of $L^{2}$ functions for which ergodic averages of its associated jump chain have finite asymptotic variance. This allows us to characterize completely which ergodic averages have finite asymptotic variance when the Markov chain is an independence sampler. From a practical perspective, the most important result identifies a simple sufficient condition for all ergodic averages of $L^{2}$ functions of the primary variable in a pseudo-marginal Markov chain to have finite asymptotic variance.
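To make the full-chain/jump-chain relationship concrete, here is a minimal numerical sketch (not from the paper; the target Exp(1), proposal Exp(1/2), and sample size are illustrative choices). It runs an independence sampler whose importance weight $\pi/q$ is bounded, extracts the jump chain by collapsing repeated states into distinct states with holding times, and checks that the holding-time-weighted jump-chain average reproduces the full-chain ergodic average.

```python
import numpy as np

rng = np.random.default_rng(0)

def independence_sampler(n, lam=0.5):
    """Independence Metropolis-Hastings targeting Exp(1) with an Exp(lam) proposal.

    For lam < 1 the weight w(x) = pi(x)/q(x) = exp(-(1-lam)x)/lam is bounded,
    so the sampler is uniformly ergodic and all L^2 ergodic averages have
    finite asymptotic variance.
    """
    x = rng.exponential(1.0)
    chain = np.empty(n)
    for i in range(n):
        y = rng.exponential(1.0 / lam)
        # log acceptance ratio: log[w(y)/w(x)] = -(1-lam)(y - x)
        if np.log(rng.uniform()) < -(1.0 - lam) * (y - x):
            x = y
        chain[i] = x
    return chain

def jump_chain(chain):
    """Distinct successive states of the chain and their holding times."""
    idx = np.concatenate(([0], np.nonzero(np.diff(chain) != 0.0)[0] + 1))
    states = chain[idx]
    holds = np.diff(np.append(idx, len(chain)))
    return states, holds

chain = independence_sampler(20000)
states, holds = jump_chain(chain)

# The ergodic average of f(x) = x over the full chain equals the
# holding-time-weighted average over the jump chain.
full_avg = chain.mean()
jump_avg = np.sum(states * holds) / holds.sum()
print(full_avg, jump_avg)
```

The weighting identity holds by construction; the paper's contribution concerns when such averages have finite *asymptotic variance*, which it shows is determined by the jump chain.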

Article information

Ann. Appl. Probab., Volume 28, Number 4 (2018), 2309-2334.

Received: July 2016
Revised: May 2017
First available in Project Euclid: 9 August 2018


Primary: 60J05 (discrete-time Markov processes on general state spaces); 60J22 (computational methods in Markov chains; see also 65C40); 60F05 (central limit and other weak theorems); 65C40 (computational Markov chains)

Keywords: Markov chain Monte Carlo; asymptotic variance; jump chain; independent Metropolis–Hastings; pseudo-marginal method


Deligiannidis, George; Lee, Anthony. Which ergodic averages have finite asymptotic variance? Ann. Appl. Probab. 28 (2018), no. 4, 2309–2334. doi:10.1214/17-AAP1358.

