Bernoulli

A central limit theorem for adaptive and interacting Markov chains

G. Fort, E. Moulines, P. Priouret, and P. Vandekerkhove

Full-text: Open access

Abstract

Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms are a novel class of non-Markovian algorithms aimed at improving the simulation efficiency for complicated target distributions. In this paper, we study a general (non-Markovian) simulation framework covering both the adaptive and interacting MCMC algorithms. We establish a central limit theorem for additive functionals of unbounded functions under a set of verifiable conditions, and identify the asymptotic variance. Our result extends all results reported so far. An application to the interacting tempering algorithm (a simplified version of the equi-energy sampler) is presented to support our claims.
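Adaptive MCMC algorithms tune their transition kernel using the chain's own history, which is what makes the process non-Markovian; the CLT of the paper concerns fluctuations of additive functionals, i.e. averages of the form n⁻¹ Σₖ f(Xₖ). As a minimal illustrative sketch only (not the algorithm analyzed in the paper), the following Python snippet implements an adaptive random-walk Metropolis sampler in the spirit of Haario et al. [21], with a diminishing adaptation of the proposal scale; the function names, target distribution, and tuning constants are all illustrative assumptions.

```python
import math
import random

def adaptive_rwm(log_target, x0, n_iter=5000, target_accept=0.4, seed=0):
    """Adaptive random-walk Metropolis: the proposal scale is tuned on the
    fly from the running acceptance rate, so the transition kernel depends
    on the whole past and the process is non-Markovian."""
    rng = random.Random(seed)
    x, scale, accepted = x0, 1.0, 0
    samples = []
    for n in range(1, n_iter + 1):
        y = x + scale * rng.gauss(0.0, 1.0)            # random-walk proposal
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x, accepted = y, accepted + 1               # Metropolis accept
        samples.append(x)
        # Stochastic-approximation update with a vanishing step size, so
        # the adaptation diminishes as n grows (a standard condition for
        # ergodicity of adaptive chains).
        rate = accepted / n
        scale *= math.exp((rate - target_accept) / math.sqrt(n))
    return samples

# Usage: standard normal target; the empirical average of an additive
# functional (here f(x) = x) should approach the target expectation 0 --
# the paper's CLT describes the fluctuations of exactly such averages.
log_normal = lambda x: -0.5 * x * x
xs = adaptive_rwm(log_normal, x0=3.0)
mean = sum(xs) / len(xs)
```

Under suitable conditions, √n (mean − π(f)) converges in distribution to a centered Gaussian whose asymptotic variance the paper identifies; the sketch above only illustrates the kind of non-Markovian chain to which such a result applies.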

Article information

Source
Bernoulli, Volume 20, Number 2 (2014), 457-485.

Dates
First available in Project Euclid: 28 February 2014

Permanent link to this document
https://projecteuclid.org/euclid.bj/1393593994

Digital Object Identifier
doi:10.3150/12-BEJ493

Mathematical Reviews number (MathSciNet)
MR3178506

Zentralblatt MATH identifier
1303.60020

Keywords
interacting MCMC; limit theorems; MCMC

Citation

Fort, G.; Moulines, E.; Priouret, P.; Vandekerkhove, P. A central limit theorem for adaptive and interacting Markov chains. Bernoulli 20 (2014), no. 2, 457–485. doi:10.3150/12-BEJ493. https://projecteuclid.org/euclid.bj/1393593994

References

  • [1] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2007). Convergence of the equi-energy sampler. In Conference Oxford sur les Méthodes de Monte Carlo Séquentielles. ESAIM Proc. 19 1–5. Les Ulis: EDP Sci.
  • [2] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2007). Non-linear Markov chain Monte Carlo. In Conference Oxford sur les Méthodes de Monte Carlo Séquentielles. ESAIM Proc. 19 79–84. Les Ulis: EDP Sci.
  • [3] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2008). A note on convergence of the equi-energy sampler. Stoch. Anal. Appl. 26 298–312.
  • [4] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2011). On nonlinear Markov chain Monte Carlo. Bernoulli 17 987–1014.
  • [5] Andrieu, C. and Moulines, É. (2006). On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Probab. 16 1462–1505.
  • [6] Andrieu, C. and Thoms, J. (2008). A tutorial on adaptive MCMC. Stat. Comput. 18 343–373.
  • [7] Atchadé, Y. and Fort, G. (2010). Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Bernoulli 16 116–154.
  • [8] Atchadé, Y., Fort, G., Moulines, E. and Priouret, P. (2011). Adaptive Markov chain Monte Carlo: Theory and methods. In Bayesian Time Series Models 32–51. Cambridge: Cambridge Univ. Press.
  • [9] Atchadé, Y.F. (2010). A cautionary tale on the efficiency of some adaptive Monte Carlo schemes. Ann. Appl. Probab. 20 841–868.
  • [10] Atchadé, Y.F. (2011). Kernel estimators of asymptotic variance for adaptive Markov chain Monte Carlo. Ann. Statist. 39 990–1011.
  • [11] Baxendale, P.H. (2005). Renewal theory and computable convergence rates for geometrically ergodic Markov chains. Ann. Appl. Probab. 15 700–738.
  • [12] Bercu, B., Del Moral, P. and Doucet, A. (2009). A functional central limit theorem for a class of interacting Markov chain Monte Carlo methods. Electron. J. Probab. 14 2130–2155.
  • [13] Brockwell, A., Del Moral, P. and Doucet, A. (2010). Sequentially interacting Markov chain Monte Carlo methods. Ann. Statist. 38 3387–3411.
  • [14] Del Moral, P. and Doucet, A. (2010). Interacting Markov chain Monte Carlo methods for solving nonlinear measure-valued equations. Ann. Appl. Probab. 20 593–639.
  • [15] Douc, R. and Moulines, E. (2008). Limit theorems for weighted samples with applications to sequential Monte Carlo methods. Ann. Statist. 36 2344–2376.
  • [16] Douc, R., Moulines, E. and Rosenthal, J.S. (2004). Quantitative bounds on convergence of time-inhomogeneous Markov chains. Ann. Appl. Probab. 14 1643–1665.
  • [17] Fort, G. and Moulines, E. (2003). Polynomial ergodicity of Markov transition kernels. Stochastic Process. Appl. 103 57–99.
  • [18] Fort, G., Moulines, E. and Priouret, P. (2011). Convergence of adaptive and interacting Markov chain Monte Carlo algorithms. Ann. Statist. 39 3262–3289.
  • [19] Fort, G., Moulines, E., Priouret, P. and Vandekerkhove, P. (2012). A simple variance inequality for $U$-statistics of a Markov chain with applications. Statist. Probab. Lett. 82 1193–1201.
  • [20] Fort, G., Moulines, E., Priouret, P. and Vandekerkhove, P. (2014). Supplement to “A central limit theorem for adaptive and interacting Markov chains.” DOI:10.3150/12-BEJ493SUPP.
  • [21] Haario, H., Saksman, E. and Tamminen, J. (1999). Adaptive proposal distribution for random walk Metropolis algorithm. Comput. Statist. 14 375–395.
  • [22] Hall, P. and Heyde, C.C. (1980). Martingale Limit Theory and Its Application. Probability and Mathematical Statistics. New York: Academic Press [Harcourt Brace Jovanovich Publishers].
  • [23] Jarner, S.F. and Hansen, E. (2000). Geometric ergodicity of Metropolis algorithms. Stochastic Process. Appl. 85 341–361.
  • [24] Kou, S.C., Zhou, Q. and Wong, W.H. (2006). Equi-energy sampler with applications in statistical inference and statistical mechanics. Ann. Statist. 34 1581–1652.
  • [25] Liang, F., Liu, C. and Carroll, R.J. (2007). Stochastic approximation in Monte Carlo computation. J. Amer. Statist. Assoc. 102 305–320.
  • [26] Mengersen, K.L. and Tweedie, R.L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 101–121.
  • [27] Meyn, S. and Tweedie, R.L. (2009). Markov Chains and Stochastic Stability, 2nd ed. Cambridge: Cambridge Univ. Press. With a prologue by Peter W. Glynn.
  • [28] Roberts, G.O. and Rosenthal, J.S. (2004). General state space Markov chains and MCMC algorithms. Probab. Surv. 1 20–71.
  • [29] Roberts, G.O. and Rosenthal, J.S. (2007). Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Probab. 44 458–475.
  • [30] Roberts, G.O. and Tweedie, R.L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83 95–110.
  • [31] Rosenthal, J.S. (2011). Optimal proposal distributions and adaptive MCMC. In Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC Handb. Mod. Stat. Methods 93–111. Boca Raton, FL: CRC Press.
  • [32] Saksman, E. and Vihola, M. (2010). On the ergodicity of the adaptive Metropolis algorithm on unbounded domains. Ann. Appl. Probab. 20 2178–2203.
  • [33] Serfling, R.J. (1980). Approximation Theorems of Mathematical Statistics. Wiley Series in Probability and Mathematical Statistics. New York: Wiley.
  • [34] Tierney, L. (1998). A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. 8 1–9.
  • [35] van der Vaart, A.W. and Wellner, J.A. (1996). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. New York: Springer.
  • [36] Wang, F. and Landau, D.P. (2001). An efficient, multiple-range random walk algorithm to calculate the density of states. Phys. Rev. Lett. 86 2050–2053.

Supplemental materials

  • Supplementary material: Supplement to “A central limit theorem for adaptive and interacting Markov chains”. We detail in this supplement: (1) the gap in the proof of Atchadé's [9] theorem, (2) the proofs of technical Lemmas 4.1, 4.3, A.1–A.3, (3) some additional proofs of [18], Section 3.1, (4) results on the variance of completely degenerate V-statistics of asymptotically stationary Markov chains, and (5) the weak law of large numbers for adaptive and interacting Markov chains.