Open Access
December 2011
Convergence of adaptive and interacting Markov chain Monte Carlo algorithms
G. Fort, E. Moulines, P. Priouret
Ann. Statist. 39(6): 3262-3289 (December 2011). DOI: 10.1214/11-AOS938

Abstract

Adaptive and interacting Markov chain Monte Carlo (MCMC) algorithms have recently been introduced in the literature. These novel simulation algorithms are designed to improve simulation efficiency when sampling complex distributions. Motivated by some recently introduced algorithms (such as the adaptive Metropolis algorithm and the interacting tempering algorithm), we develop a general methodological and theoretical framework to establish both the convergence of the marginal distribution and a strong law of large numbers. This framework weakens the conditions introduced in the pioneering paper by Roberts and Rosenthal [J. Appl. Probab. 44 (2007) 458–475]. It also covers the case in which the target distribution π is sampled using Markov transition kernels whose stationary distributions differ from π.
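For background on the class of algorithms the abstract names, the following is a minimal, illustrative sketch of an adaptive Metropolis sampler in the spirit of Haario, Saksman and Tamminen: a random-walk Metropolis chain whose Gaussian proposal covariance is adapted from the chain's own history. The target density, dimension, and tuning constants below are placeholder assumptions for illustration, not details taken from the paper.

```python
import numpy as np

def adaptive_metropolis(log_target, x0, n_iter, eps=1e-6, seed=0):
    """Random-walk Metropolis with an adaptively estimated proposal covariance."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    scale = 2.38**2 / d                 # classical scaling for Gaussian-like targets
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, d))
    mean, cov = x.copy(), np.eye(d)     # running mean and covariance of the chain
    for n in range(n_iter):
        # Propose from N(x, scale * (cov + eps * I)); eps keeps the matrix positive definite.
        prop = rng.multivariate_normal(x, scale * (cov + eps * np.eye(d)))
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        samples[n] = x
        # Diminishing-adaptation update of the empirical mean and covariance.
        gamma = 1.0 / (n + 2)
        delta = x - mean
        mean = mean + gamma * delta
        cov = cov + gamma * (np.outer(delta, x - mean) - cov)
    return samples

if __name__ == "__main__":
    # Illustrative usage: sample a standard bivariate Gaussian target.
    log_target = lambda z: -0.5 * np.dot(z, z)
    draws = adaptive_metropolis(log_target, x0=np.zeros(2), n_iter=5000)
    print(draws.mean(axis=0), np.cov(draws.T))
```

Because the proposal kernel keeps changing with the adapted covariance, the resulting process is no longer a time-homogeneous Markov chain, which is precisely why convergence results of the kind established in this paper are needed.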

Citation


G. Fort, E. Moulines, P. Priouret. "Convergence of adaptive and interacting Markov chain Monte Carlo algorithms." Ann. Statist. 39(6): 3262-3289, December 2011. https://doi.org/10.1214/11-AOS938

Information

Published: December 2011
First available in Project Euclid: 5 March 2012

zbMATH: 1246.65003
MathSciNet: MR3012408
Digital Object Identifier: 10.1214/11-AOS938

Subjects:
Primary: 60F05, 62L10, 65C05
Secondary: 60J05, 65C40, 93E35

Keywords: adaptive Metropolis, adaptive Monte Carlo, equi-energy sampler, ergodic theorems, interacting tempering, law of large numbers, Markov chain Monte Carlo, Markov chains, parallel tempering

Rights: Copyright © 2011 Institute of Mathematical Statistics
