Journal of Applied Probability

The containment condition and AdapFail algorithms

Krzysztof Łatuszyński and Jeffrey S. Rosenthal



This short note investigates the convergence of adaptive Markov chain Monte Carlo algorithms, i.e. algorithms which modify the Markov chain update probabilities on the fly. We focus on the containment condition introduced by Roberts and Rosenthal (2007). We show that if the containment condition is not satisfied, then the algorithm will perform very poorly. Specifically, with positive probability, the adaptive algorithm will be asymptotically less efficient than any nonadaptive ergodic MCMC algorithm. We call such algorithms AdapFail, and conclude that they should not be used.
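For context, the containment condition discussed above can be sketched as follows, assuming the standard notation of Roberts and Rosenthal (2007): $X_n$ is the adaptive chain, $\Gamma_n$ the kernel index in use at time $n$, $P_\gamma$ the fixed transition kernel indexed by $\gamma$, and $\pi$ the target distribution.

```latex
% epsilon-convergence time of the fixed kernel P_gamma started from x:
M_\epsilon(x,\gamma) \;=\; \inf\bigl\{\, n \ge 1 \;:\; \bigl\| P_\gamma^{\,n}(x,\cdot) - \pi(\cdot) \bigr\|_{\mathrm{TV}} \le \epsilon \,\bigr\}

% Containment: for every epsilon > 0, the convergence times evaluated
% along the adaptive process are bounded in probability:
\bigl\{ M_\epsilon(X_n, \Gamma_n) \bigr\}_{n=0}^{\infty}
\quad\text{is bounded in probability.}
```

Informally, containment rules out the adaptation drifting toward kernels that take ever longer to approach $\pi$; the note's result is that when this fails, the adaptive algorithm is, with positive probability, asymptotically worse than any fixed ergodic sampler.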

Article information

J. Appl. Probab., Volume 51, Number 4 (2014), 1189-1195.

First available in Project Euclid: 20 January 2015


Primary: 60J05: Discrete-time Markov processes on general state spaces
Secondary: 65C05: Monte Carlo methods

Keywords: Markov chain Monte Carlo; adaptive MCMC; containment condition; ergodicity; convergence rate


Łatuszyński, Krzysztof; Rosenthal, Jeffrey S. The containment condition and AdapFail algorithms. J. Appl. Probab. 51 (2014), no. 4, 1189--1195.



References

  • Andrieu, C. and Thoms, J. (2008). A tutorial on adaptive MCMC. Statist. Comput. 18, 343–373.
  • Atchadé, Y. F. and Rosenthal, J. S. (2005). On adaptive Markov chain Monte Carlo algorithms. Bernoulli 11, 815–828.
  • Bai, Y., Roberts, G. O. and Rosenthal, J. S. (2011). On the containment condition for adaptive Markov chain Monte Carlo algorithms. Adv. Appl. Statist. 21, 1–54.
  • Fort, G., Moulines, E. and Priouret, P. (2011). Convergence of adaptive and interacting Markov chain Monte Carlo algorithms. Ann. Statist. 39, 3262–3289.
  • Geyer, C. J. (1992). Practical Markov chain Monte Carlo. Statist. Sci. 7, 473–483.
  • Gilks, W. R., Roberts, G. O. and Sahu, S. K. (1998). Adaptive Markov chain Monte Carlo through regeneration. J. Amer. Statist. Assoc. 93, 1045–1054.
  • Giordani, P. and Kohn, R. (2008). Efficient Bayesian inference for multiple change-point and mixture innovation models. J. Business Econom. Statist. 26, 66–77.
  • Griffin, J. E., Łatuszyński, K. and Steel, M. F. J. (2014). Individual adaptation: an adaptive MCMC scheme for variable selection problems. Submitted.
  • Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7, 223–242.
  • Łatuszyński, K. (2012). A path stability condition for adaptive MCMC. In preparation.
  • Łatuszyński, K., Roberts, G. O. and Rosenthal, J. S. (2013). Adaptive Gibbs samplers and related MCMC methods. Ann. Appl. Prob. 23, 66–98.
  • Meyn, S. and Tweedie, R. L. (2009). Markov Chains and Stochastic Stability, 2nd edn. Cambridge University Press.
  • Mira, A. and Geyer, C. J. (1999). Ordering Monte Carlo Markov chains. Tech. Rep. No. 632, School of Statistics, University of Minnesota.
  • Richardson, S., Bottolo, L. and Rosenthal, J. S. (2011). Bayesian models for sparse regression analysis of high dimensional data. In Bayesian Statistics 9, Oxford University Press, pp. 539–568.
  • Roberts, G. O. and Rosenthal, J. S. (1997). Geometric ergodicity and hybrid Markov chains. Electron. Commun. Prob. 2, 13–25.
  • Roberts, G. O. and Rosenthal, J. S. (2001). Optimal scaling for various Metropolis–Hastings algorithms. Statist. Sci. 16, 351–367.
  • Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Prob. Surveys 1, 20–71.
  • Roberts, G. O. and Rosenthal, J. S. (2007). Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Prob. 44, 458–475.
  • Roberts, G. O. and Rosenthal, J. S. (2009). Examples of adaptive MCMC. J. Comput. Graphical Statist. 18, 349–367.
  • Roberts, G. O. and Rosenthal, J. S. (2013). A note on formal constructions of sequential conditional couplings. Statist. Prob. Lett. 83, 2073–2076.
  • Roberts, G. O., Gelman, A. and Gilks, W. R. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Prob. 7, 110–120.
  • Solonen, A. et al. (2012). Efficient MCMC for climate model parameter estimation: parallel adaptive chains and early rejection. Bayesian Anal. 7, 715–736.