The Annals of Statistics

Rates of convergence of the Hastings and Metropolis algorithms

K. L. Mengersen and R. L. Tweedie



We apply recent results in Markov chain theory to Hastings and Metropolis algorithms with either independent or symmetric candidate distributions, and provide necessary and sufficient conditions for the algorithms to converge at a geometric rate to a prescribed distribution $\pi$. In the independence case (in $\mathbb{R}^k$) these indicate that geometric convergence essentially occurs if and only if the candidate density is bounded below by a multiple of $\pi$; in the symmetric case (in $\mathbb{R}$ only) we show geometric convergence essentially occurs if and only if $\pi$ has geometric tails. We also evaluate recently developed computable bounds on the rates of convergence in this context: examples show that these theoretical bounds can be inherently extremely conservative, although when the chain is stochastically monotone the bounds may well be effective.
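The independence case can be illustrated with a short simulation. The sketch below (not from the paper; the function names and the choice of a Student-$t_3$ candidate are illustrative assumptions) runs an independence Hastings sampler for a standard normal target. Because the $t_3$ density has heavier tails than the normal, the candidate density is bounded below by a multiple of $\pi$, which is exactly the situation in which the paper's result gives geometric convergence.

```python
import numpy as np

def independence_mh(log_pi, sample_q, log_q, x0, n_steps, rng):
    """Independence Metropolis-Hastings sampler (illustrative sketch).

    Proposals are drawn from a fixed candidate density q, independent of the
    current state; the acceptance ratio compares the importance weights
    w(y) = pi(y)/q(y) and w(x) = pi(x)/q(x).  Normalizing constants cancel.
    """
    x = x0
    chain = np.empty(n_steps)
    for t in range(n_steps):
        y = sample_q(rng)
        # log acceptance probability: min(0, log[w(y)/w(x)])
        log_alpha = (log_pi(y) - log_q(y)) - (log_pi(x) - log_q(x))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain[t] = x
    return chain

# Target: standard normal.  Candidate: Student-t with 3 degrees of freedom,
# whose heavier tails keep q bounded below by a multiple of pi.
rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * x**2                      # N(0,1), up to a constant
log_q = lambda x: -2.0 * np.log1p(x**2 / 3.0)       # t_3 density, up to a constant
sample_q = lambda rng: rng.standard_t(3)

chain = independence_mh(log_pi, sample_q, log_q, 0.0, 20000, rng)
print(chain.mean(), chain.std())
```

If the roles were reversed (a normal candidate for a heavier-tailed target), the ratio $\pi/q$ would be unbounded and, per the paper's necessary condition, the sampler would fail to be geometrically ergodic.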

Article information

Ann. Statist. Volume 24, Number 1 (1996), 101-121.

First available in Project Euclid: 26 September 2002


Primary: 62J05: Linear regression; 62-04: Explicit machine computation and programs (not the theory of computation or programming); 65C05: Monte Carlo methods

Keywords: Posterior distributions; Hastings algorithms; Metropolis algorithms; Gibbs sampling; Markov chain Monte Carlo; irreducible Markov processes; geometric ergodicity; stochastic monotonicity; log-concave distributions


Mengersen, K. L.; Tweedie, R. L. Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 (1996), no. 1, 101--121. doi:10.1214/aos/1033066201.



  • 1 AMIT, Y. 1991. On rates of convergence of stochastic relaxation for Gaussian and non-Gaussian distributions. J. Multivariate Anal. 38 82–99.
  • 2 BAXENDALE, P. H. 1993. Uniform estimates for geometric ergodicity of recurrent Markov processes. Technical report, Dept. Mathematics, Univ. Southern California.
  • 3 BESAG, J. E. and GREEN, P. J. 1993. Spatial statistics and Bayesian computation (with discussion). J. Roy. Statist. Soc. Ser. B 55 25–38.
  • 4 BESAG, J. E., GREEN, P. J., HIGDON, D. and MENGERSEN, K. L. 1995. Bayesian computation and stochastic systems (with discussion). Statist. Sci. 10 3–66.
  • 5 CHAN, K. S. 1993. Asymptotic behavior of the Gibbs sampler. J. Amer. Statist. Assoc. 88 320–326.
  • 6 HASTINGS, W. K. 1970. Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109.
  • 7 LUND, R. B. and TWEEDIE, R. L. 1996. Geometric convergence rates for stochastically ordered Markov chains. Math. Oper. Res. To appear.
  • 8 MENGERSEN, K. L. and TWEEDIE, R. L. 1993. Rates of convergence of the Hastings and Metropolis algorithms. Technical Report 93-30, Dept. Statistics, Colorado State Univ.
  • 9 METROPOLIS, N., ROSENBLUTH, A., ROSENBLUTH, M., TELLER, A. and TELLER, E. 1953. Equation of state calculations by fast computing machines. J. Chem. Phys. 21 1087–1092.
  • 10 MEYN, S. P. and TWEEDIE, R. L. 1993. Markov Chains and Stochastic Stability. Springer, New York.
  • 11 MEYN, S. P. and TWEEDIE, R. L. 1994. Computable bounds for geometric convergence rates of Markov chains. Ann. Appl. Probab. 4 981–1011.
  • 12 NUMMELIN, E. 1984. General Irreducible Markov Chains and Non-Negative Operators. Cambridge Univ. Press.
  • 13 ROBERTS, G. O. and POLSON, N. G. 1994. A note on the geometric convergence of the Gibbs sampler. J. Roy. Statist. Soc. Ser. B 56 377–384.
  • 14 ROBERTS, G. O., MENGERSEN, K. L., SCOTT, D. J. and TWEEDIE, R. L. 1995. Bounds on convergence rates in MCMC: a comparison of methods. Unpublished manuscript.
  • 15 ROBERTS, G. O. and ROSENTHAL, J. S. 1995. Shift-coupling and convergence rates of ergodic averages. Unpublished manuscript.
  • 16 ROBERTS, G. O. and TWEEDIE, R. L. 1996. Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika. To appear.
  • 17 ROSENTHAL, J. S. 1995. Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90 558–566.
  • 18 SCHERVISH, M. J. and CARLIN, B. P. 1992. On the convergence of successive substitution sampling. J. Comput. Graph. Statist. 1 111–127.
  • 19 SCOTT, D. J. and TWEEDIE, R. L. 1996. Explicit rates of convergence of stochastically ordered Markov chains. In Proceedings of the Athens Conference on Applied Probability and Time Series. Springer, New York. To appear.
  • 20 SMITH, A. F. M. and GELFAND, A. E. 1992. Bayesian statistics without tears: a sampling–resampling perspective. Amer. Statist. 46 84–88.
  • 21 SMITH, A. F. M. and ROBERTS, G. O. 1993. Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods (with discussion). J. Roy. Statist. Soc. Ser. B 55 3–24.
  • 22 TIERNEY, L. 1994. Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22 1701–1762.
  • 23 TUOMINEN, P. and TWEEDIE, R. L. 1994. Subgeometric rates of convergence of f-ergodic Markov chains. Adv. in Appl. Probab. 26 775–798.