## Statistical Science

### Honest Exploration of Intractable Probability Distributions via Markov Chain Monte Carlo

#### Abstract

Two important questions that must be answered whenever a Markov chain Monte Carlo (MCMC) algorithm is used are (Q1) What is an appropriate burn-in? and (Q2) How long should the sampling continue after burn-in? Developing rigorous answers to these questions presently requires a detailed study of the convergence properties of the underlying Markov chain. Consequently, in most practical applications of MCMC, exact answers to (Q1) and (Q2) are not sought. The goal of this paper is to demystify the analysis that leads to honest answers to (Q1) and (Q2). The authors hope that this article will serve as a bridge between those developing Markov chain theory and practitioners using MCMC to solve practical problems.

The ability to address (Q1) and (Q2) formally comes from establishing a drift condition and an associated minorization condition, which together imply that the underlying Markov chain is geometrically ergodic. In this article, we explain exactly what drift and minorization are as well as how and why these conditions can be used to form rigorous answers to (Q1) and (Q2). The basic ideas are as follows. The results of Rosenthal (1995) and Roberts and Tweedie (1999) allow one to use drift and minorization conditions to construct a formula giving an analytic upper bound on the distance to stationarity. A rigorous answer to (Q1) can be calculated using this formula. The desired characteristics of the target distribution are typically estimated using ergodic averages. Geometric ergodicity of the underlying Markov chain implies that there are central limit theorems available for ergodic averages (Chan and Geyer 1994). The regenerative simulation technique (Mykland, Tierney and Yu, 1995; Robert, 1995) can be used to get a consistent estimate of the variance of the asymptotic normal distribution. Hence, an asymptotic standard error can be calculated, which provides an answer to (Q2) in the sense that an appropriate time to stop sampling can be determined. The methods are illustrated using a Gibbs sampler for a Bayesian version of the one-way random effects model and a data set concerning styrene exposure.
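The calculation behind (Q1) can be sketched numerically. The sketch below assumes hypothetical drift and minorization constants (in practice each must be established analytically for the chain at hand) and uses the form of Rosenthal's (1995) bound as applied in this setting: given a drift function $V$ with $E[V(X_{n+1}) \mid X_n = x] \le \lambda V(x) + b$ and a minorization constant $\varepsilon$ on the small set $\{V \le d\}$, the total variation distance to stationarity after $n$ steps is bounded analytically, and the burn-in is the smallest $n$ at which the bound drops below a tolerance. The function names and constant values are illustrative, not from the paper.

```python
import math

# Hypothetical drift and minorization constants. In practice each must be
# derived analytically for the specific chain (see Rosenthal, 1995):
#   drift:        E[V(X_{n+1}) | X_n = x] <= lam * V(x) + b
#   minorization: P(x, dy) >= eps * Q(dy) on the set {x : V(x) <= d}
lam, b, eps, d = 0.5, 1.0, 0.1, 5.0   # requires d > 2b / (1 - lam)
V0 = 2.0                              # V evaluated at the starting value

# Constants appearing in Rosenthal's bound.
alpha = (1.0 + d) / (1.0 + 2.0 * b + lam * d)
U = 1.0 + 2.0 * (lam * d + b)

def tv_bound(n, r):
    """Analytic upper bound on the total variation distance to
    stationarity after n steps, valid for any 0 < r < 1:
        (1 - eps)^(r n) + (U^r / alpha^(1-r))^n * (1 + b/(1-lam) + V0).
    The second term is computed in log space to avoid overflow."""
    t1 = (1.0 - eps) ** (r * n)
    log_t2 = n * (r * math.log(U) - (1.0 - r) * math.log(alpha)) \
        + math.log(1.0 + b / (1.0 - lam) + V0)
    t2 = math.exp(log_t2) if log_t2 < 700.0 else math.inf
    return t1 + t2

def burn_in(tol=0.01):
    """Smallest n whose bound, minimized over a grid of r values,
    falls below tol; this n is a rigorous answer to (Q1)."""
    r_grid = [i / 100.0 for i in range(1, 100)]
    n = 1
    while min(tv_bound(n, r) for r in r_grid) > tol:
        n += 1
    return n
```

With these (made-up) constants, `burn_in(0.01)` returns an iteration count guaranteed to bring the chain within total variation distance 0.01 of the target; the free parameter `r` is optimized over a grid because the bound holds for every `r` in (0, 1).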

#### Article information

Source
Statist. Sci., Volume 16, Number 4 (2001), 312-334.

Dates
First available in Project Euclid: 5 March 2002

https://projecteuclid.org/euclid.ss/1015346317

Digital Object Identifier
doi:10.1214/ss/1015346317

Mathematical Reviews number (MathSciNet)
MR1888447

Zentralblatt MATH identifier
1127.60309

#### Citation

Jones, Galin L.; Hobert, James P. Honest Exploration of Intractable Probability Distributions via Markov Chain Monte Carlo. Statist. Sci. 16 (2001), no. 4, 312--334. doi:10.1214/ss/1015346317. https://projecteuclid.org/euclid.ss/1015346317

#### References

• Athreya, K. B. and Ney, P. (1978). A new approach to the limit theory of recurrent Markov chains. Trans. Amer. Math. Soc. 245 493-501.
• Besag, J., Green, P., Higdon, D. and Mengersen, K. (1995). Bayesian computation and stochastic systems (with discussion). Statist. Sci. 10 3-66.
• Billera, L. J. and Diaconis, P. (2001). A geometric interpretation of the Metropolis algorithm. Statist. Sci. 16 335-339.
• Bratley, P., Fox, B. L. and Schrage, L. E. (1987). A Guide to Simulation. Springer, New York.
• Caffo, B. S., Booth, J. G. and Davison, A. C. (2001). Empirical sup rejection sampling, Technical report, Univ. Florida.
• Chan, K. S. and Geyer, C. J. (1994). Comment on "Markov chains for exploring posterior distributions." Ann. Statist. 22 1747-1758.
• Chib, S. and Greenberg, E. (1995). Understanding the Metropolis-Hastings algorithm. Amer. Statist. 49 327-335.
• Cowles, M. K. and Rosenthal, J. S. (1998). A simulation approach to convergence rates for Markov chain Monte Carlo algorithms. Statist. Comput. 8 115-124.
• Crane, M. A. and Iglehart, D. L. (1975). Simulating stable stochastic systems III: Regenerative processes and discrete-event simulations. Oper. Res. 23 33-45.
• Diaconis, P. and Stroock, D. (1991). Geometric bounds for eigenvalues of Markov chains. Ann. Appl. Probab. 1 36-61.
• Diaconis, P. and Sturmfels, B. (1998). Algebraic algorithms for sampling from conditional distributions. Ann. Statist. 26 363-397.
• Frigessi, A., di Stefano, P., Hwang, C.-R. and Sheu, S.-J. (1993). Convergence rates of the Gibbs sampler, the Metropolis algorithm and other single-site updating dynamics. J. Roy. Statist. Soc. Ser. B 55 205-219.
• Gelfand, A. E., Hills, S. E., Racine-Poon, A. and Smith, A. F. M. (1990). Illustration of Bayesian inference in normal data models using Gibbs sampling. J. Amer. Statist. Assoc. 85 972-985.
• Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85 398-409.
• Geyer, C. J. (1992). Practical Markov chain Monte Carlo (with discussion). Statist. Sci. 7 473-511.
• Geyer, C. J. and Thompson, E. A. (1995). Annealing Markov chain Monte Carlo with applications to ancestral inference. J. Amer. Statist. Assoc. 90 909-920.
• Gilks, W. R., Richardson, S. and Spiegelhalter, D. J. E. (1996). Markov Chain Monte Carlo in Practice. Chapman and Hall, London.
• Gilks, W. R., Roberts, G. O. and Sahu, S. K. (1998). Adaptive Markov chain Monte Carlo through regeneration. J. Amer. Statist. Assoc. 93 1045-1054.
• Glynn, P. W. (1985). Regenerative structure of Markov chains simulated via common random numbers. Oper. Res. Lett. 4 49-53.
• Glynn, P. W. and Iglehart, D. L. (1987). A joint central limit theorem for the sample mean and regenerative variance estimator. Ann. Oper. Res. 8 41-55.
• Glynn, P. W. and Iglehart, D. L. (1990). Simulation output analysis using standardized time series. Math. Oper. Res. 15 1-16.
• Guihenneuc-Jouyaux, C. and Robert, C. P. (1998). Discretization of continuous Markov chains and Markov chain Monte Carlo convergence assessment. J. Amer. Statist. Assoc. 93 1055-1067.
• Hobert, J. P. (2001). Discussion of "The art of data augmentation." J. Comput. Graph. Statist. 10 59-68.
• Hobert, J. P. and Geyer, C. J. (1998). Geometric ergodicity of Gibbs and block Gibbs samplers for a hierarchical random effects model. J. Multivariate Anal. 67 414-430.
• Hobert, J. P., Jones, G. L., Presnell, B. and Rosenthal, J. S. (2001). On the applicability of regenerative simulation in Markov chain Monte Carlo. Technical report, Univ. Florida.
• Ingrassia, S. (1994). On the rate of convergence of the Metropolis algorithm and Gibbs sampler by geometric bounds. Ann. Appl. Probab. 4 347-389.
• Jarner, S. F. and Hansen, E. (2000). Geometric ergodicity of Metropolis algorithms. Stochastic Process. Appl. 85 341-361.
• Jarner, S. F. and Roberts, G. O. (2001). Polynomial convergence rates of Markov chains. Ann. Appl. Probab. To appear.
• Jones, G. L. and Hobert, J. P. (2001). Upper bounds on the distance to stationarity for the block Gibbs sampler for a hierarchical random effects model. Technical report, Univ. Florida.
• Levine, R. A. and Casella, G. (2001). Implementations of the Monte Carlo EM algorithm. J. Comput. Graph. Statist. 10 422-439.
• Lindvall, T. (1992). Lectures on the Coupling Method. Wiley Interscience, New York.
• Liu, J. S., Wong, W. H. and Kong, A. (1994). Covariance structure of the Gibbs sampler with applications to the comparisons of estimators and augmentation schemes. Biometrika 81 27-40.
• Lund, R. B. and Tweedie, R. L. (1996). Geometric convergence rates for stochastically ordered Markov chains. Math. Oper. Res. 20 182-194.
• Lyles, R. H., Kupper, L. L. and Rappaport, S. M. (1997). Assessing regulatory compliance of occupational exposures via the balanced one-way random effects ANOVA model. J. Agricultural Biol. Environ. Statist. 2 64-86.
• McCulloch, C. E. (1997). Maximum likelihood algorithms for generalized linear mixed models. J. Amer. Statist. Assoc. 92 162-170.
• Mengersen, K. and Tweedie, R. L. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 101-121.
• Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer, London.
• Meyn, S. P. and Tweedie, R. L. (1994). Computable bounds for geometric convergence rates of Markov chains. Ann. Appl. Probab. 4 981-1011.
• Mira, A. and Tierney, L. (2001). On the use of auxiliary variables in Markov chain Monte Carlo sampling. Scand. J. Statist. To appear.
• Mykland, P., Tierney, L. and Yu, B. (1995). Regeneration in Markov chain samplers. J. Amer. Statist. Assoc. 90 233-241.
• Natarajan, R. and McCulloch, C. E. (1998). Gibbs sampling with diffuse proper priors: A valid approach to data-driven inference? J. Comput. Graph. Statist. 7 267-277.
• Nummelin, E. (1978). A splitting technique for Harris recurrent Markov chains. Z. Wahrsch. Verw. Gebiete 43 309-318.
• Nummelin, E. (1984). General Irreducible Markov Chains and Non-negative Operators. Cambridge Univ. Press.
• Ripley, B. D. (1987). Stochastic Simulation. Wiley, New York.
• Robert, C. P. (1995). Convergence control methods for Markov chain Monte Carlo algorithms. Statist. Sci. 10 231-253.
• Robert, C. P. and Casella, G. (1999). Monte Carlo Statistical Methods. Springer, New York.
• Roberts, G. O. (1999). A note on acceptance rate criteria for CLTs for Metropolis-Hastings algorithms. J. Appl. Probab. 36 1210-1217.
• Roberts, G. O. and Rosenthal, J. S. (1998a). Markov chain Monte Carlo: Some practical implications of theoretical results (with discussion). Canad. J. Statist. 26 5-31.
• Roberts, G. O. and Rosenthal, J. S. (1998b). On convergence rates of Gibbs samplers for uniform distributions. Ann. Appl. Probab. 8 1291-1302.
• Roberts, G. O. and Rosenthal, J. S. (1999). Convergence of slice sampler Markov chains. J. Roy. Statist. Soc. Ser. B 61 643-660.
• Roberts, G. O. and Sahu, S. K. (1997). Updating schemes, correlation structure, blocking and parameterization for the Gibbs sampler. J. Roy. Statist. Soc. Ser. B 59 291-317.
• Roberts, G. O. and Tweedie, R. L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83 95-110.
• Roberts, G. O. and Tweedie, R. L. (1999). Bounds on regeneration times and convergence rates for Markov chains. Stochastic Process. Appl. 80 211-229.
• Roberts, G. O. and Tweedie, R. L. (2001). Bounds on regeneration times and convergence rates for Markov chains. (Corrigendum). Stochastic Process. Appl. 91 337-338.
• Rosenthal, J. S. (1995). Minorization conditions and convergence rates for Markov chain Monte Carlo. J. Amer. Statist. Assoc. 90 558-566.
• Rosenthal, J. S. (1996). Analysis of the Gibbs sampler for a model related to James-Stein estimators. Statist. Comput. 6 269-275.
• Rosenthal, J. S. (2001). A review of asymptotic convergence for general state space Markov chains. Far East J. Theoret. Statist. 5 37-50.
• Schmeiser, B. (1982). Batch size effects in the analysis of simulation output. Oper. Res. 30 556-568.
• Searle, S. R., Casella, G. and McCulloch, C. E. (1992). Variance Components. Wiley, New York.
• Spiegelhalter, D. J., Thomas, A. and Best, N. G. (1999). WinBUGS Version 1.2, MRC Biostatistics Unit, Cambridge, UK.
• Tanner, M. A. and Wong, W. H. (1987). The calculation of posterior distributions by data augmentation (with discussion). J. Amer. Statist. Assoc. 82 528-550.
• Tierney, L. (1994). Markov chains for exploring posterior distributions (with discussion). Ann. Statist. 22 1701-1762.
• Tierney, L. (1998). A note on Metropolis-Hastings kernels for general state spaces. Ann. Appl. Probab. 8 1-9.
• van Dyk, D. A. and Meng, X.-L. (2001). The art of data augmentation (with discussion). J. Comput. Graph. Statist. 10 1-111.
• Yuen, W. K. (2000). Applications of geometric bounds to the convergence rate of Markov chains on ℝⁿ. Stochastic Process. Appl. 87 1-23.