Statistical Science

Ordering and Improving the Performance of Monte Carlo Markov Chains

Antonietta Mira

Full-text: Open access


An overview of orderings defined on the space of Markov chains having a prespecified unique stationary distribution is given. The intuition gained by studying these orderings is used to improve existing Markov chain Monte Carlo algorithms.
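Among the algorithms whose performance such orderings compare is the Metropolis–Hastings sampler. As an illustration only (not taken from the paper), here is a minimal random-walk Metropolis–Hastings sketch in Python; the target (a standard normal, specified up to a constant via its log density), the proposal scale, and all function names are assumptions chosen for the example.

```python
import math
import random

def metropolis_hastings(log_target, proposal_sd, x0, n_steps, seed=0):
    """Random-walk Metropolis sampler for the density proportional to exp(log_target).

    Illustrative sketch: a symmetric Gaussian proposal, so the
    acceptance probability reduces to min(1, pi(y)/pi(x)).
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        y = x + rng.gauss(0.0, proposal_sd)  # symmetric random-walk proposal
        # Accept y with probability min(1, pi(y)/pi(x)), on the log scale
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        samples.append(x)
    return samples

# Illustrative target: standard normal, log density up to a constant
chain = metropolis_hastings(lambda x: -0.5 * x * x, 1.0, 0.0, 20000)
mean = sum(chain) / len(chain)
```

The orderings surveyed in the paper (Peskun, covariance, efficiency) compare chains such as this one through the asymptotic variance of ergodic averages like `mean` above.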

Article information

Statist. Sci., Volume 16, Number 4 (2001), 340-350.

First available in Project Euclid: 5 March 2002


Keywords: asymptotic variance; convergence ordering; covariance ordering; efficiency ordering; Metropolis–Hastings algorithm; Peskun ordering; reversible jumps


Mira, Antonietta. Ordering and Improving the Performance of Monte Carlo Markov Chains. Statist. Sci. 16 (2001), no. 4, 340--350. doi:10.1214/ss/1015346319.



  • Bellman, R. (1972). Introduction to Matrix Analysis. McGraw-Hill, New York.
  • Bendat, J. and Sherman, S. (1955). Monotone and convex operator functions. Trans. Amer. Math. Soc. 79 58-71.
  • Besag, J. (2000). Markov chain Monte Carlo for statistical inference. Technical Report 9, Center for Statistics and the Social Sciences. Available at papers/
  • Besag, J. and Green, P. J. (1993). Spatial statistics and Bayesian computation. J. Roy. Statist. Soc. Ser. B 55 25-37.
  • Billera, L. J. and Diaconis, P. (2001). A geometric interpretation of the Metropolis algorithm. Statist. Sci. 16 335-339.
  • Brockwell, A. E. and Kadane, J. B. (2001). Practical regeneration for Markov chain Monte Carlo simulation. Technical Report 757, Dept. Statist., Carnegie Mellon Univ. Available at
  • Casella, G. and Robert, C. P. (1996). Rao-Blackwellization of sampling schemes. Biometrika 83 81-94.
  • Chauveau, D. and Vandekerkhove, P. (2001). Improving convergence of the Hastings-Metropolis algorithm with a learning proposal. Scand. J. Statist. To appear.
  • Diaconis, P., Holmes, S. and Neal, R. M. (2000). Analysis of a nonreversible Markov chain sampler. Ann. Appl. Probab. 10 726-752.
  • Frigessi, A., Di Stefano, P., Hwang, A. and Sheu, A. (1993). Convergence rates of the Gibbs sampler, the Metropolis algorithm and other single-site updating dynamics. J. Roy. Statist. Soc. Ser. B 55 205-219.
  • Frigessi, A., Hwang, C. and Younes, L. (1992). Optimal spectral structure of reversible stochastic matrices, Monte Carlo methods and the simulation of Markov random fields. Ann. Appl. Probab. 2 610-628.
  • Gelfand, A. E. and Smith, A. F. M. (1990). Sampling based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85 398-409.
  • Gilks, W. R., Roberts, G. O. and Sahu, S. K. (1998). Adaptive Markov chain Monte Carlo through regeneration. J. Amer. Statist. Assoc. 93 1045-1054.
  • Green, P. J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82 711-732.
  • Green, P. J. and Mira, A. (2001). Delayed rejection in reversible jump Metropolis-Hastings. Biometrika 88 1035-1053.
  • Greenwood, P. E., McKeague, I. W. and Wefelmeyer, W. (1996). Outperforming the Gibbs sampler empirical estimator for nearest-neighbor random fields. Ann. Statist. 24 1433-1456.
  • Greenwood, P. E. and Wefelmeyer, W. (1995). Efficiency of empirical estimators for Markov chains. Ann. Statist. 23 132-143.
  • Greenwood, P. E. and Wefelmeyer, W. (1999). Reversible Markov chains and optimality of symmetrized empirical estimators. Bernoulli 5 109-123.
  • Haario, H., Saksman, E. and Tamminen, J. (1999). Adaptive proposal distribution for random walk Metropolis algorithm. Comput. Statist. 14 375-395.
  • Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7 223-242.
  • Holden, L. (1998). Adaptive chains. Technical Report SAND/11/98, Norwegian Computing Center.
  • Liu, J. S. (1996a). Metropolized independent sampling. Statist. Comput. 6 113-119.
  • Liu, J. S. (1996b). Peskun theorem and a modified discrete-state Gibbs sampler. Biometrika 83 681-682.
  • Liu, J. S., Wong, W. H. and Kong, A. (1995). Correlation structure and convergence rate of the Gibbs sampler with various scans. J. Roy. Statist. Soc. Ser. B 57 157-169.
  • Löwner, K. (1934). Über monotone Matrixfunktionen. Math. Z. 38 177-216.
  • McKeague, I. W. and Wefelmeyer, W. (2000). Markov chain Monte Carlo and Rao-Blackwellization. J. Statist. Plann. Inf. 85 171-182.
  • Mira, A. (2001a). Efficiency increasing and stationarity preserving probability mass transfers for MCMC. Statist. Probab. Lett. To appear.
  • Mira, A. (2001b). On Metropolis-Hastings algorithms with delayed rejection. Metron. To appear.
  • Mira, A. and Geyer, C. J. (1999). Ordering Monte Carlo Markov chains. Technical Report 632, School of Statistics, Univ. Minnesota. Available at anto/
  • Mira, A. and Geyer, C. J. (2000). On non-reversible Markov chains. Fields Inst. Comm. 26 93-108.
  • Mira, A., Omtzigt, P. and Roberts, G. (2001). Stationary preserving and efficiency increasing probability mass transfers made possible. Technical Report 14, Dept. Economics, Univ. Insubria. Available at anto/
  • Mira, A. and Tierney, L. (2001). Efficiency and convergence properties of slice samplers. Scand. J. Statist. 29 1035-1053.
  • Neal, R. M. (1998). Suppressing random walks in Markov chain Monte Carlo using ordered overrelaxation. In Learning in Graphical Models (M. I. Jordan, ed.) 205-225. Kluwer Academic, Dordrecht.
  • Peskun, P. H. (1973). Optimum Monte Carlo sampling using Markov chains. Biometrika 60 607-612.
  • Roberts, G. O. (1996). Markov chain concepts related to sampling algorithms. In Markov Chain Monte Carlo in Practice (W. R. Gilks, S. Richardson and D. J. Spiegelhalter, eds.). Chapman and Hall, New York.
  • Roberts, G. O. and Sahu, S. K. (1997). Updating schemes, covariance structure, blocking and parametrisation for the Gibbs sampler. J. Roy. Statist. Soc. Ser. B 59 291-317.
  • Roberts, G. O. and Sahu, S. K. (2001). Approximate predetermined convergence properties of the Gibbs sampler. J. Comput. Graph. Statist. 10 216-229.
  • Sahu, S. K. and Roberts, G. O. (1999). On convergence of the EM algorithm and the Gibbs sampler. Statist. Comput. 9 55-64.
  • Tierney, L. (1994). Markov chains for exploring posterior distributions. Ann. Statist. 22 1701-1762.
  • Tierney, L. (1998). A note on Metropolis-Hastings kernels for general state spaces. Ann. Appl. Probab. 8 1-9.
  • Tierney, L. and Mira, A. (1999). Some adaptive Monte Carlo methods for Bayesian inference. Statist. Med. 18 2507-2515.