Electronic Communications in Probability

How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling

Antar Bandyopadhyay and David Aldous



Given a probability law $\pi$ on a set $S$ and a function $g : S \to \mathbb{R}$, suppose one wants to estimate the mean $\bar{g} = \int g \, d\pi$. The Markov Chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution $\pi$. Typically one has no a priori bounds on the chain's mixing time, so even if simulations suggest rapid mixing one cannot infer rigorous confidence intervals for $\bar{g}$. But suppose there is also a separate method which (slowly) gives samples exactly from $\pi$. Using $n$ exact samples, one could immediately get a confidence interval of length $O(n^{-1/2})$. But one can do better. Use each exact sample as the initial state of a Markov chain, and run each of these $n$ chains for $m$ steps. We show how to construct confidence intervals which are always valid, and which, if the (unknown) relaxation time of the chain is sufficiently small relative to $m/n$, have length $O(n^{-1} \log n)$ with high probability.
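The basic scheme in the abstract can be illustrated with a minimal sketch. This is not the paper's actual confidence-interval construction (which is more refined and achieves the $O(n^{-1} \log n)$ length); it only shows the simpler observation that the $n$ per-chain averages are i.i.d. because each chain starts from an independent exact draw, so a standard normal-theory interval on them is valid. The functions `exact_sample` and `mcmc_step`, and the toy target (uniform law on $\{0,\dots,k-1\}$ with a lazy random walk on the cycle), are assumptions made for the illustration.

```python
import random
import statistics

def exact_sample(k, rng):
    """Stand-in for a slow exact sampler: here pi is uniform on {0,...,k-1}."""
    return rng.randrange(k)

def mcmc_step(x, k, rng):
    """One step of a lazy random walk on the cycle Z_k (stationary law: uniform)."""
    u = rng.random()
    if u < 1 / 3:
        return (x + 1) % k
    elif u < 2 / 3:
        return (x - 1) % k
    return x  # hold with probability 1/3

def combined_estimate(g, k, n, m, rng):
    """Seed n chains with independent exact samples, run each for m steps,
    and build a CI from the n i.i.d. per-chain averages of g."""
    chain_means = []
    for _ in range(n):
        x = exact_sample(k, rng)  # exact draw from pi starts this chain in stationarity
        total = 0.0
        for _ in range(m):
            total += g(x)
            x = mcmc_step(x, k, rng)
        chain_means.append(total / m)
    est = statistics.fmean(chain_means)
    se = statistics.stdev(chain_means) / n ** 0.5
    return est, (est - 1.96 * se, est + 1.96 * se)

rng = random.Random(0)
# Estimate the mean of g(x) = x under pi (true value: 4.5 for k = 10).
est, (lo_ci, hi_ci) = combined_estimate(lambda x: x, k=10, n=50, m=100, rng=rng)
print(est, lo_ci, hi_ci)
```

Because each chain is started in stationarity, every step contributes an unbiased sample, and running $m$ steps per chain reduces the variance of each chain mean below that of a single exact draw; this is the variance-reduction idea that the paper's construction exploits and quantifies.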

Article information

Electron. Commun. Probab. Volume 6 (2001), paper no. 8, 79-89.

Accepted: 28 July 2001
First available in Project Euclid: 19 April 2016


Primary: 60J10: Markov chains (discrete-time Markov processes on discrete state spaces)
Secondary: 62M05: Markov processes: estimation; 68W20: Randomized algorithms

Keywords: Confidence interval; Exact sampling; Markov chain Monte Carlo

This work is licensed under a Creative Commons Attribution 3.0 License.


Bandyopadhyay, Antar; Aldous, David. How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling. Electron. Commun. Probab. 6 (2001), paper no. 8, 79--89. doi:10.1214/ECP.v6-1037. https://projecteuclid.org/euclid.ecp/1461097553.


