Electronic Communications in Probability

How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling

Antar Bandyopadhyay and David Aldous

Full-text: Open access

Abstract

Given a probability law $\pi$ on a set $S$ and a function $g : S \rightarrow \mathbb{R}$, suppose one wants to estimate the mean $\bar{g} = \int g \, d\pi$. The Markov Chain Monte Carlo method consists of inventing and simulating a Markov chain with stationary distribution $\pi$. Typically one has no a priori bounds on the chain's mixing time, so even if simulations suggest rapid mixing one cannot infer rigorous confidence intervals for $\bar{g}$. But suppose there is also a separate method which (slowly) gives samples exactly from $\pi$. Using $n$ exact samples, one could immediately get a confidence interval of length $O(n^{-1/2})$. But one can do better. Use each exact sample as the initial state of a Markov chain, and run each of these $n$ chains for $m$ steps. We show how to construct confidence intervals which are always valid, and which, if the (unknown) relaxation time of the chain is sufficiently small relative to $m/n$, have length $O(n^{-1} \log n)$ with high probability.
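
The basic setup can be sketched in code. The Python below is only a minimal illustration of the scheme described above (draw $n$ exact samples, run an independent $m$-step chain from each, and build an interval from the $n$ per-chain time-averages); the helper names exact_sampler and mcmc_step and the toy cycle-walk demo are hypothetical, and the naive normal-approximation interval shown here has length $O(n^{-1/2})$, not the sharper adaptive $O(n^{-1} \log n)$ interval constructed in the paper.

```python
import numpy as np


def mcmc_with_exact_starts(exact_sampler, mcmc_step, g, n, m, rng=None):
    """Run n independent m-step chains, each started from an exact draw of pi,
    and return an estimate of gbar with a naive 95% confidence interval.

    exact_sampler(rng) -> one state drawn exactly from pi (the slow method)
    mcmc_step(x, rng)  -> next state of a pi-stationary chain (the fast method)
    g(x)               -> real-valued function whose mean gbar we estimate
    """
    rng = np.random.default_rng() if rng is None else rng
    chain_means = np.empty(n)
    for i in range(n):
        x = exact_sampler(rng)           # chain i starts exactly at stationarity
        total = 0.0
        for _ in range(m):
            total += g(x)
            x = mcmc_step(x, rng)
        chain_means[i] = total / m       # time-average of g along chain i

    # Each chain starts from an exact sample, so the n time-averages are
    # i.i.d. with mean gbar; a standard normal-approximation interval on
    # them is therefore valid no matter how slowly the chain mixes.
    est = chain_means.mean()
    se = chain_means.std(ddof=1) / np.sqrt(n)
    return est, (est - 1.96 * se, est + 1.96 * se)


if __name__ == "__main__":
    # Toy illustration: lazy random walk on the cycle Z_10, whose stationary
    # law pi is uniform, so "exact sampling" is trivial here.
    K = 10

    def exact_sampler(rng):
        return int(rng.integers(K))

    def mcmc_step(x, rng):
        u = rng.random()
        if u < 0.25:
            return (x + 1) % K
        if u < 0.50:
            return (x - 1) % K
        return x                         # lazy: hold with probability 1/2

    est, ci = mcmc_with_exact_starts(exact_sampler, mcmc_step, g=float,
                                     n=50, m=200)
    print(est, ci)                       # true mean of g(x) = x under pi is 4.5
```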

Article information

Source
Electron. Commun. Probab. Volume 6 (2001), paper no. 8, 79-89.

Dates
Accepted: 28 July 2001
First available in Project Euclid: 19 April 2016

Permanent link to this document
https://projecteuclid.org/euclid.ecp/1461097553

Digital Object Identifier
doi:10.1214/ECP.v6-1037

Mathematical Reviews number (MathSciNet)
MR1855344

Zentralblatt MATH identifier
0987.60084

Subjects
Primary: 60J10: Markov chains (discrete-time Markov processes on discrete state spaces)
Secondary: 62M05: Markov processes: estimation; 68W20: Randomized algorithms

Keywords
Confidence interval; Exact sampling; Markov chain Monte Carlo

Rights
This work is licensed under a Creative Commons Attribution 3.0 License.

Citation

Bandyopadhyay, Antar; Aldous, David. How to Combine Fast Heuristic Markov Chain Monte Carlo with Slow Exact Sampling. Electron. Commun. Probab. 6 (2001), paper no. 8, 79--89. doi:10.1214/ECP.v6-1037. https://projecteuclid.org/euclid.ecp/1461097553.

