The Annals of Applied Probability

Adaptive independent Metropolis–Hastings

Lars Holden, Ragnar Hauge, and Marit Holden

Full-text: Open access

Abstract

We propose an adaptive independent Metropolis–Hastings algorithm with the ability to learn from all previous proposals in the chain except the current location. It is an extension of the independent Metropolis–Hastings algorithm. Convergence is proved provided a strong Doeblin condition is satisfied, which essentially requires that all the proposal functions have uniformly heavier tails than the stationary distribution. The proof also holds if proposals depending on the current state are used intermittently, provided the information from these iterations is not used for adaption. The algorithm gives samples from the exact distribution within a finite number of iterations with probability arbitrarily close to 1. The algorithm is particularly useful when a large number of samples from the same distribution is needed, as in Bayesian estimation, and in CPU-intensive applications such as inverse problems and optimization.
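The idea sketched in the abstract can be illustrated with a minimal one-dimensional sampler. The sketch below is not the paper's construction: the Gaussian proposal family, the variance-inflation factor, and the adaptation schedule are all illustrative assumptions. What it does share with the abstract is the key structure: the proposal never depends on the current state, adaptation uses only the history of past proposals, and the proposal's tails are kept heavier than the target's (a crude stand-in for the Doeblin-type condition).

```python
import numpy as np


def adaptive_independent_mh(log_target, n_iter, x0, seed=0):
    """Sketch of an adaptive independent Metropolis-Hastings sampler.

    The Gaussian proposal is independent of the current state; its mean
    and scale are periodically re-fitted from the history of all past
    proposals, importance-weighted toward the target.  The fitted scale
    is inflated by 1.5 so the proposal keeps heavier tails than the
    target.  All constants here are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    x = x0
    mu, sigma = 0.0, 5.0              # wide initial proposal (assumed)
    props, log_qs = [], []            # past proposals and their densities
    chain = np.empty(n_iter)
    for t in range(n_iter):
        def log_q(z):                 # current proposal log-density
            return -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma)
        y = rng.normal(mu, sigma)     # proposal does not use x
        props.append(y)
        log_qs.append(log_q(y))
        # standard independence-sampler acceptance ratio
        log_alpha = (log_target(y) + log_q(x)) - (log_target(x) + log_q(y))
        if np.log(rng.uniform()) < log_alpha:
            x = y
        chain[t] = x
        # adapt from all past proposals only -- never the current state
        if t % 100 == 99:
            ys = np.asarray(props)
            lw = log_target(ys) - np.asarray(log_qs)  # importance log-weights
            w = np.exp(lw - lw.max())
            w /= w.sum()
            mu = float(w @ ys)
            sigma = 1.5 * float(np.sqrt(w @ (ys - mu) ** 2)) + 1e-3
    return chain
```

For example, with `log_target = lambda z: -0.5 * z**2` (a standard normal up to a constant), the sampled chain's moments settle near the target's. Because each accepted state is an exact draw once the chain has coupled, no elaborate burn-in analysis is needed beyond discarding an initial segment.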

Article information

Source
Ann. Appl. Probab., Volume 19, Number 1 (2009), 395–413.

Dates
First available in Project Euclid: 20 February 2009

Permanent link to this document
https://projecteuclid.org/euclid.aoap/1235140343

Digital Object Identifier
doi:10.1214/08-AAP545

Mathematical Reviews number (MathSciNet)
MR2498682

Zentralblatt MATH identifier
1192.65009

Subjects
Primary: 65C05: Monte Carlo methods
Secondary: 65C40: Computational Markov chains

Keywords
Adaption; Metropolis–Hastings; Markov chain Monte Carlo; inverse problems

Citation

Holden, Lars; Hauge, Ragnar; Holden, Marit. Adaptive independent Metropolis–Hastings. Ann. Appl. Probab. 19 (2009), no. 1, 395–413. doi:10.1214/08-AAP545. https://projecteuclid.org/euclid.aoap/1235140343

