Bernoulli, Volume 7, Number 2 (2001), 223-242.

An adaptive Metropolis algorithm

Heikki Haario, Eero Saksman, and Johanna Tamminen

Abstract

A proper choice of a proposal distribution for Markov chain Monte Carlo methods, for example for the Metropolis-Hastings algorithm, is well known to be a crucial factor for the convergence of the algorithm. In this paper we introduce an adaptive Metropolis (AM) algorithm, where the Gaussian proposal distribution is updated along the process using the full information cumulated so far. Due to the adaptive nature of the process, the AM algorithm is non-Markovian, but we establish here that it has the correct ergodic properties. We also include the results of our numerical tests, which indicate that the AM algorithm competes well with traditional Metropolis-Hastings algorithms, and demonstrate that the AM algorithm is easy to use in practical computation.
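The adaptation rule described in the abstract can be illustrated with a minimal sketch: a random-walk Metropolis sampler whose Gaussian proposal covariance is the scaled empirical covariance of the whole chain history, regularized by a small multiple of the identity. This is an assumption-laden reading of the paper, not its reference implementation: the scaling s_d = 2.4^2/d follows Gelman et al. (1996), while the function name, the initial non-adaptive period `t0`, and the regularizer `eps` are illustrative choices, and the covariance is recomputed from scratch each step for clarity (the paper gives a recursive update formula).

```python
import numpy as np

def am_sample(log_target, x0, n_steps, t0=100, eps=1e-6, rng=None):
    """Sketch of an adaptive Metropolis (AM) sampler.

    The Gaussian proposal covariance is C_t = s_d * Cov(X_0,...,X_{t-1})
    + s_d * eps * I_d after an initial non-adaptive period of t0 steps.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float)
    d = x.size
    sd = 2.4**2 / d                      # dimension-dependent scaling
    chain = np.empty((n_steps, d))
    chain[0] = x
    logp = log_target(x)
    cov0 = np.eye(d)                     # fixed covariance for the burn-in period
    for t in range(1, n_steps):
        if t <= t0:
            C = cov0
        else:
            # Empirical covariance of the full history so far, regularized
            # so the proposal never degenerates.
            C = sd * np.cov(chain[:t].T) + sd * eps * np.eye(d)
        proposal = rng.multivariate_normal(x, C)
        logp_prop = log_target(proposal)
        # Standard Metropolis accept/reject step (symmetric proposal).
        if np.log(rng.random()) < logp_prop - logp:
            x, logp = proposal, logp_prop
        chain[t] = x
    return chain

# Example: sample a 2-d standard Gaussian starting far from the mode.
log_std_normal = lambda x: -0.5 * float(np.sum(np.asarray(x) ** 2))
chain = am_sample(log_std_normal, [5.0, 5.0], 5000,
                  rng=np.random.default_rng(42))
```

Because the proposal depends on the entire past trajectory, the resulting process is non-Markovian, which is exactly why the paper has to establish ergodicity directly rather than appeal to standard Metropolis-Hastings theory.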

Article information

Source
Bernoulli Volume 7, Number 2 (2001), 223-242.

Dates
First available in Project Euclid: 25 March 2004

Permanent link to this document
http://projecteuclid.org/euclid.bj/1080222083

Mathematical Reviews number (MathSciNet)
MR1828504

Zentralblatt MATH identifier
0989.65004

Keywords
adaptive Markov chain Monte Carlo; comparison; convergence; ergodicity; Markov chain Monte Carlo; Metropolis-Hastings algorithm

Citation

Haario, Heikki; Saksman, Eero; Tamminen, Johanna. An adaptive Metropolis algorithm. Bernoulli 7 (2001), no. 2, 223--242. http://projecteuclid.org/euclid.bj/1080222083.


References

  • [1] Davidson, J. and de Jong, R. (1997) Strong laws of large numbers for dependent heterogeneous processes: a synthesis of recent and new results. Econometric Rev., 16, 251-279.
  • [2] Dobrushin, R. (1956) Central limit theorems for non-stationary Markov chains II. Theory Probab. Appl., 1, 329-383.
  • [3] Evans, M. (1991) Chaining via annealing. Ann. Statist., 19, 382-393.
  • [4] Fishman, G.S. (1996) Monte Carlo: Concepts, Algorithms and Applications. New York: Springer-Verlag.
  • [5] Gelfand, A.E. and Sahu, S.K. (1994) On Markov chain Monte Carlo acceleration. J. Comput. Graph. Statist., 3, 261-276.
  • [6] Gelman, A.G., Roberts, G.O. and Gilks, W.R. (1996) Efficient Metropolis jumping rules. In J.M. Bernardo, J.O. Berger, A.P. Dawid and A.F.M. Smith (eds), Bayesian Statistics 5, pp. 599-608. Oxford: Oxford University Press.
  • [7] Gilks, W.R. and Roberts, G.O. (1995) Strategies for improving MCMC. In W.R. Gilks, S. Richardson and D.J. Spiegelhalter (eds), Markov Chain Monte Carlo in Practice, pp. 75-88. London: Chapman & Hall.
  • [8] Gilks, W.R., Roberts, G.O. and George, E.I. (1994) Adaptive direction sampling. The Statistician, 43, 179-189.
  • [9] Gilks, W.R., Richardson, S. and Spiegelhalter, D.J. (1995) Introducing Markov chain Monte Carlo. In W.R. Gilks, S. Richardson and D.J. Spiegelhalter (eds), Markov Chain Monte Carlo in Practice, pp. 1-19. London: Chapman & Hall.
  • [10] Gilks, W.R., Roberts, G.O. and Sahu, S.K. (1998) Adaptive Markov chain Monte Carlo. J. Amer. Statist. Assoc., 93, 1045-1054.
  • [11] Haario, H. and Saksman, E. (1991) Simulated annealing process in general state space. Adv. Appl. Probab., 23, 866-893.
  • [12] Haario, H., Saksman, E. and Tamminen, J. (1999) Adaptive proposal distribution for random walk Metropolis algorithm. Comput. Statist., 14, 375-395.
  • [13] Hall, P. and Heyde, C.C. (1980) Martingale Limit Theory and Its Application. New York: Academic Press.
  • [14] Hastings, W.K. (1970) Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57, 97-109.
  • [15] McLeish, D.L. (1975) A maximal inequality and dependent strong laws. Ann. Probab., 3, 829-839.
  • [16] Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H. and Teller, E. (1953) Equation of state calculations by fast computing machines. J. Chem. Phys., 21, 1087-1091.
  • [17] Neveu, J. (1965) Mathematical Foundations of the Calculus of Probability. San Francisco: Holden-Day.
  • [18] Nummelin, E. (1984) General Irreducible Markov Chains and Non-negative Operators. Cambridge: Cambridge University Press.
  • [19] Roberts, G.O., Gelman, A. and Gilks, W.R. (1997) Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab., 7, 110-120.
  • [20] Sahu, S.K. and Zhigljavsky, A.A. (1999) Self regenerative Markov chain Monte Carlo with adaptation. Preprint. http://www.statslab.cam.ac.uk/ mcmc.
  • [21] Tierney, L. (1994) Markov chains for exploring posterior distributions (with discussion). Ann. Statist., 22, 1701-1762.