Open Access
Weak convergence and optimal scaling of random walk Metropolis algorithms
A. Gelman, W. R. Gilks, G. O. Roberts
Ann. Appl. Probab. 7(1): 110-120 (February 1997). DOI: 10.1214/aoap/1034625254

Abstract

This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension of a sequence of target densities, n, converges to $\infty$. When the proposal variance is appropriately scaled according to n, the sequence of stochastic processes formed by the first component of each Markov chain converges to the appropriate limiting Langevin diffusion process.
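
For illustration only (this is not code from the paper), the following Python sketch implements a random walk Metropolis sampler whose Gaussian proposal increments have variance $\ell^2/n$ in dimension n, applied to an i.i.d. standard normal target. The constant $\ell$, the function names, and the run lengths are assumptions chosen for this example.

```python
# Minimal sketch (illustrative, not the paper's code): random walk Metropolis
# on an n-dimensional product-form target, proposal variance scaled as l^2 / n.
import numpy as np

def rwm(log_target, x0, n_iters, l=2.38, rng=None):
    """Random walk Metropolis with N(0, (l^2 / n) I) proposal increments."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(x0)
    sigma = l / np.sqrt(n)                        # step size shrinks with dimension
    x = np.array(x0, dtype=float)
    lp = log_target(x)
    accepts = 0
    chain = np.empty((n_iters, n))
    for t in range(n_iters):
        y = x + sigma * rng.standard_normal(n)    # symmetric Gaussian proposal
        lp_y = log_target(y)
        if np.log(rng.uniform()) < lp_y - lp:     # Metropolis accept/reject step
            x, lp = y, lp_y
            accepts += 1
        chain[t] = x
    return chain, accepts / n_iters

if __name__ == "__main__":
    n = 50                                        # illustrative dimension
    log_target = lambda x: -0.5 * np.sum(x**2)    # i.i.d. standard normal target
    chain, acc = rwm(log_target, np.zeros(n), 20000)
    print(f"empirical acceptance rate: {acc:.3f}")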

The limiting diffusion approximation admits a straightforward efficiency maximization problem, and the resulting asymptotically optimal policy is related to the asymptotic acceptance rate of proposed moves for the algorithm. The asymptotically optimal acceptance rate is 0.234 under quite general conditions.
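
As a hedged numerical illustration (assuming the closed forms commonly quoted for this result with a standard normal product target, so the roughness constant is taken to be 1): the limiting diffusion speed is $h(\ell) = 2\ell^2\Phi(-\ell/2)$ and the asymptotic acceptance rate is $a(\ell) = 2\Phi(-\ell/2)$; maximizing $h$ over $\ell$ numerically recovers $\ell \approx 2.38$ and $a(\ell) \approx 0.234$.

```python
# Hedged numerical check under the assumed closed forms above (standard normal
# target, roughness constant 1): maximize the diffusion speed h(l) over l and
# report the acceptance rate a(l) at the optimum.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def speed(l):
    return 2.0 * l**2 * norm.cdf(-l / 2.0)       # h(l) = 2 l^2 Phi(-l/2)

def acceptance(l):
    return 2.0 * norm.cdf(-l / 2.0)              # a(l) = 2 Phi(-l/2)

res = minimize_scalar(lambda l: -speed(l), bounds=(0.1, 10.0), method="bounded")
l_opt = res.x
print(f"optimal l       : {l_opt:.3f}")          # approximately 2.38
print(f"acceptance rate : {acceptance(l_opt):.3f}")  # approximately 0.234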

The main result is proved in the case where the target density has a symmetric product form. Extensions of the result are discussed.

Citation


A. Gelman, W. R. Gilks, G. O. Roberts. "Weak convergence and optimal scaling of random walk Metropolis algorithms." Ann. Appl. Probab. 7(1): 110-120, February 1997. https://doi.org/10.1214/aoap/1034625254

Information

Published: February 1997
First available in Project Euclid: 14 October 2002

zbMATH: 0876.60015
MathSciNet: MR1428751
Digital Object Identifier: 10.1214/aoap/1034625254

Subjects:
Primary: 60F05
Secondary: 65U05

Keywords: Markov chain Monte Carlo, Metropolis algorithm, optimal scaling, weak convergence

Rights: Copyright © 1997 Institute of Mathematical Statistics
