Abstract
This paper considers the problem of scaling the proposal distribution of a multidimensional random walk Metropolis algorithm in order to maximize the efficiency of the algorithm. The main result is a weak convergence result as the dimension n of a sequence of target densities tends to $\infty$. When the proposal variance is appropriately scaled according to n, the sequence of stochastic processes formed by the first component of each Markov chain converges to the appropriate limiting Langevin diffusion process.
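The paper itself contains no code; the following minimal Python sketch illustrates the setup the abstract describes, assuming an i.i.d. standard normal target (a symmetric product form) and proposal variance $\ell^2/n$. The function name `rwm_first_component` and all parameter choices are illustrative, not from the paper.

```python
import numpy as np

def rwm_first_component(n, ell, n_steps, rng=None):
    """Random walk Metropolis on an n-dimensional standard normal target,
    with proposal variance ell**2 / n, tracking the first component."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(n)      # start the chain in stationarity
    sigma = ell / np.sqrt(n)        # proposal std, so variance is ell**2 / n
    trace = np.empty(n_steps)
    accepts = 0
    for t in range(n_steps):
        y = x + sigma * rng.standard_normal(n)
        log_alpha = 0.5 * (x @ x - y @ y)   # log pi(y) - log pi(x) for N(0, I)
        if np.log(rng.random()) < log_alpha:
            x = y
            accepts += 1
        trace[t] = x[0]
    return trace, accepts / n_steps
```

As n grows, the trace of the first component (suitably sped up in time) behaves like the limiting Langevin diffusion.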
The limiting diffusion approximation admits a straightforward efficiency maximization problem, and the resulting asymptotically optimal policy is related to the asymptotic acceptance rate of proposed moves for the algorithm. The asymptotically optimal acceptance rate is 0.234 under quite general conditions.
The main result is proved in the case where the target density has a symmetric product form. Extensions of the result are discussed.
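As a rough numerical check of the 0.234 figure, reusing the `rwm_first_component` sketch above: for the standard normal target, the asymptotically optimal value of $\ell$ is about 2.38 (the paper's optimal constant $2.38/\sqrt{I}$ with roughness measure $I = 1$), and the empirical acceptance rate near that value should be close to 0.234. The dimension and step counts below are arbitrary illustrative choices.

```python
# Hypothetical parameter choices; requires the rwm_first_component sketch above.
import numpy as np

for ell in (1.0, 2.38, 4.0):
    _, rate = rwm_first_component(n=100, ell=ell, n_steps=50_000,
                                  rng=np.random.default_rng(0))
    print(f"ell = {ell:4.2f}  acceptance rate ~ {rate:.3f}")
```

Mistuning $\ell$ in either direction reduces the speed of the limiting diffusion: small $\ell$ accepts almost every proposal but moves slowly, while large $\ell$ proposes big jumps that are rarely accepted.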
Citation
A. Gelman, W. R. Gilks, G. O. Roberts. "Weak convergence and optimal scaling of random walk Metropolis algorithms." Ann. Appl. Probab. 7 (1): 110–120, February 1997. https://doi.org/10.1214/aoap/1034625254