## Annals of Applied Probability

### Spectral gaps for a Metropolis–Hastings algorithm in infinite dimensions

#### Abstract

We study the problem of sampling high- and infinite-dimensional target measures arising in applications such as conditioned diffusions and inverse problems. We focus on those measures that arise from approximating measures on Hilbert spaces defined via a density with respect to a Gaussian reference measure. We consider the Metropolis–Hastings algorithm, which adds an accept–reject mechanism to a Markov chain proposal in order to make the chain reversible with respect to the target measure. We focus on cases where the proposal is either a Gaussian random walk (RWM) with covariance equal to that of the reference measure, or an Ornstein–Uhlenbeck proposal (pCN) for which the reference measure is invariant.

Previous results, in terms of scaling and diffusion limits, suggested that pCN has a convergence rate independent of the dimension, while the RWM method has undesirable dimension-dependent behaviour. We confirm this claim by exhibiting a dimension-independent Wasserstein spectral gap for the pCN algorithm for a large class of target measures. In our setting this Wasserstein spectral gap implies an $L^{2}$-spectral gap. We use both spectral gaps to show that the ergodic average satisfies a strong law of large numbers, a central limit theorem and nonasymptotic bounds on the mean square error, all dimension-independent. In contrast, we show that the spectral gap of the RWM algorithm applied to the reference measure degenerates as the dimension tends to infinity.
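The two proposals compared in the abstract can be sketched in a finite-dimensional discretisation. This is a minimal illustration, not the paper's construction: the reference covariance is taken to be the identity, and `phi` is a hypothetical log-density change of the target with respect to the Gaussian reference measure. Note that the pCN acceptance ratio involves only `phi`, while the RWM acceptance ratio also contains the Gaussian prior ratio, which is the source of its dimension-dependent behaviour.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x):
    # Hypothetical potential: target measure ∝ exp(-phi(x)) d(N(0, I)).
    return 0.25 * np.sum(x**4)

def pcn_step(x, beta=0.3):
    """One pCN step. The Ornstein-Uhlenbeck proposal preserves the
    reference measure N(0, I), so only phi enters the acceptance ratio."""
    xp = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal(x.shape)
    if np.log(rng.uniform()) < phi(x) - phi(xp):
        return xp
    return x

def rwm_step(x, beta=0.3):
    """One RWM step. The Gaussian reference density ratio appears in the
    acceptance probability in addition to phi."""
    xp = x + beta * rng.standard_normal(x.shape)
    log_alpha = phi(x) - phi(xp) + 0.5 * (np.dot(x, x) - np.dot(xp, xp))
    if np.log(rng.uniform()) < log_alpha:
        return xp
    return x

# Usage: run both chains on a d-dimensional discretisation.
d = 50
x_pcn = np.zeros(d)
x_rwm = np.zeros(d)
for _ in range(1000):
    x_pcn = pcn_step(x_pcn)
    x_rwm = rwm_step(x_rwm)
```

As the discretisation dimension `d` grows, the RWM step size `beta` must shrink to maintain a reasonable acceptance rate, whereas the pCN acceptance rate is insensitive to `d`, which is the informal content of the dimension-independent spectral gap.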

#### Article information

**Source:** Ann. Appl. Probab., Volume 24, Number 6 (2014), 2455–2490.

**Dates:** First available in Project Euclid: 26 August 2014

**Permanent link:** https://projecteuclid.org/euclid.aoap/1409058037

**Digital Object Identifier:** doi:10.1214/13-AAP982

**Mathematical Reviews number (MathSciNet):** MR3262508

**Zentralblatt MATH identifier:** 1307.65002

#### Citation

Hairer, Martin; Stuart, Andrew M.; Vollmer, Sebastian J. Spectral gaps for a Metropolis–Hastings algorithm in infinite dimensions. Ann. Appl. Probab. 24 (2014), no. 6, 2455–2490. doi:10.1214/13-AAP982. https://projecteuclid.org/euclid.aoap/1409058037

#### References

• Adler, R. J. (1990). An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Processes. Institute of Mathematical Statistics Lecture Notes—Monograph Series 12. IMS, Hayward, CA.
• Athreya, K. B. and Ney, P. (1978). A new approach to the limit theory of recurrent Markov chains. Trans. Amer. Math. Soc. 245 493–501.
• Bakry, D. and Émery, M. (1985). Diffusions hypercontractives. In Séminaire de Probabilités, XIX, 1983/84. Lecture Notes in Math. 1123 177–206. Springer, Berlin.
• Beskos, A., Kalogeropoulos, K. and Pazos, E. (2013). Advanced MCMC methods for sampling on diffusion pathspace. Stochastic Process. Appl. 123 1415–1453.
• Beskos, A., Roberts, G. and Stuart, A. (2009). Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions. Ann. Appl. Probab. 19 863–898.
• Beskos, A., Roberts, G., Stuart, A. and Voss, J. (2008). MCMC methods for diffusion bridges. Stoch. Dyn. 8 319–350.
• Beskos, A., Pinski, F., Sanz-Serna, J. M. and Stuart, A. M. (2011). Hybrid Monte Carlo on Hilbert spaces. Stochastic Process. Appl. 121 2201–2230.
• Bogachev, V. I. (1998). Gaussian Measures. Mathematical Surveys and Monographs 62. Amer. Math. Soc., Providence, RI.
• Bogachev, V. I. (2007). Measure Theory. Vol. I, II. Springer, Berlin.
• Chan, K. S. and Geyer, C. J. (1994). Discussion: Markov chains for exploring posterior distributions. Ann. Statist. 22 1747–1758.
• Cheeger, J. (1970). A lower bound for the smallest eigenvalue of the Laplacian. In Problems in Analysis (Papers Dedicated to Salomon Bochner, 1969) 195–199. Princeton Univ. Press, Princeton, NJ.
• Cotter, S. L., Roberts, G. O., Stuart, A. M. and White, D. (2013). MCMC methods for functions: Modifying old algorithms to make them faster. Statist. Sci. 28 424–446.
• Cuny, C. and Lin, M. (2009). Pointwise ergodic theorems with rate and application to the CLT for Markov chains. Ann. Inst. Henri Poincaré Probab. Stat. 45 710–733.
• Dashti, M., Harris, S. and Stuart, A. (2012). Besov priors for Bayesian inverse problems. Inverse Probl. Imaging 6 183–200.
• Dashti, M. and Stuart, A. M. (2011). Uncertainty quantification and weak approximation of an elliptic inverse problem. SIAM J. Numer. Anal. 49 2524–2542.
• Da Prato, G. and Zabczyk, J. (1992). Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and Its Applications 44. Cambridge Univ. Press, Cambridge.
• Diaconis, P. and Stroock, D. (1991). Geometric bounds for eigenvalues of Markov chains. Ann. Appl. Probab. 1 36–61.
• Eberle, A. (2014). Error bounds for Metropolis–Hastings algorithms applied to perturbations of Gaussian measures in high dimensions. Ann. Appl. Probab. 24 337–377.
• Frigessi, A., di Stefano, P., Hwang, C.-R. and Sheu, S. J. (1993). Convergence rates of the Gibbs sampler, the Metropolis algorithm and other single-site updating dynamics. J. Roy. Statist. Soc. Ser. B 55 205–219.
• Geyer, C. J. (1992). Practical Markov chain Monte Carlo. Statist. Sci. 7 473–483.
• Geyer, C. J. and Thompson, E. A. (1995). Annealing Markov chain Monte Carlo with applications to ancestral inference. J. Amer. Statist. Assoc. 90 909–920.
• Hairer, M. (2010). An introduction to stochastic PDEs. Lecture notes, University of Warwick.
• Hairer, M. and Majda, A. J. (2010). A simple framework to justify linear response theory. Nonlinearity 23 909–922.
• Hairer, M., Mattingly, J. C. and Scheutzow, M. (2011). Asymptotic coupling and a general form of Harris’ Theorem with applications to stochastic delay equations. Probab. Theory Related Fields 149 223–259.
• Hairer, M., Stuart, A. M. and Voss, J. (2007). Analysis of SPDEs arising in path sampling. II. The nonlinear case. Ann. Appl. Probab. 17 1657–1706.
• Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109.
• Hjort, N. L., Holmes, C., Müller, P. and Walker, S. G., eds. (2010). Bayesian Nonparametrics. Cambridge Series in Statistical and Probabilistic Mathematics 28. Cambridge Univ. Press, Cambridge.
• Joulin, A. and Ollivier, Y. (2010). Curvature, concentration and error estimates for Markov chain Monte Carlo. Ann. Probab. 38 2418–2442.
• Kipnis, C. and Varadhan, S. R. S. (1986). Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys. 104 1–19.
• Komorowski, T. and Walczuk, A. (2012). Central limit theorem for Markov processes with spectral gap in the Wasserstein metric. Stochastic Process. Appl. 122 2155–2184.
• Lassas, M., Saksman, E. and Siltanen, S. (2009). Discretization-invariant Bayesian inversion and Besov space priors. Inverse Probl. Imaging 3 87–122.
• Łatuszyński, K. and Niemiro, W. (2011). Rigorous confidence bounds for MCMC under a geometric drift condition. J. Complexity 27 23–38.
• Łatuszyński, K. and Roberts, G. O. (2013). CLTs and asymptotic variance of time-sampled Markov chains. Methodol. Comput. Appl. Probab. 15 237–247.
• Lawler, G. F. and Sokal, A. D. (1988). Bounds on the $L^{2}$ spectrum for Markov chains and Markov processes: A generalization of Cheeger’s inequality. Trans. Amer. Math. Soc. 309 557–580.
• Lee, P. M. (2004). Bayesian Statistics: An Introduction, 3rd ed. Arnold, London.
• Liu, J. S. (2008). Monte Carlo Strategies in Scientific Computing. Springer, New York.
• Lovász, L. and Simonovits, M. (1993). Random walks in a convex body and an improved volume algorithm. Random Structures Algorithms 4 359–412.
• Mattingly, J. C., Pillai, N. S. and Stuart, A. M. (2012). Diffusion limits of the random walk Metropolis algorithm in high dimensions. Ann. Appl. Probab. 22 881–930.
• Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21 1087–1092.
• Meyn, S. and Tweedie, R. L. (2009). Markov Chains and Stochastic Stability, 2nd ed. Cambridge Univ. Press, Cambridge.
• Nummelin, E. (1978). A splitting technique for Harris recurrent Markov chains. Probab. Theory Related Fields 43 309–318.
• Pillai, N. S., Stuart, A. M. and Thiéry, A. H. (2011). Optimal proposal design for random walk type Metropolis algorithms with Gaussian random field priors. arXiv preprint.
• Robert, C. P. and Casella, G. (2004). Monte Carlo Statistical Methods, 2nd ed. Springer, New York.
• Roberts, G. O. and Tweedie, R. L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83 95–110.
• Röckner, M. and Wang, F.-Y. (2001). Weak Poincaré inequalities and $L^{2}$-convergence rates of Markov semigroups. J. Funct. Anal. 185 564–603.
• Rudolf, D. (2012). Explicit error bounds for Markov chain Monte Carlo. Dissertationes Math. (Rozprawy Mat.) 485 1–93.
• Schwab, C. and Stuart, A. M. (2012). Sparse deterministic approximation of Bayesian inverse problems. Inverse Problems 28 045003, 32.
• Sinclair, A. and Jerrum, M. (1989). Approximate counting, uniform generation and rapidly mixing Markov chains. Inform. and Comput. 82 93–133.
• Stuart, A. M. (2010). Inverse problems: A Bayesian perspective. Acta Numer. 19 451–559.
• Tierney, L. (1998). A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. 8 1–9.
• Vollmer, S. J. (2013). Dimension-independent MCMC sampling for inverse problems with non-Gaussian priors. Available at arXiv:1302.2213.
• Wang, F.-Y. (2003). Functional inequalities for the decay of sub-Markov semigroups. Potential Anal. 18 1–23.