The Annals of Statistics

Sequentially interacting Markov chain Monte Carlo methods

Anthony Brockwell, Pierre Del Moral, and Arnaud Doucet

Full-text: Open access

Abstract

Sequential Monte Carlo (SMC) is a methodology for sampling approximately from a sequence of probability distributions of increasing dimension and estimating their normalizing constants. We propose here an alternative methodology named Sequentially Interacting Markov Chain Monte Carlo (SIMCMC). SIMCMC methods work by generating interacting non-Markovian sequences which behave asymptotically like independent Metropolis–Hastings (MH) Markov chains with the desired limiting distributions. Contrary to SMC, SIMCMC allows us to iteratively improve our estimates in an MCMC-like fashion. We establish convergence results under realistic verifiable assumptions and demonstrate its performance on several examples arising in Bayesian time series analysis.
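The abstract refers to Metropolis–Hastings (MH) Markov chains as the asymptotic behavior SIMCMC sequences mimic. As background only, the following is a minimal sketch of a single symmetric (random-walk) MH chain; it is not the authors' SIMCMC algorithm, and the target density, proposal, and function names here are illustrative assumptions.

```python
import math
import random

def metropolis_hastings(log_target, proposal_sampler, x0, n_steps, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler (illustrative sketch).

    log_target: log of the (possibly unnormalized) target density.
    proposal_sampler: maps (current state, rng) to a proposed state;
        assumed symmetric, so the proposal ratio cancels in the accept step.
    """
    rng = random.Random(seed)
    x = x0
    log_px = log_target(x)
    samples = []
    for _ in range(n_steps):
        y = proposal_sampler(x, rng)
        log_py = log_target(y)
        # Accept with probability min(1, pi(y)/pi(x)).
        if math.log(rng.random()) < log_py - log_px:
            x, log_px = y, log_py
        samples.append(x)
    return samples

# Example target: standard normal, up to a normalizing constant.
log_target = lambda x: -0.5 * x * x
proposal = lambda x, rng: x + rng.gauss(0.0, 1.0)
samples = metropolis_hastings(log_target, proposal, x0=0.0, n_steps=20000)
```

In SIMCMC, chains like this one are run for a sequence of target distributions and interact through each other's past samples rather than evolving independently.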

Article information

Source
Ann. Statist., Volume 38, Number 6 (2010), 3387-3411.

Dates
First available in Project Euclid: 30 November 2010

Permanent link to this document
https://projecteuclid.org/euclid.aos/1291126961

Digital Object Identifier
doi:10.1214/09-AOS747

Mathematical Reviews number (MathSciNet)
MR2766856

Zentralblatt MATH identifier
1251.65002

Subjects
Primary: 65C05: Monte Carlo methods 60J05: Discrete-time Markov processes on general state spaces
Secondary: 62F15: Bayesian inference

Keywords
Markov chain Monte Carlo; normalizing constants; sequential Monte Carlo; state-space models

Citation

Brockwell, Anthony; Del Moral, Pierre; Doucet, Arnaud. Sequentially interacting Markov chain Monte Carlo methods. Ann. Statist. 38 (2010), no. 6, 3387--3411. doi:10.1214/09-AOS747. https://projecteuclid.org/euclid.aos/1291126961

References

  • [1] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2007). Nonlinear Markov chain Monte Carlo via self-interacting approximations. Technical report, Dept. Mathematics, Bristol Univ.
  • [2] Andrieu, C. and Moulines, E. (2006). On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Probab. 16 1462–1505.
  • [3] Bercu, B., Del Moral, P. and Doucet, A. (2008). Fluctuations of interacting Markov chain Monte Carlo models. Technical Report Inria-00227536, INRIA Bordeaux Sud-Ouest. Available at http://hal.inria.fr/docs/00/23/92/48/PDF/RR-6438.pdf.
  • [4] Brockwell, A., Rojas, A. and Kass, R. (2004). Recursive Bayesian decoding of motor cortical signals by particle filtering. J. Neurophysiology 91 1899–1907.
  • [5] Carpenter, J., Clifford, P. and Fearnhead, P. (1999). An improved particle filter for non-linear problems. IEE Proceedings—Radar, Sonar and Navigation 146 2–7.
  • [6] Chopin, N. (2002). A sequential particle filter method for static models. Biometrika 89 539–552.
  • [7] Del Moral, P. (2004). Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer, New York.
  • [8] Del Moral, P. and Doucet, A. (2010). Interacting Markov chain Monte Carlo methods for solving nonlinear measure-valued equations. Ann. Appl. Probab. 20 593–639.
  • [9] Del Moral, P., Doucet, A. and Jasra, A. (2006). Sequential Monte Carlo samplers. J. R. Stat. Soc. Ser. B Stat. Methodol. 68 411–436.
  • [10] Del Moral, P. and Miclo, L. (2004). On convergence of chains with occupational self-interactions. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 460 325–346.
  • [11] Doucet, A., Godsill, S. J. and Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Statist. Comput. 10 197–208.
  • [12] Doucet, A., de Freitas, J. F. G. and Gordon, N. J., eds. (2001). Sequential Monte Carlo Methods in Practice. Springer, New York.
  • [13] Fox, D. (2002). KLD-sampling: Adaptive particle filters. In Advances in Neural Information Processing Systems 14 713–720. MIT Press, Cambridge, MA.
  • [14] Gerlach, R., Carter, C. K. and Kohn, R. (1999). Diagnostics for time series analysis. J. Time Ser. Anal. 20 309–330.
  • [15] Gilks, W. R. and Berzuini, C. (2001). Following a moving target—Monte Carlo inference for dynamic Bayesian models. J. R. Stat. Soc. Ser. B Stat. Methodol. 63 127–146.
  • [16] Glynn, P. and Meyn, S. (1996). A Liapunov bound for solutions of Poisson’s equation. Ann. Probab. 24 916–931.
  • [17] Green, P. J. (2003). Trans-dimensional Markov chain Monte Carlo. In Highly Structured Stochastic Systems (P. J. Green, N. L. Hjort and S. Richardson, eds.) 179–196. Oxford Univ. Press, Oxford.
  • [18] Kitagawa, G. (1996). Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. J. Comput. Graph. Statist. 5 1–25.
  • [19] Kou, S., Zhou, Q. and Wong, W. (2006). Equi-energy sampler with applications in statistical inference and statistical mechanics (with discussion). Ann. Statist. 34 1581–1652.
  • [20] Jasra, A., Stephens, D. A. and Holmes, C. C. (2007). On population-based simulation for static inference. Statist. Comput. 17 263–279.
  • [21] Liu, J. S. (2001). Monte Carlo Strategies in Scientific Computing. Springer, New York.
  • [22] Liu, J. S. and Chen, R. (1998). Sequential Monte Carlo for dynamic systems. J. Amer. Statist. Assoc. 93 1032–1044.
  • [23] Lyman, E. and Zuckerman, D. M. (2006). Resolution exchange simulation with incremental coarsening. J. Chem. Theory Comput. 2 656–666.
  • [24] Mengersen, K. L. and Tweedie, R. (1996). Rates of convergence of the Hastings and Metropolis algorithms. Ann. Statist. 24 101–121.
  • [25] Pitt, M. K. and Shephard, N. (1999). Filtering via simulation: Auxiliary particle filter. J. Amer. Statist. Assoc. 94 590–599.
  • [26] Roberts, G. O. and Rosenthal, J. S. (2007). Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Probab. 44 458–475.
  • [27] Septier, F., Pang, S. K., Carmi, A. and Godsill, S. J. (2009). On MCMC-based particle methods for Bayesian filtering: Application to multitarget tracking. In Proc. 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing 360–363. IEEE, Aruba.
  • [28] Shiryaev, A. N. (1996). Probability, 2nd ed. Springer, New York.