Bernoulli

Sequential Monte Carlo as approximate sampling: bounds, adaptive resampling via $\infty$-ESS, and an application to particle Gibbs

Jonathan H. Huggins and Daniel M. Roy

Abstract

Sequential Monte Carlo (SMC) algorithms were originally designed for estimating intractable conditional expectations within state-space models, but are now routinely used to generate approximate samples in the context of general-purpose Bayesian inference. In particular, SMC algorithms are often used as subroutines within larger Monte Carlo schemes, and in this context, the demands placed on SMC are different: control of mean-squared error is insufficient—one needs to control the divergence from the target distribution directly. Towards this goal, we introduce the conditional adaptive resampling particle filter, building on the work of Gordon, Salmond, and Smith (1993), Andrieu, Doucet, and Holenstein (2010), and Whiteley, Lee, and Heine (2016). By controlling a novel notion of effective sample size, the $\infty$-ESS, we establish the efficiency of the resulting SMC sampling algorithm, providing an adaptive resampling extension of the work of Andrieu, Lee, and Vihola (2018). We apply our results to arrive at new divergence bounds for SMC samplers with adaptive resampling as well as an adaptive resampling version of the Particle Gibbs algorithm with the same geometric-ergodicity guarantees as its nonadaptive counterpart.
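
The abstract names but does not define the $\infty$-ESS. By analogy with the classical effective sample size $(\sum_i w_i)^2/\sum_i w_i^2$, a natural $L^\infty$ reading, and the one used in the sketch below, is $\sum_i w_i/\max_i w_i$; since the full text is not reproduced here, this definition and the threshold rule should be read as assumptions, not as the paper's stated construction. The sketch shows the general shape of an adaptive-resampling SMC step: reweight the particles by the current potential, then resample only when the chosen ESS statistic falls below a fraction tau of the particle count.

    import numpy as np

    def ess_classic(w):
        # Classical effective sample size: (sum w)^2 / sum w^2.
        w = np.asarray(w, dtype=float)
        return w.sum() ** 2 / np.sum(w ** 2)

    def ess_inf(w):
        # Assumed L-infinity analogue of the ESS: sum(w) / max(w).
        # This is a guess at the paper's infinity-ESS, not its stated definition.
        w = np.asarray(w, dtype=float)
        return w.sum() / w.max()

    def adaptive_smc_step(particles, log_weights, log_potential, rng, tau=0.5):
        # One reweight-then-maybe-resample step of a generic adaptive SMC
        # sampler. Illustrative only: `log_potential` and `tau` are
        # hypothetical names, not the paper's notation.
        n = len(particles)
        log_weights = log_weights + log_potential(particles)   # reweight
        w = np.exp(log_weights - log_weights.max())            # stable normalization
        if ess_inf(w) < tau * n:                               # weights degenerate: resample
            idx = rng.choice(n, size=n, p=w / w.sum())         # multinomial resampling
            particles, log_weights = particles[idx], np.zeros(n)
        return particles, log_weights

    # Example: one step moving a standard-normal cloud toward a shifted Gaussian.
    rng = np.random.default_rng(0)
    x, lw = rng.normal(size=1000), np.zeros(1000)
    x, lw = adaptive_smc_step(x, lw, lambda z: -0.5 * (z - 1.0) ** 2, rng)

Multinomial resampling is used here for brevity; the conditional adaptive resampling particle filter of the paper imposes additional structure (it conditions on a retained trajectory, as in particle Gibbs) that this sketch does not attempt to reproduce.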

Article information

Source
Bernoulli, Volume 25, Number 1 (2019), 584–622.

Dates
Received: March 2015
Revised: April 2017
First available in Project Euclid: 12 December 2018

Permanent link to this document
https://projecteuclid.org/euclid.bj/1544605257

Digital Object Identifier
doi:10.3150/17-BEJ999

Mathematical Reviews number (MathSciNet)
MR3892330

Zentralblatt MATH identifier
07007218

Keywords
adaptive resampling; effective sample size; geometric ergodicity; particle Gibbs; sequential Monte Carlo; state-space models; uniform ergodicity

Citation

Huggins, Jonathan H.; Roy, Daniel M. Sequential Monte Carlo as approximate sampling: bounds, adaptive resampling via $\infty$-ESS, and an application to particle Gibbs. Bernoulli 25 (2019), no. 1, 584–622. doi:10.3150/17-BEJ999. https://projecteuclid.org/euclid.bj/1544605257


References

  • [1] Andrieu, C., Doucet, A. and Holenstein, R. (2010). Particle Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B. Stat. Methodol. 72 269–342.
  • [2] Andrieu, C., Lee, A. and Vihola, M. (2018). Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers. Bernoulli 24 842–872.
  • [3] Andrieu, C. and Roberts, G.O. (2009). The pseudo-marginal approach for efficient Monte Carlo computations. Ann. Statist. 37 697–725.
  • [4] Andrieu, C. and Vihola, M. (2015). Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms. Ann. Appl. Probab. 25 1030–1077.
  • [5] Chopin, N. and Singh, S.S. (2015). On particle Gibbs sampling. Bernoulli 21 1855–1883.
  • [6] Cornebise, J., Moulines, É. and Olsson, J. (2008). Adaptive methods for sequential importance sampling with application to state space models. Stat. Comput. 18 461–480.
  • [7] Del Moral, P. (2004). Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Probability and Its Applications. New York: Springer.
  • [8] Del Moral, P., Doucet, A. and Jasra, A. (2006). Sequential Monte Carlo samplers. J. R. Stat. Soc. Ser. B. Stat. Methodol. 68 411–436.
  • [9] Doucet, A., de Freitas, N. and Gordon, N. (2001). Sequential Monte Carlo Methods in Practice. New York: Springer.
  • [10] Doucet, A., Godsill, S.J. and Andrieu, C. (2000). On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 10 197–208.
  • [11] Doucet, A. and Johansen, A.M. (2011). A tutorial on particle filtering and smoothing: Fifteen years later. In The Oxford Handbook of Nonlinear Filtering (D. Crisan and B. Rozovskii, eds.). Oxford: Oxford Univ. Press.
  • [12] Gordon, N.J., Salmond, D.J. and Smith, A.F.M. (1993). Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc., F, Radar Signal Process. 140 107–113.
  • [13] Holenstein, R. (2009). Particle Markov Chain Monte Carlo. Ph.D. thesis, Univ. British Columbia, Vancouver.
  • [14] Kantas, N., Doucet, A., Singh, S.S. and Maciejowski, J.M. (2009). An overview of sequential Monte Carlo methods for parameter estimation in general state-space models. In 15th IFAC Symposium on System Identification 774–785.
  • [15] Künsch, H.R. (2013). Particle filters. Bernoulli 19 1391–1403.
  • [16] Lee, A. and Łatuszyński, K. (2014). Variance bounding and geometric ergodicity of Markov chain Monte Carlo kernels for approximate Bayesian computation. Biometrika 101 655–671.
  • [17] Lindsten, F., Douc, R. and Moulines, E. (2015). Uniform ergodicity of the particle Gibbs sampler. Scand. J. Stat. 42 775–797.
  • [18] Whiteley, N., Lee, A. and Heine, K. (2016). On the role of interaction in sequential Monte Carlo algorithms. Bernoulli 22 494–529.