The Annals of Statistics

Convergence of adaptive mixtures of importance sampling schemes

R. Douc, A. Guillin, J.-M. Marin, and C. P. Robert

Abstract

In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao–Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.
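To make the adaptive scheme concrete, below is a minimal sketch (in Python, assuming NumPy and SciPy are available) of one possible D-kernel population Monte Carlo loop with Rao–Blackwellized weights. It illustrates the idea, not the authors' exact algorithm: the Gaussian random-walk kernels, their scales, the banana-shaped target and the name rao_blackwellized_pmc are all hypothetical choices made for this example. The adapted quantities are the mixture weights alpha_d, and the Rao–Blackwellized importance weight divides the target by the full mixture density rather than by the single kernel actually used; it is this version that the paper shows drives the Kullback divergence D(pi || sum_d alpha_d q_d) toward its minimum in alpha.

import numpy as np
from scipy import stats

def rao_blackwellized_pmc(log_target, kernel_scales, n_particles=1000,
                          n_iter=10, dim=2, seed=None):
    """Sketch of a D-kernel PMC loop with Rao-Blackwellized weights."""
    rng = np.random.default_rng(seed)
    scales = np.asarray(kernel_scales, dtype=float)
    D = scales.size
    alpha = np.full(D, 1.0 / D)                    # uniform initial mixture weights
    mu = rng.standard_normal((n_particles, dim))   # crude initial particle cloud

    for _ in range(n_iter):
        # propose: pick a kernel for each particle, then perturb the particle
        idx = rng.choice(D, size=n_particles, p=alpha)
        x = mu + scales[idx, None] * rng.standard_normal((n_particles, dim))

        # alpha_d * q_d(x | mu) for every kernel (isotropic Gaussian random walks)
        dens = np.empty((n_particles, D))
        for d in range(D):
            dens[:, d] = alpha[d] * np.exp(
                stats.norm.logpdf(x - mu, scale=scales[d]).sum(axis=1))
        mix = dens.sum(axis=1)                     # full mixture density

        # Rao-Blackwellized importance weights: target over the whole mixture,
        # not over the single kernel that actually generated the point
        logw = np.array([log_target(xi) for xi in x]) - np.log(mix)
        w = np.exp(logw - logw.max())
        w /= w.sum()

        # update mixture weights from the weighted kernel responsibilities
        alpha = (w[:, None] * dens / mix[:, None]).sum(axis=0)
        alpha /= alpha.sum()

        # multinomial resampling gives the next particle cloud
        mu = x[rng.choice(n_particles, size=n_particles, p=w)]

    return x, w, alpha

# illustrative use on a banana-shaped target (all values hypothetical)
log_banana = lambda z: -0.5 * (z[0]**2 / 4.0 + (z[1] + 0.1 * z[0]**2 - 2.0)**2)
samples, weights, alpha = rao_blackwellized_pmc(log_banana, kernel_scales=[0.1, 0.5, 2.0])

The key step is the alpha update: each particle contributes its normalized weight, split across kernels in proportion to the responsibility of each kernel for having produced it. The more rudimentary version mentioned in the abstract credits only the kernel actually sampled and, per the paper, does not benefit from repeated updating.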

Article information

Source
Ann. Statist. Volume 35, Number 1 (2007), 420–448.

Dates
First available in Project Euclid: 6 June 2007

Permanent link to this document
http://projecteuclid.org/euclid.aos/1181100193

Digital Object Identifier
doi:10.1214/009053606000001154

Mathematical Reviews number (MathSciNet)
MR2332281

Zentralblatt MATH identifier
1132.60022

Subjects
Primary: 60F05: Central limit and other weak theorems
Secondary: 62L12: Sequential estimation; 65-04: Explicit machine computation and programs (not the theory of computation or programming); 65C05: Monte Carlo methods; 65C40: Computational Markov chains; 65C60: Computational problems in statistics

Keywords
Bayesian statistics; Kullback divergence; LLN; MCMC algorithm; population Monte Carlo; proposal distribution; Rao–Blackwellization

Citation

Douc, R.; Guillin, A.; Marin, J.-M.; Robert, C. P. Convergence of adaptive mixtures of importance sampling schemes. The Annals of Statistics 35 (2007), no. 1, 420–448. doi:10.1214/009053606000001154. http://projecteuclid.org/euclid.aos/1181100193.

