The Annals of Statistics
- Ann. Statist., Volume 35, Number 1 (2007), 420-448.
Convergence of adaptive mixtures of importance sampling schemes
In the design of efficient simulation algorithms, one is often beset with a poor choice of proposal distributions. Although the performance of a given simulation kernel can clarify a posteriori how adequate this kernel is for the problem at hand, a permanent on-line modification of kernels causes concerns about the validity of the resulting algorithm. While the issue is most often intractable for MCMC algorithms, the equivalent version for importance sampling algorithms can be validated quite precisely. We derive sufficient convergence conditions for adaptive mixtures of population Monte Carlo algorithms and show that Rao–Blackwellized versions asymptotically achieve an optimum in terms of a Kullback divergence criterion, while more rudimentary versions do not benefit from repeated updating.
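The adaptive mixture scheme described above can be illustrated with a toy sketch of a population Monte Carlo iteration: points are drawn from a mixture of fixed proposal kernels, importance weights are Rao-Blackwellized by dividing by the full mixture density (integrating out the component indicators), and the mixture weights are then updated from the weighted component responsibilities. The target, proposal scales, sample size, and number of iterations below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy unnormalized target density: standard normal (an assumption for
# illustration; importance sampling only needs the target up to a constant).
def target(x):
    return np.exp(-0.5 * x**2)

# Two fixed Gaussian proposal kernels, differing only in scale.
scales = np.array([0.5, 3.0])

def comp_pdf(x, s):
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

alpha = np.array([0.5, 0.5])  # mixture weights, adapted across iterations
n = 5000

for t in range(10):
    # Draw component indicators, then points from the chosen kernels.
    idx = rng.choice(2, size=n, p=alpha)
    x = rng.normal(0.0, scales[idx])

    # Rao-Blackwellized importance weights: divide by the FULL mixture
    # density rather than the density of the kernel actually used.
    qmix = alpha[0] * comp_pdf(x, scales[0]) + alpha[1] * comp_pdf(x, scales[1])
    w = target(x) / qmix
    w /= w.sum()

    # Mixture-weight update: each new alpha_d is the importance-weighted
    # posterior probability that a point arose from component d.
    resp = np.vstack([alpha[d] * comp_pdf(x, scales[d]) for d in range(2)])
    resp /= resp.sum(axis=0)
    alpha = resp @ w

print(alpha)
```

Because the responsibilities sum to one over components and the normalized weights sum to one over points, the updated `alpha` remains a valid probability vector at every iteration; under the conditions in the paper, such Rao-Blackwellized updates drive the mixture toward the Kullback-divergence optimum among mixtures of the fixed kernels.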
First available in Project Euclid: 6 June 2007
Primary: 60F05: Central limit and other weak theorems; 62L12: Sequential estimation; 65-04: Explicit machine computation and programs (not the theory of computation or programming); 65C05: Monte Carlo methods; 65C40: Computational Markov chains; 65C60: Computational problems in statistics
Douc, R.; Guillin, A.; Marin, J.-M.; Robert, C. P. Convergence of adaptive mixtures of importance sampling schemes. Ann. Statist. 35 (2007), no. 1, 420-448. doi:10.1214/009053606000001154. https://projecteuclid.org/euclid.aos/1181100193