Open Access
Dynamic importance sampling for uniformly recurrent Markov chains
Paul Dupuis, Hui Wang
Ann. Appl. Probab. 15(1A): 1-38 (February 2005). DOI: 10.1214/105051604000001016

Abstract

Importance sampling is a variance reduction technique for the efficient estimation of rare-event probabilities by Monte Carlo. In standard importance sampling schemes, the system is simulated using an a priori fixed change of measure suggested by a large deviation lower bound analysis. Recent work, however, has suggested that such schemes do not work well in many situations. In this paper we consider dynamic importance sampling in the setting of uniformly recurrent Markov chains. By “dynamic” we mean that in the course of a single simulation, the change of measure can depend on the outcome of the simulation up to that time. Based on a control-theoretic approach to large deviations, the existence of asymptotically optimal dynamic schemes is demonstrated in great generality. The implementation of the dynamic schemes is carried out with the help of a limiting Bellman equation. Numerical examples are presented to contrast the dynamic and standard schemes.
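To make the static/dynamic distinction concrete, the following is a minimal sketch in Python (NumPy), on a toy problem rather than the paper's setting: estimating P(S_n >= n*a) for a standard Gaussian random walk. The fixed tilt theta = a is the classical static prescription from the large deviation analysis; the feedback rule theta_k = (n*a - S_k)/(n - k) is a hypothetical heuristic standing in for the control that the paper derives from the limiting Bellman equation, and all function names here are ours.

import numpy as np

rng = np.random.default_rng(0)

def log_lr(theta, x):
    # Log-likelihood ratio dN(0,1)/dN(theta,1) evaluated at x.
    return -theta * x + 0.5 * theta ** 2

def standard_is(n, a, m):
    # Static scheme: every increment drawn from N(a, 1), the fixed
    # change of measure suggested by the large deviation lower bound.
    x = rng.normal(a, 1.0, size=(m, n))
    w = np.exp(log_lr(a, x).sum(axis=1))
    est = w * (x.sum(axis=1) >= n * a)
    return est.mean(), est.std(ddof=1) / np.sqrt(m)

def dynamic_is(n, a, m):
    # Dynamic scheme: the tilt at step k depends on the simulated
    # path so far, steering the remaining steps toward the target
    # level (a stand-in for the Bellman-equation feedback control).
    est = np.empty(m)
    for i in range(m):
        s, lw = 0.0, 0.0
        for k in range(n):
            theta = max((n * a - s) / (n - k), 0.0)
            x = rng.normal(theta, 1.0)
            lw += log_lr(theta, x)
            s += x
        est[i] = np.exp(lw) * (s >= n * a)
    return est.mean(), est.std(ddof=1) / np.sqrt(m)

if __name__ == "__main__":
    for name, fn in [("standard", standard_is), ("dynamic", dynamic_is)]:
        mean, se = fn(n=50, a=0.5, m=20000)
        print(f"{name:8s} estimate {mean:.3e}  std. error {se:.1e}")

On this convex target set the fixed tilt is already known to be asymptotically efficient, so the variance gap is modest; the paper's point is that for more complicated events the dynamic scheme retains asymptotic optimality in situations where a static change of measure can fail badly.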

Citation


Paul Dupuis, Hui Wang. "Dynamic importance sampling for uniformly recurrent Markov chains." Ann. Appl. Probab. 15(1A): 1-38, February 2005. https://doi.org/10.1214/105051604000001016

Information

Published: February 2005
First available in Project Euclid: 28 January 2005

zbMATH: 1068.60036
MathSciNet: MR2115034
Digital Object Identifier: 10.1214/105051604000001016

Subjects:
Primary: 60F10, 65C05, 93E20

Keywords: asymptotic optimality, importance sampling, Markov chain, Monte Carlo simulation, rare events, stochastic game, weak convergence

Rights: Copyright © 2005 Institute of Mathematical Statistics
