Abstract
Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
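To make the setting concrete, here is a minimal sketch (not the authors' adaptive state-dependent scheme) of rare-event importance sampling on a standard queueing benchmark: the probability that an M/M/1 queue with arrival rate lam < service rate mu reaches level n before emptying, starting from one customer. For estimating ℓ = E[H(X)] with H ≥ 0 under nominal density f, the zero-variance importance sampling density is proportional to H·f and is known only up to the unknown constant ℓ, which is exactly why the paper samples from it via Markov chain Monte Carlo. The sketch below instead uses the classic state-independent change of measure that swaps lam and mu on the embedded random walk; the function name hit_prob_is and all parameter names are illustrative assumptions, not taken from the paper.

    import random

    def hit_prob_is(lam, mu, n, num_samples=10_000):
        """Estimate P(M/M/1 queue hits level n before emptying | start at 1)
        via importance sampling on the embedded random walk.
        Illustrative sketch; not the paper's adaptive state-dependent method."""
        # Nominal up-step probability of the embedded walk.
        p = lam / (lam + mu)
        # Classic state-independent tilt: swap lam and mu, so the
        # rare upward excursion becomes typical under the IS measure.
        p_is = mu / (lam + mu)
        total = 0.0
        for _ in range(num_samples):
            state, lr = 1, 1.0
            while 0 < state < n:
                if random.random() < p_is:
                    lr *= p / p_is               # likelihood ratio, up-step
                    state += 1
                else:
                    lr *= (1 - p) / (1 - p_is)   # likelihood ratio, down-step
                    state -= 1
            if state == n:                        # rare event observed
                total += lr
        return total / num_samples

    # Exact gambler's-ruin value for comparison:
    #   (1 - mu/lam) / (1 - (mu/lam)**n)  ~ 9.5e-7 for the values below.
    print(hit_prob_is(lam=1.0, mu=2.0, n=20))

In this particular example every path that reaches n carries the same likelihood ratio, (lam/mu)**(n-1), so the swapped measure is extremely efficient; in general no such static tilt is available, which motivates the state-dependent, adaptively learned importance parameters described in the abstract.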
Citation
Adam W. Grace, Dirk P. Kroese, and Werner Sandmann. "Automated state-dependent importance sampling for Markov jump processes via sampling from the zero-variance distribution." J. Appl. Probab. 51(3), 741-755, September 2014. https://doi.org/10.1239/jap/1409932671