Automated state-dependent importance sampling for Markov jump processes via sampling from the zero-variance distribution
Adam W. Grace, Dirk P. Kroese, Werner Sandmann
J. Appl. Probab. 51(3): 741-755 (September 2014). DOI: 10.1239/jap/1409932671


Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
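To illustrate the basic idea of importance sampling for rare events in a Markov jump process (not the paper's adaptive, state-dependent scheme, which learns its change of measure from the zero-variance distribution), the following sketch estimates an overflow probability for an M/M/1 queue under a fixed exponential change of measure that swaps the arrival and service rates. All function names and parameter values here are illustrative assumptions, not taken from the paper.

```python
import random

def mm1_overflow_is(lam=1.0, mu=2.0, K=20, n_samples=10000, seed=1):
    """Estimate P(queue reaches level K before emptying, starting from 1)
    for an M/M/1 queue with arrival rate lam < service rate mu.

    Illustrative importance sampling: simulate the embedded jump chain
    under a tilted measure that swaps lam and mu (the classical
    state-independent change of measure for this overflow event) and
    correct each path with its likelihood ratio.
    """
    rng = random.Random(seed)
    p_up = lam / (lam + mu)       # up-jump probability, original measure
    p_up_is = mu / (lam + mu)     # up-jump probability, tilted measure
    total = 0.0
    for _ in range(n_samples):
        x, lr = 1, 1.0            # state and running likelihood ratio
        while 0 < x < K:
            if rng.random() < p_up_is:
                lr *= p_up / p_up_is          # up-jump correction
                x += 1
            else:
                lr *= (1 - p_up) / (1 - p_up_is)  # down-jump correction
                x -= 1
        if x == K:                # count only paths that hit the rare set
            total += lr
    return total / n_samples
```

For these rates the exact probability is 1/(2^20 - 1), about 9.5e-7; crude Monte Carlo with 10,000 paths would almost never observe the event, while the tilted estimator's likelihood ratio is constant on successful paths, so its relative error is small. The paper's contribution is to replace such a hand-picked, state-independent tilting with importance parameters learned automatically, per state, from samples of the zero-variance distribution.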




Published: September 2014
First available in Project Euclid: 5 September 2014

zbMATH: 1305.60081
MathSciNet: MR3256224
Digital Object Identifier: 10.1239/jap/1409932671

Primary: 60J28
Secondary: 62M05

Keywords: adaptive, automated, continuous-time Markov chain, importance sampling, improved cross entropy, Markov jump process, queueing system, state dependent, stochastic chemical kinetics, zero-variance distribution

Rights: Copyright © 2014 Applied Probability Trust


