Journal of Applied Probability
- J. Appl. Probab.
- Volume 51, Number 3 (2014), 741-755.
Automated state-dependent importance sampling for Markov jump processes via sampling from the zero-variance distribution
Many complex systems can be modeled via Markov jump processes. Applications include chemical reactions, population dynamics, and telecommunication networks. Rare-event estimation for such models can be difficult and is often computationally expensive, because typically many (or very long) paths of the Markov jump process need to be simulated in order to observe the rare event. We present a state-dependent importance sampling approach to this problem that is adaptive and uses Markov chain Monte Carlo to sample from the zero-variance importance sampling distribution. The method is applicable to a wide range of Markov jump processes and achieves high accuracy, while requiring only a small sample to obtain the importance parameters. We demonstrate its efficiency through benchmark examples in queueing theory and stochastic chemical kinetics.
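To illustrate the kind of rare-event problem the paper targets (though not the paper's adaptive MCMC method itself), here is a minimal sketch of state-independent importance sampling for an M/M/1 queue: we estimate the probability that the queue length reaches a high level L before emptying, simulating under swapped arrival/service rates (the classical exponential change of measure) and reweighting by the likelihood ratio. All names and parameter values are illustrative assumptions.

```python
import random

def is_overflow_prob(lam, mu, L, n=20000, seed=1):
    """Estimate P(M/M/1 level hits L before 0, starting from 1) by
    importance sampling with swapped rates (a standard, non-adaptive
    change of measure -- not the paper's zero-variance sampler)."""
    random.seed(seed)
    p, q = lam / (lam + mu), mu / (lam + mu)    # true embedded-chain jump probabilities
    pp, qq = mu / (lam + mu), lam / (lam + mu)  # swapped (sampling) probabilities
    total = 0.0
    for _ in range(n):
        x, w = 1, 1.0                            # state and likelihood-ratio weight
        while 0 < x < L:
            if random.random() < pp:             # up step under the IS measure
                w *= p / pp
                x += 1
            else:                                # down step under the IS measure
                w *= q / qq
                x -= 1
        if x == L:                               # rare event observed: accumulate weight
            total += w
    return total / n

# Compare against the exact gambler's-ruin value for a stable queue.
lam, mu, L = 1.0, 2.0, 15
r = mu / lam
exact = (1 - r) / (1 - r**L)                     # about 3.05e-5
est = is_overflow_prob(lam, mu, L)
```

For this particular model the swapped-rate measure happens to make the likelihood ratio constant on every path that reaches L, so the estimator is highly accurate; for general Markov jump processes no such simple static tilt exists, which is the motivation for the state-dependent, adaptively learned change of measure studied in the paper.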
First available in Project Euclid: 5 September 2014
Primary: 60J28: Applications of continuous-time Markov processes on discrete state spaces
Secondary: 62M05: Markov processes: estimation
Importance sampling; adaptive; automated; improved cross-entropy; state-dependent; zero-variance distribution; Markov jump process; continuous-time Markov chain; stochastic chemical kinetics; queueing system
Grace, Adam W.; Kroese, Dirk P.; Sandmann, Werner. Automated state-dependent importance sampling for Markov jump processes via sampling from the zero-variance distribution. J. Appl. Probab. 51 (2014), no. 3, 741--755. doi:10.1239/jap/1409932671. https://projecteuclid.org/euclid.jap/1409932671