Journal of Applied Probability

Summary statistics for endpoint-conditioned continuous-time Markov chains

Asger Hobolth and Jens Ledet Jensen



Continuous-time Markov chains are a widely used modelling tool. Applications include DNA sequence evolution, ion channel gating behaviour, and mathematical finance. We consider the problem of calculating properties of summary statistics (e.g. mean time spent in a state, mean number of jumps between two states, and the distribution of the total number of jumps) for discretely observed continuous-time Markov chains. Three alternative methods for calculating properties of summary statistics are described and the pros and cons of the methods are discussed. The methods are based on (i) an eigenvalue decomposition of the rate matrix, (ii) the uniformization method, and (iii) integrals of matrix exponentials. In particular, we develop a framework that allows for analyses of rather general summary statistics using the uniformization method.
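As a concrete illustration of method (ii), the uniformization construction expresses the transition probabilities P(t) = exp(Qt) of a rate matrix Q as a Poisson-weighted sum of powers of a discrete-time jump matrix. The sketch below is illustrative only (function name, tolerance, and truncation rule are not from the paper); it assumes NumPy is available.

```python
import numpy as np

def transition_probs_uniformization(Q, t, tol=1e-12):
    """Approximate P(t) = exp(Qt) by uniformization.

    With mu >= max_i |q_ii| and R = I + Q/mu, one has
        P(t) = sum_{n>=0} exp(-mu*t) (mu*t)^n / n! * R^n.
    The series is truncated once the accumulated Poisson
    probability mass exceeds 1 - tol.
    """
    Q = np.asarray(Q, dtype=float)
    mu = max(-np.diag(Q).min(), 1e-300)   # uniformization rate
    R = np.eye(Q.shape[0]) + Q / mu       # stochastic jump matrix
    weight = np.exp(-mu * t)              # Poisson(mu*t) pmf at n = 0
    term = np.eye(Q.shape[0])             # R^0
    P = weight * term
    mass = weight
    n = 0
    while 1.0 - mass > tol:
        n += 1
        weight *= mu * t / n              # Poisson pmf at n
        term = term @ R                   # R^n
        P += weight * term
        mass += weight
    return P
```

For a two-state chain with rates a (state 1 to 2) and b (state 2 to 1), the result can be checked against the closed form P_11(t) = b/(a+b) + a/(a+b) e^{-(a+b)t}. The same truncated-series bookkeeping underlies the paper's treatment of summary statistics, where powers of R are combined with state- or transition-counting weights rather than summed directly.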

Article information

J. Appl. Probab., Volume 48, Number 4 (2011), 911-924.

First available in Project Euclid: 16 December 2011


Primary: 60-08: Computational methods (not classified at a more specific level) [See also 65C50]
Secondary: 60J22: Computational methods in Markov chains [See also 65C40]; 60J25: Continuous-time Markov processes on general state spaces; 60J27: Continuous-time Markov processes on discrete state spaces

Keywords: continuous-time Markov chain; dwelling time; EM algorithm; transition number; uniformization


Hobolth, Asger; Jensen, Jens Ledet. Summary statistics for endpoint-conditioned continuous-time Markov chains. J. Appl. Probab. 48 (2011), no. 4, 911--924. doi:10.1239/jap/1324046009.



  • Ball, F. and Milne, R. K. (2005). Simple derivations of properties of counting processes associated with Markov renewal processes. J. Appl. Prob. 42, 1031–1043.
  • Bladt, M. and Sørensen, M. (2005). Statistical inference for discretely observed Markov jump processes. J. R. Statist. Soc. B 67, 395–410.
  • Bladt, M. and Sørensen, M. (2009). Efficient estimation of transition rates between credit ratings from observations at discrete time points. Quant. Finance 9, 147–160.
  • Bladt, M., Meini, B., Neuts, M. F. and Sericola, B. (2002). Distributions of reward functions on continuous-time Markov chains. In Matrix-Analytic Methods, ed. G. Latouche, World Scientific, River Edge, NJ, pp. 39–62.
  • Dempster, A. P., Laird, N. M. and Rubin, D. B. (1977). Maximum likelihood from incomplete data via the EM algorithm. J. R. Statist. Soc. B 39, 1–38.
  • Grassmann, W. K. (1993). Rounding errors in certain algorithms involving Markov chains. ACM Trans. Math. Software 19, 496–508.
  • Guttorp, P. (1995). Stochastic Modeling of Scientific Data. Chapman and Hall, London.
  • Hobolth, A. and Jensen, J. L. (2005). Statistical inference in evolutionary models of DNA sequences via the EM algorithm. Statist. Appl. Genet. Molec. Biol. 4, 20pp.
  • Holmes, I. and Rubin, G. M. (2002). An expectation maximization algorithm for training hidden substitution models. J. Molec. Biol. 317, 753–764.
  • Jensen, A. (1953). Markoff chains as an aid in the study of Markoff processes. Skand. Aktuarietidskr. 36, 87–91.
  • Klosterman, P. S. et al. (2006). XRate: a fast prototyping, training and annotation tool for phylo-grammars. BMC Bioinformatics 7, 25pp.
  • Kosiol, C., Holmes, I. and Goldman, N. (2007). An empirical codon model for protein sequence evolution. Molec. Biol. Evol. 24, 1464–1479.
  • Metzner, P., Horenko, I. and Schütte, C. (2007). Generator estimation of Markov jump processes based on incomplete observations nonequidistant in time. Phys. Rev. E 76, 066702, 8pp.
  • Minin, V. N. and Suchard, M. A. (2008). Counting labeled transitions in continuous-time Markov models of evolution. J. Math. Biol. 56, 391–412.
  • Moler, C. and Van Loan, C. (2003). Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM Rev. 45, 3–49.
  • Narayana, S. and Neuts, M. F. (1992). The first two moment matrices of the counts for the Markovian arrival process. Commun. Statist. Stoch. Models 8, 459–477.
  • Ross, S. M. (1983). Stochastic Processes. John Wiley, New York.
  • Siepel, A., Pollard, K. S. and Haussler, D. (2006). New methods for detecting lineage-specific selection. In Research in Computational Molecular Biology (Lecture Notes Bioinformatics 3909), eds A. Apostolico et al., Springer, Berlin, pp. 190–205.
  • Van Loan, C. F. (1978). Computing integrals involving the matrix exponential. IEEE Trans. Automatic Control 23, 395–404.