Journal of Applied Probability

Bounded truncation error for long-run averages in infinite Markov chains

Hendrik Baumann and Werner Sandmann


Abstract

We consider long-run averages of additive functionals on infinite discrete-state Markov chains, either continuous or discrete in time. Special cases include long-run average costs or rewards, stationary moments of the components of ergodic multi-dimensional Markov chains, queueing network performance measures, and many others. By exploiting Foster-Lyapunov-type criteria involving drift conditions for the finiteness of long-run averages, we determine suitable finite subsets of the state space such that the truncation error is bounded. Illustrative examples demonstrate the application of this method.
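The flavor of the approach can be sketched on a toy example. The following Python snippet is an illustrative sketch only, not the authors' algorithm: it takes an M/M/1 queue (a continuous-time birth-death chain with hypothetical rates `lam` and `mu`), verifies a Foster-Lyapunov drift inequality of the form (Qg)(x) <= -f(x) + b for the Lyapunov function g(x) = x^2/(2(mu - lam)) — which guarantees a finite long-run average of f(x) = x — and then computes that average from a finite state-space truncation.

```python
import numpy as np

# Hypothetical M/M/1 example: arrival rate lam, service rate mu, lam < mu.
# Target: the long-run average queue length, i.e. the stationary mean of f(x) = x.
lam, mu = 1.0, 2.0
f = lambda x: x

# Lyapunov function g(x) = x^2 / (2*(mu - lam)); for x > 0 one checks
# (Qg)(x) = lam*(g(x+1) - g(x)) + mu*(g(x-1) - g(x)) = -x + (lam + mu)/(2*(mu - lam)),
# i.e. the drift condition (Qg)(x) <= -f(x) + b holds with a uniform constant b.
g = lambda x: x**2 / (2 * (mu - lam))

def drift(x):
    out = lam * (g(x + 1) - g(x))
    if x > 0:
        out += mu * (g(x - 1) - g(x))
    return out

# Numerically confirm the drift bound b = sup_x [(Qg)(x) + f(x)] on a test range.
b = max(drift(x) + f(x) for x in range(200))

# Truncate the generator to {0, ..., N} and solve pi Q = 0, sum(pi) = 1.
N = 50
Q = np.zeros((N + 1, N + 1))
for x in range(N + 1):
    if x < N:
        Q[x, x + 1] = lam
    if x > 0:
        Q[x, x - 1] = mu
    Q[x, x] = -Q[x].sum()

# Stationary distribution of the truncated chain via least squares on the
# augmented system [Q^T; 1^T] pi = [0; 1].
A = np.vstack([Q.T, np.ones(N + 1)])
rhs = np.zeros(N + 2)
rhs[-1] = 1.0
pi = np.linalg.lstsq(A, rhs, rcond=None)[0]

avg = pi @ np.array([f(x) for x in range(N + 1)])
exact = (lam / mu) / (1 - lam / mu)  # rho/(1-rho) for M/M/1
```

For these rates the truncated average agrees with the known M/M/1 mean rho/(1-rho) up to a negligible tail contribution; the contribution of the paper is to turn such drift conditions into explicit truncation sets with a guaranteed error bound, which this sketch does not do.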

Article information

Source
J. Appl. Probab., Volume 52, Number 3 (2015), 609-621.

Dates
First available in Project Euclid: 22 October 2015

Permanent link to this document
https://projecteuclid.org/euclid.jap/1445543835

Digital Object Identifier
doi:10.1239/jap/1445543835

Mathematical Reviews number (MathSciNet)
MR3414980

Zentralblatt MATH identifier
1326.60107

Subjects
Primary: 60J22: Computational methods in Markov chains [See also 65C40]
Secondary: 60J10: Markov chains (discrete-time Markov processes on discrete state spaces); 60J27: Continuous-time Markov processes on discrete state spaces; 60J28: Applications of continuous-time Markov processes on discrete state spaces

Keywords
Infinite Markov chain; additive functional; long-run average; state space truncation; bounded truncation error; Foster-Lyapunov-type criterion; drift condition

Citation

Baumann, Hendrik; Sandmann, Werner. Bounded truncation error for long-run averages in infinite Markov chains. J. Appl. Probab. 52 (2015), no. 3, 609--621. doi:10.1239/jap/1445543835. https://projecteuclid.org/euclid.jap/1445543835


References

  • Anderson, W. J. (1991). Continuous-Time Markov Chains. Springer, New York.
  • Asmussen, S. (2003). Applied Probability and Queues, 2nd edn. Springer, New York.
  • Baumann, H. and Sandmann, W. (2010). Numerical solution of level dependent quasi-birth-and-death processes. Procedia Comput. Sci. 1, 1561–1569.
  • Baumann, H. and Sandmann, W. (2012). Steady state analysis of level dependent quasi-birth-and-death processes with catastrophes. Comput. Operat. Res. 39, 413–423.
  • Baumann, H. and Sandmann, W. (2013). Computing stationary expectations in level-dependent QBD processes. J. Appl. Prob. 50, 151–165.
  • Baumann, H. and Sandmann, W. (2014). On finite long run costs and rewards in infinite Markov chains. Statist. Prob. Lett. 91, 41–46.
  • Bright, L. and Taylor, P. G. (1995). Calculating the equilibrium distribution in level dependent quasi-birth-and-death processes. Commun. Statist. Stoch. Models 11, 497–525.
  • Chung, K. L. (1960). Markov Chains with Stationary Transition Probabilities. Springer, Berlin.
  • Dayar, T., Sandmann, W., Spieler, D. and Wolf, V. (2011). Infinite level-dependent QBD processes and matrix-analytic solutions for stochastic chemical kinetics. Adv. Appl. Prob. 43, 1005–1026.
  • Foster, F. G. (1953). On the stochastic matrices associated with certain queueing processes. Ann. Math. Statist. 24, 355–360.
  • Gibson, D. and Seneta, E. (1987). Augmented truncations of infinite stochastic matrices. J. Appl. Prob. 24, 600–608.
  • Glynn, P. W. and Zeevi, A. (2008). Bounding stationary expectations of Markov processes. In Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz (Inst. Math. Statist. Collect. 4), Institute of Mathematical Statistics, Beachwood, OH, pp. 195–214.
  • Golub, G. H. and Seneta, E. (1973). Computation of the stationary distribution of an infinite Markov matrix. Bull. Austral. Math. Soc. 8, 333–341.
  • Golub, G. H. and Seneta, E. (1974). Computation of the stationary distribution of an infinite stochastic matrix of special form. Bull. Austral. Math. Soc. 10, 255–261.
  • Hanschke, T. (1999). A matrix continued fraction algorithm for the multiserver repeated order queue. Math. Comput. Modelling 30, 159–170.
  • Latouche, G. and Taylor, P. (2002). Truncation and augmentation of level-independent QBD processes. Stoch. Process. Appl. 99, 53–80.
  • Pakes, A. G. (1969). Some conditions for ergodicity and recurrence of Markov chains. Operat. Res. 17, 1058–1061.
  • Seneta, E. (1981). Nonnegative Matrices and Markov Chains, 2nd edn. Springer, New York.
  • Serfozo, R. (2009). Basics of Applied Stochastic Processes. Springer, Berlin.
  • Thattai, M. and van Oudenaarden, A. (2001). Intrinsic noise in gene regulatory networks. Proc. Nat. Acad. Sci. USA 98, 8614–8619.
  • Tweedie, R. L. (1975). Sufficient conditions for regularity, recurrence and ergodicity of Markov processes. Math. Proc. Camb. Phil. Soc. 78, 125–136.
  • Tweedie, R. L. (1983). The existence of moments for stationary Markov chains. J. Appl. Prob. 20, 191–196.
  • Tweedie, R. L. (1988). Invariant measures for Markov chains with no irreducibility assumptions. In A Celebration of Applied Probability (J. Appl. Prob. Spec. Vol. 25A), Applied Probability Trust, Sheffield, pp. 275–285.
  • Tweedie, R. L. (1998). Truncation approximations of invariant measures for Markov chains. J. Appl. Prob. 35, 517–536.
  • Zhao, Y. Q. and Liu, D. (1996). The censored Markov chain and the best augmentation. J. Appl. Prob. 33, 623–629.