The Annals of Applied Probability

A scaling analysis of a cat and mouse Markov chain

Nelly Litvak and Philippe Robert

Full-text: Open access


If $(C_n)$ is a Markov chain on a discrete state space $\mathcal{S}$, a Markov chain $(C_n, M_n)$ on the product space $\mathcal{S}\times\mathcal{S}$, the cat and mouse Markov chain, is constructed. The first coordinate of this Markov chain behaves like the original Markov chain, and the second coordinate changes only when both coordinates are equal. The asymptotic properties of this Markov chain are investigated; in particular, a representation of its invariant measure is obtained. When the state space is infinite, it is shown that this Markov chain is in fact null recurrent whenever the initial Markov chain $(C_n)$ is positive recurrent and reversible. In this context, the scaling properties of the location of the second component, the mouse, are investigated in various situations: simple random walks on ℤ and ℤ², the reflected simple random walk on ℕ, and also a continuous-time setting. For several of these processes, a time scaling with rapid growth gives an interesting asymptotic behavior related to limiting results for occupation times and rare events of Markov processes.
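The dynamics described above can be sketched in a few lines of code. The following is a minimal illustrative simulation, not the authors' construction verbatim: it uses the simple random walk on ℤ as the driving chain and adopts one common timing convention, namely that the mouse takes one step of the same transition kernel at the instants when the cat sits on it, while the cat moves at every step. The function name and signature are hypothetical.

```python
import random

def cat_and_mouse(steps, seed=0):
    """Simulate a cat-and-mouse chain driven by the simple random walk
    on Z: the cat (C_n) moves at every step; the mouse (M_n) takes one
    step of the same kernel only when the cat is at its location."""
    rng = random.Random(seed)
    cat, mouse = 0, 0
    path = [(cat, mouse)]
    for _ in range(steps):
        if cat == mouse:                  # the cat has found the mouse:
            mouse += rng.choice((-1, 1))  # the mouse takes one random-walk step
        cat += rng.choice((-1, 1))        # the cat always takes a step
        path.append((cat, mouse))
    return path
```

Inspecting a sample path makes the defining property visible: the mouse coordinate is constant except at meeting times, so over long runs the mouse lags far behind the cat — consistent with the null recurrence and the slow time scalings discussed in the paper.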

Article information

Ann. Appl. Probab., Volume 22, Number 2 (2012), 792-826.

First available in Project Euclid: 2 April 2012


Primary: 60J10: Markov chains (discrete-time Markov processes on discrete state spaces) 90B18: Communication networks [See also 68M10, 94A05]

Keywords: cat and mouse Markov chains; scaling of null recurrent Markov chains


Litvak, Nelly; Robert, Philippe. A scaling analysis of a cat and mouse Markov chain. Ann. Appl. Probab. 22 (2012), no. 2, 792--826. doi:10.1214/11-AAP785.


