August 2004 Uniform Markov renewal theory and ruin probabilities in Markov random walks
Cheng-Der Fuh
Ann. Appl. Probab. 14(3): 1202-1241 (August 2004). DOI: 10.1214/105051604000000260

Abstract

Let $\{X_n, n\ge 0\}$ be a Markov chain on a general state space ${\mathcal{X}}$ with transition probability $P$ and stationary probability $\pi$. Suppose an additive component $S_n$ takes values in the real line $\mathbf{R}$ and is adjoined to the chain such that $\{(X_n,S_n), n\ge 0\}$ is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for $\{(X_n,S_n), n\ge 0\}$. To be more precise, for given $b\ge 0$, define the stopping time $\tau=\tau(b)=\inf\{n: S_n>b\}$. When the drift $\mu$ of the random walk $S_n$ is $0$, we derive a one-term Edgeworth-type asymptotic expansion for the first passage probabilities $P_\pi\{\tau<m\}$ and $P_\pi\{\tau<m, S_m<c\}$, where $m\to\infty$, $c\le b$ and $P_\pi$ denotes the probability under the initial distribution $\pi$. When $\mu\ne 0$, Brownian approximations for the first passage probabilities with correction terms are derived. Applications to sequential estimation and truncated tests in random coefficient models and to first passage times in products of random matrices are also given.
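The objects in the abstract can be illustrated numerically. Below is a minimal Monte Carlo sketch, not taken from the paper: it assumes a hypothetical two-state driving chain $X_n$ with made-up transition probabilities and Gaussian increments whose mean depends on the current state, simulates the Markov random walk $S_n$, and estimates the first passage probability $P\{\tau(b)\le m\}$ for the stopping time $\tau(b)=\inf\{n: S_n>b\}$. All parameter values are illustrative assumptions.

```python
import random

def simulate_tau(b, p01=0.3, p10=0.4, mu=(0.5, -0.2), sigma=1.0,
                 max_steps=10_000, rng=None):
    """Simulate tau(b) = inf{n : S_n > b} for a toy two-state Markov
    random walk: X_n flips 0->1 w.p. p01 and 1->0 w.p. p10, and each
    increment of S_n is Gaussian with state-dependent mean mu[X_n].
    Returns the crossing time n, or None if b is not crossed."""
    rng = rng or random.Random()
    x, s = 0, 0.0
    for n in range(1, max_steps + 1):
        # one step of the driving chain X_n
        if x == 0:
            x = 1 if rng.random() < p01 else 0
        else:
            x = 0 if rng.random() < p10 else 1
        # additive component S_n with state-dependent drift
        s += rng.gauss(mu[x], sigma)
        if s > b:
            return n
    return None

# crude Monte Carlo estimate of P{tau(b) <= m}
rng = random.Random(1)
m, b, trials = 50, 5.0, 2000
hits = sum(1 for _ in range(trials)
           if (t := simulate_tau(b, rng=rng)) is not None and t <= m)
print(hits / trials)
```

Such a simulation is only a sanity check on the quantities being approximated; the paper's contribution is the analytic expansion of these probabilities, not their simulation.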

Citation

Cheng-Der Fuh. "Uniform Markov renewal theory and ruin probabilities in Markov random walks." Ann. Appl. Probab. 14 (3) 1202 - 1241, August 2004. https://doi.org/10.1214/105051604000000260

Information

Published: August 2004
First available in Project Euclid: 13 July 2004

zbMATH: 1052.60072
MathSciNet: MR2071421
Digital Object Identifier: 10.1214/105051604000000260

Subjects:
Primary: 60K05
Secondary: 60J10, 60K15

Rights: Copyright © 2004 Institute of Mathematical Statistics

JOURNAL ARTICLE
40 PAGES

