Open Access
June 2007
Uniform convergence of exact large deviations for renewal reward processes
Zhiyi Chi
Ann. Appl. Probab. 17(3): 1019-1048 (June 2007). DOI: 10.1214/105051607000000023

Abstract

Let (X_n, Y_n) be i.i.d. random vectors. Let W(x) be the partial sum of Y_n just before that of X_n exceeds x > 0. Motivated by stochastic models for neural activity, uniform convergence of the form sup_{c∈I} |a(c, x) Pr{W(x) ≥ cx} − 1| = o(1), x → ∞, is established for probabilities of large deviations, with a(c, x) a deterministic function and I an open interval. To obtain this uniform exact large deviations principle (LDP), we first establish the exponentially fast uniform convergence of a family of renewal measures and then apply it to appropriately tilted distributions of X_n and the moment generating function of W(x). The uniform exact LDP is obtained for cases where X_n has a subcomponent with a smooth density and Y_n is not a linear transform of X_n. An extension is also made to the partial sum at the first exceedance time.
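
To make the definition of W(x) concrete, here is a minimal Monte Carlo sketch of the renewal reward quantity and the tail probability Pr{W(x) ≥ cx}. The distributions chosen below for X_n (exponential) and Y_n (gamma, so Y_n is not a linear transform of X_n) are illustrative assumptions only, not the paper's general setting, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def sample_W(x: float, rng) -> float:
    """One draw of W(x): the partial sum of Y_n accumulated just
    before the partial sum of X_n first exceeds x.
    Illustrative assumptions: X_n ~ Exp(1), Y_n ~ Gamma(2, 1/2),
    so Y_n is not a linear transform of X_n."""
    sum_x = 0.0
    sum_y = 0.0
    while True:
        x_n = rng.exponential(1.0)
        y_n = rng.gamma(2.0, 0.5)
        if sum_x + x_n > x:   # next renewal would exceed x: stop here
            return sum_y
        sum_x += x_n
        sum_y += y_n

def tail_prob(c: float, x: float, n: int = 20_000) -> float:
    """Crude Monte Carlo estimate of Pr{W(x) >= c x}."""
    return sum(sample_W(x, rng) >= c * x for _ in range(n)) / n

# With E[Y_1]/E[X_1] = 1 here, any c > 1 is a large deviation: the
# estimated probability should decay exponentially as x grows.
for x in (5.0, 10.0, 20.0):
    print(f"x = {x:4.0f}   Pr{{W(x) >= 1.5 x}} ~ {tail_prob(1.5, x):.4f}")
```

Note that a plain Monte Carlo estimate like this breaks down exactly in the regime the paper targets, since the probabilities vanish exponentially fast in x; this is one practical motivation for an exact asymptotic Pr{W(x) ≥ cx} ≈ 1/a(c, x) that holds uniformly in c.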

Citation

Zhiyi Chi. "Uniform convergence of exact large deviations for renewal reward processes." Ann. Appl. Probab. 17 (3) 1019 - 1048, June 2007. https://doi.org/10.1214/105051607000000023

Information

Published: June 2007
First available in Project Euclid: 22 May 2007

zbMATH: 1129.60029
MathSciNet: MR2326239
Digital Object Identifier: 10.1214/105051607000000023

Subjects:
Primary: 60F10
Secondary: 60G51

Keywords: Continuous-time random walk, large deviations, point process, renewal reward process

Rights: Copyright © 2007 Institute of Mathematical Statistics
