Large and moderate deviations for record numbers in some non-nearest neighbor random walks

The deviation principles of record numbers in random walk models have not been completely investigated, especially in the non-nearest neighbor cases. In this paper, we derive the asymptotic probabilities of large and moderate deviations for the number of "weak records" (or "ladder points") in two kinds of one-dimensional non-nearest neighbor random walks. The proofs depend only on direct analysis of the random walks. We illustrate that the traditional method of analyzing the local time of Brownian motion, which is often adopted for simple random walks, would lead to wrong conjectures in our cases.


Introduction
The word "record" refers to an extreme attainment. The study of record statistics has become indispensable in many fields. In this paper, we are interested in the asymptotic properties of record numbers in random walks as the number of steps tends to infinity, and we aim to study the deviations between the record numbers and their asymptotic limits.
Let $S = \{S_n, n \ge 0\}$ be an integer-valued random walk on $\mathbb{Z}$ (possibly non-nearest neighbor); namely, $S_0 = 0$ and $S_n = \sum_{i=1}^{n} X_i$ for $n \ge 1$, where $X_1, X_2, \dots$ are i.i.d. integer-valued random variables. Define $M_n = \max_{0 \le m \le n} S_m$ for $n \ge 1$. Let $T_0 = 0$, $T_n = \inf\{m > T_{n-1}:\ S_m \ge M_{m-1}\}$ for $n \ge 1$, and define
$$A_n = \sup\{k \ge 1:\ T_k \le n\} \qquad (1.1)$$
for each $n \ge 1$, where $\inf \emptyset \overset{\mathrm{def}}{=} +\infty$ and $\sup \emptyset \overset{\mathrm{def}}{=} 0$. In this paper, we call $A_n$ the weak record number up to time $n$. The word "weak" emphasizes that we consider not only the times when a new record appears but also the times when the current record is repeated. It is important to note that the weak record number considered here is different from the "record numbers" studied in Katzenbeisser and Panny [13], Kirschenhofer and Prodinger [14], and Pȃttȃnea [17], where the events counted up to time $n$ are $\{S_k = M_n\}$ (rather than $\{S_k = M_k\}$).
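As a concrete numerical illustration (ours, not from the paper), the following Python sketch computes $A_n$ directly from definition (1.1) on a simulated path. The step distribution chosen below is the simple symmetric one and is purely illustrative.

```python
import random

def weak_record_number(steps):
    """A_n from definition (1.1): count the times m in 1..n with
    S_m >= M_{m-1}, i.e. times when a new record is set or the
    current record is repeated (a "weak" record)."""
    S, M, count = 0, 0, 0
    for x in steps:
        prev_max = M
        S += x
        M = max(M, S)
        if S >= prev_max:
            count += 1
    return count

random.seed(0)
# illustrative step law: simple symmetric walk
steps = [random.choice([-1, 1]) for _ in range(10000)]
print(weak_record_number(steps))  # typically of order sqrt(n)
```

Note that repeated visits to the current maximum are counted as well, matching the "weak" terminology; counting only strict new maxima would give a smaller quantity.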
In the literature, $A_n$ is also called the number of "weak ladder points", which is a cornerstone of the fluctuation theory of random walks. The fluctuation theory was first proposed by Spitzer [20] and Feller [6], and has drawn much attention since then because of its wide applications. For more details, one can refer to Karlin and Taylor [12, Chapter 17]. Omey and Teugels [10] proved that a normalized version of the bivariate ladder process $\{(T_n, S_{T_n})\}$ converges in law to the bivariate ladder process of a Lévy process $X$ whenever the normalized $\{S_n\}$ converges in law to $X$. As an immediate corollary, one can derive that a normalized version of $A_n$ (the number of ladder points) of $S$ converges in distribution to the local time at the supremum of $X$. Later, Chaumont and Doney [2] extended this result to a more general setting. Based on the above results, one may further ask about the deviations between the normalized version of $A_n$ and its limit. As far as we know, there is very little research investigating such problems.
In this paper, we study the asymptotic probabilities $\mathbb{P}(A_n \ge \sqrt{n}\, c_n)$, where $c_n$ tends to infinity under some constraints. We will establish the large deviation principle (LDP) and the moderate deviation principle (MDP) for $A_n$, respectively. For the general theory of LDP and MDP, please refer to Dembo and Zeitouni [5].
Let $Y_k = T_k - T_{k-1}$ for $k \ge 1$. The strong Markov property of random walks implies that the $Y_k$'s are i.i.d. and $A_n = \sup\{k:\ \sum_{i=1}^{k} Y_i \le n\}$. Namely, $\{A_n\}_{n \ge 1}$ is a discrete-time renewal process with inter-occurrence time sequence $\{Y_n\}$. There are many results on the theory of deviations for renewal processes and renewal reward processes; see, for example, Serfozo [18], Glynn and Whitt [8], Jiang [11], Chi [4], Lefevere et al. [15], Borovkov and Mogulskii [1], Tsirelson [21], Logachov and Mogulskii [16], and the references therein. However, these approaches cannot be applied directly to our case, since most of them require constraints on the moments or moment generating functions of the inter-occurrence times, which are not fulfilled in our situations.
By adopting the celebrated invariance principle, one may naturally connect $A_n$ with the running maximum process $B^*(n)$ of a Brownian motion when the increments of the random walk $S$ have finite variance, and conjecture that the LDP and MDP of $A_n$ can be obtained by extending the asymptotic results for $2B^*(n)$. However, our results (see Theorem 2.1 and Corollary 2.1) show that this method would lead to wrong conjectures. Instead, in this paper, we investigate the LDP and MDP for $A_n$ via the deviation theory of occupation times of Markov processes, together with some analysis of related queueing models. This is our main contribution.
The remainder of this paper is organized as follows. In Section 2, we summarize the main results of this paper, and highlight our main contributions. In Section 3, we provide some results for queueing models, which are crucial for the analysis of left or right continuous random walks. Then we establish the LDP in Section 4 and the MDP in Section 5. Finally, we make some concluding remarks in Section 6.

Statement of main results
Let $S$ be the random walk defined in Section 1. We say $S$ is right continuous if the probability mass function (p.m.f.) of $X_i$ satisfies (2.1), that is, the walk can jump upward only by unit steps; we say $S$ is left continuous if the p.m.f. satisfies (2.2), that is, the walk can jump downward only by unit steps. The notions of "right continuous" and "left continuous" first appeared in Spitzer [20]. Let $\varphi(s)$ denote the associated probability generating function for $s \in [0, 1]$. For convenience, in the sequel we say the random walk $S$ is right or left continuous with $\varphi$ if the p.m.f. of its increments has the form (2.1) or (2.2), respectively. Obviously, for each $s \in [0, 1]$, the equation $x = s\varphi(x)$ has a solution $x_s \in [0, 1]$. We denote the minimal non-negative solution by $h(s)$, which will be discussed in more detail in Lemma 3.1 below. For every $\lambda \in (-\infty, 0]$, let
$$\Lambda_r(\lambda) = \ln\left(1 + q e^{\lambda} - q e^{\lambda} h(e^{\lambda})\right) \quad \text{and} \quad \Lambda_l(\lambda) = \lambda + \ln\frac{1 - \varphi(h(e^{\lambda}))}{1 - h(e^{\lambda})}.$$
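As a small numerical sketch (ours; the generating function below is a hypothetical example, not one from the paper), $h(s)$ can be computed by monotone fixed-point iteration: starting from $x_0 = 0$, the sequence $x_{k+1} = s\varphi(x_k)$ increases and stays below every non-negative fixed point, so it converges to the minimal one.

```python
import math

def h(s, phi, tol=1e-14, max_iter=10**6):
    """Minimal non-negative solution of x = s*phi(x) on [0, 1], obtained by
    the monotone iteration x_{k+1} = s*phi(x_k) started from x_0 = 0."""
    x = 0.0
    for _ in range(max_iter):
        x_new = s * phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# hypothetical example: p_{-1} = p_1 = 1/2, so phi(x) = 0.5 + 0.5*x**2;
# for this phi the closed form is h(s) = (1 - sqrt(1 - s^2)) / s
phi = lambda x: 0.5 + 0.5 * x * x
print(h(0.6, phi), (1 - math.sqrt(1 - 0.36)) / 0.6)  # both ≈ 1/3
```

For this $\varphi$ the equation $x = s(0.5 + 0.5x^2)$ has the two roots $(1 \pm \sqrt{1-s^2})/s$, and the iteration indeed selects the smaller one.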
If $S$ is left continuous with $\varphi$, then the corresponding conclusion of Theorem 2.1 holds with $\Lambda_l$ in place of $\Lambda_r$. To facilitate our discussion of the MDP, we need the following technical assumption.
Assumption (H): There exist $\alpha \in (0, 1)$ and $c > 0$ such that a suitable limiting tail condition holds. The MDP for $A_n$ (Theorem 2.2) is as follows: (1) if $S$ is right continuous with $\varphi$, then for any $x > 0$, the stated moderate deviation estimate holds. By applying Theorem 2.2 to some special cases, we obtain the following corollaries.
Let $\{c_n\}$ be a sequence of positive numbers such that $c_n \to +\infty$ and $c_n = o(n)$ as $n$ tends to infinity.
(1) If $S$ is right continuous with $\varphi$, then for any $x > 0$, the stated estimate holds. (2) If $S$ is left continuous with $\varphi$, then for any $x > 0$, the corresponding estimate holds. To investigate the case $\varphi''(1) = +\infty$, we content ourselves with studying a special case. Let $\{c_n\}$ be a sequence of positive numbers such that $c_n \to +\infty$ and $c_n = o(n)$ as $n$ tends to infinity.
(1) If $S$ is right continuous with $\varphi$, then for any $x > 0$, the stated estimate holds.

Remark 2.1 When $\varphi'(1) = 1$ and $\sigma^2 = \varphi''(1) < +\infty$, the expectation and the variance of $X_i$ are $0$ and $\sigma^2$, respectively. In this case, by the strong invariance principle, $S$ is approximated by a Brownian motion with variance parameter $\sigma^2$, whether $S$ is right continuous or left continuous. However, as indicated in Theorem 2.1 and Corollary 2.1, the right or left continuity of the random walk $S$ leads to different rate functions in the LDP and MDP for $A_n$. These observations show that, for the problems investigated here, simply extending the asymptotic results for Brownian motion to random walks via the invariance principle would lead to wrong conjectures.

Remark 2.2
It is well known that if the $Y_k$'s are i.i.d. with common probability generating function $\varphi(s)$, then as $n \to \infty$ the characteristic functions of the normalized sums converge to the characteristic function of a $(1+\beta)$-stable distribution, say $U$, without negative jumps. Therefore the distribution determined by $\varphi(s)$ belongs to the domain of attraction of the stable distribution $U$. Furthermore, as shown by Skorohod (1957), the normalized partial-sum process converges weakly in the Skorohod space $D([0, 1])$ with the $J_1$ topology to a Lévy stable motion $L(t)$ whose distribution at $t = 1$ is $U$, where $\lfloor a \rfloor$ denotes the maximal integer no larger than $a$. As a result, if $S$ is left or right continuous with $\varphi$, then $S_{\lfloor nt \rfloor}/n^{1/(1+\beta)}$ converges weakly in $D([0, 1])$ to $L(t)$ or $-L(t)$, respectively.

Some results for queueing models
Our main approach to analyzing left continuous and right continuous random walks is to relate them to queueing models.
Let $p_{-1}, p_0, p_1, \dots$ be a sequence of non-negative real numbers such that $\sum_{n=-1}^{+\infty} p_n = 1$, and let $W$ be a Markov chain whose transition probabilities $p_{i,j}$ have the form (3.1) or (3.2). Intuitively, in a service system with one server, if $p_{i,j}$ has the form (3.2), then $W$ is the length of the waiting line when a new customer enters the service system, where $p_k$ denotes the probability that exactly $k + 1$ customers are served in an inter-arrival period. If $p_{i,j}$ has the form (3.1), then $W$ is the length of the waiting line (excluding the customer in service) when a customer leaves the service system, where $q_k$ is the probability that exactly $k + 1$ customers arrive in a service period.
The following lemmas are crucial to the proofs of our main results, and they are of independent interest as well. Although they appear to be fundamental facts about the process $W$, we could not locate a suitable reference; for convenience, we therefore provide detailed proofs. Let $\varphi(s) = \sum_{n=-1}^{+\infty} p_n s^{n+1}$.

Lemma 3.1 collects basic properties of $h$; in particular, (2) $h'(s) = \dfrac{\varphi(h(s))}{1 - s\varphi'(h(s))} > 0$ for all $s \in (0, 1)$.

Proof. Applying the intermediate value theorem to the function $x - s\varphi(x)$ in the variable $x$, and noting its monotonicity on $[0, 1]$, we readily see that there exists a unique function $h(s) \in [0, 1]$ such that $h(s) = s\varphi(h(s))$ for all $s \in [0, 1]$. As a result, the implicit function theorem yields (1) and (2). Next, we give detailed proofs of (3) and (4).
To prove (3), we use L'Hôpital's rule and the formula for $h'(s)$ in (2). To prove (4), we apply L'Hôpital's rule again; from (2) we have $\varphi(h(s)) = (1 - s\varphi'(h(s)))h'(s)$, which implies the desired result.
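As a sanity check (ours, not from the paper), the implicit-differentiation formula $h'(s) = \varphi(h(s))/(1 - s\varphi'(h(s)))$ from Lemma 3.1(2) can be compared against a finite difference, again with the hypothetical $\varphi(x) = 0.5 + 0.5x^2$, for which $h(s) = (1 - \sqrt{1 - s^2})/s$ in closed form:

```python
import math

phi  = lambda x: 0.5 + 0.5 * x * x               # hypothetical: p_{-1} = p_1 = 1/2
dphi = lambda x: x                               # phi'(x)
h    = lambda s: (1 - math.sqrt(1 - s * s)) / s  # minimal solution of x = s*phi(x)

s = 0.6
implicit = phi(h(s)) / (1 - s * dphi(h(s)))      # Lemma 3.1(2)
eps = 1e-6
numeric = (h(s + eps) - h(s - eps)) / (2 * eps)  # central difference
print(implicit, numeric)  # agree to high accuracy (25/36 = 0.6944...)
```

At $s = 0.6$ one has $h(s) = 1/3$, so the formula gives $(5/9)/(4/5) = 25/36$, matching the numerical derivative.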
Lemma 3.2 Suppose that for each pair $(i, j)$, the transition probability $p_{i,j}$ is given by (3.1).

Proof. By a one-step analysis of the Markov chain, the family of functions $\{f_k(s);\ k \ge 0\}$ satisfies the corresponding system of equations, where $\tau_i$ is the first time that $W$ hits $i - 1$ starting from $i$ ($i = 1, \dots, k$). So, by the Markov property and the time-homogeneity of $W$, the claim follows, as desired.

Lemma 3.3
Suppose that for each pair $(i, j)$, the transition probability $p_{i,j}$ is given by (3.2).

Proof. By a one-step analysis of the Markov chain, $\{f_k(s);\ k \ge 0\}$ satisfies the corresponding system of equations. Since $t - s\varphi(t)$ is non-decreasing for $t \in [0, 1]$, we know that $u < h(s) < v$ for each $u \in D^-(s)$ and $v \in D^+(s)$. In addition, $F(u, s) \ge 0$ for all relevant $(u, s)$ with $s \in (0, 1)$. From the facts that $\varphi'(1) = 1$ and $p_0 < 1$, together with the convexity of $\varphi$, we know that $p_{-1} = \varphi(0) > 0$ and $\varphi(s) > s$ for all $s \in (0, 1)$. Combining these observations leads to the desired result.

If (3.2) holds, then Lemma 3.3 yields the corresponding bound. Therefore, from Assumption (H), we obtain the desired estimate. The proof is completed.

The proof of the LDP
In this section, we provide the proof of the LDP. Let $\tilde S_0 = 0$ and $\tilde S_n = M_n - S_n$ for $n \ge 1$, where $\{S_n\}$ is the random walk given in Section 2 and $M_n = \max_{0 \le k \le n} S_k$. For any $n \ge 0$,
$$\tilde S_{n+1} = \max(\tilde S_n - X_{n+1},\ 0).$$
Since $S_{n+1} - S_n$ is independent of $\{S_k,\ 0 \le k \le n\}$ and has the same distribution as $X_1$, $\{\tilde S_n,\ n \ge 0\}$ is a non-negative time-homogeneous Markov chain with one-step transition probabilities $p_{i,j}$. The basic assumption that $S$ is right or left continuous constrains these transition probabilities: when $S$ is right continuous, $p_{i,j}$ is given by (3.1) with $p_{-1} = q$.
Similarly, when $S$ is left continuous, the transition probability $p_{i,j}$ is given by (3.2) with $p_{-1} = q$. Let $L_n^0(\tilde S)$ be the occupation time of $\tilde S$ at the site $0$ from time $1$ up to time $n$, that is, $L_n^0(\tilde S) = \#\{1 \le m \le n:\ \tilde S_m = 0\}$. Since $S_m \ge M_{m-1}$ if and only if $\tilde S_m = 0$, it is easy to see that for every $n \ge 0$,
$$A_n = L_n^0(\tilde S). \qquad (4.1)$$
Let $\tilde\tau_1 := \inf\{n > 0:\ \tilde S_n = 0\}$ and $\tilde\tau_{k+1} := \inf\{n > \tilde\tau_k:\ \tilde S_n = 0\}$ for $k \ge 1$. Then (4.1) gives $A_n = L_n^0(\tilde S) = \sup\{k \ge 1:\ \tilde\tau_k \le n\}$. The Markov property indicates that $\tilde\tau_1$ and $\tilde\tau_{k+1} - \tilde\tau_k$, $k \ge 1$, are i.i.d.
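The identity $A_n = L_n^0(\tilde S)$ is easy to verify on any simulated path, since $\tilde S_m = 0$ exactly when $S_m \ge M_{m-1}$. A short sketch (ours; the right continuous step law below is a hypothetical choice with mean zero):

```python
import random

def record_and_occupation(steps):
    """Return (A_n, L_n^0(tilde S)): the weak record count and the number
    of zeros of the reflected walk tilde S_m = M_m - S_m for m = 1..n."""
    S, M = 0, 0
    records = zeros = 0
    for x in steps:
        prev_max = M
        S += x
        M = max(M, S)
        if S >= prev_max:      # weak record time (definition (1.1))
            records += 1
        if M - S == 0:         # tilde S_m = 0
            zeros += 1
    return records, zeros

random.seed(1)
# hypothetical right continuous steps (upward jumps of size at most 1), mean 0
steps = [random.choice([1, 1, -2]) for _ in range(5000)]
a_n, l_n = record_and_occupation(steps)
assert a_n == l_n              # the identity holds on every path
print(a_n)
```

The assertion never fires because $M_m = \max(M_{m-1}, S_m)$, so $S_m \ge M_{m-1}$ and $M_m - S_m = 0$ describe the same event.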
We next prove the LDP for $A_n$.

Proof of Theorem 2.1. Let $\{Y_i,\ i \ge 1\}$ be a sequence of i.i.d. random variables with the same distribution as $\tilde\tau_1$. Then, for any $0 < x \le 1$,
$$\mathbb{P}(A_n \ge nx) = \mathbb{P}\Big(\sum_{i=1}^{\lceil nx \rceil} Y_i \le n\Big),$$
where $\lceil a \rceil$ and $\lfloor a \rfloor$ denote the minimal integer no smaller than $a$ and the maximal integer no larger than $a$, respectively.
When $S$ is right continuous, since $\tilde S_0 = 0$ and $Y \overset{d}{=} \tilde\tau_1$, we get from Lemma 3.2 that $\ln \mathbb{E}(e^{\lambda Y}) = \Lambda_r(\lambda)$ for any $\lambda < 0$, and that $\mathbb{E}(Y) = +\infty$; we then apply Cramér's theorem. Similarly, when $S$ is left continuous, $\ln \mathbb{E}(e^{\lambda Y}) = \Lambda_l(\lambda)$. The rest is the same as the proof of Theorem 2 in [7], so we omit the details.

The proof of the MDP
In this section, we first prove the MDP for $A_n$ under Assumption (H). Then we provide some sufficient conditions for Assumption (H); based on these sufficient conditions, we directly obtain Corollaries 2.1 and 2.2.
The following lemma is a specialization of Chen [3, Theorem 2] to our case.
Now we provide the proof of the MDP for $A_n$.
Proof of Theorem 2.2. (1) By basic Markov chain theory together with Lemma 5.1, the stated estimate follows. (2) In this case, the conditions of Lemma 5.1 are similarly fulfilled, and the conclusion follows. The proof is now completed.
For the case $\varphi''(1) = +\infty$, we have the following specific result.
From Theorem 2.2, Remark 5.1 and Lemma 5.2, we obtain Corollaries 2.1 and 2.2, where the fact $q = \gamma/(1 + \beta)$ is used for the latter. The details are omitted.

Concluding remarks
In this paper, we prove the large and moderate deviation principles for two kinds of non-nearest neighbor random walks, namely the left continuous and the right continuous random walks. As implied by our main results (Theorems 2.1 and 2.2), the form of the asymptotic behavior differs among the left continuous case, the right continuous case, and the nearest-neighbor case. This implies that the traditional method of utilizing the strong invariance principle to relate random walks to Brownian motions may not work in the present setting. Instead, our new approach of linking random walks to queueing models helps to overcome these difficulties.
One future direction is to extend the results to more general transition structures (beyond the left continuous and right continuous settings). However, the current approach may fail, since the relation with queueing models (as displayed in Section 4) may become invalid in more general cases. Other estimation approaches may be worth exploring in the future.
Another interesting problem is the high-dimensional case, which may be much more difficult. Very recently, Godrèche and Luck [9] considered the two-dimensional case, and they obtained only the law of large numbers for the nearest-neighbor case.