Local Probabilities for Random Walks with Negative Drift Conditioned to Stay Nonnegative

Let {S_n, n ≥ 0} with S_0 = 0 be a random walk with negative drift and let τ_x = min{k > 0 : S_k < −x}, x ≥ 0. Assuming that the distribution of the i.i.d. increments of the random walk is absolutely continuous with a subexponential density, we describe the asymptotic behavior, as n → ∞, of the probabilities P(τ_x = n) and P(S_n ∈ [y, y + ∆), τ_x > n) for fixed x and various ranges of y. The case of lattice-distributed increments is considered as well.


1 Introduction
Let {S_n, n ≥ 0} be a random walk with S_0 = 0 and S_n = X_1 + X_2 + ... + X_n for all n ≥ 1, where X_1, X_2, ... are independent copies of a random variable X. For each x ≥ 0 let τ_x denote the first passage time to (−∞, −x), that is,

τ_x = min{k > 0 : S_k < −x}.

The main purpose of the present note is to investigate the asymptotic behaviour, as n → ∞, of the probabilities P(S_n ∈ [y, y + ∆), τ_x > n) and P(S_n ∈ [y, y + ∆), τ_x = n + 1) for random walks with negative drift EX = −a < 0. The driftless case has attracted a lot of attention in the last decade and is well studied in the literature, see [8, 9, 12, 13, 21, 22].
The study of random walks with negative drift conditioned to stay nonnegative was apparently initiated by Iglehart [17]. He proved that if

E[X e^{pX}] = 0 for some p > 0 (1.1)

and E[X^2 e^{pX}] < ∞, then the sequence L{S_n | τ_0 > n} converges weakly to a distribution on R_+. Since no scaling is needed here, one also obtains information on the local probabilities P(S_n ∈ [y, y + ∆), τ_0 > n) for fixed y. An explicit expression for the limit of the conditional probabilities P(S_n ∈ [y, y + ∆) | τ_0 > n) can be found in Theorem 1.3 of Keener [19].
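Condition (1.1) can be read as a Cramér-type assumption: it says that the increment distribution becomes centred after exponential tilting with parameter p. A minimal sketch of this reading (the notation F̃ for the tilted law is ours, not the paper's):

\[
\widetilde F(dx) := \frac{e^{px}\,F(dx)}{\mathbf{E}\,e^{pX}},
\qquad
\int_{\mathbb R} x\,\widetilde F(dx)
 = \frac{\mathbf{E}\,[X e^{pX}]}{\mathbf{E}\,e^{pX}} = 0
 \quad\text{by (1.1)} .
\]

Thus the tilted random walk is driftless, which explains why Iglehart's limit theorem requires no spatial scaling.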
Much less is known when (1.1) fails. If the variance of X is finite and the tail P(X > x) varies regularly with index −β < −2, then a conditional limit theorem still holds as n → ∞; this is a particular case of a conditional functional limit theorem proved by Durrett [14]. In contrast to Iglehart's situation, for regularly varying tails one cannot derive asymptotics for local probabilities from the integral limit theorem.
We are going to consider conditional local probabilities for random walks with heavy-tailed increments. More precisely, we shall work with the following classes of functions and distributions.
We say that a function f : R → R_+ is (asymptotically) locally constant, and write f ∈ L, if

f(x + h)/f(x) → 1 as x → ∞

for any h > 0. Further, see [18], Definition 3 and [2], Appendix B, we say that a function f : R → R_+ belongs to the class Sd of subexponential densities if f ∈ L and

∫_0^x f(x − y) f(y) dy ∼ 2 f(x) as x → ∞.

A positive, measurable function f defined in a neighborhood of infinity is called O-regularly varying if

0 < liminf_{x→∞} f(λx)/f(x) ≤ limsup_{x→∞} f(λx)/f(x) < ∞ for every λ ≥ 1.

Recall, finally, that f : R → R_+ is called almost decreasing (see Section 2.2 of [6]) if f(x) ≥ c sup_{y≥x} f(y) for some positive constant c.
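For orientation, a simple example (ours, not taken from the paper) that belongs to all of these classes is a Pareto density:

\[
b(x) = \beta\, x^{-\beta-1} \ \ (x \ge 1), \qquad b(x) = 0 \ \ (x < 1), \qquad \beta > 2 .
\]

Indeed, b(x + h)/b(x) = (1 + h/x)^{−β−1} → 1 for every fixed h, so b ∈ L; b(λx)/b(x) = λ^{−β−1} is bounded away from 0 and ∞ for every fixed λ ≥ 1, so b is O-regularly varying; b is decreasing on [1, ∞) and hence almost decreasing; and, being regularly varying, it is a subexponential density, b ∈ Sd.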
We assume in the sequel that the distribution of X is either absolutely continuous or is supported by the integers Z (and not by a proper sublattice thereof). Let b(x) denote the Lebesgue density of X in the absolutely continuous case and the probability mass function in the lattice case.
All the conditions of Theorem 1.1 are taken from [2]; they are sufficient for the relation

P(S_n ∈ [y, y + ∆)) ∼ ∆ n b(an + y)

to hold uniformly in y ≥ −(a − ε)n for every ε > 0, see Corollary 2.1 in [2]. This asymptotics for unconditioned probabilities is one of the most important ingredients of the proof.

Remark 1.2. Under much stronger conditions Theorem 1.1 was proved in [4].
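Heuristically, this relation reflects the one-big-jump principle; the following sketch is our informal illustration, not an argument from [2]. Since the walk drifts to −an, the event S_n ∈ [y, y + ∆) with y ≥ 0 requires one increment of size about an + y, while the remaining increments behave typically; any of the n increments may play this role:

\[
\mathbf{P}\bigl(S_n \in [y, y+\Delta)\bigr)
\;\approx\; n\,\mathbf{P}\bigl(X_1 \in [an+y,\, an+y+\Delta)\bigr)\,
\mathbf{P}\bigl(S_{n-1} \approx -an\bigr)
\;\approx\; n\,\Delta\, b(an + y) .
\]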
Theorem 1.3. Assume that the conditions of Theorem 1.1 are fulfilled. Suppose additionally that, as x → ∞,

P(X ≥ x) = O(x b(x)). (1.4)

Then, for every fixed x ≥ 0, relation (1.5) holds.

The starting point in the proof of Theorem 1.1 is the Wiener-Hopf factorization. It seems, however, that this method does not work in the case when y = y_n → ∞. In order to analyze this situation we use a probabilistic approach, which requires stronger restrictions on the jump distribution.
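For orientation, condition (1.4) holds automatically when b is regularly varying; the following short check is ours, assuming b(x) = x^{−β−1} ℓ(x) with ℓ slowly varying and β > 1:

\[
\mathbf{P}(X \ge x) = \int_x^{\infty} b(u)\,du
 \;\sim\; \frac{x^{-\beta}\,\ell(x)}{\beta}
 \;=\; \frac{1}{\beta}\, x\, b(x)
 \qquad (x \to \infty)
\]

by Karamata's theorem, so P(X ≥ x) = O(x b(x)).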
We now consider increments whose tail decays algebraically.
Then, for every sequence y_n → ∞ as n → ∞ and any fixed x and ∆ > 0, the asserted asymptotics holds. This theorem is a local counterpart of Durrett's result mentioned earlier.
The method we use to prove Theorem 1.4 also works for bounded values of y, but it requires conditions on the function b(x) that are stronger than those of Theorem 1.1.

2 Proof of Theorem 1.1
Since the proofs in the absolutely continuous and lattice cases are almost identical, we consider here only the first possibility.
We start with a series of auxiliary statements. The first result, Lemma 2.1, is Corollary 2.1 from [2]: under the conditions of Theorem 1.1, for every ε > 0,

P(S_n ∈ [y, y + ∆)) ∼ ∆ n b(an + y) uniformly in y ≥ −(a − ε)n. (2.1)

The next lemma can be found in Embrechts and Hawkes [15] or Asmussen et al. [1].
The first statement of Lemma 2.2 follows from Proposition 3 of [1]. The second statement of the lemma follows from Theorem 1 of [15] or Theorem 7 of [1]. To apply the results from [1] one should take ∆ = (0, 1] there and notice that, for lattice random variables, subexponentiality of the probability mass function is equivalent to Definition 2 of [1] with ∆ = (0, 1].

Lemma 2.3. If (1.3) is valid, then there exists a constant c ∈ (0, ∞) such that Z(x) ≤ c x^{1−1/κ} for all sufficiently large x.
This implies b(x) ≤ 2k(x) b(x/2). Applying the latter inequality iteratively, starting from a fixed x_0, and taking logarithms on both sides gives the desired bound for log b(x); the remaining bound can be proved similarly.
The next statement follows immediately from Theorem 2.2 of [3].

Lemma 2.4. If P(S_n > y)/P(S_n > 0) → 1 for every y > 0 and {P(S_n > 0)/n} is a subexponential sequence, then, as n → ∞, (2.3) holds; moreover, under the conditions of Theorem 1.1, a companion relation holds.

Indeed, the conditions of Theorem 1.1 of the present paper correspond to the conditions of Theorem 2.1 of [3], and the additional conditions of Lemma 2.4 correspond to the conditions of Theorem 1.2 of [3] with α = γ = 0.
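For the reader's convenience we recall one common definition of a subexponential sequence, which we believe is the notion meant here (the exact formulation in [3] may differ slightly): a positive, summable sequence (ψ_n) is subexponential if

\[
\frac{\psi_{n+1}}{\psi_n} \;\to\; 1
\qquad\text{and}\qquad
\sum_{k=1}^{n-1} \psi_k\,\psi_{n-k} \;\sim\; 2\,\psi_n \sum_{k=1}^{\infty}\psi_k
\qquad (n \to \infty) .
\]

This is the discrete analogue of subexponentiality of a density; in Lemma 2.4 it is applied to ψ_n = P(S_n > 0)/n.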
We define and specify two renewal functions, where and (2.8)

Proof. We first check the validity of (2.5). (2.9) Clearly, for any h > 0, Note that, according to Lemma 2.3, Z(x) ≤ c x^{1−1/κ} for sufficiently large x. With this in view we have, by Lemma 2.1, Thus, By similar arguments we get and, letting h ↓ 0, we obtain that, as n → ∞, Combining this with (2.9) and the fact that b(n) = o(1/n), which follows from the existence of the first moment, we conclude that, as n → ∞,

E[e^{λ S_n}; S_n < 0] ∼ λ^{-1} n b(an).
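The mechanism behind the last relation can be sketched informally as follows (our illustration; in the proof the contribution of the region S_n < −(a − ε)n is controlled separately via Lemma 2.3): by the local asymptotics (2.1) and the long-tailedness of b,

\[
\mathbf{E}\bigl[e^{\lambda S_n};\,S_n<0\bigr]
 = \int_{-\infty}^{0} e^{\lambda u}\,\mathbf{P}(S_n\in du)
 \;\approx\; n\int_{-\infty}^{0} e^{\lambda u}\, b(an+u)\,du
 \;\sim\; n\,b(an)\int_{-\infty}^{0} e^{\lambda u}\,du
 \;=\; \lambda^{-1}\, n\, b(an),
\]

since b(an + u) ∼ b(an) for each fixed u and the exponential factor concentrates the integral on bounded values of u.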
The proof of (2.6) follows the same lines, using the Baxter identity. The resulting relations, for finite x ≥ 0, are valid for every θ ∈ R_+, since the limit measures involved here have densities with respect to the Lebesgue measure.
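For completeness, one standard form of the Baxter (Spitzer–Baxter) identity for τ_0 = min{k > 0 : S_k < 0} reads as follows; we believe this is the identity referred to above, although the precise form used in the paper, and its version for τ_x with x > 0, is not reproduced here:

\[
1-\mathbf{E}\bigl[t^{\tau_0}\, e^{\lambda S_{\tau_0}}\bigr]
 \;=\; \exp\Bigl\{-\sum_{n=1}^{\infty}\frac{t^n}{n}\,
 \mathbf{E}\bigl[e^{\lambda S_n};\,S_n<0\bigr]\Bigr\},
 \qquad 0\le t<1,\ \lambda\ge 0 .
\]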
Proof of Theorem 1.1. By the same arguments that were used to deduce (2.12) and (2.13) from (2.5) and (2.6), we infer from (2.14) that Furthermore, By duality, for each l ≥ 1, and, recalling the definition of u(x), we finally get Combining (2.19)–(2.21) completes the proof.
3 Local limit theorem for the first exit time from the positive semi-axis

3.1 Proof of (1.5) for x = 0

In this subsection we write τ for τ_0. Setting λ = 0 in (2.11) and differentiating the result with respect to t, one can easily get Hence it follows that As a result, We then apply Lemma 2.4.
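If (2.11) is the Baxter-type identity displayed in the previous section, then setting λ = 0 and differentiating in t yields, upon comparing coefficients of t^{n−1}, the convolution relation below; this is our reconstruction of the step and not a formula quoted from the paper:

\[
n\,\mathbf{P}(\tau=n)
 \;=\; \mathbf{P}(S_n<0)
 \;-\;\sum_{k=1}^{n-1}\mathbf{P}(\tau=k)\,\mathbf{P}(S_{n-k}<0),
 \qquad n\ge 1 .
\]

Together with the subexponentiality provided by Lemma 2.4, a relation of this type is what allows one to extract the asymptotics of P(τ = n).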
Since the random walk under consideration has a negative drift, for any fixed ε > 0 we can select a sufficiently large A such that P(S_A > 0) ≤ ε. In fact, we can assume that k ≤ n − A(n) → ∞. As a result, Now we analyze the difference Applying Lemma 2.1, we obtain Using long-tailedness, we deduce that, as i → ∞, Hence, letting A → ∞, we conclude that where X^- = max(0, −X). Next, for any ε ∈ (0, a) we have P(X_i ∈ dy) P(S_{i−1} ∈ (−y, 0]). By the insensitivity assumption (1.3), the second integral admits the estimate while for the first we have The evaluation of the third integral requires more delicate arguments based on a number of results we borrow from [2]. First we note that, according to Lemma 6.2 of [2], the sequence h_i := i^{1/κ} is a truncation sequence; see formula (4) in [2] for more detail. Hence we may apply Lemma 2.5 of the mentioned article to conclude that, as i → ∞, Applying Lemma 7.1 from [2] to the centered random walk S_n + na, we obtain Furthermore, using (1.3), one can get, for all sufficiently large i, It is easy to see that To bound this probability we apply estimates from [2]. Applying the first display on page 1958 of [2] we obtain, Applying the third display on page 1958 of [2] gives where β = 2^{-1} P(X_1 + x > 0) > 0. Noting that (2.1) yields P(S_i > 0) ∼ i P(X_1 ≥ ai), we get

∫_{(a−ε)i}^{ai − A i^{1/κ}} P(X_i ∈ dy) P(S_{i−1} > −y, X_1 > h_i, max_{2≤k<i} X_k ≤ h_i) = o(P(X_1 ≥ ai)).
As a result, P(S_{i−1} ≤ 0, S_i > 0) ∼ (i − 1) b((i − 1)a) E[X^+] + P(X_1 ≥ ai). In the second equivalence we used the second assertion of Lemma 2.4; in the third equivalence we used the subexponentiality of the tail of X. This term will cancel with (3.2).
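To summarise the structure of this computation (our reconstruction of the splitting; the labelling of the three integrals in the original argument may differ), the probability is decomposed as

\[
\mathbf{P}(S_{i-1}\le 0,\ S_i>0)
 = \int_{0}^{\infty}\mathbf{P}(X_i\in dy)\,\mathbf{P}\bigl(S_{i-1}\in(-y,0]\bigr)
 = \Bigl(\int_{0}^{(a-\varepsilon)i}
 +\int_{(a-\varepsilon)i}^{ai-Ai^{1/\kappa}}
 +\int_{ai-Ai^{1/\kappa}}^{\infty}\Bigr)
 \mathbf{P}(X_i\in dy)\,\mathbf{P}\bigl(S_{i-1}\in(-y,0]\bigr),
\]

where the range of small y contributes (i − 1) b((i − 1)a) E[X^+] via the local theorem (2.1), the middle range is o(P(X_1 ≥ ai)), and the range of large y contributes P(X_1 ≥ ai) thanks to the insensitivity assumption (1.3).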
Finally, choosing A(n) = h_n = n^{1/κ} (here and in what follows we agree to interpret n^{1/κ} as [n^{1/κ}], i.e., as a positive integer) and taking into account the estimates obtained above, together with condition (1.4), we get the desired result.
