Curve Crossing for the Reflected Lévy Process at Zero and Infinity

Let $R_{t}=\sup_{0\leq s\leq t}X_{s}-X_{t}$ be a Lévy process reflected in its maximum. We give necessary and sufficient conditions for finiteness of passage times above power law boundaries at infinity. Information as to when the expected passage time for $R_{t}$ is finite is also given. We further discuss the almost sure finiteness of $\limsup_{t\to 0}R_{t}/t^{\kappa}$, for each $\kappa\geq 0$.


Introduction
Let $X=(X_{t},\,t\geq 0)$ be a Lévy process starting at zero with characteristic triplet $(\gamma,\sigma,\Pi)$, where $\gamma\in\mathbb{R}$, $\sigma\geq 0$ and the Lévy measure $\Pi$ satisfies $\int_{-\infty}^{\infty}(1\wedge x^{2})\,\Pi(dx)<\infty$. We use $\overline{\Pi}(x)=\int_{|y|\geq x}\Pi(dy)$ to denote the two-sided tail of the Lévy measure, and $\overline{\Pi}^{(+)}$ and $\overline{\Pi}^{(-)}$ to denote the corresponding positive and negative tails. Let $\Psi(\theta)$ denote the characteristic exponent of $X$, so that
$$Ee^{i\theta X_{t}}=e^{-t\Psi(\theta)},\qquad \Psi(\theta)=-i\gamma\theta+\frac{\sigma^{2}\theta^{2}}{2}+\int_{-\infty}^{\infty}\bigl(1-e^{i\theta x}+i\theta x\,\mathbf{1}_{\{|x|\leq 1\}}\bigr)\,\Pi(dx). \eqno(1.1)$$
When $Ee^{\lambda X_{1}}$ exists for all $\lambda$ in an open interval containing $0$, we can extend $\Psi$ analytically to a neighbourhood of the real line in the complex plane and refer to the Laplace exponent $\psi$, which relates to $\Psi$ via the identity $\psi(\theta)=\ln Ee^{\theta X_{1}}=-\Psi(-i\theta)$.
For any Lévy process we can define the reflected process $R=(R_{t})_{t\geq 0}$ by $R_{t}=\overline{X}_{t}-X_{t}$, where $\overline{X}_{t}=\sup_{0\leq s\leq t}X_{s}$. More generally, whenever we use the notation $\overline{Y}_{t}$, we mean $\overline{Y}_{t}=\sup_{s\in I\cap[0,t]}Y_{s}$, where $I$ is either $\mathbb{R}_{+}$ or $\mathbb{Z}_{+}$.
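As a quick illustration (not part of the formal development), the reflected process is easy to simulate on a time grid. The sketch below uses a Brownian motion with drift, the simplest Lévy process with a nontrivial reflection; all parameter names are ours:

```python
import random

def reflected_path(n_steps, dt, mu=-0.5, sigma=1.0, seed=7):
    """Discretize X_t = mu*t + sigma*B_t and return the reflected
    process R_t = sup_{0<=s<=t} X_s - X_t along the grid."""
    rng = random.Random(seed)
    x, running_max = 0.0, 0.0
    path = []
    for _ in range(n_steps):
        x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        running_max = max(running_max, x)   # the maximum process X-bar_t
        path.append(running_max - x)        # R_t >= 0 by construction
    return path

R = reflected_path(10_000, 0.01)
assert all(r >= 0.0 for r in R)  # reflection keeps the process nonnegative
```

Note that the running maximum only ever increases at times when the path sets a new record, which is the source of the bounded variation of $\overline{X}$ used repeatedly below.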
The reflected process plays an important role in the theory of random walks and Lévy processes, and has many applications in finance, genetics and optimal stopping. For example, the optimal time to exercise the "Russian option" is the first time the reflected process crosses a fixed level (Shepp and Shiryaev [13], [14]; Asmussen, Avram and Pistorius). For more discussion and basic properties of the reflected process we refer to [5].
The first aim of this paper is to obtain necessary and sufficient conditions (NASC) for the almost sure (a.s.) finiteness of passage times of R t out of power law regions of the form [0, rt κ ] where r > 0 and κ ≥ 0, and for the finiteness of the expected value of passage times of R t from linear (κ = 1) or parabolic (κ = 1/2) regions. We also provide a NASC for the a.s. finiteness of lim sup t→0 R t /t κ , for any κ ≥ 0.
Section 2 essentially extends results for random walks in [5]. We obtain NASC for when $\limsup_{t\to\infty}R_{t}/t^{\kappa}$ is a.s. finite or not, for any $\kappa\in\mathbb{R}_{+}$. To achieve this we rely on very useful stochastic bounds discovered recently by Doney in [7]. The section concludes with a discussion of the finiteness of expected values of passage times of $R_{t}$.
In Section 3 new results for the passage time of $R_{t}$ at $0$ are obtained. The NASC are very similar to the ones at $\infty$. It turns out that integrability conditions on the Lévy measure (see Theorem 3.1 (i) and Theorem 3.2 (ii)) play the same role as the finiteness of particular moments of the Lévy process (see Theorem 2.1 (b)).
The proofs are given in Section 4 and Section 5, while some technical results are collected in the Appendix.

Passage times above power law boundaries at infinity
In [5], results about the first exit time of a reflected random walk from power law regions are obtained. These include NASC for a.s. finiteness of both the first exit time and its expectation. In this section we extend these results to reflected Lévy processes. The main technique in proving Theorem 2.1 is the stochastic bound discovered recently by Doney, see [7]. It is also possible to derive this result using a standard embedded random walk $X:=(X_{n};\,n\geq 0)$, where $X_{n}$ is the Lévy process evaluated at time $n$. We prove Theorem 2.2 by using functions of $R_{t}$ which define martingales on $\mathbb{R}_{+}$.
We define, for any $\kappa\geq 0$ and $r>0$, the passage time $\tau_{\kappa}(r)=\inf\{t\geq 0: R_{t}>r(t+1)^{\kappa}\}$, where $t+1$ is used in place of $t$ to avoid the case when $\tau_{\kappa}(r)=0$ a.s. Let $X^{+}=\max\{X,0\}$ and $X^{-}=\max\{-X,0\}$. We may now state our main result.
Remarks. (i) Note that $\tau_{\kappa}(r)<\infty$ a.s., for all $r>0$, is equivalent to $\limsup_{t\to\infty}R_{t}/t^{\kappa}=\infty$ a.s. This may not seem obvious, but can be proved in the same way as Lemma 3.1 in [5]. For an alternative proof see [12].
(ii) Also note that, for the embedded random walk $X=(X_{n},\,n\geq 0)$, we have the inequality
$$R_{n}\geq R^{X}_{n},\qquad n\geq 0, \eqno(2.3)$$
where $R^{X}$ is the reflected process for $X$; the supremum over $[0,n]$ dominates the supremum over $\{0,1,\dots,n\}$. This implies that
$$\tau_{\kappa}(r)\leq\tau^{X}_{\kappa}(r) \eqno(2.4)$$
for any $\kappa\geq 0$ and $r>0$.
(iii) We exclude the case of a positive subordinator, since then $R_{t}\equiv 0$ and obviously $\tau_{\kappa}(r)=\infty$ a.s. (iv) For analytic conditions equivalent to $\liminf_{t\to\infty}X_{t}/t^{\kappa}=-\infty$ a.s. we refer to [6].

The second result considers the expected value of the passage time of $R_{t}$ above linear and square root boundaries, and extends the corresponding result in [5]. (ii) $E\tau_{1/2}(\alpha r)=\infty$, for all $r\geq 1$.
Remarks. (i) The general approach to estimating the expectation of the first exit time is via functions of $R_{t}$ that define martingales on $\mathbb{R}_{+}$, see Theorem 2.2 in [5]. This constrains us to linear and square root boundaries, and it is not clear how this approach could be extended to a general boundary when $1/2<\kappa<1$. (ii) Despite some effort, we have been unable to remove the restriction $E(X_{1}^{+})^{2}<\infty$ in (b). Generally, it seems difficult to obtain results for the finiteness of the expected values of passage times when $EX_{1}^{2}=\infty$, not only for the reflected process but for random walks as well. For a short discussion we refer to [5].
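The passage times studied in this section are straightforward to approximate by simulation. The sketch below (our notation; a discretized Brownian motion with drift stands in for a general Lévy process) estimates $\tau_{\kappa}(r)=\inf\{t: R_{t}>r(t+1)^{\kappa}\}$ on a grid:

```python
import random

def passage_time(kappa, r, mu=0.0, sigma=1.0, dt=0.01, horizon=200.0, seed=1):
    """Monte Carlo sketch of tau_kappa(r) = inf{t : R_t > r*(t+1)^kappa}
    for a discretized Brownian motion X with drift mu.  Returns the first
    grid time of passage, or None if the boundary is not crossed before
    `horizon` (a finite simulation cannot certify tau = infinity)."""
    rng = random.Random(seed)
    x = running_max = 0.0
    t = 0.0
    while t < horizon:
        t += dt
        x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        running_max = max(running_max, x)
        if running_max - x > r * (t + 1.0) ** kappa:
            return t
    return None

# For kappa = 0 the reflected path should eventually cross any fixed
# level (cf. Theorem 2.1 (a)); here we only check the return type.
t0 = passage_time(kappa=0.0, r=1.0)
assert t0 is None or 0.0 < t0 <= 200.0
```

Averaging the returned times over many seeds gives a crude estimate of $E\tau_{\kappa}(r)$ in the regimes where Theorem 2.2 asserts finiteness.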

Passage times above power law boundaries at zero
In this section we discuss passage times of the reflected process above power law boundaries at zero. To avoid notational complications we will study
$$\limsup_{t\to 0}\frac{R_{t}}{t^{\kappa}}=\infty \quad\text{a.s.} \eqno(3.1)$$
rather than the equivalent condition $\tau_{\kappa}(r)=\inf\{t>0: R_{t}>rt^{\kappa}\}=0$ a.s., for all $r>0$.
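The equivalence of the two formulations can be spelled out by a standard argument, included here for convenience:

```latex
\[
\tau_{\kappa}(r)=0 \ \text{a.s.}
\iff \forall\,\varepsilon>0\ \exists\, t\in(0,\varepsilon):\ R_{t}>rt^{\kappa}
\iff \limsup_{t\to 0}\frac{R_{t}}{t^{\kappa}}\ge r \ \text{a.s.}
\]
```

Since this holds for every $r>0$, and the event $\{\limsup_{t\to 0}R_{t}/t^{\kappa}=\infty\}$ belongs to the germ $\sigma$-field at $0$, Blumenthal's $0$-$1$ law forces its probability to be $0$ or $1$, so (3.1) either holds a.s. or fails a.s.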
The first theorem deals with Lévy processes with bounded variation.
Theorem 3.1. Let $X$ be a Lévy process with bounded variation and drift $d$, defined by $\lim_{t\to 0}X_{t}/t=d$ a.s. Then the following statements hold. (ii) For $\kappa\leq 1$, we have

Next we deal with the unbounded variation case. We have the following result:

Theorem 3.2. Let $X$ be a Lévy process with unbounded variation.

Proofs for Section 2
Proof of Theorem 2.1. We start with the proof of (a). If $X$ is a negative subordinator, then $R_{t}=-X_{t}$ and the statement that $\tau(r)<\infty$ a.s. is clear from the fact that $X_{t}$ drifts to $-\infty$. Otherwise, without loss of generality, we assume that $\overline{\Pi}^{(-)}(1)>0$. Obviously $\tau(r)<\wp(r)=\inf\{t: X$ has jumped $[r]+1$ times with jumps less than $-1\}$.
Since $\wp(r)$ is a sum of independent exponentially distributed random variables, it has a gamma distribution and hence $Ee^{\lambda\tau(r)}\leq Ee^{\lambda\wp(r)}<\infty$, for $\lambda$ small enough.
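To make the gamma step explicit: jumps below $-1$ arrive at rate $q=\overline{\Pi}^{(-)}(1)>0$, so $\wp(r)$ is a sum of $[r]+1$ i.i.d. exponential waiting times, and its moment generating function is finite below $q$:

```latex
\[
\wp(r)\sim \Gamma\bigl([r]+1,\,q\bigr),
\qquad
E e^{\lambda \wp(r)}
  = \Bigl(\frac{q}{q-\lambda}\Bigr)^{[r]+1} < \infty
  \quad\text{for } 0<\lambda<q .
\]
```

Combined with $\tau(r)<\wp(r)$, this gives the claimed exponential moment of $\tau(r)$ for all sufficiently small $\lambda>0$.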
To show that $Ee^{\lambda\tau(r)}<\infty$, for some $\lambda>0$, for a general Lévy process, we invoke Theorem 2.1 (a) in [5] for the embedded random walk defined in remark (ii) following Theorem 2.1, and use inequality (2.4).
We shall prove the forward part of both (i) and (ii) in (b) together. Assume (2.2) holds. Denote by $\{\zeta_{i}\}_{i\geq 0}$ the stopping times defined recursively by the construction in [7]. We use Theorem 1.1 in [7] to construct a stochastic bound $M_{n}$ for $\overline{X}_{t}$, where $S^{+}_{n}$ is a random walk with steps $Y_{i}$ and $m_{0}=\sup_{t\leq\zeta_{1}}X_{t}$. In fact, $Y_{i}$ can be represented in a useful way in terms of $\widetilde{X}$, where $\widetilde{X}$ is obtained from $X$ by removing all jumps bigger than $1$ in absolute value. The Lévy measure of $\widetilde{X}$ then has compact support and hence, from Theorem 25.17 in [11] for example, we have $Ee^{\lambda\widetilde{X}_{1}}<\infty$, for all $\lambda>0$.
With $N(t):=\max\{i:\zeta_{i}\leq t\}$ and $\overline{M}_{N(t)}=\max_{n\leq N(t)}M_{n}$, we have the corresponding inequality. Recall that $m_{0}\geq 0$ a.s., so that the reflected random walk of $S^{+}$ has the form $R^{+}$. These observations enable a useful upper bound for $R_{t}$. We now show that the corresponding limsup vanishes, i.e. that (4.4) holds. To achieve this, recall that $\overline{X}_{t}$, as well as $\zeta_{1}$, has finite moments of any order (Theorem 25.17 in [11] implies $Ee^{\lambda\widetilde{X}_{t}}<\infty$, for any $\lambda\in\mathbb{R}$). This easily implies that $V_{0}$ has moments of any order and hence, for any $\varepsilon>0$, a simple application of the Borel-Cantelli lemma yields (4.5) and hence (4.4). Lastly, we see that (2.2) and (4.3), along with the strong law of large numbers, allow us to apply Theorem 2.1 in [5] to the random walk $S^{+}_{n}$; the definition of $Y$ then yields the corresponding moment condition on $X$. The backward part of (b) is much simpler, since we can directly use $R_{t}\geq -X_{t}$ when $\liminf_{t\to\infty}X_{t}/t^{\kappa}=-\infty$, or apply Theorem 2.1 in [5] to the embedded random walk $X$ when $E(X_{1}^{-})^{1/\kappa}=\infty$. In the latter case $R_{n}\geq R^{X}_{n}$, where $R^{X}$ is the reflected random walk for $X$, and applying (2.3) we conclude the proof.
Proof of Theorem 2.2. Part (i) for both (a) and (b) follows easily from Theorem 2.2 in [5] together with inequality (2.4). We therefore concentrate on (ii), (a). Observe that, since $EX_{1}=0$, $X_{t}$ is a martingale. Also note that the maximum process $\overline{X}_{t}$ has bounded variation, and therefore $R_{t}$ is a semimartingale. Moreover, $EX_{t}^{2}<\infty$ implies that $E\overline{X}_{t}^{2}<\infty$, see Theorem 25.18 in [11], which in turn gives $ER_{t}^{2}<\infty$. Denoting by $[\,\cdot\,]_{t}$ the quadratic variation of a process and applying Itô's formula, see [9], p. 71, we obtain (4.7). Now, by virtue of the fact that $\overline{X}$ has bounded variation, it follows that, for $P$-a.e. $\omega$ in $\Omega$, $\overline{X}_{t}(\omega)=\sum_{s\leq t}\Delta\overline{X}_{s}(\omega)+G(t,\omega)$, where the function $G(\cdot,\omega)$ is nonnegative, nondecreasing and continuous. This follows from the fact that, for any given $\omega$, $t\mapsto\overline{X}_{t}(\omega)$ is a right continuous, nondecreasing and nonnegative function. Consequently we obtain (4.8). Inserting the decomposition into (4.8) and substituting (4.8) into (4.7), we obtain (4.9). Inserting the relevant identities into (4.9), we deduce the final identity. We are now ready to conclude the proof of the theorem. First we note that $\int_{0}^{t}R_{s-}\,dX_{s}$ and $X_{t}^{2}-[X]_{t}$ are zero mean martingales. Then we apply the optional sampling theorem to the last identity to obtain, for any $m>0$, a bound on $E(R_{\tau_{1/2}(\alpha r)\wedge m})^{2}$. If we assume that $E\tau_{1/2}(\alpha r)<\infty$, we see from Fatou's lemma and the definition of $\tau_{\kappa}(r)$ that
$$\liminf_{m\to\infty}E\bigl(R_{\tau_{1/2}(\alpha r)\wedge m}\bigr)^{2}\geq E\bigl(R^{2}_{\tau_{1/2}(\alpha r)}\bigr)>r^{2}\alpha^{2}\bigl(E\tau_{1/2}(\alpha r)+1\bigr),$$
which gives the required contradiction.
Turning now to the proof of part (b), (ii), we can assume, without loss of generality, that $EX_{1}=-1$. From [12] and $E(X_{1}^{+})^{2}<\infty$ we see that $l=E\overline{X}_{\infty}<\infty$. Define the exit times $T_{q}$, for all $q\geq 1$, and assume that $ET_{q}<\infty$, for each $q\geq 1$. An easy application of the optional sampling theorem to $X_{T_{q}\wedge m}$, followed by the monotone convergence theorem and Fatou's lemma, yields a bound from which we must have $ET_{q}=\infty$ when $q>l$. Next observe that, for each $q\geq 1$, the first inequality comes from narrowing the possible values of $R_{1}$, while the second, which reads $E(T_{q}\,|\,R_{1}=x,\,T_{q}>1)\geq ET_{q-x}$, for $x\in(0,1)$, is verified using the fact that $R_{t}$ is a Markov process. If we assume that $X$ is not a negative drift, then there exists $\delta>0$ such that, repeating this step finitely many times, we obtain a corresponding lower bound, where $C$ is some constant. Therefore we must have $ET_{1}=\infty$.

Proofs for Section 3
First of all, we observe that for a study of the behaviour at zero we can always assume that the Lévy measure is carried by $[-1,1]$. With this in mind we proceed with the proof of Theorem 3.1. Recall that, since $X$ has bounded variation, we can write
$$X_{t}=dt+Y_{t}+Z_{t}, \eqno(5.1)$$
where $Y$ is a driftless positive subordinator and $Z$ is a driftless negative subordinator.
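As an aside, this decomposition is easy to realize numerically. The sketch below (our notation; compound Poisson jumps only, which is a special case of bounded variation) builds $Y$ and $Z$ as driftless subordinators on a grid and checks the identity $X_{t}=dt+Y_{t}+Z_{t}$ pointwise:

```python
import random

def bv_levy_grid(d=0.3, rate_pos=2.0, rate_neg=1.0, dt=0.001, n=5000, seed=3):
    """Build X_t = d*t + Y_t + Z_t on a grid, where Y is a driftless
    positive (compound Poisson) subordinator and Z a driftless negative
    one.  Returns the three paths so the decomposition can be checked."""
    rng = random.Random(seed)
    Y = Z = 0.0
    X_path, Y_path, Z_path = [], [], []
    for k in range(1, n + 1):
        # Poisson(rate*dt) jump counts per cell, approximated by a Bernoulli
        if rng.random() < rate_pos * dt:
            Y += rng.expovariate(1.0)          # positive jump of Y
        if rng.random() < rate_neg * dt:
            Z -= rng.expovariate(1.0)          # negative jump of Z
        Y_path.append(Y)
        Z_path.append(Z)
        X_path.append(d * k * dt + Y + Z)
    return X_path, Y_path, Z_path

X, Y, Z = bv_levy_grid()
# The decomposition holds exactly by construction at every grid point.
assert all(abs(x - (0.3 * (k + 1) * 0.001 + y + z)) < 1e-12
           for k, (x, y, z) in enumerate(zip(X, Y, Z)))
```

In this discretization $X_{t}/t\to d$ on the event that no jump occurs before $t$, which mirrors the a.s. small-time drift behaviour used in Theorem 3.1.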
To show (i), let us first suppose that (5.2) holds. Then, applying Theorem 9, Chapter 3 in [2] to $-Z_{t}$ in (5.1), we easily get $\lim_{t\to 0}Z_{t}/t^{\kappa}=0$ a.s.
Assume now that (5.2) fails. Then a standard argument, see Theorem 9 on page 85 of [2], gives, for any $c>1$, $-\Delta X_{t}>ct^{\kappa}$ i.o. as $t\to 0$, which along with $R_{t}\geq -\Delta X_{t}\mathbf{1}_{\{\Delta X_{t}<0\}}$ shows that (3.1) holds. For (ii), (a), all we need to observe is the bound coming from (5.1), together with $\lim_{t\to 0}X_{t}/t=d$ a.s. This completes the proof of Theorem 3.1.
From now on we set $\sigma=0$ and proceed with part (ii). Suppose first that $\kappa=1/2$, so that (A) fails. In view of Theorem 2.2 in [3], we have a dichotomy, which is exactly condition (B). Write $X=X^{+}+X^{-}$ as a sum of two independent Lévy processes, where $X^{+}$ is a zero mean, spectrally positive Lévy process and $X^{-}$ is a zero mean, spectrally negative Lévy process. In view of the resulting comparison, we may additionally assume that $X$ is a zero mean, spectrally positive Lévy process and continue with the proof. Let us define the relevant functions, and assume, without loss of generality, that $\lambda_{J}<1$. We will often refer to Proposition 6.2 in the Appendix, where important properties of the function $D(x)$, for all $x\geq 0$, are obtained. We proceed to show that (3.1) fails when $\lambda_{J}<1$. First we establish some notation. We will write $X_{t}=X^{b}_{t}+\widetilde{X}^{b}_{t}$, where $X^{b}$ is a spectrally positive Lévy process with jumps bounded by $b$ and $\widetilde{X}^{b}$ is a compensated Poisson process of the jumps bigger than $b$. Since $X$ is spectrally positive, a handy bound (5.8) for $\widetilde{X}^{b}_{t}$ is available, and by (5.8), (5.4) and Proposition 6.2 (b) we immediately obtain the required bound. Therefore it will suffice to show (5.10). For this purpose we use inequality (4.11) in [3], which holds for any zero mean Lévy process with $\sigma=0$. Using this result together with (5.3) gives the key estimate. In order to estimate $2vV(D(v))$, we use (5.4) together with (b) and (d) from Proposition 6.2 and get $2vV(D(v))=o(v^{\kappa})$. This means that, for any $\varepsilon>0$ and $v\leq v(\varepsilon)$, the estimate holds up to an $\varepsilon$ correction. For any $a>e+\varepsilon$, we have by Proposition 6.3 the bound (5.15), where we have set $\rho=a-\varepsilon-e$. Choose $v_{n}=D^{\leftarrow}(1/2^{n})$ (see Proposition 6.2 for the definition) and use (5.15) together with Proposition 6.2, part (c), to obtain a summable bound, where $K=\ln 2$. Then, setting $q=2^{(2\kappa-1)/(1-\kappa)}$, the bound simplifies. Choose $\rho$, which is the same as choosing $a$, such that $q\rho=1$, and use the fact that $J(1)<\infty$ (recall that $\lambda_{J}<1$) to conclude summability, where $a=1/q+\varepsilon+e$.
Then the Borel-Cantelli lemma gives (5.10) over $\{v_{n}\}$, where we have made use of the definition of $v_{n}$ and the fact that $W(x)/x\uparrow\infty$ as $x\downarrow 0$. This establishes the theorem.

Appendix
The first technical result in this section is the following proposition.

Proof. Note that $\int_{0}^{t}R_{s-}\,d\overline{X}_{s}$ is increasing in $t$, so it will be sufficient to show that $\int_{0}^{t}R_{s-}\,d\overline{X}_{s}=\sum_{s\leq t}\Delta\overline{X}_{s}(\omega)R_{s-}(\omega)$ a.s., for any fixed $t$. Note that $\overline{X}$ is monotone, and fix a path $\omega$. Write the process $\overline{X}$ as $\overline{X}_{t}(\omega)=\sum_{s\leq t}\Delta\overline{X}_{s}(\omega)+G(t,\omega)$, where $G(\cdot,\omega)$ is nondecreasing and continuous, so that $G(\cdot,\omega)$ defines a diffuse measure on $\mathbb{R}_{+}$. Clearly $\operatorname{supp}G(\omega)$ excludes the points in time $s\in\mathbb{R}_{+}$ at which $\overline{X}_{s}=\overline{X}_{s+h}=\overline{X}_{s-h}$, for some $h>0$. Then we have $\operatorname{supp}G(\omega)\subseteq A\cup B\cup C\cup D$ with:

A = $\{s: s$ is an end or a start point of an excursion$\}$
B = $\{s: \overline{X}_{s-}=\overline{X}_{s-h}<\overline{X}_{s}$, for some $h>0\}$
C = $\{s: \overline{X}_{s-h}=\overline{X}_{s}<\overline{X}_{s+\delta}$, for some $h>0$ and any $\delta>0\}$

It is immediate that $A$ is countable, since the number of excursions is countable. $B$ is also countable since, by its definition, the maximum is attained by a jump. Finally, we see that $C$ is countable by its definition, since it requires a neighbourhood $(s-h,s)$ in which no new maximum is attained. Using the fact that $G$ is diffuse, we get the result.

(d) Given that $\lambda_{J}<\infty$ we have $D(x)/x^{\kappa}\to 0$.