Rate of escape of conditioned Brownian motion

We study the norm of the two-dimensional Brownian motion conditioned to stay outside the unit disk at all times. The conditioning changes the process from barely recurrent to slightly transient. We obtain sharp results on the rate of escape to infinity of the process of future minima: (i) we find an integral test on the function $g$ determining whether the future minima process drops below the barrier $\exp \{ \ln t \times g(\ln \ln t)\}$ at arbitrarily large times; (ii) we show that the future minima process exceeds $K \sqrt{ t \times \ln \ln \ln t}$ at arbitrarily large times with probability 0 [resp., 1] if $K$ is larger [resp., smaller] than some positive constant. For this, we introduce a renewal structure attached to record times and values. Additional results are given for the long-time behavior of the norm.


Introduction
This paper is devoted to the planar Brownian motion conditioned to stay outside the unit ball B(0, 1) at all times. Besides its intrinsic appeal as a fundamental object, this process has attracted keen interest as the elementary brick of the two-dimensional Brownian random interlacement recently introduced in [9]. By rotational symmetry, the norm R of the conditioned Brownian motion itself follows a stochastic differential equation in [1, ∞),

dR(t) = ( 1/(2R(t)) + 1/(R(t) ln R(t)) ) dt + dB(t), (1.1)

with B a standard Brownian motion in R, and we can (and we will) restrict the study of the conditioned process to that of R itself, since the angle obeys a diffusion subordinated to it. The two-dimensional Brownian motion is critically recurrent, but conditioning it outside the unit ball turns it into a (delicately) transient process. A natural question is the rate at which R(t) tends to ∞ as t → ∞; this is the object of the present paper. A measure of the reluctance of R to tend to infinity is given by the future minima process M(t) = inf{R(s); s ≥ t}, which is non-decreasing to ∞ a.s. The corresponding model in the discrete case, the two-dimensional simple random walk conditioned to avoid the origin at all times, has motivated many recent papers. Estimates on the future minimum distance to the origin have been obtained in [22]; we will use them as benchmarks. It is also shown there that two independent conditioned walkers meet infinitely often although they are transient. The range of the walk, i.e. the set of visited sites, is studied in [11]: if a finite A ⊂ Z^2 \ {0} is "big enough and well distributed in space", then the proportion of visited sites is approximately uniformly distributed on [0, 1]. In [20] the explicit formula for the Green function is obtained, and a survey is given in Chapter 4 of [21].
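The dynamics (1.1) and the future minima process M(t) can be illustrated numerically. The following is a minimal sketch, not part of the paper's arguments: an Euler-Maruyama discretization of (1.1), where the step size, horizon, starting point, and the numerical floor above the boundary (the exact process never hits 1) are arbitrary choices, together with the backward running minimum computing M on the discretized path.

```python
import math
import random

def simulate_R(r0=1.5, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama sketch of (1.1): dR = (1/(2R) + 1/(R ln R)) dt + dB."""
    rng = random.Random(seed)
    r = r0
    path = [r]
    for _ in range(int(T / dt)):
        drift = 1.0 / (2.0 * r) + 1.0 / (r * math.log(r))
        r += drift * dt + rng.gauss(0.0, math.sqrt(dt))
        r = max(r, 1.01)  # ad hoc floor: keeps the scheme away from the boundary
        path.append(r)
    return path

def future_minima(path):
    """M(t) = inf{R(s): s >= t}, computed as a backward running minimum."""
    m, out = float("inf"), []
    for r in reversed(path):
        m = min(m, r)
        out.append(m)
    return out[::-1]
```

On a discretized path, `future_minima` returns a non-decreasing sequence lying below the path, mirroring the monotonicity of M noted above.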
For dimensions d ≥ 3, the random interlacement model was introduced in [27] to describe the local picture of the set visited by a random walk at large times on a large d-dimensional torus; similarly, in [28], the Brownian random interlacement was introduced to describe the Wiener sausage around the Brownian motion on a d-dimensional torus. For dimension d = 2, the random interlacement model is the local limit of the set visited by the random walk around a point which has not been visited so far [7], and analogously, the Brownian random interlacement is the local limit of the Wiener sausage on the two-dimensional torus around a point which is outside the sausage [9]. Formally, the two-dimensional Brownian random interlacement is defined as a Poisson process of bi-infinite paths, which are rescaled instances of the so-called "Wiener moustache". The Wiener moustache is obtained by gluing two instances (for positive and negative times, see Figure 1 in [9]) of planar Brownian motion conditioned to stay outside the unit ball, which are independent except that they share the same starting point (see Lemma 3.9 in [9]). Hence, the process we consider in this paper is the building brick of Brownian random interlacement in the plane. We also recall that the complement of the sausage around the interlacement has an interesting phase transition, changing from a.s. unbounded to a.s. bounded as the Poisson intensity is increased; see Th. 2.13 in [9], and [8] for the discrete case.
With a slight abuse of terminology, we say f(t) ≤ g(t) i.o. (infinitely often) if the set {t ≥ 0 : f(t) ≤ g(t)} is unbounded, and f(t) ≤ g(t) ev. (eventually) if the set {t ≥ 0 : f(t) ≤ g(t)} is a neighborhood of ∞ in R_+.
We now give a short overview of some of our results on the rate of escape of R to infinity. They are consequences of the results in section 2.1.
Theorem 1.1. For g : R_+ → R_+ non-increasing such that (ln t) g(ln ln t) is non-decreasing,

P( M(t) ≤ e^{(ln t) g(ln ln t)} i.o. ) = 0 or 1 according as ∫^∞ g(u) du < ∞ or ∫^∞ g(u) du = ∞.

Theorem 1.2. There exists a constant K^* ∈ (0, ∞) such that

P( M(t) ≥ K √(t ln ln ln t) i.o. ) = 0 or 1 according as K > K^* or K < K^*.

Though we do not know the actual value of K^*, we can see that both theorems are much finer than the corresponding Theorem 1.2 of [22]. These two theorems together yield a precise version of the observation from [20] that the pathwise divergence of R to infinity occurs in a highly irregular way. The future minima process has been considered earlier, e.g. in [16] and [17] for Bessel processes and for random walks, and in [19] for positive self-similar Markov processes. Let us recall the similar result for transient Bessel processes. Denote by BES^d the d-dimensional Bessel process, i.e. the solution of the stochastic differential equation

dX(t) = ((d − 1)/(2X(t))) dt + dB(t). (1.2)

An important (and beautiful) finding of our work is a renewal structure, constructed in Section 3, which allows sharp estimates. To illustrate it, let us mention that we will find a sequence of relevant random variables S_n > 0 solving a random difference equation

S_n = α_n S_{n−1} + β_n , n ≥ 1 , (1.4)

where the sequence (α_n, β_n)_n is i.i.d. with positive coefficients, α_n < 1, and β_n with logarithmic tails, P(β_1 > t) ∼ c/ln t for large t. Although autoregressive processes AR(1) of the type (1.4) are usually studied with exponential or power-law tails for β_n, see [5], the case of logarithmic tails has also been considered, see [15], [31], [3], and both papers [1] and [32] for a recent account. Interestingly, our model is critical from the perspective of the Markov chain S_n, in the sense that the actual value of the constant c marks precisely the transition from recurrence to transience for the chain.
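As a hedged illustration of a recursion of type (1.4), one can simulate an AR(1) chain whose innovations have the logarithmic tail P(β > t) = c/ln t. The constant contraction factor and the parameters below are illustrative choices, not the (α_n, β_n) derived from the renewal structure of Section 3; the tail is capped numerically to avoid floating-point overflow.

```python
import math
import random

def sample_beta(rng, c=1.0):
    """Innovation with P(beta > t) = c/ln t for t >= e^c, by inversion:
    beta = exp(c/U) with U uniform on (0,1); capped at exp(50) to avoid
    float overflow (this truncates only the extreme tail)."""
    return math.exp(min(c / max(rng.random(), 1e-300), 50.0))

def run_chain(n, alpha=0.5, c=1.0, seed=1):
    """Illustrative AR(1) chain S_n = alpha * S_{n-1} + beta_n with
    a constant contraction factor alpha < 1 and log-tailed innovations."""
    rng = random.Random(seed)
    s, out = 0.0, []
    for _ in range(n):
        s = alpha * s + sample_beta(rng, c)
        out.append(s)
    return out
```

Running the chain shows the behavior described above: long stretches of contraction interrupted by occasional very large innovations.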
The paper is organized as follows. We give the main results in the next section. The regeneration structure is defined in Section 3, together with the basic estimates, ending with Remark 3.8 on the above random difference equation. In Section 4 we prove some results showing that R behaves at large times somewhat like the two-dimensional Bessel process. In Sections 5 and 6 we prove the two theorems above.

Main results
We first collect a few properties of the involved processes.
We start with some notation. Consider W a two-dimensional standard Brownian motion and denote by P_x the law of W starting at x. Let \hat{W} be the Brownian motion conditioned to stay outside the unit ball, denote by \hat{P}_x its law when starting at x, and let R = |\hat{W}| be its Euclidean norm, with \hat{P}_r the corresponding law (r = |x|). In this paper we are mainly interested in \hat{P} = \hat{P}_1. The construction of the process starting from R(0) > 1 is standard from taboo process theory, and the one starting from R(0) = 1 is given in Definition 2.2 of [9].
Denote by |·| the Euclidean norm and by B(x, r) the closed ball with center x and radius r > 0. For a closed subset B of the state space of a process Y, we denote the entrance time τ(Y; B) = inf{t ≥ 0 : Y(t) ∈ B}, and write for short τ(Y; r) = τ(Y; ∂B(0, r)), and also τ(r) = τ(R; r) when Y = R. The function h(x) = ln|x| is harmonic in R^2 \ {0}, positive on R^2 \ B(0, 1), and vanishes on the unit circle. Then, the law \hat{P}_x of the planar Brownian motion conditioned outside B(0, 1) is given by Doob's h-transform of P_x; recall that P_x(τ(W; r_1) < τ(W; 1)) = ln|x| / ln r_1, since ln|x| is harmonic in R^2 \ {0}. Another remarkable property is Remark 3.8 in [9], valid for all x ∉ B(0, 1) and ρ > 0. The scale function for the process R (that is, the unique, up to affine transformations, real function S such that S(R(t)) is a local martingale) is S(r) = −1/ln r. Then, for 1 < a < r < b,

\hat{P}_r( τ(b) < τ(a) ) = (1/ln a − 1/ln r) / (1/ln a − 1/ln b). (2.2)

We refer to Section 2.1 in [9] for more details on the many interesting properties of \hat{W} and R.
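The identity P_x(τ(W; r_1) < τ(W; 1)) = ln|x| / ln r_1 is easy to check by simulation. The following Monte Carlo sketch (with arbitrary step size and sample size, and a crude discretization of the exit event) estimates the probability that planar Brownian motion started at distance 2 exits the annulus {1 < |z| < 4} through the outer circle, to be compared with ln 2 / ln 4 = 1/2.

```python
import math
import random

def exits_outer_first(x0, r_out, dt, rng):
    """One planar BM path from (x0, 0); True if |W| reaches r_out before 1."""
    x, y = x0, 0.0
    s = math.sqrt(dt)
    while True:
        x += rng.gauss(0.0, s)
        y += rng.gauss(0.0, s)
        r = math.hypot(x, y)
        if r >= r_out:
            return True
        if r <= 1.0:
            return False

def estimate_exit_prob(n=400, x0=2.0, r_out=4.0, dt=5e-3, seed=3):
    """Monte Carlo estimate of P_x(tau(W; r_out) < tau(W; 1));
    the theoretical value is ln(x0)/ln(r_out)."""
    rng = random.Random(seed)
    hits = sum(exits_outer_first(x0, r_out, dt, rng) for _ in range(n))
    return hits / n
```

The discretization overshoots the circles slightly, so the estimate carries a small bias on top of the Monte Carlo error.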
Theorem 2.1. For g : R_+ → R_+ non-increasing such that (ln t) g(ln_2 t) is non-decreasing, we have:

∫^∞ g(u) du < ∞ ⟹ a.s., M(t) ≥ e^{(ln t) g(ln_2 t)} eventually, (2.3)

and

∫^∞ g(u) du = ∞ ⟹ a.s., M(t) ≤ e^{(ln t) g(ln_2 t)} infinitely often. (2.4)

(Note that the second assumption is quite natural in view of the monotonicity of M(t).) Theorem 1.1 is a direct consequence of the above theorem. This result with an integral condition is reminiscent of Kolmogorov's test (see, e.g., Sect. 4.12 in [14]), but the process M here is not Markov.
These estimates are stronger than the corresponding ones in Th. 1.2 of [22]. So are the following ones:

Theorem 2.2. There exist 0 < K_− < K_+ < ∞ such that, almost surely,

M(t) ≤ K_+ √(t ln_3 t) eventually, (2.5)

and

M(t) ≥ K_− √(t ln_3 t) infinitely often. (2.6)

Theorem 1.2 is essentially a reformulation of Theorem 2.2; it will be proved below Remark 6.2.
We recall the similar result (1.3) for transient Bessel processes: a.s., for all a < √2 < b, the future minima process min{BES^d(s); s ≥ t} is eventually smaller than b √(t ln_2 t) and infinitely often larger than a √(t ln_2 t). Finally we mention that, for d > 2,

min{BES^d(s); s ≥ t} ≤ ε √(t ln_2 t) i.o., a.s. for all ε > 0.

(See [16], p. 349.)

Long time behavior of R(t)
At large times the process R behaves like BES 2 . We emphasize that this is for the marginal law, but not for the future minimum. We formulate here precise statements of these facts.
It is well known that the random variable t^{−1/2} BES^2(t) converges in law to the Rayleigh distribution. We will prove Theorems 2.3 and 2.4 in Section 4.

Regenerative structure
We fix a parameter r > 1. We construct a regenerative structure associated with the process R starting from R(0) = 1.

Renewal times
We define a random sequence (H_n, A_n, T_n)_{n≥0} by H_0 = T_0 = 0, A_0 = 1, and then inductively. Since R is a continuous function with lim_{t→∞} R(t) = ∞ a.s., we see by induction that T_n < ∞ a.s., with T_n < T_{n+1} and lim_{n→∞} T_n = ∞ a.s. The T_n are not stopping times, but they are called renewal times for the following reasons.
has the same law as R and is independent of G_1.
This proposition is the building brick of the renewal structure.

Theorem 3.2. The sequence of cycles is independent and identically distributed, with the law of (R(t); t ∈ [0, T_1]).
is i.i.d. and distributed as (T_1, A_1). Therefore (T_n, A_n) can be written in terms of i.i.d. random variables, a fact which will be used repeatedly throughout.
Proof of Proposition 3.1. Recall that P_r denotes the law of the process R with R(0) = r.
Observe that H_1 is a stopping time, and denote by F_{H_1} the σ-field of events that occur before time H_1. By the strong Markov property, under P_1, (R(H_1 + t))_{t≥0} is independent of F_{H_1} and has the law P_r.
Moreover, by Theorem 2.4 in [30] (see also the proof of Lemma 3.9 in [9]), the conditioned piece of the path has the same law as R starting from a and conditioned on R(t) ≥ a for all t ≥ 0. By Brownian scaling, the latter law is equal to that of a R(·/a^2) under P_1; see also Remark 2.5 in [9].
up to null events, we obtain the desired statement.
Proof of Theorem 3.2. By induction, Proposition 3.1 implies the statement for all n. As a direct consequence, we obtain a simple representation of crucial times and values of the process.

Corollary 3.3. Define
Then, (\hat{A}_n, \hat{T}_n)_{n≥1} is an i.i.d. sequence with the same law as (A_1, T_1), and we have the representation (3.1).

Description of a cycle
Recall that r > 1 is fixed. We shorten the notation: (H, A, T) = (H_1, A_1, T_1). Recall that R starts from R(0) = 1, hits r at time H for the first time, and reaches its future minimum A ∈ (1, r) at time T. We also introduce its maximum B > r on the time interval [H, T], as well as their logarithms U, V; see Figure 1. It was shown in [9] that U is uniform on [0, 1] (see (2.2) with b → ∞), but we can even compute the joint law of U and V. For 1 < a − h < a < r < b, we use the strong Markov property [9] and the fact that, for R started at b, min{R(t); t ≥ 0} has density a ↦ (a ln b)^{−1} on (1, b). Hence (A, B) has a density given by the negative of the b-derivative of the dominant term as h ↓ 0. By changing variables, it follows that (U, V) has an explicit density; we recover that U is uniform on (0, 1), and we obtain the density of V and its tail for v ≥ 1. We also need information on the cycle length T. For any s ≥ 1 we consider the hitting time by R, starting at s, of its absolute minimum, and denote by µ_s a random variable with the same law. Recall that, under P, R(0) = 1.

Proposition 3.4. (ii) For u ∈ (0, 1), the conditional law of T given U ≥ u is equal to the law of an independent sum H + r^{2u} µ_{r^{1−u}}.

Rate of escape of conditioned Brownian motion
Proof. Part (i) follows directly from the strong Markov property for the Markov process R and the stopping time H.
For (ii), we recall Remark 2.5 in [9]: for c > 1, denoting by R_c the diffusion R conditioned to stay outside (1, c] and started at r ≥ c, we have that R_c is equal in law to c R(·/c^2) with R started at r/c. (Alternatively, this follows from R being the norm of a conditioned Brownian motion (2.1) and from Brownian scaling.) Hence, again from the strong Markov property, we obtain the result.

Tail estimates for T
We need some estimates on the upper and lower tails of T, which we derive in this section. But first we state elementary comparisons of R with Bessel processes, see (1.2), that will be used throughout the paper.
(ii) For δ > 0 there exists a coupling of the processes R and BES^{2+δ} starting at 1 such that, with σ = sup{t ≥ 0; R(t) ≤ e^{2/δ}}, the comparison holds after time σ.

Proof. It is well known [6] that the stochastic differential equation (1.2) has a strong solution, so we can couple the processes R and BES^2, BES^{2+δ} by driving equations (1.1) and (1.2) with the same Brownian motion B. Then, with x_+ = max{x, 0} for real x, the comparison holds for all t > 0 and every realization of B. Integrating over t ∈ [σ, σ + s], we obtain (ii).
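The coupling in the proof can be illustrated by driving Euler schemes for (1.1) and (1.2) with the same Gaussian increments: with identical noise, the larger drift of R (at arguments above 1) keeps it above BES^2 pathwise. This is only a numerical sketch; the step size, horizon, and floors are ad hoc safeguards, not part of the argument.

```python
import math
import random

def coupled_paths(r0=2.0, T=5.0, dt=1e-3, delta=1.0, seed=7):
    """Drive R (drift 1/(2x) + 1/(x ln x)), BES^2 (drift 1/(2x)) and
    BES^(2+delta) (drift (1+delta)/(2x)) with the same Brownian increments,
    and record whether the pathwise orderings R >= BES^2 and
    BES^(2+delta) >= BES^2 hold along the discretized paths."""
    rng = random.Random(seed)
    r = b2 = bd = r0
    s = math.sqrt(dt)
    ordered = True
    for _ in range(int(T / dt)):
        dB = rng.gauss(0.0, s)
        r += (1.0 / (2 * r) + 1.0 / (r * math.log(r))) * dt + dB
        r = max(r, 1.01)    # numerical floor: the exact process never hits 1
        b2 += (1.0 / (2 * b2)) * dt + dB
        b2 = max(b2, 1e-2)  # keep the Euler scheme away from 0
        bd += ((1 + delta) / (2 * bd)) * dt + dB
        bd = max(bd, 1e-2)
        ordered = ordered and (r >= b2) and (bd >= b2)
    return r, b2, bd, ordered
```

Whenever two coupled paths meet, the larger drift pushes the dominating one back up, which is the heuristic behind the comparison principle used here.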
We are now ready to deal with the upper tail of T.

Proposition 3.6. As t → ∞, the asymptotics (3.3) holds. More precisely, there exist constants t_0 and C such that (3.4) holds for all t ≥ t_0.

Proof. We first obtain two preliminary estimates.
Upper bound: for 0 < ε < 1 and t ≥ t_1, with t_1 > 0 not depending on ε ∈ (0, 1). Indeed, the first term is obtained directly. In order to obtain the second one, we first bound R(·) ≥ BES^2(·), with BES^2 started at 0, using Proposition 3.5, and finally use that there exist positive constants C_0, C_1 such that the Gaussian bound holds; see e.g. Exercise 1, p. 106 in [26].
Lower bound: for 0 < ε < 1/2 and t ≥ t_2, with t_2 > 0 not depending on ε ∈ (0, 1/2). In (3.6) we have used (3.2) for the first term, and we give details for the second one: for |x| = r > 1, by (2.1) we get, for all t > 1 and some constants C_2, C_3 > 0, the bound given by the moderate deviation principle for Brownian motion.
For both the upper and lower bounds, we now choose ε depending on t, with a constant C_4. Provided the constant C_4 is large enough, the error terms are dominated by (ln t)^{−2}. We then get (3.4) from (3.5) and (3.6), taking any C larger than C_4 + 4 ln r. Finally, (3.3) is a direct consequence of (3.4). The proof is complete.
We also need to control the lower tail of T.

Proposition 3.7. (ii) For all ε > 0, there exists t_1 > 0 such that the bound holds for t ≤ t_1 and all u ∈ [0, 1).

Proof. (i) Setting a = 1 + ε/2 ∈ (1, r) and using the strong Markov property at the hitting time of a by R, we obtain an upper bound. Recalling large deviation results for Brownian motion in small time, e.g. Section 6.8 of Ch. 5 in [2], we see that this upper bound implies (i).
(ii) Let t ≤ 1. By Proposition 3.4-(ii), and by comparing R and BES^2 via Proposition 3.5-(i), we obtain (3.10) with θ = t/(2 r^{2u}). We estimate the first term using again large deviations for Brownian motion in small time [2]: for |x| < r, (3.11) holds. To estimate the second term in (3.10), note that R(θ) ≥ r^{1−u} + √θ together with R(s) ≥ r^{1−u} for all s ≥ θ implies that, P_{r^{1−u}}-a.s., R achieves its minimum before time θ. Hence we conclude by the Markov property and (2.2), arguing on the second line that R dominates Brownian motion by comparing drifts.

Tail estimate for U
Recall Hoeffding's inequality [13], or Th. 2.8 in [4]: for b < 1, c > 1 and i ≥ 1, the bounds (3.12) and (3.13) hold.

Remark 3.8 (The random difference equation (1.4)). Introduce the sequence S_n = T_n A_n^{−2}, which is key in Section 6. In view of (3.1), we see that it solves the recursion (1.4). The two-dimensional sequence (α_n, β_n), n ≥ 1, is i.i.d., and the sequence (S_n) falls into the usual setup of random difference equations. In our case, the relevant quantities exist and satisfy a < 0 (contractive case) and 0 < b < ∞ (very heavy tail). Following [1] and [32], this prevents the Markov chain S_n from being positive recurrent: though the contraction brings stability to the process, occasional large values of β_n overcompensate this behavior, so that positive recurrence fails to hold. In our case, we can easily check this from the preceding estimates.
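Since the record increments U_i are i.i.d. uniform on (0, 1), Hoeffding bounds of the kind invoked here are easy to sanity-check numerically. The sketch below (with arbitrary parameters) compares the empirical lower-tail probability of a sum of uniforms with the bound P(U_1 + ... + U_n ≤ b n/2) ≤ exp(−(1 − b)^2 n/2), which is Hoeffding's inequality for [0, 1]-valued variables with mean 1/2.

```python
import math
import random

def hoeffding_check(n=50, b=0.7, trials=20000, seed=11):
    """Empirical P(U_1 + ... + U_n <= b*n/2) for iid Uniform(0,1)
    versus the Hoeffding bound exp(-(1-b)^2 * n / 2)."""
    rng = random.Random(seed)
    thr = b * n / 2.0
    hits = 0
    for _ in range(trials):
        if sum(rng.random() for _ in range(n)) <= thr:
            hits += 1
    emp = hits / trials
    bound = math.exp(-(1 - b) ** 2 * n / 2.0)
    return emp, bound
```

The empirical frequency sits well below the exponential bound, as expected for a sub-Gaussian tail.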

Proofs for Section 2.2
We consider the process R from (1.1) on a geometric time scale: we observe that

β(t) = ∫_0^{e^t − 1} (1 + s)^{−1/2} dB(s)

is a standard Brownian motion by Lévy's characterization. We claim that X(t) = e^{−t/2} R(e^t − 1) solves the stochastic differential equation

dX(t) = ( 1/(2X(t)) − X(t)/2 + 1/( X(t) (ln X(t) + t/2) ) ) dt + dβ(t). (4.2)

Indeed, this follows from (1.1) by the change of variables s = e^t − 1, and we easily check the required equality in the Gaussian space generated by B. Denote by b_t, resp. b_∞, the drift coefficient in (4.2) and its limit, given for x ∈ (0, ∞) by

b_t(x) = 1/(2x) − x/2 + 1/( x (ln x + t/2) ), b_∞(x) = 1/(2x) − x/2,

and by X^{(∞)} the homogeneous diffusion with drift b_∞. Following the approach of Takeyama [29], we state the following

Lemma 4.1. The diffusion X(t) = e^{−t/2} R(e^t − 1) is asymptotically homogeneous with homogeneous limit X^{(∞)}, i.e., for all continuous f with compact support in (0, ∞) and all t > 0, E[ f(X(t + s)) | X(s) = x ] → E_x[ f(X^{(∞)}(t)) ] as s → ∞, uniformly on compact subsets of (0, ∞).
Proof. It is easier to consider \tilde{X}(t) = X(t) − e^{−t/2}, which takes values in the fixed interval (0, ∞), and \tilde{X}^{(s)}(t) = \tilde{X}(s + t). Then the coefficients of the diffusion \tilde{X}^{(s)} converge to those of X^{(∞)}, uniformly on compact subsets of (0, ∞), and the corresponding martingale problems have a unique solution. Thus, Theorem 11.1.4 in [25] yields the desired result.
The process X^{(∞)} is the transform X^{(∞)}(t) = X^{(∞,2)}(t) = e^{−t/2} BES^2(e^t − 1) of BES^2 by the rescaling and deterministic time change (4.1). It is recurrent and ergodic on (0, ∞), with the Rayleigh law ν(dx) = x e^{−x^2/2} dx as invariant probability measure. A first consequence is that R marginally behaves like BES^2.
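For BES^2 started at 0 the Rayleigh law can be seen directly: BES^2(t) is the norm of a planar Gaussian vector, so t^{−1/2} BES^2(t) is exactly Rayleigh distributed, with density x e^{−x^2/2} and mean √(π/2). A quick numerical check (sample size and seed arbitrary):

```python
import math
import random

def rayleigh_samples(t=100.0, n=5000, seed=5):
    """t^{-1/2}|W(t)| for planar BM from 0 is exactly Rayleigh:
    density x exp(-x^2/2), mean sqrt(pi/2) ~ 1.2533."""
    rng = random.Random(seed)
    s = math.sqrt(t)
    return [math.hypot(rng.gauss(0.0, s), rng.gauss(0.0, s)) / s
            for _ in range(n)]
```

The empirical mean of such a sample is close to √(π/2), in line with the invariant law ν above.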

Corollary 4.2 (Convergence in law). Let Z ∼ ν. As t → ∞, t^{−1/2} R(t) converges in law to Z.

Proof. Denote by P_{s,t}, P^{(∞)}_{s,t} (0 ≤ s ≤ t) the Markov semigroups associated to X and X^{(∞)}. Both error terms vanish as s, t → ∞, which is our claim. Indeed, by convergence of X^{(∞)} to equilibrium, P^{(∞)}_{0,t} f − ∫ f dν → 0 uniformly on compact subsets of (0, ∞) as t → ∞, and Lemma 4.1 implies that P_{s,s+t} f − P^{(∞)}_{s,s+t} f → 0 uniformly on compacts as s → ∞; thus, we only need to prove tightness, i.e. that the family is tight for all x ≥ 1. But this follows from the next two bounds. First recall from [9] that 1/ln R is a local martingale. Since it is positive, by Fatou's lemma it is also a supermartingale when started at r > 1, and thus E_r[1/ln R(t)] ≤ 1/ln r. Thus, for all r > 1, E_r[R(t)^2] ≤ r^2 + 2t (1 + 1/ln r).

by (4.3). Finally, we obtain the result.
Proof. It is easy to check that, w.l.o.g., we can assume that f : (0, ∞) → R is non-decreasing. By the comparison principles of Proposition 3.5, we can couple the processes R, BES^2, BES^{2+δ} (δ > 0) starting at 1 such that, a.s., the ordering holds for all t ≥ ln(1 + σ). By the pointwise ergodic theorem for X^{(∞,2)} and X^{(∞,2+δ)} and the monotonicity of f, we obtain a two-sided bound. As δ vanishes, the two extreme terms coincide, ending the proof of the first statement. The second one follows by a change of variables.

Proof of Theorem 2.1
Recall the representation (3.1) from Corollary 3.3, with (\hat{T}_k, \hat{A}_k)_{k≥1} an i.i.d. sequence with the same law as (T_1, A_1). Fix r_± with 1 < r_− < r < r_+ < ∞. By (3.12) and (3.13), with probability one there exists some finite random k_0 such that the corresponding bounds hold for all k ≥ k_0. In what follows we will use the rough bounds.

Lemma 5.1. There exists a constant c such that, for every sequence (δ(k))_k tending to 0, the stated estimate holds.

Proof. Fix a with 1 < a < e. Letting v_k = a^k δ(k) and t_k = k r^k + v_k, we note that e^k δ(k) ≥ t_k eventually since δ vanishes, and we have, by independence and since v_k → ∞ as k → ∞, for all large enough k, the bound with c = 2 c_1 / ln a, since δ vanishes at ∞. This ends the proof.
First, since g is non-increasing, and in addition to (5.1), we have the required estimate for large k ∈ K, again since g is non-increasing. By integrability, g vanishes at infinity, so the function f satisfies f(t) ≤ ln t eventually, and also g(ln_2 t) ≤ g(ln f(t)) by monotonicity. Thus, the bound holds for large k and t. Now, define random integers k(t) = max{k ∈ K; T_k ≤ t}, and note from (5.4) that a.s., for large t, we have k(t) ≥ max{k ∈ K; e^{c_2 k δ(k)} ≤ t}. Then, a.s., for all large enough t, the estimate follows. Taking c_3 = c_2^{−1} > 4/ln r_−, we conclude that a.s., M(t) ≥ e^{(ln t) g(ln_2 t)} eventually, ending the proof of (2.3).
We now turn to the proof of claim (2.4) of Theorem 2.1. We start with a lemma.

Lemma 5.2. Let (n_k)_{k≥0} be a non-decreasing sequence of integers and (t_k)_{k≥0} be a sequence with t_k > 1. Then,

Σ_{k≥0} (n_{k+1} − n_k)/ln t_{k+1} = ∞ ⟹ a.s., T_{n_k} ≥ t_k infinitely often.
Proof. The events E_k = {max_{i=n_k+1,...,n_{k+1}} \hat{T}_i ≥ t_{k+1}}, k ≥ 0, are independent, with E_k ⊂ {T_{n_{k+1}} ≥ t_{k+1}}. Hence the conclusion holds as soon as these events occur infinitely often a.s. By the second Borel-Cantelli lemma, it suffices to show that the assumption implies Σ_{k≥0} P(E_k) = ∞. We use Proposition 3.6 and independence. The case when t_k does not tend to infinity is easily handled, so we assume from now on that k is large enough so that P(T ≥ t_{k+1}) ≥ c/ln t_{k+1} for some fixed constant c ∈ (0, ln r).
Then we can bound P(E_k) from below by the general term of a divergent series.
Proof of Theorem 2.1, claim (2.4). Let us consider t_k = e^{e^k}, n_k = ⌈f(t_k)⌉, f(t) = c_3 (ln t) g(ln_2 t), with c_3 > 0 to be fixed later. Note that f is non-decreasing by assumption. We have, with a constant c_4 which is finite since t_k increases fast and the truncation error is bounded. As in (5.3), Σ_{k≥0} g(k) ≥ ∫_0^∞ g(t) dt = ∞, and

Σ_{k=0}^n ( g(k + 1) − (1/e) g(k) ) = g(n + 1) − (1/e) g(0) + (1 − 1/e) Σ_{k=1}^n g(k).
Therefore Σ_{k≥0} (n_{k+1} − n_k)/ln t_{k+1} = ∞. From Lemma 5.2 we obtain that a.s., T_{n_k} ≥ t_k i.o. Taking c_3 < 1/ln r_+, we obtain the desired claim.

Proof of Theorem 2.2
We study the sequence S_n = T_n A_n^{−2}, which can be written in the form

S_m = S_n r^{−2(U_{n+1}+···+U_m)} + S^m_{n+1}, for 1 ≤ n < m, (6.1)

where S^m_{n+1} is defined analogously from the cycles n + 1, ..., m. The point is that, in (6.1), S_n and S^m_{n+1} are independent, with S^m_{n+1} equal to S_{m−n} in law. We study the convergence/divergence of the series Σ_{n≥1} P[S_n ≤ t_n], with t_n of the form

t_n = (β/ln_2 n) ∧ 1 (6.2)

for some β > 0.

Proof of (2.5)
Let (i^{(n)})_{n≥1} be a sequence of integers such that 1 ≤ i^{(n)} ≤ n, and let (c^{(n)}_i)_{i=i^{(n)}+1,...,n, n≥1} be a doubly-indexed sequence of real parameters with c^{(n)}_i > 1, to be fixed later on.

Upper bound: iterating the estimate down to i^{(n)} + 1, we obtain a product bound. For i^{(n)} + 1 ≤ i ≤ n and large n, the error terms ε_{n,i,2} satisfy sup_n Σ_{i=i^{(n)}+1}^n ε_{n,i,2} < ∞, so for some positive constant D, for n large and i^{(n)} ≤ i ≤ n, the bound holds. Combining this with (3.12), we get a further bound for n large and i^{(n)} + 1 ≤ i ≤ n. Thus, the series Σ_n a_n is convergent.

Choice of t n
To conclude, we need to take care of the first term on the right-hand side of (6.3). Recall t_n from (6.2) (we will assume n large, so that ln_2 n ≥ β), and fix an integer i_1 ≥ 1 and a parameter in (0, r − 1). For 1 ≤ i ≤ i_1, applying (3.7) we get an estimate as n → ∞, and then a bound for n large. Using (6.5) we will bound the remaining term, where i^{(n)} = ⌈ln_2 n⌉. As soon as β < (r − 1)/(2(r + 1)), there exist some integer i_1 and some choice of the parameters such that, combining (6.3) with Σ_n a_n < ∞, we obtain Σ_n P(S_n ≤ t_n) < ∞, i.e.,

Σ_{n≥1} P[T_n ≤ A_n^2 t_n] < ∞.

Proof of (2.6)
We start by proving that it suffices to show divergence of the series introduced above (6.2).

Proof. For all β < β_0, we have Σ_n P(S_n ≤ β/ln_2 n) < ∞, and the first Borel-Cantelli lemma shows that lim inf_n S_n ln_2 n ≥ β_0. To prove the reverse inequality we proceed in steps.

• First step: For any non-increasing sequence (t_n)_n,

Σ_{n≥1} P[S_n ≤ t_n] = ∞ ⟹ P(S_n ≤ t_n i.o.) ≥ 1/4.
Indeed, for 1 ≤ n ≤ m, a second-moment bound holds. For all k large enough we have Σ_{n=1}^k P[S_n ≤ t_n] ≥ 2, and then, for all 1 ≤ n ≤ k, the Kochen-Stone theorem [18] (a variant of the Borel-Cantelli lemma) yields the claim, which concludes this step.
• Second step: Let us introduce the σ-fields generated by the renewal sequence. By Kolmogorov's 0-1 law and independence of the sequence ((\hat{A}_n, \hat{T}_n); n ≥ 1), every element A of the tail field T has P(A) ∈ {0, 1}. Fix β ≥ 0 and introduce the events E_k, k ≥ 0, and

Ω_0 = { lim_{n→∞} (ln_2 n) r^{−2(U_1+...+U_n)} = 0 }.

Note that E = E_0 and that P(Ω_0) = 1. Since, by definition, the two sets E_k and E_{k+1} coincide on Ω_0 for all k ≥ 0, denoting the common intersection with Ω_0 by \tilde{E}, we see that \tilde{E} belongs to T and hence has probability equal to 0 or 1. The same 0-1 law holds for E, which is equal to \tilde{E} up to a negligible set.
• Final step: For any β > β_0, the series Σ_n P(S_n ≤ t_n) with t_n = β/ln_2 n diverges. By the first step, P[S_n ≤ t_n i.o.] ≥ 1/4, and by the second step this probability equals 1. Thus lim inf_n S_n ln_2 n ≤ β a.s., for all such β. The lemma is proved.

Remark 6.2. We have followed the approach via the renewal structure to get the 0-1 law, with the advantage of keeping the paper self-contained. A tempting alternative would be to show that the tail σ-field of R is trivial; we mention the illuminating survey [23] on the tail σ-field of a diffusion.
Anticipating the proof of (2.6), we now give a short proof of Theorem 1.2.
Proof. It is not difficult to check the criteria of [10] or [24] for triviality of the tail σ-field of a one-dimensional diffusion (see Theorem 3 in [23]). Then, K^* = lim sup_{t→∞} M(t)/√(t ln_3 t) is a.s. constant, and results (2.5) and (2.6) show that K^* is positive and finite.
To continue the proof of (2.6) we need an intermediate result.
Proof. Clearly, it suffices to prove that for v > 0 there exists u > 0 such that, for all large n, we have

P[S_n ≤ u/n] ≥ e^{−vn}. (6.6)

Indeed, substituting v, n in (6.6) by α_0^{−1}, α_0 ln_2 n shows that any β > u/α_0 fulfills the statement of the lemma.
To show (6.6), we fix some b ∈ (0, 1) (b will be chosen small later on) and note a decomposition of the event. By Proposition 3.7, we can find t_0 > 0 and ρ > 0 such that the lower-tail bound holds for t ≤ t_0. Now, we fix some t_1 > t_0 and bound the factors in (6.7) as follows, for ln( t_1 n/(u(r^b − 1)) ) / (b ln r) ≤ i ≤ n. With this choice, the estimate (6.7) becomes a product bounded below by the remaining factors times exp( −ρ n u (r^b − 1)/2 ).
From this we derive the claim (6.6) by taking b small, u and t 1 large. This ends the proof of the lemma.
We iterate the procedure, bounding P[S_{n−1} ≤ t_n − s^{(n)}_n] from below and using that P[T ≤ s] ≤ D′/i_n, for some positive constant D′.
As we did for the series Σ_n a_n (cf. below (6.5)), except for using (3.13) instead of (3.12), we easily see that the series Σ_n a′_n, with a′_n defined analogously, converges. Observe that, by taking α_0 > 16, we have b^{(n)}_i ∈ (1/2, 1) for all large n and i ∈ [i^{(n)} + 1, n], and also that, for large n, the quantity in (6.9) tends to ∞ as n → ∞. For i^{(n)} + 1 ≤ i ≤ n and n large, in view of (6.9) we have the corresponding estimate (using −ln(1 − u) ≤ u + u^2 for small u > 0, and 1/(1 − u) ≤ 1 + 2u for 0 < u < 1/2). One can check that sup_n Σ_{i=i^{(n)}+1}^n ε_{n,i,4} < ∞, so for some positive constant D″, for large n,