Limit Theorem for sub-ballistic random walks in Dirichlet environment in dimension $d\geq 3$

We consider random walks in Dirichlet environment. It was known that in dimension $d\geq 3$, if the walk is sub-ballistic, its displacement is of polynomial order $n^{\kappa}$ for some explicit $\kappa$. We show that the walk, after renormalization, actually converges to a $\kappa$-stable completely asymmetric Lévy process.

1 Introduction and results

Introduction
Random walks in random environments (RWRE) have been studied for several decades and are now rather well understood in the one-dimensional case (see Solomon [28], Kesten, Kozlov, Spitzer [17] and Sinaï [26]). Important progress has been made in higher dimensions, mainly in three directions: under a ballisticity condition, for small perturbations of the simple random walk ([9], [32], [6], [22], [18]), and in Dirichlet environment. The most studied ballisticity conditions come from the conditions (T) and (T′) introduced by Sznitman in [31], [29]. They have been shown to be equivalent in [20], and also to be equivalent to an effective polynomial condition [3], [10]. Assuming any of these, in the ballistic regime, directional transience, ballisticity and a CLT have been proved. Quenched CLTs have also been proved in various cases, either by assuming an annealed CLT, uniform ellipticity and a condition introduced by Kalikow [30], or by assuming the existence of high enough moments for the renewal times (see [33] for a definition of the renewal times) and uniform ellipticity of the environment ([21] and [4] in dimension d ≥ 4). All these results are limit theorems in the ballistic case, that is to say when the walk has a positive speed. In dimension 2 and higher, no complete limit theorems are known for the RWRE in the sub-ballistic case. However, in dimension 1 we know that a sub-ballistic regime exists, where the walk can behave like the inverse of a stable subordinator [17], [13]. This sub-ballistic regime is caused by the existence of traps where the walk spends most of its time. This trapping phenomenon appears in other models closely related to the RWRE, for instance the Bouchaud trap model (see [1] for a precise definition and an overview of the results). The model of random walks in random conductances also exhibits a similar trapping phenomenon.
Indeed, an annealed limit theorem (the limit is the inverse of a stable subordinator) and an analogue of the CLT [16] have been proved for the biased random walk in random conductances. Similar results have been obtained for the biased walk on the percolation cluster and on Galton-Watson trees, but in both cases there is no convergence to a limit law [2], [14]. In the special case of iid RWRE, a trapping phenomenon that leads to sub-ballistic behaviour has been identified in [7], [8] and [15], but no limit theorem had been proved. The random walk in Dirichlet environment (RWDE) is a model where the transition probabilities are iid Dirichlet random variables (see [25] for an overview). It was first introduced because of its link to the linearly directed-edge reinforced random walk ([19], [12]). It also has a property of invariance by time reversal that allows explicit computations (see [23]). In particular, it gives a simple criterion for the existence of an absolutely continuous invariant distribution from the point of view of the particle, for directional transience and for ballisticity in dimension d ≥ 3 ([34], [7], [35], [24]). In the non-ballistic case the walk is directionally transient but the limit law was still unknown ([7]); it was only known that, for some explicit κ ∈ (0, 1], $\frac{\log(|X_n|)}{\log(n)} \to \kappa$. In this paper we give the annealed limit law for the sub-ballistic regime (κ ≤ 1) in dimension d ≥ 3. In the case κ = 1 we obtain the limit law of $\frac{\log(n)}{n} Y_n$ (where Y is the random walk), while for κ < 1 we obtain the limit law of the process. To the best of our knowledge, this is the first stable limit theorem for non-reversible RWRE in iid environment in dimension d ≥ 2.

Definitions and statement of the results
In all the paper we set d ≥ 3. Let $(e_1, \dots, e_d)$ be the canonical basis of $\mathbb{Z}^d$ and, for any $j \in [\![d+1, 2d]\!]$, set $e_j = -e_{j-d}$. For any $z \in \mathbb{Z}^d$, let $\|z\| := \sum_{i=1}^d |z_i|$ be the $L^1$-norm of z. For any $x, y \in \mathbb{Z}^d$ we will write $x \sim y$ if $\|y - x\| = 1$. Let $E = \{(x, y) \in (\mathbb{Z}^d)^2, x \sim y\}$ be the set of directed edges of $\mathbb{Z}^d$ and let $\tilde{E} = \{\{x, y\}, (x, y) \in (\mathbb{Z}^d)^2, x \sim y\}$ be the set of non-directed edges.
Let Ω be the set of environments on $\mathbb{Z}^d$: $\Omega = \{\omega \in (0,1]^{E},\ \forall x \in \mathbb{Z}^d,\ \sum_{i=1}^{2d} \omega(x, x + e_i) = 1\}$.
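To fix ideas, a Dirichlet environment can be sketched numerically: at each site, the 2d exit probabilities are drawn from a Dirichlet distribution of parameter α, independently across sites. The following Python sketch is an illustration, not taken from the paper; the weight vector `alpha` below is an arbitrary choice.

```python
import random

random.seed(0)

def dirichlet(alpha):
    """Sample a Dirichlet vector by normalizing independent Gamma variables."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def sample_environment(sites, alpha):
    """iid Dirichlet transition probabilities at each site of Z^d.

    `sites` is an iterable of lattice points (tuples); the returned dict maps
    a site x to the list (omega(x, x+e_1), ..., omega(x, x+e_{2d})).
    """
    return {x: dirichlet(alpha) for x in sites}

# d = 3, so 2d = 6 exit probabilities per site; alpha is purely illustrative.
alpha = (1.0, 0.5, 0.5, 0.5, 0.5, 0.5)
sites = [(i, j, k) for i in range(2) for j in range(2) for k in range(2)]
env = sample_environment(sites, alpha)
```

Each sampled row sums to 1, so `env` is a valid restriction of an element of Ω to the sampled sites.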
Definition 3. We define the two parameters κ and κ′ by: and For any direction $j \in [\![1, d]\!]$ we also define the parameter $\kappa_j$ by: In [24], it was proved that, for d ≥ 3, when κ > 1, there exists an invariant probability measure $Q^{(\alpha)}$ for the environment from the point of view of the particle, absolutely continuous with respect to $P^{(\alpha)}$. From that, it is possible to show that directional transience and ballisticity are equivalent when κ > 1. Furthermore, we know for which parameters the walk is directionally transient.
Theorem A (Corollary 1 of [35]). If d ≥ 3 and $d_\alpha \neq 0$, then for $P^{(\alpha)}$ almost every environment, the walk is directionally transient with asymptotic direction $d_\alpha$, that is to say: $\frac{Y_n}{\|Y_n\|} \to \frac{d_\alpha}{\|d_\alpha\|}$, $P^\omega_0$ almost surely.
However, when κ ≤ 1, such an invariant probability does not exist because of traps. But in [7] it was proved that, by accelerating the walk, we can get an invariant probability for this accelerated walk, absolutely continuous with respect to $P^{(\alpha)}$. This led to the following limit theorem in [7]: Proposition 1.2.2. Suppose κ ≤ 1, d ≥ 3 and $d_\alpha \neq 0$. Let $l \in \{e_1, \dots, e_{2d}\}$ be such that $d_\alpha \cdot l > 0$. Then we have the following convergence in probability (for the annealed law): $\frac{\log(Y_n \cdot l)}{\log(n)} \to \kappa$.
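The convergence in proposition 1.2.2 can be explored numerically. The sketch below is an illustration, not the paper's method: it runs a quenched walk in a lazily generated Dirichlet environment on $\mathbb{Z}^3$ and computes $\log(Y_n \cdot e_1)/\log(n)$. The parameters `ALPHA` (chosen biased, hence in the ballistic regime where the exponent is close to 1) and the number of steps are arbitrary.

```python
import math
import random

random.seed(1)

D = 3
# Unit moves e_1, e_2, e_3, -e_1, -e_2, -e_3.
MOVES = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (0, -1, 0), (0, 0, -1)]
# Bias toward e_1; an arbitrary illustrative choice (ballistic regime).
ALPHA = (2.0, 1.0, 1.0, 1.0, 1.0, 1.0)

def dirichlet(alpha):
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

def walk(n_steps):
    """Quenched RWDE: the environment is sampled lazily at newly visited sites."""
    env = {}
    pos = (0, 0, 0)
    for _ in range(n_steps):
        if pos not in env:
            env[pos] = dirichlet(ALPHA)
        i = random.choices(range(2 * D), weights=env[pos])[0]
        pos = tuple(p + q for p, q in zip(pos, MOVES[i]))
    return pos

n = 20000
y = walk(n)
displacement = y[0]                               # Y_n . e_1
exponent = math.log(max(displacement, 2)) / math.log(n)
```

For sub-ballistic parameters (small α), much longer runs would be needed to see an exponent κ < 1 emerge from the trapping.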
We will now give a precise definition of the accelerated walk. We call directed path a sequence of vertices $\sigma = (x_0, \dots, x_n)$ such that $(x_i, x_{i+1}) \in E$ for all i. To simplify notations, we will write $\omega_\sigma := \prod_{i=0}^{n-1} \omega(x_i, x_{i+1})$. We will call $X^m_t$ the continuous-time Markov chain whose jump rate from x to y is $\gamma^m_\omega(x)\omega(x, y)$, with $X^m_0 = 0$. This means that $Y_n = X^m_{t^m_n}$ and $X^m_t = \sum_k Y_k \mathbf{1}_{t^m_k \leq t < t^m_{k+1}}$, for $t^m_n = \sum_{k=1}^n \frac{1}{\gamma^m_\omega(Y_k)} E_k$, where the $E_i$ are iid exponential random variables of parameter 1. The walk $X^m_t$ can be viewed as an accelerated version of the walk $Y_n$. Now we need to introduce another object: the walk seen from the point of view of the particle. First, let $(\theta_x)_{x \in \mathbb{Z}^d}$ be the shifts on the environment defined by: $\theta_x \omega(y, z) := \omega(x + y, x + z)$. We call process seen from the point of view of the particle the process defined by $\omega^m_t = \theta_{X^m_t} \omega$. Unlike the walk Y, under $P^{(\alpha)}_0$, $\omega^m_t$ is a Markov process on Ω. Its generator R is given by: for all bounded measurable functions f on Ω.
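The time change above can be sketched directly: attach to step k an exponential holding time of parameter 1 divided by the acceleration rate at $Y_k$. In the sketch below (an illustration, not from the paper), `gamma` is a stand-in for the paper's $\gamma^m_\omega$, which is defined through the environment; here it is an arbitrary positive function used purely to illustrate the construction.

```python
import bisect
import random

random.seed(2)

def accelerated_clock(path, gamma):
    """Jump times t_n = sum_{k <= n} E_k / gamma(Y_k), with E_k iid Exp(1).

    Returns the list of jump times; X_t = path[n] for t in [t_n, t_{n+1}).
    """
    times = [0.0]
    for y in path[:-1]:
        times.append(times[-1] + random.expovariate(1.0) / gamma(y))
    return times

def X(t, path, times):
    """Piecewise-constant continuous-time walk: X_t = Y_n for t_n <= t < t_{n+1}."""
    n = bisect.bisect_right(times, t) - 1
    return path[n]

# A toy discrete path on Z (stand-in for Y) and a hypothetical rate function.
path = [0, 1, 0, 1, 2, 3, 2, 3, 4, 5]
gamma = lambda y: 1.0 + abs(y)          # illustrative acceleration rates
times = accelerated_clock(path, gamma)

# By construction Y_n = X_{t_n}: evaluating X at the n-th jump time recovers Y_n.
recovered = [X(t, path, times) for t in times]
```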
Theorem B (Theorem 2.1 of [7]). In dimension d ≥ 3, if m is large enough, then the process $(\omega^m_t)_{t \in \mathbb{R}_+}$ has a stationary distribution $Q^{m,\alpha}$. For any β > 1 there exists an m such that $\frac{dQ^{m,\alpha}}{dP^{(\alpha)}}$ is in $L^\beta$. We will write $Q^{m,\alpha}_0(\cdot)$ for $Q^{m,\alpha}(P^\omega_0(\cdot))$. To simplify the notations, we will drop the (α) from $P^{(\alpha)}$, $P^{(\alpha)}_0$, $Q^{m,\alpha}$ and $Q^{m,\alpha}_0$ when there is no ambiguity. We will also write $X_t$, Q and $Q_0$ instead of $X^m_t$, $Q^m$ and $Q^m_0$ when there is no ambiguity on m. We need one last definition to be able to state the limit theorems.
The following two theorems, which are the main results of this paper, give a full annealed limit theorem: Theorem 1. Set d ≥ 3 and $\alpha \in (0, \infty)^{2d}$. Let $Y_n(t)$ be defined by: If κ < 1 and $d_\alpha \neq 0$, there exist positive constants $c_1, c_2, c_3$ such that, for the $J_1$ topology and for $P^{(\alpha)}_0$: for the $M_1$ topology and for $P^{(\alpha)}_0$: $x \mapsto n^{-\frac{1}{\kappa}} \inf\{t \geq 0, Y(t) \cdot e_1 \geq nx\} \to c_2 S_\kappa$, and for the $J_1$ topology and for $P^{(\alpha)}_0$: Remark 1. We will give a quick explanation of what the $M_1$ and $J_1$ topologies are; for a precise definition see [27], [36]. They were both introduced as generalizations of the uniform norm to càdlàg functions.
The $J_1$ topology is essentially the same as the uniform norm, except that you are allowed to "wiggle" the functions timewise. The $M_1$ topology is a topology on the graphs of the functions, where we add vertical segments every time there is a jump. The main difference between the $M_1$ and $J_1$ topologies is that there is almost no difference between one jump and several small consecutive jumps in the $M_1$ topology, while the difference is significant in the $J_1$ topology. The reason why we only have convergence in $M_1$ for the hitting times $n^{-\frac{1}{\kappa}} \inf\{t \geq 0, Y(t) \cdot e_1 \geq nx\}$ is that there are consecutive jumps. Indeed, if there is a large jump between $\inf\{t \geq 0, Y(t) \cdot e_1 \geq n\}$ and $\inf\{t \geq 0, Y(t) \cdot e_1 \geq n + 1\}$, it is likely that there is a trap with high strength close by, which means that it is likely that there is also a large jump between $\inf\{t \geq 0, Y(t) \cdot e_1 \geq n + 1\}$ and $\inf\{t \geq 0, Y(t) \cdot e_1 \geq n + 2\}$.
Theorem 2. If d ≥ 3 and κ = 1, there exist positive constants $c_1, c_2, c_3$ such that we have the following convergences in probability (for the annealed law): Remark 2. We cannot replace the convergence in probability by an almost sure convergence. This is because if we look at a sum of iid random variables $Z_i$ with a heavy tail $P(Z_i \geq t) \sim ct^{-1}$, then we do not have an almost sure convergence. In fact, there are infinitely many i such that: A tool that will be central in the proof is the study of traps. We now give a precise definition of traps.
Definition 5. A trap is any undirected edge {x, y} such that $\omega(x, y) + \omega(y, x) > \frac{3}{2}$. The strength of a trap is the quantity $\frac{1}{(1-\omega(x,y))+(1-\omega(y,x))}$. Remark 3. The threshold $\frac{3}{2}$ has been chosen because it ensures that $\omega(x, y), \omega(y, x) > \frac{1}{2}$, which in turn means that for every point x there is at most one point y such that {x, y} is a trap.
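Definition 5 is easy to operate on directly. The sketch below (an illustrative representation, not from the paper) scans an environment stored as a dict of directed-edge probabilities, detects traps and computes their strengths.

```python
def find_traps(omega):
    """Return {frozenset({x, y}): strength} for edges with omega(x,y)+omega(y,x) > 3/2.

    The strength of a trap {x, y} is 1 / ((1 - omega(x,y)) + (1 - omega(y,x))).
    """
    traps = {}
    for (x, y), p in omega.items():
        q = omega.get((y, x), 0.0)
        if p + q > 1.5:
            traps[frozenset({x, y})] = 1.0 / ((1.0 - p) + (1.0 - q))
    return traps

# Toy environment on edges of Z with one planted trap between 0 and 1.
omega = {
    (0, 1): 0.9, (1, 0): 0.85,   # 0.9 + 0.85 = 1.75 > 3/2: a trap
    (1, 2): 0.6, (2, 1): 0.5,    # 1.1 <= 3/2: not a trap
}
traps = find_traps(omega)
```

The planted trap has strength 1/((1 − 0.9) + (1 − 0.85)) = 1/0.25 = 4.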

Sketch of the proof
The proofs for κ < 1 and κ = 1 are mostly the same and therefore we will explain both at the same time.

Only the renewal times matter
We first show that the number of points visited between two renewal times has a finite expectation (lemma 2.1.2). This means that the walk does not "wander far" between two renewal times. So we only have to know the renewal times and the position of the walk at the renewal times to prove both theorems (lemma 2.1.3). By proposition 1.2.1, the random variables $(\tau_{i+1} - \tau_i)$ are iid, which simplifies the study of the process $i \mapsto \tau_i$.
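The renewal structure can be illustrated on the projection of the trajectory along $e_1$: a time t is a renewal time when the walk reaches a new record level in the direction $e_1$ and never goes below it afterwards. The sketch below (an illustration on a toy biased walk, not the paper's construction; on a finite sample, renewals can only be certified against the observed future) computes such times.

```python
import random

random.seed(3)

def renewal_times(heights):
    """Indices t where heights[t] is a strict record and a minimum of the future."""
    n = len(heights)
    suffix_min = [0] * n
    m = heights[-1]
    for t in range(n - 1, -1, -1):       # suffix_min[t] = min(heights[t:])
        m = min(m, heights[t])
        suffix_min[t] = m
    out, record = [], float("-inf")
    for t in range(n):
        if heights[t] > record and suffix_min[t] >= heights[t]:
            out.append(t)
        record = max(record, heights[t])
    return out

# Toy biased +-1 walk standing in for the e_1-projection of the RWDE.
h, heights = 0, [0]
for _ in range(5000):
    h += 1 if random.random() < 0.7 else -1
    heights.append(h)
taus = renewal_times(heights)
levels = [heights[t] for t in taus]      # record levels at the renewal times
```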

The time between renewal times only depends on the strength of the traps
Then we use the stationary law of the accelerated walk to get two results: firstly, the time spent outside of traps is negligible (lemma 2.5.4); secondly, the number of times N the walk enters a trap has a finite moment of order κ + ε for some ε > 0 if κ < 1. If κ = 1, then N has a finite expectation (lemma 2.3.3). This means that the time spent in a trap mostly depends on its strength. Now we want to show that the number of times the walk enters a trap and the time it stays in the trap each time are approximately independent. We get two different results in this direction:
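The connection between sojourn time and strength can be checked on a single two-vertex trap. Entering at x, the number of steps before exiting has expectation $(1 + \omega(x,y))/(1 - \omega(x,y)\omega(y,x))$ by a first-step analysis, which for a strong trap is of order twice the strength $1/(\varepsilon_x + \varepsilon_y)$. The Monte Carlo sketch below is illustrative and not from the paper.

```python
import random

random.seed(4)

def sojourn_steps(w_xy, w_yx):
    """Steps spent in the trap {x, y}, entered at x, before exiting."""
    steps, at_x = 0, True
    while True:
        steps += 1
        if at_x:
            if random.random() < w_xy:
                at_x = False          # cross to y
            else:
                return steps          # exit from x
        else:
            if random.random() < w_yx:
                at_x = True           # cross back to x
            else:
                return steps          # exit from y

w_xy, w_yx = 0.95, 0.9               # a strong trap: 0.95 + 0.9 > 3/2
exact = (1 + w_xy) / (1 - w_xy * w_yx)       # expected sojourn (first-step analysis)
strength = 1.0 / ((1 - w_xy) + (1 - w_yx))   # trap strength from definition 5
est = sum(sojourn_steps(w_xy, w_yx) for _ in range(200000)) / 200000
```

For strong traps, 1 − ω(x,y)ω(y,x) ≈ ε_x + ε_y, so the expected sojourn is close to twice the strength.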

The strengths of the traps are essentially independent
The first result (lemma 2.3.1) is that, in a certain sense, the times spent in traps are independent random variables. These random variables have a tail in $Ct^{-\kappa}$, where the constant C depends on where the walk enters and exits the trap and how many times it does so. More precisely, we first fix an environment and a path in this environment. Then we forget all the transition probabilities in the traps; this means that if {x, y} is a trap, then we only remember the "renormalized" transition probabilities $\left(\frac{\omega(x,z)}{1-\omega(x,y)}\right)_{z \sim x}$ and $\left(\frac{\omega(y,z)}{1-\omega(y,x)}\right)_{z \sim y}$. Then, every time the path visits a trap, we only remember where it enters the trap and where it exits the trap; we forget the number of back and forths inside the trap. Then, knowing only this information, the strengths of the traps are independent.

The number of times a trap is visited and its strength are essentially independent
The second result (lemma 2.3.4) allows us to bound the probability that both the number of times the walk enters a trap and the strength of the trap are high. We use the fact that, for an edge (x, y), if we know all the transition probabilities outside of x, y and we know $\left(\frac{\omega(x,z)}{1-\omega(x,y)}\right)_{z \sim x}$ and $\left(\frac{\omega(y,z)}{1-\omega(y,x)}\right)_{z \sim y}$, then the number of times the walk enters the trap is essentially independent of the strength of the trap (it depends mostly on $\frac{1-\omega(x,y)}{1-\omega(y,x)}$ and hardly on the strength of the trap). This means that it is unlikely that traps with a high strength are visited many times.

Conclusion
Thanks to these results, we get that if we fix an integer A and only look at traps that are entered fewer than A times, then we have a good approximation of the total time spent in traps (lemma 2.4.2). The higher A is, the better the approximation gets. Now, if we only look at the traps the walk enters fewer than A times, we get a finite sum of sums of iid random variables by lemma 2.3.1. This means that, after renormalization, the time spent in traps entered fewer than A times converges to a stable distribution if κ < 1. It converges to a constant if κ = 1 (lemma 2.4.3). Then the only thing left is to let A go to infinity, and we get the first two results of both theorems. Finally, to prove the last part of both theorems, we just use standard inversion arguments.
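The last step rests on the classical stable limit for iid heavy-tailed sums: if the $Z_i$ have tail $P(Z_i > t) = t^{-\kappa}$ with κ < 1, then $n^{-1/\kappa} \sum_{i \leq n} Z_i$ converges to a κ-stable law, and the sum is dominated by its largest terms. The Monte Carlo sketch below is illustrative (κ = 1/2 is an arbitrary choice, not a parameter from the paper) and exhibits both effects.

```python
import random

random.seed(5)

KAPPA = 0.5

def pareto():
    """Z with P(Z > t) = t^(-KAPPA) for t >= 1 (inverse-CDF sampling)."""
    return random.random() ** (-1.0 / KAPPA)

def normalized_sum(n):
    z = [pareto() for _ in range(n)]
    return sum(z) / n ** (1.0 / KAPPA), max(z) / sum(z)

trials = 2000
sums_small, _ = zip(*[normalized_sum(500) for _ in range(trials)])
sums_big, ratios = zip(*[normalized_sum(2000) for _ in range(trials)])

def median(xs):
    s = sorted(xs)
    return s[len(s) // 2]

# Stable scaling: the medians of n^(-1/kappa) S_n stabilize as n grows.
scaling = median(sums_big) / median(sums_small)
# Single-jump dominance: for kappa < 1 the maximum carries a macroscopic share.
mean_ratio = sum(ratios) / len(ratios)
```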

2 The proof

2.1 Number of points visited between renewal times
In this section we show that the expectation of the number of points visited between two renewal times is finite. This means that knowing only the values of the renewal times will be enough to prove theorems 1 and 2.
Lemma 2.1.1. For m such that $Q^m$ exists, let $(T^m_i)_{i\in\mathbb{N}^*}$ be the renewal times for the walk $X^m$, i.e. $T^m_i := t^m_{\tau_i}$, or to put it another way, $X^m$ almost surely: Let $L_\tau(i)$ be the number of renewal times before the walk travels a distance i in the direction $e_1$, i.e.: The sequence of random variables $(d_{i+1}-d_i)_{i\in\mathbb{N}^*}$ is iid by lemma 1.2.1. Therefore, if the expectation of $D = d_2 - d_1$ is infinite, then $\frac{d_n}{n} \to \infty$, $P_0$ almost surely. Now, for every $i \in \mathbb{N}^*$, we have $d_{L_\tau(i)} \geq i$ and therefore: If, $P_0$ almost surely, $\frac{n}{d_n} \to 0$, we would have: However, there is a constant C > 0 such that every time the walk reaches a new height along $e_1$, it is a renewal time with probability C (independently of the walk up to that time), so $E_{P_0}\left[\frac{L_\tau(i)}{i}\right] \geq C$. Therefore we get that the expectation of the distance the walk travels in the direction $e_1$ between two renewal times is finite. Now we can look at the accelerated walk $X^m$. We would like the sequence $(T^m_{i+1}-T^m_i)_{i\in\mathbb{N}^*}$ to be a sequence of iid random variables. Unfortunately, the definition of the accelerated random walk uses vertices in a box of size m around the vertex on which the walk currently is, so we need to wait at least 2m + 3 renewal times to be sure to be at distance at least 2m + 1 from all the vertices visited before time $T^m_{i+1} - 1$. So we only have that, for any $j \in \mathbb{N}$, the sequence We know that there exists a constant c > 0 such that, $P_0$ almost surely And, by the law of large numbers, $P_0$ almost surely: Lemma 2.1.2. The number of different points the walk visits between two renewal times has a finite expectation. (Note that the number of different points visited between two renewal times is the same for the walk Y and the accelerated walks $X^m$.)
Proof. We choose m large enough such that $\frac{dQ^m}{dP}$ is in $L^\gamma$ for some γ > 1. In the following we will write $T_i$ instead of $T^m_i$ to simplify the notations. Let β be such that $\frac{1}{\gamma} + \frac{1}{\beta} = 1$. Let $c_\infty$ be the constant such that, $P_0$ almost surely: Thus, if the number of different points the walk visits between two renewal times has an infinite expectation (for $P_0$), then $\frac{R_i}{i} \to \infty$, $P_0$ almost surely and therefore $Q^m_0$ almost surely. However, we have, for any C > 0: Now we just have to prove that $E_{Q^m_0}\left[\#\{x, \exists t \in [0, 1), X_t = x\}\right]$ is finite. We use the fact that $\frac{dQ^m}{dP}$ is in $L^\gamma$ and therefore So we just need to prove that $E_P\left[\#\{x, \exists t \in [0, 1), X_t = x\}^\beta\right]$ is finite. This is an immediate consequence of lemma 4 of [7]. Therefore, for C large enough, we get: Therefore, the number of different points the walk visits between two renewal times has a finite expectation.
Now, we show that the trajectory of the walk cannot deviate too much from a straight line.
Lemma 2.1.3. There exists $D \in \mathbb{R}^d$ such that, $P_0$ almost surely: is a sequence of iid random variables (for $P_0$) and also has a finite expectation. So there exists $D \in \mathbb{Z}^d$ such that, $P_0$ almost surely: $\to 0$, $P_0$ almost surely. We clearly have: So we get that, $P_0$ almost surely: $\frac{Y_n}{L_\tau(n)} \to D$.

2.2 Number of visits of traps
This section is devoted to refining some results of [7] to get an upper bound on the number of visits to traps. First we must get some results on finite graphs, and then we will extend these results to $\mathbb{Z}^d$.
• for every vertex x ∈ V there exists a directed path from x to δ.
In this section we will only consider graphs with no multiple edges, no elementary loops (an edge starting and ending at the same point), and such that for every $x, y \in V \setminus \{\delta\}$, $(x, y) \in E$ if and only if $(y, x) \in E$. We will first extend the definition of $\gamma^m_\omega(x)$ to those graphs. Let $G = (V \cup \{\delta\}, E)$ be a finite directed graph, $(\alpha(e))_{e \in E}$ a family of positive real numbers, and $P^\alpha$ the corresponding Dirichlet distribution (independent at each site).
Definition 7. For $x \in G$ and $\Lambda \subset V \cup \{\delta\}$, we define the following generalization of $\gamma^m_\omega$: where we sum over simple paths from x to the border of Λ (i.e. $\{y \in \Lambda, \exists z \notin \Lambda, y \sim z\}$) that stay in Λ.
Remark 4. We notice that, in $\mathbb{Z}^d$, for any $m \in \mathbb{N}^*$: We will also use the following acceleration function.
Definition 8. For any graph G and any environment ω on G, we define the partial acceleration function $\gamma^G_\omega$ by: When there is no ambiguity, we will write $\gamma_\omega(x)$ instead of $\gamma^G_\omega(x)$. We have the following result in the case of finite graphs: Lemma 2.2.1. Let $G = (V \cup \{\delta\}, E)$ be a finite directed graph possessing at most n edges and such that every vertex is connected to δ by a directed path. We furthermore suppose that G has no multiple edges, no elementary loop, and that if $(x, y) \in E$ and $y \neq \delta$, then $(y, x) \in E$. Let $(a(e))_{e \in E}$ be positive real numbers. Then, for every vertex $x \in V$, there exist real numbers C, r > 0 such that, for small ε > 0: where the value of β is explicit and given in [7], but to simplify the notations we will only use the fact that it is greater than or equal to κ′ in the case we will consider.
Lemma 2.2.2. Let $(\omega_i)_{1 \leq i \leq n_r}$ be independent Dirichlet random variables with respective parameters $(\alpha_i)_{1 \leq i \leq n_r}$: The following lemma shows that the value of the acceleration function $\gamma^m_\omega(x)$ depends mostly on the strength of the trap that contains x (if there is one). This means that the number of visits to a vertex depends mostly on the strength of the trap containing this vertex.
Lemma 2.2.3. For any m ≥ 2: Proof. Let m ≥ 2 be an integer. We will use the results we have on finite graphs for this lemma.
First, we notice that the value of $\left(\frac{\gamma^m_\omega(0)}{\gamma_\omega(0)}\right)^\beta$ only depends on a finite number of edges and vertices around 0. This means that we can look at this quantity on a finite graph and get the same law. The finite graph $G^m = (V^m, E^m)$ we want is obtained by contracting all the points $x \in \mathbb{Z}^d$ such that $\|x\|_1 \geq m$ into a single point δ (the cemetery vertex) and deleting all the edges going from this vertex to the rest of the environment. For any environment ω on $\mathbb{Z}^d$ we have an equivalent environment $\tilde{\omega}$ on $G^m$: if $(x, y) \in E$ and $(x, y) \in E^m$ then $\tilde{\omega}(x, y) = \omega(x, y)$, and for any $x \in V^m \setminus \{\delta\}$, $\tilde{\omega}(x, \delta) = \sum_{y \in \mathbb{Z}^d, \|y\|_1 = m} \omega(x, y)$. Now we have: For any point $y \sim 0$ and any environment ω, we define $\Sigma^\omega_y$ by: by contracting the vertices 0 and x into a single vertex 0 and deleting the edges (0, x) and (x, 0). The edges (0, y) and (y, 0) stay the same for any $y \sim 0$ such that $x \neq y$. However, the edges (x, y) and (y, x) become (0, y) and (y, 0) respectively, for any $y \sim x$ such that $0 \neq y$. We can also define $\omega^m_x$ by: as a sum over simple paths, we have: Viewing $\gamma^{\omega^m_x}_{G^m_x}$ as a sum over simple paths σ from 0 to δ ($\sigma_0 = 0$), either the first vertex $\sigma_1$ visited by the path is such that $(0, \sigma_1) \in E^m$ or $(x, \sigma_1) \in E^m$. We define $\tilde\sigma$ by: if $(0, \sigma_1) \in E^m$ then $\tilde\sigma := \sigma$ and we have: , and if $(x, \sigma_1) \in E^m$ then $\tilde\sigma_i := \sigma_{i-1}$ for $i \geq 2$, $\tilde\sigma_0 := 0$ and $\tilde\sigma_1 := x$, and we get: $\omega^m(0, x)\,\omega^m_x(\tilde\sigma)$. For any environment ω, let $x(\omega^m)$ be the point that maximises $y \mapsto \omega^m(0, y)$. We have $\tilde\omega(0, y) \geq \frac{1}{2d}$ and therefore:
So we get, for any ε > 0: by definition of $\gamma^{\omega^m}_{G^m}(0)$: $\forall y \sim 0$, $\frac{\gamma^{\omega^m}_{G^m}(0)}{\Sigma^{\omega^m}_y} \geq 1$. Therefore: Now we can apply lemma 2.2.2, which gives, for any $y \sim 0$: where, under $\tilde{P}$, the $\omega^m_y$ are independent Dirichlet random variables (on the graph $G^m_y$, with the same Dirichlet parameters as in $\mathbb{Z}^d$). Now, according to lemma 2.2.1, there exist two constants C′, r such that: This means that, by changing the constant C′, we get: So there exists a constant D that does not depend on ε such that: We have the result we want.
Unfortunately, this statement cannot be used efficiently with the invariant distribution $Q^m$, because we can visit multiple points between times 0 and 1, since time is continuous. So we need a version of the previous lemma that takes this continuity into account. Lemma 2.2.4. Set $\alpha \in (0, \infty)^{2d}$. For every $\beta < \frac{\kappa + \kappa'}{2}$, there exists an integer m such that: Proof. Let $p \in (1, \infty)$ be a constant such that $\beta p^2 < \frac{\kappa + \kappa'}{2}$ and let γ be such that $\frac{1}{p} + \frac{1}{\gamma} = 1$. Now let m be an integer such that $\frac{dQ^m}{dP}$ is in $L^\gamma$. This means that $\frac{dQ^m_0}{dP_0}$ is also in $L^\gamma$. We will only work in $\mathbb{Z}^d$, so we will write $\gamma_\omega$ instead of $\gamma^{\mathbb{Z}^d}_\omega$.
This means we just need to show that: Let the random variable be defined by: We have: does not depend on x, and we get: And since there exists a constant C such that, for every $i \geq 1$, there are at most $Ci^{d-1}$ points x such that $\|x\|_\infty = i$, we get: which is finite by lemma 4 of [7]. And by lemma 2.2.3 we get: So we get the result we want.

2.3 Independence of the traps
This section is devoted to the precise study of traps. The notion of trap was defined in the introduction, in definition 5. In the previous section we essentially showed that the total amount of time spent in a trap mostly depends on its strength. Now we need a way to create independence between the times spent in the different traps. We will do it in two steps. First we will show that the strengths of the traps are essentially independent, and then we will show that the strength of a trap and the number of times it is visited are essentially independent. However, we first need to introduce a few objects to characterize this independence precisely.
Definition 9. Let $T^\omega$ be the set of traps $\{x, y\} \in \tilde{E}$ for the environment ω.
$\tilde{T}^\omega$ is the set of vertices $x \in \mathbb{Z}^d$ such that there exists y such that $\{x, y\} \in T^\omega$. For any subset J of $[\![1, d]\!]$ we define $T^\omega_J$, the traps with direction in J, by: For any subset J of $[\![1, d]\!]$, $\tilde{T}^\omega_J$ is the set of vertices $x \in \mathbb{Z}^d$ such that there exists y such that $\{x, y\} \in T^\omega_J$. In the following we will omit the ω when there is no ambiguity.
Definition 10. We say that two environments $\omega_1$ and $\omega_2$ are trap-equivalent if: - they have the same traps: - at each vertex not in a trap, the transition probabilities are the same for both environments: - at each vertex x in a trap {x, y}, the transition probabilities conditioned on not crossing the trap are the same:
We will denote by $\tilde{\Omega}$ the set of all equivalence classes for the trap-equivalence relation.
Definition 11. Set $\tilde{\omega} \in \tilde{\Omega}$. Let T be its set of traps and σ a path starting at 0 that only stays a finite amount of time in a trap every time it enters one. We want to define a path, with the same trajectory as σ outside the traps, which does not keep information regarding the time spent in the traps. We essentially want to erase all the back and forths inside traps. To that end, we define the sequences of integer times $(t_i), (s_i)$ by: If $\sigma_{t_i}$ is in a trap, then $[t_i, s_i]$ is the interval of time spent in this trap before leaving it. The partially forgotten path $\tilde\sigma$ associated with σ in the environment $\tilde\omega$ is defined by: Similarly, we can define the partially forgotten walk $(\tilde{Y}_n)_{n \in \mathbb{N}}$ associated with $(Y_n)_{n \in \mathbb{N}}$. Definition 12. For all $i \in \mathbb{N}^*$, let $I_i$ be the set defined by: And let $I_n$ be defined by: Let σ be a path starting at 0 and $\tilde{e} \in \tilde{E}$ an undirected edge. We define the sequences $(t^{\mathrm{in}}_i)$ (the times when the path enters $\tilde{e}$) and $(t^{\mathrm{out}}_i)$ (the times when the path exits $\tilde{e}$) by: Since the walk is almost surely transient by theorem A, we have that $t^{\mathrm{in}}_i = \infty$ for i large enough. Let x and y be such that $\{x, y\} = \tilde{e}$. Let $j \in [\![1, d]\!]$ be such that either $x = y + e_j$ or $x = y - e_j$ (j is the direction of the edge), and let n be such that $t^{\mathrm{in}}_n < \infty$ and $t^{\mathrm{in}}_{n+1} = \infty$. Now we can define $N_{x \to x}$, $N_{x \to y}$, $N_{y \to x}$, $N_{y \to y}$ by: The configuration p of the edge $\tilde{e}$, for the path σ, is the element of $I_n$ defined by: Remark 6. Set $\tilde{\omega} \in \tilde{\Omega}$. Let $\sigma_1, \sigma_2$ be two paths starting at 0 with the same partially forgotten path in $\tilde{\omega}$. For any undirected edge $\tilde{e}$, the configuration of $\tilde{e}$ is the same for $\sigma_1$ and $\sigma_2$. Therefore, we only need to know the partially forgotten path to know the configuration of an edge.

Now we can say in what way the strengths of the traps are independent.
Lemma 2.3.1. For any environment $\omega \in \Omega$, let $\tilde{\omega} \in \tilde{\Omega}$ be its equivalence class for the trap-equivalence relation. Now let $(\tilde{Y}_i)$ be the partially forgotten walk. We will write $\alpha := \sum_{1 \leq i \leq 2d} \alpha_i$ and, for any vertex z and integer i, we will use the notation $\alpha(z, z + e_i) := \alpha_i$. Knowing $\tilde{\omega}$ and $(\tilde{Y}_i)$, the strengths of the various traps are independent. Furthermore, let {x, y} be a trap and $p = (j, N_{x \to x}, N_{x \to y}, N_{y \to x}, N_{y \to y})$ its configuration. To simplify notations we will write $N_x := N_{x \to x} + N_{y \to x}$, $N_y := N_{x \to y} + N_{y \to y}$ and $N := N_x + N_y$. Let (r, k) be defined by The density of the law of (r, k) (with respect to the Lebesgue measure), knowing $\tilde{\omega}$ and $\tilde{Y}$, is: where $C_p$ is a constant that only depends on p and α, and $h_p$ is a function that only depends on p and α and that satisfies the following bound: And for the law of the strength s of the trap, there exists a constant D that only depends on the configuration of the trap such that, for any A ≥ 2: Proof. In the following, we will write $\alpha := \sum_{i=1}^{2d} \alpha_i$ and, if $y = x + e_i$, we will write $\alpha(x, y) := \alpha_i$.
First, we need to show that the strength of the traps is approximately independent of the trajectory of the walk. We take an environment ω and let $\tilde{\omega}$ be the set of all environments that are trap-equivalent to ω. Now, for any path σ starting at 0, let $\tilde{\sigma}_{\tilde{\omega}}$ be the set of all paths that start at 0 and that have the same partially forgotten path as σ. We want to see how the law of the environment is changed by knowing the partially forgotten path and the equivalence class of the environment. We get that the density of the environment (we look at an environment of finite size, large enough to contain the path we look at) (for $P^{(\alpha)}$), knowing the equivalence class of the environment, is equal to: where $\varepsilon_x = 1 - \omega(x, y)$ and $\varepsilon_y = 1 - \omega(y, x)$. Now, knowing the environment, the probability of having the given partially forgotten walk is the same in the parts of the environment where there is no trap. The only thing that depends on the specific environment is the times when the walk crosses the traps. Let {x, y} be a trap and, for any $z_1, z_2 \in \{x, y\}$, let $\tilde{p}(z_1, z_2)$ be the probability, starting at $z_1$, of exiting the trap at $z_2$; we get: So, for any environment ω, we get that the probability of a partially forgotten path (for P We define $h_{\{x,y\}}$ by: Now we get that the probability density of having a given environment, knowing the equivalence class of the environment and the partially forgotten path, is equal to the product of (1) and (2), up to a multiplicative constant C that depends on the partially forgotten path: This means that, for $P^{(\alpha)}_0$, knowing the equivalence class of the environment and the partially forgotten path, the transition probabilities for each trap are independent, so we can look at each trap independently. Let us fix a trap {x, y} and, to simplify notations, write $N_x = N_{x \to x} + N_{y \to x}$, $N_y = N_{x \to y} + N_{y \to y}$ and $N = N_x + N_y$.
We define r and k by $r = \frac{\varepsilon_x + \varepsilon_y}{2}$ and $k = \frac{\varepsilon_x - \varepsilon_y}{\varepsilon_x + \varepsilon_y}$, which gives $\varepsilon_x = r(1 + k)$ and $\varepsilon_y = r(1 - k)$; the law of the transition probabilities becomes: Now we want to give bounds on $h_{\{x,y\}}$. Since, for all $r \leq \frac{1}{2}$, $|\log(1 - r)| \leq 2r$, we get: Now we want to show that there cannot be too many traps that are visited many times.
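The change of variables $\varepsilon_x = r(1+k)$, $\varepsilon_y = r(1-k)$ has Jacobian determinant $-2r$, so the density picks up a factor $2r$. A quick numerical check of this determinant by finite differences (pure illustration, not from the paper):

```python
def change_of_variables(r, k):
    """(r, k) -> (eps_x, eps_y) = (r(1+k), r(1-k))."""
    return r * (1 + k), r * (1 - k)

def jacobian_det(f, r, k, h=1e-6):
    """Finite-difference estimate of the determinant of the 2x2 Jacobian of f."""
    df_dr = [(a - b) / (2 * h) for a, b in zip(f(r + h, k), f(r - h, k))]
    df_dk = [(a - b) / (2 * h) for a, b in zip(f(r, k + h), f(r, k - h))]
    return df_dr[0] * df_dk[1] - df_dr[1] * df_dk[0]

# det [[1+k, r], [1-k, -r]] = -r(1+k) - r(1-k) = -2r, independently of k.
checks = [(r, k, jacobian_det(change_of_variables, r, k))
          for r in (0.1, 0.2) for k in (-0.5, 0.0, 0.7)]
```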
Proof. We want to show that can be bounded by using the inequality from lemma 2.2.4: which holds for any $\beta \in \left(\kappa, \frac{\kappa + \kappa'}{2}\right)$ and for any integer m such that $Q^m_0$ exists. To that end, we need to introduce the intermediate quantity $S^m_n$: where the $(T^m_i)$ are the renewal times for the walk $(X^m_t)$, with the convention $T^m_0 := 0$. By definition of $X^m$, the time the walk $X^m$ spends at a vertex x is a sum of $\ell_x$ iid exponential random variables of expectation 1, where $\ell_x$ is the number of times the walk Y visits the point x. Therefore the quantity should be close to $\ell_x$. Then, every time the walk Y enters the trap {x, y}, it stays a time of order $\gamma_\omega(x)$. This means that $\frac{\ell_x}{\gamma_\omega(x)}$ should be almost equal to the number of times the trap is entered. Finally, we get that for every trap the quantities should be of the same order. Then we just need to bound the second quantity with lemma 2.2.4 and a law of large numbers. (The quantity depends on a box of size m around x, and traps span over 2 vertices; that is why we cannot consider the sequence $(S^m_{i+1} - S^m_i)_{i \geq 1}$.) This means that there is a positive constant $C_0$, possibly infinite, such that $E_{P_0}\left[S^m_{2m+3} - S^m_{2m+2}\right] = C_0$ and $\frac{1}{n} S^m_n \to C_0$, $P_0$ a.s. and therefore $Q_0$ a.s.
For any $x \in \mathbb{Z}^d$ there is at most one integer i such that $\int \gamma_\omega(x) \mathbf{1}_{X^m_t = x}\, dt$ is non-zero, and therefore: By lemma 2.1.1, there is a finite constant $D_m$ such that $\frac{1}{n} T^m_n \to D_m$, $P_0$ and $Q_0$ almost surely. We get: Since β ≤ 1 we have: So $C_0$ is finite. Now we want to get a bound on Y from a bound on $X^m$. For any trap $\{x, y\} \in T$, let $N_{\{x,y\}}$ be the number of times the trap {x, y} is entered. Let $T^{\omega,n}$ be the subset of $T^\omega$ defined by: We choose a partially forgotten path σ and we look at the law of the total time the walk X spends in a trap $\{x, y\} \in T^\omega$, knowing $Y_{\tau_1}$ and $\tilde{Y} = \sigma$, where $\tilde{Y}$ is the partially forgotten walk. We now have two sources of randomness: the number of back and forths the walk does every time it visits a trap, and the time the accelerated walk $X^m$ spends at every step.
Knowing the partially forgotten walk, $N_{\{x,y\}}$ is deterministic. Let $t^j_{\{x,y\}}$ be the j-th time the walk Y enters the trap {x, y} and $\tilde{t}^j_{\{x,y\}}$ be the j-th time the walk Y exits the trap {x, y}. We define Knowing the partially forgotten walk, $\varepsilon^j_x$ is deterministic (it is equal to 0 if and only if the walk enters and leaves the trap by y during the j-th visit) and $\varepsilon^j_x \in \{0, 1\}$. We have: where the $(E^{k,j}_{m,x})_{x \in \mathbb{Z}^d, k,j \in \mathbb{N}}$ are independent exponential random variables of parameter $\gamma^m_\omega(x)$; they correspond to the times the accelerated walk spends on each vertex. By the technical lemma 3.0.4 (the proof of which is in the annex), we get that there exists a constant $C_1 > 0$ such that, for any integer n and any trap $\{x, y\} \in T^{\omega,n}$: Unfortunately, we cannot directly use this inequality to conclude, because it does not behave nicely with the renewal times. Indeed, if we know that a trap spans over two renewal blocks, it means that the walk cannot do any back and forth inside the trap, and the previous inequality becomes false. Instead, we will first have to consider traps in $T^{\omega,n}$. First, by definition of the renewal times, no trap in $T^{\omega,n}$ can be visited before time $\tau_1$ or after time $\tau_{n+2}$, since $Y_{\tau_{n+2}} \cdot e_1 \geq Y_{\tau_1} \cdot e_1 + n + 1$. Therefore: Therefore we get: This in turn gives: Let $\#\{j, Y_j \in \{x, y\} \text{ and } Y_{j+1} \in \{x, y\}\}^\beta$ be the quantity we want to bound. By the law of large numbers, we have, $P_0$ a.s. and therefore $Q_0$ a.s.: Now, as a consequence of lemma 2.1.2 and the law of large numbers, there exists a finite constant D > 0 such that, $P_0$ a.s. and therefore $Q_0$ a.s., $\frac{1}{n} Y_{\tau_n} \cdot e_1 \to D$. Furthermore, a trap spans over at most two renewal blocks, so for any trap {x, y}: As a consequence, $P_0$ a.s.: Finally, we get: The next lemma is just a variation of the previous one, with the difference that the sum has a deterministic number of terms instead of a random one, which makes it simpler to use.
be the i th trap in the direction j the walk encounters after τ 2 . Let N j i be the number of times the walk enters this trap. : φ(x)dx then there exists a constant C such that for any n ∈ N: Those results are also true if (x j i , y j i ) is the i th trap in the direction j the walk encounters after Proof. Let p > 0 be the probability, for P 0 , that there is at least one trap in the direction j between times τ 2 and τ 3 − 1. Let T j be the set of traps in the direction j. Now let the sequence (n i ) be defined by: are clearly identically distributed and we have: The sum has to go up to 2m because in the second sum some traps can appear twice if they are in between two renewal slabs. Indeed, in this case they can be visited before and after the renewal time (if they are in the direction e 1 ). We now have: Similarly, if {x i , y i } is the i th trap in the direction j the walk encounters after τ 2 such that x i .e 1 , y i .e 1 ≥ Y τ 2 .e 1 and N j i the number of times the walk enters this trap then we have: If κ = 1, by lemma 2.3.2, Therefore, by forthcoming technical lemma 3.0.1 there exists a positive, concave function φ defined on [0, ∞) such that φ(t) goes to infinity when t goes to infinity and such that, if φ(x)dx then: x is increasing and therefore, by writing So we get: are clearly identically distributed and we have: Once again, the sum has to go up to 2m because in the second sum some traps can appear twice if they are in between two renewal slabs. Indeed, in this case they can be visited before and after the renewal time (if they are in the direction e 1 ). so: Similarly, if {x i , y i } is the i th trap in the direction j the walk encounters after τ 2 such that x i .e 1 , y i .e 1 ≥ Y τ 2 .e 1 and N j i the number of times the walk enters this trap then we have: and we get the result we want.
The following lemma gives us some independence between the strength of a trap and the number of times the walk enters this trap.
any γ ∈ [0, 1], there exists a constant C that does not depend on i such that: We also have that for any positive concave function φ such that φ(0) = 1 with Φ(t) = t x=0 φ(x)dx we get: Proof. First if H is a geometric random variable of parameter p then for any γ ∈ [0, 1] we have the following three inequalities: Inequalities 4 and 5 give us that there is a constant C γ such that E((1+H) γ ) ≥ C γ 1 p γ : inequality 4 gives us the result for p ≥ 1 2 and since (1 − p) 1 p −1 converges to exp(−1) when p goes to 0, inequality 5 gives us the result for p ≤ 1 2 . By lemma 3.0.2 we get that there is a constant C φ such that: Let t ∈ N be an integer. In the following we will call the set of vertices {x, x.e 1 = Y t .e 1 } the renewal hyperplane. We look at the n th time, after time t, that the walk encounters a vertex that touches a trap {x, y} in the direction j that has never been visited before and such that x.e 1 , y.e 1 ≥ Y t .e 1 . We want to show that the strength of the trap is essentially independent of the number of times the walk leaves the trap and of the random variable 1 τ 2 =t . Let x, y be the corresponding trap with x being the first vertex visited. Now we look at the trap {x, y}. Let i be such that y = x + e i ; we will write α x := α i , α y := α i+d and α := 2d k=1 α k . The probability density (for P (α) ) of the transition probabilities ω(x, y) and ω(y, x), knowing all the transition probabilities (ω(z 1 , z 2 )) z 1 ∈Z d \{x,y} , the renormalized transition probabilities ( ω(x,z) 1−ω(x,y) ) z =y , ( ω(y,z) 1−ω(y,x) ) z =x and that {x, y} is a trap, is: Now we make the change of variables: which gives a probability density of: Let h(r, k) be defined by: For 0 ≤ r ≤ 1 4 and −1 ≤ k ≤ 1 we have: So for 0 ≤ r ≤ 1 4 and −1 ≤ k ≤ 1 we have: Now the probability density is: Now we look at a specific environment ω and an edge {x ′ , y ′ } in that environment. To simplify the notation we will write ε x ′ = 1 − ω(x ′ , y ′ ) and ε y ′ = 1 − ω(y ′ , x ′ ).
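One half of the geometric-moment bound invoked at the start of the proof above can be recovered in one line — a sketch, assuming the convention P(H = k) = p(1 − p)^k so that E(1 + H) = 1/p: since x ↦ x^γ is concave on [0, ∞) for γ ∈ [0, 1], Jensen's inequality gives

```latex
\mathbb{E}\big[(1+H)^{\gamma}\big]
  \;\le\; \big(\mathbb{E}[1+H]\big)^{\gamma}
  \;=\; p^{-\gamma}, \qquad \gamma \in [0,1],
```

while the matching lower bound E((1+H)^γ) ≥ C_γ p^{−γ} is the content of inequalities 4 and 5.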
When the walk leaves the trap there are three possibilities:
- the walk goes to infinity before going back to the trap or the renewal hyperplane;
- the walk goes to the renewal hyperplane before it goes back to the trap (this does not necessarily mean that the walk will go back to the trap after going to the renewal hyperplane);
- the walk goes back to the trap before it goes to the renewal hyperplane (this does not necessarily mean that the walk will eventually go to the renewal hyperplane).
If the walk is in x ′ let β ∞ x ′ be the probability, knowing that the next step isn't crossing the trap, that the walk goes to infinity without going to the renewal hyperplane or the trap. Similarly, let β 0 x ′ be the probability, knowing that the next step isn't crossing the trap, that the walk goes to the renewal hyperplane before it goes back to the trap (this does not mean that the walk necessarily goes back to the trap). We will also define β x ′ by β Now, if the walk is in x ′ , the probability that when the walk leaves the trap it either never comes back to the trap or goes to the renewal hyperplane before it goes back to the trap is: Similarly, if the walk is in y ′ , this probability is: Now we want to show that both these quantities are almost equal to: We will only show it for the first quantity; the proof is the same for the second one. We recall that ε x ′ , ε y ′ ≤ 1 2 , therefore: So we get: Similarly, if the walk is in x ′ , the probability that the walk goes to infinity knowing that the walk either goes to infinity or to the renewal hyperplane before coming back to the trap is: And if it is in y ′ this probability is: We want to show that both these probabilities are almost equal to We will only show it for the first one: And we also get, the same way: Now we get back to the trap {x, y}. Let N be the number of times the walk leaves the trap {x, y} before going to the renewal hyperplane (so if the walk never goes to the renewal hyperplane, N is just the number of times the walk leaves the trap {x, y}). We get that knowing ε x , ε y and N , the probability (for P ω 0 ) that the walk never goes to the renewal hyperplane is between 1 2 εxβ ∞ x +εyβ ∞ y εxβx+εyβy and 2 εxβ ∞ x +εyβ ∞ y εxβx+εyβy .
We also have that there exist two geometric random variables N − and N + respectively of parameter 1 2 εxβx+εyβy εx+εy and 2 εxβx+εyβy εx+εy such that P ω 0 almost surely: Therefore, by equations 4, 5, 6 and 7 there exist two positive constants C 1 and C 2 (that depend on γ and Φ) such that for f equal to either x → x γ or Φ: Now let f be either x → x γ or Φ. We need to show that N is almost independent of 1 τ 2 =t . Let t xy be the first time the walk is in x or y and let B be the event that "τ 2 can be equal to t", i.e. there exists t ′ < t (t ′ plays the role of τ 1 ) such that: If B does not hold then τ 2 cannot be equal to t. If B holds then τ 2 = t if and only if the walk never crosses the renewal hyperplane after time t xy . So, for any environment ω: To simplify notation we will write We have (in the following, the constant C may change from line to line): Now we use the fact that the various β only depend on the trajectory of the walk up to the time it encounters the n th trap in the direction j after time t, the transition probabilities (ω(z 1 , z 2 )) z 1 ∈Z d \{x,y} , the renormalized transition probabilities ( ω(x,z) 1−ω(x,y) ) z =y , ( ω(y,z) 1−ω(y,x) ) z =x and that {x, y} is a trap. But the law of (ω(x, y), ω(y, x)) is independent of this so we get: Then, by summing over all t we get the result.

The time the walk spends in traps
Now that we have some independence, we can start to look at the precise behaviour of the time spent in the traps. First we want to show that the number of times the walk enters a trap times the strength of said trap is a good approximation of the total time spent in this trap.
Lemma 2.4.1. Let j ∈ [|1, d|] be a direction. Now let {x j i , y j i } be the i th trap in the direction j entered after time τ 2 and such that x j i .e 1 , y j i .e 1 ≥ Y τ 2 .e 1 . Let s j i be the strength of this trap, N j i the number of times the walk enters this trap and ℓ j i = #{n, Y n ∈ {x j i , y j i }} the time spent in the trap. We have for any environment ω, for any A, B ≥ 0, for any integer m and for any C ∈ R + ∪ {∞}: Proof. Let ω be an environment, (Ỹ i ) i∈N be the partially forgotten walk on this environment. Let p j i = ω(x j i , y j i )ω(y j i , x j i ). Now the number of back-and-forths inside the trap (x j i , y j i ) during its k th visit is equal to H j i,k where H j i,k is a geometric random variable of parameter p j i . Knowing the partially-forgotten walk and p j i , the H j i,k are independent and we get for any j: So we get: The actual value of ℓ j i can be slightly larger than Now we want to show that we can neglect the time spent in traps in directions such that κ j ≠ κ and in traps that are visited many times. This will allow us to have traps that are rather similar, so that the times spent in those traps are almost identically distributed.
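To make the approximation above concrete — a minimal illustrative sketch, with hypothetical names that are not the paper's notation: assume, as in the proof, that each of the N visits contributes an independent geometric number of back-and-forths with "repeat" probability p = ω(x, y)ω(y, x), each back-and-forth costing two steps.

```python
# Illustrative sketch (hypothetical helper, not from the paper): expected time
# spent in a two-vertex trap entered `n_visits` times, assuming each visit
# yields an independent geometric number of back-and-forths with repeat
# probability p, each back-and-forth costing 2 steps plus 1 step per entry.

def expected_trap_time(p: float, n_visits: int) -> float:
    mean_back_and_forths = p / (1.0 - p)  # mean of P(H = k) = (1 - p) p^k
    return n_visits * (1.0 + 2.0 * mean_back_and_forths)

# A strong trap has p close to 1, so each visit costs about 2/(1 - p) steps:
print(expected_trap_time(0.5, 3))   # → 9.0
```

This is exactly why the product N × strength is the right first-order approximation of the total time spent in the trap.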
Lemma 2.4.2. Let j ∈ [|1, d|] be an integer that represents the direction of the trap we will consider. Let {x i , y i } be the i th trap in the direction j visited by the walk after time τ 2 and such that x i .e 1 ≥ Y τ 2 .e 1 and y i .e 1 ≥ Y τ 2 .e 1 . Let κ j = 2α − α j − α j+d ≥ κ.
If κ < 1 there are two cases: If κ j = κ, for any ε > 0 there exists an integer m ε such that for n large enough: If κ j > κ, for any ε > 0 there exists an integer n ε such that for n ≥ n ε : If κ = 1 there are two cases: If κ j = κ, for any ε > 0 there exists an integer m ε such that for n large enough: If κ j > 1, for any ε > 0 there exists an integer n ε such that for n ≥ n ε : Proof. For all i ≥ 0 let t i be the time at which the walk Y enters its i th trap ({x i , y i }) in the direction j after τ 2 and such that x i .e 1 ≥ Y τ 2 .e 1 and y i .e 1 ≥ Y τ 2 .e 1 . We will write x i the vertex such that x i = Y t i . Let s j i be the strength of the trap {x j i , y j i }. For any A, B > 0: We will first look at the case κ < 1. Now, we want to show that we can neglect traps with a high N j i or a low s j i . We get that for any positive integer M , any real A ≥ 2 and any β ∈ [κ, 1] and η > 0 such that β +η ≤ min κ+κ ′ 2 , 1 :

By lemma 2.3.4, there exists a constant c such that
for t ≥ 2 β so: Now for κ j = κ if we take β ∈ (κ, 1] such that β < κ+κ ′ 2 , η = 0 and A = bn 1 κ we get: Now, we get by lemma 2.4.1 that for any positive constants A, B and any positive integer m: So for any ε > 0, for any a > 0, by taking B = ε 2 n 1 κ and A = εn 1 κ in 13, we have for any positive integer m: And we have for any b > 0: We have by 10: And by 12, taking b = ε 2κ+1 β−κ : So for n large enough: which means that for n large enough and m ε such that m ε ε 2κ+1 β−κ ≥ ε − 1 κ we have: And we have the result we want. If κ j > κ there exists β ∈ (κ, κ j ) such that β ≤ 1 and β ≤ κ+κ ′ 2 we get by taking M = 1 and A = ∞ in 11: And then lemma 2.4.1 gives us the result we want. Now we can look at the case κ = 1. Let φ be a positive concave function such that φ(t) goes to infinity when t goes to infinity. We x , we clearly have that f (x) ≥ f (0) and we have for any y > x > 0: We get that for any positive integer M and any real A ≥ 2: Now, by lemma 2.3.4 we get: If κ j = 1, we get, by taking A = n 2 (for n ≥ 2) in 14: .
And by taking A = n 2 and B = 1 in equation 10 we have for some constant c: So for any ε > 0 we get, by taking m ε such that f (m ε ) ≥ 1 ε 3 and using lemma 2.4.1: So there exists a constant C such that for any ε > 0 there exists m ε such that: If κ j > 1, we take M = 0 and A = ∞ in 14 we get for some constant C: And therefore by lemma 2.4.1, for any ε > 0 .

So we have the result we want.
Now we have all the tools to get a first limit theorem on the time spent in traps.
Lemma 2.4.3. Set α ∈ (0, ∞) 2d and let α : and T j be the set of vertices x such that there exists j ∈ J such that either (x, x + e j ) ∈ T or (x, x − e j ) ∈ T . Let {x j i , y j i } be the i th trap in the direction j encountered after time τ 2 . For κ < 1, for any m there exists a constant C m such that: For κ = 1, for any m there exists a constant C m > 0 such that: Proof. For every configuration p ∈ I n let C p be the expectation of the number of traps of configuration p encountered between times τ 2 and τ 3 −1 (it is also the expectation of the number of traps of configuration p encountered between times τ i and τ i+1 − 1 for any i ≥ 2). We clearly have: Once we know that a trap is in a direction j ∈ J and has a configuration p for some partially forgotten random walk, the exact number of back-and-forths the walk does in this trap is still random, because the exact number of back-and-forths knowing the transition probabilities of the trap is random and because the transition probabilities of the trap are still random, following the law (cf. lemma 2.3.1): where ε x := 1 − ω(x, y), ε y := 1 − ω(y, x) and the values of p x , p y , p s are explicit but irrelevant, except for the fact that p x + p y − p s = κ − 2. Let N be such that p ∈ I N (i.e. the walk exits the trap N times); we also have that there exists a constant C α that only depends on α such that: Now if we make the change of variables 2r = ε x + ε y , k = εx−εy εx+εy , we get that the law of the transition probabilities becomes: The number of back-and-forths is the sum of N iid geometric random variables (H 1 , . . . , . This gives us the following bound: For r ∈ 2κn log(a) a , 1 2 we have −2r + r 2 ≤ −r and Now let ℓ − be equal to twice the number of back-and-forths: a and r ≤ 2κN log(a) a , we want to show that it is equivalent to Ca −κ for some constant C. First we want to have a good approximation of P 2 N i=1 H i ≥ a|q for large q. Now let H 1 , . . .
,H n be iid exponential random variables of parameter − log(q) such that for every i, i . Now it is easy to show by induction on n that: (−a log(q)) j j! exp(log(q)a). Now we clearly have: We want to show that P l − ≥ a|q and P l − ≥ a − 2N |q are more or less equal. We clearly have: and we also have: First we want to show that we can replace log(q) by −2r. We clearly have log(q) ≤ −2r + r 2 .
We clearly have that g + (a, r) is increasing in r while g − (a, r) is decreasing in r and g + (a, 0) = 1 and g − (a, 0) = 1 − N a N .
So, for any c > 0, we have the following two inequalities: If we take c = a − 3 4 we clearly get when a → ∞, g − (a, a − 3 4 ) → 1 and g + (a, a − 3 4 ) → 1. Furthermore, for any constant c ′ : Therefore we get: So there exists a constant C that only depends on α such that: So we get for some constant C ′ : Now let ℓ be the total time spent in the trap. It is equal to ℓ − plus the number of times the walk enters and exits the trap by the same vertex plus twice the number of times the walk enters and exits the trap by different vertices. This means there exists a constant δ p that only depends on the configuration such that ℓ = ℓ − + δ p . This, in turn, means that we also have the asymptotic equality: Now, let ℓ p i be the time spent in the i th trap with configuration p. First, if κ < 1, by Theorem 3.7.2 of [11] we get that for some constant c p : Now we use the fact that the number of traps of configuration p between two renewal times has a finite expectation C p to show that we have the convergence we want. Let M n,p be the number of traps of configuration p the walk has entered before the n th renewal time. For any ε > 0 and any p we have: P 0 (M n,p ∈ [(C p − ε)n, (C p + ε)n]) → 1.
Therefore for any configuration p: And for any m ∈ N: We write I m (J) for the set of configurations of I m that are in a direction j ∈ J. Now, using the fact that the ℓ p i are non-negative, for any n ∈ N and any ε > 0 small enough, we have: Since it is true for all ε, we get that And since we get: Now if κ = 1, we first want to show that we can neglect the values larger than n log(n). Let p be a configuration, ℓ p i the total time spent in the i th trap encountered in the configuration p, C p the constant such that the number of traps encountered before time τ n+1 − 1 is equivalent to C p n, M n,p the number of traps in the configuration p encountered before the time τ n+1 − 1 and c p the constant such that P 0 (ℓ p i ≥ t) ∼ c p t −1 . We get: =o (1) Now we can compute the expectation and variance of ℓ p i ∧ n log(n): Now for the variance we get: So for n large enough: Var P 0 (ℓ p i ∧ n log(n)) ≤ 4c p n log(n). First, for any constant c, for n large enough: εn log(n) for n large enough, by 2.4 ≤cn 4Var P 0 (ℓ p 1 ∧ n log(n)) (εn log(n)) 2 ≤cn 16c p n log(n) (εn log(n)) 2 This means that we have the following results: Then, by definition of C p we get, for any ε ≥ 0: Then, using the fact that n i=1 ℓ p i ∧ a is increasing in n for any a, we get: Similarly, we have: Therefore, Now we just have to sum over all configurations p ∈ I m that are in a direction j ∈ J to get the result we want.
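The truncation step in the κ = 1 case rests on the truncated mean growing logarithmically when the tail is ∼ c/t. A hedged numerical sketch, with illustrative constants that are not the paper's: for a nonnegative variable with survival function P(ℓ ≥ t) = min(1, c/t), one has E(ℓ ∧ M) = ∫₀^M P(ℓ ≥ t) dt = c + c log(M/c), i.e. growth like c log M, which is where the n log(n) scaling comes from.

```python
import math

# Hedged sketch: trapezoidal approximation of E(l ∧ M) = ∫_0^M min(1, c/t) dt
# for a pure kappa = 1 Pareto-type tail; c and M are illustrative stand-ins.

def truncated_mean(c: float, M: float, n_steps: int = 100_000) -> float:
    total = c  # on [0, c] the survival function equals 1 exactly
    h = math.log(M / c) / n_steps
    prev_t = c
    for k in range(1, n_steps + 1):
        t = c * math.exp(k * h)  # log-spaced grid handles the 1/t decay well
        total += 0.5 * (c / prev_t + c / t) * (t - prev_t)
        prev_t = t
    return total

c, M = 1.0, 1_000_000.0
approx = truncated_mean(c, M)
print(approx)  # close to 1 + log(10^6) ≈ 14.82
```

The same closed form explains the variance bound: second moments of ℓ ∧ M grow linearly in M, giving Var ≲ c·n log(n) after truncation at n log(n).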

Only the time spent in traps matters
Now, to properly show the result we want, we have to show that some quantities and some events are negligible; this is what this section is devoted to.
Let M (n, j) be the number of traps in the direction j encountered between times τ 2 and τ n − 1.
If κ < 1 and κ j = κ, for any ε > 0 there exists ε ′ > 0 such that for n large enough: Proof. Let γ ∈ κ, κ+κ ′ 2 be such that γ ≤ 1. Let β be a positive real. Let {x j i , y j i } be the i th trap visited by the walk in the direction j after time τ 2 such that x j i .e 1 , y j i .e 1 ≥ Y τ 2 .e 1 . Let s j i be its strength, ℓ j i the time spent in this trap and N j i the number of times the trap is visited. By lemma 2.1.2 the number of traps encountered between two renewal times has a finite expectation and since the (M (2i + 1, j) − M (2i, j)) i∈N * are iid and so are the (M (2i + 2, j) − M (2i + 1, j)) i∈N * , there exists a constant C j such that P 0 almost surely: So for any ε > 0, for n large enough: We have for n large enough: for n large enough .
Then by lemma 2.4.1 we have: And finally we have: Then by lemma 2.3.4 we get, for some constant c that does not depend on β: And by lemma 2.3.3 there exists a constant c that does not depend on β such that: So by taking β small enough we get the result we wanted. If κ = 1 there exists a constant C such that P 0 almost surely: If κ < 1 there exists a constant C > 0 and a constant γ ∈ (κ, 1] such that P 0 almost surely: Proof. For any j ∈ J we define κ j = 2 in the direction j the walk enters after time τ 2 and such that x j i .e 1 , y j i .e 1 ≥ Y τ 2 .e 1 . Let N j i be the number of times the walk exits {x j i , y j i } and ℓ j i the time the walk spends in this trap. Let M (i, j) be the number of traps in the direction j entered before time τ i . The (M (2i + 2, j) − M (2i+1, j)) i∈N * are iid and so are the (M (2i+1, j)−M (2i, j)) i∈N * ; they also all have the same law (the only issue is that since a trap spans two vertices, there might be a slight overlap between traps of two different 'renewal slabs'). Now, since the number of different vertices the walk encounters between two renewal times has a finite expectation, the (M (i + 1, j) − M (i, j)) have a finite expectation and therefore there exists a constant C j such that P 0 almost surely: Now let Ỹ be the partially forgotten walk associated with Y . We get that, knowing the environment, the partially forgotten walk and the renewal position Y τ 2 , the time spent in {x j i , y j i } the k th time the walk enters this trap is equal to ε j i,k + 2H j i,k where ε j i,k is 1 if the walk enters and leaves the trap by the same vertex and 2 otherwise, and H j i,k is a geometric random variable that counts the number of back-and-forths. The parameter of H j i,k is p j i := ω(x j i , y j i )ω(y j i , x j i ).
First, let's look at the case κ = 1. Since the are iid and so are the , we just have to prove that their expectation is not infinite to have the result we want. If their expectation were infinite, then we would have that P 0 almost surely: Therefore we would have P 0 almost surely: where s j i is the strength of the trap {x j i , y j i }. Now we get: Now by lemma 2.3.4 we know that there exists a constant C such that for any t ≥ 2: So there exists a constant C ′ (the value of this constant may change from line to line) such that: This means that we cannot have 1 n j∈J N j i → ∞ P 0 almost surely. Therefore the random variables have finite expectation and so do the random variables . So we have the result we want.
If κ < 1, we will basically use the same method. First there exists γ ∈ (κ, 1] such that γ < κ+κ ′ 2 and for every j ∈ J, γ < κ j . We have that: And since: we also have:

Now, since the random variables
are iid and so are the random variables we have that there exists a constant C ∞ ∈ [0, ∞] such that P 0 almost surely: Now, by definition of the C j and since (a + b) γ ≤ a γ + b γ we have that if C ∞ = ∞ then P 0 almost surely: However we have (using the same techniques and notations as in the case κ = 1): Now by the same method as the one for κ = 1, by using lemma 2.3.4 and lemma 2.3.3 we get: This means that C ∞ < ∞ and therefore: Lemma 2.5.3. Let A i 1 ,i 2 ε,n (i) be the event that the walk visits at least two traps of strength at least εn 1 κ between times τ i and τ i+i 1 − 1 and that it enters these traps at most i 2 times. We have that for any i 1 ≥ 1: Proof. Let α := Now let M i 2 (i) be the number of traps visited at most i 2 times before time τ i . We know that: Now, for any η > 0 we have: =o(1) since M (i + i 1 ) − M (i) has a finite expectation. Now let A i be the event "the i th trap visited by the walk is of strength at least εn 1 κ and the walk enters this trap at most i 2 times". We have: Now let (Ỹ n ) n∈N be the partially forgotten walk; by lemma 2.3.1 if s j is the strength of the j th trap visited and N j is the number of times the walk enters the j th trap, there exists a constant D j that only depends on its configuration such that for any B > 2, Let D i 2 be the maximum value of D j exp 5(Z i +2α) 2 we can get for configurations of traps entered at most i 2 times. We get that for any j: We also know that the strengths of the traps are independent, knowing the partially forgotten walk and the equivalence class of the environment for the trap-equivalence relation. Therefore we have, for any η > 0: Now, by taking a sequence (η n ) n∈N * of positive reals such that η n → 0 and such that: we get: Therefore: Lemma 2.5.4. If κ = 1 there exists a constant C such that P 0 almost surely: If κ < 1, there exists a constant C > 0 and a constant β < 1 κ such that P 0 almost surely, for n large enough: Proof. Let m be such that Q m is well defined.
Let (t m i ) i∈N be the times at which X m changes position, with t 0 := 0. We have X m . By definition of X and Y , (E i ) i∈N is a sequence of iid exponential random variables of parameter 1, independent of the walk and the environment.
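The construction of the jump times just described can be sketched in a few lines — a toy stand-in with hypothetical names (the rates below are not the paper's γ_ω^m), assuming the holding time of step i is E_i / rate(Y_i) with (E_i) iid exponential of parameter 1:

```python
import random

# Hedged sketch: jump times (t_i) of a continuous-time walk built from a
# discrete trajectory and per-vertex rates; each holding time is an
# independent Exp(1) variable divided by the rate of the current vertex.

def jump_times(path, rates, rng):
    """Return t_0 = 0 and the successive jump times along `path`."""
    times = [0.0]
    for site in path[:-1]:  # one holding time per site visited before the last
        times.append(times[-1] + rng.expovariate(1.0) / rates[site])
    return times

rng = random.Random(0)
path = [0, 1, 0, 1, 2]             # a toy trajectory
rates = {0: 1.0, 1: 5.0, 2: 1.0}   # vertex 1 is "accelerated": shorter holds
t = jump_times(path, rates, rng)
print(len(t), all(a < b for a, b in zip(t, t[1:])))  # → 5 True
```

Dividing a rate-1 exponential by the local rate is the standard way to realize a site-dependent exponential clock, which is why the (E_i) are independent of walk and environment.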
We will first look at the case κ = 1.
then we have the result we want. On the other hand, if they have an infinite expectation then, since the random variables are non-negative, By the law of large numbers, we get that P 0 almost surely: For any point x, if x is not in a trap then, by definition of traps: This yields: And by writing T m n = t m τn we have: We know by lemma 2.1.1 that there exists a constant d m such that P 0 almost surely: We get: dt.
Finally, if P 0 almost surely: Then P 0 almost surely: And therefore, since Q m 0 is absolutely continuous with respect to P 0 we get that Q m 0 almost surely: So we would have, since which would mean, since Q m 0 is a stationary law: Which is false by lemma 2.2.4 so we get the result we want. Now for the case κ < 1.
Let β ∈ κ, κ+κ ′ 2 be a real such that β ≤ 1. If are iid, we would have that P 0 almost surely: By lemma 3.0.5 we get that there exists a constant C > 0 such that P 0 almost surely: We also have, by writing T m n = t m τn : We know by lemma 2.1.1 that there exists a constant d m such that P 0 almost surely: T m n − d m n → −∞. We get: Finally, if P 0 almost surely And therefore, since Q m 0 is absolutely continuous with respect to P 0 we get that Q m 0 almost surely: So we would have: And therefore: This would mean, since Q m 0 is a stationary law that which is false by lemma 2.2.4. Therefore there exists a constant C > 0 such that P 0 almost surely: So P 0 almost surely for n large enough: And therefore:

Proof of the theorems
Now we can finally prove both theorems.
Theorem 1. Set d ≥ 3 and α ∈ (0, ∞) 2d . Let Y n (t) be defined by: If κ < 1 and d α = 0, there exist positive constants c 1 , c 2 , c 3 such that for the J 1 topology and for P (α) for the M 1 topology and for P (α) 0 : and for the J 1 topology and for P Proof. The proof will be divided into three parts, one for each result. The second and third parts rely on the first part, but they are independent from one another.

First Part
First we will prove that there exists a constant c such that for any t ∈ R + and any increasing sequence (x n ) such that x n → ∞, we have the following convergence in law, for P 0 : The result is obvious for t = 0. For t > 0, lemmas 2.5.4 and 2.5.2 tell us that we only have to consider the time spent in traps in directions j such that κ j = κ. Then lemma 2.4.2 tells us that with probability larger than 1 − ε the time spent in such traps is not more than the time spent in traps where the walk comes back at most m ε times (for some m ε ) plus at most εx 1 κ n . We also know by lemma 2.4.3 that for any m ε there exists a constant c ε such that the time spent in traps where the walk comes back at most m ε times renormalized by x − 1 κ n converges in law (for P 0 ) to c ε t 1 κ S κ , so we get the result we want by having ε go to 0 since c ε is increasing and cannot go to infinity. Since the (τ i+1 − τ i ) i≥1 are iid (for P 0 ) by proposition 1.2.1, we also get that for any sequence (n i ) i∈N * with n i ≥ 1, (ii) for each ε > 0 and η > 0, there exists a δ, 0 < δ < T , and an integer n 0 such that: ∀n ≥ n 0 , P(w fn (δ) ≥ η) ≤ ε and ∀n ≥ n 0 , P(v fn (0, δ) ≥ η) ≤ ε and P(v fn (T, δ) ≥ η) ≤ ε, where w f and v f are defined by: For a sequence of non-decreasing processes (W n ) defined on [0, T ], this characterization is implied by the following: (i) for each positive ε there exists a C such that P(W n (T ) ≥ C) ≤ ε, for n ≥ 1, (ii) for each ε > 0 there exists a δ ∈ (0, T ) such that for n ≥ 1 For the first property, since we know that the sequence x is tight and therefore for any ε > 0 there exists B ε such that: So: Now we will prove the two side conditions (ii.b and ii.c). For (ii.b), we first choose δ such that This proves the result for n large enough and then, since the processes we consider are càdlàg, we decrease δ up to the point where we have the result for small n and we get the result we want.
For (ii.c), the proof will be essentially the same. Since the increments are iid (except for the first one of which we do not know the law) the law of x j ∈ J such that the walk enters the trap at least m ε times is lower than 1 3 εx 1 κ n with probability at least 1 − 1 3 ε. And finally, there exists β ε such that for n large enough, by lemma 2.5.1, with probability at least 1 − 1 3 ε the time spent in traps in direction j ∈ J such that their strength is at most β ε x So now we just have to prove that for δ small enough, with high probability there is no i such that there are at least two traps of strength at least β ε x 1 κ n visited at most m ε times between times τ i and τ i+2δxn − 1. By lemma 2.5.3 we have that for any m ∈ N the probability that there exists i ≤ x n such that there are two traps of strength at least β ε x 1 κ n between times τ i and τ i+m − 1 goes to 0 when n goes to infinity. So let B i be the event: "there exists a trap of strength at least β ε x 1 κ n visited at most m ε times between times τ i and τ i+1 − 1". We define the finite sequence (n i ) by: We also define ñ i by ñ i = sup{j, n j ≤ x i }. First we want to prove that ñ i cannot be too large. We know that there exists a constant C such that if M (n) is the number of different traps in a direction j visited before time τ n then for n large enough: P 0 (M (x n ) ≥ Cx n ) ≤ ε and by lemma 2.3.4 we clearly have that E(ñ n 1 M (xn)≤Cxn ) ≤ cC β κ . Therefore if we take B ≥ cC εβ κ we get that for n large enough, P 0 (ñ n ≥ B) ≤ 2ε. Now we want to show that for δ > 0 small enough, P 0 (∃i ≤ B, n i+1 − n i ≤ 2δx n ) ≤ ε which would yield the desired result. For any i, we have, by proposition 1.2.1: And therefore: We have that there is a constant C such that for n large enough, P 0 (M (2δx n ) ≥ 2Cδx n ) ≤ ε B .
And then by lemma 2.3.4 we have that the expectation of the number of traps of strength at least βx 1 κ n among the first 2δx n traps is lower than 2δx n c β κ xn and therefore for δ small enough, P 0 (∃i ≤ ñ n , n i+1 − n i ≤ 2δx n ) ≤ ε. So we have that the sequence of processes is tight. Now we want to show that its limit is c 1 S κ . Let m be an integer and (y i ) 0≤i≤m be reals such that 0 = y 0 < y 1 < · · · < y m−1 < y m = 1. We have, since the (τ i+1 − τ i ) i≥1 are iid and independent of τ 1 : So we have convergence in the J 1 topology for any increasing sequence x n that goes to infinity.

Second Part
Let L be defined by: And let L n be the renormalized L: We have, by definition of τ and L: ∀n ∈ N * , L(Y τn .e 1 ) = τ n .
Where w f and v f are defined by: First we have: which is smaller than ε for all n, for c large enough. Next, since H n is non-decreasing, we have: Then, we first use the fact that: v Ln (0, δ) ≤ n − 1 κ τ nδ to get that for δ small enough: The bound for v Ln (T, δ) is similar but slightly trickier. We know that for c = (E(Y τ 2 − Y τ 1 ).e 1 ) −1 , P 0 almost surely: 1 n (Y τ cn(T −2δ) .e 1 , Y τ cn(T +δ) .e 1 ) → (T − 2δ, T + δ).
Therefore, using the fact that L n is increasing, with probability going to 1: And we have the result we want for δ small enough and n large enough. So we have that the sequence (L n ) n∈N * is tight. Now we just have to show that its limit is CS κ for some constant C. Set c = (E(Y τ 2 − Y τ 1 ).e 1 ) −1 . We will show that L n (x) is almost equal to τ n (cx) which will yield the result. Set ε > 0 and x ∈ [0, ∞). We want to show that P 0 (|L n (x) − τ n (cx)| ≥ ε) → 0. We will use the following inequality: We clearly have, for any δ > 0 lim sup n→∞ P 0 (L n (x) ≥ τ n (cx + δ)) = 0.
Similarly we get: Therefore the limit of L n is t →CS κ (ct) which is equal to CS κ for some constant C.

Third Part
We will look at a sequence of processes t → τ n (t) such that the law of τ n is the same as n τ ⌊xnt⌋ and such that almost surely τ n → τ in the J 1 topology with the law of τ being that of S κ . We want to show that the law of the inverse of τ n converges to that of the inverse of S κ . This is a direct consequence of lemmas 3.0.6 and 3.0.7. Now if we define L τ (t) by L τ (t) = min{n ∈ N, τ n ≥ t}, we have that in the J 1 topology: for any increasing sequence x n such that x n → ∞. Therefore, for any increasing sequence x n such that x n → ∞: Now by lemma 2.1.3 there exists v ∈ R d such that P 0 almost surely: This means that in the J 1 topology, we have the following convergence (in law): And therefore, in the J 1 topology, Now we will look at (τ n , d n ) where for any n the law of (τ n , d n ) is the same as the law of Yτ ⌊xnt⌋ xn and such that almost surely: Let τ be such that almost surely τ n → τ . Let ∆ [0,A] be the distance associated with the supremum norm on [0, A].
If we look at d τ −1 n (t) where τ −1 n (t) = inf{x, τ n (x) ≥ t} we get: So for any B, ε > 0: We clearly have that when B goes to infinity, P 0 (τ (B) < A) goes to 0 so we have that in the J 1 topology: d n (τ −1 n (t)) → τ −1 (t)v. Since we have that in law (in the following we will write τ (x) instead of τ x for the formulas to stay readable): we get that in the J 1 topology for any increasing sequence x n : Now we only have to show that Y τ (⌊L τ (xnt)⌋) and Y t are almost equal. For every i > 0 let R i be the number of different points visited between times τ i and τ i+1 − 1 and let R 0 be the number of different points visited before time τ 1 − 1 (0 if τ 1 = 0). The (R i ) i∈N are independent and the (R i ) i∈N * are iid with finite expectation by lemma 2.1.2. Let ε > 0 be a constant and let B > 0 be such that for x large enough, . We get that for x large enough: So for any ε > 0 we have that for x large enough: So we get that in the J 1 topology: Since v and d α are collinear, we get the result we want.
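The hitting-time inversion used throughout this part, L_τ(t) = min{n ∈ N, τ_n ≥ t}, can be sketched concretely — a toy stand-in for the renewal times, assuming only that the sequence is nondecreasing:

```python
import bisect

# Minimal sketch of the generalized inverse L_tau(t) = min{n : tau_n >= t}
# for a nondecreasing sequence (tau_n); `tau` below is an illustrative
# stand-in, not the paper's renewal times.

def inverse_process(tau, t):
    """Smallest index n with tau[n] >= t (returns len(tau) if there is none)."""
    return bisect.bisect_left(tau, t)

tau = [0, 2, 3, 7, 20]
print(inverse_process(tau, 7))   # → 3
print(inverse_process(tau, 8))   # → 4
```

Note the characteristic right-continuity of the inverse: a large jump of τ (here from 7 to 20) produces a flat stretch of L_τ, which is exactly how the completely asymmetric jumps of S_κ turn into the flat pieces of its inverse.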
Theorem 2. If $d \ge 3$ and $\kappa = 1$, there exist positive constants $c_1, c_2, c_3$ such that we have the following convergences in probability (for $P_0$): Proof. Let $J = \{j \in [\![1,d]\!] : \kappa_j = \kappa\}$. By lemma 2.5.4 we get that there exists a constant $C$ such that, $P_0$ almost surely: So we only have to look at the time spent in the traps. By lemma 2.5.2 we get that for any $\varepsilon > 0$, for $n$ large enough: Therefore we only have to look at the time spent in traps in a direction $j \in J$. For any trap $\{x, y\}$ let $\tilde N_x$ be the number of times the walk exits the trap $\{x, y\}$; we have $\tilde N_x = \tilde N_y$. Let $\varepsilon > 0$ be a positive constant. By lemma 2.4.2 there exists an $m_\varepsilon$ such that: And by lemma 2.4.3 we get that there is a constant $C_{m_\varepsilon}$ such that: So for $n$ large enough: This means that there exists a constant $C_\infty$ such that: And therefore: $\frac{1}{n\log(n)}\tau_{n+1} \to C_\infty$ in probability.
So we have proved the first part of the theorem. Now, by lemma 2.1.3, we have for some $C > 0$, $P_0$ almost surely: $\frac{Y_{\tau_n}\cdot e_1}{n} \to C$.
So we get the second result. Now for the last result, we define $L_\tau(n) = \min\{i : \tau_i \ge n\}$, so that $\tau_{L_\tau(n)-1} < n \le \tau_{L_\tau(n)}$. We get, for $n$ large enough:
And therefore, using the result of part one: So we get that:
And therefore: The proof of the lower bound is exactly the same: But we have: And therefore: $\frac{\log(n)}{n} L_\tau(n) \to C_\infty^{-1}$.
Now, by lemma 2.1.3, $\frac{Y_i}{L_\tau(i)} \to D$, $P_0$ almost surely, so we get: $\frac{\log(n)}{n} Y_n \to C_\infty^{-1} D$.
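The inversion step in this last part can be summarized as follows (a sketch consistent with the sandwich inequality and the convergence from part one; convergence here is in probability, as in the theorem):

```latex
\tau_{L_\tau(n)-1} < n \le \tau_{L_\tau(n)}
\quad\text{and}\quad
\frac{\tau_m}{m\log(m)} \xrightarrow[m\to\infty]{} C_\infty
\quad\Longrightarrow\quad
\frac{n}{L_\tau(n)\,\log\!\big(L_\tau(n)\big)} \longrightarrow C_\infty,
```

and since $L_\tau(n) \to \infty$ with $\log(L_\tau(n))/\log(n) \to 1$, this yields $\frac{\log(n)}{n} L_\tau(n) \to C_\infty^{-1}$.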

Annex
Lemma 3.0.1. Let $X$ be a non-negative random variable such that $E(X) < \infty$. There exists an increasing, positive, concave function $\varphi$ such that $\varphi(t)$ goes to infinity when $t$ goes to infinity and: $E(X\varphi(X)) < \infty$. Proof. First we show that there exists a non-decreasing, positive function $f : \mathbb{R}^+ \to \mathbb{R}^+$ such that $f(t)$ goes to infinity when $t$ goes to infinity and: $E(Xf(X)) < \infty$.
To do that, we first define the sequence $(t_i)$ by: Now we define $f$ by: We clearly have that $f$ is non-decreasing, positive ($f(t) \ge 2$) and that $f(t)$ goes to infinity when $t$ goes to infinity. As for the expectation, we have: Now we want to find an increasing concave function $\varphi$, bounded above by $f$, such that $\varphi(t)$ goes to infinity when $t$ goes to infinity. To that effect we define the sequences $(a_i)$ and $(b_i)$ by: and we define $\varphi$ by: The function $\varphi$ is continuous and its slope is decreasing, so it is concave.
We now have to prove that $\lim_{t\to\infty} \varphi(t) = \infty$. First we want to show that, for every $i \in \mathbb{N}$, $a_i \le i+1$.
It is obvious for $i \in \{0, 1\}$, and for $i > 1$ we have: Now for the upper bound, we first look at the case where $p \le \frac{1}{2}$: Now we use the fact that $\varphi$ is concave; this gives us, for $t \ge 1$:
Since $\varphi$ is positive, we get:
So we get:
If $p \ge \frac{1}{2}$ we can couple $X$ with a geometric random variable $Y$ of parameter $\frac{1}{2}$ such that almost surely $Y \ge X$ and, since $\Phi$ is increasing: We get the upper bound we wanted. Now we just have to prove that, for any $x \ge 0$, $\frac{1}{2} x\varphi(x) \le \Phi(x) \le x\varphi(x)$. For the upper bound we have: And for the lower bound we have: This function is convex, $f_x(1) = 1 + x$ and $f_x'(1) = (1 + x)\log(1 + x)$, so: By Jensen's inequality, we have: $E(X^\gamma) \le a^\gamma$.
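The displayed definitions in the proof of Lemma 3.0.1 did not survive in this copy. One standard choice consistent with the stated properties of $f$ (non-decreasing, $f \ge 2$, $f \to \infty$, $E(Xf(X)) < \infty$) is the following sketch, where the thresholds $t_i$ are an assumed reconstruction rather than the authors' exact definition:

```latex
t_0 = 0, \qquad
t_{i+1} = \inf\bigl\{\, t \ge t_i + 1 \;:\; E\bigl(X \mathbf{1}_{\{X \ge t\}}\bigr) \le 2^{-i} \,\bigr\},
\qquad
f(t) = 2 + \#\{\, i \ge 1 : t_i \le t \,\},
```

so that $E(Xf(X)) \le 2E(X) + \sum_{i \ge 1} E\bigl(X\mathbf{1}_{\{X \ge t_i\}}\bigr) \le 2E(X) + \sum_{i \ge 1} 2^{-(i-1)} < \infty$, where each $t_{i+1}$ is finite by dominated convergence since $E(X) < \infty$.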
Let $(E_{i,j})_{i,j\in\mathbb{N}}$ be a sequence of iid random variables, independent of $(H_i)$ and following an exponential law of parameter $p$. Now let $Z$ be defined by: There exists a constant $C$ such that, if $N \ge 1$: We also have that there are two constants $c_1, c_2 > 0$ that do not depend on $\gamma$ such that: Proof. First we look at the expectation of $Z$; we get: Now we will look at the variance, but first we need a small result to simplify the notations. For this result, $M$ will be a non-negative random variable and $(X_i)_{i\in\mathbb{N}}$ a sequence of iid real random variables, independent of $M$. We get: Now we can compute the variance of $Z$. First we have: Then we have: So we get, by summing these two equalities: We have assumed that $h \ge \frac{1}{4}$ and $\frac{1}{2} \le q(1-h) \le 1$, therefore we have: For the lower bound, we will use Hölder's inequality: This yields: Now we have $E(Z^2) = \operatorname{Var}(Z) + E(Z)^2$; since $\operatorname{Var}(Z) \le 80\,E(Z)$ and $E(Z) \ge \frac{1}{4}$, we have $\operatorname{Var}(Z) \le 320\,E(Z)^2$ and therefore $E(Z^2) \le 321\,E(Z)^2$, which yields: Lemma 3.0.5. Let $\beta \in [0, 1]$. Let $(N_i)_{i\in\mathbb{N}^*}$ be a sequence of random positive integers and $(A_i)_{i\in\mathbb{N}}$ be a sequence of random finite subsets of $\mathbb{N}$ with the following two properties: Let $(Z_i)_{i\in\mathbb{N}}$ be independent exponential random variables of parameter $1$, independent of $(A_i)$, $(N_i)$.
Then there exists a constant $C > 0$ such that almost surely: Proof. Let $C$ be such that $2C - 2^{1-\beta} > 0$. Let $(n_i)_{i\in\mathbb{N}}$ be the sequence defined by: We have that, if such an $m$ exists and $M$ is such an $m$, then for every $n \ge n_M$, if $j$ is the integer that satisfies $n_j \le n < n_{j+1}$, we have: By lemma 3.0.3, for any $i \in \mathbb{N}^*$: And by Hölder's inequality: