Survival time of random walk in random environment among soft obstacles

We consider a Random Walk in Random Environment (RWRE) moving in an i.i.d.\ random field of obstacles. When the particle hits an obstacle, it disappears with a positive probability. We obtain quenched and annealed bounds on the tails of the survival time in the general $d$-dimensional case. We then consider a simplified one-dimensional model (where transition probabilities and obstacles are independent and the RWRE only moves to neighbour sites), and obtain finer results for the tail of the survival time. In addition, we also study the "mixed" probability measures (quenched with respect to the obstacles and annealed with respect to the transition probabilities, and vice versa) and give results for the tails of the survival time with respect to these probability measures. Further, we apply the same methods to obtain bounds for the tails of hitting times of Branching Random Walks in Random Environment (BRWRE).


1 Introduction and main results
Random walk and Brownian motion among random obstacles have been investigated intensively in the last three decades. For an introduction to the subject, its connections with other areas and an exposition of the techniques used, we refer to the book [8]. Usually, one distinguishes hard obstacles, where the particle is killed upon hitting them, and soft obstacles, where the particle is only killed with a certain probability. A typical model treated extensively in [8] is Brownian motion in a Poissonian field of obstacles. The following questions arise for this model: what is the asymptotic behaviour of the survival time? What is the best strategy of survival, i.e., what is the conditioned behaviour of the particle, given that it has survived until time n? An important role in answering these questions has been played by the concept of "pockets of low local eigenvalues" (again, we refer to [8] for explanations). A key distinction in random media is the difference between the quenched probability measure (where one fixes the environment) and the annealed probability measure (where one averages over the environment).
In this paper, we consider a discrete model with soft obstacles where there are two sources of randomness in the environment: the particle has random transition probabilities (which are assigned to the sites of the lattice in an i.i.d. way and then fixed for all times), and the obstacles are also placed randomly on the lattice, their positions remaining unchanged afterwards. We investigate the tails of the survival time. Similar questions have been asked in [1] for simple random walk. The "pockets of low local eigenvalues" are in our case "traps free of obstacles": these are regions without obstacles, where the transition probabilities are such that the particle tends to spend a long time there before escaping. These regions are responsible for the possibility of a large survival time. We assume that the environments (transition probabilities and obstacles) at all sites are independent, and obtain quenched and annealed bounds on the tails of the survival time in the general d-dimensional case. We then consider a simplified one-dimensional model (where transition probabilities and obstacles are independent and the RWRE only moves to neighbour sites), and obtain finer results for the tail of the survival time. Having two sources of randomness in the environment, we also study the "mixed" probability measures (quenched with respect to the obstacles and annealed with respect to the transition probabilities, and vice versa) and give results for the tails of the survival time with respect to these probability measures. Further, we develop the analogy with branching random walks in random environment (BRWRE) [4, 5], and provide quenched and annealed bounds for hitting times in the BRWRE model.

Now we define the model formally. Denote by e_1, …, e_d the coordinate vectors, and let ‖·‖_1 and ‖·‖_2 stand for the L^1 and L^2 norms in Z^d respectively. The environment consists of the set of transition probabilities ω = (ω_x(y), x, y ∈ Z^d), and the set of variables indicating the locations of the obstacles θ = (θ_x, x ∈ Z^d), where θ_x = 1{there is an obstacle in x}. Let us denote by σ_x = (ω_x(·), θ_x) the environment in x ∈ Z^d; σ = (ω, θ) = (σ_x, x ∈ Z^d) stands for the (global) environment. We suppose that jumps are uniformly bounded by some constant m, which means that ω_x(y) = 0 if ‖x − y‖_1 > m. Let us denote by M the environment space where the σ_x are defined, i.e. We assume that (σ_x, x ∈ Z^d) is a collection of i.i.d. random variables. We denote by P the corresponding product measure, and by E its expectation.
In some cases these assumptions may be relaxed, see Remark 1.3 below. Let p = P[θ_0 = 1].
Having fixed the realization of the random environment σ, we now define the random walk ξ and the random time τ as follows. The discrete-time random walk ξ_n starts from some z_0 ∈ Z^d and moves according to the transition probabilities. Here P_σ^{z_0} stands for the so-called quenched probability (i.e., with fixed environment σ), and we denote by E_σ^{z_0} the corresponding expectation. Usually, we shall assume that the random walk starts from the origin, so that z_0 = 0; in this case we use the simplified notations P_σ, E_σ.
Fix r ∈ (0, 1) and let Z_1, Z_2, … be a sequence of i.i.d. Bernoulli random variables with P_σ[Z_i = 1] = r. Denote by Θ = {x ∈ Z^d : θ_x = 1} the set of sites where the obstacles are placed. Let τ = min{n ≥ 0 : ξ_n ∈ Θ, Z_n = 1}. Intuitively, when the RWRE hits an obstacle, it "disappears" with probability r, and τ is the survival time of the particle.
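As an illustration of the model just defined, here is a minimal simulation sketch for the one-dimensional nearest-neighbour case (m = 1). All function names and parameter values are our own, not from the paper; drawing ω_x uniformly in [ε_0, 1 − ε_0] is an arbitrary choice that satisfies the uniform ellipticity Condition E.

```python
import random

def survival_time(n_max=10_000, p=0.3, r=0.5, eps0=0.1, seed=1):
    """Simulate tau for a 1-d nearest-neighbour RWRE among soft obstacles.

    omega[x] = probability of stepping right from x (Condition E holds by
    construction); theta[x] = 1 with probability p places an obstacle at x;
    on visiting an obstacle the walk disappears with probability r.
    Returns tau, censored at n_max.  Illustrative parameters only.
    """
    rng = random.Random(seed)
    omega, theta = {}, {}
    x = 0
    for t in range(n_max):
        if x not in omega:                      # lazily sample sigma_x
            omega[x] = rng.uniform(eps0, 1 - eps0)
            theta[x] = 1 if rng.random() < p else 0
        if theta[x] and rng.random() < r:       # soft obstacle: die w.p. r
            return t                            # tau = survival time
        x += 1 if rng.random() < omega[x] else -1
    return n_max                                # survived the whole run

tau = survival_time()
```

The environment is sampled lazily (only at visited sites), which keeps the sketch finite even though the model lives on all of Z.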
We shall also consider the annealed probability law P^{z_0} = P × P_σ^{z_0}, and the corresponding expectation E^{z_0} = E E_σ^{z_0}. Again, when the random walk starts from the origin, we use the simplified notations P, E.
Throughout this paper we suppose that the environment σ satisfies the following two conditions.

Condition E. There exists ε_0 > 0 such that ω_x(e) ≥ ε_0 for all e ∈ {±e_i, i = 1, …, d}, P-a.s.
Let ∆_ω = Σ_y y ω(y) be the drift of ω.
Condition N. We have

Condition E is a natural (uniform) ellipticity condition; Condition N is a standard condition for RWRE and ensures that the environment has "traps" (i.e., pieces of the environment, free from obstacles, from which it takes a long time to escape). Let us emphasize that in this paper the term "trap" does not refer to the disappearance of the particle; on the contrary, by "the particle is trapped in some place" we usually mean that the particle stays alive but is likely to remain in that place for a long time. Observe that Condition N implies that the RWRE is (strictly) nestling, i.e., the origin is in the interior of the convex hull of the support of ∆_ω.
Our goal is to study the quenched and annealed tails of the distribution of τ: P_σ[τ > n] and P[τ > n].
First, we formulate the results on the tails of τ in the d-dimensional case under the above assumptions:

and

Theorem 1.2 For d ≥ 1 there exist K^q_i(d) > 0, i = 1, 2, 3, 4 (also with the property K^q_j(1) < 1 for j = 2, 4), such that for P-almost all σ there exists n_0(σ) such that for all n ≥ n_0(σ) we have

and

In fact (as will be discussed in Section 2), the original motivation for the model of this paper came from the study of the hitting times for branching random walks in random environment (BRWRE), see [4]. Theorem 1.1 above has direct counterparts in [4], namely Theorems 1.8 and 1.9. However, the problem of finding upper and lower bounds on the quenched tails of the hitting times was left open in that paper (except for the case d = 1). Now, Theorem 1.2 of the present paper allows us to obtain the analogous bounds also for the model of [4].

The model of [4] can be described as follows. Particles live in Z^d and evolve in discrete time. At each time step, every particle at a site is substituted by (possibly more than one, but at least one) offspring, which are placed in neighbour sites, independently of the other particles. The rules of offspring generation (similarly to the notation of the present paper, they are given by ω_x at site x) depend only on the location of the particle. Similarly to the situation of this paper, the collection ω of those rules (the environment) is itself random; it is chosen in an i.i.d. way before starting the process, and then kept fixed during the whole subsequent evolution of the particle system. We denote by ω a generic element of the set of all possible environments at a given point, and we distinguish ω with branching (the particle can be replaced with several particles) and ω without branching (the particle can only be replaced with exactly one particle). The BRWRE is called recurrent if, for almost all environments, in the process (starting with one particle at the origin) all sites are visited (by some particle) infinitely often a.s.

Using the notations of [4] (in particular, P_ω stands for the quenched probability law of the BRWRE), we have the following

Proposition 1.1 Suppose that the BRWRE is recurrent and uniformly elliptic. Let T(0, x_0) be the hitting time of x_0 for the BRWRE starting from 0. Then, there exist K^q_i(d) > 0, i = 1, 2, 3, 4, such that for almost all environments ω there exists n_0(ω) such that for all n ≥ n_0(ω) we have

Now, let G be the set of all ω without branching. Suppose that it has positive probability and that the origin belongs to the interior of the convex hull of {∆_ω : ω ∈ G ∩ supp P}, where ∆_ω is the drift from a site with environment ω. Suppose also that there is ε_0 such that

i.e., for almost all environments, at any site the particle does not branch with uniformly positive probability. Then

Now, we go back to the random walk among soft obstacles. In the one-dimensional case, we are able to obtain finer results. We assume now that the transition probabilities and the obstacles are independent; in other words, P = µ ⊗ ν, where µ, ν are two product measures governing, respectively, the transition probabilities and the obstacles. We further assume m = 1 and we denote ω

i.e., the RWRE is strictly nestling. Let

Define κ_ℓ = κ_ℓ(p) such that

and κ_r = κ_r(p) such that

Due to Condition N, since 0 < p < 1, κ_ℓ and κ_r are well defined, strictly positive and finite. Indeed, to see this for κ_r, observe that for the function , and f is convex, so the equation f(x) = u has a unique solution for any u > 1. A similar argument implies that κ_ℓ is well defined.
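The displays defining κ_ℓ and κ_r are elided in the text. Assuming, from the convexity argument just given, that κ_r solves an equation of the form E[ρ_0^{κ_r}] = u with some u = u(p) > 1 (we take u = 1/(1 − p) purely for illustration; this choice is our assumption, not the paper's), κ_r can be computed numerically by bisection, since f(x) = E[ρ_0^x] is convex with f(0) = 1:

```python
def mgf(lam, vals, probs):
    # f(lam) = E[rho_0^lam] for a finitely supported rho_0
    return sum(q * v ** lam for v, q in zip(vals, probs))

def kappa_r(vals, probs, p, tol=1e-12):
    # Solve E[rho_0^kappa] = u, with the assumed target u = 1/(1-p) > 1.
    # f is convex, f(0) = 1, and f(x) -> infinity when rho_0 > 1 has positive
    # probability (Condition N), so the positive root is unique; bisection
    # on [0, hi] is valid because a convex function crosses the level u
    # exactly once going upward on that bracket.
    u = 1.0 / (1.0 - p)
    hi = 1.0
    while mgf(hi, vals, probs) < u:   # grow the bracket until f(hi) >= u
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mgf(mid, vals, probs) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# example: rho_0 = 2 or 1/2 with equal probability, so E[rho_0^x] = cosh(x ln 2)
k = kappa_r([2.0, 0.5], [0.5, 0.5], p=0.5)
```

For this symmetric example the equation reduces to cosh(κ ln 2) = 2, i.e., κ = arccosh(2)/ln 2 ≈ 1.9.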
We are now able to characterize the quenched and annealed tails of τ in the following way:

In our situation, besides the quenched and the annealed probabilities, one can also consider two "mixed" ones: the probability measure P^z_ω = ν × P^z_σ, which is quenched in ω and annealed in θ, and the probability measure P^z_θ = µ × P^z_σ, which is quenched in θ and annealed in ω. Again, we use the simplified notations P_ω = P^0_ω, P_θ = P^0_θ. Let

Due to (8) we have β_0 < 1/2, β_1 < 1/2. Then, we have the following results about the "mixed" probabilities of survival:

where

Theorem 1.6 For d = 1, we have:

(i) If E(ln ρ_0) = 0, then, for each ε > 0, there exist sequences of positive random variables R^ε_n(ω), R_n(ω) and constants K_1, K_2 such that for µ-almost all ω,

These random variables have the following properties: there exists a family of nondegenerate random variables in law as n → ∞. Also, we have Ξ^(ε) → Ξ^(0) in law as ε → 0.
where κ is such that

Remark 1.1 In fact, a comparison with Theorems 1.3 and 1.4 of [7] suggests that

and, in particular, for µ-almost all ω and some positive constants

lim inf

However, the proof of (19)-(21) would require a lengthy analysis of fine properties of the potential V (see Definition 3.1 below), so we decided that it would be unnatural to include it in this paper.
Remark 1.2 It is interesting to note that r only enters the constants, not the exponents, in all these results.
Remark 1.3 In fact, the proofs of (2) and (4) do not really use independence (and can be readily extended to the finitely dependent case), but we will use independence for the proofs of the lower bounds in (30). However, if one modifies Condition N in a suitable way, we conjecture that Theorems 1.1 and 1.2 remain true if the environment is not i.i.d. but finitely dependent (for BRWRE, one can find generalizations of this kind in [5]).
2 Proofs: multi-dimensional case

In this section, we prove Theorems 1.1 and 1.2. In fact, the ideas we need to prove these results are similar to those in the proofs of Theorems 1.8 and 1.9 of [4]. In the following, we explain the relationship of the discussion in [4] with our model, and give the proofs of Theorems 1.1 and 1.2, sometimes referring to [4] for a more detailed account.
Proof of Theorems 1.1 and 1.2. The proof of (2) follows essentially the proof of Theorem 1.8 of [4], where it is shown that the tail of the first hitting time of some fixed site x_0 (one may also think of the first return time to the origin) can be bounded from above as in (2). The main idea is that, as a general fact, for any recurrent BRWRE there are so-called recurrent seeds. These are simply finite configurations of the environment where, with positive probability, the number of particles grows exponentially without help from outside (i.e., suppose that all particles that step outside this finite piece are killed; then the number of particles in the seed dominates a supercritical Galton-Watson process, which explodes with positive probability). Then, we consider an embedded RWRE, until it hits a recurrent seed and the supercritical Galton-Watson process there explodes (afterwards, the particles created by this explosion are used to find the site x_0, but this part is only needed for Proposition 1.1).
So, going back to the model of this paper, obstacles play the role of recurrent seeds, and the moment τ when the event {ξ_n ∈ Θ, Z_n = 1} happens for the first time is analogous to the moment of the first explosion of the Galton-Watson process in the recurrent seed. To explain this analogy better, consider the following situation. Suppose that, outside the recurrent seeds, there is typically a strong drift in one direction and the branching is very weak or absent. Then the qualitative behaviour of the process is quite different before and after the first explosion. Before, we typically observe very few (possibly even one) particles with more or less ballistic behaviour; after, the cloud of particles starts to grow exponentially in one place (in the recurrent seed where the explosion occurs), and so the cloud of particles expands linearly in all directions. So, the first explosion of one of the Galton-Watson processes in the recurrent seeds marks the transition between qualitatively different behaviours of the BRWRE, and thus it is analogous to the moment τ of the model of the present paper.
First, we prove (2) for d ≥ 2. For any a ∈ Z, define

and define the event

M_n = {σ : for any y ∈ K_{mn} there exists z ∈ Θ such that ‖y − z‖_1 ≤ α ln n}

(recall that m is the constant such that ω_x(y) = 0 if ‖x − y‖_1 > m, introduced in Section 1). Clearly, we have for d ≥ 2

Now, suppose that σ ∈ M_n. Then, for any possible location of the particle up to time n, we can find a site with an obstacle which is not more than α ln n away from that location (in the sense of the L^1 distance). This means that, on any time interval of length α ln n, the particle will disappear (i.e., τ is in this interval if the particle has not disappeared before) with probability at least rε_0^{α ln n}, where ε_0 is the constant from the uniform ellipticity condition. There are n/(α ln n) such (disjoint) intervals in the interval [0, n], so

Then, from (22) and (23) we obtain (recall that α ln ε_0^{−1} < 1)

and hence (2).
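For the reader's convenience, the routine step behind this bound can be spelled out (this is our own rendering of the computation, consistent with the quantities named in the text):

```latex
\mathtt{P}_\sigma[\tau > n]
  \;\le\; \bigl(1 - r\varepsilon_0^{\alpha\ln n}\bigr)^{n/(\alpha\ln n)}
  \;\le\; \exp\Bigl(-\frac{r}{\alpha\ln n}\, n^{\,1-\alpha\ln\varepsilon_0^{-1}}\Bigr),
```

using $\varepsilon_0^{\alpha\ln n} = n^{-\alpha\ln\varepsilon_0^{-1}}$ and $1-x \le e^{-x}$; the requirement $\alpha\ln\varepsilon_0^{-1} < 1$ is exactly what makes the exponent a positive power of $n$.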
Let us now prove (4), again for the case d ≥ 2. Abbreviate by

the volume of the unit ball in R^d with respect to the L^1 norm, and let q = P[θ_0 = 0] = 1 − p. Choose α large enough so that ℓ_d α^d ln q^{−1} > d + 1, and define

By a straightforward calculation, we obtain for d ≥ 2

Using the Borel-Cantelli lemma, (24) implies that for P-almost all σ there exists n_0(σ) such that σ ∈ M_n for all n ≥ n_0(σ).
Consider now an environment σ ∈ M_n. In such an environment, in the L^1-ball of radius n around the origin, any L^1-ball of radius α ln^{1/d} n contains at least one obstacle (i.e., a point from Θ). This means that, in any time interval of length α ln^{1/d} n, the particle will disappear with probability at least r ε_0^{bα ln^{1/d} n}, where, as before, ε_0 is the constant from the uniform ellipticity Condition E. There are

Now, we obtain (2) and (4) in the one-dimensional case. Since the environment is i.i.d., there exist γ_1, γ_2 > 0 such that for any interval I ⊂ Z,

We say that an interval I is nice if it contains at least γ_1|I| sites from Θ. Define

It is straightforward to obtain from (25) that there exists C_5 > 0 such that

In particular, h(σ) is finite for P-almost all σ. Now, define the event

the random walk completely covers a space interval of the same length with probability at least

Assume that h(σ) < ln n, and consider such an interval successful if the random walk completely covers a space interval of the same length: we then have 2n ln ε_0^{−1}/ln n independent trials with success probability at least n^{−1/2}, and then one can use Chernoff's bound for the Binomial distribution (see e.g. inequality (34) of [4]). Hence we obtain for such σ

Let us define the sequence of stopping times t_k, k = 0, 1, 2, …, as follows: t_0 = 0 and

Defining also the sequence of events

by Condition E we have

where F_k is the sigma-algebra generated by D_0, …, D_k.
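The Chernoff bound invoked here is the standard one (we state the generic form, not the specific inequality (34) of [4]): for X ~ Binomial(N, s) and 0 < δ < 1,

```latex
P\bigl[X \le (1-\delta)Ns\bigr] \;\le\; \exp\Bigl(-\frac{\delta^2 Ns}{2}\Bigr),
```

so with $N = 2n\ln\varepsilon_0^{-1}/\ln n$ trials and success probability $s \ge n^{-1/2}$, taking $\delta = 1/2$ gives fewer than $Ns/2$ successes with probability at most $\exp(-Ns/8) = \exp(-c\, n^{1/2}/\ln n)$.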
Observe that, for σ with h(σ) < ln n,

Thus, by (28),

and we obtain (4) from (27) and (29) (notice that in the one-dimensional case, the right-hand side of (4) is of the form exp(−K^q_1(1) n^{1−K^q_2(1)})). Then, the annealed upper bound (2) for d = 1 follows from (4) and (26).

Now, let us prove the lower bound (3). This time, we proceed as in the proof of Theorem 1.9 of [4]. Denote by S^{d−1} = {x ∈ R^d : ‖x‖_2 = 1} the unit sphere in R^d, and, recalling (1), let ∆_ω be the drift at the point (ω, θ) ∈ M. One can split the sphere S^{d−1} into a finite number (say, m_0) of non-intersecting subsets U_1, …, U_{m_0} and find a finite collection Γ_1, …, Γ_{m_0} ⊆ M having the following properties: for all i = 1, …, m_0,

(i) θ = 0 for all σ = (ω, θ) ∈ Γ_i,

(ii) there exists p_1 > 0 (depending only on the law of the environment)

(iii) there exists a_1 > 0 such that for any z ∈ U_i and any σ = (ω, θ)

Intuitively, this collection will be used to construct (large) pieces of the environment which are free of obstacles (item (i)) and have the drift pointing towards the centre of the corresponding region (item (iii)). The cost of constructing a piece of environment of size N (i.e., containing N sites) with such properties does not exceed p_1^N (item (ii)). Consider any z ∈ Z^d, B ⊂ Z^d and a collection H = (H_x ⊆ M, x ∈ B); let us define

S(z, B, H) = {σ : σ_{z+x} ∈ H_x for all x ∈ B}.
In [4], on S(z, B, H) we said that there is a (B, H)-seed at z; for the model of this paper, however, we prefer not to use the term "seed", since the role seeds play in [4] is quite different from the use of environments belonging to S(z, B, H) here. Take G^{(n)} = {y ∈ Z^d : ‖y‖_2 ≤ u ln n}, where u is a (large) constant to be chosen later. Let us define the sets H

As in [4] (see the derivation of (42) there), we obtain that there exist C_9, C_10 such that for all σ ∈ S(0, G^{(n)}, H^{(n)}) we have

So, choose u > 1/C_10; then, on the event that σ ∈ S(0,

(if the random walk hits the origin at least n times before hitting

is free of obstacles, we obtain (3) from (30) and (32).

Now, it remains to prove (5). Define

and let Ĥ^{(n)} = (Ĥ^{(n)}_x, x ∈ Ĝ^{(n)}) be defined in the same way as H^{(n)} above, but with Ĝ^{(n)} instead of G^{(n)}. Analogously to (30), we have

Choose v in such a way that b_0 := (2v)

Then, it is not difficult to obtain (by dividing

Using the Borel-Cantelli Lemma, P-a.s. for all n large enough we have

Denote by T_B the first hitting time of a set B ⊂ Z^d:

and write T_a = T_{a} for one-point sets. Next, for σ ∈ S(0, Ĝ^{(n)}, Ĥ^{(n)}) we are going to obtain an upper bound for q_x := P

To do this, note that there are positive constants C_13, C_14 such that, abbreviating B_0 = {x ∈ Z^d : ‖x‖_2 ≤ C_13}, the process exp(C_14 ‖ξ_{m∧T_{B_0}}‖_2) is a supermartingale (cf. the proof of Theorem 1.9 in [4]), i.e.,

For any x ∈ Ĝ^{(n)} and y ∈ Z^d \ Ĝ^{(n)}, we have ‖x‖_2 ≤ ‖y‖_2 − 1, so e^{C_14‖x‖_2} ≤ e^{−C_14} e^{C_14‖y‖_2}. Keeping this in mind, we apply the Optional Stopping Theorem to obtain that, for any σ ∈ S(0, Ĝ^{(n)}, Ĥ^{(n)}),

so q_x ≤ e^{−C_14} for all x ∈ Ĝ^{(n)}.
Now, from any y ∈ Ĝ^{(n)} \ G^{(n)} the particle can come to G^{(n)} in a fixed number of steps (at most √d + 1) with uniformly positive probability. This means that, on S(0, Ĝ^{(n)}, Ĥ^{(n)}), there exists a positive constant C_15 such that for all x ∈ Ĝ^{(n)}

Then, analogously to (31), on S(0, Ĝ^{(n)}, Ĥ^{(n)}) we obtain that, for all y such that ‖y‖_1 ≤ m,

So, using (35), on S(0, Ĝ^{(n)}, Ĥ^{(n)}) we obtain that there are C_17 and C_18 such that for all x ∈ Ĝ^{(n)}

Then, we use the following survival strategy of the particle (see Figure 1): provided that the event S(z, Ĝ^{(n)}, Ĥ^{(n)}) occurs for some z ∈ K_{√n}, first the particle walks there (using, for instance, the shortest possible path) without

, so

, and this gives us (5).
Proof of Proposition 1.1. Let us explain how to obtain Proposition 1.1.
To prove (6), we proceed as in the proof of (4). As noted at the beginning of this section, the disappearance of the particle in an obstacle is analogous to starting an exploding Galton-Watson process in a recurrent seed. Denote by T the moment when this happens, i.e., at least e^{C_19 k} new particles are created in this recurrent seed by time T + k. Thus, one can obtain a bound of the form

Then, using the uniform ellipticity, it is straightforward to obtain that, after waiting C_22 n more time units (with C_22 large enough), one of the newly created (in this recurrent seed) particles will hit x_0 with probability at least 1 − e^{−C_23 n}, and this implies (6).
To show (7), we note that, analogously to the proof of (5), we are able to create a seed free of branching sites, of diameter C_24 ln^{1/d} n, which lies at distance O(√n) from the origin. Then, the same idea works: the initial particle goes straight to the seed without creating new particles, and then stays there up to time n. The detailed proof goes along the lines of the proof of (5), with only notational adjustments.

3 Preliminaries
We define the potential, which is a function of the transition probabilities. Under our assumptions it is a sum of i.i.d. random variables. Recall (9).

Definition 3.1 Given the realization of the random environment, the potential V is defined by

Definition 3.2 We say that there is a trap of depth h located at [x − b_1 ln n, x + b_2 ln n] with the bottom at x if V(y)

Note that we actually require the depth of the trap to be at least h. We say that the trap is free of obstacles if, in addition,

Proof of Lemma 3.1. Note that the part (b_1 + b_2) ln(1 − p) in (40) corresponds to the probability that the interval [x − b_1 ln n, x + b_2 ln n] is obstacle-free.
For notational convenience, we often omit integer parts and write, e.g., b_1 ln n instead of its integer part. Using Chebyshev's inequality, we have for λ > 0

Thus, we obtain

So,

and

To show (39) and (40), we now have to obtain the corresponding lower bounds. To this end, note first that, by Cramér's Theorem,

(recall that we treat b_2 k as an integer). Define S_ℓ = Σ_{i=1}^ℓ ln ρ_i, and, for j

We have (recall that h > 0)

Then, one obtains (39) and (40) from (41) and the corresponding statement with b_1 instead of b_2 and 1/ρ_i instead of ρ_i.
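Since the displays of Definitions 3.1 and 3.2 are elided, here is a small numeric sketch under one natural reading of them: V(x) is the partial sum of ln ρ_i, and the depth of a trap on [a, c] is the smaller of the two potential barriers enclosing the bottom b (the minimizer of V). Both conventions are our assumptions, chosen to match the surrounding text.

```python
import math

def potential(rho):
    # One reading of Definition 3.1: V(0) = 0 and V(x) = sum_{i<=x} ln rho_i.
    V = [0.0]
    for r in rho:
        V.append(V[-1] + math.log(r))
    return V

def trap_depth(V, a, c):
    # Depth of the trap on [a, c] (assumed convention): the bottom b is the
    # first minimizer of V, and the depth is the smaller of the maximal
    # potential values on [a, b] and on [b, c], measured above V(b).
    b = min(range(a, c + 1), key=lambda y: V[y])
    left = max(V[y] for y in range(a, b + 1))
    right = max(V[y] for y in range(b, c + 1))
    return min(left, right) - V[b]

# hand-made potential: a valley with barriers 2 (left) and 3 (right) -> depth 2
assert trap_depth([2.0, 1.0, 0.0, 1.0, 3.0], 0, 4) == 2.0
```

A walk confined to such a valley escapes in time of order e^{depth}, which is the mechanism behind the confinement estimates of Lemmas 3.4 and 3.5.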

Lemma 3.2 We have
Proof. By (37) and (38), it holds that

In the same way, one proves

To show (43), note that taking

and an elementary calculation shows that for all b ∈ (0, ∞) the function g_b is concave (indeed, by the Cauchy-Schwarz inequality, for any positive λ_1, λ_2 we obtain

ln E ρ

and so Lemma 3.2 is proved.
Proof. Recall (37) and Lemma 3.1, and keep in mind that the obstacles are independent from the transition probabilities. We will show that

inf

By (39), this implies that

Take ε < γ/κ and choose b_1, b_2 such that for all n large enough

(here we use (45)). Divide the interval [0,

Lemma 3.3 now follows from the Borel-Cantelli lemma.

Now, to prove (44), assume that E(ln ρ_0) < 0, so that κ is such that E(ρ_0^κ) = 1 (the case E(ln ρ_0) > 0 follows by symmetry). Using Jensen's inequality,

ln E

but here we can follow verbatim the proof of (43) from Lemma 3.2 with p = 0.
Next, we need to recall some results about hitting and confinement (quenched) probabilities for one-dimensional random walks in random environment. Obstacles play no role in the rest of this section. For the proofs of these results, see [3] (Sections 3.2 and 3.3) and [6] (Section 4).
Let I = [a, c] be a finite interval of Z with a potential V defined as in Definition 3.1 and without obstacles. Let b be the first point with minimum potential, i.e.,

Let us introduce the following quantities (which depend on ω):

First, we need an upper bound on the probability of confinement in an interval up to a certain time:

Proof. See Proposition 4.1 of [6] (in fact, Lemma 3.4 is a simplified version of that proposition, since here, due to Condition E, the potential has bounded increments).
Next, we obtain a lower bound on the confinement probability in the following lemma. Then, there exist Υ_3, Υ_4 > 0 such that for all u ≥ 1 and x ∈ (a, c)

Proof. See Proposition 4.3 of [6].
Let us emphasize that the estimates in Lemmas 3.4 and 3.5 are valid for all environments satisfying the uniform ellipticity Condition E. An additional remark is due about the usage of Lemma 3.5 (the lower bound for the quenched probability of confinement). Suppose that there is a trap of depth H on an interval [a, c], b being the point with the lowest value of the potential. Suppose also that a′ has maximal potential on [a, b] and c′ has maximal potential on [b, c]. Then, for any x ∈ (a′, c′), it is straightforward to obtain a lower bound for the probability of confinement in the following way: write

, and then use Lemma 3.5 for the second term. This reasoning will usually be left implicit when we use Lemma 3.5 in the rest of this paper.
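The display elided after "write" is presumably a strong Markov decomposition of the following shape (our reconstruction; the stopping-time notation T is the one introduced in Section 2):

```latex
P^x_\sigma\bigl[\xi_t \in [a,c]\ \forall t \le n\bigr]
  \;\ge\; P^x_\sigma\bigl[T_b < T_{a'} \wedge T_{c'}\bigr]\,
          P^b_\sigma\bigl[\xi_t \in (a',c')\ \forall t \le n\bigr],
```

where the first factor (reaching the bottom $b$ before leaving $(a',c')$) is bounded below by $\varepsilon_0^{c-a}$ via Condition E, and Lemma 3.5 applies to the second factor.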

4 Proofs of Theorems 1.3-1.6
Proof of Theorem 1.3. By Lemma 3.1 and Lemma 3.5, we have for all b_1, b_2 ∈ (0, ∞) and any ε > 0, for all n large enough,

Thus, recalling that

Let us now obtain an upper bound on P[τ > n]. Fix n, β > 0, 0 < δ < 1. We say that the environment σ is good if the maximal (obstacle-free) trap depth is less than

For any ε > 0 we obtain that, for all large enough n,

Thus, for such an interval [a, b], on the event {σ is good}, Lemma 3.4 (with u = n^{δ/2}) implies that for any x ∈ [a, b] we have

Then, by (48), on the event {σ is good}, we have (denoting by X a random variable with Binomial(n^{δ/2}, 1 − exp(−n^{1−δ/2}/(16Υ_1 ln^{4+4β} n))) distribution)

To explain the last term in the second line of (49): for the event {τ > n} ∩ G^c, split the time into n^{δ/2} intervals of length n^{1−δ/2}. By (48), on each such interval we have a probability of at least 1 − exp(−n^{1−δ/2}/(16Υ_1 ln^{4+4β} n)) to hit an obstacle. Let X′ count the number of time intervals where that happened. Then, clearly, X′ dominates X.
So, by (47) and (49), we have

Together with (46) and Lemma 3.2, this implies (12).

Let us show that such b_1, b_2 actually exist, that is, that the infimum in (38) is attained. For that, one may reason as follows. First, since ψ ≥ 0, for any M_0 there is

Then, it is clear that for any fixed h > 0 we have lim_{b_2↓0} sup_{λ>0}(λh − b_2 E ρ_0^λ) = +∞, so (together with the analogous fact for 1/ρ_0) we obtain that for any M_0 there exists ε > 0 such that if min{b_1, b_2} < ε then (50) holds. Thus, one may suppose that in (38) b_1 and b_2 vary over a compact set, and so the infimum is attained.
Proof of Theorem 1.5. Denote

where F^e was defined in (15). We shall see that F^e k is the maximal possible depth of a trap located at [a, c] with c − a ≤ k. Let B_{n,α} be the event that in the interval [−n^γ, n^γ] there is (at least) one interval of length at least α ln n which is free of obstacles, and let I_n(θ) be the biggest such interval. For any α < γ|ln(1 − p)|^{−1}, the event B_{n,α} happens a.s. for n large enough. Take an interval I = [a, c] ⊂ I_n(θ) such that c − a = ⌊α ln n⌋. For small δ denote

So, U_δ implies that on the interval [a, c] there is a trap of depth (F^e − δ′)α ln n, free of obstacles, where δ′ → 0 as δ → 0. Note that

For ω ∈ U_δ, using Lemma 3.5, we obtain

Now,

To obtain the other bound, we fix α > γ|ln(1 − p)|^{−1}. On the event B^c_{n,α}, in each of the intervals [−n^γ, 0), [0, n^γ] there are at least n^γ(α ln n)^{−1} obstacles. Since F^e k is the maximal depth of a trap on an interval of length k, the environment ω in the interval [−n^γ, n^γ] satisfies, a.s. for large n, that the depth of any trap free of obstacles is at most F^e α ln n. Thus, by Lemma 3.4, for any δ > 0, the probability that the random walk stays in a trap of depth F^e α ln n for at least the time exp((F^e α + δ) ln n) is at most e^{−C_1 n^{δ/2}}. We proceed similarly to (53). Consider the event A = {ξ_t ∉ [−n^γ, n^γ] for some t ≤ n}.

Proof of (i).
To obtain a lower bound, we just observe that, by Lemma 3.5, when the random walk is in a trap of depth ln n, it will stay there up to time n with a probability bounded away from 0 (say, by C_1). Further, with ν-probability (1 − p)^{R_n(ω)+1} there will be no obstacles in the interval [0, R_n(ω)]. Thus, by the uniform ellipticity,

To obtain an upper bound, we say that θ is k-good if |Θ ∩ [−j, 0]| ≥ pj/2 and |Θ ∩ [0, j]| ≥ pj/2 for all j ≥ k. Then,

By Lemma 3.4, for all large enough n,

for all t ≤ n] ≤ e^{−C_3 n^{ε/2}},

and so on the event {θ is R^ε_n(ω)-good} we have

Together with (58), this proves part (i), since it is elementary to obtain that R^ε_n is subpolynomial for almost all ω. Indeed, as discussed in Remark 1.1, a comparison to [7] suggests that, for C_4 large enough, R^ε_n(ω) ≤ C_4 ln^2 n ln ln ln n for all but finitely many n, µ-a.s. Anyhow, one can obtain a weaker result which is still enough for our needs: R^ε_n(ω) ≤ ln^4 n, µ-a.s., for all but finitely many n, with the following reasoning: any interval of length ln^2 n contains a trap of depth at least ln n with constant probability. So, dividing [−ln^4 n, ln^4 n] into subintervals of length ln^2 n, one can see that µ[ω : R^ε_n(ω) ≥ ln^4 n] ≤ e^{−C_5 ln^2 n} and use the Borel-Cantelli lemma.

Proof of (ii). Take a = κ/(κ + 1) and 0 < ε < a/κ. In this case, by Lemma 3.3, there is a trap of depth (a/κ − ε) ln n on the interval [0, n^a), a.s. for all n large enough. Using Lemma 3.5, when the random walk enters this trap, it stays there up to time n with probability at least (1/(2n^a)) exp(−Υ_3 ln(2n^a) n^{1−a/κ+ε}). Further, the probability that the interval [0, n^a) is free of obstacles is (1 − p)^{n^a}, and we obtain

To prove the corresponding upper bound, we proceed analogously to the proof of (12). We say that the obstacle environment θ is good (for fixed n),

This concludes the proof of Theorem 1.6.

where a = γ_2/4
during any time interval of length ln n/(2 ln ε_0^{−1}), so that all the intervals of length at least ln n/(2 ln ε_0^{−1}) intersecting [−n^a, n^a] are nice. On the event F, the random walk visits the set Θ at least O(n^{1/2}/ln n) times with probability at least 1 − exp(−C_6 n^{1/2}/ln n). Indeed, split the time into 2n ln ε_0^{−1}/ln n intervals of length ln n/(2 ln ε_0^{−1})

Figure 1: The (quenched) strategy of survival used in the proof of (5)

the event that there is a trap of depth h ln n, located at [x − b_1 ln n, x + b_2 ln n] with the bottom at x. Let also A_x(h, b_1, b_2, n) be the event that there is a trap of depth h ln n, free of obstacles, located at [x − b_1 ln n, x + b_2 ln n] with the bottom at x. For any j,

P[S_{b_2 k} ≥ hk, S^{(j)}_ℓ ≥ 0 for all ℓ = 1, …, b_2 k] ≥ P[S_{b_2 k} ≥ hk, there exists j such that S^{(j)}_ℓ ≥ 0 for all ℓ = 1, …, b_2 k] = P[S_{b_2 k} ≥ hk],

since if Σ_{i=1}^{b_2 k} ln ρ_i ≥ hk, then choosing j in such a way that S_j ≤ S_ℓ for all ℓ = 1, …, b_2 k, it is straightforward to obtain that S^{(j)}_ℓ ≥ 0 for all ℓ = 1, …, b_2 k. Hence

P[S_{b_2 k} ≥ hk, S_ℓ ≥ 0 for all ℓ = 1, …, b_2 k] ≥ (1/(b_2 k)) P[S_{b_2 k} ≥ hk],

which permits us to obtain a lower bound on

Lemma 3.5 Suppose that a < b < c, that c has maximal potential on [b, c], and that a has maximal potential on [a, b].