On the Domination of Random Walk on a Discrete Cylinder by Random Interlacements

We consider simple random walk on a discrete cylinder with base a large d-dimensional torus of side-length N, when d is two or more. We develop a stochastic domination control on the local picture left by the random walk in boxes of side-length almost of order N, at certain random times comparable to the square of the number of sites in the base. We show a domination control in terms of the trace left in similar boxes by random interlacements in the infinite (d+1)-dimensional cubic lattice at a suitably adjusted level. As an application we derive a lower bound on the disconnection time of the discrete cylinder, which as a by-product shows the tightness of the laws of the ratio of the square of the number of sites in the base to the disconnection time. This fact had previously only been established when d is at least 17, in arXiv: math/0701414.


Introduction
The present article relates random walk on a discrete cylinder with base a d-dimensional torus, d ≥ 2, of large side-length N to the model of random interlacements recently introduced in [13]. It develops a stochastic domination control on the trace left by the random walk in boxes of side-length of order N^{1-ε} in the cylinder at times which are comparable to N^{2d}, in terms of the trace left by random interlacements at a suitably adjusted level in a box of Z^{d+1} with the same side-length. As an application of this stochastic domination control and of estimates from [11] on the percolative character of the vacant set left by random interlacements at a small level u, we derive a lower bound on the disconnection time T_N of the discrete cylinder by simple random walk. In particular our bounds imply that the laws of the variables N^{2d}/T_N are tight, for all d ≥ 2. This result was previously only known to hold when d ≥ 17, cf. [3]. Combined with the upper bounds of [15], this shows that for all d ≥ 2, "T_N lives in scale N^{2d}".
We will now present the objects of study more precisely. For d ≥ 2 and N ≥ 1, we consider the discrete cylinder (0.1) E = T × Z, where T = (Z/NZ)^d. For x in E we denote with P_x, resp. P, the canonical law on the space T of nearest-neighbor E-valued trajectories, of the simple random walk on E starting at x, resp. with the uniform distribution on T × {0}. We write X. for the canonical process and Y. and Z. for its respective T and Z components.
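To fix ideas, the simple random walk on E = (Z/NZ)^d × Z just described can be simulated in a few lines. The sketch below is purely illustrative and not part of the argument; the function name and parameters are ours. Each step moves one of the d + 1 coordinates by ±1, the torus coordinates wrapping modulo N while the Z-coordinate is free.

```python
import random

def srw_on_cylinder(N, d, steps, rng=random.Random(0)):
    """Simple random walk on E = (Z/NZ)^d x Z, started on T x {0}.

    Each step picks one of the d + 1 coordinates uniformly and moves it
    by +/-1; torus coordinates wrap modulo N, the Z-coordinate is free.
    """
    y = [rng.randrange(N) for _ in range(d)]  # torus component Y
    z = 0                                     # vertical component Z
    path = [tuple(y) + (z,)]
    for _ in range(steps):
        i = rng.randrange(d + 1)
        s = rng.choice((-1, 1))
        if i < d:
            y[i] = (y[i] + s) % N             # wrap around the torus
        else:
            z += s                            # free vertical motion
        path.append(tuple(y) + (z,))
    return path

path = srw_on_cylinder(N=5, d=2, steps=1000)
```

The returned list of (d + 1)-tuples is the trajectory; its Z-component is the one-dimensional walk whose excursions drive the analysis below.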
Another important ingredient is the model of so-called random interlacements at level u ≥ 0 introduced in [13]. They describe the trace on Z^{d+1} (where d + 1 in the present article plays the role of d in [13]) left by a cloud of paths constituting a Poisson point process on the space of doubly infinite trajectories on Z^{d+1} modulo time-shift, tending to infinity at positive and negative infinite times. We refer to Section 1 for precise definitions. The non-negative parameter u essentially corresponds to a multiplicative factor of the intensity measure of this point process. In a standard fashion one constructs on the same space (Ω, A, P), see (1.14), (1.20), the family I^u, u ≥ 0, of random interlacements at level u. They are the traces on Z^{d+1} of the trajectories modulo time-shift in the cloud which have labels at most u. The random subsets I^u increase with u, and for u > 0 constitute infinite random connected subsets of Z^{d+1}, ergodic under space translations, cf. Theorem 2.1 and Corollary 2.3 of [13]. The complement V^u of I^u in Z^{d+1} is the so-called vacant set at level u.
Our main result establishes a stochastic domination control on scales of order N^{1-ε}, 0 < ε < 1, of the local picture left by simple random walk on the cylinder E at certain random times, in terms of the corresponding trace of a random interlacement I^v, at a suitably adjusted level v. More precisely, given a height z ∈ Z in the cylinder, we consider the sequence R^z_k, D^z_k, k ≥ 1, of successive return times of the vertical component of the walk to an interval of length of order N centered at z and departures from a concentric interval of length of order N(log N)^2, cf. (1.10). We show in the main Theorem 1.1 that for 0 < ε < 1, α > 0, v > (d + 1)α, for large N, given any x = (y, z) in E, we can construct a probability Q on some auxiliary space coupling the simple random walk on E under P with the random interlacements on Z^{d+1} under P, so that (0.2) holds, cf. (1.24), where K has order αN^{d-1}(log N)^{-2}, A is a box centered at the origin with side-length of order N^{1-ε} (viewed both as a subset of E and of Z^{d+1}), and c is a dimension-dependent constant.
When z has size of order at most N^d, the random times D^z_K which appear in (0.2) have typical order of magnitude N^{2d}, cf. Proposition 7.1 and Remark 7.2. As an application of the main Theorem 1.1 we derive a lower bound on the disconnection time T_N of the discrete cylinder by simple random walk, cf. (7.1). Namely we show in Theorem 7.3 that the lower bound (0.3) holds for all γ > 0, where v is a suitably small number, W stands for the Wiener measure, and L(a, t) denotes a jointly continuous version of the local time of the canonical Brownian motion. In particular this implies that for d ≥ 2, (0.5) the laws of N^{2d}/T_N under P, N ≥ 2, are tight, a property previously only established when d ≥ 17, cf. [3].
In this respect an additional interest of Theorem 1.1 stems from the fact that it makes it possible to improve the value v in (0.3) once quantitative controls on the percolative properties of the vacant set V^u, with u < u_*, are derived. We refer to Remark 7.5 2) for further discussion of this matter. As a direct consequence of (0.5) and of the upper bound (0.7) of [15], one thus finds that for all d ≥ 2, (0.8) the laws on (0, ∞) of T_N/N^{2d} under P, with N ≥ 2, are tight, i.e. "T_N lives in scale N^{2d}".
We will now give some comments on the proofs of the main results. The derivation of Theorem 1.1 involves a sequence of steps which combine some of the techniques developed in [13], [14] and [15]. A more detailed outline of these steps appears in Section 1 after the statement of Theorem 1.1. For the time being we only discuss the rough strategy of the proof, and for simplicity assume that x = 0 in (0.2). A key step compares the ranges of the excursions X^k., 1 ≤ k ≤ K, of the walk between concentric boxes with the ranges of a collection of iid "special excursions" X'^k., 1 ≤ k ≤ K', (with K' slightly bigger than K, as mentioned above). This step is achieved by constructing a suitable coupling in Section 3, and using large deviation estimates under P for the pair empirical distribution of the Z_{R_k}, Z_{D_k}, k ≥ 1, and for a similar object attached to the iid "special excursions", with K' in place of K. The above pair empirical distribution attached to Z_{R_k}, Z_{D_k}, k ≥ 1, under P, can be controlled with the pair empirical distribution recording consecutive values of a Markov chain on {1, −1}, with N-dependent transition probability, governing the evolution of sign(Z_{D_k}) = sign(Z_{R_{k+1}}), P-a.s. The crucial domination estimate appears in Proposition 3.1.
As already pointed out, once true excursions are replaced with "special excursions", one is quickly reduced to the consideration of the trace on A of the paths in the support of a Poisson point measure µ' with state space the set of excursions starting on the surface of A and stopped at the boundary of B. However these excursions live on a slice of the cylinder E and not on Z^{d+1}. To correct this feature and enable a comparison with random interlacements, we employ truncation as well as the "sprinkling technique" of [13]. Namely we only retain the part of the excursions going from their starting point on the surface of A up to their first exit from a box C of side-length of order N 2 centered at the origin, cf. (1.27). This is the truncation. We also slightly increase the intensity of the Poisson measure. This slight increase of the intensity, the "sprinkling", is meant to compensate for the truncation of the original excursions as they exit C, and to ensure that the trace on A of the trajectories in the support of this new Poisson point measure µ typically dominates the corresponding trace on A of paths in the support of µ'. The key control appears in Proposition 5.1. This result is similar, up to some modifications, to Theorem 3.1 of [15], where truncation and sprinkling are carried out on Z^{d+1}-valued trajectories instead of E-valued trajectories as here. The interest of the step we just described is that paths in the support of the Poisson point measure µ live in C ∪ ∂C, which can be viewed both as a subset of E and of Z^{d+1}. The intensity measure of µ, cf. (5.4), can easily be compared to the intensity of the Poisson point measure µ_{A,v}, cf. (1.18), which contains the information of the trace on A left by random interlacements at level v. This is the essence of the comparison which appears in Proposition 6.1 and leads to the conclusion of the proof of Theorem 1.1.
The lower bound on the disconnection time T_N, cf. (0.3) or Theorem 7.3, now follows rather straightforwardly. It relies on the one hand on estimates for the random times D^z_K which relate them to the random variable ζ of (0.4), see Proposition 7.1 and Remark 7.2, and on the other hand on the fact, see (7.16), that (0.9) holds when the parameter α entering the definition of K, cf. (1.24), is chosen small enough. To prove (0.9) one uses Theorem 1.1 as well as controls from [11] on the rarity of long planar *-paths in I^v, when v is small, see (1.23) below. The point is that the occurrence of the disconnection before time γN^{2d} forces the presence somewhere in the cylinder, at height in absolute value at most N^{2d+1}, of a long planar *-path in X_{[0,T_N]}, cf. Lemma 7.4. Let us mention that being able to prove (0.9) for all α < u_*/(d + 1) would yield (0.3) with v = u_*, and thus bring one closer to a proof of (0.6), see also Remark 7.5 2).
We will now describe the organization of this article.
Section 1 introduces further notation and recalls various useful facts concerning random walks and random interlacements. The main Theorem 1.1 is stated and an outline of the main steps of its proof is provided.
In Section 2 we construct the excursions X^k., k ≥ 1, mentioned in the above discussion.
The main result appears in Proposition 2.2.
Section 3 shows how one can dominate the ranges of these excursions in terms of the ranges of an iid collection of "special excursions" X'^k., 1 ≤ k ≤ K', where K' is slightly bigger than K. The key control appears in Proposition 3.1. Section 4 contains a Poissonization step where the Poisson point measure µ' is introduced.
In Section 5 truncation and sprinkling make it possible to dominate the trace on A of the paths in the support of µ' in terms of the corresponding trace of the truncated paths in the support of the Poisson point measure µ. The main step is Proposition 5.1. Proposition 5.4 comes as a direct consequence and encapsulates what is needed for the next section. Section 6 develops the final comparison between random walk on E and random interlacements on Z^{d+1}, so as to complete the proof of Theorem 1.1.
In Section 7 we give an application to the derivation of a lower bound on the disconnection time in Theorem 7.3. Some open problems are mentioned in Remark 7.5.
Let us comment on the convention we use for constants. Throughout the text c or c' denote positive constants solely depending on d, with values changing from place to place. The numbered constants c_0, c_1, . . . are fixed and refer to the value at their first appearance in the text. Dependence of constants on additional parameters appears in the notation. For instance c(α) stands for a positive constant depending on d and α.
Finally some pointers to the literature on random interlacements might be useful to the reader. Random interlacements on Z^{d+1} have been introduced in [13], where the investigation of the percolative properties of the vacant set was initiated. The uniqueness of the infinite cluster of the vacant set has been shown in [16], and the positivity of u_* in full generality in [11], (in [13] this had only been shown when d ≥ 6). The stretched exponential decay of the connectivity function for u > u_{**} is proved in [12], and quantitative controls on the rarity of large finite clusters in the vacant set, when d ≥ 4 and u is sufficiently small, are developed in [18]. Random interlacements on transient weighted graphs are discussed in [17]. The fact that random interlacements describe the microscopic structure left by random walks on discrete cylinders at times comparable to the square of the number of points of the base is the object of [14]. Similar results for the random walk on the torus, and generalizations to cylinders with more general bases, can respectively be found in [19] and [20]. Applications of random interlacements to the control of the disconnection time of discrete cylinders are the main theme of [15], where an upper bound on the disconnection time is derived, and of the present article, where a lower bound on the disconnection time is obtained.

Some notation and the main result
In this section we introduce additional notation and recall some useful results concerning random walks and random interlacements. In particular a key identity from Lemma 1.1 of [15] for the hitting distribution of the walk on the cylinder lies at the heart of the comparison with random interlacements. We recall it below in (1.13). We then state the main Theorem 1.1 and outline the key steps of its proof.
We write N = {0, 1, 2, . . .} for the set of natural numbers. Given a non-negative real number a, we write [a] for the integer part of a, and for real numbers b, c we write b ∧ c and b ∨ c for the respective minimum and maximum of b and c. We write e_i, 1 ≤ i ≤ d + 1, for the canonical basis of R^{d+1}. We let |·| and |·|_∞ respectively stand for the Euclidean and ℓ^∞-distances on Z^{d+1} or for the corresponding distances induced on E. Throughout the article we assume d ≥ 2. We say that two points on Z^{d+1} or E are neighbors, respectively *-neighbors, if their |·|-distance, respectively |·|_∞-distance, equals 1. By finite path, respectively finite *-path, we mean a finite sequence x_0, x_1, . . ., x_n on Z^{d+1} or E, n ≥ 0, such that for each 0 ≤ i < n, x_i and x_{i+1} are neighbors, respectively *-neighbors. Sometimes, when this causes no confusion, we simply write path or *-path, in place of finite path or finite *-path. We denote the closed |·|_∞-ball and the |·|_∞-sphere with radius r ≥ 0 and center x in Z^{d+1} or E with B(x, r) and S(x, r). For A, B subsets of Z^{d+1} or E we write A + B for the set of elements x + y with x in A and y in B. We also write U ⊂⊂ Z^{d+1} or U ⊂⊂ E to indicate that U is a finite subset of Z^{d+1} or E. Given U a subset of Z^{d+1} or E, we denote with |U| the cardinality of U, with ∂U the boundary of U and with ∂_{int}U the interior boundary of U, cf. (1.1). We write π_T and π_Z for the respective canonical projections from E = T × Z onto T and Z.
We let T stand for the set of nearest neighbor E-valued trajectories with time indexed by N, see below (0.1). When F is a subset of E, or of Z^{d+1}, we denote with T_F the countable set of nearest neighbor (F ∪ ∂F)-valued trajectories which remain constant after a finite time. The canonical shift on T is denoted with (θ_n)_{n≥0} and the canonical filtration with (F_n)_{n≥0}. Further notation concerning the canonical process on T appears below (0.1). Given a subset U of E we denote with H_U, H̃_U and T_U the respective entrance time of U, hitting time of U, and exit time from U, cf. (1.2). In the case of a singleton U = {x}, we simply write H_x or H̃_x.
We denote with P^{Z^{d+1}}_x the canonical law of simple random walk on Z^{d+1} starting at x and with E^{Z^{d+1}}_x the corresponding expectation. We otherwise keep the same notation as for the walk on E concerning the canonical process, the canonical shift and natural objects such as in (1.2). Given K ⊂⊂ Z^{d+1} and U ⊇ K, a subset of Z^{d+1}, the equilibrium measure and the capacity of K relative to U are defined in (1.3), (1.4). The Green function of the walk killed outside U is defined as (1.5) g_U(x, x') = E^{Z^{d+1}}_x[Σ_{n≥0} 1{X_n = x', n < T_U}], for x, x' in Z^{d+1}.
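For readers who want a concrete handle on e_K and cap(K) (with U = Z^{d+1}), here is a crude Monte Carlo sketch, using the interpretation of e_K(x), for x in K, as the probability that the walk started at x never returns to K. It is ours and only illustrative: "never returns" is approximated by escape to a large sup-norm radius, which slightly overestimates escape probabilities.

```python
import random

def capacity_estimate(K, dim, n_samples=2000, escape_radius=30, rng=random.Random(1)):
    """Crude Monte Carlo estimate of cap(K) for K a finite subset of Z^dim.

    Uses e_K(x) = P_x[the walk never returns to K]; "never" is approximated
    by reaching sup-norm distance escape_radius without hitting K first.
    """
    Kset = set(K)

    def escapes(x0):
        x = list(x0)
        while max(abs(c) for c in x) < escape_radius:
            i = rng.randrange(dim)
            x[i] += rng.choice((-1, 1))
            if tuple(x) in Kset:     # returned to K before escaping
                return False
        return True

    cap = 0.0
    for x0 in K:
        esc = sum(escapes(x0) for _ in range(n_samples))
        cap += esc / n_samples       # Monte Carlo estimate of e_K(x0)
    return cap

# For a single point of Z^3, cap({0}) equals the escape probability
# of the walk, known to be approximately 0.66 (Polya).
est = capacity_estimate([(0, 0, 0)], dim=3)
```

The estimate should land near the known value for a singleton in Z^3; for larger K the same loop returns the full equilibrium measure site by site.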
When U = Z^{d+1}, we drop U from the notation in (1.3)-(1.5). The Green function is symmetric in its two variables and the probability to enter K before exiting U can be expressed as in (1.6). One also has the bounds of (1.7), (see for instance (1.7) of [15]). In the case of the discrete cylinder E, when U is a strict subset of E, we define the corresponding objects just as in (1.3)-(1.5), with P_x and E_x in place of P^{Z^{d+1}}_x and E^{Z^{d+1}}_x. We then have similar identities and bounds as in (1.6), (1.7). When ρ is a measure on E or Z^{d+1}, we write P_ρ or P^{Z^{d+1}}_ρ in place of Σ_{x∈E} ρ(x) P_x or Σ_{x∈Z^{d+1}} ρ(x) P^{Z^{d+1}}_x.
As mentioned above (0.2), the main Theorem 1.1 involves measuring time in terms of excursions of the random walk in and out of certain concentric boxes in the cylinder E. More specifically we introduce the vertical scales, cf. (1.8), and the boxes in E centered at level z ∈ Z, cf. (1.9). When z = 0, we simply write B and B̃. The sequence of successive returns of X. to B(z) and departures from B̃(z), R^z_k, D^z_k, k ≥ 1, is then defined via (1.10), and these inequalities, except maybe for the first one, are P-a.s. strict. When z = 0, we simply write R_k, D_k. Certain initial distributions of the walk on E will be useful in what follows. Namely, we will consider for z ∈ Z the distributions q_z and q, cf. (1.11), (1.12). As a result of Lemma 1.1 of [15], the initial distribution q plays a central role in linking random walk on E and random interlacements, see also Remark 1.2 of [15]. Indeed for K ⊆ T × (−r_N, r_N), one has the identity (1.13), and a corresponding statement follows with the application of the strong Markov property. We will now recall some notation and results from [13] concerning random interlacements. We denote with W the space of doubly infinite nearest neighbor Z^{d+1}-valued trajectories which tend to infinity at positive and negative infinite times, and with W* the space of equivalence classes of trajectories in W modulo time-shift. The canonical projection from W onto W* is denoted by π*. We endow W with its canonical σ-algebra W, and denote by X_n, n ∈ Z, the canonical coordinates.
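The successive returns R^z_k and departures D^z_k of (1.10) are straightforward to read off from a trajectory of the vertical component Z. The sketch below (our own naming; generic radii r < h stand in for the scales of (1.8), (1.9), with z = 0) illustrates the definition.

```python
def return_departure_times(z_path, r, h):
    """Successive return times R_k to [-r, r] and departure times D_k
    from [-h, h] (0 < r < h) along a Z-valued trajectory z_path,
    so that R_1 <= D_1 <= R_2 <= ... as in (1.10).

    Times are indices into z_path; the lists stop when the path ends.
    """
    R, D = [], []
    t = 0
    while True:
        # next return to the inner interval [-r, r]
        while t < len(z_path) and abs(z_path[t]) > r:
            t += 1
        if t >= len(z_path):
            break
        R.append(t)
        # next departure from the concentric outer interval [-h, h]
        while t < len(z_path) and abs(z_path[t]) < h:
            t += 1
        if t >= len(z_path):
            break
        D.append(t)
    return R, D
```

For instance, on the trajectory 2, 1, 0, 1, 2, 3, 2, 1, 0, −1, −2, −3 with r = 1 and h = 3, the walk first enters [−1, 1] at time 1, leaves [−3, 3] at time 5, returns at time 7 and departs again at time 11.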
We endow W* with W* = {A ⊆ W*; (π*)^{-1}(A) ∈ W}, the largest σ-algebra on W* for which π*: (W, W) → (W*, W*) is measurable. We also consider W_+, the space of nearest neighbor Z^{d+1}-valued trajectories defined for non-negative times and tending to infinity. We write W_+ and X_n, n ≥ 0, for the canonical σ-algebra and the canonical process on W_+. Since d ≥ 2, the simple random walk on Z^{d+1} is transient and W_+ has full measure for any P^{Z^{d+1}}_x, x ∈ Z^{d+1}, see above (1.3), and we view, whenever convenient, the law of simple random walk on Z^{d+1} starting from x as a probability on (W_+, W_+). We consider the space Ω of point measures on W* × R_+, cf. (1.14), where for K ⊂⊂ Z^{d+1}, W*_K ⊆ W* denotes the subset of trajectories modulo time-shift which enter K. We endow Ω with the σ-algebra A generated by the evaluation maps ω → ω(D), where D runs over the product σ-algebra W* × B(R_+). We denote with P the probability on (Ω, A) under which ω becomes a Poisson point measure on W* × R_+ with intensity ν(dw*) du, giving finite mass to the sets W*_K × [0, u], for K ⊂⊂ Z^{d+1}, u ≥ 0. Here ν stands for the unique σ-finite measure on (W*, W*) such that for every K ⊂⊂ Z^{d+1}, cf. Theorem 1.1 of [13], 1_{W*_K} ν = π* ∘ Q_K, with Q_K the finite measure on W^0_K, the subset of W_K of trajectories which enter K for the first time at time 0, such that for A, B in W_+ and x ∈ Z^{d+1}: Q_K[(X_{-n})_{n≥0} ∈ A, X_0 = x, (X_n)_{n≥0} ∈ B] = P^{Z^{d+1}}_x[A | H̃_K = ∞] e_K(x) P^{Z^{d+1}}_x[B], where e_K, cf. (1.3) and below (1.5), stands for the equilibrium measure of K, and is concentrated on the points of ∂_{int}K. Given K ⊂⊂ Z^{d+1}, u ≥ 0, one further defines on (Ω, A) the random point process µ_{K,u} with state space the set of finite point measures on (W_+, W_+), cf. (1.18), where (w*)_{K,+} stands for the trajectory in W_+ which follows step by step w* ∈ W*_K from the first time it enters K. One then has, cf. Proposition 1.3 of [13], for K ⊂⊂ Z^{d+1}, u ≥ 0: (1.19) µ_{K,u} is a Poisson point process on (W_+, W_+) with intensity measure u P^{Z^{d+1}}_{e_K}, where we used the notation introduced below (1.7).
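Property (1.19) suggests a concrete recipe for sampling the trace of the interlacement near a finite set K: throw a Poisson(u·cap(K)) number of forward random walks started from the normalized equilibrium measure e_K/cap(K). Since the backward parts of the paths never return to K, the trace on K only involves these forward walks. The sketch below is ours and only illustrative: e_K is supplied by the user, and the doubly infinite trajectories are replaced by finite-horizon forward walks.

```python
import math, random

def interlacement_trace_on_box(u, e_K, n_steps, dim, rng=random.Random(2)):
    """Approximate sample of the interlacement trace near K, per (1.19):
    trajectories entering K form a Poisson point process of intensity
    u * P_{e_K}; each one is replaced here by a forward simple random
    walk run for n_steps steps (a finite-horizon stand-in).

    e_K: dict mapping the points of K to their equilibrium measure e_K(x).
    """
    cap = sum(e_K.values())
    n_traj = poisson_sample(u * cap, rng)           # Poisson(u * cap(K)) walks
    pts = list(e_K)
    wts = [e_K[x] / cap for x in pts]
    trace = set()
    for _ in range(n_traj):
        x = list(rng.choices(pts, weights=wts)[0])  # entrance point ~ e_K / cap(K)
        trace.add(tuple(x))
        for _ in range(n_steps):
            i = rng.randrange(dim)
            x[i] += rng.choice((-1, 1))
            trace.add(tuple(x))
    return trace

def poisson_sample(lam, rng):
    # Knuth's multiplication method; adequate for moderate lam
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

# singleton K = {0} in Z^3, with its (approximate) equilibrium mass 0.66
trace = interlacement_trace_on_box(1.0, {(0, 0, 0): 0.66}, n_steps=50, dim=3)
```

Increasing u multiplies the expected number of walks, matching the role of u as a multiplicative factor of the intensity measure.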
Given ω ∈ Ω, the interlacement at level u ≥ 0 is the subset I^u of Z^{d+1} defined in (1.20) as the union of the ranges of the trajectories with label at most u in the cloud ω, where for w* ∈ W*, range(w*) = w(Z) for any w ∈ W with π*(w) = w*. The vacant set at level u is then defined as V^u = Z^{d+1} \ I^u. One has the identity P[V^u ⊇ K] = exp{−u cap(K)}, for K ⊂⊂ Z^{d+1}, and this property leads to a characterization of the law Q_u on {0, 1}^{Z^{d+1}} of the random subset V^u, cf. Remark 2.2 2) of [13].
As a result of Theorem 3.5 of [13] and Theorem 3.4 of [11], there exists a non-degenerate critical value u_* ∈ (0, ∞) such that for u > u_*, P-a.s., V^u has only finite connected components, whereas for u < u_*, P-a.s., V^u has an infinite connected component. It is also known, cf. [16], that for each u ≥ 0 there is P-a.s. at most one infinite connected component in V^u. The existence or absence of such a component when u = u_* is presently an open problem. In Section 7, when applying Theorem 1.1 to the study of the disconnection time, we will also need the following estimate, cf. (3.28) of [11]: for any ρ > 0, there exists u(ρ) > 0 such that for u ≤ u(ρ), (1.23) holds, where we use the notation from the beginning of this section, and any e_i, e_j, i ≠ j, could of course replace e_1 and e_{d+1} in (1.23).
We can now state the main result of this article. It deals with the trace left in a neighborhood of size N^{1-ε} of some point x of the cylinder by the random walk at time D^z_K, where z = π_Z(x) and K has order N^d/h_N, cf. (1.8). Theorem 1.1 shows that with high probability this trace is dominated by the corresponding trace of a random interlacement at a suitably adjusted level. When |z| remains of order at most N^d, D^z_K typically corresponds to time scales of order N^{2d}, cf. Remark 7.2.
Theorem 1.1. For N ≥ c(α, v, ε) and x = (y, z) ∈ E one can construct a coupling Q on some auxiliary space of the simple random walk X. on E under P and of the Poisson point measure ω under P so that (1.24) holds. The proof of Theorem 1.1 involves several steps, which we now outline.
a) This first step reduces the proof to the case where x = 0 and the initial distribution of the walk is q_{z_0}, with z_0 an arbitrary point of I, cf. (1.9), (1.11). This is carried out in Proposition 2.1.

b) One then constructs a coupling Q_1 of X. with a sequence of excursions X^k., k ≥ 1, attached to the successive returns and departures of (1.10), with respective laws which coincide with that of X_{·∧T_{B̃}} under P_{Z_{R_k},Z_{D_k}}, where we use the notation of (1.25), and which are such that, cf. Proposition 2.2, the trace on A of X_{[0,D_K]} is with high probability contained in the union of the ranges of the X^k., k ≤ K.

c) This step constructs a coupling Q_2 of the above processes with a sequence of iid E-valued processes X'^k., k ≥ 1, with same law as X_{·∧T_{B̃}} under P_q, cf. (1.11), in such a fashion that, cf. Proposition 3.1, the union of the ranges of the X^k., k ≤ K, is with high probability dominated on A by the union of the ranges of the X'^k., k ≤ K'.

d) This is a Poissonization step taking advantage of the special property of the distribution q, cf. (1.12), (1.13). With Q_3 one couples the above processes with an independent Poisson variable J' of intensity (1 + (3/5)δ) αN^d/h_N, and defines the Poisson point measure µ' on T_{B̃}, cf. below (1.1), as well as the random subset I' of A, where Supp µ' denotes the support (in T_{B̃}) of the point measure µ', so that, cf. Proposition 4.1, the above domination carries over to I'.

e) In this step one constructs, using truncation and sprinkling, a coupling Q_4 of X. and I' under Q_3 with a Poisson point measure µ on T_{C̃}, with intensity measure given in (5.4). This coupling is such that, cf. (5.44) in the proof of Proposition 5.4, the trace of I' on A is with high probability dominated by the corresponding trace of the truncated paths in the support of µ.

f) In this last step one constructs a coupling Q' of X., I', I under Q_4 with ω under P so that, cf. (6.5), the desired domination by the trace of the random interlacement holds, and this enables one to complete the proof of Theorem 1.1.
Remark 1.2. As will be clear from the proof of Theorem 1.1, the exponent −3d in the right-hand side of (1.24) can be replaced by an arbitrary negative exponent by simply adjusting constants in Theorem 1.1. The specific choice of the exponent in (1.24) will be sufficient for the application to the lower bound on the disconnection time we give in Section 7.
2 Reduction to the case x = 0 and a first coupling

This section takes care of steps a) and b) in the above outline following the statement of Theorem 1.1. We first show in Proposition 2.1 that it suffices to prove Theorem 1.1 when x = 0 in (1.24), and the initial distribution of the walk is q_{z_0}, with z_0 an arbitrary point of I, see (1.11) and (1.9). This is step a). Then we turn to step b) and construct, very much in the spirit of Proposition 3.3 of [14], a coupling of X. with a sequence of excursions X^k., k ≥ 1, respectively distributed as X_{·∧T_{B̃}} under P_{Z_{R_k},Z_{D_k}}. This construction is carried out in Proposition 2.2. It uses the fact that h_N in (1.8) is sufficiently large to provide ample time for the T-component of the walk to "homogenize" before reaching B, when the starting point of the walk lies outside B̃.
We keep the notation of Theorem 1.1 and begin with the reduction to the case x = 0.
Proposition 2.1. If for N ≥ c_0(ε, α, v) and any z_0 ∈ I one can construct a coupling Q' of X. under P_{q_{z_0}} with ω under P so that (2.1) holds, then the statement of Theorem 1.1 follows. Indeed one reduces to the situation where X. is distributed as P_{q_{z_0}}, where z_0 coincides with −z when z ∈ I, and otherwise with r_N or −r_N.
With the coupling Q' mentioned in Proposition 2.1 we can construct a conditional distribution under Q' of ω ∈ Ω given X_{[0,D_K]} ∩ A, which only takes finitely many values, and note that X_{[0,D_K]} ∩ A under Q' has the same distribution as the corresponding trace under P. This conditional distribution and this identity in law enable us to construct a coupling Q of X. under P with ω under P so that (1.24) holds as a result of (2.1).
We will now carry out step b) of the outline below Theorem 1.1. With Lemma 3.1 and Remark 3.2 of [15], we know that for N ≥ 1 the estimate (2.2) holds. As mentioned in Remark 3.2 of [14], the exponent −5d in the right-hand side of (2.2) can be replaced by an arbitrarily large negative exponent by adjusting constants.
The following proposition is simpler than, but similar in spirit to, Proposition 3.3 of [14]. It will complete step b).
Proposition 2.2. One can construct on some auxiliary space (Ω_1, A_1) a coupling Q_1 of X. with processes X^k., k ≥ 1, satisfying (2.3)-(2.5); in particular the X^k., k ≥ 2, are independent with the law specified in (2.4), (2.5). Proof. It follows from (2.2) that for x ∈ ∂B̃ the total variation distance between the law of X_{R_1} under P_x and q_{z(x)}, where |z(x)| = r_N and π_Z(x) · z(x) > 0, is at most cN^{-5d}. With Theorem 5.2, p. 19 of [10], we can construct for any x ∈ ∂B̃ a probability ρ_x(dx', dx̃) on {(x', x̃) ∈ E^2; π_Z(x') = π_Z(x̃) = z(x)}, such that under ρ_x the first marginal has the same law as X_{R_1} under P_x, cf. (2.6), the second marginal is q_{z(x)}-distributed, cf. (2.7), and (2.8) holds. We define the spaces W_Z, W_T of respectively Z- and T-valued trajectories with jumps of |·|-size at most 1, as well as W^f_Z and W^f_T, the countable subsets of W_Z and W_T of trajectories which are constant after a finite time. We pick the auxiliary space Ω_1 endowed with its natural product σ-algebra A_1. We write Y., Z. and Y^k., k ≥ 2, for the canonical coordinate processes on Ω_1, as well as X. = (Y., Z.).
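The coupling provided by Theorem 5.2, p. 19 of [10] is the classical maximal coupling, which realizes the total variation distance between two laws. A generic version for two finitely supported distributions can be sketched as follows; this is our own formulation for illustration, not the construction of ρ_x itself.

```python
import random

def maximal_coupling(p, q, rng):
    """Sample a pair (X, Y) with X ~ p, Y ~ q and P[X != Y] = d_TV(p, q).

    p, q: dicts mapping outcomes to probabilities summing to 1.
    """
    keys = set(p) | set(q)
    overlap = {k: min(p.get(k, 0.0), q.get(k, 0.0)) for k in keys}
    mass = sum(overlap.values())              # = 1 - d_TV(p, q)
    if rng.random() < mass:
        x = sample_from(overlap, mass, rng)   # X = Y on the overlap component
        return x, x
    # off the overlap: sample the two excess parts separately
    px = {k: p.get(k, 0.0) - overlap[k] for k in keys}
    qx = {k: q.get(k, 0.0) - overlap[k] for k in keys}
    return sample_from(px, 1.0 - mass, rng), sample_from(qx, 1.0 - mass, rng)

def sample_from(w, total, rng):
    u = rng.random() * total
    acc = 0.0
    for k, v in w.items():
        acc += v
        if u <= acc:
            return k
    return k  # guard against floating point round-off

rng = random.Random(0)
x, y = maximal_coupling({'a': 0.5, 'b': 0.5}, {'a': 0.8, 'b': 0.2}, rng)
```

In the proof above, the role of p and q is played by the law of X_{R_1} under P_x and by q_{z(x)}; since their total variation distance is small, the two marginals agree except on an event of small probability.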
The probability Q 1 is constructed as follows.

The law of X_{·∧R_1} is specified first, and with (2.9), (2.10) the law of (X_{·∧R_2}, Y^2_0) under Q_1 is then determined. We next proceed conditionally on (X_{·∧R_2}, Y^2_0), cf. (2.12). The above steps specify the law of (X.) under Q_1. We then proceed using the kernel of the last line of (2.10), with X_{D_2} in place of X_{D_1}, to specify the conditional law under Q_1, and so on and so forth, to construct the full law Q_1. With this construction the claim (2.3) follows directly from (2.6). Then (2.5) follows from (2.8) and the statements (2.10), (2.12) and their iteration for arbitrary k ≥ 2. The proof of (2.4) is similar to the proof of (3.22) in Proposition 3.3 of [14].
Remark 2.3. As a direct consequence of Proposition 2.2 we see that for α > 0, N ≥ 1, K as below (1.24) and z_0 ∈ I, a corresponding estimate holds. This estimate will be used in the next section.

Domination by iid excursions
In this section we carry out step c) of the outline below Theorem 1.1. We construct a coupling Q_2 of X., X^k., k ≥ 1, see the previous section, with a collection X'^k., k ≥ 1, of iid excursions having the same distribution as X_{·∧T_{B̃}} under P_q (the "special" excursions), in such a fashion that the trace on A, cf. (1.24), of X_{[0,D_K]} is with high probability dominated by the trace on A of the union of the ranges of the X'^k., with k ≤ K' and K' "slightly" bigger than K, cf. (1.26). This is carried out in Proposition 3.1. As mentioned in the introduction, the interest of this coupling is that, roughly speaking, the collection of iid "special" excursions X'^k., k ≥ 1, is realized by selecting for each k an excursion of type (z_1, z_2), where (Z'_{R,k}, Z'_{D,k}), k ≥ 1, is an independent iid sequence with the same law as (Z_{R_1}, Z_{D_1}) under P_q. The domination of the union of the ranges of the X^k., k ≤ K, in terms of the union of the ranges of the X'^k., k ≤ K', then relies on large deviation estimates for the empirical measure of the (Z_{R_k}, Z_{D_k}) under P_{q_{z_0}} and for the empirical measure of the iid variables (Z'_{R,k}, Z'_{D,k}). The excursion X^1. requires a special treatment due to its atypical starting height z_0 ∈ I, which possibly differs from ±r_N.
The notation T_F for F ⊆ E has been introduced below (1.1), and K' is defined in (1.26).
Proposition 3.1. One can construct a coupling Q_2 of the processes of the previous section with a collection of iid T_{B̃}-valued variables with the same distribution as X_{·∧T_{B̃}} under P_q, so that the domination described at the beginning of this section holds. Proof. We introduce the space Γ of "excursion types", and for γ = (z_1, z_2) ∈ Γ we write P_γ in place of P_{z_1,z_2}, cf. (1.25).
We consider an auxiliary probability space (Σ, B, M) endowed with the collection of variables and processes listed in (3.3)-(3.7). We then introduce the Γ-valued processes γ_k, k ≥ 1, and γ'_k, k ≥ 1, via (3.8), (3.9). The definition of γ_1 in (3.8) is somewhat arbitrary, as a consequence of the special role of the starting point z_0 ∈ I of the walk. We also consider the counting functions of (3.10). We will now introduce processes X.^k, k ≥ 1, on (Σ, B, M) which have the same law as the X^k., k ≥ 1, under Q_1, cf. Proposition 2.2. To this effect we define (3.11), where we note that since P_q[H_{T×{z_0}} < T_{B̃} and Z_{T_{B̃}} = z] > 0, for z = ±h_N, one has i_0 < ∞, M-a.s., thanks to (3.6), (3.7). Further we observe that conditionally on (Z_{R,k}, Z_{D,k}), k ≥ 1, the processes ζ^{i_0}(H_{T×{z_0}} + ·) and (3.12) are independent, with the respective distributions stated there. Taking into account (2.4) and (3.3) we have thus obtained that, with the definition (3.13), (3.14) the (X.^k)_{k≥1} under M have the same distribution as the (X^k.)_{k≥1} under Q_1. In a similar fashion we also define processes via (3.15). Observe that, conditionally on the γ'_k, k ≥ 1, these processes are independent, with respective distribution that of X_{·∧T_{B̃}} under P_{γ'_k}. Since the (Z'_{R,k}, Z'_{D,k}), k ≥ 1, are iid Γ-valued variables with the same distribution as (Z_{R_1}, Z_{D_1}) under P_q, cf. (3.4), (3.9), it follows that (3.16) these processes are iid T_{B̃}-valued with the same distribution as X_{·∧T_{B̃}} under P_q, and that they are independent from the collection (Z_{R,k}, Z_{D,k}), k ≥ 1. We recall the definition of δ in (1.26), and with (3.17) we define processes X'^k., k ≥ 1, which are iid with the same distribution as X_{·∧T_{B̃}} under P_q. We then introduce the "good event" G. The interest of this definition stems from the fact that on G, range X.^1 ⊆ range ζ^{i_0}(H_{T×{z_0}} + ·), cf. (3.13). As a result we see that the Q_2-probability of the desired domination is at least M(G).
We will now explain why Proposition 3.1 follows once we show (3.22). For this purpose we note that (Ω_1, A_1), see above (2.9), is a standard measurable space, cf. [6], p. 13. The (X.^k)_{k≥1}, after modification on a Q_1-negligible set, can be viewed as (Ω̃, Ã)-valued variables, where Ω̃ stands for T^{[1,∞)}_E, with T_E the countable space defined below (1.1), and where Ã denotes the canonical product σ-algebra, so that (Ω̃, Ã) is also a standard measurable space. With Theorem 3.3, p. 15 of [6] and its corollary, we can find a probability kernel q(ω̃, dω_1) from (Ω̃, Ã) to (Ω_1, A_1) such that for a.e. ω̃, relative to the Ã-law of (X.^k)_{k≥1}, q(ω̃, ·) is supported on the fiber {ω_1 ∈ Ω_1; (X.^k)_{k≥1}(ω_1) = ω̃}.
We now turn to the proof of (3.22). We begin with an upper bound on the first relevant term. For z ∈ {h_N, −h_N}, we have the identity (3.23), with hopefully obvious notation. As a result of (3.6), (3.11), we thus obtain a bound valid for N ≥ c(α, v). The next step in the proof of (3.22) is the derivation of an upper bound on the analogous quantity for γ ∈ Γ. We introduce the probabilities of (3.25). With (3.4), we see that for γ = (z_1, z_2) ∈ Γ and k ≥ 1, the corresponding identity holds. Then, with the help of a Cramér-type exponential bound, it follows that for ρ > 0, γ ∈ Γ, an exponential estimate holds.
Hence for N ≥ c(α, v) (ensuring in particular p(γ) ≥ 1/4 − δ/200, for all γ ∈ Γ), and ρ = c'(α, v) small enough, the above inequality yields the desired bound. The last (and main) step in the proof of (3.22) is the derivation of an upper bound on the remaining term. We will rely on large deviation estimates for the empirical measure of (Z_{R,k}, Z_{D,k}), k ≥ 2, cf. (3.3). In essence, as we will see below, this boils down to large deviation estimates on the pair empirical distribution of a Markov chain on {1, −1}, which at each step remains at the same location with probability p_N (close to 1/2, cf. (3.25)), and changes location with probability q_N = 1 − p_N. The transition probabilities of this Markov chain depend on N, and to derive the relevant large deviation estimates with uniformity over N, we rely on super-multiplicativity, cf. Lemma 6.3.1 of [4], p. 273.
In view of (3.3), M-a.s., for k ≥ 2, Z_{D,k−1} and Z_{R,k} have the same sign. We denote with φ the bijective map from Γ onto Γ̃ = {1, −1}^2 defined by φ(γ) = (sign(z_1), sign(z_2)), for γ = (z_1, z_2) ∈ Γ. We consider the Γ̃-valued stochastic process, cf. (3.9). Note that under M, (sign(Z_{D,k}))_{k≥1} has the same law as (Z_{D_k}/h_N)_{k≥1} under P_{q_{z_0}}, cf. (3.3); this is a Markov chain on {1, −1} which at each step has transition probability p_N to remain at the same location and q_N to change location, with initial distribution (at time 1) giving probability (h_N + z_0)/(2h_N) to 1 and (h_N − z_0)/(2h_N) to −1. This chain on {1, −1} induces a Markov chain on Γ̃ = {1, −1}^2 by looking at consecutive positions of the original chain, so that, when located at γ̃ = (γ̃_1, γ̃_2) ∈ Γ̃, the induced chain jumps with probability p_N to (γ̃_2, γ̃_2) and with probability q_N to (γ̃_2, −γ̃_2). We denote with R_{γ̃}, for γ̃ ∈ Γ̃, the canonical law of this chain starting at γ̃, and with U_m, m ≥ 0, its canonical process. This is an irreducible chain on Γ̃, and (3.28) holds, where κ stands for the initial distribution. Using sub-additivity, see [4], pp. 273 and 275, we see that the corresponding bound holds for N ≥ 1, where, thanks to the fact that the chain on Γ̃ describes the evolution of pairs of consecutive positions of the chain on {1, −1} mentioned above, see Theorem 3.1.13, p. 79 of [4], we have set for γ̃ ∈ Γ̃, 0 < v < 1, (3.30) Ψ_N(γ̃, v) = inf{H_{2,N}(μ); μ probability on Γ̃ with μ({γ̃}) ≥ v}, and for μ a probability on Γ̃, (3.31) H_{2,N}(μ) = ∞ when the two marginals of μ are different, where we wrote μ(i, j) in place of μ({(i, j)}), for i, j ∈ {1, −1}, and μ(j|i) for the μ-conditional probability that the second coordinate equals j given that the first coordinate equals i.
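The passage from the {1, −1}-valued chain to the induced pair chain can be illustrated with a short simulation; a minimal sketch (the names `base_chain`, `pair_chain` and the value of p are illustrative, not from the article):

```python
import random

def base_chain(p, n, rng):
    """Markov chain on {1, -1}: at each step it stays put with
    probability p and flips sign with probability q = 1 - p."""
    x, path = 1, [1]
    for _ in range(n - 1):
        if rng.random() >= p:
            x = -x
        path.append(x)
    return path

def pair_chain(path):
    """Induced chain on {1, -1}^2: consecutive positions of the base chain."""
    return list(zip(path, path[1:]))

rng = random.Random(0)
p = 0.49  # p_N is close to 1/2 for large N
pairs = pair_chain(base_chain(p, 10_000, rng))

# Structural property used in the text: from (a, b) the induced chain can
# only move to (b, b) (base chain stays) or (b, -b) (base chain flips).
for (a, b), (c, d) in zip(pairs, pairs[1:]):
    assert c == b and d in (b, -b)
```

The induced chain is irreducible as soon as 0 < p < 1, matching the irreducibility claim for the chain on Γ̃.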
We then introduce Ψ_∞ and H_{2,∞} as in (3.30), (3.31), replacing p_N and q_N with 1/2. In view of the last line of (3.25) we see that for N ≥ c, for any probability μ on Γ̃, (3.32): the finiteness of H_{2,N}(μ) and H_{2,∞}(μ) are equivalent, and when this holds the two quantities are close to each other. The non-negative function H_{2,∞} is lower semi-continuous relative to weak convergence.
Since φ is a bijection between Γ and Γ̃ and γ̃_k = φ(γ_k), we can now deduce from (3.28) and (3.34) with n = K − 1 that (3.35) holds for N ≥ c(α, v). For large N one has (1/4 + δ/100)K < (1/4 − δ/100)K̄, cf. (3.17), and hence with (3.27), (3.35), the claim (3.22) follows for N ≥ c(α, v). Remark 3.2. Although we will not need this fact, let us mention that H_{2,N} in (3.31) is a non-negative lower semi-continuous function for the weak convergence. Moreover it vanishes at the unique probability on Γ̃ for which the first coordinate is equidistributed and, conditionally on the first coordinate, the second coordinate coincides with the first coordinate with probability p_N (and differs with probability q_N). This last feature follows from the relative entropy interpretation of H_{2,N}, cf. [4], p. 79.
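The relative entropy interpretation mentioned in Remark 3.2 presumably takes the standard form for pair empirical measures (cf. [4], p. 79); writing p(i, j) = p_N if j = i and p(i, j) = q_N if j = −i, a sketch reads:

```latex
H_{2,N}(\mu) \;=\; \sum_{i,j \in \{1,-1\}} \mu(i,j)\,
   \log \frac{\mu(j \mid i)}{p(i,j)} \,,
\qquad \text{when the two marginals of } \mu \text{ coincide.}
```

This is non-negative, and it vanishes exactly when μ(j|i) = p(i, j) and the common marginal is invariant, hence uniform; that is, at μ(i, i) = p_N/2, μ(i, −i) = q_N/2, the unique zero described in Remark 3.2.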

Poissonization
This section carries out step d) of the outline below Theorem 1.1. We construct a coupling Q_3 of the X′^k_·, k ≥ 1, under Q_2 with an independent Poisson variable J′ of parameter (1 + (3/5)δ) α N^d h_N. Proposition 4.1. For N ≥ c(α, v, ε) and z_0 ∈ I, the random point measure μ′ on T_B̃ defined by (4.1) is Poisson with intensity measure λ′κ′ on T_B̃, with λ′ and κ′ as in (4.2). Moreover, if one defines the random subset I′ of A by (4.3), then (4.4) holds. Proof. Since the X′^k_· are iid T_B̃-valued variables, the Poissonian character of μ′ is immediate. It then follows from (1.13) and the fact that the X′^k_· have the same distribution as X_{·∧T_B̃} under P_q that μ′ has intensity measure λ′κ′, with λ′ and κ′ as in (4.2). Finally, note that on {J′ ≥ K′}, I′ contains ∪_{1≤k≤K′} range X′^k_· ∩ A. Moreover, choosing a = a(α, v) small enough, one has the exponential bound stated above. Combined with (3.1), this leads to (4.4).
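The Poissonization mechanism behind Proposition 4.1 (a Poisson number of iid trajectories yields a Poisson point measure whose intensity is the common law scaled by the parameter) can be sketched on a toy state space; the three-point space and the parameter below are purely illustrative:

```python
import math
import random
from collections import Counter

def poisson(lam, rng):
    """Sample Poisson(lam) by multiplying uniforms (Knuth; fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def poissonized_measure(lam, kappa, rng):
    """Point measure with a Poisson(lam) number of iid marks of law kappa.
    The count at each mark x is then Poisson(lam * kappa[x]), independently
    over x (Poisson splitting), mirroring the intensity lambda' kappa'."""
    marks = list(kappa)
    J = poisson(lam, rng)
    mu = Counter()
    for _ in range(J):
        u, acc = rng.random(), 0.0
        for m in marks:
            acc += kappa[m]
            if u < acc:
                mu[m] += 1
                break
    return mu

rng = random.Random(1)
kappa = {"a": 0.5, "b": 0.3, "c": 0.2}
lam, n = 4.0, 20_000
tot = Counter()
for _ in range(n):
    tot += poissonized_measure(lam, kappa, rng)
# empirical mean count at mark x should approach lam * kappa[x]
```

Averaging over many draws, the count at each mark concentrates around lam times its κ-probability, which is the toy analogue of μ′ having intensity λ′κ′.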

Truncation
This section is devoted to step e) of the outline below Theorem 1.1. We construct a coupling Q_4 of X_·, X′^k_·, k ≥ 1, I′ under Q_3 with a random subset I which is the union of the ranges of the trajectories in the support of a suitable Poisson point measure μ on T_C̃, cf. (5.4), where C = B(0, [N/4]). This coupling is such that, with high probability, I contains the trace on A of X_{[0,D_K]}. For large N one can view C ∪ ∂C both as a subset of E and of Z^{d+1}, and this makes I more convenient than I′ for the purpose of comparison with random interlacements on Z^{d+1}; see the next section. The main result of this section appears in Proposition 5.1. In the proof we employ the technique of sprinkling introduced in [13], and throw in additional trajectories so as to compensate for truncation. This result is very similar to Theorem 3.1 of [15], except that in this reference the non-truncated trajectories are Z^{d+1}-valued, whereas they are B̃ ∪ ∂B̃-valued in the present setting. This induces some changes in the proof, but the overall spirit remains the same. The main Proposition 5.1 then leads to the construction of the desired coupling in Proposition 5.4.
We recall the definition of C in (1.27), and keep the notation of Section 4. We consider an auxiliary probability space (Ω_0, A_0, Q_0) endowed with an iid sequence X̄^k_·, k ≥ 1, of T_B̃-valued variables with the same distribution as in (5.1), and an independent Poisson variable J̄ of parameter (1 + (4/5)δ) α N^d h_N. Proposition 5.1. For N ≥ c(α, v, ε) and z_0 ∈ I, there exist random subsets I* and Ī of A, defined on (Ω_3, A_3, Q_3) of Proposition 4.1, such that I* and Ī are independent under Q_3, cf. (5.7), and (5.9): I* is stochastically dominated by I ∩ A. Proof. The proof, with some modifications, follows the same pattern as that of Theorem 3.1 of [15], and we detail it for the reader's convenience. We consider (5.10) M and C as defined there, and from now on assume N ≥ c(α, v, ε), so that Proposition 4.1 holds true and (5.11) is satisfied. We write R_k and D_k, k ≥ 1, for the successive return times to A and departures from C of a trajectory belonging to T_B̃, just as in (1.10) with B(z) and B̃(z) replaced by A and C. We then introduce the integer in (5.12), as well as the decomposition (5.13), see (4.1) for the notation. Similarly, considering the last return to A before exiting C, we write, Q_0-a.s., (5.14). Observe that (5.15): μ′_ℓ, 1 ≤ ℓ ≤ r, and μ̄ are independent Poisson measures under Q_3, and their respective intensity measures on T_B̃ are, in the notation of (4.2), given by (5.16). In a similar fashion one sees that (5.17) holds, and the respective intensity measures on T_C̃ are given by (5.18). We then define (5.19), (5.20). Note as well that, Q_0-a.s., (5.21) holds. The next lemma deviates from the proof of Theorem 3.1 of [15], as a consequence of the fact that we work here with simple random walk on E in place of simple random walk on Z^{d+1}.
Lemma 5.2. (N ≥ c(ε)) Proof. Note that with (5.10), the probability that the walk starting in S reaches B(0, (1/2)[N/(4M)]) and then enters A, all before returning to S, satisfies (5.23); this uses standard estimates on the one-dimensional walk and on the Green function, cf. [8], p. 31, combined with the right-hand inequality of (1.7). On the other hand, using estimates on the one-dimensional simple random walk to bound from below the probability to move to distance [cN/M] from C ∪ S = B(0, [N/(4M)] + 1) without hitting S, then estimates on the Green function together with the right-hand inequality of (1.7) to bound from below the probability to reach ∂B(0, [N/4]) without entering S, the invariance principle to reach T × {[N/4] + N} without entering S, and then estimates on the simple random walk to bound from below the probability to reach T × {h_N} before level [N/4], we see that (5.24) holds for N ≥ c(ε). With the same argument as in (4.20) of [15], (5.22) follows from (5.23) and (5.24).
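The "standard estimates on the one-dimensional walk" used in the lemma rest on the harmonicity of hitting probabilities; a minimal numerical sketch (illustrative interval size, not the scales h_N or N/(4M) of the lemma):

```python
def hit_top_prob(n, sweeps=5_000):
    """P_x[SRW on {0,...,n} hits n before 0], computed by solving the
    discrete Dirichlet problem h(x) = (h(x-1) + h(x+1))/2 with boundary
    values h(0) = 0, h(n) = 1, via Gauss-Seidel sweeps."""
    h = [0.0] * (n + 1)
    h[n] = 1.0
    for _ in range(sweeps):
        for x in range(1, n):
            h[x] = 0.5 * (h[x - 1] + h[x + 1])
    return h

h = hit_top_prob(10)
# the harmonic solution is linear: h(x) = x/n (classical gambler's ruin)
assert all(abs(h[x] - x / 10) < 1e-9 for x in range(11))
```

The linearity h(x) = x/n is the exact one-dimensional input behind bounds such as (5.23), (5.24).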
We now resume the proof of Proposition 5.1 and assume N ≥ c(α, v, ε), so that the tacit assumption above (5.11) as well as (5.22) hold. We can now bound the total mass of ν in (5.16) with the help of the strong Markov property, cf. (5.25): it is at most cap_C̃(A) times the supremum appearing there, whence, with standard estimates on the capacity of A, cf. (2.16), p. 53 of [8], we obtain the bound stated there. Coming back to (5.25), we see with (5.22) that (5.28) holds. As a result we find (5.29). Then, for ℓ ≥ 1, we introduce the map defined in (5.33), (5.34). The following lemma corresponds in the present context to Lemma 3.2 of [15]. It will be used when comparing ξ′_ℓ and ξ_ℓ, see (5.37) below. Lemma 5.3. (N ≥ c(ε)) Proof. We implicitly assume (5.11). The same argument leading to (5.22), see (5.23), (5.24), and see also (4.17), (4.18) and (4.20) of [15], now yields (5.36). Now for y ∈ ∂_int A we find the corresponding identity. Observe that the function of z appearing there is harmonic and positive on B\A, and hence on C\A as well. Note that C ∪ ∂C can be identified with a subset of Z^{d+1}. With the Harnack inequality, cf. [8], p. 42, and a standard covering argument, we find that the values of this function on ∂C are comparable up to multiplicative constants. Assuming N ≥ c(ε), so that c′(log N)^2 M^{−(d−1)} ≤ 1/2, we find that the claimed estimate holds for x ∈ ∂C and y ∈ ∂_int A, and this proves (5.35).
We now continue the proof of Proposition 5.1 and will show that (5.37) holds for N ≥ c(α, v, ε), where we refer to (4.2), (5.4), (5.33) and (5.34) for the notation.
Given w ∈ W_f, we write w_s and w_e for the respective starting point and endpoint of w. When w_1, . . ., w_ℓ ∈ W_f, we have (5.38), and using the strong Markov property this equals the claimed expression. This concludes the proof of (5.37).
Proposition 5.4. (α > 0, v > (d + 1)α, 0 < ε < 1) For N ≥ c(α, v, ε) and z_0 ∈ I, one can construct on an auxiliary space (Ω_4, A_4) a coupling Q_4 of X_·, X′^k_·, k ≥ 1, I′ under Q_3 with μ, I under Q_0 so that (5.42) holds. Proof. With N ≥ c(α, v, ε) as in Proposition 5.1, we choose Ω_4 = Ω_3 × Ω_0, A_4 = A_3 ⊗ A_0, and consider the conditional probabilities in (5.43), for B*, B̄ ⊆ A. Letting P(A) stand for the collection of subsets of A, we can construct with (5.9) and Theorem 2.4, p. 72 of [9], a probability p on P(A)^2 coupling the distribution of I* under Q_3 and that of I ∩ A under Q_0, such that p-a.s. the first coordinate on P(A)^2 (which is distributed as I* under Q_3) is a subset of the second coordinate (which is distributed as I ∩ A under Q_0). This probability yields a coupling of X_·, X′^k_·, k ≥ 1, I′ under Q_3 with μ, I under Q_0. Moreover, in view of (5.8) and (5.6), we find that (5.44) holds. Together with (4.4) this yields (5.42).
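Theorem 2.4, p. 72 of [9] is a Strassen-type result: stochastic domination of random subsets can be upgraded to a coupling with almost sure inclusion. For product (Bernoulli) laws the coupling is explicit via common uniform variables; a toy sketch, not the specific measures of the proposition:

```python
import random

def coupled_subsets(sites, p_small, p_big, rng):
    """Monotone coupling via common uniforms: one draw U_x per site realizes
    both Bernoulli random subsets at once, and p_small <= p_big forces the
    a.s. inclusion of the sparser subset in the denser one."""
    u = {x: rng.random() for x in sites}
    return ({x for x in sites if u[x] < p_small},
            {x for x in sites if u[x] < p_big})

rng = random.Random(2)
sites = range(100)
for _ in range(1000):
    s, b = coupled_subsets(sites, 0.3, 0.5, rng)
    assert s <= b  # pointwise inclusion, as for the coupling p of the proof
```

The coupling p of the proof plays exactly this role for the laws of I* under Q_3 and I ∩ A under Q_0, with domination supplied by (5.9) rather than by monotone uniforms.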

Comparison with random interlacements
In this section we complete the proof of Theorem 1.1, cf. step f) of the outline following Theorem 1.1. We can view C ∪ ∂C as a subset of Z^{d+1}, and the main ingredient is to stochastically dominate I ∩ A, which is the trace on A of the ranges of trajectories in the support of the Poisson point measure μ on T_C̃ with intensity measure λκ, cf. (5.4), by the trace on A of random interlacements at level v. In view of (1.18), (1.19), it suffices for this purpose to dominate the equilibrium measure e_{A,B̃}, which appears in (5.4), by a multiple slightly bigger than 1 of the equilibrium measure e_A of A relative to Z^{d+1}. This is carried out in Proposition 6.1. The claim (6.1) will thus follow as soon as we show that (6.2) holds for N ≥ c(α, v, ε). To this end we note, with similar arguments as above, using the right-hand inequality of (1.7) and standard bounds on the Green function, cf. [8], p. 31, that the required estimate holds for N ≥ c(ε). This is more than enough to show that (6.2) holds, and this concludes the proof of Proposition 6.1.
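Schematically, the reduction described above can be summarized as follows (θ_N is a small error term standing in for the precise constants of Proposition 6.1, which are not reproduced here):

```latex
e_{A,\widetilde B}(x) \;\le\; (1+\theta_N)\, e_A(x) \quad (x \in A),
\qquad \lambda\,(1+\theta_N) \;\le\; v
\;\Longrightarrow\;
\lambda\, e_{A,\widetilde B} \;\le\; \lambda\,(1+\theta_N)\, e_A \;\le\; v\, e_A \,.
```

The domination of the intensity measures then transfers, by the thinning property of Poisson point measures together with (1.18), (1.19), to a stochastic domination of the trace on A of μ by I^v ∩ A.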
We now turn to the proof of Theorem 1.1. We assume N ≥ c(α, v, ε) and z_0 ∈ I as in Proposition 6.1. We consider the space Ω′ = Ω_4 × Ω, cf. (1.14), endowed with the product σ-algebra A′ = A_4 ⊗ A. We endow (Ω′, A′) with a probability Q′ as follows. Using a similar construction as in (5.43), we consider a probability p′ on P(A)^2 coupling the law of I ∩ A under Q_4 with the law of I^v ∩ A under P, such that p′-a.s. the first coordinate is a subset of the second coordinate. We then define the probability Q′ by (6.4), where we use a similar convention as below (5.42) to define the conditional probabilities appearing in (6.4) when either conditioning event has zero probability. As a result of (5.42), we thus find that the coupling Q′ satisfies the estimate (2.1), which with Proposition 2.1 enables us to complete the proof of Theorem 1.1.

Lower bound on the disconnection time
In this section we apply Theorem 1.1, together with the controls of [11] recalled in (1.23), to prove a lower bound on the disconnection time T_N of the discrete cylinder; see (7.1) for the definition of T_N. We derive in Theorem 7.3 a lower bound on T_N which in particular shows that under P the laws of N^{2d}/T_N, N ≥ 2, are tight when d ≥ 2. This had previously only been proved when d ≥ 17, cf. [3]. Together with Corollary 4.6 of [15], this shows that for all d ≥ 2, "T_N lives in scale N^{2d}". An additional interest of Theorem 1.1 stems from the fact that better controls on the percolative properties of the vacant set V^u of random interlacements when u < u_* should lead to an improvement of the lower bound on T_N derived here, cf. Remark 7.5 2).
We begin with some terminology and notation. A finite subset S of E, cf. (0.1), is said to disconnect E when, for large M, T × (−∞, −M] and T × [M, ∞) belong to distinct connected components of E\S. The disconnection time of E by the simple random walk X_· is then defined as (7.1) T_N = inf{n ≥ 0; X_{[0,n]} disconnects E}.
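The definition (7.1) can be made concrete on a finite window: S disconnects E precisely when no nearest-neighbor path avoiding S joins the two boundary levels of a sufficiently large truncated cylinder. A small brute-force check (illustrative sizes; the function names are ours):

```python
from collections import deque
from itertools import product

def torus(N, d):
    return list(product(range(N), repeat=d))

def disconnects(S, N, d, M):
    """Does the finite set S of vertices of E = (Z/NZ)^d x Z separate
    T x {M} from T x {-M} inside the window T x [-M, M]?  For M large
    compared to the vertical extent of S this matches definition (7.1)."""
    S = set(S)
    def nbrs(v):
        *t, z = v
        for i in range(d):
            for s in (1, -1):
                w = list(t)
                w[i] = (w[i] + s) % N          # torus component
                yield (*w, z)
        if z < M:
            yield (*t, z + 1)                  # vertical component
        if z > -M:
            yield (*t, z - 1)
    top = {(*t, M) for t in torus(N, d)}
    bottom = {(*t, -M) for t in torus(N, d)}
    seen = {v for v in top if v not in S}
    queue = deque(seen)
    while queue:                               # BFS from the top level
        v = queue.popleft()
        if v in bottom:
            return False                       # still connected
        for w in nbrs(v):
            if w not in S and w not in seen:
                seen.add(w)
                queue.append(w)
    return True

N, d = 3, 2
slab = [(*t, 0) for t in torus(N, d)]          # full cross-section T x {0}
assert disconnects(slab, N, d, M=4)            # a full slab disconnects
assert not disconnects(slab[:-1], N, d, M=4)   # one hole reconnects
```

A full cross-section is the minimal-looking disconnecting set, while removing a single site of it restores a path between the two halves of the cylinder.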
It is convenient to introduce the sequence ρ_m, m ≥ 0, of successive displacements of the vertical component Z_· of X_·. Note that under P (see below (0.1) for the notation), Z_{ρ_·} is distributed as a simple random walk on Z starting at the origin. We further introduce the random times (7.4) γ^z_u = inf{ρ_k; k ≥ 0, L^z_k ≥ u}, for u ≥ 0, z ∈ Z. We recall the notation for K below (1.24), and for D^z_k in (1.10). In the next proposition we will show that inf_{z∈Z} D^z_K occurs at least in scale N^{2d}. More is true, see Remark 7.2, but the controls in Proposition 7.1 will be sufficient for our purpose. We let W stand for the canonical Wiener measure and consider, cf. (0.4), the relevant functionals. Proof. We begin with the proof of (7.7), which constitutes an intermediary step in the proof of (7.8). Consider z ∈ Z, and observe that under any P_x with π_Z(x) = z, the number of visits of Z_· to z before exiting z + I, see (1.9) for the definition of I, almost surely equals Σ_{m≥0} 1{Z_{ρ_m} = z, ρ_m < T_{B̃(z)}}, and is distributed as a geometric random variable with success probability h_N^{−1}. Applying the strong Markov property at the times R^z_{k′}, 1 ≤ k′ ≤ k, we see that (7.9): under P, Σ_{m≥0} 1{Z_{ρ_m} = z, ρ_m < D^z_K} stochastically dominates the sum of K independent variables distributed as UV, where U is a Bernoulli variable with success probability (h_N − r_N)/h_N, and V an independent geometric variable of parameter h_N^{−1} (in fact when |z| ≥ r_N there is equality in distribution). It then follows that for a > 0 and z ∈ Z, with Chebyshev's inequality, (7.10) holds.
For large N, the second term inside the exponential in the last member of (7.10) is equivalent to α N^d h_N log(1/(1 + a)), and since α′ < α, the claim (7.7) follows from (7.10) by choosing a > 0 small enough.
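The geometric visit count with success probability h_N^{−1} underlying (7.9) is easy to check by simulation; a sketch with a small illustrative h in place of h_N:

```python
import random

def visits_to_origin(h, rng):
    """Visits of 1-d SRW to 0 (counting time 0) before leaving (-h, h).
    Each visit to 0 is followed by escape to level +-h with probability
    1/h (gambler's ruin), so the count is geometric with mean h."""
    z, visits = 0, 1
    while -h < z < h:
        z += 1 if rng.random() < 0.5 else -1
        if z == 0:
            visits += 1
    return visits

rng = random.Random(3)
h, n = 5, 20_000
mean = sum(visits_to_origin(h, rng) for _ in range(n)) / n
assert abs(mean - h) < 0.3   # sample mean close to the geometric mean h
```

The escape probability 1/h per visit is exactly the one-dimensional gambler's ruin computation, which is what makes the count geometric with parameter h_N^{−1} in the proof.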
We now turn to the proof of (7.8). With (2.20) of [2], we know that we can construct an auxiliary space (Ω, A, P) coupling L^z_k, z ∈ Z, k ≥ 0, under P with L(a, t), a ∈ R, t ≥ 0, under W, so that (7.11) holds. Note that when N ≥ 3, the sequence ρ_m, m ≥ 0, under P has a distribution independent of N, namely the law of the successive partial sums of independent geometric variables with success probability (d + 1)^{−1}. Thus, for γ′ > γ > 0 and α > α′ > α′′ > 0, we see with (7.7) and the law of large numbers that the desired estimate holds. Letting γ′ decrease to γ, α′′ increase to α, and using scaling, we find (7.8).
One can also argue in a direct fashion with the help of the invariance principle that (7.19) holds as well when d = 1.
2) It is an open problem, cf. Remark 4.7 2) of [15], whether for d ≥ 2, under P, the corresponding limit holds with u_{**} in place of u_*, where u_{**} ∈ [u_*, ∞) is a certain critical value introduced in (0.6) of [15], above which there is a power decay in L of the probability of finding a path in V^u from B(0, L) to S(0, 2L).
Showing that u_* = u_{**} and that one can choose v = u_* in (7.14) would yield a proof of (7.20). One interest of Theorem 1.1 is that this last statement will follow if one can derive suitable quantitative estimates on the presence of the infinite cluster in V^u when u < u_*, see also [18]. In a similar fashion, the identity u_* = u_{**} will follow if one can prove quantitative controls on the rarity of large finite clusters in V^u when u > u_*.

The coupled sequences X̃^k_·, X′^k_·, k ≥ 1, bring us closer to random interlacements (especially once we carry out a Poissonization step in the next section). The idea for the construction of the coupling is to introduce iid sequences of excursions ζ^{(z_1,z_2)}_i, i ≥ 1, where (z_1, z_2) varies over {r_N, −r_N} × {h_N, −h_N} and classifies the possible entrance and exit levels of the excursions, respectively distributed as X_{·∧T_B̃} under P_{z_1,z_2}, cf. (1.25). The sequence X̃^k_·, k ≥ 1, is in essence realized by picking for each k an excursion of type (z_1, z_2) with z_1 = Z_{R_k} and z_2 = Z_{D_k}, whereas the sequence X′^k_·, k ≥ 1, is iid with the same distribution as X_{·∧T_B̃} under P_q, cf. (3.6), and such that the collections in (3.3)-(3.6) are mutually independent, cf. (3.7).
This makes it possible to define a Poisson point measure μ′ on T_B̃, cf. (4.1) and below (1.1) for the definition of T_B̃, such that the union of the ranges of trajectories in the support of μ′ with high probability contains the trace on A of X_{[0,D_K]}. We thus consider, cf. Proposition 3.1, N ≥ c_1(α, v), z_0 ∈ I, and Ω_3 = Ω_2 × N endowed with the product σ-algebra A_3 = A_2 ⊗ P(N), where P(N) stands for the collection of subsets of N, and the probability Q_3, product of Q_2 with the Poisson law of parameter (1 + (3/5)δ) α N^d h_N. We denote with J′ the N-valued coordinate, which is Poisson distributed. The definition of A appears below (1.24). Proposition 4.1.
(R_1 + ·)∧T_C̃, 1{range X̄^k_· ∩ A ≠ ∅}, and the same argument as in Proposition 4.1 now leads to the fact that for N ≥ c(ε), (5.4): μ has intensity measure λκ on T_C̃, where λ = (1 + (4/5)δ) α(d + 1)(1 − r_N/h_N) and κ is the law of X_{·∧T_C̃} under P_{e_{A,B̃}}.

Proposition 6.1.
(α > 0, v > (d + 1)α, 0 < ε < 1) For N ≥ c(α, v, ε) and z_0 ∈ I, (6.1): I ∩ A under Q_4 is stochastically dominated by I^v ∩ A under P. Proof. The random set I ∩ A is the trace on A of the ranges of trajectories in the support of the Poisson point measure μ on T_C̃ with intensity measure λ P_{e_{A,B̃}}[X_{·∧T_C̃} ∈ dw], cf. (5.4). On the other hand, I^v ∩ A is the trace on A of the ranges of trajectories in the support of the Poisson point measure μ_{A,v} on W_+ with intensity measure, cf. (1.19), v P_{e_A}[X_· ∈ dw].

(7.20) T_N/N^{2d} converges in law towards ζ_{u_* √(d+1)} as N → ∞, with u_* the non-degenerate critical value for the percolation of the vacant set of random interlacements, see below (1.22). It has been shown in Corollary 4.6 of [15] that when d ≥ 2, (7.21) for γ > 0, lim_N P[T_N ≥ γ N^{2d}] ≤ W[ζ_{u_{**} √(d+1)} ≥ γ]. Here W_f denotes the countable collection of finite nearest-neighbor paths with values in C ∪ ∂C; we also consider the map φ_ℓ from {D_ℓ < T_C̃ < R_{ℓ+1}} ⊆ T_C̃ into W_f^{×ℓ} defined by: