Logarithmic components of the vacant set for random walk on a discrete torus

This work continues the investigation, initiated in a recent work by Benjamini and Sznitman, of percolative properties of the set of points not visited by a random walk on the discrete torus (Z/NZ)^d up to time uN^d in high dimension d. It has been shown that if u > 0 is chosen sufficiently small, then with overwhelming probability this vacant set contains a unique giant component containing segments of length c_0 log N for some constant c_0 > 0, and that this component occupies a non-degenerate fraction of the total volume as N tends to infinity. Within the same setup, we investigate here the complement of the giant component in the vacant set and show that some components consist of segments of logarithmic size. In particular, this shows that the choice of a sufficiently large constant c_0 > 0 is crucial in the definition of the giant component.


Introduction
In a recent work by Benjamini and Sznitman [1], the authors consider the simple random walk on the $d$-dimensional integer torus $E = (\mathbb{Z}/N\mathbb{Z})^d$ for a sufficiently large dimension $d$ and investigate properties of the set of points in the torus not visited by the walk after $[uN^d]$ steps, for a sufficiently small parameter $u > 0$ and large $N$. Among other properties of this so-called vacant set, the authors of [1] find that for a suitably defined dimension-dependent constant $c_0 > 0$, there is a unique component of the vacant set containing segments of length at least $[c_0 \log N]$ with probability tending to $1$ as $N$ tends to infinity, provided $u > 0$ is chosen small enough. This component is referred to as the giant component. It is shown in [1] that with overwhelming probability, the giant component is at $|.|_\infty$-distance of at most $N^\beta$ from any point and occupies at least a constant fraction $\gamma$ of the total volume of the torus, for arbitrary $\beta, \gamma \in (0,1)$, when $u > 0$ is chosen sufficiently small. One of the many natural questions that arise from the study of the giant component is whether there also exist other components in the vacant set containing segments of logarithmic size. In this work, we give an affirmative answer to this question. In particular, we show that for small $u > 0$, there exists some component consisting of a single segment of length $[c_1 \log N]$ for a dimension-dependent constant $c_1 > 0$, with probability tending to $1$ as $N$ tends to infinity.
In order to give a precise statement of this result, we introduce some notation and recall some results of [1]. Throughout this article, we denote the $d$-dimensional integer torus of side-length $N$ by $E = (\mathbb{Z}/N\mathbb{Z})^d$, where the dimension $d \ge d_0$ is a sufficiently large integer (see (1.1)). $E$ is equipped with the canonical graph structure, where any two vertices at Euclidean distance $1$ are linked by an edge. We write $P$, resp. $P_x$ for $x \in E$, for the law on $E^{\mathbb{N}}$, endowed with the product $\sigma$-algebra $\mathcal{F}$, of the simple random walk on $E$ started with the uniform distribution, resp. at $x$. We let $(X_n)_{n \ge 0}$ stand for the canonical process on $E^{\mathbb{N}}$. By $X_{[s,t]}$, we denote the set of sites visited by the walk between times $[s]$ and $[t]$:

$$X_{[s,t]} = \{X_{[s]}, X_{[s]+1}, \ldots, X_{[t]}\}, \quad \text{for } s, t \ge 0.$$
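For concreteness, the walk and its vacant set can be simulated directly. The following toy sketch (with hypothetical helper names; the tiny $N$ and $d$ used here are far from the regime of the theorem) generates a trajectory on $(\mathbb{Z}/N\mathbb{Z})^d$ and computes the set of unvisited sites:

```python
import random

def walk_trace(N, d, steps, rng):
    """Simple random walk on the torus (Z/NZ)^d started at 0;
    returns the ordered list of positions X_0, ..., X_steps."""
    x = (0,) * d
    trace = [x]
    for _ in range(steps):
        i = rng.randrange(d)          # pick a coordinate direction
        s = rng.choice((-1, 1))       # and an orientation
        x = x[:i] + ((x[i] + s) % N,) + x[i + 1:]
        trace.append(x)
    return trace

def vacant_set(N, d, trace):
    """Sites of (Z/NZ)^d not visited by the given trajectory."""
    visited = set(trace)
    # enumerate the whole torus (fine for the tiny N, d used here)
    sites = [()]
    for _ in range(d):
        sites = [p + (k,) for p in sites for k in range(N)]
    return {p for p in sites if p not in visited}
```

A run with `rng = random.Random(0)` and `walk_trace(5, 3, 20, rng)` produces a 21-point trajectory whose vacant set, together with the visited sites, partitions the $5^3$ points of the torus.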
We use the notation $e_1, \ldots, e_d$ for the canonical basis of $\mathbb{R}^d$, and denote the segment of length $l \ge 0$ in the $e_i$-direction at $x \in E$ by

$$[x, x + le_i] = \{x, x + e_i, \ldots, x + [l]e_i\},$$

where the addition is naturally understood as addition modulo $N$. The authors of [1] introduce a dimension-dependent constant $c_0 > 0$ (cf. [1], (2.47)) and for any $\beta \in (0,1)$ define an event $G_{\beta,t}$ for $t \ge 0$ (cf. [1], (2.52) and Corollary 2.6 in [1]), on which there exists a unique component $O$ of the vacant set left until time $[t]$ containing segments of length $[c_0 \log N]$, and such that $O$ is at an $|.|_\infty$-distance of at most $N^\beta$ from any point in $E$. This unique component is referred to as the giant component. As in [1], we consider dimensions $d \ge d_0$, with $d_0$ defined as the smallest integer $d_0 \ge 5$ satisfying (1.1), where $q(d)$ denotes the probability that the simple random walk on $\mathbb{Z}^d$ returns to its starting point. Note that $d_0$ is well-defined, since $q(d) \downarrow 0$ as $d \to \infty$ (see [4], (5.4), for precise asymptotics of $q(d)$). Among other properties of the vacant set, it is shown in [1], Corollary 4.6, that for any dimension $d \ge d_0$ and any $\beta, \gamma \in (0,1)$, the event $G_{\beta,uN^d}$ occurs with probability tending to $1$ as $N \to \infty$, provided $u > 0$ is chosen sufficiently small (1.2). Our main result is:

Theorem 1.1. For any sufficiently small $u > 0$, the vacant set left by the random walk on $(\mathbb{Z}/N\mathbb{Z})^d$ up to time $uN^d$ contains some segment of length

$$l = [c_1 \log N] \quad (1.3)$$

which does not belong to the giant component, with probability tending to $1$ as $N \to \infty$; that is, for any $\beta \in (0,1)$, the statement (1.4) holds.

We now comment on the strategy of the proof of Theorem 1.1. We show that, for $l$ as in (1.3), for some $\nu > 0$ and $u > 0$ chosen sufficiently small,

the vacant set at time $N^{2 - \frac{1}{10}}$ contains at least $[N^\nu]$ components consisting of a single segment of length $l$ (1.5)

(cf. Section 3), and

with high probability some of these segments remain unvisited until time $[uN^d]$ (1.6)

(cf. Section 5).
Note that these logarithmic components are distinct from the giant component with overwhelming probability in view of (1.2).
Let us explain the main ideas in the proofs of the claims (1.5) and (1.6). The argument showing (1.5) consists of two steps. The first step is Lemma 3.2, which proves that with high probability, at any two times until $N^{2-\frac{1}{10}}$ separated by at least $N^{4/3}$, the random walk is at distinct locations. Here, the fact that $d \ge 5$ plays an important role.
In the second step, we partition the time interval $[0, N^{2-\frac{1}{10}}]$ into subintervals. We show in Lemma 3.3 that with high probability, there are at least $[N^\nu]$ such subintervals during which the following phenomenon occurs: the random walk visits every point on the boundary of an unvisited segment of length $l$ without hitting the segment itself, and thereafter also does not visit the segment for a time longer than $N^{4/3}$. It then follows with the help of the previous Lemma 3.2 that the random walk does not visit the surrounded segments at all. Similarly, the segments surrounded in the $[N^\nu]$ different subintervals are seen to be distinct, and claim (1.5) is shown (cf. Lemma 3.4). The proof of Lemma 3.3 uses a result on the ubiquity of segments of logarithmic size in the vacant set from [1]. From this ubiquity result, we know that for any $\beta > 0$, with overwhelming probability, there is a segment of length $l$ in the vacant set left until the beginning of every considered subinterval (in fact even until $[uN^d]$ for small $u > 0$) in the $N^\beta$-neighborhood of any point. Hence, to show Lemma 3.3, it essentially suffices to find a lower bound on the probability that for some $\beta > 0$, the random walk surrounds, but does not visit, a fixed segment in the $N^\beta$-neighborhood of its starting point until the end of the subinterval.

The rough idea behind the proof of claim (1.6) is to use a lower bound on the probability that one fixed segment of length $l$ survives (i.e. remains unvisited) for a time of at least $[uN^d]$. With estimates on hitting probabilities mentioned in Section 2, it can be shown that this probability is at least $e^{-\mathrm{const}\, ul}$. Since this is much larger than $N^{-\nu}$ for $u > 0$ sufficiently small, cf. (1.3), it should be expected that with high probability, at least one of the $[N^\nu]$ unvisited segments survives until time $[uN^d]$. This conclusion does not follow immediately, because of the dependence between the events that different segments survive. However, the desired conclusion does follow by an application of a technique, developed in [1], for bounding the variance of the total number of segments which survive.
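To see why a survival probability of order $e^{-\mathrm{const}\,ul}$ suffices, note the following back-of-the-envelope computation (a heuristic that ignores the dependence between segments, which is precisely what Section 4 addresses). With $l = [c_1 \log N]$,

```latex
e^{-c u l} \;\ge\; e^{-c u c_1 \log N} \;=\; N^{-c c_1 u},
\qquad\text{so}\qquad
[N^{\nu}]\, e^{-c u l} \;\ge\; c'\, N^{\nu - c c_1 u}
\;\xrightarrow[N \to \infty]{}\; \infty
\quad\text{once } u < \nu / (c\, c_1).
```

Thus the expected number of surviving segments among the $[N^\nu]$ candidates diverges for small $u > 0$, and the variance bound of Section 4 upgrades this first-moment heuristic to a statement holding with high probability.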
The article is organized as follows: Section 2 contains some estimates on hitting probabilities and exit times used recurrently throughout this work. In Section 3, we prove claim (1.5). In Section 4, we prove a crucial ingredient for the derivation of claim (1.6). In Section 5, we prove (1.6) and conclude that these two ingredients together yield Theorem 1.1.
Finally, we use the following convention concerning constants: throughout the text, $c$ or $c'$ denote positive constants which only depend on the dimension $d$, with values changing from place to place. The numbered constants $c_0, c_1, c_2, c_3, c_4$ are fixed and refer to their first place of appearance in the text.
Acknowledgments. The author is grateful to Alain-Sol Sznitman for proposing the problem and for helpful advice.

Some definitions and useful results
In this section, we introduce some more standard notation and some preliminary estimates on hitting probabilities and exit and return times to be frequently used later on. By $(\mathcal{F}_n)_{n \ge 0}$ and $(\theta_n)_{n \ge 0}$ we denote the canonical filtration and shift operators on $E^{\mathbb{N}}$. For any set $A \subseteq E$, we often consider the entrance time $H_A$ and the exit time $T_A$, defined as

$$H_A = \inf\{n \ge 0 : X_n \in A\}, \qquad T_A = \inf\{n \ge 0 : X_n \notin A\}.$$

For any set $B \subsetneq E$, we denote the Green function of the random walk killed when exiting $B$ as

$$g_B(x, y) = E_x\Big[\sum_{n=0}^{T_B - 1} 1\{X_n = y\}\Big], \quad x, y \in E.$$

We write $|.|_\infty$ for the $l^\infty$-distance on $E$, $B(x, r)$ for the $|.|_\infty$-closed ball of radius $r > 0$ centered at $x \in E$, and denote the induced mutual distance of subsets $A, B$ of $E$ with

$$d(A, B) = \inf\{|x - y|_\infty : x \in A,\ y \in B\}.$$

For any set $A \subseteq E$, the boundary $\partial A$ of $A$ is defined as the set of points in $E \setminus A$ having neighbors in $A$, and the number of points in $A$ is denoted by $|A|$. For sequences $a_N$ and $b_N$, we write $a_N \ll b_N$ to mean that $a_N / b_N$ tends to $0$ as $N$ tends to infinity.
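The killed Green function is easy to compute exactly in small examples, which gives a useful sanity check for the estimates that follow. The sketch below (hypothetical helper name; pure-Python Gauss–Jordan elimination over the rationals) computes $g_B$ for the one-dimensional interval $B = \{1, \ldots, n-1\}$, where the row sums recover the classical identity $\sum_y g_B(x,y) = E_x[T_B] = x(n-x)$:

```python
from fractions import Fraction

def killed_green(n):
    """Green function g(x, y) = expected number of visits to y before
    exiting {1, ..., n-1}, for simple random walk on Z absorbed at 0 and n.
    Computes G = (I - Q)^{-1} exactly over the rationals, where Q is the
    transition matrix of the walk killed outside the interval."""
    m = n - 1                           # interior sites 1, ..., n-1
    half = Fraction(1, 2)
    # A = I - Q (tridiagonal); G starts as the identity and becomes A^{-1}
    A = [[Fraction(int(i == j)) - (half if abs(i - j) == 1 else 0)
          for j in range(m)] for i in range(m)]
    G = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    for c in range(m):                  # Gauss-Jordan elimination
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        G[c] = [v / piv for v in G[c]]
        for r in range(m):
            if r != c and A[r][c]:
                f = A[r][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
                G[r] = [a - f * b for a, b in zip(G[r], G[c])]
    # g(x, y) in the 1-based interior coordinates
    return lambda x, y: G[x - 1][y - 1]
```

For instance, with $n = 6$ one finds $g_B(2,3) = 2$, the function is symmetric (the walk is reversible with respect to the uniform measure), and $\sum_{y=1}^{5} g_B(2,y) = 2 \cdot 4 = 8$, the expected exit time from the interval.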
Throughout the proof, we often use the following estimate on hitting probabilities:

Proof. Apply the strong Markov property at H
Moreover, we use the following exit-time estimates:

Proof. We may assume that $2a \le b$, for otherwise there is nothing to prove. To show (2.3), one uses the Chebyshev inequality with $\lambda > 0$ and obtains a bound in terms of an exponential moment of the exit time. By Khaśminskii's Lemma (see [5], Lemma 1.1, p. 292, and also [2]), this last expectation is bounded from above by $2$ for a certain constant $\lambda > 0$, and (2.3) follows. As for (2.4), we define the stopping times $(U_n)_{n \ge 1}$ as the times of successive displacements of the walk at distance $a$, i.e.

$$U_1 = \inf\{n \ge 0 : |X_n - X_0|_\infty \ge a\}, \qquad U_{n+1} = U_n + U_1 \circ \theta_{U_n}, \quad n \ge 1.$$

One then obtains, by the Chebyshev inequality and the strong Markov property applied inductively at the times $U_{[b/a]-1}, \ldots, U_1$, a bound by the $[b/a]$-th power of $\sup_{x \in E} E_x\big[e^{-\lambda U_1}\big]$.
By the invariance principle, the last expectation is bounded from above by 1 − c for some constant c > 0, from which (2.4) follows.
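The inductive step behind (2.4) can be summarized as follows (a sketch with an unspecified constant $\lambda$ of order $a^{-2}$; the precise constants are as in the text above). Since the walk must perform roughly $[b/a]$ successive displacements at distance $a$ before it can leave $B(0,b)$,

```latex
P_0\big[T_{B(0,b)} \le t\big]
\;\le\; P_0\big[U_{[b/a]} \le t\big]
\;\le\; e^{\lambda t}\, E_0\big[e^{-\lambda U_{[b/a]}}\big]
\;\le\; e^{\lambda t}\Big(\sup_{x \in E} E_x\big[e^{-\lambda U_1}\big]\Big)^{[b/a]}
\;\le\; e^{\lambda t}\,(1-c)^{[b/a]},
```

where the second inequality is the exponential Chebyshev inequality, the third is the strong Markov property applied inductively at the times $U_{[b/a]-1}, \ldots, U_1$, and the last factor is controlled by the invariance principle.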
The following positive constants remain fixed throughout the article: the exponents $\alpha_1, \beta_1$ of (2.5) and the scales $a_0, a_1, b_0, b_1$ of (2.6), where $a_1 = N^{2-\frac{1}{10}}$ is the time horizon of Section 3 and $a_0$, slightly larger than $N^{4/3}$, is the minimal time separation appearing in Lemma 3.2. We are now ready to begin the proof of the two crucial claims (1.5) and (1.6), starting with (1.5).

Profusion of logarithmic components until time $a_1$
In this section, we show the claim (1.5). To this end, we define the $\mathcal{F}_{[t]}$-measurable random subset $J_t$ of $E$ for $t \ge 0$ as the set of all $x \in E$ such that the segment $[x, x + le_1]$ forms a component of the vacant set left until time $[t]$, where $l$ was defined in (1.3):

$$J_t = \big\{x \in E : [x, x + le_1] \text{ is a connected component of } E \setminus X_{[0,t]}\big\}. \quad (3.1)$$

We then show that for small $\nu > 0$, at least $N^\nu$ segments of length $l$ occur as components in the vacant set until time $a_1$ with overwhelming probability:

Proposition 3.1. For $0 < \nu < (\alpha_1 - \beta_1)/2$,

$$P_0\big[|J_{a_1}| \ge N^\nu\big] \longrightarrow 1, \quad \text{as } N \to \infty. \quad (3.2)$$

Proof. The proof of Proposition 3.1 will be split into Lemmas 3.2, 3.3 and 3.4, which we now state. Lemma 3.2 asserts that when $d \ge 5$, on an event of probability tending to $1$ as $N$ tends to infinity,

$$X_I \cap X_J = \emptyset, \quad \text{for all subintervals } I, J \text{ of } [0, a_1] \text{ with mutual distance at least } a_0. \quad (3.3)$$
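The definition of $J_t$ above translates directly into a finite check: a segment is a component of the vacant set iff it is itself vacant and all of its boundary sites have been visited. A toy implementation (hypothetical helper names; brute force over a tiny torus, far from the asymptotic regime of the proposition):

```python
def neighbors(x, N):
    """Nearest neighbors of the site x on the torus (Z/NZ)^d."""
    for i in range(len(x)):
        for s in (-1, 1):
            yield x[:i] + ((x[i] + s) % N,) + x[i + 1:]

def is_segment_component(x, l, visited, N):
    """True iff [x, x + l*e_1] is vacant and forms an entire connected
    component of the vacant set, i.e. every boundary site is visited."""
    seg = {((x[0] + k) % N,) + x[1:] for k in range(l + 1)}
    if seg & visited:
        return False                  # the segment itself must be vacant
    boundary = {y for p in seg for y in neighbors(p, N)} - seg
    return boundary <= visited        # all boundary sites visited

def segment_components(visited, l, N, d):
    """The set of x such that [x, x + l*e_1] is a component of the
    vacant set (the analogue of J_t, with X_{[0,t]} = visited)."""
    sites = [()]
    for _ in range(d):
        sites = [p + (k,) for p in sites for k in range(N)]
    return {x for x in sites
            if is_segment_component(x, l, visited, N)}
```

For example, on the torus $(\mathbb{Z}/7\mathbb{Z})^2$, if the visited set consists exactly of the eight boundary sites of the segment $[(2,2), (4,2)]$, then $(2,2)$ is the unique site detected.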
To this end, we partition the time interval $[0, a_1]$ into the $[a_1/b_1]$ consecutive subintervals $[(i-1)b_1, i b_1]$, $i = 1, \ldots, [a_1/b_1]$, each of length $b_1$, larger than $a_0$, cf. (2.6). By $A_{i,S}$, $S \subseteq E$, we denote the event that, during the first half of the $i$-th time interval, the random walk produces a component consisting of a segment of length $l$ (cf. (1.3)) at some point $x \in S$, and does not visit the same component until the end of the $i$-th time interval (3.4). The associated random set of labels is then defined as the set of indices $i$ for which $A_{i,S}$ occurs (3.5).
The next lemma then asserts that, with probability tending to $1$ as $N \to \infty$, at least $[N^\nu]$ of the events $A_{i,E}$ occur (3.6). Finally, Lemma 3.4 shows that Lemmas 3.2 and 3.3 together do yield Proposition 3.1 (3.7).
We now prove these three lemmas.
Proof of Lemma 3.2. We start by observing that, by the simple Markov property and translation invariance, the probability of the complement of the event in (3.3) is bounded as in (3.8). The remaining task is to find an upper bound on this last probability via the exit-time estimates (2.3) and (2.4). We fix $a_*$ as a power of $N$ with $a_* \ll a_0$, and note that $a_1 \ll N^2$. By the exit-time estimates (2.3) and (2.4), we can therefore assume that the random walk exits the ball $B(0, a_*)$ before time $a_0$, but remains in $B(0, N/4)$ until time $a_1$. More precisely, one has the decomposition (3.9), where $P_1$, $P_2$ and $P_3$ is abbreviated notation for the three terms in the previous line. By the exit-time estimate (2.3), applied with $a = a_*$ and $b = \sqrt{a_0}$, one obtains (3.10). Moreover, the estimate (2.4) with $a = \sqrt{a_1}$ and $b = N/4$ implies (3.11). It thus remains to bound $P_1$. We obtain, by the strong Markov property applied at time $T_{B(0,a_*)}$, the bound (3.12). The standard Green function estimate from [3], Theorem 1.5.4, implies that for any $x \in E$ with $|x|_\infty = a_* + 1$, the probability in (2.1) is bounded from above by $c\, a_*^{2-d}$. Inserted into (3.12), this yields (3.13). Substituting the bounds (3.10), (3.11) and (3.13) into (3.9), one then finds an estimate which, inserted into (3.8), finally yields (3.14), and the proof of Lemma 3.2 is complete.
Proof of Lemma 3.3. The following result on the ubiquity of segments of logarithmic size from [1] will be used: define, for any constants $K > 0$, $0 < \beta < 1$ and time $t \ge 0$, the event

$$V_{K,\beta,t} = \bigcap_{y \in E} \big\{\text{there is an integer } 0 \le m \le N^\beta \text{ such that } y + m e_1 + [0, [K \log N] e_1] \subseteq E \setminus X_{[0,t]}\big\}. \quad (3.15)$$

Then for dimension $d \ge 4$ and some constant $c > 0$, one has

$$\limsup_N \frac{1}{N^c} \log P\big[V_{c_1,\beta_0,uN^d}^c\big] < 0, \quad \text{for small } u > 0, \quad (3.16)$$

see the end of the proof of Theorem 1.2 in [1], and note the bounds (1.11), (1.49), (1.56) in [1]. With this last estimate, we will be able to assume that at the beginning of every time interval $[(i-1)b_1, i b_1]$, $i = 1, \ldots, [a_1/b_1]$, there is an unvisited segment of length $l$ in the $b_0$-neighborhood of the current position of the random walk. This will reduce the proof of Lemma 3.3 to the derivation of a lower bound on $P_0[A_{1,\{x\}}]$ for an $x$ in the $b_0$-neighborhood of $0$.
We denote with $I$ the set of indices, i.e. $I = \{1, \ldots, [a_1/b_1]\}$. A rough counting argument yields the bound (3.17) on the probability of the complement of the event in (3.6). For any set $I$ considered in the last supremum, we label its elements in increasing order as $1 \le i_1 < \ldots < i_{|I|}$. Note that the events $V_{c_1,\beta_0,t}$ defined in (3.15) decrease with $t$. Applying (3.16), one obtains (3.18). Again with monotonicity of $V_{c_1,\beta_0,t}$ in $t$, one finds (3.19). Before proving (3.20), we note that if one uses (3.20) in (3.19) and proceeds inductively, one obtains, for $0 < \nu < (\alpha_1 - \beta_1)/2$ (cf. (2.5)) and $N \ge c$, the bound (3.21).
As a result, (3.17), (3.18) and (3.21) together yield, for $0 < \nu < (\alpha_1 - \beta_1)/2$ and $N \ge c$, the bound (3.22), hence (3.6). It therefore only remains to show (3.20). To this end, we first find a suitable unvisited segment of length $l$ to be surrounded during the $i$-th time interval. We thus define the $\mathcal{F}_{(i-1)b_1}$-measurable random subsets $(K_S)_{S \subseteq E}$ of $E$ as the sets of points $x \in S \subseteq E$ such that the segment of length $l$ at site $X_{(i-1)b_1} + x$ is vacant at time $(i-1)b_1$:

$$K_S = \big\{x \in S : X_{(i-1)b_1} + x + [0, le_1] \subseteq E \setminus X_{[0,(i-1)b_1]}\big\}.$$

For $N \ge c$, on the event $V_{c_1,\beta_0,(i-1)b_1}$, for any $y \in E$ there is an integer $0 \le m \le b_0$ such that the segment $y + me_1 + [0, le_1]$ is contained in the vacant set left until time $(i-1)b_1$. This implies in particular, with $y = X_{(i-1)b_1}$ (and necessarily $m > 0$), that $K_{[e_1, b_0 e_1]} \neq \emptyset$. Since the event $B$ in (3.20) is a subset of $V_{c_1,\beta_0,(i-1)b_1}$, it follows that $K_{[e_1, b_0 e_1]} \neq \emptyset$ on $B$. Note that $K_{[e_1, b_0 e_1]}$ and $B$ are both $\mathcal{F}_{(i-1)b_1}$-measurable. Applying the simple Markov property at time $(i-1)b_1$ to the probability in this last expression and using translation invariance, it follows that

$$P[A_{i,E} \cap B] \ge \inf_{x \in [e_1, b_0 e_1]} P_0\big[A_{1,\{x\}}\big]\, P[B]. \quad (3.23)$$

In the remainder of this proof, we find a lower bound on $\inf_{x \in [e_1, b_0 e_1]} P_0[A_{1,\{x\}}]$ in three steps. First, for arbitrary $x \in [e_1, b_0 e_1]$, we bound from below the probability that the random walk reaches the boundary $\partial[x, x + le_1]$ within time at most $b_1/4$. Next, we estimate the probability that the random walk, once it has reached $\partial[x, x + le_1]$, covers $\partial[x, x + le_1]$ in $[3dl] \ll b_1/4$ steps. And finally, we find a lower bound on the probability that the random walk starting from $\partial[x, x + le_1]$ does not visit the segment $[x, x + le_1]$ during a time interval of length $b_1$. With this program in mind, note that for $x \in [e_1, b_0 e_1]$ and $N \ge c'$, one has a lower bound of the form $P_0[A_{1,\{x\}}] \ge L_1 L_2 L_3$ (3.24), with factors corresponding to the three steps just described. We now bound each of the above factors from below. Beginning with $L_1$, we fix $b_*$ so that $b_0 \ll b_*$ and $b_*^2 \ll b_1$. With (2.3), where $a = b_*$ and $b = \sqrt{b_1}/2$, we infer with (2.5) that the walk exits $B(0, b_*)$ before time $b_1/4$ with overwhelming probability. We then use the left-hand estimate of (2.2) to bound from below the probability of hitting $\partial[x, x + le_1]$ before exiting $B(0, b_*)$. With the Green function estimate of [3], Proposition 1.5.9 (for the numerator), and transience of the simple random walk in dimension $d-1$ (for the denominator), the right-hand side is bounded from below by $c\, l\, b_*^{2-d}$. The lower bound we need on $L_2$ in (3.24) is straightforward: we simply calculate the probability that the random walk follows a suitable fixed path in $\partial[0, le_1]$, starting at $y \in \partial[0, le_1]$ and covering $\partial[0, le_1]$ in at most $d(2l+8) \le 3dl$ steps (for $N \ge c'$). Such a path can for instance be found by considering the paths $P_i$, $i = 2, \ldots, d$, surrounding the segment $[0, le_1]$ in the $(e_1, e_i)$-hyperplane, i.e.

$$P_i = \big({-e_1},\ {-e_1} + e_i,\ e_i,\ e_1 + e_i,\ \ldots,\ (l+1)e_1 + e_i,\ (l+1)e_1,\ (l+1)e_1 - e_i,\ le_1 - e_i,\ \ldots,\ {-e_1} - e_i,\ {-e_1}\big), \quad i = 2, \ldots, d.$$

The paths $P_i$ avoid the segment $[0, le_1]$, and their concatenation forms a path starting at $-e_1$ and covering $\partial[0, le_1]$ in $(d-1)(2l+8)$ steps. Finally, any starting point $y \in \partial[0, le_1]$ is linked to $-e_1$ in at most $2l+8$ steps via one of the paths $P_i$. Therefore, we have the required lower bound on $L_2$. For $L_3$ in (3.24), we note a corresponding bound for any $y \in \partial[0, le_1]$. Note that the $(d-1)$-dimensional projection of $X$ obtained by omitting the first coordinate is a $(d-1)$-dimensional random walk with a geometric delay of constant parameter. Hence, one finds that for $y \in \partial[0, le_1]$, the bound (3.28) holds, where $q(.)$ is as below (1.1) and we have used $(d-1)/d$ to bound from below the probability that the projected random walk, if starting from $0$, leaves $0$ in its first step. By translation invariance, for $N \ge c$, the second probability on the right-hand side of (3.28) is bounded from above by the expression in (3.29), which completes the proof of Lemma 3.3.

Proof of Lemma 3.4. We denote the events on the left-hand side of (3.7) by $A$ and $B$, i.e. $A$ is the event in (3.6) and $B$ the event in (3.3).
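The covering path just described is completely explicit, and its stated properties (step count $(d-1)(2l+8)$, avoidance of the segment, coverage of $\partial[0, le_1]$) can be verified mechanically. A sketch in coordinates of $\mathbb{Z}^d$ (no wrap-around, i.e. $N$ assumed large; hypothetical helper name):

```python
def covering_path(l, d):
    """Concatenation of the paths P_i (i = 2, ..., d) described above:
    a nearest-neighbor path starting and ending at -e_1 that never
    touches the segment [0, l*e_1] and visits every boundary point of
    the segment. Points are d-tuples of ints (no torus wrap-around)."""
    def pt(k, i, h):                     # the point k*e_1 + h*e_i
        p = [0] * d
        p[0] = k
        if i is not None:
            p[i] = h
        return tuple(p)

    path = [pt(-1, None, 0)]             # start at -e_1
    for i in range(1, d):                # e_i for i = 2, ..., d (0-indexed)
        path.append(pt(-1, i, 1))
        for k in range(0, l + 2):        # sweep forward at height +e_i
            path.append(pt(k, i, 1))
        path.append(pt(l + 1, None, 0))
        path.append(pt(l + 1, i, -1))
        for k in range(l, -2, -1):       # sweep back at height -e_i
            path.append(pt(k, i, -1))
        path.append(pt(-1, None, 0))     # back to -e_1
    return path
```

Running this for, say, $l = 4$ and $d = 3$ confirms that the path makes exactly $(d-1)(2l+8)$ nearest-neighbor steps, avoids the segment, and covers its entire boundary.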
We need to show that, if $A \cap B$ occurs, then we can find $[N^\nu]$ segments of length $l$ as components of the vacant set left until time $a_1$. Informally, the reasoning goes as follows: for any of the $[N^\nu]$ events $A_{i,E}$ occurring on $A$, cf. (3.4), the random walk produces in the time interval $(i-1)b_1 + [0, b_1/2]$ a component of the vacant set consisting of a segment of length $l$, and this segment remains unvisited for a further duration of $[b_1/2]$, much larger than $a_0$, cf. (2.6). Moreover, when $B$ occurs, after a time interval of length $a_0$ has elapsed, the random walk does not revisit any point on the visited boundary of the segment appearing in any of the occurring events $A_{i,E}$. It follows that the segments appearing in the $[N^\nu]$ different occurring events $A_{i,E}$ are distinct, unvisited, and have a completely visited boundary. More precisely, we fix any $N \ge c$ such that (3.31) holds, and assume that the events $A$ and $B$ both occur. We pick indices $1 \le i_1 < \ldots < i_{[N^\nu]} \le [a_1/b_1]$ such that the events $A_{i_j,E}$ occur, and denote one of the segments of the form $[x, x + le_1]$ appearing in the definition of $A_{i_j,E}$ by $S_j$, cf. (3.4). The proof will be complete once we have shown that $X_{[0,a_1]} \supseteq \partial S_j$, $X_{[0,a_1]} \cap S_j = \emptyset$ and $S_j \neq S_{j'}$ for any $j, j' \in \{1, \ldots, [N^\nu]\}$, $j < j'$.
That $X_{[0,a_1]} \supseteq \partial S_j$ follows directly from the occurrence of the event $A_{i_j,E}$ on $A$, cf. (3.4). To see that $X_{[0,a_1]} \cap S_j = \emptyset$, note first that by definition of $A_{i_j,E}$, the segment $S_j$ remains unvisited until time $i_j b_1$ (3.32). In particular, this implies that in order to enter $S_j$ during $[i_j b_1, a_1]$, the walk would have to pass through $\partial S_j$, and that for any $x \in S_j$ there is a neighboring point of $x$ in $\partial S_j$. Moreover, one has on $A_{i_j,E}$ that $\partial S_j \subseteq X_{[0, i_j b_1 - b_1/2]}$, so any visit to $\partial S_j$ after time $i_j b_1$ would revisit a point at two times separated by at least $b_1/2 \ge a_0$, and by (3.31) and hence by (3.33), $S_j \cap X_{[i_j b_1, a_1]} = \emptyset$. With (3.32), we deduce that $X_{[0,a_1]} \cap S_j = \emptyset$, as required. Finally, we need to show that $S_j \neq S_{j'}$ for $j < j'$. To this end, note that on $A_{i_{j'},E}$, the walk visits every point of $\partial S_{j'}$ during the first half of the $i_{j'}$-th time interval; if $S_{j'}$ were equal to $S_j$, the walk would thus revisit points of $\partial S_j \subseteq X_{[0, i_j b_1 - b_1/2]}$ at times separated by at least $a_0$ from their earlier visits, contradicting the occurrence of $B$. Hence (3.7) is proved and the proof of Lemma 3.4 is complete.
The statement (3.2) is now a direct consequence of (3.3), (3.6) and (3.7), so that the proof of Proposition 3.1 is finished.

Survival of a logarithmic segment
This section is devoted to the preparation of the second part of the proof of Theorem 1.1, that is, claim (1.6). We show that at least one of the $[N^\nu]$ isolated segments produced until time $a_1$ remains unvisited by the walk until time $uN^d$. As mentioned in the introduction, the strategy is to use a lower bound of $e^{-cul}$ on the probability that one fixed segment remains unvisited until a (random) time larger than $uN^d$. The desired statement (1.6) would be an easy consequence if the events $\{X_{[0,uN^d]} \cap [x, x + le_1] = \emptyset\}$ were independent for different $x \in E$, but this is obviously not the case. However, a technique developed in [1] allows us to bound the covariance between such events for sufficiently distant points $x$ and $x'$, with $uN^d$ replaced by the random time $D^x_{l_*(u)}$. Here, $D^x_k$ is defined as the end of the $k$-th excursion in and out of concentric boxes of suitable size centered at $x \in E$, and $l_*(u)$ is chosen such that with high probability, $D^x_{l_*(u)} \ge uN^d$, see (4.6) and (4.7) below. The variance bounds from [1] and the above-mentioned estimates yield the desired claim in Proposition 4.1. In order to state this proposition, we introduce the integer-valued random variable $\Gamma^J_{[s,t]}$, for $s, t \ge 0$ and $J \subseteq E$, counting the number of sites $x$ in $J$ such that the segment $[x, x + le_1]$ remains unvisited during $[s, t]$ (4.1). The following proposition asserts that for $\nu > 0$ and an arbitrary set $J$ of size at least $[N^\nu]$, when $u > 0$ is chosen small enough, $\Gamma^J_{[0,uN^d]}$ is not zero with $P_0$-probability tending to $1$ as $N$ tends to infinity (4.2). Combined with the application of the Markov property at time $a_1$, it will play a crucial role in the proof of (1.6), cf. (5.2) below.
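The return-and-departure structure behind $D^x_k$ can be made concrete on a finite trajectory. The sketch below (hypothetical helper name; `trace` is a pre-computed trajectory, and balls are in the sup-norm) extracts the successive returns to the inner box $B(x, L)$ and departures from the outer box $B(x, r)$, in the spirit of (4.6):

```python
def excursion_times(trace, x, L, r):
    """Successive return times R_k to C(x) = B(x, L) and departure
    times D_k from the larger box B(x, r), r > L, along a given
    finite trajectory on Z^d (sup-norm balls)."""
    def dist(p):
        return max(abs(a - b) for a, b in zip(p, x))

    times_R, times_D = [], []
    n, T = 0, len(trace)
    while True:
        while n < T and dist(trace[n]) > L:   # wait for return to B(x, L)
            n += 1
        if n == T:
            break
        times_R.append(n)
        while n < T and dist(trace[n]) <= r:  # wait for departure from B(x, r)
            n += 1
        if n == T:
            break
        times_D.append(n)
    return times_R, times_D
```

By construction the output interleaves as $R_1 < D_1 < R_2 < D_2 < \ldots$, exactly the ordering stated after (4.6); a trajectory ending mid-excursion simply produces one more return than departure.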
Proof. Throughout the proof, we say that a statement applies "for large $N$" if the statement applies for all $N$ larger than a constant depending only on $d$ and $\nu$. The central part of the proof is an application of a technique for estimating the covariance of "local functions" of distant subsets of points in the torus, developed in [1]. In order to apply the corresponding result from [1], we set $L = (\log N)^2$ and, for large $N$, consider any positive integer $r$ such that $10L \le r \le N^{\nu/d}$ (4.4). Note that $L$ and $r$ then satisfy (3.1) of [1]. We then define the nested boxes

$$C(x) = B(x, L), \quad \text{and} \quad \widetilde{C}(x) = B(x, r). \quad (4.5)$$

Finally, we consider the stopping times $(R^x_k, D^x_k)_{k \ge 1}$, the successive returns to $C(x)$ and departures from $\widetilde{C}(x)$, defined as in [1], (4.8), by

$$R^x_1 = H_{C(x)}, \quad D^x_1 = R^x_1 + T_{\widetilde{C}(x)} \circ \theta_{R^x_1}, \quad \text{and for } n \ge 2, \quad R^x_n = D^x_{n-1} + R^x_1 \circ \theta_{D^x_{n-1}}, \quad D^x_n = D^x_{n-1} + D^x_1 \circ \theta_{D^x_{n-1}}, \quad (4.6)$$

so that $0 \le R^x_1 < D^x_1 < \ldots < R^x_k < D^x_k < \ldots$, $P$-a.s. The following estimate from [1] on these returns and departures will be used (Lemma 4.2, cf. (4.7)).

Proof of Lemma 4.2. The statement is the same as (4.9) in [1], except that we have here replaced $P$ by $P_0$ and added an extra factor of $N^d$ on the right-hand side of (4.7). It therefore suffices to note that $P_0[\cdot] \le N^d P[\cdot]$, since $P = N^{-d} \sum_{x \in E} P_x$.

We now control the complement of the event in (4.2). To this end, fix any $J \subseteq E$ such that $|J| = N^\nu$ and note that

$$\Gamma_u = \sum_{x \in J} h(x), \quad (4.9)$$

where $h(x)$ denotes the indicator function of the event that the segment $[x, x + le_1]$ remains unvisited until time $D^x_{l_*(u)}$, and $l_*(u)$ was defined in (4.7). In order to bound the probability in (4.8), we need an estimate on the variance of $\Gamma_u$. This estimate can be obtained by using the bound on the covariance of $h(x)$ and $h(y)$ for $x$ and $y$ sufficiently far apart, derived in [1]. To this end, one first notes that

$$\mathrm{var}_{P_0}(\Gamma_u) = \mathrm{var}_{P_0}\Big(\sum_{x \in J} h(x)\Big) \le c\Big(N^\nu r^d + r^{2d} + N^{2\nu} \sup_{\substack{x, y \in E,\ |x - y|_\infty \ge 2r+3, \\ x, y \notin C(0)}} \mathrm{cov}_{P_0}\big(h(x), h(y)\big)\Big).$$
In the proof of Proposition 4.2 in [1], the covariance in the last supremum is bounded from above by $cu\, L^d/r$ (cf. [1], above (4.44)). Since $r^d \le N^\nu$ (cf. (4.4)), one obtains (4.10). For $u > 0$ chosen sufficiently small, the right-hand side tends to $0$ as $N \to \infty$. With (4.8) and monotonicity of $\Gamma^J_\cdot$ in $J$, this proves (4.2). There only remains to show (4.11).
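The final step combining the variance estimate with (4.8) is the standard second-moment argument, which can be summarized as follows (our summary in the notation of the proof, not a quotation of (4.10)). By the Chebyshev inequality,

```latex
P_0\big[\Gamma_u = 0\big]
\;\le\; P_0\Big[\big|\Gamma_u - E_0[\Gamma_u]\big| \ge E_0[\Gamma_u]\Big]
\;\le\; \frac{\mathrm{var}_{P_0}(\Gamma_u)}{\big(E_0[\Gamma_u]\big)^{2}},
```

and the variance bound above, together with the lower bound $E_0[\Gamma_u] \ge c\,|J|\, e^{-c'ul}$ on the first moment, shows that this last ratio tends to $0$ as $N \to \infty$ for $u > 0$ small, i.e. $\Gamma_u > 0$ with $P_0$-probability tending to $1$.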
