Decoupling inequalities and supercritical percolation for the vacant set of random walk loop soup

It has been recently understood (arXiv:1212.2885, arXiv:1310.4764, arXiv:1410.0605) that for a general class of percolation models on $\mathbb{Z}^d$ satisfying suitable decoupling inequalities, which includes, inter alia, Bernoulli percolation, random interlacements and level sets of the Gaussian free field, the large-scale geometry of the unique infinite cluster in the strongly percolative regime is qualitatively the same; in particular, the random walk on the infinite cluster satisfies the quenched invariance principle, Gaussian heat-kernel bounds and a local CLT. In this paper we consider the random walk loop soup on $\mathbb{Z}^d$ in dimensions $d\geq 3$. An interesting aspect of this model is that, despite its similarity and connections to random interlacements and the Gaussian free field, it does not fall into the above-mentioned general class of percolation models, since it does not satisfy the required decoupling inequalities. We identify weaker (and more natural) decoupling inequalities and prove that (a) they do hold for the random walk loop soup and (b) all the results about the large-scale geometry of the infinite percolation cluster proved for the above-mentioned class of models hold also for models that satisfy the weaker decoupling inequalities. In particular, all these results are new for the vacant set of the random walk loop soup. (The range of the random walk loop soup has been addressed by Chang, arXiv:1504.07906, by a model-specific approximation method, which does not apply to the vacant set.) Finally, we prove that the strongly supercritical regime for the vacant set of the random walk loop soup is non-trivial. It is expected, but open at the moment, that the strongly supercritical regime coincides with the whole supercritical regime.


Introduction
Consider the integer lattice $\mathbb{Z}^d$ with dimension $d \geq 3$. Any nearest neighbor path $\dot\ell = (x_1, \dots, x_n)$ on $\mathbb{Z}^d$ with $x_n$ being a neighbor of $x_1$ is called a (non-trivial discrete) based loop. Two based loops of length $n$ are equivalent if they differ only by a circular permutation of their vertices, i.e., $(x_1, \dots, x_n)$ is equivalent to $(x_i, \dots, x_n, x_1, \dots, x_{i-1})$ for all $i$. Equivalence classes of based loops for this equivalence relation are called loops. Consider the measure $\dot\mu$ on based loops defined by
$$\dot\mu(\dot\ell) = \frac{1}{n}\Big(\frac{1}{2d}\Big)^{n}, \qquad \dot\ell = (x_1, \dots, x_n),$$
and denote the push-forward of $\dot\mu$ on the space of loops by $\mu$. For $\alpha > 0$, let $\mathcal{L}^\alpha$ be the Poisson point process of loops with intensity measure $\alpha\mu$ (the random walk loop soup).
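For concreteness, the shortest loops give a worked instance of these definitions (a computation of ours, directly from the formula for $\dot\mu$):

```latex
% A based loop of length 2 is \dot\ell = (x, y) with y \sim x; its mass is
\dot\mu\big((x,y)\big) \;=\; \tfrac{1}{2}\Big(\tfrac{1}{2d}\Big)^{2} \;=\; \tfrac{1}{8d^{2}},
\qquad
% and its equivalence class \pi((x,y)) = \{(x,y),(y,x)\} has mass
\mu\big(\pi((x,y))\big) \;=\; 2 \cdot \tfrac{1}{8d^{2}} \;=\; \tfrac{1}{4d^{2}},
```

so a given length-2 loop belongs to $\mathcal{L}^\alpha$ with probability $1 - e^{-\alpha/(4d^2)}$.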
Poisson ensembles of Markovian loops (loop soups) have recently been actively researched by probabilists and mathematical physicists, partly due to their connections to the Gaussian free field, the Schramm-Loewner Evolution and the loop-erased random walk, see, e.g., [16,17,32,36,19,20,5,3,31]. Although they already appear implicitly in the work of Symanzik [33] on representations of the $\varphi^4$ Euclidean field, the first mathematically rigorous definitions were given by Lawler and Werner [16] in the context of planar Brownian motion (the Brownian loop soup) and by Lawler and Trujillo Ferreras [15] in the discrete setting. Percolation of loop soups was first considered by Lawler and Werner [16] and Sheffield and Werner [32], who identified, in particular, the value of the critical intensity for the planar Brownian loop soup. The existence of a percolation phase transition for the random walk loop soup on $\mathbb{Z}^d$ and properties of the critical intensity have been investigated in [18,19,7,21,6]. A comprehensive analysis of connectivity properties of the random walk loop soup on $\mathbb{Z}^d$ in the subcritical regime was achieved by Chang and the second author [7], and in the supercritical regime by Chang [6]. One of the main challenges in the study of connectivity properties of the loop soup is the polynomial decay of correlations (see [7]). Models of percolation exhibiting strong spatial correlations have been of immense interest in the last decade, including random interlacements, the vacant set of random interlacements and the level sets of the Gaussian free field, see, e.g., [34,35,29]. Many of the methods (particularly the coarse-graining and Peierls-type arguments) developed for Bernoulli percolation do not apply to these models. The fundamental idea behind the major progress in understanding these models (which are monotone in their intensity parameters) is that the effect of correlations can be well dominated by a slight tilt of the intensity parameter (sprinkling).
This idea is formalized in correlation inequalities known as decoupling inequalities [34,35,29,11,22,23,1,28]. A general class of percolation models, which satisfies a suitable decoupling inequality and contains the three models mentioned above, was considered in [9,24,30], where most of the geometric properties of the infinite percolation cluster, previously only known to hold for Bernoulli percolation, were proven. (See Section 6 for a precise formulation of the conditions from [9].) An interesting aspect of random walk loop soup percolation is that it does not fall into this general class of models, since the decoupling inequalities assumed there (see condition P3 in Section 6) are not valid. The main reason is that the error term in the decoupling inequality P3 gets smaller on larger scales, while the stochastic behavior of macroscopic loops in the loop soup is scale invariant; see Remark 6.2 for more details.
The main goal of this paper is the study of geometric properties of connected components of the vacant set of the loop soup $\mathcal{L}^\alpha$, i.e., the set of vertices of $\mathbb{Z}^d$ that do not belong to any of the loops in $\mathcal{L}^\alpha$, which we denote by $\mathcal{V}^\alpha$. The vacant set exhibits a non-trivial percolation phase transition: there exists $\alpha_* \in (0, \infty)$ such that
• for $\alpha < \alpha_*$ there is almost surely a unique infinite connected component in $\mathcal{V}^\alpha$,
• for $\alpha > \alpha_*$ all the connected components of $\mathcal{V}^\alpha$ are almost surely finite.
The fact that $\alpha_* < \infty$ is elementary, since $\mathcal{V}^\alpha$ is stochastically dominated by Bernoulli site percolation with parameter $e^{-\alpha/(4d^2)}$ (by restricting $\mathcal{L}^\alpha$ to loops of length 2), and the positivity of $\alpha_*$ follows from Theorem 1.3. The uniqueness of the infinite cluster is not entirely trivial, since the so-called positive finite energy property fails for $\mathcal{V}^\alpha$, but it can still be proved by a direct adaptation of the standard Burton-Keane argument [4], cf. Remark 3.5.
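The domination can be spelled out along the following lines (a sketch of ours; the choice of the distinguished direction $e_1$ is for concreteness only):

```latex
% Let \ell_x be the length-2 loop on the edge \{x, x+e_1\}; then
% \mu(\ell_x) = 1/(4d^2), and the counts of the loops (\ell_x)_{x\in\mathbb{Z}^d}
% in \mathcal{L}^\alpha are i.i.d. Poisson. Since
% \{x \in \mathcal{V}^\alpha\} \subseteq \{\ell_x \notin \mathcal{L}^\alpha\},
\mathbb{P}\big[A \subseteq \mathcal{V}^\alpha\big]
\;\le\; \prod_{x \in A} \mathbb{P}\big[\ell_x \notin \mathcal{L}^\alpha\big]
\;=\; e^{-\alpha|A|/(4d^2)}
\qquad \text{for every finite } A \subset \mathbb{Z}^d,
```

so $\mathcal{V}^\alpha$ lies below the i.i.d. field $(\mathbb{1}_{\ell_x \notin \mathcal{L}^\alpha})_{x \in \mathbb{Z}^d}$, which is Bernoulli with parameter $e^{-\alpha/(4d^2)}$.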
Our main focus is on geometric properties of the unique infinite cluster of $\mathcal{V}^\alpha$. As already mentioned, a unified framework to study infinite clusters of (correlated) percolation models on $\mathbb{Z}^d$ was proposed in [9], within which various results that were previously known only for supercritical Bernoulli percolation have been proven. These include, inter alia, quenched Gaussian heat kernel bounds, Harnack inequalities, the invariance principle and the local CLT for the simple random walk on the infinite cluster [24,30]. The loop soup percolation does not fall into this general class of models, since the decoupling inequalities P3 assumed there are not valid, see Remark 6.2. However, Chang [6] was able to prove all the above-mentioned results for the infinite cluster in the range of the loop soup $\mathcal{L}^\alpha$ by observing that the properties of the infinite cluster are predominantly determined by loops of bounded diameter. In a way, the infinite cluster is a small perturbation on top of the infinite cluster of truncated loops. His analysis relies substantially on the Poisson point process structure of the loop soup and cannot be adapted to the vacant set, which is thus considerably more difficult.
Our first result states that the range of $\mathcal{L}^\alpha$ does satisfy a decoupling inequality, which is however weaker than the one imposed in [9], see Remarks 6.1(4) and 6.2.

Theorem 1.1. (Decoupling inequalities) Let $\mathcal{R}^\alpha$ be the set of vertices visited by loops from $\mathcal{L}^\alpha$ (the range of $\mathcal{L}^\alpha$) and denote by $\mathbb{E}^\alpha$ the expectation with respect to the distribution of $\{\mathbb{1}_{x \in \mathcal{R}^\alpha}\}_{x \in \mathbb{Z}^d}$ on $\{0,1\}^{\mathbb{Z}^d}$. There exist constants $C, c$ such that for any $\alpha > 0$, $\delta \in (0,1)$, integers $L, s \geq 1$, $x_1, x_2 \in \mathbb{Z}^d$ with $\|x_1 - x_2\| = sL$, and any functions $f_1, f_2 : \{0,1\}^{\mathbb{Z}^d} \to [0,1]$ such that $f_i(\omega)$ only depends on the values of $\omega_x$ with $\|x - x_i\| \leq L$, the decoupling inequalities (1.1) and (1.2) hold.

Our second contribution concerns the general framework from [9]. More precisely, in Section 6, after recalling the assumptions from [9], we prove that condition P3 on spatial correlations can be relaxed, cf. condition D in Section 6, without any effect on the conclusions of [9] and of [24,30], where the framework of [9] was further used, see Theorem 6.4 and Corollary 6.5. Crucially, even though the vacant set $\mathcal{V}^\alpha$ does not satisfy condition P3, it does satisfy the weaker condition D by Theorem 1.1 (see Remark 6.1(4)). Furthermore, let us emphasize that condition D is not only weaker than P3, but also more natural, since it postulates decorrelation of local events occurring in large boxes only when the boxes are far apart. All in all, we believe that Theorem 6.4 and Corollary 6.5 are of independent importance beyond their application in the present paper; nevertheless, we postpone their formulation to Section 6 because of the large amount of necessary notation. Incidentally, the results of Chang [6] about the geometry of the infinite cluster in the range of the loop soup can now be directly deduced as a special case of Corollary 6.5 (and Theorem 1.1).

Remark 1.2.
It is natural to ask whether the error term of the decoupling inequalities (1.1) and (1.2) is optimal. We believe it is not, but we do not know a good heuristic. Our proof is based on a delicate interplay between the probabilities of two rare events (an excess in the number of large loop excursions near $x_1$, resp., $x_2$), and it appears that our result is optimal for the method, see Remark 4.2. For the application of Theorem 1.1 in this paper (Theorem 1.4), an error term of the form $C\exp(-c\,\delta^\beta s^\gamma)$ with some $\beta, \gamma > 0$ would suffice, see Corollary 6.5 and Remark 6.6.
Our next result proves that for all small enough values of $\alpha$, the vacant set $\mathcal{V}^\alpha$ contains with high probability a unique giant cluster in all large enough boxes. In particular, it implies that the supercritical phase is non-trivial ($\alpha_* > 0$).

Theorem 1.3. (Local uniqueness) For any $d \geq 3$ there exist $\alpha_1 > 0$, $c = c(d) > 0$ and $C = C(d) < \infty$ such that for all $0 \leq \alpha \leq \alpha_1$ and $n \geq 1$, properties (1.3) and (1.4) hold.

Properties (1.3) and (1.4) appear as assumption S1 in the framework of [9], see Section 6. The remaining conditions (ergodicity, monotonicity, continuity) from [9] are easily verified for $\mathcal{V}^\alpha$, see Remark 6.6. As a result, we can summarize the main conclusions about the geometry of the infinite cluster of $\mathcal{V}^\alpha$ in Theorem 1.4 below. (This is an immediate application of Theorem 1.1, Corollary 6.5 and Remark 6.6.) We refer the reader to the introduction of [30] for the precise statements of these results.
We strongly believe that properties (1.3) and (1.4), with some $c = c(d,\alpha) > 0$ and $C = C(d,\alpha) < \infty$, hold for all $\alpha < \alpha_*$. This has been proven for Bernoulli percolation (for all $p > p_c$, see [12, (7.89)]), for random interlacements (for all $u > 0$, see [26]) and for the range of the loop soup (for all $\alpha > \alpha_c$, see [6]), but it remains conjectural for the level sets of the Gaussian free field and for the vacant set of random interlacements. (Analogues of Theorem 1.3 are proved for the level sets of the Gaussian free field on $\mathbb{Z}^d$ in [9] and on transient graphs from a broad class in [8], and for the vacant set of random interlacements on $\mathbb{Z}^d$ in [38] (for $d \geq 5$) and [10] (for $d \geq 3$).)

Overview of the paper. In Section 2 we collect basic definitions and classical results on random walks. In Section 3 we study the Poisson point process of loops that intersect two disjoint sets. Such loops can be cut into successive excursions between the two sets, which are distributed as independent random walk bridges conditioned on their starting and ending points, see Proposition 3.4. In Section 4 we prove Theorem 1.1 and in Section 5 Theorem 1.3. Finally, in Section 6, which can be read independently of all the other sections, we recall the general conditions on percolation models from [9], formulate a weaker decoupling inequality D and prove in Theorem 6.4 that the condition P3 from [9] can be substituted by D without any loss in the conclusions. The punchline of Section 6 is Corollary 6.5, which in particular gives Theorem 1.4.

Notation and preliminaries
For $x \in \mathbb{Z}^d$, let $\|x\|$ and $\|x\|_1$ be the $\ell^\infty$-, resp., $\ell^1$-norm of $x$, and denote by $B(x,r)$ the closed $\ell^\infty$-ball in $\mathbb{Z}^d$ of radius $r$ centered at $x$. For a set $A \subseteq \mathbb{Z}^d$, let $\partial_{\mathrm{int}} A = \{y \in A : \|y - y'\|_1 = 1 \text{ for some } y' \in \mathbb{Z}^d \setminus A\}$ be the interior boundary of $A$ and $\partial_{\mathrm{ext}} A = \{y \notin A : \|y - y'\|_1 = 1 \text{ for some } y' \in A\}$ the exterior boundary of $A$.
Let $W_+$ be the set of all infinite nearest neighbor paths on $\mathbb{Z}^d$, endowed with the $\sigma$-algebra generated by the coordinate maps $X_n$, $n \in \mathbb{N}$. Denote by $P_x$ the law of a simple random walk on $\mathbb{Z}^d$ started at $x$ and by $g : \mathbb{Z}^d \times \mathbb{Z}^d \to \mathbb{R}$ the Green function of the simple random walk, $g(x,y) = \sum_{n=0}^{\infty} P_x[X_n = y]$. It is well known, see, e.g., [13, Theorem 1.5.4], that for any $d \geq 3$, there exist $c_g > 0$ and $C_g < \infty$ such that
$$c_g\,(1 + \|x-y\|)^{2-d} \;\leq\; g(x,y) \;\leq\; C_g\,(1 + \|x-y\|)^{2-d}. \tag{2.1}$$
For $A \subset \mathbb{Z}^d$ and a nearest neighbor path $w = (w_0, \dots, w_N)$ on $\mathbb{Z}^d$, where $N \in \mathbb{N}_0 \cup \{+\infty\}$, let $H_A(w) = \inf\{n \geq 0 : w_n \in A\}$ be the entrance time in $A$ and $\widetilde{H}_A(w) = \inf\{n \geq 1 : w_n \in A\}$ the hitting time of $A$. The equilibrium measure of a finite set $A$ is defined by $e_A(x) = P_x[\widetilde{H}_A = \infty]\,\mathbb{1}_A(x)$. Its total mass is the capacity of $A$, $\mathrm{cap}(A) = \sum_x e_A(x)$. The equilibrium measure of any finite set in dimensions $d \geq 3$ is non-zero, and we denote by $\bar{e}_A$ the normalized equilibrium measure. The following relation between the entrance time probability, the Green function and the equilibrium measure is classical, see, e.g., [34, (1.8)]:
$$P_x[H_A < \infty] = \sum_{y \in A} g(x,y)\, e_A(y). \tag{2.2}$$
By taking $x = 0$ and $A = \partial_{\mathrm{int}} B(0,n)$ in (2.2) and using (2.1), one easily gets the following bounds on the capacity of balls:
$$c\, n^{d-2} \;\leq\; \mathrm{cap}(B(0,n)) \;\leq\; C\, n^{d-2}. \tag{2.3}$$
The following lemma and corollary are also standard. They will be used in the proof of Theorem 1.1.
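For instance, the capacity bounds for balls can be obtained as follows (our rendering of the standard computation): the walk started at the center of the ball must hit the inner boundary, so (2.2) with $x = 0$ and $A = \partial_{\mathrm{int}} B(0,n)$ gives

```latex
1 \;=\; P_0\big[H_{\partial_{\mathrm{int}} B(0,n)} < \infty\big]
\;=\; \sum_{y \in \partial_{\mathrm{int}} B(0,n)} g(0,y)\, e_{\partial_{\mathrm{int}} B(0,n)}(y)
\;\asymp\; n^{2-d}\, \mathrm{cap}\big(\partial_{\mathrm{int}} B(0,n)\big),
```

since $g(0,y) \asymp n^{2-d}$ on the boundary by (2.1). As every excursion from the ball to infinity crosses the inner boundary, $\mathrm{cap}(B(0,n)) = \mathrm{cap}(\partial_{\mathrm{int}} B(0,n)) \asymp n^{d-2}$.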
2. for all $n \geq 1$, $m > 2n$, $A \subset B(0,n)$, $x \notin B(0,m)$ and $y \in A$, the corresponding hitting estimate holds. There exist constants $c = c(d) > 0$ and $C = C(d) < \infty$ such that for all $r > C$, $x \in S_1$ and $y \in S_2$, the analogous bound holds. Proof. Immediate from Lemma 2.1 and the Markov property of the random walk.
For $x \in \mathbb{Z}^d$, a finite set $A \subset \mathbb{Z}^d$ and $y \in A$, we denote by $P^A_{x,y}$ the law of a random walk path (bridge) from $x$ conditioned to enter $A$ at $y$.
The set of all based loops is denoted by $\dot{\mathcal{L}}$ and the set of all loops by $\mathcal{L}$. For a loop $\ell \in \mathcal{L}$ and $A \subset \mathbb{Z}^d$, we write $\ell \cap A \neq \emptyset$ if some (and hence every) representative of the equivalence class contains at least one vertex in $A$. If $A = \{x\}$, then we instead write $x \in \ell$. If $L$ is a subset of $\mathcal{L}$ and $x \in \mathbb{Z}^d$, then we write $x \in L$ if there exists $\ell \in L$ such that $x \in \ell$. We denote by $\pi : \dot{\mathcal{L}} \to \mathcal{L}$ the canonical projection, i.e., $\pi(\dot\ell)$ is the equivalence class of $\dot\ell$. Consider the measure $\dot\mu$ on $\dot{\mathcal{L}}$ defined in the introduction and denote by $\mu$ the push-forward of $\dot\mu$ on $\mathcal{L}$ by $\pi$.
For $\alpha > 0$, let
• $\mathcal{L}^\alpha$ be the Poisson point process of loops with intensity measure $\alpha\mu$,
• $N^\alpha$ be the field of cumulative local times for the loops in $\mathcal{L}^\alpha$.
We assume that these processes are defined on a probability space $(K, \mathcal{K}, \mathbb{P})$, whose precise description is irrelevant, and we also use $\mathbb{P}^\alpha$ and $\mathbb{E}^\alpha$ to denote the law, resp., the expectation, of the corresponding processes. Constants that only depend on the dimension (and in Section 6 possibly also on $a$ and $b$) are denoted by $c$ and $C$. Their value may change from line to line and even within lines.

Decomposition of loops in excursions
In this section we study properties of loops that visit two disjoint sets $A, B \subset \mathbb{Z}^d$. Any such loop can be cut into alternating excursions from $A$ to $B$ and from $B$ to $A$, which, given their starting and ending points, are distributed as independent random walk bridges. This gives a useful way to sample the Poisson point process of loops that visit $A$ and $B$, see Proposition 3.4. Furthermore, the total number of loop excursions is unlikely to be large if $A$ and $B$ are far apart, see Lemma 3.6. Let $A, B \subset \mathbb{Z}^d$ be disjoint and consider the set $\mathcal{L}_{A,B}$ of all loops that visit both $A$ and $B$. We first recall a useful representation of the measure $\mu$ on $\mathcal{L}_{A,B}$ from [7]. Any loop in $\mathcal{L}_{A,B}$ can be decomposed into alternating nearest neighbor excursions from $A$ to $B$ and from $B$ to $A$. For any $\ell \in \mathcal{L}_{A,B}$ and any $\dot\ell = (x_1, \dots, x_n)$ representing $\ell$, we define in (3.1) the successive entrance times $\phi_i(\dot\ell)$ and $\psi_i(\dot\ell)$ of the excursions, starting from $\phi_1(\dot\ell) = 1$, where $|\ell|$ is the length of the loop $\ell$.
Let $\mathcal{L}^\alpha_{A,B}$ be the restriction of $\mathcal{L}^\alpha$ to $\mathcal{L}_{A,B}$. It is a Poisson point process with intensity measure $\alpha\,\mathbb{1}_{\mathcal{L}_{A,B}}\,\mu$, which is independent of the restriction of $\mathcal{L}^\alpha$ to $\mathcal{L} \setminus \mathcal{L}_{A,B}$. We are interested in the distribution of the excursions from $A$ to $B$ of the loops in $\mathcal{L}^\alpha_{A,B}$ (the parts of a loop between the times $\phi_i$ and $\psi_i$). The set of excursions of a loop is only determined up to cyclic permutations; therefore, it is more convenient to work with excursions of based loops. The following lemma identifies $\mathcal{L}^\alpha_{A,B}$ with a projection of a suitable Poisson point process of based loops. Let $\dot{\mathcal{L}}_{A,B}$ be the set of all based loops $\dot\ell = (x_1, \dots, x_n)$ such that
• there exists $i$ such that $x_i \in B$ and $x_j \notin (A \cup B)$ for all $j > i$.
(Mind that $\dot{\mathcal{L}}_{A,B}$ is not the set of all based loops that intersect $A$ and $B$, as the notation may suggest.) In other words, to sample $\mathcal{L}^\alpha_{A,B}$ one first samples the Poisson point process $\dot{\mathcal{L}}^\alpha_{A,B}$ of based loops and then replaces each based loop by its equivalence class.
Proof. This is a direct consequence of Lemma 3.2. Indeed, $\pi(\dot{\mathcal{L}}^\alpha_{A,B})$ is a Poisson point process with the required intensity measure.

The advantage of the based loops in $\dot{\mathcal{L}}^\alpha_{A,B}$ is that their excursions from $A$ to $B$ are naturally ordered. Of course, the range of all based loops in $\dot{\mathcal{L}}^\alpha_{A,B}$ has the same law as the range of all loops in $\mathcal{L}^\alpha_{A,B}$. Next, we decompose the Poisson point process $\dot{\mathcal{L}}^\alpha_{A,B}$ according to the number of excursions that a based loop makes from $A$ to $B$. Namely, for $j \geq 1$, we denote by $\dot{\mathcal{L}}^{\alpha,j}_{A,B}$ the restriction of $\dot{\mathcal{L}}^\alpha_{A,B}$ to $\dot{\mathcal{L}}^j_{A,B} = \{\dot\ell \in \dot{\mathcal{L}}_{A,B} : k(\dot\ell) = j\}$. We show in Proposition 3.4 that each loop soup $\dot{\mathcal{L}}^{\alpha,j}_{A,B}$ can be constructed by sampling the starting and ending locations of all the excursions from $A$ to $B$ of all the loops in $\dot{\mathcal{L}}^{\alpha,j}_{A,B}$ according to a Poisson point process, and then joining the endpoints by independent random walk bridges. Let $j \geq 1$ and recall $\phi_i$ and $\psi_i$ defined in (3.1). For a loop $\dot\ell = (x_1, \dots, x_n) \in \dot{\mathcal{L}}^j_{A,B}$, denote the starting and ending locations of all the excursions of $\dot\ell$ from $A$ to $B$, resp., from $B$ to $A$, as in Figure 1, and consider the associated Poisson point processes (multisets) $E^{\alpha,j}_{A,B}$. For an infinite path $w = (x_0, x_1, \dots)$, consider the sequence of times $\tau_i$ of the successive excursions. Then, for any $\alpha > 0$ and integer $j \geq 1$, the excursions, given their endpoints, are independent and sampled as products of bridge measures $P^B_{a_{ik}, b_{ik}}$, resp., their analogues for the excursions from $B$ to $A$. Thus, the loops from $\dot{\mathcal{L}}^\alpha_{A,B}$ can be sampled in steps: first sample the number and the starting and ending locations of all the excursions of all the loops in $\dot{\mathcal{L}}^\alpha_{A,B}$ by sampling independently $E^{\alpha,j}_{A,B}$, $j \geq 1$, and then complete all the excursions by sampling independent random walk bridges. The result is immediate from a suitable representation of $\dot\mu_{A,B}$, where in the last step we use the Markov property of the random walk and set $\Phi_{j+1} = \Phi_1$.

Remark 3.5. Proposition 3.4 (applied to suitable sets $A$ and $B$) can be used to adapt to $\mathcal{V}^\alpha$ the standard Burton-Keane argument [4] for the uniqueness of the infinite percolation cluster, even though one of its main requirements, the positive finite energy property, is not satisfied by $\mathcal{V}^\alpha$. (The positive finite energy property states that the conditional probability that a given vertex is vacant, given the configuration everywhere else, is almost surely positive.)

We end this section with a large deviation bound on the total number of excursions from $A$ to $B$ in all the loops from $\mathcal{L}^\alpha_{A,B}$.

Lemma 3.6. Let $A, B$ be (disjoint) subsets of $\mathbb{Z}^d$ and let $Z^\alpha_{A,B}$ be the total number of excursions from $A$ to $B$ of all the loops from $\mathcal{L}^\alpha$. Then $Z^\alpha_{A,B}$ satisfies an exponential tail bound.

Proof. Let $Z^{\alpha,j}_{A,B}$ be the number of loops in $\dot{\mathcal{L}}^{\alpha,j}_{A,B}$. By Proposition 3.4(1), the $Z^{\alpha,j}_{A,B}$ are independent Poisson random variables, whose intensities are estimated using the strong Markov property at the times $\tau_{2i}$, $1 \leq i < j$, in $(*)$, and at the time $\tau_1$ in $(**)$. Furthermore, by Lemma 3.3, $Z^\alpha_{A,B} = \sum_{j \geq 1} j\, Z^{\alpha,j}_{A,B}$, and the result follows from the exponential Chebyshev inequality.
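The exponential Chebyshev step at the end of the proof can be illustrated numerically. The sketch below (in Python; the intensities are illustrative rather than those of the lemma, and all function names are ours) bounds the tail of a weighted sum of independent Poisson counts, mirroring the decomposition of $Z^\alpha_{A,B}$ into the independent Poisson variables $Z^{\alpha,j}_{A,B}$:

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's Poisson sampler; adequate for small intensities."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def chernoff_bound(intensities, k):
    """Exponential-Chebyshev bound for Z = sum_j j * Z_j, where the Z_j are
    independent Poisson with the given intensities (j = 1, 2, ...):
    P[Z >= k] <= e^{-k} E[e^Z] = exp(sum_j lam_j * (e^j - 1) - k)."""
    return math.exp(sum(lam * (math.e ** j - 1)
                        for j, lam in enumerate(intensities, start=1)) - k)

def sample_weighted_sum(intensities, rng):
    """One sample of Z: a 'loop' making j 'excursions' contributes j to Z."""
    return sum(j * sample_poisson(lam, rng)
               for j, lam in enumerate(intensities, start=1))
```

A Monte Carlo check with, e.g., intensities $(0.5, 0.1)$ and $k = 8$ shows the empirical tail probability well below the bound.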

Proof of Theorem 1.1
The proofs of (1.1) and (1.2) are very similar and we only provide here the proof of (1.1).
We begin with an outline of the proof. We decompose the loops from $\mathcal{L}^\alpha$ that intersect both $S_1$ and $S_1'$ into inner (from $S_1$ to $S_1'$) and outer (from $S_1'$ to $S_1$) excursions. By Proposition 3.4, given their starting and ending locations, the inner and outer excursions are independent random walk bridges. By the locality of $f_1$ and $f_2$ and the disjointness of $B(x_1, rL)$ and $B(x_2, L)$, the inner excursions contribute only to the value of $f_1$ and the outer ones only to the value of $f_2$. By Lemma 3.6, the total number of outer excursions exceeds $k$ with probability at most $e^{\alpha - k}$. For each outer excursion, its range in $B(x_2, L)$ is stochastically dominated by the range of a random walk loop soup with intensity $\frac{\delta}{k}$ on an event of probability at least $1 - \exp(-\frac{\delta}{k} s^{2(d-2)})$ (see Lemma 4.1). Since $f_2$ is monotone and depends only on the configuration in $B(x_2, L)$, the stochastic domination implies the desired inequality for the expectations. Optimization over $k$ (taking $k = \sqrt{\delta}\, s^{d-2}$) gives the desired error term.
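Schematically, the final optimization balances the two failure probabilities in the outline (our summary, with constants suppressed):

```latex
\underbrace{e^{\alpha - k}}_{\text{too many excursions}}
\;+\;
\underbrace{k \exp\!\Big(-\tfrac{\delta}{k}\, s^{2(d-2)}\Big)}_{\text{domination fails for some excursion}}
\quad\text{balances at}\quad
k \;=\; \tfrac{\delta}{k}\, s^{2(d-2)},
\qquad\text{i.e.}\qquad
k \;=\; \sqrt{\delta}\, s^{\,d-2},
```

giving an error term of order $e^{-c\sqrt{\delta}\, s^{d-2}}$, up to constants depending on $\alpha$.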
We proceed with the details of the proof. Without loss of generality, we may assume that $s \geq s_0 = s_0(d)$. Let $L \geq 1$ and take $2 < r \leq s/2$ sufficiently large (the ultimate choice of $r$ depends only on the dimension). Let $E$ be the multiset of starting and ending positions of all the excursions (i.e., of all the pairs from $E^{\alpha,j}_{S_1,S_1'}$, $j \geq 1$). By Proposition 3.4, conditioned on $E$, the excursions are distributed as independent random walk bridges started at $X_i$ and conditioned to hit $S_1$ at $Y_i$. Let $k \geq 1$ (to be specified later) and consider the event that the total number of excursions is at most $k$. By the locality of $f_1$ and $f_2$, $f_1$ only depends on the loops from $\mathcal{L}^\alpha$ that are contained in $B_1$ and on the excursions of the loops intersecting both $S_1$ and $S_1'$ that start and end on $S_1$, while $f_2$ can be evaluated using
• the loops from $\mathcal{L}^\alpha$ that do not intersect $S_1$, and
• an independent $k$-tuple of independent random walk bridges, with the $i$th bridge starting at $x_i$ and conditioned to hit $S_1$ at $y_i$.
By the monotonicity of $f_2$, it now suffices to analyse separately the influence of each bridge on the configuration in $B_2$. We will prove the following lemma, which easily gives the main result.
Lemma 4.1. For $x \in S_1'$ and $y \in S_1$, let $R_{x,y}$ be the range in $B_2$ of a random walk bridge started at $x$ and conditioned to hit $S_1$ at $y$, see Figure 2. For $\delta \in (0,1)$, let $R$ be the range in $B_2$ of the loops from the loop soup $\mathcal{L}^\delta$. Then for each $r \geq r_0 = r_0(d)$ there exists a coupling $(\widetilde{R}_{x,y}, \widetilde{R})$ of $R_{x,y}$ and $R$ such that $\widetilde{R}_{x,y} \subseteq \widetilde{R}$ with probability at least $1 - C\exp(-c\,\delta\, s^{2(d-2)})$, where $C = C(d,r)$ and $c = c(d,r)$.
We first complete the proof of the theorem using the lemma. By applying Lemma 4.1 with $\frac{\delta}{k}$ in place of $\delta$, it is immediate that each of the $k$ bridges can be dominated as above with an error of at most $C\exp(-c\frac{\delta}{k} s^{2(d-2)})$. We choose $k = \sqrt{\delta}\, s^{d-2}$, and it remains to show that, with this choice of $k$, the number of excursions is unlikely to exceed $k$. This follows from Lemma 3.6; indeed, the required bound on the intensity of excursions is provided by (2.4). This completes the proof of Theorem 1.1 subject to Lemma 4.1.

Proof of Lemma 4.1
Fix $x \in S_1'$ and $y \in S_1$. By (2.4), (2.5) and (2.6), the probability that a random walk bridge started at $x$ and conditioned to hit $S_1$ at $y$ visits $B_2$ is bounded by a quantity which is small if $s \geq s_0(d)$ (sufficiently large). In particular, if we denote by $\widehat{R}_{x,y}$ the range in $B_2$ of the Poisson point process $\eta$ of bridges with intensity $\lambda = 2P^{S_1}_{x,y}$, then $R_{x,y}$ is stochastically dominated by $\widehat{R}_{x,y}$, and it suffices to compare $\widehat{R}_{x,y}$ to $R$. Every bridge visits $B_2$ by means of excursions that start on $S_2$ and end on $S_2'$. Let $\eta_m$ be the restriction of $\eta$ to the bridges that make exactly $m$ excursions from $S_2$ to $S_2'$. By properties of Poisson point processes, the $\eta_m$ are independent Poisson point processes with $\eta = \sum_{m \geq 1} \eta_m$. Furthermore, each $\eta_m$ induces a Poisson point process $\sigma_m$ on $m$-tuples of excursions from $S_2$ to $S_2'$, see Figure 3. To describe its intensity measure, let $\mathcal{S}$ be the set of all finite nearest neighbor paths starting on $S_2$ and ending at their first entrance to $S_2'$. For $w_1, \dots, w_m \in \mathcal{S}$, let $\Gamma_m(w_1, \dots, w_m)$ be the probability that the excursions from $S_2$ to $S_2'$ made by a simple random walk started at $w_1(0)$ before it ever visits $S_1$ are precisely $w_1, \dots, w_m$. Note that $\Gamma_m$ is a measure on $\mathcal{S}^m$. The intensity measure of $\sigma_m$ is denoted by $\lambda_m$. We would like to compare $\lambda_m$ with the intensity measure of the Poisson point process of $m$-tuples of excursions from $S_2$ to $S_2'$ induced by the Poisson point process $\mathcal{L}^\delta_{S_2,S_2'}$ of loops that visit $S_2$ and $S_2'$. A slight problem is that these loop excursions are only defined up to a cyclic permutation. To avoid this issue, we use Lemma 3.3, which states that the Poisson point process $\mathcal{L}^\delta_{S_2,S_2'}$ can be constructed by (a) sampling a Poisson point process $\widetilde\eta$ of based loops with a suitable intensity measure and (b) "forgetting" the location of the root. In particular, the ranges in $B_2$ of the loops from $\mathcal{L}^\delta$ that visit both $S_2$ and $S_2'$ and of the loops from $\widetilde\eta$ have the same distribution. The excursions of the loops in $\widetilde\eta$ are naturally ordered.
Let $\widetilde\eta_m$ be the restriction of $\widetilde\eta$ to the loops that make exactly $m$ excursions; then the $\widetilde\eta_m$ are independent Poisson point processes and $\widetilde\eta = \sum_{m=1}^{\infty} \widetilde\eta_m$. Furthermore, $\widetilde\eta_m$ induces a Poisson point process $\widetilde\sigma_m$ on $m$-tuples of excursions (see Figure 3) with an intensity measure $\widetilde\lambda_m$. In particular, by Lemma 2.1, the comparison (4.4) holds, which implies that for these $m$'s, $\sigma_m$ is stochastically dominated by $\widetilde\sigma_m$. Finally, for the remaining $m$'s, using (4.2) and Lemma 2.1, the corresponding contribution is negligible. Thus, by choosing $r$ sufficiently large (depending only on the dimension), the claim follows, which completes the proof of the lemma.
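The passage from the comparison of intensity measures to stochastic domination is an instance of Poisson thinning; in general form (our phrasing):

```latex
\lambda_m \le \widetilde\lambda_m
\quad\Longrightarrow\quad
\exists\ \text{a coupling } (\sigma_m', \widetilde\sigma_m) \text{ with }
\sigma_m' \stackrel{d}{=} \sigma_m,\ \
\widetilde\sigma_m \sim \mathrm{PPP}(\widetilde\lambda_m),\ \
\sigma_m' \subseteq \widetilde\sigma_m \ \text{a.s.},
```

obtained by retaining each point $w$ of $\widetilde\sigma_m$ independently with probability $\frac{d\lambda_m}{d\widetilde\lambda_m}(w)$; in particular, the union of the ranges of the retained excursions is contained in that of the loop soup excursions.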

Remark 4.2.
[Some comments on the proof of Theorem 1.1] The following observations suggest that the error term in (1.1) and (1.2) cannot be improved with our method.
1. The estimate (4.1) may at first look rather crude: it seems better to consider the events $F_k = \{Z = k\}$ and decompose accordingly. However, using Lemma 4.1, one can check that this leads to no essential improvement.
2. In the comparison of the intensity measures $\lambda_m$ and $\widetilde\lambda_m$ in the proof of Lemma 4.1 we use the trivial bound $\Gamma_m \leq \widetilde\Gamma_m$, which allows us to conclude that $\lambda_m \leq \widetilde\lambda_m$ only for $m$ satisfying (4.4). By taking into account the information that the random walk bridge does not return to $S_1$ between the excursions $w_i$, one can improve this comparison. This gives no improvement over the trivial bound for $m$ of the order appearing in (4.4), although it does imply $\lambda_m \leq \widetilde\lambda_m$ for all large enough $m$.
Incidentally, using this better comparison of $\Gamma_m$ and $\widetilde\Gamma_m$, one obtains that $\lambda_m \leq \frac{c\, r^{2-d}}{\delta}\, \widetilde\lambda_m$ for every $m$. In particular, if $\delta \geq C r^{2-d}$, then the range of the random walk bridge in $B_2$ is stochastically dominated by the range of the loop soup $\mathcal{L}^\delta$ (with probability 1).
Remark 4.3. The arguments of the proof of Theorem 1.1 apply also to loop soups of random walks with general bounded jump distributions considered in [14] as well as to the Brownian loop soup defined in [16], leading to analogous decoupling inequalities for these models.

Proof of Theorem 1.3
The overall idea of the proof is similar to that of [10], where a result analogous to Theorem 1.3 is proven for the vacant set of random interlacements, although the implementations are quite different. As in [10], we partition the lattice $\mathbb{Z}^d$ into good and bad boxes. Each good box has a vacant "frame" (see Definition 5.1) and uniformly bounded cumulative occupation local times for $\mathcal{L}^\alpha$. In Proposition 5.4 and Corollary 5.5 we prove that the set of good boxes typically contains an infinite connected component whose complement consists only of small holes. When this is the case, any vacant path of big diameter must pass through a large number of good boxes. However, each time the path enters a good box, there is a uniformly positive probability that it locally connects to the frame of the good box, as proved in Lemma 5.6, which makes the existence of long isolated vacant paths unlikely.
Let us indicate the key differences between our approach and that of [10]. The existence of a ubiquitous infinite cluster of good boxes is proven in [10] using, in an essential way, a strong version of the decoupling inequalities for random interlacements (see [10, Theorem 7.2]). Because of an explicit and very specific dependence of the error term on the intensity of the random interlacements and the relevant scales (see [10, (7.5)]), these decoupling inequalities imply a qualitative bound on the probabilities of cascading events under the assumption that a box of size $L_0$ is unlikely to be bad for the random interlacements at a suitably adjusted intensity. There are several issues in adapting this approach to our setting. The decoupling inequalities (1.1) and (1.2) are weaker than the ones in [10, Theorem 7.2] (e.g., the latter imply the decoupling inequalities P3, which are not available for the loop soup, cf. Remark 6.2). Still, they do give an analogue of [10, Lemma 2.2] under the stronger assumption that large boxes are unlikely to be bad for the loop soup with a fixed intensity (see Theorem 6.4). This assumption cannot be true for the loop soup, though, predominantly because of the positive density of small loops. Instead of trying to resolve these issues (which, even if successful, would only give (1.3) and (1.4) with probability $\geq 1 - C\exp(-(\log n)^{1+\varepsilon})$, since the scales $L_n$ in Theorem 6.4 grow faster than exponentially), we develop an approach that does not rely on decoupling inequalities. We use an idea from [38], adapted to our setting, to bound the probability that a suitably spread out family of boxes consists only of bad ones (see Lemma 5.9) directly, using the decomposition of loops into excursions (Proposition 3.4) and the large deviation bound on the number of excursions (Lemma 3.6).
This approach may be of independent interest, since it could potentially apply to models for which decoupling inequalities are not available or have not been developed yet, such as, e.g., voter model percolation [27].

Fix an integer $R \geq 1$, let $L_0 = 2R + 1$ and consider the lattice $\mathbb{G}_0 = L_0 \mathbb{Z}^d$. Note that the boxes $Q(x')$, $x' \in \mathbb{G}_0$, partition $\mathbb{Z}^d$. Any function $n : \mathbb{Z}^d \to \mathbb{N}_0 = \{0, 1, \dots\}$ gives a decomposition of $\mathbb{G}_0$ into good and bad vertices. The choice of $\alpha_1 > 0$ in Theorem 1.3 is made in the following proposition, which is proven in Section 5.1. Recall that $N^\alpha$ denotes the field of local times of the loop soup $\mathcal{L}^\alpha$.
Proposition 5.4. For any $d \geq 3$, there exist $R \geq 1$, $\alpha_1 > 0$, $c > 0$ and $C < \infty$ such that for all $\alpha \leq \alpha_1$ and $N \geq 1$, the bound (5.1) holds.

The lower bound on the conditional probability of the local connectedness to $\widetilde{G}^\alpha_N(x')$ after each step of the exploration follows from Lemma 5.6 below. For $R \geq 1$ and $\alpha > 0$, denote by $\Sigma_G = \sigma(\mathbb{1}_{x' \in G(N^\alpha)},\, x' \in \mathbb{G}_0)$ the $\sigma$-algebra generated by all the good boxes for $N^\alpha$. (Note that $G^\alpha_N$ is $\Sigma_G$-measurable.) For any $x' \in \mathbb{G}_0$, we also consider the $\sigma$-algebra generated by the vacant set $\mathcal{V}^\alpha$ (equivalently, by the range) of the loop soup $\mathcal{L}^\alpha$ outside of the box $Q(x')$.
Lemma 5.6. Let $d \geq 3$, $R \geq 1$ and $\bar\alpha > 0$. There exists $\gamma = \gamma(d, R, \bar\alpha) > 0$ such that for all $\alpha \in (0, \bar\alpha]$, $x' \in \mathbb{G}_0$ and $y \in \partial_{\mathrm{ext}} Q(x')$, the bound (5.2) holds.

We postpone the proof of Lemma 5.6 to Section 5.2 and now complete the proof of Theorem 1.3 using the lemma. Fix $x \in B(0, L_0 \tfrac{2}{3} N)$. We now define the algorithm for the exploration of the connected component of $x$ in $\mathcal{V}^\alpha$, which progressively reveals $\mathcal{V}^\alpha$ in the boxes $Q(x')$, $x' \in \mathbb{G}_0$. Assume that the vertices of $\mathbb{Z}^d$ are ordered lexicographically.
• Let $x_0 \in \mathbb{G}_0$ be the unique vertex such that $x \in Q(x_0)$, and define $A_0 = Q(x_0)$.
(Necessarily, $x_0 \in B_{\mathbb{G}_0}(0, \tfrac{2}{3} N)$.)
• Let $k \geq 0$ and assume that $x_k$ and $A_k$ are determined. We stop the algorithm if $x$ is not connected to $\partial_{\mathrm{int}} A_k$ in $\mathcal{V}^\alpha$, and define $\tau = k$ and $y_l = y_k$, $x_l = x_k$, $A_l = A_k$ for all $l > k$.
Otherwise, we define
- $y_{k+1} \in \partial_{\mathrm{int}} A_k$ as the smallest vertex such that $x$ is connected to $y_{k+1}$ in $\mathcal{V}^\alpha \cap A_k$,
- $x_{k+1} \in \mathbb{G}_0 \setminus \{x_0, \dots, x_k\}$ as the smallest vertex such that $y_{k+1} \in \partial_{\mathrm{ext}} Q(x_{k+1})$.
(See Figure 4 for an illustration.) The algorithm always stops in finite time (which we denote by $\tau$), and if $x$ is connected to $\mathbb{Z}^d \setminus B(x, L_0 \tfrac{1}{25} N)$ in $\mathcal{V}^\alpha$, then the algorithm stops exactly on "reaching" $S_{\mathbb{G}_0}(x_0, \tfrac{1}{30} N)$.
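To make the exploration procedure concrete, here is a minimal illustrative sketch in Python: two dimensions, single vertices playing the role of the boxes $Q(x')$, a generic vacancy field in place of $\mathcal{V}^\alpha$, and lexicographic tie-breaking as above. All names are ours and the simplifications are substantial; this is only the skeleton of the algorithm, not the one used in the proof.

```python
def explore_cluster(vacant, start, radius):
    """Greedy exploration of the vacant cluster of `start`: repeatedly attach,
    in lexicographic order, a vacant boundary vertex of the region revealed so
    far (hence connected to `start` inside that region), stopping when no such
    vertex exists within sup-distance `radius` of `start`.
    `vacant` is a dict mapping vertices (i, j) to booleans."""
    if not vacant.get(start, False):
        return []
    explored = {start}   # the region A_k revealed so far (all vertices vacant)
    steps = [start]      # the successive attachment points y_1, y_2, ...
    while True:
        # lexicographically sorted exterior boundary of the explored region
        boundary = sorted(
            (i + di, j + dj)
            for (i, j) in explored
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (i + di, j + dj) not in explored
        )
        nxt = next(
            (v for v in boundary
             if vacant.get(v, False)
             and max(abs(v[0] - start[0]), abs(v[1] - start[1])) <= radius),
            None,
        )
        if nxt is None:
            return steps
        explored.add(nxt)
        steps.append(nxt)
```

On a fully vacant $3 \times 3$ box, the exploration started at a corner reveals all nine vertices; on a vacant horizontal segment it walks along the segment until the radius constraint stops it.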

Consider the sigma-algebras
Note that the random elements $y_i$, $x_i$, $A_i$, for $1 \leq i \leq k$, are $\mathcal{A}_{k-1}$-measurable, since by revealing the shape of $A_{k-1}$ and the state of $\mathcal{V}^\alpha$ in $A_{k-1}$, one can uniquely reconstruct steps $1, \dots, k-1$ of the algorithm and also uniquely determine $y_k$, $x_k$ and $A_k$. The same reasoning gives that the event $\{\tau \geq k\}$ belongs to $\mathcal{A}_{k-1}$.
Consider the events … with γ as in Lemma 5.6 (for ᾱ = α_1). Indeed, to see that (5.6) holds, fix k ≥ 1 and, for any admissible G, A and V, define the event F(G, A, V) … Note that if F(G, A, V) occurs, then x_k = x′ and y_k = y for some x′ and y which are uniquely determined by A and V. Thus, … which proves (5.6).
We can now complete the proof of (5.4). Let … Note that {τ_i = k} ∈ Z_{k−1} for all i and k. Let M = √N − 1. Then the probability on the left-hand side of (5.4) is bounded from above by … An application of (5.3) completes the proof of (5.4), and thus of (1.4), subject to Proposition 5.4 and Lemma 5.6.

Proof of Proposition 5.4
The proof uses a multiscale analysis and an embedding of dyadic trees. Its main idea is similar to that of the proof of [38, Theorem 3.2] about random interlacements, although we use embeddings of dyadic trees as in [35, 25] instead of the skeletons of [38]. After defining the embeddings and proving some of their relevant properties (detailed proofs of various results about such embeddings can be found in [25]), we prove in Lemma 5.9 that an embedding into the set B(N_α) of bad vertices is very unlikely. Since the connection event in (5.1) implies that such an embedding must exist (within a not too large class of embeddings), the connection event must be very unlikely too.
We proceed with the details. Recall that L_0 = 2R + 1. Let l ≥ 1 be an integer and consider the sequence of geometrically growing scales L_n = L_0 l^n, n ≥ 0, and the respective lattices G_n = L_n Z^d. For n ≥ 0, we denote by T_n = ∪_{k=0}^n {1,2}^k the dyadic tree of depth n and write T^{(k)} = {1,2}^k for the collection of elements of the tree at depth k. Let Λ_n be the set of embeddings T : T_n → Z^d such that
• for all 1 ≤ k ≤ n and m ∈ T^{(k)}, T(m) ∈ G_{n−k},
• for all 0 ≤ k ≤ n − 1, m ∈ T^{(k)} and i ∈ {1, 2}, …

Lemma 5.7. For all n ≥ 1, L_0 ≥ 1, l ≥ 1, … 2. for all T ∈ Λ_n, k ≥ 0 and m ∈ T^{(n)}, …

Proof of Lemma 5.7. Statement 1 follows easily by induction on n.
For Statement 2, it suffices to consider 0 ≤ k ≤ n − 1 and l ≥ 6. Take a ∈ T^{(n−k−1)} and b′, b′′ ∈ {1, 2}^k. Then, for the elements a1b′, a2b′′ ∈ T^{(n)}, … Thus, any m, m′ ∈ T^{(n)} with |T(m′) − T(m)| ≤ ((l−5)/(l−1)) L_{k+1} can only differ in the last k digits, i.e., there exist a ∈ T^{(n−k)} and b, b′ ∈ {1, 2}^k such that m = ab and m′ = ab′. Since for any m there are at most 2^k such m′, the result follows.
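The bookkeeping behind the scales L_n = L_0 l^n and the dyadic tree T_n can be sketched as follows (the specific values of L_0, l and n are placeholders):

```python
from itertools import product

L0, ell, n = 5, 10, 3                      # L0 = 2R + 1, growth factor l, depth n
L = [L0 * ell ** k for k in range(n + 1)]  # geometric scales L_k = L0 * l^k

# the dyadic tree T_n = union of {1,2}^k, 0 <= k <= n, as tuples of digits
T = [m for k in range(n + 1) for m in product((1, 2), repeat=k)]
leaves = [m for m in T if len(m) == n]     # T^{(n)}: the elements at depth n

assert len(T) == 2 ** (n + 1) - 1          # |T_n| = 2^{n+1} - 1
assert len(leaves) == 2 ** n               # 2^n leaves
# an embedding T: T_n -> Z^d sends depth-k elements into G_{n-k} = L_{n-k} Z^d,
# so the root lives on the coarsest lattice and the leaves on G_0 = L_0 Z^d
assert all(L[k] == ell * L[k - 1] for k in range(1, n + 1))
```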
By Lemma 5.7, if l ≥ 10, then the sets D_x in the above union are pairwise disjoint.
The next lemma is the main ingredient for the proof of Proposition 5.4.
Recall that for two disjoint sets A, B, Z^α_{A,B} denotes the number of excursions from A to B of all loops from L_α. Then, … (5.8) By the choice of l, Lemma 5.8 and Lemma 3.6, … where in the second inequality we used α ≤ 1 and M = K + 2.
To bound the second term in (5.8), recall that by the choice of l, the sets D_x, x ∈ T(T^{(n)}), are pairwise disjoint. Thus, … In particular, if Z^α_{C_T,D_T} ≤ M 2^n, then there exists a subset S of T(T^{(n)}) of cardinality 2^{n−1} such that Z^α_{C_x,D_x} ≤ 2M for all x ∈ S. As the number of possible subsets of T(T^{(n)}) of cardinality 2^{n−1} is at most 2^{2^n}, we obtain that … where the supremum is over all subsets S of T(T^{(n)}) of cardinality 2^{n−1}.
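The subset-extraction step above is a deterministic pigeonhole argument and is easy to verify directly (function and variable names below are ours):

```python
import random
from math import comb

def small_half(values, M):
    """If sum(values) <= M * len(values), at least half of the values are <= 2M:
    otherwise more than len/2 values would each exceed 2M, and the sum would
    exceed M * len (Markov's inequality / pigeonhole)."""
    return [i for i, v in enumerate(values) if v <= 2 * M]

random.seed(1)
n, M = 6, 3
vals = [random.random() for _ in range(2 ** n)]
scale = (M * 2 ** n) / sum(vals)            # normalize: total is exactly M * 2^n
vals = [v * scale for v in vals]

S = small_half(vals, M)
assert len(S) >= 2 ** (n - 1)               # a subset S of cardinality >= 2^{n-1}
# and there are at most 2^{2^n} candidate subsets of that cardinality
assert comb(2 ** n, 2 ** (n - 1)) <= 2 ** (2 ** n)
```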
The event that x is R-bad only depends on the restriction of N_α to Q(x). Thus, if we denote by N^α_x the total local time of all loops from L_α that intersect Q(x) but not D_x, then for all z ∈ Q(x), N_α(z) is the sum of N^α_x(z) and the total number of visits to z by the excursions of L_α from C_x to D_x. Note that
• the fields N^α_x, x ∈ S, are independent,
• the excursions of L_α from C_x to D_x, conditioned on their starting and ending locations, are distributed as independent random walk bridges (see Proposition 3.4),
• the event that x is R-bad for a field n : Z^d → N is increasing in n.
Thus, if we denote by N the total local time of 2M random walk excursions from C_0 to D_0, then … where the maximum is over all 2M-tuples of pairs (y_i, z_i) ∈ C_0 × D_0, the starting and ending locations of the excursions from C_0 to D_0. It remains to prove that, for a suitable choice of α and R, … Indeed, if (5.10) and (5.11) hold, then the second summand in (5.8) is bounded from above by … and, combined with (5.9), this gives the result.

We begin with (5.11). Let (y_i, z_i)_{i=1}^{2M} be the 2M-tuple for which the maximum is attained. By the definition of an R-bad vertex, the probability in (5.11) is bounded from above by … Indeed, by (5.13), (2.2), (2.1) and the Harnack principle, the first sum is bounded from above by CM/log R. By the Markov inequality, (2.1) and the Harnack principle, the second sum is bounded from above by CM^2 R^{2−d}. Thus, if R ≥ R_0 = R_0(K), then (5.11) holds. It remains to show that (5.10) holds for α ≤ α_0 = α_0(K, R), but this is immediate, since by the properties of L_α, the probability in (5.10) is bounded from above by CR^d α.
Proof of Proposition 5.4. First note that it suffices to prove that, for some R ≥ 1, l ≥ 1 and α > 0, … for all n ≥ 1. Indeed, let N ≥ 1 and choose n so that 2L_n ≤ L_0 N ≤ 2L_{n+1}. Then the event in (5.1) implies the event in (5.14), and N ≤ 2L_{n+1}/L_0 = 2l^{n+1} ≤ 2^{Cn} for some C = C(l).
Claim (5.14) follows easily from Lemma 5.9 and the observation that the event in (5.14) implies the existence of an embedding T ∈ Λ_n such that the images of all the leaves T^{(n)} are R-bad for N_α (see, e.g., [35, (3.24)] or [25, Lemma 3.3]). Namely, … Let l ≥ C_{5.8} and choose K = K(l) so that … Finally, choose R = R_0(K) and α = α_0(R, K) > 0 as in Lemma 5.9. Then, by Lemma 5.9, … and (5.14) follows for this choice of l, R and α.
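The choice of the depth n from the scales L_n = L_0 l^n in the proof above can be checked numerically; the specific values of L_0 and l are assumptions for illustration:

```python
L0, ell = 5, 10                 # L0 = 2R + 1 and the growth factor l (toy values)

def choose_depth(N):
    """Smallest n >= 0 with L0 * N <= 2 * L_{n+1}, where L_k = L0 * ell**k."""
    n = 0
    while L0 * N > 2 * L0 * ell ** (n + 1):
        n += 1
    return n

for N in (1, 7, 123, 10 ** 6):
    n = choose_depth(N)
    assert N <= 2 * ell ** (n + 1)          # N <= 2 l^{n+1} <= 2^{Cn} for C = C(l)
    if n > 0:
        assert 2 * L0 * ell ** n <= L0 * N  # minimality: 2 L_n <= L0 N
```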

Proof of Lemma 5.6
We begin with an outline of the proof. For x′ ∈ G_0, we decompose all the loops from the loop soup L_α that visit A = ∂_int Q(x′) and B = ∂_ext Q(x′) into inner (from A to B) and outer (from B to A) excursions. By Proposition 3.4, given their starting and ending locations, the inner and outer excursions are independent random walk bridges. In view of this independence, the conditional probability in (5.5) with respect to the σ-algebras generated by all the good boxes and by the vacant set in the complement of Q(x′) can be replaced by the conditional probability with respect to only the starting and ending locations of the inner excursions and the event that x′ is good, cf. (5.15) and (5.16). Now, by Definition 5.2(2) of a good box (see also Remark 5.3), the total number of inner excursions is bounded from above by R^{d−1}. Since all of them are distributed as independent random walk bridges, one can prescribe their values as simple paths inside Q(x′) in such a way that a given point y ∈ ∂_ext Q(x′) is connected to (x′) by a nearest neighbor path in Q(x′) which is avoided by all the bridges, see (5.21) and below. Since the number of bridges is bounded and each is realized as a simple path in Q(x′), the price of such a local surgery is uniformly positive. Furthermore, with positive probability there are no loops of L_α entirely contained in Q(x′); thus the constructed nearest neighbor path from y to (x′) in Q(x′) is in fact a path in the vacant set V^α. Finally, such a surgery keeps x′ good.
We proceed with the details of the proof. Let x′ ∈ G_0 and y ∈ ∂_ext Q(x′). Define … and recall from (3.2) the definition of the Poisson point processes E^{α,j}_{A,B}, →E^{α,j}_{A,B} and ←E^{α,j}_{A,B}, j ≥ 1, of pairs of loop entrance points in A and B, inner bridges and outer bridges, respectively.

Define the σ-algebras
and the σ-algebra F_ext generated by the loops from L_α that do not intersect Q(x′).
Let x̄ be the unique neighbor of y in ∂_int Q(x′) and consider the event D that x̄ is connected to (x′) in V^α ∩ Q(x′). Then, … Finally, let E(x̄, y) be the event that none of the loop excursions from A to B starts at x̄ and none of them ends at y; namely, for all the pairs of points in E^{α,j}_{A,B}, j ≥ 1, the first point is not x̄ and the second is not y. Note that {y ∈ V^α} ⊆ E(x̄, y).
To prove (5.5), it suffices to show that …, which gives (5.5).
By the definition of a Poisson point process, the σ-algebras F_ext and σ(E, →E, ←E) are independent. Furthermore, by Proposition 3.4, the σ-algebras →E and ←E are conditionally independent given E. Thus, … Indeed, by Dynkin's π-λ lemma, it suffices to show that for any admissible e, ←e and F ∈ F_ext, … which is immediate, since by the (conditional) independence of the σ-algebras, … for all compatible e and ←e.
Thus, by (5.15) and (5.16), it suffices to prove that …; in other words, that for all e such that {{E^{α,j}_{A,B}}_{j≥1} = e} ⊆ E(x̄, y), … In particular, we may and will assume from now on that …, since otherwise the claim is trivial. In fact, we will show a stronger statement. Let F_{int,∅} be the event that the set of loops from L_α contained in Q(x′) is empty; then … for all e as in (5.17) satisfying additionally {{E^{α,j}_{A,B}}_{j≥1} = e} ∩ {x′ ∈ G(N_α)} ≠ ∅. (This basically means that none of the loop excursions can start from (x′) or end in a neighbor of (x′), and that the total number of excursions does not exceed (1/2)R^{d−1}, cf. Definition 5.2.)

Let →N^α be the field of cumulative occupation local times in Q(x′) of all the excursions from …, so that →N^α(z) is the total number of times z is visited by these excursions, and note that the two events on the right are independent. Since the number of loops from L_α contained in Q(x′) is a Poisson random variable with parameter αc for some c = c(R), P[F_{int,∅}] = e^{−αc} ≥ e^{−ᾱc} > 0, and to finish the proof of (5.18) it suffices to show that for all e as before and some …, or, equivalently, that …

Let X_i … be a family of independent random walk bridges distributed according to …, and let →N be the field of cumulative occupation local times in Q(x′) of all the bridges X_i; that is, for z ∈ Q(x′), →N(z) is the total number of times z is visited by the bridges X_i. Also, let … We prove that there exist N simple (deterministic) paths ρ_i from x_i to y_i such that … Once the existence of such paths ρ_i is shown, (5.20) is immediate.

Recall that we assume x̄ ∉ (x′). Thus, precisely one of the coordinates of the vector x̄ − x′, say coordinate i, equals −R or R, and the other coordinates take values between −R + 3 and R − 3. Let j be the first coordinate index not equal to i, and denote by e_s the s-th coordinate unit vector. We define the set Π in Q(x′) as … if the i-th coordinate of x̄ − x′ equals −R, and as … if the i-th coordinate of x̄ − x′ equals R, see Figure 5. Note that for R ≥ 4, …

Figure 5: On the left, the "tunnel" Π, which connects x̄ to (x′) inside of Q(x′). On the right, a simple path ρ̃_i joining x′_i to a neighbor of y′_i inside the connected set Q̃ = Q(x′) \ (∂_int Q(x′) ∪ (x′) ∪ Π). The simple path ρ_i, defined as (x_i, ρ̃_i, y′_i, y_i), visits the boundary ∂_int Q(x′) exactly twice, namely at x_i and y′_i.
Coming back to the random walk bridges, for each x_i and y_i, let x′_i be the unique neighbor of x_i in Q̃, and let y′_i be the unique neighbor of y_i in Q(x′) (so that y′_i ∈ ∂_int Q(x′)). Let ρ̃_i be an arbitrary simple path in Q̃ from x′_i to a neighbor of y′_i, see Figure 5. We define ρ_i as the path (x_i, ρ̃_i, y′_i, y_i). Then each ρ_i is a simple path from x_i to y_i that avoids Π, visits ∂_int Q(x′) exactly twice and stops on entering B (at y_i). Thus,
• …, and
• the total number of visits of all the ρ_i to ∂_int Q(x′) is not bigger than R^{d−1}.
In other words, the collection of paths ρ_i satisfies the desired properties (5.21). This completes the proof of (5.20), and hence of Lemma 5.6.
6 General approach to correlated percolation models

Let F be the σ-algebra on Ω generated by the coordinate maps Ψ_x, x ∈ Z^d, and let P^u, u ∈ (a, b), be a family of probability measures on (Ω, F), for some (fixed) 0 < a < b < ∞. Under general assumptions on the family {P^u}_{u∈(a,b)} introduced in [9], it has been proven that for each u ∈ (a, b), the random set S contains a unique infinite connected component S_∞ which on large scales "looks like Z^d": for instance, for P^u-almost every ω ∈ Ω, balls in S_∞ have an asymptotic deterministic shape [9], the simple random walk on S_∞ converges to a Brownian motion with a deterministic positive diffusion constant [24], and its transition probabilities satisfy quenched Gaussian heat kernel bounds and a local CLT [30]. These assumptions on {P^u}_{u∈(a,b)} are the following.

P1 (Ergodicity) For each u ∈ (a, b), every lattice shift is measure preserving and ergodic on (Ω, F, P^u).
P2 (Monotonicity) For any a < u < u′ < b and increasing event G ∈ F, …

P3 (Decoupling) There exist R_P, L_P < ∞ and ε_P, χ_P > 0 such that for any integers …, if A_1, A_2 ∈ σ(Ψ_y, y ∈ B(x_i, 10L)) are increasing events and B_1, B_2 ∈ σ(Ψ_y, y ∈ B(x_i, 10L)) are decreasing, then …

While properties P1 and S1 are rather natural and have been extensively used in the analysis of supercritical percolation models, conditions P2, P3 and S2 represent the novelty of this framework and serve as a substitute for independence. (In fact, P2 easily follows from P3 and is stated separately only for convenience.) They provide a connection between the measures P^u for different values of the parameter and serve only to prove the likelihood of certain patterns in S_∞, cf. [30, Remark 1.9(1)]. More precisely, if an increasing, resp. decreasing, (seed) event is unlikely with respect to the measure P^{u+δ}, resp. P^{u−δ}, then by applying P3 recursively one concludes that a family of 2^n translates of the event, sufficiently spread out on Z^d in a certain hierarchical manner (cascading events), occurs with probability ≤ 2^{−2^n} with respect to the measure P^u, cf. [9, Theorem 4.1]. Then one uses S2 to show that the probabilities of suitable seed events (cf. [9, Section 5]) with respect to the measures P^{u+δ}, resp. P^{u−δ}, and P^u are close for small enough δ, cf. [9, Lemmas 5.2 and 5.4]. In other words, one starts with a suitable increasing, resp. decreasing, seed event unlikely with respect to P^u, concludes that it is also unlikely with respect to P^{u+δ}, resp. P^{u−δ}, for small δ > 0, and obtains that sufficiently spread out translates of the seed event are unlikely with respect to P^u, but now with an explicit bound on the probability. All the other arguments in [9], as well as in [24, 30], do not require comparison of probability laws with different parameters and go through for each fixed u if P^u satisfies P1 and S1.
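The quantitative heart of the scheme above, a recursion across scales producing the 2^{−2^n} bound, can be illustrated with a toy recursion; the quadratic map and the constant K below are illustrative assumptions, not the precise constants of [9, Theorem 4.1]:

```python
def cascade_bound(p0, K, n):
    """Iterate p_{k+1} = K * p_k ** 2: at each scale the cascading event forces
    two essentially independent copies at the previous scale, and K counts the
    possible placements of the pair (an illustrative stand-in for the constants)."""
    p = p0
    for _ in range(n):
        p = K * p * p
    return p

# if K * p0 <= 1/2, then K * p_n = (K * p0)^(2^n), so p_n <= 2^(-2^n)
K = 100.0
p0 = 1.0 / (2.0 * K)
for n in range(1, 8):
    assert cascade_bound(p0, K, n) <= 2.0 ** -(2 ** n)
```

The point is that once the seed probability is below 1/(2K), the doubly exponential decay is automatic, regardless of the exact value of K.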
In this section, we prove in Theorem 6.4 that the result of [9, Theorem 4.1] holds for families of probability measures P^u that satisfy condition D, which is weaker than P3.
Since P3 is only used in [9, 24, 30] to derive [9, Theorem 4.1], all the results about geometric properties of S_∞ proved in [9, 24, 30] hold for families of probability measures P^u that satisfy P1, P2, D, S1, S2, see Corollary 6.5. This weakening is crucial in the study of the vacant set of the random walk loop soup, since the latter satisfies D but not P3 (see Remarks 6.1(4) and 6.2).
3. In applications, one uses D to prove certain behavior of S_∞ under P^u for a fixed u (see the discussion before the definition of D); thus one only needs D for parameters in a vicinity of u. In other words, one can assume that b − a < 1. If so, the inequalities (6.1) and (6.2) only get weaker by enlarging β or diminishing γ. Thus, the reader should think of γ as being small and β as being large. Incidentally, D is satisfied by random interlacements and the level sets of the Gaussian free field with γ = d − 2 and β = 2, see, e.g., [23, 22].
4. By Theorem 1.1, condition D is satisfied by the range of the loop soup L_α with γ = d − 2 and β = 1/2 (and any ζ > 0).

5. The key differences between D and P3 are that (a) in models with polynomially decaying correlations (such as random interlacements, the Gaussian free field and the random walk loop soup), condition D holds automatically if s ≤ (log L)^{1/γ}; this makes it more natural than P3, since it only postulates decorrelation of local events occurring in large boxes when the boxes are far apart in comparison to their size, and (b) the error term in P3 improves by passing to higher scales L, while the one in D is essentially invariant under rescaling of L.
Remark 6.2. The observations in Remark 6.1(5) are crucial for understanding why P3 is not a valid condition for loop soup percolation. Indeed, the range of the loop soup in disjoint boxes is correlated because of big loops that visit both boxes. If the boxes and the distance between them have the same scale (of order L, resp. RL with a large but fixed R), then the stochastic behavior of the macroscopic loops visiting these boxes is essentially independent of the scale L. (Note that the loop soup on (1/L)Z^d converges for large L to the Brownian loop soup, see, e.g., [31].) Using this observation, Chang proved in [6] that condition P3 does not hold for the events …

In general, events defined by the range of the loop soup are quite different from those defined by loop excursions, so the above argument does not disprove P3 for the loop soup. (Mind though that existing proofs of decoupling inequalities for random interlacements (and that of Theorem 1.1) use decompositions into excursions and do apply to events A_1, A_2; thus, if P3 were true for the loop soup, it would at least be hard to verify.) However, if d ≥ 5 and α > 0 is small enough, then for all large L, the event that there are at least 2N vertex disjoint paths in the range from ∂_int B(x, L) to ∂_int B(x, 2L) (later called crossings) is essentially equivalent to the event that there are at least N inner loop excursions from ∂_int B(x, L) to ∂_int B(x, 2L) and N outer excursions from ∂_int B(x, 2L) to ∂_int B(x, L). (The argument below works for any α < α′, where α′ is the critical threshold for the finiteness of the expected size of the cluster of the origin, see [7, (2)].) More precisely, using the same ideas as in [7, Section 5], one shows that with high probability as L → ∞, each crossing from ∂_int B(x, L) to ∂_int B(x, 2L) is built from a chain of at most C log L loops, of which exactly one loop has diameter of order L and all the others have diameter at most L^{1−2ε}.
This implies that every crossing uses an inner or an outer loop excursion between ∂_int B(x, L + L^{1−ε}) and ∂_int B(x, 2L − L^{1−ε}). In dimensions d ≥ 5, with high probability as L → ∞, each excursion is a chain of small sausages linked through cut points, which allows one to show that each such excursion contributes to exactly one crossing. Thus, if the number of crossings from ∂_int B(x, L) to ∂_int B(x, 2L) is at least 2N (a fixed large number), then with high probability as L → ∞, the number of inner and outer loop excursions between ∂_int B(x, L + L^{1−ε}) and ∂_int B(x, 2L − L^{1−ε}) is at least 2N. Vice versa, if the number of excursions between ∂_int B(x, L − L^{1−ε}) and ∂_int B(x, 2L + L^{1−ε}) is at least 2N, then with high probability as L → ∞, the excursions do not intersect each other in B(x, 2L) \ B(x, L), which implies that the number of crossings from ∂_int B(x, L) to ∂_int B(x, 2L) is at least 2N. Using this correspondence between crossings and loop excursions and the above argument of Chang, it is easy to conclude that P3 does not hold for the events {the number of crossings in the range from ∂_int B(x_1, L) to ∂_int B(x_1, 2L) is at least 2N} and {the number of crossings in the range from ∂_int B(x_2, L) to ∂_int B(x_2, 2L) is at least 2c_R N}. We leave the details of this argument to the reader.

Although the above reasoning only serves to disprove P3 for the loop soup L_α in dimensions d ≥ 5 and small α, it is (together with the result of Chang) good enough evidence that P3 is not a valid condition for studying the loop soup. Furthermore, in addition to Remark 6.1(5), the argument demonstrates that condition D is weaker than P3. Since by Theorem 6.4 condition P3 can be replaced by D in all its known applications, it is not that interesting to try to prove whether P3 fails in the remaining cases.

Remark 6.3. It is easy to see that measures P^u that satisfy D(a) or D(b) are stochastically monotone, i.e., satisfy P2.
The condition is particularly interesting for ζ ∈ (0, 1), since in this case e^{(log L)^ζ} = o(L^p) for any p > 0. Furthermore, if ζ > 1/2, then the error term in (6.1) and (6.2) can be replaced by C exp(−c min((u′ − u)^β s^γ, (u′ − u)^ρ e^{(log L)^ζ})) with an arbitrary ρ > 0 (see Remark 6.7).
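The claim e^{(log L)^ζ} = o(L^p) for ζ ∈ (0, 1) is elementary to check on the level of logarithms; a quick numerical sketch:

```python
zeta, p = 0.5, 0.01     # any zeta in (0, 1) and any power p > 0

def log_ratio(t):
    """t = log L; returns log( e^{(log L)^zeta} / L^p ) = t**zeta - p*t,
    which tends to -infinity as t -> infinity since zeta < 1."""
    return t ** zeta - p * t

assert log_ratio(10.0 ** 2) > 0              # at moderate scales the stretched
                                             # exponential is still larger...
assert log_ratio(10.0 ** 6) < 0              # ...but any power L^p wins eventually,
assert log_ratio(10.0 ** 8) < log_ratio(10.0 ** 6)   # and the gap diverges
```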

Cascading events
Let l_k, r_k, L_k, k ≥ 0, be sequences of positive integers such that

Consider renormalized lattices
For L_0 ≥ 1 and x ∈ G_0, any event G_x = G_{x,0} ∈ σ(Ψ_y : y ∈ x + [−L_0, 3L_0)^d) is called a seed event. (For simplicity, we omit from the notation the dependence of seed events on L_0.) The family of seed events (G_x : L_0 ≥ 1, x ∈ G_0) is denoted by G.
For k ≥ 1 and x ∈ G_k, we recursively define the events … The main result of this section is the following theorem, which states that the result of [9, Theorem 4.1] holds if the family of probability measures P^u satisfies assumption D.
Its proof is given in Section 6.2.
Furthermore, if the limit (as L_0 → ∞) in (6.6) exists (and equals 0), then there exists C′ = C′(u, u′, l_0, G) such that statements (a) and (b) hold for all L_0 ≥ C′.
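The recursive structure of the cascading events G_{x,k} can be sketched as follows; the `children` bookkeeping and the pairing rule below are hedged assumptions standing in for the displayed recursive definition above:

```python
def cascading(x, k, seed_event, children):
    """G_{x,k}: for k = 0 the seed event itself; for k >= 1 it occurs iff both
    G_{x1,k-1} and G_{x2,k-1} occur for some admissible pair (x1, x2) of
    level-(k-1) vertices near x (a hedged reading of the recursion; the exact
    admissible pairs are dictated by the displayed definition in the text)."""
    if k == 0:
        return seed_event(x)
    return any(cascading(x1, k - 1, seed_event, children) and
               cascading(x2, k - 1, seed_event, children)
               for (x1, x2) in children(x, k))

# toy 1-d bookkeeping: three candidate child locations on the lattice G_{k-1}
L0, ell = 5, 3
def children(x, k):
    pts = [x + i * L0 * ell ** (k - 1) for i in range(3)]
    return [(a, b) for a in pts for b in pts if a < b]

assert cascading(0, 2, lambda x: True, children) is True
assert cascading(0, 2, lambda x: False, children) is False
```

Unfolding the recursion to depth k forces 2^k translates of the seed event, which is exactly the family whose probability Theorem 6.4 bounds.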
We refer the reader to the introduction of [30] for the precise statements of these results and relevant discussion.
Remark 6.6. By Remark 6.1(4), the vacant set V^α of the random walk loop soup satisfies condition D for all α > 0. Theorem 1.3 proves that V^α satisfies condition S1 for small enough positive α. (It is believed that S1 holds for all α < α_*, see the text below Theorem 1.4.) Condition P1 holds for V^α by [7, Proposition 3.2]. Condition P2 follows from D, but also directly from the definition of V^α. Condition S2 holds for V^α for all α < α_* by standard arguments of van den Berg and Keane [2]: the probability that 0 is in an infinite cluster of V^α is left-continuous for all α, since it can be expressed as a decreasing limit of non-increasing continuous functions, and it is right-continuous for all α < α_* by the uniqueness of the infinite cluster of V^α; see also [37, Corollary 1.2], where the argument of van den Berg and Keane is adapted to the vacant set of random interlacements. (Although the infinite cluster of V^α is unique for all α < α_* by an adaptation of the classical Burton-Keane argument, see Remark 3.5, the uniqueness is immediate for α satisfying S1 by the Borel-Cantelli lemma.) Thus, the conclusions of Corollary 6.5 hold for V^α, which is the statement of Theorem 1.4.

Proof of Theorem 6.4
The proofs of (a) and (b) are essentially the same; we only prove (a).
Let G_x, x ∈ G_0, be increasing events and let the family P^u satisfy D(a). We assume further that for some u′ ∈ (a, b), lim_{L_0→∞} sup_{x∈G_0} P^{u′}[G_x] = 0 (6.8) and prove that for any u ∈ (a, u′), there exist C = C(u, u′) and C′ = C′(u, u′, l_0, G) such that (6.7) holds for all l_0 ≥ 1, r_0 ≥ C(1 + log l_0)^{2/γ} and L_0 ≥ C′. It will be seen from the proof how (a) follows if (6.8) is replaced by (6.6), see the note below (6.13).