Transience of conditioned walks on the plane: encounters and speed of escape

We consider the two-dimensional simple random walk conditioned on never hitting the origin, which is, formally speaking, Doob's h-transform of the simple random walk with respect to the potential kernel. We then study the behavior of the future minimum distance of the walk to the origin, and also prove that two independent copies of the conditioned walk, although both transient, meet infinitely many times a.s.


Introduction and main results
This paper is about the simple random walk (SRW) on the two-dimensional lattice conditioned on not hitting the origin; formally speaking, it is the Doob's h-transform of two-dimensional SRW with respect to the potential kernel. This random walk (denoted by $\widehat S$) is the main building block of two-dimensional random interlacements introduced in [6] and further studied in [4] and [14] (there is also a continuous version of this process [5]). The two-dimensional random interlacement process is related to (now classical) random interlacements in dimensions $d \geq 3$ (cf. [3,7,15]), which arise as the limit of the local picture produced by the SRW on the torus $\mathbb{Z}^d_n = \mathbb{Z}^d/n\mathbb{Z}^d$ up to time $un^d$, and can be described as a canonical Poisson soup of SRW trajectories. The same construction is impossible in dimension two, simply because the two-dimensional SRW is recurrent. However, if we condition the SRW on the two-dimensional torus $\mathbb{Z}^2_n$ to avoid a fixed site at an appropriate time scale, then the local picture around this site again has a limit. This limit is the two-dimensional random interlacement process, which is made of conditioned (on not hitting the origin) SRW trajectories, defined through the canonical construction of random interlacements on transient weighted graphs of [16]. A similar construction can also be done in one dimension [2].
The conditioned walk $\widehat S$ has since become an interesting object in its own right. Some of its (sometimes surprising) properties shown in [6,9] include, for any starting point $x_0 \neq 0$:
• any infinite set is recurrent for $\widehat S$;
• if $A$ is a "nice" large set (e.g., a large disk, square, or segment), then the proportion of sites of $A$ which are ever visited by $\widehat S$ is a random variable approximately distributed as $\mathrm{Unif}[0,1]$.
In this paper, our aim is to further examine the properties of the conditioned walk $\widehat S$; this work may be described as "further quantifying the transience of the conditioned walk." Namely, we first consider the process $M_n$, the future minimum distance of the walk to the origin, and show in Theorem 1.2 that it has quite an "erratic" behavior. Further, we prove in Theorem 1.3 that two independent copies of $\widehat S$, although both transient, nevertheless meet infinitely many times a.s. Now, let us introduce some notation. For $x, y \in \mathbb{Z}^2$ we say that $x$ and $y$ are nearest neighbors if they are at Euclidean distance 1, and denote this by $x \sim y$. The classical simple random walk on the plane is the random walk on $\mathbb{Z}^2$ that jumps to nearest neighbors with uniform probabilities. We denote it by $(S_n;\, n \geq 0)$. For $A \subseteq \mathbb{Z}^2$ we define the stopping times
$$\tau(A) := \inf\{n \geq 0 : S_n \in A\} \quad\text{and}\quad \tau^+(A) := \inf\{n \geq 1 : S_n \in A\}. \tag{1}$$
When $A = \{x\}$ is a singleton we write simply $\tau(x)$. When studying properties of $S_n$, a fundamental quantity for understanding its behavior is the potential kernel
$$a(x) := \sum_{n=0}^{\infty}\big(\mathbb{P}_0[S_n = 0] - \mathbb{P}_x[S_n = 0]\big).$$
It is known that $a(0) = 0$, $a(x) = 1$ for any of the 4 neighbors of the origin, and $a(x) > 0$ for every other $x \in \mathbb{Z}^2$. Moreover, the potential kernel is a harmonic function outside the origin:
$$a(x) = \frac{1}{4}\sum_{y :\, y \sim x} a(y), \quad\text{for } x \neq 0.$$
This property is the basis of many estimates regarding SRW on the plane, since it renders $a(S_{n \wedge \tau(0)})$ a martingale. An asymptotic expression for the potential kernel is given by
$$a(x) = \frac{2}{\pi}\ln|x| + \frac{2\gamma + 3\ln 2}{\pi} + O(|x|^{-2}),$$
where $\gamma$ is the Euler–Mascheroni constant.
These properties and more can be found in [12]. Using the potential kernel $a$, we define a random walk $(\widehat S_n;\, n \geq 0)$ on $\mathbb{Z}^2$ that is closely related to SRW. For presentation's sake, we show their transition probabilities side by side: for $x \sim y$,
$$\mathbb{P}[S_{n+1} = y \mid S_n = x] = \frac{1}{4}, \qquad \mathbb{P}[\widehat S_{n+1} = y \mid \widehat S_n = x] = \frac{a(y)}{4a(x)} \quad (x \neq 0).$$
The random walk $\widehat S$ is the Doob h-transform of SRW conditioned on not hitting the origin; note that, since $a(0) = 0$, the walk $\widehat S$ indeed never enters the origin. It is a key object in defining random interlacements on $\mathbb{Z}^2$, cf. [6].
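To make the comparison concrete, here is a small sketch (ours, not from the paper) that evaluates the conditioned-walk transition probabilities $\frac{a(y)}{4a(x)}$ numerically. As a simplifying assumption, $a$ is taken exactly at the origin and its four neighbors and from the asymptotic expansion above elsewhere; by the harmonicity of $a$, the probabilities at a site away from the origin sum (approximately, under this approximation) to one, and the walk is biased toward larger values of $a$, i.e., away from the origin:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def a(x):
    # Exact values at the origin and its four neighbors; the asymptotic
    # expansion (an approximation we adopt for illustration) elsewhere.
    if x == (0, 0):
        return 0.0
    if abs(x[0]) + abs(x[1]) == 1:
        return 1.0
    r = math.hypot(x[0], x[1])
    return (2 / math.pi) * math.log(r) + (2 * GAMMA + 3 * math.log(2)) / math.pi

def hat_step_probs(x):
    """Transition probabilities of the conditioned walk: p(x, y) = a(y) / (4 a(x))."""
    nbrs = [(x[0] + 1, x[1]), (x[0] - 1, x[1]), (x[0], x[1] + 1), (x[0], x[1] - 1)]
    return {y: a(y) / (4 * a(x)) for y in nbrs}

probs = hat_step_probs((10, 7))
total = sum(probs.values())
print(total)  # close to 1 by (approximate) harmonicity of a
```

The outward bias visible here (the neighbor farther from the origin gets the larger weight) is exactly what makes $\widehat S$ transient.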
In what follows, we consider all random walks to be built on a common probability space, with probability and expectation denoted by P and E, respectively. We write $\mathbb{P}_x$ to make the starting point of the walk explicit, and $\mathbb{P}_{x_1,x_2}$ when two walks are involved. If $r > 0$ and $x \in \mathbb{Z}^2$, we denote the discrete Euclidean disk by $B(x, r) = \{y \in \mathbb{Z}^2 : |y - x| \leq r\}$ and $B(r) := B(0, r)$. Also, define the internal boundary of a set $A \subseteq \mathbb{Z}^2$ as
$$\partial A := \{x \in A : x \sim y \text{ for some } y \in \mathbb{Z}^2 \setminus A\}.$$
Sometimes, instead of $\mathbb{P}_x$ with $x \in \mathbb{Z}^2$ we write $\mathbb{P}_r$ for some $r > 0$; this means that the expression works for any $x \in \partial B(r)$. Moreover, we denote by $\widehat\tau(A)$ and $\widehat\tau^+(A)$ the stopping times for the $\widehat S$-walk analogous to (1), and define $\widehat\tau(r) := \widehat\tau(\partial B(r))$ for $r > 0$.
The next proposition, taken from [9], presents some of the basic properties of the $\widehat S$-walk that were proved in [6]. We also add, in item (vi), an explicit expression for $\widehat G(x, y) = \mathbb{E}_x[\#\text{ visits to } y]$, the Green function of the $\widehat S$-walk.
Proposition 1.1. The following statements hold:
(i) The walk $\widehat S$ is reversible, with reversible measure $\mu_x := a^2(x)$.
(ii) In fact, it can be represented as a random walk on the two-dimensional lattice with conductances $\big(a(x)a(y) : x, y \in \mathbb{Z}^2,\ x \sim y\big)$.
(iii) Let $N$ be the set of the four neighbors of the origin. Then the process $1/a(\widehat S_{k \wedge \widehat\tau(N)})$ is a martingale.
(iv) The walk S is transient.
(v) For all $x \neq y$ with $x, y \neq 0$ we have
$$\mathbb{P}_x\big[\widehat\tau(y) < \infty\big] = \frac{a(x) + a(y) - a(x - y)}{2a(x)}.$$
(vi) The Green function of the $\widehat S$-walk is given by
$$\widehat G(x, y) = \frac{a(y)}{a(x)}\big(a(x) + a(y) - a(x - y)\big).$$
Item (vi) is a consequence of (v). A straightforward derivation of (vi) without the aid of (v), together with further potential-theoretic results about the $\widehat S$-walk, can be found in [13].
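The Green function can be probed numerically. The sketch below (an illustration with our own helper names, not the paper's construction) simulates the conditioned walk using the h-transform weights $a(y)$, with the potential kernel taken exactly at the origin and its neighbors and from the asymptotic expansion elsewhere, and compares a truncated-horizon Monte Carlo estimate of $\widehat G(x, x)$ at $x = (1,0)$ with the closed-form value $2a(x) = 2$; horizon and sample size are arbitrary choices:

```python
import math, random

random.seed(1)
GAMMA = 0.5772156649015329

def a(x):
    # exact at the origin and its neighbors, asymptotic expansion elsewhere
    if x == (0, 0):
        return 0.0
    if abs(x[0]) + abs(x[1]) == 1:
        return 1.0
    r = math.hypot(x[0], x[1])
    return (2 / math.pi) * math.log(r) + (2 * GAMMA + 3 * math.log(2)) / math.pi

def visits(start, target, horizon):
    """Visits of the conditioned walk to `target` up to `horizon` steps."""
    x, count = start, 0
    for _ in range(horizon):
        if x == target:
            count += 1
        nbrs = [(x[0] + 1, x[1]), (x[0] - 1, x[1]),
                (x[0], x[1] + 1), (x[0], x[1] - 1)]
        # the weight a(y) vanishes at the origin, so the walk never enters it
        x = random.choices(nbrs, weights=[a(y) for y in nbrs])[0]
    return count

x = (1, 0)
runs = 1200
est = sum(visits(x, x, 1200) for _ in range(runs)) / runs
formula = (a(x) / a(x)) * (a(x) + a(x) - a((0, 0)))  # = 2 a(x) = 2 here
print(est, formula)
```

The estimate typically lands somewhat below 2 because of the finite horizon; the point is only to corroborate the order of magnitude of the closed form.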
Reference [9] is devoted to proving properties related to how the $\widehat S$-walk intersects certain sets. The present work advances in the direction of understanding the trajectories of $\widehat S$, but focuses on almost sure properties related to its speed of transience and on the relationship between two independent copies of $\widehat S$.
Let us state the main results of the paper. By Proposition 1.1, we know that $\widehat S_n$ is transient. Our first result is a quantitative assessment of how fast this transience happens. Let us define
$$M_n := \min_{m \geq n} |\widehat S_m| \quad\text{and}\quad T_u := \sup\{n \geq 0 : |\widehat S_n| \leq u\}.$$
There is a simple relation between these quantities, which can be seen by noting that
$$\{M_n > u\} = \{T_u < n\} \quad\text{for all } u > 0 \text{ and } n \geq 0. \tag{4}$$
We develop some almost sure asymptotic properties of $M_n$. Each of them can be translated into a property of $T_u$ via (4).
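The relation between $M_n$ and $T_u$ — that $M_n > u$ exactly when $T_u < n$ — is purely pathwise and can be checked mechanically. The sketch below does so on a finite-horizon SRW trajectory (the horizon, starting point, and helper names are arbitrary illustrative choices):

```python
import random

random.seed(0)

# Finite-horizon sanity check of the duality between the future minimum
# M_n and the last entrance time T_u: M_n > u exactly when T_u < n.
# A plain SRW path is used; only the definitions matter here, not the law.
N = 500
path = [(3, 4)]
for _ in range(N):
    x, y = path[-1]
    path.append(random.choice([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]))

dist = [(x * x + y * y) ** 0.5 for x, y in path]

def M(n):  # future minimum distance, truncated at the horizon
    return min(dist[n:])

def T(u):  # last time within distance u (-1 if there is none)
    times = [n for n, d in enumerate(dist) if d <= u]
    return max(times) if times else -1

ok = all((M(n) > u) == (T(u) < n) for n in range(N) for u in range(30))
print(ok)
```

The check also makes the monotonicity of $n \mapsto M_n$ apparent: it is a minimum over a shrinking set of times.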
Our second result studies the evolution of two independent $\widehat S$-walks, denoted $\widehat S^1$ and $\widehat S^2$.
Theorem 1.3. Let $x_1, x_2 \in \mathbb{Z}^2 \setminus \{0\}$ be such that $x_1$ and $x_2$ have the same parity. Then, we have
$$\mathbb{P}_{x_1,x_2}\big[\widehat S^1_n = \widehat S^2_n \text{ for infinitely many } n\big] = 1.$$
Remark 1. Theorem 1.3 shows that almost surely two independent copies of the $\widehat S$-walk meet infinitely many times; this property is well known to hold for two independent SRWs. Following the proof of Theorem 1.3, one could show an analogous result when one of the walks is a SRW and the other is an independent $\widehat S$-walk.
The proof of each of the claims in Theorem 1.2 is obtained by some variation of the Borel-Cantelli lemma. We use both the first and the second Borel-Cantelli lemmas, as well as a generalization known as the Kochen-Stone theorem. These applications rely on being able to control the position of the conditioned walk after $n$ steps. Although the proof of (6) follows quite directly once a well-suited sequence of scales is chosen, the proof of (5) needs more involved arguments to control the dependence between the events of the sequence.
The proof of Theorem 1.3 uses the second moment method on a sequence of random variables that count the number of encounters during some well-separated time scales, together with a conditional Borel-Cantelli lemma. In order to exemplify the method in a more classical setting, we first prove the analogous result for SRW using this technique in Proposition 6.1.
This paper is organized as follows. We begin Section 2 by presenting some known properties of $\widehat S$-walks. In Sections 3 and 4 we develop new auxiliary tools for analyzing the position of an $\widehat S$-walk after $n$ steps: in Section 3 we provide tail estimates for the probability that the walk is too close to the origin or too far away from it, and in Section 4 we prove a local Central Limit Theorem for $\widehat S_n$ which is of independent interest. Theorems 1.2 and 1.3 are proved in Sections 5 and 6, respectively.
Notational remarks. Throughout the paper we use $c, C$ to denote generic positive constants that can change from line to line. Our asymptotic notation is as follows:
• both $f = o(g)$ and $f \ll g$ denote $\lim_{n\to\infty} \frac{f(n)}{g(n)} = 0$;
• $f = O(g)$ denotes $|f| \leq C|g|$ for some constant $C$;
• both $f = \Theta(g)$ and $f \asymp g$ denote $c|g| \leq |f| \leq C|g|$.
If such a constant or asymptotic expression depends on other parameters, this will be made explicit. We use $\#A$ to denote the cardinality of the set $A$.

Auxiliary results
In this section we collect some estimates for SRW and S-walks from other sources. In Sections 3 and 4 we develop other auxiliary results that we will need.
A first useful result formalizes how close the law of an $\widehat S$-walk is to that of a SRW conditioned on not hitting the origin. To begin with, notice that if we know the endpoints $x, y \neq 0$ of an $n$-step walk, then summing over all possible paths gives
$$\mathbb{P}_x[\widehat S_n = y] = \frac{a(y)}{a(x)}\,\mathbb{P}_x[S_n = y,\ \tau(0) > n].$$
A statement that works for more general events is Lemma 3.3 of [6]. Let $x \in B(L) \setminus \{0\}$ and denote by $\Gamma^{(x)}_L$ the set of all nearest-neighbor paths starting at $x$ and ending at $\partial B(L)$. For $A \subseteq \Gamma^{(x)}_L$, we abuse notation and write $S \in A$ to denote the event that $(S_0, \ldots, S_k) \in A$ for some $k$. We start with the following estimate.
Using Lemma 2.1, one can translate estimates for SRW into estimates for the $\widehat S$-walk. The following estimates for SRW are obtained using the Optional Stopping Theorem with the martingale $a(S_{n \wedge \tau(0)})$.
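As a numerical illustration of this optional-stopping identity (our own sketch, with the potential kernel approximated by the asymptotic expansion away from the origin, which introduces a small bias), one can check that $\mathbb{E}\big[a(S_{n \wedge \tau(0)})\big] = a(x_0)$ by direct simulation:

```python
import math, random

random.seed(2)
GAMMA = 0.5772156649015329

def a(x):
    if x == (0, 0):
        return 0.0
    if abs(x[0]) + abs(x[1]) == 1:
        return 1.0
    r = math.hypot(x[0], x[1])
    return (2 / math.pi) * math.log(r) + (2 * GAMMA + 3 * math.log(2)) / math.pi

def stopped_value(x, n):
    # SRW absorbed at the origin; returns a(S_{n ∧ τ(0)})
    for _ in range(n):
        if x == (0, 0):
            return 0.0
        x = random.choice([(x[0] + 1, x[1]), (x[0] - 1, x[1]),
                           (x[0], x[1] + 1), (x[0], x[1] - 1)])
    return a(x)

x0, n, runs = (5, 0), 200, 10000
est = sum(stopped_value(x0, n) for _ in range(runs)) / runs
print(est, a(x0))  # optional stopping: the two values should be close
```

Since the stopping time $n \wedge \tau(0)$ is bounded, no integrability issues arise and the optional stopping theorem applies directly.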
Letting $L \to \infty$, we get for $|x| \leq n + 1$ that

On the position of conditioned walks
Now, we build on the results of the previous section to deduce more precise bounds on probabilities of certain events related to the position of an S-walk.
The general idea is that the $\widehat S$-walk behaves quite like a SRW when it is far away from the origin.
We provide two lemmas that will be used constantly throughout the paper. The first lemma estimates how large the hitting time of the boundary of a disk of large radius is.
Moreover, consider an $\widehat S$-walk started from $x \in B(r) \setminus \{0\}$. There is $c_1 > 0$ such that, for $r \geq r_0$ and $t \geq 2c_0 \cdot r^2(\ln\ln r)$, it holds that
Proof. Inequality (10) is shown using a coin-tossing argument: for the event $\{\tau(r) > t\}$ to occur, it is necessary that the walker fails to leave the disk $B(r)$ in each of several non-overlapping time intervals. Indeed, we divide the time interval $[0, t]$ into such intervals, bound the probability that the walk stays in the disk $B(r)$ throughout each of them, and use the Markov property at the intervals' endpoints; the failure probability on each interval is bounded away from 1 for large $r$ by the Central Limit Theorem. Thus, we can bound the probability of interest. To get (11), we just apply Lemmas 2.1 and 2.2, where in the last inequality we used that $t \geq 2c_0 \cdot r^2(\ln\ln r)$. Take $c_1 = c_0/3$ and increase $r_0$ if needed to ensure $\frac12 + O(\ln^{-1}\ln r) > \frac13$.
Since we expect an $\widehat S$-walk to be close to a SRW, it is reasonable to expect that after $n$ steps it is at distance roughly $\sqrt n$ from the origin. Using Lemma 3.1 we estimate the probability of deviating too much from this. Choose any $f$ satisfying $2c_0 \ln\ln r \leq f \leq n/r^2$, with $c_0$ given by Lemma 3.1. If $u \geq 3r$, there is $n_0 > 0$ such that for $n \geq n_0$ we have the deviation bound with universal constants $c_1, c_2, c_3$. Also, for $r \leq \frac13 l$ and $n \geq n_0$ we have the analogous bound.
Proof. All paths of the conditioned walk up to time $n$ must lie inside $B(n + r)$. We decompose our event with respect to the stopping time $\widehat\tau(r)$ at which the walk hits $\partial B(r)$, using Lemma 3.1; the resulting bound holds for any $t$ satisfying $t \geq 2c_0 \cdot r^2(\ln\ln r)$ and $r \geq r_0$. Since $r \geq n^{1/4}$, we just need $n \geq n_0$, a universal constant. Taking $t = f r^2$, we obtain the first term. Notice also that there is such an $f$ with $r^2 f \leq n$. Indeed, the hypothesis $r \leq \sqrt n \cdot \ln^{-1/2}\ln n$ implies $r^2(\ln\ln r) \ll n$ as $n \to \infty$, and thus we can find an intermediate $f$ satisfying the hypotheses. Now, for $z \in \partial B(r)$ we can compare this event with SRW: noticing that $a(z) = \frac2\pi \ln r + O(1)$ and $r \in [n^{1/4}, n^{1/2}]$, the comparison follows from Lemmas 2.1 and 2.2. To estimate the corresponding quantity for SRW, we apply a Berry-Esseen type estimate for multidimensional walks from Bentkus [1]. For SRW, it states that uniformly over all convex sets $A \subseteq \mathbb{R}^2$ we have
$$\big|\mathbb{P}[S_n \in A] - \mathbb{P}[\mathcal N(0, \Sigma_n) \in A]\big| = O(n^{-1/2}), \tag{12}$$
where $\Sigma_n = \frac n2 \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ is the covariance matrix of $S_n$. Thus, using $A = B(u)$ leads to the Gaussian estimate; for a walk started from $z$, we simply translate the set. Putting together all the bounds above, we conclude the proof of the first claim. The same idea applies to the second inequality: applying (12) once again, notice that it now holds for $r \leq \frac13 l$, and, analogously to the previous case, we obtain the claimed bound.
Corollary 3.3. In the same setting as Lemma 3.2, if $r \geq n^\beta$ for some $\beta < \frac12$, then as $n \to \infty$ we have the simplified bound.
Proof. Just notice that we can take $f = n^{1/2-\beta}$, and for this choice of $f$ we have $\exp(-c_1 f) \ll n^{-1/2}$.
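As a quick sanity check of the normal approximation with covariance $\frac n2 I$ used here (our own sketch; the sample sizes are arbitrary illustrative choices), one can estimate $\mathbb{P}[|S_n| \leq \sqrt n]$ by Monte Carlo and compare it with the value $1 - e^{-1}$ predicted by the chi-square(2) law of $|S_n|^2/(n/2)$:

```python
import numpy as np

rng = np.random.default_rng(7)
n, runs = 400, 20000

# Each SRW step moves one uniformly chosen coordinate by +/-1, so S_n has
# covariance (n/2)I and |S_n|^2 / (n/2) is approximately chi-square(2);
# hence P[|S_n| <= r] is approximately 1 - exp(-r^2 / n).
axis = rng.integers(0, 2, size=(runs, n))   # which coordinate moves
sign = rng.choice([-1, 1], size=(runs, n))  # direction of the move
dx = np.where(axis == 0, sign, 0).sum(axis=1)
dy = np.where(axis == 1, sign, 0).sum(axis=1)

r = np.sqrt(n)
emp = np.mean(dx ** 2 + dy ** 2 <= r ** 2)
gauss = 1 - np.exp(-r ** 2 / n)  # = 1 - 1/e
print(emp, gauss)
```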

Local Central Limit Theorem
In this section we provide some finer estimates for the position of an S-walk after n steps. Proposition 4.1 is a local CLT for S.
Proposition 4.1. If $1 \leq |x|, |y| \leq \sqrt{Mn}$ and $x - y$ has the same parity as $n$, then there is a positive $n_0(M)$ such that the two-sided bound (13) holds for $n \geq n_0$, where the implied constants in "$\asymp$" depend on $M > 1$ but not on $n, x, y$. Moreover, there is a universal positive constant $C$ such that for any $x, y \neq 0$ and any $n$ it holds that
The above proposition says, in particular, that the expected number of visits to a fixed site up to a given time converges, but very slowly. Of course it has to converge, since the $\widehat S$-walk is transient by construction. Recall also that we have expression (3). Applying the Markov property at the stopping time $\widehat\tau = \widehat\tau(n^{1/3})$, we can write the decomposition (15). This decomposition allows us to break the proof into cases, according to how $|x|$ and $|y|$ compare to $n^{1/3}$.
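As an illustration of what a local CLT asserts in the unconditioned setting (a numerical aside of ours, not part of the proof), the exact $n$-step distribution of planar SRW can be computed by dynamic programming and compared with the prediction $\frac{2}{\pi n}e^{-|y|^2/n}$ at sites of the right parity:

```python
import numpy as np

# Exact n-step distribution of planar SRW by dynamic programming, compared
# with the local CLT prediction (2 / (pi n)) * exp(-|y|^2 / n) at sites of
# the same parity as n (off-parity sites carry zero probability).
n = 100
size = 2 * n + 1  # the walk cannot leave [-n, n]^2, so np.roll never wraps mass
p = np.zeros((size, size))
p[n, n] = 1.0     # start at the origin (grid center)
for _ in range(n):
    p = 0.25 * (np.roll(p, 1, 0) + np.roll(p, -1, 0)
                + np.roll(p, 1, 1) + np.roll(p, -1, 1))

clt0 = 2 / (np.pi * n)  # local CLT prediction at y = 0
print(p[n, n], clt0)
```

At $y = 0$ the exact value and the prediction agree to within a percent already for $n = 100$; the conditioned-walk statement (13) then modifies this picture only through the potential-kernel factors.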
Uniform upper bound. If $|x| < n^{1/3}$, we have from (15) a reduction to distant starting points; since $j \leq n$, it suffices to prove (14) for $|x| \geq n^{1/3}$. Recall relation (8). For SRW on the plane, the uniform bound is straightforward from the local CLT. To extend the result to the $\widehat S$-walk, just notice that $\frac{a(y)}{a(x)}$ is bounded by a constant: indeed, using that $|y - x| \leq n$ and the asymptotic expression for the potential kernel, we have $a(y) \leq a(x) + O(\ln n) = O(a(x))$ for $|x| \geq n^{1/3}$. Now, let us focus on proving (13). Notice that (14) already proves the upper bound in (13) in the case $|x|, |y| \geq n^{1/3}$.
Lower bound, case $|x|, |y| \geq n^{1/3}$. Our proof of the lower bound is more involved. We break it into two steps.
Step 1. By allowing the walk to run for εn steps we can consider points at distance of order √ n from the origin.
Step 2. We prove the lower bound for points at distance of order √ n confined to a specific region of the plane. Then, we extend to all points we need by concatenating regions.
Step 1. Let us consider the annulus $A$. Then it is possible to choose $\varepsilon, K > 0$, depending on $M$, to ensure that for every $z$ with $n^{1/3} \leq |z| \leq \sqrt{Mn}$ we have
$$\mathbb{P}_z[S_{\varepsilon n} \in A,\ \tau(0) > \varepsilon n] \geq \delta$$
for large $n$, where $\delta$ is a positive constant.
Proof. Notice that by Lemma 2.2 we can bound the probability that the walk hits the origin before time $\varepsilon n$. This means that the claim will follow once we prove that $\mathbb{P}_z[S_{\varepsilon n} \in A]$ can be made close to 1. For that, we once again use the Berry-Esseen result from Bentkus [1]: using equation (12) twice, we obtain an estimate valid for every $\bar z \in B(\sqrt{M/\varepsilon})$. On the other hand, we can bound the complementary probability, so that, if we take $\varepsilon \downarrow 0$, the right-hand side tends to 1.
We can decompose our event with respect to the position of the walk at times $\varepsilon n$ and $n - \varepsilon n$ and apply Step 1. We have $\mathbb{P}_x[S_n = y,\ \tau(0) > n] \geq \mathbb{P}_x[S_n = y,\ \tau(0) > n,\ S_{\varepsilon n} \in A,\ S_{n - \varepsilon n} \in A]$, which we expand as a sum over the positions $w, w' \in A$ at these two times; thus we just have to prove the lower bound for $w, w' \in A$.
Step 2. We claim that there is a positive constant $c > 0$ such that the corresponding bound holds for any $w, w' \in A$.
Proof. Let us decompose $A$ into 4 regions: we define $D_1$, and $D_i$ for $i = 2, 3, 4$ are obtained from $D_1$ by rotating it around the origin by the angles $\frac{\pi}{2}$, $\pi$ and $\frac{3\pi}{2}$, respectively. Notice that the regions $D_i$ cover the set $A$ and that $D_i$ intersects $D_{i+1}$ (see Figure 1).
We first prove that the lower bound holds if $w, w' \in D_1$. For this, we use the fact that, writing $S_n = (X_n, Y_n)$, the processes $(X_n + Y_n;\ n \geq 1)$ and $(X_n - Y_n;\ n \geq 1)$ are two independent SRWs on $\mathbb{Z}$. Let $w, w' \in D_1$ and denote $a = \langle w, (1,1)\rangle$, $a' = \langle w', (1,1)\rangle$, $b = \langle w, (1,-1)\rangle$, $b' = \langle w', (1,-1)\rangle$. Using $S^1_n$ to denote a one-dimensional SRW, we can write
$$\mathbb{P}_w[S_n = w',\ \tau(0) > n] \geq \mathbb{P}_w\big[S_n = w',\ \langle S_j, (1,1)\rangle > 0 \text{ for } j = 1, \ldots, n\big] = \mathbb{P}_a[S^1_n = a',\ \tau(0) > n] \cdot \mathbb{P}_b[S^1_n = b'],$$
where we used the reflection principle and the independence of $(X_n + Y_n)$ and $(X_n - Y_n)$. Since we know $w, w' \in A$, the local CLT in two dimensions provides (17); on the other hand, noticing that $a, a' \geq 2\varepsilon\sqrt n$, we have (18) by the one-dimensional local CLT. Putting together (17)-(18), we conclude the lower bound for $w, w' \in D_1$, and the same argument works if $w, w'$ belong to the same $D_i$. In case $w \in D_1 \setminus D_2$ and $w' \in D_2 \setminus D_1$, we notice that for any $w'' \in D_1 \cap D_2$ we have
$$\mathbb{P}_w[S_{n/2} = w'',\ \tau(0) > n/2] \geq \frac{c}{n} \quad\text{and}\quad \mathbb{P}_{w''}[S_{n/2} = w',\ \tau(0) > n/2] \geq \frac{c}{n}.$$
Hence, summing over the intermediate positions $w'' \in D_1 \cap D_2$, we can write the corresponding lower bound. Applying the above reasoning at most twice, we prove that the lower bound holds for every $w, w' \in A$.
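The rotation trick in Step 2 rests on the elementary fact that the increments of $X + Y$ and $X - Y$ are independent $\pm 1$ steps: each planar step changes both sums by exactly one, with the four sign combinations equally likely. A quick empirical check (a toy sketch, not part of the argument):

```python
import random

random.seed(3)

# For S_n = (X_n, Y_n), each planar step changes both X+Y and X-Y by
# exactly +/-1, and the four sign combinations are equally likely; hence
# the two increment sequences are independent one-dimensional SRW steps.
steps = [random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)]) for _ in range(20000)]
du = [sx + sy for sx, sy in steps]  # increments of X + Y
dv = [sx - sy for sx, sy in steps]  # increments of X - Y

corr = sum(a * b for a, b in zip(du, dv)) / len(du)
print(sorted(set(du)), sorted(set(dv)), corr)
```

For $\pm 1$-valued variables, vanishing correlation is equivalent to independence, so the near-zero empirical correlation reflects the exact product structure used in the proof.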

Combining Step 2 with (16), we conclude the proof of the lower bound in the case $n^{1/3} \leq |x|, |y| \leq \sqrt{Mn}$. Since we already have the upper bound for this case, all that remains is to extend (13) to the other cases using the decomposition (15).
Case $|x| \leq n^{1/3}$ and $n^{1/3} \leq |y|$. Recalling (15), we have the corresponding decomposition. Since $j \leq n^{2/3+\delta} \ll n$ and $n^{1/3} \leq |y|, |z| \leq \sqrt{Mn}$, we conclude the bound from the previous case.
Case $|y| \leq n^{1/3}$ and $n^{1/3} \leq |x|$. This case is straightforward from the reversibility of the $\widehat S$-walk, since
$$\mathbb{P}_x[\widehat S_n = y] = \frac{a(y)^2}{a(x)^2}\,\mathbb{P}_y[\widehat S_n = x] \leq \frac{a(y)^2}{c\ln^2 n}\cdot \frac{C}{n}.$$
Case $1 \leq |x|, |y| \leq n^{1/3}$. Just apply (15) and use the previous case to estimate $\mathbb{P}_z[\widehat S_{n-j} = y]$.

Speed of transience
In this section we prove Theorem 1.2. Let us recall that $M_n := \min_{m \geq n} |\widehat S_m|$.

(e + δ)t(ln ln t).
Infinitely often $M_n \leq n^\delta$. Define $\tau_n = \inf\{t > 0 : \widehat S_t \in \partial B(n^\delta) \cup \partial B(n)\}$ and let $\theta_t$ be the time shift by $t$ for the Markov chain $\widehat S_n$. The stopping times $U_0 := \inf\{n \geq 1 : |\widehat S_n| \geq n^{\frac12+\delta}\}$, together with the subsequent $U_i, V_i$, are well defined, since the previous result implies that $|\widehat S_n| \geq n^{\frac12+\delta}$ infinitely often. Notice that all of them are finite, that $U_i < V_i < U_{i+1}$, and that the conditional probability that $|\widehat S_{V_i}| \leq U_i^\delta$ is bounded from below by a positive constant; indeed, this follows from Lemma 2.3.
Since these events are independent, we have by the second Borel-Cantelli lemma that $|\widehat S_{V_i}| \leq U_i^\delta$ infinitely often, and thus $M_n \leq n^\delta$ infinitely often.
For any $y > x > 0$ we can decompose $\{\eta_x > n\}$ according to whether $\sigma_y \leq n$. The second term in this decomposition is small if we take $y = n^{1/2-\delta}$, since by Lemma 3.1 it is at most $e^{-cn^\delta}$. On the other hand, we can estimate the first term by
$$\mathbb{P}[\eta_x > n,\ \sigma_y \leq n] \leq \mathbb{P}[\eta_x > \sigma_y,\ \sigma_y \leq n] \leq \mathbb{P}_y\big[\widehat\tau(x) < \infty\big] \sim \frac{\ln x}{\ln y},$$
implying that $\mathbb{P}[\eta_x > n] \leq c_\delta \cdot \frac{\ln x}{\ln n}$ as long as $e^{-cn^\delta} \ll \frac{\ln x}{\ln n}$. Consider the sequences $x_k = e^{k^m}$ and $n_k = e^{k^{m+2}}$, where $m$ is a parameter we choose later. Then we have
$$\sum_k \mathbb{P}[\eta_{x_k} > n_k] \leq c \sum_k \frac{\ln x_k}{\ln n_k} = c \sum_k \frac{1}{k^2} < \infty.$$
This means that a.s. there is a random $k_0$ such that for every $k \geq k_0$ and $n \geq n_k$ it holds that $|\widehat S_n| \geq x_k$. Notice that we can write $x_k = \exp(k^m) = \exp\big(\ln^{1-\frac{2}{m+2}} n_k\big)$. Thus, we have $M_{n_k} \geq \exp\big[\ln^{1-\frac{2}{m+2}} n_k\big]$ for $k \geq k_0$. Since $M_n$ is non-decreasing, we can extend the result to every large $n$. Indeed, for any $t \in \mathbb{N}$ let $k(t)$ be the largest integer such that $n_{k(t)} \leq t$, i.e., $k(t) := \lfloor \ln^{\frac{1}{m+2}} t \rfloor$. We have $M_t \geq M_{n_{k(t)}} \geq \exp\big(k(t)^m\big)$. Since $m$ is arbitrary, we can choose $m$ so that $\frac{2}{m+2} < \delta$. Denoting $s = \ln t$, we need $\exp\big((s^{\frac{1}{m+2}} - 1)^m\big) \geq \exp\big(s^{1-\delta}\big)$, which holds for large $t$.
Infinitely often $M_n \geq \sqrt n \cdot \ln^{-\delta} n$. In general, at time $n$ the walk should be at distance of order $\sqrt n$ from the origin. We consider three different breaking points near $\sqrt n$, denoted $u$, $l$ and $m$; the letters are chosen as mnemonics for upper, lower and minimum, respectively. By Corollary 3.3 we know that
$$\mathbb{P}_x\big[|\widehat S_n| < l\big] \leq c\,e^{-\ln^2\ln n} \quad\text{and}\quad \mathbb{P}_x\big[|\widehat S_n| > u\big] \leq c\,e^{-\ln^2\ln n}. \tag{20}$$
We notice that, on the highly probable event $\{|\widehat S_n| \in [l, u]\}$, the probability of avoiding $B(m)$ is almost independent of the precise position. Indeed, for $|z| \in [l, u]$ our Lemma 2.3 gives a lower bound, and a similar computation leads to the matching upper bound $(1 + o(1))\frac{\ln\ln n}{\ln n}$.
The idea now is to apply a generalization of the Borel-Cantelli lemma known as the Kochen-Stone theorem [10]. It asserts that for any sequence of events $A_n$ such that $\sum_n \mathbb{P}[A_n] = \infty$,
$$\mathbb{P}[A_n \text{ infinitely often}] \geq \limsup_{k \to \infty} \frac{\big(\sum_{i=i_0}^{k} \mathbb{P}[A_i]\big)^2}{\sum_{i,j=i_0}^{k} \mathbb{P}[A_i \cap A_j]}, \tag{22}$$
with $i_0$ being any finite starting point. Let $n_k := e^{k \ln^2 k}$ and define the quantities $u_k, l_k, m_k$ as in (19) for $n = n_k$. This choice of $n_k$ ensures that $n_k$ grows quickly, which makes the events $A_k$ closer to independent, while keeping $\sum_k \mathbb{P}[A_k] = \infty$. Also, it is simple to check that $m_k \ll l_k \ll u_k \ll m_{k+1}$ as $k \to \infty$. For the numerator of (22) we get a lower bound of order $\ln^2\ln k$.
Estimating the denominator. We need to show that the denominator of (22) has the same asymptotic behavior as in (23). We first prove Lemma 5.1. Recall that by (21) we have the avoidance estimate; using Lemma 2.3, we have a similar estimate for the corresponding term. After proving Lemma 5.1, the proof that $M_n \geq \sqrt n \cdot \ln^{-\delta} n$ infinitely often will be complete using the following estimate.
Lemma 5.2. We have that $\sum_{i<j \leq k} \mathbb{P}[A_i]\,\mathbb{P}[A_j]$ has the same order as the numerator of (22).
Proof of Lemma 5.1. For $i < j$ we decompose the event $A_i \cap A_j$ with respect to the position of $\widehat S_{n_i}$, considering the intervals $I_1$, $I_2$ and $I_3$. We begin by showing that restricting to the intervals $I_2$ and $I_3$ yields upper bounds of smaller order than what we need.
Interval $I_3$. Notice that after $n_i$ steps an $\widehat S$-walk can be at distance at most $|x| + n_i$ from the origin. Also, if $j \geq 3i$ we have that
$$n_j^{1/3} \geq \exp\Big\{\tfrac13 (3i)\ln^2(3i)\Big\} = \exp\big\{i\ln^2 i + (2\ln 3 + o(1))\, i\ln i\big\} \gg n_i,$$
implying that, for every $i \geq i_0(x)$, $\widehat S_{n_i}$ cannot reach $\partial B(n_j^{1/3})$ if $j \geq 3i$. Thus, we can bound the corresponding sum, which is finite since $\ln\ln n_i = (\ln i)(1 + o(1))$.
Interval $I_2$. We have, by the Markov property at times $n_i$ and $n_j - n_i$ and the estimates in (20), an upper bound of order $c\,e^{-(1+o(1))\ln^2\ln n_i}$. Thus, using (21) we can write
$$\mathbb{P}\big[A_i \cap A_j,\ |\widehat S_{n_i}| \in I_2\big] \leq c\,e^{-(1+o(1))\ln^2\ln n_i}\cdot \frac{\ln\ln n_j}{\ln n_j}.$$
Summing over $i < j$ and using that $e^{-\ln^2\ln n_i}$ is summable (recall that $\ln\ln n_i \sim \ln i$), we get
$$\sum_{j \leq k}\ \sum_{i=i_0}^{j-1} c\,e^{-\ln^2\ln n_i}\,\frac{\ln\ln n_j}{\ln n_j} \leq c_\delta\,\ln\ln k.$$
Interval $I_1$. The most representative part of $A_i \cap A_j$ is when $|\widehat S_{n_i}| \in I_1$.
The main idea in this case is to study how large the stopping time $T_{ij}$ is, and then check how large $|\widehat S_{n_j}|$ is (see Figure 2). We use Lemma 3.1 to estimate $T_{ij}$. Fix $\varepsilon \in (0,1)$ and define $s_j = n_j/\ln^{1-\varepsilon}\ln n_j$. For any $z$ with $|z| \in I_1$ we have, for $j > i \geq i_0(x)$, a bound $b_{ij}$ which is summable, i.e., $\sum_{i,j=i_0}^{\infty} b_{ij} < \infty$.
Figure 2: In Lemma 5.1 we formalize the idea that, on the event $A_i \cap A_j$, the pictured scenario is the most probable one.
Denote the maximum above by $a_{ij}$. To estimate $a_{ij}$ we use the Markov property at time $\widehat\tau(l_j)$, and then once again, together with $n_i + s_j \leq n_j$. We can use Lemma 3.2, with $r$ determined by $s_j$ and some $\varepsilon \in (0, 1/3)$, to estimate the resulting probability. Putting all these pieces together, we can write the required bound. We finish the proof by using (24) to estimate
$$\frac{j\ln^2 j}{j\ln^2 j - i\ln^2 i}\cdot \frac{1}{e^{c\,\ln^{2-3\varepsilon}\ln n_j}} \leq \frac{j\ln^2 j}{j\ln^2 j - (j-1)\ln^2 (j-1)}\cdot \frac{1}{e^{c\,\ln^{2-3\varepsilon}\ln n_j}} = j(1+o(1))\cdot \frac{1}{e^{c\,\ln^{2-3\varepsilon}\ln n_j}} \leq \frac{1}{j^2},$$
which implies that the second sum in equation (25) is bounded. We conclude this section with the last missing piece needed for Theorem 1.2.
Proof of Lemma 5.2. Since $j > i$, we can write for $j = i + s$ that
Notice that if we sum only over the range $i \geq \ln k$, we obtain the main contribution. We finish the proof by noticing that the sum above can be compared with the integral of $\frac{\ln\ln x}{x\ln x}$, whose antiderivative is $\frac12 \ln^2\ln x$.

Encounters of two walks
In this section we use the local CLT derived for the $\widehat S$-walk to prove that two independent copies of $\widehat S$ meet infinitely often. The idea is to apply the second moment method to a sequence of random variables that count the number of encounters during some well-separated time scales. In order to exemplify the method in a more classical setting, we first prove the following.
Proposition 6.1. Simple random walk on Z 2 is recurrent.
Proof. We prove that $S_n$ is recurrent by considering the counting variables $N_k$ associated with the time windows $[b^k, b^{k+1}]$. The result will follow from a conditional Borel-Cantelli argument; relation (26) is obtained by applying the second moment method to $N_k$. Indeed, since the walk starts at the origin, we have $S_{b^{k-1}} \in B(b^{k-1})$ a.s., and for $n \in [b^k, b^{k+1}]$ the relevant return probabilities can be estimated. Thus, for any $x \in B(b^{k-1})$ and $n$ with the right parity we have a lower bound for the first conditional moment, and, for $x \in B(b^{k-1})$ and $n, m + n \in [b^k, b^{k+1}]$ with adequate parities, a matching upper bound for the second conditional moment.

Thus, we have by the Paley-Zygmund inequality that
$$\mathbb{P}\big[N_k > 0 \,\big|\, \mathcal F_{b^{k-1}}\big] \geq \frac{\big(\mathbb{E}[N_k \mid \mathcal F_{b^{k-1}}]\big)^2}{\mathbb{E}[N_k^2 \mid \mathcal F_{b^{k-1}}]} \geq \frac{c\,3^{2k}}{C\,3^{2k} + c\,3^{k}} \geq c,$$
a positive constant. As a consequence, we can write the conditional Borel-Cantelli conclusion, which proves (26).
We now turn to the setting of Theorem 1.3. We want to prove that, for all $b^k \leq n \leq b^{k+1}$ and $1 \leq m \leq b^{k+1} - n$, the estimates (27) and (28) hold if $k$ is large enough. Considering only one walk, we have by Corollary 3.3 that, denoting by $r_k$ the corresponding radius, the resulting probability bound is summable, so Borel-Cantelli implies that $|\widehat S_{b^k}| \leq r_k$ eventually.
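The Paley-Zygmund step can be illustrated empirically: for any nonnegative random variable, Cauchy-Schwarz gives $\mathbb{P}[N > 0] \geq (\mathbb{E}N)^2/\mathbb{E}[N^2]$, and this inequality holds exactly for the empirical distribution of any sample. The sketch below (our own toy version, counting returns of a planar SRW to the origin in the window $[b^k, b^{k+1}]$, with arbitrary illustrative parameters) verifies it:

```python
import random

random.seed(4)

# Toy version of the second moment step: N counts returns of a planar SRW
# to the origin in the window [b^k, b^(k+1)].  The Paley-Zygmund /
# Cauchy-Schwarz bound P[N > 0] >= (E N)^2 / E[N^2] holds exactly for the
# empirical distribution of any sample, which we verify below.
b, k = 3, 3
t0, t1 = b ** k, b ** (k + 1)
samples = []
for _ in range(4000):
    x = y = visits = 0
    for step in range(1, t1 + 1):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x += dx
        y += dy
        if step >= t0 and x == 0 and y == 0:
            visits += 1
    samples.append(visits)

EN = sum(samples) / len(samples)
EN2 = sum(v * v for v in samples) / len(samples)
Ppos = sum(v > 0 for v in samples) / len(samples)
print(EN, EN2, Ppos)
```

Because the bound is an instance of Cauchy-Schwarz applied to $N\,\mathbf{1}_{\{N>0\}}$, it can never fail on data; the probabilistic content of the proof lies in showing that the ratio stays bounded below along the chosen scales.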
Lower bound. For the first bound we just apply Proposition 4.1. For every $z_i \in B(r_{k-1})$, $n \in [b^k, b^{k+1}]$ and $y \in \mathbb{Z}^2$ with $\sqrt{b^k} \leq |y| \leq 2\sqrt n$, we have $\ln|y| \asymp \ln n$. In this case, we can write the desired estimate by applying the Markov property at time $b^{k-1}$: on the event $\{\widehat S^i_{b^{k-1}} \in B(r_{k-1}) \text{ for } i = 1, 2\}$ we obtain the bound, since $n \sim n - b^{k-1}$.
Upper bound. Notice that if $z_i \in B(r_{k-1})$ and $n \in [b^k, b^{k+1}]$, then we have, for small $\varepsilon > 0$, the a priori localization of the walks. Using Proposition 4.1, we can write
$$\sum_{0 < |y| \leq \sqrt n} \mathbb{P}_{z_1,z_2}\big[\widehat S^1_n = \widehat S^2_n = y\big] = \sum_{0 < |y| \leq \sqrt n} \mathbb{P}_{z_1}\big[\widehat S_n = y\big]\,\mathbb{P}_{z_2}\big[\widehat S_n = y\big] \leq \#\{y : |y| \leq \sqrt n\}\cdot \frac{C}{n^2} \leq \frac{C}{n}.$$
We decompose the remaining range $\sqrt n < |y| \leq s$ into a collection of disjoint intervals $I_k = (e^{k-1}\sqrt n,\ e^k\sqrt n]$. Notice that to cover the range above we only need to consider $1 \leq k \leq \frac12 \ln\ln n$. If $|y| \in I_k$, we have for $z \in \partial B(n^{1/3})$ that
$$\mathbb{P}_z\big[\widehat S_n = y\big] \leq \frac{a(y)}{a(z)}\cdot \mathbb{P}_z[S_n = y], \qquad \frac{a(y)}{a(z)} \leq C\,\frac{k + \frac12\ln n}{\ln n} \leq C,$$
where the second inequality uses the error term from the local CLT given in [11, Theorem 1.1], and the last inequality uses that $k = O(\ln\ln n)$.
For any $z_i \in B(r_{k-1})$, we can once again use the method of decomposing with respect to the first time the walk hits $\partial B(n^{1/3})$. The bound in equation (15) gives
$$\mathbb{P}_{z_i}\big[\widehat S_n = y\big] \leq e^{-cn^\delta} + \max_{\substack{z \in \partial B(n^{1/3}) \\ 1 \leq j \leq n^{2/3+\delta}}} \mathbb{P}_z\big[\widehat S_{n-j} = y\big] \leq \frac{C}{n}\,e^{-e^{2k-2}} + \frac{C}{n^2}\,e^{-2k},$$
since $e^{-cn^\delta} \leq n^{-l}$ for every $l \in \mathbb{N}$ and $n - j \sim n$. Summing over $|y| \in I_k$ we get
$$\sum_{|y| \in I_k} \mathbb{P}_{z_1,z_2}\big[\widehat S^1_n = \widehat S^2_n = y\big] \leq \#\{y : |y| \in I_k\}\cdot \Big(\frac{C}{n}\,e^{-e^{2k-2}} + \frac{C}{n^2}\,e^{-2k}\Big)^2.$$
Each of the three terms on the right-hand side is summable in $k$. Hence,
$$\sum_{k=1}^{\frac12\ln\ln n}\ \sum_{|y| \in I_k} \mathbb{P}_{z_1,z_2}\big[\widehat S^1_n = \widehat S^2_n = y\big] \leq \frac{C}{n},$$
concluding that $\mathbb{P}_{z_1,z_2}\big[\widehat S^1_n = \widehat S^2_n\big] \leq \frac{C}{n}$. Using the Markov property and the bound above, we can write that, on the event $\{\widehat S^i_{b^{k-1}} \in B(r_{k-1})$ for $i = 1, 2\}$, the analogous conditional bound holds. Finally, we have from Proposition 4.1 and the independence of the walks $\widehat S^i$ that
$$\mathbb{P}_{x_1,x_2}\big[\widehat S^1_n = \widehat S^2_n,\ \widehat S^1_{n+m} = \widehat S^2_{n+m} \,\big|\, \mathcal F_{b^{k-1}}\big] \leq \frac{C}{n}\cdot \max_{x, y \neq 0}\mathbb{P}_x\big[\widehat S^1_m = y\big] \leq \frac{C}{nm}.$$
Encounters. Having the estimates (27) and (28), we proceed with the second moment method. Let us consider the events $V_k$ and $V^+_k$. Just like in Proposition 6.1, equations (27) and (28) give, for $k \geq k_0$, the first and second moment bounds, and the Paley-Zygmund inequality implies a lower bound with a positive constant $c$. This implies that on $V^+_{k_0}$ we have (29). Since $\mathbb{P}_{x_1,x_2}[V_k \text{ eventually}] = \mathbb{P}_{x_1,x_2}\big[\cup_{k \geq 1} V^+_k\big] = 1$, we conclude that (29) holds $\mathbb{P}_{x_1,x_2}$-almost surely, and we finish the proof with the conditional Borel-Cantelli lemma [8, Theorem 5.3.2].