A special set of exceptional times for dynamical random walk on $\Z^2$

Benjamini, Häggström, Peres and Steif introduced the model of dynamical random walk on Z^d. This is a continuum of random walks indexed by a parameter t. They proved that for d = 3, 4 there almost surely exist times t such that the random walk at time t visits the origin infinitely often, while for d > 4 there almost surely are no such times. Hoffman showed that for d = 2 there almost surely exist times t such that the random walk at time t visits the origin only finitely many times. We refine Hoffman's result for dynamical random walk on Z^2, showing that with probability one there are times when the origin is visited only a finite number of times while other points are visited infinitely often.


Introduction
We consider a dynamical simple random walk on Z^2. Associated with each n is a Poisson clock. When the clock rings, the n-th move of the random walk is replaced by an independent random variable with the same distribution. Thus for any fixed t the distribution of the walk at time t is that of simple random walk on Z^2, which is almost surely recurrent.
Thus for each t the random variables {X n (t)} n∈N are i.i.d. The concept of dynamical random walk (and related dynamical models) was introduced by Benjamini, Häggström, Peres and Steif in [2], where they showed that various properties of the simple random walk hold for all times almost surely (such properties are called "dynamically stable"), while for other properties there a.s. exist exceptional times at which they fail (such properties are called "dynamically sensitive"). In particular, they showed that transience of simple random walk is dynamically sensitive in 3 and 4 dimensions, while in dimension 5 or higher it is dynamically stable. For more work on dynamical models see also [8] and [4].
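To make the model concrete, here is a minimal simulation sketch (our own illustration, not from the original papers): each move X_n carries an independent rate-1 Poisson clock, and at every ring that move is replaced by a fresh uniform step; reading off the configuration at a fixed dynamical time t then yields an ordinary simple random walk path. All function names are ours.

```python
import random

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # the four unit moves of SRW on Z^2

def dynamical_walk(n_steps, t_horizon, rng):
    """Simulate the moves {X_i(t)} for i < n_steps and t in [0, t_horizon].

    For each move i, return the list of (update_time, value) pairs; the value
    at dynamical time t is the last pair with update_time <= t.
    """
    history = []
    for _ in range(n_steps):
        updates = [(0.0, rng.choice(STEPS))]  # initial value at t = 0
        clock = rng.expovariate(1.0)          # first ring of a rate-1 Poisson clock
        while clock <= t_horizon:
            updates.append((clock, rng.choice(STEPS)))
            clock += rng.expovariate(1.0)
        history.append(updates)
    return history

def walk_at_time(history, t):
    """Extract the random-walk path S_0, ..., S_n frozen at dynamical time t."""
    pos = (0, 0)
    path = [pos]
    for updates in history:
        # the move in force at time t is the most recent update before t
        step = max((u for u in updates if u[0] <= t), key=lambda u: u[0])[1]
        pos = (pos[0] + step[0], pos[1] + step[1])
        path.append(pos)
    return path

rng = random.Random(0)
h = dynamical_walk(1000, 1.0, rng)
p0, p1 = walk_at_time(h, 0.0), walk_at_time(h, 1.0)
```

At any fixed t the extracted path has i.i.d. uniform increments, i.e. it is an ordinary simple random walk, which is the "fixed-time" observation made above.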
In [5] Hoffman showed that recurrence of simple random walk in two dimensions is dynamically sensitive, and that the Hausdorff dimension of the set of exceptional times is a.s. 1. Furthermore, it was shown that there exist exceptional times at which the walk drifts to infinity at rate close to √n.
The following problem was raised by Jeff Steif and Noam Berger during their stay at MSRI: when the dynamical random walk on Z^2 is at an exceptional time, returning to the origin only a finite number of times, is it always because it "escapes to infinity", or are there times at which the disc of radius 1 is hit infinitely often, but the origin is hit only a finite number of times?
In this paper we prove that such exceptional times exist, and furthermore that the Hausdorff dimension of the set of these exceptional times is 1. The techniques used are a refinement of those used in [5] to show the existence of exceptional times at which the walk is transient.
The proof is divided into two main parts. First, we define a sequence of events SE k (t), depending on {S n (t)} n∈N , such that SE k (t) implies we are at an exceptional time of the desired form, and show that the correlation between these events at two different times t 1 , t 2 is sufficiently small (Theorem 2). Then we use this low correlation, in a manner quite similar to [5], to show the existence of a dimension 1 set of exceptional times.

Definitions, Notation and Main Results
We will follow the notation of [5] as much as possible. Let D denote the discrete circle of radius 1, that is, D = {x ∈ Z^2 : |x| = 1}. The set of special exceptional times of the process, S exc , is the set of all times for which the walk visits D infinitely often but visits (0, 0) only finitely often. More explicitly we define

The main result of this paper is:

Theorem 1 Almost surely S exc is non-empty. Moreover, the Hausdorff dimension of S exc is almost surely 1.

We will need the following notation. Define a series of stopping times s i . We will divide the walk into segments M i between two consecutive s i , and define a super segment W k to be a union of segments

Define a series of annuli
where |x| denotes the standard Euclidean norm |x| = √(x 1 ^2 + x 2 ^2). And define the event G k by Choosing these annuli to be of the right magnitude will prove to be one of the most useful tools at our disposal. We will show that a random walk will typically satisfy G k (see Lemmas 6 and 7). On the other hand, the location of the walk inside the annulus A j at the beginning of the segment M j will have only a small influence on the probabilities of the events we will investigate (see Lemmas 8 and 10). Together this provides "almost independence" of events in different segments.
We will define three events concerned with hitting the disc for the dynamical random walk. We define the event R j (t) to be the event that the walk (at time t) hits D at some step in the segment M j , i.e.
(Note that in [5], this denoted the event of hitting the origin, and not D, but this difference does not alter any of the calculations.) We denote by SR j (t) the event that the walk (at time t) hits D at some step in the segment M j , but does not hit the origin at any step of M j : It is easy to see that SR j (t) ⊂ R j (t).
We define the third event, SE k (t), by This is the event that the walk never hits the origin in the super segment W k , and there is exactly one segment M j , 2 k ≤ j < 2 k+1 , where the walk hits D. Additionally, this event requires that the walk end each segment M i in the annulus A i . It is easy to see that if the event ∩ k≥M SE k (t) occurs for some integer M then t ∈ S exc . Given 2 k ≤ j < 2 k+1 we write For any random walk event E we write E(0, t) for E(0) ∩ E(t). Thus, for example, we have SR j (0, t) = SR j (0) ∩ SR j (t).
To prove Theorem 1 we will first prove the following theorem.
Theorem 2 There is a function f (t) such that for any M , The first statement in Theorem 1 follows from Theorem 2 and the second moment method. The second statement in Theorem 1 will follow from bounds that we will obtain on the growth of f (t) near zero, together with Lemma 5.1 of [11]. Since we will be showing that some events have low correlation between times 0 and t, some of the bounds we use to prove Theorem 2 will only hold when k is sufficiently large compared to 1/t. Therefore, for t ∈ (0, 1), we denote by K(t) the smallest integer greater than | log t| + 1, and by K ′ (t) the smallest integer greater than log(| log t| + 1) + 1. (Here and everywhere we use log n = log 2 n.)
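In code, the two thresholds are straightforward to transcribe ("smallest integer greater than x" is floor(x) + 1, and log means log base 2 as stated above); the function names below are our own:

```python
import math

def K(t):
    """Smallest integer greater than |log2(t)| + 1, for t in (0, 1)."""
    return math.floor(abs(math.log2(t)) + 1) + 1

def K_prime(t):
    """Smallest integer greater than log2(|log2(t)| + 1) + 1, for t in (0, 1)."""
    return math.floor(math.log2(abs(math.log2(t)) + 1) + 1) + 1
```

For example, for t = 1/8 we have |log t| = 3, so K(t) = 5 and K'(t) = 4; both thresholds grow as t approaches 0, which is why the estimates below are stated for k larger than them.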
We end this section by noting that we use C as a generic constant whose value may increase from line to line and from lemma to lemma. In many of the proofs in the next section we use bounds that only hold for sufficiently large k (or j). This causes no problem, since it will be clear that we can always choose C such that the lemma is true for all k.

Proof of Theorem 2
This section is divided into four parts. In the first, we introduce some bounds on the behavior of two dimensional simple random walk. These are quite standard, and are stated here without proof. In the second part we quote, for the sake of completeness, lemmas from [5] which we will use later on. In the third part, which constitutes the bulk of this paper, we prove estimates on P(SE k (0) | SE k−1 (0)) and P(SE k (0, t) | SE k−1 (0, t)). In the fourth part we use these bounds to prove Theorem 2.

Two Dimensional Simple Random Walk Lemmas
The main tools that we use are bounds on the probability that simple random walk started at x returns to the origin before exiting the ball of radius n centered at the origin. For general x, this probability is calculated in Proposition 1.6.7 on page 40 of [7] in a stronger form than given here. The estimate for |x| = 1 comes directly from estimates on the two-dimensional harmonic potential (Spitzer 76, P12.3, page 124, or [6]).

Lemma 3
1. There exists C > 0 such that for all x with 0 < |x| < n

2. For |x| = 1 we have the following stronger bound:

We will also use the following standard bounds.
Lemma 4 There exists C > 0 such that for all x ∈ Z 2 , n ∈ N and m < √n, and

Lemma 4 is most frequently applied in the form of the following corollary:

Some Useful Lemmas From [5]
First we list some lemmas from [5] that we will use in the proof. (In [5], R j (0) is defined as It is easy to see that all lemmas in [5] hold with our slightly expanded definition.) Lemma 6 shows that if the walk is in the annulus A j−1 at step s j−1 , then with high probability the walk will be in the annulus A j at step s j . This will allow us to condition our estimates of various events on the event that S s j ∈ A j .

Lemma 6 For any j and x ∈
We will usually use the following corollary of Lemma 6, which is just an application of Lemma 6 to all segments inside a super segment, and follows directly from the union bound: Lemma 8 shows that we have a good estimate of the probability of hitting D during the segment M k , given that the walk starts the segment inside the annulus A k−1 .

Lemma 8
There exists C such that for any k and any x ∈ A k−1

Lemma 9 tells us that resampling a small fraction of the moves is enough to get a very low correlation between hitting D before and after the re-randomization.

Lemma 9
There exists C such that for all k, n ≥ s k−1 + s k /2 10k , for all I ⊂ {1, . . . , n} with |I| ≥ s k /2 10k and for all {x i (t)} i∈{1,...,n}\I The last lemma we quote says there is a low correlation between hitting D (during some segment M k ) at different times.

Lemma 10
There exists C such that for any t, any k > K(t) and any

Estimating P(SE k (0)) and P(SE k (0, t))

We start by giving estimates for the event SR j (t).
Lemma 11
There exists C > 0 such that for any j and any x ∈ A j−1 ,

Proof. Let n be the first step in the segment M j such that S n (0) ∈ D, letting n = ∞ if no such step exists. By Lemma 8, Let B i denote the event that the walk does not hit 0 between step i and s j . Then For any s j−1 ≤ i < s j , B i ∩ G j (0) is included in the event that after step i, the walk reaches A j before reaching 0. Therefore, by the second statement in Lemma 3, there exists C such that for all i Putting (5) and (3) into (4) we get By Lemma 6 we have that for all x ∈ A j−1 Thus, using (6) and (7) together with conditional probabilities with respect to G j (0), we get which establishes the upper bound.
To get the lower bound, note that conditioned on the event G j (0), if after step i the walk reaches A j before reaching (0, 0), and then does not hit (0, 0) in the next s j steps, then B i occurs.
Let E 1 denote the event that after step i we reach A j before reaching (0, 0), and let E 2 denote the event that a simple random walk, starting at A j , does not hit (0, 0) in the next s j steps. We get, using the Markovian property, that By Lemma 3 we have And by the Markovian property and stationarity, since a walk starting at A j must hit A j−1 before reaching 0, we can use Lemma 8 to bound Combining (9) and (10) into (8) we get Using (11), for all j and x ∈ A j−1 Thus for all j and x ∈ A j−1 , conditioning on G j (0) and using Lemma 6 and (12) implies that The next two lemmas give a lower bound on P(SE k (0)). We define This is an approximation of the probability of never hitting D during the super segment W k . Note that there exists c > 0 such that c < β k < 1 for all k > 1.

Lemma 12
For any k and any 2 k ≤ j < 2 k+1 , and any x ∈ A 2 k −1

Proof.
By the Markovian property, for any By Lemma 11 By Lemma 8 Multiplying over all l such that 2 k ≤ l < 2 k+1 and l ≠ j, we get Last, by Lemma 6 Putting (15), (16) and (17) into (14) we get ✷

Proof. SE k−1 (0) ⊆ G 2 k −1 (0), and therefore, by the Markovian property of simple random walk, we have By the definition of SE j k , the events SE j 1 k and SE j 2 k are disjoint for j 1 ≠ j 2 . The inequality in (18) follows from Lemma 12. ✷ Next we will work to bound from above the probability of SE k (0, t).
First, we deal with the case that the walk hits D in different sub-segments of W k in times 0 and t.
Lemma 14 There exists a constant C > 0 such that for any k > K ′ (t) and any

Proof. Recall that So, by the Markovian property, Now we will estimate each of the four terms in (19).
the last inequality holding by Lemma 11. Similarly, by symmetry between times 0 and t, To estimate the third term in (19), we use Lemma 10 to estimate each term in the product. Lemma 10 says that there exists C > 0 such that for any l and any x, y ∈ A l−1 , P x,y,l−1 (R l (0, t)) ≤ C/l 2 .
And the lower bound from Lemma 8: We then get that max x,y∈A l−1 − min x,y∈A l−1 P x,y,l−1 (R l (t)) + max x,y∈A l−1 P x,y,l−1 (R l (0, t)) which yields For the fourth factor in (19), Corollary 7 gives max x,y∈A l−1 Combining (20) ✷ Next, we deal with the case in which the walk hits D in the same sub-segment M j ⊂ W k , and show that it occurs with only negligible probability.
Proof. Let n 0 be the first step at which D is hit in the segment M j at time 0, and n t the first step at which D is hit in M j at time t, letting them equal ∞ if D is not hit in M j at that time. Notice that by symmetry we can estimate P x,y,j−1 (SR j (0, t)) ≤ 2P x,y,j−1 (SR j (0, t) ∩ (n 0 ≤ n t )).
Combining these we get Thus we get To bound the probabilities of the five terms on the right hand side of (24) we now introduce six new events, bound the probabilities of these events and then use these bounds to bound the terms in (24).
If B 1 does not occur, then |I| is the sum of at least 2 2(j−1) 2 j 6 i.i.d. indicator variables, each equaling 1 with probability (the last inequality holds because j ≥ 2 k ≥ K(t) ≥ | log t|). Therefore, by monotonicity in t and in the number of indicators, |I| stochastically dominates the sum of N = 2 2(j−1) 2 j 6 i.i.d. indicator variables T i , each equaling 1 with probability p = 1/2 j+1 . Let T = Σ 1≤i≤N T i . As we are conditioning on (B 1 ) c , we have that |I| dominates T, and if T ≥ N p/2 then and B 2 does not occur. By the Chernoff bound, for some absolute constant c > 0. Thus Put r = 2 (j−1) 2 2 j+2 j 6 . Let B 3 be the event that |S n 0 (t)| < r and Rearranging the order of summation we can write Since all variables {X i } i∈I have been resampled between time 0 and t, the probability that |S n 0 (t)| < r is the same as the probability that simple random walk starting at x will be at distance less than r from (0, 0) after |I| steps. Therefore, by the second part of Lemma 4, And since (B 2 ) c implies |I| > 2 2(j−1) 2 2 j+2 j 6 > r 2 j 6 we get Next we bound the probability that n t − n 0 is small. Let B 4 be the event that 0 < n t − n 0 < r 2 j 6 = 2 2(j−1) 2 2 j+2 j 1 2 . Conditioning on |S n 0 (t)| > r, B 4 implies that the random walk at time t gets to distance r from its position at step n 0 in fewer than r 2 j 6 steps; therefore we can apply the first part of Lemma 4 to bound Let I 1 denote the set of all indices between n 0 and n t for which, conditioned on our Poisson process, X i (0) and X i (t) are independent. Let B 5 denote the event that |I 1 | < 2 2(j−1) 2 2 4(j+2) j 1 2 . A calculation similar to the one done for B 2 shows that P x,y,j−1 (B 5 ) < C/j 6 . Let B 6 be the event that |S n t (0)| < r 1 . Using Lemma 4 in a calculation similar to the one for B 3 gives that Thus we get We will estimate the five probabilities in (24) as follows: 1. By Lemma 8, 2.
L 1 is included in the event that after step n 0 , at which |S n 0 (0)| = 1, the walk reaches distance r before hitting (0, 0); therefore, by Lemma 3, 3. To estimate P x,y,j−1 (R j (t) | R j (0) ∩ L 1 ), notice that R j (0) and L 1 depend only on what happens at time 0, and (B 2 ) c implies |I| ≥ (for all large enough j), so using Lemma 9 we get and therefore 4. To estimate P x,y,j−1 (L 2 | R j (0, t) ∩ L 1 ), we partition according to the value of n t . Since for a given n t , the event L 2 is independent of the variables Since by Lemma 3, for any value of n t , we deduce that 5. Last, we use Lemma 6 to bound Combining (32), (33), (34), (35) and (36) into (24) we get

✷
Lemma 16 There exists a constant C > 0 such that for any t > 0 and any k > K ′ (t),

Proof. By the Markovian property of simple random walk we have By Lemma 14, for any 2 k ≤ j 1 < j 2 < 2 k+1 , Since SE j,j k (0, t) ⊆ SR j (0, t), by Lemma 15, for any 2 k ≤ j < 2 k+1 , Combining (38) and (39) into (37) we conclude
Proof of Theorem 2. Set k 0 = K ′ (t). Then We start by estimating the first factor. Using the lower bound from Lemma 13, we get that this is bounded above by We now recall that we chose k 0 to be the smallest integer larger than log(| log t| + 1) + 1, to get For the second factor of (40), we use Lemma 13 to bound the denominator, together with Lemma 16 to bound the numerator. We get n k=k 0 +1 Multiplying the two estimates, we deduce that for any M . Since ∫ 0 1 C(4(1 + | log t|)) 3(log(| log t|+1))+2 dt < ∞, the proof is complete. ✷

Proof of Theorem 1
The proof of Theorem 1 follows from Theorem 2 and a second moment argument exactly as in [5]. We give it here for the sake of completeness.
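For the reader's convenience, the standard second moment inequality behind this argument (a consequence of Cauchy–Schwarz, often called the Paley–Zygmund method) can be stated as follows; the symbol Z below is generic notation of ours, not from the original text:

```latex
% Second moment method: if Z \ge 0 and \mathbb{E}[Z^2] < \infty, then
\[
  \mathbb{P}(Z > 0) \;\ge\; \frac{\bigl(\mathbb{E}[Z]\bigr)^2}{\mathbb{E}[Z^2]} .
\]
% In the proof below, Z plays the role of the Lebesgue measure of the set of
% candidate exceptional times in [0,1]; a lower bound on the right-hand side,
% uniform in the truncation level, yields existence with positive probability.
```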
Proof of Theorem 1. Define The equality (44) is true by Fubini's theorem, (45) is true because and (46) follows from (43). By Jensen's inequality, if h(x) = 0 for all x ∉ A then Then we get Now we show that the dimensions of T and S exc are one. By Lemma 5.1 of [11], for any β < 1 there exists a random nested sequence of compact sets and These sets also have the property that for any set T, if then T has dimension at least β. We construct the F k to be independent of the dynamical random walk. So by (42), (51) and (52), the same second moment argument as above together with (54) implies that with positive probability T satisfies (53). Thus T has dimension at least β with positive probability. As and Λ is countable, the dimension of S exc ∩ [0, 1] is at least β with positive probability. By the ergodic theorem, the dimension of S exc is at least β with probability one. As this holds for all β < 1, the dimension of S exc is one a.s. ✷

Extending the technique
We end the paper with three remarks on the scope of the techniques in this paper.
1. The techniques in this paper can be extended to show that other sets of exceptional times exist. For any two finite sets E and V such that it is possible to visit every point in V without hitting E, there is a.s. an exceptional time t at which each point in V is visited infinitely often, while the set E is not visited at all.
2. Not every set can be avoided by the random walk, as shown in the following claim:

Claim 17 Let L be a subset of Z 2 with the property that there exists some M > 0 such that every point in Z 2 is at distance at most M from some point in L. Then P(∃t ∈ [0, 1] : |{n : S n (t) ∈ L}| < ∞) = 0, i.e. there are a.s. no exceptional times at which L is visited only finitely often.
Proof. First, we can reduce to the case in which there is some (possible) path from 0 that misses L; otherwise we can drop points out of L until such a path exists, without ruining the desired condition on L. Second, we note that it is enough to prove that there are a.s. no times at which the random walk never hits L. This follows from the Markovian property of the random walk, and the fact that changing finitely many moves is enough to make a walk that visits L a finite number of times miss L altogether.
Then an exceptional time exists if and only if ∩ n≥0 Q n ≠ ∅.
The condition on L ensures that there is some ε > 0 such that for any simple random walk on Z 2 , regardless of current position or history, there is a probability of at least ε of hitting L in the next 2M steps. Therefore there exist constants A > 0 and 0 < c < 1 such that P(0 ∈ Q n ) ≤ Ac n . For any i ≥ 0 define Then RE i is the set of times in [0, 1] at which the i-th move gets resampled. Let H n = ∪ n i=0 RE i . Then H n is the set of all times at which one of the first n moves of the walk is resampled, and if we order H n , then between two consecutive times in H n the first n steps of the walk do not change. Thus Q n ≠ ∅ if and only if there exists some τ ∈ H n for which τ ∈ Q n . Since {S n (t)} behaves like a simple random walk for all t, we have, by linearity of expectation, E(#{τ ∈ H n : τ ∈ Q n }) = E(|H n |)P(0 ∈ Q n ) ≤ 2Anc n . Thus P(Q n ≠ ∅) ≤ E(#{τ ∈ H n : τ ∈ Q n }) ≤ 2Anc n . And consequently So no exceptional times exist. ✷
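The geometric decay P(0 ∈ Q n ) ≤ Ac n driving this argument can be checked numerically for a concrete co-dense set. In the sketch below (our own illustration), L is hypothetically taken to be the set of points whose first coordinate is a multiple of 3, so every point of Z^2 is within distance 1 of L; we estimate the chance that an ordinary simple random walk started at (1, 0) avoids L for n steps.

```python
import random

def avoid_prob(n_steps, trials, rng):
    """Monte Carlo estimate of P(SRW started at (1, 0) avoids
    L = {x : x_1 % 3 == 0} for n_steps steps)."""
    steps = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    survivals = 0
    for _ in range(trials):
        x, y = 1, 0
        ok = True
        for _ in range(n_steps):
            dx, dy = rng.choice(steps)
            x, y = x + dx, y + dy
            if x % 3 == 0:   # the walk has hit L
                ok = False
                break
        survivals += ok
    return survivals / trials

rng = random.Random(1)
p10 = avoid_prob(10, 20000, rng)
p20 = avoid_prob(20, 20000, rng)
```

Here the first coordinate is trapped between two multiples of 3, and each step hits L with probability 1/4, so the survival probability is (3/4)^n: the simulated estimates decay geometrically in n, matching the Ac^n bound in the proof.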

We finish with an open question:
Is there a set A ⊂ Z 2 such that both A and A c are infinite and almost surely there are exceptional times in which every element of A is hit finitely often and every element of A c is hit infinitely often?