Convergence results and sharp estimates for the voter model interfaces

We study the evolution of the interface for the one-dimensional voter model. We show that if the random walk kernel associated with the voter model has finite γth moment for some γ > 3, then the evolution of the interface boundaries converges weakly to a Brownian motion under diffusive scaling. This extends recent work of Newman, Ravishankar and Sun. Our result is optimal in the sense that a finite γth moment is necessary for this convergence for all γ ∈ (0, 3). We also obtain relatively sharp estimates for the tail distribution of the size of the equilibrium interface, extending earlier results of Cox and Durrett, and of Belhaouari, Mountford and Valle.


Introduction
In this article we consider the one-dimensional voter model specified by a random walk transition kernel q(·, ·). It is an interacting particle system with configuration space Ω = {0, 1}^Z, formally described by the generator G acting on local functions F : Ω → R (i.e., F depends on only a finite number of coordinates of Z):

GF(η) = Σ_{x,y∈Z} q(x, y) 1{η(x) ≠ η(y)} [F(η^x) − F(η)],

where η^x denotes the configuration η with the coordinate at x flipped. By a result of Liggett (see [7]), G is the generator of a Feller process (η_t)_{t≥0} on Ω. In this paper we will also impose the following conditions on the transition kernel q(·, ·): (i) q(·, ·) is translation invariant, i.e., there exists a probability kernel p(·) on Z such that q(x, y) = p(y − x) for all x, y ∈ Z; (ii) p(·) is irreducible; (iii) p(·) has finite γth moment, i.e., Σ_{x∈Z} |x|^γ p(x) < ∞.
Later on we will fix the values of γ according to the results we aim to prove. We also denote by µ the first moment of p(·), µ := Σ_{x∈Z} x p(x), which exists by (iii).
Let η^{1,0} be the Heaviside configuration on Ω, i.e., the configuration with η^{1,0}(x) = 1 for x ≤ 0 and η^{1,0}(x) = 0 for x ≥ 1, and consider the voter model (η_t)_{t≥0} starting at η^{1,0}. For each time t > 0, let r_t = sup{x : η_t(x) = 1} and l_t = inf{x : η_t(x) = 0}, which are respectively the positions of the rightmost 1 and the leftmost 0. We call the voter model configuration between the coordinates l_t and r_t the voter model interface, and r_t − l_t + 1 is the interface size. Note that condition (iii) on the probability kernel p(·) implies that the interfaces are almost surely finite for all t ≥ 0 and thus well defined. To see this, first observe that the rate at which the interface size increases is bounded above by Σ_{x≤0<y} [p(y − x) + p(x − y)], which is finite since p(·) has a finite first moment. Moreover, this is the rate at which the system initially changes if it starts at η^{1,0}.
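The definitions of r_t, l_t and the interface can be illustrated with a toy simulation. The sketch below is not from the paper: it uses an assumed finite-range kernel (jumps of size 1 or 2, so all moments are finite), a finite window, and boundary sites pinned to 1 and 0 so that a rightmost 1 and a leftmost 0 always exist.

```python
import random

def step(eta, rng):
    """One update of a toy voter model on a finite window: a uniformly
    chosen interior site adopts the opinion of a site drawn from a
    finite-range kernel.  The two endpoint sites are never updated,
    pinning eta[0] = 1 and eta[-1] = 0."""
    L = len(eta)
    x = rng.randrange(1, L - 1)           # interior site only
    y = x + rng.choice([-2, -1, 1, 2])    # finite-range kernel (illustrative choice)
    if 0 <= y < L:
        eta[x] = eta[y]

def interface(eta):
    """Return (r, l): positions of the rightmost 1 and the leftmost 0."""
    r = max(i for i, v in enumerate(eta) if v == 1)
    l = min(i for i, v in enumerate(eta) if v == 0)
    return r, l

rng = random.Random(0)
L = 200
eta = [1] * (L // 2) + [0] * (L // 2)     # Heaviside configuration
for _ in range(5000):
    step(eta, rng)
r, l = interface(eta)
```

Since the site just to the right of the rightmost 1 carries a 0, the deterministic inequality r ≥ l − 1 (used later in the paper) holds in every configuration produced this way.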
When γ ≥ 2, Belhaouari, Mountford and Valle [1] proved that the interface is tight, i.e., the random variables (r_t − l_t)_{t≥0} are tight. This extends earlier work of Cox and Durrett [4], which showed the tightness result when γ ≥ 3. Belhaouari, Mountford and Valle also showed that, if Σ_{x∈Z} |x|^γ p(x) = ∞ for some γ ∈ (0, 2), then the tightness result fails. Thus a finite second moment is, in some sense, optimal. Note that the tightness of the interface is a feature of the one-dimensional model. For voter models in dimension two or more, the so-called hybrid zone grows as √t, as was shown in [4].
In this paper we examine two questions for the voter model interface: the evolution of the interface boundaries, and the tail behavior of the equilibrium distribution of the interface, which is known to exist whenever the interface is tight. The third moment will turn out to be critical in both cases.
From now on we will assume that p(·) is symmetric, and in particular µ = 0. This is by no means a restriction on our results, since the general case can be recovered by subtracting the drift and working with the symmetric part of p(·). The first question arises from the observation of Cox and Durrett [4] that, if (r_t − l_t)_{t≥0} is tight, then the finite-dimensional distributions of (r_{tN²}/N)_{t≥0} converge to those of a Brownian motion under diffusive scaling.

Theorem 1.1. (i) If Σ_{x∈Z} |x|^γ p(x) < ∞ for some γ > 3, then (r_{tN²}/(σN))_{t≥0} and (l_{tN²}/(σN))_{t≥0} converge weakly in D([0, +∞), R) to a standard Brownian motion.

(ii) For (r_{tN²}/N)_{t≥0}, resp. (l_{tN²}/N)_{t≥0}, to converge to a Brownian motion, it is necessary that Σ_{x∈Z} |x|³ log^{−β}(|x| ∨ 2) p(x) < ∞ for all β > 1.
In particular, if for some 1 ≤ γ' < 3 we have Σ_x |x|^{γ'} p(x) = ∞, then {(r_{tN²}/N)_{t≥0}}_{N≥1} is not a tight family in D([0, +∞), R), and hence cannot converge in distribution to a Brownian motion.

Remark 1. Theorem 1.1(i) extends a recent result of Newman, Ravishankar and Sun [9], in which they obtained the same result for γ ≥ 5 as a corollary of the convergence of systems of coalescing random walks to the so-called Brownian web under a finite fifth moment assumption. The difficulty in establishing Theorem 1.1(i) and the convergence of coalescing random walks to the Brownian web lies in both cases in tightness. In fact, the tightness conditions for the two convergences are essentially equivalent. Consequently, we can improve the convergence of coalescing random walks to the Brownian web from a finite fifth moment assumption to a finite γth moment assumption for any γ > 3. We formulate this as a theorem.

Theorem 1.2. Let X₁ denote the random set of continuous-time rate-1 coalescing random walk paths with one walker starting from every point on the space-time lattice Z × R, where the random walk increments all have distribution p(·). Let X_δ denote X₁ diffusively rescaled, i.e., scale space by δ/σ and time by δ². If γ > 3, then in the topology of the Brownian web [9], X_δ converges weakly to the standard Brownian web W as δ → 0. A necessary condition for this convergence is again Σ_{x∈Z} |x|³ log^{−β}(|x| ∨ 2) p(x) < ∞ for all β > 1.

It should be noted that the failure of convergence to a Brownian motion does not preclude the existence of N_i ↑ ∞ such that (r_{tN_i²}/N_i)_{t≥0} converges to a Brownian motion. Loss of tightness is due to "unreasonably" large jumps. Theorem 1.3 below shows that, when 2 < γ < 3, tightness can be restored by suppressing rare large jumps near the voter model interface, and again we have convergence of the boundary of the voter model interface to a Brownian motion.
Before stating Theorem 1.3, we fix some notation and recall the usual construction of the voter model through the Harris system. Let {N^{x,y}}_{x,y∈Z} be independent Poisson point processes, N^{x,y} with intensity p(y − x) for each x, y ∈ Z.
From an initial configuration η₀ in Ω, we set, at each time t ∈ N^{x,y}, η_t(x) = η_{t−}(y), i.e., the voter at x adopts the opinion of the voter at y. From the same Poisson point processes, we construct the system of coalescing random walks as follows. We can think of the Poisson points in N^{x,y} as marks at site x occurring at the Poisson times. For each space-time point (x, t) we start a random walk X^{x,t} evolving backward in time such that, whenever the walk hits a mark in N^{u,v} (i.e., for s ∈ (0, t), (t − s) ∈ N^{u,v} and u = X^{x,t}_s), it jumps from site u to site v. When two such random walks meet, which occurs because one walk jumps on top of the other walk, they coalesce into a single random walk starting from the space-time point where they first met. We denote by ζ_s the Markov process which describes the positions of the coalescing particles at time s. If ζ_s starts at time t with one particle from every site of A for some A ⊂ Z, then we use the notation ζ^t_s(A), where the superscript is the time in the voter model when the walks first started, and the subscript is the time elapsed for the coalescing random walks. It is well known that ζ_t is the dual process of η_t (see Liggett's book [7]), and the duality relation is obtained directly from the Harris construction.

Theorem 1.3. Take 2 < γ < 3 and fix 0 < θ < (γ − 2)/γ. For N ≥ 1, let (η^N_t)_{t≥0} be the voter model constructed from the same Harris system and also starting from η^{1,0}, except that a flip from 0 to 1 at a site x at time t is suppressed if it results from the "influence" of a site y with |y − x| ≥ N^{1−θ}. Then:

(i) (r^N_{tN²}/N)_{t≥0} converges in distribution to a σ-speed Brownian motion, with σ defined in (1.2);

(ii) the fraction of time in [0, T N²] during which r_t ≠ r^N_t tends to 0 in probability for all T > 0.
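The dual system can be illustrated with a simplified sketch, which is not the paper's construction: instead of backward walks read off the Harris marks, it runs forward-time asynchronous coalescing walks with an assumed nearest-neighbor kernel. What it does show faithfully is the key monotonicity exploited later (e.g., in Proposition 5.4): coalescence can only reduce the number of walkers.

```python
import random

def coalescing_counts(starts, steps, rng):
    """Asynchronous coalescing random walks on Z: at each step a uniformly
    chosen walker makes a +/-1 jump; a walker landing on an occupied site
    merges with its occupant.  Returns the walker count after every step."""
    positions = set(starts)
    counts = [len(positions)]
    for _ in range(steps):
        x = rng.choice(sorted(positions))
        positions.discard(x)
        positions.add(x + rng.choice([-1, 1]))  # set membership merges walkers
        counts.append(len(positions))
    return counts

rng = random.Random(1)
counts = coalescing_counts(range(10), 500, rng)
```

Each step moves one walker, so the count either stays the same or drops by one; the sequence `counts` is non-increasing.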

Remark 2.
There is no novelty in claiming that for (r_{tN²}/N)_{t≥0} there is a sequence of processes (γ^N_t)_{t≥0} which converges in distribution to a Brownian motion, such that with probability tending to 1 as N tends to infinity, γ^N_t is close to r_{tN²}/N most of the time. The value of the previous result is in the fact that there is a very natural candidate for such a process. Thus the main interest of Theorem 1.3 lies in the lower bound θ > 0: by truncating jumps of size at least N^{1−θ} for some fixed θ > 0, the tightness of the interface boundary evolution {(r^N_{tN²}/N)_{t≥0}}_{N≥1} is restored. The upper bound θ < (γ − 2)/γ simply says that with higher moments we can truncate more jumps without affecting the limiting distribution.

Let {Θ_x : Ω → Ω, x ∈ Z} be the group of translations on Ω, i.e., (η ∘ Θ_x)(y) = η(y + x) for every x, y ∈ Z and η ∈ Ω. The second question we address concerns the equilibrium distribution of the voter model interface (η_t ∘ Θ_{l_t})_{t≥0}, when such an equilibrium exists. Cox and Durrett [4] observed that (η_t ∘ Θ_{l_t}|_N)_{t≥0}, the configuration of η_t ∘ Θ_{l_t} restricted to the positive coordinates, evolves as an irreducible Markov chain with countable state space Ω̃. Therefore a unique equilibrium distribution π exists for (η_t ∘ Θ_{l_t}|_N)_{t≥0} if and only if it is a positive recurrent Markov chain. Cox and Durrett proved that, when the probability kernel p(·) has finite third moment, (η_t ∘ Θ_{l_t}|_N)_{t≥0} is indeed positive recurrent and a unique equilibrium π exists. Belhaouari, Mountford and Valle [1] recently extended this result to kernels p(·) with finite second moment, which was shown to be optimal.
Cox and Durrett also noted that if the equilibrium distribution π exists, then, excluding the trivial nearest-neighbor case, the equilibrium has E_π[Γ] = ∞, where Γ = Γ(ξ) = sup{x : ξ(x) = 1} for ξ ∈ Ω̃ is the interface size. In fact, as we will see, under a finite second moment assumption on the probability kernel p(·), there exists a constant C = C_p ∈ (0, ∞) such that π{ξ : Γ(ξ) ≥ M} ≥ C/M for all M sufficiently large, extending Theorem 6 of Cox and Durrett [4]. Furthermore, we show that M^{−1} is the correct order for π{η : Γ(η) ≥ M} as M tends to infinity if p(·) possesses a moment strictly higher than 3, but not necessarily so if p(·) fails to have some moment strictly less than 3.
This paper is organized in the following way: Sections 2, 3 and 4 are respectively devoted to the proofs of Theorems 1.1 and 1.2, Theorem 1.3, and Theorem 1.4. We end with Section 5, which contains the statement and proof of some results needed in the previous sections.
2 Proof of Theorems 1.1 and 1.2

By standard results for convergence of distributions on the path space D([0, +∞), R) (see for instance Billingsley's book [3], Chapter 3), the convergence to the σ-speed Brownian motion in Theorem 1.1 is a consequence of the following results:

Lemma 2.1. If γ ≥ 2, then for every n ∈ N and 0 < t₁ < t₂ < ... < t_n in [0, ∞), the finite-dimensional distribution converges weakly to a centered n-dimensional Gaussian vector with covariance matrix equal to the identity. Moreover, the same holds if we replace r_t by l_t.
Proposition 2.2. If γ > 3, then for every ε > 0 and T > 0 the estimate (2.1) holds. In particular, if the finite-dimensional distributions of (r_{tN²}/N)_{t≥0} are tight, then the path distribution is also tight and every limit point is concentrated on continuous paths. The same holds if we replace r_t by l_t.
By Lemma 2.1 and Proposition 2.2 we obtain Theorem 1.1(i). Lemma 2.1 is a simple consequence of the Markov property, the observations of Cox and Durrett [4], and Theorem 2 of Belhaouari, Mountford and Valle [1], where it was shown that for γ ≥ 2 the distribution of r_{tN²}/(σN) converges to a standard normal random variable (see also Theorem 5 in Cox and Durrett [4], where the case γ ≥ 3 was initially considered).
We only carry out the proof of (2.1) for r_t, since the result of the proposition follows for l_t by interchanging the roles of the 0's and 1's in the voter model.
Note that by the right continuity of r_t, the event in (2.1) is included in the corresponding event along a discrete set of times. By the Markov property, the attractivity of the voter model and the tightness of the voter model interface, it then suffices to prove (2.3). To see this, note that r_t ≥ l_t − 1; thus (2.4) is a consequence of the analogous lim sup estimate for l_t, which is equivalent to (2.3) by interchanging the 0's and 1's in the voter model.
The proof of (2.3) is based on a chain argument for the dual process of coalescing random walks. We first observe that, by duality, (2.3) is equivalent to showing that for all ε > 0 the probability that some backward coalescing random walk starting in [M R, ∞) × [0, R²] hits the negative axis at time 0 is bounded above by a constant times the expression stated in Proposition 2.3, for some c > 0 and 0 < β < 1.

Proof:
The proof is based on a chain argument, which we first describe informally. Without loss of generality we fix M = 2^b. The event stated in the proposition is a union over k ≥ b of the events that some backward random walk starting from [2^k R, 2^{k+1} R] × [0, R²] hits the negative axis at time 0. Therefore it suffices to consider such events.

Figure 1: Illustration of the j-th step of the chain argument.
The first step is to discard the event that at least one of the backward coalescing random walks X^{x,s} starting in [2^k R, 2^{k+1} R] × [0, R²] has escaped from a small neighborhood around I_{k,R} before reaching the time level K₁⌊s/K₁⌋, where ⌊x⌋ = max{m ∈ Z : m ≤ x}. The constant K₁ will be chosen later. We call this small neighborhood around I_{k,R} the first-step interval, and the times {nK₁}_{0≤n≤⌊R²/K₁⌋} the first-step times. So after this first step we just have to consider the system of coalescing random walks starting on each site of the first-step interval at each of the first-step times.
In the second step of our argument, we let these particles evolve backward in time until they reach the second-step times {n(2K₁)}_{0≤n≤⌊R²/(2K₁)⌋}, i.e., if a walk starts at time lK₁, we let it evolve until time (l − 1)K₁ if l is odd, and until time (l − 2)K₁ if l is even. We then discard the event that either some of these particles have escaped from a small neighborhood around the first-step interval, which we call the second-step interval, or the density of the particles alive at each of the second-step times in the second-step interval has not been reduced by a fixed factor 0 < p < 1.
We now continue by induction. In the jth step (see Figure 1) we have particles starting from the (j − 1)th-step interval with density at most p^{j−2} at each of the (j − 1)th-step times. We let these particles evolve backward in time until they reach the jth-step times {n(2^{j−1}K₁)}_{0≤n≤⌊R²/(2^{j−1}K₁)⌋}. We then discard the event that either some of these particles have escaped from a small neighborhood around the (j − 1)th-step interval, which we call the jth-step interval, or the density of the particles alive at each of the jth-step times in the jth-step interval has not been reduced below p^{j−1}.
We repeat this procedure until the Jth step, with J of order log R, when the only Jth-step time left in [0, R²] is 0. The factor p will be chosen such that at the Jth step, the number of particles alive at time 0 is of the order of a constant which is uniformly bounded in R but which still depends on k. The Jth-step interval will be chosen to be contained in [0, 3 · 2^k R].
We now give the details. In our approach the factor p is taken to be 2^{−1/2}. The constant K₁ = 7K₀, where K₀ is the constant satisfying Proposition 5.4, which is necessary to guarantee the reduction in the number of particles. Note that K₁ is independent of k and R. The jth-step interval is obtained from the (j − 1)th-step interval by adding intervals of length β^R_j 2^k R on each side, where J_R := min{j : 2^{j−1}K₁ > R²} is taken to be the last step in the chain argument. Here ⌈x⌉ = min{m ∈ Z : m ≥ x}. We have chosen J_R because it is the step at which 2^{J_R − 1}K₁ first exceeds R², so that the only J_Rth-step time in [0, R²] is 0. With our choice of the β^R_j, the J_Rth-step interval lies within [0, 3 · 2^k R], and except for the events we discard, no random walk reaches level 0 before time 0.
Let us fix γ = 3 + ε in Theorem 1.1. The first step in the chain argument described above is carried out by noting that the event we reject forces some walk to make a large excursion within a time interval of length at most K₁, so that its probability is bounded as required for R sufficiently large. Therefore, for each k ≥ b, instead of considering all the coalescing random walks starting from [2^k R, 2^{k+1} R] × [0, R²], we just have to consider coalescing random walks starting from the first-step interval at the first-step times. By this observation, we only need to bound the probability of the event A_{k,R}. We start by defining events which will allow us to write A_{k,R} in a convenient way. For n₁ := n ∈ N and for each 1 ≤ j ≤ J_R − 1, define recursively n_{j+1} as the label of the next (j + 1)th-step time reached after the (j + 1)th step. For a random walk starting at time nK₁ in the dual voter model, n_j K₁ is its time coordinate after the jth step of our chain argument. Then define the event W^{k,R}_j: in words, W^{k,R}_j is the event that in the (j + 1)th step of the chain argument, some random walk starting from a jth-step time makes an excursion of size β^R_{j+1} 2^k R before it reaches the next (j + 1)th-step time. Similarly define U^{k,R}_j: in words, U^{k,R}_j is the event that after the (j + 1)th step of the chain argument, the density of particles in the (j + 1)th-step interval at some of the (j + 1)th-step times exceeds 2^{−j/2}. The chain argument simply comes from the decomposition of A_{k,R} into the events in (2.8) and (2.9), whose probabilities we now estimate.
We start with (2.9). It is clear from the definitions that the events U^{k,R}_i were introduced to obtain the appropriate reduction of the density of random walks at each step of the chain argument.
The event U^{k,R}_j, on the complement of the previously discarded events, implies the existence of jth-step times t₁ = (2m + 1)2^{j−1}K₁ and t₂ = (2m + 2)2^{j−1}K₁ such that, after the jth step of the chain argument, the walks at times t₁ and t₂ are inside the jth-step interval with density at most 2^{−(j−1)/2}, and in the (j + 1)th step these walks stay within the (j + 1)th-step interval until the (j + 1)th-step time t₀ = m2^j K₁, when the density of remaining walks in the (j + 1)th-step interval exceeds 2^{−j/2}. We estimate the probability of this last event by applying Proposition 5.4 three times with p = 2^{−1/2} and L equal to the size of the (j + 1)th-step interval, which we denote by L^{k,R}_{j+1}. We may suppose that at most 2^{−(j−1)/2} L^{k,R}_{j+1} random walks are leaving from each of the times t₁ and t₂. We let both sets of walks evolve for a dual time interval of length 7^{−1} · 2^{j−1}K₁ = 2^{j−1}K₀. By applying Proposition 5.4 with γ = 2^{−(j−1)/2}, the density of particles starting at times t₁ or t₂ is reduced by a factor of 2^{−1/2} with large probability. Now we let the particles evolve further for a time interval of length 2^j K₀. Applying Proposition 5.4 with γ = 2^{−j/2}, the density of remaining particles is reduced by another factor of 2^{−1/2} with large probability. By a last application of Proposition 5.4 over another time interval of length 2^{j+1}K₀ with γ = 2^{−(j+1)/2}, we obtain that the total density of random walks originating from the jth-step time t₁ (resp. t₂) remaining at time t₀ (resp. t₁) has been reduced by a factor 2^{−3/2}. Finally we let the random walks remaining at time t₁ evolve until the (j + 1)th-step time t₀, at which time the density of random walks has been reduced by a factor 2 · 2^{−3/2} = 2^{−1/2} with large probability. By a decomposition similar to (2.8) and (2.9) and using the Markov property, we can assume that before each application of Proposition 5.4, the random walks are all confined within the (j + 1)th-step interval.
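The density bookkeeping behind these three applications of Proposition 5.4 can be summarized in one display (this is simply a recombination of the factors stated above, not a new estimate): two sets of walks, from t₁ and t₂, each of density at most 2^{−(j−1)/2}, are each reduced by three factors of 2^{−1/2} before being pooled at t₀,

```latex
\underbrace{2}_{\text{walks from } t_1 \text{ and } t_2}
\cdot\, 2^{-\frac{j-1}{2}}
\cdot \underbrace{\bigl(2^{-\frac{1}{2}}\bigr)^{3}}_{\text{three applications of Prop.\ 5.4}}
\;=\; 2 \cdot 2^{-\frac{3}{2}} \cdot 2^{-\frac{j-1}{2}}
\;=\; 2^{-\frac{j}{2}},
```

which is exactly the passage from density at most 2^{−(j−1)/2} at the jth-step times to density at most 2^{−j/2} at the (j + 1)th-step time required to bound the probability of U^{k,R}_j.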
All the events described above hold with large probability, and it is simple to verify that the resulting failure probability, summed over the steps of the chain, is bounded above by the corresponding term in Proposition 2.3. Now we estimate the probability of the event in (2.8). For every j = 1, ..., J_R − 1, on the event W^{k,R}_j, the random walks are contained in the jth-step interval with density at most 2^{−(j−1)/2}, and some of these walks move by more than β^R_{j+1} 2^k R in a time interval of length 2^j K₁. If X_t denotes a random walk with transition kernel q(x, y) = p(y − x) starting at 0, then the probability of the above event is bounded by (2.10) times (2.11), where (2.11) bounds the number of walks we are considering. By Lemma 5.1, the probability in (2.10) is dominated by a constant times a Gaussian term plus a correction term. Multiplying by (2.11) and summing over 1 ≤ j ≤ J_R, we obtain by straightforward computations that, if R is sufficiently large, then there exist constants c > 0 and C > 1 such that the probability of the event in (2.8) is bounded above by a constant times the remaining terms in Proposition 2.3. Adjusting the terms in the last expression, we complete the proof of the proposition.
Proof of (ii) in Theorem 1.1: For the rescaled voter model interface boundaries (r_{tN²}/N)_{t≥0} to converge to a σ-speed Brownian motion, it is necessary that the boundaries cannot wander too far within a small period of time, i.e., we must have (2.13). In terms of the dual system of coalescing random walks, this is equivalent to (2.14) and the same statement for its mirror event. If some random walk originating from the region [εσN, ∞) × [0, tN²] jumps across level 0 in one step (which we denote as the event D_N(ε, t)), then with probability at least α, for some α > 0 depending only on the random walk kernel p(·), that random walk will land on the negative axis at time 0 (in the dual voter model). Thus (2.14) implies a bound on lim sup_N P(D_N(ε, t)). In particular, introducing the discrete gradient and Laplacian of the auxiliary function H, we have for k ≥ k₀, for some k₀ ∈ Z₊, by summation by parts, that Σ_{x∈Z} |x|³ log^{−β}(|x| ∨ 2) p(x) < ∞ for all β > 1. This concludes the proof.
We end this section with the Proof of Theorem 1.2: In [5, 6], the standard Brownian web W is defined as a random variable taking values in the space of compact sets of paths (see [5, 6] for more details); it is essentially a system of one-dimensional coalescing Brownian motions with one Brownian path starting from every space-time point. In [9], it was shown that under diffusive scaling, the random set of coalescing random walk paths with one walker starting from every point on the space-time lattice Z × Z converges to W in the topology of the Brownian web (the details for the continuous-time walks case are given in [11]), provided that the random walk jump kernel p(·) has finite fifth moment. To improve their result from finite fifth moment to finite γth moment for any γ > 3, we only need to verify the tightness criterion (T₁) formulated in [9]; the other convergence criteria require either only finite second moment or tightness.
Recall the tightness criterion (T₁) in [9], where A_{t,u}(x₀, t₀) is the event that (see Figure 2) the random set of coalescing walk paths contains a path touching both R(x₀, t₀; u, t) and (at a later time) the left or right boundary of the bigger rectangle R(x₀, t₀; 2u, 2t).

Figure 2: Illustration of the event A_{t,u}(x₀, t₀).

In [9], in order to guarantee the continuity of paths, the random walk paths are taken to be the interpolation between consecutive space-time points where jumps take place. Thus the contribution to the event A_{t,u}(x₀, t₀) is either due to interpolated line segments intersecting the inner rectangle R(x₀, t₀; u, t) and then not landing inside the intermediate rectangle R(x₀, t₀; 3u/2, 2t), which can be shown to have probability 0 in the limit δ → 0 if p(·) has finite third moment; or it is due to some random walk originating from inside R(x₀, t₀; 3u/2, 2t) which then reaches either level −2u or 2u before time 2t. In terms of the unscaled random walk paths, and noting the symmetry between the left and right boundaries, condition (T₁) reduces to an estimate which, by the reflection principle for random walks, is further implied by a bound that is a direct consequence of Proposition 2.3. This establishes the first part of Theorem 1.2.
It is easily seen that the tightness of {X_δ} imposes certain equicontinuity conditions on the random walk paths; the condition in (2.15) and its mirror statement are therefore also necessary for the tightness of {X_δ}, and hence for the convergence of X_δ (with δ = 1/N) to the standard Brownian web W. Therefore, we must also have Σ_{x∈Z} |x|³ log^{−β}(|x| ∨ 2) p(x) < ∞ for all β > 1.

3 Proof of Theorem 1.3
In this section we assume that 2 < γ < 3 and we fix 0 < θ < (γ − 2)/γ. We recall the definition of (η^N_t)_{t≥0} on Ω. The evolution of this process is described by the same Harris system on which we constructed (η_t)_{t≥0}, i.e., the family of Poisson point processes {N^{x,y}}_{x,y∈Z}, except that if t ∈ N^{x,y} ∪ N^{y,x} for some y > x with y − x ≥ N^{1−θ} and [x, y] ∩ [r^N_{t−} − N, r^N_{t−}] ≠ ∅, then a flip from 0 to 1 at x or y, if it should occur, is suppressed. We also let (η^N_t)_{t≥0} start from the Heaviside configuration η^{1,0}, and we recall that r^N_t denotes the position of its rightmost 1.
Since (η_t)_{t≥0} and (η^N_t)_{t≥0} are generated by the same Harris system and start from the same configuration, it is natural to believe that r^N_t = r_t for "most" 0 ≤ t ≤ N² with high probability. To see this we use the additive structure of the voter model to show (ii) in Theorem 1.3.
For a fixed realization of the process (η^N_t)_{t≥0}, we denote by t₁ < ... < t_k the times of the suppressed flips in the time interval [0, T N²] and by x₁, ..., x_k the target sites, i.e., the sites where the suppressed flips would have occurred. Now let (η^{t_i,x_i}_t)_{t≥t_i} be voter models constructed on the same Harris system, starting at time t_i with a single 1 at site x_i. As usual we denote by r^{t_i,x_i}_t, t ≥ t_i, the position of the rightmost 1. It is straightforward to verify that r_t is dominated by the maximum of r^N_t and the r^{t_i,x_i}_t. The random set of times {t_i} is a Poisson point process on [0, N²] whose rate is bounded above by 2 Σ_{x∈Z} |x|^α p(x) / N^{(1−θ)α−1} for every α > 1. Therefore, if we take α = γ, then by the choice of θ and the assumption that the γth moment of the transition probability is finite, the rate decreases as N^{−(1+ε)} for ε = (1 − θ)γ − 2 > 0.

Lemma 3.1. Let {(t_i, x_i)}_{i∈N} with t₁ < t₂ < ··· denote the random set of space-time points in the Harris system where a flip is suppressed in (η^N_t)_{t≥0}. Let K = max{i ∈ N : t_i ≤ T N²}, let τ_i denote the lifetime of the voter model (η^{t_i,x_i}_t)_{t≥t_i}, and suppose for all i ∈ N that E[τ_i; τ_i ≤ N²] ≤ CN. Moreover, from these estimates we have that the total time in [0, T N²] during which r_t ≠ r^N_t is o(N²) in probability.
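The rate bound just quoted is a Markov-type inequality on the tail of p(·); schematically (with a factor of order N accounting for the sites within distance N of r^N_{t−} at which a flip can be suppressed, a reading we supply here rather than one spelled out in the text):

```latex
N \sum_{|y|\ge N^{1-\theta}} p(y)
\;\le\; N \sum_{|y|\ge N^{1-\theta}} \frac{|y|^{\alpha}}{N^{(1-\theta)\alpha}}\, p(y)
\;\le\; \frac{\sum_{x\in\mathbb{Z}} |x|^{\alpha}\, p(x)}{N^{(1-\theta)\alpha-1}},
```

and taking α = γ gives the exponent (1 − θ)γ − 1 = 1 + ε with ε = (1 − θ)γ − 2 > 0, consistent with the rate N^{−(1+ε)} stated above.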

Proof:
The proof is basically a corollary of Lemma 5.6, which gives that the lifetime τ of a single-particle voter model satisfies P[τ ≥ t] ≤ C/√t for some C > 0. Combined with the strong Markov property, this gives the first assertion in the lemma. The verification of E[τ_i; τ_i ≤ N²] ≤ CN is trivial. Now from the first two assertions in the lemma we easily obtain the third one.
Now, to complete the proof of (ii) in Theorem 1.3, the result follows from the previous lemma by usual estimates. For (i) we can adapt the proof of Theorem 1.1. As the next lemma shows, it suffices to consider the system of coalescing random walks with jumps of size greater than or equal to N^{1−θ} suppressed.

Proof:
Since (η^N_t)_{t≥0} starts from the Heaviside configuration, for a realization of the Harris system with sup_{0≤s≤δN²} r^N_s ≥ εN, by duality, in the same Harris system with the jumps that are discarded in the definition of (η^N_t)_{t≥0} suppressed, we can find a backward random walk which starts from some site (x, s) ∈ {Z ∩ [εN, +∞)} × [0, δN²] with η^N_s(x) = 1 and attains the left of the origin before reaching time 0. If by the time the walk first reaches the left of the origin it has made no jumps of size greater than or equal to N^{1−θ}, we are done; otherwise, when the first large jump occurs, the random walk must be to the right of the origin, and by the definition of η^N_t, either the jump does not induce a flip from 0 to 1, in which case we can ignore this large jump and continue tracing backward in time; or the rightmost 1 must be at least at distance N to the right of the position of the random walk before the jump, in which case, since ε < 1, at this time there is a dual random walk in Z ∩ [εN, +∞) which also attains the left of the origin before reaching time 0. Now either this second random walk makes no jump of size greater than or equal to N^{1−θ} before it reaches time 0, or we repeat the previous argument to find another random walk starting in {Z ∩ [εN, +∞)} × [0, δN²] which also attains the left of the origin before reaching time 0. For almost all realizations of the Harris system, the above procedure can only be iterated a finite number of times. The lemma then follows.

Lemma 3.2 reduces (3.1) to an analogous statement for a system of coalescing random walks with jumps larger than or equal to N^{1−θ} suppressed.
Take 0 < σ < θ. The estimate required here is the same as in the proof of Theorem 1.1, except that as we increase the index N, the random walk kernel also changes, and its (3 + ε)th moment increases as CN^{(1−θ+σ)}. Therefore it remains to correct the exponents in Proposition 2.3. Denote by ζ^N the system of coalescing random walks with jumps larger than or equal to N^{1−θ} suppressed, and recall that R = √δ N and M = ε/√δ in our argument. Then (3.1) follows once the corresponding probability for ζ^N is bounded above by a constant times the analogous expression, for some c > 0 and 0 < β < 1.
The only term that has changed from Proposition 2.3 is the first one, which arises from the application of Lemma 5.5. We have incorporated the fact that the (3 + ε)th moment of the random walk with large jumps suppressed grows as CN^{(1−θ+σ)}, and we have employed a tighter bound for the power of R than stated in Proposition 2.3. The other three terms remain unchanged, because the second term comes from the particle reduction argument derived from applications of Proposition 5.4, while the third and fourth terms come from the Gaussian correction in Lemma 5.1. The constants in these three terms depend only on the second moment of the truncated random walks, which is uniformly bounded. The verification of this last assertion requires extra care only in the case of the second term, due to the applications of Lemma 5.2. But if we go through the proof of Theorem T1 in Section 7 and Proposition P4 in Section 32 of [10], we see that in order to obtain uniformity in Lemma 5.2 for a family of random walks, we only need uniform bounds on the characteristic functions associated with the walks in the family, which are clearly satisfied by the family of random walks with suppressed jumps. This concludes the proof of Theorem 1.3.
4 Proof of Theorem 1.4

4.1 Proof of (i) in Theorem 1.4

We start by proving (i) in Theorem 1.4. Since (η_t ∘ Θ_{l_t}|_N)_{t≥0} is a positive recurrent Markov chain on Ω̃, by usual convergence results we only have to show that, starting from the Heaviside configuration, P(r_t − l_t ≥ M) ≥ C/M for every t and M sufficiently large, for some C > 0 independent of M and t. Now fix λ > 0; this last probability, which by tightness is bounded below by a constant multiple of the corresponding probability at time t − λM², can be estimated by duality for M sufficiently large. To estimate the last probability we first introduce some notation: let (X^{−M}_t)_{t≥0} and (X^M_t)_{t≥0} be two independent random walks starting respectively at −M and M at time 0, with transition probability p(·).
Let Z^M_t := X^M_t − X^{−M}_t. For every set A ⊂ Z, let τ_A be the stopping time inf{t ≥ 0 : Z^M_t ∈ A}; if A = {x}, we denote τ_A simply by τ_x. Then by duality and the Markov property, after translating the system so that the leftmost 0 is at the origin by time t − λM², we obtain the required lower bound. Part (i) of Theorem 1.4 then follows from the next result:

Lemma 4.1. If p(·) is a non-nearest-neighbor transition probability with zero mean and finite second moment, then we can take λ sufficiently large such that (4.1) holds for some C > 0 independent of M and for all M sufficiently large, where as before, for every x and y, (X^x_t)_{t≥0} and (X^y_t)_{t≥0} denote two independent random walks starting respectively at x and y with transition probability p(·).

To prove Lemma 4.1 we apply the following result:

Lemma 4.2. Let K ∈ N be fixed. For all l ∈ N sufficiently large, there exists some C > 0 such that the stated bound on the events A_s(M, k, x) holds for all s ≤ λM²/2, |x| < lM and 0 < k ≤ K, and M sufficiently large.

Proof of Lemma 4.1: The probability in (4.1) is bounded below by a sum which, by the strong Markov property, is greater than or equal to the sum over |x| ≤ lM and 1 ≤ k ≤ K of the corresponding terms, where l ∈ N is some fixed large constant. Now applying Lemma 4.2, we have that the probability in (4.1) is bounded below by C/M times the sum in (4.2). Thus to finish the proof we have to show that the sum over |x| ≤ lM and 1 ≤ k ≤ K in (4.2) is bounded below uniformly over M by some positive constant.
Let D = {(x, x + k) : 1 ≤ k ≤ K and |x| < lM}; then this last expression can be rewritten as (4.2). We claim that the second term can be bounded uniformly away from 1 for large M by taking K large. This follows from a standard result for random walks (see, e.g., Proposition 24.7 in [10]), which states that if a mean-zero random walk Z^M_t starting from 2M > 0 has finite second moment, then the overshoot Z^M_{τ_{Z⁻}} converges to a limiting probability distribution on Z⁻ as 2M → +∞; the distribution is concentrated at 0 only if the random walk is nearest-neighbor. Then, by Donsker's invariance principle, the first term can be made arbitrarily close to 1 uniformly over large M by taking λ large, and finally the last term can be made arbitrarily close to 0 uniformly over large M by taking l sufficiently large. With appropriate choices of K, λ and l, we can guarantee that (4.2) is bounded below by a positive constant uniformly for large M, which completes the proof of the lemma.
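The overshoot phenomenon invoked here (Proposition 24.7 in [10]) is easy to see in a toy computation. The kernels below are arbitrary illustrative choices, not the paper's p(·): a nearest-neighbor walk can only enter the negative half-line by landing exactly on −1 (a degenerate overshoot), while a spread-out mean-zero walk can land on −1 or −2.

```python
import random

def landing(jumps, start, rng):
    """Run a mean-zero walk from `start` until it first enters the
    negative half-line; return the landing site Z_tau < 0."""
    z = start
    while z >= 0:
        z += rng.choice(jumps)
    return z

rng = random.Random(2)
# spread-out mean-zero kernel: the overshoot is genuinely random
wide_landings = {landing([-2, -1, 1, 2], 4, rng) for _ in range(200)}
# nearest-neighbour kernel: the walk can only land exactly on -1
nn_landings = {landing([-1, 1], 4, rng) for _ in range(200)}
```

Since jumps have size at most 2, the spread-out walk always lands on −1 or −2; the nearest-neighbor walk always lands on −1, which is the degeneracy that forces the exclusion of the nearest-neighbor case in Lemma 4.1.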
It remains to prove Lemma 4.2.

Proof of Lemma 4.2:
By the Markov property the probability of A s (M, k, x) is greater than or equal to Since λM 2 /4 ≤ r(M, s) ≤ 3λM 2 /4, by Donsker's invariance principle the above quantity is uniformly bounded below by some C > 0 for M sufficiently large. This establishes Claim 1.

Proof of Claim 2:
We write the sum in (4.3) in a form which, by the definition of D_1, is greater than or equal to the sum of two terms. The first term is bounded below by C/M for some constant C > 0 depending only on K; this follows from Theorem B in the Appendix of [4], which states that the distribution of Z^k_{λM²}/M := (X^{x+k}_{λM²} − X^x_{λM²})/M, conditioned on τ_0 > λM², converges to a two-sided Rayleigh distribution. For the second term, we apply Lemma 2 in [1] and then Lemma 5.2 to dominate it, with C depending only on K. Since P( sup_{0≤t≤λM²/2} X^0_t > lM ) can be made arbitrarily small, uniformly for large M, if l is sufficiently large, and since 1 ≤ k ≤ K, we obtain the desired uniform bound in Claim 2.
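A sketch of why the first term is of order 1/M (the Rayleigh limit is as in Theorem B of [4]; the meeting-time lower bound is the classical finite-variance estimate, stated here as an assumption):

```latex
% Conditioned on no meeting up to time lambda M^2, the rescaled displacement
% Z^k_{\lambda M^2}/M has a nondegenerate (two-sided Rayleigh) limit density f, so
\[
  P\Bigl( \tfrac{1}{M} Z^k_{\lambda M^2} \in A \,\Bigm|\, \tau_0 > \lambda M^2 \Bigr)
  \ \longrightarrow\ \int_A f(z)\,dz \;>\; 0
  \qquad \text{for open sets } A .
\]
% Combined with the survival estimate for two independent finite-variance walks
% started k apart (1 <= k <= K),
\[
  P\bigl(\tau_0 > \lambda M^2\bigr) \;\ge\; \frac{c\,k}{M} \;\ge\; \frac{c}{M},
\]
% the unconditioned probability is bounded below by C/M with C = C(K).
```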

Proof of (ii) in Theorem 1.4
We still consider the voter model (η_t : t ≥ 0) starting from the initial Heavyside configuration. Under the assumption γ > 3, P(r_t − l_t ≥ M) converges to π(ξ : Γ(ξ) ≥ M) as t → +∞. Therefore, to prove Theorem 1.4 (ii), it suffices to show that, for every M > 0, if t is sufficiently large, then P(r_t − l_t ≥ M) ≤ C/M for some C > 0 independent of M and t.
We now fix N ∈ N and assume M = 2^N; this is no restriction on the result, since 2^N ≤ M < 2^{N+1} for some N ∈ N and the inequality (1.4) remains valid upon replacing C_2 with 2C_2. In what follows, t will be much larger than 2^{2N}. For s < t, let ∆_t(s) be the event that a crossing of two dual coalescing random walks starting at time t (in the voter model) occurs in the dual time interval (s, t], and that by the dual time t they are on opposite sides of the origin, i.e., there exist u, v ∈ Z with X^{u,t}_s < X^{v,t}_s and X^{v,t}_t ≤ 0 < X^{u,t}_t. From the estimates in the proof of Lemma 5 in Cox and Durrett [4], one can show that P(∆_t(s)) ≤ C/√s provided P(0 ∈ ζ^s_s(Z)) ≤ C/√s, which holds if p(·) has finite second moment (see Lemma 5.6). Therefore, it remains to prove the corresponding bound for some C independent of t and N. We denote the event {r_t − l_t ≥ 2^N} ∩ (∆_t(4^N))^c by V_N, which is a subset of ∪_{k=0}^N V^N_k, where V^N_k is the event that (see Figure 3) there exist x, y ∈ Z with y − x ≥ 2^N such that, for the coalescing walks X^{x,t}_s and X^{y,t}_s: (i) X^{x,t}_s < X^{y,t}_s for every 0 ≤ s ≤ 4^{k−1}; (ii) there exists s ∈ (4^{k−1}, 4^k] with X^{x,t}_s > X^{y,t}_s; (iii) X^{x,t}_t > 0 and X^{y,t}_t ≤ 0.
For k = 0 we replace 4^{k−1} by 0. We will obtain suitable bounds on the P(V^N_k) which will enable us to conclude by summing over 0 ≤ k ≤ N. Denote by R_y(s) the range of the coalescing random walk at (s, y) ∈ (0, t] × Z. Obviously V^N_k is contained in the event Ṽ^N_k that there exist x, y ∈ ζ^t_{4^{k−1}}(Z) with x < y such that: (i) R_x(4^{k−1}) + R_y(4^{k−1}) + |y − x| ≥ 2^N; (ii) there exists s ∈ (4^{k−1}, 4^k] at which the two walks cross. We call a crossing between two coalescing random walks a relevant crossing if it satisfies conditions (i) and (ii) in the definition of Ṽ^N_k up to the time of the crossing. We are interested in the density of relevant crossings between random walks in the time interval (4^{k−1}, 4^k] and, equally importantly, in the size of the overshoot, i.e., the distance between the random walks just after crossing. To begin, we consider separately three cases: (i) The random walks at time 4^{k−1} are at x < y with |x − y| ≤ 2^{k−1} (so it is "reasonable" to expect the random walks to cross in the time interval (4^{k−1}, 4^k], and either R_x(4^{k−1}) or R_y(4^{k−1}) must exceed 2^{N−2}).
(ii) The random walks are separated at time 4 k−1 by at least 2 k−1 but no more than 2 N −1 (so either R x (4 k−1 ) or R y (4 k−1 ) must exceed 2 N −2 ).
(iii) The random walks are separated at time 4 k−1 by at least 2 N −1 . In this case we disregard the size of the range.
Before dealing specifically with each case, we consider estimates on the density of particles in ζ^t_{4^k}(Z) with range greater than m2^k. We first consider the density of random walks at time 4^k which move by more than m2^k in the time interval (4^k, 4^{k+1}]. By Lemma 5.6, the density of particles in ζ^t_{4^k}(Z) is bounded by C/2^k. By the Markov property and Lemma 5.1, we obtain the following result:

Lemma 4.3. For every 0 < β < 1, there exist c, C ∈ (0, ∞) such that for every k ∈ N and m ≥ 1, the probability that a fixed site y ∈ Z satisfies y ∈ ζ^t_{4^k}(Z), and the backward random walk starting at (y, t − 4^k) makes an excursion of size at least m2^k before reaching time level t − 4^{k+1}, is bounded by the displayed quantity.

As a corollary, we have:

Lemma 4.4. For every 0 < β < 1, there exist c, C ∈ (0, ∞) such that for every k ∈ N and m ≥ 1, the density of y ∈ ζ^t_{4^k}(Z) whose range is greater than m2^k is bounded by the displayed quantity.

Proof:
Let d_{l,k} be the density of coalescing random walks remaining at time 4^l which on the interval (4^l, 4^{l+1}] move by more than the prescribed amount; by Lemma 4.3, d_{l,k} is bounded above accordingly. It is not difficult to see that Σ_{l<k} d_{l,k} provides an upper bound for the density we seek, and summing the above bounds for d_{l,k} establishes the lemma.

We can now estimate the relevant crossing densities and overshoot sizes in cases (i), (ii) and (iii) above. More precisely, we will estimate the expectation of the overshoot between two random walks starting at x < y at time 4^{k−1}, restricted to the event that x, y ∈ ζ^t_{4^{k−1}}(Z), that R_x and R_y are compatible with y − x as stated in cases (i)–(iii), and that the two walks cross before time 4^k. From now on, we fix β ∈ (0, 1).
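The summation over l in the proof of Lemma 4.4 can be sketched as follows, under an assumed form of the Lemma 4.3 bound (the explicit exponent below is our assumption, chosen to match the shape suggested by Lemmas 4.3 and 5.1; only the summation mechanism matters):

```latex
% Assume d_{l,k} <= C 2^{-l} exp( -c (m 2^{k-l})^{1-\beta} ) for l < k. Writing j = k - l,
\[
  \sum_{l<k} d_{l,k}
  \;\le\; \frac{C}{2^{k}} \sum_{j\ge 1} 2^{\,j}\, e^{-c\,(m 2^{j})^{1-\beta}} .
\]
% Since m >= 1, we have (m 2^j)^{1-\beta} >= m^{1-\beta} + (2^{j(1-\beta)} - 1), so each
% term is at most 2^j e^{-c(2^{j(1-\beta)}-1)} e^{-c m^{1-\beta}}, a summable series in j.
% Hence
\[
  \sum_{l<k} d_{l,k} \;\le\; \frac{C'}{2^{k}}\, e^{-c\, m^{1-\beta}} ,
\]
% i.e. the sum is dominated by the l = k-1 term.
```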
Case (i): Whenever the events {x ∈ ζ^t_{4^{k−1}}(Z)} and {y ∈ ζ^t_{4^{k−1}}(Z)} both occur, they occur on disjoint trajectories of random walks in the dual time interval [0, 4^{k−1}], so we may apply the van den Berg-Kesten-Reimer inequality (see Lemma 4 in [2] and the discussion therein), which together with the previous lemma implies that the probability that x, y ∈ ζ^t_{4^{k−1}}(Z) and at least one of them has range exceeding 2^{N−2} is less than the stated bound. Moreover, the expectation of the overshoot at the time of crossing (see [4]) is uniformly bounded over k and y − x.
Case (ii): In this case we must also take into account that the probability of the two random walks crossing before time 4 k is small. We analyze this by dividing up the crossing into two cases. In the first case the two random walks halve the distance between them before crossing. In the second case the crossing occurs due to a jump of order y − x.
Then, as in Case (i), the corresponding expectation is uniformly bounded by some constant C > 0. On the other hand, it is easily seen (by estimating the rate at which a large jump occurs; see Section 3 for details) that the probability of a crossing due to a single large jump is small, and so we obtain the corresponding contribution.
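The large-jump estimate used in the second subcase is elementary; a hedged sketch (γ and the finite γ-th moment are from the standing assumptions; constants are generic):

```latex
% Each walk jumps at rate 1, and a crossing "in one jump" from distance d = y - x,
% without the walks first halving their distance, requires a single increment of size
% at least d/2. Over a time interval of length 4^k, the expected number of such
% increments is at most
\[
  2 \cdot 4^{k} \sum_{|z| \ge d/2} p(z)
  \;\le\; C\, 4^{k}\, d^{-\gamma} \sum_{z\in\mathbb{Z}} |z|^{\gamma}\, p(z),
\]
% by Markov's inequality applied to the tail of p(.); so the probability of such a
% crossing is at most C 4^k d^{-\gamma}.
```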

Case (iii):
In this case we argue as in (ii), except that the exponential factor from Lemma 4.3 is dropped, as we make no assumption on the size of R_x or R_y; this yields the corresponding bound. From the three cases above, we can sum over y ∈ Z and verify that, for a given site x ∈ Z, the total expected overshoot associated with relevant crossings in the time interval (4^{k−1}, 4^k] involving (x, 4^{k−1}) and (y, 4^{k−1}), over all possible y ∈ Z, is bounded as in (4.5). We say a d-crossover (d ∈ N) occurs at site x ∈ Z at time s ∈ (4^{k−1}, 4^k] if at this time (dual time, for the coalescing random walks) a relevant crossing occurs leaving particles at sites x and x + d immediately after the crossing; we introduce the indicator function of such a crossover. Let X^x_s and X^{x+d}_s be two independent random walks with transition probability p(·) starting at x and x + d at time 0, and let τ_{x,x+d} = inf{s : X^x_s = X^{x+d}_s}. If we know that (4.6) and (4.7) hold for some C > 0 independent of k, d, s, t and N, then substituting (4.6) and (4.7) into the bound for P(Ṽ^N_k) gives the required estimate, for some C > 0, uniformly over all large t and N, and we are done.
If we denote Z^d_s = X^{x+d}_s − X^x_s, (Z^d_s)^+ = Z^d_s ∨ 0 and τ_0 = inf{s : Z^d_s = 0}, then by translation invariance it is not difficult to see that (4.6) holds, where the inequality, with C > 0 uniform over d and t, is a standard result for random walks (see Lemma 2 in [4]).
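A reminder sketch of the standard estimate being invoked, in the notation just introduced (this is the usual form of such first-meeting bounds; the exact statement in [4] may differ in constants):

```latex
% Z^d is a mean-zero random walk started at Z^d_0 = d > 0, stopped at tau_0.
% The classical first-passage estimate for finite-variance walks reads
\[
  P\bigl(\tau_0 > t\bigr) \;\le\; \frac{C\, d}{\sqrt{t}} \qquad (t > 0),
\]
% with C depending only on the increment law, uniformly over d and t. Optional
% stopping gives E[ Z^d_{t \wedge \tau_0} ] = d for every t; together these are the
% ingredients behind a bound of the type (4.6).
```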
Finally, to show (4.7), we note that the left-hand side is the expected overshoot of relevant crossings for which one of the two random walks is at 0 immediately after the crossing. By translation invariance, this is bounded above by the expected overshoot associated with relevant crossings in the time interval (4^{k−1}, 4^k] involving (0, 4^{k−1}) and (y, 4^{k−1}) for every y > 0, which is estimated in (4.5). Indeed, let F_k(x, y; m, m + d) be the indicator function of the event that a relevant crossover occurs before time 4^k due to random walks starting at sites x and y at time 4^{k−1}, and that immediately after the crossover the walks are at positions m and m + d. The claim then follows by translation invariance and a change of variables.

Proof of (iii) in Theorem 1.4
We know from [1] that if γ ≥ 2, then the voter model interface evolves as a positive recurrent Markov chain, and hence the equilibrium distribution π exists; in particular, π{ξ_0} > 0, where ξ_0 is the trivial interface of the Heavyside configuration η^{1,0}. Let ξ_t denote the interface configuration at time t starting from ξ_0, and let ν denote its distribution. Let X^{2n}_t and X^{5n}_t denote the positions at time t of two independent random walks with transition probability p(·) starting at 2n and 5n at time 0. Let A denote the event that X^{2n}_t ∈ [n, 3n] for all t ∈ [0, n²], and let B_s, s ∈ [0, n²], denote the event that X^{5n}_t ∈ [4n, 6n] for all t ∈ [0, s) and X^{5n}_t ∈ (−∞, −n] for all t ∈ [s, n²]. The event B_s can only occur if X^{5n}_t makes a large negative jump at time s. By duality between voter models and coalescing random walks, the conditional probability P(X^{5n}_t ≤ −n for all t ∈ [s, n²] | X^{5n}_s ≤ −2n) is at least β for some β > 0 independent of n and s ∈ [0, n²]. Therefore we obtain an inequality which we may symmetrize. If (4.9) fails, then there exist some n_0 ∈ N and ε > 0 such that, for all n ≥ n_0, Σ_{|y|≥8n} p(y) ≤ (2/(β³n²)) ν{Γ(ξ_{n²}) ≥ n} ≤ C/n^{α+ε}, which implies that Σ_{y∈Z} |y|^{α+ε/2} p(y) < ∞, contradicting our assumption. This proves the first part of (iii) in Theorem 1.4. To find a random walk jump kernel p(·) satisfying (1.6), we may choose p(·) with Σ_{|y|≥n} p(y) ∼ Cn^{−α} for some C > 0; (1.6) then follows directly from (4.8) and (4.10).
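The final implication (a polynomial tail yields the intermediate moment) is the standard tail-sum criterion; spelled out for completeness, with ε > 0 the exponent gap coming from the failure of (4.9):

```latex
% If sum_{|y| >= n} p(y) <= C n^{-(alpha+epsilon)} for all n >= n_0, then, writing the
% moment as a sum over tail events,
\[
  \sum_{y\in\mathbb{Z}} |y|^{\alpha+\varepsilon/2}\, p(y)
  \;\le\; C_1 \sum_{n\ge 1} n^{\alpha+\varepsilon/2-1} \sum_{|y|\ge n} p(y)
  \;\le\; C_2 \sum_{n\ge 1} n^{-1-\varepsilon/2}
  \;<\; \infty ,
\]
% since n^{alpha+epsilon/2-1} . n^{-(alpha+epsilon)} = n^{-1-epsilon/2}. Thus a tail of
% order n^{-(alpha+epsilon)} forces finiteness of every moment of order < alpha+epsilon.
```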

Technical Estimates
The following lemmas for random walks will be needed.
Lemma 5.1. Let X_t be a centered continuous-time one-dimensional random walk starting at the origin, with finite (3 + ε)-th moment for some ε > 0. Then for every 0 < β < 1, there exist c, C > 0 such that the stated bound holds for all T, M > 0.
Proof: By the reflection principle for random walks, it suffices to show that for every 0 < β < 1 there exist c, C > 0 such that the corresponding one-sided bound holds for all M, T > 0. To prove this inequality, we consider the following usual representation of X_t: there exist centered i.i.d. random variables (Y_n)_{n≥1} on Z with finite (3 + ε)-th moment, and a Poisson process (N_t)_{t≥0} of rate 1 independent of the Y_n's, such that X_t = Σ_{n=0}^{N_t} Y_n, where Y_0 = 0. The analogue of (5.1) for discrete-time random walks appears as Corollary 1.8 in [8], from which we obtain the corresponding estimate. By basic large deviation results for the Poisson distribution, we have P(N_T ≥ 3T) ≤ C′e^{−c′T} for some c′, C′ > 0. Then, after adjusting the constants, we obtain the desired bound for every M > 0 and T > 0.
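The Poisson large-deviation step has a one-line Chernoff proof; for completeness:

```latex
% Chernoff bound for N_T ~ Poisson(T): for any theta > 0,
\[
  P(N_T \ge 3T) \;\le\; e^{-3\theta T}\, E\bigl[e^{\theta N_T}\bigr]
  \;=\; \exp\!\bigl( T(e^{\theta}-1) - 3\theta T \bigr) ,
\]
% and choosing theta = log 3 (the optimizer) gives
\[
  P(N_T \ge 3T) \;\le\; e^{-(3\log 3 - 2)\,T},
  \qquad 3\log 3 - 2 \;\approx\; 1.296 \;>\; 0 .
\]
```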
We now suppose T ≤ M. Returning to the term after the first inequality in (5.3): by Stirling's formula, we can choose C > 0 large enough such that for all M > 0, P(N_T ≥ M^{1+β}) ≤ Ce^{−cM^{1−β}}, thus concluding the proof.
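The Stirling step can be sketched as follows (a crude Poisson tail bound suffices; the final comparison uses only T ≤ M and M large):

```latex
% For N_T ~ Poisson(T) and n >= 2eT, using j! >= (j/e)^j (Stirling),
\[
  P(N_T \ge n) \;=\; e^{-T} \sum_{j \ge n} \frac{T^{j}}{j!}
  \;\le\; \sum_{j \ge n} \Bigl(\frac{eT}{j}\Bigr)^{j}
  \;\le\; 2 \Bigl(\frac{eT}{n}\Bigr)^{n} .
\]
% With n = M^{1+\beta} and T <= M this is at most
% 2 exp( -M^{1+\beta} (\beta \log M - 1) ), which for M large is far smaller than the
% claimed bound C e^{-c M^{1-\beta}}.
```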
Lemma 5.2. Let X^x_t and X^y_t be two independent, identically distributed, continuous-time homogeneous random walks with finite second moments starting from positions x and y at time 0. Let τ_{x,y} = inf{t > 0 : X^x_t = X^y_t} be the first meeting time of the two walks. Then there exists C_0 > 0 such that the stated bound holds for all x, y and T > 0.
Proof. This is a standard result. See, e.g., Proposition P4 in Section 32 of [10], or Lemma 2.2 of [9]. Both results are stated for discrete time random walks, but the continuous time analogue follows readily from a standard large deviation estimate for Poisson processes.
Proof: To prove the lemma, we construct the system of coalescing random walks from the system of independent walks. Given the trajectories of a system of independent walks starting from a given collection of positions, we let the first pair of walks that meet coalesce at their meeting point; among the remaining distinct trajectories, we iterate this procedure until no more coalescing takes place. Note that this construction is well defined, since almost surely no two random walk jumps take place at the same time. The resulting collection of random walk trajectories is distributed as a system of coalescing random walks.
In the above construction, almost surely, the number of coalesced walks by time T in the coalescing system is bounded from below by the number of pairs {x^{(i)}_1, x^{(i)}_2} whose two walks meet in the independent system by time T. Indeed, suppose the two walks of such a pair first meet at a time t ≤ T: either both walks haven't coalesced with other walks before time t, in which case the two will coalesce at time t; or one of the two walks has coalesced with another walk before time t. In either case, whenever x^{(i)}_1 and x^{(i)}_2 meet in the independent system, at least one of them will be coalesced in the coalescing system. The asserted stochastic domination then follows by noting that Lemma 5.2 implies that each pair {x^{(i)}_1, x^{(i)}_2} has a probability, bounded below in terms of T, of meeting before time T in the independent system.

We will also need the following estimate: there exists C > 0, depending only on p(·), such that for all K ≥ 1, the probability that |X^{x,s}_u − x| ≥ 2^k R/(log R)² for some (x, s) ∈ [2^k R, 2^{k+1} R] × [0, R²] and some 0 ≤ u ≤ s − (⌈Ks⌉ − 1)/K is bounded above by CK(log R)^{2(3+ε)}/(2^{2k+3} R) for all R sufficiently large.
Proof: Let V^{x,s} be the event defined as above but concerning only the random walk X^{x,s}_u, and denote the event in the statement by V, the union of the V^{x,s} over all (x, s) ∈ [2^k R, 2^{k+1} R] × [0, R²]. Due to the coalescence, the event V occurs only if V^{x,s} occurs either for some (x, s) with s ∈ {K, 2K, …, ⌊R²/K⌋K} ∪ {R²}, or for some (x, s) which is a Poisson point in the Harris representation of the voter model detailed in Section 1. Therefore we can bound P(V) by the expected number of such points, which by the strong Markov property of Poisson processes can in turn be bounded in terms of a random walk X_u starting at the origin with transition probability p(·). By our assumption that p(·) has finite (3 + ε)-th moment, we can apply Lemma 5.1 to bound the resulting expression, where C depends only on p(·). The lemma then follows if we take R sufficiently large.
We finish by stating a result on the lifetime of a single-particle voter model.
Lemma 5.6. Let ζ^Z_t be the process of coalescing random walks starting from Z at time 0, where all random walk increments are distributed according to a transition probability p(·) with finite second moment. Then for all t > 0, P(0 ∈ ζ^Z_t) ≤ C/√t.

Proof: See Lemma 2.0.7 and the remark that follows it in [11].