Local survival of spread of infection among biased random walks

We study infection spread among biased random walks on $\mathbb{Z}^{d}$. The random walks move independently and an infected particle is placed at the origin at time zero. Infection spreads instantaneously when particles share the same site and there is no recovery. If the initial density of particles is small enough, the infected cloud travels in the direction of the bias of the random walks, implying that the infection does not survive locally. When the density is large, the infection spreads to the whole $\mathbb{Z}^{d}$. The proofs rely on two different techniques. For the small density case, we use a description of the infected cloud through genealogical paths, while the large density case relies on a renormalization scheme.


Introduction
We consider here an infection process that evolves on the $d$-dimensional integer lattice, where each individual performs a biased nearest-neighbor random walk. Let $p(\cdot)$ be a nearest-neighbor probability distribution on $\mathbb{Z}^d$. We assume, without loss of generality, that, for each $i \in [d]$, $0 < p(e_i) \leq p(-e_i) < 1$, where $\{e_i\}_{i=1}^d$ is the canonical basis of $\mathbb{Z}^d$, and write $v = \sum_{x \sim 0} p(x)\, x$ for the $d$-dimensional drift of the distribution $p(\cdot)$.
We consider a "Poissonian cloud" of independent continuous-time random walks with jump distribution $p(\cdot)$. More formally, at time zero, each site $x \in \mathbb{Z}^d$ receives an i.i.d. number of particles $\eta_0(x) \sim \mathrm{Poisson}(\rho)$, where $\rho > 0$ is a given parameter which we call the density. Then, each particle evolves as an independent continuous-time random walk that jumps with rate one and whose increments have distribution $p(\cdot)$. Denote by $\eta_t(x)$ the number of particles at position $x$ at time $t$.
We now define the infection process we will consider. Particles are of two types: either healthy or infected. At time zero, we add an additional particle at the origin that is declared infected. A given particle is infected at time $t$ if there exists some time $s \in [0, t]$ at which it shared a site with a previously infected particle. Alternatively, one might say that a healthy particle becomes infected immediately when it shares a site with an already infected particle.
Let $\xi_t(x)$ denote the number of infected particles at $x \in \mathbb{Z}^d$ at time $t$. Notice that, at any given time $t$ and site $x \in \mathbb{Z}^d$, either all particles at site $x$ are healthy or all are infected.
Our interest lies in understanding the behavior of the process $\xi = (\xi_t)_{t \geq 0}$. We will focus on directions where the drift is negative; more precisely, we assume that $p(e_1) < p(-e_1)$ and examine the projection of the infected cloud in this direction. Two forces are here in opposition to each other: each individual particle has a drift away from the origin, but, in order for the infection to travel in that same direction, each site must be visited only finitely many times by infected particles, which puts a strain on the system. The strength of this opposing force is controlled by the density parameter $\rho$. This hints at the existence of different behaviors for the model as the density parameter $\rho$ changes.
We first consider the case when the density is small. Here, the time it takes for each infected location to be emptied is not enough to sustain the infection process locally, and the infected cloud travels towards $v$. To precisely detect this phenomenon, consider the quantity
$$r_t = \sup\{\langle x, e_1 \rangle : \xi_t(x) > 0\}, \qquad (1.1)$$
the maximum displacement "to the right" in the $e_1$ direction.
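Although the results below are asymptotic, the dynamics are easy to simulate. The following is a minimal Monte-Carlo sketch of the model in dimension one, together with the statistic $r_t$ of (1.1). All function and parameter names are our own, and the truncation to a finite spatial window is an approximation absent from the actual (infinite) model.

```python
import random

def simulate(rho=0.5, p_right=0.3, t_max=5.0, radius=30, seed=0):
    """Event-driven sketch of the infection model on Z (d = 1).

    Each particle jumps at rate 1, stepping +1 with probability p_right
    and -1 otherwise; infection spreads instantly on shared sites and
    there is no recovery.  Sites outside [-radius, radius] are simply
    not populated, so this only approximates the infinite system.
    """
    rng = random.Random(seed)
    # positions[i], infected[i] describe particle i; the extra infected
    # particle added at the origin is particle 0.
    positions, infected = [0], [True]
    for x in range(-radius, radius + 1):
        for _ in range(poisson(rng, rho)):
            positions.append(x)
            infected.append(False)
    spread(positions, infected)  # infect everything sharing site 0
    t = 0.0
    while True:
        # the whole system rings at total rate len(positions)
        t += rng.expovariate(len(positions))
        if t > t_max:
            break
        i = rng.randrange(len(positions))
        positions[i] += 1 if rng.random() < p_right else -1
        spread(positions, infected)
    return positions, infected

def spread(positions, infected):
    """Infect every particle sharing a site with an infected particle."""
    hot = {x for x, inf in zip(positions, infected) if inf}
    for i, x in enumerate(positions):
        if x in hot:
            infected[i] = True

def poisson(rng, lam):
    """Sample Poisson(lam) by counting rate-lam arrivals in [0, 1]."""
    n, t = 0, rng.expovariate(lam)
    while t <= 1.0:
        n += 1
        t += rng.expovariate(lam)
    return n

def r_t(positions, infected):
    """Rightmost infected site, i.e. the quantity r_t of (1.1)."""
    return max(x for x, inf in zip(positions, infected) if inf)
```

Running the sketch for small versus large `rho` illustrates the two regimes discussed below: the rightmost infected site tends to drift left when the cloud is sparse and to advance when it is dense.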
Theorem 1.1. Assume that $p(e_1) < p(-e_1)$ and set $v_1 = p(e_1) - p(-e_1)$. For any $\delta > 0$, there exists a positive density $\rho_- > 0$ such that, for all $\rho \in (0, \rho_-)$, there exists a positive constant $c_1$ such that, for all $t \geq 0$,
$$\mathbb{P}_\rho\big[r_t \geq (v_1 + \delta)t\big] \leq e^{-c_1 t}. \qquad (1.2)$$
As a corollary of the theorem above, we conclude that, provided the density $\rho$ is sufficiently small, every finite subset is eventually free of the infection, since the infection travels in the negative direction with positive speed.

Corollary 1.2. If $p(e_1) < p(-e_1)$ and $\rho$ is small enough, then each fixed site is visited finitely many times by infected particles, almost surely.
When the density is large, the picture is different from Corollary 1.2. Even though each particle travels in the direction of $v$, the time it takes for all particles in a given site to move allows the infection to actually spread in the opposite direction. Our next theorem states that, provided the density is large enough, $r_t$ actually grows with positive speed with large probability.

Theorem 1.3. Given $p(\cdot)$ such that $p(e_1) > 0$, there exist a positive constant $\Delta = \Delta(p(\cdot)) > 0$ and a density $\rho_+ = \rho_+(p(\cdot)) > 0$ such that, for all $\rho > \rho_+$, there exist positive constants $c_2 = c_2(p(\cdot), \rho)$, $c_3 = c_3(p(\cdot), \rho)$, and $t_0 = t_0(p(\cdot), \rho)$ such that
$$\mathbb{P}_\rho[r_t \leq c_3 t] \leq e^{-c_2 (\log t)^{1+\Delta}}, \quad \text{for all } t \geq t_0. \qquad (1.3)$$
According to Theorems 1.1 and 1.3, we have $r_t \leq -ct$ if $\rho < \rho_-$, and $r_t \geq ct$ if $\rho > \rho_+$. An interesting open problem is whether there exists a critical point $\rho_c$ such that, for $\rho < \rho_c$, the infection travels to the left and, for $\rho > \rho_c$, it travels to the right.
Although the heuristics behind the theorems above are easily understood, turning them into proofs is not straightforward. Verifying such statements for similar models, as in Kesten and Sidoravicius [10], relies on developing intricate renormalization schemes. In Baldasso and Teixeira [4], the authors consider another model, where particles evolve as a one-dimensional zero-range process without drift, and also rely on multiscale renormalization to derive their results.
Proof overview. The proof of Theorem 1.1 follows by path-counting arguments. It has two distinct parts that rely on the fact that every site that is infected at time $t$ can be reached by a concatenation of trajectories of random walks, the first one starting at the origin. We will call such a concatenation of random walk trajectories a genealogical path, but defer the formal definition to Section 2. The overall goal of the proof is to show that, for all times $t$, there exists no genealogical path that could bring the infection far from $tv$.
The first part of the proof is similar to the analogous statement from Kesten and Sidoravicius [10], which considers the case when the random walks are balanced. In this part, we simply prove that, with sufficiently high probability, there exists no genealogical path starting at the origin that performs too many jumps. This allows us to reduce the number of genealogical paths we need to consider in the rest of the proof.
The second part requires a more refined analysis, as we need to account for the contribution that each particle in a genealogical path could make to "move" the infection away from the drift. For this, we first verify that the contribution of each particle followed by a genealogical path to this displacement away from the drift is well concentrated. From this, we infer that, in order for a genealogical path to move too far away from the drift, it needs to follow too many different particles.
Theorem 1.3 has a much more intricate proof. Here we rely on multiscale renormalization, considering events where the infection does not travel fast enough in the positive direction. We prove that, provided the density is large enough, the probability of such events is small, by relating events of different scales. A central piece of the proof that allows us to establish such a relation is a decoupling inequality for biased random walks.

Decoupling.
A decoupling is an estimate on the decay of correlations of functions of the space-time configurations whose supports are sufficiently far apart. Such estimates are powerful tools that replace the use of mixing properties for models that lack them. We here prove a decoupling for the particle system composed of independent continuous-time biased random walks.
We regard the collection of space-time configurations as a subset of $\mathbb{N}^{\mathbb{Z}^d \times \mathbb{R}_{\geq 0}}$, and consider monotone functions of such configurations, which we assume are defined on this larger space.
We say that a function $f$ of the space-time configurations is increasing if $\eta_1 \preceq \eta_2$ implies $f(\eta_1) \leq f(\eta_2)$, where $\eta_1 \preceq \eta_2$ means that $\eta_1(x, t) \leq \eta_2(x, t)$ for every space-time point $(x, t)$. The function $f$ is said to be decreasing if $\eta_1 \preceq \eta_2$ implies $f(\eta_1) \geq f(\eta_2)$. Given two space-time supports $B_1$ and $B_2$, their time distance $d_T(B_1, B_2)$ is the length of the gap between their projections on the time axis. We will prove a correlation estimate for decreasing functions of the space-time configuration with bounded supports whose time distance is sufficiently large.
Theorem 1.4. There exist positive constants $c_4$ and $c_5$ such that the following holds. Let $B_1$ and $B_2$ denote two space-time cubes of side length $n > 0$ whose time distance $T = d_T(B_1, B_2)$ is sufficiently large, and assume, without loss of generality, that the time coordinates in the box $B_1$ are smaller than the ones in $B_2$. For any two decreasing functions $f_1$ and $f_2$ of the space-time configurations with respective supports in $B_1$ and $B_2$, we have, for any $\rho \geq 1$,
$$\mathbb{E}_\rho[f_1 f_2] \leq \mathbb{E}_{\rho^*}[f_1]\, \mathbb{E}_{\rho^*}[f_2] + c_4\, e^{-c_5 \sqrt{T}},$$
where $\rho^* > \rho$ denotes the sprinkled density.
Remark 1.5. Notice that the estimate above is not a genuine correlation estimate, since it relates expectations with different density parameters $\rho$ and $\rho^*$. A natural question is whether it is possible to obtain such estimates without the use of the so-called sprinkling in the density. This is not possible in general, as verified in [8] for a very similar model composed of discrete-time balanced random walks on $\mathbb{Z}$. There, the authors exhibit an example where correlations decay only polynomially in the distance $d$ (see Equation (2.11) of [8]).
Remark 1.6. We remark that the statement of the theorem above can be extended to the case when the function $f_1$ depends not only on the configuration inside the box $B_1$, but on the whole past of the process up to the upper time limit of the box $B_1$.
Proof overview of the decoupling. The proof of Theorem 1.4 relies on the construction of a coupling between two systems $\eta$ and $\eta^*$ with respective densities $\rho$ and $\rho^*$ such that, for a given subset $H \subset \mathbb{Z}^d$, we have $\eta_t(x) \leq \eta^*_t(x)$, for all $x \in H$, with large probability, provided $t$ is large enough. With this coupling in hand, Theorem 1.4 follows easily.
The construction of the coupling is more intricate. Its nature resembles that of Baldasso and Teixeira [3, 4]. We start with two independent collections of particles $\eta_0$ and $\eta^*_0$. Inside a large set containing $H$, we match particles of $\eta_0$ to particles of $\eta^*_0$. We then let the random walks evolve. Whenever a pair of matched particles meet, they evolve together. Standard heat-kernel estimates bound the probability that this happens before some given time $s$. As those bounds are not strong enough for the estimates we need, we rematch the particles a polynomial number of times to boost them to the desired stretched-exponential bounds. We remark that this coupling is more robust than that of [7, 14, 15], since it does not require such a refined control of heat-kernel estimates.
Related works. There are many works that treat models of infection spread. Perhaps the most similar to ours is considered by Kesten and Sidoravicius [10]. In their case, particles evolve as continuous-time unbiased simple random walks and all particles initially placed at the origin begin infected. Once again, infection spreads through contact and there is no recovery. They consider the set $V(t)$ of sites visited by an infected particle up to time $t$ and prove that there exist positive constants $C_1$ and $C_2$ such that, with large probability, $B(C_1 t) \subseteq V(t) \subseteq B(C_2 t)$. In [12], Kesten and Sidoravicius strengthen the results of [10] and conclude that the set $V(t)$ satisfies a shape theorem, while in [11] they study the case with recovery.
The proof in [10] shares some similarities with ours. The upper bound is also obtained through path-counting arguments, while the lower bound revolves around the construction of a delicate renormalization structure. As mentioned in the proof overview of our Theorems 1.1 and 1.3, the first part of our proof follows the path-counting argument for the upper bound in [10], but we need to proceed one step further to control that the infection does not move too far away from the bias of the random walks. With regard to the lower bound, we focus this discussion on the one-dimensional case to highlight the main differences between our proof and that in [10]. First, [10] observes that the infection front (say, the position of the rightmost infected particle) behaves as a symmetric random walk when there is only one infected particle at the front, whereas the front has a drift to the right when there is more than one particle. This implies that, in order to prove that the infection grows linearly, it suffices to prove that the infection front carries at least two particles a positive fraction of the time, which they obtain via a renormalization scheme. In our case, where particles have a drift to the left, this strategy fails for two reasons. First, having two particles at the front may not be enough to overcome the leftward drift of the random walks, so one needs a sufficiently large number of particles. Second, even if two particles were enough to overcome the drift, a positive density of times with two particles at the front may not compensate for the drift to the left that the front undertakes when it has just one particle. Our strategy is then to develop a multiscale renormalization scheme different from that of [10], aimed at controlling instances where the infection does not travel with a minimal positive speed to the right, and at proving that events of this form have very small probability.
Regarding other works, Gracar and Stauffer [7, 6] analyzed a more general situation where the random walks move on top of the random conductance model. They prove the existence of a percolation structure (which they call a Lipschitz surface) and use it to conclude that the infection spreads with positive speed for $d \geq 2$. A less structured percolation argument was obtained by Stauffer [19] in continuous space, where particles move as independent Brownian motions.
A simpler model that can be viewed as an infection process is the so-called frog model. Here, infected particles perform discrete-time simple random walks, while healthy particles do not move until an infected particle jumps onto their position. A thorough discussion of this model can be found in the survey paper by Popov [16]. We just remark that, under some minor conditions on the initial location of the particles, Alves, Machado, and Popov [1] and, independently, Ramírez and Sidoravicius [17] prove a shape theorem similar to the one in [12]. This was further strengthened by Alves, Machado, Popov, and Ravishankar [2].
Let us now briefly review models where particles do not move independently. Baldasso and Teixeira [4] consider particles that move according to a one-dimensional zero-range process. Under mild conditions that guarantee the existence of invariant measures for the process, they provide lower and upper bounds for the speed with which the front of the infection grows. Jara, Moreno, and Ramírez [9] consider an infection evolving on top of a one-dimensional exclusion process and rely on regeneration arguments to prove a law of large numbers and a central limit theorem for the infection front.
Regarding decoupling estimates (as in our Theorem 1.4), sprinkling ideas were first introduced in the context of random interlacements by Sznitman [20] and in the context of independent Brownian motions by Sinclair and Stauffer [18] (see also [14]). These types of inequalities have been used to study several conservative particle systems. Peres, Sinclair, Sousi, and Stauffer [14], Benjamini and Stauffer [5], and Stauffer [19] considered independent Brownian motions. Hilário, den Hollander, Sidoravicius, dos Santos, and Teixeira [8] treated discrete-time balanced random walks and built on the strategy of Popov and Teixeira [15] to provide decouplings for this system. The random conductance model was considered in [7], while Baldasso and Teixeira developed decoupling inequalities for the one-dimensional zero-range process [4] and the one-dimensional simple exclusion process [3].

Basic definitions
Let us now precisely construct the particle system and the infection process we consider. Recall that $p(\cdot)$ denotes a nearest-neighbor probability distribution on $\mathbb{Z}^d$ such that
$$0 < p(e_i) \leq p(-e_i) < 1, \quad \text{for each } i \in [d], \qquad (2.1)$$
where $\{e_i\}_{i=1}^d$ is the canonical basis of $\mathbb{Z}^d$. The vector $v = \sum_{x \sim 0} p(x)\, x$ is the $d$-dimensional drift of the distribution $p(\cdot)$. Due to (2.1), every coordinate of $v$ is non-positive. Furthermore, we assume that $p(e_1) < p(-e_1)$, so that $v_1$, the first coordinate of $v$, is negative.
For each $x \in \mathbb{Z}^d$ and $n \in \mathbb{N}$, let $S^{x,n} = (S^{x,n}_t)_{t \geq 0}$ denote an independent copy of a rate-one continuous-time random walk with transition probability $p(\cdot)$ and $S^{x,n}_0 = x$, for all $n \in \mathbb{N}$. Denote this collection by $S$.
Given a non-negative parameter $\rho \geq 0$, consider, for each $x \in \mathbb{Z}^d$, an independent random variable $\eta_0(x)$ with distribution $\mathrm{Poisson}(\rho)$. For each positive time $t > 0$, let
$$\eta_t(x) = \sum_{y \in \mathbb{Z}^d} \sum_{n=1}^{\eta_0(y)} \mathbb{1}\{S^{y,n}_t = x\}$$
denote the number of particles at position $x$ at time $t$. We write $\mathbb{P}_\rho$ for the distribution of the process $\eta = (\eta_t)_{t \geq 0}$. We view the particles at a given space-time point as ordered in a pile. This ordering can be chosen arbitrarily, and we will use it to speak of the $k$-th particle at a site. Furthermore, notice that it also makes sense to speak of the $k$-th particle that jumps into a site $x$ after time $t$, and that this does not depend on the ordering of particles within each site.
The product measure with marginals $\mathrm{Poisson}(\rho)$ is invariant for the process, and we call the quantity $\rho$ the density of the system. Besides, if $\rho' \leq \rho$, it is possible to define an order-preserving coupling between two processes $\eta^{\rho'}$ and $\eta^{\rho}$ with respective densities $\rho'$ and $\rho$ such that $\eta^{\rho'}_t(x) \leq \eta^{\rho}_t(x)$, for all $t \geq 0$ and $x \in \mathbb{Z}^d$: one simply uses the same collection of walks $S$ to evolve both processes, starting from initial conditions $\eta^{\rho'}_0$ and $\eta^{\rho}_0$ that satisfy $\eta^{\rho'}_0(x) \leq \eta^{\rho}_0(x)$, for all $x \in \mathbb{Z}^d$.
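The initial step of this order-preserving coupling can be sketched in a few lines. The following is an illustrative fragment (function names are ours), where the denser initial condition is obtained from the sparser one by Poisson superposition, so that the sitewise inequality holds by construction:

```python
import random

def poisson(rng, lam):
    """Sample Poisson(lam) by counting rate-lam arrivals in [0, 1]."""
    if lam <= 0:
        return 0
    n, t = 0, rng.expovariate(lam)
    while t <= 1.0:
        n += 1
        t += rng.expovariate(lam)
    return n

def coupled_initial(rho_lo, rho_hi, sites, rng):
    """Build initial conditions with eta_lo(x) <= eta_hi(x) at every site.

    Superposition of independent Poisson fields: eta_hi(x) = eta_lo(x) +
    Poisson(rho_hi - rho_lo) extra particles, so the marginals are
    Poisson(rho_lo) and Poisson(rho_hi), respectively.
    """
    eta_lo, eta_hi = {}, {}
    for x in sites:
        lo = poisson(rng, rho_lo)
        eta_lo[x] = lo
        eta_hi[x] = lo + poisson(rng, rho_hi - rho_lo)
    return eta_lo, eta_hi
```

Evolving both systems with the same collection of walks $S$ (shared particles reuse identical trajectories, extra particles get independent ones) then preserves the inequality at every later time.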
We now proceed to define the infection process $(\xi_t)_{t \geq 0}$. Recall that we add an infected particle at the origin at time zero. At any given time $t \geq 0$, $\xi_t(x)$ denotes the number of infected particles at $x \in \mathbb{Z}^d$. At time zero, only particles at the origin are infected, which means that $\xi_0(x) = (\eta_0(0) + 1)\mathbb{1}_{\{x = 0\}}$. As for the evolution, each time an infected particle jumps to a site with healthy particles, all particles at that site become infected. Furthermore, if a healthy particle jumps onto a site with infected particles, it immediately becomes infected. This in particular implies that, at every site and non-negative time, either all particles are healthy or all are infected.

Genealogical infected paths
We can describe the infection mechanism through genealogical paths. In order to define these paths, we introduce notation for the trajectory of a particle. If $X$ denotes a particle present at time zero, we denote by $(X(s))_{s \geq 0}$ the path it performs. We say that a particle $X$ becomes infected at time $t$ if it is healthy before time $t$ and infected after time $t$; that is, $t$ is the first time it shares a site with another infected particle.
We say that $\gamma : [0, t] \to \mathbb{Z}^d$ is a genealogical infected path up to time $t$ (GIP($t$)) if $\gamma(0) = 0$ and there exist a sequence of times $0 = t_0 < t_1 < \cdots < t_n \leq t$ and a sequence of particles $X_1, \ldots, X_{n+1}$ such that $X_1(0) = 0$, $X_i$ becomes infected at time $t_{i-1}$, and $\gamma(s) = X_i(s)$ for all $s \in [t_{i-1}, t_i]$ and $i \in [n+1]$; for convenience of notation, we assume that $t_{n+1} = t$. Of course, $\xi_t(x) > 0$ if and only if there exists a GIP($t$) $\gamma$ with $\gamma(t) = x$. See Figure 1 for a representation of a GIP($t$).
We will identify a GIP($t$) by the following data: the number $n$ of particles it follows; a vector $(k_1, \ldots, k_n)$ with non-negative entries that counts the number of jumps each particle performs while it is being followed; and a vector $(j_0, j_1, \ldots, j_{n-1})$ with non-zero integer entries that identifies the next particle to be followed ($j_0$ identifies the first particle, which starts at the origin). If $j_i > 0$, then, when $X_i$ makes its last jump, we take $X_{i+1}$ to be the $j_i$-th healthy particle present at the site where $X_i$ jumped to (and which thus became infected via $X_i$). Whenever $j_i < 0$, we wait for $X_i$ to perform all of its $k_i$ jumps, and only at this moment wait for $|j_i|$ healthy particles to jump onto the site where $X_i$ is (each becoming infected for the first time when it meets $X_i$), taking the $|j_i|$-th such particle as $X_{i+1}$.
This identification has some particularities we need to address. First, observe that, if $k_i = 0$, we demand that $j_i < 0$; i.e., after following a particle that does not jump, we need to follow a particle that is not yet at the site we are considering. Moreover, we will denote by $k = \sum_{i=1}^n k_i$ the total number of jumps of a GIP. Finally, it is not always the case that every possible choice of these vectors yields a GIP, but every GIP can be obtained by some choice of such values.

Figure 1: An example of a genealogical infected path. Different colors stand for different particles followed. Following the representation of the path via the quantities introduced above, we have $n = 4$ and $(k_1, k_2, k_3, k_4) = (2, 0, 3, 0)$. Furthermore, notice that the transition from the first (blue) particle to the second (red) particle happens with an index $j_1 < 0$. Since the second particle does not jump, we have $j_2 < 0$ as well. In the last transition, from a green to a purple particle, the index $j_3$ is positive.
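In code, this identification is just a small record together with the consistency checks above. The following is a hypothetical sketch (the class and field names are ours), using the path of Figure 1 as an example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GIPCode:
    """Identification of a GIP: jumps = (k_1, ..., k_n) counts the jumps
    of each followed particle, transitions = (j_0, ..., j_{n-1}) are the
    non-zero indices selecting the next particle (j_i > 0: a particle
    already at the target site; j_i < 0: wait for the |j_i|-th healthy
    arrival)."""
    jumps: tuple
    transitions: tuple

    def __post_init__(self):
        if len(self.transitions) != len(self.jumps):
            raise ValueError("one transition index per followed particle")
        if any(j == 0 for j in self.transitions):
            raise ValueError("transition indices must be non-zero")
        if any(k < 0 for k in self.jumps):
            raise ValueError("jump counts must be non-negative")
        # a particle that never jumps can only be left by waiting for an
        # arrival: k_i = 0 forces j_i < 0
        for i in range(1, len(self.jumps)):
            if self.jumps[i - 1] == 0 and self.transitions[i] > 0:
                raise ValueError("k_i = 0 forces j_i < 0")

    @property
    def total_jumps(self):
        """The total number k of jumps of the GIP."""
        return sum(self.jumps)

# The path of Figure 1: n = 4, (k_1, ..., k_4) = (2, 0, 3, 0),
# with j_1 < 0, j_2 < 0 and j_3 > 0.
figure1 = GIPCode(jumps=(2, 0, 3, 0), transitions=(1, -1, -1, 2))
```

As the text notes, not every admissible code corresponds to an actual GIP of a given realization, but every GIP admits such a code; the path-counting arguments below enumerate codes rather than paths.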
The small density case: proof of Theorem 1.1

We now consider the case when the density is very small. Using a first-moment computation, we will prove that there exists a small density $\rho > 0$ such that, with large probability, the infection travels in the negative direction along the first coordinate axis.
We will control how the infection spreads by using genealogical paths. The first proposition we prove states that it is unlikely that a GIP jumps many times. This is a general statement that does not depend on the probability distribution $p(\cdot)$; this transition kernel will become important when we consider finer properties of the model.

Proposition 3.1. For any $\rho \in (0, 1)$, there exists a positive constant $c_6 = c_6(\rho)$, which may be taken monotone non-decreasing as a function of $\rho$, such that, for all $t \geq 0$,
$$\mathbb{P}_\rho\big[\text{there exists a GIP}(t) \text{ that jumps more than } c_6 t \text{ times}\big] \leq e^{-c_6 t + 1}.$$
As a byproduct of the proposition above, we immediately obtain the following result.
Proposition 3.2. For any density $\rho \in (0, 1)$, there exist $c_7 > 0$ and $\alpha > 0$ such that, for all $t \geq 0$, with probability at least $1 - e^{-c_7 t}$, no site at distance larger than $\alpha t$ from the origin is infected by time $t$.

Proof of Proposition 3.1. The proof of this statement relies on a first-moment calculation. We will bound the expectation of the number of GIPs that jump more than $c_6 t$ times before time $t$.
The discussion is simplified when we use the identification of a GIP introduced in Subsection 2.1. A GIP is identified by the number of jumps $k$, the number $n$ of particles it follows, a vector $(k_1, \ldots, k_n)$ with non-negative entries that counts the number of jumps each particle performs, and a vector $(j_0, j_1, \ldots, j_{n-1})$ with non-zero integer entries that identifies the next particle to be followed. As in Subsection 2.1, we denote by $X_1, \ldots, X_n$ the collection of particles followed by the GIP.
If $k_i > 0$, then the time it takes to follow particle $X_i$ until its last jump is equal in distribution to the sum of $k_i$ independent Exponential(1) random variables. We will call these exponential times $T_i$. Besides, if $j_i < 0$, there is an additional time contribution coming from the fact that, after particle $X_i$ performs its $k_i$-th jump, it has to wait until $|j_i|$ healthy particles jump into its site from one of the neighboring sites. We will call these times $W_{i,\ell}$, where $\ell$ ranges from $1$ to $|j_i|$. Note that the number of healthy particles in any such neighboring site is distributed as a Poisson random variable with intensity $\rho$ times the probability that a particle moving as a biased random walk did not touch other infected particles in the past; this follows from the thinning property of Poisson point processes. We simply bound this probability by one. Moreover, we still need to account for the possibility that $X_i$ jumps, which happens with rate one. Hence, $(W_{i,\ell})_\ell$ is stochastically dominated by a sequence of $|j_i|$ independent Exponential($1 + \rho$) random variables. Notice furthermore that the probability that $X_i$ jumps before a healthy particle arrives from a neighboring site is at least $\frac{1}{1+\rho}$. In the case when the particle we are following jumps before all $|j_i|$ new particles have arrived, we disregard the path. From the considerations above, the probability that the path is not disregarded is at most the probability that $G_i > |j_i|$, where $G_i \sim \mathrm{Geometric}\big(\frac{1}{\rho+1}\big)$. From the strong Markov property, it follows that the random variables $T_i$, $W_{i,\ell}$, and $G_i$ are independent.
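The two stochastic dominations above rest on a standard race between exponential clocks: if the followed particle rings at rate $1$ and healthy arrivals occur at total rate at most $\rho$, the first ring of the combined system is Exponential($1+\rho$) and the particle wins with probability at least $\frac{1}{1+\rho}$. The following Monte-Carlo sketch (illustrative only, with rates of our choosing) checks this in the boundary case where arrivals occur at rate exactly $\rho$:

```python
import random

def race(rho, n, seed=0):
    """Race an Exponential(1) clock (the particle's own jump) against an
    Exponential(rho) clock (first healthy arrival).

    Returns the empirical win probability of the particle and the mean
    time of the first ring; the exact values are 1/(1+rho) for both,
    since min of the two clocks is Exponential(1+rho).
    """
    rng = random.Random(seed)
    wins, total = 0, 0.0
    for _ in range(n):
        own = rng.expovariate(1.0)      # the particle's own jump time
        arrival = rng.expovariate(rho)  # first healthy arrival
        wins += own < arrival
        total += min(own, arrival)      # first ring ~ Exponential(1+rho)
    return wins / n, total / n
```

For $\rho = 1$ both quantities concentrate around $\frac{1}{2}$, matching the bound $\frac{1}{1+\rho}$ used in the proof.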
Let $G_t$ denote the number of GIPs that jump more than $c_6 t$ times before time $t$, where $c_6$ is a constant that will be chosen later. The discussion above allows us to bound $\mathbb{E}_\rho[G_t]$ by a sum over all possible identifications, in which $N_i$ counts the number of particles present at the site onto which $X_i$ makes its last jump.
Let us now estimate the expectation above. Notice first that the number of possible choices for the vectors $(k_1, \ldots, k_n)$ with $\sum_{i=1}^n k_i = k$ is at most the number of compositions of $k$ into $n$ non-negative parts. Write $J = \sum_{i : j_i < 0} |j_i|$ and notice that the random variables $\sum_i T_i + \sum_{i : j_i < 0} \sum_{\ell=1}^{|j_i|} W_{i,\ell}$ stochastically dominate a sum of $k + J$ i.i.d. Exponential($1 + \rho$) random variables. This allows us to bound the probability that the whole path is completed before time $t$ by $\mathbb{P}[X \geq k + J]$, where $X \sim \mathrm{Poisson}\big((1 + \rho)t\big)$. Notice also that the number $n$ of walks we follow is upper bounded by $k + J$, since, whenever $i \in [n]$ is such that $k_i = 0$, we have $j_i < 0$.
To bound the expectation, we first split the sum according to the subset of indices $A \subset [n-1]$ such that $j_i < 0$ for $i \in A$. For a fixed choice of the set $A$, we use that each $N_i$ has Poisson distribution with parameter $\rho$, with an extra factor $\rho + 1$ accounting for the number of particles at the origin at time zero. From the discussion above we obtain the bound (3.7). We then observe that the number of choices of indices $j_i$, $i \in A$, with $\sum_{i \in A} |j_i| = J$ is controlled by the number of compositions of $J$, which allows us to bound the quantity above as in (3.8). Second, we bound the number of choices for $A$ according to its size. Changing the order of the summations, we conclude that the right-hand side of (3.7) is bounded by a sum over $k$ and $J$, which we control with the estimates from Proposition A.1. Choosing $c_6 \geq 2(\rho + 1)$ and setting $L = \max\{k, J\}$, so that $L \leq k + J \leq 2L$, we obtain that the right-hand side of (3.7) is bounded by $e^{-c_6 t + 1}$, for a suitable choice of $c_6 = c_6(\rho)$. To conclude the proof, simply apply Markov's inequality.

We now proceed to the proof of Theorem 1.1. In view of Proposition 3.1, we may restrict ourselves to paths that do not jump many times until time $t$. Before presenting the proof, we provide some basic facts about biased random walks that will be used in the proof, since GIPs are constructed by concatenating such trajectories.

Lemma 3.3. Let $(X_t)_{t \geq 0}$ be a random walk starting from the origin, and let $v$ denote its drift. For any $\epsilon > 0$, there exist a positive random variable $R_\epsilon$ and a positive constant $c_8 = c_8(p(\cdot), d, \epsilon) \in (0, 1)$ such that, almost surely,
$$\langle X_t - tv, e_1 \rangle \leq \epsilon t + R_\epsilon, \quad \text{for all } t \geq 0, \qquad (3.15)$$
and, for every $u \geq 0$,
$$\mathbb{P}[R_\epsilon \geq u] \leq \tfrac{1}{c_8}\, e^{-c_8 u}. \qquad (3.16)$$
In particular, $R_\epsilon$ is stochastically dominated by $\frac{1}{c_8} Y - \frac{\log c_8}{c_8}$, where $Y \sim \mathrm{Exponential}(1)$.
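The stochastic domination claimed at the end of Lemma 3.3 is a one-line tail computation. As a sketch (using only that $Y \sim \mathrm{Exponential}(1)$ and $c_8 \in (0, 1)$):

```latex
\mathbb{P}\left[\tfrac{1}{c_8}\,Y - \tfrac{\log c_8}{c_8} \geq u\right]
  = \mathbb{P}\left[Y \geq c_8 u + \log c_8\right]
  \leq e^{-c_8 u - \log c_8}
  = \tfrac{1}{c_8}\, e^{-c_8 u},
  \qquad u \geq 0,
```

so any random variable whose tail satisfies the bound of Lemma 3.3 is stochastically dominated by $\frac{1}{c_8} Y - \frac{\log c_8}{c_8}$. (The inequality also covers the case $c_8 u + \log c_8 < 0$, where the probability on the left equals one.)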
Proof. Begin by using Lemma A.4 to bound, for each fixed time, the probability that the walk presents an atypically large displacement in the direction $e_1$ by $e^{-ct}$, for some suitable positive constant $c > 0$.

Figure 2: The random variable $R_\epsilon$ in dimension one. Notice that the random walk is completely contained in the gray area.

Due to the Borel–Cantelli lemma, the supremum of these excess displacements is almost surely finite. Defining $R_\epsilon$ as this supremum (see (3.17)), (3.15) clearly holds. Finally, observe that, as an immediate consequence of (3.17) and Lemma A.4, one obtains that (3.16) holds for all $u \geq 0$, concluding the proof.
Remark 3.4. Note that the argument above crucially relies on the fact that, in a GIP, we start following a given particle only at the very moment when it becomes infected. This implies that a particle is followed at most once, and that a particle we start following at some time $t$ has never intersected the GIP before time $t$, reducing dependencies.
We are now in position to prove Theorem 1.1. The proof is also based on the first-moment method, bounding the expected number of GIPs that do not behave as expected. In view of Proposition 3.1, we may consider only GIPs that do not jump many times.
Proof of Theorem 1.1. We assume that $\rho < \frac{1}{3}$. By possibly increasing the value of $c_1$, we can assume $t \geq 1$. Furthermore, by monotonicity, we may assume that $\delta < \frac{1}{2}|v_1|$. Denote by $H_t$ the number of GIPs that jump at most $c_6 t$ times and whose endpoint $x$ satisfies $\langle x - tv, e_1 \rangle \geq \delta t$, where $c_6$ is the constant from Proposition 3.1.
We first apply Lemma 3.3. Using the fact that $\delta < |v_1|$ and that $v_1 < 0$, the maximum displacement of a given particle in the positive $e_1$ direction is bounded by $R_\delta$, since $v_1 < 0$ and $v_1 + \delta < 0$. In particular, the maximum displacement of a GIP in the positive direction can be bounded by a sum of i.i.d. random variables with distribution $R_\delta$, one for each particle followed by the path. Independence of these random variables comes from the fact that we only follow newly infected particles. In particular, if the path follows at most $\alpha t$ particles, its maximum displacement is bounded by a sum of $\alpha t$ i.i.d. random variables $(R^i_\delta)_{i=1}^{\alpha t}$ with distribution $R_\delta$. This implies that the probability that a given fixed GIP has displacement to the right larger than $\delta t$ can be bounded in terms of i.i.d. Exponential(1) random variables $Y_j$ and $Z \sim \mathrm{Poisson}\big(t c_8 \delta + \alpha t \log c_8\big)$.
Choose now $\alpha$ small enough that $c_8 \delta + \alpha \log c_8 > \alpha$, and observe that there exists a positive constant $c_9 = c_9(\alpha, \delta, c_8)$ such that
$$\mathbb{P}\big[\text{a fixed GIP has displacement to the right larger than } \delta t\big] \leq e^{-c_9 t}. \qquad (3.23)$$
We are now in position to bound the expectation of $H_t$. Reasoning as in (3.3) (and the paragraph preceding that equation), and with the same notation, we obtain a bound in which $n$ denotes the number of particles followed by a given path, the vector $(k_i)_{i=1}^n$ counts how many jumps each of the particles performs, and $(j_i)_{i=0}^{n-1}$ encodes the transitions between particles.
Proceeding as in the proof of Proposition 3.1 (in particular, Equation (3.6)), we obtain (3.25). Combining this with the bound in (3.23) yields (3.26). To estimate the number of vectors $(k_i)_{i=1}^n$ such that $\sum_{i=1}^n k_i \leq c_6 t$, we bound it by the number of vectors $(k_i)_{i=1}^{n+1}$ such that $\sum_{i=1}^{n+1} k_i = c_6 t$ and apply the bound in (3.4). Combining this with (3.26), we obtain, for $\alpha \leq c_6$, a bound (3.27) in which the last summation is controlled by maximizing the expression $\big(\tfrac{6 \rho c_6 t e}{n}\big)^n$ over $n$. From (3.27), one easily obtains that, provided $\rho$ is small enough, there exists a positive constant $c_1$ such that $\mathbb{E}_\rho[H_t] \leq e^{-c_1 t}$, for all $t \geq 1$. Markov's inequality concludes the proof.

Decoupling
This section contains the proof of Theorem 1.4, the main step towards the proof of Theorem 1.3. We first prove this theorem with the aid of an auxiliary proposition, whose proof can be found in Subsection 4.1.
In the following, we say that a process $\eta = (\eta_t)_{t \geq 0}$ has density $\rho > 0$ if the initial distribution of the process is a product of i.i.d. $\mathrm{Poisson}(\rho)$ random variables.
The proof of the decoupling relies on the construction of a coupling between two processes with different densities such that the process with higher density dominates the less dense one inside a box with large probability; this is the content of Proposition 4.1, whose proof we defer to Subsection 4.1. For now, we use this proposition to conclude the proof of Theorem 1.4.
Proof of Theorem 1.4. Via a simple change of coordinates, we may assume, without loss of generality, that the time coordinates of $B_1$ are non-positive and that those of $B_2$ lie above $T$; in particular, $d_T(B_1, B_2) = T$. In fact, using these coordinates, we can allow $f_1$ to depend on the whole half-space of space-time points with non-positive time coordinate. We now use the coupling from Proposition 4.1 for $H$ (here we need to choose $d_T = T$ large enough). Denote by $P$ the probability measure of the coupling and by $E$ the expectation with respect to $P$. We obtain two processes $\eta = (\eta_s)_{s \geq 0}$ and $\eta^* = (\eta^*_s)_{s \geq 0}$, with $\eta$ independent of $\eta^*_0$, such that $\eta_T(x) \leq \eta^*_T(x)$ for all $x \in H$, outside an event $A$ whose probability is small if $T$ is taken large enough. Notice that above we used the hypothesis that $\rho \geq 1$.
Observe now that, whenever $\eta_T \preceq_H \eta^*_T$ (we write $\eta \preceq_H \xi$ if $\eta(x) \le \xi(x)$ for all $x \in H$) and $\eta \notin A$, we have $f_2(\eta^*) \le f_2(\eta)$. This allows us to estimate (4.5). It remains to estimate the probability of the event $A$. We split this probability according to the position of the particles at time $T$. For $k \ge 1$, define $H(k)$ and notice that there exists a positive constant $c = c(d)$ such that the corresponding volume bound holds. Besides, in order for a particle that is in $H(k)$ at time $T$ to reach $B_2$, it needs to perform at least $2n + k + \rho T$ steps. Consider the event
$$A(k) = \big\{\text{some particle that is in } H(k) \text{ at time } T \text{ has a trajectory that intersects } B_2\big\}. \qquad (4.8)$$
We will bound the probability of $A(k)$ by considering the number of particles in $H(k)$ at time $T$. Applying Lemma A.3 twice, we obtain the desired bound, by further increasing the value of $T$ if necessary.
In particular, combining the bound above with (4.5) concludes the proof.
Remark 4.2. Using the notation of the proof, notice that we can allow $f_1$ to depend on the whole past $(\eta^*_s)_{s \le 0}$. In this case, one only needs to observe that (4.5) still remains valid, which follows easily from properties of the conditional expectation. This in particular establishes the extension of Theorem 1.4 as stated in Remark 1.6.

Coupling
In this subsection we present the proof of Proposition 4.1. The proof follows the same general steps from [4] and [3]. For this reason, we omit some simple computations.
The idea for constructing the coupling is to start with two independent configurations $\eta_0$ and $\eta^*_0$ and evolve them simultaneously in order to obtain the domination at time $T$. We first observe that we can restrict ourselves to a larger box $H^*$ around $H$ and assume that all particles that end up inside $H$ at time $T$ never leave $H^*$. Now, to obtain the domination, we fix a deterministic sequence of times $(s_i)_{i \ge 0}$ and, at each of these times, we construct a pairing between particles of $\eta$ and of $\eta^*$ that are inside $H^*$. The evolution is then set in a way that, if a pair of matched particles meets, they continue evolving together. In particular, the probability that there is no domination at time $T$ is bounded by the probability that there exists a particle of $\eta$ that never meets a pair. This will be easily bounded with the aid of Proposition B.3.
We now proceed to prove Proposition 4.1.
Proof of Proposition 4.1. Fix $T$ large enough so that Proposition B.3 applies. The coupling will use independent initial configurations $\eta_0$ and $\eta^*_0$ with respective densities $\rho$ and $\rho^*$. Besides, we consider two independent copies $S$ and $S'$ of the graphical construction presented in Section 2.
The process $\eta = (\eta_t)_{t \ge 0}$ will follow the walks from $S$, while the process $\eta^* = (\eta^*_s)_{s \ge 0}$ will alternate between the two constructions. This implies that $\eta$ is independent of $\eta^*_0$. Consider the sequence of times $s_k = kT^{\frac{1}{d+2}}$ and, for $i \in \mathbb{Z}^d$, define the boxes $H(i)$. Observe that $H^* \subset \bigcup_{i \in I} H(i)$. If necessary, we increase $H^*$ to coincide with the union $\bigcup_{i \in I} H(i)$.
We will perform the pairing inside each box $H(i)$, with $i \in I$. Concentration bounds on the number of particles (via the exponential Markov inequality together with the inequality $\log(1+x) \le x - \tfrac{1}{4}x^2$, for $x \in [0,1]$) yield, for each $i \in \mathbb{Z}^d$, the estimates (4.16) and (4.17). In particular, outside the exceptional event defined through these estimates, we can use (4.16) and (4.17) to compare the numbers of particles of the two processes in each box. Here it is important to notice that the bound above does not require $\eta_t$ and $\eta^*_t$ to be independent, only that they have the correct marginal distributions.
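The Poisson concentration used here can be checked numerically: the exponential Markov (Chernoff) bound for the Poisson lower tail, $P[\mathrm{Poisson}(\lambda) \le a] \le \exp\{\lambda(e^{-\theta}-1) + \theta a\}$, together with the elementary inequality $\log(1+x) \le x - \tfrac14 x^2$ on $[0,1]$. A minimal sketch with illustrative parameter values:

```python
import math

def poisson_lower_tail(lam, a):
    """Exact P[Poisson(lam) <= a], summed directly in log-space."""
    return sum(math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
               for k in range(int(a) + 1))

def chernoff_lower_tail(lam, a, theta):
    """Chernoff bound P[Poisson(lam) <= a] <= exp(lam(e^{-theta}-1) + theta*a)."""
    return math.exp(lam * (math.exp(-theta) - 1) + theta * a)

# The elementary inequality log(1+x) <= x - x^2/4 on [0,1], used in the text:
for i in range(101):
    x = i / 100
    assert math.log(1 + x) <= x - x * x / 4 + 1e-12

# The Chernoff bound dominates the exact tail, e.g. for lam = 50, a = lam/2:
lam, a = 50.0, 25
exact = poisson_lower_tail(lam, a)
bound = chernoff_lower_tail(lam, a, theta=math.log(2))
assert exact <= bound < 1
```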
We now construct the evolution of the process $\eta^* = (\eta^*_s)_{s \ge 0}$. If we are in the event $A \cup B_0$, then $\eta^*$ evolves using the graphical construction given by the paths $S'$ and, consequently, $\eta$ and $\eta^*$ have independent evolutions. Assume now that we are in $A^c \cap B_0^c$. We perform a pairing between particles of $\eta_0$ and $\eta^*_0$ inside each set $H(i)$, for $i \in I$. This pairing is deterministic and follows the steps below.
1. First pair as many particles as possible of η 0 to particles of η * 0 that are in the same site.
2. Pair the remaining particles of η 0 to particles of η * 0 that are in the same sub-box H(i).
Observe that this pairing is always possible in the event $B_0^c$. The pairing between particles will be used to construct the evolution of the process $\eta^*$. Particles of this process will use the trajectories of the construction $S'$ until the time they share a site with their corresponding pair. When this happens, the particle will follow the trajectory from $S$ that its pair from $\eta$ uses. At the times $(s_k)$, these pairings are remade, following the rules mentioned above (in particular, by the first step in the construction of the pairings, it is possible to retain pairs of particles that met before this rearrangement). These rules imply that the number of particles that have met their pair cannot decrease when the pairings are remade.
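The two-step pairing above can be sketched as a deterministic greedy matching on particle positions inside one sub-box. The function name and the list layout below are illustrative, not the paper's notation:

```python
def pair_particles(eta, eta_star):
    """Greedy two-step pairing inside one sub-box H(i).

    eta, eta_star: lists of particle positions (sites) of the two processes.
    Returns a list of (eta_index, eta_star_index) pairs: first match particles
    sharing a site, then match the remaining ones arbitrarily within the box.
    """
    pairs = []
    free_star = set(range(len(eta_star)))
    # Step 1: pair as many particles as possible that occupy the same site.
    by_site = {}
    for j in free_star:
        by_site.setdefault(eta_star[j], []).append(j)
    unmatched = []
    for i, x in enumerate(eta):
        if by_site.get(x):
            j = by_site[x].pop()
            free_star.discard(j)
            pairs.append((i, j))
        else:
            unmatched.append(i)
    # Step 2: pair the remaining eta-particles to any free eta*-particles in
    # the same sub-box; this is possible whenever eta* has at least as many
    # particles as eta in the box, i.e. on the event B_0^c.
    for i, j in zip(unmatched, sorted(free_star)):
        pairs.append((i, j))
    return pairs

eta = [0, 0, 3]
eta_star = [0, 1, 3, 3]
pairs = pair_particles(eta, eta_star)
assert len(pairs) == len(eta)  # every eta-particle finds a pair
```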
In view of Proposition B.3, the probability that a particle meets its pair between times $s_k$ and $s_{k+1}$ is at least $c_{22} T^{-\frac{d}{2(d+2)}} > 0$. Furthermore, each particle has $\lfloor T^{\frac{d+1}{d+2}} \rfloor \ge \lfloor T^{\frac{2}{3}} \rfloor$ attempts to find a pair. This yields a bound on the probability that a given particle does not find any of its pairs in any of its allowed attempts, on the event $A^c$. In particular, a union bound with prefactor $\rho(6\rho T + n + 1)^d$, applied to this probability, concludes the proof.
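Per particle, the failure probability is at most $(1 - c_{22}T^{-d/(2(d+2))})^{\lfloor T^{(d+1)/(d+2)}\rfloor} \le \exp\{-c_{22}\, T^{1/2}\}$, since the two exponents combine to $\frac{d+1}{d+2} - \frac{d}{2(d+2)} = \frac12$ in every dimension. A quick mechanical check of this exponent arithmetic:

```python
from fractions import Fraction

# Number of attempts scales as T^{(d+1)/(d+2)}; each attempt succeeds with
# probability of order T^{-d/(2(d+2))}.  Their product therefore scales as
# T^{1/2}, for every dimension d:
for d in range(1, 20):
    attempts_exp = Fraction(d + 1, d + 2)
    success_exp = Fraction(-d, 2 * (d + 2))
    assert attempts_exp + success_exp == Fraction(1, 2)
    # The lower bound floor(T^{2/3}) used in the text is valid because
    # (d+1)/(d+2) >= 2/3 for all d >= 1:
    assert attempts_exp >= Fraction(2, 3)
print("exponent check passed")
```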

The large density case
We now focus on the proof of Theorem 1.3. Here, we develop a renormalization structure for the infection front. We start by providing elementary bounds on the probability that the environment behaves exceptionally badly, and on events where the infection acts abnormally. In Subsection 5.2, we establish the main notation used for the renormalization scheme presented in Subsection 5.3. Finally, Subsection 5.4 contains the proof of Theorem 1.3.

Elementary bounds
In this subsection, we present some rough initial estimates that will be used to bound the events where the infection process behaves exceptionally badly. These estimates are not sharp and rely mostly on union bounds and large deviations for the number of particles or jumps in a given time interval.
Our first lemma bounds the probability that a given vertex has many particles at some moment before a given time.
Lemma 5.1. There exists a positive constant $c_{12}$ such that, for all $L \ge 1$ and density $\rho \le L^2$, the bound (5.1) holds.

Proof. Notice that, in order for the origin to have many particles before time $L$, it is necessary that a large ball around it starts with many particles, or that there exists a particle that performs many jumps before time $L$. Proceeding as in (4.9), we can bound the probability that some particle starts outside $B(0, 3L)$ and reaches the origin before time $L$, and conclude by choosing $c_{12}$ appropriately.
The next lemma bounds the probability that the infection process travels abnormally fast during a given time interval.

Lemma 5.2. There exists a positive constant $c_{13}$ such that, for all $L \ge 2$ and density $\rho \le L^2$, the probability $P_\rho[$there exists an infected particle outside $[-L^{d+6}, L^{d+6}]^d$ before time $L]$ satisfies the stated bound.

Proof. We can bound the probability of the event above by the probability that some vertex inside $[-L^{d+6}, L^{d+6}]^d$ has many particles at some moment before time $L$, or the infection process travels fast through a field of typical vertices. Let $A$ denote the event in the statement of the lemma, and define the events $B$ and $C$, where $C$ requires a sequence of sites $(x_i)$ and a sequence of particles $X_0, X_1, \dots, X_{L^{d+6}-1}$ such that $X_i$ jumps from $x_i$ to $x_{i+1}$ after $X_{i-1}$ arrives at $x_i$ and before time $L$. As for the probability of $C \cap B^c$, we use a union bound on the possible choices for the path. Notice that, since the number of particles that are at position $x_i$ is bounded by $L^{d+4}$ on $B^c$, the probability of having particles realize a given fixed path can be bounded by the probability that a Poisson process of intensity $L^{d+4}$ has more than $L^{d+6}$ ticks up to time $L$. The proof is completed by combining the bounds above and choosing the constants appropriately.

Recall that we defined the front of the infection in the direction $e_1$ as
$$r_t = \sup\{\langle x, e_1 \rangle : \xi_t(x) > 0\}. \qquad (5.8)$$
The next lemma states that, provided the density is large enough depending on the time parameter, there is a high probability that $r_t$ is large.
Lemma 5.3. There exists a positive constant $c_{14} > 0$ such that, for any positive integer $L \ge 1$, the bound (5.9) holds.

Proof. Consider the event $A_{k,i}$ defined as
$$A_{k,i} = \left\{ \begin{array}{l} \text{between times } k + \tfrac{i}{8} \text{ and } k + \tfrac{i+1}{8}, \text{ there exists a particle at position } (8k+i)e_1 \\ \text{that jumps to position } (8k+i+1)e_1, \text{ and there exist particles at positions } (8k+i)e_1 \\ \text{and } (8k+i+1)e_1 \text{ that do not jump} \end{array} \right\}, \qquad (5.10)$$
and notice that, on $\cap_{k=0}^{L-1} \cap_{i=0}^{7} A_{k,i}$, we have $r_L \ge 8L$. This is due to the fact that, between times $k$ and $k+1$, on the intersection $\cap_{k=0}^{L-1} \cap_{i=0}^{7} A_{k,i}$ the infection spreads from $8ke_1$ to $8(k+1)e_1$ and thus, at time $L$, there exists an infected particle at position $8Le_1$ (5.11). Combining this with a union bound yields the claim for some positive constant $c_{14}$, and concludes the proof.

The box notation
Let us now introduce the notation needed to develop our renormalization analysis. We first fix a sequence of scales by setting (5.13). The initial value $L_0$ will be chosen later to be a large enough integer.
Our goal is to bound the probability that the infection process travels very slowly in the direction $e_1$. The continuous-time nature of the process implies that events of this form do not have bounded support. For this reason, we introduce events that approximate them and have support contained in the box $B_k$. For each $k \in \mathbb{N}_0$, define the boxes $B_k(m)$ and regions $R_k(m)$. For $m = (x, t) \in \mathbb{Z}^d \times \mathbb{N}_0$, we denote by $(\xi^m_s)_{s \ge t}$ the infection process starting with initial configuration $\eta_t$ and with initial collection of infected particles given by those at $x$.
Consider the events
$$E_k(m) = \{\xi^m_t(x) = 0, \text{ for all } (x, t) \in R_k(m)\}. \qquad (5.17)$$
See Figure 3 for a representation of the event above for $d = 1$. Observe that the events $E_k(m)$ are non-increasing and have support inside $B_k(m)$. When $m = (0, 0)$, we omit this from the notation and denote $E_k(0, 0)$ simply by $E_k$. We introduce the sequence of densities in (5.18), and notice that, for each $k$, we have $\rho_{k+1} \le L_k^2$, since $L_0 \ge 2$. Furthermore, the sequence $(\rho_k)_{k \in \mathbb{N}_0}$ is monotone increasing and $\rho_\infty = \lim \rho_k$ exists and is finite. For each $k$, define also the probability $p_k$, which does not depend on $m$, by translation invariance of the process.
Finally, let $M_k$ denote the set of indices $m$ at scale $k$ inside the box $B_{k+1}$, and observe that (5.21) holds.

Estimates on $p_k$
In this subsection, we provide estimates on $p_k$, proving that, provided $L_0$ is large enough, the sequence $(p_k)_{k \in \mathbb{N}_0}$ decays quickly.
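The mechanism behind this fast decay can be illustrated numerically. The exact inequality relating $p_k$ and $p_{k+1}$ is the one proved in Lemma 5.4 below; the quadratic recursion used here is only a simplified stand-in, with an illustrative constant:

```python
# Illustration of why a renormalization-type recursion forces fast decay.
# Stand-in recursion: p_{k+1} <= C * p_k**2 (simplified; C is illustrative).
C = 10.0
p = 1e-3          # assumes the scheme is started with C * p_0 < 1
ps = [p]
for _ in range(6):
    p = C * p * p
    ps.append(p)

# If C * p_0 < 1, then C * p_k <= (C * p_0)**(2**k): doubly exponential decay.
for k, pk in enumerate(ps):
    assert C * pk <= (C * ps[0]) ** (2 ** k) * (1 + 1e-9)
```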
The first lemma we prove here states a recursive inequality that relates $p_k$ to $p_{k+1}$.
Lemma 5.4. There exist positive constants $\ell_0$ and $A$ such that, if $L_0 \ge \ell_0$ is an integer, then, for all $k \in \mathbb{N}_0$, the recursive bound holds, where $D_k(m)$ denotes the event where the infection $\xi^m$ travels abnormally fast before time $L_k$. We first claim that, on $E_{k+1}$, one of the following two conditions holds:
1. For some $m \in M_k$, the event $D_k(m)$ holds;
2. There are $2d + 15$ values $(m_i)_{i=1}^{2d+15}$ in $M_k$, with different time coordinates, such that $E_k(m_i)$ holds for all $i \le 2d + 15$.
In order to verify the claim, suppose we are in the event $E_{k+1}$, that the first condition does not hold, and that the second one holds for at most $2d + 14$ values of $m$ with different time coordinates.
For each $m = (x, t) \in M_k$ such that $\eta_t(x) > 0$, let $X(m)$ denote a point in $\mathbb{Z}^d$ such that
$$\langle X(m), e_1 \rangle = \max\big\{\langle x, e_1 \rangle : \xi^m_{L_k}(x) > 0\big\}. \qquad (5.25)$$
Since the first condition above does not hold, we obtain (5.26). Furthermore, using that the second condition does not hold, the lower bound can be improved for all but at most $2d + 14$ different time coordinates of $m$. Define the corresponding sequence of sites in $\mathbb{Z}^d$ and estimate the displacement of the front along this sequence; this yields a contradiction with the fact that we are in the event $E_{k+1}$. We just concluded that, on $E_{k+1}$, there are two possible outcomes: either $D_k(m)$ holds for some $m \in M_k$, or there exists a choice of indices $(m_i)_{i=1}^{2d+15}$ in $M_k$, with different time coordinates, such that $E_k(m_i)$ holds for all $i \le 2d + 15$.
Suppose we are in the last case described above, and fix one possible choice of indices $(m_i = (x_i, s_i))_{i=1}^{2d+15}$. Observe that, for all $i$, we have $L_k \le s_{i+2} - s_i \le L_{k+1}$.
Figure 4: The first application of the decoupling estimate. Small boxes represent the supports of the events $E_k(m_i)$ and the bold boxes are the supports of the events considered when applying the decoupling.
By possibly further increasing the value of $\ell_0$, we can apply Theorem 1.4 a few times (see Figure 4) to conclude the bound (5.31). Union bounds, Lemma 5.2, and a change of constants then yield the desired recursive inequality, concluding the proof.

Proof of Theorem 1.3
In this subsection, we combine the lemmas provided so far to conclude the proof of Theorem 1.3.
Since the event in (1.3) is non-increasing, it suffices to verify the statement for one value of ρ.
Define $\bar{E}_k(m)$ analogously to $E_k(m)$, but with $R_k(m)$ replaced by $\bar{R}_k(m)$, and notice that this defines a non-increasing sequence of events. In particular, for $m = (x, t)$, the corresponding bound holds for all $k \ge 0$ and all $m \in M_k$, provided $L_0$ is large enough. We can now proceed with the proof of Theorem 1.3.
Proof of Theorem 1.3. Choose $k_0$ large enough such that $\rho_\infty \le L^2_{k_0}$, and set $t_0 = L_{k_0}$. For $t \ge t_0$, let $k \ge k_0$ be the only value such that $t$ lies in the corresponding range of scales. In particular, if this holds simultaneously with $r_t \le t$, then all particles that are at a position that realizes $r_{lL_k}$ at time $lL_k$ must jump at least $7lL_k - t \ge 6L_k$ times between times $lL_k$ and $t$. This can be easily bounded with concentration on the number of jumps of any given particle. In conclusion, we obtain
$$\exp\big\{u\big(q(1)e^{\lambda} + q(-1)e^{-\lambda} - 1 - \lambda v_i - \lambda\epsilon\big)\big\} + \exp\big\{u\big(q(1)e^{-\lambda} + q(-1)e^{\lambda} - 1 + \lambda v_i - \lambda\epsilon\big)\big\} \le 2e^{-cu}, \qquad (A.12)$$
if $\lambda > 0$ is taken small enough, depending on the values of $q(1)$, $q(-1)$, and $\epsilon$. This yields (A.9) and, together with (A.7) and (A.8), concludes the proof.
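The sign of the exponent in (A.12) can be checked numerically: for a nearest-neighbor jump law $q$ on $\mathbb{Z}$ with drift $v = q(1) - q(-1)$, the function $q(1)e^{\lambda} + q(-1)e^{-\lambda} - 1 - \lambda v - \lambda\epsilon$ vanishes at $\lambda = 0$ with derivative $-\epsilon$, so it is negative for small $\lambda > 0$. A sketch with illustrative parameter values:

```python
import math

def exponent(lam, q1, qm1, eps):
    """q(1)e^lam + q(-1)e^{-lam} - 1 - lam*v - lam*eps, with v = q(1) - q(-1)."""
    v = q1 - qm1
    return q1 * math.exp(lam) + qm1 * math.exp(-lam) - 1 - lam * v - lam * eps

# Illustrative values: q(1) + q(-1) = 1, drift v = 0.2, slack eps = 0.05.
q1, qm1, eps = 0.6, 0.4, 0.05
assert abs(exponent(0.0, q1, qm1, eps)) < 1e-12   # value 0 at lambda = 0
lam = 0.05                                        # a small lambda suffices:
assert exponent(lam, q1, qm1, eps) < 0
# The mirrored term in (A.12) corresponds to swapping q(1) and q(-1):
assert exponent(lam, qm1, q1, eps) < 0
```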

B Sampling of random walks
In this section, we provide bounds for transition probabilities for biased random walks and collect some consequences of these bounds.
We first consider zero-mean random walks. The following lemma is a consequence of an analogous result for discrete-time balanced random walks.

Lemma B.1. Let $(X_s)_{s \ge 0}$ denote a nearest-neighbor continuous-time random walk whose transition probability has mean zero. There exists a positive constant $c_{18} > 0$ such that, for all $t \ge 1$ and $x \in \mathbb{Z}^d$ satisfying $\|x\| \le \sqrt{t}$,
$$P_0[X_t = x] \ge \frac{c_{18}}{t^{d/2}}. \qquad (B.1)$$

Proof. First notice that $(X_s)_{s \ge 0}$ may be realized as a discrete-time zero-mean lazy random walk $(\tilde{X}_n)_{n \ge 0}$ together with a Poisson process $(P_s)_{s \ge 0}$ on $\mathbb{R}_+$ with intensity $2$ that controls the jump times. According to the remark after Proposition 2.1.2 from [13], there exists a constant $c > 0$ such that, if $\|x\| \le \sqrt{n}$, then
$$P_0[\tilde{X}_n = x] \ge \frac{c}{n^{d/2}}. \qquad (B.2)$$
We now bound
$$P_0[X_t = x] \ge \sum_{k:\, |k - 2t| \le t} P[P_t = k]\, P_0[\tilde{X}_k = x] \ge \sum_{k:\, |k - 2t| \le t} P[P_t = k]\, \frac{c}{t^{d/2}} \ge \frac{c}{t^{d/2}}\, P[|P_t - 2t| \le t] \ge \frac{c_{18}}{t^{d/2}}, \qquad (B.3)$$
concluding the proof.

Our next goal is to obtain an analogue of Lemma B.1 for biased random walks.

Lemma B.2. Let $(X_s)_{s \ge 0}$ denote a biased nearest-neighbor random walk with transition probability $p(\cdot)$. Assume $p(e_i) > 0$ and $p(-e_i) > 0$, for all $i \in [d]$, and set
$$v = (v_1, \dots, v_d) = \sum_{y \sim 0} p(y)\, y \in \mathbb{R}^d. \qquad (B.4)$$
There exist positive constants $c_{19}$, $c_{20}$, and $c_{21}$ such that, if $t \ge c_{19}$ and $x \in \mathbb{Z}^d$ is such that $\|x - vt\| \le c_{20}\sqrt{t}$, then
$$P_0[X_t = x] \ge \frac{c_{21}}{t^{d/2}}. \qquad (B.5)$$

The proof is based on writing the biased random walk as the sum of drift terms and a zero-mean continuous-time random walk, and using the estimate provided by Lemma B.1.

Proof. For each $i \in [d]$, let
$$p_i = \min\{p(e_i), p(-e_i)\}, \qquad (B.6)$$
and write $Z = 2\sum_{i=1}^d p_i$. Let $(\tilde{X}_s)_{s \ge 0}$ be a continuous-time random walk with transition probability $q(\cdot)$ given by $q(e_i) = q(-e_i) = \frac{p_i}{Z}$. We can write
$$X_t = \sum_{i=1}^d \operatorname{Sign}(v_i)\, Y^i_t\, e_i + \tilde{X}_{t(1 - \sum_{i=1}^d |v_i|)}, \qquad (B.7)$$
where the $Y^i_t \sim \mathrm{Poisson}(t|v_i|)$ are independent. Split the probability of $X_t = x$ according to the value of the sum $\sum_{i=1}^d \operatorname{Sign}(v_i) Y^i_t e_i$. This yields
$$P_0[X_t = x] = \sum_{y} P\Big[\sum_{i=1}^d \operatorname{Sign}(v_i) Y^i_t e_i = y\Big]\, P_0\big[\tilde{X}_{t(1 - \sum_{i=1}^d |v_i|)} = x - y\big] \ge \frac{c}{t^{d/2}}\, P\Big[\Big\|\sum_{i=1}^d \operatorname{Sign}(v_i) Y^i_t e_i - vt\Big\| \le \sqrt{t}\Big], \qquad (B.10)$$
where we restricted the sum to $y$ with $\|y - vt\| \le \sqrt{t}$ and applied Lemma B.1 to $\tilde{X}$, which is possible for a suitable choice of $c_{20}$. The central limit theorem implies that, if $t$ is large enough, the last probability above is uniformly bounded from below by some positive constant. This concludes the proof.

The following proposition states that, provided two random walks do not start very far apart, the probability that they meet at a given time $t$ does not decay very fast.

Proposition B.3. Let $p(\cdot)$ be a transition probability satisfying all the hypotheses of Lemma B.2. There exists a positive constant $c_{22} > 0$ such that, for all $t \ge c_{19}$ and $x, y \in \mathbb{Z}^d$ such that $\|x - y\| \le c_{20}\sqrt{t}$, the following holds. If $(X_s)_{s \ge 0}$ and $(Y_s)_{s \ge 0}$ are independent random walks with jump distribution $p(\cdot)$ and initial positions $X_0 = x$ and $Y_0 = y$, then
$$P[X_t = Y_t] \ge \frac{c_{22}}{t^{d/2}}. \qquad (B.11)$$

Furthermore, in the proof of Lemma 5.3 we can bound the probability $P[A_{k,i}^c] \le P\big[\mathrm{Poisson}\big((1 - e^{-1/8})\, p(e_1)\, \cdots\big)\big]$. Figure 3 contains a representation of the sets $B_k$ and $R_k$ for $d = 1$.

Figure 3: A representation of the sets $B_k$ and $R_k$. The shaded area represents the infection process in the event $E_k$.
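The Poissonization step in the proof of Lemma B.1 can be checked exactly in $d = 1$: writing the rate-one symmetric walk through its Poisson jump clock gives $P_0[X_t = 0] = \sum_k P[P_t = k]\, P[S_k = 0]$ for a discrete simple walk $(S_k)$, and the result scales like $t^{-1/2}$. A minimal numerical sketch (the function name is illustrative):

```python
import math

def return_prob(t, kmax=None):
    """Exact P_0[X_t = 0] for the rate-1 symmetric walk on Z, via the
    Poisson(t) jump clock: sum_k e^{-t} t^k / k! * P[S_k = 0], where the
    discrete simple walk satisfies P[S_k = 0] = C(k, k/2) / 2^k for even k."""
    if kmax is None:
        kmax = int(4 * t) + 20
    total = 0.0
    for k in range(0, kmax + 1, 2):  # only even step counts can return
        log_poisson = -t + k * math.log(t) - math.lgamma(k + 1)
        log_walk = (math.lgamma(k + 1) - 2 * math.lgamma(k // 2 + 1)
                    - k * math.log(2))
        total += math.exp(log_poisson + log_walk)
    return total

# sqrt(t) * P_0[X_t = 0] approaches 1/sqrt(2*pi) ~ 0.3989, so the walk
# returns with probability of order t^{-d/2} (here d = 1), as in (B.1):
for t in (4.0, 16.0, 64.0):
    assert 0.35 < math.sqrt(t) * return_prob(t) < 0.45
```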