Intermittency on catalysts: symmetric exclusion

We continue our study of intermittency for the parabolic Anderson equation $\partial u/\partial t = \kappa\Delta u + \xi u$, where $u\colon \Z^d\times [0,\infty)\to\R$, $\kappa$ is the diffusion constant, $\Delta$ is the discrete Laplacian, and $\xi\colon \Z^d\times [0,\infty)\to\R$ is a space-time random medium. The solution of the equation describes the evolution of a ``reactant'' $u$ under the influence of a ``catalyst'' $\xi$. In this paper we focus on the case where $\xi$ is exclusion with a symmetric random walk transition kernel, starting from equilibrium with density $\rho\in (0,1)$. We consider the annealed Lyapunov exponents, i.e., the exponential growth rates of the successive moments of $u$. We show that these exponents are trivial when the random walk is recurrent, but display an interesting dependence on the diffusion constant $\kappa$ when the random walk is transient, with qualitatively different behavior in different dimensions. Special attention is given to the asymptotics of the exponents for $\kappa\to\infty$, which is controlled by moderate deviations of $\xi$ requiring a delicate expansion argument. In G\"artner and den Hollander \cite{garhol04} the case where $\xi$ is a Poisson field of independent (simple) random walks was studied. The two cases show interesting differences and similarities. Throughout the paper, a comparison of the two cases plays a crucial role.

1 Introduction and main results

Model
The parabolic Anderson equation is the partial differential equation

∂u(x, t)/∂t = κ∆u(x, t) + ξ(x, t)u(x, t), x ∈ Z^d, t ≥ 0. (1.1.1)

Here, the u-field is R-valued, κ ∈ [0, ∞) is the diffusion constant, ∆ is the discrete Laplacian, acting on u as

∆u(x, t) = Σ_{y∈Z^d: ‖y−x‖=1} [u(y, t) − u(x, t)] (1.1.2)

(‖·‖ is the Euclidean norm), while ξ is an R-valued random field that evolves with time and that drives the equation. As initial condition for (1.1.1) we take u(·, 0) ≡ 1. (1.1.4)
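As an illustration of (1.1.1)-(1.1.2), here is a minimal numerical sketch (ours, not the paper's): an explicit Euler discretization on a one-dimensional torus with a frozen Bernoulli(ρ) field ξ. The step size, torus size, and the frozen (time-independent) catalyst are simplifying assumptions for illustration only.

```python
import random

def discrete_laplacian(u):
    # Nearest-neighbour discrete Laplacian on a 1-d torus, cf. (1.1.2):
    # (Lap u)(x) = sum over neighbours y with |y - x| = 1 of [u(y) - u(x)].
    n = len(u)
    return [u[(x - 1) % n] + u[(x + 1) % n] - 2.0 * u[x] for x in range(n)]

def euler_step(u, xi, kappa, dt):
    # One explicit Euler step of du/dt = kappa * Lap(u) + xi * u, cf. (1.1.1).
    lap = discrete_laplacian(u)
    return [u[x] + dt * (kappa * lap[x] + xi[x] * u[x]) for x in range(len(u))]

random.seed(0)
n, rho, kappa, dt = 64, 0.4, 1.0, 0.01
xi = [1.0 if random.random() < rho else 0.0 for _ in range(n)]  # frozen catalyst
u = [1.0] * n                                                   # u(., 0) = 1
for _ in range(100):
    u = euler_step(u, xi, kappa, dt)
```

Since the Laplacian conserves the total mass on the torus, the spatial average of u can only grow through the source term ξu, which is the competition described below.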
Equation (1.1.1) is a discrete heat equation with the ξ-field playing the role of a source. What makes (1.1.1) particularly interesting is that the two terms on the right-hand side compete with each other: the diffusion induced by ∆ tends to make u flat, while the branching induced by ξ tends to make u irregular. Intermittency means that for large t the branching dominates, i.e., the u-field develops sparse high peaks in such a way that u and its moments are each dominated by their own collection of peaks (see Gärtner and König (11), Section 1.3, and den Hollander (10), Section 1.2). In the quenched situation this geometric picture of intermittency is well understood for several classes of time-independent random potentials ξ (see Sznitman (21) for Poisson clouds and Gärtner, König and Molchanov (12) for i.i.d. potentials with double-exponential and heavier upper tails). For time-dependent random potentials ξ, however, such results are not yet available. Instead, one restricts attention to understanding the phenomenon of intermittency indirectly by comparing the successive annealed Lyapunov exponents

λ_p = lim_{t→∞} (1/t) log ⟨u(0, t)^p⟩^{1/p}, p = 1, 2, . . . (1.1.5)

One says that the solution u is p-intermittent if the strict inequality

λ_p > λ_{p−1} (1.1.6)

holds. For a geometric interpretation of this definition, see (11), Section 1.3.
In their fundamental paper (3), Carmona and Molchanov succeeded in investigating the annealed Lyapunov exponents and in drawing the qualitative picture of intermittency for potentials of the form ξ(x, t) = Ẇ_x(t) (1.1.7), where {W_x, x ∈ Z^d} denotes a collection of independent Brownian motions. (In this important case, equation (1.1.1) corresponds to an infinite system of coupled Itô diffusions.) They showed that for d = 1, 2 intermittency of all orders is present for all κ, whereas for d ≥ 3 p-intermittency holds if and only if the diffusion constant κ is smaller than a critical threshold κ*_p tending to infinity as p → ∞. They also studied the asymptotics of the quenched Lyapunov exponent in the limit as κ ↓ 0, which turns out to be singular. Subsequently, the latter was more thoroughly investigated in papers by Carmona, Molchanov and Viens (4), Carmona, Koralov and Molchanov (2), and Cranston, Mountford and Shiga (6), (7).
In the present paper we study a different model, describing the spatial evolution of moving reactants under the influence of moving catalysts. In this model, the potential has the form ξ(x, t) = Σ_{k∈N} δ_{Y_k(t)}(x), with {Y_k, k ∈ N} a collection of catalyst particles performing a space-time homogeneous reversible particle dynamics with hard-core repulsion, and u(x, t) describes the concentration of the reactant particles given the motion of the catalyst particles. We will see later that the study of the annealed Lyapunov exponents leads to different dimension effects, and requires the development of different techniques, than in the white-noise case (1.1.7). Indeed, because of the non-Gaussian nature and the non-independent spatial structure of the potential, it is far from obvious how to tackle the computation of the Lyapunov exponents.
Let us describe our model in more detail. We consider the case where ξ is Symmetric Exclusion (SE), i.e., ξ takes values in {0, 1}^{Z^d × [0,∞)}, where ξ(x, t) = 1 means that there is a particle at x at time t and ξ(x, t) = 0 means that there is none, and particles move around according to a symmetric random walk transition kernel. We choose ξ(·, 0) according to the Bernoulli product measure with density ρ ∈ (0, 1), i.e., initially each site carries a particle with probability ρ and no particle with probability 1 − ρ, independently for different sites. For this choice, the ξ-field is stationary in time.
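The SE dynamics just described can be sketched numerically via the standard stirring construction (our hedged sketch, on a one-dimensional torus with nearest-neighbour kernel; the discrete-time sweep is a simplification of the continuous-time graphical representation). The key structural feature, conservation of the particle number, is visible directly:

```python
import random

def stir_step(eta, n_swaps, rng):
    # Stirring dynamics for nearest-neighbour symmetric exclusion on a
    # 1-d torus: each move exchanges the contents of a uniformly chosen
    # edge {x, x+1}. Exchanging occupancies automatically respects the
    # exclusion rule and preserves the number of particles.
    eta = list(eta)
    n = len(eta)
    for _ in range(n_swaps):
        x = rng.randrange(n)
        y = (x + 1) % n
        eta[x], eta[y] = eta[y], eta[x]
    return eta

rng = random.Random(1)
n, rho = 100, 0.3
eta0 = [1 if rng.random() < rho else 0 for _ in range(n)]  # Bernoulli(rho) start
eta = eta0
for _ in range(50):
    eta = stir_step(eta, n, rng)
```

Starting from the Bernoulli product measure with density ρ, the law of the configuration is stationary in time, in line with the choice of initial condition above.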
One interpretation of (1.1.1) and (1.1.4) comes from population dynamics. Consider a spatially homogeneous system of two types of particles, A (catalyst) and B (reactant), subject to: (i) A-particles behave autonomously, according to a prescribed stationary dynamics, with density ρ; (ii) B-particles perform independent random walks with diffusion constant κ and split into two at a rate that is equal to the number of A-particles present at the same location; (iii) the initial density of B-particles is 1. It is possible to add that B-particles die at rate δ ∈ (0, ∞). This amounts to the trivial transformation u(x, t) → u(x, t)e −δt .
In Kesten and Sidoravicius (16) and in Gärtner and den Hollander (10), the case was considered where ξ is given by a Poisson field of independent simple random walks. The survival versus extinction pattern (in (16) for δ > 0) and the annealed Lyapunov exponents (in (10) for δ = 0) were studied, in particular, their dependence on d, κ and the parameters controlling ξ.

SE, Lyapunov exponents and comparison with IRW
Throughout the paper, we abbreviate Ω = {0, 1}^{Z^d} (endowed with the product topology), and we let p : Z^d × Z^d → [0, 1] be the transition kernel of an irreducible random walk that is assumed to be symmetric,

p(x, y) = p(y, x) ∀ x, y ∈ Z^d. (1.2.2)
A special case is simple random walk, for which p(x, y) = 1/(2d) if ‖x − y‖ = 1 and p(x, y) = 0 otherwise (1.2.3). The exclusion process ξ is the Markov process on Ω whose generator L acts on cylindrical functions f as (see Liggett (19), Chapter VIII)

(Lf)(η) = Σ_{x,y∈Z^d} p(x, y) η(x)[1 − η(y)] [f(η^{x,y}) − f(η)], (1.2.4)

where η^{x,y} denotes the configuration obtained from η by interchanging the values at x and y (cf. (1.2.5)). The graphical representation shows that the evolution is invariant under time reversal and, in particular, the equilibria (ν_ρ)_{ρ∈[0,1]} are reversible. This fact will turn out to be very important later on. (In Fig. 1, the arrows represent a path from (x, 0) to (y, t).)
By the Feynman-Kac formula, the solution of (1.1.1) and (1.1.4) admits the representation (1.2.7), where X^κ is simple random walk on Z^d with step rate 2dκ and E_x denotes expectation with respect to X^κ given X^κ(0) = x. We will often write ξ_t(x) and X^κ_t instead of ξ(x, t) and X^κ(t), respectively. For p ∈ N and t > 0, define Λ_p(t) = (1/pt) log ⟨u(0, t)^p⟩ as in (1.2.8), which by (1.2.7) can be rewritten as the expectation (1.2.9) over p independent copies X^κ_q, q = 1, . . . , p, of X^κ; here E_{0,...,0} denotes expectation w.r.t. X^κ_q, q = 1, . . . , p, given X^κ_1(0) = · · · = X^κ_p(0) = 0, and the time argument t − s in (1.2.7) is replaced by s in (1.2.9) via the reversibility of ξ starting from ν_ρ. If the last quantity admits a limit as t → ∞, then we define λ_p = lim_{t→∞} Λ_p(t) to be the p-th annealed Lyapunov exponent.

From Hölder's inequality applied to (1.2.8) it follows that Λ_p(t) ≥ Λ_{p−1}(t) for all t > 0 and p ∈ N \ {1}. Hence λ_p ≥ λ_{p−1} for all p ∈ N \ {1}. As before, we say that the system is p-intermittent if λ_p > λ_{p−1}. In the latter case the system is q-intermittent for all q > p as well (cf. Gärtner and Molchanov (13), Section 1.1). We say that the system is intermittent if it is p-intermittent for all p ∈ N \ {1}.

Let (ξ̄_t)_{t≥0} be the process of Independent Random Walks (IRW) with step rate 1, transition kernel p(·, ·) and state space Ω̄ = N_0^{Z^d} with N_0 = N ∪ {0}. Let E^IRW_η denote expectation w.r.t. (ξ̄_t)_{t≥0} starting from ξ̄_0 = η ∈ Ω̄, and write E^IRW_{ν_ρ} = ∫_{Ω̄} ν_ρ(dη) E^IRW_η. Throughout the paper we will make use of an inequality comparing SE and IRW (Proposition 1.2.1). The proof of this inequality is given in Appendix A and uses a lemma due to Landim (18).
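The displays (1.2.7) and (1.2.9) referred to above were lost in extraction; up to notational details they should read as follows (a reconstruction from the surrounding definitions, with the ξ-expectation written as an expectation over ν_ρ):

```latex
u(x,t)=\mathrm{E}_x\Bigl[\exp\Bigl\{\int_0^t ds\,\xi\bigl(X^\kappa(s),\,t-s\bigr)\Bigr\}\Bigr],
\qquad
\Lambda_p(t)=\frac{1}{pt}\,\log\,
\mathbb{E}_{\nu_\rho}\,\mathrm{E}_{0,\dots,0}
\Bigl[\exp\Bigl\{\sum_{q=1}^{p}\int_0^t ds\,\xi\bigl(X^\kappa_q(s),\,s\bigr)\Bigr\}\Bigr].
```

The second formula uses the reversibility of ξ under ν_ρ to replace the time argument t − s by s.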
This powerful inequality will allow us to obtain bounds that are more easily computable.

Main theorems
Our first result is standard and states that the Lyapunov exponents exist and behave nicely as a function of κ. We write λ p (κ) to exhibit the dependence on κ, suppressing d and ρ.
Theorem 1.3.1. For all p ∈ N, the function κ → λ_p(κ) is continuous, non-increasing and convex on [0, ∞).
Our second result states that the Lyapunov exponents are trivial for recurrent random walk but are non-trivial for transient random walk (see Fig. 2), without any further restriction on p(·, ·). Our third result shows that for transient random walk the system is intermittent for small κ.
Our fourth and final result identifies the behavior of the Lyapunov exponents for large κ when d ≥ 4 and p(·, ·) is simple random walk (see Fig. 3).

Discussion
Theorem 1.3.1 gives general properties that need no further comment. We will see that they in fact hold for any stationary, reversible and bounded ξ.
The intuition behind Theorem 1.3.2 is the following. If the catalyst is driven by a recurrent random walk, then it suffers from "traffic jams", i.e., with not too small a probability there is a large region around the origin that the catalyst fully occupies for a long time. Since with not too small a probability the simple random walk (driving the reactant) can stay inside this large region for the same amount of time, the average growth rate of the reactant at the origin is maximal. This phenomenon may be expressed by saying that for recurrent random walk clumping of the catalyst dominates the growth of the moments. For transient random walk, on the other hand, clumping of the catalyst is present (the growth rate of the reactant is > ρ), but it is not dominant (the growth rate of the reactant is < 1). As the diffusion constant κ of the reactant increases, the effect of the clumping of the catalyst gradually diminishes and the growth rate of the reactant gradually decreases to the density of the catalyst.

Conjecture 1.4.1 involves the variational formula (1.4.2), in which ∇_{R^3} and ∆_{R^3} are the continuous gradient and Laplacian and ‖·‖_2 is the L^2(R^3)-norm. In Section 1.5 we will explain how this conjecture arises in analogy with the case of IRW studied in Gärtner and den Hollander (10). In words, we conjecture that in d = 3 the curves in Fig. 3 never merge, whereas for d ≥ 4 the curves merge successively.

Let us briefly compare our results for the simple symmetric exclusion dynamics with those of the IRW dynamics studied in (10). If the catalysts are moving freely, then they can accumulate with a not too small probability at single lattice sites. This leads to a double-exponential growth of the moments for d = 1, 2. The same is true for d ≥ 3 for certain choices of the model parameters ('strongly catalytic regime'). Otherwise the annealed Lyapunov exponents are finite ('weakly catalytic regime').
For our exclusion dynamics, there can be at most one catalyst particle per site, which leads to the degenerate behavior for d = 1, 2 (i.e., the recurrent case) as stated in Theorem 1.3.2(i). For d ≥ 3, the large-κ behavior of the annealed Lyapunov exponents turns out to be the same as in the weakly catalytic regime for IRW. The proof of Theorem 1.3.4 will be carried out in Section 4, essentially by 'reducing' its assertion to the corresponding statement in (10), as will be explained in Section 1.5. The reduction is highly technical, but seems to indicate a degree of 'universality' in the behavior of a larger class of models.

Finally, let us explain why we cannot proceed directly along the lines of (10). In that paper, the key is a Feynman-Kac representation of the moments. For the first moment, for instance, we have the representation (1.5.1), in which X is simple random walk on Z^d with generator κ∆ starting from the origin, ν is the density of the catalysts, and w denotes the solution of the random Cauchy problem (1.5.2). For large κ, the ξ-field in (1.5.1) evolves slowly and therefore does not manage to cooperate with the X-process in determining the growth rate. Also, the prefactor 1/κ in the exponent is small. As a result, the expectation over the ξ-field can be computed via a Gaussian approximation that becomes sharp in the limit as κ → ∞; this yields (1.5.3). (In essence, what happens here is that the asymptotics for κ → ∞ is driven by moderate deviations of the ξ-field, which fall in the Gaussian regime.) The exponent in the r.h.s. of (1.5.3) involves the two-point correlation function of the ξ-field, written out in (1.5.4). Now, for x, y ∈ Z^d and b ≥ a ≥ 0, this correlation can be computed as in (1.5.5), where the first equality uses the stationarity of ξ, the third equality uses (1.2.6) from the graphical representation, and the fourth equality uses that ν_ρ is Bernoulli. Substituting (1.5.5) into (1.5.4), we get that the r.h.s. of (1.5.3) equals (1.5.6). The r.h.s. of (1.3.1), which is valid for d ≥ 4, is obtained from the above computations by moving the expectation in (1.5.6) into the exponent.
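As a hedged sketch of the Gaussian step and the correlation identity described above (our reconstruction of the shape of (1.5.3) and (1.5.5); constants and time scales are indicative only), write ξ̄ := ξ − ρ for the centered field. Then, schematically,

```latex
\mathbb{E}_{\nu_\rho}\Bigl[\exp\Bigl\{\tfrac1\kappa\int_0^{\kappa t}\xi_s(X_s)\,ds\Bigr\}\Bigr]
\;\approx\;
\exp\Bigl\{\rho t+\tfrac{1}{2\kappa^2}\int_0^{\kappa t}\!\!ds\int_0^{\kappa t}\!\!du\;
\mathbb{E}_{\nu_\rho}\bigl[\bar\xi_s(X_s)\,\bar\xi_u(X_u)\bigr]\Bigr\},
\qquad
\mathbb{E}_{\nu_\rho}\bigl[\bar\xi_a(x)\,\bar\xi_b(y)\bigr]
=\rho(1-\rho)\,p_{b-a}(x,y)\quad(b\ge a\ge 0).
```

The second identity is where the Bernoulli product structure of ν_ρ and the graphical representation enter, and it is the source of the factor ρ(1 − ρ) in (1.3.1).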
Indeed, as (1.5.9) shows, the result in Theorem 1.3.4 comes from a second-order asymptotics on ξ and a first-order asymptotics on X in the limit as κ → ∞. Despite this simple fact, it turns out to be hard to make the above heuristics rigorous. For d = 3, on the other hand, we expect the first-order asymptotics on X to fail, leading to the more complicated behavior in (1.4.1).

Remark 2: In Gärtner and den Hollander (10) the catalyst was γ times a Poisson field with density ρ of independent simple random walks stepping at rate 2dθ, where γ, ρ, θ ∈ (0, ∞) are parameters. It was found that the Lyapunov exponents are infinite in d = 1, 2 for all p and in d ≥ 3 for p ≥ 2dθ/(γG_d), irrespective of κ and ρ. In d ≥ 3 for p < 2dθ/(γG_d), on the other hand, the Lyapunov exponents are finite for all κ and exhibit a dichotomy similar to the one expressed by Theorem 1.3.4 and Conjecture 1.4.1. Apparently, in this regime the two types of catalyst are qualitatively similar. Remarkably, the same asymptotic behavior for large κ was found (with ργ² replacing ρ(1 − ρ) in (1.3.1)), and the same variational formula as in (1.4.2) was seen to play a central role in d = 3. [Note: In (10) the symbols ν, ρ, G_d were used instead of ρ, θ, G_d/2d.]

Outline
In Section 2 we derive a variational formula for λ p from which Theorem 1.3.1 follows immediately. The arguments that will be used to derive this variational formula apply to an arbitrary bounded, stationary and reversible catalyst. Thus, the properties in Theorem 1.3.1 are quite general. In Section 3 we do a range of estimates, either directly on (1.2.9) or on the variational formula for λ p derived in Section 2, to prove Theorems 1.3.2 and 1.3.3. Here, the special properties of SE, in particular, its space-time correlation structure expressed through the graphical representation (see Fig. 1), are crucial. These results hold for an arbitrary random walk subject to (1.2.1-1.2.2). Finally, in Section 4 we prove Theorem 1.3.4, which is restricted to simple random walk. The analysis consists of a long series of estimates, taking up more than half of the paper and, in essence, showing that the problem reduces to understanding the asymptotic behavior of (1.5.6). This reduction is important, because it explains why there is some degree of universality in the behavior for κ → ∞ under different types of catalysts: apparently, the Gaussian approximation and the two-point correlation function in space and time determine the asymptotics (recall the heuristic argument in Section 1.5). The main steps of this long proof are outlined in Section 4.2.

Lyapunov exponents: general properties
In this section we prove Theorem 1.3.1. In Section 2.1 we formulate a large deviation principle for the occupation time of the origin in SE due to Landim (18), which will be needed in Section 3.2. In Section 2.2 we extend the line of thought in (18) and derive a variational formula for λ p from which Theorem 1.3.1 will follow immediately.

Large deviations for the occupation time of the origin
Kipnis (17), building on techniques developed by Arratia (1), proved that the occupation time of the origin up to time t,

T_t = ∫_0^t ds ξ_s(0), (2.1.1)

satisfies a strong law of large numbers and a central limit theorem. Landim (18) subsequently proved that T_t satisfies a large deviation principle, with a rate function Ψ_d given by an associated Dirichlet form. This rate function is continuous; for transient random walk kernels p(·, ·) it has a unique zero at ρ, whereas for recurrent random walk kernels it vanishes identically.

Variational formula for λ_p

Return to (1.2.9). In this section we show that, by considering ξ and X^κ_1, . . . , X^κ_p as a joint random process and exploiting the reversibility of ξ, we can use the spectral theorem to express the Lyapunov exponents in terms of a variational formula. From the latter it will follow that κ → λ_p(κ) is continuous, non-increasing and convex on [0, ∞). Define Y = (ξ, X^κ_1, . . . , X^κ_p). Then we may write (1.2.9) as an expectation for Y. The random process Y = (Y(t))_{t≥0} takes values in Ω × (Z^d)^p and has generator G^κ_V in L^2(ν_ρ ⊗ m^p) (endowed with the inner product (·, ·)), with L given by (1.2.4), ∆_i the discrete Laplacian acting on the i-th spatial coordinate, and m the counting measure on Z^d. Although the resulting spectral representation is a general fact, the proofs known to us (e.g. Carmona and Molchanov (3), Lemma III.1.1) do not work in our situation.
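In display form, the occupation-time objects from Kipnis and Landim described in Section 2.1 above read as follows (a reconstruction consistent with (2.1.1) and with the rate function Ψ_d used later in Section 4.5):

```latex
T_t=\int_0^t ds\;\xi_s(0),
\qquad
\limsup_{t\to\infty}\frac1t\log
\mathbb{P}_{\nu_\rho}\Bigl(\frac{T_t}{t}\in F\Bigr)\le-\inf_{\alpha\in F}\Psi_d(\alpha)
\quad\text{for closed }F\subseteq[0,1].
```

Here Ψ_d vanishes identically in the recurrent case and has a unique zero at ρ in the transient case, which is exactly the dichotomy exploited in Section 3.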
Proof. Let (P_t)_{t≥0} denote the semigroup generated by G^κ_V.
Upper bound: By a standard large deviation estimate for simple random walk, the contribution from walks leaving the box Q_{t log t} is negligible. Thus it suffices to focus on the term with the indicator. Estimate with the help of the spectral theorem (Kato (15), Section VI.5); here 1_{(Q_{t log t})^p} is the indicator function of (Q_{t log t})^p ⊂ (Z^d)^p and (E_µ)_{µ∈R} denotes the spectral family of orthogonal projection operators associated with G^κ_V.

Lower bound: For every δ > 0 there exists an f_δ ∈ L^2(ν_ρ ⊗ m^p) that nearly attains the top of the spectrum (Kato (15), Section VI.2; the spectrum of G^κ_V coincides with the set of µ's for which E_{µ+δ} − E_{µ−δ} ≠ 0 for all δ > 0). Approximating f_δ by bounded functions, we may without loss of generality assume that 0 ≤ f_δ ≤ 1. Similarly, approximating f_δ by bounded functions with finite support in the spatial variables, we may assume without loss of generality that there exists a finite K_δ ⊂ Z^d containing the spatial support, where p^κ_t(x, y) = P_x(X^κ(t) = y) and C_δ = min_{x∈K_δ} p^κ_1(0, x) > 0. The equality in (2.2.10) uses the Markov property and the fact that ν_ρ is invariant for the SE-dynamics. Next estimate as in (2.2.11), where z_i → y_i means that the argument z_i is replaced by y_i.
The proof also works for modifications of the random walk Y for which a lower bound similar to that in the last two lines of (2.2.10) can be obtained. Such modifications will be used later in Sections 4.5-4.6.
We are now ready to give the proof of Theorem 1.3.1.
The variational formula in Proposition 2.2.2 is useful to deduce qualitative properties of λ p , as demonstrated above. Unfortunately, it is not clear how to deduce from it more detailed information about the Lyapunov exponents. To achieve the latter, we resort in Sections 3 and 4 to different techniques, only occasionally making use of Proposition 2.2.2.

Lyapunov exponents: recurrent vs. transient random walk
In this section we prove Theorems 1.3.2 and 1.3.3. In Section 3.1 we consider recurrent random walk, in Section 3.2 transient random walk.

Recurrent random walk: proof of Theorem 1.3.2(i)
The key to the proof of Theorem 1.3.2(i) is the following.
Note that H^Q_0 = Q and that t → H^Q_t is non-decreasing. Denote by P and E, respectively, probability and expectation associated with the graphical representation. Then the relevant probability can be estimated in terms of E^{p(·,·)}_0 |R_t|, where R_t is the range after time t of the random walk with transition kernel p(·, ·) driving ξ, and E^{p(·,·)}_0 denotes expectation w.r.t. this random walk starting from 0. Indeed, by time reversal, the probability that there is a path from (x, 0) to {0} × [0, t] in the graphical representation is equal to the probability that the random walk starting from 0 hits x prior to time t. It follows from (3.1.3-3.1.6) that the desired estimate holds. Finally, since lim_{t→∞} (1/t) E^{p(·,·)}_0 |R_t| = 0 when p(·, ·) is recurrent (see Spitzer (20), Chapter 1, Section 4), we get (3.1.1).
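The sublinear growth of the expected range that drives the argument above can be illustrated numerically (our own Monte Carlo sketch for the recurrent 1-d simple random walk, where E|R_t| grows like √t; sample sizes and the seed are arbitrary choices):

```python
import random

def mean_range(steps, n_runs, rng):
    # Monte Carlo estimate of E|R_t|: the expected number of distinct
    # sites visited by a 1-d simple random walk in `steps` steps.
    total = 0
    for _ in range(n_runs):
        x, visited = 0, {0}
        for _ in range(steps):
            x += rng.choice((-1, 1))
            visited.add(x)
        total += len(visited)
    return total / n_runs

rng = random.Random(42)
r100 = mean_range(100, 200, rng)
r400 = mean_range(400, 200, rng)
```

The ratio (expected range)/t decreases with t, consistent with lim_{t→∞} (1/t) E|R_t| = 0 in the recurrent case.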
We are now ready to give the proof of Theorem 1.3.2(i).
Proof. Since p → λ_p is non-decreasing and λ_p ≤ 1 for all p ∈ N, it suffices to give the proof for p = 1. For p = 1, (1.2.9) gives an explicit expectation. By restricting X^κ to stay inside a finite box Q ⊂ Z^d up to time t and requiring ξ to be 1 throughout this box up to time t, we obtain the lower bound (3.1.9). For the second factor, we apply (3.1.1). For the third factor, we have a bound with λ_κ(Q) > 0 the principal Dirichlet eigenvalue on Q of −κ∆, the generator of the simple random walk X^κ. Combining (3.1.1) and (3.1.8-3.1.10), we arrive at the desired lower bound. Finally, let Q → Z^d and use that lim_{Q→Z^d} λ_κ(Q) = 0 for any κ, to arrive at λ_1 ≥ 1. Since, trivially, λ_1 ≤ 1, we get λ_1 = 1.

Throughout the present section we assume that the random walk kernel p(·, ·) is transient.
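Schematically, the localization argument in the proof of Theorem 1.3.2(i) above amounts to the following chain (a sketch; the o(1) terms are as t → ∞ with Q fixed, and the first one comes from (3.1.1)):

```latex
\Lambda_1(t)\;\ge\;1
+\frac1t\log\mathbb{P}_{\nu_\rho}\bigl(\xi_s(x)=1\ \forall\,x\in Q,\ s\in[0,t]\bigr)
+\frac1t\log\mathrm{P}_0\bigl(X^\kappa_s\in Q\ \forall\,s\in[0,t]\bigr)
\;=\;1+o(1)-\lambda_\kappa(Q)+o(1),
```

so that λ_1 ≥ 1 − λ_κ(Q), and letting Q ↑ Z^d (so that λ_κ(Q) ↓ 0) gives λ_1 ≥ 1, hence λ_1 = 1.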

Proof of the lower bound in Theorem
Proof. Since p → λ_p(κ) is non-decreasing for all κ, it suffices to give the proof for p = 1. For every ǫ > 0 there exists a suitable function φ_ǫ, from which a test function f_ǫ is built. Therefore we may use f_ǫ as a test function in (2.2.12) in Proposition 2.2.2. This gives a lower bound in which, in the last line, the defining properties of φ_ǫ are used. Because ρ ∈ (0, 1), it follows that for ǫ small enough the r.h.s. is strictly larger than ρ.

Proof of the asymptotics in Theorem 1.3.2(ii)
The proof of the next proposition is somewhat delicate.

Proof. We give the proof for p = 1. The generalization to arbitrary p is straightforward and will be explained at the end. We fix a cube Q = [−R, R]^d ∩ Z^d of side length 2R, centered at the origin, and a δ ∈ (0, 1). Limits are taken in a prescribed order. The proof proceeds in 4 steps, each containing a lemma.
Step 1: Let X κ,Q be simple random walk on Q obtained from X κ by suppressing jumps outside of Q. Then (ξ t , X κ,Q t ) t≥0 is a Markov process on Ω × Q with self-adjoint generator in L 2 (ν ρ ⊗ m Q ), where m Q is the counting measure on Q.
Proof. We consider the partition of Z d into cubes Q z = 2Rz + Q, z ∈ Z d . The Lyapunov exponent λ 1 (κ) associated with X κ is given by the variational formula (2.2.12-2.2.14) for p = 1. It can be estimated from above by splitting the sums over Z d in (2.2.14) into separate sums over the individual cubes Q z and suppressing in A 3 (f ) the summands on pairs of lattice sites belonging to different cubes. The resulting expression is easily seen to coincide with the original variational expression (2.2.12), except that the supremum is restricted in addition to functions f with spatial support contained in Q. But this is precisely the Lyapunov exponent λ Q 1 (κ) associated with X κ,Q . Hence, λ 1 (κ) ≤ λ Q 1 (κ), and this implies (3.2.14).
Step 2: For large κ the random walk X κ,Q moves fast through the finite box Q and therefore samples it in a way that is close to the uniform distribution.

(3.2.25)
where N_{t+δ} is the total number of jumps that ξ makes inside Q up to time t + δ. The second term in the r.h.s. of (3.2.25) equals the second term in the r.h.s. of (3.2.15). The first term will be negligible on an exponential scale for δ ↓ 0, because, as can be seen from the graphical representation, N_{t+δ} is stochastically smaller than the total number of jumps up to time t + δ of a Poisson process with rate |Q ∪ ∂Q|. Indeed, a direct computation with this Poisson bound yields the desired bound in (3.2.15). Note that a ↓ 0, b ↓ 1 as δ, ε ↓ 0 and hence N_0(a, b) ↓ N_0 > 1.
Step 3: By combining Lemmas 3.2.4-3.2.5, we now know that for any Q finite,

(3.2.30)
where (ξ̄_t)_{t≥0} is the process of Independent Random Walks on Z^d with step rate 1 and transition kernel p(·, ·), and E^IRW_{ν_ρ} = ∫ ν_ρ(dη) E^IRW_η. The r.h.s. can be computed and estimated as follows.

(3.2.33)
which has a representation in terms of Y, the single random walk with step rate 1 and transition kernel p(·, ·); here E^RW_x denotes expectation w.r.t. Y starting from Y_0 = x.

Step 4: The proof is completed by showing the following:

Proof. Let G denote the Green operator acting on functions V on Z^d. For arbitrary p, repeat the argument with an extra factor p in the exponent. Then proceed as before, which leads to Lemma 3.2.6 but with w_Q the solution of (3.2.33) with (p/|Q|) 1_Q(x) between braces. Then again proceed as before, which leads to (3.2.40) but with an extra factor p in the r.h.s. of (3.2.42). The latter gives a factor e^{pρt} replacing e^{ρt} in (3.2.32). Now use Lemma 3.2.7 to get the claim.

Lyapunov exponents: transient simple random walk
This section is devoted to the proof of Theorem 1.3.4, where d ≥ 4 and p(·, ·) is simple random walk given by (1.2.3), i.e., ξ is simple symmetric exclusion (SSE). The proof is long and technical, taking up more than half of the present paper. After a time scaling in Section 4.1, an outline of the proof will be given in Section 4.2. The proof for p = 1 will then be carried out in Sections 4.3-4.7. In Section 4.8, we will indicate how to extend the proof to arbitrary p.

Scaling
As before, we write X^κ_s, ξ_s(x) instead of X^κ(s), ξ(x, s). We introduce abbreviations (recall (1.2.4)). Three parameters will be important: t, κ and T. We will take limits in the order t → ∞, then κ → ∞, then T → ∞. Denote by P_{η,x} the law of Z starting from Z_0 = (η, x). Then Z = (Z_t)_{t≥0} is a Markov process on Ω × Z^d with generator A (acting on the Banach space of bounded continuous functions on Ω × Z^d, equipped with the supremum norm). Abbreviate X^κ_t = X_{κt}, t ≥ 0, where X = (X_t)_{t≥0} is simple random walk with step rate 2d, independent of (ξ_t)_{t≥0}. We therefore have a time-rescaled representation, and λ_1(κ) = κλ*_1(κ). Therefore, in what follows we will focus on the quantity λ*_1(κ) in (4.1.7) and compute its asymptotic behavior for large κ. We must show that (4.1.8) holds.

Outline
To prove (4.1.8), we have to study the asymptotics of the expectation on the r.h.s. of (4.1.7) as t → ∞ and κ → ∞ (in this order). This expectation has the form (4.2.1). Let ψ be the bounded solution of the equation −Aψ = φ. (In fact, such a solution exists only after an appropriate regularization, which turns out to be asymptotically correct for d ≥ 4 but not for d = 3.) Then the term in the exponent of (4.2.1) is a martingale M_t modulo a remainder that stays bounded as t → ∞. Hence, the asymptotic investigation of (4.2.1) reduces to the study of exponential moments of M_t. Here, N^r_t is an exponential martingale (Lemma 4.3.1(iii) below) and r is close to 1. Hence, applying Hölder's inequality, we may bound the expectation in the r.h.s. of (4.2.4) from above. The Rayleigh-Ritz formula shows that, asymptotically as t → ∞, the expectation in (4.2.7) gets larger when we replace X_s by 0. Using an explicit representation of ψ, we see that it decomposes in terms of certain kernels K^diag and K^off (Lemma 4.6.2 below). Substituting this into the previous formulas and separating the "diagonal" term from the "off-diagonal" term by use of the Cauchy-Schwarz inequality, we finally see that the whole proof reduces to showing the statements in (4.2.10-4.2.11) (Lemmas 4.6.3 and 4.6.4 below). To prove the latter statements, we use Jensen's inequality to move the kernels K^diag and K^off out of the exponents. Then we are left with the derivation of upper bounds for terms of the form

(4.2.12)
and a second expectation of similar form (Lemmas 4.6.8 and 4.6.10 below). The first expectation can be handled with the help of the IRW approximation (Proposition 1.2.1). The handling of the second expectation is more involved and requires, in addition, spectral methods.
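The martingale step in the outline above can be sketched as follows (standard Dynkin-formula reasoning in the notation Z = (ξ, X^κ) with generator A; the regularization of ψ is treated in Section 4.4):

```latex
M_t := \psi(Z_t)-\psi(Z_0)-\int_0^t (A\psi)(Z_s)\,ds \quad\text{(a martingale)},
\qquad
-A\psi=\phi \;\Longrightarrow\;
\int_0^t \phi(Z_s)\,ds = \psi(Z_0)-\psi(Z_t)+M_t .
```

The remainder ψ(Z_0) − ψ(Z_t) is bounded by 2‖ψ‖_∞ uniformly in t, which is why the analysis reduces to exponential moments of M_t.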

SSE+RW generator and an auxiliary exponential martingale
Recall (4.1.3-4.1.4). Let (P_t)_{t≥0} be the semigroup generated by A. The following lemma will be crucial for rewriting the expectation in the r.h.s. of (4.1.7) in a more manageable form. Among its assertions: for bounded continuous f, the transformed family (P^new_t)_{t≥0} defined in (4.3.2) is a strongly continuous semigroup with an explicitly identified generator.

Proof. The proof is standard.
(i) This follows from the fact that A is a Markov generator and ψ belongs to its domain (see Liggett (19), Chapter I, Section 5).
(ii) Let η ∈ Ω, x ∈ Z^d and f : Ω × Z^d → R bounded measurable. Rewrite (4.3.2) as in (4.3.6-4.3.7), where we use the Markov property of Z at time t_1 (under P_{η,x}) together with the fact that N^r_{t_1+t_2}/N^r_{t_1} only depends on Z_t for t ∈ [t_1, t_1 + t_2]. Equations (4.3.6-4.3.7) show that (P^new_t)_{t≥0} is a semigroup, which is easily seen to be strongly continuous. Taking the derivative of (4.3.2) in norm w.r.t. t at t = 0, we identify the generator. Inverting this Laplace transform, we obtain the claimed formula. Applying the Markov property of Z at time t, we conclude, where the third equality uses (4.3.11).
(iv) This follows from (iii) via a calculation similar to (4.3.7).

Proof of Theorem 1.3.4
In this section we compute upper and lower bounds for the r.h.s. of (4.1.7) in terms of certain key quantities (Proposition 4.4.1 below). We then state two propositions for these quantities (Propositions 4.4.2-4.4.3 below), from which Theorem 1.3.4 will follow. The proof of these two propositions is given in Sections 4.6-4.7.
For T > 0, let ψ : Ω × Z^d → R be defined by (4.4.1), where (P_t)_{t≥0} is the semigroup generated by A (recall (4.1.4)). We have the identity (4.4.2), where p_t(x, y) is the probability that simple random walk with step rate 1 moves from x to y in time t (recall that we assume (1.2.3)). Using (1.2.6), we obtain the representation (4.4.3). Note that ψ depends on κ and T; we suppress this dependence. A similar representation holds for related quantities. The auxiliary function ψ will play a key role throughout the remaining sections. The integral in (4.4.1) is a regularization that is useful when dealing with central limit type behavior of Markov processes (see e.g. Kipnis (17)). Heuristically, T = ∞ corresponds to −Aψ = φ. Later we will let T → ∞.
The following proposition serves as the starting point of our asymptotic analysis. In it, 1/r + 1/q = 1, with r, q > 1 in the first inequality and q < 0 < r < 1 in the second inequality.
Proof. Recall (4.1.7). From the first line of (4.3.1) and (4.4.4) it follows that (4.4.8) holds. Hence we obtain (4.4.9) and (4.4.10). By Hölder's inequality, with r, q > 1 such that 1/r + 1/q = 1, it follows from (4.4.9) that (4.4.12) holds, where the second line of (4.4.12) comes from the fact that N^r_t = exp[V^r_t] is a martingale, by Lemma 4.3.1(iii). Similarly, by the reverse of Hölder's inequality, with q < 0 < r < 1 such that 1/r + 1/q = 1, it follows from (4.4.9) that (4.4.13) holds. The middle term in the r.h.s. of (4.4.10) can be discarded, because (4.4.3) shows that −ρT ≤ ψ ≤ (1 − ρ)T. Apply the Cauchy-Schwarz inequality to the r.h.s. of (4.4.12-4.4.13) to separate the other two terms in the r.h.s. of (4.4.10).
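The Hölder splitting used in the proof above is, schematically (our generic notation, not the paper's: A stands for the martingale part, whose exponential has expectation 1, and B for the bounded remainder):

```latex
\mathbb{E}\bigl[e^{A+B}\bigr]\;\le\;
\bigl(\mathbb{E}\,e^{rA}\bigr)^{1/r}\bigl(\mathbb{E}\,e^{qB}\bigr)^{1/q},
\qquad \tfrac1r+\tfrac1q=1,\ r,q>1,
```

with the inequality reversed when q < 0 < r < 1, which is what produces the matching upper and lower bounds in Proposition 4.4.1.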
Note that in the r.h.s. of (4.4.7) the prefactors of the logarithms and the prefactors in the exponents are both positive for the upper bound and both negative for the lower bound. This will be important later on.
The following two propositions will be proved in Sections 4.6 and 4.7, respectively. Abbreviate the two key quantities as I^{r,q}_1(κ, T) and I^{r,q}_2(κ, T). Letting r tend to 1, we obtain Theorem 1.3.4.

Preparatory facts and notation
In order to estimate I r,q 1 (κ, T ) and I r,q 2 (κ, T ), we need a number of preparatory facts. These are listed in Lemmas 4.5.1-4.5.4 below.
It follows from (4.4.3) that (4.5.1) and (4.5.2) hold, where we recall the definitions of 1[κ] and η^{a,b} in (4.1.1) and (1.2.5), respectively. We need bounds on both these differences.
Lemma 4.5.1. For any η ∈ Ω, a, b, x ∈ Z^d and κ, T > 0, the bounds in (4.5.3) hold, where G_d is the Green function at the origin of simple random walk.
Proof. The bound in (4.5.3) is immediate from (4.5.1). By (4.5.2), we have a sum over transition probabilities. Using the bound p_t(x, y) ≤ p_t(0, 0) (which is immediate from the Fourier representation of the transition kernel), we get the first estimate. Again by (4.5.2), we have (4.5.8), where ∆_1 denotes the discrete Laplacian acting on the first coordinate, and in the fifth line we use that (∂/∂t)p_t = (1/2d)∆_1 p_t. For x ∈ Z^d, let τ_x : Ω → Ω be the x-shift on Ω defined by (4.5.13). An upper bound is obtained by dropping B_3(f), i.e., the part associated with the simple random walk X. After that, split the supremum into two parts as in (4.5.14), where f_z(η) = f(η, z)/g(z) with g(z)² = ∫_Ω ν_ρ(dη) f(η, z)². The second supremum in (4.5.14), which runs over a family of functions indexed by z, can be brought under the sum. This gives the r.h.s. of (4.5.15). By (4.5.11) and the shift-invariance of ν_ρ, we may replace z by 0 under the second supremum in (4.5.15), in which case the latter no longer depends on z, and we get (4.5.16).

Proof. First, using that Ψ_d(ρ) = 0, we obtain the lower bound. Next, for any δ > 0, we have the upper bound (4.5.19), where in the second inequality we use that Ψ_d has a unique zero at ρ. Letting γ ↓ 0 followed by δ ↓ 0, we get the desired upper bound.
Lemma 4.5.4. There exists C > 0 such that, for all t ≥ 0 and x, y ∈ Z^d, the bound (4.5.20) holds.

Proof. This is a standard fact. Indeed, we can decompose the transition kernel of simple random walk with step rate 1 into a product of one-dimensional kernels,

p_t(x, y) = Π_{j=1}^d p^{(1)}_{t/d}(x_j, y_j), x = (x_1, . . . , x_d), y = (y_1, . . . , y_d), (4.5.21)

where p^{(1)}_t(x, y) is the transition kernel of 1-dimensional simple random walk with step rate 1. In Fourier representation,

p^{(1)}_t(0, x) = (1/2π) ∫_{−π}^{π} dθ e^{−t(1−cos θ)} cos(xθ).

The bound in (4.5.20) follows from (4.5.21) and this representation.

Lemma 4.6.2. For any κ, T > 0, α ∈ R and r > 0, the lim sup bound (4.6.2) holds.

Lemma 4.6.6. Uniformly in η ∈ Ω and x ∈ Z^d, as κ → ∞, (4.6.9) holds. Taylor expansion of the r.h.s. of (4.6.9) gives a refined expansion, uniformly in η ∈ Ω and x ∈ Z^d. Taylor expansion of the r.h.s. of (4.6.11) gives that, uniformly in η ∈ Ω and x ∈ Z^d, (4.6.14) holds, where K_{κ,T} : Z^d × Z^d → R is given by (4.6.15). Therefore, for all κ, T > 0, we obtain a bound in terms of a function W which satisfies W(η, x) = W(τ_x η, 0), as required in (4.5.11). Splitting the sum in the r.h.s. of (4.6.16) into its diagonal and off-diagonal part and using the Cauchy-Schwarz inequality, we arrive at (4.6.2).
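As a numerical sanity check of the Fourier representation used in the proof of Lemma 4.5.4 above (our own sketch; the grid size is an arbitrary choice), one can evaluate p_t^{(1)}(0, 0) by a midpoint rule and verify the local-CLT-type decay C/√(1 + t):

```python
import math

def p1_00(t, n=20000):
    # p_t^{(1)}(0,0) for rate-1 continuous-time simple random walk on Z,
    # via the Fourier representation
    #   p_t^{(1)}(0,0) = (1/2pi) * int_{-pi}^{pi} exp(-t(1 - cos theta)) d theta,
    # evaluated with a midpoint rule on n subintervals.
    h = 2.0 * math.pi / n
    s = 0.0
    for k in range(n):
        theta = -math.pi + (k + 0.5) * h
        s += math.exp(-t * (1.0 - math.cos(theta)))
    return s * h / (2.0 * math.pi)

vals = [(t, p1_00(t)) for t in (1.0, 4.0, 16.0, 64.0)]
```

The values decrease in t, and p_t^{(1)}(0,0)·√(1 + t) stays bounded, which is the shape of the bound (4.5.20).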

Proof of Lemma 4.6.3
The proof of Lemma 4.6.3 is based on the following two lemmas. Recall (2.1.1).
Lemma 4.6.7. For any T > 0 there exists C_T > 0, satisfying lim_{T→∞} C_T = 0, such that the corresponding estimate holds. Before giving the proofs of Lemmas 4.6.7-4.6.8, we first prove Lemma 4.6.3.
Proof of Lemma 4.6.12. We prove the second line of (4.6.38). The first line follows by symmetry.