Variable speed symmetric random walk driven by the simple symmetric exclusion process

We prove a quenched functional central limit theorem for a one-dimensional random walk driven by a simple symmetric exclusion process. This model can be viewed as a special case of the random walk in a balanced random environment, for which the weak quenched limit is constructed as a function of the invariant measure of the environment viewed from the walk. We bypass the need to show the existence of this invariant measure. Instead, we find the limit of the quadratic variation of the walk and give an explicit formula for it.


Introduction
We prove a quenched functional central limit theorem for a one-dimensional random walk driven by a simple symmetric exclusion process. The model belongs to the class of random walks in dynamical random environments. Recent works have studied examples where the environment is an interacting particle system, including independent random walks [14], the contact process [8] and the simple symmetric exclusion process (SSEP).
To define a random walk driven by the SSEP, one fixes parameters $p_1, p_0, \rho \in [0,1]$, $\lambda_0, \lambda_1 > 0$ and makes the random walk jump from $x \in \mathbb{Z}$ to $x+1$ at time $t$ at rate $\lambda_1 p_1 \eta_t(x) + \lambda_0 p_0 (1-\eta_t(x))$, where $\eta_t(x)$ is the state of the exclusion process (either 0 or 1) at site $x$ and time $t$, started from equilibrium at density $\rho$. The rate for a jump from $x$ to $x-1$ is $\lambda_1 (1-p_1)\eta_t(x) + \lambda_0 (1-p_0)(1-\eta_t(x))$. Several cases have been studied. The results in [17] and [15] that we are about to cite were proven for a discrete-time random walk, but we believe that the continuous-time statements we give hold as well. In [17], laws of large numbers and Gaussian fluctuations are proven for $\lambda_0 = \lambda_1$ sufficiently large or sufficiently small, under appropriate assumptions on $p_0$ and $p_1$. When $\lambda_0 = \lambda_1$, [10] proves that the limiting speed, if any, lies strictly between $\lambda_0(2p_0-1)$ and $\lambda_1(2p_1-1)$. In [15] it is proven that, for $\lambda_0 = \lambda_1 = 1$, the law of large numbers holds for all $\rho$, with only two possible exceptions, and that when the speed is not zero a Gaussian central limit theorem holds. Moreover, when $p_0 = 1 - p_1$ (as in [2] and [17]) and $\rho = 1/2$, it was shown in [15] that the speed is zero, but it is an interesting open problem to determine the scale of the fluctuations in this case, and there are several competing conjectures: in [19] it is conjectured that under the scaling $t^{3/4}$ the limiting process is a fractional Brownian motion with Hurst index $H = 3/4$; in [12] it is conjectured (for a related continuous model) that the fluctuations are either of order $t^{1/2}$ (for a fast particle) or $t^{2/3}$ (for a slow particle); on the other hand, in [16] and [18] it is conjectured that, for either fast or slow particle dynamics, the fluctuations are of order $t^{1/2}$ for $t$ sufficiently large.
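To make the dynamics concrete, here is a minimal simulation sketch of these jump rates, using a standard Gillespie-type scheme of competing exponential clocks. Everything in it is illustrative rather than taken from the paper: the function name, the use of a ring of $L$ sites as a finite-volume stand-in for $\mathbb{Z}$, and the periodic lookup of the environment.

```python
import random

def simulate_walk_on_ssep(T, L, rho, lam0, lam1, p0, p1, seed=0):
    """Sketch: walk driven by an SSEP on a ring of L sites, run to time T.
    Returns the walker position X_T. Rates follow the display above:
    right jump at rate lam1*p1*eta(x) + lam0*p0*(1 - eta(x)), etc."""
    rng = random.Random(seed)
    # Equilibrium initial condition: i.i.d. Bernoulli(rho) occupations.
    eta = [1 if rng.random() < rho else 0 for _ in range(L)]
    x, t = 0, 0.0
    while True:
        occ = eta[x % L]  # environment seen at the walker's site
        right = lam1 * p1 * occ + lam0 * p0 * (1 - occ)
        left = lam1 * (1 - p1) * occ + lam0 * (1 - p0) * (1 - occ)
        total = right + left + L  # L bonds, each exchanging at rate 1
        t += rng.expovariate(total)
        if t > T:
            return x
        u = rng.random() * total
        if u < right:
            x += 1
        elif u < right + left:
            x -= 1
        else:
            b = rng.randrange(L)  # exchange across bond (b, b+1)
            eta[b], eta[(b + 1) % L] = eta[(b + 1) % L], eta[b]
```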
Here we allow $\lambda_0 \neq \lambda_1$ but assume $p_0 = p_1 = \frac{1}{2}$. In this setting, the random walk is a time-change of a simple symmetric random walk. The law of large numbers is immediate, and the problem is to prove convergence to Brownian motion and to compute the variance of the limiting Brownian motion. We perform this computation when the environment starts in equilibrium at density $\rho \in [0,1]$. Under these assumptions, our model falls into the class of balanced dynamic random environments, for which an invariance principle was proved in [9]. In this paper we give an entirely different proof of the invariance principle for this particular model. Since random walks in balanced environments are martingales, the key to proving an invariance principle is to show that the quadratic variation grows linearly. In all previous proofs of invariance principles for random walks in (static or dynamic) random environments, this was accomplished by proving the existence of an invariant measure for the environment viewed from the particle that is absolutely continuous with respect to the initial measure on environments (see, e.g., [22,13,6,9]). Here, however, we prove the linear growth of the quadratic variation without any reference to the existence of invariant measures for the environment viewed from the particle. Not only does this give a simpler proof of the invariance principle for this particular model, but it also enables us to compute explicitly the scaling constant in the invariance principle and to obtain quantitative estimates on the rate of convergence for the quadratic variation; see (3.54).
Since the underlying dynamic environment in our model has only two types of sites (particles/holes), the key to analyzing the growth rate of the quadratic variation is to compute the asymptotic fraction of time that the walk spends on particles, $\lim_{t\to\infty} t^{-1}\int_0^t \eta_s(X_s)\,ds$. We accomplish this by exhibiting an explicit function $\varphi$ and explicit constants $a$ and $b$ such that $L\varphi \approx a\xi_0 + b$, where $\xi_x(t) := \eta_t(x + X_t)$ and $L$ denotes the generator of the process $(\xi(t))_{t\ge 0}$, the environment as seen by the walk. This technique of estimating additive functionals $\int_0^t g(\xi(s))\,ds$ by solving an equation of the form $g(\xi) \approx a + Lu(\xi)$ was introduced in [21]. In the context of random walks in random environments, it has been used in [1], [20] and [23], among other works.
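To see heuristically why such a $\varphi$ suffices, note that
\[
M_t := \varphi(\xi(t)) - \varphi(\xi(0)) - \int_0^t L\varphi(\xi(s))\,ds
\]
is a martingale, so if $\varphi$ is bounded and the fluctuations of $M_t$ are of lower order than $t$, then
\[
a\int_0^t \xi_0(s)\,ds + bt \;\approx\; \int_0^t L\varphi(\xi(s))\,ds \;=\; \varphi(\xi(t)) - \varphi(\xi(0)) - M_t \;=\; o(t),
\]
which identifies $\lim_{t\to\infty} t^{-1}\int_0^t \xi_0(s)\,ds = -b/a$. The body of the paper implements this heuristic with quantitative error bounds.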

Model and results
Fix $\lambda < 1$ (so that both jump rates below are positive) and $\rho \in [0,1]$. We define the joint evolution of the random walk and the SSEP by the Markov generator $L_{\mathrm{joint}}$, acting on local functions $f : \mathbb{Z} \times \{0,1\}^{\mathbb{Z}} \to \mathbb{R}$ (a local function is a function of finitely many of the variables $\{\eta_x\}_{x\in\mathbb{Z}}$):
\[
L_{\mathrm{joint}} f(x,\eta) = \frac{1-\lambda\eta_x}{2}\big[f(x+1,\eta) + f(x-1,\eta) - 2f(x,\eta)\big] + \sum_{y\in\mathbb{Z}} \big[f(x,\eta^{y,y+1}) - f(x,\eta)\big], \qquad (2.1)
\]
where $\eta^{y,y+1}$ denotes the configuration $\eta$ with the values at sites $y$ and $y+1$ exchanged. In words, the random walk jumps from a particle at rate $1-\lambda$ and from a hole at rate 1, choosing one of its two neighbors with equal probability, while the environment evolves as a rate-1 SSEP.

For $k \in \mathbb{Z}$ and $\eta \in \{0,1\}^{\mathbb{Z}}$, let $\theta_k\eta$ denote the element of $\{0,1\}^{\mathbb{Z}}$ defined by $(\theta_k\eta)_x = \eta_{x+k}$. We use this to define the environment process viewed from the walk, $\xi(t) = \theta_{X_t}\eta(t)$. This is a Markov process, and its generator $L$ acts on local functions as
\[
Lf(\xi) = \frac{1-\lambda\xi_0}{2}\big[f(\theta_1\xi) + f(\theta_{-1}\xi) - 2f(\xi)\big] + L_{\mathrm{SSEP}}f(\xi),
\]
where $L_{\mathrm{SSEP}}f(\xi) = \sum_{y\in\mathbb{Z}}\big[f(\xi^{y,y+1}) - f(\xi)\big]$ is the generator of the SSEP with rate 1.
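For instance (a short computation under the generator displayed above), take the cylinder function $f(\xi) = \xi_0$. Only the two shifts and the exchanges across the bonds $(-1,0)$ and $(0,1)$ change the value at the origin, so
\[
L\xi_0 = \Big(1 + \frac{1-\lambda\xi_0}{2}\Big)\big(\xi_1 + \xi_{-1} - 2\xi_0\big).
\]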
Define the quenched probability $P_\eta(\cdot)$ as the law of the random walk evolving on top of the fixed environment trajectory $\eta = \{\eta_t,\ t \ge 0\}$. By (2.1), we have, for $t, h \ge 0$,
\[
P_\eta\big(X_{t+h} = x \pm 1 \mid X_t = x\big) = \frac{1-\lambda\eta_t(x)}{2}\,h + o(h).
\]
Define the annealed measure $P(\cdot)$ on the same space by
\[
P(\cdot) = \int P_\eta(\cdot)\,dQ_\mu(\eta), \qquad (2.5)
\]
where $Q_\mu$ is the distribution of the SSEP $\{\eta(t)\}_{t\ge 0}$ with initial distribution $\eta(0) \sim \mu$. Our main theorem gives a quenched invariance principle for the walk with an explicit scaling parameter (the variance).

Theorem 2.1. Let $\mu$ be the Bernoulli product measure with density $\rho$, and let $(X_t, \eta(t))_{t\ge 0}$ be the Markov process generated by $L_{\mathrm{joint}}$, started from $X_0 = 0$ and $\eta(0) \sim \mu$. Then, for $Q_\mu$-almost every $\eta$, under the quenched measure $P_\eta$, the sequence of processes $\big(X_{nt}/(\sigma\sqrt{n})\big)_{t\ge 0}$ converges in distribution, with respect to the $J_1$ Skorohod topology, to a standard Brownian motion, where
\[
\sigma^2 = 1 - \frac{2\lambda\rho}{2-\lambda+\lambda\rho}.
\]
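As a quick consistency check on this expression for $\sigma^2$: when $\rho = 1$ the walk sits on particles at all times and is a rate-$(1-\lambda)$ simple symmetric walk, and indeed $\sigma^2 = 1 - \frac{2\lambda}{2} = 1-\lambda$; when $\rho = 0$ or $\lambda = 0$ the walk is a rate-1 simple symmetric walk, and indeed $\sigma^2 = 1$.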
This theorem will follow from the next one, which gives the asymptotic fraction of time that the walk spends on top of particles.

Theorem 2.2. Under the assumptions of Theorem 2.1, for $Q_\mu$-almost every $\eta$,
\[
\lim_{t\to\infty} \frac{1}{t}\int_0^t \xi_0(s)\,ds = \frac{2\rho}{2-\lambda+\lambda\rho} \quad \text{in } P_\eta\text{-probability}. \qquad (2.8)
\]

Theorem 2.2 gives convergence under the quenched measure, which automatically implies the same convergence under the annealed measure. Moreover, the rate of convergence under the annealed measure admits an explicit upper bound, which is also a key tool in the proof of Theorem 2.2. To state it, set
\[
Y_t := \int_0^t \xi_0(s)\,ds - \frac{2\rho}{2-\lambda+\lambda\rho}\,t. \qquad (2.9)
\]

Theorem 2.3. Under the assumptions of Theorem 2.1, for every $\varepsilon > 0$ there exists a constant $C(\varepsilon) > 0$ such that
\[
P\big(|Y_t| \ge \varepsilon t\big) \le C(\varepsilon)\,t^{-1/15}. \qquad (2.10)
\]
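The constant $\frac{2\rho}{2-\lambda+\lambda\rho}$ in Theorem 2.2 can be checked against a simulation of the occupation time. The sketch below is again illustrative (ring geometry, function name, and parameter values are our own choices, not the paper's) and specializes the rates to the case $p_0 = p_1 = \frac{1}{2}$, $\lambda_1 = 1-\lambda$, $\lambda_0 = 1$ studied here.

```python
import random

def particle_time_fraction(T, L, rho, lam, seed=0):
    """Estimate t^{-1} * integral of xi_0(s) ds on a ring of L sites:
    the walk jumps at total rate 1-lam on particles and 1 on holes,
    and each of the L SSEP bonds exchanges at rate 1."""
    rng = random.Random(seed)
    eta = [1 if rng.random() < rho else 0 for _ in range(L)]
    x, t, occ = 0, 0.0, 0.0
    while t < T:
        walk_rate = 1 - lam if eta[x] else 1.0
        total = walk_rate + L
        dt = rng.expovariate(total)
        occ += eta[x] * min(dt, T - t)  # time spent sitting on a particle
        t += dt
        if t >= T:
            break
        if rng.random() * total < walk_rate:
            x = (x + rng.choice((1, -1))) % L  # symmetric walk step
        else:
            b = rng.randrange(L)
            eta[b], eta[(b + 1) % L] = eta[(b + 1) % L], eta[b]
    return occ / T

# Example: rho = 0.5, lam = 0.5; the predicted fraction is
# 2*0.5 / (2 - 0.5 + 0.25) = 4/7, approximately 0.571.
```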

Proofs
The key observation is that $X_t$ is a mean-zero martingale with respect to the filtration generated by $(X_t, \eta(t))_{t\ge 0}$. Its predictable quadratic variation is given by the formula
\[
\langle X \rangle_t - \langle X \rangle_s = \int_s^t \big(1 - \lambda\,\xi_0(r)\big)\,dr
\]
for any $t \ge s \ge 0$ and all $\eta$. We claim that if $\lim_{t\to\infty} t^{-1}\langle X\rangle_t = a$ in probability for some $a > 0$, then the martingale functional central limit theorem yields Theorem 2.1 with $\sigma^2 = a$. By Theorem 2.2,
\[
\lim_{t\to\infty} \frac{1}{t}\int_0^t \xi_0(s)\,ds = \frac{2\rho}{2-\lambda+\lambda\rho} \quad \text{in probability},
\]
so that $a = 1 - \frac{2\lambda\rho}{2-\lambda+\lambda\rho}$. Although in Theorem 2.2 the convergence holds quenched, we will prove the convergence under the annealed measure first. Our proof yields an estimate on the rate of convergence that is strong enough to allow us to deduce the quenched convergence from it.
Before starting the proofs, we point out that several technical lemmas will be used throughout. These lemmas, together with their proofs, are collected in Section 4; to keep the arguments in Section 3 from becoming tedious, we will use them with only brief mention.

Proof of the asymptotic limit of ξ(t) under the annealed measure
Our goal is to prove the following theorem, the annealed counterpart of Theorem 2.2.

Theorem 3.1. Under the assumptions of Theorem 2.1,
\[
\lim_{t\to\infty} \frac{1}{t}\int_0^t \xi_0(s)\,ds = \frac{2\rho}{2-\lambda+\lambda\rho} \quad \text{in } P\text{-probability}.
\]
We are going to choose $n$ and $\ell$, depending on $t$, in such a way that all three integrals on the right-hand side of the decomposition (3.6) converge to 0 in probability as $t \to \infty$; it turns out that one can choose $n$ and $\ell$ as suitable powers of $t$, subject to the conditions recorded in (3.10).
The proof strategy is to show that the integrand is in the range of the generator and use this to rewrite the integral as the sum of a martingale and a vanishing term. The martingale is then shown to vanish too, by means of an explicit bound on its quadratic variation.
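Concretely, for a square-integrable martingale $M_t$ with predictable quadratic variation $\langle M \rangle_t$, Chebyshev's inequality gives, for any $\varepsilon > 0$,
\[
P\big(|M_t| \ge \varepsilon t\big) \le \frac{E[M_t^2]}{\varepsilon^2 t^2} = \frac{E\langle M \rangle_t}{\varepsilon^2 t^2},
\]
so it suffices to show that $E\langle M \rangle_t = o(t^2)$; the martingale terms below are shown to vanish in exactly this form.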
Notice that the trivial pointwise bound is of order $n^2$, which is much bigger than $t$. The idea is that when $k$ is large the variables $\xi_k(t) - \rho$ are approximately independent and have mean zero. Recall that $\xi_x(t) = \eta_t(x + X_t)$, where $\eta(t)$ is a stationary SSEP and $X_t$ is the random walk. By Lemma 4.3, the first term is of order $t^3 n^{-4}$; it then follows from our assumption (3.10) that $\lim_{t\to\infty} t^{-1} n^2\, P(|X_t| > n) = 0$, as we need.
To bound the second term, we write a chain of estimates in which the fourth line is by Lemma 4.1, the fifth line is by Lemma 4.2, and the last line is by (3.10).
for some constant $c_0 > 0$ and all $t$ large enough. Letting $t \to \infty$, the right-hand side converges to zero, which finishes the proof of (3.17).
The next lemma controls $\tfrac{1}{t} M_t(\psi_{n,\ell})$.

Proof. There is an explicit formula for the predictable quadratic variation of $M_t(\psi_{n,\ell})$; our goal is to prove $\lim_{t\to\infty} t^{-2}\, E\langle M_\cdot(\psi_{n,\ell})\rangle_t = 0$. To bound the first term, notice that $\big(\psi_{n,\ell}(\xi^{x,x+1}) - \psi_{n,\ell}(\xi)\big)^2$ vanishes if $|x| > n$ and is no greater than 1 if $|x| \le n$, so the corresponding integral is at most of order $tn$, which is $o(t^2)$. The second term demands more work, while the third term is handled in the same way as the second. To start, we compute an expectation which is small for the same reason that (3.17) is small: the random variables $\xi_k(s)$, for large $k$, are approximately independent with mean $\rho$. We follow the same method of proof.
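(The explicit formula referred to above is the standard one for Markov jump processes, which we record in generic form:
\[
\langle M_\cdot(\psi) \rangle_t = \int_0^t \big( L\psi^2 - 2\psi\, L\psi \big)(\xi(s))\,ds,
\]
that is, the integral along the trajectory of the sum, over all possible transitions $\xi \to \xi'$, of the jump rate times $\big(\psi(\xi') - \psi(\xi)\big)^2$. Here the contributions split into the exclusion exchanges and the two walk shifts.)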
By Lemma 4.3, the first term is of order $t^2 n^{-4}$, so it vanishes as $t \to \infty$. The second term is bounded, for any $\delta > 0$, by a sum of error terms; collecting all of the above upper bounds and using assumption (3.10), the resulting bound vanishes as $t \to \infty$.
Proof of Proposition 3.2. By Chebyshev's inequality, for any $\varepsilon > 0$, and noticing that $\xi(0) = \eta(0)$, there exists a constant $c_2 > 0$ such that (3.31) holds. The last inequality uses the fact that $\{\eta_k(0) - \rho\}_{k\in\mathbb{Z}}$ is an i.i.d. mean-zero sequence: the cross terms vanish after taking the expectation.
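The variance computation behind this step is elementary (written out here for convenience): for i.i.d. $\eta_k(0) \sim \mathrm{Ber}(\rho)$,
\[
E\Big[\Big(\sum_{|k|\le n} \big(\eta_k(0)-\rho\big)\Big)^2\Big] = \sum_{|k|\le n} E\big[(\eta_k(0)-\rho)^2\big] = (2n+1)\,\rho(1-\rho),
\]
since the cross terms $E[(\eta_j(0)-\rho)(\eta_k(0)-\rho)]$, $j \neq k$, vanish by independence.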
Using this upper bound together with (3.16), (3.20) and (3.30), for any $\varepsilon > 0$ there is a constant $C_0(\varepsilon) > 0$ such that the resulting bound vanishes as $t \to \infty$, due to assumption (3.10). Hence Proposition 3.2 is proved.

The next proposition identifies the limit of the second part of the decomposition (3.6).

Proof. We show that the integrand is in the range of the generator and split the integral into a martingale term plus a vanishing term. Notice that
\[
M_t(\varphi_{n,\ell}) := \varphi_{n,\ell}(\xi(t)) - \varphi_{n,\ell}(\xi(0)) - \int_0^t L\varphi_{n,\ell}(\xi(r))\,dr \qquad (3.39)
\]
is a martingale with respect to the filtration generated by $(\xi(s))_{s\ge 0}$. To prove (3.34), we show that $|\varphi_{n,\ell}| \ll t$ and $\langle M_\cdot(\varphi_{n,\ell})\rangle_t \ll t^2$. For the first term, one has a pointwise bound with some constant $C > 0$, so it follows from (3.10) that $\lim_{t\to\infty} t^{-1}|\varphi_{n,\ell}(\xi)| = 0$ for any $\xi \in \{0,1\}^{\mathbb{Z}}$. It remains to prove that $t^{-1} M_t(\varphi_{n,\ell}) \to 0$ in probability. We prove this by controlling the second moment of $M_t(\varphi_{n,\ell})$ through its predictable quadratic variation. We claim an upper bound on $E\langle M_\cdot(\varphi_{n,\ell})\rangle_t$; it is easy to see that it holds with some constant $C > 0$ independent of $\ell$ and $n$.
Proof. Define, for $m > 0$, the event that the walk stays within distance $m$ of its starting point up to the relevant time. We will prove that, for $t$ large enough, the complement of this event has negligible probability. By (3.10) and the assumption that $t^{1/2} \ll m \ll n$, the horizontal separation of the relevant space-time boxes is $n + k - m \ge t^{\alpha'}$ for any $\alpha' \in \big(\tfrac{1}{2}, \alpha\big)$. Therefore, applying Proposition 4.4, the desired estimate holds.

Theorem 3.1 now follows immediately from Propositions 3.2, 3.5 and 3.7.

Proof of the asymptotic limit of ξ(t) under the quenched measure
First recall (3.32), (3.47) and (3.53). By choosing $\alpha$, $\ell$ and $m$ adequately, one can obtain an explicit upper bound on the rate of convergence in (2.8).
Proof of Theorem 2.3. Let $\alpha = 0.6$, $\ell = t^{0.2}$ and $m = t^{0.55}$. Then, for any $\varepsilon > 0$ and for $t$ large enough, one can check that the bound (2.10) holds with some $C(\varepsilon) > 0$.
The next lemma shows how to upgrade the convergence in probability under the annealed measure to convergence in probability under the quenched measure $P_\eta$, for $Q_\mu$-almost every $\eta$.

Proof. Define a sequence $\{t_k\}_{k\ge 1}$ by $t_k = k^{16}$. By (3.54), we have, for $k$ large enough, a bound of order $k^{-16/15}$ (3.56). By Chebyshev's inequality and Borel-Cantelli, the exceptional events along the sequence $\{t_k\}$ occur only finitely often, $Q_\mu$-almost surely. Any $t \ge 1$ lies in the interval $[t_k, t_{k+1})$ for some $k$. Notice that $Y_t$ has bounded increments: $|Y_s - Y_r| \le 2|s-r|$ for any $s, r > 0$. This gives the upper bound (3.59). For any $\eta \in A_{\varepsilon,\delta}$ there exists $k_\eta(\varepsilon,\delta)$ such that the bound holds for all $k > k_\eta(\varepsilon,\delta)$, which finishes the proof since $Q_\mu(A_{\varepsilon,\delta}) = 1$.
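The interpolation step uses only the bounded increments and the choice $t_k = k^{16}$: for $t \in [t_k, t_{k+1})$,
\[
|Y_t| \le |Y_{t_k}| + 2(t_{k+1} - t_k), \qquad \frac{t_{k+1}-t_k}{t_k} = \frac{(k+1)^{16} - k^{16}}{k^{16}} = O\big(k^{-1}\big),
\]
so the gaps between consecutive $t_k$ are negligible at scale $t$, and convergence along $\{t_k\}$ upgrades to convergence for all $t$.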
In the last part of this section we prove Theorem 2.2.
Proof of Theorem 2.2. From Lemma 3.8, we just need one more step to reach our final goal. To see this, for any $\varepsilon > 0$, let
\[
A_\varepsilon := \bigcap_{n\ge 1} A_{\varepsilon, \frac{1}{n}}. \qquad (3.62)
\]
We have $Q_\mu(A_\varepsilon) = 1$, since it is an intersection of countably many sets, each of probability 1. Choose any $\eta \in A_\varepsilon$; then, for any $n \ge 1$, the bound of Lemma 3.8 holds for all $t > t_\eta(\varepsilon, \frac{1}{n})$. Thus $t^{-1}|Y_t|$ converges to zero in probability under $P_\eta$.

Technical lemmas
Proof. For any $\lambda > 0$, this follows from Chebyshev's inequality applied in exponential form with parameter $\lambda$.

Proof. The first observation is that $X$ is a martingale, so Doob's $L^p$-inequality gives
\[
P\Big(\sup_{s\le t} |X_s| \ge \gamma\Big) \le \Big(\frac{6}{5}\Big)^6\, \frac{E[X_t^6]}{\gamma^6}.
\]
To bound the sixth moment, we compare our random walk with a simple symmetric walk: let $Y_1, \ldots, Y_n$ be i.i.d. random variables with $P(Y_1 = \pm 1) = 1/2$ and let $J_t$ denote the number of times that $X$ jumps during the time interval $[0, t]$. Since $J_t$ is stochastically dominated by a mean-$t$ Poisson random variable, the last expectation is bounded by a multiple of $t^3$.
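The moment bound being used here is classical; a sketch of the comparison: conditioning on $J_t = n$ and using the symmetry of the jumps,
\[
E[X_t^6] = E\Big[\Big(\sum_{i=1}^{J_t} Y_i\Big)^6\Big] \le C\,E[J_t^3],
\]
since $E\big[(\sum_{i\le n} Y_i)^6\big] \le C n^3$ for i.i.d. signs by Khintchine's inequality, and $E[J_t^3] \le C' t^3$ for $t \ge 1$ when $J_t$ is dominated by a Poisson($t$) variable.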
The next lemma comes from [15]. To get the version stated below, one only needs to change the last line of the original proof, using (4.2). Say that $f_1$ is supported on $B_1$ if, whenever two trajectories $\eta, \eta' : \mathbb{Z} \times \mathbb{R}_+ \to \{0,1\}$ satisfy $\eta_x(s) = \eta'_x(s)$ for all $(x,s) \in B_1$, one has $f_1(\eta) = f_1(\eta')$. Assume $f_2$ is supported on $B_2$. Finally, denote by $P_\rho$ the law of the SSEP started from equilibrium at density $\rho \in (0,1)$, that is, started from the product measure $\bigotimes_{x\in\mathbb{Z}} \mathrm{Ber}(\rho)$, and let $E_\rho$ be the expectation with respect to $P_\rho$.
Then $y \ge H^{\alpha}$ implies the stated estimate with some constant $C > 0$.