Central Limit Theorem for a Class of Linear Systems

We consider a class of interacting particle systems with values in [0, ∞)^{Z^d}, of which the binary contact path process is an example. For d ≥ 3 and under a certain square integrability condition on the total number of the particles, we prove a central limit theorem for the density of the particles, together with upper bounds for the density of the most populated site and the replica overlap.


The binary contact path process (BCPP)
We start with a motivating simple example. Let η_t = (η_{t,x})_{x∈Z^d} ∈ N^{Z^d}, t ≥ 0, be the binary contact path process (BCPP for short) with parameter λ > 0. Roughly speaking, the BCPP is an extended version of the basic contact process, in which not only the presence/absence of the particles at each site, but also their number, is considered. The BCPP was originally introduced by D. Griffeath [4]. Here, we explain the process following the formulation in the book of T. Liggett [5, Chapter IX]. Let τ_{z,i} (z ∈ Z^d, i ∈ N*) be i.i.d. mean-one exponential random variables and T_{z,i} = τ_{z,1} + ... + τ_{z,i}. We suppose that the process (η_t) starts from a deterministic configuration η_0 = (η_{0,x})_{x∈Z^d} ∈ N^{Z^d} with |η_0| < ∞. At time t = T_{z,i}, η_{t−} is replaced by η_t randomly as follows: for each e ∈ Z^d with |e| = 1, with probability λ/(2dλ + 1),

η_{t,x} = η_{t−,x} + η_{t−,z} if x = z + e, and η_{t,x} = η_{t−,x} otherwise

(all the particles at site z are duplicated and added to those on the site z + e), and with probability 1/(2dλ + 1),

η_{t,x} = 0 if x = z, and η_{t,x} = η_{t−,x} otherwise

(all the particles at site z disappear). The replacement occurs independently for different (z, i) and independently from {τ_{z,i}}_{z,i}. A motivation to study the BCPP comes from the fact that the projected process (η_{t,x} ∧ 1)_{x∈Z^d}, t ≥ 0, is the basic contact process [4]. Let

κ₁ = (2dλ − 1)/(2dλ + 1) and η̄_t = (e^{−κ₁ t} η_{t,x})_{x∈Z^d}.
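The replacement rule above can be sketched in code. A minimal illustration with hypothetical helper names (the paper works on all of Z^d; here a configuration is stored as a finite dictionary {site: particle count}):

```python
import random

def bcpp_update(eta, z, branch):
    """Apply one replacement event at site z.

    branch is either a unit vector e (all particles at z are duplicated
    and added to those at z + e) or the string "death" (all particles
    at z disappear)."""
    eta = dict(eta)
    if branch == "death":
        eta.pop(z, None)
    else:
        target = tuple(zi + ei for zi, ei in zip(z, branch))
        eta[target] = eta.get(target, 0) + eta.get(z, 0)
    return {x: n for x, n in eta.items() if n > 0}

def sample_branch(rng, d, lam):
    """Each of the 2d duplication branches has probability lam/(2*d*lam + 1);
    the remaining mass 1/(2*d*lam + 1) is the death branch."""
    p_dup = lam / (2 * d * lam + 1)
    units = [tuple((s if i == j else 0) for j in range(d))
             for i in range(d) for s in (1, -1)]
    u = rng.random()
    for k, e in enumerate(units):
        if u < (k + 1) * p_dup:
            return e
    return "death"

# One duplication event at the origin in d = 3:
eta = bcpp_update({(0, 0, 0): 2}, (0, 0, 0), (1, 0, 0))
# eta == {(0, 0, 0): 2, (1, 0, 0): 2}
```

Sampling a branch with `sample_branch` and applying it with `bcpp_update` at an exponential event time reproduces one step of the dynamics described above.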
Then, (|η̄_t|)_{t≥0} is a nonnegative martingale and therefore the following limit exists almost surely:

|η̄_∞| = lim_{t→∞} |η̄_t|,

where the non-degeneracy of the limit is characterized in terms of π_d, the return probability for the simple random walk on Z^d [4, Theorem 1]. It is known that π_d ≤ π₃ = 0.3405... for d ≥ 3 [7, page 103].
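The martingale property behind this limit amounts to a one-line generator computation; as a sketch, consistent with the jump rule above (with |η_t| = Σ_x η_{t,x}):

```latex
% One event at site z occurs at rate 1.  Each of the 2d duplication
% branches (prob. \lambda/(2d\lambda+1)) adds \eta_{t,z} particles, and the
% death branch (prob. 1/(2d\lambda+1)) removes \eta_{t,z} particles, so
\frac{d}{dt}\,P\bigl[\,|\eta_t|\,\bigr]
  =\sum_{z\in\mathbb{Z}^d}
    \Bigl(\frac{2d\lambda}{2d\lambda+1}-\frac{1}{2d\lambda+1}\Bigr)P[\eta_{t,z}]
  =\kappa_1\,P\bigl[\,|\eta_t|\,\bigr],
  \qquad \kappa_1=\frac{2d\lambda-1}{2d\lambda+1}.
% Hence |\bar\eta_t| = e^{-\kappa_1 t}|\eta_t| has constant expectation, and
% the full martingale property follows from the Markov property.
```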
We denote the density of the particles by

ρ_{t,x} = η_{t,x}/|η_t|, x ∈ Z^d (on the event {|η_t| > 0}).

Interesting objects related to the density are

ρ*_t = max_{x∈Z^d} ρ_{t,x} and R_t = Σ_{x∈Z^d} ρ_{t,x}²:

ρ*_t is the density at the most populated site, while R_t is the probability that a given pair of particles at time t are at the same site. We call R_t the replica overlap, in analogy with spin glass theory. Clearly, (ρ*_t)² ≤ R_t ≤ ρ*_t. These quantities convey information on localization/delocalization of the particles. Roughly speaking, large values of ρ*_t or R_t indicate that most of the particles are concentrated on a small number of "favorite sites" (localization), whereas small values of them imply that the particles are spread out over a large number of sites (delocalization).
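These quantities are elementary to compute for a finite configuration; a minimal sketch with hypothetical helper names, which also illustrates the inequalities (ρ*_t)² ≤ R_t ≤ ρ*_t:

```python
# For a configuration with finitely many particles, compute the density
# rho_{t,x} = eta_x / |eta|, the maximal density rho*_t and the replica
# overlap R_t = sum_x rho_{t,x}^2.

def density(eta):
    total = sum(eta.values())
    return {x: n / total for x, n in eta.items()}

def max_density_and_overlap(eta):
    rho = density(eta)
    rho_star = max(rho.values())                # density of the most populated site
    overlap = sum(r * r for r in rho.values()) # P(two sampled particles share a site)
    return rho_star, overlap

# Example: 3 particles on one site and 1 on another.
rho_star, R = max_density_and_overlap({(0, 0, 0): 3, (1, 0, 0): 1})
# The elementary inequalities (rho*)^2 <= R <= rho* hold for any configuration.
assert rho_star ** 2 <= R <= rho_star
```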
As a special case of Corollary 1.2.2 below, we have the following result, which shows the diffusive behavior and the delocalization of the BCPP under the condition (1.1), where C_b(R^d) stands for the set of bounded continuous functions on R^d, and ν is the Gaussian measure with

The results
We generalize Theorem 1.1.1 to a certain class of linear interacting particle systems with values in [0, ∞)^{Z^d} [5, Chapter IX]. Recall that the particles in the BCPP either die or make binary branching. To describe a more general "branching mechanism", we introduce a random vector K = (K_x)_{x∈Z^d} which is bounded and of finite range, together with i.i.d. random vectors with the same distribution as K, independent of {τ_{z,i}}_{z∈Z^d, i∈N*}. We suppose that the process (η_t)_{t≥0} starts from a deterministic configuration. The BCPP is a special case of this set-up, in which

K = (δ_{x,0} + δ_{x,e})_{x∈Z^d} with probability λ/(2dλ + 1), for each 2d neighbor e of 0, and K ≡ 0 with probability 1/(2dλ + 1). (1.6)

A formal construction of the process (η_t)_{t≥0} can be given as a special case of [5, page 427, Theorem 1.14] via the Hille-Yosida theory. In section 1.3, we will also give an alternative construction of the process in terms of a stochastic differential equation. We set η̄_t = (e^{−κ₁ t} η_{t,x})_{x∈Z^d}, where κ₁ is defined by (1.7). Then,

(|η̄_t|)_{t≥0} is a nonnegative martingale. (1.9)

The above martingale property can be seen by the same argument as in [5, page 433, Theorem 2.2 (b)]. For the reader's convenience, we will also present a simpler proof in section 1.3 below. By (1.9), the following limit exists almost surely:

|η̄_∞| = lim_{t→∞} |η̄_t|.

To state Theorem 1.2.1, we define where ((S_t)_{t≥0}, P^x_S) is the continuous-time random walk on Z^d starting from x ∈ Z^d, with the generator Then, referring to (1.7)-(1.12), the following are equivalent: for all t > 0 and f :

The main point of Theorem 1.2.1 is that the condition (a), or equivalently (b), implies the central limit theorem (c) (see also Corollary 1.2.2 below). This seems to be the first result in which the central limit theorem for the spatial distribution of the particles is shown in the context of linear systems. Some other parts of our results ((a) ⇒ (b), and Theorem

(ii) to show the central limit theorem for the "weighted" Markov chain, where the weight comes from the additive functional due to the Feynman-Kac formula (Lemma 2.2.2 below).
The above strategy was adopted earlier by one of the authors for the branching random walk in random environment [10]. There, the Markov chain alluded to above is simply the product of simple random walks on Z^d, so that the central limit theorem with the Feynman-Kac weight is relatively easy. Since the Markov chain in the present paper is no longer a random walk, it requires more work. However, the good news here is that the Markov chain we have to work with is "close" to a random walk. In fact, we obtain the central limit theorem by perturbation from that for the random walk case. Some other remarks on Theorem 1.2.1 are in order:

1) The condition (1.13) guarantees a reasonable non-degeneracy for the transition mechanism (1.5). On the other hand, (1.14) follows from a stronger condition, which amounts to saying that the transition mechanism (1.5) updates the configuration by "at most one coordinate at a time". Typical examples of such K are given by ones which satisfy: These include not only the BCPP but also models with asymmetry and/or long (but finite) range.
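As a consistency check on the BCPP as a special case, one can verify numerically that the kernel (1.6) reproduces the constant κ₁ = (2dλ − 1)/(2dλ + 1) of section 1.1. Here we assume, as a sketch, that κ₁ = Σ_x P[K_x] − 1 (the exponential growth rate of P[|η_t|]); the function name is hypothetical:

```python
# Mean of the BCPP branching vector K from (1.6):
# with probability lam/(2*d*lam + 1), for each of the 2d neighbors e,
#   K = (delta_{x,0} + delta_{x,e})_x  (keep the particles and copy them to e);
# with probability 1/(2*d*lam + 1),  K = 0  (death).

def bcpp_kappa1(d, lam):
    p_dup = lam / (2 * d * lam + 1)
    EK_origin = 2 * d * p_dup     # K_0 = 1 in each of the 2d duplication branches
    EK_neighbors = 2 * d * p_dup  # E[K_e] = p_dup for each of the 2d neighbors e
    return EK_origin + EK_neighbors - 1

# Agrees with kappa_1 = (2*d*lam - 1)/(2*d*lam + 1) from section 1.1:
assert abs(bcpp_kappa1(3, 1.5) - (2*3*1.5 - 1) / (2*3*1.5 + 1)) < 1e-12
```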
Here is an explanation of how we use the condition (1.14). To prove Theorem 1.2.1, we use a certain Markov chain on Z^d × Z^d, which is introduced in Lemma 2.1.1 below. Thanks to (1.14), the Markov chain is stationary with respect to the counting measure on Z^d × Z^d. The stationarity plays an important role in the proof of Theorem 1.2.1; see Lemma 2.1.4 below.
2) Because of (1.13), the random walk (S_t) is recurrent for d = 1, 2 and transient for d ≥ 3.

3) If, in particular,

4)
The dual process of (η t ) above (in the sense of [5, page 432]) is given by replacing the linear transform in (1.5) by its transpose: As can be seen from the proofs, all the results in this paper remain true for the dual process.
5) The central limit theorem for discrete time linear systems is discussed in [6].
We define the density and the replica overlap in the same way as (1.2)-(1.3). Then, as an immediate consequence of Theorem 1.2.1, we have the following:

Corollary 1.2.2 Suppose (1.4), (1.13)-(1.14).

Proof: The first statement is immediate from Theorem 1.2.1 (c). Taking f(x) = δ_{x,0} in (1.16), we see that This implies the second statement. 2

For a ∈ Z^d, let η^a_t be the process starting from η_0 = (δ_{a,x})_{x∈Z^d}. As a by-product of Theorem 1.2.1, we have the following Theorem 1.2.3, under the assumption that G(0) < 1. The proof of Theorem 1.2.3 will be presented in section 3.2. We refer the reader to [11] for similar formulae for discrete time models.

SDE description of the process
We now give an alternative description of the process in terms of a stochastic differential equation (SDE), which will be used in the proof of Lemma 2.1.1 below. We introduce random The precise definition of the process (η_t)_{t≥0} is then given by the following stochastic differential equation: By (1.4), it is standard to see that (1.22) defines a unique process η_t = (η_{t,x}), t ≥ 0, and that (η_t) is Markovian.
Proof of (1.9): Since |η̄_t| is obviously nonnegative, we will prove the martingale property. By (1.22), we have and hence (1) We have on the other hand that Plugging this into (1), we see that the right-hand side of (1) is a martingale.
Remark: The matrix Γ introduced above appears also in [5, page 442, Theorem 3.1], since it is a fundamental tool for dealing with the two-point function of the linear system. However, the way we use the matrix will be different from the ones in the existing literature.
We now prove the Feynman-Kac formula for the two-point function, which is the basis of the proof of Theorem 1.2.1.

Lemma 2.1.1 Let ((X_t, X̃_t)_{t≥0}, P^{x,x̃}_{X,X̃}) be the continuous-time Markov chain on Z^d × Z^d with the generator

L f(x, x̃) = Σ_{y,ỹ∈Z^d} Γ_{x,x̃,y,ỹ} (f(y, ỹ) − f(x, x̃)),

where Γ_{x,x̃,y,ỹ} is defined by (2.1). Then, for

where V is defined by (2.2).
Proof: We first show that u(t, x, x̃) := P[η_{t,x} η_{t,x̃}] solves the integral equation By (1.22), we have We have by (1.4) and (1.22) that, for any p ∈ N*, there exists C₁ ∈ (0, ∞) such that By iteration, we see that there exists C₂ ∈ (0, ∞) such that which, via the Schwarz inequality, implies (4). The solution to (1) subject to (2) is unique for each given η_0. This can be seen by using Gronwall's inequality with respect to the norm ‖u‖ = Σ_{x,x̃∈Z^d} e^{−|x|} |u(x, x̃)|. Moreover, the RHS of (2.4) is a solution to (1) subject to the bound (2). This can be seen by adapting the argument in [8, page 5, Theorem 1.1]. Therefore, we get (2.4). 2

Remark: The following Feynman-Kac formula for the one-point function can be obtained in the same way as Lemma 2.1.1: where κ₁ is defined by (1.7) and ((X_t)_{t≥0}, P^x_X) is the continuous-time random walk on Z^d starting from x, with the generator These imply the desired equivalence and (2.7). 2

We assume (1.14) from here on. Then, by (2.6), (X, X̃) is stationary with respect to the counting measure on Z^d × Z^d. We denote the dual process of (X, X̃) by (Y, Ỹ) = ((Y_t, Ỹ_t)_{t≥0}, P^{x,x̃}_{Y,Ỹ}).

Remark: If we additionally suppose that P[K^p_x] = P[K^p_{−x}] for p = 1, 2 and x ∈ Z^d, then Γ_{x,x̃,y,ỹ} = Γ_{y,ỹ,x,x̃} for all x, x̃, y, ỹ ∈ Z^d. Thus, (X, X̃) and (Y, Ỹ) are the same in this case.
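The one-point formula in the remark above can be sketched as follows (the jump-direction convention for X and the identity κ₁ = Σ_x P[K_x] − 1 are our reading; only the overall structure is taken from the text):

```latex
% With u(t,x) = P[\eta_{t,x}], averaging the jump rule gives
\partial_t u(t,x)
  = \sum_{y\in\mathbb{Z}^d} P[K_{x-y}]\,u(t,y) - u(t,x)
  = \sum_{y\in\mathbb{Z}^d} P[K_{x-y}]\bigl(u(t,y)-u(t,x)\bigr) + \kappa_1 u(t,x),
% using \kappa_1 = \sum_x P[K_x] - 1, and therefore
P[\eta_{t,x}] = e^{\kappa_1 t}\, P^x_X\bigl[\eta_{0,X_t}\bigr],
% where X jumps from x to y at rate P[K_{x-y}].
```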
The relative motion Y_t − Ỹ_t of the components of (Y, Ỹ) is nicely identified: (Y_t − Ỹ_t)_{t≥0} and ((S_{2t})_{t≥0}, P_S) have the same law; in particular, the relative motion is a Markov chain. Moreover, its jump rate is computed as follows. For x ≠ y,

To prove Theorem 1.2.1, the use of Lemma 2.1.1 is made not in itself, but via the following lemma. It is the proof of this lemma where the duality of (X, X̃) and (Y, Ỹ) plays its role. (2.9) In particular, for a bounded f:

(1) LHS of (2.9) = Σ_{x,x̃∈Z^d} P^{x,x̃}_{X,X̃}[ exp(κ₂ ∫₀ᵗ δ₀(X_s − X̃_s) ds) η_{0,X_t} η_{0,X̃_t} ] g(x, x̃).
We now observe that the operators are dual to each other with respect to the counting measure on Z^d × Z^d. Therefore, the RHS of (1) = the RHS of (2.9).
Taking g(x, x̃) = f(x − x̃) in particular, we have by (2.9) and Lemma 2.1.3 that

LHS of (2.10) = Σ_{x,x̃∈Z^d} η_{0,x} η_{0,x̃} P^{x,x̃}_{Y,Ỹ}[ exp(κ₂

Remark: In the case of the BCPP, D. Griffeath obtained a Feynman-Kac formula for [4, proof of Theorem 1]. However, this does not seem to be enough for our purpose. Note that the Feynman-Kac formulae in the present paper (Lemma 2.1.1 and Lemma 2.1.4) are stronger, since they give the expression for each summand of the above summation.

Central limit theorems for Markov chains
We prepare central limit theorems for Markov chains, which are obtained by perturbation of random walks.
Lemma 2.2.1 Let ((Z_t)_{t≥0}, P^x) be a continuous-time random walk on Z^d starting from x, with the generator where we assume that Then, for any where m = Σ_{x∈Z^d} x a_x and ν is the Gaussian measure with

Proof: By subtracting a constant, we may assume that ∫_{R^d} f dν = 0. We first consider the It is easy to see from the central limit theorem for (Z_t) that for any x ∈ Z^d, With this and the bounded convergence theorem, we have Next, we take For any ε > 0, there exist s ∈ (0, ∞) and B ∈ F_s such that P Then, by what we have already seen, where ‖f‖ is the sup norm of f. Similarly, Since ε > 0 is arbitrary, we are done. 2

Lemma 2.2.2 Let Z = ((Z_t)_{t≥0}, P^x) be as in Lemma 2.2.1 and let D ⊂ Z^d be transient for Z. On the other hand, let Z̃ = ((Z̃_t)_{t≥0}, P̃^x) be the continuous-time Markov chain on Z^d starting from x, with the generator where we assume that ã_{x,y} = a_{y−x} if x ∉ D ∪ {y} and that D is also transient for Z̃. Furthermore, we assume that a function v: where ν is the Gaussian measure such that (2.11) holds.
Proof: Define Then, for s < t, where We now observe that where H_D(Z) is defined similarly to H_D(Z̃). Hence, for x ∈ D and fixed s > 0, we have by Lemma 2.2.1 that Thus, letting t → ∞ first, and then s → ∞, in (2.12), we get the lemma. 2
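Lemma 2.2.1 concerns a translation-invariant walk, which is compound Poisson; the CLT parameters m and the variance of ν can then be computed directly and checked by simulation. A sketch in dimension one (the rates and the helper name `sample_Zt` are illustrative, not from the paper):

```python
import random

# The walk with generator L f(x) = sum_y a_{y-x}(f(y) - f(x)) is compound
# Poisson, so E[Z_t] = m t with m = sum_x x a_x and Var(Z_t) = t sum_x x^2 a_x;
# (Z_t - m t)/sqrt(t) is asymptotically Gaussian with that variance
# (our reading of the measure nu in (2.11)).

rates = {1: 2.0, -1: 1.0, 2: 0.5}                 # illustrative jump rates a_x
m = sum(x * a for x, a in rates.items())          # drift m = sum_x x a_x
var = sum(x * x * a for x, a in rates.items())    # CLT variance

def sample_Zt(rng, t):
    """One compound-Poisson sample of Z_t started from 0."""
    abar = sum(rates.values())
    n, s = 0, rng.expovariate(abar)
    while s < t:              # number of jumps up to time t is Poisson(abar*t)
        n += 1
        s += rng.expovariate(abar)
    xs, ws = zip(*rates.items())
    return sum(rng.choices(xs, weights=ws, k=n)) if n else 0

rng = random.Random(1)
t, N = 50.0, 4000
mean_hat = sum(sample_Zt(rng, t) for _ in range(N)) / N
# Sample mean of Z_t/t is close to m (std. error ~ sqrt(var/(t*N)) ~ 0.005 here).
assert abs(mean_hat / t - m) < 0.1
```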

A Nash type upper bound for the Schrödinger semi-group
We will use the following lemma to prove (1.16). The lemma can be generalized to symmetric Markov chains on more general graphs. However, we restrict ourselves to random walks on Z^d, since it is enough for our purpose.
Lemma 2.3.1 Let ((Z_t)_{t≥0}, P^x) be a continuous-time random walk on Z^d with the generator: where we assume that the set {x ∈ Z^d ; a_x ≠ 0} is bounded and contains a linear basis of R^d. Then, there exists C ∈ (0, ∞) such that sup for all t > 0 and f:

Proof: We adapt the argument in [1, Lemma 3.1.3]. For a bounded function f: Z^d → R, we introduce Then, (T_t)_{t≥0} extends to a symmetric, strongly continuous semi-group on ℓ²(Z^d). We now consider the measure Σ_{x∈Z^d} h(x) By the assumptions on (a_x), we have the Sobolev inequality:

(1)

where c₁ ∈ (0, ∞) is independent of f. This can be seen via an isoperimetric inequality [9, page 40, (4.3)]. We have on the other hand that We see from (1) and (2) that where c₂ ∈ (0, ∞) is independent of f. This implies that there is a constant C such that for all t > 0, see, e.g., [3, page 75, Theorem 2.4.2], where ‖·‖_{p→q,h} denotes the operator norm from ℓ^{p,h}(Z^d) to ℓ^{q,h}(Z^d). Note that ‖T^h_t‖_{1→2,h} = ‖T^h_t‖_{2→∞,h} by duality. We therefore have via the semi-group property that (2.10)
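The final step alluded to above is the standard semigroup interpolation; as a sketch, assuming the Nash-type bound ‖T^h_t‖_{1→2,h} ≤ C′ t^{−d/4} obtained from the Sobolev inequality:

```latex
% Since \|T^h_{t}\|_{1\to 2,h} \le C' t^{-d/4} and, by duality,
% \|T^h_{t}\|_{2\to\infty,h} = \|T^h_{t}\|_{1\to 2,h}, the semigroup
% property T^h_t = T^h_{t/2} T^h_{t/2} yields the ultracontractive bound
\|T^h_t\|_{1\to\infty,h}
  \le \|T^h_{t/2}\|_{2\to\infty,h}\,\|T^h_{t/2}\|_{1\to 2,h}
  = \|T^h_{t/2}\|_{1\to 2,h}^{2}
  \le C\,t^{-d/2}.
```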

Lemma 2.1.4 For a bounded g: Z^d × Z^d → R,

Σ_{x,x̃∈Z^d} P[η_{t,x} η_{t,x̃}] g(x, x̃) = Σ_{x,x̃∈Z^d} η_{0,x} η_{0,x̃} P^{x,x̃}_{Y,Ỹ}[ exp(κ₂ ∫₀ᵗ δ₀(Y_s − Ỹ_s) ds) g(Y_t, Ỹ_t) ]. (2.9)

P^z[ e_s 1_{Z̃_s ∉ D} P^{Z̃_s}[ f((Z̃_{t−s} − mt)/√t) : H_D(Z̃) = ∞ ]] → P^z[ e_s 1_{Z̃_s ∉ D} P^{Z̃_s}[ H_D(Z̃) = ∞ ]] ∫_{R^d} f dν = P^z[ e_s : T_D(Z̃) < s ] ∫_{R^d} f dν.

Z̃_t := (Y_t, Ỹ_t), and the random walk (Z_t) on Z^d × Z^d with the generator

L_Z f(x, x̃) = Σ_{y,ỹ∈Z^d} a_{x,x̃,y,ỹ} (f(y, ỹ) − f(x, x̃)),

with

a_{x,x̃,y,ỹ} = P[K_{ỹ−x̃}] if x = y and x̃ ≠ ỹ; = P[K_{y−x}] if x ≠ y and x̃ = ỹ; = 0 otherwise.

Let D = {(x, x̃) ∈ Z^d × Z^d ; x = x̃}. Then,

(2) a_{x,x̃,y,ỹ} = Γ_{y,ỹ,x,x̃} if (x, x̃) ∉ D ∪ {(y, ỹ)},

since Γ_{y,ỹ,x,x̃} = P[(K_{y−x} − δ_{y,x}) δ_{ỹ,x̃} + (K_{ỹ−x̃} − δ_{ỹ,x̃}) δ_{y,x} + (K_{y−x} − δ_{y,x})(K_{ỹ−x} − δ_{ỹ,x}) δ_{x,x̃}]. Moreover, by (1.13),

As before, C_b(R^d) stands for the set of bounded continuous functions on R^d.
The proof of Theorem 1.2.1, which will be presented in section 3.1, is roughly divided into two steps: (i) to represent the two-point function P[η_{t,x} η_{t,x̃}] in terms of a continuous-time Markov chain on Z^d × Z^d via the Feynman-Kac formula (Lemma 2.1.1 and Lemma 2.1.4 below),

h(x)² δ_x on Z^d, and denote by (ℓ^{p,h}(Z^d), ‖·‖_{p,h}) the associated L^p-space. Then, it is standard (e.g., proofs of [2, page 74, Theorem 3.10] and [8, page 16, Proposition 3.3]) to see that (T^h_t)_{t≥0} defines a symmetric strongly continuous semi-group on ℓ^{2,h}(Z^d) and that for

[e_∞] = h(x − x̃) ≤ h(0) < ∞. Since |η_0| < ∞, it is enough to prove that for each x, x̃ ∈ Z^d, lim_{t→∞} P^{x,x̃}_{Y,Ỹ}[ e_t f_t(Y_t, Ỹ_t) ] = 0. To prove this, we apply Lemma 2.2.2 to the Markov chain Z̃_t def.
in L²(P) as t ր ∞, for f ∈ C_b(R^d) such that ∫_{R^d} f dν = 0. We set f_t(x, x̃) = f((x − mt)/√t) f((x̃ − mt)/√t). Then, by Lemma 2.1.4,

P[U_t²] = Σ_{x,x̃∈Z^d} P[η_{t,x} η_{t,x̃}] f_t(x, x̃) = Σ_{x,x̃∈Z^d} η_{0,x} η_{0,x̃} P^{x,x̃}_{Y,Ỹ}[ e_t f_t(Y_t, Ỹ_t) ],

where e_t = exp(κ₂ ∫₀ᵗ δ₀(Y_s − Ỹ_s) ds). Note that, by Lemma 2.1.3 and (a), e