Osaka University

We consider a class of continuous-time stochastic growth models on the $d$-dimensional lattice with non-negative real numbers as possible values per site. We remark that the central limit theorem proven in our previous work [Nagahata, Y., Yoshida, N.: Central Limit Theorem for a Class of Linear Systems, Electron. J. Probab. Vol. 14, No. 34, 960--977 (2009)] can be extended to a wider class of models, so that it covers the cases of the potlatch/smoothing processes.

Let $(\Omega, \mathcal{F}, P)$ be a probability space. We write $P[X] = \int X\,dP$ and $P[X : A] = \int_A X\,dP$ for a r.v. (random variable) $X$ and an event $A$.

The binary contact path process (BCPP)
We start with a motivating simple example. Let $\eta_t = (\eta_{t,x})_{x\in\mathbb{Z}^d} \in \mathbb{N}^{\mathbb{Z}^d}$, $t \ge 0$ be the binary contact path process (BCPP for short) with parameter $\lambda > 0$. Roughly speaking, the BCPP is an extended version of the basic contact process, in which not only the presence/absence of the particles at each site, but also their number is considered. The BCPP was originally introduced by D. Griffeath [4]. Here, we explain the process following the formulation in the book of T. Liggett [5, Chapter IX]. Let $\tau_{z,i}$ ($z \in \mathbb{Z}^d$, $i \in \mathbb{N}^*$) be i.i.d. mean-one exponential random variables and $T_{z,i} = \tau_{z,1} + \cdots + \tau_{z,i}$. We suppose that the process $(\eta_t)$ starts from a deterministic configuration $\eta_0 = (\eta_{0,x})_{x\in\mathbb{Z}^d} \in \mathbb{N}^{\mathbb{Z}^d}$ with $|\eta_0| < \infty$. At time $t = T_{z,i}$, $\eta_{t-}$ is replaced by $\eta_t$ randomly as follows: for each $e \in \mathbb{Z}^d$ with $|e| = 1$, with probability $\frac{\lambda}{2d\lambda+1}$,
$$\eta_{t,x} = \begin{cases} \eta_{t-,x} + \eta_{t-,z} & \text{if } x = z + e,\\ \eta_{t-,x} & \text{otherwise}\end{cases}$$
(all the particles at site $z$ are duplicated and added to those on the site $x = z + e$), and with probability $\frac{1}{2d\lambda+1}$,
$$\eta_{t,x} = \begin{cases} 0 & \text{if } x = z,\\ \eta_{t-,x} & \text{otherwise}\end{cases}$$
(all the particles at site $z$ disappear). The replacement occurs independently for different $(z, i)$ and independently from $\{\tau_{z,i}\}_{z,i}$. A motivation to study the BCPP comes from the fact that the projected process $(\eta_{t,x} \wedge 1)_{x\in\mathbb{Z}^d}$, $t \ge 0$ is the basic contact process [4]. Let
$$\kappa_1 = \frac{2d\lambda - 1}{2d\lambda + 1} \quad \text{and} \quad \overline{\eta}_t = \big(e^{-\kappa_1 t}\eta_{t,x}\big)_{x\in\mathbb{Z}^d}.$$
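To make the jump rule concrete, here is a minimal simulation sketch of one BCPP event in dimension $d = 1$. This is our own illustration, not from the paper: the function name `bcpp_step`, the finite periodic box, and the list representation of $\eta$ are all assumptions.

```python
import random

def bcpp_step(eta, z, lam):
    """Apply one replacement event of the (d = 1) BCPP at site z.

    With probability lam/(2*lam + 1) for each of the two neighbors e,
    the particles at z are duplicated and added onto z + e; with
    probability 1/(2*lam + 1), all particles at z disappear.
    """
    L = len(eta)                 # periodic box of L sites (our assumption)
    total = 2 * lam + 1          # 2*d*lam + 1 with d = 1
    u = random.random() * total
    if u < 1.0:                  # prob. 1/(2*lam+1): death at z
        eta[z] = 0
    elif u < 1.0 + lam:          # prob. lam/(2*lam+1): duplicate to z + 1
        eta[(z + 1) % L] += eta[z]
    else:                        # prob. lam/(2*lam+1): duplicate to z - 1
        eta[(z - 1) % L] += eta[z]
    return eta
```

Since each event either adds or removes exactly $\eta_z$ particles, the expected change of the total mass per unit time at site $z$ is $\kappa_1 \eta_z$, which is why the correction $e^{-\kappa_1 t}$ turns $|\overline{\eta}_t|$ into a martingale.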
Then, $(|\overline{\eta}_t|)_{t\ge 0}$ is a nonnegative martingale and therefore, the following limit exists almost surely:
$$|\overline{\eta}_\infty| \stackrel{\mathrm{def}}{=} \lim_{t\to\infty} |\overline{\eta}_t|,$$
and the limit is non-degenerate under a condition (1.1) stated in terms of $\pi_d$, the return probability for the simple random walk on $\mathbb{Z}^d$ [4, Theorem 1]. It is known that $\pi_d \le \pi_3 = 0.3405...$ for $d \ge 3$ [7, page 103]. We denote the density of the particles by:
$$\rho_{t,x} = \frac{\overline{\eta}_{t,x}}{|\overline{\eta}_t|}, \quad x \in \mathbb{Z}^d, \ \text{on the event } \{|\overline{\eta}_t| > 0\}. \tag{1.2}$$
Interesting objects related to the density would be
$$\rho^*_t = \max_{x\in\mathbb{Z}^d} \rho_{t,x} \quad \text{and} \quad \mathcal{R}_t = \sum_{x\in\mathbb{Z}^d} \rho_{t,x}^2: \tag{1.3}$$
$\rho^*_t$ is the density at the most populated site, while $\mathcal{R}_t$ is the probability that a given pair of particles at time $t$ are at the same site. We call $\mathcal{R}_t$ the replica overlap, in analogy with the spin glass theory. Clearly, $(\rho^*_t)^2 \le \mathcal{R}_t \le \rho^*_t$. These quantities convey information on localization/delocalization of the particles. Roughly speaking, large values of $\rho^*_t$ or $\mathcal{R}_t$ indicate that most of the particles are concentrated on a small number of "favorite sites" (localization), whereas small values of them imply that the particles are spread out over a large number of sites (delocalization).
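For a finite configuration, the density, the maximal density and the replica overlap are elementary to evaluate. The following sketch (names ours) simply computes the definitions and illustrates the inequality $(\rho^*_t)^2 \le \mathcal{R}_t \le \rho^*_t$:

```python
def density_stats(eta_bar):
    """Compute the density vector rho, the maximal density rho_star, and
    the replica overlap R = sum_x rho_x**2 for a configuration with
    positive total mass."""
    total = sum(eta_bar)
    assert total > 0
    rho = [v / total for v in eta_bar]
    rho_star = max(rho)            # density at the most populated site
    R = sum(r * r for r in rho)    # prob. that two sampled particles share a site
    return rho, rho_star, R
```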
As a special case of Corollary 1.2.2 below, we have the following result, which shows the diffusive behavior and the delocalization of the BCPP under the condition (1.1):

The results
We generalize Theorem 1.1.1 to a certain class of linear interacting particle systems with values in $[0,\infty)^{\mathbb{Z}^d}$ [5, Chapter IX]. Recall that the particles in the BCPP either die, or make binary branching. To describe a more general "branching mechanism", we introduce a random vector $K = (K_x)_{x\in\mathbb{Z}^d}$ which is bounded and of finite range, in the sense of condition (1.4). Let $\tau_{z,i}$ ($z \in \mathbb{Z}^d$, $i \in \mathbb{N}^*$) be i.i.d. mean-one exponential random variables and $T_{z,i} = \tau_{z,1} + \cdots + \tau_{z,i}$. Let also $K^{z,i} = (K^{z,i}_x)_{x\in\mathbb{Z}^d}$ ($z \in \mathbb{Z}^d$, $i \in \mathbb{N}^*$) be i.i.d. random vectors with the same distribution as $K$, independent of $\{\tau_{z,i}\}_{z\in\mathbb{Z}^d, i\in\mathbb{N}^*}$. We suppose that the process $(\eta_t)_{t\ge 0}$ starts from a deterministic configuration $\eta_0 = (\eta_{0,x})_{x\in\mathbb{Z}^d} \in [0,\infty)^{\mathbb{Z}^d}$ with $|\eta_0| < \infty$, and that at time $t = T_{z,i}$, $\eta_{t-}$ is replaced by $\eta_t$ via the linear transformation
$$\eta_{t,x} = \begin{cases} K^{z,i}_0\,\eta_{t-,z} & \text{if } x = z,\\ \eta_{t-,x} + K^{z,i}_{x-z}\,\eta_{t-,z} & \text{if } x \ne z.\end{cases} \tag{1.5}$$
The BCPP is a special case of this set-up, in which
$$K = (\delta_{x,0} + \delta_{x,e})_{x\in\mathbb{Z}^d} \text{ with probability } \tfrac{\lambda}{2d\lambda+1} \text{ for each of the } 2d \text{ neighbors } e \text{ of } 0, \text{ and } K \equiv 0 \text{ with probability } \tfrac{1}{2d\lambda+1}. \tag{1.6}$$
A formal construction of the process $(\eta_t)_{t\ge 0}$ can be given as a special case of [5, page 427, Theorem 1.14] via the Hille-Yosida theory. In section 1.3, we will also give an alternative construction of the process in terms of a stochastic differential equation. We set
$$\kappa_1 = P\Big[\sum_{x\in\mathbb{Z}^d} K_x\Big] - 1 \tag{1.7}$$
and
$$\overline{\eta}_t = \big(e^{-\kappa_1 t}\eta_{t,x}\big)_{x\in\mathbb{Z}^d}. \tag{1.8}$$
Then,
$$(|\overline{\eta}_t|)_{t\ge 0} \text{ is a nonnegative martingale.} \tag{1.9}$$
The above martingale property can be seen by the same argument as in [5, page 433, Theorem 2.2 (b)]. For the reader's convenience, we will also present a simpler proof in section 1.3 below. By (1.9), the following limit exists almost surely:
$$|\overline{\eta}_\infty| \stackrel{\mathrm{def}}{=} \lim_{t\to\infty} |\overline{\eta}_t|. \tag{1.10}$$
To state Theorem 1.2.1, we define the quantities (1.11)--(1.14). Then, referring to (1.7)--(1.12), the following conditions (a)--(c) are equivalent. The main point of Theorem 1.2.1 is that the condition (a), or equivalently (b), implies the central limit theorem (c) (see also Corollary 1.2.2 below). This seems to be the first result in which the central limit theorem for the spatial distribution of the particles is shown in the context of linear systems. Other parts of our results ((a) $\Rightarrow$ (b), and Theorem 1.2.3 below) generalize [4, Theorem 1]. However, this is merely a by-product and not a central issue in the present paper.
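As a quick illustration of this branching mechanism, the following sketch samples a copy of $K$ in the BCPP case and applies one linear update of the kind described above. This is our own illustration: the dictionary representation of configurations and the function names are assumptions.

```python
import random

def sample_K_bcpp(d, lam):
    """Sample the random vector K of the BCPP: K = delta_0 + delta_e with
    probability lam/(2*d*lam+1) for each of the 2*d unit vectors e, and
    K = 0 (returned as an empty dict) with probability 1/(2*d*lam+1)."""
    total = 2 * d * lam + 1
    if random.random() * total < 1.0:
        return {}                               # K = 0: death event
    i = random.randrange(2 * d)                 # choose a neighbor e
    e = tuple((1 if i % 2 == 0 else -1) if j == i // 2 else 0
              for j in range(d))
    K = {tuple(0 for _ in range(d)): 1.0}       # K_0 = 1: keep particles at z
    K[e] = K.get(e, 0.0) + 1.0                  # K_e = 1: duplicate to z + e
    return K

def linear_update(eta, z, K):
    """One event at site z: eta[z] becomes K_0 * eta[z], and eta[x] gains
    K_{x-z} * eta[z] for x != z.  eta is a dict mapping sites to masses."""
    old = eta.get(z, 0.0)
    for x, k in K.items():
        site = tuple(zi + xi for zi, xi in zip(z, x))
        if site == z:
            continue
        eta[site] = eta.get(site, 0.0) + k * old
    eta[z] = K.get(tuple(0 for _ in z), 0.0) * old
    return eta
```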
The proof of Theorem 1.2.1, which will be presented in section 3.1, is roughly divided into two steps: (i) to reduce the problem, via the Feynman-Kac formula for the two-point function, to a question about a certain Markov chain (Lemma 2.1.1 below), and (ii) to show the central limit theorem for the "weighted" Markov chain, where the weight comes from the additive functional due to the Feynman-Kac formula (Lemma 2.2.2 below).
The above strategy was adopted earlier by one of the authors for the branching random walk in random environment [10]. There, the Markov chain alluded to above is simply the product of simple random walks on $\mathbb{Z}^d$, so that the central limit theorem with the Feynman-Kac weight is relatively easy. Since the Markov chain in the present paper is no longer a random walk, it requires more work. However, the good news here is that the Markov chain we have to work on is "close" to a random walk. In fact, we get the central limit theorem by perturbation from that for a random walk. Some other remarks on Theorem 1.2.1 are in order: 1) The condition (1.13) guarantees a reasonable non-degeneracy of the transition mechanism (1.5). On the other hand, (1.14) follows from a stronger condition, which amounts to saying that the transition mechanism (1.5) updates the configuration by "at most one coordinate at a time". Typical examples of such $K$ include not only the BCPP but also models with asymmetry and/or long (but finite) range.
Here is an explanation of how we use the condition (1.14). To prove Theorem 1.2.1, we use a certain Markov chain on $\mathbb{Z}^d \times \mathbb{Z}^d$, which is introduced in Lemma 2.1.1 below. Thanks to (1.14), the Markov chain is stationary with respect to the counting measure on $\mathbb{Z}^d \times \mathbb{Z}^d$. The stationarity plays an important role in the proof of Theorem 1.2.1; see Lemma 2.1.4 below.
2) Because of (1.13), the random walk $(S_t)$ is recurrent for $d = 1, 2$ and transient for $d \ge 3$. Therefore, $\kappa_2^2 G(0) < 1$ is possible only if $d \ge 3$. As will be explained in the proof, $\kappa_2^2 G(0) < 1$ admits an equivalent explicit form. 3) If, in particular, $K$ satisfies (1.18), then $(S_t)$ is the simple random walk. Therefore, the condition (a) becomes (1.19). By (1.6), the BCPP satisfies (1.13)--(1.14). Furthermore, $\kappa_2 = 1$, and we have (1.18) with $c = \frac{\lambda}{2d\lambda+1}$. Therefore, (1.19) is equivalent to (1.1). 4) The dual process of $(\eta_t)$ above (in the sense of [5, page 432]) is given by replacing the linear transform in (1.5) by its transpose. As can be seen from the proofs, all the results in this paper remain true for the dual process.

5) The central limit theorem for discrete-time linear systems is discussed in [6].
We define the density and the replica overlap in the same way as (1.2)--(1.3). Then, as an immediate consequence of Theorem 1.2.1, we have the following corollary. Proof: The first statement is immediate from Theorem 1.2.1.
This implies the second statement. $\Box$ For $a \in \mathbb{Z}^d$, let $\eta^a_t$ be the process starting from $\eta_0 = (\delta_{a,x})_{x\in\mathbb{Z}^d}$. As a by-product of Theorem 1.2.1, we have the following formula for the covariance of $(|\overline{\eta}^a_\infty|)_{a\in\mathbb{Z}^d}$. For the BCPP, this formula was obtained by D. Griffeath [4, Theorem 3].
The proof of Theorem 1.2.3 will be presented in section 3.2. We refer the reader to [11] for similar formulae for discrete time models.

SDE description of the process
We now give an alternative description of the process in terms of a stochastic differential equation (SDE), which will be used in the proof of Lemma 2.1.1 below. We introduce the random ingredients of the SDE; the precise definition of the process $(\eta_t)_{t\ge 0}$ is then given by the stochastic differential equation (1.22). By (1.4), it is standard to see that (1.22) defines a unique process $\eta_t = (\eta_{t,x})$ ($t \ge 0$) and that $(\eta_t)$ is Markovian.
Proof of (1.9): Since $|\overline{\eta}_t|$ is obviously nonnegative, we will prove the martingale property. By (1.22), we have the expression (1); on the other hand, plugging the identity above into (1), we see that the right-hand side of (1) is a martingale. Remark: The matrix $\Gamma$ introduced above appears also in [5, page 442, Theorem 3.1], since it is a fundamental tool to deal with the two-point function of the linear system. However, the way we use the matrix will be different from the ones in the existing literature.
We now prove the Feynman-Kac formula for the two-point function, which is the basis of the proof of Theorem 1.2.1; in it, $\Gamma_{x,\tilde{x},y,\tilde{y}}$ is defined by (2.1), and the formula holds for all $(t, x, \tilde{x})$.
Proof: We first show that $u(t, x, \tilde{x}) \stackrel{\mathrm{def}}{=} P[\eta_{t,x}\,\eta_{t,\tilde{x}}]$ solves the integral equation displayed above, with the kernel $\Gamma_{x,\tilde{x},y,\tilde{y}}$ and the term $F_{x,\tilde{x},y}(s, \xi, \eta)$. We next show that $\sup_{t \le T}\sum_{x,\tilde{x}\in\mathbb{Z}^d} |u(t, x, \tilde{x})| < \infty$ for any $T \in (0, \infty)$.
We have by (1.4) and (1.22) that, for any $p \in \mathbb{N}^*$, there exists $C_1 \in (0,\infty)$ such that the corresponding moment bound holds. By iteration, we see that there exists $C_2 \in (0,\infty)$ such that the iterated bound holds, which, via the Schwarz inequality, implies (4). The solution to (1) subject to (2) is unique, for each given $\eta_0$. This can be seen by using Gronwall's inequality with respect to the norm $\|u\| = \sum_{x,\tilde{x}\in\mathbb{Z}^d} e^{-|x|} |u(x,\tilde{x})|$. Moreover, the RHS of (2.4) is a solution to (1), where $\kappa_1$ is defined by (1.7) and $((X_t)_{t\ge 0}, P^X_x)$ is the continuous-time random walk on $\mathbb{Z}^d$ starting from $x$, with the generator displayed there. The property in question holds if and only if (1.14) holds; in addition, (1.14) implies the identities displayed above. These imply the desired equivalence and (2.7). $\Box$ We assume (1.14) from here on. Then, by (2.6), $(X, \tilde{X})$ is stationary with respect to the counting measure on $\mathbb{Z}^d \times \mathbb{Z}^d$. We denote the dual process of $(X, \tilde{X})$ by $(Y, \tilde{Y})$. Thanks to (2.6), $L^{X,\tilde{X}}$ and $L^{Y,\tilde{Y}}$ are dual operators on $\ell^2(\mathbb{Z}^d \times \mathbb{Z}^d)$.
Remark: If we additionally suppose that $P[K^p_x] = P[K^p_{-x}]$ for $p = 1, 2$ and $x \in \mathbb{Z}^d$, then $\Gamma_{x,\tilde{x},y,\tilde{y}} = \Gamma_{y,\tilde{y},x,\tilde{x}}$ for all $x, \tilde{x}, y, \tilde{y} \in \mathbb{Z}^d$. Thus, $(X, \tilde{X})$ and $(Y, \tilde{Y})$ are the same in this case.
The relative motion $\tilde{Y}_t - Y_t$ of the components of $(Y, \tilde{Y})$ is nicely identified: it and the random walk of (1.12) have the same law.
Proof: Since $(Y, \tilde{Y})$ is shift invariant, in the sense that $\Gamma_{x+v,\tilde{x}+v,y+v,\tilde{y}+v} = \Gamma_{x,\tilde{x},y,\tilde{y}}$ for all $x, \tilde{x}, y, \tilde{y}, v \in \mathbb{Z}^d$, $(\tilde{Y}_t - Y_t)$ is a Markov chain. Moreover, its jump rate is computed as follows for $x \ne y$, using (1.14). To prove Theorem 1.2.1, Lemma 2.1.1 is used not in itself, but via the following lemma. It is in the proof of this lemma that the duality of $(X, \tilde{X})$ and $(Y, \tilde{Y})$ plays its role.
We now observe that the operators above are dual to each other with respect to the counting measure on $\mathbb{Z}^d \times \mathbb{Z}^d$. Therefore, RHS of (1) = RHS of (2.9).
Taking $g(x, \tilde{x}) = f(x - \tilde{x})$ in particular, we have by (2.9) and Lemma 2.1.3 that the LHS of (2.10) equals a sum over $x, \tilde{x} \in \mathbb{Z}^d$; a formula of this kind is used in [4, proof of Theorem 1]. However, this does not seem to be enough for our purpose. Note that the Feynman-Kac formulae in the present paper (Lemma 2.1.1 and Lemma 2.1.4) are stronger, since they give the expression for each summand of the above summation.

Central limit theorems for Markov chains
We prepare central limit theorems for Markov chains, which are obtained by perturbation of random walks.
Let $((Z_t)_{t\ge 0}, P_x)$ be a continuous-time random walk on $\mathbb{Z}^d$ starting from $x$, with the generator displayed above, under the assumptions stated there. Then, for any bounded continuous $f : \mathbb{R}^d \to \mathbb{R}$, any $x \in \mathbb{Z}^d$ and any $B \in \sigma[Z_u;\ u \in [0,\infty))$,
$$\lim_{t\to\infty} P_x\Big[f\Big(\frac{Z_t - mt}{\sqrt{t}}\Big) : B\Big] = P_x(B)\int_{\mathbb{R}^d} f\,d\nu,$$
where $m = \sum_{x\in\mathbb{Z}^d} x a_x$ and $\nu$ is the Gaussian measure with (2.11). Proof: By subtracting a constant, we may assume that $\int_{\mathbb{R}^d} f\,d\nu = 0$. We first consider the case $B \in \mathcal{F}_s$ for some $s \in (0,\infty)$. It is easy to see from the central limit theorem for $(Z_t)$ that the convergence holds for any $x \in \mathbb{Z}^d$. With this and the bounded convergence theorem, we have the claim for such $B$. Next, we take $B \in \sigma[Z_u;\ u \in [0,\infty))$. For any $\varepsilon > 0$, there exist $s \in (0,\infty)$ and $\tilde{B} \in \mathcal{F}_s$ such that $P_x(B \,\triangle\, \tilde{B}) \le \varepsilon$. Then, by what we have already seen, the difference is bounded by a multiple of $\varepsilon\|f\|$, where $\|f\|$ is the sup norm of $f$. Similarly for the lower bound. Since $\varepsilon > 0$ is arbitrary, we are done. $\Box$ We next consider a Markov chain which coincides with the random walk outside a set $D$: we assume that $a_{x,y} = a_{y-x}$ for $x \notin D$, and that $D$ is also transient for $Z$. Furthermore, we assume that a function $v$ satisfies the stated conditions. Then the analogous central limit theorem holds, where $\nu$ is the Gaussian measure such that (2.11) holds.

Proof: Define the quantities in the display above. Then, for $s < t$, we have the decomposition (2.12). We now observe the hitting-time estimate, where $H_D(Z)$ is defined similarly to $H_D(\tilde{Z})$. Hence, for $x \in D$ and fixed $s > 0$, we have by Lemma 2.2.1 the convergence of the corresponding term. Thus, letting $t \to \infty$ first, and then $s \to \infty$, in (2.12), we get the lemma. $\Box$
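The central limit theorem for the unperturbed walk, on which the two lemmas above build, is easy to check numerically. The following Monte Carlo sketch is our own illustration (the rates $a_{+1} = 0.7$, $a_{-1} = 0.3$ in $d = 1$ and all names are assumptions): the endpoint $Z_t$, recentered by $mt$ and scaled by $\sqrt{t}$, is approximately Gaussian with mean $0$ and variance $\sum_y y^2 a_y$.

```python
import math, random

def sample_poisson(rate):
    """Knuth's method; adequate for the moderate value of rate used here."""
    L, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def standardized_endpoint(t, a_plus=0.7, a_minus=0.3):
    """Simulate (Z_t - m*t)/sqrt(t) for the continuous-time walk with
    jump rates a_{+1} = a_plus, a_{-1} = a_minus on Z."""
    rate = a_plus + a_minus
    n = sample_poisson(rate * t)             # number of jumps up to time t
    up = sum(1 for _ in range(n) if random.random() < a_plus / rate)
    z_t = up - (n - up)                      # endpoint of the walk
    m = a_plus - a_minus                     # drift m = sum_y y * a_y
    return (z_t - m * t) / math.sqrt(t)

random.seed(1)
samples = [standardized_endpoint(25.0) for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With these rates, $m = 0.4$ and the limiting variance is $\sum_y y^2 a_y = 1.0$, so the empirical mean and variance of `samples` should be close to $0$ and $1$ respectively.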

A Nash type upper bound for the Schrödinger semi-group
We will use the following lemma to prove (1.16). The lemma can be generalized to symmetric Markov chains on more general graphs. However, we restrict ourselves to random walks on Z d , since it is enough for our purpose.
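The mechanism behind bounds of this kind is Nash's classical argument. The following sketch is ours, for the potential-free semigroup $T_t$ under the stated assumptions on $(a_x)$, with $\mathcal{E}$ the Dirichlet form of the walk; it only records the standard chain of implications:

```latex
% Nash (Sobolev-type) inequality for the Dirichlet form \mathcal{E}:
\|f\|_{2}^{2+4/d} \;\le\; C_S\,\mathcal{E}(f,f)\,\|f\|_{1}^{4/d}.
% Applied to u(t) = \|T_t f\|_2^2, using the \ell^1-contractivity
% \|T_t f\|_1 \le \|f\|_1:
u'(t) \;=\; -2\,\mathcal{E}(T_t f, T_t f)
      \;\le\; -\frac{2}{C_S}\,\frac{u(t)^{1+2/d}}{\|f\|_1^{4/d}}.
% Integrating this differential inequality:
\|T_t f\|_{2} \;\le\; C\,t^{-d/4}\,\|f\|_{1},
% and, by symmetry and the semigroup law T_t = T_{t/2}T_{t/2}:
\|T_t f\|_{\infty} \;\le\; C^{2}\,t^{-d/2}\,\|f\|_{1}.
```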
Let $((Z_t)_{t\ge 0}, P_x)$ be a continuous-time random walk on $\mathbb{Z}^d$ with the generator displayed above, where we assume that the set $\{x \in \mathbb{Z}^d;\ a_x \ne 0\}$ is bounded and contains a linear basis of $\mathbb{R}^d$. Then, there exists $C \in (0,\infty)$ such that the stated bound holds for all $t > 0$ and $f$. Moreover, $(T_t)_{t\ge 0}$ extends to a symmetric, strongly continuous semi-group on $\ell^2(\mathbb{Z}^d)$. By the assumptions on $(a_x)$, we have the Sobolev inequality. Since $|\eta_0| < \infty$, it is enough to prove that for each $x, \tilde{x} \in \mathbb{Z}^d$,
$$\lim_{t\to\infty} P^{Y,\tilde{Y}}_{x,\tilde{x}}\big[e_t\, f_t(Y_t, \tilde{Y}_t)\big] = 0.$$
To prove this, we apply Lemma 2.2.2 to the Markov chain $Z_t \stackrel{\mathrm{def}}{=} (Y_t, \tilde{Y}_t)$ and the random walk $(\tilde{Z}_t)$ on $\mathbb{Z}^d \times \mathbb{Z}^d$ with the generator displayed above. Finally, the Gaussian measure $\nu \otimes \nu$ is the limit law in the central limit theorem for the random walk $(\tilde{Z}_t)$. Therefore, by (1), we apply Lemma 2.3.1 to the right-hand side to get (1.16).