Zero-range processes with rapidly growing rates

We provide two methods to construct zero-range processes with superlinear rates on Z. In the first method these rates can grow very fast, provided that either both the dynamics and the initial distribution are translation invariant, or that only nearest neighbour translation invariant jumps on the one-dimensional lattice are permitted. In the second method the rates cannot grow as fast, but more general dynamics are allowed.

AMS 2010 Mathematics Subject Classifications. Primary 60K35, 82C22


Introduction
The zero-range process was introduced by Spitzer [10] as a Markov process on N_0^S, where N_0 is the set of non-negative integers and S is a denumerable set. In this process particles are indistinguishable, and a particle leaves a given site x at rate g(n), where n is the number of particles present at x. Once a particle jumps from x it moves to a site y chosen according to a transition probability matrix p(x, y) on S. The choice of the target site y is independent of the time at which the jump occurs and of the past of the process. Throughout this paper S will be an integer lattice and p(x, y) will be the transition matrix of a random walk on that lattice. On occasion we will write p(z) for p(0, z).
The existence of the dynamics was proved initially by Holley [6] and Liggett [7]. Their results were extended by Andjel [1] who adapted to the zero range process a technique introduced by Liggett and Spitzer [8]. Andjel assumes that the rates satisfy a Lipschitz condition sup n≥0 |g(n + 1) − g(n)| < ∞, thus imposing that the rates grow at most linearly. More recently, Balázs, Rassoul-Agha, Seppäläinen and Sethuraman [3] construct the zero-range process with totally asymmetric dynamics p(x, y) = 1(y − x = 1) and nearest neighbour jumps in the one dimensional lattice Z, under the assumption that the jump rates are nondecreasing and grow at most exponentially. Under these conditions, they prove that the process is Markov and admits a one parameter family of extremal invariant measures. Their proofs are based on a representation of the model as a system of columns with monotonically increasing heights, for which the totally asymmetric assumption on the dynamics is crucial.
In this article we introduce two methods to construct zero-range processes with superlinear rates on integer lattices, and identify the associated martingales. The first method allows for quite general rate functions g, but requires either nearest neighbour transition probabilities on the one-dimensional lattice Z, or that both the dynamics and the initial distribution be translation invariant on Z^d. The second method can be applied to quite general random walks on Z^d, but is more restrictive on the rate functions.

Notation and results
Throughout the article the set of sites will be the integer lattice, S = Z^d. Given a transition matrix p(·, ·) on Z^d and a rate function g : N_0 → [0, +∞) such that g(0) = 0, our goal is to construct an associated Markov process on the state space

X := N_0^{Z^d} (2.1)

endowed with the product topology. Elements η ∈ X will be called configurations, with η = (η(x) : x ∈ Z^d), where η(x) ∈ N_0 is the number of particles at site x. We also define the set of configurations with finitely many particles,

X_f := {η ∈ X : Σ_x η(x) < ∞}. (2.2)

We say that a function f : X → R is local if there exists a finite set A ⊂ Z^d such that f(η) = f(ξ) whenever η(x) = ξ(x) for all x ∈ A. We call the smallest such set the support of f. The formal generator of our dynamics is

Lf(η) = Σ_{x,y ∈ Z^d} p(x, y) g(η(x)) [f(η^{x,y}) − f(η)], (2.3)

where f : X → R is a bounded local function, and

η^{x,y}(z) = η(x) − 1 if z = x and η(x) ≥ 1,
             η(y) + 1 if z = y and η(x) ≥ 1,
             η(z) otherwise. (2.4)

Informally, at rate g(k) a site x containing k particles loses one, which jumps to site y with probability p(x, y). We will say that a process (η_t, t ≥ 0) on a subset of X is a solution of the martingale problem associated to L if for any local bounded function f

M_t(f) := f(η_t) − f(η_0) − ∫_0^t Lf(η_s) ds

is a martingale. We also say that the process satisfies the integrated forward equation if for any f as above

E[f(η_t)] = E[f(η_0)] + ∫_0^t E[Lf(η_s)] ds. (2.5)

In some cases we can show a stronger result, the forward equation, that is:

d/dt E[f(η_t)] = E[Lf(η_t)]. (2.6)

We are interested in the situation where the rates are non-decreasing and diverge at ∞,

g(n) ≤ g(n + 1), n ∈ N_0, and lim_{n→∞} g(n) = ∞. (2.7)

Condition (2.7) will imply that the processes we construct are attractive, that is, the coordinate-wise partial order of configurations η, ξ ∈ X,

η ≤ ξ ⟺ η(x) ≤ ξ(x) for all x ∈ Z^d,

is preserved by the dynamics. This means that there exists a coupled process ((η_t, ξ_t), t ≥ 0) with initial value (η, ξ) such that P(η_t ≤ ξ_t for all t ≥ 0) = 1 and both {η_t, t ≥ 0} and {ξ_t, t ≥ 0} follow (2.3); the coupling in this case is said to be increasing.
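As a concrete illustration of (2.3) and (2.4), the following sketch evaluates the formal generator on a finite configuration. The dictionary representation of configurations, the rate g(n) = n^2 and the local function f(η) = η(0) are illustrative choices, not taken from the text.

```python
def eta_xy(eta, x, y):
    """Configuration eta^{x,y}: one particle moved from x to y (eta: dict site -> count)."""
    if eta.get(x, 0) == 0:
        return dict(eta)  # no particle at x: configuration unchanged, as in (2.4)
    new = dict(eta)
    new[x] -= 1
    new[y] = new.get(y, 0) + 1
    return new

def generator(f, eta, g, p):
    """(Lf)(eta) = sum_{x,y} p(x,y) g(eta(x)) [f(eta^{x,y}) - f(eta)], as in (2.3).
    p(x) returns the list of (target, probability) pairs of the walk."""
    total = 0.0
    for x, n in eta.items():
        if n == 0:
            continue
        for y, prob in p(x):
            total += g(n) * prob * (f(eta_xy(eta, x, y)) - f(eta))
    return total

# Illustrative choices (not from the text): symmetric nearest neighbour walk, g(n) = n^2.
g = lambda n: n ** 2
p = lambda x: [(x - 1, 0.5), (x + 1, 0.5)]
f = lambda eta: eta.get(0, 0)   # local function: number of particles at the origin
eta0 = {0: 2, 1: 1}             # two particles at 0, one at 1
print(generator(f, eta0, g, p))  # -g(2) + 0.5*g(1) = -3.5
```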
One such coupling is the basic coupling, which tries to match the two marginal processes as much as possible, and supplements the rates to get the right marginal distributions. Its generator is given by

L̄f(η, ξ) = Σ_{x,y} p(x, y) { [g(η(x)) ∧ g(ξ(x))] [f(η^{x,y}, ξ^{x,y}) − f(η, ξ)]
  + [g(η(x)) − g(η(x)) ∧ g(ξ(x))] [f(η^{x,y}, ξ) − f(η, ξ)]
  + [g(ξ(x)) − g(η(x)) ∧ g(ξ(x))] [f(η, ξ^{x,y}) − f(η, ξ)] }, (2.8)

for f : X × X → R a local, bounded function. The partial order on X induces a partial order on the set of probability measures on X: given two such measures µ and ν we say that µ ≤ ν if ∫ f dµ ≤ ∫ f dν for any bounded, local, increasing function f. The zero-range process started from an initial configuration η ∈ X_f is a well defined continuous time Markov process with bounded rates on a countable state space. We will denote by S(t) the semigroup associated to L acting on configurations with finitely many particles. When the initial configuration η ∈ X \ X_f, we can consider an increasing sequence η^n → η, η^n ∈ X_f for all n ≥ 1, and apply the basic coupling to obtain a limiting process η_t = lim_{n→∞} η^n_t, t ≥ 0; this paper is concerned with finding conditions under which this process is well defined for all times, and with identifying its martingales and invariant measures.
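The rate decomposition behind the basic coupling at a single site can be sketched as follows: the two marginals jump together at the minimum of their two rates, and the configuration with the larger rate performs the excess jumps alone. The rate g(n) = n^2 is an illustrative choice.

```python
def coupled_rates(n_eta, n_xi, g):
    """Basic-coupling rates at one site holding n_eta eta-particles and n_xi
    xi-particles: (joint jump rate, eta-alone rate, xi-alone rate).
    Each marginal's total rate is g(n): joint + alone = g(n), so marginals are correct."""
    joint = min(g(n_eta), g(n_xi))
    eta_alone = g(n_eta) - joint
    xi_alone = g(n_xi) - joint
    return joint, eta_alone, xi_alone

g = lambda n: n ** 2   # illustrative rate function
# With 2 eta- and 3 xi-particles: joint rate 4, xi jumps alone at rate 5.
print(coupled_rates(2, 3, g))   # (4, 0, 5)
```

Because g is non-decreasing, a site where η(x) ≤ ξ(x) never lets an η-particle jump alone, which is exactly why the coupling preserves the partial order η ≤ ξ.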
The first result shows that the process can indeed be constructed starting from a translation invariant measure µ on X. In order to state it, we need to introduce a family of auxiliary measures associated to µ: for n ∈ N, let [µ]_n be the distribution on X of the truncated configuration η(x) 1(|x| ≤ n), with η distributed according to µ. (2.10)

Proposition 2.1 Let {g(n)}_{n≥0} be as in (2.7), and consider translation invariant transition probabilities {p(x, y)}_{x,y∈Z^d}. Let µ be a translation invariant probability measure on X such that ∫ η(0) dµ(η) < ∞. Then, for all t ≥ 0, the sequence [µ]_n S(t) converges as n → ∞ to a probability measure µ_t on X satisfying:
i) Translation invariance: µ_t is translation invariant;
ii) ∫ η(0) dµ_t ≤ ∫ η(0) dµ;
iii) Semigroup property: for s, t ≥ 0, µ_{t+s} = (µ_t)_s; (2.13)
and if µ_n is an increasing sequence of probability measures on X_f that converge weakly to µ, then
iv) lim_n µ_n S(t) = µ_t. (2.14)

It follows from this proposition that if µ is a translation invariant measure with finite mean, then the process started from almost any configuration with respect to µ will not suffer explosions. Unfortunately, we cannot deduce from this that the same holds for any given unbounded deterministic initial condition.
A natural question is whether equality holds in part ii) of this proposition. In §4.1 we answer the question affirmatively when p(x, y) corresponds to a nearest neighbour random walk on Z.
The parameter φ is called the fugacity, and the measure exists as long as its normalising constant is finite. Under the hypotheses (2.7) the measures are well defined for all choices of φ, and they have finite moments of all orders. The particle density is given by R(φ) := ∫ η(0) dµ_φ, which turns out to be strictly increasing in the parameter φ, by Jensen's inequality. We also point out that lim_{φ→0} R(φ) = 0 and lim_{φ→∞} R(φ) = ∞. (2.18) Finally, for any φ > 0 we have ∫ g(η(0)) dµ_φ < ∞. (2.19) It is known that the measures {µ_φ}_{φ>0} are invariant for the zero-range dynamics when this is well defined, see e.g. [1, 10]. The following result states that this remains so under our hypotheses.
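For the reader's convenience, we recall the standard product form of the zero-range measures µ_φ (see e.g. [1, 10]); Z(φ) and the rate factorial g(k)! are the usual conventions.

```latex
\[
  \mu_\phi\bigl(\eta(x) = k\bigr) \;=\; \frac{1}{Z(\phi)}\,\frac{\phi^{k}}{g(k)!},
  \qquad g(k)! := \prod_{j=1}^{k} g(j), \quad g(0)! := 1,
  \qquad Z(\phi) := \sum_{k \ge 0} \frac{\phi^{k}}{g(k)!},
\]
\[
  R(\phi) \;=\; \int \eta(0)\, d\mu_\phi
          \;=\; \frac{1}{Z(\phi)} \sum_{k \ge 1} k\,\frac{\phi^{k}}{g(k)!}.
\]
```

Under (2.7), g(k)! grows faster than any power of φ, so Z(φ) < ∞ and all moments are finite for every φ > 0.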
For η_0 ∈ X and n ∈ N let

η^n(x) := η_0(x) 1(|x| ≤ n). (2.20)

The process η^n_t with initial value η^n_0 = η^n is well defined, and using the basic coupling (2.8) to build these processes simultaneously we obtain η^n_t(x) ≤ η^{n+1}_t(x) for all x, t, n. We can now let

η_t(x) := lim_{n→∞} η^n_t(x), (2.21)

where the process η_t takes values in (N_0 ∪ {∞})^{Z^d}. The rest of the article focuses on finding a subset Y ⊆ X and conditions on the rates and jump distributions so that {η_t, t ≥ 0} is a Markov process on Y.
In § 4 we consider the one dimensional case X = N_0^Z with nearest neighbour transitions, p(x, y) = 0 if |x − y| > 1. Then, we let

Y := {η ∈ X : lim sup_{n→∞} (1/(2n+1)) Σ_{|x|≤n} η(x) < ∞} (2.22)

be the space of configurations with bounded densities, and prove:

Theorem 2.2 Let d = 1, let {g(n)}_{n≥0} be as in (2.7), and let {p(x, y)}_{x,y∈Z} be the transition probability matrix of a nearest neighbour random walk. If η_0 ∈ Y, then {η_t, t ≥ 0} is a Markov process on Y.
Next, we show that this process solves the martingale problem associated to the generator (2.3). Note that when p(·, ·) is symmetric our proof requires that the rate function g be bounded by an exponential function.

Theorem 2.3 Let d = 1, let {g(n)}_{n≥0} be as in (2.7) and let {p(x, y)}_{x,y∈Z} be the transition probability matrix of a nearest neighbour random walk.
i) If p(0, 1) − p(0, −1) ≠ 0 then (η_t, t ≥ 0) is a solution of the martingale problem associated to L and (2.5) holds.
ii) If g is bounded by an exponential function, g(n) ≤ ce^{θn} for some c, θ > 0 and all n ∈ N, then (η_t, t ≥ 0) is a solution of the martingale problem associated to L and (2.6) holds.
In § 5 we find an alternative set of conditions ensuring that the process is well defined; these are more restrictive on the jump rates, but allow for general finite range transitions and any dimension. In order to derive the results of that section we need to construct our process with a different limiting procedure. Given η ∈ X we enumerate the particles of η in an arbitrary manner. Then we let x_i be the position of the i-th particle, and for each N ∈ N we let η^N be the configuration consisting of the first N particles. Then, for a given continuous time random walk on Z^d and z ∈ Z^d, we let τ_0 be the hitting time of the origin, P_z the law of the random walk starting from z, and F_z(t) := P_z(τ_0 ≤ t). Finally, for η ∈ X, t ≥ 0 and z ∈ Z^d we define the quantity m_z(t, η); in § 5 we will see that m_z(t, η) is an upper bound for the expected number of particles reaching the origin in the time interval [0, t] for the initial configuration η.
The following result is a version of Theorem 2.3 with different hypotheses on the jump rates and transition probabilities, for general dimension d ≥ 1.
Theorem 2.4 Let η_0 ∈ X and assume that:
i) m_z(t, η_0) < ∞ for all z ∈ Z^d and t ≥ 0;
ii) there exist positive θ, c such that g(n) ≤ ce^{θn} for any n ∈ N_0.
Then

M_t(f) := f(η_t) − f(η_0) − ∫_0^t Lf(η_s) ds

is a martingale for any local, bounded function f : X → R. Moreover (2.6) holds.
An application of this theorem is given by the following corollary.
Corollary 2.1 Assume that the initial configuration of particles has a finite upper density, ρ = lim sup_{m→∞} (2m+1)^{−d} Σ_{|x|≤m} η_0(x) < ∞. Let p(x, y) be translation invariant, finite-range transition probabilities with mean zero, Σ_z z p(z) = 0. If the rates satisfy (2.7) and g(n + 1) − g(n) ≤ Cn^a with a < 2/d, then the conclusions of Theorem 2.4 hold.
Finally, we note that by the law of large numbers, for any φ > 0, the measure µ_φ in (2.15) is supported on Y. Then, once the zero-range process is well defined, for instance under the hypotheses of Theorems 2.2 or 2.4, Theorem 2.1 identifies a family of invariant measures which are translation invariant.
The proofs in this paper are presented as follows. In § 3 we prove Proposition 2.1, Theorem 2.1 and a lemma establishing some properties of the invariant measures µ_φ. In § 4 we restrict the setting to Z with nearest neighbour transition matrices, p(x, y) = 0 if |x − y| > 1. We first show that the process started from an arbitrary configuration in Y does not undergo explosions, then that it satisfies the Markov property, and after that a conservation of mass property for translation invariant initial distributions. Finally, in the last part of that section we prove Theorem 2.3. In § 5 we prove Theorem 2.4 and Corollary 2.1. We conclude the paper by stating some open problems in § 6.

Translation invariant initial distributions
In this section we first prove Proposition 2.1, where the zero-range process is constructed for translation invariant initial distributions having a finite mean. We then prove Theorem 2.1 concerning invariant measures.

Proof of proposition 2.1.
To prove ii) let {[µ]_n}_{n∈N} be the family of probability measures associated to µ as in (2.10). We now show that the quantity c^n_t(x) := ∫ η(x) d([µ]_n S(t)) converges. It is increasing in n and therefore converges to a limit c_t(x) ∈ [0, ∞]. A simple coupling argument using the translation invariance of µ shows that for all x, y ∈ Z^d, all n ∈ N and t ≥ 0 we have (3.1). Then c_t(x) ≤ c_t(y), and exchanging the roles of x and y we get the opposite inequality. Hence c_t(x) does not depend on x and we rename it c_t. Fix ε > 0.
Then there exists n_0 such that the corresponding bound holds for all n ≥ n_0. It then follows from (3.1) that (3.2) holds. But the left hand side of (3.2) is bounded above by (3.3), where the first equality follows from the fact that the number of particles of a finite initial configuration is conserved. It now follows from (3.2) and (3.3), letting m go to infinity and using that ε is arbitrary, that the resulting bound holds for any M strictly smaller than c_t; this implies that c_t ≤ ∫ η(0) dµ < ∞. Using again the assumption that ∫ η(0) dµ < ∞, we see that the sequence {[µ]_n S(t)}_{n∈N} is tight, and from the fact that it is increasing it follows that it must converge to a measure µ_t with ∫ η(0) dµ_t ≤ ∫ η(0) dµ, and ii) is proved. Pick x ∈ Z^d and let T_x denote translation by x. Define µT_x as the image of µ under T_x. Taking limits as n goes to infinity we get µ_t T_x ≥ µ_t. Since the opposite inequality can be proved in the same way, i) follows.
Let now {µ n } n∈N be an increasing sequence of probability measures on X f converging weakly to µ. For all k ≥ 1 we have and therefore Taking limits on k we get The opposite inequality follows from thus proving iv).
In order to obtain iii), note that Since [µ] n S(t) increases to µ t , the result follows from iv).
We now turn to the proof of Theorem 2.1. Recall the definition (2.15) of the family of translation invariant, product measures {µ_φ}_{φ>0}. Given a measure µ on X we will consider its projection Π_n(µ) on X_n := N_0^{[−n,n]^d}. A standard computation shows that the measures Π_n(µ_φ) are invariant for the periodic zero-range process on X_n with transition probability matrix p_n(·, ·). Call Ŝ_n(t) the semigroup associated to this process. Now define a new process on X_n. In this new process particles jump as in the original process following the transition matrix p(·, ·), but when a particle jumps to a point off [−n, n]^d it vanishes. Call S̃_n(t) its semigroup. Standard coupling techniques yield (3.5) and (3.6). Fix φ > 0 and let d_{t,n}(x) be the corresponding discrepancy. Note that this function is decreasing in n. Let d_t(x) be its limit. Using coupling and the translation invariance of µ_φ we obtain d_t(y) ≤ d_t(x), and reversing the roles of x and y we conclude (3.7). Coupling the processes with semigroups Ŝ_n and S̃_n and starting them from the same random initial configuration distributed according to Π_n(µ_φ), we see that the rate at which (3.7) increases with t is, at all times, bounded above by a boundary term. But since d_{t,n}(x) is decreasing in n and its limit does not depend on x, this can only happen if d_t(x) = 0. Together with (3.6) this implies that the finite dimensional distributions of Π_n(µ_φ)S̃_n(t) increase as n → ∞ to the finite dimensional distributions of µ_φ. It now follows from (3.5) and part ii) of Proposition 2.1 that (µ_φ)_t = µ_φ for all t ≥ 0. We finish this section with a lemma describing some simple properties of the invariant measures µ_φ.
and for any α > 0 lim φ→∞ lim sup Proof of lemma 3.1. The first statement follows immediately from the divergence of g(k). For the second statement, first note that for any M > 0 Therefore for any 0 < p < 1 there exists a φ(p) such that for any φ ≥ φ(p) we have Bernoulli with parameter p. But the right hand side above is equal to Bernoulli with parameter 1−p. Using the expression for the large deviation rate of Bernoulli random variables, see for instance [2], we see that for any K > 0 lim sup

Dimension 1, nearest neighbour transitions
Throughout this section we assume that d = 1 and that {p(x, y)}_{x,y∈Z} corresponds to a translation invariant, nearest neighbour random walk on Z,

p(x, x + 1) = p, p(x, x − 1) = q = 1 − p. (4.1)

Let X_f and Y be as in (2.2) and (2.22) respectively. Since we will be following the evolution of individual particles, it will be convenient to consider elements of Y as increasing limits of elements of X_f. Hence, for η ∈ Y and n ∈ N we define the truncations η^n as in (2.20), together with the functionals r and j. Note that for all ζ, ψ ∈ X_f we have r(ζ, ψ) ≤ j(ζ, ψ). Given initial configurations ζ_0, ψ_0, consider the coupled processes ζ_t, ψ_t on X_f^2. We claim that j(ζ_t, ψ_t) only increases when a ψ particle jumps off 0. To justify this last statement, first note that if a ψ and a ζ particle jump together the value of j remains unchanged. Then look at jumps of a ζ particle not accompanied by a ψ particle occurring at some time s, and consider the following cases:
1. The ζ particle jumps from k < 0 to k − 1. In this case for any n ≤ 0 and any m ≥ 0 the expression Σ_{x=n}^{m} (ζ(x) − ψ(x)) either remains unchanged or decreases by one unit.
2. The ζ particle jumps from k < 0 to k + 1. Since no ψ particle jumped, it must be the case that just before the jump the relevant partial sums satisfied the inequality ensuring that j does not increase.
3. The ζ particle jumps from k > 0 to k + 1. In this case for any n ≤ 0 and any m ≥ 0 the expression Σ_{x=n}^{m} (ζ(x) − ψ(x)) either remains unchanged or decreases by one unit.
4. The ζ particle jumps from k > 0 to k − 1. Since no ψ particle jumped, it must be the case that just before the jump the relevant partial sums satisfied the inequality ensuring that j does not increase.
5. The ζ particle jumps from 0 to either 1 or −1. In this case j either remains unchanged or decreases by one unit.
Next look at jumps of a ψ particle not accompanied by a ζ particle occurring at some time s, and consider the following cases:
1. The ψ particle jumps from k < 0 to k + 1. In this case for any n ≤ 0 and any m ≥ 0 the expression Σ_{x=n}^{m} (ζ(x) − ψ(x)) either remains unchanged or decreases by one unit.
2. The ψ particle jumps from k < 0 to k − 1. Since no ζ particle jumped, it must be the case that just before the jump the relevant partial sums satisfied the inequality ensuring that j does not increase.
3. The ψ particle jumps from k > 0 to k − 1. In this case for any n ≤ 0 and any m ≥ 0 the expression Σ_{x=n}^{m} (ζ(x) − ψ(x)) either remains unchanged or decreases by one unit.
4. The ψ particle jumps from k > 0 to k + 1. Since no ζ particle jumped, it must be the case that just before the jump the relevant partial sums satisfied the inequality ensuring that j does not increase.
The only remaining case is when a ψ particle jumps off 0. In this case j either remains unchanged or increases by one unit.
Therefore, if we denote by N_t(ψ) the number of ψ particles that jumped off 0 on [0, t], we get (4.3) and (4.4). Let φ be large enough so that lim_n (1/n) Σ_{x=1}^{n} ξ(x) > γ for µ_φ-almost all ξ. Consider zero-range processes η^n_· and ξ^n_· having initial configurations η^n_0 = η^n and ξ^n_0 = ξ^n, with ξ distributed according to µ_φ. By (4.4), taking limits as n goes to infinity, we see that η_t(0) < ∞ a.s. will follow from i) j(η, ξ) < ∞ a.s. and ii) lim_n ξ^n_t(0) < ∞ a.s., by Tonelli's Theorem. Since the process is monotone in n and g is increasing, the RHS in (5.2) is bounded above by ∫_0^t ∫ E_ξ[g(ξ_s(0))] dµ_φ(ξ) ds = t ∫ g(ξ(0)) dµ_φ(ξ) < ∞, from the invariance of µ_φ and the fact that E_{µ_φ}[g(ξ(0))] < ∞, see (2.19). With our choice of φ we get j(η, ξ) < ∞ for µ_φ-almost all ξ. This proves i). Item ii) follows using the invariance of µ_φ. To prove that η_t ∈ Y for t > 0, we apply the second inequality in (4.3) to η^n_0 and ψ^n_0 and take limits in n, which implies the desired result. We now prove the Markov property. For this purpose, it is helpful to recall the following graphical construction, first developed by Harris in [5], which for convenience we describe in our particular setting of nearest neighbour, one-dimensional zero-range processes.
Graphical representation. Independently for each bond (x, x+1), x ∈ Z, consider an intensity-one Poisson point process Γ^x(dy, dt) on the positive quadrant y ≥ 0, t ≥ 0, and a sequence of i.i.d. uniform random variables U^x_i ∼ U[0, 1], i ≥ 1. We will now give an explicit construction of the process for a finite initial configuration η ∈ X_f, a rate function g(n) and an underlying nearest neighbour (p, q) random walk. Assume that the (p, q) zero-range process {η_s = η^{(p,q)}_s, s < t} has been built up to time t−, and that there have been exactly j jumps off x by t−. Then, if the Poisson point process Γ^x has an atom at (y, t) and η(x) ≥ 1, the site x will lose a particle if g(η(x)) ≥ y. If that is the case, the particle will jump to the right if U^x_{j+1} ≤ p, and to the left otherwise. The advantage of this method is that it allows us to construct, on the same probability space, zero-range processes for all initial finite-particle configurations, all nearest neighbour dynamics (p, q) and all jump rates g(k), k ≥ 1. Denote by w = {Γ^x}_{x∈Z} the collection of Poisson marks. For x ∈ Z and s < t, let Γ^x_{s,t} be the Poisson points falling on R_{≥0} × (s, t] and w_{s,t} = {Γ^x_{s,t}}_{x∈Z}. Then, given an initial configuration η_0 ∈ X_f, we can describe the state of the process η_t at time t as a function of its state η_s at time s and the updates w_{s,t} occurring on (s, t], as in (4.6). The Markov property follows immediately. We now show that it is possible to take limits in (4.6) to extend it to initial configurations in Y.
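The construction above can be sketched in code. The version below is a thinned (uniformized) rendering of the Harris marks: instead of fixing all Poisson marks in advance, it generates candidate marks at the currently occupied sites at the uniform rate G = g(total number of particles), which bounds every jump rate since g is non-decreasing, and accepts a mark at x when its uniform coordinate is at most g(η(x)). For a finite configuration this produces the same law as the per-bond construction; all names are illustrative.

```python
import random

def zero_range_thinned(eta0, g, p_right, T, seed=0):
    """Simulate a finite zero-range process on Z up to time T.
    eta0: dict site -> particle count (finitely many particles, at least one);
    g: non-decreasing rate function with g(0) = 0; p_right plays the role of p."""
    rng = random.Random(seed)
    eta = {x: n for x, n in eta0.items() if n > 0}
    G = g(sum(eta.values()))          # uniform bound: g(eta(x)) <= G at all sites, always
    t = 0.0
    while True:
        occupied = [x for x in eta if eta[x] > 0]
        t += rng.expovariate(G * len(occupied))   # next candidate mark (superposed clocks)
        if t > T:
            return eta
        x = rng.choice(occupied)
        if rng.uniform(0.0, G) <= g(eta[x]):      # thinning: mark accepted iff y <= g(eta(x))
            y = x + 1 if rng.random() <= p_right else x - 1
            eta[x] -= 1
            eta[y] = eta.get(y, 0) + 1

final = zero_range_thinned({0: 3, 2: 1}, lambda n: n, 0.5, 1.0)
print(final, sum(final.values()))   # the total particle number stays 4
```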
In order to prove the theorem, it will be enough to show that the process {η_t, t ≥ 0} satisfies (4.6) for any s < t. One inequality follows by monotonicity of the approximation; on the other hand, for each fixed n ∈ N the identity holds for the finite process, and taking limits in n this yields the opposite inequality. The result follows.

Mass conservation
The next lemma states that when the initial configuration is distributed according to a translation invariant measure, mass is preserved. Consider the process on [−n, n] in which particles jumping off [−n, n] are lost; this process follows the semigroup S̃_n(t) defined in the proof of Theorem 2.1. Apply the basic coupling to two versions of this process, η^n_t and ξ^n_t, started from initial configurations distributed as Π_n(µ) and Π_n(µ_φ) respectively, with φ chosen so that µ(η(0)) < µ_φ(η(0)). Denote by Ψ_n(t) and Φ_n(t) the number of η^n and ξ^n particles that jump from n to n + 1 (and are thereby lost) over the time interval [0, t]. Let J_n(η, ξ) := [Σ_{y=−n}^{n} (η(y) − ξ(y))]^+. Now note that J_n(η^n_s, ξ^n_s) − Φ_n(s) + Ψ_n(s) can only decrease in time. Hence Ψ_n(t) ≤ Φ_n(t) − J_n(η^n_t, ξ^n_t) + J_n(η^n_0, ξ^n_0) ≤ Φ_n(t) + J_n(η^n_0, ξ^n_0), P_{µ×µ_φ}-a.s.

Now note that Φ_n(t)/n and J_n(η^n_0, ξ^n_0)/n converge to 0 in probability, hence the same holds for Ψ_n(t)/n in P_µ-probability. Similarly, if we denote by Ψ_{−n}(t) the number of η-particles jumping from −n to −(n + 1) over [0, t], it follows that Ψ_{−n}(t)/n converges to 0 in P_µ-probability. We thus get that the mass lost at the boundary is negligible, and in particular lim_n (1/n) Σ_{x=−n}^{n} η^n_t(x) = ρ in P_µ-probability. Since η_t(x) ≥ η^n_t(x) P_µ-a.s. we get lim_n (1/n) Σ_{x=−n}^{n} η_t(x) ≥ ρ in P_µ-probability. But E_µ[η_t(x)] does not depend on x by part i) of Proposition 2.1, hence the last inequality implies that E_µ[η_t(0)] ≥ ρ.

Proof of Theorem 2.3
To prove the theorem we will need two lemmas and a proposition, which we state here and prove later.

Lemma 4.3 Let d = 1, {g(n)}_{n≥0} as in (2.7), and consider {p(x, y)}_{x,y∈Z} as in (4.1) with p ≠ q. Then, for η ∈ Y, the distribution P_η of the process {η_t, t ≥ 0}, η_0 = η, satisfies E_η[g(η_t(x))] < ∞ for all x ∈ Z and t > 0.

Lemma 4.4 Exponentially bounded rates
Let d = 1, {g(n)}_{n≥0} as in (2.7), and consider {p(x, y)}_{x,y∈Z} as in (4.1). Assume further that the rate function is exponentially bounded: there exists λ > 0 such that g(n) ≤ e^{λn} for all n ∈ N. Then for η ∈ Y, Y the set in (2.22), the distribution P_η of the process {η_t, t ≥ 0} satisfies E_η[g(η_t(x))^r] < ∞ for any r ∈ [1, ∞), x ∈ Z, and t > 0.

Proposition 4.1 If the process satisfies the moment bound (4.9), then it satisfies the forward equation (2.6) for any local bounded function f : X → R.
Proof of theorem 2.3. We start by showing that {η_t, t ≥ 0} is a solution of the martingale problem under the hypotheses of either item of the theorem. For each fixed n ∈ N and initial configuration η_0 ∈ Y, the process {η^n_t ; t ≥ 0} is supported on the countable state space {η ∈ X : Σ_x η(x) = Σ_{|x|≤n} η_0(x)}, and the transition rates are bounded above by g(Σ_{|x|≤n} η_0(x)) < ∞, so this is a Markov chain without explosions. Therefore, for any local, bounded function f : X → R the process M^n_t(f) is a martingale, with quadratic variation expressed in terms of ∇_{x,y} f(η) := f(η^{x,y}) − f(η), where η^{x,y} is as in (2.4). Let A be the support of f and let Ā = {y ∈ Z : inf_{x∈A} |y − x| ≤ 1}. Then, due to (4.7) and (4.8), in either item of the statement of the theorem, we can take the limit as n → ∞ in (4.11) and conclude that the sequence M^n_t(f) converges a.s. and in L^1 to M_t(f). It then follows from the main result of [9] that M_t(f) is a martingale. Now, for item i) of the theorem, (2.5) follows from Fubini's Theorem and Lemma 4.3, and for item ii) of the theorem, (2.6) follows from Lemma 4.4 and Proposition 4.1.
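For reference, the pre-limit martingale and its predictable quadratic variation have the standard form for pure-jump Markov chains, written here in the notation of (2.3) and (2.4).

```latex
\[
  M^n_t(f) \;=\; f(\eta^n_t) - f(\eta^n_0) - \int_0^t Lf(\eta^n_s)\, ds,
\]
\[
  \langle M^n(f) \rangle_t
  \;=\; \int_0^t \sum_{x,y} p(x,y)\, g\bigl(\eta^n_s(x)\bigr)\,
        \bigl(\nabla_{x,y} f(\eta^n_s)\bigr)^2\, ds,
  \qquad \nabla_{x,y} f(\eta) := f(\eta^{x,y}) - f(\eta).
\]
```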

Proof of Lemmas 4.3 and 4.4 and of Proposition 4.1
In order to make the presentation as clear as possible, the proof of the asymmetric case, Lemma 4.3, is first presented for the simpler, totally asymmetric dynamics, and generalised in a second step.
We will now compare two zero-range processes starting from elements η, ξ ∈ X_f. The particles of η will be classified as γ and ρ particles. This means that at any time t and any site x, we will have η_t(x) = γ_t(x) + ρ_t(x). This comparison will be made thanks to an auxiliary process (γ_t, ρ_t, ξ_t)_{t≥0} on X_f^3. Rather than writing down a long generator, we state the rates of this process for an arbitrary configuration (γ, ρ, ξ), introducing some further notation for γ, ρ ∈ X_f along the way. We now state the rates of the auxiliary process: 1. For x ∈ Z, at rate p g(γ(x) ∧ ξ(x)) the process jumps to (γ^{x,x+1}, ρ, ξ^{x,x+1}).
To derive some of the properties of this process it is convenient to distinguish ρ-particles from each other. To do so, we label them with positive integers and adopt the convention that whenever a ρ-particle has to jump from a site x, the jump is performed by the particle having the lowest label among those present at x. If there are k ρ-particles in total at time 0 we label them 1, . . . , k in an arbitrary manner, and then each time a ρ-particle is created we attribute to it the lowest available label in N. We now let Ψ(t) be the number of ρ-particles in the system at time t, and denote by Z_i the total number of returns (that is, up to time ∞) to the origin of the i-th ρ-particle. We can now state some properties of the process (γ_t, ρ_t, ξ_t), t ≥ 0.
ii) The process (ξ t ) t≥0 is a zero-range process with rate function g.
iii) Assume p = q. If the initial configuration is such that ρ(x) = 0 for all x = 0 then the conditional distribution of Z 1 , . . . , Z k given {Ψ(t) = k} corresponds to i.i.d. geometrically distributed random variables.
The first two properties are immediate consequences of the jump rates. Note that they imply that the total number of particles is conserved. Hence, for any initial configuration in X 3 f the jump rates are bounded. The third property is a consequence of the following facts, which are derived from the jump rates.
1. The evolution of γ and ξ particles does not depend on the presence or evolution of ρ particles.
2. The creation of ρ particles only depends on the evolution of γ and ξ particles.
3. Due to the convention adopted for the jumps of ρ particles, these perform i.i.d. (p, q) random walks.
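For p ≠ q, the geometric law in property iii) can be made explicit via the classical escape probability of a biased nearest neighbour walk on Z: such a walk returns to its starting point with probability 1 − |p − q|, so each Z_i satisfies

```latex
\[
  P(Z_i = k) \;=\; |p-q|\,\bigl(1 - |p-q|\bigr)^{k}, \quad k \ge 0,
  \qquad
  E[Z_i] \;=\; \frac{1 - |p-q|}{|p-q|} \;<\; \infty .
\]
```

This finiteness of E[Z_i] is precisely what fails in the symmetric case p = q, where the walk is recurrent.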
Our next lemma follows immediately from these considerations.
We recall the mapping j introduced at the beginning of § 4.

Lemma 4.6 Let (γ_t, ρ_t, ξ_t), t ≥ 0, be the Markov process on X_f^3 with dynamics determined by the rates 1–10 above. Denote by N(t) the number of ξ-particles jumping off 0 in the time interval [0, t]. Then for all t ≥ 0

Ψ(t) ≤ j(γ_0, ξ_0) + N(t).

Proof of lemma 4.6. First note that j(γ_s, ξ_s) can increase by at most one unit at any given time, and that this can only occur when a ξ-particle jumps off 0. We omit the proof of this assertion since it follows the same arguments as the proof of Lemma 4.1. Then, note that j(γ_s, ξ_s) decreases by one unit when a ρ-particle is created. Therefore,

j(γ_t, ξ_t) ≤ j(γ_0, ξ_0) + N(t) − Ψ(t),

and the lemma follows from the fact that j(γ_t, ξ_t) is non-negative.
From (4.17), the previous inequality, and Lemma 4.5, we get a further bound; using that in the auxiliary process the ξ-particles evolve as in a zero-range process, we conclude (4.18). Let us now fix η ∈ Y and pick α > 0 and β > 0 such that the upper density of η is smaller than β.
The first term of the right hand side above can be computed explicitly; since µ_α is invariant, it equals the corresponding static expectation. To obtain an upper bound for the second term in (4.19) we use Lemma 4.6. To complete the proof we show that ∫ j(η^n, ξ) d(Π_n(µ_α))(ξ) is bounded uniformly in n. For this, it now suffices to prove that ∫ j(η, ξ) dµ_α(ξ) is finite. This is done as follows: let k be such that the density of η beyond k is smaller than β.
Then, define the random variable L(ξ). If ξ is distributed according to µ_α, the random variables ξ(x), x ∈ Z, are i.i.d. and by Lemma 3.1 have finite exponential moments. Hence, we can apply standard large deviation results to conclude that µ_α(L(ξ) ≥ ℓ) decays exponentially with ℓ. Now define M(ξ) = max{L(ξ), k}. Then j(η, ξ) is bounded by a quantity with finite µ_α-expectation, and the proof is completed.
The proof of Lemma 4.3 relied on the fact that the number of visits to the origin of a random walk with non-vanishing drift has finite expectation. This fails in the symmetric case, and to prove that the conclusion of the lemma still holds in this case, we will restrict the family of rate functions to those having at most exponential growth.
Proof of lemma 4.4. It is enough to prove the lemma for x = 0. Recall the graphical representation from § 4. We will use it to simultaneously construct the zero-range process for all nearest neighbour dynamics determined by jump probabilities (p, q).
Fix n ∈ N and let η^n be the truncated configuration in (2.20). Under the graphical construction, and not counting multiple visits, the number of particles initially to the left of the origin that at any point during [0, t] have reached 0 is maximal for the totally asymmetric dynamics (p, q) = (1, 0), whereas the number of particles initially to the right of the origin that ever reach it during [0, t] is maximal for the opposite totally asymmetric dynamics (0, 1). Indeed, enumerate the particles of the initial configuration η^n according to their distance to the origin, with any arbitrary order for particles occupying the same site. Let then X_{i,n} be the position of the i-th particle of η^n at time 0, and denote by X^{(p,q)}_{i,n}(t) its position at time t under the (p, q) dynamics. Furthermore, let us stipulate that, on the event that there is a jump out of a site in the graphical construction, the particle with the highest (lowest) index at the site is the one removed if the direction of the jump is to the right (left). Then it is easy to check that the graphical construction ensures that

X^{(0,1)}_{i,n}(t) ≤ X^{(p,q)}_{i,n}(t) ≤ X^{(1,0)}_{i,n}(t) for all i, n and t ≥ 0.

In particular, if a particle initially to the left of the origin ever reached it during [0, t] for the (p, q) dynamics, the same must hold for the (1, 0) dynamics, with an analogous statement holding for the particles initially to the right of the origin and the (0, 1) dynamics. For the rest of the proof, we continue using the superscript (p, q) to specify which particular dynamics is being referred to.
Fix t > 0. For the (p, q) dynamics, the number of particles that reach 0 over [0, t] is at most the number of particles starting at 0 plus the number of indices i for which the corresponding totally asymmetric walk reaches 0 over [0, t]. Let r ≥ 1. Due to the bound g(k) ≤ e^{λk}, k ∈ N, and the previous observations, we obtain (4.21). We need to show that the last two factors on the right of (4.21) are uniformly bounded in n. We treat the first; the proof for the second one is completely analogous.

Alternative construction of the zero-range process
In this section we provide an alternative construction of the zero-range process, again under the assumption that the rate function g is non-decreasing. This construction is less general than the one of §§ 3 and 4, in the sense that it requires more restrictive assumptions on the rate function g(n): for instance, for dimension d and mean-zero, finite-range jump dynamics it requires g(n + 1) − g(n) ≤ Cn^a with a < 2/d, see Corollary 2.1. On the other hand, it also works in dimensions greater than 1 and does not require nearest-neighbour jump probabilities.
We start with some definitions. Let {p(x, y)}_{x,y∈Z^d} be the transition probabilities of a translation invariant random walk on Z^d, i.e. p(x, y) = p(y − x) with {p(z)}_{z∈Z^d} such that p(z) ≥ 0, z ∈ Z^d, and Σ_z p(z) = 1. Let {X_t ; t ≥ 0} be the continuous-time random walk generated by the transition probabilities {p(x, y)}_{x,y∈Z^d}. Let P_z be the law of {X_t ; t ≥ 0} with initial condition X_0 = z, let τ_0 be the hitting time of the origin by the random walk, and let

F_z(t) := P_z(τ_0 ≤ t). (5.1)

Notice that it may happen that τ_0 = +∞ with positive probability. We recall some notation from § 2. Given η ∈ X and {x_i}_{i∈N} an enumeration of the particles of η, let η^N be the configuration consisting of the first N particles. Finally, for η ∈ X, t ≥ 0 and z ∈ Z^d we consider the quantity m_z(t, η) introduced in § 2.

Lemma 5.1 Let η_0 ∈ X be an initial configuration of particles and let {x^i_0}_{i∈N} be an enumeration of the particles of η_0. Let t ≥ 0 and z ∈ Z^d. If m_z(t, η_0) is finite for any z ∈ Z^d, then η_s(x) < ∞ for all s > 0 and x ∈ Z^d, and it satisfies

log E[e^{θη_s(z)}] ≤ (e^θ − 1) m_z(t, η_0)

for any s ≤ t, any θ > 0 and any z ∈ Z^d.
Proof. Note that the sequence of initial configurations {η^N_0}_{N∈N} is increasing, and therefore one can use the basic coupling as explained in (2.8) to construct a sequence {η^N_t; t ≥ 0}_{N∈N} of zero-range processes with initial conditions η^N_0, such that η^N_t(x) is increasing in N for any t ≥ 0. Therefore, the limit η_t(x) := lim_{N→∞} η^N_t(x) exists in [0, ∞]. By monotonicity this limit is the same as in (2.21). Our aim is to prove that it is finite. For any N ∈ N and any t ≥ 0, η^N_t and η^{N+1}_t differ at only one site x^{N+1}_t and by only one unit. Conditioned on the trajectory of {η^N_t; t ≥ 0}, the process {x^{N+1}_t; t ≥ 0} is a time-inhomogeneous random walk with transition rates

r^{N+1}_t(x, y) = p(y − x)(g(η^N_t(x) + 1) − g(η^N_t(x)))

and initial position x^{N+1}_0. Since the zero-range process η^N_t has exactly N particles, r^{N+1}_t(x, y) ≤ p(y − x) h(N + 1).
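The finite-particle processes η^N used in the proof are straightforward to simulate, since the total jump rate Σ_x g(η(x)) is finite. The following Gillespie-type sketch performs one jump of a zero-range process on Z with non-decreasing rate g; the nearest-neighbour symmetric choice of p and all names are our own simplifying assumptions.

```python
import random

def zero_range_jump(eta, g, rng):
    """One jump of a finite-particle zero-range process on Z.
    eta: dict mapping site -> occupation number (finitely many particles);
    g: non-decreasing rate function with g(0) = 0. A particle leaves site x
    at rate g(eta(x)) and moves to x - 1 or x + 1 with probability 1/2 each.
    Returns the exponential holding time before the jump. Illustration only."""
    rates = [(x, g(n)) for x, n in eta.items() if n > 0]
    total = sum(r for _, r in rates)
    dt = rng.expovariate(total)          # holding time at total rate
    u = rng.uniform(0.0, total)          # departure site chosen prop. to g(eta(x))
    for x, r in rates:
        u -= r
        if u <= 0.0:
            break
    y = x + rng.choice((-1, 1))          # symmetric nearest-neighbour target
    eta[x] -= 1
    eta[y] = eta.get(y, 0) + 1
    return dt

# usage: N = 5 particles at the origin, superlinear rate g(n) = n**2
rng = random.Random(1)
eta = {0: 5}
t = 0.0
for _ in range(100):
    t += zero_range_jump(eta, lambda n: n * n, rng)
```

As in the proof, the particle number is conserved: η^N_t has exactly N particles for all t.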
In particular, {x^N_t; t ≥ 0} can be coupled with a random walk {X^N_t; t ≥ 0} with transition probabilities {p(z)}_{z∈Z^d} starting at x^N_0, in such a way that both walks visit exactly the same sites in exactly the same order, and such that the walk {x^N_t; t ≥ 0} always visits sites after the time-changed walk {X^N_{h(N)t}; t ≥ 0}. Moreover, we can define these couplings in such a way that the walks {X^N_t; t ≥ 0}_{N∈N} are independent. Define τ^N_z := inf{t ≥ 0; X^N_t = z}, τ̃^N_z := inf{t ≥ 0; x^N_t = z}, and notice that τ̃^N_z ≥ τ^N_z / h(N), so that τ̃^N_z ≤ t implies τ^N_z ≤ h(N)t. Now we observe that the number of particles at site z at time t is bounded by the number of particles that passed by z up to time t. Therefore, for any z ∈ Z^d and any N ∈ N,

η^N_t(z) ≤ Σ_{i=1}^{N} 1(τ̃^i_z ≤ t) ≤ Σ_{i=1}^{N} 1(τ^i_z ≤ h(i)t).

Taking expectations, we see that E[η^N_t(z)] ≤ m_z(t, η_0) < ∞. Therefore, {η_s; 0 ≤ s ≤ t} is well defined. Notice that η_t satisfies

η_t(z) ≤ Σ_{i∈N} 1(τ^i_z ≤ h(i)t).
The right-hand side of this estimate is a sum of independent random variables with Bernoulli laws of parameters p_i = F_{x_i^0 − z}(h(i)t). Therefore,

log E_{η_0}[e^{θ η_t(z)}] ≤ Σ_{i∈N} log(1 + p_i(e^θ − 1)) ≤ (e^θ − 1) m_z(t, η_0),

which finishes the proof of the lemma.
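The last inequality above is simply log(1 + x) ≤ x applied to x = p_i(e^θ − 1). A quick numerical check of the exact Bernoulli-sum exponential moment against this bound (with arbitrary illustrative parameters of our own choosing):

```python
import math
import random

def log_mgf_bernoulli_sum(ps, theta):
    """Exact log E[exp(theta * S)] for S a sum of independent
    Bernoulli(p_i) random variables: the product of individual MGFs."""
    return sum(math.log(1.0 + p * (math.exp(theta) - 1.0)) for p in ps)

rng = random.Random(7)
ps = [0.3 * rng.random() for _ in range(50)]   # illustrative parameters p_i
theta = 0.8
lhs = log_mgf_bernoulli_sum(ps, theta)
rhs = (math.exp(theta) - 1.0) * sum(ps)        # the bound (e^theta - 1) * sum p_i
```

Here sum(ps) plays the role of m_z(t, η_0), and the bound lhs ≤ rhs holds for any θ > 0.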

The martingale problem and the forward equation
In this section we show that under the conditions stated in Theorem 2.4, the process constructed in Lemma 5.1 satisfies the martingale problem.
Proof of Theorem 2.4. Recall that according to Lemma 5.1, the process {η_t; 0 ≤ t ≤ T} is well defined as the increasing limit of the processes {η^N_t; 0 ≤ t ≤ T}_{N∈N}, which are the zero-range processes with initial configurations η^N_0 given by (5.2). Since the process η^N_t has a finite number of particles, for any local, bounded function f : X → R the process

f(η^N_t) − f(η^N_0) − ∫_0^t Lf(η^N_s) ds

is a martingale, and we need to show that f(η^N_t) − f(η^N_0) converges a.s. to f(η_t) − f(η_0) and also in L^p. Let A ⊆ Z^d be the support of f, that is, A is the smallest subset of Z^d such that f(η) = f(ξ) whenever η and ξ agree on A, and define Ā := {y ∈ Z^d; p(x − y) > 0 for some x ∈ A}. Since f is local, A and Ā are finite. We have that

Lf(η) = Σ_{x∈Z^d} g(η(x)) Σ_{y∈Z^d} p(y − x)(f(η − δ_x + δ_y) − f(η)),

where δ_x is the configuration with exactly one particle at site x and no particles at other sites. From Lemma 5.1,

log E_{η^N_0}[g(η^N_t(z))^p] ≤ c_p(e^{pθ} − 1) m_z(t, η_0)

for any 0 ≤ t ≤ T, any N ∈ N and any z ∈ Z^d. From (2.27), this implies that there exists a constant C = C(f, c, θ, p) bounding the corresponding moments uniformly in N, so that

f(η_t) − f(η_0) − ∫_0^t Lf(η_s) ds

is well defined and is a martingale, as we wanted to show. Now (2.6) follows from an argument analogous to the one applied to prove Proposition 4.1.
Notice that for k ≥ η_0(z),

F_{x_k^0 − z}(h(k)t) ≤ C(h(k)t)^{p/2} |x_k^0 − z|^{−p} ≤ C(h(k)t)^{p/2} k^{−p/d}.

Therefore,

m_z(t, η_0) ≤ η_0(z) + C t^{p/2} Σ_{k≥η_0(z)} h(k)^{p/2} k^{−p/d}.

The sum is finite if h(k) ≤ ck^a with a < 2/d − 2/p. Therefore, we have proved that in this case m_z(t, η_0) ≤ C(η_0(z) + t^{p/2}).
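The threshold a < 2/d − 2/p comes from comparing the exponent of k in the tail sum with −1: with h(k) ≤ ck^a the summand is of order k^{ap/2 − p/d}, and Σ_k k^s < ∞ exactly when s < −1. A small sanity check of this arithmetic (illustrative only):

```python
def exponent(a, d, p):
    # exponent of k in the tail sum: (a * p / 2) - (p / d)
    return a * p / 2 - p / d

def series_converges(a, d, p):
    # sum_k k^(a p/2 - p/d) is finite iff the exponent is < -1,
    # which rearranges to a < 2/d - 2/p
    return exponent(a, d, p) < -1

# the convergence criterion flips exactly at a = 2/d - 2/p
for d in (1, 2, 3):
    for p in (4, 8, 100):
        threshold = 2 / d - 2 / p
        assert series_converges(threshold - 0.01, d, p)
        assert not series_converges(threshold + 0.01, d, p)
```

Since 2/d − 2/p increases to 2/d as p grows, any a < 2/d is covered by taking p large enough, which is how the corollary concludes.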
Taking p arbitrarily large, the corollary is proved.

Open problems
We finish this paper by stating some open problems.
1. Does equality hold in item ii) of Proposition 2.1?
2. By Proposition 2.1 we see that there is a large class of initial configurations for which the process does not explode. Are all configurations with a finite asymptotic density in that class? Theorem 2.2 states that this is the case when p(x, y) corresponds to a nearest-neighbour one-dimensional random walk, but its proof cannot be generalized and new ideas are required.
3. In the context of Theorem 2.3, does the integrated forward equation hold in the symmetric case for any increasing function g(k)? Does the backward equation hold if g(k) is bounded by an exponential? Regarding this last question, in [3] the backward equation is proved up to some finite time t which depends on the initial configuration when d = 1 and p(0, 1) = 1.