Random walk in a birth-and-death dynamical environment

We consider a particle moving in continuous time as a Markov jump process; its discrete chain is given by an ordinary random walk on ${\mathbb Z}^d$, and its jump rate at $({\mathbf x},t)$ is given by a fixed function $\varphi$ of the state of a birth-and-death (BD) process at ${\mathbf x}$ at time $t$; BD processes at different sites are independent and identically distributed, and $\varphi$ is assumed nonincreasing and vanishing at infinity. We derive a LLN and a CLT for the particle position when the environment is 'strongly ergodic'. In the absence of a viable uniform lower bound for the jump rate, we resort instead to stochastic domination, as well as to a subadditive argument to control the time the particle takes to make $n$ jumps; we also impose conditions on the initial (product) distribution of the environment. We also present results on the asymptotics of the environment seen by the particle (under different conditions on $\varphi$).


Introduction
In this paper, we analyse the long time behavior of random walks taking place in an evolving field of traps. A starting motivation is to consider a dynamical-environment version of Bouchaud's trap model on $\mathbb Z^d$. In (the simplest version of) the latter model, we have a continuous time random walk (whose embedded chain is an ordinary random walk) on $\mathbb Z^d$ with spatially inhomogeneous jump rates, given by a field of iid random variables, representing traps. The greatest interest is in the case where the inverses of the rates are heavy tailed, leading to subdiffusivity of the particle (performing the random walk), and to the appearance of the phenomenon of aging. See [13] and [3].
In the present paper, we again have a continuous time random walk whose embedded chain is an ordinary random walk (with various hypotheses on its jump distribution, depending on the result), but now the rates are spatially as well as temporally inhomogeneous: the rate at a given site and time is given by a (fixed) function, which we denote by $\varphi$, of the state of a birth-and-death chain (in continuous time, with time homogeneous jump rates) at that site and time; the birth-and-death chains at different sites are iid and ergodic.
We should not expect subdiffusivity if ϕ is bounded away from 0, so we make the opposite assumption for our first main result, which is nevertheless a Central Limit Theorem for the position of the particle (so, no subdiffusivity there, either), as well as a corresponding Law of Large Numbers.
CLTs for random walks in dynamical random environments have been previously established, from a more general point of view or under different motivations, in a variety of situations; we mention [5], [2], [10], [20] for a few cases with fairly general environments, and [9], [19], [14] in the case of environments given by specific interacting particle systems; [6] and [7] deal with a case where the jump times of the particle are iid. There is a relatively large literature establishing strong LLNs for the position of the particle in random walks in space-time random environments; besides most of the references given above, which also establish it, we mention [1] and [4]. [22] derives large deviations for the particle in the case of an iid space-time environment.
These papers assume (or naturally have) an ellipticity condition on their environments, from which our environment crucially departs, in the sense that our jump rates are not bounded away from 0. Jumps are generally also taken to be bounded, a possibly merely technical assumption in many respects, which we in any case forgo. It should also be said that in many other respects these models are considerably more general, or more correlated, than ours¹.
So we seem to need a different approach, and that is what we develop here. Our argument requires monotonicity of $\varphi$, and "strong enough" ergodicity of the environmental chains (translating into something like a second moment condition on their equilibrium distribution).
The main building block for arguing our CLT, in the case where the initial environment is identically 0, is a Law of Large Numbers for the time that the particle takes to make $n$ jumps; this in turn relies on a subadditivity argument, resorting to the Subadditive Ergodic Theorem. In order to obtain the control on expected values that the latter theorem requires, we rely on a domination of the environment left by the particle at jump times (when starting from equilibrium); this is a stochastic domination, rather than the strong domination that would be provided by the infimum of $\varphi$, were it positive. We extend to more general product initial environments, with a uniform exponentially decaying tail (restricting in this case to spatially homogeneous environments), by means of coupling arguments.
We expect to be able to establish various forms of subdiffusivity in this model when the environment is either not ergodic or not "strongly ergodic" (with, say, heavy tailed equilibrium measures). This is under current investigation. [7] has results in this direction in the case where the jump times of the particle are iid.
Another object of analysis in this paper is the long time behavior of the environment seen by the particle at jump times. We show convergence in distribution under different hypotheses (again, always with spatially homogeneous environments in this case), and also that the limiting distribution is absolutely continuous with respect to the product of environmental equilibria. We could not bring the domination property mentioned above to bear on this result in the most involved instances of a recurrent embedded chain, so we could not avoid the assumption of a $\varphi$ bounded away from 0 (in which case monotonicity can be dropped), and the "brute force", strong tightness control this allows. This puts us back under the ellipticity restriction on the rates², adopted in many results of the same nature that have been previously obtained, as in many of the above mentioned references, to which we add [8].
The remainder of this paper is organized as follows. In Section 2 we define our model in detail, and discuss some of its properties. Section 3 is devoted to the formulations and proofs of the LLN and CLT under an environment started from the identically 0 configuration. The main ingredient, as mentioned above, a LLN for the time that the particle takes to make $n$ jumps, is developed in Subsection 3.1, and the remaining subsections are devoted to the conclusion. In Section 4 we extend the CLT to more general (product) initial configurations of the environment (with a uniform exponential moment). In Section 5 we formulate and prove our result concerning the environment seen from the particle (at jump times). Three appendices are devoted to auxiliary results concerning birth-and-death processes and ordinary (discrete time) random walks.

The model
For $d \in \mathbb N^* := \mathbb N \setminus \{0\}$ and $S \subset \mathbb R^d$, let $D(\mathbb R_+, S)$ denote the set of càdlàg trajectories from $\mathbb R_+$ to $S$. We represent by $0 \in E$ and $1 \in E$, $E = \mathbb N^d, \mathbb Z^d, \mathbb N^{\mathbb Z^d}$, respectively, the null element, and the element with all coordinates identically equal to 1.
We will use the notation $M = \mathrm{BDP}(p,q)$ to indicate that $M$ is a birth-and-death process on $\mathbb N$ with birth rates $p = (p_n)_{n\in\mathbb N}$ and death rates $q = (q_n)_{n\in\mathbb N^*}$. We will below consider independent copies of such a process, and we will assume that $p_n, q_n \in (0,1)$ for all $n$, $p_n + q_n = 1$, and $\sum_{n\ge1}\prod_{i=1}^{n} p_{i-1}/q_i < \infty$ (2.1). This condition is well known to be equivalent to ergodicity of such a process. We will also assume that $p_n \le q_n$ for all $n$ and $\inf_n p_n > 0$. See Remark 3.13 at the end of Section 3.
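For concreteness, such a $\mathrm{BDP}(p,q)$ is easy to simulate by successive exponential holding times; the following is a minimal sketch, for illustration only (the homogeneous rates $p_n \equiv 0.3$, $q_n \equiv 0.7$ in the usage below are hypothetical choices satisfying the assumptions above).

```python
import random

def simulate_bdp(p, q, t_max, n0=0, rng=random):
    """State at time t_max of a birth-and-death process on N with birth
    rates p(n), n >= 0, and death rates q(n), n >= 1, started from n0."""
    t, n = 0.0, n0
    while True:
        rate = p(n) + (q(n) if n > 0 else 0.0)  # total jump rate at state n
        t += rng.expovariate(rate)              # exponential holding time
        if t > t_max:
            return n
        # birth with probability p(n)/rate, otherwise death
        n = n + 1 if rng.random() < p(n) / rate else n - 1
```

With homogeneous rates $p < q$ the equilibrium is geometric with ratio $p/q$, against which the simulation can be checked.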
We now make an explicit construction of our process, namely, the random walk in a birth-and-death (BD) environment. Let $\omega = (\omega_x)_{x\in\mathbb Z^d}$ be an independent family of BDPs as prescribed in the paragraph of (2.1) above, each started from its respective initial distribution $\mu_{x,0}$, independently of each other; we will denote by $\mu_{x,t}$ the distribution of $\omega_x(t)$, $t\in\mathbb R_+$, $x\in\mathbb Z^d$. $\omega$ plays the role of the random dynamical environment of our random walk, and we may view it as a stochastic process $(\omega(t))_{t\in\mathbb R_+}$. Let $P_{\bar\mu_0}$ denote the law of $\omega$.
Let now $\pi$ be a probability on $\mathbb Z^d\setminus\{0\}$, and let $\xi := (\xi_n)_{n\in\mathbb N^*}$ be an iid sequence of random vectors taking values in $\mathbb Z^d\setminus\{0\}$, each distributed as $\pi$; $\xi$ is assumed independent of $\omega$.
Next, let $M$ be a Poisson point process of rate 1 in $\mathbb R^d\times\mathbb R_+$, independent of $\omega$ and $\xi$. For each $x = (x_1,\dots,x_d)\in\mathbb Z^d$, let $M_x$ be the restriction of $M$ to $C_x\times[0,\infty)$, with $C_x$ the unit cube of $\mathbb R^d$ based at $x$; by well known properties of Poisson point processes, $\{M_x : x\in\mathbb Z^d\}$ is an independent collection, with $M_x$ a Poisson point process of rate 1 in $C_x\times[0,\infty)$.
Given $\omega\in A$ and $\varphi:\mathbb N\to(0,1]$, let $N_x$ be the point process obtained from $M_x$ by keeping the points lying below the graph of $r\mapsto\varphi(\omega_x(r))$ (2.4). Note that the projection of $N_x$ on $\{x\}\times\mathbb R_+$ is an inhomogeneous Poisson point process on $\{x\}\times\mathbb R_+$ with intensity function given by $\lambda_x(r) = \varphi(\omega_x(r))$, $x\in\mathbb Z^d$, $r\ge0$ (2.5). Let us fix $X(0) = x_0$, $x_0\in\mathbb Z^d$, and define $X(t)$, $t\in\mathbb R_+$, as follows. Let $\tau_0 = 0$, and let $\tau_1$ be the time of the first point of $N_{x_0}$ (2.6), where by convention $\inf\emptyset = \infty$. For $t\in(0,\tau_1)$, $X(t) = X(0)$, and, if $\tau_1 < \infty$, then $X(\tau_1) = X(0) + \xi_1$ (2.7). For $n\ge2$, we inductively define $\tau_n$ as the time of the first point of $N_{X(\tau_{n-1})}$ after $\tau_{n-1}$ (2.8). For $t\in(\tau_{n-1},\tau_n)$, we set $X(t) = X(\tau_{n-1})$, and, if $\tau_n < \infty$, then $X(\tau_n) = X(\tau_{n-1}) + \xi_n$ (2.9). In words, $(\tau_n)_{n\in\mathbb N}$ are the jump times of the process $X := (X(t))_{t\in\mathbb R_+}$, which in turn, given $\omega\in A$, is a continuous time random walk on $\mathbb Z^d$ starting from $x_0$ with jump rate at $x$ at time $t$ given by $\varphi(\omega_x(t))$, $x\in\mathbb Z^d$. Moreover, when at $x$, the next site to be visited is given by $x+y$, with $y$ generated from $\pi$, $x,y\in\mathbb Z^d$. We adopt $D(\mathbb R_+,\mathbb Z^d)$ as sample space for $X$.
Let us denote by $P^\omega_{x_0}$ the conditional law of $X$ given $\omega\in A$. We remark that, since $N_x\subset M_x$ for all $x\in\mathbb Z^d$, it follows from the lack of memory of Poisson processes that, for each $n\in\mathbb N^*$, given that $\tau_{n-1}<\infty$, $P^\omega_{x_0}$-almost surely ($P^\omega_{x_0}$-a.s.), $\tau_n-\tau_{n-1}\ge Z_n$, with $Z_n$ a standard exponential random variable. Thus $\tau_n\to\infty$ $P^\omega_{x_0}$-a.s. as $n\to\infty$, i.e., $X$ is non-explosive. Thus, given $\omega\in A$, the inductive construction of $X$ proposed above is well defined for all $t\in\mathbb R_+$. We also notice that, given the ergodicity assumption we made on $\omega$, $X$ makes $P^\omega_{x_0}$-a.s. infinitely many jumps along all of its history, for almost every realization of $\omega$.
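The thinning construction above translates directly into simulation: proposal marks at the current site arrive at rate 1 (the process $M_x$), and a mark at time $t$ is accepted with probability $\varphi(\omega_x(t))$. The following sketch is an illustration only, in $d = 1$, with nearest-neighbour $\pi$ (a hypothetical choice) and the environment started from the identically 0 configuration; by the Markov property, each site's environment may be simulated lazily, since each site is queried at increasing times.

```python
import random

def evolve_bdp(n, t0, t1, p, q, rng):
    """Advance a birth-and-death state n from time t0 to time t1."""
    t = t0
    while True:
        rate = p(n) + (q(n) if n > 0 else 0.0)
        t += rng.expovariate(rate)
        if t > t1:
            return n
        n = n + 1 if rng.random() < p(n) / rate else n - 1

def simulate_walk(phi, p, q, t_max, rng):
    """Position X(t_max) of the walk on Z: proposal marks at the current site
    arrive at rate 1 (the process M_x), and a mark at time t is accepted with
    probability phi(omega_x(t)) (thinning); accepted marks are the jump times.
    Steps are +/-1 with equal probability (an illustrative choice of pi)."""
    env = {}      # site -> (state, time of last query); environment starts at 0
    x, t = 0, 0.0
    while True:
        t += rng.expovariate(1.0)          # next proposal mark at current site
        if t > t_max:
            return x
        s, t0 = env.get(x, (0, 0.0))
        s = evolve_bdp(s, t0, t, p, q, rng)
        env[x] = (s, t)
        if rng.random() < phi(s):          # accept: the particle jumps
            x += rng.choice((-1, 1))
```

A typical usage would take, e.g., $\varphi(n) = 1/(n+1)$, which is nonincreasing with $\varphi(0)=1$ and vanishing at infinity, as in Section 3.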
Let us denote by $x = (x_n)_{n\in\mathbb N}$ the embedded (discrete time) chain of $X$. We will henceforth at times make reference to a particle which moves in continuous time on $\mathbb Z^d$, starting from $x_0$, and whose trajectory is given by $X$; in this context, $X(t)$ is of course the position of the particle at time $t\ge0$. For simplicity, we assume $x$ irreducible.
Remark 2.1. At this point it is worth pointing out that, given $\omega$, $X$ is a time inhomogeneous Markov jump process; we also have that the joint process $(X(t),\omega(t))_{t\in\mathbb R_+}$ is Markovian.
We may then realize our joint process in the triple $(\Omega,\mathcal F,P_{\bar\mu_0,x_0})$, with $\bar\mu_0$, $x_0$ as above, where $\mathcal F$ is the appropriate product $\sigma$-algebra on $\Omega$, and where $M$ and $N$ are measurable subsets of $A$ and $D(\mathbb R_+,\mathbb Z^d)$, respectively. We will call $P^\omega_{x_0}$ the quenched law of $X$ (given $\omega$), and $P_{\bar\mu_0,x_0}$ the annealed law of $X$.
We will say that a claim about $X$ holds $P_{\bar\mu_0,x_0}$-a.s. if, for $P_{\bar\mu_0}$-almost every $\omega$ (for $P_{\bar\mu_0}$-a.e. $\omega$), the claim holds $P^\omega_{x_0}$-a.s. We will also denote by $E_{\bar\mu_0}$, $E^\omega_{x_0}$ and $E_{\bar\mu_0,x_0}$ the expectations with respect to $P_{\bar\mu_0}$, $P^\omega_{x_0}$ and $P_{\bar\mu_0,x_0}$, respectively. We reserve the notation $P_\mu$ (resp., $P_n$) and $E_\mu$ (resp., $E_n$) for the probability and its expectation underlying a single birth-and-death process (as specified above) starting from an initial distribution $\mu$ on $\mathbb N$ (resp., starting from $n\in\mathbb N$).
Furthermore, in what follows, without loss of generality, we will adopt $x_0 = 0$, and omit such a subscript, i.e., $P^\omega := P^\omega_0$ and $P_{\bar\mu_0} := P_{\bar\mu_0,0}$ (2.11). We will also omit the subscript $\bar\mu_0$ when it is irrelevant. And from now on we will indicate by $P_w$ the law of the joint process starting from $\omega(0) = w$ and $x_0 = 0$. Let now $\Delta_n := \tau_n - \tau_{n-1}$, $n\in\mathbb N^*$. We observe that $I_n : \mathbb R_+\to\mathbb R_+$, $n\in\mathbb N$, the additive functional of $\varphi(\omega_{x_n}(\cdot))$ after $\tau_n$, is well defined and invertible $P$-a.s. under our conditions on the parameters of $\omega$ (which ensure its recurrence). We may thus write $\Delta_{n+1}$ in terms of the inverse of $I_n$ (2.17).
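The inversion of the additive functional $I_n$ admits a simple numerical sketch for a piecewise constant environment trajectory at a single site; the input lists of jump times and states below are hypothetical, for illustration only.

```python
def first_jump_time(times, states, phi, v):
    """Invert I(t) = integral_0^t phi(omega(s)) ds at level v, i.e. return
    inf{t : I(t) >= v}, for a single-site environment that equals states[i]
    on [times[i], times[i+1]) (last interval unbounded); phi must be > 0."""
    acc = 0.0
    for i, s in enumerate(states):
        t0 = times[i]
        t1 = times[i + 1] if i + 1 < len(times) else float("inf")
        seg = phi(s) * (t1 - t0)     # contribution of this constant piece
        if acc + seg >= v:
            return t0 + (v - acc) / phi(s)
        acc += seg
    return float("inf")
```

With $v$ a standard exponential variable, this inversion produces a waiting time with the inhomogeneous intensity $\varphi(\omega(\cdot))$, which is the mechanism behind the alternative construction given next.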

Alternative construction
We finish this section with an alternative construction of $X$, based on the following simple remark, which will be used further on. Let $\omega$ and $\xi$ as above be fixed, and set $T_0 = 0$ and, for $n\in\mathbb N^*$, $T_n = T_{n-1} + I_{n-1}(\Delta_n)$.

Lemma 2.2. Under the conditions on the parameters of $\omega$ assumed in the paragraph of (2.1), we have that $\{T_n : n\in\mathbb N^*\}$ is a rate 1 Poisson point process on $\mathbb R_+$, independent of $\omega$ and $\xi$.
Proof. It is enough to check that, given $\omega$ and $\xi$, $(\Delta_n)_{n\in\mathbb N^*}$ are the event times of a Poisson point process, and are thus independent of each other; the conclusion follows readily from the fact that $P(T_n - T_{n-1} > t \mid \omega,\xi) = e^{-t}$, $t\in\mathbb R_+$.
We thus have an alternative construction of $X$, as follows. Let $\omega$, $\xi$ be as described at the beginning of the section. Let also $V = (V_n)_{n\in\mathbb N}$ be an independent family of standard exponential random variables. Then, given $\omega$, set $X(0) = x_0 = 0$ and $\tau_0 = 0$, and define $\tau_1$ from $V_1$ by inverting $I_0$. For all $t\in(0,\tau_1)$, $X(t) = X(0)$, and, inductively, for $t\in(\tau_{n-1},\tau_n)$, $X(t) = X(\tau_{n-1})$ and $X(\tau_n) = X(\tau_{n-1}) + \xi_n = x_n$ (2.21). We have thus completed the alternative construction of $X$. Notice that we have made use of $\omega$ and $\xi$, as in the original construction, but replaced $M$ of the latter construction by $V$ as the remaining ingredient. The alternative construction comes in handy in a coupling argument we develop in order to prove a law of large numbers for the jump times of $X$.
Limit theorems under $P_0$

In this section we state and prove two of our main results, namely a Law of Large Numbers and a Central Limit Theorem for $X$ under $P_0$³, and under the following extra conditions on $\varphi$: $\varphi$ is nonincreasing, $\varphi(0) = 1$, and $\lim_{n\to\infty}\varphi(n) = 0$ (3.1). The statements are provided shortly, and the proofs are presented in the second and third subsections below, respectively. The main ingredient for these results is a Law of Large Numbers for the jump times of $X$, which in turn uses a stochastic domination result for the distribution of the environment seen by the particle at jump times; both results, along with other preliminary material, are developed in the first subsection below.
In order to state the main results of this section, we need the following preliminaries and further conditions on $p$, $q$. Let $\nu$ denote the invariant distribution of $\omega_0$; as is well known, $\nu_n = \mathrm{const}\prod_{i=1}^n p_{i-1}/q_i$, for $n\in\mathbb N$, where the latter product is by convention equal to 1 for $n = 0$. Next set $\rho_n = p_n/q_n$, define $R_n$ for $n\ge1$ accordingly, and let $R_0 = 1$. These quantities are well defined and, in particular, it follows from (2.1) that the sum defining $R_n$ is finite for all $n\ge1$.
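For concreteness, the formula for $\nu$ can be checked numerically; the sketch below (an illustration only, with homogeneous rates $p_n \equiv 0.3$, $q_n \equiv 0.7$ as hypothetical choices) computes a truncated version of $\nu$ and verifies detailed balance, $\nu_n p_n = \nu_{n+1} q_{n+1}$.

```python
from math import prod

def invariant_dist(p, q, nmax):
    """Truncated invariant distribution nu_n = const * prod_{i=1}^n p_{i-1}/q_i
    of a BDP(p,q); a good approximation when the tail beyond nmax is small."""
    w = [prod(p(i - 1) / q(i) for i in range(1, n + 1)) for n in range(nmax + 1)]
    z = sum(w)
    return [v / z for v in w]
```

In the homogeneous case the weights are geometric with ratio $p/q$, so the truncation error is exponentially small in `nmax`.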
We will require the following extra condition on $p$, $q$, in addition to those imposed in the paragraph of (2.1) above (3.2). We note that it follows from our previous assumptions on $p$, $q$ that (3.2) is stronger than (2.1), since $S_n\ge R_n$ for all $n$. The relevance of this condition is that it implies the two conditions to be introduced next. Let $w$ denote the embedded chain of $\omega_0$, and, for $n\ge0$, let $T_n$ denote the first passage time of $w$ by $n$, namely, $T_n = \inf\{i\ge0 : w_i = n\}$, with the usual convention that $\inf\emptyset = \infty$. Condition (3.2) is equivalent, as will be argued in Appendix A, to either $E_\nu(T_0) < \infty$ or $E_1(T_0^2) < \infty$⁴ (3.3). It may readily be shown to be stronger than asking that $\nu$ have a finite first moment, and a finite second moment of $\nu$ implies it, under our conditions on $p$, $q$⁵. Conditions (3.3) will be required in our arguments for the following main results of this section; they are what we meant by 'strongly ergodic' in the abstract. See Remark 3.13 at the end of this section.
Theorem 3.1 (Law of Large Numbers for X). Assume the above conditions and that $E(\|\xi_1\|) < \infty$. Then there exists $\mu\in(0,\infty)$ such that $\lim_{t\to\infty} X(t)/t = \mu^{-1}E(\xi_1)$, $P_0$-a.s. Here and below $\|\cdot\|$ is the sup norm in $\mathbb Z^d$.
Theorem 3.2 (Central Limit Theorem for X). Assume the above conditions and that $E(\|\xi_1\|^2) < \infty$ and $E(\xi_1) = 0$. Then, for $P_0$-a.e. $\omega$, $X(t)/\sqrt t$ converges in distribution, as $t\to\infty$, to a centered Gaussian vector with covariance matrix $\mu^{-1}\Sigma$, where $\Sigma$ is the covariance matrix of $\xi_1$, and $\mu$ is as in Theorem 3.1.
In the next section we will state a CLT under more general initial environment conditions (but restricting to homogeneous cases of the environmental BD dynamics).As for the mean zero assumption in Theorem 3.2, going beyond it would require substantially more work than we present here, under our approach; see Remark 3.12 at the end of this section.

Law of large numbers for the jump times of X
In this subsection, we prove a Law of Large Numbers for $(\tau_n)_{n\in\mathbb N}$ under $P_0$; this is the key ingredient in our arguments for the main results of this section; see Proposition 3.11 below. Our strategy for proving the latter result is to establish a suitable stochastic domination of the environment by a modified environment, leading to a corresponding domination for jump times; we develop this program next.
We start by recalling some well known definitions. Given two probabilities $\upsilon_1$ and $\upsilon_2$ on $\mathbb N$, we indicate by $\upsilon_1 \preceq \upsilon_2$ that $\upsilon_1$ is stochastically dominated by $\upsilon_2$, i.e., $\upsilon_1(\mathbb N \setminus A_k) \le \upsilon_2(\mathbb N \setminus A_k)$, $A_k := \{0,\dots,k\}$, for all $k \in \mathbb N$ (3.6). We equivalently write, in this situation, $X_1 \preceq \upsilon_2$, if $X_1$ is a random variable distributed as $\upsilon_1$. Now let $Q$ denote the generator of $\omega_0$ (which is a $Q$-matrix), and consider the matrix $Q^\psi$ given in (3.7), where $\psi : \mathbb N \to [1,\infty)$ is such that $\psi(n) = 1/\varphi(n)$ for all $n$, with $\varphi$ as defined in the paragraph of (2.4) above. Notice that $Q^\psi$ is also a $Q$-matrix, and that it generates a birth-and-death process on $\mathbb N$, say $\omega_0^\psi$, with transition rates given in (3.8); this is a positive recurrent process, with invariant distribution $\nu^\psi$ on $\mathbb N$ given in (3.9), with a similar convention for the product as for $\nu$. One may readily check that $\nu^\psi \preceq \nu$, since $\psi$ is increasing. The relevance of $\omega_0^\psi$ in the present study issues from the following straightforward result. Recall (2.15).
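The notion of stochastic domination in (3.6) is easy to test numerically for distributions with (numerically) finite support. The sketch below is an illustration only: it takes the tilted equilibrium proportional to $\nu/\psi$ (an assumption, consistent with $\nu^\psi \preceq \nu$ for increasing $\psi$ via a likelihood-ratio comparison), with hypothetical homogeneous rates and $\psi(n) = n+1$.

```python
def dominates(nu_hi, nu_lo):
    """True iff nu_lo is stochastically dominated by nu_hi (both probability
    vectors on {0,...,N}): every upper tail of nu_lo is at most that of nu_hi."""
    tail_hi = tail_lo = 0.0
    for a, b in zip(reversed(nu_hi), reversed(nu_lo)):
        tail_hi += a
        tail_lo += b
        if tail_lo > tail_hi + 1e-12:
            return False
    return True
```

Note the design choice: comparing accumulated upper tails from the top is exactly condition (3.6) restated for finitely supported vectors.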
Here and below $e^{tQ'}$ denotes the semigroup associated to an irreducible and recurrent $Q$-matrix $Q'$ on $\mathbb N$. We have an immediate consequence of Lemma 3.5, as follows.
Corollary 3.6. If $\mu$ is a probability on $\mathbb N$ such that $\mu\preceq\nu$, then, for all $t\in\mathbb R_+$, $\mu e^{tQ}\preceq\nu$ (3.12).

We now present a few more substantial domination lemmas, leading to a key ingredient for justifying the main result of this subsection.

Lemma 3.7. Let $Q^\psi$ be as in (3.7), (3.8). We now make use in (3.19) of Kolmogorov's backward equations for $Y$, given by $P'_{0,j}(t) = -p^\psi_0 P_{0,j}(t) + p^\psi_0 P_{1,j}(t) = p^\psi_0(P_{1,j}(t) - P_{0,j}(t))$ (3.20); $P'_{n,j}(t) = q^\psi_n P_{n-1,j}(t) - \psi_n P_{n,j}(t) + p^\psi_n P_{n+1,j}(t) = q^\psi_n(P_{n-1,j}(t) - P_{n,j}(t)) - p^\psi_n(P_{n,j}(t) - P_{n+1,j}(t))$, $n\ge1$, $j\ge0$ (3.21). Setting $d_n := P_n(Y_t\le l) - P_{n+1}(Y_t\le l)$, $n\in\mathbb N$, we may differentiate $P(Y_t\le l)$ term by term, provided (3.23) holds, which we claim; see the justification below. We note that $d_n\ge0$ for all $n$, $l$ and $t$, as can be justified by a straightforward coupling argument. Since $\psi$ is nondecreasing, it follows that (3.24) holds, where the third equality follows by reversibility of $Y$. We thus have that $P(Y_t\le l)$ is nondecreasing in $t$ for every $l$. Let now $V = (V_i)_{i\in\mathbb N^*}$ be a sequence of independent standard exponential random variables, and consider the embedded chain $\tilde Y = (\tilde Y_k)_{k\ge0}$ of $(Y_t)_{t\in\mathbb R_+}$, and $\tilde H_n = \inf\{k\ge0 : \tilde Y_k = n\}$. Notice that $\tilde Y$ is distributed as $w$, and $\tilde H_n$ is distributed as $T_n$, introduced at the beginning of the section. Let $V$ and $\tilde Y$ be independent. Let us now introduce an auxiliary random variable $H'_n$, a sum of the $V_i$, and note that, given that $Y_0 = n+1$, $H_n \preceq_{\mathrm{st}} \varphi_{n+1}H'_n$; it follows from this and the Markov inequality, together with the ergodicity assumption on $\omega_0$, that the required expectation is finite, and similarly for the companion bound.

Now, by the Markov property, and similarly as above, we find that $\sum_{n\ge1}\nu_n\psi_n(d'_{n-1}+d'_n) < \infty$, and (3.23) is established.
In other words, if $\omega_0(0)\sim\nu$, then $\omega_0(\tau_1)\preceq\nu$ (3.31). Let us now assume that $\mu_{x,0}\preceq\nu$ for every $x\in\mathbb Z^d$. Based on the above domination results, we next construct a modification of the joint process $(X,\omega)$, to be denoted $(\tilde X,\tilde\omega)$, coupled to $(X,\omega)$, so that $\tilde\omega$ has less spatial dependence than $\omega$, and at the same time dominates $\omega$ in a suitable way. The idea is to let $\tilde X$ have the same embedded chain as $X$, and jump according to $\tilde\omega$ as $X$ jumps according to $\omega$; we let $\tilde\omega$ evolve with the same law as $\omega$ between its jump times, and at jump times we replace $\tilde\omega$ at the site where $\tilde X$ jumped from by a suitable dominating random variable distributed as $\nu$. Details follow.
We first construct a sequence of environments between jumps of $\tilde X$, as follows. Let $(X,\omega)$ be as above, starting from $X(0) = 0$, $\omega(0)\sim\bar\mu_0$. Then, enlarging the original probability space if necessary, we can find iid random variables $\tilde\omega^0_x(0)$, $x\in\mathbb Z^d$, distributed according to $\nu$, such that $\tilde\omega^0_x(0)\ge\omega_x(0)$, $x\in\mathbb Z^d$. We now let $\tilde\omega^0$ evolve for $t\ge0$ in a coupled way with $\omega$, in such a way that $\tilde\omega^0_x(t)\ge\omega_x(t)$, $x\in\mathbb Z^d$. Let now $\tilde\tau_1$ be obtained from $\tilde\omega^0$ in the same way as $\tau_1$ was obtained from $\omega$, using the same $M$ for $\tilde\omega^0$ as for $\omega$ (recall the definition from the paragraph of (2.2)); $\tilde\tau_1$ is the time of the first jump of $\tilde X$, and we set $\tilde X(\tilde\tau_1) = x_1$. Notice that $\tilde\tau_1\ge\tau_1$.
We now iterate this construction, inductively: given $\xi$, fix $n\ge1$, and suppose that for each $0\le j\le n-1$ we have constructed $\tilde\tau_j$ and $\tilde\omega^j(t)$, $t\ge\tilde\tau_j$, with $\{\tilde\omega^j_x(\tilde\tau_j),\ x\in\mathbb Z^d\}$ iid with marginals distributed as $\nu$. We then define $\tilde\tau_n$ from $\tilde\omega^{n-1}(\tilde\tau_{n-1})$ in the same way as $\tilde\tau_1$ was defined from $\tilde\omega^0(0)$, but with the random walk originating in $x_{n-1}$, and with the marks of $M$ in the upper half space from $\tilde\tau_{n-1}$; $\tilde\tau_n$ is the time of the $n$-th jump of $\tilde X$, and we set $\tilde X(\tilde\tau_n) = x_n$.
Next, from (3.31), we obtain $W_n\ge\tilde\omega^{n-1}_{x_{n-1}}(\tilde\tau_n)$ such that $\{W_n;\ \tilde\omega^{n-1}_x(\tilde\tau_n),\ x\ne x_{n-1}\}$ is an iid family of random variables with marginals distributed as $\nu$, and define a $\mathrm{BDP}(p,q)$ $(\tilde\omega^n(t))_{t\ge\tilde\tau_n}$ starting from $\{\tilde\omega^n_x(\tilde\tau_n) = \tilde\omega^{n-1}_x(\tilde\tau_n),\ x\ne x_{n-1};\ \tilde\omega^n_{x_{n-1}}(\tilde\tau_n) = W_n\}$, so that $\tilde\omega^n_{x_{n-1}}(t)\ge\tilde\omega^{n-1}_{x_{n-1}}(t)$ and $\tilde\omega^n_x(t) = \tilde\omega^{n-1}_x(t)$, $x\ne x_{n-1}$, $t\ge\tilde\tau_n$. We finally define $\tilde\omega(t) = \tilde\omega^n(t)$ for $t\in[\tilde\tau_n,\tilde\tau_{n+1})$, $n\ge0$. This coupled construction of $(\omega,\tilde\omega)$ has the following properties. Lemma 3.8.
Proof. The first two items are quite clear from the construction, so we will argue only the third item, which is also quite clear for $n = 0$ and $1$ (the latter case was already pointed out in the description of the construction above). For the remaining cases, let $n\ge1$, and suppose, inductively, that $\tau_n\le\tilde\tau_n$; there are two possibilities for $\tau_{n+1}$: either $\tau_{n+1}\le\tilde\tau_n$, in which case, clearly, $\tau_{n+1}\le\tilde\tau_{n+1}$, or $\tau_{n+1}>\tilde\tau_n$. In this latter case, $\tau_{n+1}$ (resp., $\tilde\tau_{n+1}$) will correspond to the earliest Poisson point (of $M$) in $Q_n$ (resp., $\tilde Q_n$), the region of $C_{x_n}\times[\tilde\tau_n,\infty)$ below the graph of $r\mapsto\varphi(\omega_{x_n}(r))$ (resp., of $r\mapsto\varphi(\tilde\omega_{x_n}(r))$). By (3.32) and the monotonicity of $\varphi$, we have that $\tilde Q_n\subset Q_n$, and it follows that $\tau_{n+1}\le\tilde\tau_{n+1}$.
The next result follows immediately.
Corollary 3.9. For $n\ge1$, $E_0(\tau_n)\le n\,E_\nu(\tau_1)$ (3.35).

The following result, together with (3.35), is a key ingredient in the justification of the main result of this subsection.
Lemma 3.10. $E_\nu(\tau_1) < \infty$ (3.36).

Proof. For $k\in\mathbb N$, set $\mathbf k = k\,\mathbf 1$. Conditioning on the initial state of the environment at the origin, we have, for each $\delta>0$ and each $t\in\mathbb R_+$, a bound leading to (3.39), where $W$ is a $\nu$-distributed random variable; one readily checks that (3.2) implies that $W$ has a first moment. It remains to consider the latter summand in (3.39).
For that, let us start by setting $W_0 = \inf\{s>0 : \omega_0(s) = 0\}$, defining $Z_1$ and $W_1$, and making $Y_1 = Z_1 + W_1$. Note that $Z_1$ is an exponential random variable with rate $p_0$, and $W_1$ is the hitting time of the origin by a $\mathrm{BDP}(p,q)$ on $\mathbb N$ starting from 1; under $P_0$, clearly $W_0 = 0$. For $i\ge1$, suppose $Y_1,\dots,Y_{i-1}$ defined, and let us further define $Z_i$ and $W_i$, and $Y_i = Z_i + W_i$. By the strong Markov property, it follows that $Z_i$ and $W_i$ are distributed as $Z_1$ and $W_1$, respectively, and $Z_i$, $W_i$, $i\ge1$, are independent; thus $(Y_i)_{i\ge1}$ is iid. Now set $T_0 = W_0$ and, for $n\ge1$, $T_n = T_{n-1} + Y_n$. Moreover, for $t\in\mathbb R_+$, let us define $C_t = \sum_{n\ge0}\mathbb 1\{T_n\le t\}$. Note that, for $k\in\mathbb N$, $a>0$ and $\alpha\in(0,1)$, $P_{\mathbf k}(C_t < \lfloor at\rfloor) = P_{\mathbf k}(C_t < \lfloor at\rfloor,\ T_0 < \alpha t) + P_{\mathbf k}(C_t < \lfloor at\rfloor,\ T_0\ge\alpha t)$. By well known elementary large deviation estimates, we obtain the required exponential bounds as soon as $a < p_0$, which we assume from now on. To conclude, it then suffices to show the finiteness of the two integrals in (3.47). The latter integral is readily seen to be bounded above by $\alpha^{-1}E_\nu(T_0)$, and the first condition in (3.3) implies the second assertion in (3.47). The first integral in (3.47) can be written as in (3.48), where $\bar Y_j = Y_j - b$, $b = EY_1 = EY_j$, $j\ge1$, and $\zeta = (1-\alpha-ab)/a$. Now the expression in (3.48) is finite, by the Complete Convergence Theorem of Hsu and Robbins (see Theorem 1 in [15]), as soon as $a,\alpha>0$ are close enough to 0 (so that $\zeta>0$) and $W_1$ has a second moment (and thus so does $Y_1$); but this follows immediately from the first condition in (3.3).
We are now ready to state and prove the main result of this subsection.

Proposition 3.11. There exists $\mu\in(0,\infty)$ such that $\tau_n/n\to\mu$ $P_0$-a.s. as $n\to\infty$.

Proof. We divide the argument in two parts. We first construct a superadditive triangular array of random variables $\{L_{m,n} : m,n\in\mathbb N,\ m\le n\}$ such that $L_{0,n}$ equals $\tau_n$ under $P_0$. Secondly, we verify that $\{-L_{m,n} : m,n\in\mathbb N,\ m\le n\}$ satisfies the conditions of Liggett's version of Kingman's Subadditive Ergodic Theorem, an application of which yields the result.
A triangular array of jump times. Somewhat similarly to the construction leading to Lemma 3.8 (see the description preceding the statement of that result), we construct a sequence of environments $\hat\omega^m$, $m\ge0$, coupled to $\omega$, in a dominated way (rather than dominating, as in the previous case), as follows.
Let ωp0q " 0, and set ω0 " ω.Consider now τ 1 , τ 2 , . .., the jump times of X, as define above.For m ě 1, we define pω m ptqq těτm as a BDP pp, qq starting from ωm pτ m q " 0, coupled to ω in rτ m , 8q so that ωm x ptq ď ω x ptq (3.51) for all t ě τ m and all x P Z d .Let Xm be a random walk in environment ωm starting at time τ m from x m , with jump times determined, besides ωm , the Poisson marks of M in the upper half space from τ m , in the same way as the jump times of X after τ m are determined by pωptqq těτm and the Poisson marks of M in the upper half space from τ m , and having subsequent jump destinations given by x j , j ě m.Now set τ m 0 " τ m and let τ m 1 ,τ m 2 , . . .be the successive jump times of Xm .Finally, for n ě m, set L m,n " τ m n´m ´τm .L m,n is the time X takes to give n ´m jumps.Notice that L 0,n " τ n .
Properties of $\{L_{m,n},\ 0\le m\le n<\infty\}$. We claim that the following assertions hold.
and it follows that $\tau^m_{n+1-m}\le\tau_{n+1}$. Finally, one readily checks from (3.52) that $\mu\ge E_0(\tau_1)$; the latter expectation can be readily checked to be strictly positive, and the argument is complete.

Proof of the Law of Large Numbers for X under $P_0$
We may now prove Theorem 3.1. We now claim that the first term on the right hand side of (3.59) (after multiplication by $\gamma$) vanishes in probability as $t\to\infty$ under $P_0$. Indeed, let us write $\xi_k = (\xi_{k,1},\dots,\xi_{k,d})$, $k\in\mathbb N$. Given $\epsilon>0$, choosing $\delta>0$ appropriately, we obtain the claim.

Remark 3.12. A meaningful extension of our arguments for the above CLT to the non mean zero case would require understanding the fluctuations of $(N_t)$, and their dependence on those of a centralized $X$, issues that we did not pursue for the present article, even though they are most probably treatable by a regeneration argument (possibly dispensing with the domination requirements of our argument for the mean zero case, in particular that $\varphi$ be decreasing).
Another extension would be to prove a functional CLT; for the mean zero case treated above, that, we believe, requires no new ideas, and thus we refrained from presenting a standard argument to that effect (having already gone through standard steps in our justifications for the LLN and CLT for $X$).

Remark 3.13. It is quite clear from our arguments that all we needed from our conditions on $p$, $q$ is the validity of both conditions in (3.3), and thus we may possibly relax (3.2) to some extent, and certainly other conditions imposed on $p$, $q$ (in the paragraph of (2.1)), with the same approach; but we have opted for simplicity and cleanness, within a measure of generality.
Remark 3.14. For the proof of Lemma 3.7, a mainstay of our approach, we relied on the reversibility of the birth-and-death process, the positivity of $d_n$, and the increasing monotonicity of $\psi$; see the upshot of the paragraph of (3.24). It is natural to think of extending the argument to other reversible ergodic Markov processes on $\mathbb N$; one issue for longer range cases is the positivity of $d_n$. There should be examples of long range reversible ergodic Markov processes on $\mathbb N$ where positivity of $d_n$ may be ascertained by a coupling argument, and we believe we have worked out such an example, but it looked too specific to warrant a more general formulation of our results (and the extra work involved in such an attempt), so again we felt content in presenting our approach in the present setting.
Remark 3.15. Going back to the construction leading to Lemma 3.8, for $0\le m\le n$, let $\tilde L_{m,n}$ denote the time the walk in environment $\tilde\omega^m$ takes to make $n-m$ jumps. Then it follows from the properties of $\omega$, $\tilde\omega^m$, $m\ge0$, as discussed in the paragraphs preceding the statement of Lemma 3.8, that $\{\tilde L_{m,n},\ 0\le m\le n<\infty\}$ is a subadditive triangular array, and a Law of Large Numbers for $\tau_n$ under $P_\nu$ would follow once we establish ergodicity of $\{\tilde L_{nk,(n+1)k},\ n\in\mathbb N\}$, the other conditions for the application of the Subadditive Ergodic Theorem being readily seen to hold. This would require a more substantial argument than for the corresponding result for $\{L_{nk,(n+1)k},\ n\in\mathbb N\}$, made briefly above (in the second paragraph below (3.55)), since independence is lost. Perhaps a promising strategy would be one similar to that which we undertake in the next section, to the same effect; see Remark 4.3. For this reason, if for no other, we refrained from pursuing this specific point in this paper.
Remark 3.16. The restriction of positivity of $\varphi$, made at the beginning, is not really crucial in our approach. It perhaps makes parts of the arguments clearer, but our approach works if we allow $\varphi(n) = 0$ for $n\ge n_0$, for any given $n_0\ge1$; in this case, we note, the auxiliary process $Y$ introduced in the proof of Lemma 3.7 is a birth-and-death process on $\{0,\dots,n_0-1\}$.

Other initial conditions
In this section we extend Theorem 3.2 to other (product) initial conditions. In this and in the next section, we will assume for simplicity that the BD process environments are homogeneous, i.e., $p_n \equiv p$, with $p\in(0,1/2)$. In this context, we use the notation $\mathrm{BDP}(p,q)$ for the process, where $q = 1-p$. We hope that the arguments developed for the inhomogeneous case, as well as subsequent ones, are sufficiently convincing that this assumption may be relaxed, although we do not pretend to be able to propose optimal or near optimal conditions for the validity of any of the subsequent results.
As we will see below, our argument for this extension does not go through a LLN for the position of the particle, as it did in the previous section; we thus do not discuss an extension of the LLN, focusing rather on the CLT.⁷ We will as before assume that the initial condition for the environment is product, given by $\bar\mu_0 = \bigotimes_{x\in\mathbb Z^d}\mu_{x,0}$, and we will further assume that $\mu_{x,0}\preceq\bar\mu$, with $\bar\mu$ a probability measure on $\mathbb N$ with an exponentially decaying tail, i.e., there exists a constant $\beta>0$ such that $\bar\mu([n,\infty))\le\mathrm{const}\,e^{-\beta n}$ (4.1) for all $n\ge0$. Notice that this includes $\nu$, in the present homogeneous BDP case. Again, it should hopefully be quite clear from our arguments that these conditions can be relaxed, both in terms of the homogeneity of $\bar\mu$ and of the decay of its tail, but we do not seek to do that presently, nor to suggest optimal or near optimal conditions. Our strategy is to first couple the environment starting from $\bar\mu_0$ to the one starting from $0$, so that, for each $x\in\mathbb Z^d$, the respective BD processes evolve independently of each other until they first meet, after which time they coalesce forever.
A natural second step would be to couple two versions of the random walk, one starting from each of the two coupled environments in question, so that they jump together when they are at the same point at the same time and see the same environment. One quite natural way to try and implement such a strategy is to have both walks have the same embedded chain, and to show that they will (with high probability) eventually meet at a time at and after which they only see the same environments. Even though this looks like it should be true, we did not find a way to control the distribution of the environments seen by both walks in their evolution (in what might be seen as a game of pursuit) in an effective way.
So we turned to our actual strategy, which depends on the dimension (and requires different further conditions on $\pi$, the distribution of $\xi_1$, in $d \ge 2$). In $d \le 2$, we modify the strategy proposed in the previous paragraph by letting the two walks evolve independently when separated, relying on recurrence to ensure that they meet in the aforementioned conditions; a technical issue arises at the latter point for general $\pi$ (within the conditions of Theorem 3.2), which we resolve by invoking a result in the literature that is stated for $d = 1$ only; for $d = 2$ we thus need to restrict $\pi$ to be symmetric. See Remark 4.4 below.
In $d \ge 3$ we of course do not have recurrence but, rather, transience, and we rely on this instead to show that our random walk eventually finds itself at a cut point of its trajectory such that the environment along its subsequent trajectory has coalesced with a suitably coupled environment starting from $0$; this allows for a comparison with the situation of Theorem 3.2. The argument requires the a.s. existence of infinitely many cut points of $(x_n)$, and to ascertain that we rely on the literature, which gives boundedness of the support of $\pi$ as a sufficient condition (with no symmetry required).

Theorem 4.1 (Central Limit Theorem for X).
Under the same conditions as Theorem 3.2, and assuming that the conditions on $\bar\mu_0$ stipulated in the paragraph of (4.1) above hold, the conclusion of Theorem 3.2 holds under $P_{\bar\mu_0}$.

We present the proof of Theorem 4.1 through two arguments, spelling out the broad descriptions above, in two subsequent subsections, one for $d \le 2$ and another for $d \ge 3$. We first state and prove a lemma which enters both arguments, concerning successive coalescence of coupled versions of the environments, one started from $0$ and the other from $\bar\mu_0$, over certain times related to displacements of $(x_n)$.
Consider two coalescing versions of the environment, $\mathring\omega$ and $\bar\omega$, the former starting from $0$ and the latter from $\bar\mu_0$ as above, such that $\mathring\omega_x(t) \le \bar\omega_x(t)$ for all $x$ and $t$, and, for $x \in \mathbb{Z}^d$, let $T_x$ denote the coalescence time of $\mathring\omega_x$ and $\bar\omega_x$, i.e.,
$$T_x = \inf\{s > 0 : \mathring\omega_x(s) = \bar\omega_x(s)\}. \quad (4.3)$$
Now let $\mathring X$ and $\bar X$ be versions of the random walk on $\mathbb{Z}^d$ in the respective environments, both starting from $0 \in \mathbb{Z}^d$. Let us suppose, for simplicity, that they have the same embedded chain $(x_n)$.
For $n \in \mathbb{N}$, let $B_n$ denote $\{-2^n, -2^n + 1, \ldots, 2^n - 1, 2^n\}^d$, let $\mathring H_n$ (resp., $\bar H_n$) denote the hitting time of $\mathbb{Z}^d \setminus B_n$ by $\mathring X$ (resp., $\bar X$), and consider the event $\mathring A_n$ (resp., $\bar A_n$) that $T_x \le \mathring H_n$ (resp., $T_x \le \bar H_n$) for all $x \in B_{n+1}$. Let also $h_n$ denote the hitting time of $\mathbb{Z}^d \setminus B_n$ by $(x_n)$.

Lemma 4.2.
$$P_0(\mathring A_n^c \text{ infinitely often}) = P_{\bar\mu_0}(\bar A_n^c \text{ infinitely often}) = 0. \quad (4.4)$$
Proof. Under our conditions the argument is quite elementary, and for this reason we will be rather concise. Let us first point out that both $\mathring H_n$ and $\bar H_n$ are readily seen to be stochastically bounded from below by $\hat H_n := \sum_{i=1}^{h_n} E_i$, where $E_1, E_2, \ldots$ are iid standard exponential random variables, independent of $h_n$ and of $\mathring\omega$ and $\bar\omega$.
It follows readily from Kolmogorov's Maximal Inequality that for all $n \in \mathbb{N}$
$$P(h_n \le 2^n) = P\Big(\max_{i \le 2^n} \|x_i\| > 2^n\Big) \le \mathrm{const}\; 2^{-n}, \quad (4.5)$$
and by the above mentioned domination and elementary, well known large deviation estimates, we find that
$$P_0(\mathring H_n \le 2^{n-1}) \vee P_{\bar\mu_0}(\bar H_n \le 2^{n-1}) \le P(\hat H_n \le 2^{n-1}) \le \mathrm{const}\; 2^{-n}. \quad (4.6)$$
We henceforth treat only the first probability in (4.4); the argument for the second one is identical. The probability of the event that $\mathring H_n > 2^{n-1}$ and $T_x > \mathring H_n$ for some $x \in B_{n+1}$ is bounded above by
$$\mathrm{const}\; 2^{dn}\, P(T_0 > 2^{n-1}). \quad (4.7)$$
It may now be readily checked that $T_0$ is stochastically dominated by the hitting time of the origin by a nearest-neighbour random walk on $\mathbb{Z}$ in continuous time with homogeneous jump rates equal to 1, with probability $p$ of jumping to the left, initially distributed as $\bar\mu$. Thus, given $\delta > 0$,
$$P(T_0 > 2^{n-1}) \le \bar\mu([\delta 2^{n-1}, \infty)) + P\Big(\sum_{i=1}^{\lceil \delta 2^{n-1} \rceil} H_i > 2^{n-1}\Big), \quad (4.8)$$
where $H_1, H_2, \ldots$ are iid random variables distributed as the hitting time of the origin by the same walk, starting from 1. $H_1$ is well known to have a positive exponential moment; it follows from elementary large deviation estimates that we may choose $\delta > 0$ such that the latter term on the right hand side of (4.8) is bounded above by $\mathrm{const}\; e^{-b 2^n}$ for some constant $b > 0$ and all $n$. Using this bound, and substituting (4.1) in (4.8), we find that
$$P(T_0 > 2^{n-1}) \le \mathrm{const}\; e^{-b' 2^n} \quad (4.9)$$
for some $b' > 0$ and all $n$; (4.4) then follows upon a suitable use of the Borel–Cantelli Lemma.
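The exponential tail of $H_1$ driving the bound above can be probed numerically in a toy version (a sketch of ours: the discrete-time embedded chain of the dominating walk, with the drift taken toward the origin, which is the regime in which $H_1$ has an exponential moment; the probability `p_away` of stepping away from 0 stands in for the biased jump probability).

```python
import random

def hit_zero_steps(p_away, start, cap, rng):
    """Number of steps for a nearest-neighbour walk on Z, started from
    `start` > 0, to hit 0; the walk steps away from 0 with probability
    p_away < 1/2.  Returns `cap` if 0 is not hit within `cap` steps."""
    x = start
    for t in range(1, cap + 1):
        x += 1 if rng.random() < p_away else -1
        if x == 0:
            return t
    return cap

rng = random.Random(7)
samples = [hit_zero_steps(0.3, 1, 10_000, rng) for _ in range(2000)]
mean_h = sum(samples) / len(samples)      # theoretical mean: 1/(1 - 2*0.3) = 2.5
tail = sum(s > 20 for s in samples) / len(samples)  # already tiny at 20 steps
```

The rapidly vanishing empirical tail is the discrete shadow of the exponential moment of $H_1$ used in the large deviation step.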
Remark 4.3. As vaguely mentioned in Remark 3.15 at the end of the previous section, a seemingly promising strategy for establishing the ergodicity of $\{\mathcal{L}_{nk,(n+1)k},\, n \in \mathbb{N}\}$ would be to approximate an event of $\mathcal{F}_{m'}$, the $\sigma$-field generated by $\{\mathcal{L}_{nk,(n+1)k},\, n \ge m'\}$, by one generated by a version of an environment starting from $0$ at time $\mathcal{L}_{0,mk}$, coupled to the original environment in a coalescing way as above, with suitable couplings of the jump times and destinations, with fixed $m \in \mathbb{N}^*$ and $m' = m$. Ergodicity would follow by the independence of the latter $\sigma$-field and $\mathcal{F}_m^-$, the $\sigma$-field generated by $\{\mathcal{L}_{(n-1)k,nk},\, 1 \le n \le m\}$. We have not attempted to work this idea out in detail; if we did, it looks as though we might face the same issues arising in the extension of the CLT treated in the present section, thus possibly not yielding a better result than Theorem 4.1.

Proof of Theorem 4.1 for $d \le 2$
We start by fixing the coalescing environments $\mathring\omega$ and $\bar\omega$ as above, and considering two independent random walks, denoted $X$ and $X'$, in the environments $\mathring\omega$ and $\bar\omega$, respectively. The jump times of $X$ and $X'$ are obtained from $\mathcal{M}$ and $\mathcal{M}'$, as in the original construction of our model, where $\mathcal{M}$ and $\mathcal{M}'$ are independent versions of $\mathcal{M}$.
For the jump destinations of $X$ and $X'$, we will change things a little, and consider independent families $\xi = \{\xi_z,\, z \in \mathcal{M}\}$ and $\xi' = \{\xi'_z,\, z \in \mathcal{M}'\}$ of independent versions of $\xi_1$. The jump destination of $X$ at the time corresponding to an a.s. unique point $z$ of $\mathcal{M}$ is then given by $\xi_z$, and correspondingly for $X'$.
Let D " `Dpsq :" Xpsq ´X1 psq, s ě 0 ˘, which is clearly a continuous time jump process, and consider the embedded chain of D, denoted d " pd n q nPN .We claim that under the conditions of Theorem 4.1 for d ě 2, d is recurrent, that is, it a.s.returns to the origin infinitely often.
Before justifying the claim, let us indicate how to reach the conclusion of the proof of Theorem 4.1 for $d \le 2$ from it. We consider the sequence of return times of $D$ to the origin, i.e., $\sigma_0 = 0$ and, for $n \ge 1$,
$$\sigma_n = \inf\big\{s > \sigma_{n-1} : D(s) = 0 \text{ and } D(s-) \ne 0\big\}. \quad (4.10)$$
It may be readily checked, in particular using the recurrence claim, that this is an infinite sequence of a.s. finite stopping times given $\mathring\omega, \bar\omega$, such that $\sigma_n \to \infty$ as $n \to \infty$. Then, for each $n \in \mathbb{N}$, we define a version of $X'$, denoted $X^n$, coupled to $X$ and $X'$ as follows: $X^n(s) = X'(s)$ for $s \le \sigma_n$, and for $s > \sigma_n$ the jump times and destinations of $X^n$ are defined from $\bar\omega$ as before, except that we replace the Poisson marks of $\mathcal{M}'$ in the half space above $\sigma_n$ by the corresponding marks of $\mathcal{M}$, and we use the corresponding jump destinations of $\xi$. It may be readily checked that $X^n$ is a version of $X'$, and that, starting at $\sigma_n$, and as long as $X^n$ and $X$ see the same respective environments, they remain together.
It then follows from Lemma 4.2 that there exists a finite random time $N$ such that $X(t)$ and $X'(t)$ each see only coupled environments for $t > N$, and thus so do $X(t)$ and $X^n(t)$ for $t > \sigma_n > N$. It then follows from the considerations above that, given $\mathring\omega, \bar\omega$, $n \in \mathbb{N}$ and $x \in \mathbb{R}$,
$$\Big| P\Big(\tfrac{X'(t)}{\sqrt{t}} < x\Big) - P\Big(\tfrac{X(t)}{\sqrt{t}} < x\Big) \Big| = \Big| P\Big(\tfrac{X^n(t)}{\sqrt{t}} < x\Big) - P\Big(\tfrac{X(t)}{\sqrt{t}} < x\Big) \Big| \le P\big(\{t > \sigma_n > N\}^c\big) \le P(\sigma_n \ge t) + P(N \ge \sigma_n), \quad (4.11)$$
and it follows that the limsup as $t \to \infty$ of the left hand side of (4.11) is bounded above by the latter probability in the same expression. The result (for $X'$) follows, since it holds for $X$ by Theorem 3.2 and $n$ is arbitrary.
In order to check the recurrence claim, notice that if $\pi$, the distribution of $\xi_1$, is symmetric, then $d$ is readily seen to be a discrete time random walk on $\mathbb{Z}^d$ with jump distribution $\pi$, and the claim follows from well known facts about mean zero random walks with finite second moments in $d \le 2$. This completes the argument for Theorem 4.1 for $d = 2$.
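The recurrence of the difference chain for symmetric $\pi$ can be probed numerically in the simplest instance (a sketch of ours, with $\pi$ uniform on $\{-1, +1\}$ on $\mathbb{Z}$: at each jump of either walk, the difference $d$ moves by one $\pi$-distributed step, so $d$ is itself a symmetric random walk).

```python
import random

def returns_to_zero(n_steps, rng):
    """Embedded chain d of the difference D = X - X': with symmetric pi,
    each jump of either walk moves d by one pi-distributed step.
    Here pi is uniform on {-1, +1}; count the returns of d to the origin."""
    d, returns = 0, 0
    for _ in range(n_steps):
        d += 1 if rng.random() < 0.5 else -1
        if d == 0:
            returns += 1
    return returns

# A few independent runs; recurrence predicts many returns in each.
counts = [returns_to_zero(100_000, random.Random(seed)) for seed in (1, 2, 3)]
```

In $d = 1, 2$ the expected number of returns grows without bound with the number of steps, which is the fact the proof appeals to; in $d \ge 3$ the analogous count would stay bounded.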
For d " 1 and asymmetric π, d is no longer Markovian, but we may resort to Theorem 1 of [11] to justify the claim as follows.Let us fix a realization of ω, ω, M and M 1 (such that no two marks in M Y M 1 have the same time coordinate, which is of course an event of full probability).Let us now dress d up as a controlled random walk (crw) (conditioned on ω, ω, M and M 1 ), in the language of [11]; see paragraph before the statement of Theorem 1 therein.
There are two kinds of jump distributions for $d$ ($p = 2$, in the notation of [11]): $F_1$ denotes the distribution of $\xi_1$, and $F_2$ the distribution of $-\xi_1$. In order to conform to the set up of [11], we will also introduce two independent families of iid (jump) random variables (which will in the end not be used), namely $\tilde\xi = \{\tilde\xi_z,\, z \in \mathcal{M}\}$ and $\tilde\xi' = \{\tilde\xi'_z,\, z \in \mathcal{M}'\}$, independent of, but having the same marginal distributions as, $\xi$ (and $\xi'$).
Let us see how the choice between the two distributions is made at each step of $d$. This is done inductively, using the indicator functions $\psi_n$, introduced and termed in [11] the choice of game at time $n \ge 1$, as follows.
Given $\mathring\omega, \bar\omega, \mathcal{M}$ and $\mathcal{M}'$, let $\zeta_1$ denote the earliest point of $\mathcal{N}_0 \cup \mathcal{N}'_0$, where $\mathcal{N}_x, \mathcal{N}'_x$, $x \in \mathbb{Z}^d$, are defined from $(\mathring\omega, \mathcal{M})$ and $(\bar\omega, \mathcal{M}')$, respectively, as $\mathcal{N}_x$ was defined from $(\omega, \mathcal{M})$ at the beginning of Section 2; let $\eta_1$ denote the time coordinate of $\zeta_1$, and set
$$\psi_1 = 1 + \mathbb{1}\{\zeta_1 \in \mathcal{N}'_0\}. \quad (4.12)$$
Notice that $X^1_1$ and $X^2_1$ are independent and distributed as $F_1$ and $F_2$, respectively, and that, a.s., $d_1 = X^{\psi_1}_1$.
For $n \ge 2$, having defined $\zeta_j, \eta_j, \psi_j, X^i_j$, $j < n$, $i = 1, 2$, let $\zeta_n$ denote the earliest point of $\mathcal{N}_{X(\eta_{n-1})}(\eta_{n-1}) \cup \mathcal{N}'_{X'(\eta_{n-1})}(\eta_{n-1})$, where, for $x \in \mathbb{Z}^d$ and $t \ge 0$, $\mathcal{N}_x(t), \mathcal{N}'_x(t)$ denote the points of $\mathcal{N}_x, \mathcal{N}'_x$ with time coordinates above $t$, respectively. Let now $\eta_n$ denote the time coordinate of $\zeta_n$, and set
$$\psi_n = 1 + \mathbb{1}\{\zeta_n \in \mathcal{N}'_{X'(\eta_{n-1})}(\eta_{n-1})\}. \quad (4.14)$$
Notice that $\{X^i_j;\, 1 \le j \le n,\, i = 1, 2\}$ are independent, and $X^1_j$ and $X^2_j$ are distributed as $F_1$ and $F_2$, respectively, for all $j$. Moreover, a.s.,
$$X(\eta_n) = \big(X(\eta_{n-1}) + X^{\psi_n}_n\big)\mathbb{1}\{\psi_n = 1\} + X(\eta_{n-1})\mathbb{1}\{\psi_n = 2\}. \quad (4.16)$$
We then have, for $n \ge 1$, $d_n = \sum_{j=1}^n X^{\psi_j}_j$. One may readily check that (given $\mathring\omega, \bar\omega, \mathcal{M}$ and $\mathcal{M}'$) $d$ is a CRW in the set up of Theorem 1 of [11], an application of which readily yields the claim, and the proof of Theorem 4.1 for $d \le 2$ is complete.

Remark 4.4. We did not find an extension of the above mentioned theorem of [11] to $d = 2$, or any other way to show recurrence of $(d_n)$ for general asymmetric $\pi$ within the conditions of Theorem 4.1.

Proof of Theorem 4.1 for $d \ge 3$
We now cannot expect recurrence of $d$; quite the contrary. But transience suggests that we may have enough of a regeneration scheme, and we pursue precisely this idea. In order to implement it, we resort to cut times of the trajectory of $(x_n)$; to ensure the existence of infinitely many of them, we need to restrict to boundedly supported $\pi$.
We will be rather sketchy in this subsection, since the ideas are all quite simple and/or have appeared before in a similar guise.
We now discuss a key concept and ingredient of our argument: cut times for $x = (x_n)$. First some notation: for $i, j \in \mathbb{N}$, $i \le j$, let $x[i, j] := \{x_i, \ldots, x_j\}$. We call $\ell$ a cut time for $(x_n)$ if $x[0, \ell]$ and $x[\ell + 1, \infty)$ are disjoint, and we let $(K_\ell)_{\ell \in \mathbb{N}^*}$ be the increasing sequence of cut times for $(x_n)$; under our conditions, it is ensured to be an a.s. well defined infinite sequence of finite entries, according to Theorem 1.2 of [16].
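The notion can be made concrete on a finite horizon (a sketch of ours: a nearest-neighbour path on $\mathbb{Z}^3$, i.e., bounded jump support, with cut indices computed by brute force; names are ours).

```python
import random

def cut_times(path):
    """Indices k such that {path[0..k]} and {path[k+1..]} are disjoint:
    the finite-horizon version of a cut time of the trajectory."""
    cuts = []
    for k in range(len(path) - 1):
        past = set(path[: k + 1])
        future = set(path[k + 1:])
        if past.isdisjoint(future):
            cuts.append(k)
    return cuts

def walk3d(n_steps, rng):
    """A nearest-neighbour path on Z^3 (uniform steps, bounded support)."""
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    x, path = (0, 0, 0), [(0, 0, 0)]
    for _ in range(n_steps):
        dx = rng.choice(steps)
        x = (x[0] + dx[0], x[1] + dx[1], x[2] + dx[2])
        path.append(x)
    return path

rng = random.Random(11)
path = walk3d(2000, rng)
cuts = cut_times(path)
```

In $d \ge 3$, transience makes such indices occur with positive density along the path, which is the infinite-sequence statement quoted from [16].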
We will have three versions of the environment, coupled in a coalescent way as above, with different initial conditions: $\mathring\omega$, starting from $0$; $\bar\omega$, starting from $\bar\mu_0$; and $\tilde\omega$, starting from $\nu$; in particular, we have that $\mathring\omega_x(t) \le \bar\omega_x(t), \tilde\omega_x(t)$ for all $x \in \mathbb{Z}^d$ and $t \ge 0$. We may suppose that the initial conditions of $\bar\omega$ and $\tilde\omega$ are independent.
We now consider several coupled versions of our random walk, starting with two: $\bar X$, in the environment $\bar\omega$, as in the statement of Theorem 4.1; and $\mathring X$, in the environment $\mathring\omega$. $\bar X$ and $\mathring X$ are constructed from the same $x$ and $\mathcal{V}$, following the alternative construction of Subsection 2.1. Let $\bar\varsigma_\ell$ and $\mathring\varsigma_\ell$ be the times $\bar X$ and $\mathring X$ take, respectively, to give $K_\ell$ jumps. It may be readily checked, much as in Section 3 (see (3.34), (3.56)), from the environmental monotonicity pointed out in the paragraph above and the present construction of $\bar X$ and $\mathring X$, that $\mathring\varsigma_\ell \le \bar\varsigma_\ell$ for all $\ell \in \mathbb{N}^*$.
Finally, for each $\ell \in \mathbb{N}^*$, we consider three modifications of $\bar X$ and $\mathring X$, namely $\bar X_\ell$, $\mathring X_\ell$ and $X'_\ell$, defined as follows:
$$\bar X_\ell(t) = \begin{cases} \bar X(t), & \text{for } t \le \bar\varsigma_\ell, \\ \text{evolves in the environment } \tilde\omega, & \text{for } t > \bar\varsigma_\ell; \end{cases} \quad (4.20)$$
$$\mathring X_\ell(t) = \begin{cases} \mathring X(t), & \text{for } t \le \mathring\varsigma_\ell, \\ \text{evolves in the environment } \tilde\omega, & \text{for } t > \mathring\varsigma_\ell; \end{cases} \quad (4.21)$$
$$X'_\ell(t) = \begin{cases} \mathring X(t), & \text{for } t \le \mathring\varsigma_\ell, \\ \text{evolves in the environment } \tilde\omega(\cdot - \mathring\varsigma_\ell + \bar\varsigma_\ell), & \text{for } t > \mathring\varsigma_\ell. \end{cases} \quad (4.22)$$
Let $U$ denote the first time after which $\bar X$ and $\mathring X$ see the same environments $\mathring\omega, \bar\omega, \tilde\omega$ (from where they stand at each subsequent time). Lemma 4.2 ensures that $U$ is a.s. finite. Let us consider the event $A_{\ell,t} := \{t > \bar\varsigma_\ell > U\}$. It readily follows that on $A_{\ell,t}$
$$\bar X(t) = \bar X_\ell(t) = X'_\ell(t + \mathring\varsigma_\ell - \bar\varsigma_\ell) \quad \text{and} \quad \mathring X(t) = \mathring X_\ell(t). \quad (4.23)$$
Given $\mathring\omega, \bar\omega, \tilde\omega$, let $P_{\mathring\omega, \bar\omega, \tilde\omega}$ denote the probability measure underlying our coupled random walks. Since $\nu$ is invariant for the environmental BD processes, it follows readily from our construction that $P_{\mathring\omega, \bar\omega, \tilde\omega}(\bar X_\ell \in \cdot)$ and $P_{\mathring\omega, \bar\omega, \tilde\omega}(X'_\ell \in \cdot)$ have the same distribution (as random probability measures).
In order to complete the proof, it remains to establish (4.28). For that, we first note that the displacement of $\bar X$ over a time window of length $u$ is bounded by $K\big(N_t - N_{(t-u)^+}\big)$, where $K$ is the radius of the support of $\pi$ and $N_t$, we recall from Subsection 3.2, counts the jumps of $\bar X$ up to time $t$. Thus the probability on the left hand side of (4.28) is bounded above by the two terms of (4.33), where $u > 0$ is arbitrary. One may readily check from our conditions on $\varphi$ that $N_t - N_{(t-u)^+}$ is stochastically dominated by a Poisson random variable of mean $u$ for each $t$, and it follows that the first term in (4.33) vanishes as $t \to \infty$ for a.e. $\mathring\omega, \bar\omega, \tilde\omega$; (4.28) then follows since $u$ is arbitrary and $\delta$ is a.s. finite.
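The Poisson domination of the jump counts can be seen through the standard thinning construction (a sketch of ours: a constant acceptance probability stands in for the state-dependent rate $\varphi(\omega) \le 1$; names are ours).

```python
import random

def poisson_times(rate, horizon, rng):
    """Arrival times of a Poisson process of the given rate on [0, horizon]."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def thin(times, accept_prob, rng):
    """Keep each mark independently with probability accept_prob <= 1:
    a jump process whose rate is accept_prob times the driving rate."""
    return [t for t in times if rng.random() < accept_prob]

rng = random.Random(5)
marks = poisson_times(1.0, 100.0, rng)   # unit-rate driving marks
jumps = thin(marks, 0.4, rng)            # rate-bounded jump process
# The jumps in any window are a subset of the unit-rate marks there, so the
# window count is dominated by a Poisson count of mean the window length.
count_jumps = sum(1 for t in jumps if 95.0 < t <= 100.0)
count_marks = sum(1 for t in marks if 95.0 < t <= 100.0)
```

Since the marks in a window of length $u$ are Poisson of mean $u$, the jump count in that window is stochastically dominated by a Poisson($u$) variable, mirroring the bound on $N_t - N_{(t-u)^+}$.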

Environment seen from the particle
We finally turn, in this last section of the paper, to the behavior of the environment seen from the particle at jump times. Our aim is to derive the convergence of its distribution as time (equivalently, the number of jumps) diverges, and to compare the limiting distribution with the product of the invariant distributions of the marginal BD processes. The main result of this section, stated next, addresses these issues under different subsets of the following set of conditions on the parameters of our process.
We note that in neither case do we require monotonicity of $\varphi_\infty$.
As anticipated, we focus on the homogeneously distributed case of the environment (i.e., we assume, as in the previous section, that $p_n \equiv p \in (0, 1/2)$), starting from a product of identical distributions on $\mathbb{N}$ with a positive exponential moment, and we will additionally assume that $\pi$ either has non-zero mean, or has a finite moment of order larger than 2.
Let $\omega$ be a family of iid homogeneous ergodic BD processes on $\mathbb{N}$ indexed by $\mathbb{Z}^d$, starting from $\bar\mu_0$ as in the paragraph of (4.1) of Section 4, and let $X$ be a time inhomogeneous random walk on $\mathbb{Z}^d$ starting from $0$ in the environment $\omega$, as in the previous sections. Let us recall that $\tau_n$ denotes the time of the $n$-th jump of $X$, $n \ge 1$, and consider
$$\Omega_x(n) = \omega_{X(\tau_n-) + x}(\tau_n), \quad x \in \mathbb{Z}^d. \quad (5.5)$$
$\Omega(n) := \{\Omega_x(n),\, x \in \mathbb{Z}^d\}$ represents the environment seen by the particle right before its $n$-th jump.
Theorem 5.1. Assume the condition stipulated on $\bar\mu_0$ in Section 4, and suppose that $E(\|\xi_1\|) < \infty$ and that, of the conditions listed at the beginning of this section, either 1, 2′ or 3 holds. Then

1. $\Omega(n)$ converges in $P_{\bar\mu_0}$-distribution (in the product topology on $\mathbb{N}^{\mathbb{Z}^d}$) to $\Omega := \{\Omega_x,\, x \in \mathbb{Z}^d\}$, whose distribution does not depend on the particulars of the initial distribution.
2. The distribution of $\Omega$ is absolutely continuous with respect to $\nu$.

Remark 5.2.
There may be a way to adapt our approach in this section to relax or modify Condition 4 to some extent by, say, requiring a slow decay of $\varphi$ at $\infty$, perhaps adding monotonicity, as in the previous sections. But we do not believe that a full relaxation of that condition, even imposing monotonicity, is within reach of the present approach, at least not without substantially new ideas (to control tightness to a sufficient extent).
Remark 5.3. As with the results in previous sections, we do not expect our conditions for the above results to be close to optimal; again, our aim is to give reasonably natural conditions under which we are able to present an argument in a reasonably simple way. A glaring gap in our conditions is the case of mean zero $\xi_1$ with $\varphi$ not bounded away from zero, even assuming monotonicity of $\varphi$, as we did for the results in previous sections; notice that the domination implied by (3.32), (3.33) holds for jump times of $\mathring X$, not of $X$, and is thus not directly applicable; nor did we find an indirect application of it, or another way to obtain enough tightness for the environment at jump times of $X$, to get our argument going in that case.
As a preliminary for the proof of Theorem 5.1, we consider the (prolonged) backwards in time random walk starting from $X(\tau_n-)$ (and moving backwards in time), $y = \{y_\ell,\, \ell \in \mathbb{N}\}$, such that $y_0 = 0$ and, for $\ell \in \mathbb{N}^*$, $y_\ell = \sum_{i=1}^{\ell} \xi'_i$, where $\xi'_i = -\xi_{n-i}$, $i \ge 1$, and we have prolonged $\xi$ to non-positive integer indices in an iid fashion. Notice that, for all $n \in \mathbb{N}^*$, $y$ is a random walk starting from $0$ whose (iid) jumps are distributed as $-\xi_1$ (and thus its distribution does not depend on $n$); notice also that $y_\ell = x_{n-1-\ell} - x_{n-1}$ for $0 \le \ell \le n-1$.
It is indeed convenient to use a single backward random walk $z$ (with the same distribution as $y$) for all $n$. So in many arguments below we condition on the trajectory of $z$ (which appears as a superscript in the conditional probabilities below).
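The time-reversal identity behind $y$ is purely algebraic and can be checked directly (a sketch of ours; the particular two-point jump law below is an arbitrary illustrative choice).

```python
import random

def forward_walk(increments):
    """Partial sums x_0 = 0, x_k = xi_1 + ... + xi_k."""
    x, path = 0, [0]
    for xi in increments:
        x += xi
        path.append(x)
    return path

rng = random.Random(2)
xi = [rng.choice([-1, 2]) for _ in range(50)]   # some asymmetric jump law
x = forward_walk(xi)                            # x_0, ..., x_{n-1} with n = 51
n = len(x)
# Backward walk seen from the endpoint: y_l = x_{n-1-l} - x_{n-1}.
y = [x[n - 1 - l] - x[n - 1] for l in range(n)]
# Its increments are the negated increments of x, read in reverse order.
y_inc = [y[l + 1] - y[l] for l in range(n - 1)]
```

Since the $\xi_i$ are iid, the reversed negated increments are again iid with the law of $-\xi_1$, which is why the law of $y$ (hence of $z$) does not depend on $n$.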
Proof of Theorem 5.1.
We devote the remainder of this section to this proof. Let $M$ be an arbitrary positive integer, and consider the random vector
$$\Omega^M(n) := \{\Omega_x(n),\, \|x\| \le M\}. \quad (5.7)$$
To establish the first assertion of Theorem 5.1, it is enough to show that $\Omega^M(n)$ converges in distribution as $n \to \infty$.
We start by outlining a fairly straightforward argument for the first assertion of Theorem 5.1 under Condition 3. In this case we are again (as in the argument for the case $d \ge 3$ of Theorem 4.1 above) under conditions ensuring cut times for the trajectory of $z$. It follows that there a.s. exists a finite cut time $T_M$ such that the trajectory of $z$ after $T_M$ never visits $\{x : \|x\| \le M\}$. Then, assuming that the environment is started from $\nu$, we have that, as soon as $n > T_M$, the conditional distribution of $\Omega^M(n)$ given $z$ equals that of $\check\Omega^{M,z}$, which is defined to be the $\{x : \|x\| \le M\}$-marginal of the environment of a process $(X, \omega)$, with $\omega$ started from $\nu$ and $x$ started from $z_{T_M}$, seen at the time of the $T_M$-th jump of $X$ around the position it occupied immediately before that jump, with $x_\ell = z_{T_M - \ell}$, $0 \le \ell \le T_M$. Notice that the result of the integration of the distribution of $\check\Omega^{M,z}$ with respect to the distribution of $z$ does not depend on $n$; we may denote by $\check\Omega^M$ the random vector having such (integrated) distribution. It is thus quite clear that $\Omega^M(n)$ converges to $\check\Omega^M$ in distribution as $n \to \infty$. That this also holds under the more general assumption on the initial environment stated in Theorem 5.1 can be readily argued via a coupling between $\bar\mu_0$ and $\nu$, as done in Section 4. This concludes the proof of Theorem 5.1 under Condition 3.
Below, a similar argument, not however using cut times, is outlined for the case where Condition 1 holds; see Subsubsection 5.2.3.
In order to obtain convergence of $\Omega^M(n)$ when $E(\xi_1) = 0$, and either $d \le 2$ or $\pi$ has unbounded support, we require a bound on the tail of the single-site marginal distributions of the environment at appropriate jump times of $X$, to be specified below. (We also need Condition 2.) In order to find such a bound, we felt the need to further impose Condition 4. A bound of the same kind will also enter our argument for the second assertion of Theorem 5.1. We devote the next subsection to obtaining this bound, and the two subsequent subsections to the conclusion of the proof of Theorem 5.1.

Bound on the tail of the marginal distribution of the environment
Lemma 5.4. Let $x \in \mathbb{Z}^d$, $m \ge 1$, and suppose $R$ is a stopping time of $z$ such that $R \ge m$ a.s., and on $\{m \le R < \infty\}$ we have that $z_R = x$ and $z_{R-i} \ne x$, $i = 1, \ldots, m$. Then, assuming that $E(\xi_1) = 0$, $E(\|\xi_1\|^2) < \infty$ and that (5.4) holds, there exist a constant $\alpha > 0$ and $m_0 \ge 1$ such that for all $m \ge m_0$, outside of an event involving $z$ alone of probability exponentially decaying in $m$, we have that
$$P^z_{\bar\mu_0}\big(\omega_{x - z_{n-1}}(\tau_{(n - R + m^2)^+}) > (\log m)^2\big) \le e^{-\alpha(\log m)^2} \quad (5.8)$$
for all $n \ge 0$, where $P^z_{\bar\mu_0}$ denotes the conditional probability $P_{\bar\mu_0}(\cdot \mid z)$.
Proof. Given $n \ge 0$, let us denote $\omega_{x - z_{n-1}}$ by $\omega'_x$. For $n \le R$ the claim follows from Lemma C.3. For this reason we may restrict to the event where $n \ge R$. Let $R = \theta_0, \theta_1, \theta_2, \ldots$ be the successive visit times of $z$ to $x$ starting at $R$; this may be a finite set of times. Set $I_0 = 0$ and, for $k > 0$, $I_k = \inf\{i > I_{k-1} : \theta_i - \theta_{i-1} > km\}$, $\mathcal{I}_k = [\theta_{I_{k-1}}, \ldots, \theta_{I_k} - 1] \cap \mathbb{Z}$, $\mathcal{I}'_k = (\theta_{I_k - 1}, \ldots, \theta_{I_k}) \cap \mathbb{Z}$; and let $\mathcal{I}' = \cup_{k \ge 1} \mathcal{I}'_k$. Notice that $z \ne x$ on $\mathcal{I}'$, and $|\mathcal{I}'_k| \ge km$, $k \ge 1$. (5.9) Now let us consider $|\mathcal{I}_k|$; we will bound the upper tail of its distribution, based on a bound for the upper tail of the distribution of $I_k - I_{k-1}$. A moment's thought reveals that, over all the cases of $\pi$ comprised in our assumptions, the worst case is the one dimensional recurrent one. For this case, and thus for all cases, Proposition 32.3 of [21] yields, as soon as $m$ is large enough, a bound of the form
$$P(I_k - I_{k-1} > \ell) \le (1 - c/\sqrt{km})^{\ell}, \quad \ell \ge 1, \quad (5.10)$$
where (here and below) $c$ is a positive real constant, not necessarily the same in each appearance. (Here and below, we omit the subscript in the probability symbol when it may be restricted to the distribution of $z$ only.) We now note that $|\mathcal{I}_k| \preceq km\,(I_k - I_{k-1})$, and it follows that
$$P\big(|\mathcal{I}_k| > (km)^3\big) \le e^{-ckm}. \quad (5.11)$$
It readily follows that, setting $\mathcal{J} = \mathcal{J}(x, m) = \cap_{k \ge 1}\big\{|\mathcal{I}_k| \le (km)^3\big\}$, we have that
$$P(\mathcal{J}^c) \le e^{-cm} \quad (5.12)$$
as soon as $m$ is large enough. Now, given $z$, let $K$ be such that $0 \in \mathcal{I}_K \cup \mathcal{I}'_K$. We will assume that $z \in \mathcal{J}$ and $n \ge R$, and bound the conditional distribution of $\omega'_x(\tau_{n-R})$ given such $z$, via coupling, as follows. It is quite clear from the characteristics of $X$ that, given the boundedness assumptions on $\varphi$, its jump times can be stochastically bounded from above and below by exponential random variables with rates 1 and $\delta := \inf \varphi$, respectively, independent of $\omega$.
Let us for now consider the successive continuous time intervals $\bar{\mathcal{I}}_k, \bar{\mathcal{I}}'_k$, $1 \le k < K$, in the timeline of $X$, during which $z$ jumps in $\mathcal{I}_k, \mathcal{I}'_k$, $1 \le k \le K$, respectively, if there is any such interval. We recall that time for $X$ and $z$ moves in opposite directions. Let $T_k = |\bar{\mathcal{I}}_k|$, $T'_k = |\bar{\mathcal{I}}'_k|$ denote the respective interval lengths.
Remark 5.5. We note that, given our assumed bounds on $\varphi$, whenever $1 \le k < K$, $T'_k$ may be bounded from below by the sum of $km$ independent standard exponential random variables. If $z \in \mathcal{J}$, then, for $1 \le k \le K$, $T_k$ may be bounded from above by the sum of $(km)^3$ iid exponential random variables of rate $\delta$.
Whenever $K \ge 2$, we introduce, enlarging the probability space if necessary, for each $1 \le k < K$, versions of $\omega'_x$ evolving on $\bar{\mathcal{I}}'_k$, namely $\omega^{eq}_k$ and $\hat\omega_k$, coupled to $\omega'_x$ so that, at $\tau_{n - \theta_{I_k}}$, $\omega^{eq}_k$ is in equilibrium and $\hat\omega_k$ equals the maximum of $\omega'_x$ over $\bar{\mathcal{I}}_{k+1}$, and $\omega^{eq}_k$, $\hat\omega_k$ and $\omega'_x$ evolve independently on $\bar{\mathcal{I}}'_k$ until any two of them meet, after which time they coalesce. Notice that it follows, in this case, that $\hat\omega_k \ge \omega'_x(\tau_{n - \theta_{I_k}})$. We now need an upper bound for the distribution of $\hat\omega_k$ at time $\tau_{n - \theta_{I_k}}$, assuming it starts from equilibrium at time $\tau_{n - \theta_{I_{k+1}}}$. From the considerations in Remark 5.5, we readily find that it is bounded by the maximum of a BDP starting from equilibrium during a time of length given by the sum of $(km)^3$ iid exponential random variables of rate $\delta$. In Appendix C we give an upper bound for the tail of the latter random variable (see Lemma C.1), which implies, from the above reasoning, that
$$\sum_{w \ge 0} P^z\big(\hat\omega_k(\tau_{n - \theta_{I_k}}) > (\log(km))^2 \,\big|\, \omega'_x(\tau_{(n - \theta_{I_{k+1}} - 1)^+}) = w\big)\,\nu(w) \le e^{-c(\log(km))^2}, \quad (5.13)$$
$1 \le k < K$, where $\nu$, we recall, is the equilibrium distribution of the underlying environmental BDP. As follows from the proof of Lemma C.1 (see Remark C.2), (5.13) also holds when we replace $\nu$ by any distribution on $\mathbb{N}$ with an exponentially decaying tail; so, in particular, it holds for $k = K - 1$ if we replace $\nu$ by the distribution of $\omega'_x\big(\tau_{(n - \theta_{I_K} - 1)^+}\big)$, which may be checked to have such a tail (see Lemma C.3).
We now consider the events $A_k = \big\{\hat\omega_k(\tau_{n - \theta_{I_k}}) \le (\log(km))^2\big\}$, $k = 1, \ldots, K$, and also the events $A'_k = \{\text{during } \bar{\mathcal{I}}'_k, \text{ both } \omega^{eq}_k \text{ and } \hat\omega_k \text{ visit the origin}\}$, $k = 1, \ldots, K - 1$.

Remark 5.6. On $A'_k$, $\omega'_x$ and $\omega^{eq}_k$ (and $\hat\omega_k$) coincide at time $\tau_{n - \theta_{I_k - 1}}$.

Given the drift of the BDP towards the origin, the fact that $|\mathcal{I}'_k| \ge km$, and the lower bound on $\varphi$, a standard large deviation estimate yields
$$P^z\big((A'_k)^c\big) \le e^{-ckm}. \quad (5.14)$$
From the reasoning in the latter two paragraphs (given $z \in \mathcal{J}$, and minding Remark 5.6), setting $B_K = \cap_{k=1}^{K} A_k \cap \cap_{k=1}^{K-1} A'_k$, we readily find that
$$P^z\big(B_K^c\big) \le e^{-c(\log m)^2}, \quad (5.15)$$
and the same may be readily seen to hold also for $K = 1$. Notice that the above bound is uniform in $K \ge 1$.
Combining this estimate with (5.12); using the fact that on $B_K$ we have $\omega'_x(\tau_{n-R}) \le (\log m)^2$, and thus, given that from $R - m^2$ to $R - 1$, $z$ does not visit $x$; and resorting again to a coupling of $\omega'_x$ to suitable $\omega^{eq}_0$ and $\hat\omega_0$ on the time interval $\big[\tau_{(n-R)^+}, \tau_{(n - R + m^2)^+}\big]$, similar to the couplings of $\omega'_x$ to $\omega^{eq}_k$ and $\hat\omega_k$ on $\bar{\mathcal{I}}'_k$, as well as to the lower bound on $\varphi$ and a standard large deviation estimate, as in the argument for (5.14) above, the result follows for the case where $\omega'_x$ starts from equilibrium. If the initial distribution of $\omega'_x$ is not necessarily the equilibrium one, but satisfies the conditions of Section 4 (see the paragraph of (4.1)), then, by the considerations at the end of the paragraph of (5.13) above, the ensuing arguments are readily seen to apply.

Conclusion of the proof of the first assertion of Theorem 5.1

5.2.1 First case: $E(\xi_1) = 0$ and $d = 1$

In this case we also assume, according to the conditions of Theorem 5.1, that $E(|\xi_1|^{2+\varepsilon}) < \infty$ for some $\varepsilon > 0$; we may assume, for simplicity, that $\varepsilon \le 2$.
Given $z$ and $M \in \mathbb{N}$, consider the following (discrete) stopping times of $z$: $\vartheta_1 = \inf\{n > 0 : |z_n| > M\}$ and, inductively for $\ell \ge 1$, $\bar\vartheta_\ell$ and $\vartheta_{\ell+1}$, where $\Upsilon_{\ell+1} = \max\big\{|z_n|,\, n < \vartheta_{\ell+1}\big\}$. For $\ell \ge 1$, $\vartheta_\ell$ indicates the time where $|z|$ exceeds its previous maximum (above $M$), say at respective value $x_\ell$, and $\bar\vartheta_\ell$ the return time after that to either a value below $x_\ell$, if $x_\ell > 0$, or a value above $-x_\ell$, if $x_\ell < 0$. Notice that we may have $\vartheta_{\ell+1} = \bar\vartheta_\ell$ for some $\ell$. Figure 2 illustrates realizations of these random variables.
for some constant $c > 0$, and it follows from (5.20) that the latter probability vanishes as $m \to \infty$ for $b > 1/\varepsilon$. We now notice, pointing to Appendix B for the definitions of $T_0$ and $\bar T_0$, that $\big(\chi_\ell := \Upsilon_{\ell+1} - |z_{\bar\vartheta_\ell}|,\ \chi'_\ell\big)$ is distributed as $\big(\max_{0 \le i < T_0} z_i,\ \bar T_0\big)$ if $z_{\bar\vartheta_\ell} > 0$, and as $\big(-\min_{0 \le i < T_0} z_i,\ \bar T_0\big)$ if $z_{\bar\vartheta_\ell} < 0$. In the former case we obtain, for $k \ge 1$, a polynomially decaying bound on the upper tail of $\max_{0 \le i < T_0} z_i$, combining a tail bound for $T_0$ with Kolmogorov's Maximal Inequality, and the same bound holds similarly in the latter case. We thus have, for $b > 0$, control of the upper tails of the $\chi_\ell$'s and, since $\Upsilon_\ell = M + \sum_{i=1}^{\ell-1}(\chi_i + \chi'_i)$, it follows that
$$P\big(\Upsilon_{L_m} > m^{1+b}\big) \to 0 \text{ as } m \to \infty. \quad (5.25)$$

Combining the above, we obtain that
$$P^z_{\bar\mu_0}\big(\omega'_x(\tau_{(n - R + m^2)^+}) > (\log m)^2\big) \le e^{-\alpha(\log m)^2}, \quad (5.28)$$
and thus that
$$P^z_{\bar\mu_0}\big(\omega_x(\tau_{(n - R + m^2)^+}) > (\log m)^2\big) \le e^{-c(\log m)^2}, \quad (5.29)$$
where $c$ is a positive number, depending on $\alpha$ and $b$ only. It readily follows from the arguments in the proof of Lemma 5.4 (namely, the coupling of $\omega'_x$ and $\omega^{eq}$) that
$$P^z_{\bar\mu_0}\big(\omega_x(\tau_{(n - \bar\vartheta_{L_m} + m^2)^+}) > (\log m)^2\big) \le e^{-c(\log m)^2}. \quad (5.30)$$
Now let $z \in \mathcal{J} \cap \{\Upsilon_{L_m} \le m^{1+b}\}$, and choose $n_0 \ge \bar\vartheta_{L_m}$ so large that for $n \ge n_0$ the following coupling is in force, with $\omega^{eq}_x$ coupled to $\omega_x$ so that they move independently until first meeting, after which time they coalesce.
Remark 5.7. We have that $\big(\omega^{eq}_x(\tau_{n - \bar\vartheta_{L_m} + 1})\big)_{x \in \{-\Upsilon_{L_m}, \ldots, \Upsilon_{L_m}\} - z_{n-1}}$ is distributed as a product of copies of $\nu$. This follows from the fact that, in the period from $\tau_{n - \bar\vartheta_{L_m} + m^2}$ to $\tau_{n - \bar\vartheta_{L_m} + 1}$, the jump time lengths of $X$ depend solely on $\big(\omega_x(\tau_{n - \bar\vartheta_{L_m} + m^2})\big)_{x \notin \{-\Upsilon_{L_m}, \ldots, \Upsilon_{L_m}\} - z_{n-1}}$, and on the birth-and-death processes evolving on timelines of $\mathbb{Z} \setminus \big(\{-\Upsilon_{L_m}, \ldots, \Upsilon_{L_m}\} - z_{n-1}\big)$.
Arguing similarly as in the proof of Lemma 5.4 (see Remark 5.6), we find that
$$P^z_{\bar\mu_0}\big(\omega_x(\tau_{n - \bar\vartheta_{L_m} + 1}) \ne \omega^{eq}_x(\tau_{n - \bar\vartheta_{L_m} + 1}) \text{ for some } x \in \{-\Upsilon_{L_m}, \ldots, \Upsilon_{L_m}\} - z_{n-1}\big) \le e^{-cm}. \quad (5.32)$$
Now, given $z \in \mathcal{J} \cap \{\Upsilon_{L_m} \le m^{1+b}\}$, let $\Omega^{eq}(n) := \big(\Omega^{eq}_x(n)\big)_{x \in \mathbb{Z}}$ represent the environment at time $n$ of a time inhomogeneous Markov jump process $(X(t), \omega(t))$ starting at time $\tau_{n - \bar\vartheta_{L_m}}$ from $\nu$ (with $X$ starting at that time from $z_{\bar\vartheta_{L_m} - 1}$, and jumping, forwards in time, along the backward trajectory of $z$).

Remark 5.8. Notice that the distribution of $\Omega^{eq}(n)$ does not depend on $n$.
Since, given $z$, the distribution of $\Omega^M(n)$ depends only on the environments at timelines of sites in $\{-\Upsilon_{L_m}, \ldots, \Upsilon_{L_m}\} - z_{n-1}$ from time $\tau_{n - \bar\vartheta_{L_m}}$ to $\tau_n$, we have, for $z \in \mathcal{J} \cap \{\Upsilon_{L_m} \le m^{1+b}\}$, on the complement of the event under the probability sign on the left hand side of (5.32), and resorting to an obvious coupling, that $\Omega^M_x(n) = \Omega^{eq}_x(n)$ for $|x| \le M$. Now, finally, given $\epsilon > 0$, we may choose $m$ large enough, and then $n_0$ large enough, so that, for $z \in \mathcal{J} \cap \{\Upsilon_{L_m} \le m^{1+b}\}$, and by (5.32), the distance (associated to the usual product topology) between the conditional distribution of $\Omega^M(n)$ given $z$ and that of $\big(\Omega^{eq}_x(n)\big)_{|x| \le M}$ given $z$ is smaller than $\epsilon$. We conclude from Remark 5.8 that the sequence in $n$ of conditional distributions of $\Omega^M(n)$ given $z$ is Cauchy, and thus so is the sequence of unconditional distributions. It readily follows from the above arguments that the limit is the same no matter what the details of $\bar\mu_0$ satisfying the conditions in the paragraph of (4.1) are.

5.2.2 Second case: $E(\xi_1) = 0$ and $d \ge 2$

Let us first note that each coordinate of $z$ performs a mean zero walk with the same moment condition as in the $d = 1$ case, so the arguments for that case apply to, say, the first coordinate, and we get control of the location of that coordinate at (backward) time $\bar\vartheta_{L_m}$: it is with high probability in $\{-m^{1+b}, \ldots, m^{1+b}\}$ if $m$ is large, according to (5.25). But we now need to control the location of the other coordinates, and we naturally seek similar polynomial control as for the first coordinate.
In order to achieve that, we simply show that, with high probability, we have polynomial control on the size of $\bar\vartheta_{L_m}$; this follows from standard arguments once we condition on the event that $\Upsilon_{L_m} \le m^{1+b}$, which has high probability according to (5.25). Indeed, on that event $\bar\vartheta_{L_m}$ is stochastically dominated by $\vartheta^* = \inf\{n > 0 : |z_n| > m^{1+b}\}$, the hitting time by $z$ of the complement of $\{-m^{1+b}, \ldots, m^{1+b}\}$. It is well known that under our conditions $\vartheta^* \le m^{2(1+b+\delta)}$ with high probability for all $\delta > 0$ (see Theorem 23.2 in [21]); thus, again recalling a well known result (see Theorem 23.3 in [21]), the maximum over $j = 2, \ldots, d$, and over times from $0$ to $\bar\vartheta_{L_m}$, of the absolute value of the $j$-th coordinate of $z$ is bounded by $m^{1+b+\delta}$ with high probability for any $\delta > 0$.
With this control over the maximum dislocation of $|z|$ from time $0$ to $\bar\vartheta_{L_m}$, we may repeat essentially the same argument as for $d = 1$ (with minor and obvious changes).
5.2.3 Last case: $E(\xi_1) \ne 0$

A similar, but simpler, approach works in this case.
Let us assume without loss of generality that $E(\xi_1(1)) < 0$, so that $E(\xi'_1(1)) > 0$, where the '(1)' within parentheses indicates the first coordinate. We consider the quantities introduced in Subsubsection 5.2.1 for $z(1)$ instead of $z$, and let $L_\infty = \inf\{\ell \ge 1 : \bar\vartheta_\ell = \infty\}$. It is quite clear that this is an a.s. finite random variable. Now, given a typical $z$, as soon as $n \ge L_\infty$, we have that $\Omega^M(n)$ is again distributed as $\Omega^{eq}(n)$ given above (see the paragraph right below (5.32)); by Remark 5.8, this distribution does not depend on $n$; notice that the latter definition and property make sense and hold in higher dimensions as well. The result follows for $\Omega^M(n)$ conditioned on $z$, and thus also for the unconditional distribution.
Notice that we did not need a positive lower bound for $\varphi$ (nor a finite upper bound).
Remark 5.9. We note that the above proof established, in every case, the convergence of the conditional distribution of $\omega(n)$ given $z$ as $n \to \infty$ to a limit, say $\omega^z$.
Remark 5.10. It is natural to ask about the asymptotic environment seen by the particle at large deterministic times. A strategy based on looking at the environment seen at the most recent jump time, which might allow for an approach like the one above, seems to run into a sampling-paradox-type issue, which may pose considerable difficulties in the inhomogeneous setting. We chose not to pursue the matter here.

Proof of the second assertion of Theorem 5.1
It is enough, taking Remark 5.9 into account, to show the result for the limit of the conditional distribution of $\omega(n)$ given $z$, which we denote by $\omega^z$, for $z$ in an event of arbitrarily large probability, as follows.
For $N \ge 0$, let $Q_N = \{x \in \mathbb{Z}^d : \|x\| \le N\}$ and $T_N = \inf\{k \ge 0 : \|z_k\| \ge N\}$. One may check that, by our conditions on the tail of $\pi$ and the Law of Large Numbers, for some $a > 0$ there a.s. exists $N_0$ such that for all $N \ge N_0$ we have that $T_N > aN$. For $x \in \mathbb{Z}^d$, let $R_x = \inf\{k \ge 0 : z_k = x\}$, and let $R = \{x \in \mathbb{Z}^d : R_x < \infty\}$.
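The linear lower bound on $T_N$ can be seen in one line (a sketch, assuming $E\|\xi_1\| < \infty$, which the tail condition on $\pi$ provides):

```latex
\[
N \,\le\, \|z_{T_N}\| \,\le\, \sum_{i=1}^{T_N} \|\xi_i\|
\,\le\, 2\,E\|\xi_1\|\; T_N
\qquad \text{a.s. for all large } N,
\]
```

the last inequality holding by the Strong Law of Large Numbers (note that $T_N \to \infty$ as $N \to \infty$), whence $T_N > aN$ for any $a < \bigl(2\,E\|\xi_1\|\bigr)^{-1}$.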
Consider now the event $\bar{J}_N := \bigcap_{x \in R,\, \|x\| \ge N} J(x, a\|x\|)$, with $J(\cdot, \cdot)$ as in the paragraph of (5.12). It follows from (5.12) that
$$P\big(\bar{J}_N^c\big) \le e^{-cN}, \eqno(5.33)$$
for some positive constant $c$ (again, not the same in every appearance). Lemma 5.4 and the remark in the above paragraph then ensure that for $x \in R$ such that $\|x\| \ge N \ge a^{-1} m_0 \vee N_0$ and a.e. $z \in \bar{J}_N$, we have that
$$P^z_{\bar\mu_0}\Big(\omega_{x - z_{n-1}}\big(\tau_{(n - R_x + a\|x\|/2)^+}\big) > (\log a\|x\|)^2\Big) \le e^{-c(\log a\|x\|)^2}; \eqno(5.34)$$
it readily follows from Lemma C.3 that the same bound holds for $x \notin R$. For $\|x\| \ge N \ge a^{-1} m_0 \vee N_0$, let us couple $\omega_x$ from $\tau_{(n - R_x + a\|x\|/2)^+}$ onwards, in a coalescing way, as done multiple times above, to $\omega^{eq}_x$, a BDP starting at $\tau_{(n - R_x + a\|x\|/2)^+}$ from $\nu$, its equilibrium distribution. We assume that $\omega^{eq}_x$, $\|x\| \ge N$, are independent. It readily follows, from arguments already used above, that, setting $C_N = \bigcap_{\|x\| \ge N} \{\omega_{x - z_{n-1}}(\tau_n) = \omega^{eq}_{x - z_{n-1}}(\tau_n)\}$, we have that
$$P\big(C_N^c\big) \le e^{-cN}. \eqno(5.35)$$
For $N \ge 0$ and $n > T_N$, let us consider $\bar\omega^{z,eq}(n) = (\hat\omega^{z,eq}(n), \tilde\omega^{z,eq})$, where $\hat\omega^{z,eq}(n)$ is $\omega^z(n)$ restricted to $Q_N$, and $\tilde\omega^{z,eq}$ is distributed as the product of $\nu$ over $\mathbb{Z}^d \setminus Q_N$, independently of $\hat\omega^{z,eq}(n)$. The considerations of the previous paragraph imply that we may couple $\omega^z(n)$ and $\bar\omega^{z,eq}(n)$ so that they coincide outside an event of probability which vanishes as $N \to \infty$, uniformly in $n > T_N$. Now set $\bar\omega^{z,eq} := (\hat\omega^{z,eq}, \tilde\omega^{z,eq})$, where $\hat\omega^{z,eq}$ is $\omega^z$ restricted to $Q_N$, and $\tilde\omega^{z,eq}$ is distributed as the product of $\nu$ over $\mathbb{Z}^d \setminus Q_N$, independently of $\hat\omega^{z,eq}$. From the first item of Theorem 5.1, we have the result on $\Lambda := \mathbb{N}^{\mathbb{Z}^d}$, with initial distribution $\bar\mu_0 := \bigotimes_{x \in \mathbb{Z}^d} \mu_{x,0}$ and trajectories living on $A := D(\mathbb{R}_+, \mathbb{N})^{\mathbb{Z}^d}$.
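The coalescing coupling invoked above can be sketched in code; this is a generic illustration (the rates $B$ and $D \cdot n$ below are invented stand-ins, not the paper's BDP rates): two copies evolve independently until they meet, and make identical jumps from the meeting time on, so they coincide at all later times.

```python
import random

# Illustrative birth-and-death rates: constant birth rate B, death rate D*n.
# (These are stand-ins, not the rates of the paper's BDP.)
B, D = 1.0, 0.5

def coalescing_bd(x0, y0, t_max, rng):
    """Basic coalescing coupling of two birth-and-death chains: while the
    copies differ they jump independently (superposed clocks); once equal
    they make every jump together, hence agree forever after.  Returns the
    final states and the meeting time (None if they never met)."""
    x, y, t, meet = x0, y0, 0.0, None
    while t < t_max:
        if x == y:
            if meet is None:
                meet = t
            bx, dx = B, D * x
            t += rng.expovariate(bx + dx)
            if t >= t_max:
                break
            step = 1 if rng.random() < bx / (bx + dx) else -1
            x += step
            y += step
        else:
            bx, dx, by, dy = B, D * x, B, D * y
            total = bx + dx + by + dy
            t += rng.expovariate(total)
            if t >= t_max:
                break
            u = rng.random() * total
            if u < bx:
                x += 1
            elif u < bx + dx:
                x -= 1
            elif u < bx + dx + by:
                y += 1
            else:
                y -= 1
    return x, y, meet

rng = random.Random(1)
x, y, meet = coalescing_bd(0, 10, 200.0, rng)
# Jumps are by one unit, and only one copy moves at a time while they differ,
# so the copies cannot cross without meeting; once met they move in lockstep.
```

For these ergodic rates the two copies meet quickly with overwhelming probability, which is the mechanism behind bounds like (5.35).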

Figure 3: Schematic depiction of a stretch of the (backward) trajectory of $z \in \{\Upsilon_{L_m} \le m^{1+b}\}$.

For $l \in \mathbb{N}$, $P(Y_t \le l) = \ldots$; then, for all $t \in \mathbb{R}_+$, $\ldots$

Proof. Let $Y = (Y_t)_{t \in \mathbb{R}_+}$ denote the birth-and-death process generated by $Q_\psi$ started from $\nu$, and set $P_{n,j}(t) := P(Y_t = j \mid Y_0 = n)$, $t \in \mathbb{R}_+$, $n, j \in \mathbb{N}$. (3.54) is quite clear; (3.53) follows immediately upon remarking that $L_{nk,(n+1)k}$, $n \in \mathbb{N}$, are independent random variables; and (3.55) follows readily from (3.35) and (3.36). So it remains to argue (3.52), which is equivalent to
$$\tau^m_{n-m} \le \tau_n, \quad 0 \le m \le n < \infty. \eqno(3.56)$$
We make this point similarly as for (3.34) above. (3.56) is immediate for $m = 0$. Let us fix $m \ge 1$. Then (3.56) is immediate for $n = m$, and for $n = m+1$ it follows readily from the fact that $\omega^m_{x_m}(t) \le \omega_{x_m}(t)$, $t \ge \tau_m$.

For the remaining cases, let $n \ge m+1$, and suppose, inductively, that $\tau^m_{n-m} \le \tau_n$. There are two possibilities for $\tau^m_{n+1-m}$: either $\tau^m_{n+1-m} \le \tau_n$, in which case, clearly, $\tau^m_{n+1-m} \le \tau_{n+1}$; or $\tau^m_{n+1-m} > \tau_n$, in which case $\tau_{n+1}$ (resp., $\tau^m_{n+1-m}$) corresponds to the earliest Poisson point (of $M$) in $Q'_n := \big[c_{x_n}(d),\, c_{x_n}(d) + \varphi(\omega_{x_n}(r))\big]_{r \ge \tau_n}$ (resp., $\tilde{Q}_n := \big[c_{x_n}(d),\, c_{x_n}(d) + \varphi(\omega^m_{x_n}(r))\big]_{r \ge \tau_n}$). By (3.51) and the monotonicity of $\varphi$, we have that $\tilde{Q}_n \supset Q'_n$, so that $\tau^m_{n+1-m} \le \tau_{n+1}$, completing the induction.

For $t \in \mathbb{R}_+$, let $N_t = \sup\{n \ge 0 : \tau_n \le t\}$.

Proof of the Central Limit Theorem for $X$ under $P_0$

We now prove Theorem 3.2. Let $\gamma = 1/\mu$, and write $\ldots$ By the Central Limit Theorem obeyed by $(x_n)$, we have that, under $P$, as $t \to \infty$, $\ldots$ By (3.57), it then suffices to consider the first term on the right-hand side of (3.61), which may readily be seen to be bounded above by $\ldots$, using the Maximal Inequality in the latter passage; the claim follows since $\varepsilon$ is arbitrary. And the CLT follows readily from the claim and (3.60).
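Reading jump times off the earliest Poisson point in a region like $Q'_n$, whose width at time $r$ is $\varphi(\omega_{x_n}(r))$, is in effect Poisson thinning. The following minimal sketch (our own illustration, with an invented rate function bounded by an assumed constant $K$) shows the mechanism:

```python
import random

def thinned_jump_time(phi_of_omega, t0, K, rng):
    """First jump time after t0 of a clock whose instantaneous rate is
    phi_of_omega(t) <= K, by Poisson thinning: candidate points arrive at
    rate K, and a candidate at time t is kept with probability
    phi_of_omega(t) / K.  The first kept point plays the role of the
    earliest Poisson point in a region like Q'_n above."""
    t = t0
    while True:
        t += rng.expovariate(K)        # next point of the dominating rate-K process
        if rng.random() * K < phi_of_omega(t):
            return t                   # accepted: lies "below the curve" phi(omega(.))

rng = random.Random(2)
K = 2.0

# Sanity check of the thinning: with the constant rate phi == K nothing is
# rejected, so the waiting times are Exp(K), with mean 1/K.
gaps = [thinned_jump_time(lambda t: K, 0.0, K, rng) for _ in range(2000)]
mean_gap = sum(gaps) / len(gaps)

# A decreasing, vanishing rate, mimicking phi applied to a growing environment
# (an invented example; any measurable rate between 0 and K works):
tau1 = thinned_jump_time(lambda t: 2.0 / (1.0 + t), 0.0, K, rng)
```

Monotonicity is visible here exactly as in the proof: enlarging the rate function enlarges the acceptance region, so the first accepted point can only come earlier, which is the content of $\tilde{Q}_n \supset Q'_n$.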