Subadditive Theorems in Time-Dependent Environments

We prove time-dependent versions of Kingman's subadditive ergodic theorem, which can be used to study stochastic processes as well as propagation of solutions to PDE in time-dependent environments.


Introduction and Main Results
During the last half-century, Kingman's subadditive ergodic theorem [4] and its versions (in particular, by Liggett [6]) have been a crucial tool in the study of evolution processes in stationary ergodic environments, including first passage percolation and related models as well as processes modeled by partial differential equations (PDE) which satisfy the maximum principle. Typically, the theorem is used to show that propagation of such a process in each spatial direction has almost surely some deterministic asymptotic speed. This can also often be extended to existence of a deterministic asymptotic propagation shape when the propagation involves invasion of one state of the process (e.g., the region not yet affected by it) by another (e.g., the already affected region).
Kingman's theorem concerns a family $\{X_{m,n}\}$ ($n > m \ge 0$) of random variables on a probability space which satisfy the crucial subadditivity hypothesis
$$X_{m,n} \le X_{m,k} + X_{k,n} \qquad \text{for all } k \in \{m+1, \dots, n-1\}, \tag{1.1}$$
together with $\mathbb E[X_{0,n}] \in [-Cn, \infty)$ for some $C \ge 0$ and each $n \in \mathbb N$. Also, $\{X_{m,n}\}$ is stationary in the sense that the joint distribution of $\{X_{m+n,m+n+k} \mid (n,k) \in \mathbb N_0 \times \mathbb N\}$ is independent of $m \in \mathbb N_0$. It then concludes that $\overline X := \lim_{n\to\infty} \frac{X_{0,n}}{n}$ exists almost surely, and $\mathbb E[\overline X] = \lim_{n\to\infty} \frac{\mathbb E[X_{0,n}]}{n}$. Moreover, $\overline X$ is a constant if $\{X_{m,n}\}$ is also ergodic, that is, any event defined in terms of $\{X_{m,n}\}$ and invariant under the shift $(m,n) \to (m+1, n+1)$ has probability either 0 or 1.
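As a quick numerical illustration of this conclusion (outside the scope of the results here), the sketch below builds a hypothetical stationary subadditive family from partial sums of i.i.d. increments and watches $X_{0,n}/n$ settle to a constant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stationary subadditive family: with i.i.d. increments a_i,
# set X_{m,n} = a_m + ... + a_{n-1} + 1.  Subadditivity (1.1) holds since
# X_{m,k} + X_{k,n} = X_{m,n} + 1 >= X_{m,n}, and the almost sure limit of
# X_{0,n}/n here is simply E[a_0] = 2 (the "+1" washes out after dividing by n).
a = rng.exponential(scale=2.0, size=200_000)

def X(m, n):
    return a[m:n].sum() + 1.0

for n in (100, 10_000, 200_000):
    print(n, X(0, n) / n)   # ratios settle near the deterministic limit 2
```

Here the limit is deterministic because the increments are i.i.d., hence ergodic.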
A typical use of such a result in the study of PDE is described in Example 5.1 below. We let $X_{m,n}$ be the time it takes for a solution to the PDE to propagate from $me \in \mathbb R^d$ to $ne \in \mathbb R^d$ (see the example for details), with $e$ some fixed unit vector (i.e., direction). Subadditivity is then guaranteed by the maximum principle for the PDE, and Kingman's theorem may therefore often be used to conclude existence of a deterministic propagation speed in direction $e$, in an appropriate sense and under some basic hypotheses.
However, this approach only works when the coefficients of the PDE are either independent of time or time-periodic. The present work is therefore motivated by our desire to apply subadditivity-based techniques to PDE with more general time dependence of coefficients (and to other non-autonomous models), in particular, those with finite temporal ranges of dependence as well as with decreasing temporal correlations. Despite this being a very natural question, we were not able to find relevant results in the existing literature. We thus prove here the following two results, and also provide applications to a time-dependent first passage percolation model (see Examples 5.2 and 5.3 below). In the companion paper [8] we apply these results to specific PDE models (as described in Example 5.1), specifically reaction-diffusion equations and Hamilton-Jacobi equations.
Our first main result in the present paper applies when the process in question (or rather the environment in which it occurs) has a finite temporal range of dependence, with $\mathcal F^\pm_t$ being the sigma-algebras generated by the environment up to and starting from time $t$, respectively. It mirrors Kingman's theorem, with a weaker stationarity hypothesis (3) below (analogous to [6]) but under the additional hypothesis (6). The latter is the natural requirement that if the process propagates from some "location" $m$ to another location $n$, starting at some time $t$, it cannot reach $n$ later than the same process which starts from $m$ at some later time $t+s$, at least when $s$ is sufficiently large. In the case of PDE, the maximum principle will often guarantee this if the time-dependent propagation times $X^t_{m,n} \ge 0$ (i.e., from location $m$ to $n$, starting at time $t \in [0,\infty)$) are defined appropriately (see Example 5.1). We also note that (1) below is the natural version of (1.1) in the time-dependent setting.
Theorem 1.1. Let $(\Omega, \mathbb P, \mathcal F)$ be a probability space, and let $\{\mathcal F^\pm_t\}_{t \ge 0}$ be two filtrations with $\mathcal F^-_s \subseteq \mathcal F^-_t$ and $\mathcal F^+_t \subseteq \mathcal F^+_s$ for all $t \ge s \ge 0$. For any $t \ge 0$ and integers $n > m \ge 0$, let $X^t_{m,n} : \Omega \to [0, \infty)$ be a random variable. Let there be $C \ge 0$ such that the following statements hold for all such $t, m, n$:

(1) $X^t_{m,n} \le X^t_{m,k} + X^{t + X^t_{m,k}}_{k,n}$ for all $k \in \{m+1, \dots, n-1\}$;

(2) …

Moreover, if $C \in \mathbb N$ and the $X^t_{m,n}$ are all integer-valued, then it suffices to have $c = 0$ in (6).

Remarks. 1. Of course, it suffices to assume (1) and (6) only almost surely.

2. There would be little benefit in using different $C$ in (5) and (6), because (5) clearly holds with any larger $C$, while iterating (6) yields (6) for all $s \in [kC, kC + kc]$ and any $k \in \mathbb N$.
Our second main result allows for an infinite temporal range of dependence of the environment, provided this dependence decreases with time in an appropriate sense; we then also need a uniform bound in place of (2).

Theorem 1.2. Assume the hypotheses of Theorem 1.1, but with (2) and (5) replaced by

(2*) $X^0_{0,1} \le C$;

(5*) $\lim_{s \to \infty} \varphi(s) = 0$, where …

Then …, and if there is $\alpha > 0$ such that $\lim_{s \to \infty} s^\alpha \varphi(s) = 0$, then also … Moreover, if $C \in \mathbb N$ and the $X^t_{m,n}$ are all integer-valued, then it suffices to have $c = 0$ in (6).

Remarks. 1. Again, using different $C$ in (2*) and (6) would not strengthen the result.

2. We will actually prove this result with $\varphi(s)$ being instead the supremum of …

3. We will also show that without assuming $\lim_{s \to \infty} s^\alpha \varphi(s) = 0$, we still have …

Organization of the Paper and Acknowledgements. We prove Theorem 1.1 in Section 2 and the claims in Theorem 1.2 in Sections 3 and 4. Section 5 contains some applications of these results.
We thank Patrick Fitzsimmons and Robin Pemantle for useful discussions. YPZ acknowledges partial support by an AMS-Simons Travel Grant. AZ acknowledges partial support by NSF grant DMS-1900943 and by a Simons Fellowship.

Finite Temporal Range of Dependence
Let us first prove a version of Theorem 1.1 with $\mathbb N_0$-valued random variables and $C = 0$ in (5); Theorem 1.1 will then easily follow. Let us denote $\{X = s\} := \{\omega \in \Omega \mid X(\omega) = s\}$.

Theorem 2.1. Let $(\Omega, \mathbb P, \mathcal F)$ be a probability space, and $\{\mathcal F^\pm_t\}_{t \in \mathbb N_0}$ two filtrations satisfying (2.1). For any integers $t \ge 0$ and $n > m \ge 0$, let $T^t_{m,n} : \Omega \to \mathbb N_0$ be a random variable. Let there be $C, C' \in \mathbb N$ such that the following statements hold for all such $t, m, n$.
for all $k \in \{m+1, \dots, n-1\}$; …

The proof of (2.3) is similar to the proof of [1, Lemma 6.7], although there the analogs of $T^t_{m,n}$ were bounded random variables; the idea goes back to [4], where the analogs of $T^t_{m,n}$ were $t$-independent. For any integers $n > m > 0$, (4') shows that for any $i, j \in \mathbb N_0$ we have … Summing this over $i \in \mathbb N_0$, we find that … Fekete's subadditive lemma thus implies that the equality in (2.3) holds.
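Fekete's subadditive lemma states that a sequence with $a_{m+n} \le a_m + a_n$ satisfies $\lim_{n\to\infty} a_n/n = \inf_n a_n/n$. A quick numerical check with a hypothetical subadditive sequence (not one from the proof):

```python
import math

# Fekete's subadditive lemma: a_{m+n} <= a_m + a_n implies
# lim a_n/n = inf_n a_n/n.  Hypothetical subadditive sequence:
# a_n = 3n + sqrt(n) (both summands are subadditive, since
# sqrt(m+n) <= sqrt(m) + sqrt(n)), whose limit is 3.
def a(n):
    return 3 * n + math.sqrt(n)

ratios = [a(n) / n for n in range(1, 100_001)]
print(min(ratios))   # the infimum of a_n/n ...
print(ratios[-1])    # ... is attained at the largest n here, already near 3
```

Since $a_n/n = 3 + n^{-1/2}$ is strictly decreasing, the infimum and the limit coincide.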
For any $n \in \mathbb N$, let $t^n_0 := 0$ and $\xi^n_0 := T^0_{0,n}$, and then for $i \in \mathbb N$ define recursively … By iteratively applying (1'), we get for any $k \in \mathbb N$, … Similarly as above, it follows from (3')-(5') that for any $j_0, j_1, \dots, j_{k-1} \in \mathbb N_0$ we have … Summing this over all indices but $i$ shows that $\xi^n_i$ has the same law as $T^0_{0,n}$ for each $i$. This, (2'), and (2.4) with $n = 1$ then show that for any $k \in \mathbb N$, … (2.5) Also, the above computation shows that $\xi^n_0, \dots, \xi^n_{k-1}$ are jointly independent random variables for all $n$ and $k$, so the strong law of large numbers yields $\lim …$ Thus (2.4) and the equality in (2.3) yield that for any $\varepsilon > 0$ there is … Now fix any $l \in \{0, \dots, n_\varepsilon - 1\}$ and note that (1') yields for all $k \in \mathbb N_0$, … Since $T^{T^0_{0,kn_\varepsilon}}_{kn_\varepsilon, kn_\varepsilon + l}$ has the same distribution as $T^0_{0,l}$, we obtain from (2.5) that … The Borel-Cantelli Lemma then implies that $\limsup_{k\to\infty} …$ for all $k \in \mathbb N$. However, this and (3') imply that $Z^t_m$ is independent of $\mathcal F^-_{t+Ck}$ for all $k \in \mathbb N$, while (4') shows that it is also measurable with respect to the $\sigma$-algebra generated by $\bigcup_{s \ge t} \mathcal F^-_s$. This shows that there is a constant … In view of (2.3), to prove (2.2) it remains to show that … Our proof of this is related to the approach of Levental [5] in the $t$-independent case, which is in turn based on [3]. However, $t$-dependence complicates the situation here, which is why we first needed to show that $Z^t_m$ is in fact $(t, m, \omega)$-independent to conclude (2.8) (in [5], it was sufficient to allow $\omega$-dependence at first). Fix any $\varepsilon > 0$, and denote … (which also depends on $\varepsilon$, but we suppress this in the notation). It follows from $Z^t_m = Q$ a.e. that almost surely we have $N^t_m < \infty$ for all $(t, m) \in \mathbb N_0^2$, and (3') yields that $N^t_m$ has the same distribution as … (2.9) Let now $t_0 := 0$ and $r_0 := 0$, and for $k \ge 0$ define recursively … Fix any $n \in \mathbb N$.
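The Borel-Cantelli step invoked above rests on the standard fact that if $\sum_k \mathbb P[A_k] < \infty$, then almost surely only finitely many $A_k$ occur. A Monte Carlo sketch with hypothetical events (unrelated to the quantities in this proof):

```python
import numpy as np

rng = np.random.default_rng(1)

# Borel-Cantelli: if sum_k P[A_k] < infinity, then a.s. only finitely many
# A_k occur.  Hypothetical independent events with P[A_k] = 2^{-k}, so the
# expected number of occurrences per sample path is sum_k 2^{-k} = 1.
K, paths = 50, 10_000
occur = rng.random((paths, K)) < 2.0 ** -np.arange(1, K + 1)
counts = occur.sum(axis=1)
print(counts.mean())   # near 1
print(counts.max())    # every path sees only finitely many (here, few) events
```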
We will now use $\{r_k\}_{k \ge 1}$ to divide the "propagation" from $0$ to $n$ into several "steps". Since this sequence is strictly increasing for each $\omega \in \Omega$, the random variable … (note that, e.g., $T$ …) We now want to take expectations on both sides of (2.11). From (4') we see that for any $i, j \in \mathbb N_0$ we have … and since … and $N^j_i$ are $\mathcal F^+_j$-measurable, from (5'), (3'), and (2.9) we obtain … (2.12) Finally, we claim that $\mathbb E\big[T^{t_{K_n}}_{r_{K_n}, n}\big] \le C' M_\varepsilon^2$; this together with (2.10) and (2.12), and then taking $\varepsilon \to 0$, will yield (2.8). To this end we note that … Since $\{t_{K_n} = j\} \in \mathcal F^-_j$ and $T^j_{n-l,n}$ is $\mathcal F^+_j$-measurable, we obtain from (5'), (3'), and (2.5), … Therefore indeed … holds and the proof is finished.
Proof of Theorem 1.1. Let us first assume that $c \ge 1$ and define … Let us redefine $\mathcal F^-_t$ to be $\mathcal F^-_{t-C}$ for $t \ge C$ and $\{\emptyset, \Omega\}$ for $t \in [0, C)$ (i.e., shift $\mathcal F^-_t$ to the right by $C$), and let $C' := \mathbb E\big[\lceil X^0_{0,1} + C \rceil\big]$. After restricting $t$ to $\mathbb N_0$, it is clear that $T^t_{m,n}$ satisfies hypotheses (2')-(6') of Theorem 2.1, with $\max\{\lceil C \rceil, 1\}$ in place of $C$. And (1') also holds because if $n > k > m \ge 0$ are integers, then (1) and (6) … Hence (2.2) proves (1.2) with the last numerator being $\mathbb E\big[\lceil X^0_{0,n} + C \rceil\big]$. Note that this argument also applies in the setting of the last claim in Theorem 1.1, and without $\lceil \cdot \rceil$.
To get (1.2) as stated and for any $c > 0$, let …

Time-Decaying Dependence I
In this section we will prove the first claim in Theorem 1.2 and the corresponding integer-valued claim. Let us first prove a version of the latter with weaker (2*) and stronger (5*).

Theorem 3.1. Let $(\Omega, \mathbb P, \mathcal F)$ be a probability space, and $\{\mathcal F^\pm_t\}_{t \in \mathbb N_0}$ two filtrations satisfying (2.1). For any integers $t \ge 0$ and $n > m \ge 0$, let $X^t_{m,n} : \Omega \to \mathbb N_0$ be a random variable. Let there be $C \in \mathbb N$ such that for all such $t, m, n$ we have (1) and (3) from Theorem 1.1, and …

Proof. From (5**) we know that for each $\varepsilon > 0$ there is … Let us then define (again suppressing $\varepsilon$ in the notation for the sake of clarity) … As before, for any $\varepsilon > 0$ and $n \in \mathbb N$, let $t^n_0 := 0$ and $\xi^n_0 := T^0_{0,n}$, and then for $i \in \mathbb N$ define recursively … By (1) we have $T^0_{0,kn} \le \sum_{i=0}^{k-1} \xi^n_i$ for each $k \in \mathbb N$. Also, since (4'') yields … it follows from (3) and (3.1) that … which exists by (3.2). Then (4'') shows that for any integers $n > m > 0$ and $i, j \in \mathbb N_0$ we get … (3.9) For any $n \in \mathbb N$ write $n = kn_\varepsilon + l$, where $k \in \mathbb N_0$ and $l \in \{0, \dots, n_\varepsilon - 1\}$. By applying (1) and the above computations recursively, we obtain … Since $\varepsilon > 0$ was arbitrary, this and (3.2) show that … Next we claim that there is $C_* > 0$ such that for any $\varepsilon \in (0, 1]$, $n \in \mathbb N$, and $i \ne j$ we have … We postpone the proof of (3.11) to the end of the proof of (i). Since $t^n_k = \sum_{i=0}^{k-1} \xi^n_i$, we now have … Chebyshev's inequality then yields … Since $\mathbb E[t^n_k] = \sum_{i=0}^{k-1} \mu^n_i$, this and (3.4) imply … For any $N \in \mathbb N$ write $N = kn + l$, where $k \in \mathbb N_0$ and $l \in \{0, \dots, n-1\}$. Then (1) yields … and (3.7), as well as … If we then take $n = n_\varepsilon$ in (3.12) and then $N \to \infty$ (so that $k \to \infty$), for each $\delta > 0$ we obtain $\limsup …$ Since $\mu^{n_\varepsilon}_0 = \mathbb E[X^0_{0,n_\varepsilon}] + C_\varepsilon$ and $\lim_{\varepsilon \to 0} n_\varepsilon = \infty$ by (3.13), taking $\varepsilon \to 0$ in this estimate and using (3.10) and (3.13) shows that $\lim_{N\to\infty} \mathbb P …$ Let us now assume that there is $\delta > 0$ and a sequence $n_k \to \infty$ such that … Since … for all large enough $k$ we have … But (3.10) and (3.14) also show that for all large enough $k$ we have … Hence for all large enough $k$ we obtain … which contradicts
(3.15). It follows that $\lim_{n\to\infty} \mathbb P\big[\frac{X^0_{0,n}}{n} - \overline X < -\delta\big] = 0$ for each $\delta > 0$, so this and (3.14) yield (1.3).
It therefore remains to prove (3.11). Similarly as in (3.4), for any $(i, n) \in \mathbb N_0 \times \mathbb N$ and with $\bar\xi^n$ … as well as … which yields this estimate with $C_* := 4\,\mathbb E\big[X^0_{0,1} + C\big]^{1/2}$.
To prove the second claim in (3.11), we apply (4*) to get that for any $i, i', j, j', k, l \in \mathbb N_0$ satisfying $l > k$, … (3.17) Since $T^{…}_{kn,(k+1)n} - C_\varepsilon \ge 0$, it follows from the above, (3), (3.1), and (3.5) that …, where we used that the summands are zero whenever $j' < i' + i$. Also note that (3.6) yields … Now the second claim in (3.11) follows by (3.10), and the proof of (i) is finished.
Next we adjust this proof to obtain the integer-valued version of the first claim in Theorem 1.2. We will use in it the following lemma.
The same estimate holds for the sum over $U^-$, finishing the proof.

Proof. This proof follows along the same lines as the one of Theorem 3.1, with some minor adjustments. From (1), (2*), and (3) we see that for any integers $t \ge 0$ and $n > m \ge 0$ we have … With the $\varphi$ considered here, let $C_\varepsilon \in \mathbb N$ be such that … and let $T^t_{m,n}$, $\overline X$, $t^n_i$, $\xi^n_i$, $\mu^n_i$ be defined as before. Then (3.19), Lemma 3.2, and (3) yield … instead of (3.4). Similarly, we obtain … Using (1) and (3.21) in place of (3.4), we now get … in place of (3.7), with $C'_\varepsilon := \mathbb E\big[X^0_{0,1}\big] + C_\varepsilon + C\varepsilon$. Next, similarly to (3.21) and using Lemma 3.2 and (3.19), we can replace (3.9) by … With this, we again obtain (3.10). The proof of (3.11) is also adjusted similarly to (3.21). We now obtain …, which yields the first claim in (3.11) as before (with a different $C_*$). In the proof of the second claim, we use (3.22) in place of (3.5), as well as $\bar\xi^n_k \le Cn$ (due to (3.19)). We also use the same adjustment as in (3.21), but now replacing the sum over $k$ by the sum over $(i, i', j')$ (with $A^{(i,i',j')}_j := \{T^{j'}_{ln,(l+1)n} = j\}$ when we use Lemma 3.2). This and (3.19) show that … This, (3.23) applied with $i = k, l$, and $\bar\xi^n_0 \le Cn$ then yield the second claim in (3.11) with … Now, the proof of (3.12), but with (3.4), (3.7), and (3.9) replaced by (3.21), (3.24), and (3.25), shows that … where … This then implies (3.14) as before, and the rest of the proof is identical to the proof of Theorem 3.1.
We can now prove the first claim in Theorem 1.2 similarly to the proof of Theorem 1.1.
Proof of the first claim in Theorem 1.2. Let us first assume that $c \ge 1$. Let … and restrict $t$ to $\mathbb N_0$. Similarly to the proof of Theorem 1.1, we find that $T^t_{m,n}$ satisfies hypotheses (1), (3), (4**), (6**) of Theorem 3.3 (with $X^t_{m,n}$ replaced by $T^t_{m,n}$), but with $\max\{\lceil C \rceil, 1\}$ in place of $C$ in (6**). Hence iteration of (6**) shows that it also holds for $T^t_{m,n}$ and $C' := 2\max\{\lceil C \rceil, 1\}$ in place of $C$. From (2*) for $X^t_{m,n}$ we see that $T^t_{m,n}$ also satisfies (2*) with $C'$ in place of $C$.
Let now $\varphi$ be as in Remark 2 after Theorem 1.2. Note that if we define $\bar\varphi(s)$ as in that remark but only with $s, t_0, t_1, \dots \in \mathbb N_0$, then $\bar\varphi \le \varphi$. Therefore our hypothesis $\lim_{s\to\infty} \varphi(s) = 0$ implies the last hypothesis in Theorem 3.3 as well. That theorem for $T^t_{m,n}$ now yields (1.3). For $c \in (0, 1)$, we let $\mathcal G^\pm_t$ and $Y^t_{m,n}$ be as in the proof of Theorem 1.1. The above argument with $(\mathcal G^\pm_t, Y^t_{m,n}, SC, Sc)$ in place of $(\mathcal F^\pm_t, X^t_{m,n}, C, c)$ then again concludes (1.3). Finally, in the setting of the last claim in Theorem 1.2 we can just apply Theorem 3.3 directly to $X^t_{m,n}$ (with $\bar\varphi$ above).

Time-Decaying Dependence II
In this section we will prove the second claim in Theorem 1.2, as well as the corresponding integer-valued claim.
Proof of the second claim in Theorem 1.2. Similarly to the proof of the first claim in Theorem 1.2, this again follows from the corresponding integer-valued claim. Hence, without loss, we can restrict $t$ to $\mathbb N_0$ and assume that $X^t_{m,n}$ only takes values in $\mathbb N_0$. As in the proof of Theorem 3.1, let $T^t_{m,n} := X^t_{m,n} + C_\varepsilon$ for some $C_\varepsilon \in \mathbb N$ that is a multiple of $C$ and for which (3.20) also holds. Then (1'), (3'), (6') from Theorem 2.1 hold and so does (4'') from the proof of Theorem 3.1, while (2') is replaced by $T^0_{0,1} \le C + C_\varepsilon$, and (5*) also holds. For any $n \in \mathbb N$, define $t^n_i$ and $\xi^n_i$ as at the start of the proof of Theorem 3.1. From (4'') we again get (3.17) for any $i, i', j, j', k, l \in \mathbb N_0$, and the argument after (3.17) again shows that if $l > k$, then … Then the argument from the proof of the second claim in (3.11) in the proof of Theorem 3.3 (which uses Lemma 3.2) shows that for any $\nu, \nu' \in \mathbb N$ we have … From (1') we see that for any $n \in \mathbb N$ we have … From (4.1) we see that there is an $\varepsilon$-independent $n_K \in \mathbb N$ such that for all $n \ge \max\{C_\varepsilon K, n_K\}$, … From (1), (2*), and (3) we get … for these $n$ and all $i \in \mathbb N_0$. This means that if only one of the numbers is positive, then (4.4) yields … The same estimate holds if each of these numbers is less than $\frac{C+1}{K}$. These facts, (4.3), (3'), and (4.5) now imply that for any $n \ge \max\{C_\varepsilon K, n_K\}$ we have … We can now apply this estimate iteratively with $Kn, K^2 n, \dots$
in place of $n$ and obtain for any $n \ge \max\{C_\varepsilon K, n_K\}$ and $q \in \mathbb N$, … This of course also yields … The hypothesis shows that there is $A \in \mathbb N$ such that $C_\varepsilon \le A\varepsilon^{-A}$ for all $\varepsilon \in (0, 1)$. Let … Then for any $q \in \mathbb N$, (4.6) with $\varepsilon := 2^{-q}$ and $n := …$ By the Borel-Cantelli Lemma we then obtain $\limsup …$ Now apply (4.8) with $C'$ taking all the values in … Then for any large $n$, there is … So by (1) and (2*) we have … By taking $K \to \infty$, we conclude (4.2). It remains to prove … We will do this assuming only that $\lim_{s\to\infty} \varphi(s) = 0$ (rather than $\lim_{s\to\infty} s^\alpha \varphi(s) = 0$), and without the use of the proof of (4.2). This will then also prove Remark 3 after Theorem 1.2. For any $t, m, n, j \in \mathbb N_0$ with $j \ge n$, let … Since $Z^{t+Ck}_m$ is non-decreasing in $k \in \mathbb N$ by (6) (with $c = 0$), and since the law of $Z^t_m$ is independent of $(t, m)$ by (3), … By (3) and Egorov's Theorem, there are $\delta$-dependent $n, j \in \mathbb N$ with $j \ge n$ such that for any $t \in \mathbb N_0$ we have … Hence $\mathrm{Cov}\big(Y^0_{0;n,j}, Y^{Ck}_{0;n,j}\big) \le C^2 \varphi(C(k-j))$, which contradicts (4.11) if we take $k$ large enough (because (5*) holds).
Therefore $Z^0_0$ is indeed almost everywhere equal to some constant $Q \in [0, \overline X]$. Then (4.9) is just $\overline X \le Q$, so we only need to prove this. For any $\varepsilon > 0$ and $K \in \mathbb N$, let us define … Note that to prove $\overline X \le Q$, it suffices to show that … holds for each $\varepsilon > 0$ and $K \in \mathbb N$, with some $n$-independent $M'_{K,\varepsilon}$. This is because after dividing (4.15) by $K$ and taking $n \to \infty$, we obtain from (4.1), … Taking $K \to \infty$ and then $\varepsilon \to 0$ now yields $\overline X \le Q$, so we are indeed left with proving (4.15). This is done similarly to the argument in the proof of (2.8), with $KQ$ in place of $Q$. Fix $\varepsilon > 0$ and $K \in \mathbb N$, let $Q_\varepsilon := KQ + \varepsilon$ (as at the start of that proof), and let $T^t_{m,n}$ be from (4.13). Note that for any $t, m \in \mathbb N_0$ we have $\liminf_{n\to\infty} \frac{T^t_{m,m+n}}{n} = KQ$ almost surely, because $Z^t_m = Q$ almost everywhere. Define $N^t_m$, $M_\varepsilon$, $t_k$, $r_k$, $S_n$ as in the proof of (2.8), and follow that proof, with two adjustments near the end where (5') was used. The first is the estimate on … From (4'') we have for any $i, j \in \mathbb N_0$ that $\{r_k = i \ \& \ t_k = j\} \in \mathcal F^-_{j - C_\varepsilon}$, and $T^j_{i,i+1}$ and $N^j_i$ are $\mathcal F^+_j$-measurable. Hence we can use (5*), (4.14), and Lemma 3.2 instead of (5') (as well as (3') and (2.9) as before) to obtain … This, (4.17), and (2.10) now show (4.15), and the proof is finished.

PDE and First Passage Percolation in Time-Dependent Environments
Our main motivation for this work was its application in the proofs of homogenization for reaction-diffusion equations and G-equations with time-dependent coefficients [8].However, our results can be used to study propagation of solutions to even more general PDE.
Example 5.1. Consider some PDE on $[0, \infty) \times \mathbb R^d$ with space-time stationary coefficients, for which the maximum principle holds. Assume that (5) resp. (5*) holds when $\mathcal F^\pm_t$ are the $\sigma$-algebras generated by the coefficients restricted to $[0, t] \times \mathbb R^d$ and $[t, \infty) \times \mathbb R^d$, respectively. Fix some compactly supported "bump" function $u_0 : \mathbb R^d \to [0, \infty)$, and for any $(t', x')$ … so that $X^{t'}(x', y)$ can be thought of as the time it takes for $u_{t',x'}$ to propagate from $x'$ to $y$, starting at time $t'$. Let us also assume that $u_0$ was chosen so that for some $C \ge 0$ and all $t' \ge C$ we have $u_{0,0}(t', \cdot) \ge u_0$.
Fix any $t \in [0, \infty)$ and unit vector $e \in S^{d-1}$, and let $X^{t,e}_{m,n} := X^t(me, ne)$. Then (4) is obvious from the definition of $X^{t,e}_{m,n}$, while the maximum principle, space-time stationarity of the coefficients, and $u_{0,0}(t', \cdot) \ge u_0$ for all $t' \ge C$ yield (1), (3), and (6). Hence if (2) resp. (2*) holds, Theorem 1.1 resp. 1.2 can be used to show that the limit $\lim_{n\to\infty} \frac{X^{0,e}_{0,n}}{n}$ (5.1) exists and equals a constant (almost surely or in probability). Of course, its reciprocal then represents the deterministic asymptotic speed of propagation in direction $e$ for this PDE.
In fact, if $\frac{X^{t'}(x', y)}{|x' - y|}$ is bounded below and above by positive constants $c_0 \le c_1$ whenever $|x' - y| \ge 1$, then (2) and (2*) clearly hold, asymptotic propagation speeds in all directions are between $\frac1{c_1}$ and $\frac1{c_0}$, and the PDE even has a deterministic asymptotic shape of propagation (called the Wulff shape). Indeed, a version of a standard argument going back to [2,7] (see [8]) can typically be used to show that there is a convex open set $\mathcal S \subseteq \mathbb R^d$, containing and contained in the balls centered at the origin with radii $\frac1{c_1}$ and $\frac1{c_0}$, respectively, such that if $\mathcal S_t(\omega) := \{x \in \mathbb R^d \mid X^0(0, x) \le t\}$, then for any $\delta > 0$ we have … either for almost every $\omega \in \Omega$ and all large enough $t \ge 0$ (depending on $\omega$ and $\delta$), or with probability converging to 1 as $t \to \infty$.
We refer the reader to our companion paper [8] for further details and specific applications of Theorems 1.1 and 1.2 to homogenization for reaction-diffusion and Hamilton-Jacobi PDE.
We next provide an application of our results to a different model, first passage percolation in time-dependent environments. Let $V_d$ be the set of edges of the lattice $\mathbb Z^d$; that is, each $v \in V_d$ connects two points $A, B \in \mathbb Z^d$ which share $d-1$ of their $d$ coordinates and differ by 1 in the last coordinate (these can be either directed edges or not). Let us consider a traveler moving on the lattice $\mathbb Z^d$ from point $A$ to $B$. The traveler can move along any path $\gamma$ made of a sequence of edges $v^\gamma_1, v^\gamma_2, \dots, v^\gamma_{n_\gamma}$, where each $v^\gamma_i$ connects some points $A_{i-1}$ and $A_i$, with $A = A_0$ and $B = A_{n_\gamma}$. Let us denote by $\Gamma_{A,B}$ the set of all such paths. Let us assume that the travel time for any edge $v$, if it is reached by the traveler at time $t$, is some number $\tau^t_v \ge 0$. For any $\gamma \in \Gamma_{A,B}$ and any time $t_0$, define recursively (for $i = 1, 2, \dots, n_\gamma$) the times $t_i := t_{i-1} + \tau^{t_{i-1}}_{v^\gamma_i}$ and $T^{t_0}_\gamma := t_{n_\gamma} - t_0$. That is, $t_i$ is the time of arrival at the point $A_i$, and $T^{t_0}_\gamma$ is the travel time along $\gamma$ when the starting time is $t_0$. Finally, let $X^t(A, B) := \inf_{\gamma \in \Gamma_{A,B}} T^t_\gamma$ be the shortest travel time from $A$ to $B$ when starting at time $t$.
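The recursive arrival times $t_i$ and the infimum over paths lend themselves to a time-dependent Dijkstra search. The sketch below is a minimal illustration on a finite box of $\mathbb Z^2$, with a hypothetical environment chosen so that starting later never lets the traveler arrive earlier (the analog of hypothesis (6), needed for the search to be exact); it is not an implementation from this paper:

```python
import heapq
import math

# Time-dependent first passage percolation: the traveler leaves A at time t0,
# and an edge entered at time t takes tau(v, t) >= 0 to traverse.  Since tau
# below is 0.3-Lipschitz in t, arrival times t + tau(v, t) are non-decreasing
# in t, so a Dijkstra search over earliest arrival times computes X^{t0}(A, B).
def passage_time(A, B, t0, tau, box=20):
    best = {A: t0}                       # earliest known arrival time per site
    heap = [(t0, A)]
    while heap:
        t, p = heapq.heappop(heap)
        if p == B:
            return t - t0                # travel time X^{t0}(A, B)
        if t > best.get(p, math.inf):
            continue                     # stale heap entry
        x, y = p
        for q in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if max(abs(q[0]), abs(q[1])) > box:
                continue                 # restrict to a finite box of Z^2
            arrival = t + tau((p, q), t)
            if arrival < best.get(q, math.inf):
                best[q] = arrival
                heapq.heappush(heap, (arrival, q))
    return math.inf

# Hypothetical toy environment: travel times in [1.2, 3.0], varying in space
# and slowly in time.
def tau(v, t):
    (x, y), (u, w) = v
    base = 1.5 + ((7 * x + 13 * y + 3 * u + 11 * w) % 5) * 0.3
    return base + 0.3 * math.sin(t)

T = passage_time((0, 0), (5, 0), 0.0, tau)
print(T)   # at least 5 edges are needed, each taking between 1.2 and 3.0
```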
When the travel times are independent of $t$, this is of course the standard first passage percolation model. Let us consider one of the following two setups when time-dependence is included. Let $\xi^t_v \ge 0$ be some number, and let $\tau^t_v$ be either the first time such that … Then $\mathcal F^-_t$ and $\mathcal F^+_{t+C}$ are independent for each $t \ge 0$, because the random variables $\alpha(\omega) := \eta(v_1, \omega_i)$ and $\beta(\omega) := \eta(v_2, \omega_j)$ are independent for any $v_1, v_2 \in V_d$ and any distinct $i, j \in \mathbb N_0$. The above discussion now shows that Theorem 1.1 applies to $X^{t,e}_{m,n}$ above for any $e \in \mathbb Z^d$, so $\frac1n X^{t,e}_{0,n}$ converges to some $\omega$-independent constant almost surely. Moreover, for any $(A, B, t) \in \mathbb Z^{2d} \times [0, \infty)$ (and with $L$ above) we clearly have … Then, as in Example 5.1, there is a convex open $\mathcal S \subseteq \mathbb R^d$ containing $B^1_{1/L}(0)$ and contained in $B^1_L(0)$, such that if $\mathcal S_t(\omega)$ is the set of all $A \in \mathbb Z^d$ with $X^0(0, A) \le t$ (for $t \ge 0$ and $\xi^s_v$ from (5.7)), then for almost every $\omega \in \Omega$ we have that for any $\delta > 0$ and all large enough $t \ge 0$ (depending on $\omega$ and $\delta$), … (5.9) That is, $\mathcal S$ is again the deterministic asymptotic shape of all points reachable from the origin in time $t$ (as $t \to \infty$ and after scaling by $t$).
Example 5.3. Consider a Poisson point process with parameter $\lambda > 0$ on $\mathbb R$, defined on some probability space $(\Omega', \mathcal F', \mathbb P')$, and let $N_t$ be the corresponding counting process (i.e., $N_t$ is the number of points in the interval $(0, t]$). We now let $\Omega := \Omega' \times \Omega_0^{\mathbb N_0}$ have the product probability measure, and for $\omega = (\omega', \omega_0, \omega_1, \dots) \in \Omega$ we let $\xi^t_v(\omega) := \eta(v, \omega_{N_t})$. That is, now the interval after which the speeds $\xi^t_v$ change has an exponential distribution. The speeds are again space-time stationary, and (5*) holds with $\varphi(s) := e^{-\lambda s}$ when $\mathcal F^\pm_t$ are defined via (5.5) and (5.6). Indeed, if $G_{t,s} := \{N_{t+s} = N_t\}$ for $t, s \ge 0$, then $\mathbb P[G_{t,s}] = e^{-\lambda s}$ and the events $E$ and $F \cap G^c_{t,s}$ are independent whenever $E \in \mathcal F^-_t$ and $F \in \mathcal F^+_{t+s}$ (see below). This includes $F = \Omega$, which yields … for general $E \in \mathcal F^-_t$. The above discussion therefore shows that Theorem 1.2 applies to $X^{t,e}_{m,n}$ above for any $e \in \mathbb Z^d$, so $\frac1n X^{0,e}_{0,n}$ converges to some $\omega$-independent constant almost surely. And just as before, we can again also conclude (5.8) and (5.9).
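The key identity used here, $\mathbb P[G_{t,s}] = \mathbb P[N_{t+s} = N_t] = e^{-\lambda s}$, can be checked numerically; the sketch below simulates the Poisson counting process directly (all names are ours, not from the example):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rate-lambda Poisson process on (0, oo): points are cumulative sums of i.i.d.
# Exp(lambda) gaps, and N_u counts the points in (0, u].  The environment in
# Example 5.3 resamples the edge data at each such point, so the mixing bound
# rests on P[N_{t+s} = N_t] = exp(-lambda*s), i.e., no point falls in (t, t+s].
lam, t, s, trials = 1.0, 3.0, 2.0, 100_000
gaps = rng.exponential(1.0 / lam, size=(trials, 50))
points = gaps.cumsum(axis=1)                 # Poisson points, one row per trial
N = lambda u: (points <= u).sum(axis=1)      # counting process N_u per trial
frac_unchanged = np.mean(N(t + s) == N(t))
print(frac_unchanged, np.exp(-lam * s))      # empirical vs. exact e^{-lambda*s}
```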
It remains to prove the independence of $E$ and $F \cap G^c_{t,s}$ for any $E \in \mathcal F^-_t$ and $F \in \mathcal F^+_{t+s}$. Let us denote by $v_0, v_1, \dots$ all the edges in $V_d$, and for $m, J \in \mathbb N_0$ let $Y^J_m(\omega) := (\eta(v_0, \omega_m), \dots, \eta(v_J, \omega_m))$. By Dynkin's $\pi$-$\lambda$ Theorem, it suffices to show that $\mathbb P[E \cap F \cap G^c_{t,s}] = \mathbb P[E]\, \mathbb P[F \cap G^c_{t,s}]$ for $E = \{Y^J_{N_{t_i}} \in A_i \text{ for } i = 1, \dots, n\}$ and $F = \{Y^J_{N_{t_i}} \in A_i \text{ for } i = n+1, \dots, 2n\}$, …

… has the same distribution as $T^0_{0,n-m}$. Thus from (1') we obtain … (3.16) Since $\mathrm{Var}[\xi^n_i] = \mathrm{Var}[\bar\xi^n_i]$, to prove the first claim in (3.11) it suffices to show that $\mathbb E\big[(X^0_{0,n})^2\big] \le C_*^2 n^2$ for some $C_* > 0$ and all $n \in \mathbb N$. We can use $T^0_{0,n} \le \sum_{i=0}^{n-1} \xi^1_i$ (due to (1)) and (3.16) to obtain …
The first claim in Theorem 1.2 … Since …, we almost surely have $Z^{t+Ck}_m = Z^t_m$ for all $k \in \mathbb N$. Moreover, we claim that $Z^0_0$ is almost everywhere constant (which implies that $Z^t_m$ is a.e. equal to the same constant for each $(t, m) \in \mathbb N_0^2$). If this is not the case, let $c := \mathrm{Var}[Z^0_0] > 0$. From (1), (2*), and (3) we have $\max\big\{Z^0_0,\, Y^t_{0;n,j},\, \tfrac{X^t_{0,n}}{n}\big\} \le C$ for all $t, n, j \in \mathbb N_0$ with $j \ge n \ge 1$.

In the first case, one can think of $\xi^{t+s}_v$ as the instantaneous travel speed along $v$ at time $t+s$, which changes due to changing road conditions (so $\int_0^\tau \xi^{t+s}_v \, ds$ is the distance traveled in time $\tau$).
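Under this first rule, the traversal time of a unit-length edge entered at time $t$ is the first $\tau$ with $\int_0^\tau \xi^{t+s}_v \, ds = 1$. A minimal sketch for a hypothetical piecewise-constant speed profile (our own toy choice):

```python
# The edge (of unit length) entered at time t is traversed at time t + tau,
# where tau is the first time with integral_0^tau xi^{t+s}_v ds = 1.
# Here speeds[k] is the (hypothetical) constant speed during [t+k, t+k+1).
def edge_travel_time(speeds, length=1.0):
    covered = 0.0
    for k, v in enumerate(speeds):
        if covered + v >= length:            # edge finished during this interval
            return k + (length - covered) / v
        covered += v                         # distance covered so far
    raise ValueError("speed profile too short to traverse the edge")

# 0.25 + 0.25 covered in the first two unit intervals, the rest at speed 1:
print(edge_travel_time([0.25, 0.25, 1.0]))   # → 2.5
```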

) where $|e|_1 := |e_1| + \cdots + |e_d|$ is the $L^1$ norm, so the deterministic limit (5.1) lies in $[\frac1L |e|_1,\, L|e|_1]$. Let us denote by $B^1_r(0)$ the ball in $\mathbb R^d$ with respect to the $L^1$ norm, with radius $r$ and centered at the origin. Then, as in Example 5.1, we can show that there is a convex open $\mathcal S \subseteq \mathbb R^d$ containing $B^1_{1/L}(0)$ …

If $E \in \mathcal F^-_t$ and $F \in \mathcal F^+_{t+s}$, then $0 \le \mathbb P[F \cap G_{t,s} \cap E] \le \mathbb P[G_{t,s} \cap E] = \mathbb P[G_{t,s}]\, \mathbb P[E]$. Therefore $\big|\mathbb P[F \cap G_{t,s} \mid E] - \mathbb P[F \cap G_{t,s}]\big| \le \mathbb P[G_{t,s}]$, and so $\big|\mathbb P[F \mid E] - \mathbb P[F]\big| \le \big|\mathbb P[F \cap G^c_{t,s} \mid E] - \mathbb P[F \cap G^c_{t,s}]\big| + \mathbb P[G_{t,s}] = e^{-\lambda s}$.