Subsequential tightness for branching random walk in random environment

We consider branching random walk in random environment (BRWRE) and prove the existence of deterministic subsequences along which the maximum, centered at its mean, is tight. This partially answers an open question in arXiv:1711.00852. The proof adapts an argument developed by Dekking and Host for branching random walks with bounded increments. The question of tightness without passing to subsequences remains open.


Introduction, model and main result
We consider branching random walk in (spatial, time-independent) random environment and focus on the study of its maximum. From [3], which proves a shape theorem for a BRWRE on Z^d, d ≥ 1, one can infer that the maximum satisfies a law of large numbers. Further, a functional central limit theorem for the maximum is proven in [2]. The goal of this paper is to prove tightness along a subsequence for the maximum recentered around its quenched mean. This is motivated by, and partially answers, the third open question in [2]. We only consider the case of a single starting particle.
We begin by introducing the model given in [2] in some more detail. Let (ξ(x))_{x∈Z} be an i.i.d. collection of random variables on a probability space (Ω, F, P) with 0 < ei := ess inf ξ(0) < ess sup ξ(0) =: es < ∞. We write E_P for the expectation corresponding to P. Given a realization of ξ and an initial position x_0 ∈ Z, start with one particle at site x_0. All particles move independently according to a continuous-time simple random walk with jump rate 1. While at site x, a particle splits into two at rate ξ(x), independently of everything else; the two new particles then evolve independently according to the same mechanism. We write P^ξ_x and E^ξ_x (the quenched law and expectation, respectively) for the law of the process started from a single particle at x. Alternatively, we write P^ξ, E^ξ and give our random variables a superscript x, which we suppress if x = 0. If ξ(x) = c for all x ∈ Z we write P^c_x, E^c_x. We use P ⊗ P^ξ_x, P ⊗ P^ξ, or just P_x or P, to denote the annealed law of the process.
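As a concrete illustration of these dynamics, the following minimal simulation sketch (our own code, not from the paper; the function names and the environment bounds ei = 0.5, es = 1.5 are illustrative assumptions) samples the next event, a jump at rate 1 or a split at rate ξ(x), Gillespie-style, and returns the particle positions N(t) at time t_max:

```python
import random

def simulate_brwre(xi, t_max, x0=0, rng=None):
    """Simulate BRWRE on Z: each particle jumps +-1 at rate 1 and,
    while at x, splits into two at rate xi(x). Returns the list of
    particle positions alive at time t_max."""
    rng = rng or random.Random()
    particles = [x0]
    t = 0.0
    while True:
        # each particle carries total event rate 1 + xi(x)
        rates = [1.0 + xi(x) for x in particles]
        total = sum(rates)
        t += rng.expovariate(total)
        if t > t_max:
            return particles
        # pick the particle experiencing the next event
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                break
        x = particles[i]
        if rng.random() < xi(x) / (1.0 + xi(x)):    # split event
            particles.append(x)
        else:                                        # jump event
            particles[i] = x + rng.choice((-1, 1))

# a frozen i.i.d. environment with 0 < ei <= xi(x) <= es (here 0.5, 1.5)
env = {}
def xi(x):
    if x not in env:
        env[x] = random.uniform(0.5, 1.5)
    return env[x]

positions = simulate_brwre(xi, t_max=3.0, rng=random.Random(1))
M_t = max(positions)   # the maximum M_t studied in the paper
```

Freezing the environment in a dictionary mirrors the quenched point of view: the same realization of ξ is reused across independent runs of the particle system.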
Let N(t) denote the set of particles alive at time t. For Y ∈ N(t) we denote by (Y_s)_{s∈[0,t]} the trajectory of the particle and its ancestors up to time t; this is called the genealogy of Y. We are interested in M_t := max_{Y∈N(t)} Y_t.
The proof of Theorem 1 yields, with minimal changes, the following quenched result.
In particular, Theorems 1 and 2 do not imply each other.
To prove Theorem 1 we adapt the Dekking-Host argument [4], which we now briefly recall in the classical context of deterministic branching random walk in discrete time, that is, when ξ(x) = 1 for all x, P-a.s. In that case, the branching structure yields that, with M'_n, M''_n two independent copies of M_n and W', W'' two independent copies of a random variable W taking the values ±1 with probability 1/2 each,

M_{n+1} =_d max(M'_n + W', M''_n + W'').

Taking expectations and using that max(a, b) = (a + b)/2 + |a − b|/2, we obtain that

E[M_{n+1}] ≥ E[M_n] + (1/2) E[|M'_n − M''_n|].   (1)

Since M_{n+1} − M_n ≤ 1, Dekking and Host conclude that

E[|M'_n − M''_n|] ≤ 2 (E[M_{n+1}] − E[M_n]) ≤ 2,

which then implies the tightness of (M_n − E[M_n])_{n∈N} using (1). The Dekking-Host argument generalizes to continuous-time walks in a deterministic environment, with asynchronous jumps and branching; we note that in that case, M_{n+1} − M_n is not deterministically bounded. However, E[M_n]/n converges by the subadditive ergodic theorem, and moving to subsequences via the argument presented in [6, p. 9], which originated in [1], yields the analogue of Theorem 1. The case of random environments presents a genuinely new difficulty: information on ξ is embedded in the law of the configuration at time 1, and (quenched) shift invariance is lost. This requires a considerably more involved argument, which we now describe.
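The bound E[|M'_n − M''_n|] ≤ 2 is easy to probe numerically. The following sketch (a toy Monte Carlo of our own, not part of the paper's argument) simulates the classical binary branching random walk with ±1 steps and estimates the mean absolute difference of two independent copies of M_n:

```python
import random

def max_brw(n, rng):
    """Maximum at generation n of a binary branching random walk:
    every particle splits in two each generation and each child
    makes an independent +-1 step."""
    positions = [0]
    for _ in range(n):
        positions = [x + rng.choice((-1, 1))
                     for x in positions for _ in range(2)]
    return max(positions)

rng = random.Random(0)
n, samples = 8, 300
# pairs of independent copies (M'_n, M''_n)
pairs = [(max_brw(n, rng), max_brw(n, rng)) for _ in range(samples)]
mean_abs_diff = sum(abs(a - b) for a, b in pairs) / samples
```

The Dekking-Host argument gives E[|M'_n − M''_n|] ≤ 2 uniformly in n, and the Monte Carlo estimate above stays within that bound up to sampling error.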
We denote the time of the first split by τ_s and the time of the first move of any particle by τ_m. We then define τ := τ_s ∧ τ_m ∧ (1/L) and consider 1_{τ_s<τ_m∧1/L} M_{t+τ}. As in the Dekking-Host argument, this has the same distribution as the maximum of two copies M_{t,1}, M_{t,2} of M_t which, given the environment, are independent of each other and also of τ_s and τ_m. We use this setup in subsection 2.1 to derive the inequality (2). In order to obtain (2), we prove a lower bound on the quenched probability that the first split occurs before τ_m ∧ (1/L), for which ei > 0 is essential. We then derive bounds for the two summands in (2). Remember that we write M^x_t for the maximum at time t starting with a single particle at x; we will sometimes allow the starting position to be a random variable.
The first summand requires comparing M_{t,1} with M^y_t, y ∈ {±1}. For this, let σ^y be the time at which any particle of the process with a single starting particle at y hits 0. We can then use the descendants of the starting particle for M_{t,1} as descendants, after time σ^y, of the particle which hits 0. This yields a coupling of M_{t,1} and M^y_t for which 1_{σ^y≤t} 1_{τ_m=τ} M^y_t ≥ 1_{σ^y≤t} 1_{τ_m=τ} M_{t−σ^y,1}, and it mainly remains to control E[1_{σ^y≤t} (M_{t,1} − M_{t−σ^y,1})].
To do this we use the fact that there exist constants c, C_1 > 0 for which P^ξ[σ^y ≥ z] ≤ c e^{−C_1 z} (see Lemma 5). We also utilize the bound (3), obtained by decomposing according to the value of σ^y. Because σ^y has exponential tails, it suffices to find subsequences along which E[M_{t,1} − M_{t−j/L,1}], j ∈ {1, . . . , ⌈L·t⌉}, are bounded by c · e^{c'(j−1)} with c, c' constants, which are specified below. We first do this separately for each fixed j and obtain in Corollary 1 that such a bound can be achieved along arbitrarily dense subsequences. We then intersect these subsequences to get an arbitrarily dense subsequence along which E[1_{τ=τ_s} (M_{t,1} − M_{t+τ})] is bounded (see Lemma 8, Lemma 9 and Corollary 2). One important observation for this argument is that for fixed t ∈ N_{L,η}, we only need E[M_{t,1} − M_{t−j/L,1}] to be controlled for j ≤ c log(t), with c a suitable constant specified below. The reason is that σ^y has exponential tails and E[M_{t,1}] grows at most linearly; this implies that in (3), for all summands with k ≥ c log t, we get a good enough upper bound even if we ignore the −M_{t−k/L,1} term. In subsection 2.4 we combine Lemma 4 and Corollary 2 to prove Theorem 1, by intersecting suitably dense subsequences obtained in the aforementioned lemmata.

Acknowledgements
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 692452).
Thanks to Ofer Zeitouni for suggesting the problem and for many useful discussions.

Let τ_m be the time of the first movement of any particle, and let τ_s be the time of the first split, τ_s := inf{t ∈ R_{≥0} : |N(t)| = 2}. Both τ_s and τ_m are stopping times with respect to the natural filtration of the process.

Deriving inequality (2)
Let L ∈ N be arbitrary but fixed and set τ := τ_s ∧ τ_m ∧ (1/L). Then τ is also a stopping time with respect to that filtration.
For t ∈ R_{≥0} we have by definition that 1_{τ_s<τ_m∧1/L} M_{t+τ} has the same distribution as 1_{τ_s<τ_m∧1/L} max(M_{t,1}, M_{t,2}), where M_{t,1}, M_{t,2} are copies of M_t which, given the environment, are independent of each other and of τ_s and τ_m. Taking expectations and using that a∨b = (a+b+|a−b|)/2, this yields that

E[1_{τ_s<τ_m∧1/L} M_{t+τ}] = E[1_{τ_s<τ_m∧1/L} (M_{t,1} + M_{t,2} + |M_{t,1} − M_{t,2}|)/2].

By reordering the terms we conclude a lower bound on E[M_{t+τ} − M_t] in terms of E[1_{τ_s<τ_m∧1/L} |M_{t,1} − M_{t,2}|]. Since, given the environment, 1_{τ_s<τ_m∧1/L} is independent of M_{t,1} and M_{t,2}, it is also independent of |M_{t,1} − M_{t,2}| given the environment, so the indicator factors out under the quenched expectation.

Proof. Given ξ we have that min{τ_s, τ_m} ∼ Exp(ξ(0) + 1). Thus

P^ξ[τ_s < τ_m ∧ (1/L)] = (ξ(0)/(ξ(0) + 1)) (1 − e^{−(ξ(0)+1)/L}) =: c_{ξ(0),L}.

Since c_{ξ(0),L} is increasing in ξ(0) and strictly positive for ξ(0) > 0, it is bounded below by c_{ei,L} > 0, and combining this with Lemma 1 gives the claim.

Next we aim to simplify this upper bound by showing that s ↦ E^ξ[M_{t+s}] is nondecreasing, as follows. Let V^0 be a particle attaining the maximum at time t. If the particle V^k splits before time t + s, choose one of its direct descendants uniformly at random, independently of everything else, as V^{k+1}. Iterate this process until V^k does not split before time t + s, which happens almost surely. We then have V^k_t = M_t and M_{t+s} ≥ V^k_{t+s}, which implies that M_{t+s} − M_t ≥ V^k_{t+s} − V^k_t =: ∆_s. Since we have chosen the descendants uniformly at random, independently of their displacements, (∆_r)_{r≥0} is a continuous-time simple random walk. This implies that E^ξ[∆_s] = 0 for all ξ, which in turn yields E^ξ[M_{t+s} − M_t] ≥ 0. Using Lemma 2, the inequality (6) can be rewritten as (2). This proves (2). We will handle the two summands in (2) separately and find arbitrarily dense subsequences of N_{L,η}, η ∈ [0, 1/L), along which the summands are bounded. By intersecting the subsequences we will be able to conclude the proof of Theorem 1.
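The intersection step announced above can be checked on a toy example (the sets and densities below are our illustrative choices, not from the paper): finitely many subsequences of N of density at least 1 − ε each intersect, by inclusion-exclusion, in a subsequence of density at least 1 − kε, so arbitrary density survives finite intersections.

```python
def density(s, n):
    """Fraction of {0, ..., n-1} contained in the set s."""
    return sum(1 for x in range(n) if x in s) / n

n, eps = 10000, 0.05
# two "arbitrarily dense" subsequences, each of density 1 - eps
a = {x for x in range(n) if x % 20 != 0}    # drops one residue mod 20
b = {x for x in range(n) if x % 20 != 7}    # drops a different residue
intersection = a & b
# inclusion-exclusion: density(a & b) >= density(a) + density(b) - 1
assert density(intersection, n) >= 1 - 2 * eps
```

In the paper the same bookkeeping is done for the sets of grid times in N_{L,η} along which each summand is controlled.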

On the two summands in (2)
Before we can proceed we need to establish that there exists an x* ∈ R_{≥0} bounding the linear growth of E[M_t] (Lemma 3).

Proof. By Lemma 3 there exists an x* ∈ R such that lim sup_{t→∞} E[M_t]/t ≤ x*. Now fix δ ∈ (0, 1). Define t^{δ,η}_0 := 0 and

t^{δ,η}_{j+1} := inf{t ∈ N_{L,η} : t > t^{δ,η}_j, E[M_{t+1/L} − M_t] ≤ 2x*/(δL)}.

We have that t^{δ,η}_{j+1} < ∞, since otherwise we would have E[M_{t^{δ,η}_j+(k+1)/L} − M_{t^{δ,η}_j+k/L}] ≥ 2x*/(δL) for all k ∈ N, and thus lim sup_{t→∞} E[M_t]/t ≥ 2x*/δ > x*, which contradicts the choice of x*. By definition we have E[M_{t^{δ,η}_j+1/L} − M_{t^{δ,η}_j}] ≤ 2x*/(δL) for all j ∈ N, and we are left with proving that lim sup_{j→∞} t^{δ,η}_j/j ≤ (1+δ)/L. For this purpose, let K_n := |{t ∈ N_{L,η} : t < n/L + η, t ∈ {t^{δ,η}_j}_{j∈N}}|. By Lemma 2 we have E[M_{t+1/L} − M_t] ≥ 0 for all t; thus, by definition of K_n, the claimed bound on lim sup_{j→∞} t^{δ,η}_j/j follows.

Remember that we write M^x_t for the maximum starting with a single particle at location x, and that we allow x to be a random variable. Using this notation we have that, on the event {τ = τ_m}, the process restarts from a single particle at a uniformly chosen neighbour of 0, with M^{S_1}_t independent of 1_{τ=τ_m} and S_1 ∼ Unif({−1, 1}) independent of everything else. Let y ∈ {±1} and let σ^y := inf{t ≥ 0 : ∃V ∈ N^y(t) with V^y_t = 0}. Then we can couple M_{t,1} and M^y_t so that (10) holds. We will handle the two summands in (10) separately, starting with the second one. However, for both summands we will need a bound on the tail of σ^y, which the next lemma provides.
Proof. By coupling we have that P^ξ[σ^y ≥ z] ≤ P^{ei}[σ^y ≥ z] for P-a.e. ξ. For τ^y := inf{t ≥ 0 : ∃V ∈ N(t) with V_t = −y} one has that P^{ei}[σ^y ≥ z] = P^{ei}[τ^y ≥ z]. Furthermore, by symmetry, P^{ei}[τ^y ≥ z] = P^{ei}[τ^{−1} ≥ z]. Let ε > 0. There exist positive constants c*, c controlling the exponential growth of the number of particles alive at time εz, and there exists p_ε > 0 bounding from below the probability that a fixed particle alive at time εz reaches −1 within the remaining time. Now choose ε := 1/12; then for z ≥ 3/c* we have that c*(1−ε)z/2 − c*εz ≥ 1. By independence of the particles starting at time εz, this yields the claimed exponential bound for z ≥ 3/c*, which suffices to conclude Lemma 5.

Armed with Lemma 5 we can handle the second summand in (10). There exists a c* such that E[M_{t,1}] ≤ c* t for all t ∈ N_{L,η} and P-a.e. ξ. Combining these bounds shows that the relevant term converges to 0 as t → ∞, and in particular is bounded by a constant for t ∈ N_{L,η}. It remains to handle −E[1_{τ=τ_m} 1_{σ^y>t} M^y_t]. We use Cauchy-Schwarz in the last step; we have E[(M^y_t)^2] ≤ E^{es}[(M^y_t)^2] by coupling, and thus there exists a c* ≥ 0 such that lim sup_{t→∞} E[(M^y_t)^2]/t^2 ≤ c*, which together with the exponential tail of σ^y gives that the expression in the statement of Lemma 6 is bounded.
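The exponential tail of σ^y asserted in Lemma 5 can be probed by simulation. The sketch below (our own code; for simplicity it runs in the constant environment ξ ≡ ei = 0.5, where the coupling bound of the proof is attained) follows the particle system started at y = 1 until some particle first visits 0, capped at time t_cap:

```python
import random

def sigma_hit(xi, y, t_cap, rng):
    """First time any particle of the process started at y visits 0,
    capped at t_cap (returns t_cap if 0 has not been hit by then)."""
    particles = [y]
    t = 0.0
    while True:
        rates = [1.0 + xi(x) for x in particles]   # jump rate 1 + split rate
        total = sum(rates)
        t += rng.expovariate(total)
        if t >= t_cap:
            return t_cap
        u, acc = rng.random() * total, 0.0
        for i, r in enumerate(rates):
            acc += r
            if u <= acc:
                break
        x = particles[i]
        if rng.random() < xi(x) / (1.0 + xi(x)):   # split event
            particles.append(x)
        else:                                       # jump event
            particles[i] = x + rng.choice((-1, 1))
            if particles[i] == 0:
                return t

rng = random.Random(2)
xi = lambda x: 0.5                 # constant environment at the lower bound ei
samples = [sigma_hit(xi, 1, 4.0, rng) for _ in range(300)]
tail_1 = sum(s >= 1.0 for s in samples) / len(samples)
tail_3 = sum(s >= 3.0 for s in samples) / len(samples)
```

The empirical tail decays quickly (tail_3 is much smaller than tail_1), consistent with the exponential bound P^ξ[σ^y ≥ z] ≤ c e^{−C_1 z}.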
Now we proceed with the first summand in (10). For this, fix η ∈ [0, 1/L). For t ∈ N_{L,η} we decompose according to the value of σ^y, where in the second inequality we use that E[M_{s,1}] is monotonically increasing in s by Lemma 2. Lemma 5 and (11) then imply (12). In particular, we can handle the cases y = 1 and y = −1 at once.
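The selection of times with controlled increments, in Lemma 4 and again in the construction below, is at heart a pigeonhole argument, which the following toy computation illustrates (the smooth test sequence and all parameter values are our own choices): a nondecreasing sequence growing with slope at most x* must have increments below 2x*/(δL) along a large fraction of grid points, since increments above that level at density ρ alone would force slope at least 2ρx*/δ.

```python
# toy expected-maximum curve: nondecreasing, slope at most x_star = 1,
# sampled on the grid {k/L : k >= 0}
L, x_star, delta = 4, 1.0, 0.5
grid_len = 2000
m = [x_star * (k / L) + 0.1 * (k / L) ** 0.5 for k in range(grid_len)]

# indices whose increment is at most 2 x_star / (delta * L)
bound = 2 * x_star / (delta * L)
good = [k for k in range(grid_len - 1) if m[k + 1] - m[k] <= bound]

# in this smooth example every increment is below the bound, so the
# density of "good" grid points is 1; pigeonhole guarantees at least 1/2
assert len(good) >= 0.5 * (grid_len - 1)
```

The paper applies this observation not to a fixed curve but to t ↦ E[M_t], whose at-most-linear growth is supplied by Lemma 3.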
Let j ∈ N be arbitrary but fixed and let δ ∈ (0, 1). Furthermore, take x* such that lim sup_{t→∞} E[M_{t,1}]/t ≤ x*, which exists by Lemma 3. Define t^{(j,δ,η)}_0 := 0 and

t^{(j,δ,η)}_{k+1} := inf{t ∈ N_{L,η} : t > t^{(j,δ,η)}_k, E[M_{t,1} − M_{t−j/L,1}] ≤ (2/(Lδ)) x* j e^{C_1 (j−1)/(2L)}}.

In the following, we prove that this is well defined and that (12) is bounded along a suitable subsequence of the sequences (t^{(j,δ,η)}_k)_{k∈N}. We have that t^{(j,δ,η)}_{k+1} < ∞, since otherwise E[M_{t,1}] would grow at a linear rate larger than x*, contradicting the choice of x*. We want to estimate K^{(j,δ,η)}_n; the following lemma compares K^{(j,δ,η)}_n with its analogue K̄^{(j,δ,η)}_n.
Proof. For t ∈ B_{δ,η} and all j ∈ {1, . . . , (2L/C_1) log(t)} we have, by definition, that E[M_{t,1} − M_{t−j/L,1}] ≤ (2/(Lδ)) x* j e^{C_1 (j−1)/(2L)}. Furthermore, we know that there exists a c* ≥ 0 such that E[M_{t,1} − M_{t−j/L,1}] ≤ E[M_{t,1}] ≤ c* · t for all t ∈ N_{L,η}, since lim sup_{t→∞} E[M_t]/t ≤ x* by Lemma 3. These inequalities, as well as (12), imply the claimed bound for t ∈ B_{δ,η}, where the exact value of c changes from line to line and c' is a constant which bounds c e^{−C_1(t−η)} c* t for all t ≥ 0. This proves Lemma 8.
Lemma 9. Let (t^{δ,η}_k)_{k∈N} be a monotonically increasing enumeration of B_{δ,η}, for δ sufficiently small.

Proof. Consider A^{δ,η}_n := {t ∈ N_{L,η} : t < n/L + η, t ∈ B_{δ,η}} and K^{δ,η}_n := |A^{δ,η}_n|. We now want to apply Fatou's lemma, and for this we need to bound the summands for fixed j. Thus let n, j ∈ N. We know, by Lemma 7, that K̄^{(j,δ,η)}_n ≤ j K^{(j,δ,η)}_n + j and, by the calculation in Corollary 1, that the corresponding densities are bounded from below. Combining these bounds concludes the proof.

Quenched tightness: proof of Theorem 2
We now sketch the changes to the argument needed to obtain Theorem 2. The analogue of inequality (2) is its quenched counterpart. Since Lemma 3 is proven by comparing E^ξ with E^{es}, we can derive that there is an x* ∈ R with lim sup_{t→∞} E^ξ[M_t]/t ≤ x* for P-a.e. ξ. This allows us to replace E by E^ξ in Lemma 4, and no further changes are needed. Note that t^{δ,η}_k will be ξ-dependent, since the condition E^ξ[M_{t+1/L} − M_t] ≤ 2x*/(Lδ) depends on ξ. In section 2.3, replacing E and P by E^ξ and P^ξ everywhere suffices, and Lemma 5 is already stated for P-a.e. ξ. Again, the condition E^ξ[M_{t,1} − M_{t−j/L,1}] ≤ (2/(Lδ)) x* j e^{C_1 (j−1)/(2L)} is ξ-dependent, which forces the s^{δ,η}_k in Corollary 2 to be ξ-dependent. Combining these ingredients to prove Theorem 2 is parallel to subsection 2.4; one only needs to replace E by E^ξ in the last display.
We have not managed to find a deterministic subsequence (t_k)_{k∈N} of N_{L,η} such that (M_{t_k} − E^ξ[M_{t_k}])_{k∈N} is tight for P-a.e. ξ.