Tightness of Bernoulli Gibbsian line ensembles

A Bernoulli Gibbsian line ensemble $\mathfrak{L} = (L_1, \dots, L_N)$ is the law of the trajectories of $N-1$ independent Bernoulli random walkers $L_1, \dots, L_{N-1}$ with possibly random initial and terminal locations that are conditioned to never cross each other or a given random up-right path $L_N$ (i.e. $L_1 \geq \cdots \geq L_N$). In this paper we investigate the asymptotic behavior of sequences of Bernoulli Gibbsian line ensembles $\mathfrak{L}^N = (L^N_1, \dots, L^N_N)$ when the number of walkers $N$ tends to infinity. We prove that if one has mild but uniform control of the one-point marginals of the lowest-indexed (or top) curves $L_1^N$ then the sequence $\mathfrak{L}^N$ is tight in the space of line ensembles. Furthermore, we show that if the top curves $L_1^N$ converge in the finite dimensional sense to the parabolic Airy$_2$ process then $\mathfrak{L}^N$ converge to the parabolically shifted Airy line ensemble.


Introduction and main results
1.1. Gibbsian line ensembles. In the last several years there has been a significant interest in line ensembles that satisfy what is known as the Brownian Gibbs property. A line ensemble is merely a collection of random continuous curves on some interval Λ ⊂ R (all defined on the same probability space) that are indexed by a set Σ ⊂ Z. In this paper, we will almost exclusively have Σ = {1, . . . , N } with N ∈ N ∪ {∞} and if N = ∞ we use the convention Σ = N. We denote the line ensemble by L and by L i (ω)(x) := L(ω)(i, x) the i-th continuous function (or line) in the ensemble, and typically we drop the dependence on ω from the notation as one does for Brownian motion. We say that a line ensemble L satisfies the Brownian Gibbs property if it is non-crossing almost surely, i.e. L i (s) < L i−1 (s) for i = 2, . . . , N and s ∈ Λ and it satisfies the following resampling invariance. Suppose we sample L and fix two times s, t ∈ Λ with s < t and a finite interval K = {k 1 , k 1 + 1, . . . , k 2 } ⊂ Σ with k 1 ≤ k 2 . We can erase the part of the lines L k between the points (s, L k (s)) and (t, L k (t)) for k = k 1 , . . . , k 2 and sample independently k 2 − k 1 + 1 random curves between these points according to the law of k 2 − k 1 + 1 Brownian bridges, which have been conditioned to not intersect each other as well as the lines L k 1 −1 and L k 2 +1 with the convention that L 0 = ∞ and L k 2 +1 = −∞ if k 2 + 1 ∈ Σ. In this way we obtain a new random line ensemble L , and the essence of the Brownian Gibbs property is that the law of L is the same as that of L. The readers can find a precise definition of the Brownian Gibbs property in Definition 2.8 but for now Date: November 10, 2020.
1 arXiv:2011.04478v1 [math.PR] 9 Nov 2020 they can think of a line ensemble that satisfies the Brownian Gibbs property as N random curves, which locally have the distribution of N avoiding Brownian bridges.
Part of the interest behind Brownian Gibbsian line ensembles is that they naturally arise in various models in statistical mechanics, integrable probability and mathematical physics. If N is finite, a natural example of a Brownian Gibbsian line ensemble is given by Dyson Brownian motion with β = 2 (this is the law of N independent one-dimensional Brownian motions all started at the origin and appropriately conditioned to never cross for all positive time). Other important examples of models that satisfy the Brownian Gibbs property include Brownian last passage percolation, which has been extensively studied recently in [16][17][18][19] and the Airy line ensemble (shifted by a parabola) [7,24]. The Airy line ensemble was first discovered as a scaling limit of the multi-layer polynuclear growth model in [24], where its finite dimensional distribution was derived. Subsequently, in [7] it was shown that the edge of Dyson Brownian motion (or rather a closely related model given by Brownian watermelons) converges uniformly over compacts to the Airy line ensemble, see Figure 1. This stronger notion of convergence was obtained by utilizing the Brownian Gibbs property and the latter has led to the proof of many new and interesting properties of the Airy line ensemble [7,11,16]. Apart from its inherent beautiful structure, the Airy line ensemble plays a distinguished (conjectural) foundational role in the Kardar-Parisi-Zhang (KPZ) universality class through its relationship to the construction of the Airy sheet in [10]. The Airy line ensemble is believed to be a universal scaling limit of not just Dyson Brownian motion but many line ensembles that satisfy a Gibbs property. Recently, it was shown in [9] that uniform convergence to the Airy line ensemble holds for sequences of N non-intersecting Bernoulli, geometric, exponential and Poisson random walks started from the origin as N tends to infinity. These types of result are reminiscent of Donsker's theorem from classical probability theory, which establishes the convergence of generic random walks to Brownian motion. The difference is that as the number of avoiding walkers is increasing to infinity, one leaves the Gaussian universality class and enters the KPZ universality class. It is worth mentioning that the results in [9] rely on very precise integrable inputs (exact formulas for the finite dimensional distributions) for the random walkers for each fixed N , which are suitable for taking the large N limit -this is one reason only the packed initial condition is effectively treated. For more general initial conditions, the convergence even in the Bernoulli case, which is arguably the simplest, remains widely open.
The goal of the present paper is to investigate asymptotics of N avoiding Bernoulli random walkers with general (possibly random) initial and terminal conditions in the large N limit. The main questions that motivate our work are: (1) What are sufficient conditions that ensure that the trajectories of N avoiding Bernoulli random walkers are uniformly tight, meaning that they have uniform weak subsequential limits that are N-indexed line ensembles on R? (2) What are sufficient conditions that ensure that the trajectories of N avoiding Bernoulli random walkers converge uniformly to the Airy line ensemble (shifted by a parabola)?
If L N = (L N 1 , . . . , L N N ) denotes the trajectories of the N avoiding Bernoulli random walkers (with L N 1 ≥ L N 2 ≥ · · · ≥ L N N ) we show that as long as L N 1 under suitable shifts and scales has one-point tight marginals that (roughly) globally approximate an inverted parabola, one can conclude that the whole line ensemble L N under the same shifts and scales is uniformly tight. In other words, having a mild but uniform control of the one-point marginals of the lowest-indexed (or top) curve L N 1 one can conclude that the full line ensemble is tight and moreover any subsequential limit satisfies the Brownian Gibbs property. This result appears as Theorem 1.1 in the next section and is the main result of the paper. It is worth pointing out that to establish tightness we do not require actual convergence of the marginals, which makes our approach more general than that of [9]. In particular, in [9] the authors assume finite dimensional convergence of L N to the Airy line ensemble, while our approach does not making it more suitable for establishing convergence to other Brownian Gibbsian line ensembles, such as the Airy wanderers processes of [1].
Regarding the second question above, we show that if L N 1 under suitable shifts and scales converges weakly to the Airy 2 process (the lowest indexed curve in the Airy line ensemble) minus a parabola in the finite dimensional sense, then the whole line ensemble L N under the same shifts and scales converges uniformly to the Airy line ensemble (again minus a parabola). The latter result is presented as Theorem 1.3 in the next section and is a relatively easy consequence of Theorem 1.1 and the recent characterization result of Brownian Gibbsian line ensembles in [12].

1.2.
Main results. We begin by giving some necessary definitions, which will be further elaborated in Section 2 but will suffice for us to present the main results of the paper. For a, b ∈ Z with a < b we denote by a, b the set {a, a + 1, . . . , b}. Given T 0 , T 1 ∈ Z with T 0 ≤ T 1 and N ∈ N we call a 1, Nindexed Bernoulli line ensemble on T 0 , T 1 a random collection of N up-right paths drawn in the region T 0 , T 1 ×Z in Z 2 -see the bottom-right part of Figure 2. We denote a Bernoulli line ensemble by L and L(i, s) is the location of the i-th up-right path at time s for (i, s) ∈ 1, N × T 0 , T 1 . For convenience we also denote L i (s) = L(i, s) the i-th up-right path in the ensemble and one can think of L i 's as trajectories of Bernoulli random walkers that at each time either stay put or jump by one.
We say that a Bernoulli line ensemble satisfies the Schur Gibbs property if it satisfies the following: (1) With probability 1 we have L 1 (s) ≥ L 2 (s) ≥ · · · ≥ L N (s) for all s ∈ T 0 , T 1 .
(2) For any K = {k 1 , k 1 + 1, . . . , k 2 } ⊂ 1, N − 1 and a, b ∈ T 0 , T 1 with a < b the conditional law of L k 1 , . . . , L k 2 in the region D = a, b × Z, given {L(i, s) : i ∈ K or s ∈ a + 1, b − 1 } is that of k 2 − k 1 + 1 independent Bernoulli random walks that are conditioned to start from x = (L k 1 (a), . . . , L k 2 (a)) at time a, to end at y = (L k 1 (b), . . . , L k 2 (b)) at time b and to never cross each other or the paths L k 1 −1 or L k 2 +1 in the interval T 0 , T 1 (here we use the convention L 0 = ∞).
In simple words, the above definition states that a Bernoulli line ensemble satisfies the Schur Gibbs property if it is non-crossing and its local distribution is that of avoiding Bernoulli random walk bridges. We mention here that in the above definition the curve L N plays a special role, since we do not assume that its conditional distribution is that of a Bernoulli bridge conditioned to stay below L N −1 . Essentially, the curve L N plays the role of a bottom (random) boundary for our ensemble and a Bernoulli line ensemble satisfying the Schur Gibbs property can be seen to be equivalent to the statement that it is precisely the law of N − 1 independent Bernoulli bridges that are conditioned to start from some random configuration at time T 0 , end at some random configuration at time T 1 and never cross each other or a given random up-right path L N in the time interval T 0 , T 1 . We will refer to Bernoulli line ensembles that satisfy the Shcur Gibbs property as Bernoulli Gibbsian line ensembles. We mention that the name Schur Gibbs property originates from the connection between Bernoulli Gibbsian line ensembles and Schur symmetric polynomials, which will be discussed later in Section 8.2. A natural context in which Bernoulli Gibbsian line ensembles arise is lozenge tilings -see Figure  2 and its caption. To be brief, one can take a finite tileable region in the hexagonal lattice and consider the uniform distribution on all possible tilings of this region with three types of rhombi (also called lozenges). The resulting measure on tilings has a natural Gibbs property, which is that if you freeze the tiling outside of some finite region the tiling inside that region will be conditionally uniform among all possible tilings. For special choices of tileable domains uniform lozenge tilings give rise to Bernoulli line ensembles (with deterministic packed starting and terminal conditions), and the tiling Gibbs property translated to the line ensemble becomes the Schur Gibbs property. In Figure 2 one observes that L 3 (which is the bottom-most curve in the ensemble) is not uniformly distributed among all up-right paths that stay below L 2 and have the correct endpoints since it needs to stay above the bottom boundary of the tiled region.
In the remainder of this section we fix a sequence L N = (L N 1 , . . . , L N N ) of 1, N -indexed Bernoulli Gibbsian line ensembles on a N , b N where a N ≤ 0 and b N ≥ 0 are integers. Our interest is in understanding the asymptotic behavior of L N as N → ∞ (i.e. when the number of walkers tends to infinity). Below we we list several assumptions on the sequence L N , which rely on parameters α > 0, p ∈ (0, 1) and λ > 0. The parameter α is related to the fluctuation critical exponent of the line ensemble and the assumptions below will indicate that L N 1 (0) fluctuates on order N α/2 . The parameter p is the global slope of the line ensemble, and since we are dealing with Bernoulli walkers the global slope is in [0, 1] and we exclude the endpoints to avoid degenerate cases. The parameter λ is related to the global curvature of the line ensemble, and the assumptions below will indicate that once the slope is removed the line ensemble approximates the parabola −λx 2 . We now turn to formulating our assumptions precisely. Assumption 1. We assume that there is a function ψ : N → (0, ∞) such that lim N →∞ ψ(N ) = ∞ and a N < −ψ(N )N α while b N > ψ(N )N α .
The significance of Assumption 1 is that the sequence of intervals [a N , b N ] (on which the line ensemble L N is defined) on scale N α asymptotically covers the entire real line. The nature of ψ is not important and any function converging to infinity along the integers works for our purposes. Assumption 2. There is a function φ : (0, ∞) → (0, ∞) such that for any > 0 we have Let us elaborate on Assumption 2 briefly. If n = 0 the statement indicates that N −α/2 L 1 (0) is a tight sequence of random variables and so α/2 is the fluctuation critical exponent of the ensemble. The transversal critical exponent is α and is reflected in the way time (the argument in L N 1 ) is scaled -it is twice α/2 as expected by Brownian scaling. The essence of Assumption 2 is that if one removes a global line with slope p from L N 1 and rescales by N α/2 vertically and N α horizontally the resulting curve asymptotically approximates the parabola −λx 2 . The way the statement is formulated, this approximation needs to happen uniformly over the integers but the choice of Z is not important. Indeed, one can replace Z with any subset of R that has arbitrarily large and small points and the choice of Z is made for convenience. Equation (1.3) indicates that for each n ∈ Z the sequence of random variables X N n = N −α/2 (L N 1 (nN α ) − pnN α + λn 2 N α/2 ) is tight, but it says a bit more. Namely, it states that if M n is the family of all possible subsequential limits of {X N n } N ≥1 then ∪ n∈Z M n is itself a tight family of distributions on R. A simple case when Assumption 2 is Figure 2. The top-left picture represents a tileable region in the triangular lattice and three types of lozenges. The top-right picture depicts a possible tiling of the region and the bottom-left picture represents the same tiling under an affine transformation. One draws lines through the mid-points of the vertical sides of the vertical rhombi and the squares and this gives rise to a collection of random up-right paths. If one shifts these lines down one obtains a Bernoulli line ensemble -depicted in the bottom-right picture. If one takes the uniform measure on lozenge tilings the Bernoulli line ensemble one obtains through the above procedure satisfies the Schur Gibbs property. satisfied is when X N n converges to the Tracy-Widom distribution for all n as N → ∞. In this case the family ∪ n∈Z M n only contains the Tracy-Widom distribution and so is naturally tight.
The final thing we need to do is embed all of our line ensembles L N in the same space. The latter is necessary as we want to talk about tightness and convergence of these line ensembles that presently are defined on different state spaces (remember that the number of up-right paths is changing with N ). We consider N × R with the product topology coming from the discrete topology on N and the usual topology on R. We let C(N × R) be the space of continuous functions on N × R with the topology of uniform convergence over compacts and corresponding Borel σ-algebra. If i ≥ N + 1 we define f N i (s) = 0 for s ∈ R. With the above we have that L N defined by is a random variable taking value in C(N×R) and we let P N denote its distribution. We remark that the particular extension we chose for f N i outside of [−ψ(N ), ψ(N )] and for i ≥ N + 1 is immaterial since all of our convergence/tightness results are formulated for the topology of uniform convergence over compacts. Consequently, only the behavior of these functions on compact intervals and finite index matters and not what these functions do near infinity, which is where the modification happens as lim N →∞ ψ(N ) = ∞ by assumption.
We are now ready to state our main result, whose proof can be found in Section 2.4. then the entire line ensembles need to be tight. The idea of utilizing the Gibbs property of a line ensemble to improve one-point tightness of the top curve to tightness of the entire curve or even the entire line ensemble has appeared previously in several different contexts. For line ensembles whose underlying path structure is Brownian it first appeared in the seminal work of [7] and more recently in [4,5]. For discrete Gibbsian line ensembles (more general than the one studied in this paper) it appeared in [6] and for line ensembles related to the inverse gamma directed polymer in [29].
Theorem 1.1 indicates that in order to ensure the existence of subsequential limits for L N as in (1.2) it suffices to ensure tightness of the one-point marginals of the top curves L N 1 in a sufficiently uniform sense. We next investigate the question of when L N converges to the Airy line ensemble. We let A = {A i } i∈N be the N-indexed Airy line ensemble in the sense of [7, Theorem 3.1] and L = {L Airy i } i∈N given by L Airy i (x) = 2 −1/2 (A i (x) − x 2 ) be its parabolically shifted version. In particular, both A and L are random variables taking values in the space C(N × R), and A 1 (·) is the Airy 2 process while L Airy 1 (·) is the parabolic Airy 2 process. To establish convergence of L N to L Airy we need the following strengthening of Assumption 2. . For any k ∈ N, t 1 , . . . , t k , x 1 , . . . , x k ∈ R we assume that (1.3) lim N →∞ P L N 1 (t i ) ≤ x i for i = 1, . . . , k = P c −1/2 L Airy 1 (ct i ) ≤ x i for i = 1, . . . , k .
In plain words, Assumption 2' states that the top curves L N 1 (t) converge in the finite dimensional sense to c −1/2 L Airy 1 (ct). Let us briefly explain why Assumption 2' is implies Assumption 2 (and hence we refer to it as a strengthening). Under Assumption 2', we would have that N −α/2 (L N 1 (xN α ) − pxN α + λx 2 N α/2 ) converge in the finite dimensional sense to p(1−p) 2c A 1 (cx). In particlar, for each n ∈ Z we have that where we used that A 1 (x) is a stationary process whose one point marginals are given by the Tracy-Widom distribution F GU E , [28], and that F GU E is diffuse. In particular, given > 0 we can find a large enough so that the second line above is less than and such a choice of a furnishes a function φ as in Assumption 2. The next result gives conditions under which L N converges to the parabolically shifted Airy line ensemble, it is proved in Section 2.4 . Remark 1.4. In plain words, Theorem 1.3 states that to prove the convergence of a sequence of Bernoulli Gibbsian line ensembles L N to the parabolically shifted Airy line ensemble, it suffices to show that the top curves L N 1 converge in the finite dimensional sense to the parabolic Airy 2 process. We mention here that the convergence in Theorem 1.3 is in the uniform topology over compacts, which is stronger than finite-dimensional convergence. We also mention that recently in [9] the conclusion of Theorem 1.3 was established under the assumption that L N converge to L ∞ in the finite dimensional sense. Simply put, we require as input only the finite dimensional convergence of the top curves while [9, Theorem 1.5] requires the finite dimensional convergence of not just the top but all curves in the line ensemble, which is a much stronger assumption.
The remainder of the paper is organized as follows. In Section 2 we introduce the basic definitions and notation for line ensembles. The main technical result of the paper, Theorem 2.26, is presented in Section 2.3 and Theorems 1.1 and 1.3 are proved in Section 2.4 by appealing to it. In Section 3 we prove several statements for Bernoulli random walk bridges, by using a strong coupling result that allows us to compare the latter with Brownian bridges. The proof of Theorem 2.26 is presented in Section 4 and is based on three key lemmas. Two of these lemmas are proved in Section 5 and the last one in Section 6. The paper ends with Sections 7 and 8, where various technical results needed throughout the paper are proved. which maps Λ in R. We will often slightly abuse notation and write L : Σ×Λ → R, even though it is not L which is such a function, but L(ω) for every ω ∈ Ω. For i ∈ Σ we write L i (ω) = (L(ω))(i, ·) for the curve of index i and note that the latter is a map L i : We will require the following result, whose proof is postponed until Section 7.1. In simple terms it states that the space C(Σ × Λ) where our random variables L take value has the structure of a complete, separable metric space.
are sequences of real numbers such that a n < b n , [a n , b n ] ⊂ Λ, a n+1 ≤ a n , b n+1 ≥ b n and ∪ ∞ n=1 [a n , b n ] = Λ. For n ∈ N we let K n = Σ n × [a n , b n ] where Σ n = Σ ∩ −n, n . Define d : Then d defines a metric on C(Σ × Λ) and moreover the metric space topology defined by d is the same as the topology of uniform convergence over compact sets. Furthermore, the metric space (C(Σ × Λ), d) is complete and separable.
Definition 2.3. Given a sequence {L n : n ∈ N} of random Σ-indexed line ensembles we say that L n converge weakly to a line ensemble L, and write L n =⇒ L if for any bounded continuous function We also say that {L n : n ∈ N} is tight if for any > 0 there exists a compact set We call a line ensemble non-intersecting if P-almost surely L i (r) > L j (r) for all i < j and r ∈ Λ.
We will require the following sufficient condition for tightness of a sequence of line ensembles, which extends [2,Theorem 7.3]. We give a proof in Section 7.2.
Lemma 2.4. Let Σ ⊂ Z and Λ ⊂ R be an interval. Suppose that {a n } ∞ n=1 , {b n } ∞ n=1 are sequences of real numbers such that a n < b n , [a n , b n ] ⊂ Λ, a n+1 ≤ a n , b n+1 ≥ b n and ∪ ∞ n=1 [a n , b n ] = Λ. Then {L n } is tight if and only if for every i ∈ Σ we have (i) lim a→∞ lim sup n→∞ P(|L n i (a 0 )| ≥ a) = 0. (ii) For all > 0 and k ∈ N, lim δ→0 lim sup n→∞ P sup x,y∈[a k ,b k ], We next turn to formulating the Brownian Gibbs property -we do this in Definition 2.8 after introducing some relevant notation and results. If W t denotes a standard one-dimensional Brownian motion, then the processB (t) = W t − tW 1 , 0 ≤ t ≤ 1, is called a Brownian bridge (fromB(0) = 0 toB(1) = 0) with diffusion parameter 1. For brevity we call the latter object a standard Brownian bridge.
Given a, b, x, y ∈ R with a < b we define a random variable on (C([a, b]), C) through and refer to the law of this random variable as a Brownian bridge (from B(a) = x to B(b) = y) with diffusion parameter 1. Given k ∈ N and x, y ∈ R k we let P a,b, x, y f ree denote the law of k independent Brownian bridges The following definition introduces the notion of an (f, g)-avoiding Brownian line ensemble, which in simple terms is a collection of k independent Brownian bridges, conditioned on not-crossing each other and staying above the graph of g and below the graph of f for two continuous functions f and g.
be two continuous functions. The latter condition means that either f : [a, b] → R is continuous or f = ∞ everywhere, and similarly for g. We also assume that With the above data we define the (f, g)-avoiding Brownian line ensemble on the interval [a, b] with entrance data x and exit data y to be the Σ-indexed line ensemble Q with Σ = 1, k on Λ = [a, b] and with the law of Q equal to P a,b, x, y f ree (the law of k independent Brownian bridges It is worth pointing out that E is an open set of positive measure and so we can condition on it in the usual way -we explain this briefly in the following paragraph. Let (Ω, F, P) be a probability space that supports k independent Brownian bridges = y i all with diffusion parameter 1. Notice that we can findũ 1 , . . . ,ũ k ∈ C([0, 1]) and > 0 (depending on x, y, f, g, a, b) such thatũ i (0) =ũ i (1) = 0 for i = 1, . . . , k and such that if . It follows from Lemma 2.6 that and so we can condition on the event E.
To construct a realization of Q we proceed as follows. For ω ∈ E we define Observe that for i ∈ {1, . . . , k} and an open set U ∈ C([a, b]) we have that This implies that the law Q is indeed well-defined and also it is non-intersecting almost surely. Also, given measurable subsets A 1 , . . . , A k of C([a, b]) we have that . . , L k 2 (a)) and exit data (L k 1 (b), . . . , L k 2 (b)) from Definition 2.7. Note thatQ is introduced because, by definition, any such (f, g)-avoiding Brownian line ensemble is indexed from 1 to k 2 − k 1 + 1 but we want Q to be indexed from k 1 to k 2 . An equivalent way to express the Brownian Gibbs property is as follows. A Σ-indexed line ensemble L on Λ satisfies the Brownian Gibbs property if and only if it is non-intersecting and for any finite K = {k 1 , k 1 + 1, . . . , k 2 } ⊂ Σ and [a, b] ⊂ Λ and any bounded Borel-measurable function is the σ-algebra generated by the variables in the brackets above, Remark 2.9. Let us briefly explain why equation (2.3) makes sense. Firstly, since Σ × Λ is locally compact, we know by [22,Lemma 46.4 , so that the left side of (2.3) is the conditional expectation of a bounded measurable function, and is thus well-defined. A more subtle question is why the right side of (2.3) is F ext (K × (a, b))-measurable. This question was resolved in [12,Lemma 3.4], where it was shown that the right side is measurable with respect to the σ-algebra σ {L i (s) : i ∈ K and s ∈ {a, b}, or i ∈ {k 1 − 1, k 2 + 1} and s ∈ [a, b]} , which in particular implies the measurability with respect to F ext (K × (a, b)).
In the present paper it is convenient for us to use the following modified version of the definition above, which we call the partial Brownian Gibbs property -it was first introduced in [12]. We explain the difference between the two definitions, and why we prefer the second one in Remark 2.12.
Definition 2.10. Fix a set Σ = 1, N with N ∈ N or N = ∞ and an interval Λ ⊂ R. A Σ-indexed line ensemble L on Λ is said to satisfy the partial Brownian Gibbs property if and only if it is nonintersecting and for any finite where we recall that D K,a,b = K × (a, b) and D c K,a,b = (Σ × Λ) \ D K,a,b , and is the σ-algebra generated by the variables in the brackets above, L| K×[a,b] denotes the restriction of L to the set K × [a, b], x = (L k 1 (a), . . . , L k 2 (a)), y = (L k 1 (b), . . . , Remark 2.11. Observe that if N = 1 then the conditions in Definition 2.10 become void, i.e., any line ensemble with one line satisfies the partial Brownian Gibbs property. Also we mention that (2.4) makes sense by the same reason that (2.3) makes sense, see Remark 2.9.
Remark 2.12. Definition 2.10 is slightly different from the Brownian Gibbs property of Definition 2.8 as we explain here. Assuming that Σ = N the two definitions are equivalent. However, if Σ = {1, . . . , N } with 1 ≤ N < ∞ then a line ensemble that satisfies the Brownian Gibbs property also satisfies the partial Brownian Gibbs property, but the reverse need not be true. Specifically, the Brownian Gibbs property allows for the possibility that k 2 = N in Definition 2.10 and in this case the convention is that g = −∞. As the partial Brownian Gibbs property is more general we prefer to work with it and most of the results later in this paper are formulated in terms of it rather than the usual Brownian Gibbs property.

2.2.
Bernoulli Gibbsian line ensembles. In this section we introduce the notion of a Bernoulli line ensemble and the Schur Gibbs property. Our discussion will parallel that of [6, Section 3.1], which in turn goes back to [8, Section 2.1].
A Σ-indexed Bernoulli line ensemble L on T 0 , T 1 is a random variable defined on a probability space (Ω, B, P), taking values in Y such that L is a (B, D)-measurable function.
Remark 2.14. In [6, Section 3.1] Bernoulli line ensembles L were called discrete line ensembles in order to distinguish them from the continuous line ensembles from Definition 2.1. In this paper we have opted to use the term Bernoulli line ensembles to emphasize the fact that the functions f ∈ Y satisfy the property that f (j, i + 1) − f (j, i) ∈ {0, 1} when j ∈ Σ and i ∈ T 0 , T 1 − 1 . This condition essentially means that for each j ∈ Σ the function f (j, ·) can be thought of as the trajectory of a Bernoulli random walk from time T 0 to time T 1 . As other types of discrete line ensembles, see e.g. [29], have appeared in the literature we have decided to modify the notation in [6, Section 3.1] so as to avoid any ambiguity.
The way we think of Bernoulli line ensembles is as random collections of up-right paths on the integer lattice, indexed by Σ (see Figure 3). Observe that one can view an up-right path L on T 0 , T 1 as a continuous curve by linearly interpolating the points (i, L(i)). This allows us to define (L(ω))(i, s) for non-integer s ∈ [T 0 , T 1 ] and to view Bernoulli line ensembles as line ensembles in the sense of Definition 2.1. In particular, we can think of L as a random variable taking values in (C(Σ × Λ), C Σ ) with Λ = [T 0 , T 1 ]. We will often slightly abuse notation and write L : Σ × T 0 , T 1 → Z, even though it is not L which is such a function, but rather L(ω) for each ω ∈ Ω. Furthermore we write L i = (L(ω))(i, ·) for the index i ∈ Σ path. If L is an up-right path on T 0 , T 1 and a, b ∈ T 0 , T 1 satisfy a < b we let L a, b denote the restriction of L to a, b .
Let t i , z i ∈ Z for i = 1, 2 be given such that t 1 < t 2 and 0 ≤ z 2 − z 1 ≤ t 2 − t 1 . We denote by Ω(t 1 , t 2 , z 1 , z 2 ) the collection of up-right paths that start from (t 1 , z 1 ) and end at (t 2 , z 2 ), by P t 1 ,t 2 ,z 1 ,z 2 Ber the uniform distribution on Ω(t 1 , t 2 , z 1 , z 2 ) and write E t 1 ,t 2 ,z 1 ,z 2 Ber for the expectation with respect to this measure. One thinks of the distribution P t 1 ,t 2 ,z 1 ,z 2 Ber as the law of a simple random walk with i.i.d. Bernoulli increments with parameter p ∈ (0, 1) that starts from z 1 at time t 1 and is conditioned to end in z 2 at time t 2 -this interpretation does not depend on the choice of p ∈ (0, 1). Notice that by our assumptions on the parameters the state space Ω(t 1 , t 2 , z 1 , z 2 ) is non-empty. Given . . , k that are uniformly distributed. This measure is well-defined provided that Ω(T 0 , T 1 , x i , y i ) are non-empty for i = 1, . . . , k, which holds if The following definition introduces the notion of an (f, g)-avoiding Bernoulli line ensemble, which in simple terms is a collection of k independent Bernoulli bridges, conditioned on not-crossing each other and staying above the graph of g and below the graph of f for two functions f and g.
Definition 2.15. Let k ∈ N and W k denote the set of signatures of length k, i.e.
With the above data we define the (f, g; S)-avoiding Bernoulli line ensemble on the interval T 0 , T 1 with entrance data x and exit data y to be the Σ-indexed Bernoulli line ensemble Q with Σ = 1, k on T 0 , T 1 and with the law of Q equal to P T 0 ,T 1 , x, y Ber (the law of k independent uniform up-right paths The above definition is well-posed if there exist B i ∈ Ω(T 0 , T 1 , x i , y i ) for i = 1, . . . , k that satisfy the conditions in E S (i.e. if the set of such up-right paths is not empty). We will denote by Ω avoid (T 0 , T 1 , x, y, f, g; S) the set of collections of k up-right paths that satisfy the conditions in E S and then the distribution on Q is simply the uniform measure on Ω avoid (T 0 , T 1 , x, y, f, g; S). We denote the probability distribution of Q as P T 0 ,T 1 , x, y,f,g avoid,Ber;S and write E T 0 ,T 1 , x, y,f,g avoid,Ber;S for the expectation with respect to this measure. If S = T 0 , T 1 , we write Ω avoid (T 0 , T 1 , x, y, f, g), P T 0 ,T 1 , x, y,f,g avoid,Ber , and E T 0 ,T 1 , x, y,f,g avoid,Ber . If f = +∞ and g = −∞, we write Ω avoid (T 0 , T 1 , x, y), P T 0 ,T 1 , x, y avoid,Ber , and E T 0 ,T 1 , x, y avoid,Ber .
It will be useful to formulate simple conditions under which Ω avoid (T 0 , T 1 , x, y, f, g) is non-empty and thus P T 0 ,T 1 , x, y,f,g avoid,Ber well-defined. Note that Ω avoid (T 0 , T 1 , x, y, f, g; S) ⊇ Ω avoid (T 0 , T 1 , x, y, f, g) for any S ⊆ T 0 , T 1 , so P T 0 ,T 1 , x, y,f,g avoid,Ber;S is also well-defined in this case. We accomplish this in the following lemma, whose proof is postponed until Section 7.3.
Then the set Ω avoid (T 0 , T 1 , x, y, f, g) from Definition 2.15 is non-empty.
The following definition introduces the notion of the Schur Gibbs property, which can be thought of a discrete analogue of the partial Brownian Gibbs property the same way that Bernoulli random walks are discrete analogues of Brownian motion.
and for any finite K = {k 1 , k 1 + 1, . . . , k 2 } ⊂ 1, N − 1 and a, b ∈ T 0 , T 1 with a < b the following holds. Suppose that f, g are two up-right paths drawn in {(r, z) ∈ Z 2 : a ≤ r ≤ b} and x, y ∈ W k with k = k 2 − k 1 + 1 altogether satisfy that P(A) > 0 where A denotes the event 18. In simple words, a Bernoulli line ensemble is said to satisfy the Schur Gibbs property if the distribution of any finite number of consecutive paths, conditioned on their end-points and the paths above and below them is simply the uniform measure on all collection of up-right paths that have the same end-points and do not cross each other or the paths above and below them.
Remark 2.19. Observe that in Definition 2.17 the index k 2 is assumed to be less than or equal to N − 1, so that if N < ∞ the N -th path is special and is not conditionally uniform. This is what makes Definition 2.17 a discrete analogue of the partial Brownian Gibbs property rather than the usual Brownian Gibbs property. Similarly to the partial Brownian Gibbs property, see Remark 2.11, if N = 1 then the conditions in Definition 2.17 become void, i.e., any Bernoulli line ensemble with one line satisfies the Schur Gibbs property. Also we mention that the well-posedness of P T 0 ,T 1 , x, y,f,g avoid,Ber in (2.5) is a consequence of Lemma 2.16 and our assumption that P(A) > 0.
Remark 2.20. In [6] the authors studied a generalization of the Gibbs property in Definition 2.17 depending on a parameter t ∈ (0, 1), which was called the Hall-Littlewood Gibbs property due to its connection to Hall-Littlewood polynomials [21]. The property in Definition 2.17 is the t → 0 limit of the Hall-Littlewood Gibbs property. Since under this t → 0 limit Hall-Littlewood polynomials degenerate to Schur polynomials we have decided to call the Gibbs property in Definition 2.17 the Schur Gibbs property. We end this section with the following definition of the term acceptance probability.
Definition 2.22. Assume the same notation as in Definition 2.15 and suppose that T 1 − T 0 ≥ y i − x i ≥ 0 for i = 1, . . . , k. We define the acceptance probability Z(T 0 , T 1 , x, y, f, g) to be the ratio Remark 2.23. The quantity Z(T 0 , T 1 , x, y, f, g) is precisely the probability that if B i are sampled uniformly from Ω(T 0 , T 1 , x i , y i ) for i = 1, . . . , k then the B i satisfy the condition Let us explain briefly why we call this quantity an acceptance probability. One way to sample is as follows. Start by sampling a sequence of i.i.d. up-right paths B N i uniformly from Ω(T 0 , T 1 , x i , y i ) for i = 1, . . . , k and N ∈ N. For each n check if B n 1 , . . . , B n k satisfy the condition E and let M denote the smallest index that accomplishes this. If Ω avoid (T 0 , T 1 , x, y, f, g) is non-empty then M is geometrically distributed with parameter Z(T 0 , T 1 , x, y, f, g), and in particular M is finite almost surely and {B M i } k i=1 has distribution P T 0 ,T 1 , x, y,f,g avoid,Ber . In this sampling procedure we construct a sequence of candidates {B N i } k i=1 for N ∈ N and reject those that fail to satisfy condition E, the first candidate that satisfies it is accepted and has law P T 0 ,T 1 , x, y,f,g avoid,Ber and the probability that a candidate is accepted is precisely Z(T 0 , T 1 , x, y, f, g), which is why we call it an acceptance probability.

Main technical result.
In this section we present the main technical result of the paper. We start with the following technical definition.
Definition 2.24. Fix k ∈ N, α, λ > 0 and p ∈ (0, 1). Suppose we are given a sequence is a sequence of 1, k -indexed Bernoulli line ensembles on −T N , T N . We call the sequence (α, p, λ)-good if • for each N ∈ N we have that L N satisfies the Schur Gibbs property of Definition 2.17; • there is a function ψ : N → (0, ∞) such that lim N →∞ ψ(N ) = ∞ and for each N ∈ N we have that T N > ψ(N )N α ; • there is a function φ : (0, ∞) → (0, ∞) such that for any > 0 we have Remark 2.25. Let us elaborate on the meaning of Definition 2.24. In order for a sequence of L N of 1, k -indexed Bernoulli line ensembles on −T N , T N to be (α, p, λ)-good we want several conditions to be satisfied. Firstly, we want for each N the Bernoulli line ensemble L N to satisfy the Schur Gibbs property. The second condition is that while the interval of definition of L N is finite for each N and given by −T N , T N , we want this interval to grow at least with speed N α . This property is quantified by the function ψ, which can be essentially thought of as an arbitrary unbounded increasing function on N. The third condition is that we want for each n ∈ Z the sequence of random variables N −α/2 (L N 1 (nN α ) − pnN α ) to be tight but moreover we want globally these random variables to look like the parabola −λn 2 . This statement is reflected in (2.7), which provides a certain uniform tightness of the random variables N −α/2 (L N 1 (nN α ) − pnN α + λn 2 N α/2 ). A particular case when (2.7) is satisfied is for example if we know that for each n ∈ Z the random variables N −α/2 (L N 1 (nN α ) − pnN α + λn 2 N α/2 ) converge to the same random variable X. In the applications that we have in mind these random variables would converge to the 1-point marginals of the Airy 2 process that are all given by the same Tracy-Widom distribution (since the Airy 2 process is stationary). Equation (2.7) is a significant relaxation of the requirement that N −α/2 (L N 1 (nN α ) − pnN α + λn 2 N α/2 ) all converge weakly to the Tracy-Widom distribution -the convergence requirement is replaced with a mild but uniform control of all subsequential limits.
The main technical result of the paper is given below and proved in Section 4.
Roughly, Theorem 2.26 (i) states that if we have a sequence of 1, k -indexed Bernoulli line ensembles that satisfy the Schur Gibbs property and the top paths of these ensembles under some shift and scaling have tight one-point marginals with a non-trivial parabolic shift, then under the same shift and scaling the top k − 1 paths of the line ensemble will be tight. The extension of f N i to R is completely arbitrary and irrelevant for the validity of Theorem 2.26 since the topology on C( 1, k − 1 × R) is that of uniform convergence over compacts. Consequently, only the behavior of these functions on compact intervals matters in Theorem 2. 26 and not what these functions do near infinity, which is where the modification happens as lim N →∞ ψ(N ) = ∞ by assumption. The only reason we perform the extension is to embed all Bernoulli line ensembles into the same space We mention that the k-th up-right path in the sequence of Bernoulli line ensembles is special and Theorem 2.26 provides no tightness result for it. The reason for this stems from the Schur Gibbs property, see Definition 2.17, which assumes less information for the k-th path. In practice, one either has an infinite Bernoulli line ensemble for each N or one has a Bernoulli line ensemble with finite number of paths, which increase with N to infinity. In either of these settings one can use Theorem 2.26 to prove tightness of the full line ensemble, we will see this when we prove Theorem 1.1 in the next section. Proof. (of Theorem 1.1) We use the same notation and assumptions as in the statement of the theorem. For clarity we split the proof into two steps.
Step 1. In this step we prove that L N is tight. In view of Lemma 2.4 to establish the tightness of L N it suffices to show that for every k ∈ N Let T N = min(−a N , b N ) and for N ≥ k + 1 letL N = (L N 1 ,L N 2 , . . . ,L N k+1 ) denote the 1, k + 1indexed Bernoulli line ensemble obtained from L N by restriction to the top k + 1 lines and the interval −T N , T N . In particular, since L N satisfies the Schur Gibbs property we conclude the same is true forL N and moreover Assumptions 1 and 2 in Section 1.2 imply that {L N } N ≥k+1 is an (α, p, λ)-good in the sense of Definition 2.24. It follows by Theorem 2.26 that {f N i } k i=1 as in the statement of that theorem for the line ensemblesL N are tight in (C( 1, k × R), C k ).
Since the map F : C( 1, k × R) → R given by F (g) = g(k, 0) is continuous we conclude that =f k (0) is a tight sequence of random variables. By constructionf N k (0) has the same distribution as L N k (0) and so statement (i) above holds. If π m k : C( 1, k × R) → C([−m, m]) denotes the map π k (g)(t) = g(k, t) then π m k is continuous and so we conclude that π m is a tight sequence of random variables in C([−m, m]), which by [2,Theorem 7.3] and the equality in distribution off N k and L N implies condition (ii) above.
Step 2. We next suppose that L ∞ is any subsequential limit of L N and that n m ↑ ∞ is a sequence such that L nm converges weakly to L ∞ . We want to show that L ∞ satisfies the Brownian Gibbs property. Suppose that a, b ∈ R with a < b and K = {k 1 , k 1 + 1, . . . , k 2 } ⊂ N are given. We wish to show that for any bounded Borel-measurable function F : where we use the same notation as in Definition 2.8. In particular, we recall that Step 1, then we know by construction that the resitrction of has the same distribution as the restriction of Π k [L N ] to the same interval. Since ψ(N ) → ∞ by assumption and converge weakly to Π k [L ∞ ] (here we used that the topology is that of uniform convergence over compacts). In particular, by the second part of Theorem 2.26 we conclude that Π k [L ∞ ] satisfies the partial Brownian Gibbs property as a 1, k -indexed line ensemble on R. The latter implies that almost surely . Let A denote the collection of sets A of the form where p ∈ N, B 1 , . . . , B p ∈ B(R) (the Borel σ-algebra on R and (i 1 , x 1 ), . . . , (i p , x p ) ∈ D c K,a,b . Since in (2.9) we have that k ≥ k 2 + 1 was arbitrary we conclude that for all A ∈ A we have In view of the bounded convergence theorem, we see that the collection of sets A that satisfies the last equation is a λ-system and as it contains the π-system A we conclude by the π − λ theorem that it contains σ(A), which is precisely F ext (K × (a, b)). We may thus conclude (2.8) from the defining properties of conditional expectation and the fact that the right side of (2.8) is F ext (K × (a, b))measurable as follows from [12,Lemma 3.4]. This suffices for the proof.
Proof. (of Theorem 1.3) As explained in Section 1.2 we have that Assumption 2' implies Assumption 2 and so by Theorem 1.1 we know that L N is a tight sequence of line ensembles. Let L ∞ sub be any subsequential limit. We will prove that L ∞ sub has the same distribution as L ∞ as in the statement of the theorem. If true, this would imply that L N has only one possible subsequential limit (namely L ∞ ) which combined with the tightness of L N would imply convergence of the sequence to L ∞ .
By Theorem 1.1 we know that L ∞ sub satisfies the Brownian Gibbs property and by Assumption 2', we know that L ∞ sub,1 (the top curve of L ∞ sub ) has the same distribution as L ∞ 1 . In [7] it was proved that L Airy satisfies the Brownian Gibbs property and since L ∞ i (t) = c −1/2 L Airy i (ct), for i ∈ N and t ∈ R we conclude that L ∞ also satisfies the Brownian Gibbs property. To prove the latter one only needs to utilize the fact that if B t is a standard Brownian motion so is c −1/2 B ct -see e.g. [12,Lemma 3.5] where a related result is established. Combining all of the above observations, we see that L ∞ sub and L ∞ both satisfy the Brownian Gibbs property and have the same top curve distribution, which by [12,Theorem 1.1] implies that L ∞ sub and L ∞ have the same law.

Properties of Bernoulli line ensembles
In this section we derive several results for Bernoulli line ensembles, which will be used in the proof of Theorem 2.26 in Section 4.
3.1. Monotone coupling lemmas. In this section we formulate two lemmas that provide couplings of two Bernoulli line ensembles of non-intersecting Bernoulli bridges on the same interval, which depend monotonically on their boundary data. Schematic depictions of the couplings are provided in Figure 4. We postpone the proof of these lemmas until Section 7.
as well as x, y, x , y ∈ W k . Assume that Ω avoid (T 0 , T 1 , x, y, ∞, g; S) and Ω avoid (T 0 , T 1 , x , y , ∞, g; S) are both non-empty. Then there exists a probability space (Ω, F, P), which supports two 1, k -indexed Bernoulli line ensembles L t and L b on T 0 , T 1 such that the law of L t resp. L b under P is given by P T 0 ,T 1 , x , y ,∞,g avoid,Ber;S resp. P T 0 ,T 1 , x, y,∞,g avoid,Ber;S and such that P-almost surely we have L t i (r) ≥ L b i (r) for all i = 1, . . . , k and r ∈ T 0 , T 1 . Lemma 3.2. Assume the same notation as in Definition 2.15. Fix k ∈ N, T 0 , T 1 ∈ Z with T 0 < T 1 , S ⊆ T 0 , T 1 , two functions g t , g b : T 0 , T 1 → [−∞, ∞) and x, y ∈ W k . We assume that g t (r) ≥ g b (r) for all r ∈ T 0 , T 1 and that Ω avoid (T 0 , T 1 , x, y, ∞, g t ; S) and Ω avoid (T 0 , T 1 , x, y, ∞, g b ; S) are both non-empty. Then there exists a probability space (Ω, F, P), which supports two 1, k -indexed Bernoulli line ensembles L t and L b on T 0 , T 1 such that the law of L t resp. L b under P is given by P T 0 ,T 1 , x, y,∞,g t avoid,Ber;S resp. P T 0 ,T 1 , x, y,∞,g b avoid,Ber;S and such that P-almost surely we have L t i (r) ≥ L b i (r) for all i = 1, . . . , k and r ∈ T 0 , T 1 .
In plain words, Lemma 3.1 states that one can couple two Bernoulli line ensembles L t and L b of non-intersecting Bernoulli bridges, bounded from below by the same function g, in such a way that if all boundary values of L t are above the respective boundary values of L b , then all up-right paths of L t are almost surely above the respective up-right paths of L b . See the left part of Figure  4. Lemma 3.2, states that one can couple two Bernoulli line ensembles L t and L b that have the same boundary values, but the lower bound g t of L t is above the lower bound g b of L b , in such a way that all up-right paths of L t are almost surely above the respective up-right paths of L b . See the right part of Figure 4.

3.2.
Properties of Bernoulli and Brownian bridges. In this section we derive several results about Bernoulli bridges, which are random up-right paths that have law P T 0 ,T 1 ,x,y Ber as in Section 2.2, as well as Brownian bridges with law P T 0 ,T 1 ,x,y f ree as in Section 2.1. Our results will rely on the two monotonicity Lemmas 3.1 and 3.2 as well as a strong coupling between Bernoulli bridges and Brownian bridges from [6] -recalled here as Theorem 3.3.
If W t denotes a standard one-dimensional Brownian motion and σ > 0, then the process . With the above notation we state the strong coupling result we use. Theorem 3.3. Let p ∈ (0, 1). There exist constants 0 < C, a, α < ∞ (depending on p) such that for every positive integer n, there is a probability space on which are defined a Brownian bridge B σ with variance σ 2 = p(1 − p) and a family of random paths (n,z) ∈ Ω(0, n, 0, z) for z = 0, . . . , n such that (n,z) has law P 0,n,0,z Ber and (3.2) E e a∆(n,z) ≤ Ce α(log n) 2 e |z−pn| 2 /n , where ∆(n, z) := sup 0≤t≤n √ nB σ t/n + t n z − (n,z) (t) .
Remark 3.4. When p = 1/2 the above theorem follows (after a trivial affine shift) from [20, Theorem 6.3] and the general p ∈ (0, 1) case was done in [6,Theorem 4.5]. We mention that a significant generalization of Theorem 3.3 for general random walk bridges has recently been proved in [13,Theorem 2.3].
We will use the following simple corollary of Theorem 3.3 to compare Bernoulli bridges with Brownian bridges. We use the same notation as in the theorem.
Corollary 3.5. Fix p ∈ (0, 1), β > 0, and A > 0. Suppose |z − pn| ≤ K √ n for a constant K > 0. Then for any > 0, there exists N large enough depending on p, , A, K so that for n ≥ N , Proof. Applying Chebyshev's inequality and (3.2) gives The conclusion is now immediate.
We also state the following result regarding the distribution of the maximum of a Brownian bridge, which follows from formulas in [14,Section 12.3]. Lemma 3.6. Fix p ∈ (0, 1), and let B σ be a Brownian bridge of variance σ 2 = p(1 − p) on [0, 1].
Then for any C, T > 0 we have

(3.3)
In particular, Proof. Let B 1 be a Brownian bridge with variance 1 on [0, 1]. Then B σ t has the same distribution as σB 1 t . Hence proving the second inequality in (3.3). Lastly to prove (3.4), observe that since B σ t has mean 0, B σ t and −B σ t have the same distribution. It follows from the first equality above that P max We state one more lemma about Brownian bridges, which allows us to decompose a bridge on [0, 1] into two independent bridges with Gaussian affine shifts meeting at a point in (0, 1). Lemma 3.7. Fix p ∈ (0, 1), T > 0, t ∈ (0, T ), and let B σ be a Brownian bridge of variance Let ξ be a Gaussian random variable with mean 0 and variance Let B 1 , B 2 be two independent Brownian bridges on [0, 1] with variances σ 2 t/T and σ 2 (T − t)/T respectively, also independent from B σ . Define the process for s ∈ [0, T ]. ThenB is a Brownian bridge with variance σ.
Proof. It is clear that the processB is a.s. continuous. SinceB is built from three independent zero-centered Gaussian processes, it is itself a zero-centered Gaussian process and thus completely characterized by its covariance. Consequently, to show thatB is a Brownian bridge of variance σ 2 , it suffices to show by (3 First assume s ≤ t Using the fact that ξ and B 1 · are independent with mean 0, we find If r ≥ t, we compute If r < t < s, then since ξ, B 1 · , and B 2 · are all independent, we have This proves (3.5) in all cases.
Below we list four lemmas about Bernoulli bridges. We provide a brief informal explanation of what each result says after it is stated. All six lemmas are proved in a similar fashion. For the first two lemmas one observes that the event whose probability is being estimated is monotone in . This allows us by Lemmas 3.1 and 3.2 to replace x, y in the statements of the lemmas with the extreme values of the ranges specified in each. Once the choice of x and y is fixed one can use our strong coupling results, Theorem 3.3 and Corollary 3.5, to reduce each of the lemmas to an analogous one involving a Brownian bridge with some prescribed variance. The latter statements are then easily confirmed as one has exact formulas for Brownian bridges, such as Lemma 3.6.
Lemma 3.8. Fix p ∈ (0, 1), T ∈ N and x, y ∈ Z such that T ≥ y − x ≥ 0, and suppose that has distribution P 0,T,x,y Remark 3.9. If M 1 , M 2 = 0 then Lemma 3.8 states that if a Bernoulli bridge is started from (0, x) and terminates at (T, y), which are above the straight line of slope p, then at any given time s ∈ [0, T ] the probability that (s) goes a modest distance below the straight line of slope p is upper bounded by 2/3.

Proof.
Define A = M 1 T 1/2 and B = pT + M 2 T 1/2 . Then since A ≤ x and B ≤ y, it follows from Lemma 3.1 that there is a probability space with measure P 0 supporting random variables L 1 and L 2 , whose laws under P 0 are P 0,T,A,B Ber and P 0,T,x,y Ber respectively, and P 0 -a.s. we have L 1 ≤ L 2 .
Since the uniform distribution on upright paths on 0, T × A, B is the same as that on upright paths on 0, T × 0, B − A shifted vertically by A, the last line of (3.7) is equal to Now we employ the coupling provided by Theorem 3.3. We have another probability space (Ω, F, P) supporting a random variable (T,B−A) whose law under P is P 0,T,0,B−A Ber as well as a Brownian bridge B σ coupled with (T,B−A) . We have Recalling the definitions of A and B, we can rewrite the quantity in the last line of (3.8) and bound by Thus the last line of (3.7) is bounded below by For the first inequality, we used the fact that the quantity in brackets is bounded in absolute value by ∆(T, B − A). The second inequality follows by dividing the event {B σ s/T ≥ 0} into cases and applying subadditivity.
√ T , Corollary 3.5 allows us to choose W 0 large enough depending on p and M 2 − M 1 so that if T ≥ W 0 , then the last line of (3.9) is bounded above by 1/2 − 1/6 = 1/3. In combination with (3.7) this proves (3.6). Lemma 3.10. Fix p ∈ (0, 1), T ∈ N and y, z ∈ Z such that T ≥ y, z ≥ 0, and suppose that y , z have distributions P 0,T,0,y Ber , P 0,T,0,z Ber respectively. Let M > 0 and > 0 be given. Then we can find Remark 3.11. Roughly, Lemma 3.10 states that if a Bernoulli bridge is started from (0, 0) and terminates at time T not significantly lower (resp. higher) than the straight line of slope p, then the event that goes significantly below (resp. above) the straight line of slope p is very unlikely.
Proof. The two inequalities are proven in essentially the same way. We begin with the first inequality.
where has law P 0,T,0,B

Ber
. By Theorem 3.3, there is a probability space (Ω, F, P) supporting a random variable (T,B) whose law under P is also P 0,T,0,B

Ber
, and a Brownian bridge B σ with variance For the first term in the last line, we used the fact that B σ and −B σ have the same distribution. For the second term, we used the fact that By Lemma 3.6, the first term in the last line of (3.12) is equal to Corollary 3.5 gives us a W 1 large enough depending on M, p, so that the second term in the last line of (3.12) is also < /2 for T ≥ W 1 . Adding the two terms and using (3.11) gives the first inequality in (3.10).
If we replace B with pT + M T 1/2 and change signs and inequalities where appropriate, then the same argument proves the second inequality in (3.10).
We need the following definition for our next result. For a function f ∈ C([a, b]) we define its modulus of continuity for δ > 0 by (3.13) w(f, δ) = sup Lemma 3.12. Fix p ∈ (0, 1), T ∈ N and y ∈ Z such that T ≥ y ≥ 0, and suppose that has distribution P 0,T,0,y Ber . For each positive M , and η, there exist a δ( , η, M ) > 0 and W 2 = W 2 (M, p, , η) ∈ N such that for T ≥ W 2 and |y − pT | ≤ M T 1/2 we have Remark 3.13. Lemma 3.12 states that if is a Bernoulli bridge that is started from (0, 0) and terminates at (T, y) with y close to pT (i.e. with well-behaved endpoints) then the modulus of continuity of is also well-behaved with high probability.
Proof. By Theorem 3.3, we have a probability measure P supporting a random variable (T,y) with law P 0,T,0,y Ber as well as a Brownian bridge B σ with variance σ 2 = p(1 − p). We have The last line follows from the assumption that |y − pT | ≤ M T 1/2 . Now (3.15) and (3.16) together imply that Thus we can find δ 0 > 0 small enough depending on , η so that w(B σ , δ 0 ) < /4 with probability at least 1 − η/2. Then with δ = min(δ 0 , /4M ), the first term in the second line of (3.17) is ≤ η/2 as well. This implies (3.14).
Let L = (L 1 , . . . , L k−1 ) be a line ensemble with law P 0,T, x, y Ber , and let E denote the event Then we can find W 3 = W 3 (p, C, K) so that for T ≥ W 3 , Remark 3.15. This lemma states that if k independent Bernoulli bridges are well-separated from each other and bot , then there is a positive probability that the curves will intersect neither each other nor bot . We will use this result to compare curves in an avoiding Bernoulli line ensemble with free Bernoulli bridges.
Proof. Observe that condition (1) simply states that bot lies a distance of at least C √ T uniformly below the line segment connecting x k−1 and y k−1 . Thus (1) and (2) imply that E occurs if each curve L i remains within a distance of C √ T /2 from the line segment connecting x i and y i . As in Theorem 3.3, let P i be probability measures supporting random variables (T,z i ) with laws P 0,T,0,z i Ber . Then In the third line, we used the fact that L 1 , . . . , L k−1 are independent from each other under P 0,T,0,z i Ber . Let B σ,i be the Brownian bridge with variance σ 2 = p(1 − p) coupled with (T,z i ) given by Theorem 3.3. Then we have By Lemma 3.6, the first term in the second line of (3.21) is equal to 2 ∞ n=1 (−1) n−1 e −n 2 C 2 /8p(1−p) . Moreover, condition (3) in the hypothesis and Corollary 3.5 allow us to find W 3 depending on p, C, K but not on i so that the last probability in (3.21) is bounded above by 1 2 Adding these two terms and referring to (3.20) proves (3.18). Now suppose C ≥ 8p(1 − p) log 3. By (3.4) in Lemma 3.6, the first term in the second line of (3.21) is bounded above by bounded above by 2e −C 2 /8p(1−p) . After possibly enlarging W 3 from above, the second term is The assumption on C implies that 1 − 3e −C 2 /8p(1−p) ≥ 0, and now combining (3.21) and (3.20) proves (3.19).

3.3. Properties of avoiding Bernoulli line ensembles. In this section we derive two results about avoiding Bernoulli line ensembles, which are Bernoulli line ensembles with law P^{T_0,T_1,x,y,f,g}_{avoid,Ber;S} as in Definition 2.15. The lemmas we prove only involve the case when f(r) = ∞ for all r ∈ ⟦T_0, T_1⟧, and we denote the measure in this case by P^{T_0,T_1,x,y,∞,g}_{avoid,Ber;S}. A P^{T_0,T_1,x,y,∞,g}_{avoid,Ber;S}-distributed random variable will be denoted by Q = (Q_1, ..., Q_k), where k is the number of up-right paths in the ensemble. As usual, if g = −∞, we write P^{T_0,T_1,x,y}_{avoid,Ber;S}. Our first result will rely on the two monotonicity Lemmas 3.1 and 3.2 as well as the strong coupling between Bernoulli bridges and Brownian bridges from Theorem 3.3, and the further results make use of the material in Section 8.
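Concretely, P^{T_0,T_1,x,y,∞,g}_{avoid,Ber;S} is the uniform measure on k-tuples of up-right paths with the prescribed endpoints that do not cross each other (or g) on S. For small parameters one can therefore sample it by rejection, which also makes the acceptance probability Z of Definition 2.22 tangible as an empirical acceptance rate. The Python sketch below is our own toy illustration (with invented endpoints, S the full interval, and g = −∞), not code from the paper.

import numpy as np

rng = np.random.default_rng(2)

def bridge(T, x, y):
    # Uniform Bernoulli bridge from (0, x) to (T, y): shuffle y - x unit steps among T slots.
    steps = np.zeros(T, dtype=int)
    steps[: y - x] = 1
    rng.shuffle(steps)
    return x + np.concatenate(([0], np.cumsum(steps)))

def sample_avoiding(T, xs, ys, max_tries=100_000):
    # Rejection sampler: resample k free bridges until L_1 >= L_2 >= ... >= L_k pointwise.
    for tries in range(1, max_tries + 1):
        L = np.array([bridge(T, x, y) for x, y in zip(xs, ys)])
        if np.all(L[:-1] >= L[1:]):
            return L, tries
    raise RuntimeError("acceptance probability too small for rejection sampling")

Q, tries = sample_avoiding(T=50, xs=[4, 2, 0], ys=[30, 27, 25])

The expected number of trials is 1/Z, so rejection is only feasible when Z is not too small; lower bounds on such acceptance probabilities are exactly the subject of Section 6.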
Proof. A sketch of the proof is given in Figure 5 and its caption.
Figure 5. Sketch of the argument for Lemma 3.16: we use Lemma 3.1 to lower the entry and exit data x, y of the curves to x′ and y′. We define E to be the event that the lines in the line ensemble lie in well-separated strips, with all the strips high enough, so that E is contained in the event we want to lower bound in (3.22). We then use the strong coupling with Brownian bridges via Theorem 3.3 and bound the probability of the bridges remaining within the blue windows to lower bound P(E).
Define vectors x′, y′ ∈ W_k as follows.
Let us write K_i, 1 ≤ i ≤ k, for the heights of the strips at time T/2, as in Figure 5, and let E denote the event that the following conditions hold for 1 ≤ i ≤ k. The first condition implies in particular that the strips are disjoint on [0, T] for each i. The second and third conditions require that each curve Q_i remain within a distance of 3√T of the graph of the piecewise linear function on [0, T] passing through the points (0, x′_i), (T/2, K_i), and (T, y′_i). We observe that (3.23) holds, where the second inequality follows from the definition of E. Let P be a probability space supporting a random variable ℓ^{(T,z)} with law P^{0,T,0,z}_{Ber}, coupled with a Brownian bridge B^σ with variance σ^2 = p(1 − p), as in Theorem 3.3. Then the expression raised to the k-th power on the right in (3.23) is bounded below for large enough T by (3.25). In the fourth line, we used the fact that ξ, B^1, and B^2 are independent, and in the second to last line we used Lemma 3.6. Since |z − pT| ≤ (M_1 + 1)√T, Corollary 3.5 allows us to choose T large enough so that P(∆(T, z) > √T/2) is less than 1/2 of the expression in the last line of (3.25). Then in view of (3.23) and (3.24), we conclude (3.22).
We now state an important weak convergence result, whose proof is presented in Section 8 (more specifically see Proposition 8.3).
Proposition 3.17. With Z_T as above, as T → ∞, Z_T converges weakly to a random vector Ẑ on R^k with a probability density ρ supported on W°_k.
The convergence result in Proposition 3.17 allows us to prove the following lemma, which roughly states that if the entrance and exit data of a sequence of avoiding Bernoulli line ensembles remain in compact windows, then with high probability the curves of the ensemble will remain separated from one another at each point by some small positive distance on scale √T. This is how Proposition 3.17 will be used in the main argument in the text, although in Section 8 we give a detailed description of the density ρ in Proposition 3.17.
Then for any M_1, M_2 > 0 and ε > 0 there exist W_5 ∈ N and δ > 0 depending on p, k, M_1, M_2, and ε.
Proof. We prove the claim by contradiction. Suppose there exist M_1, M_2, ε > 0 such that for any W_5 ∈ N and δ > 0 there exists some T ≥ W_5 for which the claimed bound fails. Then we can obtain sequences T_n, δ_n > 0, with T_n ↗ ∞ and δ_n ↘ 0, such that for all n we have (3.27). Combining (3.26) and (3.27), we obtain (3.28). To conclude the proof, we find a δ for which (3.28) cannot hold. For η̃ ≥ 0 we introduce the quantities entering the bound below. For each i ∈ ⟦1, k − 1⟧ and η > 0, we obtain an estimate contradicting (3.28) for this choice of δ.

Proof of Theorem 2.26
The goal of this section is to prove Theorem 2.26. Throughout this section, we assume that we have fixed k ∈ N with k ≥ 2, p ∈ (0, 1), α, λ > 0, and an (α, p, λ)-good sequence of ⟦1, k⟧-indexed Bernoulli line ensembles as in Definition 2.24, all defined on a probability space with measure P. The proof of Theorem 2.26 depends on three results: Proposition 4.1 and Lemmas 4.2 and 4.3. In these three statements we establish various properties of the sequence of line ensembles L^N. The constants in these statements depend implicitly on α, p, λ, k, and the functions φ, ψ from Definition 2.24, which are fixed throughout; we will not list these dependencies explicitly. The proof of Proposition 4.1 is given later in this section. In order to formulate it and some of the lemmas below, it will be convenient to adopt the following notation for any r > 0 and m ∈ N:
(4.1) t_m = (r + m)N^α.
Proposition 4.1. Let P be the measure from the beginning of this section. For any ε > 0, r > 0 there exist δ = δ(ε, r) > 0 and N_1 ∈ N such that for N ≥ N_1
P( Z(−t_1, t_1, L^N(−t_1), L^N(t_1), L^N_k⟦−t_1, t_1⟧) < δ ) < ε,
where L^N_k⟦−t_1, t_1⟧ denotes the restriction of L^N_k to the set ⟦−t_1, t_1⟧, and Z is the acceptance probability of Definition 2.22.
The general strategy we use to prove Proposition 4.1 is inspired by the proof of [8, Proposition 6.5]. We begin by stating three key lemmas that will be required. The proofs of Lemmas 4.2 and 4.3 are postponed to Section 5 and Lemma 4.4 is proved in Section 6.
Lemma 4.2. Let P be the measure from the beginning of this section. For any ε > 0, r > 0 there exist R_1 = R_1(ε, r) > 0 and N_2 ∈ N such that for N ≥ N_2 the event that L^N_1(s) − ps exceeds R_1 N^{α/2} somewhere on [−t_3, t_3] has probability less than ε.
Lemma 4.3. Let P be the measure from the beginning of this section. For any ε > 0, r > 0 there exist R_2 = R_2(ε, r) > 0 and N_3 ∈ N such that for N ≥ N_3 the event that L^N_{k−1}(s) − ps falls below −R_2 N^{α/2} somewhere on [−t_3, t_3] has probability less than ε.
Lemma 4.4. Let P be the measure from the beginning of this section, and fix M_1, M_2 > 0. Then there exist constants g, h and N_4 ∈ N, all depending on M_1, M_2, p, k, r, α, such that for any ε̃ > 0 and N ≥ N_4 we have (4.2), where Z is the acceptance probability of Definition 2.22, ℓ_bot⟦−t_1, t_1⟧ is the vector whose coordinates match those of ℓ_bot on ⟦−t_1, t_1⟧, and Q(a) = (Q_1(a), ..., Q_{k−1}(a)) is the value at location a of the line ensemble Q = (Q_1, ..., Q_{k−1}) whose law is P^{−t_3,t_3,x,y,∞,ℓ_bot}_{avoid,Ber}.
Proof of Proposition 4.1. Let ε > 0 be given. Define the event V as displayed. In view of Lemmas 4.2 and 4.3, and the fact that the Schur Gibbs property holds P-almost surely, we may write P^{−t_3,t_3}_{avoid,Ber} to ease the notation; in addition, we have that L^N(a) = (L^N_1(a), ..., L^N_{k−1}(a)), and L on the last line is distributed according to this law. We elaborate on (4.4) in the paragraph below. The first equality in (4.4) follows from the tower property for conditional expectations. The second equality uses the definition of V and the fact that 1_V is measurable with respect to the boundary data on ⟦−t_3, t_3⟧ and can thus be taken outside of the conditional expectation. The third equality uses the Schur Gibbs property, see Definition 2.17. The first inequality on the third line holds if N ≥ N_4 and uses Lemma 4.4 with ε̃ = ε/2, as well as the fact that on the event E_N the random variables L^N(−t_3), L^N(t_3) and L^N_k⟦−t_3, t_3⟧ (that play the roles of x, y and ℓ_bot) satisfy the required inequalities.
Proof of Theorem 2.26 (i). Since the f̃^N_n are obtained from the f^N_n by subtracting a deterministic continuous function (namely λs^2) and rescaling by a constant (namely p(1 − p)), we see that P̃^N is tight if and only if P^N is tight, and so it suffices to show that P^N is tight. By Lemma 2.4, it suffices to verify the following two conditions, (4.5) and (4.6), for all i ∈ ⟦1, k − 1⟧, r > 0, and ε > 0. For the sake of clarity, we will prove these conditions in several steps.
Step 1. In this step we prove (4.5). Let ε > 0 be given. Then by Lemmas 4.2 and 4.3 we can find R_1, R_2 and N_2, N_3 as in those lemmas. In particular, if we set R = max(R_1, R_2) and utilize the fact that L^N_1(s) ≥ L^N_i(s) ≥ L^N_{k−1}(s) for all i ∈ ⟦1, k − 1⟧, we conclude (4.5).
Step 2. In this step we prove (4.6). In the sequel we fix r, ε > 0 and i ∈ ⟦1, k − 1⟧. To prove (4.6) it suffices to show that for any η > 0 there exist a δ > 0 and N_0 such that N ≥ N_0 implies the bound (4.7). For δ > 0 we define the event A^N_δ as in (4.8), where we recall that t_1 = (r + 1)N^α from (4.1). We claim that there exist δ_0 > 0 and N_0 ∈ N such that for δ ∈ (0, δ_0] and N ≥ N_0 we have
(4.9) P(A^N_δ) < η.
We prove (4.9) in the steps below. Here we assume its validity and conclude the proof of (4.7).
Observe that if δ ∈ (0, min(δ_0, ε · (8λr)^{−1})), where λ is as in the statement of the theorem, we have the tower of inequalities (4.10). In (4.10) the first equality follows from the definition of f̃^N_i, and the inequality on the second line follows from the inequality |x^2 − y^2| ≤ 2rδ, which holds for all x, y ∈ [−r, r] such that |x − y| ≤ δ. The inequality in the third line of (4.10) follows from our assumption that δ < ε · (8λr)^{−1}, and the first inequality on the last line follows from the definition of A^N_δ in (4.8) and the fact that t_1 ≥ rN^α. The last inequality follows from our assumption that δ < δ_0 and (4.9). In view of (4.10) we conclude (4.7).
Step 3. In this step we prove (4.9); we fix η > 0 in the sequel. For δ_1, M_1 > 0 and N ∈ N we define the events E_1 and E_2 as in (4.11), where we use the same notation as in Proposition 4.1. Combining Lemmas 4.2, 4.3 and Proposition 4.1, we know that we can find δ_1 > 0 sufficiently small, M_1 sufficiently large and Ñ ∈ N such that for N ≥ Ñ we have (4.12). We claim that we can find δ_0 > 0 and N_0 ≥ Ñ such that for N ≥ N_0 and δ ∈ (0, δ_0) we have (4.13); one readily sees that (4.12) and (4.13) together imply (4.9).
Step 4. In this step we prove (4.13). We define the σ-algebra F generated by the relevant boundary data. Clearly E_1, E_2 ∈ F, so the indicator random variables 1_{E_1} and 1_{E_2} are F-measurable. It follows from the tower property of conditional expectation that (4.14) holds. By the Schur Gibbs property (see Definition 2.17), we know that (4.15) holds P-almost surely. We now observe that the Radon-Nikodym derivative with respect to P_{Ber} is given by (4.16). To see this, note that by Definition 2.15 we have the corresponding identity for any measurable set. It follows from (4.14), (4.16), and the definition of E_2 in (4.11) that (4.17) holds, where δ_1 is as in (4.11), and so we conclude (4.18), where ℓ has law P^{0,2t_1,0,y_i−x_i}_{Ber} (note that in (4.18) we implicitly translated the path to the right by t_1 and up by −x_i, which does not affect the probability in question). Since on the event E_1 we know that |y_i − x_i − 2pt_1| ≤ 2M_1 N^{α/2}, we conclude from Lemma 3.12 that we can find N_0 and δ_0 > 0, depending on M_1, r, α, such that for N ≥ N_0 and δ ∈ (0, δ_0) we have (4.19). Combining (4.17), (4.18) and (4.19) we conclude (4.13), and hence statement (i) of the theorem.

Proof of Theorem 2.26 (ii).
In this section we fix a subsequential limit L^∞ = (f̃^∞_1, ..., f̃^∞_{k−1}) of the sequence P̃^N as in the statement of Theorem 2.26, and we prove that L^∞ possesses the partial Brownian Gibbs property. Our approach is similar to that in [12, Sections 5.1 and 5.2]. We first give a definition of measures on scaled free and avoiding Bernoulli random walks. These measures will appear when we apply the Schur Gibbs property to the scaled line ensembles.
Definition 4.5. Let ℓ^{(T,z)} denote a random variable with law P^{0,T,0,z}_{Ber} as before Definition 2.15. We define P^{a,b,x,y}_{free,N} to be the law of the C([a, b])-valued random variable Y given by the diffusive rescaling of ℓ^{(T,z)}. Suppose further that f : [a, b] → (−∞, ∞] and g : [a, b] → [−∞, ∞) are continuous functions. We define the probability measure P^{a,b,x,y,f,g}_{avoid,N} to be P^{a,b,x,y}_{free,N} conditioned on the event
E = { f(x) ≥ Y_1(x) ≥ ··· ≥ Y_{k−1}(x) ≥ g(x) for all x ∈ [a, b] }.
This measure is well-defined if E is nonempty.
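One convenient way to think of P^{a,b,x,y}_{free,N} is as the law of a diffusively rescaled Bernoulli bridge. The Python sketch below implements one such rescaling, x ↦ σ^{−1}N^{−α/2}(ℓ(xN^α) − pxN^α) with σ^2 = p(1 − p) and linear interpolation between integer times; the exact centering and scaling constants here are our assumptions for illustration and may differ from Definition 4.5 in inessential ways.

import numpy as np

rng = np.random.default_rng(3)

def bernoulli_bridge(T, y):
    steps = np.zeros(T, dtype=int)
    steps[:y] = 1
    rng.shuffle(steps)
    return np.concatenate(([0], np.cumsum(steps)))

def scaled_path(ell, p, alpha, N, a, b, grid=200):
    # x -> (ell(x N^alpha) - p x N^alpha) / (sigma N^(alpha/2)) on [a, b] (a >= 0 assumed),
    # with linear interpolation between integer times; sigma^2 = p(1 - p).
    sigma = np.sqrt(p * (1 - p))
    xs = np.linspace(a, b, grid)
    t = xs * N ** alpha
    vals = np.interp(t, np.arange(len(ell)), ell)
    return xs, (vals - p * t) / (sigma * N ** (alpha / 2))

p, alpha, N = 0.5, 1.0, 400
ell = bernoulli_bridge(3 * N, int(p * 3 * N))  # Bernoulli bridge on [0, 3N]
xs, Y = scaled_path(ell, p, alpha, N, a=0.0, b=3.0)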
Next, we state two lemmas whose proofs we give in Section 7.5. The first lemma proves weak convergence of the scaled avoiding random walk measures in Definition 4.5. It states roughly that if the boundary data of these measures converge, then the measures converge weakly to the law of avoiding Brownian bridges with the limiting boundary data, as in Definition 2.7.
The next lemma shows that at any given point, the values of the k − 1 curves in L ∞ are each distinct, so that Lemma 4.6 may be applied.
Using these two lemmas, whose proofs are postponed, we now give the proof of Theorem 2.26 (ii).
Proof. (of Theorem 2.26 (ii)) We will write Σ = ⟦1, k⟧, and we let Y^N denote the rescaled line ensembles with laws P̃^N. Since L^∞ is a weak subsequential limit of Y^N, by possibly passing to a subsequence we may assume that Y^N ⟹ L^∞. We will still call the subsequence Y^N so as not to overburden the notation. By the Skorohod representation theorem [2, Theorem 6.7], we can also assume that Y^N and L^∞ are all defined on the same probability space with measure P and that the convergence happens P-almost surely. Here we are implicitly using Lemma 2.2, from which we know that the random variables Y^N and L^∞ take values in a Polish space, so that the Skorohod representation theorem is applicable.
Fix a set K = ⟦k_1, k_2⟧ ⊆ ⟦1, k − 2⟧ and a, b ∈ R with a < b. We also fix a bounded Borel-measurable function F : C(K × [a, b]) → R. In view of Definition 2.10 we need to prove that P-a.s. (4.20) holds, where F_ext(K × (a, b)) is as in Definition 2.8, and Q has law P^{a,b,x,y,f,g}_{avoid}. We prove (4.20) in two steps.
Step 1. Fix m ∈ N, n_1, ..., n_m ∈ Σ, t_1, ..., t_m ∈ R, and bounded continuous functions h_1, ..., h_m : R → R. Define S = {i ∈ ⟦1, m⟧ : n_i ∈ K, t_i ∈ [a, b]}. In this step we prove (4.21), where Q denotes a random variable with law P^{a,b,x,y,f,g}_{avoid}. By assumption, we have (4.22). We define the sequences of boundary data converging as required by Lemma 4.6. Therefore, writing Z^N for a random variable with this law, we have (4.23). In addition, we have by part (i) of Theorem 2.26 that (4.24) holds. Lastly, the continuity of the h_i implies (4.25). Combining (4.22), (4.23), (4.24), and (4.25) with the bounded convergence theorem proves (4.21).
Step 2. In this step we use (4.21) to prove (4.20). The argument below is a standard monotone class argument. For n ∈ N we define piecewise linear functions χ_n(·, r) by setting χ_n(x, r) = 1 for x ≤ r, χ_n(x, r) = 0 for x ≥ r + 1/n, and letting χ_n(·, r) interpolate linearly on [r, r + 1/n].
We fix m_1, m_2 ∈ N, n^1_1, ..., n^1_{m_1}, n^2_1, ..., n^2_{m_2} ∈ Σ, and t^1_1, ..., t^1_{m_1}, t^2_1, ..., t^2_{m_2} ∈ R with the properties required in (4.21). Letting n → ∞, we have χ_n(x, r) → χ(x, r) = 1_{x≤r}, and the bounded convergence theorem gives (4.26). Let H denote the space of bounded Borel-measurable functions H : C(K × [a, b]) → R for which (4.26) holds. The above shows that H contains all functions 1_A for sets A contained in the π-system A consisting of sets of the form described above. Let B denote the collection of sets B ∈ F_ext(K × (a, b)) such that (4.27) holds. We observe that B is a λ-system. Indeed, since (4.26) holds for H = F, taking a_i, b_i → ∞ and applying the bounded convergence theorem shows that (4.27) holds for the full space, and if B_n ∈ B increase to B, it follows from the monotone convergence theorem that B ∈ B. Moreover, (4.26) with H = F implies that B contains the π-system P of sets of the form described above. By the π-λ theorem [15, Theorem 2.1.6] it follows that B contains σ(P) = F_ext(K × (a, b)). Thus (4.27) holds for all B ∈ F_ext(K × (a, b)). It is proven in [12, Lemma 3.4] that E^{a,b,x,y,f,g}_{avoid}[F(Q)] is an F_ext(K × (a, b))-measurable function. Therefore (4.20) follows from (4.27) by the definition of conditional expectation. This suffices for the proof.

Bounding the max and min
In this section we prove Lemmas 4.2 and 4.3, and we assume the same notation as in the statements of these lemmas. In particular, we assume that k ∈ N, k ≥ 2, p ∈ (0, 1), α, λ > 0 are all fixed, and that L^N is an (α, p, λ)-good sequence.
Proof of Lemma 4.2. For clarity we split the proof into three steps. In the first step we introduce some notation that will be required in the proof of the lemma, which is presented in Steps 2 and 3.
Step 1. We write s_4 = (r + 4)N^α and s_3 = (r + 3)N^α, so that s_3 ≤ t_3 ≤ s_4, and assume that N is large enough so that ψ(N)N^α from Definition 2.24 is at least s_4. Notice that such a choice is possible by our assumption that L^N is an (α, p, λ)-good sequence; in particular, we know that the L^N_i are defined at ±s_4 for i ∈ ⟦1, k⟧. We define events E(a, b, s, ℓ_top, ℓ_bot), and what remains to be shown to prove (5.2) is that these events are pairwise disjoint.
On the intersection of E(a, b, s, ℓ_top, ℓ_bot) and E(ã, b̃, s̃, ℓ̃_top, ℓ̃_bot) we must have ã = L^N_1(−s_4) = a, so that a = ã. Furthermore, we have by properties (2) and (5) that s ≥ s̃ and s̃ ≥ s, from which we conclude that s = s̃, and then we conclude b = b̃, ℓ_top = ℓ̃_top, ℓ_bot = ℓ̃_bot. In summary, if E(a, b, s, ℓ_top, ℓ_bot) and E(ã, b̃, s̃, ℓ̃_top, ℓ̃_bot) have a non-trivial intersection, then (a, b, s, ℓ_top, ℓ_bot) = (ã, b̃, s̃, ℓ̃_top, ℓ̃_bot), which proves (5.2).
Step 2. In this step we prove that we can find an N_2 so that (5.3) holds for N ≥ N_2. A similar argument, which we omit, proves the same inequality with [−t_3, 0] in place of [0, t_3], and then the statement of the lemma holds for all N ≥ N_2, with R_1 = (6r + 22)(2r + 10)^{1/2}(M + 1).
We claim that we can find Ñ_2 ∈ N sufficiently large so that if N ≥ Ñ_2 and (a, b, s, ℓ_top, ℓ_bot) ∈ D(M) satisfies P(E(a, b, s, ℓ_top, ℓ_bot)) > 0, then we have (5.4). We will prove (5.4) in Step 3. For now we assume its validity and conclude the proof of (5.3).
Step 3. In this step we prove (5.4), and in the sequel we let (a, b, s, ℓ_top, ℓ_bot) ∈ D(M) be such that P(E(a, b, s, ℓ_top, ℓ_bot)) > 0. We remark that the condition P(E(a, b, s, ℓ_top, ℓ_bot)) > 0 implies that Ω_avoid(−s_4, s, a, b, ∞, ℓ_bot) is not empty. By Lemma 3.2 we know that (5.9) holds. Combining (5.9), (5.10) and (5.11) we conclude that we can find Ñ_2 ∈ N such that if N ≥ Ñ_2 we have (5.8). This suffices for the proof.

Proof of Lemma 4.3.
We begin by proving the following important lemma, which shows that it is unlikely that the curve L^N_{k−1} falls uniformly very low on a large interval.
Lemma 5.1. Under the same conditions as in Lemma 4.3 the following holds. For any r, ε > 0 there exist R > 0 and N_5 ∈ N such that for all N ≥ N_5 we have (5.12).
Proof. Before we go into the proof we give an informal description of the main ideas. The key to this lemma is the parabolic shift implicit in the definition of an (α, p, λ)-good sequence. This shift requires the deviation of the top curve L^N_1 from the line of slope p to appear roughly parabolic. On the event in equation (5.12) the (k − 1)-th curve dips very low uniformly on the interval [r, R], and we will argue that on this event the top k − 2 curves essentially do not feel the presence of the (k − 1)-th curve. After a careful analysis using the monotone coupling lemmas from Section 3.1, we will see that the latter statement implies that the curve L^N_1 behaves like a free bridge between its end-points, which have been slightly raised. Consequently, we would expect the midpoint value to be close to [L^N_1(rN^α) + L^N_1(RN^α)]/2, which lies much lower than the inverted parabola −λ(R + r)^2 N^{α/2}/4 (due to the concavity of the latter), and so it is very unlikely for L^N_1(N^α(R + r)/2) to be near it by our assumption. The latter would imply that the event in (5.12) is itself unlikely, since conditional on it an unlikely event suddenly became likely.
We proceed to fill in the details of the above sketch of the proof in the following steps. In total there are six steps and we will only prove the statement of the lemma for the interval [r, R], since the argument for [−R, −r] is very similar.
Step 1. We begin by specifying the choice of R in the statement of the lemma, fixing some notation and making a few simplifying assumptions.
Fix r, ε > 0 as in the statement of the lemma. Note that for any R > r the event in question only becomes smaller when r is increased; thus by replacing r with ⌈r⌉, we can assume that r ∈ Z, which we do in the sequel. Notice that by our assumption that L^N is (α, p, λ)-good we know that (5.12) holds trivially if k = 2 (with the right side of (5.12) being any number greater than ε/16, and in particular ε), and so in the sequel we assume that k ≥ 3. Define the constants in (5.13), and R_0 > r sufficiently large so that for R ≥ R_0 and N ∈ N we have (5.14). We define R = R_0 + 1_{R_0 + r odd}, so that R ≥ R_0 and the midpoint (R + r)/2 is an integer. This specifies our choice of R, and for convenience we denote m = (R + r)/2.
In the following, we always assume N is large enough so that ψ(N) > R, hence the L^N_i are defined at RN^α for 1 ≤ i ≤ k. We may do so by the second condition in the definition of an (α, p, λ)-good sequence (see Definition 2.24).
With the choice of R as above we define the required events. The goal of the lemma is to prove that we can find N_5 ∈ N so that (5.16) holds for all N ≥ N_5, which we accomplish in the steps below.
Step 3. We claim that we can find Ñ_0 so that for N ≥ Ñ_0 we have (5.19) for all (x, y, ℓ_bot) ∈ D such that P(E(x, y, ℓ_bot)) > 0. We will prove (5.19) in the steps below. In this step we assume its validity and conclude the proof of (5.16).

(5.25)
Let us elaborate on (5.25) briefly. The condition that P(E(x, y, ℓ_bot)) > 0 is required to ensure that the probabilities on the first line of (5.25) are well-defined, and N ≥ Ñ_0 ensures that all other probabilities are also well-defined. The equality on the first line of (5.25) follows from the definition of A and the Schur Gibbs property (see Definition 2.17), with Q = (Q_1, ..., Q_{k−2}) being P^{γ,Γ,x,y,∞,ℓ_bot}_{avoid,Ber}-distributed. The inequality in the first line of (5.25) follows from Lemma 3.1, while the equality in the second line follows from Definition 2.15, and now Q = (Q_1, ..., Q_{k−2}) is P^{γ,Γ,x′,y′}_{Ber}-distributed with the convention that Q_{k−1} = ℓ_bot.
Below, ℓ will be used for a generic random variable with law P^{·,·,·,·}_{Ber}, where the boundary data changes from line to line. With x, y as in (5.22), write z = y − x and recall that T = Γ − γ. Then we have (5.26). The equalities in (5.26) follow from shifting the boundary data of the curve ℓ, while the inequality on the third line follows from the definition of x, y in (5.22). From our choice of R in Step 1 and the definition of γ, Γ we obtain the required control on z − pT. Let P̃ be the probability measure on the space afforded by Theorem 3.3, supporting a random variable ℓ^{(T,z)} with law P^{0,T,0,z}_{Ber} and a Brownian bridge B^σ with variance σ^2 = p(1 − p). Then the probability in the last line of (5.26) can be expressed through this coupling, where we recall that ∆(T, z) is as in (3.2). Since T → ∞ as N → ∞, we conclude from Corollary 3.5 that there exists Ñ_2 ∈ N such that if N ≥ max(Ñ_1, Ñ_2) we have (5.29). Combining (5.27), (5.28) and (5.29) we obtain (5.23).
Step 6. In this last step, we prove (5.24). Consider the straight segment connecting x and y defined in (5.22). By construction, there is Ñ_3 ∈ N such that if N ≥ Ñ_3, then for any (x, y, ℓ_bot) ∈ D the curve ℓ_bot lies uniformly below this straight segment, which in turn lies at least C√T below the straight segment connecting x_{k−2} and y_{k−2}. If Ñ_1 is as in Step 5, we conclude from Lemma 3.14 that there exists Ñ_4 ∈ N such that if N ≥ max(Ñ_1, Ñ_3, Ñ_4) and P(E(x, y, ℓ_bot)) > 0, then (5.30) holds, where the condition N ≥ Ñ_1 is included to ensure that the measure P^{γ,Γ,x′,y′}_{Ber} is well-defined. In deriving (5.30) we also used (5.13), which implies the needed separation of the boundary data. We see that (5.30) implies (5.24), which concludes the proof of the lemma.
In the remainder of this section we use Lemma 5.1 to prove Lemma 4.3.

Proof. (of Lemma 4.3) For clarity we split the proof into five steps.
Step 1. In this step we specify the choice of R_2 in the statement of the lemma and introduce some notation that will be used in the proof of the lemma, which is given in Steps 2-5 below. Throughout we fix r, ε > 0. Define the constant in (5.31). Let R > r + 3, M > 0 and Ñ_1 ∈ N be such that for N ≥ Ñ_1 the event B below has the desired probability bound, and define the vectors x′, y′ as in (5.34) for i = 1, ..., k − 1. We will write z′ = y′ − x′, and we note that z′_{k−1} ≥ p(b − a) − 1 and also 2RN^α ≥ b − a ≥ 2(r + 3)N^α. The latter and Lemma 3.10 imply that there exist R_2 > 0 and Ñ_2 ∈ N such that if N ≥ Ñ_2 we have (5.35). This fixes our choice of R_2 in the statement of the lemma.
With the above choice of R_2 we define the event A, and then to prove the lemma it suffices to show that there exists N_4 ∈ N such that for N ≥ N_4 we have
(5.37) P(A) < ε.
Step 2. In this step, we prove that the event B from (5.32) can be written as a countable disjoint union of the form (5.38), where the set D and the events E(a, b, x, y, ℓ_bot, ℓ⁻_top, ℓ⁺_top) are described below. We also let D be the collection of tuples (a, b, x, y, ℓ_bot, ℓ⁻_top, ℓ⁺_top) satisfying the conditions listed below. It is clear that D is countable, and that B is the union of the events E(a, b, x, y, ℓ_bot, ℓ⁻_top, ℓ⁺_top) over D. Conditions (2) and (3) imply that a = ã, while conditions (2) and (4) imply that b = b̃. Afterwards, we conclude that x = x̃, y = ỹ, ℓ_bot = ℓ̃_bot, ℓ⁻_top = ℓ̃⁻_top and ℓ⁺_top = ℓ̃⁺_top, confirming (5.38).
We will prove (5.39) in the steps below. Here we assume its validity and conclude the proof of (5.37).
Step 4. In this step we prove (5.39). We claim that there exists Ñ_4 ∈ N such that for N ≥ Ñ_4 we have (5.40), where the ensemble appearing there is P^{a,b,x′,y′}_{Ber}-distributed, and we recall that x′, y′ were defined in (5.34). We will prove (5.40) in Step 5 below. Here we assume its validity and conclude the proof of (5.39).
Observe that by condition (2) in Step 2, we have the following chain of inequalities:

(5.41)
Let us elaborate on (5.41) briefly. The first inequality in (5.41) follows from the definition of A and the fact that a ≤ −t_3 while b ≥ t_3 by construction. The condition P(E(a, b, x, y, ℓ_bot, ℓ⁻_top, ℓ⁺_top)) > 0 ensures that the first three probabilities in (5.41) are all well-defined. The equality on the second line follows from the Schur Gibbs property, and the inequality on the third line follows from Lemmas 3.1 and 3.2, since x′_i ≤ x_i and y′_i ≤ y_i by construction. To ensure that the probability in the fourth line is well-defined (and hence that Lemmas 3.1 and 3.2 are applicable) it suffices to assume that N ≥ Ñ_4, in view of Lemma 2.16. The equality on the fourth line follows from the definition of P^{a,b,x′,y′}_{avoid,Ber} (see Definition 2.15), and the last inequality is trivial. By our choice of R_2 (see (5.35)) we know that there is Ñ_5 ∈ N such that if N ≥ Ñ_5, then
P^{a,b,x′,y′}(A | E(a, b, x, y, ℓ_bot, ℓ⁻_top, ℓ⁺_top)) < 2 · ε/4 = ε/2,
which implies (5.39).
Step 5. In this final step we prove (5.40). Set T = b − a and note that by our assumption that a ∈ ⟦s⁻_1, s⁻_2⟧ and b ∈ ⟦s⁺_1, s⁺_2⟧ we know that (2r + 6)N^α ≤ T ≤ 2RN^α. This implies the required control on the x′_i, and likewise for the y′_i. It follows from Lemma 3.14, applied with ℓ_bot = −∞, that there is Ñ_4 ∈ N so that for N ≥ Ñ_4 we have (5.43). In deriving (5.43) we also used (5.31), which implies the required separation of the entries of x′ and y′. Equation (5.43) clearly implies (5.40), and this concludes the proof of the lemma.

Lower bounds on the acceptance probability
We prove Lemma 4.4 in Section 6.1 by using Lemma 6.2, whose proof is presented in Section 6.2.
In other words, Q̃ has the law of k − 1 independent Bernoulli bridges that have been conditioned to not cross each other, and to stay above the graph of ℓ_bot, but only on the portions of S in the intervals ⟦−t_3, −t_1⟧ and ⟦t_1, t_3⟧. The latter restriction means that the lines are allowed to cross on ⟦−t_1 + 1, t_1 − 1⟧, and Q̃_{k−1} is allowed to dip below ℓ_bot on ⟦−t_1 + 1, t_1 − 1⟧ as well.
Lemma 6.2. There exist N_5 ∈ N and constants g, h such that for N ≥ N_5 we have (6.1).
We will prove Lemma 6.2 in Section 6.2. In the remainder of this section, we give the proof of Lemma 4.4, with the constants g and h given by Lemma 6.2. The proof begins by evaluating the Radon-Nikodym derivative between P_Q and P_Q̃. We then use this Radon-Nikodym derivative to transition between Q̃ in Lemma 6.2, which ignores ℓ_bot on ⟦−(t_1 − 1), t_1 − 1⟧, and Q in Lemma 4.4, which avoids ℓ_bot everywhere.
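Since the acceptance probability Z(−t_1, t_1, ·, ·, ·) of Definition 2.22 is just the probability that freshly sampled free Bernoulli bridges with the given endpoint data avoid each other and ℓ_bot, it can be estimated by direct Monte Carlo. The Python sketch below is our own toy illustration of this quantity (with invented endpoints and a made-up floor ell_bot), not part of the proof of Lemma 6.2.

import numpy as np

rng = np.random.default_rng(4)

def bridge(T, x, y):
    steps = np.zeros(T, dtype=int)
    steps[: y - x] = 1
    rng.shuffle(steps)
    return x + np.concatenate(([0], np.cumsum(steps)))

def acceptance_probability(T, xs, ys, ell_bot, trials=10_000):
    # Fraction of tuples of independent free bridges with the given endpoints that
    # avoid each other and stay above ell_bot: a Monte Carlo estimate of Z.
    hits = 0
    for _ in range(trials):
        L = np.array([bridge(T, x, y) for x, y in zip(xs, ys)])
        if np.all(L[:-1] >= L[1:]) and np.all(L[-1] >= ell_bot):
            hits += 1
    return hits / trials

ell_bot = 1 + np.arange(21) // 2  # a toy floor rising from 1 to 11
print(acceptance_probability(T=20, xs=[6, 3], ys=[16, 13], ell_bot=ell_bot))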
Proof of Lemma 4.4. Let us denote by P_Q and P_Q̃ the measures on ⟦1, k − 1⟧-indexed Bernoulli line ensembles Q, Q̃ on the set S in Definition 6.1, induced by the restrictions of the measures P_Q, P_Q̃ to S. Also let us write Ω_a(·) for Ω_avoid(·) for simplicity, and denote by Ω_a(S) the set of elements of Ω_avoid(−t_3, t_3, Q̃(−t_3), Q̃(t_3)) restricted to S. We claim that the Radon-Nikodym derivative between these two restricted measures on elements B = (B_1, ..., B_{k−1}) of Ω_a(S) is given by (6.2). The first equality holds simply because the measures are discrete. To prove the second equality, observe that the relevant counting identities follow from the restriction and the fact that the measures are uniform. Then from Definition 2.22 we know the corresponding formula for Z. Comparing the above identities proves the second equality in (6.2). Now note that Z(−t_1, t_1, B(−t_1), B(t_1), ℓ_bot⟦−t_1, t_1⟧) is a deterministic function of (B(−t_1), B(t_1)). In fact, the law of (B(−t_1), B(t_1)) under P_Q̃ is the same as that of (Q̃(−t_1), Q̃(t_1)) by way of the restriction. It follows from Lemma 6.2 that (6.4) holds. Similarly, the law of (B(−t_1), B(t_1)) under P_Q is the same as that of (Q(−t_1), Q(t_1)) under P_Q. Hence (6.5) holds. From the definition of E, the inequality (6.4), and the fact that 1_E ≤ 1, the desired bound follows; in combination with (6.5), this proves (4.2).
6.2. Proof of Lemma 6.2. In this section we prove Lemma 6.2. We first state and prove two auxiliary lemmas necessary for the proof. The first lemma establishes a set of conditions under which we have the desired lower bound on the acceptance probability.
Lemma 6.3. Let ε > 0 and V_top > 0 be given such that V_top > M_2 + 6(k − 1)ε. Suppose further that a, b ∈ W_{k−1} satisfy conditions (1)-(3) below. Then we can find g = g(ε, V_top, M_2) > 0 and N_6 ∈ N such that for all N ≥ N_6 we have (6.6).
Proof. Observe by the rightmost inequalities in conditions (1) and (2) in the hypothesis, as well as condition (1) in Lemma 4.4, that ℓ_bot lies a distance of at least 2ε(2t_3)^{1/2} ≥ 2ε(2t_1)^{1/2} uniformly below the line segment connecting a_{k−1} and b_{k−1}. Also note that (1) and (2) imply |b_i − a_i − 2pt_1| ≤ (V_top − M_2 − 2ε)(2t_3)^{1/2} for each i. Lastly, noting (3), we see that the conditions of Lemma 3.14 are satisfied with C = 2ε. This implies (6.6), with g as given there.
The next lemma helps us derive the lower bound h in (6.1).
Lemma 6.4. There exist constants V^t_1, V^b_1, h_1 > 0 and N_7 ∈ N such that for N ≥ N_7 the probability in (6.7) is at least h_1.
Proof. We first define the constants V^b_1 and h_1, as well as two other constants C and K_1 to be used in the proof. We put these definitions in (6.8); the constant V^t_1 will be fixed in Step 3 below, depending on h_1. We will prove in the following steps that for these choices of V^b_1, V^t_1, h_1 we can find N_7 so that for N ≥ N_7 we have (6.9) and (6.10). Assuming the validity of the claim, we then observe that the probability in (6.7) is bounded below by 2h_1 − h_1 = h_1, proving the lemma. We will prove (6.9) and (6.10) in three steps.
Step 1. In this step we prove that there exists N_7 so that (6.9) holds for N ≥ N_7, assuming results from Step 2 below. We condition on the value of Q̃ at 0 and divide Q̃ into two independent line ensembles on [−t_3, 0] and [0, t_3]. Observe by Lemma 3.2 that the corresponding comparison holds. With K_1 as in (6.8), we define events E_z for z ∈ X and E = ⋃_{z∈X} E_z. By Lemma 2.16, we can choose Ñ_0 large enough, depending on M_1, C, k, M_2, R, so that X is non-empty for N ≥ Ñ_0. By Lemma 3.16 we can find Ñ_1 so that the relevant probability is bounded below by a constant given explicitly in (3.22). Now let Q̃^1_i and Q̃^2_i denote the restrictions of Q̃_i to [−t_3, 0] and [0, t_3] respectively for 1 ≤ i ≤ k − 1, and write S_1 = S ∩ ⟦−t_3, 0⟧ and S_2 = S ∩ ⟦0, t_3⟧. We observe that if z ∈ X, then
(6.13) P^{−t_3,t_3,x,y}_{avoid,Ber;S}(E_z) ≥ P^{−t_3,0,x,z}_{avoid,Ber;S_1}(E^1_z) · P^{0,t_3,z,y}_{avoid,Ber;S_2}(E^2_z).
Step 2. In this step, we prove the inequalities in (6.14) from Step 1, using Lemma 3.8. Let us define vectors x′, z′, y′ as displayed. To bound the first term on the second line, first note the corresponding lower bounds for sufficiently large N; let us write x̃, z̃ for these two lower bounds. Then by Lemma 3.8, we have an Ñ_3 so that (6.16) holds for N ≥ Ñ_3. Moreover, as long as Ñ_3^α > 2, we have (6.17) for N ≥ Ñ_3. It follows from our choice of V^b_1 and K_1 = 2(2r + 5)V^b_1 in (6.8), as well as (6.17), that the displayed estimate holds. For the first inequality, we used the fact that t_2/t_3 < 1, and we assumed that Ñ_3 is sufficiently large so that C(k − 1)(2t_3)^{1/2} + (2t_3)^{1/4} ≤ Ck(2t_3)^{1/2} for N ≥ Ñ_3. Using (6.16), we conclude the corresponding bound for N ≥ Ñ_3. Since |z_i − x_i − pt_2| ≤ (K_1 + M_1 + 1)(2t_2)^{1/2}, we have by Lemma 3.14 and our choice of C that the second probability in the second line of (6.15) is bounded below accordingly for N larger than some Ñ_4. It follows from (6.15) and (6.18) that the first inequality in (6.14) holds for N ≥ Ñ_2 = max(Ñ_3, Ñ_4). The second inequality is proven similarly.
Step 3. In this last step, we fix V^t_1 and prove that we can enlarge N_7 from Step 1 so that (6.10) holds for N ≥ N_7. Let C be as in (6.8), and define vectors x′, y′ ∈ W_{k−1} as displayed. Note that x′_i ≥ x_i and x′_i − x′_{i+1} ≥ C(2t_3)^{1/2}, and likewise for the y′_i. Moreover, ℓ_bot lies a distance of at least C(2t_3)^{1/2} uniformly below the line segment connecting x′_{k−1} and y′_{k−1}. By Lemma 3.1 we obtain the displayed comparison. In the numerator in the second line, we used the fact that the curves L̃_1, ..., L̃_{k−1} are independent under P^{−t_3,t_3,x′_1,y′_1}_{Ber}, and the event in the parentheses depends only on L̃_1. By Lemma 3.10, since the relevant minimum grows with V^t_1, we can choose V^t_1 as well as Ñ_5 large enough so that the numerator is bounded above by h_1/2 for N ≥ Ñ_5. Since |y′_i − x′_i − 2pt_3| ≤ 1, our choice of C and Lemma 3.14 give an Ñ_6 so that the denominator is at least 11/12 for N ≥ Ñ_6. This gives an upper bound of (12/11) · (h_1/2) < h_1 in the above, as long as N_7 ≥ max(Ñ_5, Ñ_6), which concludes the proof of (6.10).
We are now equipped to prove Lemma 6.2. Let us put for convenience the notation in (6.19).
Proof. (of Lemma 6.2) We first introduce some notation to be used in the proof. Let S be as in Definition 6.1. Here, ε and V_top are constants which we will specify later. By Lemma 6.3, for all (c, d) and N sufficiently large we have the corresponding lower bound for some g depending on ε, V_top, M_2. The above gives all the notation we require. We now turn to the proof of the lemma, which is split into several steps.
Step 1. In this step, we show that there exist R > 0 and N̂_0 sufficiently large so that (6.22) holds for N ≥ N̂_0. Define vectors c′, d′ ∈ W_k as displayed. Then by Lemma 3.1 we obtain (6.25). By Lemma 3.14 and our choice of C, we can find Ñ_0 so that P^{−t_2,t_2,c′,d′}_{Ber}(L_1 ≥ ··· ≥ L_{k−1}) > 199/200 > 39/40 for N ≥ Ñ_0. Writing z = d′_{k−1} − c′_{k−1}, the term in the second line of (6.25) can be rewritten as displayed; in the second line, we used the estimate c′_{k−1} ≥ −pt_2 + (M_2 + R − Ck)(2t_3)^{1/2}. Now by Lemma 3.10, we can choose R large enough, depending on C, k, M_2, p, so that this probability is greater than 39/40 for N greater than some Ñ_1. This gives a lower bound in (6.25) of 39/40 − 1/40 = 19/20 for N ≥ max(Ñ_0, Ñ_1), and in combination with (6.23) this proves the first inequality in (6.22).
We prove the second inequality in (6.22) similarly. Note that since ℓ_bot(s) ≤ ps + M_2(2t_3)^{1/2} on [−t_3, t_3] by assumption, we obtain (6.26). We enlarge R if necessary so that the probability in the third line of (6.26) is > 199/200 for N ≥ Ñ_2 by Lemma 3.10, and Lemma 3.14 implies as above that the second expression in the last line of (6.26) is > −1/200 for N ≥ Ñ_3. This gives us a lower bound of 199/200 − 1/200 = 99/100 for N ≥ N̂_0 = max(Ñ_2, Ñ_3), as desired. This proves the two inequalities in (6.22).
Step 2. In this step we fix R sufficiently large so that R > C from (6.24) and the inequalities in (6.22) both hold for this choice of R. Our work from Step 1 ensures that such a choice for R is possible. Let V^t_1, V^b_1, and h_1 be as in Lemma 6.4 for this choice of R. Define the set E as displayed. Let C be as in (6.24), and define c′, d′ ∈ W_{k−1} coordinate-wise as displayed for each i, and likewise for the d′_i. By Lemma 3.1, the left hand side of (6.28) is bounded below by the expression in (6.29). In the last line, we have written z = d′_1 − c′_1, and we used the fact that c′_1 ≤ −pt_2 + (V^t_1 + Ck)(2t_3)^{1/2}. By Lemma 3.10, we can find V_top large enough, depending on V^t_1, C, k, p, so that the probability in the third line of (6.29) is at least 39/40 for N ≥ Ñ_4. On the other hand, the above observations regarding c′, d′, and ℓ_bot, as well as the fact that |d′_1 − c′_1 − 2pt_2| ≤ 1, allow us to conclude from Lemma 3.14 that the probability in the last line of (6.29) is at least 39/40 for N ≥ Ñ_5. In applying Lemma 3.14 we used the fact that V^b_1 ≥ M_2 + R, which implies that ℓ_bot lies a distance of at least R(2t_3)^{1/2} (and hence C(2t_3)^{1/2}, as R > C by construction) uniformly below the line segment connecting c′_{k−1} and d′_{k−1}. We thus obtain a lower bound of 39/40 − 1/40 = 19/20 in (6.29) for N ≥ N̂_1 = max(Ñ_4, Ñ_5), which proves (6.28) as desired.
Step 3. In this step, we show that with E, V^t_1, and V^b_1 as in Step 2, there exist ε > 0 sufficiently small and N̂_2 such that for all (c, d) ∈ E and N ≥ N̂_2 we have (6.30). We claim that this follows if we find Ñ_6 so that (6.31) holds for N ≥ Ñ_6. To see this, note that (6.22) and (6.28) imply the corresponding bound for N ≥ max(N̂_0, N̂_1), and then (6.31) and the second inequality in (6.22) imply the desired bound for N ≥ N̂_2 = max(N̂_0, N̂_1, Ñ_6), which gives (6.30) once we recall the definition of D(c, d, V_top, ε, t_{12}).
In the remainder of this step, we verify (6.31). Observe that A(c, d, t_1) ∩ B(c, d, V_top, t_1) can be written as a countable disjoint union of the events G(a, b) introduced in Step 4 below.
Step 4. In this step, we find N̂_3 so that (6.38) holds for N ≥ N̂_3. We will find Ñ_9 so that (6.39) holds for N ≥ Ñ_9. Here, for a, b ∈ W_{k−1}, G(a, b) is the event that Q(−t_{12}) = a and Q(t_{12}) = b, J is the collection of (a, b) satisfying the natural compatibility conditions, and J̃ = {(a, b) ∈ J : P(G(a, b)) > 0}; we take Ñ_9 large enough by Lemma 2.16 so that J̃ ≠ ∅. We also let D̃(V_top, ε, t_1) denote the set consisting of elements of D(c, d, V_top, ε, t_1) restricted to ⟦−t_{12}, t_{12}⟧. Then for (a, b) ∈ J̃ we have (6.41). We observe that the event in the second line of (6.41) occurs as long as each curve L_i remains within a distance of ε(2t_3)^{1/2} from the straight line segment connecting a_i and b_i on [−t_{12}, t_{12}], for 1 ≤ i ≤ k − 2. By the argument in the proof of Lemma 3.14, we can enlarge Ñ_9 so that the probability of this event is bounded below by the expression on the right in (6.39) for N ≥ Ñ_9. Then using (6.41) and (6.40) and summing over J̃ implies (6.39).
Step 5. In this last step, we complete the proof of the lemma, fixing the constants g and h as well as N_5. Let g = g(ε, V_top, M_2) be as in Lemma 6.3 for the choices of ε, V_top in Steps 2 and 3, let h be defined in terms of h_1, with h_1 as in Step 2, and let N_5 = max(N̂_0, N̂_1, N̂_2, N̂_3, N_7), with N_7 as in Lemma 6.4. In the following we assume that N ≥ N_5. By (6.38) we have that if (c, d) ∈ E and N ≥ N_5, then the displayed bound holds, where H is the event described there. Let Y denote the event appearing in that bound. Now Lemma 6.3 implies (6.1), completing the proof.

Appendix A
In this section we prove Lemmas 2.2, 2.4, 2.16, 3.1, 3.2, 4.6 and 4.7.
7.1. Proof of Lemma 2.2. We adopt the same notation as in the statement of Lemma 2.2 and proceed with its proof.
Observe that the sets K_1 ⊂ K_2 ⊂ ··· ⊂ Σ × Λ are compact, they cover Σ × Λ, and any compact subset K of Σ × Λ is contained in all K_n for sufficiently large n. To see this last fact, let π_1, π_2 denote the canonical projection maps of Σ × Λ onto Σ and Λ respectively. Since these maps are continuous, π_1(K) and π_2(K) are compact in Σ and Λ. This implies that π_1(K) is finite, so it is contained in Σ_{n_1} = Σ ∩ ⟦−n_1, n_1⟧ for some n_1. On the other hand, π_2(K) is closed and bounded in R, thus contained in some closed interval [α, β] ⊆ Λ. Since the a_n decrease and the b_n increase, we can choose n_2 large enough so that π_2(K) ⊆ [α, β] ⊆ [a_{n_2}, b_{n_2}]. Then taking n = max(n_1, n_2), we have K ⊆ K_n. We now split the proof into several steps.
Step 1. In this step, we show that the function d defined in the statement of the lemma is a metric.
For each n and f, g ∈ C(Σ × Λ), we define
d̃_n(f, g) = sup_{x∈K_n} |f(x) − g(x)| and d_n(f, g) = min(d̃_n(f, g), 1).
Then we have
d(f, g) = ∑_{n=1}^∞ 2^{−n} d_n(f, g).
Clearly each d̃_n is nonnegative and satisfies the triangle inequality, and it is then easy to see that the same properties hold for d_n. Furthermore, d_n ≤ 1, so d is well-defined and d(f, g) ∈ [0, 1]. Observe that d is nonnegative, and if f = g, then each d_n(f, g) = 0, so the sum d(f, g) is 0. Conversely, if f ≠ g, then since the K_n cover Σ × Λ, we can choose n large enough so that K_n contains an x with f(x) ≠ g(x). Then d_n(f, g) ≠ 0, and hence d(f, g) ≠ 0. Lastly, the triangle inequality holds for d since it holds for each d_n.
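For intuition, d is straightforward to approximate numerically by truncating the sum over n. The Python sketch below is our own illustration for the simplest case of a single index (Σ a singleton) and K_n = [−n, n]; the truncation level and grid are arbitrary choices.

import numpy as np

def dist(f, g, a, b, n_max=20, grid=400):
    # Truncation of d(f, g) = sum_n 2^{-n} min(1, sup_{K_n} |f - g|),
    # with K_n = [a(n), b(n)] evaluated on a finite grid.
    total = 0.0
    for n in range(1, n_max + 1):
        xs = np.linspace(a(n), b(n), grid)
        total += 2.0 ** (-n) * min(1.0, float(np.max(np.abs(f(xs) - g(xs)))))
    return total

print(dist(np.sin, np.cos, a=lambda n: -n, b=lambda n: n))

The weights 2^{−n} make far-away windows K_n contribute negligibly, which is exactly why convergence in d coincides with uniform convergence over compacts.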
Step 2. Now we prove that the topology τ_d on C(Σ × Λ) induced by d is the same as the topology of uniform convergence over compacts, which we denote by τ_c. Recall that τ_c is generated by the basis consisting of the sets B_K(f, ε) = {g : sup_{x∈K} |f(x) − g(x)| < ε} for K ⊂ Σ × Λ compact, f ∈ C(Σ × Λ), and ε > 0, and τ_d is generated by the sets B^ε_d(f) = {g : d(f, g) < ε}. We first show that τ_d ⊆ τ_c. It suffices to prove that every set B^ε_d(f) is a union of sets B_K(f, ε). First, choose ε > 0 and f ∈ C(Σ × Λ). Let g ∈ B^ε_d(f). We will find a basis element A_g of τ_c such that g ∈ A_g ⊂ B^ε_d(f). Let δ = d(f, g) < ε, and choose n large enough so that ∑_{k>n} 2^{−k} < (ε − δ)/2. Define A_g = B_{K_n}(g, (ε − δ)/2), and suppose h ∈ A_g. Then since K_m ⊆ K_n for m ≤ n, we have d(g, h) < ε − δ, and hence d(f, h) ≤ d(f, g) + d(g, h) < ε. Therefore g ∈ A_g ⊂ B^ε_d(f). Then we can write B^ε_d(f) as a union of basis elements of τ_c. We now prove conversely that τ_c ⊆ τ_d. Let K ⊂ Σ × Λ be compact, f ∈ C(Σ × Λ), and ε > 0. Choose n so that K ⊂ K_n, and let g ∈ B_K(f, ε) and δ = sup_{x∈K} |f(x) − g(x)| < ε. If d(g, h) < 2^{−n}(ε − δ), then d_n(g, h) ≤ 2^n d(g, h) < ε − δ, hence d̃_n(g, h) < ε − δ, assuming without loss of generality that ε ≤ 1. It follows that every such h lies in B_K(f, ε), proving that B_K(f, ε) ∈ τ_d by the same argument as above. We conclude that τ_d = τ_c.
Step 3. In this step, we show that (C(Σ × Λ), d) is a complete metric space. Let {f_n}_{n≥1} be Cauchy with respect to d. Then we claim that {f_n} must be Cauchy with respect to d_n on each K_n. This follows from the observation that d_n(f_ℓ, f_m) ≤ 2^n d(f_ℓ, f_m). Thus {f_n} is Cauchy with respect to the uniform metric on each K_n, and hence converges uniformly to a continuous limit f^{K_n} on each K_n (see [26, Theorem 7.15]). Since the pointwise limit must be unique at each x ∈ Σ × Λ, we have f^{K_n}(x) = f^{K_m}(x) if x ∈ K_n ∩ K_m. Since ⋃_n K_n = Σ × Λ, we obtain a well-defined function f on all of Σ × Λ given by f(x) = lim_{n→∞} f^{K_n}(x). We have f ∈ C(Σ × Λ) since f|_{K_n} = f^{K_n} is continuous on K_n for all n. Moreover, if K ⊂ Σ × Λ is compact and n is large enough so that K ⊂ K_n, then because f_n → f^{K_n} = f|_{K_n} uniformly on K_n, we have f_n → f^{K_n}|_K = f|_K uniformly on K. That is, for any K ⊂ Σ × Λ compact and ε > 0, we have f_n ∈ B_K(f, ε) for all sufficiently large n. Therefore f_n → f in τ_c, and equivalently in the metric d by Step 2.
Step 4. Lastly, we prove separability by adapting the arguments from [2, Example 1.3]. For each pair of positive integers n, k, let D_{n,k} be the subcollection of C(Σ × Λ) consisting of polygonal functions that are piecewise linear on {j} × I_{n,k,i} for each j ∈ Σ_n and each subinterval I_{n,k,i} = [a_n + ((i−1)/k)(b_n − a_n), a_n + (i/k)(b_n − a_n)], 1 ≤ i ≤ k, taking rational values at the endpoints of these subintervals, and extended constantly to all of Λ. Then D = ⋃_{n,k} D_{n,k} is countable, and we claim that it is dense in τ_c. To see this, let K ⊂ Σ × Λ be compact, f ∈ C(Σ × Λ), and ε > 0, and choose n so that K ⊂ K_n. Since f is uniformly continuous on K_n, we can choose k large enough so that for 1 ≤ i ≤ k, if t ∈ I_{n,k,i}, then |f(j, t) − f(j, a_n + (i/k)(b_n − a_n))| < ε/2 for all j ∈ Σ_n. Using that Q is dense in R, we can choose g ∈ ⋃_k D_{n,k} with |g(j, a_n + (i/k)(b_n − a_n)) − f(j, a_n + (i/k)(b_n − a_n))| < ε/2 for all i and j. Then we have |f(j, t) − g(j, a_n + ((i−1)/k)(b_n − a_n))| < ε and |f(j, t) − g(j, a_n + (i/k)(b_n − a_n))| < ε. Since g(j, t) is a convex combination of g(j, a_n + ((i−1)/k)(b_n − a_n)) and g(j, a_n + (i/k)(b_n − a_n)), we get |f(j, t) − g(j, t)| < ε as well. In summary, g ∈ B_K(f, ε). This proves that D is a countable dense subset of C(Σ × Λ).

7.2. Proof of Lemma 2.4. We first prove two lemmas that will be used in the proof of Lemma 2.4. The first result allows us to identify the space C(Σ × Λ) with a product of copies of C(Λ). In the following, we assume the notation of Lemma 2.4.
Lemma 7.1. Let π_i : C(Σ × Λ) → C(Λ), i ∈ Σ, be the projection maps given by π_i(F)(x) = F(i, x) for x ∈ Λ. Then the π_i are continuous. Moreover, if we endow the space ∏_{i∈Σ} C(Λ) with the product topology induced by the topology of uniform convergence over compacts on C(Λ), then the mapping F : C(Σ × Λ) → ∏_{i∈Σ} C(Λ), F(f) = (π_i(f))_{i∈Σ}, is a homeomorphism.
Proof. We first prove that the π_i are continuous. We know C(Σ × Λ) is metrizable by Lemma 2.2, and by a similar argument so is C(Λ) (take Σ = {0} in Lemma 2.2). Consequently, it suffices to assume that f_n → f in C(Σ × Λ) and show that π_i(f_n) → π_i(f) in C(Λ). Let K be compact in Λ. Then {i} × K is compact in Σ × Λ, and f_n → f uniformly on {i} × K by assumption, so we have π_i(f_n) → π_i(f) uniformly on K. We now observe that F is invertible. If (f_i)_{i∈Σ} ∈ ∏_{i∈Σ} C(Λ), then the function f defined by f(i, x) = f_i(x) lies in C(Σ × Λ), since Σ has the discrete topology. This gives a well-defined inverse for F. It suffices to prove that F and F^{−1} are open maps.
We first show that F sends each basis element B_K(f, ε) of C(Σ × Λ) to a basis element in ∏_{i∈Σ} C(Λ). Note that a basis for the product topology is given by products ∏_{i∈Σ} B_{K_i}(f_i, ε), where at most finitely many of the K_i are nonempty. Here, we use the convention that B_∅(f_i, ε) = C(Λ). Let π_Σ, π_Λ denote the canonical projections of Σ × Λ onto Σ, Λ. The continuity of π_Σ implies that if K ⊂ Σ × Λ is compact, then π_Σ(K) is compact in Σ, hence finite. Observe that the set K ∩ ({i} × Λ) is an intersection of a compact set with a closed set and is hence compact in Σ × Λ. Therefore the sets K_i = π_Λ(K ∩ ({i} × Λ)) are compact in Λ for each i ∈ Σ, since π_Λ is continuous. We observe that F(B_K(f, ε)) = ∏_{i∈Σ} U_i, where U_i = B_{K_i}(π_i(f), ε) for i ∈ π_Σ(K) and U_i = C(Λ) otherwise. Since π_Σ(K) is finite and the K_i are compact, we see that F(B_K(f, ε)) is a basis element in the product topology, as claimed.
Lastly, we show that F^{−1} sends each basis element U = ∏_{i∈Σ} B_{K_i}(f_i, ε) of the product topology to a set of the form B_K(f, ε). We have K_i = ∅ for all but finitely many i, and one readily checks that F^{−1}(U) = B_K(f, ε) with K = ⋃_i ({i} × K_i) and f = F^{−1}((f_i)_{i∈Σ}).
Lemma 7.2. For n ∈ N let L^n be random variables taking values in C(Σ × Λ), and set X^n_i = π_i(L^n) for i ∈ Σ. Then {L^n} is tight if and only if {X^n_i} is tight for each i ∈ Σ.
Proof. The fact that the X^n_i are random variables follows from the continuity of the π_i in Lemma 7.1 and [15, Theorem 1.3.4]. First suppose the sequence {L^n} is tight. By Lemma 2.2, C(Σ × Λ) is a Polish space, so it follows from Prohorov's theorem [2, Theorem 5.1] that {L^n} is relatively compact. That is, every subsequence {L^{n_k}} has a further subsequence {L^{n_{k_ℓ}}} converging weakly to some L. Then for each i ∈ Σ, since π_i is continuous by the above, the subsequence {π_i(L^{n_{k_ℓ}})} of {π_i(L^{n_k})} converges weakly to π_i(L) by the continuous mapping theorem [2, Theorem 2.7]. Thus every subsequence of {π_i(L^n)} has a convergent subsequence. Since C(Λ) is a Polish space (apply Lemma 2.2 with Σ = {0}), Prohorov's theorem [2, Theorem 5.2] implies {π_i(L^n)} is tight.
Conversely, suppose {X^n_i} is tight for all i ∈ Σ. Then given ε > 0, we can find compact sets K_i ⊂ C(Λ) such that P(X^n_i ∉ K_i) ≤ ε/2^i for each i ∈ Σ. By Tychonoff's theorem [22, Theorem 37.3], the product K̃ = ∏_{i∈Σ} K_i is compact in ∏_{i∈Σ} C(Λ). We have (7.1). By Lemma 7.1, we have a homeomorphism G : ∏_{i∈Σ} C(Λ) → C(Σ × Λ). We observe that G((X^n_i)_{i∈Σ}) = L^n, and K = G(K̃) is compact in C(Σ × Λ). Thus L^n ∈ K if and only if (X^n_i)_{i∈Σ} ∈ K̃, and it follows from (7.1) that P(L^n ∈ K) ≥ 1 − ε. This proves that {L^n} is tight.
We are now ready to prove Lemma 2.4.
To conclude tightness of {L^n_i}, it suffices to prove that K = ⋂_{m=1}^∞ π_m^{−1}(K_m) is sequentially compact in C(Λ). We argue by diagonalization. Let {f_n} be a sequence in K, so that f_n|_{[a_m,b_m]} ∈ K_m for every m, n. Since K_1 is compact, there is a sequence {n_{1,k}} of natural numbers such that the subsequence {f_{n_{1,k}}|_{[a_1,b_1]}}_k converges in C([a_1, b_1]). Since K_2 is compact, we can take a further subsequence {n_{2,k}} ⊆ {n_{1,k}} such that {f_{n_{2,k}}|_{[a_2,b_2]}}_k converges in C([a_2, b_2]). Continuing in this manner, we obtain sequences {n_{1,k}} ⊇ {n_{2,k}} ⊇ ··· so that {f_{n_{m,k}}|_{[a_m,b_m]}}_k converges in C([a_m, b_m]) for all m. Writing n_k = n_{k,k}, it follows that the sequence {f_{n_k}} converges uniformly on each [a_m, b_m]. If K′ is any compact subset of Λ, then K′ ⊂ [a_m, b_m] for some m, and hence {f_{n_k}} converges uniformly on K′. Therefore {f_{n_k}} is a convergent subsequence of {f_n}.
7.3. Proof of Lemma 2.16. We adopt the same notation as in the statement of Lemma 2.16 and proceed with its proof.
We first construct a candidate B and then we prove that B ∈ Ω_avoid(T_0, T_1, x, y, f, g). Denote B_0 = f and B_{k+1} = g, with x_0 = f(T_0) and y_0 = f(T_1). By Condition (3) of Lemma 2.16 we know x_0 ≥ x_1 and y_0 ≥ y_1. We define inductively B_j for j = 1, ..., k as follows (recall that B_0 = f). Assuming that B_{j−1} has been constructed, we let B_j(T_0) = x_j, and then for i ∈ ⟦T_0, T_1 − 1⟧ we define B_j(i + 1) by the rule (7.2). This gives our candidate B = (B_1, ..., B_k). In order to verify that this candidate ensemble B is an element of Ω_avoid(T_0, T_1, x, y, f, g), three properties must be ensured: (a) B(T_0) = x and B(T_1) = y; (b) the paths are ordered, i.e. B_{j−1}(i) ≥ B_j(i) for all j and i, including the boundary functions f and g; and (c) each B_j is an up-right path. Property (c) follows directly from our definition in (7.2). We split the proof of (a) and (b) above into three steps.
Step 1. In this step we prove for each j = 1, ..., k that B_{j−1}(i) ≥ B_j(i) for i ∈ ⟦T_0, T_1⟧. If j = 1 and f ≡ ∞ there is nothing to prove, so we may assume that either j ≥ 2, or j = 1 and f is an up-right path; the proofs in these cases are the same. Suppose that for some i ∈ ⟦T_0, T_1 − 1⟧ we have B_j(i) ≤ B_{j−1}(i); then we know by construction that B_j(i + 1) = B_j(i) or B_j(i) + 1.
In the former case, we trivially get B_j(i + 1) = B_j(i) ≤ B_{j−1}(i) ≤ B_{j−1}(i + 1), where the last inequality used that B_{j−1} is an up-right path. If B_j(i + 1) = B_j(i) + 1, from (7.2) we see that B_j(i) + 1 ≤ B_{j−1}(i + 1), and so we again conclude that B_j(i + 1) ≤ B_{j−1}(i + 1). By assumption we know that B_j(T_0) = x_j ≤ x_{j−1} = B_{j−1}(T_0), and so by inducting on i from T_0 to T_1 we conclude that B_{j−1}(i) ≥ B_j(i) for i ∈ ⟦T_0, T_1⟧ and j = 1, ..., k. To summarize, we have proved that (7.4) holds for i ∈ ⟦T_0, T_1⟧.
Step 2. In this step we prove (a). By construction we already know that B(T_0) = x, and so we only need to prove that B(T_1) = y. We will show this claim inductively on j: we trivially know the claim is true for j = 0, since y_0 = f(T_1) is given. Then suppose that B_j(T_1) = y_j holds up to j = n − 1.
We seek to prove that B_n(T_1) = y_n. Notice that by construction we know that B_n(i) ≤ y_n for all i ∈ ⟦T_0, T_1⟧, and so we only need to show that B_n(T_1) ≥ y_n. Suppose first that B_n(i + 1) = B_n(i) + 1 for all i ∈ ⟦T_0, T_1 − 1⟧. Then we know that B_n(T_1) = x_n + (T_1 − T_0) ≥ y_n by assumption (1) in Lemma 2.16, and so we are done. Otherwise, there is an i_0 ∈ ⟦T_0, T_1 − 1⟧ such that B_n(i_0 + 1) = B_n(i_0), and we can take i_0 to be the largest index in ⟦T_0, T_1 − 1⟧ satisfying this condition. Observe that by (7.2) we must have either B_n(i_0) ≥ y_n or B_n(i_0) ≥ B_{n−1}(i_0 + 1). In the former case, since B_n is an up-right path we must have B_n(T_1) ≥ B_n(i_0) ≥ y_n, and again we are done. Thus we only need to consider the case when B_{n−1}(i_0 + 1) ≤ B_n(i_0). By the maximality of i_0 we know that B_n(i + 1) = B_n(i) + 1 for i = i_0 + 1, ..., T_1 − 1, while B_{n−1} can increase by at most 1 at each step, and so we see that B_n(T_1) ≥ B_{n−1}(T_1) = y_{n−1} ≥ y_n. Overall, we conclude in all cases that B_n(T_1) ≥ y_n, which concludes the proof of (a).
Step 3. In this step we prove (b), and in view of (7.4) we see that it suffices to show that B k (i) ≥ g(i) for all i. If g ≡ −∞ there is nothing to prove and so we may assume that g is an up-right path.
Suppose that g(i) > B_k(i) for some i ∈ ⟦T_0, T_1⟧. Since g(T_0) ≤ B_k(T_0) = x_k by Condition (3) in Lemma 2.16, we know that there exists some point i_0 such that g(i_0) = B_k(i_0) and g(i_0 + 1) > B_k(i_0 + 1). In particular, since g and B_k can each only increase by 1, this implies B_k(i_0) = B_k(i_0 + 1) and g(i_0 + 1) = g(i_0) + 1. This implies either B_k(i_0) = y_k or B_k(i_0) + 1 > B_{k−1}(i_0 + 1). If B_k(i_0) = y_k, then by assumption (3) of Lemma 2.16 we conclude g(T_1) ≥ g(i_0 + 1) = y_k + 1 > y_k ≥ g(T_1), which is an obvious contradiction.
Therefore, it must be the case that B_k(i_0) + 1 > B_{k−1}(i_0 + 1), and then we conclude that B_{k−1}(i_0 + 1) = B_{k−1}(i_0) = B_k(i_0), in view of (7.4). By the same argument applied repeatedly to B_{k−2}, ..., B_1 and f, all of these paths take the common value B_k(i_0) at i_0 and stay flat between i_0 and i_0 + 1. But then g(i_0 + 1) > f(i_0 + 1), which contradicts condition (3) in Lemma 2.16. The contradiction arose from our assumption that g(i) > B_k(i) for some i ∈ ⟦T_0, T_1⟧, and so no such i exists, proving (b).
7.4. Proof of Lemmas 3.1 and 3.2. We will prove the following lemma, of which the two lemmas are immediate consequences. In particular, Lemma 3.1 is the special case when g^b = g^t, and Lemma 3.2 is the case when x = x′ and y = y′. We argue in analogy to [12, Lemma 5.6].
Lemma 7.3. Fix k ∈ N, T_0, T_1 ∈ Z with T_0 < T_1, S ⊆ ⟦T_0, T_1⟧, and two functions g^b, g^t : ⟦T_0, T_1⟧ → [−∞, ∞) with g^b ≤ g^t. Also fix x, y, x′, y′ ∈ W_k such that x_i ≤ x′_i and y_i ≤ y′_i for 1 ≤ i ≤ k. Assume that Ω_avoid(T_0, T_1, x, y, ∞, g^b; S) and Ω_avoid(T_0, T_1, x′, y′, ∞, g^t; S) are both non-empty. Then there exists a probability space (Ω, F, P), which supports two ⟦1, k⟧-indexed Bernoulli line ensembles L^t and L^b on ⟦T_0, T_1⟧, such that the law of L^t (resp. L^b) under P is given by P^{T_0,T_1,x′,y′,∞,g^t}_{avoid,Ber;S} (resp. P^{T_0,T_1,x,y,∞,g^b}_{avoid,Ber;S}), and such that P-almost surely we have L^t_i(r) ≥ L^b_i(r) for all i = 1, ..., k and r ∈ ⟦T_0, T_1⟧.
Proof. Throughout the proof, we will write Ω_{a,S} to mean Ω_avoid(T_0, T_1, x, y, ∞, g^b; S) and Ω′_{a,S} to mean Ω_avoid(T_0, T_1, x′, y′, ∞, g^t; S). We split the proof into two steps.
Step 1. We first aim to construct a Markov chain (X^n, Y^n)_{n≥0}, with X^n ∈ Ω_{a,S} and Y^n ∈ Ω′_{a,S}, whose initial condition is given by the maximal elements described below. We also note here that X^0 is maximal on the entire space Ω(T_0, T_1, x, y), in the sense that for any Z ∈ Ω(T_0, T_1, x, y) we have Z_i(t) ≤ X^0_i(t) for all t ∈ ⟦T_0, T_1⟧. In particular, X^0 is maximal on Ω_{a,S}. Likewise, we see that Y^0 is maximal on Ω′_{a,S}.
We want the chain (X^n, Y^n) to have the following properties: (1) (X^n)_{n≥0} and (Y^n)_{n≥0} are both Markov in their own filtrations, (2) (X^n) is irreducible and aperiodic, with invariant distribution P^{T_0,T_1,x,y,∞,g^b}_{avoid,Ber;S}, (3) (Y^n) is irreducible and aperiodic, with invariant distribution P^{T_0,T_1,x′,y′,∞,g^t}_{avoid,Ber;S}, (4) X^n_i ≤ Y^n_i on ⟦T_0, T_1⟧ for all n ≥ 0 and 1 ≤ i ≤ k. This will allow us to conclude convergence of X^n and Y^n to these two uniform measures.
We specify the dynamics of (X^n, Y^n) as follows; a schematic implementation is sketched at the end of Step 1. At time n, we uniformly sample a triple (i, t, z) ∈ ⟦1, k⟧ × ⟦T_0, T_1⟧ × ⟦x_k, y_1 − 1⟧. We also flip a fair coin, with P(heads) = P(tails) = 1/2. We update X^n and Y^n using the following procedure. If j ≠ i, we leave X_j, Y_j unchanged, and for all points s ≠ t we set X^{n+1}_i(s) = X^n_i(s). If T_0 < t < T_1, X^n_i(t − 1) = z, and X^n_i(t + 1) = z + 1 (note that this implies X^n_i(t) ∈ {z, z + 1}), we consider two cases. If t ∈ S, then we set X^{n+1}_i(t) = z + 1 on heads and X^{n+1}_i(t) = z on tails, assuming this does not cause X^{n+1}_i(t) to fall below X^n_{i+1}(t), with the convention that X^n_{k+1} = g^b. If t ∉ S, we perform the same update regardless of whether it results in a crossing. In all other cases, we leave X^{n+1}_i(t) = X^n_i(t). We update Y^n using the same rule, with g^t in place of g^b. We first observe that X^n and Y^n are in fact non-crossing on S for all n. Note that X^0 is non-crossing, and if X^n is non-crossing, then the only way X^{n+1} could be crossing on S is if the update were to push X^{n+1}_i(t) below X^n_{i+1}(t) for some i, t with t ∈ S. But any update of this form is suppressed, so it follows by induction that X^n ∈ Ω_{a,S} for all n. Similarly, we see that Y^n ∈ Ω′_{a,S}.
It is easy to see that (X^n, Y^n) is a Markov chain, since at each time n the value of (X^{n+1}, Y^{n+1}) depends only on the current state (X^n, Y^n), and not on the time n or any of the states prior to time n. Moreover, the value of X^{n+1} depends only on the state X^n, not on Y^n, so (X^n) is a Markov chain in its own filtration. The same applies to (Y^n). This proves property (1) above.
We now argue that (X^n) and (Y^n) are irreducible. Fix any Z ∈ Ω_{a,S}. As observed above, we have Z_i ≤ X^0_i on ⟦T_0, T_1⟧ for all i. We argue that we can reach the state Z starting from X^0 in some finite number of steps with positive probability. Due to the maximality of X^0, we only need to move the paths downward. If we do this starting with the bottom path, then there is no danger of the paths X_i crossing on S, or of X_k crossing g^b on S. To ensure that X^n_k = Z_k, we successively sample triples (k, t, z) as follows. We initialize t = T_0 + 1. If X^n_k(t) = Z_k(t), we increment t by 1. Otherwise, we have X^n_k(t) > Z_k(t), so we set z = X^n_k(t) − 1 and flip tails. This may or may not push X_k(t) downwards by 1. We then increment t and repeat this process. If t reaches T_1 − 1, then at the increment we reset t = T_0 + 1. After finitely many steps, X_k will agree with Z_k on all of ⟦T_0, T_1⟧. We then repeat this process for X^n_i and Z_i, with i descending. Since each of these samples and flips has positive probability, and this process terminates in finitely many steps, the probability of transitioning from X^n to Z after some number of steps is positive. The same reasoning applies to show that (Y^n) is irreducible.
To see that the chains are aperiodic, simply observe that if we sample a triple (i, T_0, z) or (i, T_1, z), then the states of both chains will be unchanged.
To see that the uniform measure P^{T_0,T_1,x,y,∞,g^b}_{avoid,Ber;S} on Ω_{a,S} is invariant for (X^n), fix any ω ∈ Ω_{a,S}. For simplicity, write μ for the uniform measure. Then for all τ ∈ Ω_{a,S} we have μ(τ) = 1/|Ω_{a,S}|. Hence it suffices to show that for all τ, ω ∈ Ω_{a,S},
μ(τ) P(X^{n+1} = ω | X^n = τ) = μ(ω) P(X^{n+1} = ω | X^n = τ) = μ(ω) P(X^{n+1} = τ | X^n = ω),
since summing over τ then gives μ(ω) = ∑_τ μ(τ) P(X^{n+1} = ω | X^n = τ). The second equality is clear if τ = ω. Otherwise, note that P(X^{n+1} = ω | X^n = τ) ≠ 0 if and only if τ and ω differ only in one indexed path (say the i-th) at one point t, where |τ_i(t) − ω_i(t)| = 1, and this condition is also equivalent to P(X^{n+1} = τ | X^n = ω) ≠ 0. If X^n = τ, there is exactly one choice of triple (i, t, z) and one coin flip which will ensure X^{n+1}_i(t) = ω_i(t), i.e. X^{n+1} = ω. Conversely, if X^n = ω, there is one triple and one coin flip which will ensure X^{n+1} = τ. Since the triples are sampled uniformly and the coin flips are fair, these two conditional probabilities are in fact equal. This proves (2), and an analogous argument proves (3).
Lastly, we argue that X^n_i ≤ Y^n_i on ⟦T_0, T_1⟧ for all n ≥ 0 and 1 ≤ i ≤ k. This is of course true at n = 0. Suppose it holds at some n ≥ 0, and suppose that we sample a triple (i, t, z). Then the update rule can only change the values of X^n_i(t) and Y^n_i(t). Notice that the values can change by at most 1, and if Y^n_i(t) − X^n_i(t) = 1, then the only way the ordering could be violated is if Y_i were lowered and X_i were raised at the next update. But this is impossible, since a coin flip of heads can only raise or leave fixed both curves, and tails can only lower or leave fixed both curves. Thus it suffices to assume X^n_i(t) = Y^n_i(t). There are two cases to consider that violate the ordering of X^{n+1} and Y^{n+1}: (i) X_i(t) is raised yet Y_i(t) is left fixed, and (ii) Y_i(t) is lowered yet X_i(t) is left fixed. These can only occur if the curves exhibit one of two specific shapes on ⟦t − 1, t + 1⟧. For X_i(t) to be raised, we must have X^n_i(t − 1) = X^n_i(t) = X^n_i(t + 1) − 1, and for Y_i(t) to be lowered, we must have Y^n_i(t − 1) = Y^n_i(t) − 1 = Y^n_i(t + 1) − 1. From the assumptions that X^n_i(t) = Y^n_i(t) and X^n_i ≤ Y^n_i, we observe that both of these requirements force the other curve to exhibit the same shape on ⟦t − 1, t + 1⟧. Then the update rule will be the same for both curves for either coin flip, proving that both (i) and (ii) are impossible.
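The update rule of Step 1 is a single-site dynamics of Glauber type, and the monotonicity argument above says precisely that feeding the same triple (i, t, z) and the same coin flip to both chains preserves the pointwise ordering. The following Python sketch of one update step is our own schematic rendering (for S given as a boolean mask and the floor g defined on the whole interval; as in Step 1, only the downward crossing is suppressed), not code from the paper.

import numpy as np

rng = np.random.default_rng(5)

def glauber_step(X, g, S):
    # One update: X is a (k, T+1) integer array of up-right paths, g a length-(T+1)
    # floor (use -inf for no floor), S a boolean mask marking where crossings are forbidden.
    k, T1 = X.shape[0], X.shape[1] - 1
    i = rng.integers(0, k)                  # which curve to update
    t = rng.integers(0, T1 + 1)             # which time; endpoints never change
    z = rng.integers(X.min(), X.max() + 1)  # proposed level (schematic range)
    heads = rng.random() < 0.5
    if 0 < t < T1 and X[i, t - 1] == z and X[i, t + 1] == z + 1:
        new = z + 1 if heads else z
        below = g[t] if i == k - 1 else X[i + 1, t]
        if (not S[t]) or new >= below:      # suppress crossings only on S
            X[i, t] = new
    return X

Running two copies of this step with shared draws of (i, t, z) and the coin yields the monotone coupling used to prove Lemma 7.3.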
Step 2. It follows from (2) and (3) and [23, Theorem 1.8.3] that $(X^n)_{n \geq 0}$ and $(Y^n)_{n \geq 0}$ converge weakly to $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y},\infty,g^b}_{\mathrm{avoid,Ber};S}$ and $\mathbb{P}^{T_0,T_1,\vec{x}',\vec{y}',\infty,g^t}_{\mathrm{avoid,Ber};S}$ respectively. In particular, $(X^n)$ and $(Y^n)$ are tight, so $(X^n, Y^n)_{n \geq 0}$ is tight as well. By Prohorov's theorem, it follows that $(X^n, Y^n)$ is relatively compact. Let $(n_m)$ be a sequence such that $(X^{n_m}, Y^{n_m})$ converges weakly. Then by the Skorohod representation theorem [2, Theorem 6.7], there exists a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supporting random variables $\tilde{X}^n, \tilde{Y}^n$ and $\tilde{X}, \tilde{Y}$, taking values in $\Omega_{a;S}$, such that (1) the law of $(\tilde{X}^{n_m}, \tilde{Y}^{n_m})$ under $\mathbb{P}$ is the same as that of $(X^{n_m}, Y^{n_m})$, and (2) $(\tilde{X}^{n_m}, \tilde{Y}^{n_m}) \to (\tilde{X}, \tilde{Y})$ $\mathbb{P}$-almost surely as $m \to \infty$. In particular, (1) implies that $\tilde{X}^{n_m}$ has the same law as $X^{n_m}$, which converges weakly to $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y},\infty,g^b}_{\mathrm{avoid,Ber};S}$. It follows from (2) and the uniqueness of limits that $\tilde{X}$ has law $\mathbb{P}^{T_0,T_1,\vec{x},\vec{y},\infty,g^b}_{\mathrm{avoid,Ber};S}$. Similarly, $\tilde{Y}$ has law $\mathbb{P}^{T_0,T_1,\vec{x}',\vec{y}',\infty,g^t}_{\mathrm{avoid,Ber};S}$. Moreover, condition (4) in Step 1 implies that $\tilde{X}^n(i,\cdot) \leq \tilde{Y}^n(i,\cdot)$ $\mathbb{P}$-a.s., so $\tilde{X}(i,\cdot) \leq \tilde{Y}(i,\cdot)$ for $1 \leq i \leq k$, $\mathbb{P}$-a.s. Thus we can take $L^b = \tilde{X}$ and $L^t = \tilde{Y}$.

7.5. Proof of Lemmas 4.6 and 4.7. In this section we use the same notation as in Section 4.3. We first prove Lemma 4.6. We will use the following lemma, which proves an analogous convergence result for a single rescaled Bernoulli random walk.
We observe that $B$ has law $\mathbb{P}^{a,b,x,y}_{\mathrm{free}}$ and $B^N \Rightarrow B$ as $N \to \infty$. By [2, Theorem 3.1], to show that $Z^N \Rightarrow B$, it suffices to find a sequence of probability spaces supporting $Y^N, B^N$ so that (7.5) holds. It follows from Theorem 3.3 that for each $N \in \mathbb{N}$ there is a probability space supporting $B^N$ and $Y^N$, as well as constants $C, a, \alpha > 0$, such that (7.6) holds. By assumption, there exist $N_0 \in \mathbb{N}$ and $A > 0$ so that $|z - pT_N| \leq A N^{\alpha/2}$ for $N \geq N_0$. Then for $\epsilon > 0$ and $N \geq N_0$, Chebyshev's inequality and (7.6) give an estimate whose right-hand side tends to 0 as $N \to \infty$, implying (7.5).
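As an illustration of this type of convergence (ours, not part of the proof), one can sample a Bernoulli walk conditioned on its endpoint, which is uniform over paths with the prescribed number of up-steps, and compare the variance of its rescaled midpoint with that of the limiting Brownian bridge; all parameters below are arbitrary.

```python
import random
from statistics import pvariance

def bernoulli_bridge(T, z, rng):
    """Bernoulli walk on {0,...,T} conditioned to end at z: the conditioned
    law is uniform over paths, i.e. the z up-steps form a uniform subset."""
    ups = set(rng.sample(range(T), z))
    path, pos = [0], 0
    for s in range(T):
        pos += 1 if s in ups else 0
        path.append(pos)
    return path

rng = random.Random(0)
p, T, trials = 0.5, 400, 20_000
z = round(p * T)
mids = [bernoulli_bridge(T, z, rng)[T // 2] for _ in range(trials)]
# A Brownian bridge at time 1/2 has variance 1/4; under the walk scaling the
# midpoint variance should therefore be close to T * p * (1 - p) / 4.
print(pvariance(mids), T * p * (1 - p) / 4)
```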
We now give the proof of Lemma 4.6.
Proof. (of Lemma 4.6) We prove the two statements of the lemma in two steps.
Step 1. In this step we fix $N_0 \in \mathbb{N}$ so that $\mathbb{P}^{a_N,b_N,\vec{x}_N,\vec{y}_N,f_N,g_N}_{\mathrm{avoid},N}$ is well-defined for $N \geq N_0$. Observe that we can choose $\epsilon > 0$ and continuous functions $h_1, \dots, h_k : [a,b] \to \mathbb{R}$, depending on $a, b, \vec{x}, \vec{y}, f, g$, which are strictly ordered and separated from $f$ and $g$ by at least $\epsilon$ for all $x \in [a,b]$. By Lemma 2.6, the law of $\tilde{Z}^N$ converges weakly to $\mathbb{P}^{a,b,\vec{x},\vec{y}}_{\mathrm{free}}$. In view of (7.8) we can then find $N_2$ so that (7.9) holds whenever $N \geq \max(N_1, N_2)$. If $f = \infty$ (resp. $g = -\infty$), we interpret this to mean that $f_N = \infty$ (resp. $g_N = -\infty$). We take $N_4$ large enough so that if $N \geq N_4$ and $|x - y| \leq N^{-\alpha/2}$, then $|f(x) - f(y)| < \epsilon/4$ and $|g(x) - g(y)| < \epsilon/4$. Lastly, we choose $N_5$ so that $N_5^{-\alpha} < \epsilon/4$. Then for $N \geq N_0 = \max(N_1, N_2, N_3, N_4, N_5)$, we have, using (7.7), the event inclusion (7.10). By (7.9) and (7.10) we conclude that $\mathbb{P}^{a_N,b_N,\vec{x}_N,\vec{y}_N,f_N,g_N}_{\mathrm{avoid},N}$ is well-defined for $N \geq N_0$.

Step 2. In this step we prove that $Z^N \Rightarrow \mathbb{P}^{a,b,\vec{x},\vec{y},f,g}_{\mathrm{avoid}}$, with $Z^N$ defined in the statement of the lemma. We write $\Sigma = \llbracket 1, k \rrbracket$, $\Lambda = [a,b]$, and $\Lambda_N = [a_N, b_N]$. It suffices to show that for any bounded continuous function $F : C(\Sigma \times \Lambda) \to \mathbb{R}$ we have

(7.11) $\lim_{N \to \infty} \mathbb{E}\big[F(Z^N)\big] = \mathbb{E}\big[F(Q)\big]$,

where $Q$ has law $\mathbb{P}^{a,b,\vec{x},\vec{y},f,g}_{\mathrm{avoid}}$. We define the functions $H_{f,g}$ and obtain the representation (7.12). By our choice of $N_0$ in Step 1, the denominator in (7.12) is positive for all $N \geq N_0$. Similarly, we have the analogous representation (7.13), where $L$ has law $\mathbb{P}^{a,b,\vec{x},\vec{y}}_{\mathrm{free}}$. From (7.12) and (7.13), we see that to prove (7.11) it suffices to show (7.14) for any bounded continuous function $F : C(\Sigma \times \Lambda) \to \mathbb{R}$. Define the events $E_1$ and $E_2$, where in the definition of $E_2$ we use the convention for infinite $f$ and $g$ introduced above. The bounded convergence theorem then implies (7.14), completing the proof of (7.11).
We now state two lemmas about Brownian bridges which will be used in the proof of Lemma 4.7. The first lemma shows that a Brownian bridge started at 0 almost surely becomes negative somewhere on its domain.

Lemma 7.5. Fix any $T > 0$ and $y \in \mathbb{R}$, and let $Q$ denote a random variable with law $\mathbb{P}^{0,T,0,y}_{\mathrm{free}}$. Define the event $C = \{\inf_{s \in [0,T]} Q(s) < 0\}$. Then $\mathbb{P}^{0,T,0,y}_{\mathrm{free}}(C) = 1$.

Proof. Let $B$ denote a standard Brownian bridge on $[0,1]$, and let $\tilde{B}_s = \sqrt{T}\, B_{s/T} + (s/T)\, y$ for $s \in [0,T]$. Then $\tilde{B}$ has the law of $Q$. Consider the stopping time $\tau = \inf\{s > 0 : \tilde{B}_s < 0\}$. We will argue that $\tau = 0$ a.s., which implies the conclusion of the lemma since $\{\tau = 0\} \subset C$. We observe that since $\tilde{B}$ is a.s. continuous and $\mathbb{Q}$ is dense in $\mathbb{R}$, for every $\epsilon > 0$ we have

$$\{\tau < \epsilon\} \supset \bigcup_{s \in (0,\epsilon) \cap \mathbb{Q}} \{\tilde{B}_s < 0\} \in \sigma(\tilde{B}_s : s < \epsilon).$$

Here, $\sigma(\tilde{B}_s : s < \epsilon)$ denotes the $\sigma$-algebra generated by $\tilde{B}_s$ for $s < \epsilon$. We used the fact that for a fixed $\epsilon$, each set $\{\tilde{B}_s < 0\}$ for $s \in (0,\epsilon) \cap \mathbb{Q}$ is contained in this $\sigma$-algebra, and thus so is their countable union. In particular, $\mathbb{P}(\tau < \epsilon) \geq \mathbb{P}(\tilde{B}_s < 0)$ for each such $s$. As $s \to 0$, the probability on the right tends to $\mathbb{P}(N(0,1) > 0) = 1/2$. Since $\{\tau = 0\} = \bigcap_{n=1}^{\infty} \{\tau \leq 1/n\}$ and $\{\tau \leq 1/(n+1)\} \subset \{\tau \leq 1/n\}$, we conclude that $\mathbb{P}(\tau = 0) = \lim_{n \to \infty} \mathbb{P}(\tau \leq 1/n) \geq 1/2$. Since $\{\tau = 0\}$ lies in the germ $\sigma$-algebra $\bigcap_{\epsilon > 0} \sigma(\tilde{B}_s : s < \epsilon)$, Blumenthal's 0-1 law implies $\mathbb{P}(\tau = 0) \in \{0, 1\}$. Therefore $\mathbb{P}(\tau = 0) = 1$.
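A quick Monte Carlo illustration of Lemma 7.5 (not part of the proof): discretized Brownian bridges from 0 to $y > 0$ dip below zero with a frequency approaching 1 as the time grid is refined, reflecting the almost sure behavior at $\tau = 0$.

```python
import math, random

def fraction_dipping_negative(T, y, n_steps, n_paths, rng):
    """Sample bridges from 0 to y on [0, T] via pinned random walks and
    return the fraction whose minimum over the grid is negative."""
    dt = T / n_steps
    count = 0
    for _ in range(n_paths):
        W = [0.0]
        for _ in range(n_steps):
            W.append(W[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))
        # Pin W into a bridge ending at y and take its minimum on the grid.
        m = min(W[i] - (i / n_steps) * (W[-1] - y) for i in range(n_steps + 1))
        count += m < 0
    return count / n_paths

rng = random.Random(0)
for n_steps in (10, 100, 1000):
    print(n_steps, fraction_dipping_negative(1.0, 1.0, n_steps, 2000, rng))
# The printed fractions increase toward 1 as the grid is refined.
```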
The second lemma shows that the difference of two independent Brownian bridges, scaled by $2^{-1/2}$, is another Brownian bridge.
Lemma 7.6. Let $a, b, x_1, y_1, x_2, y_2 \in \mathbb{R}$ with $a < b$. Let $B_1(t), B_2(t)$ be independent Brownian bridges on $[a,b]$ from $x_1$ to $y_1$ and from $x_2$ to $y_2$ respectively, as defined in (2.2). Then $2^{-1/2}(B_1 - B_2)$ is a Brownian bridge on $[a,b]$ from $2^{-1/2}(x_1 - x_2)$ to $2^{-1/2}(y_1 - y_2)$.

Proof. By definition, for $i = 1, 2$ we have $B_i(t) = \frac{b-t}{b-a}\, x_i + \frac{t-a}{b-a}\, y_i + \tilde{B}_i(t)$, where $\tilde{B}_1, \tilde{B}_2$ are independent Brownian bridges on $[a,b]$ from 0 to 0. We have

(7.15) $B_1(t) - B_2(t) = \frac{b-t}{b-a}(x_1 - x_2) + \frac{t-a}{b-a}(y_1 - y_2) + \big(\tilde{B}_1(t) - \tilde{B}_2(t)\big)$.

Note that the process $\tilde{B}_1 - \tilde{B}_2$ is a linear combination of continuous mean-zero Gaussian processes, so it is a continuous mean-zero Gaussian process, and is thus characterized by its covariance. Since $\tilde{B}_1(\cdot)$ and $\tilde{B}_2(\cdot)$ are both Gaussian with mean 0 and the same covariance, their difference $\tilde{B}_1(\cdot) - \tilde{B}_2(\cdot)$ is also Gaussian with mean 0 and twice that covariance. This implies that $2^{-1/2}(\tilde{B}_1 - \tilde{B}_2)$ is itself a Brownian bridge $\tilde{B}$ on $[a,b]$ from 0 to 0, and hence (7.15) can be rewritten as

$$2^{-1/2}\big(B_1(t) - B_2(t)\big) = \frac{b-t}{b-a} \cdot 2^{-1/2}(x_1 - x_2) + \frac{t-a}{b-a} \cdot 2^{-1/2}(y_1 - y_2) + \tilde{B}(t).$$

This is a Brownian bridge on $[a,b]$ from $2^{-1/2}(x_1 - x_2)$ to $2^{-1/2}(y_1 - y_2)$, as desired.
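The covariance identity in the proof can likewise be checked numerically (again purely illustrative): for standard Brownian bridges on $[0,1]$, the process $2^{-1/2}(\tilde{B}_1 - \tilde{B}_2)$ should have the standard bridge covariance $\min(s,t) - st$.

```python
import math, random

def standard_bridge(n, rng):
    """Standard Brownian bridge on [0, 1] from 0 to 0 via a pinned random walk."""
    dt = 1.0 / n
    W = [0.0]
    for _ in range(n):
        W.append(W[-1] + math.sqrt(dt) * rng.gauss(0.0, 1.0))
    return [W[i] - (i / n) * W[-1] for i in range(n + 1)]

rng = random.Random(0)
n, n_paths = 100, 20_000
i_s, i_t = 30, 70                       # grid points s = 0.3, t = 0.7
acc = 0.0
for _ in range(n_paths):
    B1, B2 = standard_bridge(n, rng), standard_bridge(n, rng)
    D_s = (B1[i_s] - B2[i_s]) / math.sqrt(2)
    D_t = (B1[i_t] - B2[i_t]) / math.sqrt(2)
    acc += D_s * D_t
print(acc / n_paths, min(0.3, 0.7) - 0.3 * 0.7)   # both close to 0.09
```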
To conclude this section, we prove Lemma 4.7.
Proof. (of Lemma 4.7) Suppose that $\mathcal{L}^\infty$ is a subsequential limit of $(f^N_1, \dots, f^N_{k-1})$. By possibly passing to a subsequence we may assume that $(f^N_1, \dots, f^N_{k-1}) \Rightarrow \mathcal{L}^\infty$. We will still call the subsequence $(f^N_1, \dots, f^N_{k-1})$ so as not to overburden the notation. By the Skorohod representation theorem [2, Theorem 6.7], we can also assume that $(f^N_1, \dots, f^N_{k-1})$ and $\mathcal{L}^\infty$ are all defined on the same probability space with measure $\mathbb{P}$, and that the convergence happens $\mathbb{P}$-almost surely. Here we are implicitly using Lemma 2.2, from which we know that the random variables $(f^N_1, \dots, f^N_{k-1})$ and $\mathcal{L}^\infty$ take values in a Polish space, so that the Skorohod representation theorem is applicable.
Let us denote by $X^N$ the random variables with the laws of $(f^N_1, \dots, f^N_{k-1})$ and by $X$ the one with law $\mathcal{L}^\infty$, so that $X^N \to X$ almost surely w.r.t. $\mathbb{P}$. In particular, the two relevant rescaled curves converge weakly to Brownian bridges $B_1$ and $B_2$, respectively. Consequently, their difference, scaled by $N^{-\alpha/2}/\sqrt{p(1-p)}$, converges weakly to the difference of two independent Brownian bridges, $B_1 - B_2$. By Lemma 7.6, this difference is equal in law to $2^{1/2} B$, where $B$ is a Brownian bridge on $[s, s+2]$ from $0$ to $2^{-1/2} y$, with $y$ denoting the difference of the limiting endpoint values. In other words, $B$ has law $\mathbb{P}^{s,s+2,0,2^{-1/2}y}_{\mathrm{free}}$. Therefore $\mathbb{P}^{a,b,\vec{x}_N,\vec{y}_N}_{\mathrm{diff}}$ converges weakly to the law of $2^{1/2} B$. With probability one, $\min_{t \in [s,s+2]} B_t < 0$ by Lemma 7.5. Thus given $\delta > 0$, we can choose $N$ large enough so that the probability of the rescaled difference remaining above $0$ on $[a,b]$ is less than $\delta$. Thus for large enough $N$ we have (7.16). Here, $Z$ denotes the acceptance probability of Definition 2.22. This is the probability that $k-1$ independent Bernoulli bridges $Q_1, \dots, Q_{k-1}$ on $[a,b]$ with entrance and exit data $\vec{L}^N(a)$ and $\vec{L}^N(b)$ do not cross one another or $L^N_k$. The last inequality follows because the rescaled difference has the law of the difference of $Q_i$ and $Q_{i+1}$, and the acceptance probability is bounded above by the probability that $Q_i$ and $Q_{i+1}$ do not cross, i.e., that $Q_i - Q_{i+1} \geq 0$. By Proposition 4.1, given $\epsilon > 0$ we can choose $\delta$ so that the probability on the right in (7.16) is $< \epsilon$. We conclude the statement of the lemma.

Appendix B
The goal of this section is to prove Proposition 3.17, which roughly states that if the boundary data of an avoiding Bernoulli line ensemble converge, then the fixed-time distribution of the ensemble converges weakly to a random vector with density $\rho$. In the course of the proof we will identify this limiting density $\rho$.
Throughout this section we fix $k \in \mathbb{N}$ and consider sequences of $\llbracket 1, k \rrbracket$-indexed line ensembles with distribution given by $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{avoid,Ber}}$ in the sense of Definition 2.15. Recall that this is just the law of $k$ independent Bernoulli random walks that have been conditioned to start from $\vec{x} = (x_1, \dots, x_k)$ at time 0, end at $\vec{y} = (y_1, \dots, y_k)$ at time $T$, and never cross. Here $\vec{x}, \vec{y} \in W_k$ satisfy $T \geq y_i - x_i \geq 0$ for $i = 1, \dots, k$, which by Lemma 2.16 ensures the well-posedness of $\mathbb{P}^{0,T,\vec{x},\vec{y}}_{\mathrm{avoid,Ber}}$. In Section 8.1, we introduce some definitions and formulate the precise statements of the two results we want to prove as Propositions 8.2 and 8.3. In Section 8.2, we introduce some basic results about skew Schur polynomials and express the fixed-time distribution of avoiding Bernoulli line ensembles through these polynomials in Lemma 8.7. In Sections 8.3 and 8.4, we prove Propositions 8.2 and 8.3 for an important special case. In Section 8.5 we introduce some notation and results about multi-indices and multivariate functions, which paves the way for the full proofs of Propositions 8.2 and 8.3 in that section and Section 8.6.
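Since the conditioned measure is uniform over non-crossing path families (a fact used in the proof of Lemma 8.7 below), it can be sampled exactly for small $k$ and $T$ by rejection. The following sketch is a direct transcription of the definition; the boundary data are illustrative choices of ours.

```python
import random

def bernoulli_bridge(T, x, y, rng):
    """Uniformly random Bernoulli path from x at time 0 to y at time T:
    place the y - x up-steps at a uniform subset of the T time slots."""
    ups = set(rng.sample(range(T), y - x))
    path, pos = [x], x
    for s in range(T):
        pos += 1 if s in ups else 0
        path.append(pos)
    return path

def sample_avoiding(T, xs, ys, rng):
    """Rejection sampler for the avoiding law: draw independent bridges and
    accept once the non-crossing event {L_1 >= L_2 >= ... >= L_k} holds."""
    while True:
        paths = [bernoulli_bridge(T, x, y, rng) for x, y in zip(xs, ys)]
        if all(paths[i][t] >= paths[i + 1][t]
               for i in range(len(paths) - 1) for t in range(T + 1)):
            return paths

rng = random.Random(0)
for line in sample_avoiding(10, (8, 4, 0), (13, 9, 5), rng):
    print(line)
```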
8.1. Weak convergence. We start by recalling and introducing some helpful notation.

Definition 8.1. Here we recall the scaling from Proposition 3.17. We fix $p, t \in (0,1)$ and $\vec{a}, \vec{b} \in W_k$. Suppose that $\vec{x}^T = (x^T_1, \dots, x^T_k)$ and $\vec{y}^T = (y^T_1, \dots, y^T_k)$ are two sequences of $k$-dimensional vectors in $W_k$ such that $T^{-1/2} x^T_i \to a_i$ and $T^{-1/2}(y^T_i - pT) \to b_i$ for $i = 1, \dots, k$. Define the sequence of random $k$-dimensional vectors $Z^T$ by $Z^T = T^{-1/2}\big(L_1(tT) - ptT, \dots, L_k(tT) - ptT\big)$, where $(L_1, \dots, L_k)$ has law $\mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{\mathrm{avoid,Ber}}$.

We next define a class of functions that will be used to express the limiting density $\rho$ in Proposition 3.17. These functions depend on two vectors $\vec{a}, \vec{b} \in W_k$, as well as on the parameters $p, t \in (0,1)$ through the quantities $c_1(t,p)$ and $c_2(t,p)$ appearing below. Suppose that $\vec{a}$ and $\vec{b}$ have the form in (8.3): $\vec{a}$ takes the distinct values $\alpha_1 > \alpha_2 > \cdots > \alpha_p$ with multiplicities $m_1, \dots, m_p$, and $\vec{b}$ takes the distinct values $\beta_1 > \beta_2 > \cdots > \beta_q$ with multiplicities $n_1, \dots, n_q$, where $\sum_{i=1}^p m_i = \sum_{i=1}^q n_i = k$. We denote $\vec{m} = (m_1, \dots, m_p)$, $\vec{n} = (n_1, \dots, n_q)$ and define two determinants $\phi(\vec{a}, \vec{z}, \vec{m})$ and $\psi(\vec{b}, \vec{z}, \vec{n})$; when the entries of $\vec{a}$ and $\vec{b}$ are distinct these reduce to $\det\big[e^{c_1(t,p) a_i z_j}\big]_{i,j=1}^k$ and $\det\big[e^{c_2(t,p) b_i z_j}\big]_{i,j=1}^k$, while repeated entries are handled by differentiating in the repeated variables, as in Section 8.5. Through these we define the function $H(\vec{z})$ in (8.5). The function $H$ implicitly depends on $p, t, k, \vec{a}, \vec{b}$, but we will not reflect this dependence in the notation. The following result summarizes the properties we will require from $H(\vec{z})$. Its proof can be found in Sections 8.3 and 8.5.
Proposition 8.2. Fix $p, t \in (0,1)$ and $\vec{a}, \vec{b} \in W_k$, and let $H(\vec{z})$ be as in (8.5). Then we have: (1) $H(\vec{z}) > 0$ for $\vec{z} \in W^\circ_k$; (2) $H(\vec{z}) = 0$ for $\vec{z} \in W_k \setminus W^\circ_k$; and (3) the constant $Z_c = \int_{\mathbb{R}^k} 1_{\{z_1 > z_2 > \cdots > z_k\}} \cdot H(\vec{z})\, d\vec{z}$ is finite and positive, where $d\vec{z}$ stands for the usual Lebesgue measure.
In view of Proposition 8.2 we know that the function

(8.6) $\rho(\vec{z}) = \rho(z_1, \dots, z_k) = Z_c^{-1} \cdot 1_{\{z_1 > z_2 > \cdots > z_k\}} \cdot H(\vec{z})$

defines a density on $\mathbb{R}^k$. This is the limiting density in Proposition 3.17. We end this section by stating the main convergence statement we want to establish.
Proposition 8.3. Assume the same notation as in Definition 8.1. Then the random vectors $Z^T$ converge weakly, as $T \to \infty$, to the density $\rho$ in (8.6).
The proof of the above two propositions is organized in the remainder of the section as follows. We first prove Propositions 8.2 and 8.3 for the case when $\vec{a}, \vec{b} \in W^\circ_k$; this is done in Sections 8.3 and 8.4 respectively. Afterwards we prove Proposition 8.2 for vectors $\vec{a}, \vec{b}$ that have the form in (8.3) in Section 8.5, and then use Proposition 8.3 for the case $\vec{a}, \vec{b} \in W^\circ_k$ and the monotone coupling Lemma 3.1 to prove Proposition 8.3 in the general case in Section 8.6.

8.2. Skew Schur polynomials. (1) A partition is an infinite sequence $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_r, \dots)$ of non-negative integers in decreasing order $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r \geq \cdots$ containing only finitely many non-zero terms. The non-zero $\lambda_i$ are called parts of $\lambda$; the number of parts is called the length of the partition $\lambda$, denoted by $l(\lambda)$, and the sum of the parts is the weight of $\lambda$, denoted by $|\lambda|$.
Based on the above preparation, we are ready to state the following lemma giving the distribution of avoiding Bernoulli line ensembles at time tT .
Lemma 8.7. Assume the same notation as in Definition 8.1, denote $m = tT$ and $n = T - tT$, and assume $m, n \in \mathbb{N}$. Then the avoiding Bernoulli line ensemble at time $m$ has the distribution given in (8.11), where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_k$ are integers.
Proof. Notice that if we shift all $x_i$, $y_i$ and $\lambda_i$ by the same integer, both sides of (8.11) stay the same, and so we may assume that all of these quantities are positive by adding the same large integer to all the coordinates. We then let $\kappa$ be the partition with parts $\kappa_i = y^T_i$, $\mu$ be the partition with parts $\mu_i = x^T_i$, and $\lambda$ be the partition with parts $\lambda_i$ for $i = 1, \dots, k$. All three partitions have length $k$. In view of (8.10) we see that the right side of (8.11) is precisely $s_{\lambda/\mu}(1^m)\, s_{\kappa/\lambda}(1^n) / s_{\kappa/\mu}(1^T)$. Let $\Omega(0, T, \vec{x}^T, \vec{y}^T)$ be the set of all avoiding Bernoulli line ensembles from $\vec{x}^T$ to $\vec{y}^T$, and analogously define $\Omega(0, m, \vec{x}^T, \lambda)$ and $\Omega(0, n, \lambda, \vec{y}^T)$. Then we get by the uniformity of the measure $\mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{\mathrm{avoid,Ber}}$ that

(8.13) LHS of (8.11) $= \dfrac{|\Omega(0, m, \vec{x}^T, \lambda)| \cdot |\Omega(0, n, \lambda, \vec{y}^T)|}{|\Omega(0, T, \vec{x}^T, \vec{y}^T)|}$.
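The counts appearing in (8.13) can be cross-checked against a Lindström-Gessel-Viennot-type determinant whose binomial entries mirror the determinants $\det[E(\lambda_i - x^T_j + j - i, m)]$ appearing later in this appendix. The sketch below is ours, not part of the proof, and verifies the identity by exhaustive enumeration in a small $k = 2$ case.

```python
from itertools import combinations
from math import comb

def binom(n, k):
    return comb(n, k) if 0 <= k <= n else 0

def bridges(T, x, y):
    """All Bernoulli paths from x at time 0 to y at time T (steps 0 or +1)."""
    out = []
    for ups in combinations(range(T), y - x):
        path, pos = [x], x
        for s in range(T):
            pos += 1 if s in ups else 0
            path.append(pos)
        out.append(path)
    return out

# |Omega(0, T, x, y)| for k = 2 by exhaustive enumeration...
T, x, y = 6, (2, 0), (5, 3)
brute = sum(all(p[t] >= q[t] for t in range(T + 1))
            for p in bridges(T, x[0], y[0]) for q in bridges(T, x[1], y[1]))
# ...versus the determinant det[ binom(T, y_i - x_j + j - i) ]_{i,j=1}^2.
det = (binom(T, y[0] - x[0]) * binom(T, y[1] - x[1])
       - binom(T, y[0] - x[1] + 1) * binom(T, y[1] - x[0] - 1))
print(brute, det)   # the two counts agree
```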
8.3. Proof of Proposition 8.2 for $\vec{a}, \vec{b} \in W^\circ_k$. In this section we prove a few technical results we will need later, as well as Proposition 8.2 for the case when $\vec{a}, \vec{b}$ have distinct entries.
Lemma 8.8. Suppose that $p \in (0,1)$ and $R > 0$ are given. Suppose that $x \in [-R, R]$ and that $N = pn + \sqrt{n}\, x \in [0, n]$ is an integer. Then

(8.15) $e_N(1^n)\, p^N (1-p)^{n-N} = \dfrac{1}{\sqrt{2\pi n\, p(1-p)}} \exp\left( -\dfrac{x^2}{2p(1-p)} \right) \big( 1 + O(n^{-1/2}) \big)$,

where the constant in the big $O$ notation depends on $p$ and $R$ alone. Moreover, there exist positive constants $C, c > 0$ depending on $p$ alone such that for all large enough $n \in \mathbb{N}$ and $N \in [0, n]$,

(8.16) $e_N(1^n)\, p^N (1-p)^{n-N} \leq C n^{-1/2} \exp\left( -c\, \dfrac{(N - pn)^2}{n} \right)$.

Remark 8.9. Notice that when $R > 0$ is fixed and $N \in [pn - R\sqrt{n}, pn + R\sqrt{n}]$, we have $N \in [0, n]$ for all large enough $n$, so that our insistence that $N \in [0, n]$ in the first part of Lemma 8.8 does not affect the asymptotics. The second part of the lemma, equation (8.16), also trivially holds if $N \notin [0, n]$, since $e_N(1^n) = 0$ in this case by Definition 8.5.
Proof. For clarity the proof is split into several steps.
Step 1. In this step we prove (8.15). Using Definition 8.5 we obtain

(8.17) $e_N(1^n) = \dbinom{n}{N} = \dfrac{n!}{N!\,(n-N)!}$.

We have the following formula [25] for $n \geq 1$:

(8.18) $n! = \sqrt{2\pi n}\, n^n e^{-n} e^{r_n}$, where $\dfrac{1}{12n+1} < r_n < \dfrac{1}{12n}$.

Applying (8.18) to equation (8.17) gives (8.19). Denote $\Delta = \sqrt{n}\, x = O(n^{1/2})$; we now use the Taylor expansion of the logarithm and the expression for $N$ to expand the resulting terms. Plugging the two resulting expansions into equation (8.19) and simplifying yields (8.15).

Step 2. In this step we prove (8.16). If $N = 0$ or $n$, we know that $e_N(1^n) = 1$, and then (8.16) is easily seen to hold with $C = 1$ and any $c \in (0, \min(-\log p, -\log(1-p)))$. Thus it suffices to consider the case when $N \in [1, n-1]$, and in the sequel we also assume that $n \geq 2$. In view of $\phi_n \leq C_1 + \psi_n(s)$ and (8.24), we obtain the desired estimate, which proves (8.16) with $C = e^{C_1 + C_2}$.
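The asymptotic (8.15) is easy to test numerically, since $e_N(1^n) = \binom{n}{N}$. The log-space computation below avoids overflow; it is an illustrative check of ours, assuming the form of (8.15) stated above.

```python
from math import lgamma, log, exp, pi, sqrt

def log_binom(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

p, n = 0.4, 4000
for x in (-1.0, 0.0, 1.5):
    N = round(p * n + sqrt(n) * x)
    x_eff = (N - p * n) / sqrt(n)              # x after rounding N to an integer
    log_lhs = log_binom(n, N) + N * log(p) + (n - N) * log(1 - p)
    log_rhs = -x_eff ** 2 / (2 * p * (1 - p)) - 0.5 * log(2 * pi * n * p * (1 - p))
    print(f"x = {x:+.1f}: ratio = {exp(log_lhs - log_rhs):.4f}")   # -> 1 as n grows
```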
Lemma 8.10. Assume the same notation as in Definition 8.1. Fix $\vec{z} \in \mathbb{R}^k$ such that $z_1 > \cdots > z_k$. Suppose that $T_0 \in \mathbb{N}$ is sufficiently large so that for $T \geq T_0$ we have $\lambda^T_1 > \lambda^T_2 > \cdots > \lambda^T_k$, where $\lambda^T_i = \lfloor z_i \sqrt{T} + ptT \rfloor$ for $i = 1, \dots, k$ (to ease notation we suppress the dependence of $\lambda$ on $T$ in what follows). Setting $m = tT$ and $n = T - m$, define $B_\lambda(T)$ as in (8.28). We claim that the limit (8.29) holds.

Proof. Let us write out the two $k \times k$ matrices, indexed by $1 \leq i, j \leq k$, that appear in the definition of $B_\lambda(T)$. Then from Lemma 8.8 we obtain asymptotic expansions for their entries, where the constants in the big $O$ notation are uniform as the $z_i$ vary over compact subsets of $\mathbb{R}$.
Proof. (of Proposition 8.2 when $\vec{a}, \vec{b} \in W^\circ_k$) Let us fix $\vec{z} \in W^\circ_k$ and define $\lambda^T$ as in Lemma 8.10. We also let $x^T_i$ and $y^T_i$ be sequences of integers satisfying the scaling of Definition 8.1. By (8.29), the (non-negative) limit of $B_\lambda(T)$ is, up to positive constant factors, $\det\big[e^{c_1(t,p) a_i z_j}\big]_{i,j=1}^k \cdot \det\big[e^{c_2(t,p) b_i z_j}\big]_{i,j=1}^k$. On the other hand, when the entries of $\vec{a}, \vec{b}$ are distinct, $H(\vec{z})$ is given by the same product of determinants. The last two statements imply that $H(\vec{z}) \geq 0$, and from Lemma 8.11 we have $H(\vec{z}) \neq 0$, so that $H(\vec{z}) > 0$ for $\vec{z} \in W^\circ_k$. If $\vec{z} \in W_k \setminus W^\circ_k$, then $z_i = z_j$ for some $i \neq j$, and then we see that $H(\vec{z}) = 0$, since the matrices in the determinants in the equation above for $H(\vec{z})$ have equal $i$-th and $j$-th columns, which makes the determinants vanish. This proves the first two statements in the proposition.
To prove the third statement, observe that by the continuity and non-negativity of $H(\vec{z})$ and the fact that it is strictly positive on the open set $W^\circ_k$, we know that $Z_c \in (0, \infty]$, and so we only need to prove that $Z_c < \infty$. Using the formula $\det A = \sum_{\sigma \in S_k} \operatorname{sgn}(\sigma) \prod_{i=1}^k A_{i,\sigma(i)}$ and the triangle inequality, we see that $H(\vec{z})$ is bounded by the right side of (8.37), where $C = C_1 + C_2$. Since the right side of (8.37) is integrable (because of the square in the exponential), we conclude that $H(\vec{z})$ is also integrable by domination, and so $Z_c < \infty$ as desired.
8.4. Proof of Proposition 8.3 for $\vec{a}, \vec{b} \in W^\circ_k$. For clarity we split the proof into several steps.

Step 1. In this step we prove that $Z_c$ from Proposition 8.2, in the case when $\vec{a}, \vec{b}$ have distinct entries, satisfies the equation (8.38).
Let $B_\lambda(T)$ be as in Lemma 8.10 for $\lambda \in W_k$, with $\vec{x}^T, \vec{y}^T$ as in the statement of the proposition. It follows from Lemma 8.7 that (8.39) holds, where we recall that $m = tT$ and $n = T - m$. Taking the $T \to \infty$ limit in (8.39) and using (8.32), we obtain (8.40). For $\lambda \in W_k$ and $T \in \mathbb{N}$ we define $Q_\lambda(T)$ to be the cube

$$Q_\lambda(T) = \big[\lambda_1 T^{-1/2} - pt\sqrt{T},\ (\lambda_1 + 1) T^{-1/2} - pt\sqrt{T}\big) \times \cdots \times \big[\lambda_k T^{-1/2} - pt\sqrt{T},\ (\lambda_k + 1) T^{-1/2} - pt\sqrt{T}\big),$$

and note that $Q_\lambda(T)$ has Lebesgue measure $T^{-k/2}$. In addition, we define the step functions $f_T$ through $f_T(\vec{z}) = T^{k/2} \cdot \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{\mathrm{avoid,Ber}}\big(L_1(m) = \lambda_1, \dots, L_k(m) = \lambda_k\big)$ for $\vec{z} \in Q_\lambda(T)$, so that $\int_{\mathbb{R}^k} f_T(\vec{z})\, d\vec{z} = 1$, where $d\vec{z}$ represents the usual Lebesgue measure on $\mathbb{R}^k$.
In view of (8.29) we know that for almost every $\vec{z} = (z_1, \dots, z_k) \in \mathbb{R}^k$ the limit (8.43) holds; its right side carries the Gaussian factors $\exp\left(-\frac{c_1(t,p) a_i^2 + c_2(t,p) b_i^2}{2}\right)$. We claim that there exists a non-negative integrable function $g$ on $\mathbb{R}^k$ such that, if $T$ is large enough,

(8.44) $|f_T(z_1, \dots, z_k)| \leq |g(z_1, \dots, z_k)|$.

We will prove (8.44) in Step 2 below. For now we assume its validity and conclude the proof of (8.38).
From (8.43) and the dominated convergence theorem with dominating function $g$ as in (8.44), we may pass to the limit under the integral; the resulting identity, whose integrand contains the factors $e^{-\frac{c_1(t,p) a_i^2 + c_2(t,p) b_i^2}{2}}\, d\vec{z}$, is precisely (8.38).
Step 2. In this step we exhibit an integrable function $g$ that satisfies (8.44). Let us fix $\lambda \in W_k$. If $\lambda_i \geq x^T_i + m + 1$ or $\lambda_i < x^T_i$ for some $i \in \{1, 2, \dots, k\}$, we know that $B_\lambda(T) = 0$; moreover,

(8.47) $|\lambda_i - x^T_j + j - i| \leq (1+p)m$ for all $i, j \in \{1, \dots, k\}$ whenever $B_\lambda(T) \neq 0$.

To see the latter, suppose that there exist $i, j$ such that $(1+p)m < |\lambda_i - x^T_j + j - i|$. When $T$ is sufficiently large, this inequality implies $\lambda_i - x^T_i \notin [0, m]$, so that $B_\lambda(T) = 0$; a similar argument applies to $y^T_i - \lambda_j + j - i$, which justifies (8.47). From the definition of $B_\lambda(T)$ we know that

$$B_\lambda(T) = C_T \cdot \det\big[E(\lambda_i - x^T_j + j - i,\, m)\big]_{i,j=1}^k \cdot \det\big[E(y^T_i - \lambda_j + j - i,\, n)\big]_{i,j=1}^k,$$

where

$$E(N, n) = e_N(1^n) \cdot \exp\left( -N \log \frac{1-p}{p} + n \log(1-p) + \frac{1}{2} \log n \right),$$

and

(8.48) $C_T = (\sqrt{2\pi})^k (p(1-p))^{k/2} \cdot \exp\big( k \log T - (k/2) \log n - (k/2) \log m \big)$.

Notice that $C_T$ is uniformly bounded for all $T$ large enough, because $k \log T - (k/2)\log n - (k/2)\log m = -\frac{k}{2} \log \frac{mn}{T^2} \to -\frac{k}{2} \log\big(t(1-t)\big)$ and the $O(T^{-1})$ correction is uniformly bounded.
In particular, we see that if $\vec{z} \in \mathbb{R}^k$, then either $\vec{z} \notin Q_\lambda(T)$ for every $\lambda \in W_k$, in which case $f_T(\vec{z}) = 0$, or $\vec{z} \in Q_\lambda(T)$ for some $\lambda \in W_k$, in which case (8.51) implies

(8.52) $f_T(\vec{z}) \leq C \exp\big( -c \|\vec{z}\|^2 \big)$,

where $C, c > 0$ depend on $p, t, k$ but not on $T$, provided that $T$ is sufficiently large. We finally see that (8.44) holds with $g$ equal to the right side of (8.52), which is clearly integrable.
Step 3. Our work in Steps 1 and 2 implies that the density $\rho(\vec{z})$, which we want to prove is the weak limit of $Z^T$, has the form given in (8.53). We fix a compact set $K \subset W^\circ_k$, and for $\vec{z} \in K$ we define $\lambda^T(\vec{z}) \in W_k$ through $\lambda^T_i(\vec{z}) = \lfloor ptT + z_i T^{1/2} \rfloor$ for $i = 1, \dots, k$. In this step we prove that

(8.54) $\lim_{T \to \infty} T^{k/2} \cdot \mathbb{P}^{0,T,\vec{x}^T,\vec{y}^T}_{\mathrm{avoid,Ber}}\big( L^T_1(m) = \lambda^T_1(\vec{z}), \dots, L^T_k(m) = \lambda^T_k(\vec{z}) \big) = \rho(\vec{z}).$

We can now let $\epsilon \to 0+$ and $n \to \infty$ above and apply the monotone convergence theorem to conclude that the right side converges to $\int_U \rho(\vec{z})\, d\vec{z}$. Here we use that $\rho$ is continuous and non-negative. Doing this brings us to (8.56), and thus we conclude the statement of the proposition.
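For $k = 1$ the convergence (8.54) reduces to a local limit theorem for a single Bernoulli bridge, where everything is explicit: $\mathbb{P}(L(m) = \lambda) = \binom{m}{\lambda - x}\binom{n}{y - \lambda}/\binom{T}{y - x}$, and (for $a = b = 0$) the limiting density is a centered Gaussian with variance $t(1-t)p(1-p)$. The following check is illustrative only, with parameter choices of ours.

```python
from math import lgamma, log, exp, pi, sqrt

def log_binom(n, k):
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

p, t, T = 0.5, 0.5, 4000
m = int(t * T); n = T - m
x0, y0 = 0, round(p * T)                 # bridge from 0 to roughly pT
z = 0.3                                  # a fixed rescaled location
lam = int(p * t * T + z * sqrt(T))
z_eff = (lam - p * t * T) / sqrt(T)      # z after rounding lam to an integer
log_pmf = log_binom(m, lam - x0) + log_binom(n, y0 - lam) - log_binom(T, y0 - x0)
sigma2 = t * (1 - t) * p * (1 - p)       # limiting variance for k = 1
density = exp(-z_eff ** 2 / (2 * sigma2)) / sqrt(2 * pi * sigma2)
print(sqrt(T) * exp(log_pmf), density)   # close for large T
```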
We now turn to the proof of Proposition 8.2.
Proof. (of Proposition 8.2) For clarity we split the proof into two steps.
We note that if $|\sigma| < u$, then there exist $i \in \{1, \dots, p\}$ and $j_1, j_2 \in \{1, \dots, m_i\}$ such that $j_1 \neq j_2$ and $\sigma_{m_1 + \cdots + m_{i-1} + j_1} = \sigma_{m_1 + \cdots + m_{i-1} + j_2}$. The latter implies that $D^\sigma f(\vec{a}, \vec{z}) = 0$, since by (8.68) the latter is the determinant of a matrix with two equal rows. An analogous argument shows that the surviving multi-indices are exactly those whose $i$-th block is a permutation $\sigma_i \in S_{m_i}$ (the permutation group of $\{0, 1, \dots, m_i - 1\}$). Using the multi-linearity of the determinant, we obtain (8.69) for all such $\sigma$. Summing over all $\sigma$ we conclude (8.70), where in deriving the above we used the formula for a Vandermonde determinant, cf. [21, p. 40]. Combining the latter with (8.66) and (8.67), we conclude (8.64).

Step 2. In this step we conclude the proof of the proposition. In view of (8.64) and the fact that $H^+(\vec{z}) > 0$ for $\vec{z} \in W^\circ_k$ (we proved this in Section 8.3), we conclude that $H(\vec{z}) \geq 0$ for $\vec{z} \in W^\circ_k$. Also, by Lemma 8.11 we know that $H(\vec{z}) \neq 0$ for $\vec{z} \in W^\circ_k$, and so indeed $H(\vec{z}) > 0$ for $\vec{z} \in W^\circ_k$. Furthermore, we know that $H(\vec{z}) = 0$ for $\vec{z} \in W_k \setminus W^\circ_k$, since the determinants in the definition of $H(\vec{z})$ vanish due to equal columns when $\vec{z} \in W_k \setminus W^\circ_k$. Finally, we observe that by (8.69) and (8.70) there exist positive constants $D, d > 0$, independent of $\epsilon$ provided it is sufficiently small, such that the bound (8.71) holds, where as usual $\|\vec{z}\|^2 = \sum_{i=1}^k z_i^2$. In view of (8.71) and the dominated convergence theorem, we conclude that $H(\vec{z})$ is integrable, and since it is continuous and positive on $W^\circ_k$ we conclude that $Z_c \in (0, \infty)$ as desired.
The above proof essentially shows the following statement.
Corollary 8.12. Let $\vec{a}, \vec{b} \in W_k$. Let $\rho^\pm$ be as in (8.62), and let $\rho$ be as in Proposition 8.2 for the two vectors $\vec{a}, \vec{b}$. Then $\rho^\pm$ converge weakly to $\rho$ as $\epsilon \to 0+$.
Proof. We use the same notation as in the proof of Proposition 8.2 above. As the proofs are analogous, we only show that $\rho^+$ converges weakly to $\rho$. We claim that for any Borel set $B \subset W_k$ the convergence (8.72) holds. Assuming the validity of (8.72), we see that for any Borel set $B \subset W_k$ we have $\rho^+(B) \to \rho(B)$ as $\epsilon \to 0+$, which proves the weak convergence we wanted. Thus we only need to show (8.72).
In view of (8.64) we know that $\epsilon^{-u-v} H^+(\vec{z})$ converges pointwise to $C(\vec{m}) \cdot C(\vec{n}) \cdot H(\vec{z})$, and then (8.72) follows from the dominated convergence theorem once we invoke (8.71).

8.6. Proof of Proposition 8.3 for any $\vec{a}, \vec{b} \in W_k$. We fix the same notation as in Section 8.5 and suppose that $\epsilon > 0$ is sufficiently small so that $\vec{a}^\pm, \vec{b}^\pm \in W^\circ_k$. To prove the proposition it suffices to show that for any $\vec{c} \in \mathbb{R}^k$ we have (8.73); by the monotone coupling Lemma 3.1 we have the two-sided bound (8.74). Taking the limit as $T \to \infty$ in (8.74) and applying our result from Section 8.4, we obtain (8.75). Taking the $\epsilon \to 0+$ limit in (8.75) and invoking Corollary 8.12, we arrive at (8.73). This suffices for the proof.