Intermediate disorder limits for multi-layer semi-discrete directed polymers

We show that the partition function of the multi-layer semi-discrete directed polymer converges in the intermediate disorder regime to the partition function for the multilayer continuum polymer introduced by O’Connell and Warren in [24]. This verifies, modulo a previously hidden constant, an outstanding conjecture proposed by Corwin and Hammond [5]. A consequence is the identification of the KPZ line ensemble as logarithms of ratios of consecutive layers of the continuum partition function. Other properties of the continuum partition function, such as continuity, strict positivity and contour integral formulas to compute mixed moments, are also identified from this convergence result.

In the case d = 1, Z β 1 is a solution to the stochastic heat equation with multiplicative white noise with delta initial data [1]. Moreover, Z β 1 was shown to be the universal scaling limit of the partition function for discrete directed polymers in the intermediate disorder regime introduced by Alberts, Khanin and Quastel [1]. In this scaling limit the strength of the random environment is scaled to zero in a critical way as the size of the discrete system grows to infinity. Similarly, when d > 1, Z β d was shown to be the universal limit in the intermediate disorder regime for discrete directed polymers consisting of d non-intersecting simple symmetric random walks in [6].
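For reference, the stochastic heat equation with multiplicative white noise mentioned above takes the following standard form (this display is a standard formulation, recalled here for the reader, rather than a reproduction of equation (1.1)):

```latex
\partial_t \mathcal{Z}(t,z) \;=\; \tfrac{1}{2}\,\partial_z^2 \mathcal{Z}(t,z) \;+\; \beta\,\xi(t,z)\,\mathcal{Z}(t,z),
\qquad \mathcal{Z}(0,z) \;=\; \delta_0(z),
```

where ξ denotes 1+1 dimensional space-time white noise and the product is interpreted in the Itô sense.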
Another random polymer model that has received recent attention is the O'Connell-Yor semi-discrete directed polymer introduced in [25], where the polymers are in continuous time but in discrete space. It was shown in [23] that the multi-layer version of this, which involves several non-intersecting polymer paths, has an algebraic structure related to Whittaker functions and the quantum Toda lattice. This multi-layer semi-discrete partition function is the main object of study in this paper and is defined precisely below.

Definition 1.1. An up/right path in R × N is an increasing path which either proceeds to the right or jumps up by exactly one unit. For any τ > 0 and any x ∈ N, each sequence 0 < τ 1 < . . . < τ x < τ is associated to an up/right path X (τ ,x ) (·) which travels from the lattice point (0, 1) to (τ , x + 1), and which jumps between the points (τ i , i) and (τ i , i + 1) for 1 ≤ i ≤ x and otherwise always travels to the right. The list τ ∈ ∆ x (0, τ ) ⊂ R x can be thought of as the "jump times" of the up/right path; the list of jump times is in bijection with the up/right path X (τ ,x ) (·) and we therefore conflate the two notions, with the convention that the paths are càdlàg.
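A minimal computational sketch of Definition 1.1 (the function names are ours, not the paper's): the jump times are sorted uniform points, and the associated càdlàg up/right path starts at level 1 and increases by one unit at each jump time.

```python
import bisect
import random

def sample_jump_times(tau_final, x, rng=random.Random(0)):
    """Sample jump times 0 < tau_1 < ... < tau_x < tau_final, i.e. sorted
    i.i.d. uniforms (the normalized Lebesgue measure on the simplex)."""
    return sorted(rng.uniform(0.0, tau_final) for _ in range(x))

def path_value(jump_times, s):
    """Height of the cadlag up/right path at time s: the path starts at
    level 1 and jumps up by one at each jump time (right-continuous)."""
    return 1 + bisect.bisect_right(jump_times, s)
```

For example, with jump times [0.2, 0.5] the path sits at level 1 on [0, 0.2), at level 2 on [0.2, 0.5), and at level 3 afterwards; right-continuity means the post-jump value is taken at each jump time.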
Let {B i } i∈N be an infinite family of independent standard Brownian motions on a probability space (Ω, F, P). Define the energy of the up/right path X (τ ,x ) (·) to be the following random variable on the probability space Ω: We can think of X (τ ,x ) (·) as a random up/right path in a natural way by taking the probability measure on the set of jump times τ ∈ ∆ x (0, τ ) ⊂ R x which is proportional to the Lebesgue measure on R x . If we denote by E the expectation with respect to this measure, we define for any β > 0 the directed polymer partition function Z β 1 (τ , x ), which is a random variable on the probability space Ω.

Figure 1: d non-intersecting up/right paths X (τ ,x ) started from X i (0) = i and ended at X i (τ ) = x + i for 1 ≤ i ≤ d. In this example d = 3.
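The partition function just described, in the single-path case, can be sketched as a Monte Carlo estimator. This is a hedged illustration with our own helper names: the random environment is passed in as a pluggable increment function, and with the environment switched off every sample contributes exp(0) = 1, so the normalized partition function is exactly 1.

```python
import math
import random

def mc_partition_function(beta, tau_final, x, env_increment, n_samples=1000,
                          rng=random.Random(1)):
    """Monte Carlo estimate of Z_beta(tau_final, x) = E[exp(beta * energy)],
    where E averages over uniform jump times.  `env_increment(i, a, b)` should
    return B_i(b) - B_i(a) for the i-th environment Brownian motion."""
    total = 0.0
    for _ in range(n_samples):
        taus = sorted(rng.uniform(0.0, tau_final) for _ in range(x))
        pts = [0.0] + taus + [tau_final]
        # energy = sum over levels i of the environment increment collected
        # while the path sits at level i + 1, i.e. on [pts[i], pts[i+1]]
        energy = sum(env_increment(i, pts[i], pts[i + 1]) for i in range(x + 1))
        total += math.exp(beta * energy)
    return total / n_samples

zero_env = lambda i, a, b: 0.0  # environment switched off: Z = 1 exactly
```

In practice `env_increment` would be built from sampled Brownian motions; the deterministic environments above are only for checking the bookkeeping of the jump-time intervals.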
We generalize this for d > 1 by taking multiple up/right paths as follows. Let X (τ ,x ) (·) = (X 1 , . . . , X d ) be a collection of d up/right paths which are non-intersecting. More specifically, the non-intersecting condition that we require is that X i (τ ) < X j (τ ) for all i < j and for all times τ ∈ (0, τ ). Notice now that all the jump times for the d up/right paths taken together can be thought of as a vector in R dx . We can think of X (τ ,x ) (·) as a random process by taking the probability measure on this list of jump times proportional to the Lebesgue measure on the subset of R dx of allowed configurations. (The process X (τ ,x ) defined in this way also has a natural interpretation as certain Poisson walkers conditioned to be non-intersecting: see Remark 2.6.) Figure 1 shows a typical realization of these paths.
Denoting by E the expectation with respect to this measure, we define the partition function: In the case d = 1, it is shown in [11] that the semi-discrete partition function Z β 1 converges to Z β 1 from equation (1.1) in the intermediate disorder regime. An expository presentation of this proof is given in [4]. The main result of this article, Theorem 1.2, is to extend this to d > 1: a convergence result for semi-discrete polymers consisting of d non-intersecting up/right paths that start and end grouped together.

Theorem 1.2. Fix d ∈ N, t > 0, z ∈ R, and β > 0. Recall for any τ > 0, x ∈ N, that Z β d (τ , x ) denotes the semi-discrete partition function for d non-intersecting up/right paths as in Definition 1.1. For any sequence β N with N 1 4 β N → β as N → ∞, we have the following convergence in distribution as N → ∞:

The main technical tool in the proof of Theorem 1.2 is the L 2 convergence of the k-point correlation functions of the non-intersecting up/right paths to those of the non-intersecting Brownian bridges under the diffusive scaling (τ, x) ≈ N t, N t + √ N z . This is encapsulated in the following convergence result:

Theorem 1.5. For any t > 0, z ∈ R, and k ∈ N, let ψ

The argument which shows that Theorem 1.5 implies Theorem 1.2 is carried out in Section 2 and uses the theory of Gaussian Hilbert spaces to directly connect the semi-discrete polymer and the continuum polymer. This is different from the method of polynomial chaos series developed in [3], which was used as an intermediate step for the convergence of discrete non-intersecting random walks established in [6].
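Concretely, the intermediate disorder scaling in Theorem 1.2 couples the inverse temperature and the endpoint as follows (a schematic restatement of the scaling just described; the floor in the spatial coordinate is our notational choice):

```latex
\beta_N \;=\; \beta\,N^{-1/4} + o\!\left(N^{-1/4}\right),
\qquad
(\tau', x') \;=\; \Bigl(N t,\; \bigl\lfloor N t + \sqrt{N}\,z \bigr\rfloor\Bigr),
\qquad N \to \infty,
```

so the disorder strength vanishes like N^{-1/4} while the endpoint sits on the diffusive scale around the characteristic direction x′ ≈ τ′.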
The proof of Theorem 1.5 is carried out in Section 2.7, with technical lemmas deferred to later sections. The proof goes by extending the methods introduced in [6] which were used to prove a similar L 2 convergence result for discrete non-intersecting random walks. An additional complication that must be handled here is due to the exponentially rare event that a continuous-time random process takes many steps in a short amount of time. We extend the method of exponential moment control used in [6] in order to handle this type of rare event. Another complication is that the discrete Tanaka formula used in [6] does not apply to the continuous-time random processes studied here. To handle this, it is necessary to first "de-Poissonize" the processes before proving certain bounds, and then "re-Poissonize" to get back to the original model; this is carried out in Section 4.4.
We also establish the limits of semi-discrete directed polymers conjectured in Sections 2.3.3 and 2.3.4 of [5]. (Note that the numbering of equations in [5] refers to the published version and may differ from the latest arXiv version of that paper.)

Definition 1.6 (Following Definitions 3.1 and 3.5 of [5]). For each M > 0, t > 0 define C(M, t, z) Let D    (1.6) The Lebesgue measure of this set is explicitly calculated in Lemma 2.20.

Corollary 1.7. For any fixed t > 0 and z ∈ R, we have the convergence as M → ∞: Moreover, thinking of the LHS and RHS of equation (1.8) as stochastic processes indexed by d ∈ N and z ∈ R, the convergence holds for the finite dimensional distributions of these processes and for the p-th moment of these processes for any p ≥ 1.

Remark 1.8. The appearance of the constant c n,t , which is absent from [5], does not affect any of the analysis of the KPZ equation carried out there, since these applications are based on studying H t 1 , defined below in Corollary 1.9, for which the constant c 1,t ≡ 1 has no effect.
(1.9) where we take the convention Z 1 0 ≡ 1 and the constant c n,t is as in equation (1.7). Then the line ensemble {H t n (z)} n∈N,z∈R satisfies the requirements of being a KPZ t line ensemble as defined in Theorem 2.15 in [5].
Proof. In [5], the KPZ line ensemble was constructed by showing tightness and then extracting a subsequential limit from rescaled versions of the process Z 1,M d (see Theorem 3.9 and Lemma 5.1 in [5]). Corollary 1.7 identifies the finite dimensional distributions of this process, thereby showing that all the subsequential limits are the same, and identifying this unique limit. Following the construction of the KPZ line ensemble in Section 5 of [5] gives H t n as in equation (1.9).
Remark 1.10. The main result, Theorem 2.15, of [5] was to show the existence of a line ensemble which satisfies the requirements of being a KPZ line ensemble. Corollary 1.9 gives an explicit formula for the line ensemble constructed in [5] in terms of the partition functions from [24] by the definition in equation (1.9). It is reasonable to believe that this line ensemble is the unique line ensemble which satisfies the required properties of being a KPZ line ensemble, but this is currently unproven.

Corollary 1.11. For fixed t > 0 and d ∈ N, the continuum partition function Z 1 d (t, z) is almost surely positive and continuous as a function of z ∈ R.

Proof. This follows from Corollary 1.9 since the KPZ line ensemble H t n from [5] is continuous.
Remark 1.12. The strict positivity and continuity of Z 1 d was first proven in [19] by different methods. Note that the result in [19] is more powerful since it also proves continuity as t varies.

([5]). Then, for fixed d ∈ N, t > 0, the stochastic process H t d (z) + z 2 /(2t) indexed by z ∈ R is stationary.
Corollary 1.15. For any t > 0, k ∈ N, a list of indices r α ∈ N, 1 ≤ α ≤ k, and a list of coordinates x 1 < . . . < x k , the joint moments of the continuum random polymer are given by the following explicit contour integrals: where the constants c rα,t are as in equation (1.7), the z α,j -contour is along C α + ıR for any constants C 1 > C 2 + 1 > C 3 + 2 > . . . > C k + (k − 1), for all j ∈ {1, . . . , r α }, and E denotes expectation with respect to the random environment. (Note that because of the ordering of the contours, this formula only holds when x 1 ≤ . . . ≤ x k as in the hypothesis.)

Proof. Proposition 5.4.6 in [2] explicitly calculates the contour integral on the RHS of equation (1.10) as the M → ∞ limit of the joint moments of the process Z t,M d defined in Corollary 1.7. Since the convergence in Definition 1.6 holds for finite dimensional distributions and moments, this establishes equation (1.10).
Remark 1.16. The result of Corollary 1.15 was originally conjectured in Remark 5.4.7 of [2]. Note that the constants c −1 rα,t are absent from the original formulas in Remark 5.4.7 of [2] because, just as in Remark 1.8, these constants were not known to appear in the convergence at the time. Corollary 1.15 also validates the use of these moment formulas in the physics literature, see [7]. (Only the r α = 1 formulas were used there, for which the missing constant has no effect since c 1,t ≡ 1.)

Outline
Subsections 2.1 and 2.2 contain the precise definitions of the stochastic processes used throughout the paper. Subsections 2.3 and 2.4 contain still more definitions and lemmas that reduce the proof of Theorem 1.2 to the convergence of certain chaos series; this proof is given in Subsection 2.5. Subsection 2.7 contains the proof of the main technical result, Theorem 1.5, with important estimates, Propositions 2.24, 2.25, 2.26 and 2.27, deferred to later sections. Subsection 2.6 contains the asymptotic analysis needed to prove Corollary 1.7. Propositions 2.24 and 2.25 are proven in Section 3 using methods involving orthogonal polynomials. Propositions 2.26 and 2.27 are proven in Section 5 using the machinery of overlap times and weak exponential moment control developed in Section 4.
We will use the prime superscript ′ to denote quantities related to the endpoints of polymers; for example (t′, z′) denotes the endpoint of non-intersecting Brownian bridges, τ′ denotes the final time for non-intersecting up/right paths, and x′ denotes the vertical displacement of each up/right path.
For convenience of notation, we will conflate k-tuples of space-time coordinates with their list of time and space coordinates, i.e. {(t 1 , z 1 ), . . . , (t k , z k )} with ( t, z). In the same spirit, we use the following shorthand for integrals: We also use a similar shorthand for k-fold stochastic integrals against a 1 + 1 dimensional white noise environment ξ(t, z), namely: For the semi-discrete coordinates that appear (where time is continuous but space is discrete) we use the following notation:

We use the notation P, E to refer to the probability measure and its expectation for the non-intersecting random walks defined precisely in Definitions 2.4 and 2.5. In contrast, we will use the probability space (Ω, F, P) for the disordered environment that our random walks go through and E for the expectation with respect to this random environment. We will also use the L 2 (P) norm for mean-zero random variables on this probability space. We use d ∈ N to denote the number of Brownian motions or up/right paths in the non-intersecting ensembles. (See Section 3 of [29] for details on this h-transform.) We will use the following fact about this process: for any continuous function f :
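One consistent way to write out the elided integral shorthand over the time simplex (our reconstruction of the convention, matching the notation ∆ k (0, t ) used throughout):

```latex
\Delta_k(0,t) \;=\; \bigl\{\, 0 < t_1 < t_2 < \cdots < t_k < t \,\bigr\},
\qquad
\int_{\Delta_k(0,t)} f(\vec t\,)\, d\vec t \;:=\; \int_0^t\!\int_{t_1}^t\!\cdots\int_{t_{k-1}}^t f(t_1,\dots,t_k)\; dt_k \cdots dt_2\, dt_1 .
```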

Non-intersecting Brownian motions and bridges
where B(t) are d independent standard Brownian motions.
The process D (t ,z ) is constructed by starting with the process D(t) ∈ W d from Definition 2.1 and applying the Markovian construction of a bridge process. (See Proposition 1 of [10] and Section 2 of [24] for more details.) We will also need the k-point correlation functions for this process. These are defined by:

Proposition 2.3 (Following [24]). For any z ∈ R, t > 0, k ∈ N, the function ψ (t ,z ) k ∈ L 2 (∆ k (0, t ) × R k ). Moreover, for any β > 0, the following series is absolutely convergent:

Non-intersecting Poisson processes and non-intersecting Poisson bridges
Definition 2.4 (Non-intersecting Poisson processes). We denote by X(τ ) ∈ N d , τ ∈ (0, ∞), an ensemble of d non-intersecting Poisson processes and use E x 0 [·] to denote the expectation over this ensemble started from the initial condition X(0) = x 0 . This is the Markov process obtained by conditioning d independent rate-one Poisson processes not to intersect by applying a Doob h-transform with the Vandermonde determinant. The transition probabilities are therefore given in terms of q τ ( x, y), the probability for d i.i.d. Poisson processes to go from x to y in time τ without intersections. By the Karlin-McGregor theorem, introduced in [15], this probability is given by a determinant.

Definition 2.5 (Non-intersecting Poisson bridges). The measure on these processes is the conditional measure one gets by starting d independent Poisson processes from δ d (0) and then conditioning on the positive probability event that there have been no intersections between them for all τ ∈ (0, τ ) and that they end exactly at δ d (x ) at time τ . By the Karlin-McGregor theorem, the transition probabilities for this Markov process are given explicitly, with Radon-Nikodym derivative given by
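The Karlin-McGregor determinant described above is easy to check numerically. The following sketch (helper names ours) computes q τ ( x, y) = det[ p τ (y j − x i )] i,j for rate-one Poisson transition weights; for d = 1 it reduces to the Poisson probability mass function, and for a strictly ordered configuration it is a genuine positive probability, smaller than the unconditioned product.

```python
import math

def poisson_pmf(k, tau):
    """Transition weight of a rate-one Poisson process over time tau."""
    if k < 0:
        return 0.0
    return math.exp(-tau) * tau**k / math.factorial(k)

def det_small(m):
    """Determinant by Laplace expansion along the first row
    (fine for the small values of d used here)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1)**j * m[0][j]
               * det_small([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def karlin_mcgregor(x, y, tau):
    """q_tau(x, y) = det[ p_tau(y_j - x_i) ]_{i,j}: the probability that d
    independent rate-one Poisson processes go from the ordered configuration
    x to y in time tau without intersecting (Karlin-McGregor)."""
    return det_small([[poisson_pmf(yj - xi, tau) for yj in y] for xi in x])
```

For instance, for x = (0, 1, 2), y = (2, 3, 4) and τ = 1 the determinant evaluates to e^{-3}/144, strictly between 0 and the unconditioned product (e^{-1}/2)^3 = e^{-3}/8.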

Iterated stochastic integrals
In this section we will show how the partition function Z β d (τ , x ) can be identified as a chaos series of iterated stochastic integrals against Brownian motions.
where we recall the notation for semi-discrete sums from Section 1.4. Let P denote the probability w.r.t. non-intersecting Poisson bridges X (τ ,x ) described in Definition 2.5. Define the k-fold stochastic integral EI

Remark 2.8. Note that from the theory of iterated stochastic integrals (see e.g. Chapter 7 of [12]), we have the following Itô isometry between L 2 (P) and L 2 (∆ k (0, t ) × N k ) for these stochastic integrals (see e.g. Theorem 7.6 in [12]):

Lemma 2.9. For any τ > 0 and x ∈ N we have that

Proof. This holds since x + d

Proof. By using the Itô isometry from equation (2.1), and the inequality from Corollary 5.3, we can bound the L 2 (P) norm by the k-th moment of the overlap time random variable which is specified in Definition 4.1: The change of the order of sum and expectation is justified by the monotone convergence theorem since the overlap time is always non-negative. The result then follows.

Lemma 2.11. Let E denote the expectation over non-intersecting Poisson bridges X (τ ,x ) described in Definition 2.5. Recall the definition of I k and EI k from Definition 2.7. We have the following equality (as random variables in L 2 (P)):

Proof. The result amounts to an interchange of the order of the stochastic integration and the expectation E. This is justified by a stochastic Fubini theorem for multiple stochastic integrals: see Theorem 5.13.1 in [26]. The required integrability condition is clear in this case since the integrand, k i=1 1 x i ∈ X(τ i ) , is non-negative and bounded above by 1.

Lemma 2.12. We have the following equality (as random variables in L 2 (P)):

Proof. First notice that the infinite series from (2.2) is guaranteed to converge by the estimate from Lemma 2.10. To see the equality, we will show that given any ε > 0, the difference between the LHS and the RHS of equation (2.2) has an L 2 (P) norm less than ε. Given such an ε > 0, we first find an M ∈ N so that E ∞ k=M β k I k X (τ ,x ) (τ ) < ε.
This can be achieved since we have, by an application of Jensen's inequality, Tonelli's theorem, and the fact that the individual terms I k are orthogonal in L 2 (E): Thus we can find such an M ∈ N to bound this above by ε, since we recognize this as the tail of an absolutely convergent series by an application of Lemma 2.9. A similar result holds for the terms EI k , since these are orthogonal in L 2 (P) and since the sum is also convergent by Lemma 2.10. Once such an M is chosen, we have by the triangle inequality and Lemma 2.11 applied to the first M terms that the required bound holds. Since this holds for any ε > 0, this completes the proof.
Proof. Recall the definition of the energy of the i-th line H(X i ). This gives a single stochastic integral, and the Wick exponential of a single stochastic integral is given by the chaos series (see Theorem 7.3 from [12]): The desired result then follows by application of the interchange of infinite sum and expectation from Lemma 2.12.
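The chaos series for the Wick exponential of a single stochastic integral can be sanity-checked numerically. For a single Brownian motion one has the standard identity I k = t^{k/2} He k (B t /√t)/k! for the k-fold iterated integral (a standard fact, see e.g. [12]; the code below is our illustration, not the paper's), so the chaos series Σ k β^k I k collapses to the generating function of the probabilists' Hermite polynomials, namely exp(βB t − β²t/2).

```python
import math

def hermite_he(k, x):
    """Probabilists' Hermite polynomial He_k via the three-term recurrence
    He_{k+1}(x) = x He_k(x) - k He_{k-1}(x)."""
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def wick_exponential_series(beta, t, b, n_terms=40):
    """Truncated chaos series sum_k beta^k I_k evaluated on a path with
    B_t = b, using I_k = t^{k/2} He_k(b / sqrt(t)) / k!  (single-integral
    case); the sum should match exp(beta*b - beta^2 * t / 2)."""
    return sum(beta**k * t**(k / 2) * hermite_he(k, b / math.sqrt(t))
               / math.factorial(k) for k in range(n_terms))
```

For β = 0.7, t = 2 and B t = 1.3 the truncated series agrees with exp(βB t − β²t/2) to well beyond nine decimal places.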

1+1 dimensional white noise
In this section we will couple the semi-discrete partition function Z β d to the continuum polymer Z β d . This is achieved by constructing the Brownian motions that define Z β d from integrals of the white noise environment (see [12] for the background on these integrals). This coupling approach is also used in [4] in the proof of convergence of the single-line (i.e. d = 1) semi-discrete polymer to the continuum random polymer.
and denote by S (N ) the image of (0, ∞) × N through this map: See Figure 2 for an illustration of this map. Also define the intervals Any function f : S (N ) k → R can be extended to a function f : Note that, since f is constant on these cells, we have to the white noise field ξ by the prescription that:

(2.4)
Proof. By the Itô isometry, since the area of the region , we can make the following variance computation: By properties of the 1+1 dimensional white noise, we also observe that the integral on the RHS of equation (2.4) defines a Gaussian random variable, that the increments for disjoint time intervals are independent, and that the process B (N ) x (·) admits a modification which has almost surely continuous sample paths. Hence it must be that B (N ) x (·) is a standard Brownian motion.

Definition 2.16. For t > 0 and z ∈ R, we will define the rescaled (and compensated) non-intersecting Poisson processes by: See Figure 2 for an illustration of these processes. Define the rescaled k-point correlation function for ψ and declaring that ψ to define the iterated stochastic integral EI

Proof. The identity is immediate from Definition 2.7 and Definition 2.16 using the fact that ψ
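The general device behind this coupling is worth recording: integrating white noise over a region of area t produces a centered Gaussian of variance t, so integrating over the cells of S (N ) swept out up to time t yields a standard Brownian motion. Schematically (our hedged reconstruction; the precise region A x (t) ⊂ S (N ) is the one specified by equation (2.4) and Figure 2):

```latex
B^{(N)}_x(t) \;:=\; \iint \mathbf{1}\bigl\{(s,y) \in A_x(t)\bigr\}\, \xi(ds, dy),
\qquad
\operatorname{Var}\!\left(B^{(N)}_x(t)\right) \;=\; \operatorname{Leb}\bigl(A_x(t)\bigr) \;=\; t,
```

by the Itô isometry; independence of increments follows since disjoint time intervals correspond to disjoint regions of the plane.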

Convergence of chaos series -Proof of Theorem 1.2
In addition to the L 2 convergence result of Theorem 1.5, and the set-up of the coupling from the previous subsection, we will need the following proposition: Moreover, for any γ > 0 and any ε > 0, there exists N γ,ε so that we have that The proof of Proposition 2.19 is deferred to Section 5, where it is proven using tools developed in Section 4.
Proof. (Of Theorem 1.2) We will explicitly construct the coupling for which the convergence happens in L 2 (P); the convergence in distribution is an immediate consequence, and we will separately argue the convergence in L p (P) for p ≠ 2 afterwards. We present the proof only for fixed d ∈ N, t ∈ (0, ∞), z ∈ R, but the method of proof easily extends to finite dimensional distributions of the process by considering finite linear combinations and using the Cramér-Wold device.
Couple the random variables Z β d and Z β N d by taking the Brownian motions B to be as defined in the coupling from Lemma 2.15, and define for each k ∈ N the k-fold stochastic integrals: The desired result is hence reduced to the convergence as N → ∞ of the chaos series in equation (2.6). It suffices to show the convergence in the simpler case when N 1 4 β N = β, since the hypothesis N 1 4 β N → β and Proposition 2.19 guarantee that the error made by this replacement can be made arbitrarily small.
Notice by the Itô isometry for 1+1 dimensional white noise that these stochastic integrals are naturally related to the correlation functions. It is clear then, for each fixed k ∈ N, that J (N ) k converges to J k in L 2 (P). To see the convergence of the full series, take any ε > 0, and use the convergence results of Proposition 2.19 and Proposition 2.3 along with the Itô isometry to find N β,ε ∈ N and M β,ε ∈ N so large that With this choice, we have finally, by the triangle inequality in L 2 (P) and the termwise

convergence of the first M terms, that the L 2 (P) norm of the difference is < 2ε.
Since this holds for any ε > 0, we have the desired convergence in L 2 (P).
The L 2 (P) convergence proven directly implies L p (P) convergence for 1 ≤ p ≤ 2. To see the convergence in L p (P) for p > 2, we first use the hypercontractive property of a fixed Wiener chaos to see that there is a constant c p so that the stochastic integrals J (N ) k and J k , which live in the k-th Wiener chaos, satisfy:
(2.8) Hence, the infinite series ∞ k=1 β k J k is seen to have finite L p (P) norm by Proposition 2.3, and by Proposition 2.19, we can find N p,β ∈ N large enough so that the infinite series from equation (2.6) has L p norm which is uniformly bounded: Since these norms are finite for any p, we now apply the Hölder inequality in the form , from which the L p (P) convergence follows from the L 2 (P) convergence and the uniform bound on the L 2(p−1) norm in equation (2.9).
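The hypercontractive estimate invoked here is the standard one for a fixed Wiener chaos (see e.g. [12]; stated in our notation as a reminder, with the explicit constant c p = √(p − 1)):

```latex
\| J \|_{L^p(\mathbb{P})} \;\le\; (p-1)^{k/2}\, \| J \|_{L^2(\mathbb{P})}
\qquad \text{for every } J \text{ in the } k\text{-th Wiener chaos and } p \ge 2,
```

so that the L p norms of J (N ) k and J k are controlled by the L 2 norms computed from the Itô isometry.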

Proof of Corollary 1.7
Lemma 2.20. Recall from Definition 1.6 that D (2.10)

Proof. The jump times of such a non-intersecting ensemble are in bijection with certain standard Young tableaux; counting these is a direct application of the hook-length formula (see Corollary 7.2.14 in [28] or [27]). Combining these gives the desired result.
We now use Stirling's formula to further simplify. With this definition, the limit N → ∞ is the same as the limit M → ∞. Notice also that lim N →∞ N 1 4 β N = 1 by this definition. Define also the shorthands τ . Observe the following limit as M → ∞: We now use the following general Gaussian scaling relation for Z β

L 2 convergence -Proof of Theorem 1.5
The main technical result needed in the proof of Theorem 1.2 was the L 2 convergence from Theorem 1.5. This is proven by a general strategy similar to the proof of Theorem 1.13 from [6], which was an analogous convergence result for non-intersecting simple random walks rather than non-intersecting Poisson processes. There are several additional complications in this case due to the fact that the processes here evolve in continuous time. The proof goes by dividing the set of space-time coordinates ∆ k (0, t ) × R k into four parts and analyzing the contribution to the L 2 norm on each one separately.
Definition 2.22. Fix any t > 0 and k ∈ N. For any η > 0, define the set S η ⊂ ∆ k (0, t ) by: For any parameters δ, η, M > 0, we subdivide ∆ k (0, t ) × R k into the following four sets: The set D 1 (δ, η, M ) can be thought of as the "typical" part of the space ∆ k (0, t ) × R k , while the sets D 2 (δ, η, M ), D 3 (δ) and D 4 (M ) can be thought of as "exceptional" sets. This subdivision is chosen in order to make D 1 (δ, η, M ) a compact set on which the function ψ (N ),(t ,z ) k has no singularities. This essentially reduces L 2 convergence on D 1 (δ, η, M ) to proving pointwise convergence on D 1 (δ, η, M ). All of the singularity/non-compactness issues occur on the exceptional sets D 2 (δ, η, M ), D 3 (δ) and D 4 (M ), where we will separately argue that they have a negligible contribution to the L 2 norm in (1.5). With this strategy in mind, the core of the proof of Theorem 1.5 is divided into Propositions 2.24, 2.25, 2.26 and 2.27; each Proposition handles one of these four sets. The main difference from [6] is that here the relevant function is the k-point correlation function for Poisson processes, while in [6] simple symmetric random walks were studied. Propositions 2.24 and 2.25 are proven in Section 3 using methods involving orthogonal polynomials. Propositions 2.26 and 2.27 are proven in Section 5 using the machinery of overlap times and weak exponential moment control developed in Section 4.

Determinantal kernels and orthogonal polynomials
In this section we prove Proposition 2.24 and Proposition 2.25 by using the determinantal structure of the non-intersecting processes. The methods used here are similar to those from Section 3 of [6]. In [6], non-intersecting simple symmetric random walk bridges, for which the Hahn orthogonal polynomials arise, were studied. Here we study non-intersecting Poisson bridges, for which the Krawtchouk orthogonal polynomials arise. The limiting object for both is the ensemble of non-intersecting Brownian bridges, which is related to the Hermite polynomials.

Determinantal kernel for non-intersecting Brownian bridges
We recall some useful definitions and facts about non-intersecting Brownian bridges which were given in detail in Section 3.1 of [6].
For z, z ∈ R and t, t ∈ (0, t ), define the kernel K (t ,0) (t, z); (t , z ) by: where p j (y), j ∈ N, y ∈ R, are the normalized Hermite polynomials: Finally, for any z ∈ R, we will define the kernel K (t ,z ) for non-intersecting Brownian bridges D (t ,z ) . We have that ψ

Determinantal kernel for non-intersecting Poisson bridges
Definition 3.3. The Krawtchouk polynomials are a family of orthogonal polynomials parametrized by the two parameters N ∈ N and p ∈ (0, 1) and given explicitly in terms of the hypergeometric function 2 F 1 by The first few polynomials are: See [16] for more details on the Krawtchouk polynomials. Fix τ > 0 and x ∈ N. For any τ > 0, x ∈ N, and any 0 x) which are defined in terms of the Krawtchouk polynomials with parameters depending on τ, x, τ , x by: (We will refer to R j and R̃ j without the superscripts for ease of notation whenever there is no ambiguity.) Finally, define the kernel K (t ,x ) P for pairs of space-time coordinates, and define the rescaled version of this, for a pair of space-time coordinates (t, z) ∈ (0, t ) × R, (t , z ) ∈ (0, t ) × R: for rescaled non-intersecting Poisson bridges X (N ),(t ,z ) . We have that ψ

Proof. It suffices to show that K (τ ,x ) P is the determinantal kernel for non-intersecting Poisson bridges that start at δ d (0) and end at δ d (x ), and the result for K (N ),(t ,z ) follows from the rescaling in the definitions of ψ. Assume now that the result holds for j. To prove the induction step, we compare the three-term recurrence for the Krawtchouk polynomials to the three-term recurrence for the Hermite polynomials. These are (see [16]): This gives the following three-term recurrence for G By the inductive hypothesis, the RHS of equation (3.7) is equal to

Proof. This follows from the definitions of R j and R̃ j in terms of Krawtchouk polynomials from Definition 3.3 and the asymptotics from Lemma 3.5. For R j , the parameters from Lemma 3.5 are to be fixed as M = N t + The asymptotics for R̃ j can be done analogously, but it is easier to note that the transformation (t, z) → (t − t, z − z) takes R̃ j to R j in this scaling limit. (The extra factor of (−1) j that appears in R̃ comes out by simplifying using H j (−y) = (−1) j H j (y), and we have also used α t −t = α t .)

Lemma 3.7. Fix t > 0 and z ∈ R.
For all δ, η, M > 0, we have the following pointwise convergence, uniformly over all pairs (t, z), (t , z ) ∈ (0, t ) × R that satisfy z, z ∈ (−M, M ), t, t ∈ (δ, t − δ) and |t − t | > η:

Proof. Define the variables (which depend on N ) τ, τ > 0 and x, x ∈ Z via the diffusive scaling of (t, z) and (t , z ). We will use a local central limit theorem for the Poisson distribution (the "Poisson CLT"); the convergence holds uniformly and is stated precisely in Proposition A.2. The convergence of the first term of equation (3.5) is a direct application of this Poisson CLT. Notice that, uniformly over all t , t with |t − t| > η, we have n − n > N η.

By application of Proposition A.2 and the definition of τ , τ, x , x, the convergence is uniform over all such t , t and all z, z . To see the convergence of the remaining d terms, we argue as follows. We again apply the local central limit theorem, Proposition A.2, to the j-th term of the sum in the definition of K (τ ,x ) P from equation (3.5) to see uniform convergence of the Poisson weights that appear. Combining these asymptotics with the asymptotics for R j and R̃ j from Corollary 3.6, we have the following limit for the j-th term in the sum from equation (3.5): After grouping the terms appropriately, it is verified that this is exactly equal to the corresponding j-th term in Definition 3.1 for the kernel K (t ,z ) for non-intersecting Brownian bridges.
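Definition 3.3 can be checked numerically from the terminating hypergeometric series K n (x; p, N ) = 2 F 1 (−n, −x; −N ; 1/p). The sketch below (helper names ours) implements this series directly and verifies the orthogonality of the Krawtchouk polynomials against the Binomial(N, p) weights.

```python
import math

def poch(a, k):
    """Rising factorial (a)_k = a (a+1) ... (a+k-1)."""
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def krawtchouk(n, x, p, N):
    """K_n(x; p, N) = 2F1(-n, -x; -N; 1/p); the series terminates at k = n."""
    return sum(poch(-n, k) * poch(-x, k) / (poch(-N, k) * math.factorial(k))
               * p**(-k) for k in range(n + 1))

def binom_weight(x, p, N):
    """Binomial(N, p) weight, against which the K_n are orthogonal."""
    return math.comb(N, x) * p**x * (1 - p)**(N - x)

def kraw_inner(m, n, p, N):
    """Discrete inner product  sum_x w(x) K_m(x) K_n(x)."""
    return sum(binom_weight(x, p, N) * krawtchouk(m, x, p, N)
               * krawtchouk(n, x, p, N) for x in range(N + 1))
```

For example K 1 (x; p, N ) = 1 − x/(pN ), and the inner product of K 1 and K 2 against the Binomial(10, 0.3) weights vanishes up to floating-point error.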
Corollary 3.8. Fix t > 0 and z ∈ R. For any δ, M > 0, there exist constants C K < = C K < (δ, M ) and C K ≥ = C K ≥ (δ, M ) so that for pairs (t, z); (t , z ) with t, t ∈ (δ, t − δ), |t − t| > η and z, z ∈ (−M, M ) we have

Proof. When t ≥ t , the first term in the definition of K (N ),(t ,z ) and K (t ,z ) is 0, and the proof of Lemma 3.7 shows that, regardless of η, K (N ),(t ,z ) converges uniformly to K (t ,z ) for t, t ∈ (δ, t − δ) and z, z ∈ (−M, M ). Thus when t ≥ t , since K (t ,z ) is bounded by C K ≥ here by Lemma 3.4 from [6], and since the convergence in Lemma 3.7 is uniform, it follows that K (N ),(t ,z ) is also bounded. Let C K ≥ be a constant large enough to bound both of them.
To see the inequality when t < t we must consider the first term. By applying the bound from Corollary A.3 to the first term in K (N ),(t ,z ) , along with the bound √ t − t < √ t , we have by the triangle inequality that: This bound gives the desired result.
Proof. Recall that the correlation functions ψ (t ,z ) k and ψ (N ),(t ,z ) k are given by k × k determinants of the kernels K (t ,z ) and K (N ),(t ,z ) respectively. Since determinants are polynomial functions of the matrix entries, the existence of the bound C D1 (δ, η) follows from the bound K (t ,z ) (·) ≤ C D1,K in Corollary 3.5 from [6] and the bound K (N ),(t ,z ) (·) < C D1,K in Corollary 3.9. Now notice that Lemma 3.7 establishes uniform convergence K (N ),(t ,z ) (t i , z i ); (t j , z j ) → K (t ,z ) (t i , z i ); (t j , z j ) for any pairs (t i , z i ) and (t j , z j ) chosen from the list ( t, z) ∈ D 1 (δ, η, M ). Since the entries are bounded, this uniform convergence of the entries implies uniform convergence of the k × k determinant.

Bound on D 2 (δ, η, M ) -Proof of Proposition 2.25
Lemma 3.10. Fix t > 0 and z ∈ R. For any δ, M > 0, there exists a constant C D2 = C D2 (δ, M ) such that for all (t 1 , z 1 ); . . . ; (t k , z k ) ∈ D 2 (δ, η, M ) we have:

Proof. This follows by applying Lemma 3.15 in [6] to the bounds on K (N ),(t ,z ) from Corollary 3.8, and then finally using the fact that K (N ),(t ,z ) is the determinantal kernel for ψ ( t, z) to see that We notice now from Definition 2.16 that the scaling N −k/2 makes the above exactly the probability of finding a particle occupying each position z 1 + t 1 , . . . , z k + t k at the times t 1 , . . . , t k respectively. Summing these probabilities simply counts the d paths: We hence get the bound: Notice that since (t i+1 − t i ) −1/2 is integrable around the singularity at t i+1 = t i , the integrand in equation (3.9) has finite total integral when integrated over the whole range of times t ∈ ∆ k (δ, t − δ). Since lim η→0 1 S η c = 0 almost everywhere, we have by the dominated convergence theorem that the RHS of equation (3.9) tends to 0 as η → 0. This gives the desired result.

Overlap times
In this section we extend the method of overlap times, used for discrete polymers in [6], so that it applies to the semi-discrete polymers studied here. This overlap time can also be thought of as the semi-discrete version of the local times between non-intersecting Brownian motions studied in Section 4 of [24]. We prove in this section that the overlap time has a property called "weak exponential moment control". This property is then used in Section 5 to bound the L² norm of the k-point correlation functions.

Weak exponential moment control - definition and properties

Definition 4.2. We say that a collection of non-negative valued processes is "weakly exponential moment controlled as t → 0" if the following conditions are met: i) For any fixed t ∈ [0, t′] and γ > 0, there exists N_γ ∈ N so that: ii) For any fixed γ > 0 and ε > 0, there exists N_{γ,ε} ∈ N so that: iii) For any fixed t ∈ [0, t′], ε > 0 and γ > 0, there exists N_{γ,ε} ∈ N so that: Remark 4.3. The notion of "exponential moment control", without the adjective "weak", appears in Definition 4.3 of [6]. Here we weaken that definition by taking the sup over N > N_{γ,ε} rather than over all N ∈ N, and by allowing for an error of size ε in properties ii) and iii). This extension is necessary because it allows us to handle exponentially rare events that arise in the continuous-time processes we study. Note that the exponential moment control defined in [6] always implies weak exponential moment control by setting N_{γ,ε} = 1 everywhere. This relaxation is needed in the semi-discrete setting because the semi-discrete processes under consideration have the potential to be arbitrarily large in a finite amount of time (as opposed to discrete simple symmetric random walks, whose height cannot exceed the number of steps the process takes). This leads to exponentially rare "bad" events: the extra room created by the weaker definition leaves space for these errors.
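The contrast drawn in the remark can be made concrete numerically. The snippet below is an illustration of our own (not part of the proofs): a rate-one Poisson process at time 1 exceeds any fixed level with a small but strictly positive probability, whereas a simple symmetric random walk after n steps can never exceed height n. These small positive probabilities are exactly the exponentially rare "bad" events the weak definition accommodates.

```python
import math

# P(Poisson(1) >= 11): a rate-1 Poisson process at time 1 exceeds level 10
# with small but strictly positive probability, whereas a simple symmetric
# random walk can never exceed height 10 in 10 steps.
lam, k = 1.0, 11
cdf = sum(math.exp(-lam) * lam**j / math.factorial(j) for j in range(k))
tail = 1.0 - cdf
assert 0.0 < tail < 1e-7
```

The tail is tiny but nonzero, which is why the discrete arguments of [6] need the extra error allowance here.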
Proof. In Lemma 4.7 of [6] it is proven that such processes are exponential moment controlled in the sense of Definition 4.3 from that paper. Since exponential moment control implies weak exponential moment control, the result follows.
Proof. The proof is very similar to the argument from Lemma 4.4 in [6]. Since each Z^{(N)}(t) is non-negative, there is no harm in rearranging the order of the terms in the infinite sum to arrive at: The desired result now holds by property iii) of weak exponential moment control from Definition 4.2, applied with parameter mγ, and by choosing N_{γ,ε,m} = N_{mγ,ε}.
is also weakly exponential moment controlled as t → 0.
Proof. The proof is very similar to the argument from Lemma 4.5 in [6]. Properties i) and ii) of the weak exponential moment control are easily verified by an application of the Cauchy-Schwarz inequality: To see property iii) for W^{(N)}(t), we argue as in Lemma 4.5 of [6] by the Cauchy-Schwarz inequality that: By property i), we can find N_γ ∈ N so that for all N > N_γ we have a uniform upper bound on E[exp(2γW^{(N)}(t))] and E[exp(2γY^{(N)}(t))]. The desired limit of equation (4.2) as the cutoff tends to infinity follows by an application of Lemma 4.5.
Proof. By using integration by parts, we have: Properties i) and ii) then follow from the weak exponential moment control of W^{(N)}(t) and by choosing N_{γ,ε} large enough so that Cγ/(c√N − γ) < 1/2 for N > N_{γ,ε}. To see property iii), consider: Thus for N > γ²/c² we have that the following infinite sum is finite (again all terms are non-negative, so there is no harm in rearranging the terms of the sum): We now notice that for N > γ²/c², the second term of equation (4.3) goes to 0 as the cutoff tends to infinity. Along with property iii) of the weak exponential moment control for W^{(N)}, this yields property iii) for Z^{(N)} as desired.

Bounds on positions of non-intersecting Poisson processes
The bounds in this subsection are needed as an ingredient to prove weak exponential moment control for the overlap times.
Then there are constants c, C (which depend on d) so that for all N and for any fixed t > 0 we have the following inequality: Proof. The proof is by induction on d, using the reflected construction of non-intersecting random walkers from Section 2.1 of [30]. We explicitly give the argument for the top line X̄_d here; the proof for the bottom line X̄_1 is analogous.
The case d = 1 is clear since in this case X_1(·) is simply a standard Poisson process and the needed estimate is a consequence of the stronger general result about Poisson processes proven in Lemma A.1. Now suppose the result holds for d − 1. The reflected construction in [30] is a coupling of the process X of d non-intersecting walkers started from δ_d(0) and the process Y of d − 1 non-intersecting walkers started from δ_{d−1}(0). In this coupling, the process Y is first constructed, and then the top line X_d is realized as a Poisson process which is pushed upward by the top line of Y; symbolically this is: where P(t) is a rate 1 Poisson process independent of the process Y, and δ is the Dirac delta. Denoting by P̄(τ) the corresponding compensated process, we see that for any γ > 0: This inclusion follows since if sup_{0<τ<T} Ȳ_{d−1}(τ) ≤ γ/2, then in order for the process X̄_d(τ) to advance from position γ/2 to γ, the process X̄_d(τ) will need a boost of at least γ/2, which can only come from the process P̄(τ). We also have the corresponding bound for Ȳ_{d−1} by the inductive hypothesis.

Proof. The cases k = d and k = 1 are exactly Lemma 4.8. Now notice that for 1 < k < d, because the walkers are always ordered so that X_1(t) < X_k(t) < X_d(t), we have: and the desired inequality then follows by a union bound.
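The pushing mechanism in the reflected construction above can be sketched in code. The discrete-time simulation below is an illustrative stand-in of our own (the names and the Bernoulli time discretization are ours, not the construction of [30]): the pushed path X follows a Poisson-type path P but receives exactly the upward boosts needed to stay strictly above a given lower path Y.

```python
import random

def simulate_pushed_top(T=10.0, rate=1.0, dt=0.01, seed=0):
    """Discrete-time sketch of the pushing coupling: X follows a Poisson-type
    path P but receives exactly the upward boosts needed to stay strictly
    above a lower path Y (both simulated as Bernoulli(rate*dt) increments)."""
    rng = random.Random(seed)
    n = int(T / dt)
    P, Y = [0], [0]
    for _ in range(n):
        P.append(P[-1] + (1 if rng.random() < rate * dt else 0))
        Y.append(Y[-1] + (1 if rng.random() < rate * dt else 0))
    X, push = [], 0
    for p, y in zip(P, Y):
        push = max(push, y + 1 - p)   # running deficit needed so that X > Y
        X.append(p + max(0, push))
    return X, Y
```

By construction X(τ) ≥ P(τ) + (Y(τ) + 1 − P(τ)) = Y(τ) + 1, so the pushed path is always strictly above the lower path, and it is non-decreasing since both P and the running deficit are.
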

Inverse gaps of non-intersecting Poisson processes
In this subsection we prove bounds on the inverse gaps. The methods here are similar to those used for non-intersecting random walks in [6].
By the KMT coupling [17], we can couple these processes with d iid Brownian motions B(t) = (B_1(t), . . . , B_d(t)) started from B(0) = (0, 0, . . . , 0) so that for absolute constants K_1, K_2, K_3 > 0 we have at integer times that: for all x ∈ R. (This can be done because each Poisson variable can be realized as a sum of iid mean zero random variables, and we have used the exponential Chebyshev inequality for a Poisson(1) random variable ξ.) On the event E_τ ∩ A^c_{ℓ,n} we expand the Vandermonde determinant and use the bound |P_i(τ) − P_j(τ)| < 4n to get the bound: Since this probability is exponentially small by equation (4.8), this contribution tends to zero as τ → ∞. The contribution on the event E^c_τ ∩ A^c_{ℓ,τ} is also seen to be negligible by the following calculation: where we have employed the generalized Cauchy-Schwarz/Hölder inequality. The event E^c_τ is a large deviation event: by an exponential Chebyshev inequality for Poisson random variables, we have that P(E^c_τ) ≤ d exp(−τ(2 ln 2 − 1)) is exponentially small. Hence this too vanishes as τ → ∞. Since the total contribution on the event A^c_{ℓ,τ} vanishes as τ → ∞, it must be bounded for all τ by some constant C.
The contribution to equation (4.6) on the event A_{ℓ,τ} is seen to be bounded by an argument identical to the one employed in the proof of Lemma 4.11 of [6]. A union bound completes the result.

Lemma 4.12. Fix any indices 1 ≤ a < b ≤ d. There is a universal constant C_g^{a,b} that bounds the expected inverse gap size uniformly over all initial conditions x_0 ∈ W and all times n ∈ N. Namely: Proof. Using Lemma 4.11, the proof follows by the same method as the proof of Lemma 4.13 of [6]. The only other ingredient in this method is the following estimate for the hitting time ν_{τ,ℓ} = inf{t ≥ 0 : x_0 + P(t) ∈ S_{τ,ℓ}} (where P(τ) are iid ordinary Poisson processes): In the setting of random walks in discrete time, this follows directly by application of Lemma 7 and/or Lemma 8 from [8]. The same argument applies to random walks in continuous time as we have here. (The proof of Lemma 7 in [8] goes by looking at blocks of n steps and applying the central limit theorem to each block as n → ∞. The argument works equally well with compensated Poisson trajectories over time τ in the limit τ → ∞.)
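The exponential Chebyshev (Chernoff) estimate P(P(τ) ≥ 2τ) ≤ exp(−τ(2 ln 2 − 1)) invoked for the large deviation event above can be verified numerically. The snippet below (our own illustration, not part of the proof) compares the exact Poisson tail with the claimed exponential bound:

```python
import math

def poisson_tail(lam, k):
    """P(Poisson(lam) >= k), computed by summing the pmf of the complement."""
    p, term = 0.0, math.exp(-lam)
    for j in range(k):
        p += term
        term *= lam / (j + 1)
    return 1.0 - p

# Chernoff bound used in the text: optimizing exp(lam*(e^s - 1) - 2*lam*s)
# at s = ln 2 gives P(P(tau) >= 2*tau) <= exp(-tau*(2*ln 2 - 1)).
for tau in (5, 10, 20, 40):
    assert poisson_tail(tau, 2 * tau) <= math.exp(-tau * (2 * math.log(2) - 1))
```

The bound holds with room to spare; for example the exact tail at τ = 10 is roughly an order of magnitude below exp(−10(2 ln 2 − 1)).
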

De-Poissonization - non-intersecting multinomial random walks
In this section we will construct "de-Poissonized" versions of the random processes X(τ) and X′(τ). This is needed in order to apply the discrete Tanaka theorem in the next subsection.
Definition 4.13. Recall from Definition 2.4 that X(τ), τ > 0, denotes d non-intersecting Poisson processes. Let X′(τ) be an independent copy. We define a pair of stochastic processes in discrete time, Y(n), Y′(n), n ∈ N, which are the de-Poissonized versions of X(τ), X′(τ), as follows. First let τ_n be the time at which the processes have made their n-th jump: and then set Y(n), Y′(n) to be the positions of the processes at this time: We refer to the pair Y(n), Y′(n) as the de-Poissonized version of the pair X(τ), X′(τ).

Lemma 4.14. The times between jumps are independent of each other and are each exponentially distributed with mean (2d)^{−1} (i.e. τ_{n+1} − τ_n ∼ Exp(2d)), and the processes Y, Y′ are Markov processes that evolve according to the following rules: Proof. We observe from Definition 2.4, by explicitly calculating the determinant that appears, that the time until the next jump for the non-intersecting Poisson processes can be calculated by This shows that the time until the next jump is exponentially distributed with mean (2d)^{−1}. Since by definition the pair X, X′ is absolutely continuous with respect to iid Poisson processes, we know that almost surely only one jump occurs at any time. Hence, we have only to consider jumps of size 1 in each individual component. By again computing the determinant that appears in Definition 2.4, we find the jump rates are characterized by: This shows that each walker is individually a Poisson process with jump rate h_d(x)^{−1} h_d(x + e_i) for the X process, and h_d(x′)^{−1} h_d(x′ + e_i) for the X′ process, when at the position (X, X′) = (x, x′). (Note that since the function h_d is harmonic for the simple random walk, the total jump rate of each of the X and X′ processes is always d.) By the definition of the Y, Y′ processes and the fact that only one jump occurs at a time, we get the desired result.
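The first claim of Lemma 4.14 reflects the elementary fact that the minimum of 2d independent rate-one exponential clocks is exponential with mean (2d)^{−1}. A small simulation of our own (with iid clocks standing in for the conditioned non-intersecting processes, and invented names) illustrates the embedded jump chain:

```python
import random

def de_poissonize(d=3, n_jumps=20000, seed=1):
    """Embedded jump chain of 2d independent rate-one Poisson clocks (an iid
    stand-in for the pair (X, X'); sketch only).  Returns the jump times
    tau_1 < tau_2 < ... and the index of the clock that fired each time."""
    rng = random.Random(seed)
    taus, who, t = [], [], 0.0
    for _ in range(n_jumps):
        # the minimum of 2d independent Exp(1) clocks is Exp(2d), and each
        # clock is equally likely (probability 1/(2d)) to be the one firing
        t += rng.expovariate(2 * d)
        taus.append(t)
        who.append(rng.randrange(2 * d))
    return taus, who
```

For the true non-intersecting pair the clock that fires is not uniform (the rates are the Doob h-transform rates computed in the lemma), but the holding-time distribution Exp(2d) is the same.
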
Corollary 4.15. Let (ξ, ξ′) ∈ N^d × N^d be the multinomial random vector whose probability distribution is: Let Z(n), Z′(n), n ∈ N, be the stochastic process whose increments are given by an iid sequence of these multinomial random vectors. Proof. We first notice that the transition probabilities for each walk individually can be calculated from the transition rates given in Lemma 4.14 and the identity from Theorem 2.1 of [18]. This shows that each process Y(n) and Y′(n), taken individually, is a Markov process with respect to its own filtration, with transition probabilities given by: Moreover, we notice that the interaction between the walks Y and Y′ is that one jumps precisely when the other does not. The result of the Corollary then follows by noticing that the jump rates for Y, Y′ exactly match those of the Doob h-transform by the Vandermonde determinant for the multinomial walks.
Remark 4.16. Corollary 4.15 shows that the de-Poissonized process Y(n), Y′(n) constructed in Definition 4.13 has the same law as non-intersecting multinomial random walks. It will also be convenient for us to think of the reverse construction: first constructing the non-intersecting multinomial walks Y(n), Y′(n), and then using this to build the non-intersecting Poisson processes X(τ), X′(τ). Corollary 4.17 records this construction.
Proof. By the construction of the de-Poissonized random walks Y, Y′ in Lemma 4.14, we know that Y(n) = X(τ_n), where the random times τ_n are distributed as Σ_{i=1}^n ξ_i, a sum of n iid exponential random variables of mean (2d)^{−1}. Thus, denoting by ρ_{τ_n}(·) the density of the random variable τ_n and applying the bound from Lemma 4.11, we have Since τ_n is a sum of n iid exponential random variables of mean (2d)^{−1}, it is easily verified that the above expectation is bounded. (One can explicitly compute E[√(n/τ_n)] = √n Γ(n − 1/2)Γ(n)^{−1} for n ≥ 2.)

Lemma 4.19. For any t′ > 0 and any indices 1 ≤ a < b ≤ d, the collection of processes is weakly exponential moment controlled as t → 0.
Proof. Using the bound from Lemma 4.18, the proof follows exactly in the same way as the proof of Lemma 4.14 from [6], which is obtained by estimating the moments of the random process.
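The parenthetical Gamma-function formula quoted above comes from the Gamma integral E[τ^{−1/2}] = Γ(n − 1/2)/Γ(n) for a Gamma(n, 1) variable (a rate other than 1 only contributes a deterministic scale factor). A quadrature check of this identity, in a sketch of our own:

```python
import math

def inv_sqrt_moment(n, grid=200000):
    """E[t**-0.5] for a Gamma(n, rate 1) variable, by midpoint-rule
    quadrature of t**(n - 1.5) * exp(-t) / Gamma(n) over (0, t_max)."""
    t_max = n + 10.0 * math.sqrt(n)   # truncation; the tail beyond is tiny
    h = t_max / grid
    total = 0.0
    for i in range(grid):
        t = (i + 0.5) * h   # midpoints avoid the integrable singularity at 0
        total += t ** (n - 1.5) * math.exp(-t)
    return h * total / math.gamma(n)
```

For n = 2, 5, 10 the quadrature agrees with Γ(n − 1/2)/Γ(n) to well within 0.01 percent.
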
is weakly exponential moment controlled as t → 0.
Proof. Let {ξ_i}_{i=1}^∞ be the family of mean (2d)^{−1} exponential random variables that relate Y, Y′ with X, X′, and set τ_n = Σ_{i=1}^n ξ_i. Notice that to prove the lemma it suffices to show that the corresponding quantity at time ⌊tN⌋ is weakly exponential moment controlled, because then the result follows by the inequality: and the fact that a sum of weakly exponential moment controlled random variables is again weakly exponential moment controlled by Lemma 4.6 (it is easily verified that the difference is a sum of ⌊tN⌋ mean zero random variables which have finite exponential moments). But by the definition of the de-Poissonized random walks Y, Y′, we know that Y(⌊tN⌋) = X(τ_{⌊tN⌋}). Writing ρ_{τ_{⌊tN⌋}} for the density of τ_{⌊tN⌋}, and letting C be the constant from Lemma 4.9, we make the following estimate for any α ∈ R: where we have split the integral into the contributions from s ≤ Nt and s > Nt to get the last inequality. Notice that typically τ_{⌊tN⌋} ∼ (2d)^{−1}tN, so {τ_{⌊tN⌋} > tN} is a large deviation event; an application of the exponential Chebyshev inequality shows that P(τ_{⌊tN⌋} > tN) is exponentially small in N. It follows that the process in question is weakly exponential moment controlled.
Proof. By expanding the Vandermonde determinants that appear in equation (4.10), we have that: where we have applied the inequality from Lemma 4.15 of [6], which holds since 1/(x_k − x_i) ≤ 1. Finally then: and the result follows from the weak exponential moment control established in Lemma 4.19 and the fact that weak exponential moment control is preserved under finite sums by Lemma 4.6.

Overlap times of non-intersecting multinomial random walks
In this subsection we establish the exponential moment control for overlap times by using a discrete version of Tanaka's formula. This is like Tanaka's formula in that it relates the overlap time to increments of the random walk. For our purposes we only need an upper bound on the overlap time, which simplifies the proof somewhat. This inequality bounds the overlap time by a finite sum of quantities, each of which is analyzed to establish exponential moment control. The methods here are similar to those used for non-intersecting random walks in [6].

Proof. First write that The result then follows by applying Lemma 4.22 twice, first to the B-process with C = A(i + 1) and then again to the A-process with C = B(i), and then summing the resulting inequality from i = 0 to n.
Then, for any fixed t′ > 0, and any indices 1 ≤ k < ℓ ≤ d, the collection: is weakly exponential moment controlled as t → 0.
Proof. For notational convenience, we use the shorthand ∆F(i) for the increment of a process F at step i. We will apply the upper bound for the overlap time from Lemma 4.23 to the processes: By Lemma 4.6, to see the exponential moment control for N^{−1/2} Q_{k,ℓ}[0, ⌊tN⌋], we have only to verify that the four terms that appear on the RHS of equation (4.14) are each weakly exponential moment controlled. The first two terms on the RHS of equation (4.14) are weakly exponential moment controlled by Lemma 4.20. We show that the remaining terms are weakly exponential moment controlled as t → 0 as follows. First notice that by the triangle inequality: Consider the filtration F_n generated by Y(1), Y′(1), . . . , Y(n + 1), Y′(n + 1). The increments of M are given by the jumps of A compensated by a conditional expectation which is F_{n−1} measurable, since Y(·) is a Markov process. Moreover, since ∆A(n) ∈ {−1, +2d − 1}, we also notice from equation (4.17) that |M(n) − M(n − 1)| ≤ 2d. We can therefore apply Azuma's inequality for martingales with bounded differences (see e.g. Lemma 4.1 of [21]). This gives that for any N ∈ N the process is exponential moment controlled as desired.
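The Azuma step can be illustrated empirically. Below, a toy martingale of our own with mean-zero increments in {2d − 1, −1} (so differences bounded by 2d, as for the martingale in the proof) is simulated and its tail compared against Azuma's bound 2 exp(−a²/(2n(2d)²)); this illustrates the inequality only, not the paper's actual process.

```python
import math, random

def azuma_demo(n=400, trials=2000, a=120.0, d=2, seed=7):
    """Empirical tail of a martingale with increments in {2d-1, -1}
    (mean zero since P(+(2d-1)) = 1/(2d)), versus Azuma's bound with
    difference bound c = 2d."""
    rng = random.Random(seed)
    c = 2 * d
    exceed = 0
    for _ in range(trials):
        m = 0
        for _ in range(n):
            m += (2 * d - 1) if rng.random() < 1.0 / (2 * d) else -1
        if abs(m) >= a:
            exceed += 1
    bound = 2.0 * math.exp(-a * a / (2.0 * n * c * c))
    return exceed / trials, bound
```

The empirical exceedance frequency sits far below the Azuma bound, as expected: Azuma only uses the worst-case increment size, not the (much smaller) increment variance.
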

Overlap times of non-intersecting Poisson processes and bridges
In this section we prove that the overlap times for non-intersecting Poisson processes are weakly exponential moment controlled by comparison to the overlap time for the de-Poissonized walks.
is weakly exponential moment controlled as t → 0.
Proof. By Lemma 4.14, we know that we can construct a coupling of the non-intersecting Poisson processes X(t), X′(t), t > 0, the non-intersecting multinomial walks Y(n), Y′(n), and a family {ξ_i} of iid mean (2d)^{−1} exponential random variables so that X(τ_n) = Y(n) and X′(τ_n) = Y′(n), where τ_n := Σ_{i=1}^n ξ_i. In this coupling, the overlap time O_{k,ℓ} between X and X′ can be written in terms of the discrete data, where η(t) = max{n : τ_n ≤ t} is the number of steps which have been taken up to time t. Since the ξ_i are independent of the walks Y, Y′, the only thing that is relevant for the distribution of the above is the number of times i for which Y_k(i) = Y′_ℓ(i). This is exactly counted by the discrete overlap time for the multinomial walkers, Q_{k,ℓ}[0, η(tN)]. In particular, if we label the indices i for which {Y_k(i) = Y′_ℓ(i)} as i_1, i_2, . . ., then we have: where c_1, c_2 are some to-be-determined constants that depend on d.
If we choose c_1 and c_2 to be any constants so that c_1 ln(2d/(2d+1)) + 1 < 0 and c_2 ln(2d/(2d−1)) − 1 < 0, then these probabilities are both exponentially small. Thus, by the inclusion from equation (4.18), we have: It is easily verified from Definition 4.2 and the conclusion of Lemma 4.24 that, for any fixed positive constants c_1, c_2, the corresponding c^{−1}-rescaled process is again weakly exponential moment controlled.

Proof. The determinant that defines q_τ in this case is explicitly calculated as part of the proof of Proposition 3.3 in [18].
From our exact formula in Proposition 4.26 for q_τ(δ_d(0), x), and the time reversal of this formula, which yields a similar formula for q_τ(x, δ_d(x′)), we have: Thus, after plugging these formulas into equation (4.19) and observing that the Vandermonde factors h_d cancel out, we conclude that we are left with (4.20). Putting in now the scaling τ = Nt, τ′ = Nt′, and then putting this result back into equation (4.20), we conclude that the process in question is weakly exponential moment controlled as t → 0.
Proof. The proof is very similar to the proof of Proposition 4.23 from [6] using the exponential moment control for the non-intersecting Poisson processes from Lemma 4.25, the Radon-Nikodym bound between Poisson processes and Poisson bridges from Lemma 4.27, and the fact that weak exponential moment control is closed under addition as in Lemma 4.6.
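The decomposition used above, writing the semi-discrete overlap time as the sum of the exponential holding times over the steps at which the embedded chains coincide, is straightforward to express in code (a sketch with our own names):

```python
def overlap_time(chain1, chain2, holding):
    """Semi-discrete overlap time: total time that two coupled embedded
    chains spend agreeing, i.e. the sum of the holding times xi_i over the
    steps i at which the two chains coincide."""
    return sum(g for a, b, g in zip(chain1, chain2, holding) if a == b)

# two embedded chains observed at jump times, with the holding time of
# each step; the chains agree on steps 0, 2 and 3
O = overlap_time([0, 1, 1, 2], [0, 0, 1, 2], [0.5, 0.2, 0.3, 0.1])
```

Since the holding times are independent of the walks, only the number of coincidence steps matters for the distribution of this sum, which is exactly the reduction to the discrete overlap time Q made in the proof.
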

L² bounds - Proof of Propositions 2.19, 2.26, 2.27
This section uses the weak exponential moment control established in Proposition 4.28 to bound the L² norm of the k-point correlation functions ψ_k^{(t,z)}. These arguments are a semi-discrete version of those used in Section 5 of [6].
is weakly exponential moment controlled as t → 0, then for each t ∈ [0, t′], there exists N_0 such that Z^{(N)}(t) has moments of all orders which are uniformly bounded in N: Moreover, for any fixed k, the k-th moment can be made arbitrarily small in the following precise sense: for any ε > 0, there exists N_{ε,k} large enough so that: Proof. Fix any γ > 0 and then use the inequality x^k ≤ (k!/γ^k) e^{γx} for x ≥ 0 and property i) of the weak exponential moment control to find N_γ ∈ N so large that we have: which is finite by property i) of the weak exponential moment control from Definition 4.2. This establishes the first conclusion of the lemma. To see the second point, for any fixed k ∈ N and ε > 0, choose γ large enough so that γ^k > 2k!, and then apply property ii) of the weak exponential moment control to find N_{γ,1} large enough so that we have the following: The desired inequality follows by expanding the RHS of equation (5.2) as a j-fold integral/sum. We then switch from an un-ordered integral over τ ∈ (s, s′)^j to an ordered integral over τ ∈ ∆_j(s, s′) at the cost of a factor of j!, which completes the result.
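The elementary inequality x^k ≤ (k!/γ^k) e^{γx} used at the start of this proof follows from keeping only the single term (γx)^k/k! in the Taylor series for e^{γx}. A quick numerical sanity check of our own:

```python
import math

def exp_moment_bound(x, k, gamma):
    """The bound (k!/gamma**k) * exp(gamma*x), which dominates x**k for
    x >= 0 because exp(gamma*x) >= (gamma*x)**k / k!."""
    return math.factorial(k) / gamma ** k * math.exp(gamma * x)

for x in (0.0, 0.5, 3.0, 10.0):
    for k in (1, 2, 5):
        for g in (0.1, 1.0, 4.0):
            assert x ** k <= exp_moment_bound(x, k, g)
```

This is how exponential moment control converts into bounds on moments of every order, with the free parameter γ available to make the constant small.
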
Corollary 5.3. For 0 < s < s′ < t′, we have that: Proof. Notice that since the processes X^{(τ,x)} and X′^{(τ,x)} are independent, we have where we have applied the definition of ψ, which can be written as a semi-discrete sum as in equation (2.3).
Proof. (Of Proposition 2.19.) By Corollary 5.4 applied to each term, we obtain the desired bound; the interchange of expectation with the infinite sum is justified by the monotone convergence theorem. It then suffices to prove that ∫ ψ^{(N),(t,z)}(t, z)² dt dz < ε, (5.5) since once this is proven we can use a union bound together with the decomposition D_3(δ) = ⋃_{j=1}^k D_3^{0,j}(δ) ∪ ⋃_{j=1}^k D_3^{t′,j}(δ) into these 2k pieces. We will show only the bound in equation (5.5) for D_3^{0,j}, as the result for D_3^{t′,j} follows in an analogous way. We first observe that: Since this lim sup as δ → 0 is less than ε/2, there exists δ > 0 small enough to verify equation (5.5) as desired.
Proof. (Of Proposition 2.27.) Let W^{(N),(t,z)} := max_{i∈{1,...,d}} sup_{t∈[0,t′]} X_i^{(N),(t,z)}(t) be the largest absolute value achieved by the ensemble at any time t ∈ [0, t′], and let W′^{(N),(t,z)} be the same for an independent copy X′^{(N),(t,z)}. By the definition of the set D_4 we have that the LHS of (2.13) is bounded above by N^{k/2} ∫_{t ∈ ∆_k(0,t′)} Since X^{(t,z)} and X′^{(t,z)} are independent, we write this as:

Proof. We show that P(sup_{0<τ<tN} P̄(τ) > y√N) and P(inf_{0<τ<tN} P̄(τ) < −y√N) both separately obey this type of inequality, and the result will follow by a union bound. Fix any T > 0 and x > 0. Since P̄(τ) is a martingale, we have by Doob's inequality for the running maximum of a sub-martingale that for any λ > 0: where we have used the minimizing value λ = ln(1 + x/T) to get the last inequality. We now use the Taylor-series-inspired bound

z − (1 + z) ln(1 + z) = −½ z² + ½ ∫₀^z (z − t)²/(1 + t)² dt ≤ −½ z² + ½ ∫₀^z z²/(1 + t)² dt = −z²/(2(1 + z)).

Define φ_{r,s} ≡ 0 if r ≥ s. Consider a random configuration X ∈ X given by the following prescription: Then, for any k ∈ N, and any list of space-time coordinates {(r_i, x_i)}_{i=1}^k ∈ ({1, . . . , m} × N)^k, we have the following determinantal formula for the probability to find these space-time points occupied: where the kernel K (which does not depend on k) is explicitly given by:

Remark A.6. Typically this type of measure arises in the context of non-intersecting processes as a consequence of the Lindström-Gessel-Viennot/Karlin-McGregor formula.
The difficulty in practice is inverting the matrix A which appears in the formula for the kernel. The approach we will follow uses row and column manipulations to rewrite the functions φ in terms of orthogonal polynomials. Because these polynomials are orthogonal, the matrix A becomes diagonal and finding A^{−1} is possible.
The result of the Lemma then follows by the definition of X (τ ,x ) as the Markov process of Poisson processes conditioned on non-intersection and with initial and final conditions x (0) and x (m+1) respectively.
Proof. This is verified directly from the definition of R_j(τ, x) in terms of the hypergeometric function ₂F₁ from Definition 3.3. From this definition we have that: On the other hand, we have: The RHS of equations (A.4) and (A.5) are seen to be equal by the identity x! = (x − i)! (x)_i. A very similar calculation holds for λ̄_j(τ, x).
Proof. First notice that Σ_{x∈N} µ(τ, x) µ(τ′, y − x) = µ(τ + τ′, y). This is the well-known fact that a sum of two independent Poisson variables is again Poisson (in other words, the convolution of two Poisson weights is again a Poisson weight). The identities then follow from the observation in Lemma A.9 that λ_j and λ̄_j are linear combinations of the weights µ.
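The convolution identity for the Poisson weights µ(τ, x) = e^{−τ} τ^x / x! can be checked directly; a small numeric verification with our own variable names:

```python
import math

def mu(tau, x):
    """Poisson weight mu(tau, x) = exp(-tau) * tau**x / x!."""
    return math.exp(-tau) * tau ** x / math.factorial(x)

# sum_{x=0}^{y} mu(tau, x) * mu(tau2, y - x) should equal mu(tau + tau2, y),
# the Poisson convolution identity used in the proof
tau, tau2, y = 0.7, 1.9, 6
conv = sum(mu(tau, x) * mu(tau2, y - x) for x in range(y + 1))
```

Underlying this is the binomial theorem: Σ_x τ^x τ′^{y−x} / (x!(y−x)!) = (τ + τ′)^y / y!.
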
Proof. This follows by applying row operations on the determinants that appear in Lemma A.7 to create the linear combinations that appear in Lemma A.9.