Positive random walks and an identity for half-space SPDEs

The purpose of this article is threefold. First, we introduce a new type of boundary condition for the multiplicative-noise stochastic heat equation on the half space. This is essentially a Dirichlet boundary condition but with a nontrivial normalization near the boundary, which leads to inhomogeneous transition densities (roughly, those of a Brownian \textit{meander}) within the associated chaos series. Second, we prove a new convergence result for the directed-polymer partition function in an octant to the multiplicative stochastic heat equation with this type of boundary condition, which in turn involves a detailed analysis of the aforementioned inhomogeneous Markov process. Third, as a corollary, we prove a surprising equality in distribution for multiplicative-noise stochastic heat equations on the half space with \textit{different} boundary conditions. This identity may be seen as a precursor to proving Gaussian fluctuation behavior of supercritical half-space KPZ at the origin.


Introduction and context
The present work will focus on three related objects: uniform measures on collections of nearest-neighbor non-negative paths (whose scaling limit is the Brownian meander), directed polymers weighted by such measures, and multiplicative-noise stochastic partial differential equations (SPDEs) in a half-space.

Half-space stochastic heat equations
We begin our discussion with SPDEs. The multiplicative-noise stochastic heat equation has been a frequent subject of research within stochastic analysis and mathematical physics in recent years. This equation arises naturally in the context of directed polymers and interacting particle systems, as a weak scaling limit [Cor12]. In spatial dimension one, the multiplicative-noise stochastic heat equation is also related to the so-called KPZ equation via the Hopf-Cole transform, and may be solved by the classical Itô-Walsh construction [Wal86] or by more modern techniques such as regularity structures [HL18]. In the present article, we consider the stochastic heat equation with multiplicative noise on a half-line,
\[ \partial_T Z = \tfrac{1}{2}\partial_X^2 Z + \xi Z, \qquad (T,X) \in \mathbb{R}_+ \times \mathbb{R}_+, \tag{SHE} \]
where $\xi$ is a Gaussian space-time white noise on $\mathbb{R}_+ \times \mathbb{R}_+$. Naturally one needs to impose boundary conditions at $X = 0$ in order to make sense of this equation. In the present work we consider two types of boundary conditions, Robin and Dirichlet. First let us write the Robin boundary condition of parameter $A \in \mathbb{R}$:
\[ \partial_X Z(T, 0) = A\, Z(T, 0). \tag{1.1} \]
This type of homogeneous boundary condition has been considered in [CS18, Par19, GPS20, BBCW18] in the context of interacting particle systems, and a robust solution theory has been developed in [GH19] using techniques of [Hai14]. This boundary condition transforms into a Neumann boundary condition for the half-space KPZ equation upon taking the logarithm. Next, we consider the Dirichlet boundary condition for the half-space SHE:
\[ Z(T, 0) = 0. \tag{1.2} \]
This type of boundary condition was considered, for instance, in [GLD12], in the context of directed polymers near an absorbing wall. Again, one can make sense of the equation using the classical techniques of [Wal86] or more modern ones such as [Hai14].
Our main result compares these two types of boundary conditions; specifically, it allows us to interchange information about the initial data with that of the boundary condition imposed on the SHE.

Theorem 1.1. Let $Z^{(A)}_{\mathrm{Rob}}(T, X)$ denote the solution of (SHE) with Robin boundary parameter $A$ as in (1.1) and delta initial data $Z^{(A)}_{\mathrm{Rob}}(0, \cdot) = \delta_0$, and let $Z^{(A)}_{\mathrm{Dir}}(T, X)$ denote the solution of (SHE) with Dirichlet boundary condition (1.2) and initial data $Z^{(A)}_{\mathrm{Dir}}(0, X) = e^{B_X - (A + \frac{1}{2})X}$, where $B$ is a standard Brownian motion independent of $\xi$. Then for each $T \ge 0$ we have the following equality of distributions:
\[ Z^{(A)}_{\mathrm{Rob}}(T, 0) \overset{d}{=} \lim_{X \to 0} \frac{Z^{(A)}_{\mathrm{Dir}}(T, X)}{X}. \tag{1.3} \]

This will be a consequence of Theorem 2.4, which in turn will use the work of [BBC20, Wu18] and Theorem 2.2 (our main technical result) as inputs. Let us now discuss the motivation for this result, the contexts in which it has arisen, and the methods used to prove it.
To give some motivation towards (1.3), we now explain it using the exact solvability framework developed in [BBC20], which is a crucial input to the proof of (1.3). Both the left and right sides of (1.3) have interpretations in terms of partition functions of a certain family of probabilistic models known as directed polymers (see Section 1.2). Specifically, the left side of (1.3) can be related to a polymer that is modeled on a Brownian motion which gets reweighted according to its local time at zero, whereas the right side can be related to a polymer that is modeled on a Brownian motion conditioned to remain positive. In [BBC20], the authors use certain nontrivial symmetries of Macdonald polynomials in order to obtain information about the large-scale behavior of discrete versions of these polymer models and others (which is similar in theme to, and builds on, older works of [BC14,COSZ14,OSZ14,IS04,BR01]). One particular result in that paper (Proposition 8.1) is a highly non-obvious identity in distribution for directed polymers with log-gamma weights, that effectively allows one to switch some of the bulk weights of the random environment with those on the boundary without changing the distribution of the associated partition function. Our main goal was to take the SPDE limit of that identity, which effectively gives Theorem 1.1 under the appropriate scaling. Hence our result can be viewed as a special case of more general algebraic principles that may be used to extract certain nontrivial symmetries in certain half-space models.
The right side of (1.3) equals $(\partial_X Z^{(A)}_{\mathrm{Dir}})(T, 0)$. It is not clear why this derivative should even exist in the first place, since the spatial regularity of $Z_{\mathrm{Dir}}$ is much worse than $C^1$.
One of our main technical results, given in Section 4, is that the limit in (1.3) is indeed well-defined (Corollary 4.3). In fact we will prove something stronger: the limit in the right side of (1.3) simultaneously exists for all T ≥ 0 almost surely, and is Hölder 1/4− as a function of T almost surely.
In order to convince the reader that (1.3) is at least plausible, let us verify formally that the expectations of both sides agree. Let $P^{(A)}_{\mathrm{Rob}}(T; X, Y)$ denote the Robin boundary heat kernel and let $P_{\mathrm{Dir}}(T; X, Y)$ denote the Dirichlet boundary one, where by heat kernel we mean the fundamental solution of the heat equation with the associated boundary condition started from the delta measure at the point $X$. Letting $P(T; X) = \frac{1}{\sqrt{2\pi T}} e^{-X^2/2T}$, one may verify directly that these kernels are given by the following explicit formulas for $T, X, Y \ge 0$:
\[ P_{\mathrm{Dir}}(T; X, Y) = P(T; X - Y) - P(T; X + Y), \]
\[ P^{(A)}_{\mathrm{Rob}}(T; X, Y) = P(T; X - Y) + P(T; X + Y) - 2A \int_0^\infty e^{-AZ}\, P(T; X + Y + Z)\, dZ. \]
Theorem 1.1 suggests a duality between the initial data of a solution to the half-space SHE and the boundary conditions one imposes on it. It may be interesting to see whether more general versions of this hold. For example, could it be possible that the identity holds as a process in $T$ and not just in the one-point sense? Using this type of idea, one may potentially obtain useful information about objects of interest, such as the Neumann-boundary Kardar-Parisi-Zhang (KPZ) equation that was considered in [CS18]. It was conjectured in [Par19] that one has the almost-sure convergence
\[ \lim_{T \to \infty} \frac{1}{T} \log Z^{(A)}_{\mathrm{Rob}}(T, 0) = \begin{cases} -\frac{1}{24}, & A \ge -1/2, \\ \frac{(A + 1/2)^2}{2} - \frac{1}{24}, & A \le -1/2, \end{cases} \]
which would give the exact law of large numbers for Neumann-boundary KPZ. Unfortunately, Theorem 1.1 alone is not enough to obtain this result. Nevertheless, it is plausible that a clever use of (1.3) (perhaps combined with some new ideas and techniques) could lead to quantitative results close to the above expression. Indeed, despite the fact that on the Robin side of (1.3) there is no visible phase transition at $A = -1/2$, the appearance of the term $A + 1/2$ on the Dirichlet side already indicates the presence of a nontrivial change in large-scale behavior at $A = -1/2$. Section 1.3 of [Par19] includes a further discussion of this.
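As a concrete sanity check on the explicit kernels, the following sketch (function names are ours; we take the standard image-method expressions $P_{\mathrm{Dir}}(T;X,Y) = P(T;X-Y) - P(T;X+Y)$ and $P^{(A)}_{\mathrm{Rob}}(T;X,Y) = P(T;X-Y) + P(T;X+Y) - 2A\int_0^\infty e^{-AZ} P(T;X+Y+Z)\,dZ$) verifies numerically that each kernel satisfies its boundary condition:

```python
import math

def P(T, X):
    # Whole-line heat kernel P(T; X) = exp(-X^2/(2T)) / sqrt(2*pi*T)
    return math.exp(-X * X / (2 * T)) / math.sqrt(2 * math.pi * T)

def P_dir(T, X, Y):
    # Dirichlet half-line kernel via the image method
    return P(T, X - Y) - P(T, X + Y)

def P_rob(T, X, Y, A, dz=1e-3, zmax=20.0):
    # Robin kernel: image-method sum plus an exponentially weighted
    # correction term, integrated here by a midpoint Riemann sum
    corr = sum(math.exp(-A * (k + 0.5) * dz) * P(T, X + Y + (k + 0.5) * dz)
               for k in range(int(zmax / dz))) * dz
    return P(T, X - Y) + P(T, X + Y) - 2 * A * corr

T, Y, A, h = 1.0, 0.7, 0.8, 1e-4

# Dirichlet boundary condition (1.2): the kernel vanishes at X = 0
assert abs(P_dir(T, 0.0, Y)) < 1e-12

# Robin boundary condition (1.1): d/dX P_rob = A * P_rob at X = 0
deriv = (P_rob(T, h, Y, A) - P_rob(T, -h, Y, A)) / (2 * h)
assert abs(deriv - A * P_rob(T, 0.0, Y, A)) < 1e-2
```

The Dirichlet check is exact by the symmetry of $P$, while the Robin check holds up to quadrature and finite-difference error.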
More than just computing the above limit, we are also interested in computing the limiting distribution of the fluctuations around the mean value. These should be of order T 1/2 and Gaussian in the A < −1/2 case, and they should be of order T 1/3 and random-matrix theoretic otherwise (with separate cases when A = −1/2 and A > −1/2). See for instance [Par19,BBCW18,BBCS18,BBC16].
The main technical difficulties in the present work are of an analytic nature: translating the discrete identity in [BBC20] to that of (1.3) required us to prove a general convergence result for directed polymers, stated below as Theorem 1.2. As we will now see, this involves the analysis of an interesting object in its own right: the Brownian meander.

Directed polymers weighted by positive random walks
This brings us to the method of proof of Theorem 1.1. As suggested above, it will be proved using an approximation via directed polymers with very specific weights, where a discrete version of this identity holds.
Directed polymers are natural probabilistic objects that were first introduced in [HH85, IS88]. They generalize directed first- and last-passage percolation and have deep connections to statistical mechanics and stochastic analysis. Specifically, we consider an environment $\{\omega_{i,j}\}_{(i,j) \in \mathbb{Z}_{\ge 0} \times \mathbb{Z}}$ consisting of i.i.d., mean-zero, finite-variance random variables. The standard deviation of the weights is referred to as the inverse temperature. One may define a partition function $Z^\omega(n, x)$ as a sum over all directed nearest-neighbor simple random walk paths $(i, \gamma_i)_{0 \le i \le n}$ of length $n$ starting from $(0, x)$, of the product of the weights $e^{\omega_{i,\gamma_i}}$ along the path. Similarly, there is also a natural way to define random Markovian transition densities associated to this environment $\omega$, wherein a nearest-neighbor path $\gamma$ has probability proportional to the product of the weights $e^{\omega_{i,\gamma_i}}$ along it. As is standard practice in statistical mechanics, one may then ask questions about the existence of infinite-volume limits of these path measures and their typical fluctuation scale, as well as the typical scale and shape of the fluctuations of the partition function itself [Com17].
Many seminal results in these directions have been proved, perhaps most notably that there is a phase transition which becomes apparent in high dimensions. Specifically, in spatial dimensions greater than two, there is a strictly positive critical value of the inverse temperature below which weak disorder holds, meaning that a typical polymer path fluctuates like a Brownian motion and one may construct infinite-length path measures [CY06, Com17]. In contrast, lower-dimensional polymers at any nonzero inverse temperature are known to be characterized by strong disorder, meaning that the path fluctuations are quite different and there is no sensible notion of an infinite-volume Gibbs measure [Com17]. The results of [AKQ14a, AKQ14b] examined the partition function in a regime that lies between strong and weak disorder. Specifically, in spatial dimension one, they scaled the inverse temperature of the model like $n^{-1/4}$ and simultaneously applied diffusive scaling to the partition function, and there they observed that the fluctuations are governed by (SHE) and that the path measures themselves have a continuum analogue. Recent work of [CD20, CSZ18] has investigated the intermediate-disorder behavior in two spatial dimensions, where the scaling $n^{-1/4}$ is replaced by $(\log n)^{-1/2}$. In a different direction, [Wu18] extended the work of [AKQ14a] to the case of half-space polymers with Robin boundary condition.
EJP 27 (2022), paper 45.
We will be interested in the analogous half-space question of intermediate-disorder fluctuations of the directed polymer partition function associated to uniform non-negative path measures. Specifically, let • P n x denote the uniform probability measure on the collection of all paths (γ i ) 0≤i≤n such that γ 0 = x, |γ i+1 − γ i | = 1 for i < n, and γ i ≥ 0 for all i ≤ n.
• ω i,j be i.i.d. mean-zero, variance-one random variables that are uniformly bounded from below by a deterministic constant.
• $f_n$ be a sequence of functions, bounded uniformly by a function growing at worst exponentially fast near infinity, such that $f_n(n^{1/2}\cdot)$ converges (as $n \to \infty$) to some function $f(\cdot)$ in the Hölder space $C^\alpha_{\mathrm{loc}}(\mathbb{R}_+)$, for all $\alpha \in (0, 1/2)$.

Letting $E^n_x$ denote the expectation with respect to $P^n_x$, and setting $S$ to be the canonical process associated to $P^n_x$, one defines a directed-polymer partition function as follows:
\[ Z^\omega_n(k, x) := E^k_x\Big[ f_n(S_k) \prod_{i=1}^{k} \big( 1 + n^{-1/4}\, \omega_{i, S_i} \big) \Big]. \]
Note that the expectation is taken only with respect to the random walk, conditionally on the environment $\omega_{i,j}$, which is always assumed to be independent of the walk. We consider the rescaled partition function
\[ Z_n(T, X) := Z^\omega_n\big( \lfloor nT \rfloor, \lfloor n^{1/2} X \rfloor \big), \tag{1.4} \]
where the quantity on the right side is defined by linear interpolation between points of the lattice $L := \{(x, n) \in \mathbb{Z}^2_{\ge 0} : n - x \in 2\mathbb{Z}\}$. In a manner analogous to [AKQ14a] we show that $Z_n$ converges in law to a random continuous space-time field. The natural candidate for such a limit would be a continuum analogue of $Z^\omega_k(n, x)$, where the expectation $E^n_x$ over positive discrete random walks is replaced by that of continuous ones. Indeed, the limiting space-time field can be described as follows: it has a formal Feynman-Kac interpretation that takes as its input the so-called Brownian meander [DIM77, DI77] on a finite time interval and weighs it exponentially by its integral against a space-time white noise field. More precisely, if $P^T_t(X, Y)$ denotes the inhomogeneous Markov transition density at time $t$ of a Brownian motion started from $X$ and conditioned to stay positive until time $T \ge t$, then this limiting space-time field $Z$ necessarily solves the multiplicative-noise SPDE on the half-space that is given in Duhamel form by
\[ Z(T, X) = \int_{\mathbb{R}_+} P^T_T(X, Y)\, f(Y)\, dY + \int_0^T \int_{\mathbb{R}_+} P^T_{T - S}(X, Y)\, Z(S, Y)\, \xi(S, Y)\, dY\, dS, \tag{1.5} \]
where $\xi$ is a space-time white noise and $f$ is the limiting function from the third bullet point above.
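Although the partition function is defined as an expectation over exponentially many paths, both the weighted sum and the uniform normalization $|\Omega^k_x|$ satisfy the same one-step recursion, so the object can be computed by dynamic programming. A minimal sketch (names are ours; the weight form $1 + \beta\omega$ with $\beta = n^{-1/4}$ follows the discussion above):

```python
import random

def polymer_partition(omega, k, x, beta, f):
    # E^k_x[ f(S_k) * prod_{i=1..k} (1 + beta*omega[i][S_i]) ] under the
    # uniform measure P^k_x on nonnegative nearest-neighbor paths from x.
    # num[y]: sum over such paths ending at y of the product of weights;
    # cnt[y]: number of such paths (gives the uniform normalization).
    num, cnt = {x: 1.0}, {x: 1}
    for i in range(1, k + 1):
        new_num, new_cnt = {}, {}
        for y in num:
            for y2 in (y - 1, y + 1):
                if y2 < 0:
                    continue  # paths must stay nonnegative
                w = 1.0 + beta * omega[i][y2]
                new_num[y2] = new_num.get(y2, 0.0) + num[y] * w
                new_cnt[y2] = new_cnt.get(y2, 0) + cnt[y]
        num, cnt = new_num, new_cnt
    return sum(num[y] * f(y) for y in num) / sum(cnt.values())

n, k, x = 16, 8, 2
beta = n ** -0.25
omega = [[random.gauss(0.0, 1.0) for _ in range(x + k + 1)] for _ in range(k + 1)]
Z = polymer_partition(omega, k, x, beta, lambda y: 1.0)  # one sample of Z^omega

# With beta = 0 the weights are identically 1 and the partition function
# collapses to the total mass of the uniform measure, i.e. exactly 1
assert polymer_partition(omega, k, x, 0.0, lambda y: 1.0) == 1.0
```

The final assertion is the "already renormalized" property discussed later: the weights $1 + \beta\omega$ have conditional expectation exactly one.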
An important step towards proving Theorem 1.1 will be to show that a solution of (1.5) exists and makes sense even when $X = 0$, and then to show that it can in turn be related to the derivative of the solution of the Dirichlet-boundary SHE at the origin. This will all be done in Section 4; more specifically, we will show that the solution of (1.5) equals
\[ Z(T, X) = \frac{Z_{\mathrm{Dir}}(T, X)}{2\Phi(X/\sqrt{T}) - 1}, \qquad T, X > 0, \tag{1.6} \]
where $Z_{\mathrm{Dir}}$ solves (SHE) with Dirichlet boundary condition (1.2) and the same initial data as $Z$, and $\Phi$ is the cdf of a standard normal variable, so that $Z(T, 0) = (\pi T/2)^{1/2} \lim_{X \to 0} \frac{Z_{\mathrm{Dir}}(T, X)}{X}$. We then have the following result.
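Since $\Phi$ is the standard normal cdf, the normalization $2\Phi(X/\sqrt{T}) - 1$ vanishes linearly at the boundary with slope $\sqrt{2/(\pi T)}$, which is the source of the constant relating $Z(T, 0)$ to the spatial derivative of $Z_{\mathrm{Dir}}$ at the origin. A quick numerical check (a sketch):

```python
import math

def Phi(x):
    # Standard normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

T = 2.0
for X in (1e-3, 1e-5):
    ratio = (2 * Phi(X / math.sqrt(T)) - 1) / X
    # 2*Phi(X/sqrt(T)) - 1 ~ X * sqrt(2/(pi*T)) as X -> 0
    assert abs(ratio - math.sqrt(2 / (math.pi * T))) < 1e-5
```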
Theorem 1.2. The sequence of processes $Z_n$ defined in (1.4) converges in law to the solution of (1.5) as $n \to \infty$. The convergence occurs in the sense of finite-dimensional distributions. If we assume that the $\omega_{i,j}$ have $p > 8$ moments, then distributional convergence holds when the space $C(\mathbb{R}_+ \times \mathbb{R}_+)$ is equipped with the topology of uniform convergence on compact sets.
This theorem will be proved in Section 5.2 in greater generality (where the distribution of the weights $\omega$ may vary with $n$); see Proposition 5.9 and Theorem 5.11. It is actually a simplified version of Theorem 2.2, which is the true input to proving Theorem 1.1. The main difficulty towards this result will be in obtaining the necessary estimates for the inhomogeneous transition densities (and their discrete analogues) appearing in (1.5). Thus the proof of Theorem 1.2 will lead to some new technical results related to the uniform measures $P^n_x$ and their continuum analogues. These will be collected in appendices at the end of the paper. To illustrate a few such results, we will prove a coupling result for such random walks in the nearest-neighbor case, and then we will use that coupling to show the following concentration property: there exist constants $c, C > 0$ (independent of $n$ and $x \ge 0$) such that for all $u > 0$ and all $k \le n$ one has
\[ \mathbb{P}^n_x\big( |S_k - x| \ge u\, k^{1/2} \big) \le C e^{-c u^2}. \]
We remind the reader that $S_i$ is the conditioned walk. The study of such random walks started with the invariance principle of [Ig74], further generalized in [Bol76]. Later the study expanded considerably, with local limit theorems [Car05] and extensions to heavy-tailed increments [CC08]. We will see that some of the estimates we derive are similar in spirit to some of those works, but the intricate details are somewhat different. We will give proofs of many of these technical results because the highly specific estimates needed to prove Theorem 1.2 were not found in those references (since our random walk does not necessarily start at zero).
It should be noted that we work with a simplified version of the partition function as opposed to much of the previous literature: [AKQ14a, CSY03] and related works. There the partition function $Z^\omega_k(n, x)$ is defined with weights $e^{k^{-1/4}\omega_{i,S_i}}$ instead of the quantity $1 + k^{-1/4}\omega_{i,S_i}$ that we have used in (1.4) above. The reason for this is that the latter object is mathematically simpler because it is already renormalized (it has expectation exactly 1 rather than approximately 1), and hence leads to simpler proofs and less stringent moment restrictions. However, it should be noted that the exponential version is more natural from the physical point of view, and entire works such as [DZ16] have been devoted to finding the correct renormalization and phase-transition behavior for that version as a function of the moment assumptions.
Outline: In Section 2, we prove Theorem 1.1 as Theorem 2.4, which uses [BBC20] and [Wu18] as important inputs. In Section 3, we will introduce and state some estimates about the transition densities associated to positive random walks, though the proofs are postponed to the appendices. In Section 4, we will develop the existence and uniqueness theory of the limiting SPDE (1.5) from Theorem 1.2, and as a corollary we prove that $\partial_X Z_{\mathrm{Dir}}(T, 0)$ exists. In Section 5, we prove Theorem 1.2 by using the estimates developed in the appendices. In the appendices we derive some elementary but powerful bounds related to the measures $P^n_x$, which are crucial for the proofs in the main body.

Main results
In this section, we show how to prove Theorem 1.1. We denote non-negative reals as R + and non-negative integers as Z ≥0 .
We will use the notion of mild solutions for SPDEs throughout this article. Thus for completeness, we begin by giving the formal definition of such a solution, although it is peripheral to the main goals of the section.
Let $\xi$ be a space-time white noise defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and let $\mu$ be an independent random Borel measure on $\mathbb{R}_+$. A continuous space-time process $Z_{\mathrm{Dir}} = (Z_{\mathrm{Dir}}(T, X))_{T, X \ge 0}$ is a mild solution of the Dirichlet-boundary SHE with initial data $\mu$ if, $\mathbb{P}$-almost surely, for all $X, T \ge 0$ one has
\[ Z_{\mathrm{Dir}}(T, X) = \int_{\mathbb{R}_+} P_{\mathrm{Dir}}(T; X, Y)\, \mu(dY) + \int_0^T \int_{\mathbb{R}_+} P_{\mathrm{Dir}}(T - S; X, Y)\, Z_{\mathrm{Dir}}(S, Y)\, \xi(dS\, dY), \]
where the integral against $\xi$ is to be interpreted in the Itô-Walsh sense [Wal86].
The fact that this object exists and is unique will be established as a special case of the results in Section 4. The definition of a mild solution $Z^{(A)}_{\mathrm{Rob}}$ of (SHE) is very similar, but one replaces the Dirichlet heat kernel with the Robin boundary one throughout. We refer the reader to Section 4 of [Par19] for more details, including the existence/uniqueness of this Robin boundary version.
The proof of Theorem 1.1 will be obtained by approximating both Z (A) Dir and Z (A) Rob by the partition function of a directed polymer with log-gamma weights. For these weights we use a known identity that allows us to switch the boundary weights with those on the initial data without changing the distribution of the partition function along the boundary [BBC20] (Proposition 8.1). The approximation argument will strongly emulate the arguments given in [Wu18,AKQ14a] although there are new challenges that make the convergence result rather difficult and technical. These additional difficulties are a byproduct of the inhomogeneous Markov transition densities for random walks conditioned to stay above zero.
Let us explicitly state the Dirichlet-boundary approximation result now. For each n ∈ N, let ω n = {ω n i,j } i≥j≥0 denote a random environment indexed by the principal octant of Z 2 with the following properties: • The "bulk-environment" random variables {ω n i,j } i≥j≥1 are i.i.d., and the "lowerboundary" random variables {ω n i,0 } i≥0 are also i.i.d. These two collections are independent.
The following result is the primary technical contribution of this work.
Theorem 2.2. In the above notation and under the above assumptions, the sequence of processes $Z_n$ converges in distribution (in the sense of finite-dimensional marginals, as $n \to \infty$) to the unique space-time process satisfying (1.5) (equivalently given by (1.6)) with initial data $Z(0, X) = Z_{\mathrm{Dir}}(0, X) = e^{\sigma B_X + (\mu - \frac{1}{2}\sigma^2) X}$, where $B$ is a standard Brownian motion independent of the space-time white noise $\xi$. If we assume that all weights $\omega^n_{i,j}$ have more than eight moments, bounded independently of $n$, then distributional convergence holds when the space $C(\mathbb{R}_+ \times \mathbb{R}_+)$ is equipped with the topology of uniform convergence on compact sets.
We will see that Theorem 2.2 is essentially equivalent to a more complicated version of Theorem 1.2, where the distribution of the weights $\omega$ depends on $n$ and the domain of the polymer paths has been changed from a quadrant to an octant of $\mathbb{Z}^2$, which makes the geometry more challenging to work with. Accordingly, the proof of this theorem will proceed in two steps: first reducing the claim of the theorem to that of Theorem 1.2 with a specific initial data (which will be achieved in Section 5.1), and then proving Theorem 1.2, which is simpler thanks to known methods and is done in Section 5.2.

Remark 2.3. There are really two different regimes in which one should interpret Theorem 2.2. One regime is $X > 0$, where the result merely says that $Z_n(nT + n^{1/2}X, nT)$ converges to $Z_{\mathrm{Dir}}(T, X)$. The more interesting regime is $X = 0$, in which case the theorem says that $(\pi n T/2)^{1/2}\, Z_n(nT, nT)$ converges in law to $\lim_{X \to 0} \frac{Z_{\mathrm{Dir}}(T, X)}{2\Phi(X/\sqrt{T}) - 1}$, i.e., to $(\pi T/2)^{1/2} \lim_{X \to 0} \frac{Z_{\mathrm{Dir}}(T, X)}{X}$.
An advantage of our approach is that the proof will simultaneously cover both regimes. In fact, we will see that convergence even takes place in a parabolic Hölder space of the appropriate regularity provided that the weights have more than eight moments.
We now combine this result with the Robin boundary result of [Wu18] and the log-gamma identities of [BBC20] in order to obtain the following result, which clearly implies Theorem 1.1. In what follows, we denote by $\Gamma^{-1}(\theta, c)$ the inverse-gamma distribution of shape parameter $\theta$ and scale parameter $c$, i.e., the law of the random variable $cX$, where $X$ has pdf
\[ \frac{1}{\Gamma(\theta)}\, x^{-\theta - 1}\, e^{-1/x}\, \mathbf{1}_{\{x > 0\}}. \]
We will also write $E[\Gamma^{-1}(\theta, c)] = \frac{c}{\theta - 1}$ and $\mathrm{var}(\Gamma^{-1}(\theta, c)) = \frac{c^2}{(\theta - 1)^2(\theta - 2)}$ to denote respectively the expectation and variance of such a random variable. For $n \in \mathbb{N}$, let $\zeta^1_n = \{\zeta^1_n(i,j)\}_{i \ge j \ge 0}$ and $\zeta^2_n = \{\zeta^2_n(i,j)\}_{i \ge j \ge 0}$ be fields of independent random variables, each inverse-gamma distributed with parameters depending on $n$ and on whether the index lies on the diagonal $\{i = j\}$ or off it. Let $Z^1_n$ and $Z^2_n$ denote the associated partition functions, i.e.,
\[ Z^\alpha_n := \sum_{\gamma} \prod_{(i,j) \in \gamma} \zeta^\alpha_n(i,j), \qquad \alpha \in \{1, 2\}. \]
Here the sum is taken over all upright paths $\gamma$ from $(0, 0)$ to $(\lfloor nT \rfloor, \lfloor nT \rfloor)$ that stay in the octant $\{(i,j) : i \ge j \ge 0\}$.
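The inverse-gamma moment formulas can be checked directly against the density $f(x) = \frac{c^\theta}{\Gamma(\theta)} x^{-\theta-1} e^{-c/x}$ for $x > 0$ (the mean requires $\theta > 1$ and the variance $\theta > 2$). A quadrature sketch (parameter values are ours):

```python
import math

def inv_gamma_pdf(x, theta, c):
    # pdf of Gamma^{-1}(theta, c): c^theta / Gamma(theta) * x^(-theta-1) * exp(-c/x)
    return (c ** theta / math.gamma(theta)) * x ** (-theta - 1) * math.exp(-c / x)

theta, c = 5.0, 2.0
dx = 1e-3
xs = [dx * (k + 0.5) for k in range(int(60 / dx))]  # midpoint rule on (0, 60]
mass = sum(inv_gamma_pdf(x, theta, c) for x in xs) * dx
mean = sum(x * inv_gamma_pdf(x, theta, c) for x in xs) * dx
var = sum(x * x * inv_gamma_pdf(x, theta, c) for x in xs) * dx - mean ** 2

assert abs(mass - 1.0) < 1e-4                                  # density integrates to 1
assert abs(mean - c / (theta - 1)) < 1e-4                      # E = c/(theta-1)
assert abs(var - c ** 2 / ((theta - 1) ** 2 * (theta - 2))) < 1e-4  # variance formula
```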
Theorem 2.4. As $n \to \infty$, $\sqrt{n}\, Z^1_n$ converges in distribution to the left-hand side of (1.3), and the analogously rescaled $Z^2_n$ converges in distribution to the right-hand side.
There are now three things to verify, corresponding to the three bullet points preceding Theorem 2.2. Using the inverse-gamma moment formulas above, one gets the desired asymptotics on $E[\omega_{i,j}]$ and on $E[(\omega_{i,j})^2]$, with $\mu = -A$ and $\sigma^2 = 1$.
This proves the corollary (and thus also Theorem 1.1).
Once again we would like to emphasize the tremendous importance of [BBC20] as the primary input to proving the preceding theorem, and thus the main result (1.3). It may be interesting to explore more robust approaches that might give a direct proof of (1.3) by purely stochastic-analytic means instead of exact solvability, but we have tried and this seems out of reach for us at the moment. With Theorem 2.4 in place, we will now shift the goals of the paper to the analytical and technical aspects, focusing on the methods used to prove Theorem 2.2.
Figure 1: A graphical description of Theorem 2.4. The weight of a given path is the product of the weights along it, and the partition function $Z^\alpha_n$ for $\alpha \in \{1, 2\}$ is given by summing the weights of all upright paths from $(0, 0)$ to $(\lfloor nT \rfloor, \lfloor nT \rfloor)$ that stay in the octant. We have represented the SPDE limits by their respective (purely formal) Feynman-Kac representations.
Since the sum defining the partition function in the preceding results is over all upright paths that stay in the principal octant of $\mathbb{Z}^2$, it is natural to relate those quantities to reflecting random walk measures. However, doing asymptotics in Theorem 2.4, one may verify that $\zeta^2_n(j, j) \to 1/2$ in probability as $n \to \infty$. What this means is that instead of pure reflection, our random walk path loses mass by a factor of $1/2$ each time it hits zero. Hence, it is clear that the analysis in proving Theorem 2.2 will involve taking a close look at these random walk measures, as well as directed polymers weighted by such measures, as suggested in the introduction.
More precisely, fix some $x \in \mathbb{Z}_{\ge 0}$, and define a sample space of non-negative random walk trajectories by
\[ \Omega^n_x := \big\{ (s_k)_{k=0}^n : s_0 = x,\ |s_{k+1} - s_k| = 1,\ s_k \ge 0 \text{ for all } k \big\}. \]
Define a sub-probability measure $\mu^n_x$ and a probability measure $P^n_x$ on $\Omega^n_x$ by
\[ \mu^n_x(S) := 2^{-n}, \qquad P^n_x(S) := \frac{\mu^n_x(S)}{\mu^n_x(\Omega^n_x)} = \frac{1}{|\Omega^n_x|}, \qquad \text{for all } S \in \Omega^n_x. \]
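The total mass $\mu^n_x(\Omega^n_x) = 2^{-n}|\Omega^n_x|$ is computable by dynamic programming; for $x = 0$ the path count is the central binomial coefficient $\binom{n}{\lfloor n/2 \rfloor}$ (a classical ballot-type identity), so that $\mu^n_0(\Omega^n_0) \sim \sqrt{2/(\pi n)}$. A sketch (names are ours):

```python
from math import comb

def count_nonneg_paths(n, x):
    # cnt[y] = number of nonnegative nearest-neighbor paths of length i
    # from x ending at y; propagate for i = 1..n
    cnt = {x: 1}
    for _ in range(n):
        new = {}
        for y, v in cnt.items():
            for y2 in (y - 1, y + 1):
                if y2 >= 0:
                    new[y2] = new.get(y2, 0) + v
        cnt = new
    return sum(cnt.values())

# For x = 0 the count of nonnegative paths of length n is C(n, floor(n/2)),
# so mu^n_0(Omega^n_0) = 2^{-n} C(n, floor(n/2)) ~ sqrt(2/(pi*n))
for n in range(1, 12):
    assert count_nonneg_paths(n, 0) == comb(n, n // 2)
```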
As an intermediate step in proving Theorem 2.2, we obtain the following result.
Theorem 2.5. With the above notation, the following are true.
1. (Markov Property) Fix $n, x \ge 0$. Let $S = (S_k)_{k=0}^n$ denote the coordinate process associated to $P^n_x$, i.e., $S$ is an $\Omega^n_x$-valued random variable with law $P^n_x$. Then $(S_k)_{k=0}^n$ is a time-inhomogeneous Markov process; in fact, conditionally on $(S_k)_{k=0}^K$ with $K < n$, the process $(S_{k+K})_{k=0}^{n-K}$ is distributed according to $P^{n-K}_{S_K}$. One has explicit transition densities: for $0 \le i_1 < \dots < i_k \le n$,
\[ P^n_x\big( S_{i_1} = x_1, \dots, S_{i_k} = x_k \big) = \prod_{j=1}^{k} p^{\,n - i_{j-1}}_{\,i_j - i_{j-1}}(x_{j-1}, x_j), \qquad (i_0, x_0) := (0, x), \]
where $p^N_n$ is given in Definition 3.2 below.
2. (Survival Probability) For each fixed $x \ge 0$ one has $\mu^n_x(\Omega^n_x) \asymp n^{-1/2}$ as $n \to \infty$; in particular, $\mu^n_0(\Omega^n_0) \sim \sqrt{2/(\pi n)}$ along even $n$.

3. (Concentration) There exist $C, c > 0$ such that for every $x \ge 0$, every $0 \le m \le k \le n$, and every $u > 0$ one has
\[ \mathbb{P}^n_x\big( |S_k - S_m| \ge u\, (k - m)^{1/2} \big) \le C e^{-c u^2}. \]

4. (Convergence of Transition Densities) Let $p^N_n$ be as in Item (1). One has the convergence
\[ (n/2)^{1/2}\; p^{\,2\lfloor T n \rfloor}_{\,2\lfloor t n \rfloor}\big( \lfloor 2 n^{1/2} X \rfloor, \lfloor 2 n^{1/2} Y \rfloor \big) \xrightarrow[n \to \infty]{} P^T_t(X, Y), \]
where $P^T_t$ is the transition probability for a certain (inhomogeneous) Markov process defined in Definition 3.4 below. Moreover, for fixed $(t, T, X)$ the convergence in the $Y$-variable occurs in $L^p(\mathbb{R}_+, e^{aY} dY)$ for every $p \in [1, \infty)$.
The first part of the theorem is elementary and the last part is a more local version of the results of [Ig74,Bol76]. The third part is new as far as we know, and the second part will simply follow from the local central limit theorem. All proofs may be found in the appendices, except for (1) which is proved in Section 3.
Remark 2.6. One can actually formulate an invariance principle for this family of measures. This was done in greater generality in [Ig74, Bol76]. Fix $X \ge 0$ and set $x_N := \lfloor N^{1/2} X \rfloor$. Then the processes $\big( N^{-1/2} S_{\lfloor N t \rfloor} \big)_{t \in [0, T]}$ under $P^{\lfloor N T \rfloor}_{x_N}$ converge in law (with respect to the uniform topology on $C[0, T]$, as $N \to \infty$) to a time-inhomogeneous Markov process $B$ on $[0, T]$ whose transition densities $P^T_t(X, Y)$ are given by the limit in Item (4). This limiting process $B$ may be interpreted as a standard Brownian motion conditioned to stay positive until time $T$; see Proposition 3.5. This invariance principle will be immediate from the results of Appendix A, but it will not be needed for the results above.
Let us now discuss the basic idea of the proof of Theorem 2.2 in the special case when $(T, X) = (1, 0)$, because this is enough to give the main idea. Denote by $E_{\mathrm{KRW}}$ the expectation with respect to a reflected random walk of length $2n$ that is started from $0$ and killed at the origin with probability $1/2$, i.e., the one whose transition density is equal to $p^{(1/2)}_n$, which is defined in Section 3 below. By rotating the picture appropriately, one rewrites the partition function appearing in Theorem 2.2 as a discrete Feynman-Kac formula for this killed walk, in the following notation:
• The expectation E KRW is taken only with respect to the random walk S, i.e., conditional on the ω n i,j (which are always assumed to be independent of S).
can be thought of as a sort of "initial data" for the above discrete Feynman-Kac representation.
• {survival} is the event that the random walk survives up to time $2n$ (or equivalently, up to time $T_n$). Now, using Theorem 2.5(2) with $x = 0$, one finds that $P_{\mathrm{KRW}}(\text{survival}) \approx \sqrt{2/(\pi n)}$. Moreover, we can make the approximation $T_n \approx 2n$ for reasons justified later; see Proposition 5.8. This essentially reduces the octant geometry to that of a quadrant, thus reducing the theorem statement to that of Theorem 1.2, which is simpler, as we see below. Combining this with the above gives (2.4). In the notation of Theorem 2.5, the killed random walk conditioned to survive has law $P^n_x$, and the associated Markov process has transition densities $p^N_n$. Using Theorem 2.5(1), the expectation in the preceding expression may be expanded as a discrete chaos series (2.5), and one may convince herself (using Donsker's principle and the law of large numbers, together with the third bullet point preceding Theorem 2.2) that as $n \to \infty$, (2.7) holds
for a Brownian motion $B$. Then, taking the limit of (2.5) as $n \to \infty$ by using Theorem 2.5(4) (with some uniformity estimates), one obtains the Wiener-Itô chaos series
\[ \sum_{k \ge 0}\; \int_{0 \le t_1 < \dots < t_k \le 1} \int_{\mathbb{R}_+^{k+1}} \prod_{i=1}^{k+1} P^{\,1 - t_{i-1}}_{\,t_i - t_{i-1}}(x_{i-1}, x_i)\; f(x_{k+1}) \prod_{i=1}^{k} \xi(t_i, x_i)\; dx_1 \cdots dx_{k+1}\; dt_1 \cdots dt_k, \]
with the convention $x_0 = 0$, $t_0 = 0$, $t_{k+1} = 1$, and where the $P^T_t$ are the conditional heat kernels from the limit in Theorem 2.5(4), and $\xi$ is a space-time white noise. But (as we will see in Proposition 4.2 below) this chaos series is precisely equal to
\[ \lim_{X \to 0} \frac{Z_{\mathrm{Dir}}(1, X)}{2\Phi(X) - 1}, \]
where the initial data is $e^{\sigma B_X + (\mu - \frac{1}{2}\sigma^2) X}$, and $\Phi$ is the cdf of a standard normal, which implies that $\Phi(0) = 1/2$ and $\Phi'(0) = (2\pi)^{-1/2}$, giving the equality above. This will complete the argument for Theorem 2.2. Note that no part of the argument relies on the finer details of the weights $\omega^n_{i,j}$ beyond their mean and variance.

Uniform measures on collections of positive paths
In this section we will introduce the inhomogeneous heat kernels p N n associated to random walks conditioned to stay positive. We begin with an elementary discussion of the properties of these measures, and later we state technical estimates about these measures that will be necessary in subsequent sections, though their proofs are postponed to the appendices.
Definition 3.1. For $n \in \mathbb{Z}_{\ge 0}$ and $x \in \mathbb{Z}$, let $p_n(x)$ denote the standard heat kernel on $\mathbb{Z}$ (i.e., the transition function for a discrete-time simple symmetric random walk started from zero). Then we define
\[ p^{(1/2)}_n(x, y) := p_n(x - y) - p_n(x + y + 2), \qquad n, x, y \ge 0. \]
The kernels p (1/2) n have the following probabilistic interpretation. Consider a simple symmetric random walk (S n ) n≥0 with S 0 = 0 on the integer lattice Z. Impose the condition that this random walk gets killed, i.e., enters an auxiliary death state, at the first instance that it hits the value −1. Equivalently one can consider a random walk reflected at 0 that dies independently with probability 1/2 each time it attempts to move from site 0 to site 1. Then p (1/2) n (x, y) is the probability of the following event: the walk started from x is at position y at time n.
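This killed-walk description can be checked against the image-point formula $p^{(1/2)}_n(x, y) = p_n(x-y) - p_n(x+y+2)$, the standard reflection correction for killing at $-1$ (a Dirichlet condition there, with image point $-2 - x$). A sketch comparing it to a direct dynamic-programming computation of the killed walk (names are ours):

```python
from math import comb

def p_n(n, z):
    # p_n(z): one-point function of a simple symmetric random walk from 0
    # (zero off-parity or out of range)
    if abs(z) > n or (n + z) % 2:
        return 0.0
    return comb(n, (n + z) // 2) / 2 ** n

def killed_kernel(n, x, y):
    # Walk killed upon stepping to -1: propagate the surviving mass by DP
    prob = {x: 1.0}
    for _ in range(n):
        new = {}
        for z, v in prob.items():
            for z2 in (z - 1, z + 1):
                if z2 >= 0:  # a step to -1 kills the walk
                    new[z2] = new.get(z2, 0.0) + v / 2
        prob = new
    return prob.get(y, 0.0)

# Image-point identity: p^(1/2)_n(x, y) = p_n(x - y) - p_n(x + y + 2)
for n in range(10):
    for x in range(4):
        for y in range(6):
            assert abs(killed_kernel(n, x, y)
                       - (p_n(n, x - y) - p_n(n, x + y + 2))) < 1e-12
```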
Definition 3.2. For integers $0 \le n \le N$ we define
\[ p^N_n(x, y) := p^{(1/2)}_n(x, y)\, \frac{\sum_{z \ge 0} p^{(1/2)}_{N-n}(y, z)}{\sum_{z \ge 0} p^{(1/2)}_{N}(x, z)}. \]
The probabilistic relevance of these kernels $p^N_n$ will be demonstrated shortly in Proposition 3.3. As in Theorem 2.5, let
\[ \Omega^N_x := \big\{ (s_k)_{k=0}^N : s_0 = x,\ |s_{k+1} - s_k| = 1,\ s_k \ge 0 \text{ for all } k \le N \big\}. \]
Then denote by $P^N_x$ the uniform probability measure on $\Omega^N_x$, and let $S$ denote the coordinate process associated to this measure (e.g., $S$ can be the identity map on $\Omega^N_x$).
In plainer terms, S is a simple symmetric random walk of length N conditioned to stay non-negative throughout its course.
This proves Theorem 2.5(1) and shows that the p N n (x, ·) are probability measures.
where $s_1 * s_2$ denotes the concatenation of paths. This immediately implies that, given $S_M$, the law of $(S_{M+k})_{k=0}^{N-M}$ is $P^{N-M}_{S_M}$. This also implies that $(S_k)_{k=0}^{M}$ and $(S_{M+k})_{k=0}^{N-M}$ are conditionally independent given $S_M$. Therefore, in order to prove the given formula for transition densities, it suffices to prove the claim for $n = 1$; then the claim for general $n$ follows from the conditional independence and induction (recall that $n$ is the number of indices $0 \le i_1 < \dots < i_n \le N$ appearing in the transition formula).
To prove the formula for n = 1 it suffices, by conditional independence, to assume that i_n = N. Note that P^N_x is the probability measure associated to the killed random walk conditioned to survive, so that the desired formula follows from the definition of conditional probability, which proves the claim.
Next we introduce the continuum analogues of the previously introduced measures. We will generally use capital letters to distinguish macroscopic variables from lowercase microscopic ones.
Definition 3.4. Let P_t(X) := e^{−X²/2t}/√(2πt) denote the standard heat kernel on the whole line R. Recall the Dirichlet-boundary heat kernel P^Dir_t(X, Y) := P_t(X − Y) − P_t(X + Y). We then define the inhomogeneous kernel for 0 ≤ t ≤ T and X, Y > 0: P^T_t(X, Y) := P^Dir_t(X, Y) · (2Φ(Y/√(T − t)) − 1)/(2Φ(X/√T) − 1), where Φ(x) := (2π)^{−1/2} ∫_{−∞}^x e^{−u²/2} du is the cdf of a standard normal. For X = 0, one analogously defines the quantity for Y > 0 and T ≥ t ≥ 0: P^T_t(0, Y) := √(2πT) · (Y/t) P_t(Y) · (2Φ(Y/√(T − t)) − 1), which is the limit of the previously defined P^T_t(X, Y) as X → 0.
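As a sanity check on these definitions, P^T_t(X, ·) should integrate to 1 over (0, ∞) — this is the h-transform normalization ∫ P^Dir_t(X, Y)(2Φ(Y/√(T − t)) − 1) dY = 2Φ(X/√T) − 1 — and the X = 0 kernel should agree with the X → 0 limit. A quick numerical verification (function names ours):

```python
import math

def Phi(u):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def P_heat(t, z):
    return math.exp(-z * z / (2 * t)) / math.sqrt(2 * math.pi * t)

def P_inhom(t, T, X, Y):
    """Inhomogeneous kernel P^T_t(X, Y) of Definition 3.4, for X > 0."""
    dirichlet = P_heat(t, X - Y) - P_heat(t, X + Y)
    return dirichlet * (2 * Phi(Y / math.sqrt(T - t)) - 1) \
        / (2 * Phi(X / math.sqrt(T)) - 1)

def P_inhom0(t, T, Y):
    """The X = 0 kernel: sqrt(2 pi T) (Y/t) P_t(Y) (2 Phi(Y/sqrt(T-t)) - 1)."""
    return math.sqrt(2 * math.pi * T) * (Y / t) * P_heat(t, Y) \
        * (2 * Phi(Y / math.sqrt(T - t)) - 1)

t, T, X = 0.5, 1.0, 0.8
h = 0.002
integral = h * sum(P_inhom(t, T, X, k * h) for k in range(1, 6001))
assert abs(integral - 1.0) < 1e-4        # P^T_t(X, .) is a probability density

# X -> 0 limit agrees with the X = 0 formula
Y = 1.3
assert abs(P_inhom(t, T, 1e-6, Y) / P_inhom0(t, T, Y) - 1.0) < 1e-3
```

At t = T the factor 2Φ(Y/√(T − t)) − 1 degenerates to 1, and P^T_T(0, Y) = (Y/T)e^{−Y²/(2T)} is the familiar Rayleigh density of the meander endpoint.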
We now discuss the relevance of these kernels as Markov transition densities. Specifically, for X > 0 define W^T_X to be the probability measure on C([0, T], R_+) obtained by conditioning Brownian motion on [0, T] started from X to stay strictly positive until time T. We define B to be the canonical process associated to W^T_X. One can also define W^T_0 as the weak limit of the W^T_X as X → 0. The fact that this limiting measure exists is not difficult but not entirely trivial either (see the appendices). It is called the Brownian meander and has been studied extensively in [DIM77, DI77, CM81, Ig74] and subsequent papers on the subject.
Proposition 3.5. Fix some T, X > 0, let W^T_X be as defined above, and let B denote the associated canonical process. Consider the kernels P^T_t defined before. Then for all 0 < t_1 < ... < t_n ≤ T and Y_1, ..., Y_n > 0 one has W^T_X(B_{t_1} ∈ dY_1, ..., B_{t_n} ∈ dY_n) = Π_{i=1}^n P^{T−t_{i−1}}_{t_i−t_{i−1}}(Y_{i−1}, Y_i) dY_i, where (t_0, Y_0) := (0, X). The same statements hold true for X = 0. Before moving on to the proof, we remark that when t_n = T the numerator 2Φ(Y_n/√(T − t_n)) − 1 appearing in the last factor should be interpreted as 1. When X = 0 the first factor becomes 0/0, and one needs to take the limit, which gives the kernel P^T_t(0, ·) of Definition 3.4.
Proof. Assuming X > 0, the proof is analogous to that of Proposition 3.3. Basically, one first shows that if S < T then the conditional law of (B_{t+S})_{t∈[0,T−S]} given (B_t)_{t∈[0,S]} is equal to W^{T−S}_{B_S}, and furthermore that (B_{t+S})_{t∈[0,T−S]} and (B_t)_{t∈[0,S]} are conditionally independent given B_S. This may be proven by a single computation using the basic properties of standard Brownian motion.
As in the proof of Proposition 3.3, this then reduces the claim to proving the formula for n = 1 and t_n = T. In turn, this follows by noticing that W^T_X is the same as Brownian motion killed at zero but conditioned to survive. Hence one finds the desired formula, which proves the claim.
This concludes the introductory material on the subject, and we now state several technical estimates on these inhomogeneous heat kernels that are used heavily in the sequel. The proofs may be found in Appendix B.
Proposition 3.6. Fix τ ≥ 0. Then for n ≥ 0, define P_n(t, T; X, Y) := (n/2)^{1/2} p^{2Tn}_{2tn}((2n)^{1/2}X, (2n)^{1/2}Y), where the arguments are rounded to nearest lattice points of the admissible parity. Then for each fixed X, T, t ≥ 0, as n → ∞ the map Y ↦ P_n(t, T; X, Y) converges pointwise and in L^p(R_+, e^{aY} dY) to P^T_t(X, Y) for all p ≥ 1 and a ≥ 0. Furthermore, for all X, T ≥ 0, the map (t, Y) ↦ P_n(t, T; X, Y) converges pointwise and in L^p(dt ⊗ e^{aY} dY) to P^T_t(X, Y) for all p ∈ [1, 3) and a ≥ 0 (as n → ∞).
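Proposition 3.6 can be probed numerically at moderate n: writing p^N_n in its h-transform form (our reconstruction, as in Definition 3.2) and rescaling diffusively, the discrete kernel is already within a few percent of P^T_t(X, Y) for n in the hundreds. A rough sketch, with lattice points rounded to the admissible parity and all function names ours:

```python
import math

def Phi(u):
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def P_cont(t, T, X, Y):
    """Continuum kernel P^T_t(X, Y) from Definition 3.4."""
    pt = lambda z: math.exp(-z * z / (2 * t)) / math.sqrt(2 * math.pi * t)
    return (pt(X - Y) - pt(X + Y)) * (2 * Phi(Y / math.sqrt(T - t)) - 1) \
        / (2 * Phi(X / math.sqrt(T)) - 1)

def killed_row(steps, x):
    """p^(1/2)_steps(x, .): SSRW killed on hitting -1."""
    probs = {x: 1.0}
    for _ in range(steps):
        nxt = {}
        for y, q in probs.items():
            for z in (y - 1, y + 1):
                if z >= 0:
                    nxt[z] = nxt.get(z, 0.0) + 0.5 * q
        probs = nxt
    return probs

def psi_vec(m, xmax):
    """Survival probabilities psi_m(x) via backward recursion; exact for x <= xmax."""
    L = xmax + m + 1
    h = [1.0] * (L + 1)
    for _ in range(m):
        h = [0.5 * h[1]] + [0.5 * (h[x - 1] + h[x + 1]) for x in range(1, L)] + [h[L]]
    return h

def P_disc(n, t, T, X, Y):
    """(n/2)^{1/2} p^{2Tn}_{2tn} at lattice points nearest (2n)^{1/2}X, (2n)^{1/2}Y."""
    N, nn = round(2 * T * n), round(2 * t * n)
    x0 = round(math.sqrt(2 * n) * X)
    y0 = round(math.sqrt(2 * n) * Y)
    y0 += (x0 + nn + y0) % 2          # fix parity so that y0 is reachable
    row = killed_row(nn, x0)
    h = psi_vec(N - nn, x0 + nn)
    psi_N_x0 = psi_vec(N, x0)[x0]
    p = row.get(y0, 0.0) * h[y0] / psi_N_x0
    return math.sqrt(n / 2) * p, x0 / math.sqrt(2 * n), y0 / math.sqrt(2 * n)

approx, Xl, Yl = P_disc(400, 0.5, 1.0, 0.5, 0.7)
exact = P_cont(0.5, 1.0, Xl, Yl)
assert abs(approx / exact - 1.0) < 0.1
```

The comparison is made at the exact lattice coordinates (Xl, Yl) to isolate the local-limit error from rounding effects.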
We refer the reader to Proposition B.6 of the appendix for the proof. We remark that the factors of 2 appearing in the definition of P n are only necessary due to the periodicity of the simple random walk.
Proposition 3.7. Let a, τ > 0 and let P^T_t be the kernels from Definition 3.4. Then there exists a constant C = C(τ, a) such that for all X, Y ≥ 0, all θ ∈ [0, 1/2], and all s ≤ t ≤ T ≤ τ, the bounds (3.1)–(3.4) below hold. The proof may be found at the very end of Appendix B. We remark that these bounds are the key to the proofs of Section 4 below.

Existence of the derivative in Dirichlet SHE
Note that in order to prove the identity (1.3), one first needs to prove that the mild solution Z^Dir exists and that the limit on the right-hand side of (1.3) also exists. In this section we actually do something much stronger: we prove that the mild solution Z^Dir and the aforementioned limits not only exist, but in fact one almost surely has the simultaneous existence of lim_{X→0} Z^Dir(T, X)/X for all T ≥ 0, for a fixed initial data. Furthermore, this limit is Hölder-continuous as a function of T.
All of this will essentially be proved in a single step, by showing that the chaos series built from the kernels P^T_t converges uniformly over compact subsets of (T, X) ∈ R_+ × R_+, where t_0 := 0, X_0 := X, ξ^T(X, t) := ξ(X, T − t) for a space-time white noise ξ, and f is some random initial data with at-worst exponential growth at infinity. Then we will show almost trivially that when X, T > 0 this chaos series equals Z^Dir(T, X)/(2Φ(X/√T) − 1), where Φ is the cdf of a standard normal and Z^Dir satisfies the conditions of Definition 2.1. This simultaneously proves the existence of Z^Dir and of the desired limit. Indeed, the chaos series extends continuously to X = 0, so lim_{X→0} Z^Dir(T, X)/(2Φ(X/√T) − 1) exists; since 2Φ(X/√T) − 1 ~ X (2/(πT))^{1/2} as X → 0, this is equivalent to the existence of lim_{X→0} Z^Dir(T, X)/X (for all T ≥ 0, a.s.).
In order to prove the uniform convergence of this chaos series, we are going to use the inhomogeneous heat kernel estimates stated at the end of Section 3. The proofs may be skipped without any effect on the readability of Section 5, although some ideas are similar to ones used there.
EJP 27 (2022), paper 45.
With this motivation, we now move on to the main results of this section. Given some possibly random initial data f : R_+ → R_+, recall from (1.5) the Duhamel-form SPDE (4.1), where ξ is a space-time white noise and the stochastic integral is interpreted in the Itô sense. Since Z appears on both sides of this relation, it is not even clear a priori that a solution exists. Thus we have the following result, which will be proved by rigorously expanding (4.1) into the chaos series mentioned above.
Theorem 4.1. Fix a, τ > 0 and suppose that we have some random function-valued initial data f satisfying sup_{X≥0} e^{−aX} E[f(X)²] < ∞. Then a unique solution to the SPDE (4.1) with initial data f exists in the class of space-time functions Z(T, X) that satisfy sup_{T≤τ, X≥0} e^{−aX} E[Z(T, X)²] < ∞. Furthermore, the solution Z may be constructed in such a way that its law is supported on the space of functions that are Hölder-continuous of exponent 1/2− in the X variable and 1/4− in the T variable on any compact subset of (T, X) ∈ (0, ∞) × [0, ∞).
Proof. This is adapted from the proofs given in [Par19, Section 4]. Informally, one argues as follows: define a sequence of iterates u_n via the Picard iteration of (4.1); in other words, u_n is just the n-th term of the chaos series given by the expansion of (4.1).
Thus it is clear that the desired solution to (4.1) should be given by Σ_{n≥0} u_n. Hence, in order to formalize these ideas, we will show that the series Σ_{n≥0} u_n converges in an appropriate Banach space of random space-time functions.
To this end, let us define a Banach space B of C(R_+)-valued processes u = (u(T, ·))_{T∈[0,τ]} that are adapted to the natural filtration of ξ, equipped with an exponentially weighted norm, and define a sequence of functions F_n : [0, τ] → R for n ≥ 0 in terms of the iterates u_n defined above. By the Itô isometry, F_{n+1} is controlled in terms of F_n; now by (3.2), the kernel factor contributes a singularity (T − S)^{−1/2}, where C may depend on a and τ. Furthermore, one notes that the F_n are increasing functions of T, and therefore T ↦ ∫_0^T (T − S)^{−1/2} F_n(S) dS is also increasing (which may be verified by making the substitution S = TU). Combining this fact with (4.2) and (4.3) yields the recursion F_{n+1}(T) ≤ C ∫_0^T (T − S)^{−1/2} F_n(S) dS, (4.4), where C does not depend on n. Now, we claim that F_0(T) ≤ C (with C = C(a, τ)). Indeed, this follows by Jensen's inequality and Fubini's theorem, where in the last inequality we used (3.1) together with the assumption that E[f(X)²] ≤ Ce^{aX}. This proves that F_0 ≤ C, which means that one may iterate (4.4) to obtain F_n(T) ≲ C^n T^{n/2}/(n/2)!, which implies that Σ_{n≥0} ‖u_n‖_B < ∞. This completes the proof of existence. The proof of uniqueness is essentially the same. Indeed, if Z and Z̃ were two solutions in B started from the same initial data f, then an application of the Itô isometry reveals an analogous recursive bound. One then iterates as above to find that the left-hand side is bounded above (uniformly in T, X) by C^n T^{n/2}/(n/2)!, and by letting n → ∞ this tends to zero. Now we address the Hölder regularity. Let u_n be the iterates defined above. We know that u_0 is a smooth function of (T, X) ∈ (0, ∞) × [0, ∞) because it is the solution to the deterministic (i.e., noiseless) version of SPDE (4.1), which is just an inhomogeneous heat equation (e.g., one may simply differentiate u_0 under the integral sign). Thus it suffices to prove that the function Z_0 := Z − u_0 = Σ_{n≥1} u_n has the required Hölder regularity, so this is what we will do.
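For completeness, here is how iterating (4.4) produces the factorial decay, via the Beta integral (with a generic constant C that absorbs a factor of √π at each step):

```latex
F_{n+1}(T) \;\le\; C\int_0^T (T-S)^{-1/2} F_n(S)\,dS,
\qquad
\int_0^T (T-S)^{-1/2} S^{n/2}\,dS
\;=\; T^{\frac{n+1}{2}}\, B\!\Big(\tfrac12,\,\tfrac n2+1\Big)
\;=\; T^{\frac{n+1}{2}}\,
\frac{\Gamma(\tfrac12)\,\Gamma(\tfrac n2+1)}{\Gamma(\tfrac{n+1}{2}+1)}.
```

Starting from F_0 ≤ C and inducting, one obtains F_n(T) ≤ C(C√π)^n T^{n/2}/Γ(n/2 + 1), which is exactly the summable bound F_n(T) ≲ C^n T^{n/2}/(n/2)! since Γ(n/2 + 1) grows super-exponentially.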
In the third line we used (3.3) with θ = 1 2 − γ, and in the final line we made a substitution S = T a. Note that the final integral is bounded independently of n, so it may be absorbed into the constant (which will then depend on γ). Using hypercontractivity of the Ornstein-Uhlenbeck semigroup associated to the Gaussian noise ξ, we can bound the p th moments of elements of the homogeneous Wiener chaoses in terms of their second moments.
Specifically, if p ≥ 2 and G is an element of the n-th homogeneous Wiener chaos, then Equation 7.2 of [Hai16] says that E[|G|^p]^{1/p} ≤ (p − 1)^{n/2} E[G²]^{1/2}. Using Minkowski's inequality and summing over all n, we then obtain the desired spatial moment bound. Here D(p, T) denotes a constant which is independent of X, Y and increasing as a function of T. This is enough by Kolmogorov's criterion to ensure that Z_0 is Hölder continuous of exponent 1/2 − γ − (on compact sets) in the spatial variable.
For the temporal regularity, one computes the analogous difference E[(Z_0(T, X) − Z_0(S, X))²] and splits it into two integrals, which we call I_1 and I_2 respectively. As before, one has E[u_n(U, Z)²] ≤ e^{aZ} F_n(U) ≤ e^{aZ} C^n U^{n/2}/(n/2)!. Then one uses (3.4) with θ = 1/2 − γ to bound the inner integral of I_1, and one also uses (3.2) to bound the inner integral of I_2. Then one finally performs the integral over U on the respective domains, and one can obtain (as in the spatial case) that I_1 + I_2 ≤ C^{n+1} e^{2aX} T^{(n+1)/2} |T − S|^{1/2−γ}/(n/2)!. Then one uses hypercontractivity and sums over n (exactly as in the spatial case) to conclude.
Here D(p, T ) is an increasing function of T (the same one as before), so it can be bounded from above on any compact set of (T, X). This is enough to give Hölder regularity of 1 4 − γ 2 − in time, by Kolmogorov's criterion.
Next, we discuss the relationship of the Z that we have constructed in Theorem 4.1 with the Dirichlet-boundary SHE.
Proposition 4.2. Any solution Z of the SPDE (4.1) must a.s. satisfy the relation Z^Dir(T, X) = (2Φ(X/√T) − 1) Z(T, X), where Z^Dir solves the Dirichlet-boundary SHE as in Definition 2.1 with the same initial data f.
Proof. One notes the following relation for X > 0, which is immediate from Definition 3.4: (2Φ(X/√T) − 1) P^T_t(X, Y) = P^Dir_t(X, Y) (2Φ(Y/√(T − t)) − 1). Multiplying the Duhamel relation (4.1) through by 2Φ(X/√T) − 1 and applying this identity shows that A := (2Φ(X/√T) − 1) Z is indeed a mild solution to the Dirichlet-boundary SHE.
One thing we have not addressed is the uniqueness of solutions to the Dirichlet-boundary SHE in some large enough class of random space-time functions. This can be obtained from Theorem 4.1 with minimal work, and with the same conditions on the initial data f one can in fact obtain existence/uniqueness in the space of ξ-adapted space-time functions A satisfying sup_{T≤τ, X≥0} E[A(T, X)²] < ∞.

In this section we use a discrete chaos expansion together with the methods of [AKQ14a, CSZ17a] and the heat kernel estimates of the previous sections in order to prove Theorem 2.2. The first step (Section 5.1) is to simplify the geometry of the region where our directed polymer lives, and then (in Section 5.2) we will prove the convergence result in the simpler domain.
As a notational convention, we will usually write C for constants, and we will not generally specify when irrelevant terms are being absorbed into the constants. We will also write C(a), C(a, p), C(a, p, K) whenever we want to specify exactly which parameters the constant depends on. This will not always be specified, though. This applies throughout the paper. Please be warned that we will freely use many different bounds from the appendices in the following proofs, so the reader may wish to skim those estimates first.

Reduction from the octant model to the quadrant model
In this section, we reduce the technicality of working with the partition function in an octant to working with it in a quadrant, which simplifies many computations. The dichotomy here is that the quadrant has a simple geometry that makes polymer-convergence results of the desired type quite straightforward; on the other hand, the octant has the advantage that one has nice identities such as those of Corollary 2.4(3), which fail for a quadrant. Hence, one viewpoint is simpler for technical computations while the other is well-adapted to exact solvability. The results of this section are specific to the case of our positive random walk measures; however, the general outline and arguments that will be given may be easily modified for other random walk measures, such as the reflecting walk, as long as the analogous heat kernel bounds hold. Thus this section may potentially prove useful to other works of a similar flavor.
In what follows, we fix a sequence ω n = {ω n i,j } i,j≥0 of i.i.d. random environments with n ∈ N. As always, we denote by E (resp. P) the expectation (resp. probability) with respect to the environment ω n i,j and we denote by E n x (resp. P n x ) the expectation (resp. probability) with respect to the positive random walk measures of Section 3. Furthermore T n will denote the first time that this random walk (i, S i ), started from (0, x) with x ≥ 0, hits the diagonal line {(j, 2n − j) : j ≥ 0}.
First we need an estimate on the variance of the discrete chaos series terms.
Lemma 5.1. Let p^N_n(x, y) be the positive random walk transition probabilities given in Definition 3.2. Then there exist constants B, C, K > 0 such that for all x, n, k ≥ 0 and a ≥ 0 the following bound holds, where (k/2)! is shorthand for Γ(1 + k/2).
Proof. We first state a bound, which is Proposition B.3 in the appendix: there exist constants C, K > 0 such that for all x ≥ 0, all N ≥ n ≥ 0, all a ≥ 0, and all p ≥ 1 one has Σ_{y≥0} p^N_n(x, y)^p e^{ay} ≤ C^p (n + 1)^{−(p−1)/2} e^{ax + Ka²n}.
Thus the desired sum is bounded above by an expression which, being a Riemann sum approximation, is in turn bounded above by (say) twice the corresponding integral, where B > 0. Hence the lemma is proved.
Now we use the variance bound in conjunction with Doob's martingale inequality to get a bound on the expected supremum in the partition function.
Lemma 5.2. Take a sequence ω^n = {ω^n_{i,j}} of random environments with variance uniformly bounded above by 1. Furthermore, let {z^n_0(x)}_{x≥0} be some sequence of non-negative stochastic processes, independent of the ω^n, with the property that E[z^n_0(x)²] ≤ Ke^{an^{−1/2}x} for some constants K, a that are independent of n and x. Then there exists a constant C such that for all n, x ≥ 0 the stated bound holds.
Proof. First we fix some n ∈ N and note that the relevant process M^n_k is a P-martingale in the k variable with respect to the filtration (F^n_k)_{k≥0}, where F^n_k is generated by z^n_0 and {ω^n_{i,j}}_{0≤j≤i≤k}. Therefore, by Doob's martingale inequality, it is clear that E[sup_{0≤k≤n}(M^n_k)²] ≤ 4E[(M^n_n)²]. This reduces our work to proving the claim without the supremum inside the expectation (and replacing k by n in the upper limit of the product). To do this, we set x_0 := x by convention and expand the square of the chaos series. By Jensen's inequality we obtain an upper bound in which we may use the assumption E[z^n_0(x_{k+1})²] ≤ Ke^{an^{−1/2}x_{k+1}}. Thus we find that the expectation of the last expression is bounded above by Ce^{an^{−1/2}x_k}, because of the inequality Σ_{y≥0} p^N_n(x, y)e^{ay} ≤ Ce^{ax+Ka²n}, which holds by Proposition B.1 in the appendix. The claim follows by iterating this bound, which completes the proof.
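The Doob step E[sup_{0≤k≤n}(M^n_k)²] ≤ 4E[(M^n_n)²] is easy to visualize on a toy multiplicative martingale of the same flavor (the weight scale n^{−1/2} below is our choice, made so the variance stays O(1); this is an illustration, not the paper's exact object):

```python
import random

random.seed(0)
n, trials = 200, 5000
sup_sq, end_sq = 0.0, 0.0
for _ in range(trials):
    M, peak = 1.0, 1.0
    for _ in range(n):
        # each factor has mean 1, so M is a positive martingale
        M *= 1.0 + n ** -0.5 * random.choice((-1.0, 1.0))
        peak = max(peak, M)
    sup_sq += peak ** 2
    end_sq += M ** 2
sup_sq /= trials
end_sq /= trials
# Doob's L^2 maximal inequality: E[sup_k M_k^2] <= 4 E[M_n^2]
assert end_sq <= sup_sq <= 4 * end_sq
```

In this regime E[M_n²] = (1 + 1/n)^n ≈ e, and the empirical maximum is typically well below the factor-4 Doob bound.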
We now introduce a class of Banach spaces that will be useful for describing convergence of initial data. For α, δ ∈ (0, 1), let C^α_{e(δ)} denote the space of functions f : R_+ → R such that sup_x e^{−δx}|f(x)| + sup_{x≠y} |f(x) − f(y)|/(|x − y|^α e^{δ(x+y)}) < ∞. We turn C^α_{e(δ)} into a Banach space by defining the norm of f to be the above quantity. A straightforward consequence of Arzelà–Ascoli is that C^α_{e(δ)} embeds compactly into C^{α'}_{e(δ')} for α' < α and δ' > δ. The key estimate of this section is as follows:
Theorem 5.4 (Key Estimate). Fix α ∈ (0, 1). Suppose that (z^n_0(x))_{x∈Z_{≥0}} is a family of deterministic non-negative functions such that the linearly interpolated and rescaled family z^n_0(n^{1/2}x) is bounded with respect to the norm of C^γ_{e(δ)} for some γ, δ ∈ (0, 1).
Proof. By the triangle inequality, we have E(x, n) ≤ E_1(x, n) + E_2(x, n), and we separately show that both of these terms satisfy the desired bound. Henceforth, when we write n^α we actually mean ⌊n^α⌋. First we consider E_1, and we begin by establishing a martingale inequality. If (M_k)_{k≥0} is a martingale defined on any probability space, then for r ≤ n, Doob's inequality gives ‖sup_{r≤k≤n} |M_k − M_r|‖_p ≤ (p/(p − 1)) ‖M_n − M_r‖_p. Now let us fix some n ∈ N and define the relevant partition-function process, which is a P-martingale in the k variable for fixed n ∈ N. Consequently, using (5.2) with p = 2 and computing the right-hand side, one obtains an expression in which the latter sum starts at ℓ = 1 rather than ℓ = 0, which is crucial. These expressions come from writing Π_ℓ (1 + n^{−1/4} ω^n_{ℓ+n−n^α, S_{ℓ+n−n^α}}) − 1, and then expanding both products and taking expectations. The subtraction of 1 from the second product is what causes the sum defining F_n to start at ℓ = 1 rather than ℓ = 0.
By Jensen's inequality and the fact that z^n_0(x) ≤ Ce^{an^{−1/2}x} (with, say, a = 2δ where δ is the same as in the theorem statement), we obtain a bound in which we used Proposition B.1 in the last step. Then, by repeatedly applying Proposition B.3, one finds that F_n(i_k, x_k) is bounded above by the stated expression. Consequently, the entirety of (5.4) is bounded above, after again applying Proposition B.3 several more times, by e^{an^{−1/2}x} multiplied by the sum over 0 ≤ k ≤ n − n^α, 1 ≤ i_1 < ... < i_k ≤ n − n^α + 1, 1 ≤ ℓ ≤ n^α, 1 ≤ j_1 < ... < j_ℓ ≤ n^α + 1 of the corresponding terms. Except for the factor n^{−ℓ(1−α)/2}, we recognize a Riemann sum approximation for an absolutely convergent integral; the resulting series may be bounded by Σ_{k≥0} Σ_{ℓ≥1} C^{k+ℓ}/((k + ℓ)/2)!, which converges to a constant independent of n. Since ℓ ≥ 1 in all expressions above, the leftover factor n^{−ℓ(1−α)/2} is at worst n^{−(1−α)/2}. Summarizing the bounds, we have shown that E[E_1(x, n)] is bounded above by Ce^{an^{−1/2}x} n^{−(1−α)/2}, which implies the desired result on E_1. Now we consider E_2(x, n). Since z^n_0 is bounded in C^γ_{e(δ)}, we have the following bound with C independent of x, y, n: |z^n_0(n^{1/2}x) − z^n_0(n^{1/2}y)| ≤ C|x − y|^γ e^{δ(x+y)}.
Using the positivity of B^n_k := Π_{i=1}^k (1 + n^{−1/2} ω^n_{i,S_i}), we then find that the stated bound holds, where the final equality follows from the Markov property of the positive random walk S.

Now we recognize that
where C, K are independent of y, N, by Propositions A.9 and B.1. Consequently, we find the corresponding bound for k ∈ [n − n^α, n]. Combining our bounds yields (5.5). Now for any λ > 0, (e^{λS_k})_k is a P^n_x-submartingale, because (S_k) is a submartingale (Lemma A.2) and x ↦ e^{λx} is increasing and convex. Thus, letting G_k denote the filtration generated by the first k steps of the n-step positive random walk S, we obtain (5.6), where we used Lemma 5.2 in the last bound, with z^n_0(x) := e^{2δn^{−1/2}x}. Combining (5.5) and (5.6) gives the required result.
Next we give some Kolmogorov-type moment conditions that ensure tightness of the sequence z n 0 of initial data in C α e(δ) .

Proposition 5.5.
Suppose that {z^n}_{n≥1} is a family of random functions on R that satisfies the following moment conditions for some constants a, p, β, C independent of n, x, y:
• there exist positive integrable random variables D(n) such that sup_n E[D(n)] < ∞ and z^n(x) ≤ D(n)e^{a|x|};
• E[|z^n(x) − z^n(y)|^p] ≤ C|x − y|^{βp} e^{a(|x|+|y|)}.
Then, assuming p > 1/β, there exist δ > a and α < β − p^{−1} such that (z^n) is tight with respect to the topology of C^α_{e(δ)}. Before the proof, we remark that when we apply this result, the functions will be defined on R_+ as opposed to all of R, and thus the absolute values on x, y are unnecessary. Furthermore, the z^n appearing in the proposition statement will actually be rescaled and linearly interpolated functions z^n_0(n^{1/2}x).
Proof. Recall from earlier that C^α_{e(δ)} embeds compactly into C^{α'}_{e(δ')} whenever δ' > δ and α' < α. Therefore, to prove the proposition, it suffices to show that if the two inequalities in the statement hold uniformly over a family F of real-valued functions, then there exist α, δ such that lim_{a→∞} sup_{z∈F} P(‖z‖_{C^α_{e(δ)}} > a) = 0.
We actually show something stronger, namely that under the given assumptions there exists C > 0 such that for all a > 0, sup_{n∈N} P(‖z^n‖_{C^α_{e(δ)}} > a) ≤ Ca^{−1}. (5.7) To prove (5.7), the following fact will be useful: for any γ ∈ (0, 1), the γ-Hölder seminorm [f]_γ of a function f : [0, 1] → R is equivalent (as a seminorm) to an analogous quantity computed over dyadic increments. The exact choices of α, δ will be specified later, but for now let them denote generic constants. Now to prove (5.7), for a function z we split the norm into the weighted sup-norm part A(z, δ) and the weighted Hölder part B(z, α, δ), where ≲ denotes the absorption of some universal constant which may depend on α, δ but not on the function z. Now, with z^n uniformly satisfying the bounds given in the statement, we bound the terms A(z^n, δ) and B(z^n, α, δ) individually to obtain (5.7), using the hypotheses of the proposition. Note that by a brutal union bound and Markov's inequality, followed by the hypothesis z^n ≤ D(n)e^{a|x|}, the probability P(A(z^n, δ) > a) is controlled by a series which converges to a finite value independent of n as long as δ is chosen larger than a. Next we control B, which again uses just a brutal union bound and Markov's inequality: the resulting double series converges to a finite value independent of n so long as δ, α are chosen to satisfy δ > 2a and 1 + (α − β)p < 0. This is permissible so long as p > β^{−1}.
Lemma 5.6. Let (X_n)_{n≥0} be a non-negative L¹ supermartingale. Then P(sup_{n≥0} X_n > a) ≤ a^{−1} E[X_0] for all a > 0.
Proof. We apply the Doob–Meyer decomposition to write X = M − A, where M is a martingale with M_0 = X_0, and A is a non-decreasing process with A_0 = 0. Then M is a positive martingale and X ≤ M. Doob's first martingale inequality then gives P(sup_{n≤N} X_n > a) ≤ P(sup_{n≤N} M_n > a) ≤ a^{−1} E[M_0]. Since M_0 = X_0, letting N → ∞ gives the claim, because the right side does not depend on N and the left side approaches P(sup_n X_n > a) by monotone convergence.
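The inequality of Lemma 5.6, P(sup_n X_n > a) ≤ a^{−1}E[X_0], can be illustrated on a simple multiplicative supermartingale (the two-point factor law below, with mean 0.99 < 1, is our illustrative choice):

```python
import random

random.seed(1)
steps, trials, a = 200, 5000, 3.0
exceed = 0
for _ in range(trials):
    X = 1.0
    for _ in range(steps):
        X *= random.choice((0.90, 1.08))   # factor mean 0.99 < 1: supermartingale
        if X > a:                          # first passage above level a
            exceed += 1
            break
# Lemma 5.6: P(sup X_n > a) <= E[X_0]/a = 1/3
assert exceed / trials <= 1.0 / a
```

The negative drift makes the true exceedance probability far smaller than the maximal-inequality bound, which is what one typically sees with this estimate.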
Proof. Before proving either bullet point, we prove a preliminary bound. By Taylor expanding u ↦ u^p near u = 1, we see that (1 + n^{−1/4}ω^n_{i,0})^p = 1 + pn^{−1/4}ω^n_{i,0} + ½(p² − p)n^{−1/2}(ω^n_{i,0})² + o(n^{−1/2}), which has expectation roughly 1 + n^{−1/2}(pµ + ((p² − p)/2)σ²) + o(n^{−1/2}). For some a = a(p) this is bounded above by 1 + an^{−1/2}, which yields the preliminary bound (5.8). With this in mind, we proceed to the proof of the first bullet point. It suffices to prove the claim when y = 0 (i.e., z^n_0(y) = 1), by independence of the multiplicative increments of z^n_0. Let us begin by splitting the relevant quantity into two expectations on the right side, which we denote E_1 and E_2 and bound separately. For E_1, one uses (5.8) in the first inequality and 1 − e^{−v} ≤ v in the second one. Finally, note that u^p e^u ≤ Cu^{p/2} e^{2u} for some C > 0 independent of u ≥ 0, and applying this with u = an^{−1/2}x already gives the desired bound on E_1. Now we bound E_2. This is the difficult part, and one needs to exploit cancellations that occur at the quadratic scale (e.g., via a Burkholder-type inequality).
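The preliminary expansion can be checked to the stated order with an explicit two-point weight distribution (our illustrative choice: ω = ±1 + µn^{−1/4}, so that Eω = µn^{−1/4} and Var ω = 1):

```python
n, p, mu = 10 ** 8, 4, 1.0
eps = n ** -0.25
# illustrative two-point law for omega: mean mu * n^{-1/4}, variance 1
omegas = (1.0 + mu * eps, -1.0 + mu * eps)
exact = sum((1.0 + eps * w) ** p for w in omegas) / 2.0
predicted = 1.0 + n ** -0.5 * (p * mu + 0.5 * (p * p - p) * 1.0)
# agreement to o(n^{-1/2}): the remainder here is O(n^{-3/4}) = O(1e-6)
assert abs(exact - predicted) < 1e-6
```

The remainder comes from the third- and fourth-order terms of the binomial expansion, which carry at least three factors of n^{−1/4}.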
Next, we finally prove the octant-quadrant reduction theorem, i.e., that we can replace T n with 2n as discussed in the proof sketch at the end of Section 2. Let us reformulate the main notational conventions here: • S is a simple symmetric random walk of length n started from x and conditioned to stay positive throughout its course (i.e., the canonical process associated to the measures P n x ). We assume n − x is even.
•ω n i,j is defined to be ω n for all i, j of the same parity, where ω n i,j is a family of random environments satisfying the conditions of the three bullet points before Theorem 2.2, but now the bulk random variables are indexed by all pairs (i, j) with |i| ≥ j.
• T n is the first time that n − i = S i .
We remark that all conditions of Theorem 5.4 are almost satisfied by this environment. The only caveat is that the sequence of initial data is not deterministic, however by Propositions 5.5 and 5.7 and Skorohod's Lemma (and the fact that z n 0 are independent of the bulk weights) we may choose a probability space on which z n 0 → z 0 almost surely with respect to the topology of C α e(δ) for some choice of α, δ ∈ (0, 1). Here z 0 (x) is a geometric Brownian motion with the appropriate diffusion and drift coefficients. Note that a.s. convergence is stronger than a.s. boundedness in that norm which is the condition required in Theorem 5.4. Thus there is no loss of generality in assuming that the initial data are in fact deterministic.

Proposition 5.8 (Octant-Quadrant Reduction).
In the notation of the bullet points immediately above, we define the following random variable E(x, n) for n, x ≥ 0. Let x_n be a sequence of non-negative integers such that x_n ≤ Cn^{1/2} for some C > 0. Then E(x_n, n) → 0 in probability.
Proof. First we will show that Σ_n P^n_{x_n}(T_n ≤ n − n^{2/3}) < ∞. By Borel–Cantelli, this implies that the P^n_{x_n} may all be coupled to the same probability space in such a way that one almost surely has T_n > n − n^{2/3} for large enough n. Then the result follows immediately from Theorem 5.4 by taking α = 2/3 in the definition of E(x, n). Note that the choice of exponent 2/3 is arbitrary and could be replaced by any α > 1/2.
The right side is summable as a function of n, completing the proof.
Note that by equations (2.3) and (2.4) and the surrounding discussion (but replacing n above by 2n), the above proposition reduces the proof of Theorem 2.2 to that of Theorem 1.2 but with varying weights, so this is what we focus on now.

Convergence for the quadrant model
In this section we finally complete the main goals of the paper. Unless otherwise stated, we always implicitly assume the following: • All families {ω n i,j } of i.i.d. weights satisfy the assumptions that were stated in the bullet points before Theorem 2.2.
With the reduction (Proposition 5.8) finished, we define a partition function in the quadrant that is modified to take parity into account, for (n, x) in the lattice L (the initial data here can be any sequence of functions converging weakly and also satisfying the two bullet points of Proposition 5.7). Consider the following family of diffusively rescaled processes Z_n(T, X) := Z^n(nT, n^{1/2}X), T, X ≥ 0, where we interpolate linearly between points of the lattice L. We will now show that Z_n converges in law as n → ∞, with respect to the topology of uniform convergence on compact subsets of R_+ × R_+, to the solution of (4.1). The first step for doing this is proving tightness in the appropriate Hölder space. This part is not necessary if one is only interested in following the minimal logical flow for the proof of Theorem 1.1, and thus some of the proofs are not included. As always, we denote ‖X‖_p := E[|X|^p]^{1/p}.
Proposition 5.9 (Tightness). Let Z_n be defined as in (5.11), and assume that (for each k) the i.i.d. weights {ω^k_{i,j}}_{i,j} have p > 8 moments, bounded independently of k. Then for every a ≥ 0, θ ∈ [0, 1), and compact set K ⊂ [0, ∞)² there exists C = C(a, p, θ, K) > 0 such that one has the estimates (5.12)–(5.14) uniformly over all pairs of space-time points (T, X), (S, Y) ∈ K, the first of which reads ‖Z_n(T, X)‖_p ≤ C. In particular, the laws of the Z_n are tight with respect to the topology of uniform convergence on compact subsets of R_+ × R_+.
The restriction p > 8 is only necessary to obtain tightness in the Hölder space. Using more elegant arguments, this may be extended to p ≥ 6 (see Appendix B of [AKQ14a]).
The one-point convergence result will only require two moments though.
Proof. Note that the functions Z^k defined in (5.10) satisfy a Duhamel-form recursion. Define the martingale M_r(x, n, k) accordingly; this is a martingale in the r-variable with respect to the filtration F^k_r := σ({ω^k_{i,j}}_{1≤i≤r; j≥0}). This is because Z^k(i, y) is F^k_r-measurable, and F^k_r is independent of the mean-zero random variables ω^k_{r,y} with y ≥ 0. Applying Burkholder–Davis–Gundy and then Minkowski's inequality to M_r(x, n, k) gives (5.15). Next, we notice that since the ω^k_{i,y} are independent of Z^k(i, y), another application of Burkholder–Davis–Gundy (or in this case its more elementary version for independent sums, the Marcinkiewicz–Zygmund inequality) gives (5.16). Since p ≤ p_0 and the p_0-th moments of the ω^k_{i,y} are bounded independently of k, i, y, it follows that ‖ω^k_{i,y}‖²_p may be absorbed into the constant. Combining (5.15), (5.16), (5.17), one finds the recursive bound (5.18). Now, we note that ‖z^k_0(y)‖_p ≤ e^{ak^{−1/2}y} by (5.8). Hence Σ_y p^n_n(x, y)‖z^k_0(y)‖_p may be bounded above by Ce^{ak^{−1/2}x + Ka²k^{−1}n} by Proposition B.1. After this, we set x_0 := x and i_0 := 0 and iterate (5.18), obtaining a series bound in which we use Σ_r C^r k^{−r/2} n^{r/2}/(r/2)! ≤ e^{C²n/k} and then rename B := Ka² + C². Now replace x by n^{1/2}X, n by nT, and k by n. This gives ‖Z_n(T, X)‖²_p ≤ Ce^{aX+BT}. But e^{aX+BT} can be bounded from above on any compact set, proving (5.12).
The proofs of (5.13) and (5.14) use similar ideas (e.g., Burkholder–Davis–Gundy, convexity inequalities like those of Minkowski and Jensen, and the recursive relations satisfied by Z^k) and will be left out. Now we need to argue tightness from these estimates. But this is a direct corollary of the Kolmogorov continuity criterion (two-parameter version), Prokhorov's theorem, and the Arzelà–Ascoli theorem. Note that the condition p > 8 is needed to obtain a positive exponent in Kolmogorov's criterion. Now that we have proved tightness, we only need to obtain convergence of finite-dimensional marginals of Z_n to those of SPDE (4.1). Thanks to the Cramér–Wold device (and linearity of integration with respect to space-time white noise), this will not be more difficult than just proving convergence of one-point marginals. This can be done by using the convergence result in Proposition 3.6 together with the machinery developed in the papers [AKQ14a, CSZ17a].
Specifically, we will use Theorem 2.3 of [CSZ17a], which in turn was inspired by the results of Section 4 in [AKQ14a]. We state this result in a version adapted to our own context. Throughout, we fix T > 0 and denote ∆_k(T) := {(t_1, ..., t_k) : 0 < t_1 < ... < t_k < T, t_i ∈ R}. Also denote ∆^n_k(T) := {(t_1/n, ..., t_k/n) : 0 < t_1 < ... < t_k < Tn, t_i ∈ Z}, and let (R^d)_n := (n^{−1/2}Z)^d. Then define L^n_k := ∆^n_k(T) × (R^k)_n, and equip L^n_k with the σ-finite measure that assigns mass n^{−3/2} = n^{−1} · n^{−1/2} to each distinct space-time point (t/n, x/√n). We denote by L²(L^n_k) the L²-space associated to this measure.
Theorem 5.10 (Theorem 2.3 of [CSZ17a]). For each n ∈ N, let {ω^n_{i,j}}_{i,j≥0} be a family of random weights with mean zero and var(ω^n_{i,j}) = σ² + o(1) (as n → ∞). Let {F^n_k}_{n,k∈N} be a family of functions defined on L^n_k. Suppose that F_k : ∆_k(T) × R^k → R is a family of continuous functions such that ‖F^n_k − F_k‖_{L²(L^n_k)} → 0 as n → ∞, for every k ∈ N.
Furthermore, assume that sup_n Σ_{k≥0} ‖F^n_k‖²_{L²(L^n_k)} < ∞. Then define random variables X^n as the corresponding discrete chaos series. Then X^n converges in distribution as n → ∞ to the analogous Wiener chaos series, where ξ is a space-time white noise on R_+ × R.
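For nice functions, the L²(L^n_k) hypotheses of Theorem 5.10 are Riemann-sum statements: the measure assigning mass n^{−3/2} to each point (t/n, x/√n) approximates dt ⊗ dx. A quick check for k = 1 with the illustrative test function F(t, x) = sin(πt)e^{−x²} (our choice):

```python
import math

n = 400
# target: ∫_0^1 sin^2(pi t) dt * ∫_R e^{-2x^2} dx = (1/2) * sqrt(pi/2)
target = 0.5 * math.sqrt(math.pi / 2.0)
half = int(8 * math.sqrt(n))            # truncate the x-grid at |x| <= 8
total = 0.0
for t in range(1, n + 1):
    st = math.sin(math.pi * t / n) ** 2
    for j in range(-half, half + 1):
        x = j / math.sqrt(n)            # x ranges over n^{-1/2} Z
        total += st * math.exp(-2.0 * x * x)
total *= n ** -1.5                      # mass n^{-3/2} per lattice point
assert abs(total - target) < 1e-9
```

The agreement is far better than the generic O(n^{−1/2}) rate because the t-sum of sin² over a full period is exact and the Gaussian x-sum converges superpolynomially in the grid spacing.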
We refer the reader to Section 4 of [AKQ14a] for an explanation of the scaling exponent n −3k/4 . With this result in place, we are now ready to prove the main result of this section, which is a generalization of Theorem 1.2 to the case where the weights ω vary with n.
Theorem 5.11. Let $Z^n$ be as defined in (5.11). Then the finite-dimensional marginals of $Z^n$ converge to those of SPDE (4.1). More precisely, if $F\subset\mathbb{R}_+\times\mathbb{R}_+$ is finite, then $(Z^n(T,X))_{(T,X)\in F}$ converges in law to $(\mathcal{Z}(T,X))_{(T,X)\in F}$, where $\mathcal{Z}$ solves (4.1) with initial data $\mathcal{Z}(0,X)=e^{\sigma B_X+(\mu-\sigma^2/2)X}$ for a standard Brownian motion $B$.
Proof. Using the discussion at the end of Section 2 (more specifically, equations (2.6) and (2.7)), we know that $z^n_0(n^{1/2}X)$ converges in law to a geometric Brownian motion with drift, specifically $e^{\sigma B_X+(\mu-\sigma^2/2)X}$. We exploit the Skorokhod representation theorem to couple all of the $z^n_0$ to the same probability space in such a way that this convergence occurs a.s. uniformly on compact sets. Fix $x,t>0$. In our case, we let $F^n_k(t_1,\dots,t_k;x_1,\dots,x_k)$ be the $k$-th chaos kernel of $Z^n$, built from the initial data $z^n_0$ and products of the transition kernels $P^n$, where $P^T_t$ was given in Definition 3.4, $P^n$ was defined in Proposition 3.6, and where $(x_0,t_0):=(x,t)$. The condition that $\sup_n\sum_{k\ge0}\|F^n_k\|^2_{L^2(\mathbb{L}^n_k)}<\infty$ follows quite simply from Lemma 5.1. Also the condition that $\|F^n_k-F_k\|_{L^2(\mathbb{L}^n_k)}\to0$ as $n\to\infty$ follows by inducting on the last statement in Proposition 3.6.
By Theorem 5.10, we conclude that the one-point marginals of $Z^n$ converge to those of the solution of (4.1). The proof for multi-point marginals is similar, but one defines a new family $\tilde F^n_k$ by taking linear combinations of the $F^n_k$ defined above, and then applies the Cramér-Wold device to conclude.

A Preliminary estimates and concentration of measure
The purpose of this appendix is to gather estimates for the simple symmetric random walk conditioned to stay positive. The results and proofs are classical in spirit, and the literature on such measures is extensive [Ig74, Bol76, Car05, CC08, DIM77]. However, we will only give a brief exposition of those selected estimates that apply to our nearest-neighbor weights; many of these we could not find in the above references, and they might be applicable to other models.
We recall the uniform positive random walk measures $\mathbf P^n_x$ and the three associated quantities ($p^N_n$, $p^{(1/2)}_n$, and $\psi$) that were defined in Section 3. The main goal of this appendix will be to prove the following concentration inequality for the measures $\mathbf P^n_x$:
$$\mathbf P^n_x\big(|S_k-x|\ge u\big)\ \le\ Ce^{-cu^2/k},$$
where $C,c$ are independent of $n,x,k$ with $k\le n$. This will in turn allow us to prove various $L^p$ moment bounds that are used in Section 5. The methods used in proving these results will be coupling arguments and martingale techniques, some of which might be useful in and of themselves. More specifically, the main key will be to notice that for fixed $n\in\mathbb{N}$, the process
$$M^n_k:=\frac{S_k+1}{\psi(S_k,n-k)}$$
is a $\mathbf P^n_x$-martingale with respect to the $k$-variable. Moreover we will use the fact that $(S_k)$ is itself a submartingale. First we state a few preliminary lemmas.
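As a quick sanity check (not part of the proofs), the martingale property of $M^n_k$ can be verified exactly in Python. The sketch below computes $\psi(x,N)$, the probability that a simple symmetric random walk from $x$ stays nonnegative for $N$ steps, by backward recursion, and checks the one-step martingale identity; the helper names are ours.

```python
from fractions import Fraction

def psi(x, N):
    """P_x(simple random walk stays >= 0 for N steps); stepping to -1 kills the path."""
    maxy = x + N + 1
    surv = [Fraction(1)] * (maxy + 1)   # 0 steps remaining: survival probability 1
    for _ in range(N):
        new = [Fraction(0)] * (maxy + 1)
        for y in range(maxy):
            new[y] = (surv[y + 1] + (surv[y - 1] if y else Fraction(0))) / 2
        surv = new
    return surv[x]

def p_up(y, m):
    """Up-step probability of the conditioned walk with m steps remaining."""
    return Fraction(1, 2) * psi(y + 1, m - 1) / psi(y, m)

# one-step martingale property: E^n_x[(S_1+1)/psi(S_1, n-1)] = (x+1)/psi(x, n)
for n in range(2, 12):
    for x in range(6):
        up = p_up(x, n)
        lhs = up * Fraction(x + 2) / psi(x + 1, n - 1)
        if x >= 1:  # from x = 0 the conditioned walk steps up with probability 1
            lhs += (1 - up) * Fraction(x) / psi(x - 1, n - 1)
        assert lhs == Fraction(x + 1) / psi(x, n)
```

Here $\psi$ is computed by the recursion $\psi(y,m)=\tfrac12(\psi(y+1,m-1)+\psi(y-1,m-1))$ with $\psi(\cdot,0)\equiv1$ and $\psi(-1,\cdot)\equiv0$, so the identity holds exactly over the rationals.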
Lemma A.1. Let $\psi(x,N)$ be as in Definition 3.2. Then there exists a constant $C>0$ such that for all $x,N\ge0$ one has
$$\frac{x+1}{x+1+C\sqrt N}\ \le\ \psi(x,N)\ \le\ \frac{C(x+1)}{x+1+\sqrt N}.$$
Furthermore for each $x\ge0$ one has that $\lim_{N\to\infty}\sqrt N\,\psi(x,N)=(x+1)\sqrt{2/\pi}$.
Note that this already proves Theorem 2.5(2). Furthermore, note that the upper and lower bounds on $\psi$ are strong enough to give an upper and lower envelope on $\psi$, i.e., $\psi(x,N)\asymp\frac{w}{1+w}$ where $w:=(x+1)/\sqrt N$. We now proceed to the proof.
Proof. We first consider the case $x\ge2\sqrt N$, and we set $q:=x/\sqrt N$. Then $q\ge2$, so $q+2\le e^q$, and the claimed bounds follow in this regime. Now we consider the case $x\le2\sqrt N$. The local central limit theorem gives the required two-sided estimate here; this proves the lower bound. Finally, we prove the last statement about the limit. For this, let us write $\psi(x,N)$ as a sum over the killed heat kernel. The local limit theorem tells us that for each $u$, the quantity $\sqrt N\,p_N(u)$ oscillates back and forth between $\sqrt{2/\pi}$ and zero (depending on the parity of $N$) as $N$ becomes large. This already implies that $N^{1/2}$ times the right side converges to $(1+x)\sqrt{2/\pi}$.

Lemma A.2 (Monotonicity). Fix $n\in\mathbb{N}$. Then $\psi(x,n)$ is an increasing function of $x$. Thus, $p^n_1(x,x+1)\ge1/2\ge p^n_1(x,x-1)$ for all $x,n\ge0$. Furthermore $p^n_1(x,x+1)$ is a decreasing function of $x$, and $p^n_1(x,x-1)$ is an increasing function of $x$.
The proof is straightforward.
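Though the proof is elementary, the monotonicity claims are also easy to confirm numerically for small parameters. The following Python sketch (helper names ours) uses the same backward recursion for $\psi$ and checks that $\psi(\cdot,n)$ is increasing and that the conditioned up-step probability is at least $1/2$ and decreasing:

```python
from fractions import Fraction

def psi(x, N):
    """P_x(simple random walk stays >= 0 for N steps); stepping to -1 kills the path."""
    maxy = x + N + 1
    surv = [Fraction(1)] * (maxy + 1)
    for _ in range(N):
        new = [Fraction(0)] * (maxy + 1)
        for y in range(maxy):
            new[y] = (surv[y + 1] + (surv[y - 1] if y else Fraction(0))) / 2
        surv = new
    return surv[x]

for n in range(1, 10):
    # psi(x, n) is increasing in x
    vals = [psi(x, n) for x in range(8)]
    assert all(vals[i] <= vals[i + 1] for i in range(7))
    # p^n_1(x, x+1) >= 1/2 and is decreasing in x
    ups = [Fraction(1, 2) * psi(x + 1, n - 1) / psi(x, n) for x in range(8)]
    assert all(u >= Fraction(1, 2) for u in ups)
    assert all(ups[i] >= ups[i + 1] for i in range(7))
```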
Proposition A.3 (Coupling lemma for positive walks). Fix $n\in\mathbb{N}$ and $x\ge0$. There exists a coupling $\mathbf Q^n_{x,x+1}$ of the measures $\mathbf P^n_x$ and $\mathbf P^n_{x+1}$ that is supported on pairs $(\gamma,\gamma')$ of paths such that $|\gamma_i-\gamma'_i|\le1$ for all $i\le n$. More generally, for fixed $n\in\mathbb{N}$, the measures $\{\mathbf P^n_x\}_{x\ge0}$ may all be coupled together in such a way that the coordinate processes associated to neighboring values of $x$ are never more than distance 1 apart.
Proof. We make an inductive construction as follows. Let $S_0=x$ and $S'_0=x+1$.
Suppose that $S_0,\dots,S_k$ and $S'_0,\dots,S'_k$ have been constructed in such a way that $|S_i-S'_i|=1$ for all $i\le k$. If $S'_k=S_k+1$, we define
$$(S_{k+1},S'_{k+1}):=\begin{cases}(S_k+1,\,S'_k+1)&\text{with probability }p^{n-k}_1(S'_k,S'_k+1),\\(S_k+1,\,S'_k-1)&\text{with probability }p^{n-k}_1(S_k,S_k+1)-p^{n-k}_1(S'_k,S'_k+1),\\(S_k-1,\,S'_k-1)&\text{with probability }1-p^{n-k}_1(S_k,S_k+1).\end{cases}$$
We know by Lemma A.2 that all three probabilities are nonnegative, so this prescription makes sense. Similarly, if $S'_k=S_k-1$, then we define $(S_{k+1},S'_{k+1})$ in a symmetric fashion. This completes the inductive step.
By Proposition 3.3, $S$ is distributed as $\mathbf P^n_x$ and $S'$ is distributed as $\mathbf P^n_{x+1}$. The proof of the more general statement is very similar. One simply uses a uniform coupling together with Lemma A.2, and the argument is a straightforward generalization of the one given above for two values of $x$.
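The construction above is easy to simulate. The following Python sketch (a simulation with our own helper names, driving both chains by a shared uniform variable as in the uniform coupling; by Lemma A.2 this realizes the same three cases) checks that the coupled paths never separate by more than 1 and stay nonnegative:

```python
from fractions import Fraction
import random

def psi(x, N):
    """P_x(simple random walk stays >= 0 for N steps), exact."""
    maxy = x + N + 1
    surv = [Fraction(1)] * (maxy + 1)
    for _ in range(N):
        new = [Fraction(0)] * (maxy + 1)
        for y in range(maxy):
            new[y] = (surv[y + 1] + (surv[y - 1] if y else Fraction(0))) / 2
        surv = new
    return surv[x]

n, x0 = 20, 2
# up-step probabilities p_up[m][y] of the conditioned walk, m steps remaining
p_up = {m: {y: float(Fraction(1, 2) * psi(y + 1, m - 1) / psi(y, m))
            for y in range(n + x0 + 2)} for m in range(1, n + 1)}

random.seed(0)
for _ in range(500):
    S, Sp = x0, x0 + 1              # neighboring starting points
    for k in range(n):
        u, m = random.random(), n - k   # one shared uniform drives both chains
        S  = S  + 1 if u < p_up[m][S]  else S  - 1
        Sp = Sp + 1 if u < p_up[m][Sp] else Sp - 1
        assert abs(S - Sp) == 1 and S >= 0 and Sp >= 0
```

Since $p^{m}_1(y,y+1)$ is decreasing in $y$ (Lemma A.2), a shared uniform can only produce "both up", "both down", or a swap, so $|S_k-S'_k|=1$ at every step; and since the up-probability at $0$ is exactly $1$, neither path ever goes negative.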
Proposition A.4. Fix $n\in\mathbb{N}$ and $0\le k\le n$, and for $0\le i\le k$ and $y\ge0$ let $f^{(k,n)}(y,i):=\mathbf E^n_x[S_k\mid S_i=y]$ (which does not depend on $x$, by the Markov property). Then $M^{(k,n)}_i:=f^{(k,n)}(S_i,i)$ is a martingale with respect to the natural filtration of $S$. Furthermore it has bounded increments. In the special case when $k=n$, one has the explicit form
$$f^{(n,n)}(y,i)=\frac{y+1}{\psi(y,n-i)}-1.$$
Proof. We suppress the superscript $(k,n)$ on $f$ from now on. Letting $\mathcal F_i$ denote the natural filtration of $S$, it is a consequence of the Markov property that $f(S_i,i)=\mathbf E^n_x[S_k\mid\mathcal F_i]$, which shows that $M$ is a martingale in the $i$-variable for fixed $x,n,k$.
To prove that it has bounded increments, first note that $|f(y+1,i)-f(y,i)|=\big|\mathbf E^n_x[S_k\mid S_i=y+1]-\mathbf E^n_x[S_k\mid S_i=y]\big|$. By the coupling lemma (Proposition A.3), this is bounded in absolute value by 1. Consequently, one finds that
$$|M_{i+1}-M_i|\ \le\ |f(S_{i+1},i+1)-f(S_i,i+1)|+|f(S_i,i+1)-f(S_i,i)|\ \le\ 2,$$
which gives the desired result.
For the final statement, if $k=n$ one may compute
$$f^{(n,n)}(y,i)+1=\sum_{z\ge0}p^{n-i}_{n-i}(y,z)(z+1)=\frac{1}{\psi(y,n-i)}\sum_{z\ge0}p^{(1/2)}_{n-i}(y,z)(z+1).$$
Now the claim follows from the fact that $y\mapsto y+1$ is an eigenfunction of the semigroup $p^{(1/2)}$ with eigenvalue 1, i.e., $\sum_{y\ge0}(y+1)p^{(1/2)}_n(x,y)=x+1$ for every $n,x\ge0$.

Lemma A.5. Let $b\ge0$. There exists a constant $C=C(b)>0$ such that for all $n\ge1$ and all $x,y\ge0$ one has
$$p^{(1/2)}_n(x,y)\ \le\ Cn^{-1/2}\Big(1\wedge\frac{(x+1)(y+1)}{n}\Big)e^{-b|y-x|/\sqrt n}.$$
The proof will be omitted, because it is fairly standard and follows the same train of estimates given in works such as [DT16]. The main point is to note that the standard heat kernel on $\mathbb Z$ satisfies $\sum_{x\in\mathbb Z}p_n(x)z^x=2^{-n}(z+z^{-1})^n$, so we can use Cauchy's integral formula to write it as follows:
$$p_n(x)=\frac{1}{2\pi i}\oint_{\mathcal C}2^{-n}(z+z^{-1})^n\,z^{-x-1}\,dz.$$
Then we choose the contour $\mathcal C$ cleverly (specifically a circle of radius $e^{bn^{-1/2}}$ centered at the origin) and finally use the fact that $p^{(1/2)}_n$ can be written in terms of $p_n$ via Definition 3.1. See Appendix A of [DT16] for details on obtaining bounds in this way.
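The eigenfunction identity used above can be verified exactly in Python, using the reflection formula $p^{(1/2)}_n(x,y)=p_n(x-y)-p_n(x+y+2)$ for the kernel killed at $-1$ (helper names ours):

```python
from fractions import Fraction
from math import comb

def p_srw(n, z):
    """P(simple random walk at time n equals z), started from 0."""
    if (n + z) % 2 or abs(z) > n:
        return Fraction(0)
    return Fraction(comb(n, (n + z) // 2), 2 ** n)

def p_half(n, x, y):
    """Kernel of the walk killed at -1, via the reflection formula."""
    return p_srw(n, y - x) - p_srw(n, y + x + 2)

# eigenfunction identity: sum_{y >= 0} (y+1) p_half(n, x, y) = x + 1
for n in range(0, 15):
    for x in range(6):
        total = sum((y + 1) * p_half(n, x, y) for y in range(n + x + 1))
        assert total == x + 1
```

The identity holds because $y\mapsto y+1$ is harmonic for one step of the killed walk: at interior sites $\tfrac12(y)+\tfrac12(y+2)=y+1$, while at $y=0$ the killed down-step contributes $0$ and $\tfrac12\cdot 2=1$.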
Lemma A.6. There exists a constant $C>0$ such that for all $x\ge0$ and all $n\ge k\ge1$ one has
$$\mathbf E^n_x[S_k]\ \le\ x+Ck^{1/2}.$$
Proof. We consider two cases, $k>n/2$ and $k\le n/2$.

Case 1. $k>n/2$.
In fact, it is even true that $S$ forms a $\mathbf P^n_x$-submartingale, and thus $\mathbf E^n_x[S_k]$ is an increasing function of $k$ for every $n$. This follows immediately from Lemma A.2 after noticing that $\mathbf E^n_x[S_{k+1}\mid\mathcal F_k]=S_k+(2p^{n-k}_1(S_k,S_k+1)-1)\ge S_k$. Now, from the preceding proposition, we know that $M_k:=\frac{S_k+1}{\psi(S_k,n-k)}$ forms a martingale. Thus, we see that
$$\mathbf E^n_x[S_k]+1=\mathbf E^n_x\big[M_k\,\psi(S_k,n-k)\big]\ \le\ \mathbf E^n_x[M_k]=\frac{x+1}{\psi(x,n)}\ \le\ x+1+Cn^{1/2},$$
where we applied the lower bound of Lemma A.1 in the final bound. Since $k>n/2$, we see that $n^{1/2}\le2^{1/2}k^{1/2}$, which gives the desired bound in this case.

Case 2. $k\le n/2$. First we use the coupling lemma (Proposition A.3) to see that $\mathbf E^n_{x+1}[S_k]\le\mathbf E^n_x[S_k]+1$. Iterating this $x$ times shows that $\mathbf E^n_x[S_k]\le\mathbf E^n_0[S_k]+x$. Thus we only need to show that $\mathbf E^n_0[S_k]\le Ck^{1/2}$. To prove this, let us write $\mathbf E^n_0[S_k]=\sum_{y\ge0}p^n_k(0,y)\,y$. Now we write $p^n_k(0,y)=p^{(1/2)}_k(0,y)\frac{\psi(y,n-k)}{\psi(0,n)}$. By Lemma A.1 we know $\frac{1}{\psi(0,n)}\le C\sqrt n$. Furthermore we also know from the same lemma that $\psi(y,n-k)$ is bounded above by $1\wedge(Cy(n-k)^{-1/2})$, which is in turn bounded above by $1\wedge(Cyn^{-1/2})$ since $k\le n/2$. Moreover, we also know from Lemma A.5 that $p^{(1/2)}_k(0,y)\le C(y+1)k^{-3/2}e^{-y/\sqrt k}$. Thus, we find that
$$\mathbf E^n_0[S_k]\ \le\ Ck^{-3/2}\Big[\sum_{0\le y<\sqrt n}e^{-y/\sqrt k}\,y^2(y+1)\ +\ \sum_{y\ge\sqrt n}e^{-y/\sqrt k}\,n^{1/2}\,y(y+1)\Big].\tag{A.3}$$
Let us refer to the two sums inside the square brackets on the right side as $J_1$ and $J_2$, respectively.
A direct computation gives $J_1\le Ck^2$, and, since $k\le n/2$, the exponential factor likewise forces $J_2\le Ck^2$. Combining the bounds of $J_1$ and $J_2$ with (A.3), we obtain the desired bound.
Finally we have our concentration theorem, the main result of this appendix.
Theorem A.7 (Concentration). As before, let $S=(S_k)_{0\le k\le n}$ denote the canonical process associated to $\mathbf P^n_x$. Then there exist $C,c>0$ such that for every $x\ge0$, every $0\le k\le n$, and every $u>0$ one has
$$\mathbf P^n_x\big(|S_k-x|\ge u\big)\ \le\ Ce^{-cu^2/k}.$$
In other words, on time scales of length $k$, the path measure $\mathbf P^n_x$ concentrates on spatial scales of order $\sqrt k$ around $x$. The idea of the proof is to exploit the martingales from Proposition A.4 and apply well-known concentration inequalities for bounded-increment martingales. The Gaussian decay constant $c$ will be obtained as $1/32$, which is not sharp (presumably $c=1/2$ should be possible, but we do not have a proof).
Proof. Throughout this proof, $x,n$, and $k$ will be fixed. Let us write
$$\mathbf P^n_x\big(|S_k-x|\ge u\big)\ \le\ \mathbf P^n_x\big(S_k-x\ge u\big)+\mathbf P^n_x\big(S_k-x\le-u\big).$$
Let us refer to the terms on the right side as $p_1,p_2$ respectively. First we bound $p_2$. Recall from Lemma A.2 that $p^n_1(x,x+1)\ge1/2\ge p^n_1(x,x-1)$ for all $n,x\ge0$. This trivially shows that $S$ is a submartingale, which directly gives the claim for $p_2$ by Azuma's inequality [Azu67] for submartingales, with $c=1/2$. Now we will bound $p_1$, which is more difficult. Letting $M=(M^{(k,n)}_i)_{0\le i\le k}$ be the martingale of Proposition A.4, we use the bound $\mathbf E^n_x[S_k]\le x+Ck^{1/2}$ of Lemma A.6 together with the fact that $(u-Ck^{1/2})^2\ge\frac12u^2-C^2k$; this, in turn, is because $(a+b)^2\le2(a^2+b^2)$. Combining the bounds on $p_1$ and $p_2$ then yields the claimed inequality, so it remains to complete the bound on $p_1$. Since $(S_i)$ is a submartingale (Lemma A.2) and since $x\mapsto e^{\lambda x}$ is increasing and convex, it follows that the process $(e^{\lambda S_i})_{i=0}^n$ is a $\mathbf P^n_x$-submartingale as well. Thus, we may apply Doob's martingale inequality, and then split the resulting integral into two pieces. Setting $\lambda=\frac{u}{8k}$ gives a bound of $C(e^{-u^2/8k}+uk^{-1/2}e^{-u^2/16k})$. Now one simply notes that $r\le Ce^{r^2/32}$, so that $uk^{-1/2}\le Ce^{u^2/32k}$. This gives the desired bound on $p_1$, where the constant appearing in the theorem statement is $c:=1/32$.
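Although no substitute for the proof, the concentration phenomenon is easy to observe numerically: the following Python sketch (helper names and parameter choices ours) computes the exact law of $S_k$ under $\mathbf P^n_x$ by forward recursion with the conditioned transition probabilities, and checks that essentially no mass lies more than a few multiples of $\sqrt k$ from $x$.

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def psi(x, N):
    """P_x(simple random walk stays >= 0 for N steps), exact."""
    maxy = x + N + 1
    surv = [Fraction(1)] * (maxy + 1)
    for _ in range(N):
        new = [Fraction(0)] * (maxy + 1)
        for y in range(maxy):
            new[y] = (surv[y + 1] + (surv[y - 1] if y else Fraction(0))) / 2
        surv = new
    return surv[x]

n, x, k = 30, 5, 15
dist = {x: Fraction(1)}                 # exact law of S_j under P^n_x, j = 0
for j in range(k):
    m = n - j                           # steps remaining
    new = {}
    for y, pr in dist.items():
        up = Fraction(1, 2) * psi(y + 1, m - 1) / psi(y, m)
        new[y + 1] = new.get(y + 1, Fraction(0)) + pr * up
        if y >= 1:                      # from 0 the conditioned walk moves up surely
            new[y - 1] = new.get(y - 1, Fraction(0)) + pr * (1 - up)
    dist = new

assert sum(dist.values()) == 1          # exact transition probabilities sum to one
u = 4 * int(k ** 0.5)                   # roughly 4 * sqrt(k)
tail = sum(pr for y, pr in dist.items() if abs(y - x) >= u)
assert tail < Fraction(1, 100)          # mass beyond ~4 sqrt(k) is negligible
```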
We now give a slightly generalized version of the concentration theorem.
Corollary A.8. In the same setting as the previous theorem, there exist $C,c>0$ such that for every $x\ge0$, every $0\le m\le k\le n$, and every $u>0$ one has
$$\mathbf P^n_x\big(|S_k-S_m|\ge u\big)\ \le\ Ce^{-cu^2/(k-m)}.$$
Here, $C,c$ are the same as in the previous theorem.
Proof. By the Markov property, $\mathbf P^n_x\big(|S_k-S_m|\ge u\big)=\mathbf E^n_x\big[g(k-m,n-m,S_m,u)\big]$, where $g(k,n,y,u):=\mathbf P^n_y(|S_k-y|\ge u)$. But Theorem A.7 tells us that $g(k,n,x,u)\le Ce^{-cu^2/k}$ independently of $x,n$.
Corollary A.9. Let $p>0$. There exists a constant $C=C_p>0$ such that for every $x\ge0$ and every $0\le k\le m\le n$, one has
$$\mathbf E^n_x\big[|S_m-S_k|^p\big]\ \le\ C_p\,(m-k)^{p/2}.$$
Proof. By the layer-cake formula, $\mathbf E^n_x[|S_m-S_k|^p]=\int_0^\infty pu^{p-1}\,\mathbf P^n_x(|S_m-S_k|\ge u)\,du$. By Corollary A.8, this is bounded above by
$$C\int_0^\infty pu^{p-1}e^{-cu^2/(m-k)}\,du\ =\ C_p\,(m-k)^{p/2},$$
where we made the substitution $y=(m-k)^{-1/2}u$ in the first equality.
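For the reader's convenience, the Gaussian integral appearing in this computation can be evaluated in closed form (a routine calculation; the shorthand $\sigma^2:=m-k$ is ours):

```latex
\int_0^\infty p\,u^{p-1}e^{-cu^2/\sigma^2}\,du
\;\overset{u=\sigma y}{=}\;
\sigma^{p}\int_0^\infty p\,y^{p-1}e^{-cy^2}\,dy
\;=\;\sigma^{p}\,\frac{p}{2}\,c^{-p/2}\,\Gamma\!\Big(\frac{p}{2}\Big),
```

so one may take $C_p=C\,\frac p2\,c^{-p/2}\,\Gamma(p/2)$, which in particular grows at most like $p^{p/2}$ up to exponential factors.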
By Arzela-Ascoli, the preceding corollary clearly implies tightness of the diffusively rescaled process mentioned in Remark 2.6. Indeed we can use this to easily recover classical results such as [Ig74,BJD06] in this nearest-neighbor case, for instance by showing that any subsequential limit has the same finite-dimensional marginal distributions as W T X which in turn can be shown e.g. by Proposition B.6 below.

B Heat kernel estimates for conditioned walks
We now prove various estimates for the heat kernels $p^N_n$ defined in Section 3. Not much motivation will be given here, but the content of Sections 4 and 5 has illustrated the applicability of these estimates. The methods used in proving these bounds will be elementary bounds together with the results of Appendix A (specifically Lemma A.5, Proposition A.3, and Theorem A.7).
Proposition B.1. There exist constants $C,K>0$ such that for all $x\ge0$, all $N\ge n\ge0$, and all $a\ge0$ one has
$$\mathbf E^N_x\big[e^{aS_n}\big]\ \le\ Ce^{ax+Ka^2n}.$$
Proof. By the layer-cake formula and the concentration theorem (Theorem A.7),
$$\mathbf E^N_x\big[e^{aS_n}\big]\ \le\ e^{ax}+C\int_0^\infty ae^{au}e^{-c(u-x)^2/n}\,du\ \le\ e^{ax}+C\cdot an^{1/2}e^{ax+\frac{a^2n}{4c}}.$$
Since $an^{1/2}\le Ce^{a^2n}$, the claim follows with $K=1+\frac{1}{4c}$.
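The second inequality in the display is a standard completing-the-square computation; explicitly (with $v:=u-x$),

```latex
\int_0^\infty a e^{au}e^{-c(u-x)^2/n}\,du
\;\le\; e^{ax}\int_{-\infty}^{\infty} a e^{av}e^{-cv^2/n}\,dv
\;=\; a\sqrt{\tfrac{\pi n}{c}}\;e^{ax+\frac{a^2 n}{4c}},
```

since $av-\frac{cv^2}{n}=-\frac{c}{n}\big(v-\frac{an}{2c}\big)^2+\frac{a^2n}{4c}$ and $\int_{\mathbb R}e^{-cv^2/n}\,dv=\sqrt{\pi n/c}$.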
We remark that $c=1/32$ from the proof of Theorem A.7, so we can obtain $K=9$ in the preceding proposition. Conjecturally, the optimal value of $K$ should be $1/2$, as is the case for the simple random walk (as seen from $\cosh(a)\le e^{\frac12a^2}$).

Lemma B.2. Fix $b>0$. There exists $C=C(b)>0$ such that for all $x,y\ge0$ and all $N\ge n\ge1$ one has
$$p^N_n(x,y)\ \le\ Cn^{-1/2}e^{-b|y-x|/\sqrt n}.\tag{B.1}$$
We remark that this bound is fairly strong, and many of our estimates could have been derived from this result rather than from the concentration theorem (but only in a weaker form, because the decay is merely exponential rather than Gaussian).
Proof. We consider four different cases.
Case 2. $n<N/2$ and $y\le x$. Here we use the fact that $x\mapsto\frac{x+1+\sqrt{N-n}}{x+1}$ is monotone decreasing in the last line. Since $n<N/2$ it follows that $\big(\frac{N}{N-n}\big)^{1/2}\le2^{1/2}$, so that term may be absorbed into $C$.
Case 3. $n<N/2$ and $y\ge x$. Here we note $y\ge x$ in the second line, and we use the fact that $x\mapsto\frac{x+1+\sqrt{N-n}}{x+1}$ is monotone decreasing in the third line. In the final line, we use $\big(\frac{N}{N-n}\big)^{1/2}\le2^{1/2}$ (since $n<N/2$) and also the first bound of Lemma A.5. Now, we know that the bound (B.1) is true for all $b$; in particular it is true with $b$ replaced by $b+1$, after perhaps making the constant bigger. This completes the proof of all cases.
Then the claim follows immediately from Proposition B.1.
We now bound space-time differences of the heat kernels p N n .
Lemma B.4. There exists a constant $C>0$ such that for all $N\ge n\ge1$ and all $x,y,z\ge0$ one has
$$\big|p^N_n(x,y)-p^N_n(x,z)\big|\ \le\ C\,|y-z|\,n^{-1}.$$
Proof. Without loss of generality, assume $y\ge z$. It suffices to prove the bound in the case $y=z+1$; in the general case, one simply adds the bound $y-z$ times. Let us write
$$p^N_n(x,z+1)-p^N_n(x,z)=\frac{\psi(z+1,N-n)}{\psi(x,N)}\big[p^{(1/2)}_n(x,z+1)-p^{(1/2)}_n(x,z)\big]+\frac{p^{(1/2)}_n(x,z)}{\psi(x,N)}\big[\psi(z+1,N-n)-\psi(z,N-n)\big].$$
Let us call the two terms of the last expression $I_1,I_2$ respectively. From here, one considers two cases ($x\le\sqrt N$ and $x\ge\sqrt N$) and bounds $I_1,I_2$ separately each time. The arguments are similar to the ones above, so the proof is not included.
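The reduction to unit increments at the start of the proof is just the telescoping identity

```latex
p^N_n(x,y)-p^N_n(x,z)\;=\;\sum_{w=z}^{y-1}\Big(p^N_n(x,w+1)-p^N_n(x,w)\Big),\qquad y>z,
```

so any uniform bound on unit increments simply gets multiplied by $|y-z|$ in the general case.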
Proposition B.5. Fix $p\ge1$. There exists a constant $C=C(p)>0$ such that for all $x,y\ge0$, all $N\ge n\ge m\ge1$, and all $a\ge0$ one has
$$\sum_{z\ge0}\big|p^N_n(x,z)-p^N_n(y,z)\big|^{2p}e^{az}\ \le\ Ce^{a(x+y)+Ka^2n}\,|x-y|^p\,\big(n^{\frac12-\frac32p}+a^pn^{\frac12-p}\big),\tag{B.2}$$
$$\sum_{z\ge0}\big|p^N_n(x,z)-p^N_m(x,z)\big|^{2p}e^{az}\ \le\ Ce^{2ax+Ka^2n}\,|n-m|^{p/2}\,\big(m^{\frac12-\frac32p}+a^pm^{\frac12-p}\big).\tag{B.3}$$
In the spatial bound (B.2), the constant $C$ grows at worst exponentially in $p$.
We remark that in the special case that $p=1$ and $a\le Cn^{-1/2}$, one has that $n^{\frac12-\frac32p}+a^pn^{\frac12-p}\le Cn^{-1}$, and similarly for $m$. This is the case in which this bound will most often be applied.
Proof. We first start out by proving an auxiliary bound. Using Proposition A.3, couple the canonical processes $S^x$ and $S^y$, distributed according to $\mathbf P^N_x$ and $\mathbf P^N_y$ respectively, so that they are never a distance more than $|y-x|$ apart (i.e., $\sup_{n\le N}|S^x_n-S^y_n|\le|x-y|$ a.s.). Let $\mathbf E$ denote the expectation with respect to the coupled measure. Now, by writing
$$\big(p^N_n(x,z)-p^N_n(y,z)\big)^2=p^N_n(x,z)\big(p^N_n(x,z)-p^N_n(y,z)\big)-p^N_n(y,z)\big(p^N_n(x,z)-p^N_n(y,z)\big),$$
we may write
$$\begin{aligned}\sum_{z\ge0}\big(p^N_n(x,z)-p^N_n(y,z)\big)^2e^{az}&=\mathbf E^N_x\big[(p^N_n(x,S_n)-p^N_n(y,S_n))e^{aS_n}\big]-\mathbf E^N_y\big[(p^N_n(x,S_n)-p^N_n(y,S_n))e^{aS_n}\big]\\&=\mathbf E\big[(p^N_n(x,S^x_n)-p^N_n(y,S^x_n))e^{aS^x_n}\big]-\mathbf E\big[(p^N_n(x,S^y_n)-p^N_n(y,S^y_n))e^{aS^y_n}\big]\\&=\mathbf E\big[(p^N_n(x,S^x_n)-p^N_n(x,S^y_n))e^{aS^x_n}\big]+\mathbf E\big[p^N_n(x,S^y_n)(e^{aS^x_n}-e^{aS^y_n})\big]\\&\quad+\mathbf E\big[(p^N_n(y,S^y_n)-p^N_n(y,S^x_n))e^{aS^y_n}\big]+\mathbf E\big[p^N_n(y,S^x_n)(e^{aS^y_n}-e^{aS^x_n})\big].\end{aligned}$$
Let us refer to the terms in the last expression as $J_1,J_2,J_3,J_4$, respectively. Since $J_1$ and $J_3$ occupy symmetric roles, it suffices to bound $J_1$; the analogous bound for $J_3$ then automatically follows. The same remark applies to $J_2$ and $J_4$. With this understanding, we will only prove the desired bound for $J_1$ and $J_2$. Let us start by bounding $J_1$. By Lemma B.4, we see that $|p^N_n(x,S^x_n)-p^N_n(x,S^y_n)|\le C|S^x_n-S^y_n|\,n^{-1}\le C|x-y|\,n^{-1}$, where we applied the coupling in the second inequality. Applying the definition of $J_1$ and then Proposition B.1, we therefore obtain that $J_1\le C|x-y|\,n^{-1}e^{ax+Ka^2n}$. This already gives the desired bound on $J_1$. As discussed, the analogous bound on $J_3$ is obtained in an identical fashion, but one will get $e^{ay}$ instead of $e^{ax}$. The final bound on $J_1+J_3$ is then obtained by noting that $e^{ax}+e^{ay}\le2e^{a(x+y)}$. Note that the constant $C$ does not depend on $p$, which also proves the final sentence given in the theorem statement after noting that $(n^{-1}+an^{-1/2})^p\le2^p(n^{-p}+a^pn^{-p/2})$.
We move on to the temporal estimate (B.3). The main idea is to use Jensen's inequality together with the spatial estimate. Specifically, we start off by expressing the temporal difference as a spatial average via the Markov property. All that is left to do is to show that one has
$$\mathbf E^N_x\big[|S_{n-m}-x|^p\,e^{aS_{n-m}}\big]\ \le\ Ce^{ax+Ka^2(n-m)}\,|n-m|^{p/2}.$$
This is an easy consequence of the concentration theorem. Indeed, for any $k\le N$ one may apply the Cauchy-Schwarz inequality, and then the claim follows immediately from Proposition B.1 and Corollary A.9.
Next we prove a strong convergence result for the discrete kernels p N n to the continuous ones P T t from Definition 3.4, from which we can easily obtain estimates for the continuous kernels as well. In the case of Brownian meander at terminal time (X = 0 and t = T ), the following result is weaker than the local convergence result of [Car05], but we actually need it for all (t, T ) so we give an original proof.
Proposition B.6. For $n\in\mathbb N$, define the rescaled kernels $P^n(t,T;X,Y):=(2n)^{1/2}\,p^{2nT}_{2nt}\big((2n)^{1/2}X,(2n)^{1/2}Y\big)$, with the parity conventions explained below. Then for each fixed $X,T,t\ge0$, the map $Y\mapsto P^n(t,T;X,Y)$ converges pointwise and in $L^p(\mathbb R_+,e^{aY}dY)$ to $P^T_t(X,Y)$ for all $p\ge1$ and $a\ge0$ (as $n\to\infty$). Furthermore for all $X,T\ge0$, the map $(t,Y)\mapsto P^n(t,T;X,Y)$ converges pointwise and in $L^p(dt\otimes e^{aY}dY)$ to $P^T_t(X,Y)$ for all $p\in[1,3)$ and $a\ge0$ (as $n\to\infty$). From now on, we will abbreviate quantities such as $p^{2\lfloor Tn\rfloor}_{2\lfloor tn\rfloor}\big(2\lfloor n^{1/2}X/\sqrt2\rfloor,2\lfloor n^{1/2}Y/\sqrt2\rfloor\big)$ by just writing $p^{2nT}_{2nt}\big((2n)^{1/2}X,(2n)^{1/2}Y\big)$ instead. We hope that this abuse of notation will not cause any confusion, but in reality one should keep in mind that all quantities are only defined with even integers. The reason for this is the periodicity of the simple random walk: $p^N_n(x,y)$ vanishes if $n$ and $x-y$ have different parity. If it were not for this parity consideration, we could take the limit of the simpler quantity $n^{1/2}\,p^{\lfloor nT\rfloor}_{\lfloor nt\rfloor}\big(\lfloor n^{1/2}X\rfloor,\lfloor n^{1/2}Y\rfloor\big)$.
Proof. First, let us prove pointwise convergence. Letting $p_n$ denote the standard heat kernel on all of $\mathbb Z$, we recall that
$$p^{(1/2)}_n(x,y)=p_n(x-y)-p_n(x+y+2).$$
Let $F_n$ denote the cdf associated to $p_n$, so that $\psi(x,n)=F_n(x+1)-F_n(-x-1)=F_n(x)+F_n(x+1)-1$. By uniformity of convergence of cdf's in the central limit theorem, we know that $F_n(n^{1/2}x)$ converges uniformly (on $\mathbb R$) to $\Phi(x)$, where $\Phi$ is the cdf of a standard normal. From this it is clear that $\psi(n^{1/2}x,n)=F_n(n^{1/2}x)+F_n(n^{1/2}x+1)-1$ converges uniformly to $2\Phi(x)-1$ (because $\Phi$ has no atoms). In turn, one deduces that $\psi((2n)^{1/2}X,2nT)=\psi((2nT)^{1/2}X/\sqrt T,2nT)$ converges to $2\Phi(X/\sqrt T)-1$. From here, completing the proof of pointwise convergence is easy using the local central limit theorem (though notice that $X=0$ requires a separate proof) as done in earlier proofs.
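Both the reflection formula and the cdf identity for $\psi$ can be confirmed exactly in Python (helper names ours; $\psi$ is realized here by summing the reflection-formula kernel over the nonnegative sites):

```python
from fractions import Fraction
from math import comb

def p_srw(n, z):
    """P(simple random walk at time n equals z), started from 0."""
    if (n + z) % 2 or abs(z) > n:
        return Fraction(0)
    return Fraction(comb(n, (n + z) // 2), 2 ** n)

def F(n, x):
    """cdf of the simple random walk at time n."""
    return sum(p_srw(n, z) for z in range(-n, x + 1))

def psi_direct(x, N):
    """P_x(stay >= 0 for N steps), by summing the reflection-formula kernel."""
    return sum(p_srw(N, y - x) - p_srw(N, y + x + 2) for y in range(N + x + 1))

# cdf identity: psi(x, n) = F_n(x) + F_n(x+1) - 1
for n in range(1, 14):
    for x in range(6):
        assert psi_direct(x, n) == F(n, x) + F(n, x + 1) - 1
```

The identity follows from $\sum_{y\ge0}p_n(y-x)=1-F_n(-x-1)$, $\sum_{y\ge0}p_n(y+x+2)=1-F_n(x+1)$, and the symmetry $F_n(-x-1)=1-F_n(x)$.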
Now we will fix $t,T,X$, and we will address convergence in $L^p(\mathbb R_+,e^{aY}dY)$. The main idea is simply to use dominated convergence in conjunction with Lemma B.2. Specifically, that lemma (applied with $b=2at^{1/2}/p$) tells us that
$$P^n(t,T;X,Y)\ \le\ Ct^{-1/2}e^{-2a|X-Y|/p}.\tag{B.5}$$
Here $C$ is a constant independent of $Y$ (but it will depend on $t,a,p$). Letting $p\ge1$, it is then clear from (B.5) that for fixed $X,T,t$, the sequence of maps $Y\mapsto P^n(t,T;X,Y)^pe^{aY}$ is dominated (uniformly in $n$) by a function that is integrable on $\mathbb R_+$. This is enough to guarantee by dominated convergence that $\int_{\mathbb R_+}|P^n(t,T;X,Y)-P^T_t(X,Y)|^pe^{aY}dY\to0$.
Similarly, one uses (B.5) in conjunction with the dominated convergence theorem to obtain convergence in $L^p(\mathbb R_+\times\mathbb R_+,dt\otimes e^{aY}dY)$ of $(Y,t)\mapsto P^n(t,T;X,Y)$. This argument only works for $p\in[1,3)$, since the singularity $\int_{\mathbb R_+}t^{-p/2}e^{-pt^{-1/2}|X-Y|}dY\sim t^{-(p-1)/2}$ fails to be integrable near $t=0$ if $p\ge3$.
Proof. The claims follow from the $L^1$ and $L^2$ convergence in Proposition B.6. More specifically, (B.6) follows from Proposition B.1 and convergence in $L^1(\mathbb R_+,e^{aY}dY)$. Next, (B.7) follows from Proposition B.3 and convergence in $L^2(e^{aY}dY)$. Expressions (B.8) and (B.9) with $\theta=1/2$ follow immediately from Proposition B.5 and convergence in $L^2(e^{aY}dY)$. The appearance of the terms $Ka^2n$ in the exponent will be absorbed into the constant, because $a$ effectively becomes replaced by $n^{-1/2}a$. The $\theta=0$ cases of (B.8) and (B.9) follow immediately from (B.7) and the fact that $e^{aX}+e^{aY}\le2e^{a(X+Y)}$. The proofs for general $\theta$ then follow easily by geometric interpolation (i.e., $\min\{a,b\}\le a^\theta b^{1-\theta}$ for all $a,b\ge0$ and all $\theta\in[0,1]$).