Semigroups for One-Dimensional Schr\"odinger Operators with Multiplicative Gaussian Noise

Let $ H:=-\tfrac12\Delta+V$ be a one-dimensional continuum Schr\"odinger operator. Consider ${\hat H}:= H+\xi$, where $\xi$ is a translation invariant Gaussian noise. Under some assumptions on $\xi$, we prove that if $V$ is locally integrable, bounded below, and grows faster than $\log$ at infinity, then the semigroup $\mathrm e^{-t {\hat H}}$ is trace class and admits a probabilistic representation via a Feynman-Kac formula. Our result applies to operators acting on the whole line $\mathbb R$, the half line $(0,\infty)$, or a bounded interval $(0,b)$, with a variety of boundary conditions. Our method of proof consists of a comprehensive generalization of techniques recently developed in the random matrix theory literature to tackle this problem in the special case where ${\hat H}$ is the stochastic Airy operator.


Introduction
Let I ⊂ R be an open interval (possibly unbounded) and V : I → R be a function. Let H := −½∆ + V denote a Schrödinger operator with potential V acting on functions f : I → R, with prescribed boundary conditions when I has a boundary. In this paper, we are interested in random operators of the form

Ĥ := H + ξ, (1.1)

where ξ is a stationary Gaussian noise on R. Informally, we think of ξ as a centered Gaussian process on R with a covariance of the form E[ξ(x)ξ(y)] = γ(x − y), where γ is an even almost-everywhere-defined function or Schwartz distribution. In many cases that we consider, γ is not an actual function, and thus ξ cannot be defined as a random function on R; in such cases ξ can be defined rigorously as a random Schwartz distribution, i.e., a centered Gaussian process on an appropriate function space with covariance

E[ξ(f)ξ(g)] = ∫∫_{R²} f(x)γ(x − y)g(y) dxdy,  f, g : R → R.
When V is sufficiently regular, the semigroup e^{−tH} admits the classical Feynman-Kac representation

e^{−tH}f(x) = E_x[exp(−∫_0^t V(B(s)) ds) f(B(t))], (1.2)

where B is a Brownian motion and E_x signifies that we are taking the expected value with respect to B conditioned on the starting point B(0) = x. Apart from the obvious benefit of making Schrödinger semigroups amenable to probabilistic methods, we note that the Feynman-Kac formula can in fact form the basis of the definition of H itself, as done, for instance, in [30]. Our purpose in this paper is to lay out the foundations of a general semigroup theory (or Feynman-Kac formulas) for random Schrödinger operators of the form (1.1). We note that, since we consider very irregular noises (i.e., in general ξ is not a proper function that can be evaluated at points in R), this undertaking is not a direct application or a trivial extension of the classical theory; see Section 1.1 for more details. As a first step in this program, we show that a variety of tools recently developed in the random matrix theory literature (e.g., [3,20,22,28,32,36]) to tackle special cases of this problem can be suitably extended to a rather general setting. The main restriction of our assumptions is that we consider cases where the semigroup e^{−tĤ} is trace class, which implies in particular that Ĥ must have a purely discrete spectrum. This paper is organized as follows. In the remainder of this introduction, we present a brief outline of our main results and discuss some motivations and applications. In Section 2, we give a precise statement of our results (our main result is Theorem 2.24, and our second main result is Proposition 2.10). In Section 3, we provide an outline of the proof of our main results. Finally, in Sections 4 and 5, we go over the technical details of the proof of our results.
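The Feynman-Kac representation (1.2) can be tested numerically. The following sketch (ours, purely illustrative; the potential V(x) = x²/2, the test function f(x) = e^{−x²}, and all discretization parameters are arbitrary choices) estimates e^{−tH}f(x) by Monte Carlo over discretized Brownian paths and compares the result with a finite-difference diagonalization of H:

```python
import numpy as np

rng = np.random.default_rng(0)
V = lambda x: 0.5 * x**2          # illustrative potential
f = lambda x: np.exp(-x**2)       # illustrative test function
t, x0 = 0.5, 0.0

# --- Monte Carlo via Feynman-Kac: E_x[exp(-int_0^t V(B_s) ds) f(B_t)] ---
n_paths, n_steps = 20_000, 200
dt = t / n_steps
incs = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
B = np.hstack([np.full((n_paths, 1), x0), x0 + np.cumsum(incs, axis=1)])
# trapezoidal approximation of int_0^t V(B(s)) ds along each path
action = dt * (0.5 * V(B[:, 0]) + V(B[:, 1:-1]).sum(axis=1) + 0.5 * V(B[:, -1]))
mc_val = np.mean(np.exp(-action) * f(B[:, -1]))

# --- Reference: diagonalize H = -(1/2) d^2/dx^2 + V on a grid ---
n = 321
grid = np.linspace(-8.0, 8.0, n)
h = grid[1] - grid[0]
D2 = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2
w, U = np.linalg.eigh(-0.5 * D2 + np.diag(V(grid)))
fd_val = (U @ (np.exp(-t * w) * (U.T @ f(grid))))[n // 2]  # (e^{-tH} f)(0)

print(mc_val, fd_val)
```

The two estimates should agree up to Monte Carlo noise and discretization bias (roughly 1% here).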

Overview of Results
As mentioned earlier in this introduction, much of the challenge involved in our program comes from the fact that, in general, Gaussian noises are Schwartz distributions. This creates two main technical obstacles.
The first obstacle is that it is not immediately obvious how to define the operator Ĥ.
Indeed, if we interpret ξ as being part of the potential of Ĥ, then the action

Ĥf "=" −½f″ + (V + ξ)f

of the operator on a function f includes the "pointwise product" ξf, which is not well defined if ξ cannot be evaluated at single points in R. The second obstacle comes from the definition of e^{−tĤ}. Arguably, the most natural guess for this semigroup would be to add ξ to the potential in the usual Feynman-Kac formula (1.2), which yields

e^{−tĤ}f(x) "=" E_x[exp(−∫_0^t (V + ξ)(B(s)) ds) f(B(t))]. (1.3)

However, this again requires the ability to evaluate ξ at every point.
The key to overcoming these obstacles is to interpret ξ as the distributional derivative of an actual Gaussian process. More precisely, let Ξ be the Gaussian process on R such that ξ = Ξ′ in the sense of distributions, and let Ĥ be defined through its sesquilinear form (1.5). We note that this type of definition for Ĥ has previously appeared in the literature (e.g., [3,17,32,36]) for various potentials V on the half line I = (0, ∞) as well as V = 0 on a bounded interval I = (0, L) (L > 0). We also note an alternative approach outlined by Bloemendal in [2, Appendix A] that allows one (in principle) to recast Ĥ as a classical Sturm-Liouville operator (1.6), involving the antiderivative ∫_0^x Ξ(y) dy, through a suitable Hilbert space isomorphism. Our first result (namely, Proposition 2.10) is an extension of these statements: We provide a very succinct proof of the fact that, under fairly general conditions on Ξ and V, the form (1.5) corresponds to a unique self-adjoint operator with compact resolvent, including when I is the whole real line or a bounded interval with a nonzero potential.
The interpretation ξ = Ξ′ also leads to a natural candidate for the semigroup generated by Ĥ: Let L_t^a(B) (a ∈ R, t ≥ 0) be the local time process of the Brownian motion B, so that for any measurable function f, we have

∫_0^t f(B(s)) ds = ∫_R f(a) L_t^a(B) da.

Assuming a stochastic integral with respect to Ξ can meaningfully be defined, we may then interpret the problematic term in e^{−tĤ}'s intuitive derivation (1.3) as

∫_0^t (V + ξ)(B(s)) ds "=" ∫_R L_t^a(B) dQ(a),

where Q is the process dQ(x) = V(x)dx + dΞ(x), which we assume to be independent of B. In the case where I = R, for example, this suggests that

e^{−tĤ}f(x) = E_x[exp(−∫_R L_t^a(B) dQ(a)) f(B(t))], (1.7)

where E_x now denotes the conditional expectation of B|B(0) = x given Ξ. This type of random semigroup has appeared in [20,22] in the special case where I is the positive half line (0, ∞), V(x) = x, and Ξ is a Brownian motion (so that ξ is a Gaussian white noise; see Example 2.28 for more details). Our second and main result (namely, Theorem 2.24) provides general sufficient conditions under which a Feynman-Kac formula of the form (1.7) holds (we refer to (2.14) for a statement of our Feynman-Kac formula in full generality), so that the semigroup e^{−tĤ} admits an explicit probabilistic representation of the form (1.7). We expect this connection to be fruitful in two directions. On the one hand, a good understanding of Ĥ's spectrum could be used to study the geometric properties of the function u(t, x) := e^{−tĤ}f(x), which we may interpret as the solution of the SPDE with multiplicative noise

∂_t u = −(Hu + ξu), u(0, x) = f(x).
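The occupation-time identity behind this local-time rewriting is easy to check numerically. The sketch below (our illustration, not the paper's construction) approximates the local time a ↦ L_t^a(B) of a discretized Brownian path by an occupation histogram and verifies ∫_0^t f(B(s)) ds = ∫_R f(a) L_t^a(B) da for the arbitrary choice f = cos:

```python
import numpy as np

rng = np.random.default_rng(1)
t, n_steps = 1.0, 200_000
dt = t / n_steps
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])

# occupation-measure approximation of the local time a -> L_t^a(B)
da = 0.05
edges = np.arange(-8.0, 8.0 + da, da)
occ, _ = np.histogram(B[:-1], bins=edges)
L = occ * dt / da                     # approximate local time on each bin
a = 0.5 * (edges[:-1] + edges[1:])    # bin midpoints

f = np.cos
lhs = dt * f(B[:-1]).sum()            # int_0^t f(B(s)) ds  (Riemann sum)
rhs = da * (f(a) * L).sum()           # int_R f(a) L_t^a(B) da

print(lhs, rhs)  # the two integrals agree up to the bin width
```

The same histogram of local times is what the stochastic integral ∫ L_t^a(B) dΞ(a) is tested against once Ξ replaces Lebesgue measure.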
We refer to Section 1.2.1 below for more motivation in this direction.
On the other hand, the Feynman-Kac formula can be used to study the properties of the eigenvalues and eigenfunctions of Ĥ (we refer to [41] for classical examples of this involving the deterministic operator H). In particular, our Feynman-Kac formula provides a means of computing the "Laplace transforms"

Tr[e^{−t_1Ĥ}], …, Tr[e^{−t_ℓĤ}], t_1, …, t_ℓ > 0, (1.8)

which characterize the distribution of Ĥ's eigenvalues. In Sections 1.2.2 and 1.2.3, we discuss how the ability to compute (1.8) has led to applications in the study of operator limits of random matrices and the occurrence of number rigidity in the spectrum of general random Schrödinger operators.

The Anderson Hamiltonian and Parabolic Anderson Model
The earliest occurrences of an operator of the form (1.1) in the literature appear to be [16,24]. A central example in this line of work is the stochastic Airy operator, defined as

A_β := −∆ + x + ξ_β, β > 0, (1.9)

where ξ_β is a Gaussian white noise with variance 4/β, and A_β acts on I = (0, ∞) with a Dirichlet or Robin boundary condition at the origin. The interest of studying this operator comes from the fact that its spectrum captures the asymptotic edge fluctuations of a large class of random matrices and β-ensembles. This was first observed by Edelman and Sutton in [15] and is based on the tridiagonal models of Dumitriu and Edelman [14]. The connection was later rigorously established by Ramírez, Rider, and Virág [36], and these developments gave rise to a now very extensive literature concerning operator limits of random matrices, in which general operators of the form (1.1) arise as the limits of a large class of random tridiagonal matrices. We refer to [46] and references therein for a somewhat recent survey.
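The tridiagonal models of Dumitriu and Edelman mentioned above are easy to experiment with. The sketch below (our illustration, not from the paper) uses the β-Hermite normalization in which, at β = 2, the diagonal entries are N(0, 1), the off-diagonal entries are χ_{2k}/√2, and the eigenvalue density matches that of a GUE matrix with weight exp(−Tr A²/2); it checks that the tridiagonal model's largest eigenvalue statistically matches the dense GUE one near the spectral edge 2√n:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_samples = 100, 100

def lmax_tridiagonal(n):
    # beta-Hermite tridiagonal model at beta = 2:
    # diagonal ~ N(0, 1), off-diagonal ~ chi_{2k}/sqrt(2), k = n-1, ..., 1
    d = rng.normal(0.0, 1.0, n)
    e = np.sqrt(rng.chisquare(2 * np.arange(n - 1, 0, -1))) / np.sqrt(2.0)
    T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
    return np.linalg.eigvalsh(T)[-1]

def lmax_gue(n):
    # dense GUE with density prop. to exp(-Tr A^2 / 2): same eigenvalue law
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    A = (M + M.conj().T) / 2
    return np.linalg.eigvalsh(A)[-1]

tri = np.mean([lmax_tridiagonal(n) for _ in range(n_samples)])
gue = np.mean([lmax_gue(n) for _ in range(n_samples)])
print(tri, gue)  # both near the spectral edge 2*sqrt(n) = 20
```

The fluctuations of the largest eigenvalue around 2√n, on scale n^{−1/6}, are precisely what the stochastic Airy operator (1.9) describes in the limit.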
In [22], Gorin and Shkolnikov introduced an alternative method of studying operator limits of random matrices by proving that large powers of generalized Gaussian β-ensembles admit an operator limit of the form (1.7) (see [22, (2.4)]). These results were later extended to rank-one additive perturbations of Gaussian β-ensembles in [20]. In this context, our paper can be viewed as providing a streamlined and unified treatment of trace class semigroups generated by general operators of the form (1.1). In [18], this more general setting is used to extend the operator limit results in [20,22] to much more general random tridiagonal matrices, including some non-symmetric matrices that could not be treated by any previous method.

Number Rigidity in Random Schrödinger Operators
A point process is number rigid if the number of points inside any bounded set is determined by the configuration of points outside that set. The earliest proof of number rigidity appears to be the work of Aizenman and Martin in [1]. More recently, there has been a notable increase of interest in this property stemming from the work of Ghosh and Peres [21]. Therein, it is proved that the zero set of the planar Gaussian analytic function and the Ginibre process are number rigid. Since then, number rigidity has been shown to be connected to several other interesting properties of point processes (see, e.g., [19,Section 1.2] and references therein).
Due to their ubiquity in mathematical physics, there is a strong incentive to understand any structure that appears in the eigenvalues of random Schrödinger operators, including number rigidity. Until recently, the only random Schrödinger operator whose eigenvalue point process was known to be number rigid was the stochastic Airy operator A_β in (1.9) with β = 2 [4], thanks to the special algebraic structure present in the eigenvalues of this particular object (i.e., A_2's eigenvalues generate the determinantal Airy-2 point process). In [19], we use the Feynman-Kac formula proved in this paper to show that number rigidity occurs in the spectrum of Ĥ under very general assumptions on the domain I on which the operator is defined, the boundary conditions on that domain, the regularity of the potential V, and the type of noise, thus providing the first method capable of proving rigidity for general random Schrödinger operators.

Main Results
In this section, we provide detailed statements of our main results. Throughout this paper, we make the following assumption regarding the interval I on which the operator is defined and its boundary conditions. Assumption 2.1. We consider three different types of domains: The full space I = R (Case 1), the positive half line I = (0, ∞) (Case 2), and the bounded interval I = (0, b) for some b > 0 (Case 3).
In Case 2, we consider Dirichlet and Robin boundary conditions at the origin, labeled Cases 2-D and 2-R. In Case 3, we consider Dirichlet, Robin, and mixed boundary conditions at the endpoints 0 and b, where α, β ∈ R are fixed Robin parameters as in (2.1) and (2.2). Throughout the paper, we make the following assumption on the potential V. Assumption 2.3. Suppose that V : I → R is nonnegative and locally integrable on I's closure. If I is unbounded, then we also assume that V grows super-logarithmically at infinity, namely

lim_{|x|→∞, x∈I} V(x)/log|x| = ∞. (2.3)

Remark 2.4.
As is usual in the theory of Schrödinger operators and semigroups, the assumption that V ≥ 0 is made for technical ease, and all of our results also apply in the case where V is merely bounded from below on I.

Self-Adjoint Operator
Our first result concerns the realization of Ĥ as a self-adjoint operator. As explained in the passage following equation (1.5), this is done through a sesquilinear form. We begin by introducing the sesquilinear form associated with H: Definition 2.5. Let L² = L²(I) denote the set of square-integrable functions (equivalence classes up to measure zero) on I, with its usual inner product ⟨·, ·⟩ and norm ‖·‖₂. Let AC = AC(I) denote the set of functions that are locally absolutely continuous on I's closure, and let H¹_V denote the associated form domain, on which we define the inner product ⟨·, ·⟩_* and norm ‖·‖_*. Substituting g′(0) = −αg(0) and −g′(b) = −βg(b) in the integration by parts then yields E(f, g).
We now define the form associated with Ĥ as a random perturbation of E coming from the noise. We assume that the Gaussian process Ξ driving the noise is as follows: Assumption 2.7. Ξ : R → R is a centered Gaussian process such that: 1. Almost surely, Ξ(0) = 0 and Ξ has continuous sample paths. 2. Ξ has stationary increments; that is, for every x_1, …, x_ℓ, y_1, …, y_ℓ ∈ R (ℓ ∈ N) such that x_i ≤ y_i for all 1 ≤ i ≤ ℓ, and every h ∈ R, the increments (Ξ(y_1) − Ξ(x_1), …, Ξ(y_ℓ) − Ξ(x_ℓ)) have the same joint distribution as the shifted increments (Ξ(y_1 + h) − Ξ(x_1 + h), …, Ξ(y_ℓ + h) − Ξ(x_ℓ + h)). We may now define ξ as the distributional derivative of Ξ:

ξ(f) := −∫_I f′(x)Ξ(x) dx, f ∈ C_0^∞(I)

(note that we omit the boundary term −f(0)Ξ(0) in the formal integration by parts for Cases 2 and 3, since we assume that Ξ(0) = 0).
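For intuition, this distributional definition can be checked numerically in the simplest case where Ξ is a standard Brownian motion (so ξ is white noise): integration by parts gives ξ(f) = −∫ f′(x)Ξ(x) dx = ∫ f dΞ, which should have variance ‖f‖₂². The sketch below (ours; the compactly supported C¹ function f(x) = cos²(πx/2) on [−1, 1] is an arbitrary test function with ‖f‖₂² = 3/4) verifies this by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(3)
n_pts, n_samples = 1001, 4000
x = np.linspace(-1.0, 1.0, n_pts)
h = x[1] - x[0]

# f(x) = cos^2(pi x / 2) on [-1, 1]; its derivative:
fprime = -(np.pi / 2) * np.sin(np.pi * x)

# sample paths of Xi = Brownian motion on [-1, 1], pinned to 0 at x = -1
# (the additive constant is irrelevant since int f' dx = 0)
incs = rng.normal(0.0, np.sqrt(h), (n_samples, n_pts - 1))
Xi = np.hstack([np.zeros((n_samples, 1)), np.cumsum(incs, axis=1)])

xi_f = -h * (fprime * Xi).sum(axis=1)   # xi(f) = -int f'(x) Xi(x) dx
var_hat = xi_f.var()
print(var_hat)  # close to ||f||_2^2 = 0.75
```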
Hence f ↦ ξ(f²) extends uniquely to a continuous quadratic form on H¹_V that satisfies (2.5), which we can then also extend to a sesquilinear form (f, g) ↦ ξ(fg) by the polarization identity. In particular, almost surely, we can define the sesquilinear form

Ê(f, g) := E(f, g) + ξ(fg)

on the same form domain as E, that is, for all f, g ∈ D(E). An immediate corollary of Proposition 2.10 is the ability to study the spectrum of Ĥ using the variational characterization coming from the form Ê: Definition 2.12. Let A be a semi-bounded self-adjoint operator with discrete spectrum. We use λ_1(A) ≤ λ_2(A) ≤ ⋯ to denote the eigenvalues of A in increasing order, and we use ψ_1(A), ψ_2(A), … to denote the associated eigenfunctions.
Corollary 2.13. Almost surely: 1. λ_k(Ĥ) → +∞ as k → ∞; 2. the ψ_k(Ĥ) form an orthonormal basis of L²; and 3. for every k ∈ N,

λ_k(Ĥ) = inf{ Ê(f, f)/‖f‖₂² : 0 ≠ f ∈ D(E), ⟨f, ψ_i(Ĥ)⟩ = 0 for all i < k },

with ψ_k(Ĥ) being the minimizer of the above infimum with unit L² norm.

Semigroup
We now state our main result regarding the Feynman-Kac formula for the semigroup generated by Ĥ. Thanks to Proposition 2.10 and Corollary 2.13, we know that under Assumptions 2.1, 2.3, and 2.7, the semigroup of Ĥ is the family of bounded self-adjoint operators with spectral expansions

e^{−tĤ}f = Σ_{k∈N} e^{−tλ_k(Ĥ)} ⟨ψ_k(Ĥ), f⟩ ψ_k(Ĥ), f ∈ L², t > 0. (2.7)

In order to state our Feynman-Kac formula for e^{−tĤ}, we introduce some notation and further assumptions.

Preliminary Definitions
We begin with some preliminary definitions regarding the covariance of the noise ξ and the stochastic processes required to define our Feynman-Kac kernels.
Definition 2.14 (Covariance). Let us denote by PC_c = PC_c(I) the set of functions f : I → R that are càdlàg and compactly supported on I's closure. We say that f ∈ PC_c is a step function if it can be written as

f = Σ_{k=1}^{ℓ} a_k 1_{[x_{k−1}, x_k)} (2.8)

for some ℓ ∈ N, a_1, …, a_ℓ ∈ R, and x_0 < x_1 < ⋯ < x_ℓ. To simplify forthcoming definitions and statements, we often extend the domain of f ∈ PC_c to R, with the convention that f(x) = 0 for all x outside of I's closure (noting, however, that f's extension need not be càdlàg on all of R). Let γ : PC_c → R be an even almost-everywhere-defined function or Schwartz distribution (even in the sense that ⟨f, γ⟩ = ⟨rf, γ⟩ for every f, where rf(x) := f(−x) denotes the reflection map), such that the bilinear map

⟨f, g⟩_γ := ∫∫_{R²} f(x)γ(x − y)g(y) dxdy (2.9)

is a semi-inner-product. We denote the seminorm induced by (2.9) as ‖f‖_γ := ⟨f, f⟩_γ^{1/2}. Remark 2.15. If γ is not an almost-everywhere-defined function, then the integral over γ(x − y) in (2.9) may not be well defined. In such cases, we rigorously interpret (2.9) as ⟨f ∗ rg, γ⟩ = ⟨rf ∗ g, γ⟩.
Definition 2.16 (Stochastic Processes, etc.). We use B to denote a standard Brownian motion on R, X to denote a reflected standard Brownian motion on (0, ∞), and Y to denote a reflected standard Brownian motion on (0, b). Let Z = B, X, or Y. For every t > 0 and x, y ∈ I, we define the conditioned processes Z^x := (Z | Z(0) = x) and Z_t^{x,y} := (Z | Z(0) = x, Z(t) = y). We denote the Gaussian kernel by

g_t(x, y) := e^{−(x−y)²/2t}/√(2πt),

and we denote the transition kernels of B, X, and Y as Π_B, Π_X, and Π_Y, respectively. We let L^a_{[u,v]}(Z) (a ∈ I) denote the continuous version of the local time of Z (or its conditioned versions) on [u, v], so that

∫_u^v f(Z(s)) ds = ∫_I f(a) L^a_{[u,v]}(Z) da

for any measurable function f : I → R. In the special case where u = 0 and v = t, we use the shorthand L_t(Z) := L_{[0,t]}(Z). When there may be ambiguity regarding which conditioning of Z is under consideration, we use L_{[u,v]}(Z^x) and L_{[u,v]}(Z_t^{x,y}). As a matter of convention, if Z = X or Y, then we distinguish the boundary local time from the above, which we define as

L_t^c(Z) := lim_{ε→0} (1/ε) ∫_0^t 1_{{|Z(s) − c| < ε}} ds, c ∈ ∂I.
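As a quick illustration of these definitions (ours, not from the paper), the transition kernel of reflected Brownian motion on (0, ∞) is the Gaussian kernel symmetrized across the boundary, Π_X(t; x, y) = g_t(x, y) + g_t(x, −y), a fact one can confirm by simulating X(t) as |x + B(t)|:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(4)
x0, t = 0.5, 0.3
n_samples = 200_000

# reflected Brownian motion at time t, started at x0: X(t) = |x0 + B(t)|
X_t = np.abs(x0 + rng.normal(0.0, np.sqrt(t), n_samples))

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# P(X(t) in [a, b]) from the reflected kernel g_t(x, y) + g_t(x, -y)
a, b = 0.9, 1.1
p_exact = (Phi((b - x0) / sqrt(t)) - Phi((a - x0) / sqrt(t))
           + Phi((b + x0) / sqrt(t)) - Phi((a + x0) / sqrt(t)))
p_mc = np.mean((X_t >= a) & (X_t < b))
print(p_mc, p_exact)
```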

Noise
We now articulate the assumptions that the noise ξ must satisfy for our Feynman-Kac formula to hold. We recall from the introduction that we think of ξ as a centered Gaussian process with covariance E[ξ(x)ξ(y)] = γ(x − y), with γ as in Definition 2.14. Interpreting ξ(f) "=" ∫_R f(x)ξ(x) dx for a function f, this suggests that, as a random Schwartz distribution, ξ is a centered Gaussian process with covariance E[ξ(f)ξ(g)] = ⟨f, g⟩_γ. In similar fashion to Assumption 2.7, we want to interpret ξ as the distributional derivative of some continuous process Ξ, corresponding to (1.4). If ξ's covariance is given by the semi-inner-product ⟨·, ·⟩_γ, then this suggests that Ξ's covariance is equal to

E[Ξ(x)Ξ(y)] = ⟨1_{[0,x)}, 1_{[0,y)}⟩_γ, x, y ≥ 0. (2.11)

This leads us to the following assumption: Assumption 2.18. The centered Gaussian process Ξ : R → R satisfies Assumption 2.7. Moreover, there exists a γ : PC_c → R as in Definition 2.14 that satisfies the following conditions.

(2.12)

where ‖f‖_q := (∫_R |f(x)|^q dx)^{1/q} denotes the usual L^q norm.

Then, for every f ∈ PC_c, we define

ξ(f) := ∫_R f(x) dΞ(x), (2.13)

where dΞ denotes stochastic integration with respect to Ξ, interpreted in the pathwise sense of Karandikar [26] (see Section 3.2.1 for the details of this construction).

Remark 2.19.
Though this is not immediately obvious from the above definition, the pathwise stochastic integral (2.13) actually coincides with (2.4) for every f ∈ C_0^∞. We note, however, that the extension of ξ to PC_c need not be linear on all of PC_c, and thus may not be a Schwartz distribution in the proper sense on that larger domain. Our interest in defining the stochastic integral in a pathwise sense is that it allows us to construct ξ as a random map from PC_c to R that satisfies the following properties.
1. We can consider the conditional distribution of ξ(L_t(Z)) given a fixed realization of Ξ, assuming independence between Z and Ξ.
In fact, any other pathwise stochastic integral that is an extension of (2.4) and satisfies these two properties leads to the same statement in Theorem 2.24 below. We point to Section 3.2.1 and Appendix A for the details of the proof that ξ has these two properties, and to Section 3.2.2 for an explanation of why any stochastic integral having these two properties gives rise to our main result.
Remark 2.20. The requirement that Ξ be a continuous process with stationary increments in Assumption 2.18 is redundant: Firstly, the covariance (2.11) implies that Ξ(x) − Ξ(y) corresponds to ξ(1_{[x,y)}), which is stationary since the semi-inner-product ⟨·, ·⟩_γ is translation invariant. Secondly, if we construct Ξ using abstract existence theorems for Gaussian processes (which is possible since ⟨·, ·⟩_γ is a semi-inner-product), then the assumption (2.12) implies that Ξ has a continuous version by Kolmogorov's continuity theorem (see Section 3.3 for details). We nevertheless state these properties as assumptions for clarity.

Feynman-Kac Kernels
We now introduce the Feynman-Kac kernels that describe Ĥ's semigroup.
Definition 2.21. In Cases 2 and 3, let us define the boundary-term quantities involving α, β ∈ R as in (2.1) and (2.2). For every t > 0, we define the (random) kernel K̂(t) : I² → R, where we assume that Ξ is independent of B, X, or Y, and E_t^{x,y} denotes the expected value conditional on Ξ.
In the above definition, we use the convention ∞ · 0 = 0 as well as e^{−∞} = 0 for the boundary coefficient at any c ∈ ∂I. Thus, if we let τ_c(Z) := inf{t ≥ 0 : Z(t) = c} denote the first hitting time of c, then we can interpret e^{−∞·L_t^c(Z)} = 1_{{τ_c(Z) > t}}. In particular, if we remove the term ξ(L_t(Z)) from the kernel (2.14), then we recover the classical Feynman-Kac formula for the semigroup of H. See Section 5.1 for more details.
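The identity e^{−∞·L_t^c(Z)} = 1_{{τ_c(Z) > t}} says that an infinite Robin coefficient kills every path that touches the boundary, which is exactly the Dirichlet case. A small simulation (ours, purely illustrative) checks the resulting survival probability for Brownian motion against the reflection-principle formula P(τ_0 > t | B(0) = x) = 2Φ(x/√t) − 1:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(5)
x0, t = 1.0, 1.0
n_paths, n_steps = 10_000, 1000
dt = t / n_steps

incs = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
paths = x0 + np.cumsum(incs, axis=1)
survived = paths.min(axis=1) > 0.0        # paths that never hit the boundary 0
p_mc = survived.mean()

Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
p_exact = 2.0 * Phi(x0 / sqrt(t)) - 1.0   # ~ 0.6827 for x0 = t = 1
print(p_mc, p_exact)  # MC slightly overestimates: discrete paths miss crossings
```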
Notation 2.23. Given a kernel J : I² → R (such as K̂(t)), we also use J to denote the integral operator induced by the kernel, that is,

Jf(x) := ∫_I J(x, y)f(y) dy, f ∈ L².

We say that J is Hilbert-Schmidt if ‖J‖₂ < ∞, and trace class if Tr[|J|] < ∞.

Main Result
Our main result is as follows. Theorem 2.24. Suppose that Assumptions 2.1, 2.3, and 2.18 hold. Almost surely, e^{−tĤ} is a Hilbert-Schmidt/trace class integral operator for every t > 0. Moreover, for every t > 0, it holds with probability one that e^{−tĤ} = K̂(t).
Remark 2.25. We point to Section 3.2.1 and Appendix A for a justification of the well-posedness of the conditional expectation in (2.14) and of the fact that the kernel K̂(t) is Borel measurable, thus making quantities such as Tr[K̂(t)] well defined. Remark 2.26. The closest antecedents of Theorem 2.24 are the results of [20,22], which concern Case 2 in the special case where V(x) = x and Ξ is a Brownian motion. All other cases are new.

Remark 2.27.
Though this direction is not explored in this paper, we expect that one could prove (in similar fashion to, e.g., [25, Theorem 4.12]) that the kernels K̂(t; x, y) admit continuous modifications in t, x, and y.

Optimality and Examples
We finish Section 2 by discussing the optimality of the growth condition (2.3) in our results and by providing examples of covariance functions/distributions γ that satisfy Assumption 2.18.

Optimality of Potential Growth
On the one hand, one of the key aspects of our proof of Proposition 2.10 for unbounded domains I is to show that the growth of V dominates the growth rate of the squared increment process x ↦ (Ξ(x + 1) − Ξ(x))² (see (4.3) and the passage that follows). Given that the growth rate of stationary Gaussian processes (such as Ξ(x + 1) − Ξ(x)) is at most of order log |x| (e.g., Corollary B.2), and that in many cases there is also a matching lower bound (e.g., Remark B.4), the growth condition (2.3) appears to be the best one can hope for with the method we use to prove Proposition 2.10. It would be interesting to see if this condition is necessary for Ĥ to have compact resolvent (perhaps by using the Sturm-Liouville interpretation (1.6)). That being said, for the deterministic operator H, it is well known that having a spectrum of discrete eigenvalues that are bounded below is equivalent to ∫_x^{x+δ} V(y) dy → ∞ as x → ∞ for all δ > 0; hence it is natural to expect that V must have some kind of logarithmic growth to balance the Gaussian potential.
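The order-log|x| growth invoked here is, at heart, the elementary fact that the running maximum of n independent standard Gaussians (e.g., unit-spaced increments of a Brownian Ξ) grows like √(2 log n), so squared increments grow like 2 log n. A quick simulation of ours illustrates it:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 10**6
# n iid N(0, 1) variables, e.g. unit increments Xi(x+1) - Xi(x) of a Brownian Xi
m = rng.normal(0.0, 1.0, n).max()
predicted = np.sqrt(2 * np.log(n))  # ~ 5.26 for n = 10^6
print(m, predicted)  # m tracks sqrt(2 log n)
```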
On the other hand, condition (2.3) is necessary to ensure that E[‖K̂(t)‖₂²] < ∞ for t > 0 close to zero, which is crucial in our proof of Theorem 2.24. Given that the deterministic semigroup e^{−tH} is not trace class for small t > 0 when (2.3) does not hold, we do not expect it is possible to improve Theorem 2.24 in that regard. We refer to Remark 5.22 for more details.

Examples
Given the simplicity of Assumption 2.7, it is straightforward to come up with examples of Gaussian noises to which Proposition 2.10 can be applied. In contrast, Assumption 2.18 is a bit more involved. In what follows, we provide examples of covariance functions/distributions γ that satisfy Assumption 2.18.
Example 2.28. Let γ : PC c → R be an even almost-everywhere-defined function or Schwartz distribution.
1. (Bounded) If γ ∈ L^∞(R), then we call ξ a bounded noise. Depending on the regularity of γ, in many such cases ξ can actually be realized as a continuous Gaussian process on R with covariance E[ξ(x)ξ(y)] = γ(x − y).
2. (White) If γ = σ²δ_0 for some σ > 0, where δ_0 denotes the Dirac delta distribution, then ξ is a Gaussian white noise with variance σ². This corresponds to stochastic integration with respect to a two-sided Brownian motion W with variance σ².
3. (Fractional) If γ(x) = σ²H(2H − 1)|x|^{2H−2} for some σ > 0 and H ∈ (1/2, 1), then ξ is a fractional noise with variance σ² and Hurst parameter H. This noise corresponds to stochastic integration with respect to a two-sided fractional Brownian motion W^H with variance σ² and Hurst parameter H.
4. (L^p-Singular) Let ℓ ∈ N and 1 ≤ p_1, …, p_ℓ < ∞. As a generalization of bounded and fractional noise, we say that ξ is an L^p-singular noise if γ can be written as a sum γ = γ_1 + ⋯ + γ_ℓ with γ_i ∈ L^{p_i}(R).
Our last result in this section is the following.
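For the fractional case, one can sanity-check the covariance structure numerically: sampling a fractional Brownian motion W^H from its standard covariance R(s, u) = (s^{2H} + u^{2H} − |s − u|^{2H})/2 via a Cholesky factorization, the increment W^H(u) − W^H(s) should have variance |u − s|^{2H}. A sketch of ours, with the illustrative choices H = 0.7 and σ = 1:

```python
import numpy as np

rng = np.random.default_rng(7)
H, n_grid, n_samples = 0.7, 50, 20_000
t = np.arange(1, n_grid + 1) / n_grid          # grid on (0, 1], avoiding t = 0

# fBm covariance R(s, u) = (s^{2H} + u^{2H} - |s - u|^{2H}) / 2
S, U = np.meshgrid(t, t, indexing="ij")
R = 0.5 * (S**(2 * H) + U**(2 * H) - np.abs(S - U)**(2 * H))

# sample paths via Cholesky: rows of W have covariance C C^T = R
C = np.linalg.cholesky(R)
W = rng.normal(size=(n_samples, n_grid)) @ C.T

inc = W[:, -1] - W[:, 24]                      # W^H(1) - W^H(0.5)
var_hat = inc.var()
print(var_hat, 0.5**(2 * H))                   # should match |1 - 0.5|^{2H}
```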

Proposition 2.29.
For every covariance γ in Example 2.28, there exists a centered Gaussian process Ξ that satisfies Assumption 2.18.

Proof Outline
In this section, we provide an outline of the proofs of our main results. Most of the more technical results, which we state here as a string of propositions, are accounted for in Sections 4 and 5. Throughout Section 3, we assume that Assumptions 2.1 and 2.3 are met.

Outline for Propositions 2.9 and 2.10
In this outline, we assume that Assumption 2.7 holds. Let FC ⊂ C_0^∞ be the set of real-valued smooth functions ϕ : I → R such that: 1. supp(ϕ) is a compact subset of I in Cases 1, 2-D, and 3-D; and 2. supp(ϕ) is a compact subset of I's closure in Cases 2-R and 3-R. We begin with two classical results in the theory of Schrödinger operators. (For definitions of the functional analysis terminology used in this section, we refer to the standard literature.) Lemma 3.1 gives an estimate that holds for all x ∈ I. Proposition 3.2. E is closed and semibounded on D(E), and FC is a form core for E. H is the unique self-adjoint operator on L² whose sesquilinear form is E, and H has compact resolvent. Lastly, ‖·‖_* is equivalent to the "+1 norm" induced by the form E, where we recall that the latter is defined as ‖f‖_{+1} := (E(f, f) + ‖f‖₂²)^{1/2}. Although Lemma 3.1 and Proposition 3.2 can be proved using standard functional-analytic arguments, we were not able to locate an exact statement in the literature that covers every case considered in this paper. For the sake of completeness, we provide a proof and references in Appendix C. Remark 3.3. Since ‖·‖_* and ‖·‖_{+1} are equivalent, the claim that E is closed on D(E) and that FC is a form core is equivalent to the claim that (D(E), ⟨·, ·⟩_*) is a Hilbert space in which FC is dense.
The following proposition, which we prove in Section 4, is a generalization of a result that first appeared in [36], and also uses Lemma 3.1 as a crucial input: Proposition 3.4. The inequality (2.5) holds almost surely, and thus f ↦ ξ(f²) extends uniquely to a continuous quadratic form on H¹_V. Moreover, almost surely, for every θ > 0, there exists c = c(θ) > 0 such that

|ξ(f²)| ≤ θE(f, f) + c‖f‖₂², f ∈ D(E). (3.1)

Thanks to (3.1), almost surely, ξ is an infinitesimally form-bounded perturbation of E.

Outline for Theorem 2.24
We now go over the outline of the proof of our main result. Throughout, we assume that Assumption 2.18 holds. The outline presented here is separated into five steps. In the first step, we provide details on the construction of the pathwise stochastic integral (2.13). In the second step, we introduce smooth-noise approximations of Ĥ and K̂(t) that serve as the basis of our proof of Theorem 2.24. Then, in the last three steps, we prove Theorem 2.24 using these smooth approximations.

Step 1. Stochastic Integral
If f ∈ PC_c is a step function of the form (2.8), then we can define a pathwise stochastic integral in the usual way:

ξ(f) := Σ_{k=1}^{ℓ} a_k (Ξ(x_k) − Ξ(x_{k−1})).

Thanks to (2.11), straightforward computations reveal that for such f we have the isometry E[ξ(f)²] = ‖f‖²_γ. According to (2.12), step functions are dense in PC_c with respect to ‖·‖_γ, and thus we may then uniquely define a stochastic integral ξ*(f) for arbitrary f ∈ PC_c as the L²(Ω) limit of ξ(f_n), where f_n is a sequence of step functions that converges to f in ‖·‖_γ and L²(Ω) denotes the space of square-integrable random variables on the same probability space on which Ξ is defined.
We now discuss how ξ(f) for general f ∈ PC_c can be defined in a pathwise sense as per Karandikar [26]. Given f ∈ PC_c, for every n ∈ N, one defines an integer k(n) and partition points −∞ < τ_0^{(n)} < ⋯ < τ_{k(n)}^{(n)} < ∞ adapted to f, then the approximate step function f^{(n)} built from f's values at these points, as well as the pathwise stochastic integral ξ(f) := lim_{n→∞} ξ(f^{(n)}) whenever this limit exists. On the one hand, as argued in Appendix A (see also [26, Section 1]), the pathwise construction makes sense of ξ(L_t(Z))'s definition as a conditional expectation given Ξ. On the other hand, ξ(f) retains its meaning as a stochastic integral, since for every f ∈ PC_c, it holds that ξ(f) = ξ*(f) almost surely. Indeed, by combining the L²(Ω)-‖·‖_γ isometry of ξ*, the definition of τ_k^{(n)}, and (2.12), we get a bound on E[(ξ(f^{(n)}) − ξ*(f))²] that is summable in n; we conclude that ξ(f^{(n)}) → ξ*(f) almost surely, as desired.
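The approximation scheme above is easy to visualize numerically. In the sketch below (ours; a dyadic partition stands in for the τ_k^{(n)}, and Ξ is a Brownian path on a fine grid), the step-function integrals ξ(f^{(n)}) approach the integration-by-parts value −∫ f′Ξ dx as the partition is refined:

```python
import numpy as np

rng = np.random.default_rng(8)
n_pts = 2**14 + 1
x = np.linspace(0.0, 1.0, n_pts)
h = x[1] - x[0]
Xi = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), n_pts - 1))])

fx = np.sin(2 * np.pi * x)            # smooth integrand vanishing at 0 and 1
exact = -(2 * np.pi * np.cos(2 * np.pi * x) * Xi).sum() * h   # -int f' Xi dx

def step_integral(n):
    """xi(f^{(n)}): sum f(tau_k) (Xi(tau_{k+1}) - Xi(tau_k)) on a dyadic grid."""
    idx = np.arange(0, n_pts, (n_pts - 1) // 2**n)   # 2^n + 1 partition points
    return np.sum(fx[idx[:-1]] * np.diff(Xi[idx]))

errs = [abs(step_integral(n) - exact) for n in (2, 5, 8, 11)]
print(errs)  # typically decreasing as the partition is refined
```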

Remark 3.5.
Let f ∈ C_0^∞(I), and suppose that we restrict our attention to the almost-sure event on which Ξ is continuous. By a summation by parts, we can rewrite ξ(f^{(n)}) for all n ∈ N. On the one hand, in Cases 1 and 2, the boundary contributions to this summation by parts invariably vanish. On the other hand, since f is of bounded variation, we have convergence to the usual Riemann-Stieltjes integral. In particular, the pathwise stochastic integral defined in (3.2) can be seen as an extension of the Schwartz distribution Ξ′ as defined in Definition 2.8 to all of PC_c. However, as noted in an earlier remark, ξ need not preserve its linearity on all of PC_c.

Step 2. Smooth Approximations
A key ingredient in the proof of Theorem 2.24 consists of using smooth approximations of Ξ for which the classical Feynman-Kac formula can be applied, thus creating a connection between Ĥ as defined via a quadratic form and the kernels K̂(t).
Remark 3.7. Since the mollifier ρ_ε is smooth, the process Ξ_ε′ = (Ξ ∗ ρ_ε)′ = Ξ ∗ ρ_ε′ has continuous sample paths. Thanks to (2.11), straightforward computations reveal that Ξ_ε′ is a stationary Gaussian process with mean zero and covariance

E[Ξ_ε′(x)Ξ_ε′(y)] = ⟨ρ_ε(· − x), ρ_ε(· − y)⟩_γ

for every x, y ∈ R, where the last equality follows from integration by parts.
Moreover, following up on Remark 3.5, we note that the pathwise stochastic integral ξ is coupled to the mollified noise in the following way: For every f ∈ PC_c, the function f ∗ ρ_ε is smooth and compactly supported on I + supp(ρ_ε) ⊂ R, and thus by Remark 3.5 we have the coupling (3.4). Definition 3.8. For every ε > 0, let us define the sesquilinear form Ê_ε on the form domain D(E), and the random kernel K̂_ε(t). Since Ξ_ε has regular sample paths, applying classical operator theory to Ĥ_ε yields the following result: Proposition 3.9. For every ε > 0, the following holds almost surely: There exists a unique self-adjoint operator Ĥ_ε with dense domain D(Ĥ_ε) ⊂ L² whose sesquilinear form is Ê_ε, and Ĥ_ε has compact resolvent.
For every t > 0, e^{−tĤ_ε} is a self-adjoint Hilbert-Schmidt/trace class operator, and we have the Feynman-Kac formula e^{−tĤ_ε} = K̂_ε(t). Moreover, as a direct consequence of the coupling (3.4) and the fact that ξ is a Gaussian process with covariance ⟨·, ·⟩_γ, we can show that the objects introduced in Definition 3.8 serve as good approximations of Ĥ and K̂(t) in the following sense: Proposition 3.10. Almost surely, every vanishing sequence in (0, 1] has a further subsequence (ε_n)_{n∈N} along which the convergences (3.8)-(3.10) hold for all k ∈ N, up to possibly relabeling the eigenfunctions of Ĥ if it has repeated eigenvalues.
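The mollification step can itself be checked numerically: for a Brownian path Ξ on a grid and a triangular mollifier ρ_ε (an arbitrary choice of ours), the smoothed value −∫ f′(x)Ξ_ε(x) dx approaches the unmollified value −∫ f′(x)Ξ(x) dx as ε ↓ 0. A sketch:

```python
import numpy as np

rng = np.random.default_rng(9)
n_pts = 20001
x = np.linspace(-2.0, 2.0, n_pts)
h = x[1] - x[0]
Xi = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(h), n_pts - 1))])

# smooth bump supported on [-1, 1] and its derivative
f = np.exp(-1.0 / np.clip(1.0 - x**2, 1e-12, None)) * (np.abs(x) < 1.0)
fprime = np.gradient(f, h)
target = -h * (fprime * Xi).sum()          # -int f' Xi dx

def smoothed_value(eps):
    # triangular mollifier rho_eps with support [-eps, eps] and unit mass
    k = int(eps / h)
    rho = np.maximum(0.0, 1.0 - np.abs(np.arange(-k, k + 1)) / k)
    rho /= rho.sum() * h
    Xi_eps = np.convolve(Xi, rho, mode="same") * h
    return -h * (fprime * Xi_eps).sum()    # -int f' (Xi * rho_eps) dx

errs = [abs(smoothed_value(e) - target) for e in (0.5, 0.1, 0.02)]
print(errs)  # typically decreasing as eps -> 0
```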

Step 3. Feynman-Kac Formula
We are now in a position to prove Theorem 2.24. We begin by proving that for every t > 0, e^{−tĤ} = K̂(t) almost surely. Let us fix some t > 0. By Propositions 3.9-3.11, almost surely, there exists a vanishing sequence (ε_n)_{n∈N} such that (3.5)-(3.7) hold for every ε_n, and along which the limits (3.8)-(3.10) hold. For the remainder of this step, we assume that we are working with an outcome in this probability-one event.
On the other hand, we note that for every k ∈ N, the corresponding approximation error vanishes as n → ∞. Combining this with the spectral expansion (3.7), the pairs (e^{−tλ_k(Ĥ)}, ψ_k(Ĥ))_{k∈N} can be taken as the eigenvalue-eigenfunction pairs for K̂(t), concluding the proof that K̂(t) = e^{−tĤ}.

Step 5. Last Properties
We now conclude the proof of Theorem 2.24 by showing that, almost surely, e^{−tĤ} is a Hilbert-Schmidt/trace class integral operator for every t > 0. By combining (3.13) with the fact that every Hilbert-Schmidt operator on L² has an integral kernel in L²(I × I) (e.g., [47, Theorem 6.11]), we need only prove that, almost surely, e^{−tĤ} is trace class for all t > 0.
In the previous step of this proof, we have already shown the weaker statement that, for every t > 0, Tr[e −tĤ ] < ∞ almost surely. By a countable intersection we can extend this to the statement that there exists a probability-one event on which Tr[e −tĤ ] < ∞ for every t ∈ Q ∩ (0, ∞). Since λ k (Ĥ) → ∞ as k → ∞, there exists some k 0 ∈ N such that λ k (Ĥ) > 0 for every k > k 0 . Since the sum of e −tλ k (Ĥ) over k ≤ k 0 is finite for every t, and the sum of e −tλ k (Ĥ) over k > k 0 is monotone decreasing in t, the fact that Tr[e −tĤ ] < ∞ holds for t ∈ Q ∩ (0, ∞) implies that it holds for all t > 0, concluding the proof of Theorem 2.24.

In contrast with previous arguments (which we recall apply to Case 2 with V (x) = x), the argument presented here uses smooth approximations of K̂(t) rather than random matrix approximations. Since the present paper does not deal with convergence of random matrices, this choice is natural, and it allows us to sidestep several technical difficulties involved with discrete models. With this said, the proof of (3.8) is inspired by the convergence result for the spectrum of random matrices in [3, Section 2] and [36, Section 5]. We refer to Section 5 for the details.

Proof of Proposition 2.29
The main technical result in the proof of Proposition 2.29 is the following estimate, which is a direct consequence of [19, Lemma 4.2] (as shown in [19], (3.14) is a straightforward consequence of Young's convolution inequality): there exist constants such that for every f ∈ PC c , the bound (3.14) holds. Whenever γ is such that ·, · γ is a semi-inner-product, we know from standard existence theorems that there exists a Gaussian process Ξ on R with covariance (2.11).
As argued in Remark 2.20, such a process must have stationary increments. To see that such Ξ have continuous versions, we note that for any 1 ≤ q ≤ 2 and x < y such that y − x ≤ 1, one has ‖1 [x,y) ‖ q 4 = (y − x) 4/q with 4/q > 1. Thus, given that 1/(1 − 1/2p) ∈ (1, 2] for every p ≥ 1, it follows from Proposition 3.13 that there exist constants c, r > 0 such that

Proof of Propositions 2.9 and 2.10
In this section, we complete the proof of Propositions 2.9 and 2.10. Following up on the outline in Section 3.1, it only remains to prove Proposition 3.4.

Step 1. Reduction to a Simple Inequality
We begin by showing that Proposition 3.4 can be entirely reduced to the following claim: almost surely, for every θ > 0, there exists c = c(θ) > 0 such that (4.1) holds for every f ∈ C ∞ 0 . This is easiest to see in Cases 1, 2-D, and 3-D: on the one hand, in those cases (4.1) directly implies (3.1) for all f ∈ FC, which we can then extend to every f ∈ D(E) since FC is a form core for E. On the other hand, (4.1) implies that |ξ(f 2 )| ≤ max{θ, c} ‖f ‖ 2 * , which yields (2.5). With (2.5) established, the unique continuous extension of ξ(f 2 ) to H 1 V then follows from the fact that C ∞ 0 is dense in the Hilbert space (H 1 V , ·, · * ).
To see how (4.1) implies the desired estimate in the other cases, let us consider for example Case 2-R: by (4.1), almost surely, for every θ̃ > 0 there exists c̃ > 0 such that the analogous bound holds. At this point, controlling f (0) 2 with Lemma 3.1 yields the desired estimate (with the straightforward substitution θ := θ̃(1 + ακ 2 )). Cases 3-R and 3-M can be dealt with in the same way.

Step 2. Proof of (4.1)

We now complete the proof of Proposition 3.4 by proving (4.1). We begin with Cases 1 and 2. Following [32,36], we define the integrated process Ξ̄(x), x ∈ R, so that we can write Ξ(x) = Ξ̄(x) + (Ξ(x) − Ξ̄(x)); hence, for every f ∈ C ∞ 0 , one has a corresponding decomposition of ξ(f 2 ), and it suffices to prove that almost surely, for every θ > 0, there exists c > 0 such that the analogous bound holds for all f ∈ C ∞ 0 . Thanks to Assumption 2.7, the processes x → Ξ̄(x) and x → Ξ(x) − Ξ̄(x) are continuous stationary centered Gaussian processes on R, and thus it follows from standard Gaussian suprema estimates (e.g., Corollary B.2) that there exists a finite random variable C > 0 such that, almost surely, the corresponding growth bound holds for all x ∈ I. Since V (x) ≫ log |x| as |x| → ∞, for every θ > 0, there exist c̃ 1 , c̃ 2 > 0 depending on θ such that the bound holds for all x ∈ I. On the one hand, (4.2) and the above inequality imply one of the required bounds; on the other hand, the same inequalities and |z z̄| ≤ 1 2 (z 2 + z̄ 2 ) imply the other, concluding the proof.

Proof of Theorem 2.24
In this section, we complete the outline for the proof of Theorem 2.24 provided in Section 3.2 by proving Propositions 3.9-3.11. This is done in Sections 5.6-5.9 below. Before we do this, however, we need several technical results regarding the deterministic semigroup e −tH and the behaviour of the local times L t (Z) and L c t (Z). This is done in Sections 5.1-5.5.

Feynman-Kac Formula for Deterministic Operators
We begin by recording some standard results in semigroup theory. By the Feynman-Kac formula, we expect that e −tH = K(t) for the kernels K(t) defined in (5.1). To prove this, we begin with a reminder regarding the Kato class of potentials. We use K loc = K loc (I) to denote the class of functions f such that f 1 K ∈ K for every compact subset K of the closure of I. While Theorem 5.4 follows from standard functional-analytic methods (e.g., [11]), we were not able to locate an exact statement in the literature that covers Cases 2-R and 3-M; we provide a full proof and references in Appendix D.
It is easy to see from (5.2) that locally integrable functions are in K loc so that, by Assumption 2.3, V ∈ K loc . Therefore, we have the following immediate consequence of Theorem 5.4.

Reflected Brownian Motion Couplings
The local time process of the Brownian motion B is much better studied than that of its reflected versions X and Y . Thus, it is convenient to reduce statements regarding the local times of the latter to statements concerning the local time of B. To achieve this, we use the following couplings of B with X and Y .

Half-Line
For any x > 0, we can couple B and X in such a way that X x (t) = |B x (t)| for every t ≥ 0. In particular, for any functional F of Brownian paths, the expectations of F (X x ) can be computed from those of F (|B x |). Under the same coupling, we observe that for every positive x, y, and t, the conditioned processes agree, and therefore the same holds for any path functional F . According to the strong Markov property and the symmetry of Brownian motion about 0, we note the equivalence of conditionings in (5.7), where we define the hitting time τ 0 as in Remark 2.22. Indeed, we can obtain the left-hand side of (5.7) from the right-hand side by reflecting (B x |B x (t) = −y) after it first hits zero and then taking an absolute value (see Figure 1 below for an illustration). Since the relevant hitting probability is easily computed from the joint density of the running maximum and current value of a Brownian motion [40, Chapter III, Exercise 3.14], we obtain the desired identity.

Figure 1: A conditioned Brownian path (black) and its reflection after the first passage to zero (red).
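The coupling X x (t) = |B x (t)| is easy to test numerically. The following Python sketch (our own illustration, not part of the paper; all helper names are ours) compares a Monte Carlo estimate of the mean of the reflected process under this coupling with the exact folded-normal mean:

```python
import math
import random

def folded_mean(x, t):
    # Exact mean of |N(x, t)|: sqrt(2t/pi) e^{-x^2/(2t)} + x (1 - 2 Phi(-x/sqrt(t)))
    s = math.sqrt(t)
    phi = 0.5 * (1 + math.erf((-x / s) / math.sqrt(2)))  # Phi(-x/sqrt(t))
    return s * math.sqrt(2 / math.pi) * math.exp(-x * x / (2 * t)) + x * (1 - 2 * phi)

def mc_reflected_mean(x, t, n=200_000, seed=0):
    # Coupling X^x(t) = |B^x(t)|: sample B^x(t) ~ N(x, t) and take absolute values.
    rng = random.Random(seed)
    s = math.sqrt(t)
    return sum(abs(rng.gauss(x, s)) for _ in range(n)) / n

# The reflected process at time t has the folded-normal law of |B^x(t)|.
assert abs(mc_reflected_mean(1.0, 1.0) - folded_mean(1.0, 1.0)) < 0.02
```

With x = 1 and t = 1 the exact value is approximately 1.1666, and the Monte Carlo estimate with 2·10^5 samples agrees to about two decimal places.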

Bounded Interval
For any x ∈ (0, b), we can couple Y x and B x by reflecting the path of the latter at the boundary of (0, b); that is, via the folding identity (5.10).
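Concretely, the reflection used here is the 2b-periodic folding map; the following minimal Python helper (our own notation, not from the paper) folds a real path into [0, b] coordinatewise:

```python
def reflect(z: float, b: float) -> float:
    """Fold a real number into [0, b] by repeated reflection at 0 and b."""
    y = z % (2 * b)            # reduce modulo the period 2b
    return y if y <= b else 2 * b - y

# Reflecting a Brownian path coordinatewise yields the coupled process Y.
for z, expected in [(0.5, 0.5), (1.3, 0.7), (-0.3, 0.3), (2.4, 0.4)]:
    assert abs(reflect(z, 1.0) - expected) < 1e-9
```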

Boundary Local Time
In this section, we control the exponential moments of the boundary local time of the reflected paths X and Y . On the one hand, by Brownian scaling, we have an equality in law.
Next, we aim to extend the result of Lemma 5.6 to the local time of the bridge processes Z x,x t . Before we can do this, we need the following estimate on Π Z .

Lemma 5.7. For every t > 0, the bound (5.16) holds.

Proof. In all three cases, Π Z (t; x, x) ≥ 1/ √ 2πt, and thus it suffices to prove that sup (x,y)∈I 2 Π Z (t; x, y) < ∞. According to the integral test for series convergence, for every b, t > 0 and z ∈ R, the sum over k ≥ −z/2b of the relevant Gaussian terms is bounded, and similarly for the sum from k = −∞ to −z/2b; hence (5.16) holds.
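The bound of Lemma 5.7 can be sanity-checked numerically. The sketch below (ours; the truncation level K and helper names are assumptions) builds the reflected (Neumann) heat kernel on (0, b) by the method of images and verifies the diagonal lower bound 1/√(2πt), the symmetry of the kernel, and conservation of mass:

```python
import math

def gauss(t, z):
    return math.exp(-z * z / (2 * t)) / math.sqrt(2 * math.pi * t)

def neumann_kernel(t, x, y, b=1.0, K=50):
    # Method of images: mirror Gaussian sources across both endpoints of (0, b).
    return sum(gauss(t, y - x - 2 * k * b) + gauss(t, y + x - 2 * k * b)
               for k in range(-K, K + 1))

t, b = 0.5, 1.0
xs = [0.05 * i for i in range(1, 20)]
# Diagonal lower bound: the k = 0 direct term alone contributes 1/sqrt(2 pi t).
assert all(neumann_kernel(t, x, x, b) >= 1 / math.sqrt(2 * math.pi * t) for x in xs)
# Symmetry of the kernel in (x, y).
assert abs(neumann_kernel(t, 0.3, 0.7, b) - neumann_kernel(t, 0.7, 0.3, b)) < 1e-9
# Conservation of mass: the kernel integrates to one over (0, b).
dy = 1e-3
mass = sum(neumann_kernel(t, 0.4, (j + 0.5) * dy, b) * dy for j in range(int(b / dy)))
assert abs(mass - 1.0) < 1e-3
```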
We finish this section with the following.
Proof. As it turns out, (5.17) follows from Lemma 5.6. The trick that we use to prove this makes several other appearances in this paper: since the exponential function is nonnegative, for every θ > 0, an application of the tower property and the Doob h-transform yields (5.18). If we condition on Z x,x t (t/2) = y, then the path segments on [0, t/2] and [t/2, t] are independent of each other and have respective distributions Z x,y t/2 and Z y,x t/2 . Since Π Z (t/2; ·, ·) is symmetric for every t > 0, the time-reversed process s → Z y,x t/2 (t/2 − s) has the same law as Z x,y t/2 ; this yields (5.19), where the equality in (5.19) follows from independence and the fact that local time is invariant with respect to time reversal, and the last term in (5.19) follows from Jensen's inequality.
Let us define the constant s t (Z) < ∞ as in (5.15). According to (5.18) and (5.19), we then have the bound (5.20) for every t > 0. Hence the present result is a direct consequence of Lemma 5.6.
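Schematically, the midpoint sampling trick just used can be summarized as follows (our own rendering of (5.18)-(5.19), with the midpoint density written out explicitly):

```latex
\begin{aligned}
\mathbb{E}^{x,x}_{t}\bigl[e^{\theta L^{c}_{t}(Z)}\bigr]
  &= \int_{I} \mathbb{E}^{x,y}_{t/2}\bigl[e^{\theta L^{c}_{t/2}(Z)}\bigr]\,
     \mathbb{E}^{y,x}_{t/2}\bigl[e^{\theta L^{c}_{t/2}(Z)}\bigr]\,
     \frac{\Pi_{Z}(t/2;x,y)\,\Pi_{Z}(t/2;y,x)}{\Pi_{Z}(t;x,x)}\,\mathrm{d}y \\[2pt]
  &= \int_{I} \mathbb{E}^{x,y}_{t/2}\bigl[e^{\theta L^{c}_{t/2}(Z)}\bigr]^{2}\,
     \frac{\Pi_{Z}(t/2;x,y)^{2}}{\Pi_{Z}(t;x,x)}\,\mathrm{d}y
  \;\le\; \sup_{y \in I}\,\mathbb{E}^{x,y}_{t/2}\bigl[e^{2\theta L^{c}_{t/2}(Z)}\bigr],
\end{aligned}
```

where the second equality uses time reversal and the symmetry of Π Z , and the final step combines Jensen's inequality with the fact that the midpoint density integrates to one.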

L q Norms of Local Time
In this section, we obtain bounds on the exponential moments of the L q norms of the local times of B, X, and Y . Such results for B x are well known (see, for instance, [8, Section 4.2]). For X and Y and the bridge processes, we rely on the couplings introduced in Section 5.2 and the midpoint sampling trick used in the proof of Lemma 5.8, respectively. Before we state our result, we need the following.
where the second line follows from Minkowski's inequality and (z + z̄) 2 ≤ 2(z 2 + z̄ 2 ), and the third line follows from conditional independence of the path segments. The result then follows by taking a supremum over x ∈ I.
Proof. We begin by noting that ‖L t (Z)‖ 1 = t by (2.10), and thus the result is trivial if q = 1. To prove the result for 1 < q ≤ 2, we claim that it suffices to show that there exist nonnegative random variables R 1 , R 2 ≥ 0 with finite exponential moments in some neighbourhood of zero, as well as constants κ 1 , κ 2 > 1, such that (5.22) holds for all t > 0. To see this, suppose (5.22) holds, and let θ 0 > 0 be such that E[e θR1 ] < ∞ for all θ < θ 0 . Then, for any fixed θ > 0, (5.21) holds for t < 2(θ 0 /2θ) 1/κ1 = 2 1−1/κ1 (θ 0 /θ) 1/κ1 . Since κ 1 > 1, we have 2 1−1/κ1 > 1, and thus by repeating this procedure infinitely often, we obtain by induction that (5.21) holds for every t > 0 and every q > 1. Thanks to the large deviation result [8, Theorem 4.2.1], we know that for every q > 1, there exists some c q > 0 such that the corresponding tail bound holds. Thus, in Case 1, (5.22) holds with R 1 = ‖L 1 (B 0 )‖ q 2 and κ 1 = 1 + 1/q. Consider now Case 2. By coupling X x (t) = |B x (t)| for all t > 0, we note that for every a > 0, one has L a t (X) = L a t (B) + L −a t (B). Thus, the proof in Case 2 follows from Case 1. Finally, consider Case 3. Recall the coupling of Y x and B x in (5.10), which yields the local time identity (5.11). The argument that follows is inspired by the proof of [10, Lemma 2.1]: under the coupling (5.11),

where c 1 , c 2 > 0 only depend on b and q: the inequality on the third line follows from Hölder's inequality, and the equality on the fourth line and the inequality on the last line follow from standard identities for the local time.
By Brownian scaling and translation invariance, we have that (5.22) holds with κ 1 = 1 + 1/q and κ 2 = 2.
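Looking back at the bootstrapping step at the beginning of this proof, the role of the condition κ 1 > 1 is simply that the admissible time horizon grows geometrically under the doubling procedure (our schematic):

```latex
T_{0} := 2\Bigl(\tfrac{\theta_{0}}{2\theta}\Bigr)^{1/\kappa_{1}},
\qquad
T_{n+1} := 2^{\,1 - 1/\kappa_{1}}\, T_{n}
\quad\Longrightarrow\quad
T_{n} = \bigl(2^{\,1-1/\kappa_{1}}\bigr)^{n}\, T_{0}
\;\xrightarrow[n \to \infty]{}\; \infty,
```

since 2^{1−1/κ1} > 1; hence every t > 0 is eventually covered by the induction.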
Arguing as in the passage following (5.18), we obtain the desired bound, where the inequality on the second line follows from a combination of the triangle inequality and (z + z̄) 2 ≤ 2(z 2 + z̄ 2 ), the equality on the third line follows from independence and invariance of local time under time reversal, and the inequality on the last line follows from Jensen's inequality.
With s t (Z) as in (5.15), similarly to (5.20) we then have the upper bound for every t > 0; whence the present result readily follows from Lemma 5.10.

Compactness Properties of Deterministic Kernels
We now conclude the proofs of our technical results with some estimates regarding the integrability/compactness of the deterministic kernels (5.1). In this section and several others, to alleviate notation, we introduce the following shorthand.

Notation 5.12. For every t > 0, we define the path functional A t as in (5.24).

Lemma 5.13. For every p ≥ 1 and t > 0, the corresponding moment bound holds.

Proof. Let us begin with Case 1. By Assumption 2.3, for every c 1 > 0, there exists c 2 > 0 large enough so that V (x) ≥ c 1 log(1 + |x|) − c 2 for every x ∈ R. Therefore, we have a pointwise exponential bound on the kernel. The supremum of exponential moments of local time can be bounded by a direct application of Lemma 5.8. Then, by (5.9), we have a corresponding bound in Case 2, and this term can be controlled in the same way as Case 1.
This is finite by Lemmas 5.7 and 5.8.
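The way the logarithmic lower bound on V enters these integrability estimates can be isolated in a toy computation (our own illustration): if V (x) ≥ c 1 log(1 + |x|) − c 2 , then e −tV (x) is dominated by a multiple of (1 + |x|) −tc1 , which is integrable over R exactly when tc 1 > 1, with ∫ R (1 + |x|) −p dx = 2/(p − 1). A quick numerical check in Python:

```python
def tail_integral(p, X=200.0, n=200_000):
    """Midpoint-rule approximation of the integral of (1+|x|)^(-p) over [-X, X]."""
    dx = 2 * X / n
    return sum((1 + abs(-X + (i + 0.5) * dx)) ** (-p) for i in range(n)) * dx

# Convergent regime p = t*c1 > 1: the truncated integral approaches 2/(p-1).
p = 3.0
assert abs(tail_integral(p) - 2 / (p - 1)) < 1e-3
# Borderline p = 1: the truncated integrals keep growing (logarithmically) in X.
assert tail_integral(1.0, X=1e4) > tail_integral(1.0, X=1e2) + 4.0
```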

Proof of Proposition 3.9
Suppose we can prove that for every ε > 0, the potential V + Ξ ε satisfies Assumption 2.3 with probability one (up to a random additive constant, making it nonnegative). Then, by Proposition 3.2, the Ĥ ε are self-adjoint with compact resolvent. Moreover, K̂ ε (t) = e −tĤε and the properties (3.5)-(3.7) then follow from Corollary 5.5, and the fact that e −tĤε is trace class follows from Lemma 5.13 in the case p = 1. Thus, it only remains to prove the following:

Lemma 5.14. For every ε > 0, there exists a random c = c(ε) ≥ 0 such that the potential V + Ξ ε + c satisfies Assumption 2.3 with probability one.
Proof. Since Ξ ε is continuous, V + Ξ ε is locally integrable on the closure of I. Moreover, if we prove that |Ξ ε (x)| ≪ log |x| as x → ±∞, then the continuity of Ξ ε also implies that V + Ξ ε is bounded below and grows faster than log at infinity. The fact that |Ξ ε (x)| ≪ log |x| follows from Corollary B.2, since Ξ ε is stationary.

Proof of Proposition 3.10
Our proof of this result is similar to [3, Section 2] and [36, Section 5], save for the fact that we use smooth approximations instead of discrete ones. We provide the argument in full. Proceeding along those lines, we may extract a further subsequence along which (3) holds by the Arzelà-Ascoli theorem. Finally, in Case 3, (3) immediately implies (1), whereas in Cases 1 and 2, by combining (3) with the Vitali convergence theorem, to prove (1) it suffices to show that for every ε > 0, there exists K > 0 large enough and δ > 0 small enough so that the uniform integrability conditions hold. The first of these conditions follows from the fact that sup n ‖V 1/2 f n ‖ 2 < ∞ and that V (x) ≫ log |x|; the second follows from the uniform bound in Lemma 3.1.

Remark 5.16.
It is easy to see by definition of ·, · * that if f n → f in the sense of Lemma 5.15 (1)-(4), then for every g ∈ FC, one has lim n→∞ E(g, f n ) = E(g, f ).
We can reformulate Proposition 3.4 in terms of ‖·‖ * as follows:

Lemma 5.17. There exist finite random variables c 1 , c 2 , c 3 > 0 such that the stated bounds hold.

We also have the following finite-ε variant:

Lemma 5.18. There exist finite random variables c̃ 1 , c̃ 2 , c̃ 3 > 0 such that the analogous bounds hold for every ε ∈ (0, 1].

Proof. Arguing as in the proof of Proposition 3.4, it suffices to show that sup ε∈(0,1] sup 0≤x≤b |Ξ ε (x)| < ∞ almost surely and that there exist finite random variables C > 0 and u > 1, independent of ε ∈ (0, 1], such that a uniform growth bound holds for every x ∈ R. On the one hand, since the mollifiers ρ ε integrate to one, the first bound follows. On the other hand, by Corollary B.2 and Remark B.3, for every x ∈ I and ε ∈ (0, 1], one has a bound uniform in ε, which yields the desired estimate.
Proof. Clearly Ξ ε → Ξ pointwise; hence for (5.29) it suffices to prove convergence of the corresponding integrals. Since f ′g + f g′ is compactly supported and Ξ is continuous (hence bounded on compacts), the result follows by dominated convergence.
Let us now prove (5.30). Using again the fact that g and g′ are compactly supported, we know that there exists a compact K ⊂ R (in Case 3 we may simply take K = [0, b]) such that the relevant integrals are supported in K, and similarly with f n replaced by f and Ξ εn replaced by Ξ. Given that, as n → ∞, Ξ εn 1 K → Ξ1 K in L 2 , f n ′g + f n g′ → f ′g + f g′ weakly in L 2 , and sup n ‖f n ′g + f n g′‖ 2 < ∞, we conclude that lim n→∞ f n ′g + f n g′ , Ξ εn = f ′g + f g′ , Ξ .
We finally have all the necessary ingredients to prove the spectral convergence. We first prove that there exists a subsequence (ε n ) n∈N such that lim inf n→∞ λ k (Ĥ εn ) ≥ λ k (Ĥ) (5.31) for every k ∈ N.

Remark 5.21. For the sake of readability, we henceforth denote any subsequence and further subsequences of (ε n ) n∈N as (ε n ) n∈N itself.
By combining Remark 5.16 and (5.30), this means that l k ⟨g, f k ⟩ = lim n→∞ λ k (Ĥ εn ) ⟨g, ψ k (Ĥ εn )⟩ = lim n→∞ Ê εn (g, ψ k (Ĥ εn )) = Ê(g, f k ) for all k ∈ N and g ∈ FC. That is, (l k , f k ) k∈N consists of eigenvalue-eigenfunction pairs of Ĥ, though these pairs may not exhaust the full spectrum. Since the l k are arranged in increasing order, this implies that l k ≥ λ k (Ĥ) for every k ∈ N, which proves (5.31).

Proof of Proposition 3.11 Part 1
We begin by proving (3.9).

Step 1. Computation of Expected L 2 Norm
Our first step in the proof of (3.9) is to obtain a formula for E‖ K̂ ε (t) − K̂(t)‖ 2 2 that is amenable to analysis, namely (5.34), where we recall the notation of A t (Z) from (5.24). We now prove (5.34).
By Fubini's theorem, we can express this expectation in terms of two independent copies Z i;x,y t of the bridge process that are independent of Ξ, where E Ξ denotes the expected value with respect to Ξ conditional on the Z i;x,y t . For every f 1 , f 2 ∈ PC c , the sum ξ(f 1 ) + ξ(f 2 ) is Gaussian with mean zero and variance ‖f 1 ‖ 2 γ + 2⟨f 1 , f 2 ⟩ γ + ‖f 2 ‖ 2 γ . Then, (5.34) is a consequence of applying Fubini's theorem to (5.35) with the rearrangement valid for all 0 < u < v < w. Indeed, we note for instance that a similar argument applied to the terms on the second line of (5.35) then yields (5.34).

Step 2. Convergence Inside Expectation
With (5.34) in hand, our second step to prove (3.9) is to show that, for every x ∈ I, the integrand converges almost surely. This is a simple consequence of (2.12) coupled with the fact that if f ∈ L q for some q ≥ 1, then ‖f ∗ ρ ε − f ‖ q → 0 as ε → 0.
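The approximation fact invoked here — that mollification converges in L q — is classical; the sketch below (our own illustration; the triangle kernel and grid are arbitrary choices, and ρ ε is our name for the mollifier) smooths an indicator function and checks that the L 1 error shrinks with ε:

```python
def l1_mollification_error(eps, dx=0.001):
    # Grid on [-1, 2]; f = indicator of [0, 1]; rho_eps = triangle kernel of width eps.
    xs = [-1 + i * dx for i in range(int(3 / dx) + 1)]
    f = [1.0 if 0.0 <= x <= 1.0 else 0.0 for x in xs]
    half = int(eps / dx)
    kernel = [max(0.0, 1 - abs(j * dx / eps)) for j in range(-half, half + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]   # discrete mollifier summing to one
    smoothed = []
    for i in range(len(xs)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(f):
                acc += k * f[idx]
        smoothed.append(acc)
    return sum(abs(a - b) for a, b in zip(smoothed, f)) * dx

# The L^1 distance between f and its mollification decreases as eps -> 0.
errors = [l1_mollification_error(e) for e in (0.2, 0.1, 0.05)]
assert errors[0] > errors[1] > errors[2] and errors[2] < 0.05
```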

Step 3. Convergence Inside Integral
Our next step is to prove that for every x ∈ I, we have the limit in expectation (5.37). The dominating quantity is finite by Lemma 5.11, since 1 ≤ q i ≤ 2 for every i. According to Young's convolution inequality, the fact that the mollifiers integrate to one implies that ‖f ∗ ρ ε ‖ q ≤ ‖f ‖ q ‖ρ ε ‖ 1 = ‖f ‖ q . Thus, it follows from (2.12) that the approximating quantities are uniformly bounded. Since ‖·‖ γ is a seminorm, it satisfies the triangle inequality. Given that L t (Z x,x 2t ) and L [t,2t] (Z x,x 2t ) are both smaller than L 2t (Z x,x 2t ), applying once again (2.12) and Young's inequality yields a bound that is finite by Lemma 5.11. We therefore conclude that (5.39) holds, and thus (5.37) as well.

Step 4. Convergence of the Integral
Our final step in the proof of (3.9) is to show that (5.34) converges to zero. Given (5.37), by applying the dominated convergence theorem, it suffices to find an integrable function that dominates (5.40) for every ε > 0. By Hölder's inequality, (5.40) is bounded by an expression involving constants C, θ > 0 that is finite by Lemma 5.11. We therefore conclude that (5.40) is dominated by an integrable function for all ε > 0; hence (3.9) holds.
Considering Case 1 for simplicity, it follows from the reverse Hölder inequality that for every p > 1, the above is bounded below accordingly for every x ∈ R. If V (x) ≤ c 1 log(1 + |x|) + c 2 for some c 1 > 0 and large enough c 2 > 0, then an argument similar to the proof of Lemma 5.13 (using the bound log(1 + |z + z̄|) ≤ log(1 + |z|) + |z̄| instead of (5.25)) yields a further lower bound involving some finite ζ t > 0 that only depends on t; this blows up whenever t ≤ 1/c 1 . Thus, if we do not assume (2.3), then there is always some t 0 > 0 such that E[‖ K̂(t)‖ 2 2 ] = ∞ for all t ≤ t 0 . Essentially the same argument implies that ‖e −tH ‖ 2 = ∞ for all t ≤ t 0 for the deterministic operator H as well.
Arguing as in the previous section, the above is seen to be equal to

A Measurability of Kernel
We begin by proving that, in Case 1, for every realization of Ξ as a continuous function, (x, y) → K̂(t; x, y) can be made a Borel measurable function on R 2 .

Notation A.1. Let C [0,t] be the set of continuous functions f : [0, t] → R, which we equip with the uniform topology. Let C = C(R) be the space of continuous functions f : R → R, equipped with the uniform-on-compacts topology; and let C 0 = C 0 (R) be the space of continuous and compactly supported functions f : R → R, equipped with the uniform topology. We use P 0,0 t to denote the probability measure of the Brownian bridge on C [0,t] , and we assume that C is equipped with the probability measure of Ξ.
By Fubini's theorem, it suffices to prove that there exists a measurable map F : R 2 × C [0,t] × C → R such that for every (x, y) ∈ R 2 , ω ∈ C [0,t] , and ω̃ ∈ C, we can interpret e −⟨Lt(B x,y t ),V ⟩−ξ(Lt(B x,y t )) = F ((x, y), ω, ω̃) (A.1) (here, ω̃ ∈ C corresponds to a realization of Ξ, and ((x, y), ω) ∈ R 2 × C [0,t] corresponds to a realization of the Brownian bridge B x,y t with deterministic endpoints x and y and random dynamics given by the Brownian path B 0,0 t ). Indeed, if this holds, then for every realization of the noise ω̃ ∈ C, we can define the Borel measurable function K̂(t; x, y) as the expected value of Π B (t; x, y) e −⟨Lt(B x,y t ),V ⟩−ξ(Lt(B x,y t )) given Ξ. Given a realization ω ∈ C [0,t] of B 0,0 t and x, y ∈ R, we can construct a realization of B x,y t by using a measurable map F 1 . Next, we let F 2 : C [0,t] → C 0 be the measurable function that maps ω to its (continuous) local time. More precisely, let E ⊂ C [0,t] be the event on which the limit defining the local time exists and is finite for all a ∈ R, and the resulting function a → L a t (ω) is an element of C 0 . (We know from [40, Chapter VI, Corollaries 1.8 and 1.9] that E has probability one under the law of B x,y t .) Then, for every ω ∈ C [0,t] , we define the function F 2 (ω) ∈ C 0 accordingly. (To see that this is measurable, note that ω → L a t (ω)1 {ω∈E} is measurable for every fixed a ∈ R, and that the Borel σ-algebra on C 0 is generated by evaluation maps.) Finally, we combine F 1 and F 2 to obtain F . In order to prove that the diagonal x → K̂(t; x, x) is Borel measurable, we apply the same argument, except that x = y; the measurability in the remaining cases is proved similarly.
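The occupation-density limit defining L a t (ω) above is easy to probe numerically. The following sketch (ours; the step counts and tolerances are arbitrary choices) discretizes a Brownian path, forms the approximate local time (2ε) −1 ∫ 0 t 1{|B s − a| < ε} ds, and checks the occupation identity ∫ R L a t da = t:

```python
import math
import random

def occupation_local_time(path, dt, a, eps):
    # Approximate L^a_t as (1 / 2 eps) x (time the path spends within eps of a).
    return sum(dt for z in path if abs(z - a) < eps) / (2 * eps)

rng = random.Random(1)
n, dt = 5_000, 1.0 / 5_000
path, z = [], 0.0
for _ in range(n):
    z += rng.gauss(0.0, math.sqrt(dt))
    path.append(z)

eps, da = 0.05, 0.01
lo, hi = min(path) - 2 * eps, max(path) + 2 * eps
levels = [lo + k * da for k in range(int((hi - lo) / da) + 1)]
total = sum(occupation_local_time(path, dt, a, eps) for a in levels) * da
# Local time is the density of occupation measure, so the level integral is t = 1.
assert abs(total - 1.0) < 0.15
```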

B Tails of Gaussian Suprema
Throughout this section, we assume that X(x) x∈T is a continuous centered Gaussian process on some index space T. We have the following result regarding the behaviour of the tails of X's supremum.
where Φ denotes the standard Gaussian CDF.
Using this Gaussian tail estimate, we can control the asymptotic growth of functions involving Gaussian suprema.
Corollary B.2. Let T = R, and suppose that X is stationary. There exists a finite random variable C > 0 such that, almost surely, |X(x)| ≤ C log(2 + |x|), x ∈ R.
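Corollary B.2 can be visualized in the simplest stationary example, namely i.i.d. standard Gaussians indexed by the integers (a toy case of ours, not the processes used in the paper): the running maximum of |X 1 |, . . . , |X n | tracks √(2 log n), which is indeed O(log n).

```python
import math
import random

rng = random.Random(7)
running_max, checkpoints = 0.0, {}
for k in range(1, 100_001):
    running_max = max(running_max, abs(rng.gauss(0.0, 1.0)))
    if k in (100, 10_000, 100_000):
        checkpoints[k] = running_max

# The maximum of k i.i.d. |N(0,1)| variables concentrates near sqrt(2 log k) ...
for k, m in checkpoints.items():
    assert 0.5 < m / math.sqrt(2 * math.log(k)) < 1.5
# ... so in particular it is dominated by C log(2 + k) with a modest constant C.
assert all(m <= 3.0 * math.log(2 + k) for k, m in checkpoints.items())
```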

C.2.1 Step 1. Norm Equivalence and E is Semibounded
We begin by proving that ‖·‖ +1 and ‖·‖ * are equivalent and that E is semibounded. In Cases 1, 2-D, and 3-D, it suffices to observe that, because V ≥ 0, one has E(f, f ) = 1 2 ‖f ′‖ 2 2 + ‖V 1/2 f ‖ 2 2 ≥ 0. In the other cases, where E(f, f ) contains boundary terms of the form −αf (0) 2 and −βf (b) 2 , both the semiboundedness of E and the equivalence of norms follow from Lemma 3.1.
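The mechanism by which Lemma 3.1 controls the boundary terms can be sketched as follows (our rendering, for smooth f on (0, ∞) vanishing at infinity and any δ > 0):

```latex
f(0)^2 \;=\; -\int_0^{\infty} \bigl(f^2\bigr)'(x)\,\mathrm{d}x
       \;=\; -\int_0^{\infty} 2 f(x) f'(x)\,\mathrm{d}x
       \;\le\; 2\,\|f\|_{2}\,\|f'\|_{2}
       \;\le\; \delta\,\|f'\|_{2}^{2} + \delta^{-1}\,\|f\|_{2}^{2},
```

so a boundary term −αf (0) 2 is form-bounded by the kinetic term with an arbitrarily small relative constant, and semiboundedness of E follows.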

C.2.2 Step 2. E is Closed
Knowing that ‖·‖ +1 and ‖·‖ * are equivalent, to prove that E is closed, it suffices to show that (D(E), ·, · * ) is a Hilbert space. This follows from the fact that Sobolev spaces and the L 2 space with measure V (x)dx are complete, noting further that ‖g n − g‖ * → 0 as n → ∞ for a sequence (g n ) n∈N ⊂ D(E) and g ∈ H 1 V implies, by the fundamental theorem of calculus, that g n → g pointwise. Hence in Cases 2-D, 3-D, and 3-M, boundary conditions of the form g n (0) = 0 and g n (b) = 0 are preserved in the limit g.

In particular, since local time is invariant under time reversal, we have (5.3).

D.2 Proof of (5.4)
For every x, y, z ∈ I and t, t̃ > 0, if we condition the path Z x,y t+t̃ on Z x,y t+t̃ (t) = z, then the path segments on [0, t] and [t, t + t̃] are conditionally independent, and by the Doob h-transform, Z x,y t+t̃ (t) has density z → Π Z (t; x, z)Π Z (t̃; z, y) / Π Z (t + t̃; x, y). Since this holds for all z, we have that ∫ I K(t; x, z)K(t̃; z, y) dz is equal to (recall the notation A t from (5.24)) Π Z (t + t̃; x, y) ∫ I E e At(Z 1;x,z t )+At̃(Z 2;z,y t̃ ) · Π Z (t; x, z)Π Z (t̃; z, y) / Π Z (t + t̃; x, y) dz = Π Z (t + t̃; x, y) ∫ I E e A t+t̃ (Z x,y t+t̃ ) | Z x,y t+t̃ (t) = z P(Z x,y t+t̃ (t) ∈ dz) = K(t + t̃; x, y), as desired.

D.3 Feynman-Kac Formula
We now complete the proof of Theorem 5.4 by showing that e −tH = K(t) for all t > 0.
The proof of this in Case 1 can be found in [43, Theorem 4.9]. For Cases 2-D and 3-D, we refer to [11, (34)]. Then, for every t > 0, ‖K n (t) − K(t)‖ op → 0 as n → ∞.
Item (1) above implies that K(t) has a generator, so it only remains to prove that this generator is in fact H. By Lemma 5.13 in the case p = 1, we know that the K n (t) and K(t) are trace class. Thus it suffices to prove that lim n→∞ ∫ 0 ∞ K n,0 (2t; x, x) dx = lim n→∞ ∫ 0 ∞ K n (2t; x, x) dx = ∫ 0 ∞ K(2t; x, x) dx.
Since X x,x 2t is almost surely continuous, hence bounded, the result is a straightforward application of monotone convergence (both with E x,x and the dx integral).
We now prove convergence of eigenvalues and eigenvectors. Let E denote the form of H and D(E) its domain, as defined in Definition 2.5 for Case 2-R. We note that we can think of H n as the operator with the same form E but acting on the smaller domain D n := {f ∈ H 1 V (0, ∞) : f (x) = 0 for every x ≥ n} ⊂ D(E).
These domains are increasing, in that D 1 ⊂ D 2 ⊂ · · · ⊂ D(E). A straightforward modification of the convergence argument presented in Section 5.6 gives the desired result (at least through a subsequence).

Acknowledgments

The author is grateful for discussions regarding the one-dimensional Anderson Hamiltonian and the parabolic Anderson model, which served as a chief motivation for the writing of this paper. The author thanks Michael Aizenman for helpful pointers in the literature regarding random Schrödinger operators. The author gratefully acknowledges Mykhaylo Shkolnikov for his continuous guidance and support and for his help regarding a few technical obstacles in the proofs of this paper, as well as Vadim Gorin and Mykhaylo Shkolnikov for discussions concerning the resolution of an error that appeared in a previous version of the paper. The author thanks the anonymous referees for carefully reading several previous versions of this paper, as well as for a number of insightful comments that helped significantly improve the presentation of the present version.