Analysis of a Stratified Kraichnan Flow

We consider the stochastic convection-diffusion equation \[ \partial_t \theta(t\,,{\bf x}) =\nu\Delta \theta(t\,,{\bf x}) + V(t\,,x_1)\partial_{x_2}\theta(t\,,{\bf x}), \] for $t>0$ and ${\bf x}=(x_1\,,x_2)\in\mathbb{R}^2$, subject to $\theta_0$ being a nice initial profile. Here, the velocity field $V$ is assumed to be centered Gaussian with covariance structure \[ \text{Cov}[V(t\,,a)\,,V(s\,,b)]= \delta_0(t-s)\rho(a-b)\qquad\text{for all $s,t\ge0$ and $a,b\in\mathbb{R}$}, \] where $\rho$ is a continuous and bounded positive-definite function on $\mathbb{R}$. We prove a quite general existence/uniqueness/regularity theorem, together with a probabilistic representation of the solution that represents $\theta$ as an expectation functional of an exogenous infinite-dimensional Brownian motion. We use that probabilistic representation in order to study the It\^o/Walsh solution, when it exists, and relate it to the Stratonovich solution, which is shown to exist for all $\nu>0$. Our a priori estimates imply the physically natural fact that, quite generally, the solution dissipates. In fact, very often, \begin{equation} P\left\{\sup_{|x_1|\leq m}\sup_{x_2\in\mathbb{R}} |\theta(t\,,{\bf x})| = O\left(\frac{1}{\sqrt t}\right)\qquad\text{as $t\to\infty$} \right\}=1\qquad\text{for all $m>0$}, \end{equation} and the $O(1/\sqrt t)$ rate is shown to be unimprovable. Our probabilistic representation is malleable enough to allow us to analyze the solution in two physically relevant regimes: as $t\to\infty$ and as $\nu\to 0$. Among other things, our analysis leads to a "macroscopic multifractal analysis" of the rate of decay in the above equation in terms of the reciprocal of the Prandtl (or Schmidt) number, valid in a number of simple though still physically relevant cases.


Introduction and general description of results
Let V := {V(t, x)}_{t≥0, x∈R} denote a centered, generalized Gaussian random field that is white in its "time variable" t and spatially homogeneous in its "space variable" x, with spatial correlation function ρ. Somewhat more precisely, we suppose that the covariance structure of V is described as follows:

Cov[V(t, x), V(s, y)] = δ₀(t − s) ρ(x − y) for all s, t ≥ 0 and x, y ∈ R, (1.1)

where ρ : R → R₊ is assumed to be continuous. The object of interest is the shear-flow SPDE

∂_t θ(t, x, y) = ν Δθ(t, x, y) + V(t, x) ∂_y θ(t, x, y), (1.4)

for t > 0 and x, y ∈ R, subject to a nice initial profile θ₀.
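For intuition (this is our own illustration, not part of the paper's argument), a field with covariance (1.1) can be simulated on a grid. White-in-time means that the time-average of V over a mesh of size dt behaves like an independent Gaussian with variance ρ(0)/dt at each time step. The helper name `sample_V` and the concrete choice ρ(z) = e^{−z²} are assumptions for illustration only.

```python
import numpy as np

def sample_V(rho, xs, n_steps, dt, rng):
    """Sample time-slices of a field that is white in time and has spatial
    correlation rho, discretized on the grid xs with time mesh dt."""
    # Spatial covariance matrix C[i, j] = rho(x_i - x_j); tiny jitter keeps
    # the Cholesky factorization numerically stable.
    C = rho(xs[:, None] - xs[None, :])
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(xs)))
    # Each row is one time slice; the 1/sqrt(dt) scaling encodes delta_0(t-s).
    Z = rng.standard_normal((n_steps, len(xs)))
    return (Z @ L.T) / np.sqrt(dt)
```

Each time slice then has empirical covariance approximately ρ(x_i − x_j)/dt, matching (1.1) after discretizing the Dirac mass δ₀(t − s) as 1/dt on the diagonal.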
The SPDE (1.4) is an example of the Kraichnan model, and describes the turbulent transport of a passive scalar quantity immersed in an incompressible two-dimensional fluid; see Kraichnan [42,43] and §10 below. The stratified velocity field V has a form that was introduced by Majda [47,48] in a slightly different setting.
If V were instead a reasonably nice function, then (1.4) could be, and has been, analyzed by both probabilistic and analytic methods. See, for example, Cranston and Zhao [22], Osada [52], and Zhang [59], and their combined bibliography. In the present, rough/random, setting the situation is a little different. In this case, there are two standard ways to solve the SPDE (1.4). The first approach works as follows: One interprets (1.4) pointwise as an infinite-dimensional Stochastic Differential Equation (SDE),

dθ(t) = ν∆θ(t) dt + ∂_y θ(t) ∘ dW(t), (1.5)

where "∘" denotes the Stratonovich product. The second approach treats (1.4) within an L^p-theory for SPDEs whose drift has the general form a^{ij}(t, x) ∂²u/(∂x_i ∂x_j) + f(t, x, u, Du) and whose noise is driven by i.i.d. standard Brownian motions {w_k}_{k=1}^∞; the basic idea is that such an equation defines a homeomorphism between the solution space (a scale of so-called stochastic Banach spaces) and the space of initial data. The study of the particular equation is thus reduced to the study of the functions in the solution space, which is still quite involved.
The starting point of the present article is to take a different, third, approach to the Kraichnan SPDE (1.4), and to produce a unique solution to (1.4) with the following nearly-minimal requirements in mind: (1) ρ is assumed only to satisfy (1.2) and (1.3); and, more significantly, (2) the product of V(t, x) and ∂_y θ(t, x, y) in (1.4) is interpreted as an Itô/Walsh product, as opposed to the Stratonovich product.
The utility of (2) will become apparent soon, after we describe applications of our theory to the detailed analysis of the solution of (1.4).
As it will turn out, one can prove that (1.4) has a unique strong Itô/Walsh type solution when, and only when,

ν > ½ ρ(0). (1.6)

This is unfortunate because, in terms of the underlying fluid problem, condition (1.6) implies that the fluid is allowed to experience only low levels of turbulence. After all, ν is inversely proportional to the Reynolds number of the fluid, and ½ρ(0) denotes the turbulent diffusivity. One can state this limitation of (1.6) in another, essentially equivalent, manner: If (1.6) holds then we cannot study the Kraichnan model in the fully-turbulent regime ν ≈ 0, in spite of the fact that the fully-turbulent regime is the subject of a vast literature on this subject. For some of the more modern treatments, in the case of the full Kraichnan model, see Celani and Vincenzi [17], Grossmann and Lohse [34], Holzer and Siggia [36], and particularly Warhaft [56], as well as their combined, extensive bibliography. Our aim to reconcile these seemingly contradictory assertions naturally leads us to study the following slightly more general Itô/Walsh type SPDE,

∂_t θ(t, x, y) = ν₁ ∂_x² θ(t, x, y) + ν₂ ∂_y² θ(t, x, y) + ∂_y θ(t, x, y) V(t, x), (1.7)

for t ≥ 0 and x, y ∈ R. Here, ν₁ and ν₂ are positive parameters, and the initial profile is still a nice, possibly random, function θ₀ that is independent of V. Thus, the shear-flow model (1.4) is the same as the SPDE (1.7) in the case that ν₁ = ν₂. Moreover, it turns out that the mentioned analysis of (1.4) generalizes immediately to show that (1.7) has a unique Itô/Walsh solution provided that condition (1.6) is replaced by

ν₂ > ½ ρ(0); (1.8)

there are no restrictions on ν₁ other than strict positivity.
We will use ideas from the Malliavin calculus in order to represent the solution to (1.7), probabilistically, in terms of an exogenous Wiener measure; see Theorems 5.8 and 8.1 below. That probabilistic (Lagrangian) representation has a number of consequences, many of which are the central, most novel, findings of this paper.
As a first application of our probabilistic representation we construct a Stratonovich-type solution to (1.7), and in particular to (1.4). In order to describe this work in more detail, let {φ_ε}_{ε>0} denote a suitably regular approximation to the identity on R₊ × R and define V_ε := φ_ε * V for all ε > 0, where the space-time integral in the latter convolution is understood as a Wiener integral. It is not difficult to see that the two-parameter Gaussian random field V_ε is almost surely C^∞ for every fixed ε > 0. Therefore, the following regularized version of (1.7) is a standard linear PDE, albeit with a random velocity term V_ε:

∂_t θ_ε(t, x, y) = ν₁ ∂_x² θ_ε(t, x, y) + ν₂ ∂_y² θ_ε(t, x, y) + ∂_y θ_ε(t, x, y) V_ε(t, x),

subject to θ_ε(0) = θ₀.
It is an elementary fact that this regularized equation has a weak solution θ_ε for every ε > 0 (in the sense of PDEs) that is C¹ in the y variable, and C^∞ in all variables when θ₀ is smooth. We will use our probabilistic representation to prove that, as ε ↓ 0, the random field θ_ε converges in a strong sense to the solution of (1.7), but with ν₂ replaced by ν̂₂ := ν₂ + ½ρ(0); see Theorem 7.2 for a precise statement. This yields a particular infinite-dimensional version of the Wong-Zakai theorem ([58]; see also McShane [50] and Ikeda, Nakao, and Yamato [38]) of classical Itô calculus. In light of the work of Wong and Zakai, it makes sense to refer to the preceding solution to (1.7) as its "Stratonovich solution," which we will do henceforth. In any case, because ν̂₂ := ν₂ + ½ρ(0) > ½ρ(0) tautologically satisfies (1.8) for every ν₂ > 0, it follows that (1.7) has a Stratonovich solution, in the sense that we just described, for every possible ν₁, ν₂ > 0. Moreover, the Stratonovich solution to (1.7) with parameters ν₁, ν₂ > 0 coincides with the Itô/Walsh solution to (1.7) with parameters ν₁ and ν̂₂ := ν₂ + ½ρ(0). In particular, our probabilistic representation of the solution to the Itô/Walsh formulation of (1.7) immediately yields also a probabilistic representation of the Stratonovich solution. Set ν₁ = ν₂ to see that the Stratonovich solution to the Kraichnan model (1.4) with parameter ν > 0 is, in particular, the Itô/Walsh solution to (1.4) with parameters ν₁ = ν and ν₂ = ν + ½ρ(0), and that this solution exists provided only that ρ is continuous and nondegenerate [see (1.2) and (1.3)]. This is a significant improvement over the current state of existence and uniqueness theory for the Stratonovich solution to (1.4). Let us emphasize further that the said solution also has a probabilistic representation in terms of an exogenous Wiener measure.
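The Itô-to-Stratonovich shift ν₂ ↦ ν₂ + ½ρ(0) has a simple Lagrangian reading: a tracer at a frozen horizontal position x feels the molecular diffusivity 2ν₂ plus an independent Brownian forcing of rate ρ(0) coming from the white-in-time drift V(·, x). The following Monte Carlo sketch is our own illustration (the function name and all parameters are ours, not the paper's):

```python
import numpy as np

def lagrangian_variance(nu2, rho0, T, n_steps, n_paths, rng):
    """Variance at time T of Y solving dY = sqrt(2*nu2) dB + V(t, x) dt with x
    frozen; since V is white in time, the drift contributes an independent
    Brownian motion of rate rho0 = rho(0)."""
    dt = T / n_steps
    incr = (np.sqrt(2.0 * nu2 * dt) * rng.standard_normal((n_paths, n_steps))
            + np.sqrt(rho0 * dt) * rng.standard_normal((n_paths, n_steps)))
    return incr.sum(axis=1).var()
```

The empirical variance should be close to (2ν₂ + ρ(0))·T, i.e., the effective vertical diffusivity is ν₂ + ½ρ(0), which is exactly the Stratonovich-solution shift described above.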
Thus, we may yet again apply that probabilistic representation to study the Stratonovich solution to (1.7) in greater detail.
One of the immediate corollaries of our probabilistic representation is that the Stratonovich solution to (1.7) converges as ν → 0 to a nice random field that is formally the method-of-characteristics solution to the inviscid form of (1.7); see Corollary 7.3. More precisely,

lim_{ν↓0} θ(t, x, y) = θ₀(x, y − ∫₀ᵗ V(s, x) ds), (1.9)

where the convergence holds in ∩_{k=2}^∞ L^k(Ω) and the Gaussian random field ∫₀ᵗ V(s, x) ds will be defined rigorously in §5 below. The preceding result is not consistent with some of the physical predictions of this field (see, for example, Warhaft [56, §5]). Closely related results can be found in other parts of the literature as well; see, for example, Bedrossian and Coti Zelati [4], Bernard, Gawȩdzki, and Kupiainen [5, 6], Eyink and Xin [29, Ref. 21], and Vanden Eijnden [54]. In order to have a solution with properties that are consistent with the various existing physical predictions, one needs to initialize the equation with suitably rough data. By contrast with the 1/√t rate for nice initial profiles, the exact rate of dissipation of θ(t) is shown to be of sharp order 1/t when the initial data is θ₀ = δ₀ ⊗ δ₀; see Theorems 9.1 and 9.2, and Eq. (9.3) for a related observation. For instance, based on the preceding claim, one expects tθ(t, 0, 0) to be a.s. of sharp order one as t → ∞. This turns out to be not quite true on the level of the sample-function trajectories; in fact, it turns out that tθ(t, 0, 0) dissipates in a "multifractal" fashion as t → ∞. Slightly more precisely put, we will use our probabilistic representation of the solution to show that, when θ₀ = δ₀ ⊗ δ₀ and ρ is a constant, the set of times where tθ(t, 0, 0) goes to zero faster than (log t)^{−δ} a.s. has "macroscopic fractal dimension"

D(δ) := max{0, 1 − 2δν/ρ(0)} for all δ > 0. (1.10)

Though the details are likely to change, we expect the preceding macroscopic multifractal formalism to continue to hold in the more physically interesting case that ρ is nonconstant.
Unfortunately, we do not know how to carry out the analysis in the general case.
One can restate (1.10) as follows: When θ₀ = δ₀ ⊗ δ₀, the Stratonovich solution to (1.7) decays like t^{−1}(log t)^{−δ} on nontrivial, macroscopically fractal, time sets of fractal dimension D(δ) ∈ (0, 1) for every value of δ > 0 less than ρ(0)/(2ν). Note that this discussion applies to the Stratonovich solution and, as such, (1.10) and the ensuing remarks apply to all values of ν > 0. Figure 1 shows a large-time simulation of tθ(t, 0, 0), for the Stratonovich solution to (1.4), with ν = 10⁻⁷, up to time t = 10⁵. And Figure 2 shows a large-time simulation of two trajectories of tθ(t, 0, 0), for the Stratonovich solution to (1.4) using the same noise, where the parameter ν of the more fluctuating graph (red) is 14% of the ν of the other (blue). Perhaps one can recognize the large-time multifractal, intermittent structure of t ↦ θ(t, 0, 0) in these simulations?
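Formula (1.10) is easy to tabulate. A minimal helper (ours, for illustration; the names are not the paper's) computing D(δ) together with the critical value δ* = ρ(0)/(2ν) beyond which the dimension degenerates to zero:

```python
def macroscopic_dimension(delta, nu, rho0):
    """D(delta) = max(0, 1 - 2*delta*nu/rho0), per Eq. (1.10)."""
    return max(0.0, 1.0 - 2.0 * delta * nu / rho0)

def critical_delta(nu, rho0):
    """D(delta) > 0 exactly when delta < rho0/(2*nu)."""
    return rho0 / (2.0 * nu)
```

Note that D decreases linearly in δ and in the reciprocal ν/ρ(0); in the fully turbulent regime ν ≈ 0 the critical δ* is enormous, consistent with the strongly intermittent simulations of Figures 1 and 2.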
Within the confines of the present, restricted model, these results rigorously justify, and give mathematical language to, some of the fluid intermittency assertions of the turbulence literature. See Mandelbrot [49, Section 10] for a detailed discussion of this broad topic.
Figure 1: A simulation of the intermittent behavior of t ↦ tθ(t, 0, 0). The "Gamma" on the axes refers to our later notation for the Stratonovich solution θ in the special case that θ₀ = δ₀ ⊗ δ₀. See (8.4).

Figure 2: A simulation of two versions of t ↦ tθ(t, 0, 0), for the same noise, where the parameter ν for one (red) is 14% of the ν for the other (blue). The "Gamma" on the axes refers to our later notation for the Stratonovich solution θ in the special case that θ₀ = δ₀ ⊗ δ₀. See (8.4).

Throughout this paper, we consistently adopt the following notational conventions, often without making explicit mention.
Conventions. If Z ∈ L^k(Ω) is a complex-valued random variable, then ‖Z‖_k := {E(|Z|^k)}^{1/k}. Whenever F is a real-valued function on R₊ × R², we write t ↦ F(t) for the function F(t) := F(t, ·, ·), for all t ≥ 0 and a, b ∈ R. Furthermore, we write F[b] for the function that is defined by F[b](t, a) := F(t, a, b) for every three-variable function F on R₊ × R².

EJP 25 (2020), paper 122.

The Itô/Walsh solution
In this section we study the generalized Kraichnan model (1.7), a special case of which [see (1.4)] is of particular interest. Namely, we consider the SPDE

∂_t θ(t, x, y) = ν₁ ∂_x² θ(t, x, y) + ν₂ ∂_y² θ(t, x, y) + ∂_y θ(t, x, y) V(t, x), (2.1)

subject to θ(0) = θ₀. The product of V and ∂_y θ is interpreted in the Itô sense.

A presentation of the main results
Define

p_t^{(ν)}(x) := (4πνt)^{−1/2} exp(−x²/(4νt)) for all t > 0 and x ∈ R. (2.2)

Since (t, x, y) ↦ p_t^{(ν₁)}(x) p_t^{(ν₂)}(y) defines the fundamental solution to the operator L, we can define the notion of a mild solution to (1.4) as in Walsh [55]. Namely, we have the following.
Definition 2.1. We say that (t, x, y) ↦ θ(t, x, y) is a mild solution to (2.1) when θ is a predictable random field (see [55]) that satisfies the following: (1) For every t > 0 and x ∈ R, the random function y ↦ θ(t, x, y) is a.s. C¹; and (2) for all t > 0 and x, y ∈ R, θ almost surely satisfies the corresponding stochastic-integral (mild) form of (2.1), where the final integral is interpreted as a Walsh integral and is tacitly assumed to exist in the sense of Walsh [55]; see also Dalang [23].
Appendix A below summarizes some of the salient features of Walsh stochastic integrals.
We pause to say two things about Definition 2.1. First, recall that ρ is the Fourier transform of a finite Borel measure ρ̂ on R (Herglotz's theorem); this is because ρ is a correlation function that is bounded and continuous [see (1.2)]. In particular, ρ(a) = ∫_R e^{iaξ} ρ̂(dξ) for all a ∈ R.
One can also consider weak solutions [in the sense of PDEs] instead of mild solutions.
The main result of this section is an existence and uniqueness theorem about Itô/Walsh solutions of the generalized Kraichnan model (1.7). Before we state that result, let us identify four requisite technical criteria that will be assumed to hold throughout this section. The first is condition (1.8) that we recall next.
Assumption II (Integrability in the second variable). There exists η ∈ (0, 1] such that the corresponding tail bound holds. 2. If θ̃ is any other predictable random field that satisfies (2.4), (2.5), and such that θ̃(t, x, y) is also the Fourier transform of a random field Ũ(t, x, ξ) in the third variable, where Ũ also satisfies (2.15), then θ and θ̃ are modifications of one another; that is, P{θ(t, x, y) = θ̃(t, x, y)} = 1 for every t > 0 and x, y ∈ R.
Finally, the following dissipation estimates are valid as t → ∞: (2.7).

Remark 2.4. Suppose that there exists α ∈ (0, 1] such that (2.8) holds for every k ≥ 2, and suppose also that Assumption II holds. By Chebyshev's inequality, sup_{x∈R} ∫_{|y|>q} ‖θ₀(x, y)‖_k dy ≤ A q^{−η} for all q > 0. Optimize the right-hand side over the ancillary parameter q in order to find that Assumption II and a standard continuity-type condition such as (2.8) together imply that Assumption III holds [with γ := αη/(1 + η)].
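The optimization in Remark 2.4 can be made explicit. The following is a sketch of ours, under the assumption (since the display (2.8) is not reproduced above) that (2.8) is a uniform Hölder-in-y condition ‖θ₀(x, y + h) − θ₀(x, y)‖_k ≤ A h^α:

```latex
\sup_{x\in\mathbb{R}}\int_{\mathbb{R}}
   \bigl\| \theta_0(x\,, y+h)-\theta_0(x\,, y)\bigr\|_k \,\mathrm{d}y
\le \underbrace{2q\,A h^{\alpha}}_{\text{from } |y|\le q}
 \;+\; \underbrace{C A\, q^{-\eta}}_{\text{from } |y|>q, \text{ by Assumption II}}
\qquad\text{for all } q>0,\ h\in(0\,,1).
```

Choosing q := h^{−α/(1+η)} balances the two terms, and both become of order h^{αη/(1+η)}; that is, γ = αη/(1 + η), as claimed.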
Let us mention also the following result on the regularity of the solution of (1.7), which has an additional Hölder-continuity requirement [see (2.9)], at the origin, for the correlation function ρ. In fact, a well-known argument from Fourier analysis implies that (2.9) is a uniform Hölder condition, since (2.3) ensures that increments of ρ can be controlled by ρ(0) − ρ(a − b) for all a, b ∈ R.

Theorem 2.5. Suppose there exist ℓ ∈ (0, 2] and C* > 0 such that

ρ(0) − ρ(z) ≤ C* |z|^ℓ for all z ∈ R, (2.9)

and that there exist α, ζ ∈ (0, 1] such that for every k ≥ 2 there exists a real number A_k such that (2.10) holds uniformly for every a, a′, b, b′ ∈ R. Then, with probability one, θ belongs to C_{s,x,y}((0, ∞) × R²), where C_{s,x,y}((0, ∞) × R²) denotes the space of all real-valued functions f such that a ↦ f(a, ·, ·), b ↦ f(·, b, ·), and c ↦ f(·, ·, c) are locally Hölder continuous in s, x, and y, respectively.
If ρ is not a constant function, then (2.3) ensures that ρ̂ ≠ ρ(0) δ₀, and hence the corresponding integral is strictly smaller than ρ(0). This is enough to imply the claim, as desired.
Theorems 2.3 and 2.5, and their proofs, have a number of consequences. We mention some of them next in order to highlight the "physical" nature of the Kraichnan SPDE (1.7). The first consequence is about the dissipative nature of the solution to (1.7). We emphasize that the following uniform a.s. decay rate is consistent with the distributional one from (2.7): for every m > 0, the following is valid with probability one:

sup_{|x|≤m} sup_{y∈R} |θ(t, x, y)| = O(1/√t) as t → ∞.

We will see in Remark 6.1 below that the dissipation rate 1/√t is unimprovable.

Outline of the proof of Theorem 2.3
As was observed by Majda [47, 48], the stratified structure of the velocity field V in (1.7) lends itself well to an application of the Fourier transform in the variable y (see also Bronski and McLaughlin [9, 10, 11]). With this in mind, let us define U to be the Fourier transform of θ in its y variable; that is, somewhat informally,

U(t, x, ξ) := ∫_R e^{iξy} θ(t, x, y) dy. (2.11)

In order to prove Theorem 2.3, we first prove that U exists, and has a sufficiently good version, thanks to Assumptions I through IV. And then we invert the Fourier transform (2.11), thereby also establishing the existence and uniqueness of θ as a by-product.
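The sign convention in (2.11) matters for the transformed equation: with U = ∫ e^{iξy} θ dy, integration by parts sends ∂_y to multiplication by −iξ. A quick numerical check of ours (the Gaussian profile is an arbitrary stand-in, not the paper's data):

```python
import numpy as np

# Integration-by-parts identity behind (2.11): for rapidly decaying theta,
#   ∫ e^{i·xi·y} theta'(y) dy = -i·xi · ∫ e^{i·xi·y} theta(y) dy.
y = np.linspace(-20.0, 20.0, 40001)
dy = y[1] - y[0]
theta = np.exp(-y**2 / 2.0)          # stand-in profile (assumption)
dtheta = np.gradient(theta, y)       # numerical derivative of theta
xi = 1.3
lhs = np.sum(np.exp(1j * xi * y) * dtheta) * dy
rhs = -1j * xi * np.sum(np.exp(1j * xi * y) * theta) * dy
```

The two quadratures agree to within the discretization error, confirming that ∂_y θ transforms to −iξ U under the convention (2.11).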
Unfortunately, (2.11) is an informal definition: It will turn out that θ(t, x, ·) is in general not integrable with probability one for all t > 0 and x ∈ R. Still, one can think of U as a Fourier transform provided only that θ(t, x, ·) ∈ L¹_loc(R) a.s. The ensuing a priori estimates will show that this local integrability property holds under Assumptions I-IV.
By analogy with classical linear PDEs, if the random field U were at all well defined, then it would have to solve the complex-valued SPDE,

∂_t U(t, x, ξ) = ν₁ ∂_x² U(t, x, ξ) − ν₂ ξ² U(t, x, ξ) − iξ U(t, x, ξ) V(t, x), (2.12)

subject to U(0) = θ̂₀, the Fourier transform of θ₀ in its second variable. One can interpret (2.12) easily as an infinite family of complex-valued, but otherwise standard, Itô/Walsh SPDEs, one for every ξ ∈ R. As such, it is not difficult to solve it in order to obtain the random field U. We plan to "invert" the Fourier transform operation, compare with (2.11), in order to construct θ. This endeavor will require the assumptions of Theorem 2.3. If and when this is possible, it is not hard to prove that this procedure will yield the desired solution to (1.7), as well.
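For fixed ξ, the transformed equation is a one-dimensional SPDE in x and can be stepped with a naive explicit Euler-Maruyama scheme. The following sketch is ours (periodic grid, Itô interpretation of the noise term, and all names are assumptions, not the paper's):

```python
import numpy as np

def step_U(U, dW, xi, nu1, nu2, dt, dx):
    """One explicit Euler-Maruyama step for a discretization of
        dU = (nu1 * U_xx - nu2 * xi^2 * U) dt - i*xi * U dW(t, x),
    with dW one Ito noise increment per grid point (periodic boundary)."""
    lap = (np.roll(U, 1) - 2.0 * U + np.roll(U, -1)) / dx**2
    return U + (nu1 * lap - nu2 * xi**2 * U) * dt - 1j * xi * U * dW
```

With the noise switched off (dW ≡ 0), the scheme reduces to the damped heat equation, so |U| must decay; this is the deterministic sanity check used below. For stability one needs ν₁ dt / dx² ≤ ½.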
As an aside, let us mention that one could think of (2.12) as a two-dimensional, real-valued SPDE as follows: Define X := Re U and Y := Im U in order to see that the pair (X, Y) solves a coupled system of real-valued SPDEs, (2.13), subject to the obvious initial condition. In the case that ν₂ is replaced by zero, the SPDE (2.13) is related loosely to the mutually-catalytic super-Brownian motion system of Döring and Mytnik [27], though there are also obvious differences between (2.13) and such super-Brownian motions.
We now return to the construction of the random field U. In accord with the theory of Walsh [55], we seek to solve (2.12) by rewriting it as an equivalent Walsh-type stochastic-integral equation, where p^{(ν)} denotes the fundamental solution of the heat operator ∂_t − ν∂_x²; see (2.2). (Recall that the weak-L² Fourier transform of f ∈ L¹_loc(R) can be understood formally via the Parseval relation.) The resulting equation is a complex version of the sort of SPDE that is treated in Walsh [55]. Therefore, it is not hard to use the technology of Walsh [55] to prove that (2.12), equivalently (2.13), has a unique strong solution, among other things. The following, perhaps more interesting, a priori result estimates carefully the second-moment Lyapunov exponent λ₂ of that solution by showing that

λ₂ ≤ −(2ν₂ − ρ(0)) ξ² for all x, ξ ∈ R. (2.14)

Though we have not attempted to derive a matching lower bound, we believe that the preceding inequality is an identity. In any case, we can see from (2.14) and Assumption I that λ₂ is strictly negative. A quantitative form of (2.14) will allow us to "invert" (2.11) under Assumption I, and hence establish Theorem 2.3. The derivation of (2.14) requires some care, in part because the solution to (2.12) is complex valued. So we shall proceed carefully, paying close attention to the numerical constants that arise along the way.
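As a consistency check on (2.14) (a sketch of ours, not the paper's argument): if ρ is constant, ρ ≡ ρ(0), then V does not depend on x, the process W_t := ∫₀ᵗ V(s) ds is a Brownian motion with ⟨W⟩_t = ρ(0) t, and for each fixed ξ the Itô equation (2.12) loses its x-dependence and can be solved by the stochastic exponential:

```latex
\mathrm{d}U = -\nu_2\xi^2\,U\,\mathrm{d}t - i\xi\,U\,\mathrm{d}W_t,
\qquad \langle W\rangle_t = \rho(0)\,t,
```

```latex
U(t) = U_0\,\exp\!\left(-\nu_2\xi^2 t - i\xi W_t + \tfrac12\xi^2\rho(0)\,t\right),
\qquad
\mathrm{E}\!\left[|U(t)|^2\right] = \mathrm{E}\!\left[|U_0|^2\right] e^{-(2\nu_2-\rho(0))\,\xi^2 t}.
```

Thus λ₂ = −(2ν₂ − ρ(0))ξ² in this special case, which is strictly negative for ξ ≠ 0 precisely when ν₂ > ½ρ(0); this is condition (1.8), and it dovetails with the Itô-to-Stratonovich shift ν₂ ↦ ν₂ + ½ρ(0) of §1.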
The above bound for λ₂ is based on the following, more useful, quantitative result.

Theorem 2.11. Suppose U(0) : (x, ξ) ↦ U₀(x, ξ) is a jointly measurable random field that is independent of V and satisfies sup_{x∈R} E(|U₀(x, ξ)|^k) < ∞ for every ξ ∈ R and k ≥ 2. Choose and fix some ν₂ > 0. Then, for every ξ ∈ R, (2.12) has a mild solution U[ξ] that satisfies the moment bound (2.15) for every ε ∈ (0, 1), t > 0, and x, ξ ∈ R. Moreover, any other such mild solution is a modification of U[ξ]. Finally, a corresponding bound is valid for all ε ∈ (0, 1), t ≥ 0, and x, ξ ∈ R.

Once we have a good version of U that has a well-controlled second-moment Lyapunov exponent, we can readily "invert" the Fourier transform in (2.11) in order to obtain the solution θ to the Kraichnan model (1.7). In the remainder of this section we carry out the above program. The astute reader might wonder why we have included bounds for all high-order Lyapunov exponents when we claim that the important one is the second-moment Lyapunov exponent λ₂. The reason will become apparent when we use the high-order Lyapunov exponents to obtain some of the required a priori regularity; see the discussion that follows Lemma 4.1 below, for example.

Stochastic convolutions
Owing to Definition 2.1, linear SPDEs are related to "stochastic convolutions" in a manner that is analogous to the relationship between linear PDEs and space-time convolutions. In this subsection we develop some norm inequalities for stochastic convolutions. We will use these inequalities in the next subsection (see §3) in order to verify Theorem 2.11.
Let us start with a more-or-less standard definition.
Definition 2.12. Suppose Φ = {Φ(t, x)}_{t≥0, x∈R} is a space-time random field. We say that Φ is Walsh integrable if Φ is predictable in the sense of Walsh [55] and satisfies the integrability condition (2.16) for every t > 0 and x ∈ R. If Φ is complex valued, then we say that Φ is Walsh integrable if the real and imaginary parts of Φ are both Walsh integrable.
Next is a simple extension of a standard definition to the present, complex-valued setting.
On a few occasions we will refer to the following simple fact, which is isolated as a little lemma for ease of reference.

Lemma 2.14. Let Φ := {Φ(t, x)}_{t≥0, x∈R} be a complex-valued predictable random field that satisfies (2.16) for every t > 0 and x ∈ R. Then, Φ is Walsh integrable.
Proof. Since |zw|² ≥ (Re z · Re w)² + (Im z · Im w)² for every two complex numbers z and w, we take square roots in order to see that |Re z · Re w| + |Im z · Im w| ≤ 2|zw| for all z, w ∈ C.
Thus, we may use this with z = Φ(s, x′) and w = Φ(s, x″), take expectations, and appeal to (2.3) in order to find that the quantity in (2.16) is finite for all t > 0 and x ∈ R.
Thanks to Lemma 2.14, in order to verify that a random field Φ := {Φ(t, x)}_{t≥0, x∈R} is Walsh integrable, it suffices to check that Φ is predictable and satisfies the integrability condition (2.16). We first verify the latter integrability condition by developing a "stochastic Young's inequality" as in Foondun and Khoshnevisan [31] and Conus and Khoshnevisan [20]. With this aim in mind, let us introduce some terminology.

Definition 2.15. For every complex-valued space-time random field Φ = {Φ(t, x)}_{t≥0, x∈R} and all real numbers k ≥ 2 and β > 0, define

N_{k,β}(Φ) := sup_{t≥0} sup_{x∈R} e^{−βt} ‖Φ(t, x)‖_k. (2.17)

Clearly, every N_{k,β} is a norm on the vector space of all space-time random fields that have finite N_{k,β}-norm, provided that we identify two random fields when they are modifications of one another (as one always does, anyway). The following gives a sufficient condition, in terms of the norms in (2.17), for the integrability condition (2.16) to hold.

Lemma 2.16. If Φ = {Φ(t, x)}_{t≥0, x∈R} is a complex-valued, space-time random field that satisfies N_{2,β}(Φ) < ∞ for some β > 0, then Φ satisfies the integrability condition (2.16).
Proof. By the Cauchy-Schwarz inequality, E|Φ(s, x′)Φ(s, x″)| ≤ [N_{2,β}(Φ)]² e^{2βs}, uniformly for all s > 0 and x′, x″ ∈ R. Because the latter bound is integrable over s ∈ (0, t) against the remaining bounded factors, the quantity in (2.16) is finite.
We now state and prove the stochastic Young's inequality that was alluded to earlier.
Lemma 2.17. Let Φ = {Φ(t, x)}_{t≥0, x∈R} be a complex-valued, predictable random field, and let k ≥ 2 and β > 0. Then, for all ν > 0, the stochastic convolution (p^{(ν)} ⊛ Φ)(t, x) is a well-defined, complex-valued Walsh integral for every t > 0 and x ∈ R, and the stochastic Young bound (2.19) holds with the constant c_k of (2.20).

Definition 2.18. In order to make future notation consistent, from now on we tacitly assume that (p^{(ν)} ⊛ Φ)(0, x) = 0 for all x ∈ R and all predictable, 2-parameter random fields Φ.
Before we prove Lemma 2.17, let us make two more observations. Remark 2.19. The preceding lemma says that (p^{(ν)} ⊛ Re Φ)(t, x) and (p^{(ν)} ⊛ Im Φ)(t, x) are well-defined Walsh integrals, and tacitly defines (p^{(ν)} ⊛ Φ)(t, x) := (p^{(ν)} ⊛ Re Φ)(t, x) + i(p^{(ν)} ⊛ Im Φ)(t, x) for every t > 0 and x ∈ R.
Of course, this inequality has content when, and only when, N k,β (Φ) is finite.
Proof of Lemma 2.17. Choose and fix t > 0 and x ∈ R, and define, for every τ ∈ (0, t], the local martingales Re M and Im M obtained by integrating Re Φ and Im Φ against the noise; it follows from Walsh's theory that the preceding local martingales have respective quadratic variations expressed in terms of ρ. We may now borrow from the proof of Lemma 2.16 as follows: Apply Fubini's theorem together with the fact that ∫ p_{t−s}^{(ν)}(w) dw = 1 in order to bound the second moment of Re M; the same inequality holds when we replace Re M by Im M. Therefore, we multiply and divide, inside the integral, by exp(−2βs) and maximize the resulting integrand. Take square roots, divide both sides by exp(βt), and maximize both sides over t and x in order to deduce the announced bound, see (2.19), for N_{2,β}(p^{(ν)} ⊛ Φ) in terms of c₂. For the L^k(Ω) norm inequalities we appeal to the Carlen-Kree [15] bound on Davis' optimal constant [26] in the Burkholder-Davis-Gundy inequality [12, 13, 14] in order to obtain (2.21).
By working directly with the formula for Re M_t, and thanks to the Minkowski inequality, we can deduce the corresponding kth-moment bound. In the last line we used the Cauchy-Schwarz inequality in the following form: ‖XY‖_{k/2} ≤ ‖X‖_k ‖Y‖_k for every X, Y ∈ L^k(Ω). The same inequality holds, for the same sort of reason, if we replace the real part of M by its imaginary part. Therefore, the claim follows from (2.21). [It might help to recall that c_k := 8k because k > 2.] Take the kth root of both sides, divide both sides by exp(βt), and then optimize over t and x to finish.
In some of the ensuing applications, for example see Lemma 3.9, the factor β^{−1/2} on the right-hand side of (2.19) will be too crude; see also Remark 2.20. The following finite time-horizon variation of Lemma 2.17 will be used in such instances.
The same quantity bounds the kth moment of (p^{(ν)} ⊛ Im Φ)(t, x). The lemma follows readily from these observations.

Proof of Theorem 2.11
In order to prove Theorem 2.11, it is convenient to first define a new random field u via

u[ξ](t, x) := e^{ν₂ξ²t} U[ξ](t, x), (3.1)

and note that if U[ξ] is a mild solution to (2.12) for every ξ ∈ R, then u[ξ] would have to be a mild solution to a corresponding stochastic PDE (3.2) for every ξ ∈ R. That is, for every t > 0 and x, ξ ∈ R, u[ξ] solves the stochastic-integral equation associated with (3.2), where "⊛" denotes the stochastic convolution operator; see (2.18). This is because p^{(ν₁)} is the fundamental solution to the heat equation ∂_t w = ν₁ ∂_x² w, and hence the corresponding integral equation is the mild formulation of (2.12). One can also understand (3.2) as a system of two coupled, real-valued SPDEs. Indeed, let X := Re u and Y := Im u in order to see that, for every ξ ∈ R, the pair (X[ξ], Y[ξ]) solves such a coupled system. We plan to prove the following equivalent formulation of Theorem 2.11.

Theorem 3.1. Suppose u₀ : Ω × R² → C is a measurable random field that is independent of V and satisfies sup_{x∈R} E(|u₀(x, ξ)|^k) < ∞ for every k ≥ 2 and ξ ∈ R. Choose and fix some ν₁ > 0. Then, for every ξ ∈ R, (3.2) has a mild solution u[ξ] that satisfies the moment bound (3.4) for every k ≥ 2, ε ∈ (0, 1), t > 0, and x, ξ ∈ R, where c_k was defined in (2.20). Furthermore, suppose v[ξ] is a mild solution of (3.2) for every ξ ∈ R that satisfies the same bound for k = 2 and some ε ∈ (0, 1). Then, v is a modification of u.
The above is a direct consequence of a series of quantitative bounds, which we develop next. With Theorem 3.1 in mind, let us begin with a standard Picard iteration argument. We first define u₀[ξ](t, x) := (p_t^{(ν₁)} * u₀(·, ξ))(x) for all t ≥ 0 and x, ξ ∈ R.
Then, we define iteratively, for all n ≥ 0, the Picard iterate u_{n+1}[ξ] := u₀[ξ] + p^{(ν₁)} ⊛ (−iξ u_n[ξ] V), where "⊛" denotes stochastic convolution; see (2.18). The preceding is well defined provided that the final Walsh integral is well defined; see Definition 2.12. That is, provided that u_n[ξ] is a predictable random field for every ξ ∈ R and satisfies condition (2.16) for every t > 0 and x, ξ ∈ R. The following lemma will ensure that this is the case.
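The Picard scheme can be mimicked in a zero-dimensional toy model of ours: drop the spatial convolution and iterate the integral form of dU = −aU dt − iξU dW on one fixed discretized Brownian path (the paper's iterates instead live in the N_{k,β} norms). On a finite time grid the iteration map is strictly lower triangular, hence nilpotent, so the iterates converge exactly after finitely many steps:

```python
import numpy as np

def picard_sde(a, xi, T, n_steps, n_iter, rng):
    """Picard iteration for the toy complex Ito SDE dU = -a U dt - i*xi U dW,
    U(0) = 1, using Euler-type integrals on one fixed Brownian path."""
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal(n_steps)
    U = np.ones(n_steps + 1, dtype=complex)          # zeroth iterate
    diffs = []
    for _ in range(n_iter):
        # U_{n+1}(t_j) = 1 + sum_{m<j} (-a U_n(t_m) dt - i*xi U_n(t_m) dW_m)
        incr = -a * U[:-1] * dt - 1j * xi * U[:-1] * dW
        U_new = np.concatenate(([1.0], 1.0 + np.cumsum(incr)))
        diffs.append(np.abs(U_new - U).max())
        U = U_new
    return U, diffs
```

Because U_{n+1}(t_j) depends only on U_n(t_m) for m < j, the first n + 1 grid values are fixed after n iterations; running a few more iterations than grid points drives the successive differences to zero.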

Lemma 3.2.
Assume the hypotheses of Theorem 3.1 are met. Suppose also that there exists an integer n ≥ 0 such that u_n[ξ] is a predictable random field. Then, for every ξ ∈ R, u_{n+1}[ξ] is a predictable, two-parameter random field. Moreover, the bounds (3.5) and (3.6) hold for all ξ ∈ R, ε ∈ (0, 1), and k ≥ 2, where c_k was defined in (2.20).
Proof. The predictability of u_{n+1}[ξ] follows from the predictability of Re u_{n+1}[ξ] and Im u_{n+1}[ξ], which in turn follows from the following standard fact from stochastic analysis: the stochastic convolution is predictable whenever u_n(s, z, ξ) is [18, 55]. We verify (3.6), which is the main message. The first term on the right-hand side of (3.5) is bounded uniformly for all t, β > 0 and x, ξ ∈ R. We estimate the second term using Lemma 2.17. Therefore, (3.5) yields a bound for every n ≥ 0, ξ ∈ R, and β > 0. We now make the particular choice β = β*, where β* is given in (3.7). In this way we obtain a recursive inequality, valid for all k ≥ 2 and n ≥ 0, which we iterate in order to deduce (3.8). This is another way to state the lemma.
Next we present two a priori regularity results; see Lemmas 3.3 and 3.4. Both lemmas will be improved upon later on. But, logically speaking, we need the a priori form of these lemmas first in order to establish the existence of a solution before we can use that solution to establish our later, improved regularity results. This is unfortunate, as it makes the proof of Theorem 2.3 somewhat lengthy. But we do not know of another rational argument that bypasses this lengthy procedure. Thus, we begin with an a priori regularity result in the space variable.

Lemma 3.3. Assume the hypotheses of Theorem 3.1 are met, and suppose that there exists an integer n ≥ 0 such that u_n[ξ] is a predictable random field for every ξ ∈ R. Then, the bound (3.9) holds for every real number k ≥ 2, t > 0 and x, z, ξ ∈ R.

Proof. Choose and fix an integer n ≥ 0 and real numbers k ≥ 2, t > 0, ξ ∈ R, and x, z ∈ R that satisfy |x − z| ≤ 1. Thanks to (3.5) we can write the increment as a sum of two terms, T₁ and T₂. According to Lemma 6.4 of Joseph et al [21] (for the explicit constant mentioned below see the bound for µ₁(|x|) in the proof of Lemma 6.4 in [21], all the time remembering that their constant κ/2 is ν₁ in the present setting), T₁ satisfies (3.11). Also, a trivial bound yields an estimate for T₂.
When ξ = 0, we have T₂ = 0 and (3.11) completes the proof in that case. Now consider the case that ξ ≠ 0.
As before, we consider Re u_n and Im u_n separately, using the Burkholder-Davis-Gundy inequality. Let us fix n ≥ 0, t > 0, and x, z, ξ ∈ R, and write R(s, x′) := Re u_n(s, x′, ξ) for all s ∈ (0, t) and x′ ∈ R. We respectively define T₂₁ and T₂₂ to be the same expressions as T₂, but with u_n[ξ] replaced by Re u_n[ξ] and Im u_n[ξ]. Then, we use similar ideas as those that were used in the proof of Lemma 2.17. In particular, T₂ ≤ T₂₁ + T₂₂, thanks to the triangle inequality. Two back-to-back applications of Minkowski's inequality now imply the desired estimate.
In particular, see (3.10). The same bound holds if we replace R(s , x) := Re u_n(s , x , ξ) by Im u_n(s , x , ξ).
This and (3.12) together yield (3.13). Therefore, we can deduce the lemma from (3.11) and (3.13).
The following is an a priori regularity result in the time variable, and matches the spatial regularity result of Lemma 3.3.

Lemma 3.4.
Assume the hypotheses of Theorem 3.1 are met. Suppose also that there exists an integer n ≥ 0 such that u_n[ξ] is a predictable random field for every ξ ∈ R, and that (3.6) holds for all k ≥ 2, except with u_{n+1} there replaced by u_n here. Finally, suppose that there exists α > 0 such that for every k ≥ 2 there exists a real number M_k such that for all a, b ∈ R. Then, for every real number k ≥ 2, t, h > 0, and x, ξ ∈ R, Proof. In accord with (3.5), we may write We will estimate these in turn.
In order to estimate T₁, first let B := {B_t}_{t≥0} denote a Brownian motion, run at speed 2ν₁, so that p^{ν₁}_t is the probability density function of B_t for every t > 0. By the conditional form of Jensen's inequality, Because (3.14) ensures that the tower property of conditional expectations yields If ξ = 0, then T₂ = T₃ = 0 and the lemma is proved in that case. From now on we consider the case that ξ ≠ 0, and proceed to estimate T₂ and T₃ in this order. We estimate T₂ by following a similar reasoning as was used in the proof of Lemma 3.2. Namely, we first appeal to the Burkholder-Davis-Gundy inequality (as was done surrounding (2.21)) and (2.3) to see that The factor (16k)^{k/2} is put in place of the usual (4k)^{k/2} to account for two appeals to the BDG inequality (one for the real part and one for the imaginary part), and also an elementary inequality bounding |Φ| in terms of |Re Φ| and |Im Φ|. In any case, the preceding yields Therefore, the definition (2.17) of the norm N_{k,β_*} and the definition (3.7) of the constant β_* together yield Consequently, we may appeal to (3.6), with u_{n+1} there replaced by u_n here, in order to deduce the following, where we view t, h, and x as fixed numbers to simplify the notation in the ensuing calculation: Then, by arguing as before we find that The final object [under {···}^{k/2}] involves a real-variable integral that is easy to estimate directly, as follows: Apply the triangle inequality, |w²/(4ν₁r²) − 1/(2r)| ≤ w²/(4ν₁r²) + (2r)⁻¹. We integrate the dw-integral first in order to see that, as can be seen by examining the integral according to whether or not t < h/2. Thus, we obtain the following: we may combine (3.15), (3.16), and (3.17), and set ε := 1/2 [to be concrete] to finish.
With the preceding technical results in place, we can now prove Theorem 3.1 easily by following a standard argument which we sketch briefly for the convenience of the reader.
Proof of Theorem 3.1 (a sketch). Now that we have established the joint measurability of u_n for all n, we follow and adapt the proof of Lemma 3.2 in order to prove that for every β > 0. The particular choice β = β_*, where β_* is given in (3.7), then implies that {u_n}_{n=1}^∞ is a Cauchy sequence in the norm N_{k,β_*}. Theorem 3.1 follows with u = lim_{n→∞} u_n, where the limit holds in the norm N_{k,β_*}.
The preceding lemmas will play a role in establishing the following regularity result. Theorem 3.5. Assume the hypotheses of Theorem 3.1 are met. Assume also that there exist α, γ ∈ (0 , 1] such that for every k ≥ 2 there exists a real number A_k such that (3.18) holds uniformly for all x, x', ξ, ξ' ∈ R. Then, the solution (t , x , ξ) → u(t , x , ξ) to (3.2) has a version that is Hölder continuous on R₊ × R².
Remark 3.6. The proof of Theorem 3.5 shows that, in fact, the 3-parameter stochastic process (t , x , ξ) → u(t , x , ξ) has a version that is Hölder continuous, with respective Hölder exponents in the variables t and x, and exponent γ − ε in the variable ξ, for every fixed ε > 0, uniformly on compact subsets of R₊ × R². Theorem 3.5 will follow immediately from Lemmas 3.7, 3.8, and 3.10 below, after an appeal to a suitable form of the Kolmogorov continuity theorem (see [24], for an example). Therefore, we will not write a proof for Theorem 3.5. Instead we merely state and prove the following three auxiliary lemmas.
Lemma 3.7. Assume the hypotheses of Theorem 3.5 are met. Then, for every real number k ≥ 2, t > 0, and x, z, ξ ∈ R,
Proof. We can follow the proof of Lemma 3.3, and adapt the argument, in order to see that for all n ∈ Z₊, by (3.18) and Minkowski's inequality, where T₂ was defined in (3.9). Estimate T₂ by (3.13), then let n → ∞ and use the fact (see the proof of Theorem 3.1) that u_n converges to u.
Similarly, one can let n → ∞ in Lemma 3.4, and appeal to Fatou's lemma, in order to deduce the following inequality. It might help to also compare (3.14) and (3.18).
Lemma 3.8. Assume the hypotheses of Theorem 3.5 are met. Then, for every real number k ≥ 2, t, h > 0, and x, ξ ∈ R,
The following addresses the same sort of estimate that Lemma 3.8 does, but now in the case that t = 0. The reasoning is slightly different, and so we include a proof.
Lemma 3.9. Assume the hypotheses of Theorem 3.5 are met. Then, for all ε ∈ (0 , 1), for the same constants c_k and A_k that appeared respectively in (2.20) and in (3.18).
Proof. For every k ≥ 2, h > 0, and x, ξ ∈ R, we may write Moreover, we may appeal first to Lemma 2.21 and then to Theorem 3.1 in order to see that Combine the estimates, using the preceding fact, to finish the proof. Finally, in the next lemma, we establish a regularity result in the auxiliary variable ξ.
We emphasize that the following result holds under exactly the same conditions as does Theorem 3.1, together with (3.18), which will turn out to be an innocuous condition on u_0.
Lemma 3.10. Assume the hypotheses of Theorem 3.1 and condition (3.18) are met. Then, for all real numbers k ≥ 2 and R, L > 0,
Remark 3.11. The proof in fact shows that
Proof. We write where: We now estimate T₁, T₂, and T₃ in this order. Clearly, Thanks to Theorem 3.1 and Lemma 2.17, N_{k,β}(u[ξ]) < ∞ for some β. Therefore, we may first apply Lemma 2.21 and then Theorem 3.1, in this order, in order to see that for every ε ∈ (0 , 1), For the most interesting term, T₃, we hold t, ξ, and ξ' fixed and define in order to see that, thanks to a by-now familiar appeal to a suitable form of the Burkholder-Davis-Gundy inequality. And another familiar calculation now reveals that the latter expectation is bounded from above by We now plug this inequality into (3.21) and combine with (3.19) and (3.20) in order to conclude that for all t > 0 and ε ∈ (0 , 1). Set ε := 1/2, to be concrete, and appeal to Gronwall's lemma and the fact that 0 < γ ≤ 1 in order to deduce the following: uniformly for all t ≥ 0 and ξ, ξ' ∈ R that satisfy |ξ − ξ'| ≤ 1 and |ξ| ∨ |ξ'| ≤ L. (3.22) Thus, we can exchange the respective roles of ξ and ξ' in order to arrive at the following moment bound: uniformly for all t ≥ 0 and ξ, ξ' ∈ R that satisfy (3.22). This completes the proof.

Proof of Theorem 2.3
We have laid the groundwork for the proof of Theorem 2.3 and begin the task of proving that result now.
Thanks to Assumption I of the Introduction, there exists ε ∈ (0 , 1) such that (4.1) holds. Indeed, the integral is well defined because θ₀ is continuous (Assumption IV) and because sup_{x,ξ∈R} ‖u_0(x , ξ)‖_k < ∞ by Assumption II. By Assumption III, for every x, x', ξ ∈ R and k ≥ 2, and for every η ∈ (0 , 1], x, ξ, ξ' ∈ R, and k ≥ 2, which goes to zero as ξ' → ξ by Assumption II. These computations, in conjunction, verify the hypotheses of Theorems 3.1 and 3.5. Let us record these conclusions next. 1. sup_{x,ξ∈R} ‖u_0(x , ξ)‖_k < ∞ for all k ≥ 2; 2. u_0 satisfies (3.18) with γ = η.
Thus, it follows from Theorems 3.1 and 3.5 that the SPDE (3.2) has a unique random field solution u[ξ] for every ξ ∈ R, starting from initial data u 0 given by (4.2); and that u is Hölder continuous, as guaranteed by Theorem 3.5. Finally, u is subject to the moment growth bound of Theorem 3.1.
for all x, ξ ∈ R, Theorems 2.11 and 3.5 ensure that U is a continuous, complex-valued, random field that satisfies the moment growth conditions of Theorem 2.11 and solves uniquely the SPDE (2.12) (thanks to (3.4)). Now, motivated by the informal definition (2.11) of U, we may define a 3-parameter, complex-valued, random field θ := {θ(t , x , y)}_{t≥0, x,y∈R} via θ(0 , x , y) := θ₀(x , y) and In due time, we will prove that the random field θ is the unique mild solution to the SPDE (1.7) and derive the asserted properties of θ that were outlined in Theorem 2.3.
First of all, let us remark that θ is a well-defined, predictable random field. This is because U is continuous (see Lemmas 3.10 and 4.1), and because Theorem 2.11 ensures that, for the same ε ∈ (0 , 1) that appeared earlier in (4.1), and for every t > 0 and x, y ∈ R, see also (4.2). It also follows that the first assertion of the dissipation relation (2.7) holds.
The estimate (4.5) also has the consequence that y → θ(t , x , y) is locally integrable a.s. for every t > 0 and x ∈ R. Since θ is the inverse Fourier transform of U, it then follows from the Parseval identity that U must be the Fourier transform of θ in the sense of distributions; that is, for every test function ϕ, where F⁻¹ denotes the inverse Fourier transform. The above justifies some of the assertions surrounding (2.11).
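The Parseval/Plancherel step used here can be checked on a discrete grid. The following sketch (with an arbitrary smooth test profile, not the paper's θ) verifies the discrete Parseval identity for numpy's unnormalized DFT convention, Σ|f|² = N⁻¹Σ|f̂|², together with the corresponding pairing identity against a test function.

```python
import numpy as np

# Discrete illustration of the Parseval identity: for the unnormalized DFT
# (numpy convention), sum |f|^2 = (1/N) sum |fhat|^2.  The profiles below are
# arbitrary smooth test functions, not the paper's theta.
N = 1024
y = np.linspace(-10.0, 10.0, N, endpoint=False)
f = np.exp(-y**2) * np.cos(3.0 * y)      # smooth, rapidly decaying profile
fhat = np.fft.fft(f)

lhs = np.sum(np.abs(f) ** 2)
rhs = np.sum(np.abs(fhat) ** 2) / N
assert abs(lhs - rhs) < 1e-8

# Pairing against a test function: <f, g> = (1/N) <fhat, ghat> (conjugate pairing).
g = np.exp(-(y - 1.0) ** 2)
ghat = np.fft.fft(g)
pair_phys = np.sum(f * np.conj(g))
pair_freq = np.sum(fhat * np.conj(ghat)) / N
assert abs(pair_phys - pair_freq) < 1e-8
```

The same identity, in its continuum form, is what lets one pass between θ and U in the argument above.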
Also, the second assertion of (2.7) follows from the above reasoning (set n = 1). It remains to prove that θ is both a mild and a weak solution to (1.7), and that it is unique in the stated sense.
Since u is the mild solution to (3.2), a standard application of a stochastic Fubini theorem (see Theorem A.1) implies that u is also a weak solution to (3.2); see Walsh [55].
We will use this fact next. Define where the final stochastic integral is understood as a Walsh integral with respect to the Gaussian noise (t , x , y , ξ) → V(t , x). Therefore, it follows from (4.4), Fubini's theorem, and a stochastic Fubini theorem (Theorem A.1) that, with probability one, This verifies (2.5). Next we prove that θ(t , x , y) is a mild solution to equation (1.7). Since u(t , x , ξ) is a mild solution to the C-valued SPDE (3.2), almost surely. We first multiply both sides by the appropriate factor involving 2π, and then appeal to both the Fubini and the stochastic Fubini theorems (see Theorem A.1 for the latter), in order to find that We skip the routine measure-theoretic details. By (4.2) and the inversion theorem, The inversion theorem is applicable, owing to (4.3). Therefore, we may evaluate I₁ as follows: In order to evaluate I₂ we apply the stochastic Fubini theorem (Theorem A.1) to find by Plancherel's theorem. Therefore, another appeal to the stochastic Fubini theorem yields We combine (4.7) and (4.8) and apply them in (4.6) to see that θ is indeed a mild solution to (1.7).
For the uniqueness of the mild solution θ, we have shown that (t , x) → exp(ν₂ξ²t)U(t , x , ξ) is the unique mild solution to equation (3.2) for every ξ ∈ R. Another way to state this is the following: if θ and θ' are mild solutions to the fluid problem (1.7), and they are the inverse Fourier transforms [in the ξ variable] of U and U' respectively, with U and U' having common initial profile θ̂₀, then U = U', and hence θ and θ' are modifications of one another, thanks to the uniqueness theorem of Fourier analysis. This proves uniqueness. Finally, we verify (2.6).
We have already shown that the Fourier transform of θ equals U in the sense of distributions. Therefore, (3.1) and the Parseval identity together imply that for every t > 0 and non-random functions ψ₁, ψ₂ ∈ S(R), A similar argument implies that
According to Lemma 4.1, sup_{a,b∈R} ‖u_0(a , b)‖₂ is finite. Therefore, we may set k = 2 in Lemma 3.9, and recall that c₂ = 1 [Lemma 2.17], in order to see that the constant C of Lemma 3.9 can be bounded above as follows: As long as t ∈ (0 , 1], where K₁ and K₂ do not depend on (t , ξ), and ε ∈ (0 , 1) is the same constant that was held fixed in (4.1). Because of (4.1), the condition "t ∈ (0 , 1]" implies that Consequently, the dominated convergence theorem implies (4.10), whence a second application of the dominated convergence theorem yields (4.11). We obtain (2.6) by combining (4.9), (4.10), and (4.11). This completes the last part of the proof of Theorem 2.3.

Proof of Theorem 2.5
In this section we verify the regularity Theorem 2.5. We also use this opportunity to study various "curvilinear stochastic integrals" along the field V . In fact, we start with the latter topic.

Smoothing the noise
One of the objects that arises naturally is the random field (t , x) → ∫₀ᵗ V(s , x) ds.
There is a well-known method to construct this and related random fields from the generalized Gaussian field V; see for example Kunita [45, Section 6.2] for an indirect construction and Hu and Nualart [37] for a direct construction. We will need to use aspects of the latter construction. With that aim in mind, define a smoothed approximation to the random distribution V(t , x) as follows: For all ε, δ > 0, t ≥ 0, and x ∈ R define The defining properties of the isonormal process V ensure that V_{ε,δ} is a centered, two-parameter Gaussian random field with covariance for every t, t' ≥ 0 and x, x' ∈ R. See (A.2) in Appendix A.1. The following records these, and a few other, properties of V_{ε,δ}. Proposition 5.1. For every ε, δ > 0, V_{ε,δ} is a centered, 2-parameter, stationary Gaussian random field that has (up to a modification) C^∞ trajectories. Proof. Choose and fix ε, δ > 0 throughout. We need only verify the smoothness of the random field (t , x) → V_{ε,δ}(t , x).
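For intuition about the spatially correlated Gaussian structure described in Proposition 5.1, one can sample a centered stationary Gaussian field with covariance ρ(x − x') on a finite grid by factorizing the covariance matrix. The squared-exponential ρ below is an illustrative choice only; the paper assumes merely that ρ is continuous, bounded, and positive definite.

```python
import numpy as np

# Sample a centered stationary Gaussian field with covariance rho(x - x')
# on a finite grid, via a Cholesky factorization of the covariance matrix.
# rho is a squared-exponential correlation, chosen only for illustration.
rng = np.random.default_rng(0)

def rho(r):
    return np.exp(-0.5 * r**2)

x = np.linspace(0.0, 5.0, 60)
C = rho(x[:, None] - x[None, :])                     # covariance matrix rho(x_i - x_j)
L = np.linalg.cholesky(C + 1e-6 * np.eye(len(x)))    # small jitter for numerical PD

samples = L @ rng.standard_normal((len(x), 60000))   # 60000 independent field samples

# The empirical covariance should reproduce rho(x_i - x_j).
C_hat = samples @ samples.T / samples.shape[1]
assert np.max(np.abs(C_hat - C)) < 0.05
```

This is only the spatial side of V_{ε,δ}; the temporal direction, after smoothing, behaves like mollified white noise.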
Let us first demonstrate that V_{ε,δ} is a.s. continuous. Thanks to (5.2), for every t ≥ 0 and x ∈ R. Therefore, for all t, h ≥ 0 and x, y ∈ R, Let us examine the two expressions separately.
Because V_{ε,δ} is a.s. continuous, the left-hand side is a classical, Riemann-type convolution integral, and is easily seen to be a.s. a C^∞ function of (s , y); therefore so is the right-hand side. Since ε and δ are positive and otherwise arbitrary, this proves that V_{2ε,2δ}, whence also V_{ε,δ}, is C^∞ a.s. This completes the proof.

Curvilinear stochastic integrals
We can now construct stochastic integrals of the form A_t := ∫₀ᵗ V(s , f(s)) ds, where f : R → R is continuous and independent of V. One can think of A_t as the total amount of V-noise that is accumulated along the graph of f. As such, A_t can be thought of as a curvilinear stochastic integral. The terminology is borrowed, in essence, from the work of Bertini and Cancrini [7]. We change the notation slightly from the above discussion, however, in order to accommodate our later needs. The proofs of this, and the next, result rely on the following consequence of the elementary properties of Wiener integrals: The conditional distribution of the 4-parameter process (ε , δ , t , x) → V_{ε,δ}(t , x), given the process X, is centered Gaussian. In fact, we use this elementary fact several times in the sequel, frequently without explicitly mentioning the fact itself.
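Because V is white in time, the increments of A_t along a curve are independent Gaussians; hence Var A_t(f) = ρ(0)t and, for two curves f and g, Cov(A_t(f) , A_t(g)) = ∫₀ᵗ ρ(f(s) − g(s)) ds. The following Monte Carlo sketch checks these two identities; the correlation ρ and the curves are illustrative choices, not from the paper.

```python
import numpy as np

# Monte Carlo sketch of the curvilinear integral A_t(f) = int_0^t V(s, f(s)) ds.
# Since V is white in time with spatial correlation rho, increments over [s, s+ds]
# along two curves f, g are jointly Gaussian with cross-covariance rho(f(s)-g(s)) ds.
rng = np.random.default_rng(1)

def rho(r):
    return np.exp(-r**2)

t, n, M = 1.0, 200, 40000
ds = t / n
s = (np.arange(n) + 0.5) * ds             # midpoints of the time cells
f, g = np.sin(s), np.cos(s)               # two fixed continuous curves

A_f = np.zeros(M)
A_g = np.zeros(M)
for k in range(n):
    c = rho(f[k] - g[k])                  # cross-correlation at this time
    z1, z2 = rng.standard_normal(M), rng.standard_normal(M)
    A_f += z1 * np.sqrt(ds)
    A_g += (c * z1 + np.sqrt(1.0 - c**2) * z2) * np.sqrt(ds)

var_f = A_f.var()
cov_fg = np.mean(A_f * A_g)
assert abs(var_f - rho(0.0) * t) < 0.03               # Var A_t(f) = rho(0) t
assert abs(cov_fg - np.sum(rho(f - g)) * ds) < 0.03   # Cov = int rho(f - g) ds
```

The per-step 2×2 Cholesky factor (c , √(1 − c²)) is just the bivariate Gaussian sampler for the two correlated increments.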
It is not hard to prove that the construction of the just-defined curvilinear stochastic integral does not depend essentially on the particular smoothing choices that were made in the construction of V_{ε,δ}. The following lemma is the first step toward establishing this fact. Lemma 5.3. Let X be as in Lemma 5.2, and let ψ, φ : R → R be two non-random C^∞ functions with compact support such that for all ε, δ > 0, t ≥ 0, x ∈ R. Then, for every fixed ε, δ > 0, Ṽ_{ε,δ} is a centered, 2-parameter, stationary Gaussian random field that has (up to a modification) C^∞ trajectories. Moreover, for every t > 0 and x ∈ R, ∫₀ᵗ Ṽ(s , x + X_{t−s}) ds := lim_{ε,δ↓0} ∫₀ᵗ Ṽ_{ε,δ}(s , x + X_{t−s}) ds exists in L²(Ω).
Proof. If ε > 0 and δ > 0 are fixed, then Ṽ_{ε,δ} is a well-defined random field, thanks to the defining properties of the Wiener integral. In order to show that (t , x) → Ṽ_{ε,δ}(t , x) is a.s. smooth, let us define, for every integer n ≥ 0 and all reals ε, δ > 0, t ≥ 0, and x ∈ R, Since ϕ_δ and ψ_ε have compact support, it is possible to check directly that Ṽ^{(n)}_{ε,δ} is a well-defined, centered Gaussian random field for every n ≥ 0. Also, by (2.3), where C is a real number that depends only on (ε , δ , ρ , n). The Kolmogorov continuity theorem implies that, with probability one, Ṽ^{(n)}_{ε,δ}(t , ·) is continuous on R. It follows that Ṽ^{(n)}_{ε,δ}(t) is a.s. the n-th order weak derivative of x → Ṽ_{ε,δ}(t , x) for every ε, δ > 0 and t ≥ 0. Since Ṽ^{(n)}_{ε,δ}(t) is continuous, we may conclude that Ṽ^{(n)}_{ε,δ}(t) is a.s. the n-th order classical derivative of Ṽ_{ε,δ}(t) for every t ≥ 0. In particular, it follows that Ṽ_{ε,δ}(t) is C^∞ a.s.
One can prove that Ṽ_{ε,δ}(· , x) is C^∞ a.s. for every x ∈ R using the same sort of argument.
Now that the curvilinear stochastic integral ∫₀ᵗ Ṽ(s , x + X_{t−s}) ds is defined, we prove that it agrees with ∫₀ᵗ V(s , x + X_{t−s}) ds. In other words, the following result proves that the construction of ∫₀ᵗ V(s , x + X_{t−s}) ds does not depend on the particular choice of the heat kernel as the smoother in the definition of V_{ε,δ}. Proposition 5.4. Choose and fix φ, ψ as in Lemma 5.3, and define, for all ε, δ > 0, the random field Ṽ_{ε,δ} via (5.7). Then, for every t > 0 and x ∈ R.

An infinite-dimensional Brownian motion
Curvilinear stochastic integrals of the form (5.8) arise frequently in the study of polymer measures, among other places; see for example Bertini and Cancrini [7] and Carmona and Molchanov [16], together with their voluminous combined references. In that theory, X is frequently a nice linear diffusion (such as 1-dimensional Brownian motion), and ∫₀ᵗ V(s , x + X_{t−s}) ds represents the cost of letting the corresponding time-reversed space-time Brownian motion run through an external space-time environment V. Perhaps the simplest example of a curvilinear stochastic integral is obtained when we set X ≡ 0. In that case, it follows from Lemma 5.2 [with X ≡ 0] that the covariance is given by (5.9). In other words, t → ∫₀ᵗ V(s , ·) ds is a cylindrical Brownian motion with homogeneous spatial correlation function ρ.
We make two remarks about this Brownian motion next. The first is the identity, valid for all x, x' ∈ R and t > 0.
Thus, we may think of ∫₀ᵗ V(s , x) ds = √ρ(0) W_t, given the representation (A.4) of the random generalized function V.
More generally, a small variation on this argument shows that if (X , Y) is independent of V, and X and Y are continuous random processes, then for all t > 0 and x ∈ R, whenever ρ is a constant. We can set Y ≡ 0 in order to recover the preceding identity when ρ is a constant. Moreover, the regularity required by the Kunita theory holds (in L²(Ω)) if and only if ρ ∈ C^{6+ε} for some ε > 0. In this case, the Kunita theory of stochastic flows (see [45, Ch. 6]) implies that the infinite-dimensional Stratonovich SDE (1.5) has a unique solution. We referred to this fact, without detailed explanation, in the Introduction.

A probabilistic representation of the solution
We use the curvilinear stochastic integral of the preceding section in order to write the solution to the generalized Kraichnan model (1.7) probabilistically, in terms of an exogenous Wiener measure. First we introduce two simple σ-algebras, V and T₀. Definition 5.7. Let V denote the σ-algebra generated by the random field V. Also, let T₀ denote the σ-algebra generated by all random variables of the form θ₀(x , y), where x and y are real numbers.
Then we have the following probabilistic representation of the solution to (1.7).
Theorem 5.8 follows readily from Proposition 5.9 below and the inversion theorem of Fourier analysis. Proposition 5.9. Let u[ξ] be the solution to (3.2) for every ξ ∈ R, subject to initial data θ₀, where θ₀ satisfies Assumptions I-IV. Then, for all t ≥ 0 and x, ξ ∈ R, where B is a Brownian motion independent of V ∨ T₀ with Var(B₁) = 2ν₁.
Proof. We follow the argument of Hu and Nualart [37] closely, making adjustments to account for the present, slightly different, setting.
In order to simplify the typesetting, we will consider only the case that θ₀, hence also u₀, is non-random. To obtain the general case from this one, one simply replaces all of the following expectation operators by conditional expectation operators, given T₀, without altering the course of the proof.
Define v(t , x , ξ) to be the quantity on the right-hand side of (5.11); that is, We are going to show that v(t , x , ξ) solves the SPDE (3.2) in mild form (3.3). This and the uniqueness of the solution to (3.2) [Theorem 3.1] together will imply that v(t , x , ξ) and u(t , x , ξ) coincide almost surely for all t ≥ 0 and x, ξ ∈ R, and complete the proof. To this end, recall the space H from (A.1) in the appendix. Since ρ is positive definite a priori, it follows that ‖···‖_H is indeed a Hilbert norm, with corresponding inner product, for every ϕ₁, ϕ₂ ∈ C_c^∞((0 , ∞) × R). And of course H is a Hilbert space, once endowed with the latter inner product. Define, for every ϕ ∈ H, t ≥ 0, and x ∈ R, where F_ϕ := exp(⟨ϕ , V⟩ − ½‖ϕ‖²_H), and we have written ⟨ϕ , V⟩ for the stochastic integral, Thanks to the construction of ∫₀ᵗ V(s , x + B_{t−s}) ds (see Lemma 5.2 and its proof), Therefore, we may first condition on B, and then use the fact that V is Gaussian, in order to deduce from (4.2) that S_{t,x}(ϕ) can be written as By the classical Feynman-Kac formula for deterministic PDEs, the function (t , x) → S_{t,x}(ϕ) is the unique solution to the diffusion equation, with initial profile u₀(· , ξ). In particular, the Duhamel principle yields Let D denote the Malliavin derivative that corresponds to the infinite-dimensional Brownian motion t → ∫₀ᵗ V(s , ·) ds (see Nualart [51]). It is well known that DF_ϕ = ϕF_ϕ a.s. (see Nualart [51]). Consequently, Fubini's theorem and the integration by parts formula of Malliavin calculus (see Nualart [51]) together imply that Because our noise V is white in time, the adjoint [divergence] of the operator D, acting on a predictable random field X, is simply the Walsh integral of that random field X (see Nualart [51]). Therefore, it follows that
Because the family {F_ϕ}_{ϕ∈H} is total in L²(Ω , V, P) (see Nualart [51]), it follows from the elementary properties of conditional expectations that v(t , x , ξ) solves (3.2). This is what we had set out to prove.
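The classical Feynman-Kac formula invoked in the proof can be illustrated numerically. The following sketch checks, by Monte Carlo, that (t , x) ↦ E[e^{λt} u₀(x + B_t)] matches the closed-form solution of ∂_t u = ν∂²_x u + λu when B is a Brownian motion with Var(B_t) = 2νt; the constant potential λ and the Gaussian initial profile are illustrative choices (not the paper's setting) for which the solution is known explicitly.

```python
import numpy as np

# Sanity check of the Feynman-Kac representation:
# u(t, x) = E[exp(lam * t) * u0(x + B_t)] solves du/dt = nu u'' + lam u,
# where B is a Brownian motion run at speed 2*nu (Var B_t = 2 nu t).
rng = np.random.default_rng(2)

nu, lam, t, x, M = 0.5, 0.3, 1.0, 0.7, 400000
u0 = lambda z: np.exp(-z**2 / 2.0)       # illustrative Gaussian initial profile

B_t = rng.standard_normal(M) * np.sqrt(2.0 * nu * t)
mc = np.exp(lam * t) * np.mean(u0(x + B_t))

# Closed form: a Gaussian profile convolved with the heat kernel stays Gaussian,
# with variance parameter 1 + 2 nu t.
s2 = 1.0 + 2.0 * nu * t
exact = np.exp(lam * t) * np.exp(-x**2 / (2.0 * s2)) / np.sqrt(s2)

assert abs(mc - exact) < 0.01
```

With a random potential, as in the proof above, the same averaging is performed conditionally on the environment, which is the role played there by F_ϕ and the Malliavin integration by parts.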
Proof of Theorem 5.8. We now compare (3.1) with Proposition 5.9, and recall (5.10), in order to see that As a consequence of this formula, and thanks to the definition of κ (see (5.10)), we readily obtain the bound, where we recall from the Introduction (see "Conventions") that ‖Z‖_k denotes the L^k(Ω) norm of a random variable Z. We can deduce from the preceding that the following random field, defined earlier in (4.4), is well defined: Moreover, because of Assumption I of the Introduction, (5.12) holds, thanks, additionally, to the fact that, because of the independence of θ₀ and B,

Now, the inversion formula of Fourier transforms ensures that
where Y was defined in (5.8), with X replaced by the Brownian motion B. The validity of the absolute integrability condition (5.12) ensures that Fubini's theorem is applicable (in (5.12) we can replace ‖θ₀(x + B_t , ξ)‖₁ by ‖θ₀(x + B_t , ξ)‖₂ to check that the stochastic Fubini theorem (Theorem A.1) is applicable), and yields This is equivalent to the assertion of Theorem 5.8.

Proof of Theorem 2.5
In the previous subsections we introduced some of the ingredients of the proof of Theorem 2.5. We are now ready to establish Theorem 2.5. Throughout, we assume the hypotheses of Theorem 2.5. Define the random fields V_{ε,δ} and Y respectively by (5.1) and (5.8), so that for all t > 0 and x, y ∈ R. The following is a first step toward estimating the smoothness properties of the random field θ.
Recall the random field Y from (5.8).
Proof. Thanks to (5.2), almost surely for every ε, δ, t, h > 0 and x, x' ∈ R. Let ε and δ both tend to zero and appeal to Lemma 5.2 to see that a.s.
Since the conditional law of Y given B is Gaussian, elementary properties of mean-zero Gaussian processes tell us that for all k ≥ 2, t, h > 0, and x, x' ∈ R, almost surely. If k ≥ 2, t > 0, x, x' ∈ R, and 0 < h < 1, then by (2.9) and Brownian scaling, It follows easily from this that The result follows.
We are ready for the following.
Proof of Theorem 2.5. We use the probabilistic representation of the solution (see Theorem 5.8) to see that for all t > 0, x, x', y ∈ R, and k ≥ 2, thanks to (2.10) and the conditional form of Jensen's inequality. Therefore, Lemma 5.10 yields Similarly, for all t, h > 0, x, y ∈ R, and k ≥ 2, Finally, for every t > 0, x, y, y' ∈ R, and k ≥ 2, The a.s.-smoothness of y → θ(t , x , y) was established in Theorem 2.3. Therefore, (5.13), (5.14), and (5.15) together imply the result, thanks to a suitable version of the Kolmogorov continuity theorem.
Proof of Proposition 2.7. According to Theorem 5.8, for every (t , x , y) ∈ (0 , ∞) × R², where the curvilinear stochastic integral Y was defined in (5.8). Both sides are continuous [up to a modification], thanks to Theorem 2.5. Therefore, we may appeal to the continuous modification instead, to see that the preceding identity holds for all (t , x , y) ∈ (0 , ∞) × R² outside a single P-null set. Because z → p^{(κ)}_t(z) is a probability density, the triangle inequality and Assumption III together imply that for all x, x' ∈ R and k ≥ 2. Therefore, the Kolmogorov continuity theorem ensures that a → ∫_{−∞}^{∞} |θ₀(a , w)| dw has a continuous modification, whence almost surely for all m > 0. The result follows. Remark 6.1. We pause to prove an assertion that was made in the Introduction; namely, that the dissipation rate in (2.7) is unimprovable. Consider the example in which ρ is a constant, where κ was defined in (5.10). Recall that, because ρ is a constant, we can write the solution explicitly. Therefore, Theorem 5.8 and the semigroup properties of p^{(κ)} together yield that θ(t , x , y) = p^{(κ)}_{1+t}(y − W_t) for all t > 0 and x, y ∈ R. It follows immediately from this that θ(t) ≥ 0 and sup_{y∈R} θ(t , x , y) = 1/√(4πκ(1 + t)).
In particular, Proposition 2.7 guarantees an upper bound on the dissipation rate of the passive scalar that is unimprovable.
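Remark 6.1's example is explicit enough to check numerically: with constant ρ, the supremum over y of the solution is (4πκ(1 + t))^{−1/2}, deterministically, which exhibits the O(1/√t) rate in (2.7). A short check, with an illustrative value of κ:

```python
import numpy as np

# The constant-rho example: theta(t, x, y) = p_{1+t}(y - W_t), where p_s is the
# heat kernel with diffusivity kappa: p_s(z) = exp(-z^2/(4 kappa s)) / sqrt(4 pi kappa s).
# Its supremum over y is (4 pi kappa (1+t))^{-1/2}, independent of the Brownian shift.
kappa = 0.25  # illustrative value

def heat_kernel(s, z):
    return np.exp(-z**2 / (4.0 * kappa * s)) / np.sqrt(4.0 * np.pi * kappa * s)

z = np.linspace(-50.0, 50.0, 200001)
sups = []
for t in [1.0, 4.0, 16.0, 64.0]:
    m = heat_kernel(1.0 + t, z).max()
    assert abs(m - 1.0 / np.sqrt(4.0 * np.pi * kappa * (1.0 + t))) < 1e-12
    sups.append(m)

# Quadrupling t shrinks the supremum by a factor approaching 2:
# the O(1/sqrt(t)) dissipation rate, and no faster.
ratios = [sups[i] / sups[i + 1] for i in range(3)]
assert all(1.5 < r < 2.05 for r in ratios)
```

The translation by W_t does not affect the supremum, which is why the rate is attained almost surely.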
Proof of Proposition 2.8. Let W denote the law of the process B. We may view W as a probability measure on the usual space C[0 , ∞) of real-valued, continuous functions on [0 , ∞). Theorem 5.8 and Fubini's theorem together imply a representation of θ(t , x , y) as an integral over C[0 , ∞) with respect to W, valid almost surely. Of course, ∫₀ᵗ V(s , x + f(t − s)) ds is not defined for every f ∈ C[0 , ∞). But it is well defined for W-almost every f ∈ C[0 , ∞), by Lemma 5.2.
It follows essentially immediately from the preceding display that for every t > 0 and x, y ∈ R, P{θ(t , x , y) > 0} = 1. This is however a weaker statement than the one that was announced in Proposition 2.8. [N.B. The quantifiers.] In order to prove the full result we need to pay attention to a few measure-theoretic details.
According to Theorem 2.5, both sides of the preceding display are continuous, up to a modification. Therefore, we may replace each side with its continuous modification, as is usual, to see that the preceding identity holds for all t > 0 and x, y ∈ R outside a single P-null set. Now, suppose to the contrary that θ(t , x , y) = 0 for some (t , x , y) ∈ (0 , ∞) × R². If this were so, then the preceding discussion and Fubini's theorem together show that (6.2) holds for almost every b ∈ R. Fix any such b ∈ R and observe that Z(b) := {a ∈ R : θ₀(a , b) = 0} is a Lebesgue-null set, by Fubini's theorem. Since the distribution of B_t is mutually absolutely continuous with respect to the Lebesgue measure, it follows that W{f : x + f(t) ∈ Z(b)} = 0, and hence W{f : θ₀(x + f(t) , b) = 0} = 0. This contradicts (6.2).
After a line, or two, of elementary calculus, if the preceding approximation were of sufficiently high quality (it is!), then we would be able to write, for n ≥ 1 large but fixed, In particular, if X(t) := lim_{ε↓0} X_ε(t) existed (it does!), then simple continuity considerations imply that X would have to satisfy, provided only that n ≥ 1. Let n → ∞ and appeal to elementary properties of the Itô integral in order to conclude that X must then solve the Itô stochastic differential equation, This is essentially the Wong and Zakai theorem [58]. A somewhat surprising feature of that theorem is that it implies, among other things, that the limit X of X_ε does not satisfy the Itô SDE dX = σ(X) dW, as one might guess from a first look at (7.1). Rather, X solves a Stratonovich SDE: the stochastic integral is the Stratonovich stochastic integral of σ(X), and the Wong-Zakai theorem implies that a "physical approximation" to a stochastic differential equation should typically be understood as a Stratonovich SDE (and not an Itô SDE). Armed with this philosophy, we next turn to "physical approximations" of the Kraichnan model (1.4).
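The Wong-Zakai dichotomy just described can be seen numerically on the scalar SDE dX = X dW (an illustrative example, not the paper's infinite-dimensional setting): the midpoint/Heun scheme, which mimics smooth approximations of the noise, converges to the Stratonovich solution exp(W_t), while the Euler-Maruyama scheme converges to the Itô solution exp(W_t − t/2).

```python
import numpy as np

# Wong-Zakai illustration for dX = X dW, X_0 = 1, on a single driving path:
# the Heun (midpoint) scheme tracks the Stratonovich solution exp(W_T),
# while Euler-Maruyama tracks the Ito solution exp(W_T - T/2).
rng = np.random.default_rng(3)

T, n = 1.0, 100000
dt = T / n
dW = rng.standard_normal(n) * np.sqrt(dt)
W_T = dW.sum()

x_ito, x_strat = 1.0, 1.0
for dw in dW:
    x_ito += x_ito * dw                       # Euler-Maruyama (Ito)
    pred = x_strat + x_strat * dw             # Heun predictor
    x_strat += 0.5 * (x_strat + pred) * dw    # Heun corrector (Stratonovich)

assert abs(np.log(x_strat) - W_T) < 0.05              # Stratonovich limit: exp(W_T)
assert abs(np.log(x_ito) - (W_T - T / 2.0)) < 0.05    # Ito limit: exp(W_T - T/2)
```

The gap between the two limits is exactly the Itô-Stratonovich correction ½σσ′, which is the finite-dimensional shadow of the ½ρ(0) shift that appears below in the Kraichnan setting.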
There is in fact an integration theory associated to this definition, as was the case in finite dimensions. But we will not need that theory here, and so will not discuss it.
We introduce notation analogous to the earlier notation, as follows. Let U_{ε,δ}(t , x , ·) := θ̂_{ε,δ}(t , x , ·) denote the Fourier transform of y → θ_{ε,δ}(t , x , y) in the sense of distributions. Clearly, U_{ε,δ} solves weakly the following random PDE: subject to U_{ε,δ}(0 , x , ξ) = θ̂₀(x , ξ). In particular, u_{ε,δ}(t , x , ξ) := e^{νξ²t} U_{ε,δ}(t , x , ξ) solves the random PDE, subject to u_{ε,δ}(0 , x , ξ) = θ̂₀(x , ξ). We invoke classical theory once again to see that the unique solution to the preceding PDE is where the notation is the same as before. In the case where iξ is replaced by ξ, this can be found, for example, in Freidlin [32]. The present, more complex, case enjoys essentially exactly the same proof [which we omit, as a result]. In this way, we see that It follows from this and Assumption II that U_{ε,δ}(t , x , ·) ∈ L¹(R) a.s., whence, by the inversion theorem of Fourier transforms, Because of first Fubini's theorem, and then another round of Fourier inversion, this yields where W is a standard, linear Brownian motion that is independent of (B , V), and where we have also written W̃_t := (2ν)^{−1/2}B_t. It is now easy to deduce from Lemma 5.2 and the dominated convergence theorem that, when θ₀ satisfies Assumptions II-IV and is bounded, θ(t , x , y) := lim_{ε,δ↓0} θ_{ε,δ}(t , x , y) exists in probability, and the limit admits the stated representation for every t > 0 and x, y ∈ R, almost surely. This is exactly the same solution as the one in Theorem 5.8, with ν₁ = ν, except that in the latter theorem B was replaced by a Brownian motion with speed κ = ν₂ − ½ρ(0); equivalently, we obtain the above from Theorem 5.8 when we set ν₂ = ν + ½ρ(0).
The following is a simple consequence of the preceding probabilistic representation (7.3) of the Stratonovich solution to (1.4). Corollary 7.3. Suppose θ₀ satisfies (2.10) and is bounded. Let ν > 0 and define θ^{(ν)} to be the Stratonovich solution of (1.4). Then, for every k ≥ 2, t > 0, and x, y ∈ R, In other words, the "Stratonovich solution" θ^{(0)} to the inviscid form of (1.4) is obtained by formally applying the method of characteristics, as one would do in the classical PDE setting when V is smooth.
The Stratonovich solution to the inviscid form of (1.4) is very easy to understand: where {W(t)}_{t≥0} is the cylindrical Brownian motion defined by More precisely, the proof of Lemma 5.2 shows immediately that W is a centered Gaussian process whose covariance is described by (5.9).
Thus, it suffices to prove that J₃ converges to zero as ν ↓ 0. Since V is conditionally Gaussian, given the process W, Lemma 5.2 and its proof yield which goes to 0 as ν ↓ 0, by the dominated convergence theorem. Because the kth moment of a centered Gaussian random variable is proportional to the (k/2)th power of its variance, the conditional form of Jensen's inequality yields Because of (2.10), this shows that J₃^k → 0 as ν ↓ 0, and completes the proof.

Measure-valued initial profiles
Temporarily let G_{θ₀} denote the Stratonovich solution to (1.4), starting from an arbitrary non-random initial function θ₀ (as in (1.4)) that satisfies Assumptions II-IV and is bounded. According to Theorem 7.2 [see also (7.3)], we can write, for all t > 0 and x, y ∈ R, It is easy to justify measurability and integrability, as well as the use of Fubini's theorem here. Therefore, we refrain from further mentioning those details. Instead, let us observe that the stochastic process {X_s : s ≤ t} is a Brownian bridge that is conditioned to go from the space-time point (0 , x') to the space-time point (t , x), run at speed 2ν.
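The Brownian bridge just described has explicit Gaussian marginals: at time s it has mean x' + (s/t)(x − x') and variance 2νs(t − s)/t. The following sketch (with x0, x1 standing for x', x; all parameters illustrative) builds such bridges from free Brownian paths and checks these statistics.

```python
import numpy as np

# Brownian bridge at speed 2*nu from (0, x0) to (t, x1), built from a free
# Brownian motion B with Var B_s = 2 nu s via
#   X_s = x0 + B_s - (s/t) * (B_t - (x1 - x0)).
rng = np.random.default_rng(4)

nu, t, x0, x1, n, M = 0.5, 2.0, -1.0, 3.0, 200, 20000
s = np.linspace(0.0, t, n + 1)

dB = rng.standard_normal((M, n)) * np.sqrt(2.0 * nu * t / n)
B = np.concatenate([np.zeros((M, 1)), np.cumsum(dB, axis=1)], axis=1)
X = x0 + B - (s / t)[None, :] * (B[:, -1:] - (x1 - x0))

# Endpoints are pinned exactly; the midpoint statistics match the bridge law:
# mean (x0 + x1)/2 and variance 2 nu * t / 4 at s = t/2.
assert np.allclose(X[:, 0], x0) and np.allclose(X[:, -1], x1)
mid = X[:, n // 2]
assert abs(mid.mean() - (x0 + x1) / 2.0) < 0.03
assert abs(mid.var() - 2.0 * nu * t / 4.0) < 0.03
```

This is the standard "pinning" construction: subtracting the linearly interpolated endpoint error from a free path yields a process with the bridge covariance 2νs(t − s')/t for s ≤ s'.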
$\Gamma^{(\nu)}_t(x\,,x'\,;y\,,y')\,\mathrm dx'\,\mathrm dy'$, a.s. for all $t>0$ and $x,y\in\mathbb{R}$. We can now deduce the following result from the linearity of the SPDE (1.4). But first let us note that if the initial condition $\theta_0$ is a finite measure on $\mathbb{R}^2$, then the smoothed version (7.2) of (1.4) still has a unique classical solution $\theta_{\varepsilon,\delta}$ a.s. Thus, the Stratonovich solution to (1.4) with a measure-valued initial condition can be defined in exactly the same way as in Definition 7.1. We are ready to state the next result.
One can also study Itô--Walsh type solutions to the Kraichnan flow (8.3), or even the generalized Kraichnan flow (1.7), where $\theta_0$ is a finite Borel measure. We will avoid such generalizations here. Instead, let us emphasize only that, because of Theorem 8.1, the displayed random field is the Stratonovich solution to (8.3), starting from the initial Borel measure $\mu=\delta_0\otimes\delta_0$ on $\mathbb{R}^2$. A question of general interest to engineers is: what happens as $\nu\downarrow0$? When the initial datum is a nice function, Corollary 7.3 showed that the solution to (1.4) converges to the [formal] method-of-characteristics solution to the inviscid case of (1.4). Moreover, the stated identity holds for all $a,b\in\mathbb{R}$.
In the physically-interesting case that $\theta_0=\delta_0\otimes\delta_0$, it is clear that the inviscid form of (1.4) does not have a function-valued solution. Intuitively speaking, this is because the corresponding limit does not exist as a nice random function. In order to see this, we next study the small-$\nu$ behavior of the covariance function of $\Gamma^{(\nu)}_t(x\,,0\,;y\,,0)$ for every fixed $t>0$. Theorem 8.2. Suppose that $\rho$ is nonincreasing on $[0\,,\infty)$ and
\[\rho(w)=\rho(0)\ \Longrightarrow\ w=0.\tag{8.6}\]
Then (8.7) holds for all $t>0$ and $x,y\in\mathbb{R}$, and (8.8) holds for all $t>0$ and $x,x',y,y'\in\mathbb{R}$ with $x\neq x'$ and $y\neq y'$. In particular, $\{\Gamma^{(\nu)}_t(x\,,y)\}_{\nu\in(0,1)}$ is an $L^2(\Omega)$-tight family of random variables for every fixed $t>0$ and $x,y\in\mathbb{R}$. It might help to recall that this means that for every sequence $\nu_1>\nu_2>\cdots$ of positive numbers that decrease to zero, there exists a finite random variable $L_t(x\,,y)=L_t(x\,,y\,;\{\nu_j\})$ such that the convergence holds along a subsequence in $L^2(\Omega)$, and hence also weakly. It is possible that $L_t(x\,,y)$ does not depend on the sequence $\{\nu_j\}$. If this were true, then the two-point covariance function of $L_t$ could be read off (8.7) and (8.8), and the existence of a second-order type of invariant measure would also follow, consistent with a result of van Eijnden [54] for a related, though slightly different, fluid model. For a different type of limit theorem, see Fannjiang [30].
The proof of Theorem 8.2 requires the following regularity result, which implies that if $\rho$ is nonincreasing on $[0\,,\infty)$, then: (1) under Condition (8.6), the right-hand side of (8.8) is strictly positive and finite provided that $x\neq x'$ and $y\neq y'$; and (2) under the more restrictive Condition (8.11), the right-hand side of (8.8) is strictly positive and finite for all $x,x',y,y'\in\mathbb{R}$.
since the limit is a constant when $k$ is fixed. This proves, in particular, that $P(A_\varepsilon)=o(\varepsilon^k)$ as $\varepsilon\downarrow0$ for every $k>1$. Therefore, we can deduce the stated bound from (8.15). We are now ready to prove Theorem 8.2.
Proof of Theorem 8.2. To prove (8.7), we begin with the expression (8.2) in order to see that, conditionally on $B$, the random variable $\int_0^tV(s\,,x+B_{t-s})\,\mathrm ds$ is centered Gaussian with [conditional] variance $t\rho(0)$. The result then follows from the tower property of conditional expectations and (8.16).
In order to prove (8.8), we may first condition on $B$ and $B'$ in order to identify the relevant conditional law. Thus, we appeal to (8.5), by first conditioning on $B$ and $B'$, and find the displayed identity. Manifestly, the right-hand side is strictly positive, and it is finite owing to Proposition 8.6.
is an approximate identity, (1.3) and the dominated convergence theorem together ensure that the above integral converges to the stated limit. In order to simplify the exposition somewhat, we study the large-time behavior of $t\mapsto\Gamma^{(\nu)}_t(x\,,y)$ only at $x=y=0$, since the point $(0\,,0)$ is slightly more distinguished than other points in light of the fact that the initial datum is $\delta_0\otimes\delta_0$. It is not hard, though, to extend our analysis to study the behavior of $t\mapsto\Gamma^{(\nu)}_t(x\,,y)$ for other values of $x$ and $y$. First, we observe that the typical behavior of $t\mapsto\Gamma^{(\nu)}_t(0\,,0)$ is $\mathrm{const}/t$; see also (9.3).
The following is a fractal-analysis version of such an assertion.
Furthermore, for any $x\in\mathbb{R}$, the corresponding limit holds. Among other things, Theorem 9.1 says that, asymptotically as $t\to\infty$, $t\mapsto\Gamma^{(\nu)}_t(0\,,0\,;0\,,0)$ typically behaves as $K/t$ for all possible values of $K\in(0\,,(4\pi\nu)^{-1})$. Moreover, the set of times where $\Gamma^{(\nu)}_t(0\,,0\,;0\,,0)>K/t$ for such a $K$ is a "monofractal" of full macroscopic Minkowski dimension. The following result shows that there are more subtle, logarithmic, corrections, on whose scale a suitable log-scaling of the set of decay times of order $t^{-1}(\log t)^{-\delta}$ is a bona fide macroscopic multifractal. Theorem 9.2. Choose and fix a real number $\delta>0$. Then the stated identity holds with probability one. Of course, Theorem 9.2 has non-trivial content if and only if $\delta<\rho(0)/(2\nu)=:R$.
Interestingly enough, $R$ is the ratio of turbulent diffusivity to thermal diffusivity and, as such, plays a role similar to $\tfrac12\mathrm{Pr}$, half of the Prandtl number, in the non-stochastic setting. Larger values of $R$ translate to more turbulent transport of the underlying passive scalar; see Grossmann and Lohse [34] and its extensive bibliography for earlier physical (in some cases, experimental) observations that the multifractal behavior of $\Gamma^{(\nu)}$ is determined essentially solely by the value of the Prandtl (or Schmidt) number, here $R$. See §10 for further explanation of some of the physical terminology used here. EJP 25 (2020), paper 122.
In light of the preceding remarks, Theorem 9.2 implies that, as $R$ gets larger, higher dissipation rates can be observed on non-trivial unbounded sets of greater macroscopic dimension. Stated another way: the larger the value of $R$, the more multifractal are the rates of dissipation of the passive scalar.
We begin the proofs with a technical lemma about standard Brownian motion. Lemma 9.3. Let $\{W_t\}_{t\ge0}$ denote a standard, linear Brownian motion, and let $z\in\mathbb{R}$ and $\alpha>0$ be fixed numbers. Then the stated assertions hold with probability one. Proof. The first part of the lemma is well known; see, for example, Khoshnevisan [39] in the case that $W$ is replaced by a random walk. We make small adjustments to that proof in order to verify the first part of our lemma.
Consider the following random subset of $\mathbb{Z}_+$, defined for all $N\in\mathbb{N}$. It is well known, and easy to verify directly from the Markov property of $W$, that for every $B>0$ there exist real numbers $C_1,C_2$, depending only on $(B\,,z)$, such that the stated bounds hold. For example, it is well known (as well as elementary) that the displayed estimate holds, which clearly implies (9.4) when $z=0$. The case $z\neq0$ follows from potential-theoretic considerations; see [40] for example.
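Return-probability estimates of this flavor can be checked directly from the Gaussian law of $W_n\sim N(0\,,n)$: the probability that $W_n$ lands within unit distance of a fixed level decays like $n^{-1/2}$. A short numerical check, with the level $z=3$ an arbitrary illustrative choice:

```python
import math

def prob_near(z: float, n: int, r: float = 1.0) -> float:
    """P{ |W_n - z| <= r } for standard Brownian motion, using W_n ~ N(0, n)."""
    s = math.sqrt(n)
    Phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))  # standard normal CDF
    return Phi((z + r) / s) - Phi((z - r) / s)

# sqrt(n) * P{ |W_n - z| <= 1 } should stabilize near 2/sqrt(2*pi) ~ 0.7979
for n in [10**2, 10**4, 10**6]:
    print(n, math.sqrt(n) * prob_near(3.0, n))
```

The rescaled probabilities approach $2\varphi(0)=\sqrt{2/\pi}\approx0.7979$, in line with the $n^{-1/2}$ decay used in the proof.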
In any case, it follows that there exist real numbers $C_3,C_4>0$, depending only on $z\in\mathbb{R}$, such that the stated moment bounds hold. In particular, Chebyshev's inequality implies that $\sum_{n=1}^\infty P\{J_{2^n}\ge2^{(1+\varepsilon)n/2}\}<\infty$ for all $\varepsilon>0$. Since $\varepsilon>0$ is arbitrary, the Borel--Cantelli lemma ensures that $\limsup_{n\to\infty}(\log 2^n)^{-1}\log J_{2^n}\le\tfrac12$ a.s. Next we show that the above is in fact an a.s. identity, and hence prove the first assertion of the lemma.
If $k$ and $j$ are integers that satisfy $k\ge j+2>j\ge1$, then we may apply the strong Markov property at the first time in $[j\,,j+1]$ that $W$ reaches $z$ in order to obtain the stated bound for a real number $C$ that depends only on $z$; compare with (9.4). Therefore, the asserted asymptotics hold as $N\to\infty$, by (9.5).
This, (9.5), and the Paley--Zygmund inequality (see Lemma 7.3 in [41], for example) together imply that the displayed event holds with probability at least $q>0$. By the Kolmogorov 0--1 law, the latter event must in fact have full probability, whence $\mathrm{Dim_M}(L)\ge\tfrac12$ a.s. This and (9.6) together establish the first half of the lemma. The second part of the lemma follows from another second-moment computation. In order to simplify the notation, let $\bar L$ denote the random set whose dimension is supposed to be 1. Elementary properties of the macroscopic Minkowski dimension ensure that it suffices to prove that $\mathrm{Dim_M}(\bar L)\ge1$ a.s. For every integer $N>1$, define $\bar J_N$ as displayed. As $j\to\infty$, the summand converges to a strictly positive limit. Therefore, the displayed lower bound on $\mathrm E(\bar J_N)$ holds for all $N$ sufficiently large. Because $\bar J_N\le N$, whence also $\mathrm E(\bar J_N^2)\le N^2$, the Paley--Zygmund inequality implies the stated bound for all sufficiently-large $N$.
Armed with Lemma 9.3, we can now derive Theorem 9.1 fairly easily.
Lemma 9.3 then implies that if $0<K<(4\pi\nu)^{-1}$, then $\mathrm{Dim_M}(E(K))=1$ a.s. If, on the other hand, $K\ge(4\pi\nu)^{-1}$, then $E(K)$ is empty and hence has zero macroscopic Minkowski dimension. This completes the proof of the first assertion of the theorem; the second assertion is a ready consequence of the first part of Lemma 9.3.
As it turns out, Theorem 9.2 is a consequence of the probabilistic representation of the solution to (1.4) together with a large-scale fractal property of the Ornstein-Uhlenbeck process.
Proof of Theorem 9.2. Let us fix some $\delta>0$ and consider the random set displayed above. Choose and fix an arbitrary $\varepsilon\in(0\,,1)$, and fix $N>1$ accordingly. Elementary properties of the macroscopic dimension imply (9.8), and, similarly, the companion bound. The stochastic process $\{U_t\}_{t\ge0}$ is a stationary Ornstein--Uhlenbeck process with covariance function $\mathrm{Cov}[U_s\,,U_t]=\exp\{-|t-s|\}$ for $s,t\ge0$. Therefore, Theorem 6.1 of Weber [57] implies the desired dimension formula.

For all $t\ge0$ and $x,y\in\mathbb{R}$, define the displayed random field to be a model for a 2-dimensional velocity field. It is a generally-accepted fact that the transport of a passive scalar in the field $V$ is governed by the convection-diffusion equation (10.1), valid for all $t>0$ and $x,y\in\mathbb{R}$, subject to nice initial data $\theta(0):=\theta_0$. The constant $\nu$ is strictly positive and is referred to, for example, as the thermal diffusivity when $\theta$ denotes temperature; Kraichnan [43] refers to a closely-related quantity as eddy diffusivity.
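The stationary Ornstein--Uhlenbeck covariance $\mathrm{Cov}[U_s\,,U_t]=\mathrm e^{-|t-s|}$ invoked above is easy to verify by simulation. The sketch below samples exact stationary pairs $(U_0\,,U_t)$ using the one-step transition law of the OU process; the sample size, seed, and time lag are arbitrary illustrative choices.

```python
import math, random

def ou_pair(t: float, rng: random.Random):
    """Sample (U_0, U_t) from a stationary OU process with Cov[U_s, U_t] = exp(-|t-s|).
    Uses the exact transition law U_t = e^{-t} U_0 + sqrt(1 - e^{-2t}) * N(0,1)."""
    u0 = rng.gauss(0.0, 1.0)
    ut = math.exp(-t) * u0 + math.sqrt(1.0 - math.exp(-2.0 * t)) * rng.gauss(0.0, 1.0)
    return u0, ut

rng = random.Random(12345)
t = 0.7
n = 200_000
pairs = [ou_pair(t, rng) for _ in range(n)]
cov = sum(a * b for a, b in pairs) / n
print(cov, math.exp(-t))  # empirical covariance vs. exp(-0.7)
```

The empirical covariance matches $\mathrm e^{-0.7}\approx0.4966$ up to Monte Carlo error, and the empirical stationary variance is close to 1.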
Other, similar names are used when $\theta$ denotes concentration, temperature, etc.
In fluid mechanics, ν is inversely proportional to the Reynolds number of the underlying fluid: Smaller values of ν imply more turbulence in the fluid.
We follow Majda [47] and specialize to velocity fields that come from so-called shear flows of the following type:
Among other things, such fluids are incompressible or divergence free; that is, ∇ · V = 0.
In this way, the PDE (10.1) simplifies to the convection-diffusion equation (10.3). The partial differential equation (10.3) has the same form as (1.7), but there is a small difference: in general, the velocity field $V$ is decomposed into its "mean component" $\mu\in\mathbb{R}$ and its "fluctuating component" $V=V(t\,,x)$ as displayed,$^{10}$ and $\mu$ is not in general zero. This is the so-called Reynolds decomposition of $V$, and the quoted terms are substitutes for the respective statements that $\mu$ is deterministic and $V$ is random. When $V$ is a centered, generalized Gaussian random field with covariance (1.1), the partial differential equation (10.3) is called the Kraichnan model for the 2-D flow described by $V$; see Kraichnan [42]. In this case, the solution takes the same form as before: that is, the introduction of the additional mean velocity field $\mu$ merely changes the mean function of the Brownian motion $\bar B$ from its standard value zero to the mean velocity $\mu$.
We leave the analysis of this slightly more general model to the interested reader, since the methods of this paper cover this more general case as well. $^{10}$In order to simplify the technical aspects of this discussion, we assume that $\mu\in\mathbb{R}$ is constant, though more general mean velocity fields can be considered as well.

A multi-dimensional extension
In this section we briefly study the following higher-dimensional analogue (11.1) of the SPDE (1.7), with noise term $\sum_{j=1}^n\partial_{y_j}\theta(t\,,x\,,y)V_j(t\,,x)$, where $\theta$ is a predictable random field indexed by $\mathbb{R}_+\times\mathbb{R}\times\mathbb{R}^n$, and the noise is centered Gaussian whose covariance function $\Sigma$ is described, for all $s,t\ge0$ and $x,x'\in\mathbb{R}$, in terms of $\rho=(\rho_{i,j})_{1\le i,j\le n}:\mathbb{R}^n\to\mathbb{R}^{n\times n}_+$, the spatial correlation function of $V$.$^{11}$ Instead of writing out detailed proofs, we merely point out how one solves (11.1) using analogies with the earlier case $n=2$, where the details were provided. In complete analogy with the preceding sections, we may take Fourier transforms with respect to the variable $y$ in order to find that $U(t\,,x\,,\xi):=\int_{\mathbb{R}^n}\mathrm e^{i\xi\cdot y}\theta(t\,,x\,,y)\,\mathrm dy$ ought to solve the SPDE
\[
\partial_tU(t\,,x\,,\xi)=\nu_1\partial_x^2U(t\,,x\,,\xi)-\nu_2|\xi|^2U(t\,,x\,,\xi)+iU(t\,,x\,,\xi)\sum_{j=1}^n\xi_jV_j(t\,,x),
\]
where $\xi:=(\xi_1\,,\ldots,\xi_n)\in\mathbb{R}^n$ and $|\xi|^2:=\xi_1^2+\cdots+\xi_n^2$. Once again, we follow the procedure of the previous sections and define a random field $u$ via $U(t\,,x\,,\xi)=\exp\{-\nu_2|\xi|^2t\}\,u(t\,,x\,,\xi)$, and arrive at the corresponding parabolic Anderson problem,
\[
\partial_tu(t\,,x\,,\xi)=\nu_1\partial_x^2u(t\,,x\,,\xi)+iu(t\,,x\,,\xi)\,\xi\cdot V(t\,,x),\tag{11.2}
\]
solved pointwise for every $\xi\in\mathbb{R}^n$. Thus, the difference between (3.2) and (11.2) is that, instead of the multiplicative noise $i\xi V(t\,,x)$ in (3.2), we have in (11.2) the noise $i\sum_{j=1}^n\xi_jV_j(t\,,x)$. We now proceed in almost exactly the same way as we did when $n$ was 2, and obtain the following $n$-dimensional extension of Theorem 3.1.
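The exponential substitution $U=\exp\{-\nu_2|\xi|^2t\}\,u$ that strips the damping term can be checked numerically in the noiseless case $V\equiv0$: solving the damped heat equation for $U$ directly should agree with multiplying the plain heat-equation solution $u$ by $\exp\{-\nu_2|\xi|^2t\}$. The finite-difference sketch below is illustrative only; grid sizes and parameter values are arbitrary.

```python
import math

# With V = 0:  dU/dt = nu1 * Uxx - nu2*|xi|^2 * U  should equal
# exp(-nu2*|xi|^2 * t) * u(t, x), where  du/dt = nu1 * uxx.
nu1, nu2, xi2 = 0.3, 0.7, 2.0            # xi2 stands for |xi|^2
L, nx, nt, T = 2 * math.pi, 64, 4000, 1.0
dx, dt = L / nx, T / nt                   # explicit Euler; nu1*dt/dx^2 << 1/2 (stable)

u = [math.sin(2 * math.pi * j / nx) for j in range(nx)]   # shared initial profile
U = list(u)
for _ in range(nt):
    lap = lambda f, j: (f[(j + 1) % nx] - 2 * f[j] + f[(j - 1) % nx]) / dx**2
    u = [u[j] + dt * nu1 * lap(u, j) for j in range(nx)]                      # heat eq.
    U = [U[j] + dt * (nu1 * lap(U, j) - nu2 * xi2 * U[j]) for j in range(nx)]  # damped eq.

damp = math.exp(-nu2 * xi2 * T)
err = max(abs(U[j] - damp * u[j]) for j in range(nx))
print(err)  # small: the substitution U = exp(-nu2*|xi|^2 t) u removes the damping term
```

The maximal discrepancy is of the order of the time-discretization error, confirming the reduction to the undamped (parabolic Anderson type) equation.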
Throughout, we write $F[z]$ for the function $(t\,,x)\mapsto F(t\,,x\,,z)$ whenever applicable, the notation being clear from context. Theorem 11.1. Suppose $u_0:\Omega\times\mathbb{R}\times\mathbb{R}^n\to\mathbb{C}$ is a measurable random field that is independent of $V$ and satisfies $\sup_{x\in\mathbb{R}}\mathrm E(|u_0(x\,,\xi)|^k)<\infty$ for every $k\ge2$ and $\xi\in\mathbb{R}^n$. Choose and fix some $\nu_1>0$. Then, for every $\xi\in\mathbb{R}^n$, (3.2) has a unique mild solution $u[\xi]$ that satisfies the stated moment bound for every $k\ge2$, $\varepsilon\in(0\,,1)$, $t>0$, $x\in\mathbb{R}$, and $\xi\in\mathbb{R}^n$, where $c_k$ was defined earlier in Lemma 2.17. $^{11}$Interestingly enough, the matrix $n^{-1}\rho(0)$, sometimes known as the turbulent diffusivity, plays a role in the ensuing analysis as $(2/n)$ times the closely-related matrix $\tfrac12\rho(0)$, which does have a physical meaning.
If we assume that (3.18) holds, where now ξ is replaced by the vector ξ ∈ R n , then we can proceed in exactly the same way as we did in the proof of Lemma 3.10, in order to show that, in the n-dimensional case, ξ → u(t , x , ξ) has a continuous, and thus Borel-measurable, version. We can also obtain the probabilistic representation of u as follows.
In order to obtain a probabilistic representation of $\theta$, and also to prove the existence and uniqueness of the solution to (11.1), we plan to compute the inverse Fourier transform of $\mathbb{R}^n\ni\xi\mapsto\exp\{-\nu_2|\xi|^2t\}\,u(t\,,x\,,\xi)$. In analogy with the preceding sections, our methods show that this inverse Fourier transform exists provided only that $\nu_2I-\tfrac12\rho(0)$ is strictly positive definite. Here, $I$ denotes the $n\times n$ identity matrix. These assertions can be summarized as follows. Theorem 11.3. Assume that (3.18) holds and that $\nu_2I-\tfrac12\rho(0)$ is strictly positive definite. Let $B$ denote a standard linear Brownian motion and $\bar B$ a standard Brownian motion on $\mathbb{R}^n$, and assume that: 1. $B$, $\bar B$, and $\mathcal V\vee\mathcal T_0$ are totally independent; 2. $B$ has speed $\mathrm{Var}(B_1)=2\nu_1$; and 3. the covariance matrix of $\bar B_1$ is $2\nu_2I-\rho(0)$.
Then, for all $t\ge0$, $x\in\mathbb{R}$, and $y\in\mathbb{R}^n$,
\[
\theta(t\,,x\,,y)=\mathrm E\left[\left.\theta_0\left(x+B_t\,,\ y+\bar B_t-\int_0^tV(s\,,x+B_{t-s})\,\mathrm ds\right)\,\right|\,\mathcal V\vee\mathcal T_0\right]
\]
almost surely, where $\mathcal V$ denotes the $\sigma$-algebra generated by $V$, $\mathcal T_0$ is as before, and the random variable $\int_0^tV(s\,,x+B_{t-s})\,\mathrm ds$ is defined coordinatewise as in Lemma 5.2.
Finally, we may instead consider the Stratonovich solution to equation (11.1), by first replacing $V$ with a smooth random noise and then taking limits. The required extension to the present $n$-dimensional setting does not require new ideas, and leads to the displayed formula for $\theta^{(\nu_1,\nu_2)}(t\,,x\,,y)$, where $W$ and $\mathbf W$ are respectively linear and $n$-dimensional Brownian motions, independent of each other as well as of the $\sigma$-algebra $\mathcal V\vee\mathcal T_0$. In particular, (11.1) has a Stratonovich solution for every $\nu_1,\nu_2>0$, with $\theta_0$ satisfying Assumptions II--IV and bounded. Consequently, if (3.18) holds, then for every $\nu>0$ the Stratonovich solution of
\[
\partial_t\theta(t\,,x\,,y)=\nu\Delta\theta(t\,,x\,,y)+\nabla_y\theta(t\,,x\,,y)\cdot V(t\,,x),
\]
subject to initial data $\theta_0$ that satisfies Assumptions I--IV and is bounded, is given, for all $t\ge0$, $x\in\mathbb{R}$, and $y\in\mathbb{R}^n$, by the displayed formula, almost surely. We leave the other extensions (inviscid equations, measure-valued initial data, etc.) to the interested reader.

A Appendix: Stochastic integrals
In this appendix we briefly review aspects of the Walsh theory of stochastic integration, as it pertains to the present setting. We use this opportunity to set forth some notation, and present a stochastic Fubini theorem that plays an important role in the paper.

A.1 The Wiener integral
Let $C_c^\infty((0\,,\infty)\times\mathbb{R})$ denote the usual vector space of all infinitely-differentiable, compactly-supported, real-valued functions on $(0\,,\infty)\times\mathbb{R}$, and define $\mathcal H$ to be the completion of $C_c^\infty((0\,,\infty)\times\mathbb{R})$ in the norm $\|\cdot\|_{\mathcal H}$, where
\[
\|\varphi\|_{\mathcal H}^2:=\int_0^\infty\mathrm dt\int_{-\infty}^\infty\mathrm dx\int_{-\infty}^\infty\mathrm dy\ \varphi(t\,,x)\varphi(t\,,y)\rho(x-y).\tag{A.1}
\]
Throughout, we let $(\Omega\,,\mathcal F\,,\mathrm P)$ be a probability space that is rich enough to support a centered Gaussian process $V:=\{V(\varphi)\}_{\varphi\in C_c^\infty((0,\infty)\times\mathbb{R})}$ with the formal covariance form given in (1.1). More precisely put, $V$ is a centered Gaussian process whose covariance function is described by $\mathrm{Cov}[V(\varphi)\,,V(\psi)]=\int_0^\infty\mathrm dt\int_{-\infty}^\infty\mathrm dx\int_{-\infty}^\infty\mathrm dy\ \varphi(t\,,x)\psi(t\,,y)\rho(x-y)$ for every $\varphi,\psi\in C_c^\infty((0\,,\infty)\times\mathbb{R})$. The stochastic process $V$ is sometimes called an isonormal, or iso-Gaussian, process. According to the classical Wiener theory, we may identify $V$ with a linear isometry from $C_c^\infty((0\,,\infty)\times\mathbb{R})$ into the space of all random variables in $L^2(\mathrm P)$. Thus, we may also think of $V$ as a Wiener integral. For this reason, we also write $V(\varphi)=\int_{\mathbb{R}_+\times\mathbb{R}}\varphi(t\,,x)V(t\,,x)\,\mathrm dt\,\mathrm dx$ for every $\varphi\in\mathcal H$.
As is usual, we may write $\int_{A\times B}\varphi(t\,,x)V(t\,,x)\,\mathrm dt\,\mathrm dx:=V(\varphi\mathbf 1_{A\times B})$, and continuously extend the domain of definition of $V$ to a random field, still written as $V$, defined on the full parameter space $\mathcal H$. It follows that $V$ is now a linear isometry from the full Hilbert space $\mathcal H$ into $L^2(\mathrm P)$.
Consider the special case that $\rho$ is a constant; that is, $\rho(x)=\rho(0)$ for all $x\in\mathbb{R}$. In that case, define $\widetilde V(\varphi)$ by the displayed formula, where $\int_0^\infty\varphi(t\,,x)\,\mathrm dW_t$ is a standard Wiener integral, with respect to a Brownian motion $W$, for every $x\in\mathbb{R}$. It is easy to see that $\{\widetilde V(\varphi)\}_{\varphi\in C_c^\infty(\mathbb{R}_+\times\mathbb{R})}$ is a centered Gaussian random field with covariance function
\[
\mathrm{Cov}\big[\widetilde V(\varphi)\,,\widetilde V(\psi)\big]=\rho(0)\int_0^\infty\mathrm dt\left(\int_{-\infty}^\infty\varphi(t\,,x)\,\mathrm dx\right)\left(\int_{-\infty}^\infty\psi(t\,,y)\,\mathrm dy\right)
\]
for all $\varphi,\psi\in C_c^\infty(\mathbb{R}_+\times\mathbb{R})$.
Thus, it follows that there exists a unique, continuous extension of $\widetilde V$ to a stochastic process $\{\widetilde V(\varphi)\}_{\varphi\in\mathcal H}$ whose law is the same as the law of $V$. In other words, whenever $\rho$ is a constant, we may, and will, assume that $V$ has the form given by (A.3). In this sense, we see that if $\rho$ is a constant, then we can write $V$ as
\[
V(t\,,x)\,\mathrm dt\,\mathrm dx=\sqrt{\rho(0)}\,\mathrm dW_t\,\mathrm dx,\tag{A.4}
\]
using informal infinitesimal notation.
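When $\rho$ is constant, the identity behind this representation can be checked by direct quadrature: the $\mathcal H$-norm (A.1) collapses to $\rho(0)\int_0^\infty\big(\int\varphi(t\,,x)\,\mathrm dx\big)^2\,\mathrm dt$, which is exactly the variance of the Wiener integral of $\sqrt{\rho(0)}\int\varphi(t\,,x)\,\mathrm dx$ against $\mathrm dW_t$. The test function below is an arbitrary illustrative choice.

```python
import math

rho0 = 2.0  # constant correlation: rho(x) = rho(0) for all x

def phi(t, x):
    """Hypothetical test function, compactly supported in [0,1] x [-1,1]."""
    return math.sin(math.pi * t) * (1.0 - x * x) if 0 <= t <= 1 and -1 <= x <= 1 else 0.0

# Midpoint quadrature grids.
nt, nx = 60, 80
ts = [(i + 0.5) / nt for i in range(nt)]; dt = 1.0 / nt
xs = [-1.0 + 2.0 * (j + 0.5) / nx for j in range(nx)]; dx = 2.0 / nx

# Left side: ||phi||_H^2 from (A.1) with rho constant.
lhs = sum(phi(t, x) * phi(t, y) * rho0 * dt * dx * dx for t in ts for x in xs for y in xs)

# Right side: Var[ sqrt(rho0) * int (int phi(t,x) dx) dW_t ] = rho0 * int (int phi dx)^2 dt.
rhs = rho0 * sum((sum(phi(t, x) for x in xs) * dx) ** 2 * dt for t in ts)

print(lhs, rhs)  # the two quadratures agree up to rounding
```

Both quadratures compute the same sum in different orders, so they agree to floating-point accuracy; their common value is close to the exact integral $\rho(0)\cdot\tfrac12\cdot(4/3)^2=16/9$ for this test function.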

A.2 The Walsh integral
The Walsh integral is an extension of the Wiener integral to the case that $\Phi$ is a predictable random field that satisfies (A.5); see Walsh [55] and especially Dalang [23] for details. Thanks to (2.3) and Tonelli's theorem, (A.5) is implied by the displayed integrability condition; this fact is used several times in the paper. As a noteworthy consequence of the construction of the Walsh integral, we can see that the displayed identity holds for all such random functions $\Phi$. This is the so-called Walsh isometry for Walsh stochastic integrals.
It is easy to see that if $\rho$ is a constant, then we may use the representation (A.3) in order to find the displayed identity, as long as, additionally, the following hold: (1) $t\mapsto\Phi(t\,,x)$ is a predictable process for every $x\in\mathbb{R}$; and (2) the Itô-integral map $x\mapsto\int_0^\infty\Phi(t\,,x)\,\mathrm dW_t$ is Lebesgue measurable.
Indeed, by a standard approximation procedure, it suffices to verify this assertion for processes of the form $\Phi(t\,,x)=X\mathbf 1_{(a,b)}(t)f(x)$, where $X\in L^2(\mathrm P)$ is measurable with respect to the $\sigma$-algebra generated by all random variables of the form $\int_{(0,a)\times\mathbb{R}}\varphi(t\,,x)V(t\,,x)\,\mathrm dt\,\mathrm dx$, as $\varphi$ ranges over $\mathcal H$, and $f\in C_c^\infty(\mathbb{R})$ is a nonrandom, smooth, and compactly-supported function. In that case, $V(\Phi)=XV(\mathbf 1_{(a,b)}\otimes f)$, and the assertion follows by direct inspection, thanks to (A.3) and the defining properties of the Walsh stochastic integral.

A.3 A stochastic Fubini theorem
The stochastic Fubini theorem is used a number of times in this paper. We cite a suitable version of it here, without proof. It might help to recall from (A.1) the space $\mathcal H$, and also the fact that $\Phi[y]$ refers to the function $(t\,,x)\mapsto\Phi(t\,,x\,,y)$ for every $y\in\mathbb{R}$.
The stochastic integral in [25] is defined in the setting of cylindrical Wiener processes, which turns out to coincide with our Walsh-integral setting. Thus, we have the following:

A.4 Elements of Malliavin calculus
In this subsection we outline the setup of Malliavin calculus. For a detailed treatment of this material, see Nualart [51]. Let $F$ be a smooth and cylindrical random variable. Let us remark that in our context, that is, when $V(t\,,x)$ is a Gaussian noise that is white in time and has a certain spatial covariance $\rho$, if $\mathbb{R}_+\times\mathbb{R}\ni(t\,,x)\mapsto u(t\,,x)$ is an adapted stochastic process such that
\[
\mathrm E\int_0^\infty\int_{-\infty}^\infty\int_{-\infty}^\infty u(t\,,y)u(t\,,z)\rho(y-z)\,\mathrm dy\,\mathrm dz\,\mathrm dt<\infty,
\]
then $u$ belongs to the domain of $\delta$, and $\delta(u)$ coincides with the Walsh integral: $\delta(u)=\int_{\mathbb{R}_+\times\mathbb{R}}u(t\,,x)V(t\,,x)\,\mathrm dt\,\mathrm dx$.