The continuum parabolic Anderson model with a half-Laplacian and periodic noise

We construct solutions of a renormalized continuum fractional parabolic Anderson model, formally given by $\partial_t u=-(-\Delta)^{1/2}u+\xi u$, where $\xi$ is a periodic spatial white noise. To be precise, we construct limits as $\varepsilon\to 0$ of solutions to $\partial_t u_\varepsilon=-(-\Delta)^{1/2}u_\varepsilon+(\xi_\varepsilon-C_\varepsilon)u_\varepsilon$, where $\xi_\varepsilon$ is a mollification of $\xi$ at scale $\varepsilon$ and $C_\varepsilon$ is a logarithmically diverging renormalization constant. We use a simple renormalization scheme based on that of Hairer and Labb\'e, "A simple construction of the continuum parabolic Anderson model on $\mathbf{R}^{2}$."


INTRODUCTION
Let $\Lambda=-(-\Delta)^{1/2}$ be the half-Laplacian on $\mathbf{R}$. It is given by the formula
$$\Lambda f(x)=\frac{1}{\pi}\,\mathrm{p.\,v.}\int_{\mathbf{R}}\frac{f(y)-f(x)}{(y-x)^{2}}\,\mathrm{d}y \tag{1.1}$$
whenever this is well defined. Here and throughout the paper, $\mathrm{p.\,v.}\int_{\mathbf{R}}$ will denote the principal value integral: if $g$ is a function with a singularity at $x$, then $\mathrm{p.\,v.}\int_{\mathbf{R}}g(y)\,\mathrm{d}y=\lim_{\delta\downarrow0}\int_{|y-x|>\delta}g(y)\,\mathrm{d}y$. Also, let $\xi$ be a periodic Gaussian spatial white noise on $\mathbf{R}$ of period $L\in(0,\infty)$. The covariance kernel of $\xi$ is thus given by $\mathbf{E}\,\xi(x)\xi(y)=\sum_{k\in\mathbf{Z}}\delta(x-y-kL)$. We are interested in solutions to the fractional parabolic Anderson model formally given by
$$\partial_t u=\Lambda u+\xi u. \tag{1.2}$$
We would expect solutions of (1.2) to model scaling limits of the parabolic Anderson model on the lattice with long-range jumps. This lattice model, with an aperiodic non-Gaussian noise, was previously studied in [14]. We refer, for example, to [13] for additional background on the parabolic Anderson model. Straightforward heuristics indicate that (1.2) cannot be interpreted directly. Indeed, the white noise $\xi$ has (Hölder) regularity just below $-1/2$, and thus the solution to the linearization of (1.2) around $u\equiv1$ has regularity just below $1/2$, since we gain one derivative by inverting the half-Laplacian. So we can expect the regularity of $u$ to be at most just less than $1/2$. Thus the product of $\xi$ and $u$ is undefined, since the sum of their regularities is negative (albeit just barely). This is what makes the power $1/2$ on the Laplacian interesting: it is the largest power for which the product in (1.2) is ill-defined.
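On the torus, the half-Laplacian acts as the Fourier multiplier $-|k|$, which gives a quick numerical sanity check of (1.1). The following sketch is purely illustrative (the grid size and test functions are arbitrary choices, not from the paper): it applies $\Lambda$ spectrally and verifies the eigenfunction relation $\Lambda\cos(kx)=-k\cos(kx)$.

```python
import numpy as np

def half_laplacian_periodic(f_vals, period):
    """Apply Lambda = -(-Delta)^{1/2} to samples of a periodic function
    via its Fourier-multiplier representation: multiplication by -|k|."""
    n = len(f_vals)
    k = 2 * np.pi * np.fft.fftfreq(n, d=period / n)  # wavenumbers
    return np.fft.ifft(-np.abs(k) * np.fft.fft(f_vals)).real

n, period = 256, 2 * np.pi
x = np.linspace(0, period, n, endpoint=False)
for k in (1, 3):
    lhs = half_laplacian_periodic(np.cos(k * x), period)
    rhs = -k * np.cos(k * x)  # eigenfunction relation for the multiplier -|k|
    assert np.max(np.abs(lhs - rhs)) < 1e-10
```

The same multiplier picture explains the regularity counting above: inverting $-|k|$ gains exactly one derivative.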
Since abstract theory does not allow us to interpret the problem as stated, we turn our attention to a regularized problem and try to pass to a limit as the regularization is removed. We will see that a renormalization is necessary to obtain a finite limit. Fix a mollifier $\rho\in C_c^\infty$ so that $\int\rho=1$. Define, for $\varepsilon>0$, $\rho_\varepsilon(x)=\varepsilon^{-1}\rho(\varepsilon^{-1}x)$, and define $\xi_\varepsilon=\rho_\varepsilon*\xi$, where $*$ denotes spatial convolution. Also fix a constant $C_\varepsilon$, depending on $\varepsilon$. Then we consider the problem
$$\partial_t u_\varepsilon=\Lambda u_\varepsilon+(\xi_\varepsilon-C_\varepsilon)u_\varepsilon, \tag{1.3}$$
$$u_\varepsilon(0,\cdot)=u. \tag{1.4}$$
This equation can be solved using standard techniques because $\xi_\varepsilon\in C^\infty$ for all $\varepsilon>0$. Our goal will be to pass to the limit as $\varepsilon\to0$. To state our main theorem, we first define the Banach space in which this convergence takes place. If $Y$ is a Banach space, we define for $\kappa\in\mathbf{R}$ and $T>0$ the Banach space $X_T^\kappa(Y)\subset C_{\mathrm{loc}}((0,T],Y)$ to be the space of functions $f$ such that the norm
$$\|f\|_{X_T^\kappa(Y)}=\sup_{t\in(0,T]}t^{1-\kappa}\|f(t)\|_Y$$
is finite.

Theorem 1.1. There is a choice of deterministic constants $C_\varepsilon$, $\varepsilon\in(0,1]$ (explicitly defined in (5.1) below), so that the following holds. For any $\kappa\in(0,1/4)$, if $u\in C^{-1/2-\kappa}$, then for each $\varepsilon\in[0,1]$ there is a random $u_\varepsilon\in C_{\mathrm{loc}}((0,\infty),C^{1/2-\kappa})$ so that whenever $\varepsilon>0$, $u_\varepsilon$ is a mild solution to (1.3)--(1.4), and moreover for every $T>0$, $u_\varepsilon\to u_0$ in probability in $X_T^\kappa(C^{1/2-\kappa})$ as $\varepsilon\to0$. Finally, we have a constant $C>0$ so that, for all $\varepsilon\in(0,1]$,
$$|C_\varepsilon|\le C(1+|\log\varepsilon|). \tag{1.5}$$

The model (1.2) has similar scaling properties to the ordinary continuum parabolic Anderson model in two spatial dimensions,
$$\partial_t u=\Delta u+\xi u,\qquad x\in\mathbf{R}^2. \tag{1.6}$$
That model also has a just-barely-ill-defined product, and it also requires a logarithmic renormalization. Solutions to (1.6) were constructed independently in [10, 9] using the theories of regularity structures and paracontrolled distributions, respectively. An elementary approach that also works on the whole space was carried out by Hairer and Labbé in [11], and some properties of solutions were derived in [8, 7]. The more difficult case of (1.6) in three spatial dimensions was tackled in [12]. On the other hand, singular stochastic partial differential equations modeling long-range jump processes, thus involving
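The mollification $\xi_\varepsilon=\rho_\varepsilon*\xi$ is easy to realize on a grid, and doing so makes the need for a diverging renormalization plausible: the pointwise variance of $\xi_\varepsilon$ blows up like $\varepsilon^{-1}\int\rho^2$ as $\varepsilon\to0$. The sketch below is an illustration only; the period, grid size, and the triangular mollifier are arbitrary choices, not taken from the paper.

```python
import numpy as np

L, n, eps = 1.0, 4096, 0.05
h = L / n
x = np.arange(n) * h

rho = lambda y: np.maximum(0.0, 1.0 - np.abs(y))  # int rho = 1, int rho^2 = 2/3
# periodized, rescaled mollifier rho_eps(x) = eps^{-1} rho(x / eps)
dist = np.minimum(x, L - x)                       # periodic distance to 0
rho_eps = rho(dist / eps) / eps

# one realization of the discretized periodic noise: iid N(0, 1/h) per cell,
# so that pairings with test functions have the white-noise variance
rng = np.random.default_rng(0)
xi = rng.standard_normal(n) / np.sqrt(h)
xi_eps = np.fft.ifft(np.fft.fft(xi) * np.fft.fft(rho_eps)).real * h  # rho_eps * xi

# Var xi_eps(x) = h * sum rho_eps^2, which approximates eps^{-1} int rho^2:
var_exact = h * np.sum(rho_eps ** 2)
assert abs(var_exact - 2.0 / (3.0 * eps)) < 0.05 * var_exact
```

The variance identity in the last lines is what produces the logarithmic divergence of $C_\varepsilon$ once the half-Laplacian Green's function is brought into the second-moment computations.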
fractional Laplacian terms, have previously been considered in [2, 3]. Our approach to proving Theorem 1.1 closely follows the strategy of [11], thus avoiding the use of regularity structures or paracontrolled distributions. Similar strategies were also employed for the random Schrödinger equation in [5, 4]. As in [11], we perform a change of variables in (1.3) by writing $u_\varepsilon=e^{S_\varepsilon}v_\varepsilon$, where $e^{S_\varepsilon}$ is the exponential of an approximate solution to the time-independent problem, and then write a PDE for $v_\varepsilon$. (See Section 3.) The coefficients of the PDE for $v_\varepsilon$ converge, in appropriate spaces, as $\varepsilon\to0$. One of these converging "coefficients" is in fact a nonlocal operator. Proving the convergence requires new estimates, which we carry out in Section 5 using some purely analytic bounds that we prove in Section 4. Then the continuity of the PDE for $v_\varepsilon$ shows that $v_\varepsilon\to v$, where $v$ solves the limiting PDE. This is the main content of Section 6 and is essentially the same as the argument of [11], as the estimates we obtain by this point are analogous. Inverting the change of variables then shows that $u_\varepsilon$ converges to $e^{S}v$.
In this paper, we restrict ourselves to the case of periodic noise. The periodicity is used so that the noise is bounded (as a distribution in $C^{-1/2-\kappa}$ for any $\kappa>0$) uniformly in space. It is not clear whether or how solutions to (1.2) can be constructed with aperiodic noise. In particular, the weighted-space approach of [11] does not straightforwardly generalize to our setting because the Cauchy kernel decays only algebraically in space, in contrast to the Gaussian decay of the heat kernel.
Acknowledgments. We thank Lenya Ryzhik for suggesting the problem and much useful advice, as well as Yu Gu, Leonid Mytnik, and Weijun Xu for interesting conversations. We are also grateful to Leandro Chiarini for pointing out a subtlety in the proof of Lemma 2.4.

PRELIMINARIES AND NOTATION
We will often work with constants, all denoted $C$, whose value may change from line to line in a computation. This does not apply to the renormalization constant $C_\varepsilon$, which will be fixed in (5.1) below.
Throughout the paper, we will let δ denote a delta distribution at 0. We will often work with the approximate Green's function of the fractional Laplacian, which we define in the following lemma.
and there is a constant $C$ so that, for all $x\in\mathbf{R}$, we have (2.2). Since $G$ is smooth, $F$ is smooth as well, so to prove (2.2) we only need to consider the behavior of $F$ at infinity. Now for $|x|>1$, the integral expression for $F(x)$ is finite by (2.1) and the fact that $G$ is smooth and hence bounded on compact sets; a similar argument applies to $F'$. This proves (2.2).
2.1. Hölder spaces. We will often use the $\alpha$-Hölder spaces, given as usual by the norm
$$\|u\|_{C^\alpha}=\|u\|_{L^\infty}+\sup_{x\ne y}\frac{|u(x)-u(y)|}{|x-y|^\alpha}$$
for all $\alpha\in(0,1)$. We will also need to use Hölder spaces with negative Hölder exponent.
For $x\in\mathbf{R}$, $\lambda\in(0,1]$, and a test function $\eta\in C_c^1$, write $\eta_x^\lambda(y)=\lambda^{-1}\eta(\lambda^{-1}(y-x))$. Then we define, for all $\alpha\in(-1,0)$, the $\alpha$-Hölder norm of a distribution $u$ by
$$\|u\|_{C^\alpha}=\sup_{x\in\mathbf{R}}\sup_{\lambda\in(0,1]}\sup_{\eta}\lambda^{-\alpha}|u(\eta_x^\lambda)|,$$
where the last supremum is over test functions $\eta\in C_c^1$ supported in the unit ball with $\|\eta\|_{C^1}\le1$, and let $C^\alpha$ be the Banach space of distributions such that this norm is finite. We recall that $C^\alpha$ is equivalent to the Besov space $B_{\infty,\infty}^\alpha$ (see [1] for the definition), and refer to [11, Section 2], [10, Section 3], or [6] for more background on negative Hölder spaces in the context of stochastic PDEs. The following wavelet characterization of the negative Hölder spaces, giving a countable characterization of the spaces, will be useful in the analysis.
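For intuition, the positive-exponent norm above is easy to approximate on sampled data. The following sketch is purely illustrative (the grid and the test function are arbitrary choices): it estimates the $\alpha$-Hölder seminorm of $f(x)=\sqrt{x}$ on $[0,1]$, whose $1/2$-Hölder seminorm is exactly $1$, attained against the endpoint $0$.

```python
import numpy as np

def holder_seminorm(x, f_vals, alpha):
    """Approximate sup_{i != j} |f(x_i) - f(x_j)| / |x_i - x_j|^alpha on a grid."""
    dx = np.abs(x[:, None] - x[None, :])
    df = np.abs(f_vals[:, None] - f_vals[None, :])
    mask = dx > 0
    return np.max(df[mask] / dx[mask] ** alpha)

x = np.linspace(0.0, 1.0, 401)
f = np.sqrt(x)
# |sqrt(x) - sqrt(y)| <= |x - y|^{1/2}, with equality against y = 0:
assert abs(holder_seminorm(x, f, 0.5) - 1.0) < 1e-9
# but sqrt is not Lipschitz: the alpha = 1 quotient blows up near 0
assert holder_seminorm(x, f, 1.0) > 10.0
```

The negative-exponent norm replaces difference quotients by pairings against rescaled test functions, which is what the wavelet characterization below makes countable.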

Proposition 2.2 ([14]). There are compactly supported functions $\psi,\varphi\in C_c^1$ so that for any $\alpha\in(-1,0)$ we have a constant $C$ so that, using the notation introduced above, the norm $\|\cdot\|_{C^\alpha}$ is equivalent to a norm computed from the pairings of $u$ with the translated and dyadically rescaled copies of $\psi$ and $\varphi$.

We will also use the following standard statement about multiplication of elements of Hölder spaces.

Proposition 2.3 ([1, Theorem 2.52]). If $\alpha<\beta$ and $\alpha+\beta>0$, then multiplication of functions extends to a continuous bilinear map $C^\alpha\times C^\beta\to C^\alpha$.

Finally, we will use the following standard Schauder-type estimates.
Lemma 2.4. Let $P_t$ be the Cauchy kernel
$$P_t(x)=\frac{t}{\pi(t^2+x^2)}.$$
(This is the propagation kernel for the fractional heat equation $\partial_t u=\Lambda u$.) For any $T<\infty$ and $\alpha<\beta$, there is a $C<\infty$ so that for any function $f\in C^\alpha$ and any $t\in(0,T]$, we have
$$\|P_t*f\|_{C^\beta}\le Ct^{\alpha-\beta}\|f\|_{C^\alpha}.$$

Proof. This follows from a scaling argument analogous to that used in [11, Lemma 2.8]. For completeness, we present the argument for the model case. Write, for $t\ge0$ and $x\in\mathbf{R}$, the convolution $P_t*f(x)$ explicitly. There is a constant $C<\infty$ so that for all $t\in(0,T]$ we have the required kernel estimate; on the other hand, the remaining term is bounded directly, and combining the two bounds completes the proof.

Proof. If $\eta$ is a smooth, positive function, supported on $[-1/2,1/2]$, which is $1$ in a neighborhood of $0$, then the operator differs from convolution with some smooth, compactly supported function $k$, so it is sufficient to prove the bound for the localized part. To do this, we note first that the relevant difference involves $q_{x,x',y}(z)=\eta(x/y-z)-\eta(x'/y-z)$, where we use the common abuse of notation in which the integrals over $z$ are in fact pairings with the distribution $f$. If $y>2|x-x'|$, the difference is controlled directly. On the other hand, if $y\le2|x-x'|$, then $q_{x,x',y}$ can be written as the sum of two $C^1$ functions with suitably small supports. Therefore, we have, for a constant $C$ depending on $\eta$ but not on $f$, the desired Hölder bound. The necessary bound on $|(L*f)(x)|$ is easier, so we omit it.
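On the torus, $P_t$ is the Fourier multiplier $e^{-t|k|}$, which makes the semigroup property and the smoothing mechanism behind Lemma 2.4 easy to check numerically. The sketch below is an illustration only (grid parameters and test data are arbitrary): it verifies $P_t*P_s=P_{t+s}$ and the modewise decay $e^{-tm}$ of $\cos(mx)$, the spectral form of the regularity gain.

```python
import numpy as np

n, period = 512, 2 * np.pi
x = np.linspace(0, period, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=period / n)

def cauchy_semigroup(f_vals, t):
    """P_t * f on the torus: the half-Laplacian semigroup is the Fourier
    multiplier exp(-t |k|)."""
    return np.fft.ifft(np.exp(-t * np.abs(k)) * np.fft.fft(f_vals)).real

f = np.sign(np.sin(x))  # a rough (discontinuous) initial condition
# semigroup property: P_t P_s = P_{t+s}
a = cauchy_semigroup(cauchy_semigroup(f, 0.3), 0.2)
b = cauchy_semigroup(f, 0.5)
assert np.max(np.abs(a - b)) < 1e-12
# smoothing: the mode cos(m x) decays like exp(-t m), faster for higher m
for m in (1, 4):
    out = cauchy_semigroup(np.cos(m * x), 0.5)
    assert abs(np.max(np.abs(out)) - np.exp(-0.5 * m)) < 1e-12
```

The algebraic (rather than Gaussian) spatial decay of $P_t(x)$ itself is what obstructs the weighted-space approach mentioned in the introduction.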

THE CHANGE OF VARIABLES
In this section we explain the key change of variables that we perform on (1.3). This change of variables should be seen as an analogue for the fractional Laplacian of the change of variables performed in [11, p. 3]. The advantage of the change of variables is that the coefficients of the new equation converge as $\varepsilon\to0$, and so an equation is obtained for the limit.

Lemma 3.1. Suppose that $f\in C^\infty$ and $A\in\mathbf{R}$ and that $u$ solves the equation
$$\partial_t u=\Lambda u+(f-A)u. \tag{3.1}$$
Let $S=-G*f$, where $G$ is defined as in Lemma 2.1. If we put $u=e^Sv$, then $v$ satisfies the PDE (3.2), where the nonlocal operator is given by
$$\Xi w(x)=\frac{1}{\pi}\,\mathrm{p.\,v.}\int_{\mathbf{R}}\frac{\big(e^{S(y)-S(x)}-1\big)\big(w(y)-w(x)\big)}{(y-x)^2}\,\mathrm{d}y.$$

Proof. It is straightforward to verify the claimed identities by expanding (1.1) applied to $e^Sv$.

In our case of interest, we will take $u=u_\varepsilon$, $f=\xi_\varepsilon$, and $A=C_\varepsilon$, for $\varepsilon>0$, in (3.1). This yields the problem (3.5), with coefficients defined in (3.6). Here and in the sequel we use the notation $G_\varepsilon=G*\rho_\varepsilon$. We note that the definitions of $S_\varepsilon$ and $\Xi_\varepsilon$ make sense for $\varepsilon=0$ as well. We define $S=S_0$ and $\Xi=\Xi_0$.
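The algebraic mechanism behind the change of variables can be sketched as follows (our derivation, consistent with (1.1) and the definition of $\Xi$, rather than the paper's displayed computation). Writing $\Delta S=S(y)-S(x)$ and splitting the difference quotient of $e^Sv$:

```latex
% Splitting the difference quotient of e^S v; write \Delta S = S(y) - S(x):
e^{S(y)}v(y) - e^{S(x)}v(x)
  = e^{S(x)}\Big[\bigl(v(y)-v(x)\bigr)
    + \bigl(e^{\Delta S}-1\bigr)v(x)
    + \bigl(e^{\Delta S}-1\bigr)\bigl(v(y)-v(x)\bigr)\Big].
% Dividing by \pi (y-x)^2 and integrating in the principal-value sense:
\Lambda(e^{S}v)(x)
  = e^{S(x)}\Big[\Lambda v(x)
    + v(x)\,e^{-S(x)}\,\Lambda e^{S}(x)
    + \Xi v(x)\Big].
```

The first term is the linear part of the equation for $v$, the second is the local coefficient whose small-$\varepsilon$ behavior forces the renormalization, and the third is the nonlocal operator $\Xi$.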

ANALYTIC ESTIMATES
In this section we derive some purely analytic estimates that will be useful in controlling the quantities defined in (3.6). Following [11, Section 3] or [10, Section 10.3], define the norm (4.1), for any smooth function $K$, where $K^{(k)}$ denotes the $k$th derivative of $K$. We note in particular that, with $G$ defined as in Lemma 2.1, the bound (4.2) holds for all $\kappa>0$ and all $m\in\mathbf{N}$. We also define the notation (4.3); quantities of this form will arise in the expressions for second moments of the quantities defined in (3.6).

STABILITY OF THE COEFFICIENTS OF THE EQUATION FOR v ε
In this section we prove that the coefficients of the equation (3.5) are stable as we eliminate the spatial mollification of the noise.We will use the following two basic lemmas.
Lemma 5.1. For any $\kappa>0$, we have $\xi_\varepsilon\to\xi$ in probability in $C^{-1/2-\kappa}$ as $\varepsilon\to0$.

Proof. This follows from a simple estimate using Proposition 2.2 as in [11].

For $\varepsilon\ge0$, define $S_\varepsilon(y,x)=S_\varepsilon(y)-S_\varepsilon(x)$. (Norms of the form $\|S_\varepsilon\|_\bullet$ will still refer to the one-variable function $S_\varepsilon$.) In the remainder of this section, we will consider the coefficients of (3.5) in turn. The stability of $F*\xi_\varepsilon$ will come directly from the decay (2.2) of $F$ and $F'$. We will consider the local term $Z_\varepsilon$, which requires renormalization, in Subsection 5.1. We bound the size of the renormalization constant $C_\varepsilon$ in Proposition 5.9. Then we will show the stability of the nonlocal term $\Xi_\varepsilon$ in Subsection 5.2.

5.1. Stability of $Z_\varepsilon$. For $\varepsilon>0$, we fix the renormalization constant by (5.1), with the second equality there (for any $x\in\mathbf{R}$) holding by the space-stationarity of the noise. For $\eta\in C_c^1$ and $\varepsilon\ge0$, put $Z_\varepsilon(\eta)$ as below, so $Z_\varepsilon$ is a distribution. We will show that $Z_\varepsilon\in C^{-\kappa}$ for any $\kappa>0$ and $\varepsilon\ge0$ in Proposition 5.3 below. We note that for $\varepsilon>0$, this agrees with the definition (3.6) (with the choice (5.1) of $C_\varepsilon$), so we are simply extending (3.6) to $\varepsilon=0$. We split $Z_\varepsilon$ into two parts. For $\varepsilon\ge0$ and $\eta\in C_c^1$ define $U_\varepsilon(\eta)$ by (5.3), and for $\varepsilon\ge0$ and $x\in\mathbf{R}$ define $V_\varepsilon(x)$ by (5.4). Now evidently $Z_\varepsilon=U_\varepsilon+V_\varepsilon$. The main goal of this subsection is to prove the following.
To prove Proposition 5.3, we will show that $U_\varepsilon$ and $V_\varepsilon$ are both stable in $C^{-\kappa}$ as $\varepsilon\to0$, which will imply the same for $Z_\varepsilon$. The advantage of the change of variables carried out in Section 3 is that the expressions for the coefficients of (3.5), and in fact (5.3) and (5.4), make sense at $\varepsilon=0$.
5.1.1. Stability of $V_\varepsilon$. By Taylor's theorem, we have a constant $C$ so that (5.5) holds. Thus, for all $\varepsilon\ge0$ and all $x,y\in\mathbf{R}$, the integrand defining $V_\varepsilon$ is dominated, and the last integral is finite and independent of $x$ as long as $\kappa<1/2$. This implies that $V_\varepsilon\in L^\infty$. Also, by the mean value theorem and (5.5), we have a pointwise bound for all $x,y\in\mathbf{R}$. Therefore, for all $x\in\mathbf{R}$, $|V_\varepsilon(x)-V(x)|$ is bounded by an integral weighted by the difference of $S_\varepsilon$ and $S$.
The integral is bounded independently of $x$, and by Lemma 5.2, the relevant norm of $S_\varepsilon-S$ converges to $0$ in probability as $\varepsilon\to0$. This proves that $V_\varepsilon\to V$ in probability in $L^\infty$ as $\varepsilon\to0$.
5.1.2. Stability of $U_\varepsilon$. Now we show the stability of $U_\varepsilon$. Since $U_\varepsilon(\eta)$ is defined as an integral over squares of Gaussian random variables (elements of the second Wiener chaos), we can use moment estimates to control its regularity and establish its stability.
Lemma 5.5. For each $\kappa>0$, we have a constant $C$ so that for all $\varepsilon\in[0,1]$ and all test functions $\eta$, the moment bound (5.6) holds, where $H_\varepsilon$ is built from $G_\varepsilon$ via the notation (4.3).

Proof. We can compute, from (5.3), the second moment of $U_\varepsilon(\eta)$. We have by the Isserlis theorem that this moment can be expanded into products of covariances. Also, from (4.2) and [10, (10.12)], we see that $\|H_\varepsilon-H_\varepsilon(0)\|_{1-\kappa/2;m}\le C$, where $C<\infty$ is a constant independent of $\varepsilon$. But the operator does not see constants, so applying Lemma 4.3, we obtain the required bound. Then the conclusion follows by rescaling.
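The Isserlis (Wick) theorem used here reduces fourth moments of jointly Gaussian variables to covariances: $\mathbf{E}[X^2Y^2]=\mathbf{E}[X^2]\mathbf{E}[Y^2]+2(\mathbf{E}[XY])^2$. A quick Monte Carlo check (illustrative only; the covariance matrix and sample size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 0.7],
                [0.7, 1.0]])  # an arbitrary covariance matrix
xy = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)
x, y = xy[:, 0], xy[:, 1]

empirical = np.mean(x**2 * y**2)
# Isserlis: E[X^2 Y^2] = E[X^2] E[Y^2] + 2 E[XY]^2
isserlis = cov[0, 0] * cov[1, 1] + 2 * cov[0, 1] ** 2
assert abs(empirical - isserlis) / isserlis < 0.05
```

In the proofs, the same identity is applied with $X$ and $Y$ replaced by pairings of the noise against translates of the mollified Green's function, which is how the kernels $H_\varepsilon$ enter the second-moment bounds.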
Proof. We note that $U_\varepsilon$ is an element of the second Wiener chaos by definition. By Lemma 5.5 and the equivalence of moments of elements of finite Wiener chaoses (as stated for example in [10, Lemma 10.5]), for each $\kappa>0$ and $p\in[1,\infty)$ there is a constant $C=C(p,\kappa)$, depending only on $p$ and $\kappa$, so that the corresponding $p$th-moment bound holds. Choose $p>2(1/\kappa+1)$, so the last sum is finite. A simpler computation bounds $\mathbf{E}\sup$ of the remaining term.

Lemma 5.7. For any $\kappa>0$ and $R>0$, we have a constant $C$ so that, for any $\varepsilon\in(0,1]$, the bound (5.7) holds.

Proof. We have, by the Isserlis theorem, an expansion of the relevant second moment into products of covariances. Combining this with (5.8) yields an integral expression, (5.9), with kernel weights $y^{-2}z^{-2}\eta(x)\eta(w)$. By [10, Lemma 10.17] and (4.2), for any $m\in\mathbf{Z}_{\ge0}$ we have a constant $C$, independent of $\varepsilon$, controlling the relevant kernels; combining with [10, (10.12)], we thus obtain the three inequalities needed below. The operator does not see constants, so these three inequalities, (5.9), and Lemma 4.3 imply the desired moment bound. Then (5.7) follows by substituting $\kappa\to\kappa/6$ and rescaling.
Proof. As in Corollary 5.6, $U_\varepsilon-U$ is an element of the second Wiener chaos by definition. By Lemma 5.7 and the equivalence of moments of elements of finite Wiener chaoses, for each $\kappa>0$ and $p\in[1,\infty)$ there is a constant $C=C(p,\kappa)$, depending only on $p$ and $\kappa$, so that the corresponding moment bound holds. Take $p>2(1/\kappa+1)$, so the last sum is finite. A simpler computation bounds $\mathbf{E}\sup$ of the remaining term by a constant $C$ not depending on $\varepsilon$, which means that $U_\varepsilon\to U$ in probability by Markov's inequality.
The results of the last two subsections are now enough to prove Proposition 5.3.
Proof of Proposition 5.3. Since $Z_\varepsilon=U_\varepsilon+V_\varepsilon$ for all $\varepsilon\ge0$, the fact that $Z_\varepsilon\in C^{-\kappa}$ almost surely is an immediate consequence of Lemma 5.4 and Corollary 5.6, and the convergence is an immediate consequence of Lemma 5.4 and Lemma 5.7.

5.1.3. The renormalization constant. We now estimate the size of the constant $C_\varepsilon$, proving the bound (1.5).

Proposition 5.9. There is an absolute constant $C$ so that, for all $\varepsilon\in(0,1]$, $|C_\varepsilon|\le C(1+|\log\varepsilon|)$.

Proof. We recall the definition (5.1) of $C_\varepsilon$. By (5.6), we have (with $H_\varepsilon$ as defined there) a decomposition (5.10), where $C$ represents a term which is bounded independently of $\varepsilon$. The third term in (5.10) is bounded independently of $\varepsilon$, while the second is logarithmic in $\varepsilon$.

5.2. Stability of $\Xi_\varepsilon$. In this section, we show that $\Xi_\varepsilon$ (defined in (3.6)) is stable as $\varepsilon\to0$.
Proof. We have a uniform bound on the supremum, so by the triangle inequality the difference is controlled, and the integral is bounded independently of $x$.

THE FIXED-POINT ARGUMENT
Fix $\kappa\in(0,1/4)$ and $T>0$, and define $X_T^\kappa(Y)$ for any Banach space $Y$ as in the introduction. We will construct a solution to (3.2) in the space $X_T^\kappa(C^{1/2+\kappa})$ using a fixed-point iteration scheme. For $g\in C^{-\kappa}$, $\Psi\in B(C^{1/2+\kappa},C^{-\kappa})$, and $v\in C^{-1/2+2\kappa}$, define the affine operator $M_{g,\Psi,v}$ on $X_T^\kappa(C^{1/2+\kappa})$ (we will justify that it maps this space to itself in Corollary 6.2 below) by
$$M_{g,\Psi,v}f(t)=P_t*v+\int_0^t P_{t-s}*\big(gf(s)+\Psi f(s)\big)\,\mathrm{d}s=P_t*v+L_{g,\Psi}f(t). \tag{6.1}$$
We note that $v_\varepsilon$ solving (3.5) is equivalent to $v_\varepsilon$ being a fixed point of the map $M_{Z_\varepsilon-F*\xi_\varepsilon,\Xi_\varepsilon,v_\varepsilon}$. Thus our goal is to show that there exists a unique such fixed point. We start by bounding the operator norm of $L_{g,\Psi}$.
Proposition 6.1. We have a constant $C<\infty$ so that if $g\in C^{-\kappa}$ and $\Psi\in B(C^{1/2+2\kappa},C^{-\kappa})$, then $L_{g,\Psi}\in B(X_T^\kappa(C^{1/2+\kappa}))$, with an operator norm bound that vanishes as $T\downarrow0$. We also have a constant $C$ controlling the difference $L_{g,\Psi}-L_{\tilde g,\tilde\Psi}$ in terms of $g-\tilde g$ and $\Psi-\tilde\Psi$.

Proof. We estimate the integrand of $L_{g,\Psi}f(t)$, with the first inequality by Proposition 2.3. Substituting these estimates into (6.2) and integrating, we obtain the claimed bounds.

This implies that $M_{g,\Psi,v}$ is a contraction map if $T$ is chosen sufficiently small:

Corollary 6.2. There is a constant $C<\infty$ so that the following holds: for any $g$, $\Psi$, and $v$ as above, $M_{g,\Psi,v}$ maps $X_T^\kappa(C^{1/2+\kappa})$ to itself, and is a contraction when $T$ is sufficiently small.

Proof. We have $L_{g,\Psi}f\in X_T^\kappa(C^{1/2+\kappa})$ by Proposition 6.1. Also, by Lemma 2.4, we have a constant $C$ so that $\|P_t*v\|_{C^{1/2+\kappa}}\le Ct^{-1+\kappa}\|v\|_{C^{-1/2+2\kappa}}$. This implies that $t\mapsto P_t*v$ is an element of $X_T^\kappa(C^{1/2+\kappa})$ as well. Therefore, $M_{g,\Psi,v}f\in X_T^\kappa(C^{1/2+\kappa})$. Since $M_{g,\Psi,v}f-M_{g,\Psi,v}\tilde f=L_{g,\Psi}f-L_{g,\Psi}\tilde f$, the continuity and contraction statements follow immediately from Proposition 6.1.
We can now apply the contraction mapping principle to construct solutions as fixed points of $M_{g,\Psi,v}$.

Lemma 6.3. For any $T<\infty$, the map $M_{g,\Psi,v}$ has a unique fixed point $V_T(g,\Psi,v)$ in $X_T^\kappa(C^{1/2+\kappa})$.

Proof. This holds for $T<T_0(g,\Psi)$ by Corollary 6.2. As $T_0(g,\Psi)$ does not depend on the initial condition $v$, the construction can be extended to larger $T$ as explained in the proof of [11, Proposition 4.1].
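The fixed-point construction can be mimicked numerically for smooth data. The sketch below is an illustration under simplifying assumptions, not the paper's scheme: a smooth bounded potential g, no nonlocal term, a spectral discretization of $P_t$, and a left-endpoint Riemann sum in time. It iterates the mild-solution map $f\mapsto P_t*v+\int_0^t P_{t-s}*(g\,f(s))\,\mathrm{d}s$ and checks that the iterates contract to a fixed point.

```python
import numpy as np

n, period, T, M = 64, 2 * np.pi, 0.5, 80
x = np.linspace(0, period, n, endpoint=False)
k = np.abs(2 * np.pi * np.fft.fftfreq(n, d=period / n))
dt = T / M
t = np.arange(M + 1) * dt

def P(fv, s):
    """Cauchy semigroup P_s * f via the Fourier multiplier exp(-s|k|)."""
    return np.fft.ifft(np.exp(-s * k) * np.fft.fft(fv)).real

g = 0.5 * np.cos(x)      # a smooth bounded potential (arbitrary choice)
v = np.exp(np.sin(x))    # a smooth initial condition (arbitrary choice)

def picard_map(f):
    """f -> (t_j -> P_{t_j}*v + sum_{i<j} dt * P_{t_j-t_i}*(g f(t_i)))."""
    out = np.empty_like(f)
    for j in range(M + 1):
        acc = P(v, t[j])
        for i in range(j):
            acc = acc + dt * P(g * f[i], t[j] - t[i])
        out[j] = acc
    return out

f = np.zeros((M + 1, n))
diffs = []
for _ in range(25):
    f_new = picard_map(f)
    diffs.append(np.max(np.abs(f_new - f)))
    f = f_new
assert diffs[-1] < 1e-10          # the iterates have converged
assert diffs[5] < 0.5 * diffs[1]  # geometric contraction of the map
```

Since the kernel is positive with unit mass, the sup-norm contraction factor here is at most $T\max|g|=0.25$, mirroring the small-$T$ contraction in Corollary 6.2.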
Thus for all $t\in(0,T]$ we have the required estimate. This shows that $V_T$ is continuous on $A_M$ when $T<T_0(M,M)$.
By induction, this implies that $V_T$ is continuous on $A_M$ for any $T$. Since this is true for any $M$, we see that, for any $T$, $V_T$ is in fact continuous on $C^{-\kappa}\times B(C^{1/2+\kappa},C^{-\kappa})\times C^{-1/2+2\kappa}$. The continuity of the solution map then allows us to apply the stability of the coefficients proved in Section 5 to show that the solutions converge.

Corollary 6.5. If $u\in C^{-1/2+2\kappa}$, then for all $\varepsilon\in[0,1)$ and any $T>0$ there is a unique solution $v_\varepsilon\in X_T^\kappa(C^{1/2+\kappa})$ to (3.5), and $v_\varepsilon$ converges to $v_0$ in probability, with respect to the topology of $X_T^\kappa(C^{1/2+\kappa})$, as $\varepsilon\downarrow0$.

Proof. As noted above, the fixed point $V_T$ evaluated at the coefficients of (3.5) and the initial condition built from $u$ uniquely solves (3.5). Using (2.2), the periodicity of $\xi$ and $\xi_\varepsilon$, and Lemma 5.1, we see that $F*\xi_\varepsilon\to F*\xi$ as $\varepsilon\to0$ in probability in $L^\infty$. Proposition 5.3 says that $Z_\varepsilon\to Z_0$ in probability in $C^{-\kappa}$, and Proposition 5.11 says that $\Xi_\varepsilon\to\Xi_0$ in probability in $B(C^{1/2+\kappa},C^{-\kappa})$. Also, Lemma 5.2 and Proposition 2.3 imply that the initial conditions converge in probability in $C^{-1/2+2\kappa}$. Thus Proposition 6.4 implies that $v_\varepsilon\to v_0$ in probability in $X_T^\kappa(C^{1/2+\kappa})$. To prove Theorem 1.1, it simply remains to undo the change of variables.

Date: February 25, 2020. The author was partially supported by the NSF Graduate Research Fellowship Program under Grant No. DGE-1147470.