The non-linear stochastic wave equation in high dimensions

We propose an extension of Walsh's classical martingale measure stochastic integral that makes it possible to integrate a general class of Schwartz distributions, which contains the fundamental solution of the wave equation, even in dimensions greater than 3. This leads to a square-integrable random-field solution to the non-linear stochastic wave equation in any dimension, in the case of a driving noise that is white in time and correlated in space. In the particular case of an affine multiplicative noise, we obtain estimates on p-th moments of the solution (p > 1), and we show that the solution is Hölder continuous. The Hölder exponent that we obtain is optimal.


Introduction
In this paper, we are interested in random field solutions to the stochastic wave equation (1.1) with vanishing initial conditions. In this equation, d ≥ 1, ∆ denotes the Laplacian on R^d, the functions α, β : R → R are Lipschitz continuous, and Ḟ is a spatially homogeneous Gaussian noise that is white in time. Informally, the covariance functional of Ḟ is given by E[Ḟ(t,x) Ḟ(s,y)] = δ(t−s) f(x−y), where δ denotes the Dirac delta function and f : R^d → R_+ is continuous on R^d \ {0} and even.
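The displayed equation referred to in this paragraph did not survive extraction. In the standard formulation of this problem, with the symbols defined in the text, equation (1.1) and the informal covariance of Ḟ read (our reconstruction):

```latex
\frac{\partial^2 u}{\partial t^2}(t,x) - \Delta u(t,x)
  = \alpha\bigl(u(t,x)\bigr)\,\dot F(t,x) + \beta\bigl(u(t,x)\bigr),
  \qquad t > 0,\ x \in \mathbb{R}^d,
\tag{1.1}
```

together with E[Ḟ(t,x) Ḟ(s,y)] = δ(t−s) f(x−y).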
We recall that a random field solution to (1.1) is a family of random variables (u(t,x), t ∈ R_+, x ∈ R^d) such that (t,x) ↦ u(t,x) from R_+ × R^d into L^2(Ω) is continuous and solves an integral form of (1.1): see Section 4. Having a random field solution is interesting if, for instance, one wants to study the probability density function of the random variable u(t,x) for each (t,x), as in [12]. A different notion is that of a function-valued solution, which is a process t ↦ u(t) with values in a space such as L^2(Ω, L^2_loc(R^d, dx)) (see for instance [7], [4]). In some cases, such as [6], a random field solution can be obtained from a function-valued solution by establishing (Hölder) continuity properties of (t,x) ↦ u(t,x), but such results are not available for the stochastic wave equation in dimensions d ≥ 4. In other cases (see [3]), the two notions are genuinely distinct (since the latter would only require that (t,x) ↦ u(t,x) from R_+ × R^d into L^2(Ω) be measurable), and one type of solution may exist but not the other. We recall that function-valued solutions to (1.1) have been obtained in all dimensions [14] and that random field solutions have only been shown to exist when d ∈ {1, 2, 3} (see [1]).
In spatial dimension 1, a solution to the non-linear wave equation driven by space-time white noise was given in [24], using Walsh's martingale measure stochastic integral. In dimensions 2 or higher, there is no function-valued solution with space-time white noise as a random input: some spatial correlation is needed in this case. In spatial dimension 2, a necessary and sufficient condition on the spatial correlation for existence of a random field solution was given in [2]. Study of the probability law of the solution is carried out in [12].
In spatial dimension d = 3, existence of a random field solution to (1.1) is given in [1]. Since the fundamental solution in this dimension is not a function, this required an extension of Walsh's martingale measure stochastic integral to integrands that are (Schwartz) distributions. This extension has nice properties when the integrand is a non-negative measure, as is the case for the fundamental solution of the wave equation when d = 3. The solution constructed in [1] had moments of all orders but no spatial sample path regularity was established. Absolute continuity and smoothness of the probability law was studied in [16] and [17] (see also the recent paper [13]). Hölder continuity of the solution was only recently established in [6], and sharp exponents were also obtained.
In spatial dimension d ≥ 4, random field solutions were only known to exist in the case of the linear wave equation (α ≡ 1, β ≡ 0). The methods used in dimension 3 do not apply to higher dimensions, because for d ≥ 4, the fundamental solution of the wave equation is not a measure, but a Schwartz distribution that is a derivative of some order of a measure (see Section 5). It was therefore not even clear that the solution to (1.1) should be Hölder continuous, even though this is known to be the case for the linear equation (see [20]), under natural assumptions on the covariance function f.
In this paper, we first extend (in Section 3) the construction of the stochastic integral given in [1], so as to be able to define ∫_0^t ∫_{R^d} S(s,x) Z(s,x) M(ds,dx) in the case where M(ds,dx) is the martingale measure associated with the Gaussian noise Ḟ, Z(s,x) is an L^2-valued random field with spatially homogeneous covariance, and S is a Schwartz distribution that is not necessarily non-negative (as it was in [1]). Among other technical conditions, S must satisfy an integrability condition, which also appears in [14], involving the spectral measure µ of Ḟ (that is, Fµ = f, where F denotes the Fourier transform).
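The display stating this integrability condition was lost in extraction. A plausible reconstruction, consistent with the bound (3.2) derived in Section 3 (this is our reading, not a verbatim quote), is:

```latex
\int_0^T dt \, \sup_{\eta \in \mathbb{R}^d} \int_{\mathbb{R}^d} \mu(d\xi)\,
  \bigl|\mathcal{F}S(t)(\xi+\eta)\bigr|^2 \;<\; \infty .
```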
With this stochastic integral, we can establish (in Section 4) existence of a random field solution of a wide class of stochastic partial differential equations (s.p.d.e.'s), that contains (1.1) as a special case, in all spatial dimensions d (see Section 5).
However, for d ≥ 4, we do not know in general if this solution has moments of all orders. We recall that higher order moments, and, in particular, estimates on high order moments of increments of a process, are needed for instance to apply Kolmogorov's continuity theorem and obtain Hölder continuity of sample paths of the solution.
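To make the role of high-order moments concrete, recall the standard (d+1)-parameter Kolmogorov continuity criterion (a generic statement, not a display from this paper): if for some p > 0 and ε > 0,

```latex
E\bigl[\,|u(t,x)-u(s,y)|^{p}\,\bigr]
  \;\le\; C\,\bigl(|t-s|+|x-y|\bigr)^{(d+1)+\varepsilon},
```

then u has a version that is jointly Hölder continuous in (t,x) of any exponent γ < ε/p. Hence moment estimates of all orders allow one to take p large and capture the best possible exponent.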
In Section 6, we consider the special case where α is an affine function and β ≡ 0. This is analogous to the hyperbolic Anderson problem considered in [5] for d ≤ 3. In this case, we show that the solution to (1.1) has moments of all orders, by using a series representation of the solution in terms of iterated stochastic integrals of the type defined in Section 3.
Finally, in Section 7, we use the results of Section 6 to establish Hölder continuity of the solution to (1.1) (Propositions 7.1 and 7.2) for α affine and β ≡ 0. In the case where the covariance function is a Riesz kernel, we obtain the optimal Hölder exponent, which turns out to be the same as that obtained in [6] for dimension 3.
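In the Riesz-kernel case f(x) = |x|^{−β} with 0 < β < d, the spectral measure is µ(dξ) = c_{d,β} |ξ|^{β−d} dξ, and the integrability condition ∫_{R^d} µ(dξ)/(1+|ξ|²) < ∞ (to which the relevant hypotheses reduce for the wave equation) holds exactly when 0 < β < 2. A quick numerical sanity check of the radial reduction of this integral (a sketch, not part of the paper; the grid sizes and truncation levels are arbitrary choices):

```python
import numpy as np

def riesz_condition_integral(beta, R, n=20001):
    """Radial part of int mu(dxi)/(1+|xi|^2) for the Riesz kernel f(x)=|x|^{-beta}:
    in polar coordinates it reduces (up to a constant) to
    int_0^infty r^{beta-1}/(1+r^2) dr; we truncate to [1/R, R]."""
    r = np.logspace(-np.log10(R), np.log10(R), n)
    return np.trapz(r ** (beta - 1) / (1 + r ** 2), r)

# beta = 1.5 < 2: the truncated integral stabilizes as the cutoff is removed.
i_small = riesz_condition_integral(1.5, 1e2)
i_large = riesz_condition_integral(1.5, 1e4)

# beta = 2.5 > 2: the truncated integral keeps growing (divergence).
j_small = riesz_condition_integral(2.5, 1e2)
j_large = riesz_condition_integral(2.5, 1e4)

print(i_small, i_large)   # first pair stabilizes near the exact value pi/sqrt(2)
print(j_small, j_large)   # second pair grows roughly like sqrt(R)

assert i_large - i_small < 0.2 * i_small   # convergent case
assert j_large > 5 * j_small               # divergent case
```

For β = 1.5 the exact value of the radial integral is (π/2)/sin(3π/4) = π/√2, and the truncated values approach it; for β = 2.5 the truncated integral grows like 2√R, so the condition fails, matching the β ∈ (0,2) threshold.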

Framework
In this section, we recall the framework in which the stochastic integral is defined. We consider a Gaussian noise Ḟ, white in time and correlated in space. Its covariance function is informally given by E[Ḟ(t,x) Ḟ(s,y)] = δ(t−s) f(x−y), where δ stands for the Dirac delta function and f : R^d → R_+ is continuous on R^d \ {0} and even. Formally, let D(R^{d+1}) be the space of C^∞-functions with compact support and let F = {F(ϕ), ϕ ∈ D(R^{d+1})} be an L^2(Ω, F, P)-valued mean zero Gaussian process with covariance functional
(ϕ, ψ) ↦ ∫_0^∞ dt ∫_{R^d} dx ∫_{R^d} dy ϕ(t,x) f(x−y) ψ(t,y).
Since f is a covariance, there exists a non-negative tempered measure µ whose Fourier transform is f. That is, for all φ ∈ S(R^d), the Schwartz space of C^∞-functions with rapid decrease, we have ∫_{R^d} f(x) φ(x) dx = ∫_{R^d} Fφ(ξ) µ(dξ). As f is the Fourier transform of a tempered measure, it satisfies an integrability condition of the form (2.1). Following [2], we extend this process to a worthy martingale measure M = (M_t(A), t ≥ 0, A ∈ B_b(R^d)), where B_b(R^d) denotes the bounded Borel subsets of R^d, in such a way that for all ϕ ∈ S(R^{d+1}), F(ϕ) = ∫_0^∞ ∫_{R^d} ϕ(t,x) M(dt,dx), where the stochastic integral is Walsh's stochastic integral with respect to the martingale measure M (see [24]). The covariation measure Q and the dominating measure K of M are determined by f. We consider the filtration (F_t) given by F_t = σ(M_s(A), s ≤ t, A ∈ B_b(R^d)) ∨ N, where N is the σ-field generated by the P-null sets.
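The displayed formulas for Q and K were lost in extraction. In this setting (f ≥ 0), the standard expressions from [2] and [24] read (our reconstruction):

```latex
Q\bigl([0,t]\times A\times B\bigr) \;=\; \langle M(A),\, M(B)\rangle_t
  \;=\; t \int_A dx \int_B dy\, f(x-y), \qquad K = Q,
```

so that M is a worthy martingale measure in the sense of [24].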
Fix T > 0. The stochastic integral of predictable functions g : [0,T] × R^d × Ω → R such that ‖g‖_+ < ∞ is defined by Walsh (see [24]). The set of such functions is denoted by P_+. Dalang [1] then introduced the norm ‖·‖_0. Recall that a function g is called elementary if it is of the form g(s,x;ω) = 1_{(a,b]}(s) 1_A(x) X(ω), where 0 ≤ a < b ≤ T, A ∈ B_b(R^d), and X is a bounded F_a-measurable random variable. Now let E be the set of simple functions, i.e., the set of all finite linear combinations of elementary functions. Since the set of predictable functions g such that ‖g‖_0 < ∞ is not complete, let P_0 denote the completion of the set of simple predictable functions with respect to ‖·‖_0. Clearly, P_+ ⊂ P_0. Both P_0 and P_+ can be identified with subspaces of P. For S(t) ∈ S(R^d), elementary properties of convolution and the Fourier transform show that (2.2) and (2.4) are equal. When d ≥ 4, the fundamental solution of the wave equation provides an example of an element of P_0 that is not in P_+ (see Section 5).
Let M^Z be the martingale measure defined by M^Z_t(A) = ∫_0^t ∫_A Z(s,x) M(ds,dx), in which we again use Walsh's stochastic integral [24]. We would like to give a meaning to the stochastic integral of a large class of S ∈ P with respect to the martingale measure M^Z. Following the same idea as before, we consider the norms ‖·‖_{+,Z} and ‖·‖_{0,Z}. Let P_{+,Z} be the set of predictable functions g such that ‖g‖_{+,Z} < ∞. The space P_{0,Z} is defined, similarly to P_0, as the completion of the set of simple predictable functions, but taking the completion with respect to ‖·‖_{0,Z} instead of ‖·‖_0.
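The displays defining ‖·‖_{+,Z} and ‖·‖_{0,Z} were lost in extraction. A plausible reconstruction, obtained from Dalang's norms ‖·‖_+ and ‖·‖_0 by inserting the factor Z into the integrand (this is our reading, not a verbatim quote), is:

```latex
\|g\|_{+,Z}^2 \;=\; E\!\left[\int_0^T\! ds \int_{\mathbb{R}^d}\! dx \int_{\mathbb{R}^d}\! dy\,
  \bigl|g(s,x)Z(s,x)\bigr|\, f(x-y)\, \bigl|g(s,y)Z(s,y)\bigr|\right],
```

```latex
\|g\|_{0,Z}^2 \;=\; E\!\left[\int_0^T\! ds \int_{\mathbb{R}^d}\! dx \int_{\mathbb{R}^d}\! dy\,
  g(s,x)Z(s,x)\, f(x-y)\, g(s,y)Z(s,y)\right].
```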
For g ∈ E, as in (2.3), the map g ↦ g · M^Z is an isometry. Therefore, this isometry can be extended to an isometry S ↦ S · M^Z from (P_{0,Z}, ‖·‖_{0,Z}) into M. The square-integrable martingale S · M^Z = ((S · M^Z)_t, 0 ≤ t ≤ T) is the stochastic integral process of S with respect to M^Z. We use the notation (S · M^Z)_t = ∫_0^t ∫_{R^d} S(s,x) Z(s,x) M(ds,dx). The main issue is to identify elements of P_{0,Z}. We address this question in the next section.

Stochastic Integration
In this section, we extend Dalang's result concerning the class of Schwartz distributions for which the stochastic integral with respect to the martingale measure M^Z can be defined, by deriving a new inequality for this integral. In particular, contrary to [1, Theorem 2], the result presented here does not require that the Schwartz distribution be non-negative.
In Theorem 3.1 below, we show that the non-negativity assumption can be removed provided the spectral measure satisfies condition (3.6) below, which already appears in [14] and [4]. As in [1, Theorem 3], an additional assumption similar to [1, (33), p. 12] is needed (hypothesis (H2) below). This hypothesis can be replaced by an integrability condition (hypothesis (H1) below).
Suppose Z is a process such that sup_{0≤s≤T} E[Z(s,0)^2] < +∞ and with spatially homogeneous covariance, and let g_s denote the covariance function of Z(s,·). For s fixed, the function g_s is non-negative definite, since it is a covariance function. Hence, there exists a non-negative tempered measure ν^Z_s such that Fν^Z_s = g_s. Using the convolution property of the Fourier transform (where * denotes convolution) and looking back at the definition of ‖·‖_{0,Z}, we obtain, for a deterministic ϕ ∈ P_{0,Z} with ϕ(t,·) ∈ S(R^d) for all 0 ≤ t ≤ T (see [1, p. 10]),
‖ϕ‖_{0,Z}^2 = ∫_0^T ds ∫_{R^d} ν^Z_s(dη) ∫_{R^d} µ(dξ) |Fϕ(s)(ξ+η)|^2. (3.1)
In particular, since ν^Z_s(R^d) = g_s(0) = E[Z(s,0)^2],
‖ϕ‖_{0,Z}^2 ≤ C ∫_0^T ds sup_{η ∈ R^d} ∫_{R^d} µ(dξ) |Fϕ(s)(ξ+η)|^2, (3.2)
where C = sup_{0≤s≤T} E[Z(s,0)^2] < ∞ by assumption. Taking (3.1) as the definition of ‖·‖_{0,Z}, we can extend this norm to the set P_Z of deterministic functions t ↦ S(t) with values in the space of Schwartz distributions such that FS(t) is a function and ‖S‖_{0,Z} < ∞.
The spaces P_{+,Z} and P_{0,Z} will now be considered as subspaces of P_Z. Let S ∈ P_Z. We will need the following two hypotheses to state the next theorem. Let B(0,1) denote the open ball in R^d that is centered at 0 with radius 1.
Theorem 3.1. Suppose in addition that either hypothesis (H1) or (H2) is satisfied. Then S ∈ P_{0,Z}. In particular, the stochastic integral (S · M^Z)_t is well defined as a real-valued square-integrable martingale ((S · M^Z)_t, 0 ≤ t ≤ T), and the bound (3.7) holds.
Proof. We are going to show that S ∈ P_{0,Z} and that its stochastic integral with respect to M^Z is well defined. We follow the approach of [1, proof of Theorem 3].
Suppose that S_n ∈ P_{0,Z} for all n. The expression |Fψ_n(ξ+η) − 1|^2 is bounded by 4 and goes to 0 as n → ∞ for every ξ and η. By (3.6), the Dominated Convergence Theorem shows that ‖S_n − S‖_{0,Z} → 0 as n → ∞. As P_{0,Z} is complete, if S_n ∈ P_{0,Z} for all n, then S ∈ P_{0,Z}.
To complete the proof, it remains to show that S n ∈ P 0,Z for all n.
First consider assumption (H2). In this case, the proof that S_n ∈ P_{0,Z} is based on the same approximation as in [1]; this makes sense because S_n(t) ∈ S(R^d) for all 0 ≤ t ≤ T. The idea is to approximate S_n by a sequence of elements of P_{+,Z}. For all m ≥ 1, set
S_{n,m}(t, x) = Σ_{k=0}^{2^m − 1} 1_{(t_k^m, t_{k+1}^m]}(t) S_n(t_{k+1}^m, x),
where t_k^m = kT 2^{−m}. Then S_{n,m}(t, ·) ∈ S(R^d). We now show that S_{n,m} ∈ P_{+,Z}. Being a deterministic function, S_{n,m} is predictable. Moreover, using the definition of ‖·‖_{+,Z} and the fact that |g_s(x)| ≤ C for all s and x, we obtain a bound in terms of (|S_n(t_{k+1}^m, ·)| * |S̃_n(t_{k+1}^m, ·)|), where S̃_n(t_{k+1}^m, x) = S_n(t_{k+1}^m, −x). By Leibniz's formula (see [22], Ex. 26.4, p. 283), the function z ↦ (|S_n(t_{k+1}^m, ·)| * |S̃_n(t_{k+1}^m, ·)|)(z) decreases faster than any polynomial in |z|^{−1}. Therefore, by (2.1), the preceding expression is finite, so ‖S_{n,m}‖_{+,Z} < ∞ and S_{n,m} ∈ P_{+,Z} ⊂ P_{0,Z}.
The sequence of elements of P_{+,Z} that we have constructed converges to S_n in ‖·‖_{0,Z}. Indeed, the relevant quantity goes to 0 as m → ∞ by (H2). Therefore, S_{n,m} → S_n as m → ∞ and S_n ∈ P_{0,Z}. This concludes the proof under assumption (H2). Now, we consider assumption (H1) and check that S_n ∈ P_{0,Z} under this condition. We take the same discretization of time to approximate S_n, but we use the mean value over each time interval instead of the value at the right extremity. By (3.3) in assumption (H1), a_{n,m}^k ∈ S(R^d) for all n, m and k. Moreover, we can use Fubini's theorem, which applies by (3.4). We now show that S_{n,m} ∈ P_{+,Z}. We only need to bound an expression involving (|a_{n,m}^k| * |ã_{n,m}^k|), where ã_{n,m}^k(x) = a_{n,m}^k(−x). Since a_{n,m}^k ∈ S(R^d), an argument similar to the one above, using Leibniz's formula, shows that this expression is finite. Hence S_{n,m} ∈ P_{+,Z} ⊂ P_{0,Z}.
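The display defining the mean-value approximation under (H1) did not survive extraction. A plausible reconstruction, based on the description in the text (the mean value of S_n over each dyadic interval, with t_k^m = kT 2^{−m}), is:

```latex
S_{n,m}(t,x) \;=\; \sum_{k=0}^{2^m-1} \mathbf{1}_{(t_k^m,\, t_{k+1}^m]}(t)\, a_{n,m}^k(x),
\qquad
a_{n,m}^k(x) \;=\; \frac{2^m}{T} \int_{t_k^m}^{t_{k+1}^m} S_n(s,x)\, ds ,
```

to be contrasted with the right-endpoint approximation S_n(t_{k+1}^m, x) used under (H2).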
It remains to show that S_{n,m} → S_n as m → ∞; the relevant difference is given in (3.12). We are going to show that this expression goes to 0 as m → ∞ using the martingale L^2-convergence theorem (see [9, Theorem 4.5, p. 252]). Consider the measure space R^d × R^d × [0,T], endowed with the σ-field of Borel subsets and the measure µ(dξ) × ν^Z_s(dη) × ds, together with the filtration generated by the dyadic partitions of [0,T]. For n fixed, we consider the function X given by X(ξ, η, s) = FS_n(s,·)(ξ+η). This function is square-integrable, since
∫_0^T ds ∫_{R^d} ν^Z_s(dη) ∫_{R^d} µ(dξ) |FS_n(s,·)(ξ+η)|^2
is finite by assumption (3.6). The martingale L^2-convergence theorem then shows that (3.12) goes to 0 as m → ∞ and hence that S_n ∈ P_{0,Z}. Now, by the isometry property of the stochastic integral between P_{0,Z} and the set M^2 of square-integrable martingales, (S · M^Z)_t is well defined. The bound in the second part of (3.7) is obtained as in (3.2). The result is proved.
Remark 3.2. As can be seen by inspecting the proof, Theorem 3.1 is still valid if we replace (H2) by suitable alternative assumptions. The first of these has been removed in [13]. The second concerns positivity of the covariance function f; a weaker condition appears in [14], where function-valued solutions are studied.

Integration with respect to Lebesgue measure
In addition to the stochastic integral defined above, we will have to define the integral of the product of a Schwartz distribution and a spatially homogeneous process with respect to Lebesgue measure. More precisely, we have to give a precise definition to the process informally given by t ↦ ∫_0^t ds ∫_{R^d} dx S(s,x) Z(s,x), where t ↦ S(t) is a deterministic function with values in the space of Schwartz distributions with rapid decrease and Z is a stochastic process, both satisfying the assumptions of Theorem 3.1.
Suppose first that S ∈ L^2([0,T], L^1(R^d)). By Hölder's inequality and the assumptions on Z, the second moment of this integral is finite, and it can be expressed using the measure ν^Z_s such that Fν^Z_s = g_s. Let us define a norm ‖·‖_{1,Z} on the space P_Z by
‖S‖_{1,Z}^2 = ∫_0^T ds ∫_{R^d} ν^Z_s(dη) |FS(s)(η)|^2.
This norm is similar to ‖·‖_{0,Z}, but with µ(dξ) = δ_0(dξ). In order to establish the next proposition, we will need the following assumption.
Proposition 3.4. Let Z = (Z(s,x)) be a stochastic process satisfying the assumptions of Theorem 3.1, and let t ↦ S(t) be a deterministic function with values in the space of Schwartz distributions with rapid decrease. Suppose in addition that either hypothesis (H1) or (H2*) is satisfied. Then the conclusion (3.14) holds. In particular, the process (∫_0^t ds ∫_{R^d} dx S(s,x) Z(s,x), 0 ≤ t ≤ T) is well defined and takes values in L^2(Ω).
Proof. We consider (S_n)_{n∈N} and (S_{n,m})_{n,m∈N}, the same approximating sequences of S as in the proof of Theorem 3.1. Recall that the sequence (S_{n,m}) depends on which of (H1) or (H2*) is satisfied. If (H1) is satisfied, then (3.10), (3.11) and (H1) show that S_{n,m} ∈ L^2([0,T], L^1(R^d)). If (H2*) is satisfied, this follows from (3.9) and the fact that S_{n,m}(t,·) ∈ S(R^d). Moreover, by arguments analogous to those used in the proof of Theorem 3.1, in which we simply take µ(dξ) = δ_0(dξ) and replace (3.6) by (3.17) and (H2) by (H2*), we can show that ‖S_{n,m} − S_n‖_{1,Z} → 0 as m → ∞, in both cases. As a consequence, the corresponding sequence of integrals is Cauchy in L^2(Ω) by (3.14), and hence converges. We take the limit of this sequence as the definition of ∫_0^T ds ∫_{R^d} dx S_n(s,x) Z(s,x) for any n ∈ N. Note that (3.14) is still valid for S_n. Using the same argument as in the proof of Theorem 3.1 again, we can now show that ‖S_n − S‖_{1,Z} → 0 as n → ∞. Hence, by a Cauchy sequence argument similar to the one above, we can define the random variable ∫_0^T ds ∫_{R^d} dx S(s,x) Z(s,x). Moreover, (3.14) remains true.
Remark 3.5. Assumption (3.17) appears in [6], where it is used to obtain estimates on an integral of the same type as in Proposition 3.4. In that reference, S ≥ 0 and the process Z is assumed to take values in L^2(R^d), which is not the case here.

Application to SPDE's
In this section, we apply the preceding results on stochastic integration to construct random field solutions of non-linear stochastic partial differential equations. We will be interested in equations of the form
Lu(t,x) = α(u(t,x)) Ḟ(t,x) + β(u(t,x)), (4.1)
with vanishing initial conditions, where L is a second-order partial differential operator with constant coefficients, Ḟ is the noise described in Section 2 and α, β are real-valued functions. Let Γ be the fundamental solution of the equation Lu(t,x) = 0. In [1], Dalang shows that (4.1) admits a unique solution (u(t,x), 0 ≤ t ≤ T, x ∈ R^d) when Γ is a non-negative Schwartz distribution with rapid decrease. Moreover, this solution is in L^p(Ω) for all p ≥ 1. Using the extension of the stochastic integral presented in Section 3, we are going to show that there is still a random-field solution when Γ is a (not necessarily non-negative) Schwartz distribution with rapid decrease. However, this solution will only be in L^2(Ω). We will see in Section 6 that this solution is in L^p(Ω) for any p ≥ 1 in the case where α is an affine function and β ≡ 0. The question of uniqueness is considered in Theorem 4.8.
By a random-field solution of (4.1), we mean a jointly measurable process (u(t,x), 0 ≤ t ≤ T, x ∈ R^d) such that (t,x) ↦ u(t,x) from [0,T] × R^d into L^2(Ω) is continuous and satisfies the assumptions needed for the right-hand side of (4.3) below to be well defined, namely: (u(t,x)) is a predictable process such that sup_{0≤t≤T, x∈R^d} E[u(t,x)^2] < ∞, for t ∈ [0,T], α(u(t,·)) and β(u(t,·)) have stationary covariance, and, for all 0 ≤ t ≤ T and x ∈ R^d, a.s.,
u(t,x) = ∫_0^t ∫_{R^d} Γ(t−s, x−y) α(u(s,y)) M(ds,dy) + ∫_0^t ds ∫_{R^d} dy Γ(t−s, x−y) β(u(s,y)). (4.3)
In this equation, the first (stochastic) integral is defined in Theorem 3.1 and the second (deterministic) integral is defined in Proposition 3.4.
We recall the following integration result, which will be used in the proof of Lemma 4.6.
Remark 4.3. The main example, which we will treat in the following section, is the case where L = ∂^2/∂t^2 − ∆ is the wave operator and d ≥ 4.
Proof. We are going to use a Picard iteration scheme. Suppose that α and β are Lipschitz continuous. Set u_0 ≡ 0 and define u_{n+1} by (4.6). Now suppose by induction that, for all T > 0, u_n satisfies (4.7). Suppose also that u_n(t,x) is F_t-measurable for all x and t, and that (t,x) ↦ u_n(t,x) is L^2-continuous. These conditions are clearly satisfied for n = 0. The L^2-continuity ensures that (t,x;ω) ↦ u_n(t,x;ω) has a jointly measurable version and that the conditions of [2, Prop. 2] are satisfied. Moreover, Lemma 4.5 below shows that Z_n and W_n satisfy the assumptions needed for the stochastic integral and the integral with respect to Lebesgue measure to be well defined. Therefore, u_{n+1}(t,x) is well defined in (4.6), and is L^2-continuous by Lemma 4.6. We now show that u_{n+1} satisfies (4.7). Using the linear growth of α, (4.7) and the fact that Γ(s,·) ∈ P_{0,Z_n}, (4.4) and Theorem 3.1 imply that the second moment of the stochastic-integral term is bounded uniformly in (t,x). Further, the linear growth of β, (4.5) and Proposition 3.4 imply the same for the deterministic term. It follows that the sequence (u_n(t,x))_{n≥0} is well defined. It remains to show that it converges in L^2(Ω). For this, we are going to use the generalization of Gronwall's lemma presented in [1, Lemma 15]. By the Lipschitz property of α, the process Y_n satisfies the assumptions of Theorem 3.1 on Z by Lemma 4.5 below; hence, by Theorem 3.1, we obtain a bound (4.8) on the stochastic-integral part of the increment. The process V_n also satisfies the assumptions of Theorem 3.1 on Z by Lemma 4.5 below; hence, by Proposition 3.4 and the Lipschitz property of β, we obtain a corresponding bound (4.9) on the deterministic part. Then, setting J(s) = J_1(s) + J_2(s) and putting together (4.8) and (4.9), we obtain (4.10). Lemma 15 in [1] implies that (u_n(t,x))_{n≥0} converges uniformly in L^2, say to u(t,x). As a consequence of [1, Lemma 15], u_n satisfies (4.2) for any n ≥ 0. Hence, u also satisfies (4.2) as the L^2-limit of the sequence (u_n)_{n≥0}.
As u_n is continuous in L^2 by Lemma 4.6 below, u is also continuous in L^2. Therefore, u admits a jointly measurable version, which, by Lemma 4.5 below, has the property that α(u(t,·)) and β(u(t,·)) have stationary covariance functions. The process u satisfies (4.3) by passing to the limit in (4.6).
The following definition and lemmas were used in the proof of Theorem 4.2 and will be used in Theorem 4.8. We say that the process (Z(s,x), s ≥ 0, x ∈ R^d) has the "S" property if, for all z ∈ R^d, the finite-dimensional distributions of (Z(s, x+z), s ≥ 0, x ∈ R^d) do not depend on z.
Proof. It follows from the definition of the martingale measure M and the fact that u_0 is constant that the relevant finite-dimensional distributions are an abstract function of a distribution which, as mentioned above, does not depend on z. Hence, the conclusion holds for n = 1, because u_0 is constant. Now suppose that the conclusion holds for some n ≥ 1; we show that it holds for n + 1. We can write u_{n+1} = Ψ(u_n, M). The function Ψ does not depend on x. Hence, for every choice of (s_1, …, s_k) ∈ R_+^k, (t_1, …, t_j) ∈ R_+^j, (r_1, …, r_ℓ) ∈ R_+^ℓ, and spatial points, the joint distribution considered is an abstract function of a distribution which does not depend on z by the induction hypothesis.
Proof. For n = 0, the result is trivial. We are going to show by induction that if (u_n(t,x), t ≥ 0, x ∈ R^d) is continuous in L^2, then (u_{n+1}(t,x), t ≥ 0, x ∈ R^d) is too.
The term Y_2 goes to 0 as h → 0 because, by Proposition 3.4 and the Dominated Convergence Theorem, the corresponding bound vanishes. Concerning Y_1, the relevant integral goes to 0 as h → 0 by (4.5) and the Dominated Convergence Theorem. Turning to spatial increments, first consider C_n. Clearly, |1 − e^{−i⟨ξ+η, z⟩}|^2 ≤ 4 and the integrand converges to 0 as z → 0. Therefore, for n fixed, by the Dominated Convergence Theorem, C_n(t,x,z) → 0 as z → 0.
Moreover, considering D_n, we note that |1 − e^{−i⟨η, z⟩}|^2 ≤ 4 and the integrand converges to 0 as z → 0. Therefore, for n fixed, by the Dominated Convergence Theorem, D_n(t,x,z) → 0 as z → 0. This establishes the L^2-continuity in the spatial variable.
Remark 4.7. The induction assumption on the L 2 -continuity of u n is stronger than needed to show the L 2 -continuity of u n+1 . In order that the stochastic integral process Γ(t − ·, x − ·) · M Z be L 2 -continuous, it suffices that the process Z satisfy the assumptions of Theorem 3.1.
We can now state the following theorem, which ensures uniqueness of the solution constructed in Theorem 4.2 within a more specific class of processes.
Proof. We are going to show that E[(u(t,x) − v(t,x))^2] = 0. In the case where Γ is a non-negative distribution, we consider the sequence (u_n)_{n∈N} used to construct u, defined in (4.6).
The approximating sequence (Γ_m)_{m≥0} built in [1, Theorem 2] to define the stochastic integral is a positive function. Hence the stochastic integral below is a Walsh stochastic integral, and, using the Lipschitz property of α (in the case β ≡ 0), a Gronwall-type argument ([1, Lemma 15]) yields uniqueness.
In the case considered here, the sequence (Γ_m)_{m≥0} is not necessarily positive and the argument above does not apply. We need to know a priori that the processes Z(t,x) = α(u_n(t,x)) − α(v(t,x)) and W(t,x) = β(u_n(t,x)) − β(v(t,x)) have a spatially homogeneous covariance. This is why we consider the restricted class of processes satisfying property "S".
As u_0 ≡ 0, it is clear that the joint process (u_0(t,x), v(t,x), t ≥ 0, x ∈ R^d) satisfies the "S" property. A proof analogous to that of Lemma 4.5, with u_{n−1} replaced by v, shows that the process (u_n(t,x), v(t,x), t ≥ 0, x ∈ R^d) also satisfies the "S" property. Then α(u_n(t,·)) − α(v(t,·)) and β(u_n(t,·)) − β(v(t,·)) have spatially homogeneous covariances. This ensures that the stochastic integrals below are well defined. Using the notation in the proof of Theorem 4.2, we obtain, by (4.10),
M̄_n(t) ≤ ∫_0^t M̄_{n−1}(s) J(t−s) ds.

By [1, Lemma 15], this implies that M̄_n(t) is bounded by a constant times a_n, where (a_n)_{n∈N} is a sequence such that Σ_{n=0}^∞ a_n < ∞. This shows that M̄_n(t) → 0 as n → ∞. Finally, letting n → ∞, we conclude that E[(u(t,x) − v(t,x))^2] = 0. This establishes the theorem.

The non-linear wave equation
As an application of Theorem 4.2, we check the various assumptions in the case of the non-linear stochastic wave equation in dimensions greater than 3. The case of dimensions 1, 2 and 3 has been treated in [1]. We are interested in equation (4.1) with L = ∂^2/∂t^2 − ∆, with vanishing initial conditions, where t ≥ 0, x ∈ R^d with d > 3, and Ḟ is the noise presented in Section 2. In the case of the wave operator, the fundamental solution is a constant multiple of a derivative of some order of the measure σ_t^d (see [10, Chap. 5]), where σ_t^d is the Hausdorff surface measure on the d-dimensional sphere of radius t, and the normalizing constant involves Euler's gamma function γ. The action of Γ(t) on a test function is explained in (5.6) and (5.7) below. It is also well known (see [23, §7]) that
FΓ(t)(ξ) = sin(2πt|ξ|) / (2π|ξ|)
in all dimensions. Hence, there exist constants C_1 and C_2, depending on T, such that for all s ∈ [0,T] and ξ ∈ R^d,
sin^2(2πs|ξ|) / (4π^2 |ξ|^2) ≤ C_2 / (1 + |ξ|^2),  and  ∫_0^T ds sin^2(2πs|ξ|) / (4π^2 |ξ|^2) ≥ C_1 / (1 + |ξ|^2).
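The two-sided bound above can be checked numerically. The sketch below (not from the paper; T = 1 and the grids are our choices) uses the closed form of the time integral of |FΓ(s)(ξ)|^2 to avoid aliasing at large |ξ|:

```python
import numpy as np

T = 1.0
r = np.logspace(-3, 3, 400)       # values of |xi|
s = np.linspace(0.0, T, 1001)     # time grid

# g(s, r) = sin^2(2*pi*s*r) / (4*pi^2*r^2) = |F Gamma(s)(xi)|^2 with r = |xi|
S, R = np.meshgrid(s, r)
g = np.sin(2 * np.pi * S * R) ** 2 / (4 * np.pi ** 2 * R ** 2)

# Pointwise upper bound: (1 + |xi|^2) * g(s, xi) stays bounded (a valid C2).
upper = ((1 + R ** 2) * g).max()

# Closed form of int_0^T sin^2(2*pi*s*r)/(4*pi^2*r^2) ds:
integral = (T / 2 - np.sin(4 * np.pi * T * r) / (8 * np.pi * r)) / (4 * np.pi ** 2 * r ** 2)

# Integrated lower bound: (1 + |xi|^2) * integral stays away from 0 (a valid C1).
lower = ((1 + r ** 2) * integral).min()

print(upper, lower)   # empirically ~1.0 and ~0.0127 for T = 1
assert upper < 1.05
assert lower > 0.01
```

For small |ξ| the integral behaves like T^3/3, while for large |ξ| it behaves like T/(8π^2|ξ|^2), which is exactly the 1/(1+|ξ|^2) decay asserted in the text.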
To check (H1), further properties of FΓ are needed; these follow from the bounds above.
In the preceding sections, we have seen that the stochastic integral constructed in Section 3 can be used to obtain a random field solution to the non-linear stochastic wave equation in dimensions greater than 3 (Sections 4 and 5). As for the stochastic integral proposed in [1], this stochastic integral is square-integrable if the process Z used as integrand is square-integrable. This property makes it possible to show that the solution u(t,x) of the non-linear stochastic wave equation is in L^2(Ω) in any dimension.
Theorem 5 in [1] states that Dalang's stochastic integral is L^p-integrable if the process Z is. We would like to extend this result to our generalization of the stochastic integral, even though the approach used in the proof of Theorem 5 in [1] fails in our case. Indeed, that approach relies strongly on Hölder's inequality, which can only be used when the Schwartz distribution S is non-negative.
The main interest of a result concerning L^p-integrability of the stochastic integral is to show that the solution of an s.p.d.e. admits moments of any order, and to deduce Hölder-continuity properties. The first question is whether the solution of the non-linear stochastic wave equation admits moments of any order, in any dimension. We are going to prove that this is indeed the case for a particular form of the non-linear stochastic wave equation, where α is an affine function and β ≡ 0. This will not be obtained via a result on the L^p-integrability of the stochastic integral. However, a slightly stronger assumption on the integrability of the Fourier transform of the fundamental solution of the equation is required ((6.1) below instead of (4.4)). The proof is based mainly on the specific form of the process that appears in the Picard iteration scheme when α is affine. Indeed, we will be able to use the fact that the approximating random variable u_n(t,x) is an n-fold iterated stochastic integral.
Theorem 6.1. Suppose that Γ satisfies (6.1) as well as the assumptions of Theorem 4.2. Let α : R → R be an affine function given by α(u) = au + b, a, b ∈ R, and let β ≡ 0. Then equation (4.1) admits a random-field solution (u(t,x), 0 ≤ t ≤ T, x ∈ R^d) that is unique in the sense of Theorem 4.8, given by the series (6.2), where v_n is defined recursively for n ≥ 1 by (6.4)-(6.5). Moreover, for all p ≥ 1 and all T > 0, this solution satisfies
sup_{0≤t≤T} sup_{x∈R^d} E[|u(t,x)|^p] < ∞.
Proof. The existence and uniqueness are a consequence of Theorems 4.2 and 4.8. Multiplying the covariance function f by a^2, we can suppose, without loss of generality, that the affine function is α(u) = u + b (b ∈ R), that is, a = 1. In this case, the Picard iteration scheme defining the sequence (u_n)_{n∈N} is given by u_0 ≡ 0 and (6.3), where the stochastic integrals are well defined by Theorem 3.1. Set v_n(t,x) = u_n(t,x) − u_{n−1}(t,x) for all n ≥ 1. Then u(t,x) = lim_{m→∞} u_m(t,x) = lim_{m→∞} Σ_{n=1}^m v_n(t,x) = Σ_{n=1}^∞ v_n(t,x), and (6.2) is proved.
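The displays (6.2)-(6.5) were lost in extraction. A plausible reconstruction, consistent with the surrounding description (after normalization α(u) = u + b, β ≡ 0; v_1 Gaussian; v_{n+1} an iterated stochastic integral), is:

```latex
v_1(t,x) = b \int_0^t \!\int_{\mathbb{R}^d} \Gamma(t-s,\, x-y)\, M(ds,dy),
\qquad
v_{n+1}(t,x) = \int_0^t \!\int_{\mathbb{R}^d} \Gamma(t-s,\, x-y)\, v_n(s,y)\, M(ds,dy),
```

```latex
u(t,x) = \sum_{n=1}^{\infty} v_n(t,x).
```

In particular, v_n is an n-fold iterated stochastic integral, which is what makes the explicit moment computations of Lemma 6.2 possible.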
By Theorem 3.1 and because v_1(t,x) is a Gaussian random variable, v_1(t,x) admits finite moments of order p for all p ≥ 1. Suppose by induction that for some n ≥ 1, v_n satisfies, for all p ≥ 1,
sup_{0≤s≤T} sup_{y∈R^d} E[|v_n(s,y)|^p] < ∞. (6.6)
We are going to show that v_{n+1} also satisfies (6.6).
By its definition and (6.5), v_{n+1} satisfies a recurrence relation for all n ≥ 1. The stochastic integral above is defined by Theorem 3.1 using the approximating sequence Γ_{m,k} ∈ P_+, denoted S_{n,m} in the proof of Theorem 3.1 (whose definition depends on which of (H1) or (H2) is satisfied). For s ≤ t ≤ T, we define the martingales M^{(m,k)}_n(s; t, x), and, for n ≥ 1, the corresponding approximations v^{(m,k)}_{n+1}. Fix an even integer p and set q = p/2. We know that s ↦ M^{(m,k)}_n(s; t, x) is a continuous martingale, so Burkholder's inequality applies (see [15, Chap. IV, Theorem 73]); by Theorem 2.5 in [24] and Hölder's inequality, the last expectation above is bounded as in (6.8). The last step uses Fubini's theorem, the assumptions of which are satisfied because Γ_{m,k} ∈ P_+ is deterministic for all m, k, and v_n(t,x) has finite moments of any order by the induction assumption. In particular, the right-hand side of (6.8) is finite.
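The inequality invoked here is the Burkholder (Burkholder-Davis-Gundy) inequality for a continuous martingale M with M(0) = 0 (a generic statement; the paper applies it to M^{(m,k)}_n(·; t, x)):

```latex
E\Bigl[\,\sup_{0 \le s \le t} |M(s)|^{p}\,\Bigr]
  \;\le\; C_p\, E\bigl[\,\langle M \rangle_t^{\,p/2}\,\bigr],
```

which reduces the p-th moment of the iterated stochastic integral to a moment of its quadratic variation, an expression of order p/2 = q in the integrand.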
We are going to study the expression E[v_n(s,y_1) v_n(s,z_1) · · · v_n(s,y_q) v_n(s,z_q)] and come back to (6.8) later on. More generally, we consider a term of the form E[M_{n_1}(s; t_1, x_1) · · · M_{n_p}(s; t_p, x_p)], where p is a fixed even integer, s ∈ [0,T] and, for all i, 1 ≤ n_i ≤ n, x_i ∈ R^d, and t_i ∈ [s,T]. In the next lemma, we provide an explicit expression for this expectation.
Lemma 6.2. Let p be a fixed even integer and let (n_i)_{i=1}^p be a sequence of integers such that 1 ≤ n_i ≤ n. Suppose moreover that n is such that, for all m ≤ n and all q ≥ 1, v_m has finite moments of order 2q. If the sequence (n_i) is such that each term in this sequence appears an even number of times, then (6.9) holds.

Proof. We want to calculate
We say that we are interested in the expectation with respect to a configuration (n_i)_{i=1}^p. The order of this configuration is defined to be the number N = (1/2) Σ_{i=1}^p n_i. The proof of the lemma is based on Itô's formula (see [18, Theorem 3.3, p. 147]) and proceeds by induction on the order of the configuration considered. Suppose first that we have a configuration of order N = 1. The only case for which the expectation does not vanish is p = 2, n_1 = n_2 = 1, in which the term 1 appears an even number of times. In this case, by [24, Theorem 2.5] and properties of the Fourier transform, we obtain an explicit expression; taking limits as k, then m, tend to infinity, the resulting expression satisfies (6.9) with N = 1, σ_1 = t_1, σ'_1 = t_2, η_1 = η'_1 = 0, δ_1 = ξ_1, δ_2 = −ξ_1. Now suppose that (6.9) is true for all configurations of order not greater than N, and consider a configuration (n_i)_{i=1}^p of order N + 1. For all i = 1, …, p, the process s ↦ M_{n_i}(s; t_i, x_i) is a continuous martingale. We want to find the expectation of h(M_{n_1}, …, M_{n_p}), where h(x_1, …, x_p) = x_1 · · · x_p. To evaluate this expectation, we first use Itô's formula with the function h and the processes M^{(m_i,k_i)}_{n_i} (i = 1, …, p). We obtain (6.10). As the processes M^{(m_i,k_i)}_{n_i} admit finite moments for all i = 1, …, p, the process in the expectation in the first sum on the right-hand side of (6.10) is a martingale that vanishes at time zero. Hence, this expectation is zero. In the second sum on the right-hand side of (6.10), all terms are similar. For the sake of simplicity, we will only consider here the term for i = 1, j = 2: the right-hand side of (6.9) is a sum of terms similar to this one. In the case where n_1 ≠ n_2, the cross-variation is zero. Indeed, the two processes are multiple stochastic integrals of different orders and hence do not belong to the same Wiener chaos.
Otherwise, using [24, Theorem 2.5] and Fubini's theorem (which is valid because M^{(m_i,k_i)}_{n_i} has finite moments of any order for all i and Γ_{m,k} ∈ P_+), we have the convergence M^{(m_j,k_j)}_{n_j} → M_{n_j} in L^p(Ω). As Γ_{m,k} ∈ P_+, taking limits as k_3, …, k_p tend to +∞ and then as m_3, …, m_p tend to +∞, we obtain (6.12). At this point in the proof, we can see why the terms of (n_i) have to appear an even number of times. Indeed, if n_1 ≠ n_2, we have seen that the expectation is zero. When n_1 = n_2, the product in the expectation on the right-hand side of (6.12) is of order N. Hence, we can use the induction assumption to express it as in (6.9). By the induction assumption, if the terms of (n_i) do not appear an even number of times, the expectation on the right-hand side of (6.12) vanishes, and hence so does the one on the left-hand side. If these terms do appear an even number of times, then setting t_1 = s = ρ, t_2 = ρ, x_1 = y, x_2 = z in (6.9) and substituting into (6.12), we obtain (6.14), where (i) σ_j and σ'_j are linear combinations of ρ_1, …, ρ_N, ρ, t_3, …, t_p (j = 1, …, N); (ii) η_j and η'_j are linear combinations of ξ_1, …, ξ_{j−1} (j = 1, …, N); (iii) δ_k is a linear combination of ξ_1, …, ξ_N (k = 1, …, p).
The left-hand side has the desired limit because M_{n_i} has finite moments of any order and M^{(m_i,k_i)}_{n_i} → M_{n_i} in L^2(Ω, F, P), i = 1, 2. For the right-hand side, first consider the limit with respect to k_1 and k_2. To show convergence, we consider the left-hand side of (6.14) as the inner product of FΓ_{m_1,k_1}(t_1 − ρ)(ξ + δ_1) and FΓ_{m_2,k_2}(t_2 − ρ)(ξ + δ_2) in the L^2-space with respect to the measure in (6.15). Note that the exponentials are of modulus one and hence do not play any role in the convergence. Therefore, it is sufficient to consider i = 1 and to show that the corresponding L^2-norm of the difference goes to 0 as k tends to infinity. This limit has to be treated differently according to whether assumption (H1) or (H2) in Theorem 3.1 is satisfied.
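The reduction to a single index i uses only the triangle and Cauchy-Schwarz inequalities in L^2(λ), where λ denotes the measure in (6.15) and g_i^{(k)}, g_i are hypothetical shorthands for the approximating and limiting FΓ factors above:

```latex
\big\| g_1^{(k)} g_2^{(k)} - g_1 g_2 \big\|_{L^1(\lambda)}
  \;\le\; \big\| g_1^{(k)} - g_1 \big\|_{L^2(\lambda)}\,
          \big\| g_2^{(k)} \big\|_{L^2(\lambda)}
  \;+\; \big\| g_1 \big\|_{L^2(\lambda)}\,
        \big\| g_2^{(k)} - g_2 \big\|_{L^2(\lambda)} .
```

Hence it suffices to prove that ‖g_i^{(k)} − g_i‖_{L^2(λ)} → 0 for each i separately, and by symmetry the case i = 1 is representative; the exponential prefactors have modulus one and do not affect any of these norms.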
In the case where assumption (H1) is satisfied, the proof of convergence is based on the martingale convergence theorem, in a way analogous to the approach used in the proof of Theorem 3.1, with the measure ds × ν_s(dη) × µ(dξ) replaced by the one in (6.15). Assumption (6.1) allows us to bound the µ(dξ_j)-integrals (1 ≤ j ≤ N) when we check the L^2-boundedness of FΓ_m(t_1 − ρ)(ξ + δ_1).
In the case where (H2) is satisfied, we bound the µ(dξ_j)-integrals by (6.1) again, compute the time integrals (except the one with respect to ρ), and finally the continuity assumption (H2) yields the desired convergence.
Finally, the limit with respect to m_1 and m_2 is treated as in the proof of Theorem 3.1 by the Dominated Convergence Theorem. Lemma 6.2 is proved.
Proof of Theorem 6.1 (continued). We use (6.9) with n_i = n and t_i = s for all i = 1, …, p, to express the expectation in (6.8). Using the same idea as in the proof of Lemma 6.2, we can permute the integrals to obtain (6.16), where S means "is bounded by a sum of terms of the form" and N = nq is the order of the particular configuration considered in that case. The variables σ_j, σ'_j, η_j, η'_j (j = 1, …, N) satisfy the same assumptions as in Lemma 6.2, the variables γ_ℓ, γ'_ℓ (ℓ = 1, …, q) are linear combinations of ξ_1, …, ξ_N, and δ is a linear combination of ξ_1, …, ξ_N, β_1, …, β_q. When using (6.9) in (6.8), exponentials of the form e^{i⟨y_j, δ_j⟩} and e^{i⟨z_j, δ_j⟩} appear. When writing the y_ℓ- and z_ℓ-integrals as a µ(dβ_ℓ)-integral, these exponentials become shifts. This explains why the variables γ_ℓ, γ'_ℓ (ℓ = 1, …, q) and δ appear. Now, using the Cauchy-Schwarz inequality, setting I to be the spatial-integral bound, which is finite by (6.1), and taking limits as k and m tend to +∞, we obtain (6.17), where q = p/2. We have obtained an expression that bounds the moment of order p of v_{n+1} by a finite sum of finite terms. In order to have a bound for this moment, it remains to estimate the number of terms in the sum. This is the goal of Lemma 6.4.

Proof of Lemma 6.4. We have to estimate the number of terms appearing in the sum when we use Itô's formula. For each application of Itô's formula, we have to sum over all choices of pairs in (n_i)_{i=1}^p. Hence, we have at most p(p − 1)/2 choices. Moreover, Itô's formula has to be iterated at most N = nq times to completely develop the expectation. Hence, the number of terms in the sum implied by S is bounded by R = (q(p − 1))^{nq}.
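The count in Lemma 6.4 is crude but sufficient; with q = p/2 and N = nq it amounts to the elementary computation:

```latex
% at most \binom{p}{2} = \tfrac12 p(p-1) pair choices per application of
% It\^o's formula, iterated at most N = nq times:
R \;=\; \Big(\tfrac{p(p-1)}{2}\Big)^{nq}
  \;=\; \big(q(p-1)\big)^{nq},
\qquad q = \tfrac{p}{2},\quad N = nq .
```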

Proof of Theorem 6.1 (continued)
We return to the proof of Theorem 6.1. Using Lemma 6.4 together with (6.17), we obtain (6.18). Clearly, the series Σ_{n=0}^∞ ‖v_{n+1}(t, x)‖_p converges, where ‖·‖_p stands for the norm in L^p(Ω). Hence, as the bound on the series does not depend on x, and as t ≤ T, we have (6.19) for all even integers p. Jensen's inequality then shows that (6.19) is true for all p ≥ 1. As the sequence (u_n(t, x))_{n∈N} converges in L^2(Ω) to u(t, x) by Theorem 3.1, (6.19) ensures the convergence in L^p(Ω), and the corresponding bound for u(t, x) holds for all p ≥ 1. Theorem 6.1 is proved.
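The step from even integers to all p ≥ 1 is the standard monotonicity of L^p-norms on a probability space, a consequence of Jensen's inequality:

```latex
% For 1 \le p \le p' and X \in L^{p'}(\Omega), Jensen's inequality applied
% to the convex function x \mapsto x^{p'/p} gives
\big( E\,|X|^{p} \big)^{1/p} \;\le\; \big( E\,|X|^{p'} \big)^{1/p'} ,
% so (6.19), established for an even integer p' \ge p, implies the same
% bound for every p \ge 1.
```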
Remark 6.5. The fact that α is an affine function is used in an essential way in this proof. The key point is that its derivative is constant, so that Itô's formula can be applied iteratively. This is not the case for a general Lipschitz function α.

Hölder continuity
In this section, we study the regularity of the solution of the non-linear wave equation (4.1) in the specific case considered in Theorem 6.1: let u(t, x) be the random field solution of the equation with vanishing initial conditions, where b ∈ R and the spatial dimension is d ≥ 1. We will need the following hypotheses, which are analogous to those in [20], in order to guarantee the regularity of the solution.
Hence, for any h ≥ 0 and t ∈ [0, T − h], we have the following. The Gaussian process v_1 is given by (6.3). Hence, the increment v_1(t + h, x) − v_1(t, x) is the sum of two stochastic integral terms. Fix an even integer p. By Burkholder's inequality (see [15, Chap. IV, Theorem 73]), the p-th moment of the first term is bounded by C h^{pγ_1} (7.5), by (H3). On the other hand, using Burkholder's inequality again, we see that the p-th moment of the second term is bounded by C h^{p(γ_2 + 1/2)} (7.6), by (H4). Hence, putting together (7.5) and (7.6), we see that there exists a constant C ≥ 0 such that the desired estimate holds for v_1. For n ≥ 2, let w_n(t, x) denote the corresponding time increment of v_n, where v_n is defined by (6.4). Then w_n splits into two terms, A_n and B_n, and letting A_n^{(m,k)} be the approximation of A_n with Γ replaced by Γ_{m,k} in (7.8), we can use the same argument as in (6.8) to see that the p-th moment of A_n^{(m,k)} admits an analogous expansion, where p is an even integer and q = p/2. Using Lemma 6.2 to express the expectation and using the same argument as used to reach (6.16), we obtain (7.11), where S means "is bounded by a sum of terms of the form", N = nq, and σ_j, σ'_j, η_j, η'_j, γ_ℓ, γ'_ℓ and δ (1 ≤ j ≤ N, 1 ≤ ℓ ≤ q) satisfy the same assumptions as in (6.16). Notice that Γ appears in the first N integrals and Γ̃ in the last q integrals.
We take limits in (7.11) as k and m tend to +∞. Then, using the Cauchy-Schwarz inequality, we bound the first N spatial integrals in (7.11) using (6.1), bound the other q spatial integrals by hypothesis (H3), compute the time integrals and bound the number of terms in the sum by Lemma 6.4; similarly to (6.18), we obtain a bound with constant C_n^{(1)} = (q(p − 1))^{nq} (T^{(n+1)q}/(nq + 1)!) I^{nq}.
On the other hand, let B_n^{(m,k)} be the corresponding approximation of B_n. The same arguments as those used to obtain (6.8) show that (7.13) holds. Note that the factor h^{q−1} appears because Hölder's inequality is used on the interval [t, t + h] instead of [0, t]. Using Lemma 6.2 and the argument used to reach (6.16), we obtain a bound in which S means "is bounded by a sum of terms of the form", N = nq, and σ_j, σ'_j, η_j, η'_j, γ_ℓ, γ'_ℓ and δ (1 ≤ j ≤ N, 1 ≤ ℓ ≤ q) satisfy the same assumptions as in (6.16).
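For reference, Burkholder's inequality (see [15, Chap. IV, Theorem 73]), used repeatedly above, can be stated as follows for a continuous martingale M with M_0 = 0:

```latex
E\big[\, |M_t|^{p} \,\big] \;\le\; C_p\, E\big[\, \langle M \rangle_t^{p/2} \,\big],
\qquad p \ge 2 .
```

Applied to the stochastic integrals defining the increments of v_1, this reduces p-th moment estimates to deterministic bounds on the quadratic variation, which hypotheses (H3) and (H4) then control.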
The next result concerns the spatial regularity of the solution.

Proof. The proof is similar to that of Proposition 7.1. We know that u(t, x) is given by (6.2)-(6.4). Hence, for any compact set K ⊂ R^d and for any z ∈ K, we estimate the p-th moment of the spatial increment. The Gaussian process v_1 is given by (6.3). Hence, by Burkholder's inequality, the p-th moment of the corresponding increment of v_1 is bounded by C|z|^{pγ_3} (7.18), by (H5). Therefore, there exists a constant C ≥ 0 such that the desired estimate holds for v_1. For n ≥ 2, let w_n(t, x) denote the corresponding spatial increment of v_n, where v_n is defined by (6.4). Then w_n can be written as a stochastic integral of the form ∫ Γ̃(s, y) v_n(s, y) M(ds, dy). Setting Γ̃(s, y) = Γ(t − s, z + y) − Γ(t − s, y) and letting w_n^{(m,k)} be the approximation of w_n with Γ replaced by Γ_{m,k} in (7.20), we can use the same argument as in (6.8) to see that (7.21) holds, with the factor E[v_n(s, y_1) v_n(s, z_1) ⋯ v_n(s, y_q) v_n(s, z_q)], where p is an even integer and q = p/2. Using Lemma 6.2 to express the expectation and using the same argument as used to reach (6.16), we obtain a bound where S means "is bounded by a sum of terms of the form", N = nq, and σ_j, σ'_j, η_j, η'_j, γ_k, γ'_k and δ (1 ≤ j ≤ N, 1 ≤ k ≤ q) satisfy the same assumptions as in (6.16). Notice that Γ appears in the first N integrals and Γ̃ in the last q integrals.
Putting together Propositions 7.1-7.4, Corollary 7.3 and Proposition 7.5, we have the following.
Theorem 7.6. If f(x) = 1/|x|^β with 0 < β < 2, then the random-field solution u(t, x) of the non-linear wave equation with spatial dimension d > 3 built in Theorem 6.1 is jointly γ-Hölder continuous in time and space for any exponent γ ∈ ]0, (2 − β)/2[.

Remark 7.7. (a) Note that Theorem 7.6 and its proof are still valid when the spatial dimension is less than or equal to 3. In these cases, the regularity of the solution has already been obtained for a more general class of non-linear functions α, namely Lipschitz continuous functions. For more details, see [24] for d = 1, [12] for d = 2 and [6] for d = 3.
(b) The exponent (2 − β)/2 in Theorem 7.6 is optimal. Indeed, u(t, x) is not γ-Hölder continuous for any exponent γ > (2 − β)/2, as is shown in [6, Theorem 5.1]. Their proof applies to the general d-dimensional case, essentially without change.
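The passage from moment estimates such as those of Propositions 7.1 and 7.4 to the joint Hölder continuity asserted in Theorem 7.6 goes through Kolmogorov's continuity criterion; in sketch form (a standard statement, with constants left implicit):

```latex
% If, on [0,T] \times K with K \subset \mathbb{R}^d compact, for all large even p,
E\big[\, |u(t,x) - u(s,y)|^{p} \,\big]
  \;\le\; C_p \big( |t-s| + |x-y| \big)^{\gamma_0 p},
% then u has a modification that is jointly \theta-H\"older continuous for
% every \theta < \gamma_0 - \tfrac{d+1}{p}; letting p \to \infty yields all
% exponents \theta \in \,]0, \gamma_0[\,, here with \gamma_0 = \tfrac{2-\beta}{2}.
```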