Some non-linear s.p.d.e.'s that are second order in time

We extend Walsh's theory of martingale measures in order to deal with hyperbolic stochastic partial differential equations that are second order in time, such as the wave equation and the beam equation, and driven by spatially homogeneous Gaussian noise. For such equations, the fundamental solution can be a distribution in the sense of Schwartz, which appears as an integrand in the reformulation of the s.p.d.e. as a stochastic integral equation. Our approach provides an alternative to the Hilbert space integrals of Hilbert-Schmidt operators. We give several examples, including the beam equation and the wave equation, with nonlinear multiplicative noise terms.


Introduction
The study of stochastic partial differential equations (s.p.d.e.'s) began in earnest following the papers of Pardoux [14], [15], [16], and Krylov and Rozovskii [8], [9]. Much of the literature has been concerned with the heat equation, most often driven by space-time white noise, and with related parabolic equations. Such equations are first order in time, and generally second order in the space variables. There has been much less work on s.p.d.e.'s that are second order in time, such as the wave equation and related hyperbolic equations. Some early references are Walsh [22], and Carmona and Nualart [2], [3]. More recent papers are Mueller [12], Dalang and Frangos [6], and Millet and Sanz-Solé [11].
For linear equations, the noise process can be considered as a random Schwartz distribution, and therefore the theory of deterministic p.d.e.'s can in principle be used. However, this yields solutions in the space of Schwartz distributions, rather than in the space of function-valued stochastic processes. For linear s.p.d.e.'s such as the heat and wave equations driven by space-time white noise, this situation is satisfactory, since, in fact, there is no function-valued solution when the spatial dimension is greater than 1. However, since non-linear functions of Schwartz distributions are difficult to define (see however Oberguggenberger and Russo [13]), it is difficult to make sense of non-linear s.p.d.e.'s driven by space-time white noise in dimensions greater than 1.
A reasonable alternative to space-time white noise is Gaussian noise with some spatial correlation that remains white in time. This approach has been taken by several authors, and the general framework is given in Da Prato and Zabczyk [5]. However, there is again a difference between parabolic and hyperbolic equations: while the Green's function is smooth for the former, for the latter it is less and less regular as the dimension increases. For instance, for the wave equation, the Green's function is a bounded function in dimension 1, an unbounded function in dimension 2, a measure in dimension 3, and a genuine Schwartz distribution in dimensions greater than 3.
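To make this loss of regularity concrete, recall the classical fundamental solutions of the wave equation ∂^2 u/∂t^2 = Δu in low dimensions (standard formulas, reproduced here for illustration only; σ_t denotes the uniform surface measure on the sphere of radius t):

```latex
% d = 1: a bounded function
G(t,x) \;=\; \tfrac{1}{2}\,\mathbf{1}_{\{|x|<t\}},
% d = 2: an unbounded (but locally integrable) function
G(t,x) \;=\; \frac{1}{2\pi}\,\frac{\mathbf{1}_{\{|x|<t\}}}{\sqrt{t^{2}-|x|^{2}}},
% d = 3: a (non-absolutely continuous) measure
G(t,\mathrm{d}x) \;=\; \frac{\sigma_{t}(\mathrm{d}x)}{4\pi t}.
```

In dimensions d ≥ 4, G(t) is a derivative of surface measures, hence a genuine Schwartz distribution; this is the case that the extended stochastic integral below is designed to handle.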
There are at least two approaches to this issue. One is to extend the theory of stochastic integrals with respect to martingale measures, as developed by Walsh [22], to a more general class of integrands that includes distributions. This approach was taken by Dalang [4]. In the case of the wave equation, this yields a solution to the non-linear equation in dimensions 1, 2 and 3. The solution is a random field, that is, it is defined for every (t, x) ∈ R_+ × R^d. Another approach is to consider solutions with values in a function space, typically an L^2-space: for each fixed t ∈ R_+, the solution is an L^2-function, defined for almost all x ∈ R^d. This approach has been taken by Peszat and Zabczyk in [18] and Peszat [17]. In the case of the non-linear wave equation, this approach yields function-valued solutions in all dimensions. It should be noted that the notions of random field solution and function-valued solution are not equivalent: see Lévêque [10].
In this paper, we develop a general approach to non-linear s.p.d.e.'s, with a focus on equations that are second order in time, such as the wave equation and the beam equation. This approach goes in the direction of unifying the two described above, since we begin in Section 2 with an extension of Walsh's martingale measure stochastic integral [22], in such a way as to integrate processes that take values in an L^2-space, with an integral that takes values in the same space. This extension defines stochastic integrals of the form

v_{G,Z}(t, x) = ∫_0^t ∫_{R^d} G(s, x − y) Z(s, y) M(ds, dy),

where G (typically a Green's function) takes values in the space of Schwartz distributions, Z is an adapted process with values in L^2(R^d), and M is a Gaussian martingale measure with spatially homogeneous covariance. With this extended stochastic integral, we can study non-linear forms of a wide class of s.p.d.e.'s that includes the wave and beam equations in all dimensions, namely equations for which the p.d.e. operator is ∂^2/∂t^2 + (−Δ)^k, k ≥ 1 (see Section 3). Indeed, in Section 4 we study the corresponding non-linear s.p.d.e.'s. We impose only the minimal assumptions on the spatial covariance of the noise that are needed even for the linear form of the s.p.d.e. to have a function-valued solution. The non-linear coefficients must be Lipschitz and vanish at the origin. This last property guarantees that, with an initial condition in L^2(R^d), the solution remains in L^2(R^d) for all time.
In Section 5, we specialize to the wave equation in a weighted L 2 -space, and remove the condition that the non-linearity vanishes at the origin. Here, the compact support property of the Green's function of the wave equation is used explicitly. We note that Peszat [17] also uses weighted L 2 spaces, but with a weight that decays exponentially at infinity, whereas here, the weight has polynomial decay.

Extensions of the stochastic integral
In this section, we define the class of Gaussian noises that drive the s.p.d.e.'s that we consider, and give our extension of the martingale measure stochastic integral.
Let D(R^{d+1}) be the topological vector space of functions ϕ ∈ C_0^∞(R^{d+1}), the space of infinitely differentiable functions with compact support, with the standard notion of convergence on this space (see Adams [1], page 19). Let Γ be a non-negative and non-negative definite (therefore symmetric) tempered measure on R^d. That is,

∫_{R^d} (ϕ * ϕ̃)(x) Γ(dx) ≥ 0, for all ϕ ∈ C_0^∞(R^d),

where ϕ̃(x) = ϕ(−x), "*" denotes convolution, and there exists r > 0 such that

∫_{R^d} (1 + |x|^2)^{−r} Γ(dx) < ∞

(this was the framework considered in Dalang [4]). Let S(R^d) denote the Schwartz space of rapidly decreasing C^∞ test functions, and for ϕ ∈ S(R^d), let Fϕ denote the Fourier transform of ϕ:

Fϕ(ξ) = ∫_{R^d} exp(−i ξ · x) ϕ(x) dx.

According to the Bochner-Schwartz theorem (see Schwartz [20], Chapter VII, Théorème XVII), there is a non-negative tempered measure µ on R^d such that Γ = Fµ, that is,

∫_{R^d} ϕ(x) Γ(dx) = ∫_{R^d} Fϕ(ξ) µ(dξ), for all ϕ ∈ S(R^d).

Examples. (a) Let δ_0 denote the Dirac functional. Then Γ(dx) = δ_0(x) dx satisfies the conditions above. (b) For 0 < α < d, let f_α(x) = |x|^{−α} denote the Riesz kernel. Then F f_α is a constant multiple of f_{d−α} (see Stein [21], Chapter V §1, Lemma 2(a)), so Γ(dx) = f_α(x) dx also satisfies the conditions above.
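In example (b), the spectral measure is explicit: by the lemma of Stein just cited, the Fourier transform of a Riesz kernel is again a Riesz kernel. The constant c_{d,α} below depends only on d and α, and its exact value is not needed in the sequel:

```latex
% Riesz kernel covariance and its spectral measure:
\Gamma(\mathrm{d}x) \;=\; |x|^{-\alpha}\,\mathrm{d}x, \qquad 0 < \alpha < d,
\qquad\Longrightarrow\qquad
\mu(\mathrm{d}\xi) \;=\; c_{d,\alpha}\,|\xi|^{-(d-\alpha)}\,\mathrm{d}\xi .
```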
Let F = (F(ϕ), ϕ ∈ D(R^{d+1})) be an L^2(Ω, G, P)-valued mean zero Gaussian process with covariance functional

E(F(ϕ) F(ψ)) = ∫_{R_+} ds ∫_{R^d} Γ(dx) (ϕ(s, ·) * ψ̃(s, ·))(x).

As in Dalang and Frangos [6] and Dalang [4], ϕ → F(ϕ) extends to a worthy martingale measure (t, A) → M_t(A) (in the sense of Walsh [22], pages 289-290) with covariance measure

Q([0, t] × A × B) = ⟨M(A), M(B)⟩_t = t ∫_{R^d} Γ(dx) (1_A * 1̃_B)(x)

and dominating measure K ≡ Q, such that

F(ϕ) = ∫_{R_+} ∫_{R^d} ϕ(s, x) M(ds, dx), ϕ ∈ D(R^{d+1}).

The underlying filtration is F_t = σ(M_s(A), s ≤ t, A ∈ B_b(R^d)) ∨ N, where N is the σ-field generated by the P-null sets and B_b(R^d) denotes the bounded Borel subsets of R^d.
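In finite dimensions, a spatially homogeneous Gaussian noise restricted to a grid is simply a Gaussian vector whose covariance matrix is built from Γ, and it can be sampled by a Cholesky factorization. The sketch below is our own finite-dimensional illustration (the function names are ours, and it is not the construction used in the paper); the kernel exp(−|x|) is positive definite, with spectral density proportional to (1 + ξ^2)^{−1} in dimension 1.

```python
import math
import random

def gaussian_cov_matrix(points, gamma):
    """Covariance matrix C[i][j] = gamma(x_i - x_j) for a homogeneous covariance kernel."""
    return [[gamma(p - q) for q in points] for p in points]

def cholesky(C):
    """Plain Cholesky factorization C = L L^T of a symmetric positive-definite matrix."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def sample_field(L, rng):
    """One sample of the centered Gaussian vector with covariance L L^T."""
    n = len(L)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

# Spatially homogeneous covariance gamma(x) = exp(-|x|) on a 1-d grid.
grid = [0.1 * i for i in range(40)]
C = gaussian_cov_matrix(grid, lambda x: math.exp(-abs(x)))
L = cholesky(C)
field = sample_field(L, random.Random(0))
```

The same recipe works on a grid in R^d; only the kernel and the point set change.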
Recall [22] that a function (s, x; ω) → g(s, x; ω) is termed elementary if it is of the form

g(s, x; ω) = 1_{(a,b]}(s) 1_A(x) X(ω),

where 0 ≤ a < b, A ∈ B_b(R^d), and X is a bounded and F_a-measurable random variable. The σ-field on R_+ × R^d × Ω generated by elementary functions is termed the predictable σ-field. Fix T > 0. Let P_+ denote the set of predictable functions (s, x; ω) → g(s, x; ω) such that ||g||_+ < ∞, where

||g||_+^2 = E( ∫_0^T ds ∫_{R^d} Γ(dz) ∫_{R^d} dy |g(s, y + z)| |g(s, y)| ).

Recall [22] that P_+ is the completion of the set of elementary functions for the norm ||·||_+. For g ∈ P_+, Walsh's stochastic integral g · M is well-defined and is a worthy martingale measure with covariation measure g(s, x) g(s, y) Q(ds, dx, dy) and dominating measure |g(s, x)| |g(s, y)| Q(ds, dx, dy). For a deterministic real-valued function (s, x) → g(s, x) and a real-valued stochastic process (Z(t, x), (t, x) ∈ R_+ × R^d), consider the following hypotheses (T > 0 is fixed):

(G1) for 0 ≤ s ≤ T, g(s, ·) ∈ C^∞(R^d), and g(s, ·) and its partial derivatives are bounded uniformly in s ∈ [0, T];

(G2) for 0 ≤ s ≤ T, Z(s, ·) ∈ C^∞(R^d) a.s., there is a compact set K ⊂ R^d such that supp Z(s, ·) ⊂ K for 0 ≤ s ≤ T, and s → Z(s, ·) is mean-square continuous from [0, T] into L^2(R^d);

(G3) Z is adapted, that is, Z(s, ·) is F_s-measurable for 0 ≤ s ≤ T.
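For orientation, on an elementary function Walsh's integral has the following explicit form (a standard fact from [22]); the integral of a general g ∈ P_+ is then obtained by completion under ||·||_+:

```latex
% For g(s,x;\omega) = 1_{(a,b]}(s)\,1_{A}(x)\,X(\omega), the martingale measure g \cdot M is
(g \cdot M)_{t}(B) \;=\; X\,\bigl(M_{t\wedge b}(A\cap B) - M_{t\wedge a}(A\cap B)\bigr),
\qquad B \in \mathcal{B}_{b}(\mathbb{R}^{d}),\; 0 \le t \le T.
```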
Lemma 1 Under hypotheses (G1), (G2) and (G3), for all x ∈ R^d, the function defined by (s, y; ω) → g(s, x − y)Z(s, y; ω) belongs to P_+, and so the stochastic integral

v_{g,Z}(x) = ∫_0^T ∫_{R^d} g(s, x − y) Z(s, y) M(ds, dy)

is well-defined. Proof. Because g(s, ·) is bounded uniformly in s by (G1), ||g(s, x − ·)Z(s, ·)||_+^2 is bounded by a constant times

E( ∫_0^T ds ∫_{R^d} Γ(dz) (|Z(s, ·)| * |Z̃(s, ·)|)(z) ).

By (G2), the inner integral can be taken over K − K = {z − y : z ∈ K, y ∈ K}, and the sup-norm of the convolution is bounded by ||Z(s, ·)||^2_{L^2(R^d)} by (2.1); moreover, s → E(||Z(s, ·)||^2_{L^2(R^d)}) is continuous by (G2). Therefore, v_{g,Z}(x) will be well-defined provided we show that (s, y, ω) → g(s, x − y)Z(s, y; ω) is predictable, or equivalently, that (s, y, ω) → Z(s, y; ω) is predictable.
For this, set t_j^n = jT 2^{−n} and

Z_n(s, x) = Σ_{j=0}^{2^n − 1} 1_{(t_j^n, t_{j+1}^n]}(s) Z(t_j^n, x).

Observe that ||Z_n||_+ < ∞, so Z_n ∈ P_+, since this process, which is adapted, continuous in x and left-continuous in s, is clearly predictable. Further, ||Z − Z_n||_+ → 0: the integrand in ||Z − Z_n||_+^2 converges to 0 and is uniformly bounded over [0, T] by (G2), so this expression converges to 0 as n → ∞. Therefore, Z is predictable.
Since the covariation measure of M is Q, this equals (2.4). The inner integral is equal to ((g(s, x − ·)Z(s, ·)) * (g(s, x − ·)Z(s, ·))~)(−z), and since this function belongs to S(R^d) by (G1) and (G2), (2.4) can be rewritten, by (2.2), as an integral against the spectral measure µ. Because the Fourier transform takes products to convolutions, Plancherel's theorem applies; the minus sign can be changed to a plus, and using the change of variables ξ = η + ξ′ (η fixed), we find that (2.3) holds.
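The computation in this paragraph can be summarized as follows. We assume the convention Fϕ(ξ) = ∫_{R^d} e^{−iξ·x} ϕ(x) dx, under which ∫ (f * f̃) dΓ = ∫ |Ff|^2 dµ; the constant c_d = (2π)^{−d} comes from Plancherel's theorem in the x-variable. This is a sketch, with constants depending only on the Fourier normalization:

```latex
% Writing \varphi_{s,x}(y) = g(s,x-y)\,Z(s,y), the covariance \Gamma = \mathcal{F}\mu gives
E\bigl(\|v_{g,Z}\|^{2}_{L^{2}(\mathbb{R}^{d})}\bigr)
  \;=\; E\int_{0}^{T}\! ds \int_{\mathbb{R}^{d}}\! dx \int_{\mathbb{R}^{d}} \mu(d\xi)\,
    \lvert \mathcal{F}\varphi_{s,x}(\xi)\rvert^{2}
% and, since \mathcal{F}_{x}\bigl[x \mapsto \mathcal{F}\varphi_{s,x}(\xi)\bigr](\eta)
%          = \mathcal{F}g(s)(\eta)\,\mathcal{F}Z(s)(\xi+\eta), Plancherel in x yields
  \;=\; c_{d}\, E\int_{0}^{T}\! ds \int_{\mathbb{R}^{d}} \mu(d\xi) \int_{\mathbb{R}^{d}} d\eta\,
    \lvert \mathcal{F}g(s)(\eta)\rvert^{2}\, \lvert \mathcal{F}Z(s)(\xi+\eta)\rvert^{2}.
```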
By Lemma 1, Z → v_{g,Z} defines an isometry from (E, ||·||_g) into L^2(Ω × R^d, dP × dx). Therefore, this isometry extends to the closure of (E, ||·||_g) in P, which we now identify.

Lemma 3 P is contained in the closure of (E, ||·||_g).
Proof. Fix ψ ∈ C_0^∞(R^d) such that ψ ≥ 0, the support of ψ is contained in the unit ball of R^d and ∫_{R^d} ψ(x) dx = 1. For n ≥ 1, set ψ_n(x) = n^d ψ(nx).
Fix Z ∈ P, and show that Z belongs to the completion of E in ||·||_g. Set Z_n(s, x) = Z(s, x) 1_{[−n,n]^d}(x) and Z_{n,m}(s, ·) = Z_n(s, ·) * ψ_m.
We first show that Z_{n,m} ∈ E, that is, that (G2) holds for Z_{n,m}. Clearly, Z_{n,m}(s, ·) ∈ C_0^∞(R^d), Z_{n,m}(s, ·) is F_s-measurable by (G5), and there is a compact set K_{n,m} ⊂ R^d such that supp Z_{n,m}(s, ·) ⊂ K_{n,m} for 0 ≤ s ≤ T. Further, since convolution with ψ_m is a contraction on L^2(R^d), s → Z_{n,m}(s, ·) is mean-square continuous by (G5). Therefore, Z_{n,m} ∈ E.
We now show that, for n fixed, ||Z_n − Z_{n,m}||_g → 0 as m → ∞. Because |1 − F ψ_m(ξ)|^2 ≤ 4 and converges pointwise to 0 as m → ∞, we can apply the Dominated Convergence Theorem to see that, for n fixed, lim_{m→∞} ||Z_n − Z_{n,m}||_g = 0.
Therefore, Z_n belongs to the completion of E in ||·||_g. We now show that ||Z − Z_n||_g → 0 as n → ∞. Since Ĩ_{g,Z} < ∞, the Dominated Convergence Theorem implies that lim_{n→∞} ||Z − Z_n||_g = 0, and therefore Z belongs to the completion of E in ||·||_g. Lemma 3 is proved.
Remark 4 Lemma 3 allows us to define the stochastic integral v_{g,Z} = g · M^Z provided g satisfies (G1) and (G4), and Z satisfies (G5). The key property of this stochastic integral is the isometry (2.3). We now proceed with a further extension of this stochastic integral, by extending the map g → v_{g,Z} to a more general class of g.
Fix Z ∈ P. Given a function s → G(s) ∈ S′(R^d), consider the two properties (G6) and (G7). Notice that I_{G,Z} ≤ Ĩ_{G,Z} < ∞ by (G5) and (G6). By Remark 4, the map G → v_{G,Z} is an isometry from (H, ||·||_Z) into L^2(Ω × R^d, dP × dx). Therefore, this isometry extends to the closure of (H, ||·||_Z) in G.

Lemma 5 G is contained in the closure of (H, ||·||_Z).

Proof. Fix s → G(s) in G. Let ψ_n be as in the proof of Lemma 3. Set G_n(s, ·) = G(s) * ψ_n.

Observe that ||G − G_n||_Z^2 is an integral whose last factor is |1 − F ψ_n(ξ)|^2. This factor is bounded by 4 and has limit 0 as n → ∞, and I_{G,Z} < ∞, so the Dominated Convergence Theorem implies that lim_{n→∞} ||G − G_n||_Z = 0. This proves the lemma.
Theorem 6 Fix Z such that (G5) holds, and s → G(s) such that (G6) and (G7) hold. Then the stochastic integral v_{G,Z} = G · M^Z is well-defined and satisfies the isometry property E(||v_{G,Z}||^2_{L^2(R^d)}) = ||G||_Z^2. It is natural to use the notation

v_{G,Z}(x) = ∫_0^T ∫_{R^d} G(s, x − y) Z(s, y) M(ds, dy).

Proof of Theorem 6. The statement is an immediate consequence of Lemma 5.
Remark 7 Fix a deterministic function ψ ∈ L^2(R^d) and set

X_t = ∫_{R^d} dx ψ(x) ∫_0^t ∫_{R^d} G(s, x − y) Z(s, y) M(ds, dy), 0 ≤ t ≤ T.

It is not difficult to check that (X_t, 0 ≤ t ≤ T) is a (real-valued) martingale.

Examples
In this section, we give a class of examples to which Theorem 6 applies. Fix an integer k ≥ 1 and let G be the Green's function of the p.d.e.

∂^2 u/∂t^2 + (−Δ)^k u = 0.        (3.1)
As in [4], Section 3, F G(t)(ξ) is easily computed, and one finds

F G(t)(ξ) = sin(t |ξ|^k) / |ξ|^k.

According to [4], Theorem 11 (see also Remark 12 in that paper), the linear s.p.d.e. (3.2), that is, equation (3.1) with the right-hand side replaced by the noise Ḟ, with vanishing initial conditions, has a process solution if and only if

∫_{R^d} µ(dξ) / (1 + |ξ|^{2k}) < ∞.        (3.3)

It is therefore natural to assume this condition in order to study non-linear forms of (3.2). In order to be able to use Theorem 6, we need the following fact.

Lemma 8 Hypotheses (G6) and (G7) hold for G.

Proof. We begin with (G7), which is checked directly on test functions ψ ∈ C_0^∞(R^d). Turning to (G6), we first establish a representation of F G in which G_{d,k} is a suitable function and p(u, x) is the density of an N(0, uI)-random vector (see [19], Section 5). In particular, it is shown in [7] and [19] that (3.5) holds, and the right-hand side there is finite by (3.3). However, the proofs in [19] and [7] use monotone convergence, which is not applicable in the presence of the oscillating function χ_ξ. As in [7], because e^{−t|·|^2} has rapid decrease, the relevant integrals converge absolutely. Notice that G_{d,k} ≥ 0, and so, by formula (5.5) in [19], we can use monotone convergence and then the Dominated Convergence Theorem to conclude that (G6) holds, by (3.5) and (3.3). The lemma is proved.
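As a concrete sanity check of condition (3.3), consider the wave equation (k = 1) driven by Riesz-kernel noise, with µ(dξ) = |ξ|^{−(d−α)} dξ. Passing to radial coordinates, the condition reduces, up to constants, to ∫_0^∞ r^{α−1}(1 + r^2)^{−1} dr < ∞, which holds exactly when 0 < α < 2 and then equals π / (2 sin(πα/2)). The script below is a numerical illustration only (the helper name is ours): it folds the tail onto [0, 1] via r → 1/r, removes the endpoint singularity via r = u^2, and compares a Simpson quadrature with the closed form.

```python
import math

def riesz_wave_integral(alpha, n=20001):
    """Evaluate I(alpha) = int_0^oo r^{alpha-1} / (1 + r^2) dr for 0 < alpha < 2,
    the radial form (up to constants) of condition (3.3) for k = 1 with
    Riesz-kernel noise mu(dxi) = |xi|^{-(d-alpha)} dxi.

    Folding [1, oo) onto [0, 1] via r -> 1/r and substituting r = u^2 gives a
    regular integrand on [0, 1] for 1/2 <= alpha <= 3/2."""
    if not 0 < alpha < 2:
        raise ValueError("integral diverges unless 0 < alpha < 2")

    def h(u):
        # Integrand after both substitutions.
        return 2.0 * (u ** (2 * alpha - 1) + u ** (3 - 2 * alpha)) / (1.0 + u ** 4)

    # Composite Simpson rule on [0, 1] with n nodes (n odd).
    step = 1.0 / (n - 1)
    total = h(0.0) + h(1.0)
    for i in range(1, n - 1):
        total += h(i * step) * (4 if i % 2 == 1 else 2)
    return total * step / 3.0

# Closed form for comparison: I(alpha) = pi / (2 sin(pi alpha / 2)).
for a in (1.0, 1.5):
    exact = math.pi / (2.0 * math.sin(math.pi * a / 2.0))
    print(a, riesz_wave_integral(a), exact)
```

As α increases toward 2 the integral blows up, matching the divergence of (3.3) for rougher noise.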
We say that a process (u(t, ·), 0 ≤ t ≤ T) with values in L^2(R^d) is a solution of the non-linear equation (4.2) if, for all t ∈ [0, T], a.s.,

u(t, ·) = (d/dt)G(t) * v_0 + G(t) * ṽ_0 + ∫_0^t ∫_{R^d} G(t − s, · − y) α(u(s, y)) M(ds, dy),

where G is the Green's function of (3.1) and (v_0, ṽ_0) are the initial conditions. The third term is interpreted as the stochastic integral from Theorem 6, so (u(s, ·)) must be adapted and mean-square continuous from [0, T] into L^2(R^d).
Proof. We will follow a standard Picard iteration scheme. Set

u_0(t, ·) = (d/dt)G(t) * v_0 + G(t) * ṽ_0.

Notice that u_0(t, ·) ∈ L^2(R^d). Indeed, |F((d/dt)G(t))(ξ)| ≤ 1 and |F G(t)(ξ)| ≤ t, so Plancherel's theorem gives ||u_0(t, ·)||_{L^2(R^d)} ≤ ||v_0||_{L^2(R^d)} + t ||ṽ_0||_{L^2(R^d)}. One checks similarly that t → u_0(t, ·) is continuous from [0, T] into L^2(R^d), as is easily seen by proceeding as in (4.4) and using dominated convergence; similarly, lim_{t→s} ||G(t) * ṽ_0 − G(s) * ṽ_0||_{L^2(R^d)} = 0.
For n ≥ 0, assume now by induction that we have defined an adapted and mean-square continuous process (u_n(s, ·), 0 ≤ s ≤ T) with values in L^2(R^d), and define u_{n+1} by (4.6). We note that (α(u_n(s, ·)), 0 ≤ s ≤ T) is adapted and mean-square continuous because of the Lipschitz property (4.1) of α, so the stochastic integral in (4.6) is well-defined by Lemma 8 and Theorem 6. Define J(s) by (4.7). By (3.3), (3.5) and (3.6), sup_{0≤s≤T} J(s) is bounded by some C < ∞, so by Theorem 6 and using (4.1), E(||u_{n+1}(t, ·)||^2_{L^2(R^d)}) < ∞. Therefore, u_{n+1}(t, ·) takes its values in L^2(R^d). By Lemma 10 below, (u_{n+1}(t, ·), 0 ≤ t ≤ T) is mean-square continuous and this process is adapted, so the sequence (u_n, n ∈ N) is well-defined. By Gronwall's lemma, we have in fact

sup_{n ∈ N} sup_{0≤t≤T} E(||u_n(t, ·)||^2_{L^2(R^d)}) < ∞.

We now show that the sequence (u_n(t, ·), n ≥ 0) converges. Let M_n(t) = sup_{0≤s≤t} E(||u_{n+1}(s, ·) − u_n(s, ·)||^2_{L^2(R^d)}). Using the Lipschitz property of α(·), (4.5) and (4.6), we see that

M_n(t) ≤ C ∫_0^t M_{n−1}(s) ds.

In particular, (u_n(t, ·), n ∈ N) converges in L^2(Ω × R^d, dP × dx), uniformly in t ∈ [0, T], to a limit u(t, ·). Because each u_n is mean-square continuous and the convergence is uniform in t, (u(t, ·), 0 ≤ t ≤ T) is also mean-square continuous, and it is clearly adapted. This process is easily seen to satisfy (4.3), and uniqueness is checked by a standard argument.
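The structure of this Picard scheme can be illustrated on a deterministic toy problem: the integral equation u(t) = u_0 + ∫_0^t α(u(s)) ds with Lipschitz α, whose iterates u_{n+1}(t) = u_0 + ∫_0^t α(u_n(s)) ds converge uniformly on [0, T], exactly as the L^2-valued iterates do above. The sketch below is our own illustration of the iteration (not the s.p.d.e. scheme itself), with the time integral discretized by the rectangle rule.

```python
import math

def picard_iterates(alpha, u0, T=1.0, n_steps=1000, n_iter=25):
    """Picard iteration for u(t) = u0 + int_0^t alpha(u(s)) ds, discretized on a
    uniform grid with left-endpoint (rectangle) quadrature.  Returns the final
    iterate as a list of values on the grid t_j = j * T / n_steps."""
    dt = T / n_steps
    u = [u0] * (n_steps + 1)          # initial iterate: the constant function u0
    for _ in range(n_iter):
        new = [u0]
        acc = 0.0
        for j in range(n_steps):
            acc += alpha(u[j]) * dt   # left-endpoint rule for the time integral
            new.append(u0 + acc)
        u = new
    return u

# With alpha(x) = x and u0 = 1, the fixed point is u(t) = e^t, so u[-1] should be
# close to e, up to the rectangle-rule discretization error.
u = picard_iterates(lambda x: x, 1.0, T=1.0)
print(abs(u[-1] - math.e))
```

As in the proof, each iterate is built from the previous one through a Lipschitz map, and the successive differences contract like C^n t^n / n!.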
The following lemma was used in the proof of Theorem 9.
Proof. Fix n ≥ 0. It was shown in the proof of Theorem 9 that t → u_0(t, ·) is mean-square continuous, so we establish this property for t → v_{n+1}(t, ·), defined in (4.6). Observe that, for h > 0,

E(||v_{n+1}(t + h, ·) − v_{n+1}(t, ·)||^2_{L^2(R^d)}) ≤ 2(I_1 + I_2),

where I_1 is the contribution of the time interval [t, t + h] and I_2 is the contribution of the increment of the integrand over [0, t]. The squared ratio appearing in I_2 is bounded uniformly in h, so I_2 converges to 0 as h → 0 by the dominated convergence theorem, and I_1 converges to 0 because the integrand is bounded. This proves that t → v_{n+1}(t, ·) is mean-square right-continuous, and left-continuity is proved in the same way.

The wave equation in weighted L^2-spaces
In the case of the wave equation (set k = 1 in (4.2)), we can consider a more general class of non-linearities α(·) than in the previous section. This is because of the compact support property of the Green's function of the wave equation.
Fix K > d and let θ : R^d → R be a smooth strictly positive function for which there are constants 0 < c < C such that

c (1 + |x|^2)^{−K/2} ≤ θ(x) ≤ C (1 + |x|^2)^{−K/2}, x ∈ R^d.

Let L^2_θ denote the set of measurable functions f : R^d → R such that

||f||^2_{L^2_θ} = ∫_{R^d} f(x)^2 θ(x) dx < ∞,

and observe that there are positive constants, which we again denote c and C, such that

c θ(x) ≤ θ(x − y) ≤ C θ(x), for all x ∈ R^d and |y| ≤ T.

For a process (Z(s, ·), 0 ≤ s ≤ T), consider the following hypothesis: (G9) For 0 ≤ s ≤ T, Z(s, ·) ∈ L^2_θ a.s., Z(s, ·) is F_s-measurable, and s → Z(s, ·) is mean-square continuous from [0, T] into L^2_θ. Set E_θ = {Z : (G9) holds, and there is K ⊂ R^d compact such that for 0 ≤ s ≤ T, supp Z(s, ·) ⊂ K}.
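For the concrete weight θ(x) = (1 + |x|^2)^{−K/2} (any weight comparable to it behaves identically), Peetre's inequality 1 + |x|^2 ≤ 2 (1 + |x − y|^2)(1 + |y|^2) yields the translation bound that, combined with the compact support of G(s) for s ≤ T, makes the weighted space usable:

```latex
% From 1+|x|^2 \le 2\,(1+|x-y|^2)(1+|y|^2) and \theta(x) = (1+|x|^2)^{-K/2}:
\theta(x-y) \;\le\; 2^{K/2}\,(1+|y|^2)^{K/2}\,\theta(x),
\qquad x, y \in \mathbb{R}^{d},
% so translations by |y| \le T distort the weight by at most a fixed constant.
```

Moreover, since K > d, θ is integrable, so constant functions belong to L^2_θ; this is why a non-linearity with α(0) ≠ 0 can be accommodated in L^2_θ, whereas in L^2(R^d) it cannot.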
Notice that for Z ∈ E_θ, Z(s, ·) ∈ L^2(R^d), because θ(·) is bounded below on K by a positive constant, and, for the same reason, s → Z(s, ·) is mean-square continuous from [0, T] into L^2(R^d). Therefore (G5) holds, and v_{G,Z} is well-defined by Theorem 6.
Lemma 11 For G as above and Z ∈ E_θ, v_{G,Z} ∈ L^2_θ a.s., and E(||v_{G,Z}||^2_{L^2_θ}) is bounded by an expression involving J(s), where J(s) is defined in (4.7).
Using the first of these inequalities, (4.8) is replaced by its analogue in L^2_θ. Therefore u_{n+1}(t, ·) takes its values in L^2_θ. The remainder of the proof is unchanged, except that ||·||_{L^2(R^d)} must be replaced by ||·||_{L^2_θ}. This proves Theorem 13.