A Note on Krylov's L_p-Theory for Systems of SPDEs

We extend Krylov's L_p-solvability theory to the Cauchy problem for systems of parabolic stochastic partial differential equations (SPDEs). Some additional integrability and regularity properties are also presented.


Introduction
A comprehensive theory of second-order quasi-linear parabolic stochastic differential equations in the Bessel classes H^s_p(R^d) was developed by N. V. Krylov in [1], [2]. This theory applies to a large class of important equations, including the equations of nonlinear filtering, the stochastic heat equation with a nonlinear noise term, etc. The main results of the theory are sharp in the sense that they cannot be improved under the same assumptions.
In this paper we extend Krylov's L_p-theory to parabolic systems of quasilinear stochastic PDEs. Specifically, we consider the system of equations (1.1), where W is a cylindrical Wiener process in a Hilbert space. In (1.1) and everywhere below, summation with respect to repeated indices is assumed.
Among other reasons, this research was motivated by our interest in stochastic fluid mechanics (see e.g. [6], [7]). While the results below do not apply directly to the stochastic Navier-Stokes equations, they provide important estimates for solutions of suitable approximations of the latter.
The structure of the paper is as follows.
In Section 2 we present a simple and straightforward construction of stochastic integrals for H^s_p-valued integrands (for related results see [3], [4]). In this section we also derive an Itô formula for L_p-norms of H^s_p-valued semimartingales. In Section 3 we present some auxiliary results about pointwise multipliers in H^s_p needed for the derivation of a priori estimates for (1.1) (see Lemma 8). We give a more precise version of Krylov's Lemma 5.2 in [2], with an estimate that gives a positive answer to the question raised by Krylov in Remark 6.5 of [2].
In Section 4, following Krylov's ideas, we derive the main results on the existence and uniqueness of solutions to equation (1.1). The results of the last subsection, in particular those concerning the regularity of solutions (Proposition 1, Corollary 3, Corollary 4), are new not only for systems but also for the scalar equations considered in [1], [2]. In addition, in Section 4 we obtain some new integrability properties of the solution (Propositions 2 and 3, Corollaries 3 and 4).
To conclude the Introduction, we outline some notation which will be used throughout the paper. Let us fix a separable Hilbert space Y. The scalar product of x, y ∈ Y will be denoted by x · y.
If u is a function on R^d, the following notational conventions will be used for its partial derivatives: ∂_i u = ∂u/∂x_i, ∂²_{ij} u = ∂²u/∂x_i ∂x_j, ∂_t u = ∂u/∂t, ∇u = ∂u = (∂_1 u, …, ∂_d u), and ∂²u = (∂²_{ij} u) denotes the Hessian matrix of second derivatives. For a multi-index α = (α_1, …, α_d), we write ∂^α u = ∂^{α_1}_1 ⋯ ∂^{α_d}_d u. Let C^∞_0 = C^∞_0(R^d) be the set of all infinitely differentiable functions on R^d with compact support.
Obviously, the spaces C^∞_0, H^s_p(R^d), and H^s_p(R^d, Y) can be extended to vector functions (denoted by bold-faced letters). For example, the space of all vector functions u = (u^1, …, u^d) such that Λ^s u^l ∈ L_p, l = 1, …, d, with the finite norm |u|_{s,p} = (Σ_l |u^l|^p_{s,p})^{1/p}, is denoted by H^s_p = H^s_p(R^d). Similarly, we denote by H^s_p(Y) = H^s_p(R^d, Y) the space of all vector functions g = (g^l)_{1≤l≤d}, with Y-valued components g^l, 1 ≤ l ≤ d, such that ||g||_{s,p} = (Σ_l ||g^l||^p_{s,p})^{1/p} < ∞. The set of all infinitely differentiable vector functions u = (u^1, …, u^d) on R^d with compact support will be denoted by C^∞_0 = C^∞_0(R^d). Similar notation, ⟨φ, ψ⟩_s and ⟨f, φ⟩_{s,Y}, will be used for scalar functions.

Stochastic integrals
Let (Ω, F, P) be a probability space with a filtration F of right-continuous σ-algebras (F_t)_{t≥0}. All the σ-algebras are assumed to be P-completed. Let W(t) be an F-adapted cylindrical Brownian motion in Y. In this section we construct a natural stochastic integral with respect to W(t) for F-adapted H^s_p(Y)-valued integrands. For φ ∈ C^∞_0 and q = p/(p − 1), we can define the stochastic integral ∫_0^t ⟨g(r), φ⟩_{s,Y} · dW(r). Indeed, by Hölder's inequality one obtains the bound (2.1). Owing to (2.1), the stochastic integral ∫_0^t ⟨g(r), φ⟩_{s,Y} · dW(r) is well defined (see e.g. [9] or [5]). Of course, the integral above is defined as a linear functional on H^{-s}_q. In fact, it can be characterized more precisely. Specifically, the following result holds.
Moreover, for each T > 0 there exists a constant C such that the corresponding estimate holds for each stopping time τ ≤ T. To prove the theorem we will need the following technical result.
Lemma 1 Assume g ∈ I^{s,p}. Then there is a sequence of F-adapted H^s_p(R^d)-valued processes g_n(r) = g_n(r, x) such that, P-a.s., g_n(r, x) is smooth in x for each n; the uniqueness then follows. Let ϕ be a nonnegative function such that ∫ ϕ dx = 1. Note that g_n is a smooth bounded Y-valued function. Moreover, by Hölder's inequality, it is readily checked that for all r, ω, and p ≥ 2 the stated bounds hold. Analogously, one can prove (b). Now, the statement follows by Lebesgue's dominated convergence theorem.
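The smoothing by convolution used in this lemma can be illustrated numerically. Below is a minimal one-dimensional sketch (the particular bump ϕ and the grid are illustrative choices, not the paper's): mollifying a discontinuous function with a normalized compactly supported kernel reproduces it away from the discontinuity.

```python
import numpy as np

def mollify(g, x, eps):
    """Smooth g by convolution with phi_eps(x) = eps^{-1} phi(x/eps), where phi
    is a nonnegative C^infty bump with integral 1, as in Lemma 1."""
    def phi(t):
        # standard smooth bump supported on (-1, 1)
        out = np.zeros_like(t)
        inside = np.abs(t) < 1
        out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
        return out

    dx = x[1] - x[0]
    kern = phi(x / eps) / eps
    kern /= kern.sum() * dx            # normalize so the kernel integrates to 1
    return np.convolve(g, kern, mode="same") * dx

x = np.linspace(-4, 4, 2001)
g = np.sign(x)                          # a discontinuous integrand component
g_eps = mollify(g, x, eps=0.2)
# the mollified function agrees with g away from the jump and is small at it
assert abs(g_eps[1800] - 1.0) < 1e-6
assert abs(g_eps[1000]) < 0.05
```

As eps decreases, g_eps converges to g in every L_p norm; this is the mechanism behind the dominated-convergence step closing the proof.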
Proof of Theorem 1 Let g_n be the sequence from Lemma 1. Since the relevant bound holds for every x and t, the integral is well defined for each x (see e.g. [9] or [5]). It is not difficult to show that the convergence holds for every x. Let τ ≤ T be a stopping time such that the corresponding quantity is bounded. Then, for all u ≤ t ≤ T, the estimate holds, and by Kolmogorov's criterion M^s_{n,k} has a continuous L_p-valued modification. On the other hand, for every t > 0, P-a.s. one has the stated identity, where φ_m ∈ C^∞_0 is any uniformly bounded sequence converging pointwise to 1.
Let τ be a stopping time such that, for all t > 0, P-a.s., the stated condition holds.

Proof
We remark that all the integrals in (2.6) are well defined. For example, let us prove that the duality ⟨|u(r)|^{p−2} u(r), a(r)⟩_{1−n} makes sense if n = 0. Since a(r) ∈ H^{−1}_p, there exist functions a^i(r) ∈ L_p such that a(r) = Σ_{i=0}^d ∂_i a^i(r), where ∂_0 = 1. Now it is not difficult to see (2.7); the right-hand side of the equality is finite owing to an obvious identity. Let ϕ ∈ C^∞_0 be a nonnegative function such that ∫ ϕ dx = 1. For ε > 0, write u_ε = ⟨u(t), ϕ_ε(x − ·)⟩. Similarly, we write b = ⟨a(t), ϕ_ε(x − ·)⟩. For all x and t, we have (2.8), where (φ_m) is a uniformly bounded sequence converging pointwise to 1. By Itô's formula, we obtain the corresponding identity, with error terms vanishing as ε → 0. We complete the proof by taking integrals of both sides of (2.8) and passing to the limit as ε → 0, and then as m → ∞.
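The first step of this proof can be written out explicitly. The following is a hedged reconstruction of the identity (2.7), using the representation a(r) = Σ_{i=0}^d ∂_i a^i(r) with ∂_0 = 1 and integrating by parts (the exact form in the original may differ in notation):

```latex
\bigl\langle |u(r)|^{p-2}u(r),\; a(r) \bigr\rangle
  = \bigl\langle |u(r)|^{p-2}u(r),\; a^{0}(r) \bigr\rangle
  - \sum_{i=1}^{d} \bigl\langle \partial_i\bigl(|u(r)|^{p-2}u(r)\bigr),\; a^{i}(r) \bigr\rangle .
```

Each pairing on the right is a duality between L_q and L_p, q = p/(p − 1): by Hölder's inequality, |u|^{p−2}∂_i u ∈ L_q whenever u ∈ H^1_p, since |u|^{p−2} ∈ L_{p/(p−2)} and ∂_i u ∈ L_p.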

Pointwise multipliers in H^s_p
where F is the Fourier transform and F −1 is the inverse Fourier transform:

Define the operators
Consider the norms ||u||~_{s,p} on H^s_p(Y). Proof For each multi-index µ and s ≥ 0, the corresponding bound holds. Therefore, the equivalence of ||u||_{s,p} and ||u||~_{s,p} for p ∈ (1, ∞) follows from Theorem 6.1.6 in [10].
The part of the statement regarding the case s > 0, p ∈ [1, ∞] follows by Theorem 6.3.2 in [10].
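The action of Λ^s = (1 − ∆)^{s/2} as a Fourier multiplier can be illustrated numerically. The sketch below uses a periodic one-dimensional discretization as a stand-in for the whole-space operator (an assumption for illustration only; the paper works on R^d).

```python
import numpy as np

def bessel_potential(u, L, s):
    """Apply Lambda^s = (1 - Delta)^{s/2} to a periodic sample of u on [0, L),
    via the Fourier multiplier (1 + |xi|^2)^{s/2}."""
    n = len(u)
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular frequencies
    mult = (1.0 + xi ** 2) ** (s / 2.0)
    return np.real(np.fft.ifft(mult * np.fft.fft(u)))

L = 2 * np.pi
x = np.linspace(0, L, 256, endpoint=False)
u = np.sin(3 * x)
# Lambda^0 is the identity, and Lambda^{-s} inverts Lambda^s
assert np.allclose(bessel_potential(u, L, 0.0), u)
v = bessel_potential(bessel_potential(u, L, 1.5), L, -1.5)
assert np.allclose(v, u)
# for the single mode u = sin(3x), Lambda^2 u = (1 + 9) sin(3x)
assert np.allclose(bessel_potential(u, L, 2.0), 10.0 * u)
```

In this picture the H^s_p norm |u|_{s,p} = |Λ^s u|_p is simply the L_p norm of the filtered function, which is how the norm equivalences of this section are used in practice.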
It is well known (and easily seen) that there is a constant for which the stated representation holds; i.e., the operator in question is the generator of an s-stable stochastic process.
where c(s) is a positive constant depending on s.
Proof Indeed, there is a constant N such that the representation (3.2) holds. One can easily see this by taking the Fourier transform of (3.2) (see [11], Chapter II, Section 2). Also, it is easily seen that (3.3) holds for some constant C. Using Minkowski's inequality, we obtain the desired estimate from (3.2) and (3.3).
Also, we will need some spaces of Y-valued continuous functions. For m = 1, 2, 3, …, we define the corresponding norms, where s = [s] + {s}, [s] is an integer, and 0 ≤ {s} < 1. For an integer s > 0, we denote the space accordingly. Proof For a non-integer s, C^s is Zygmund's space (see Theorem 2.5.7 and Corollary 2.5.12 in [8]). Therefore statement a) follows from Theorem 6.2.4 in [10].
Let s ∈ (0, 2], u ∈ C^{s+ε}(Y). We can assume that s + ε is not an integer and s < 2. By Remark 2, the required bound holds, so statement b) follows by Lemma 4. Define the spaces B^s and denote the corresponding norms accordingly; when the context is clear we write simply B^s. The main statement we need is the following lemma.
Then for every s there exist constants s_0 < s and N such that the stated multiplier estimate holds, where a and all its derivatives are bounded. Then, by Remark 2, the claim holds for s = 0. If s ∈ (0, 2) and s ≠ 1, we have by Lemma 5, for each s_0 ∈ ((s − 1)_+, s), the corresponding bound. In the case s = 2, we have ∆(au) = a∆u + u∆a + 2(∇a)(∇u), and the estimate follows. Therefore both parts of our statement hold for s ∈ [0, 2]. For an arbitrary s > 2, we can find a positive integer m such that s = 2m + r, r ∈ (0, 2]. So we have found that for each s > 0 there is a constant C such that the multiplier bound holds. Since multiplication by a is a self-adjoint operation, it follows by duality that for each s ∈ (−∞, ∞) the analogous bound holds for some constant, where the function H is a linear combination of products of the form (∂^ν h)(∂^µ a), with µ ≠ 0 and |ν| + |µ| = 2m. Since ∂^µ a ∈ B^{|s|+κ−|µ|}, using (3.5) we obtain the required estimate. If s < 0 is not an integer, then there is a positive integer m such that the corresponding representation holds, where g is a linear combination of products of the same form. Also by (3.5), the remaining terms are controlled. By Lemma 5, (3.5), and Minkowski's inequality, we obtain the claim.
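The Leibniz identity used in the s = 2 case can be checked numerically in one dimension, where it reads (au)'' = a''u + 2a'u' + au''. The sketch below compares exact closed-form derivatives on a grid (the particular functions a and u are illustrative choices).

```python
import numpy as np

# a(x) = sin x, u(x) = x^2, with their exact first and second derivatives
x = np.linspace(-2.0, 2.0, 101)
a, da, dda = np.sin(x), np.cos(x), -np.sin(x)
u, du, ddu = x ** 2, 2 * x, np.full_like(x, 2.0)

# left-hand side: (x^2 sin x)'' computed analytically,
# (x^2 sin x)'' = 2 sin x + 4 x cos x - x^2 sin x
lhs = 2 * np.sin(x) + 4 * x * np.cos(x) - x ** 2 * np.sin(x)
# right-hand side: the Leibniz expansion a''u + 2a'u' + au''
rhs = dda * u + 2 * da * du + a * ddu
assert np.allclose(lhs, rhs)
```

The point of the lemma is that each term on the right is controlled by a B-norm of a times an H^s_p-norm of u, which is what propagates the multiplier estimate from s ∈ [0, 2] to all s.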

Systems of SPDEs in Sobolev spaces
As in the previous section, let (Ω, F, P) be a probability space with a filtration F of right-continuous σ-algebras (F_t)_{t≥0}. All the σ-algebras are assumed to be P-completed. Let W(t) be an F-adapted cylindrical Brownian motion in Y. Let a = (a^{ij}(t, x))_{1≤i,j≤d} be a symmetric F-adapted matrix, let σ = σ(t) = (σ^k(t, x))_{1≤k≤d} be an F-adapted vector function with Y-valued components σ^k, and let u_0 = (u^l_0)_{1≤l≤d} be an F_0-measurable initial value. Everywhere in this section it is assumed that p ≥ 2.
Consider the following nonlinear system of equations on [0, ∞), with the coefficients as above. The following assumptions will be used in what follows, where K and δ are fixed strictly positive constants.

A3(s, p).
For every ε > 0, there exists a constant K_ε such that for any u, v the stated bound holds. Given a stopping time τ, we consider a stochastic interval, and the equality holds in H^{s−1}_p(R^d) for every t > 0, P-a.s. If τ = ∞, we simply say that u is an H^s_p-solution of equation (4.1).
Sometimes, when the context is clear, instead of "H^s_p-solution" we will simply say "solution". It is readily checked that all the integrals in (4.3) are well defined. For example, let us consider the stochastic integral. Since ∂_i is a bounded operator from H^s_p into H^{s−1}_p (see [8]), by Lemma 7 and Assumption A1(s, p) we have ||σ^k(r)∂_k u(r)||_{s,p} ≤ C ||u(r)||_{s+1,p} for r ≤ τ, P-a.s. By Assumptions A2(s, p) and A3(s, p), the integrand is admissible, and the integral is defined by Theorem 1.
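The chain of estimates behind this multiplier step can be written out as follows (a hedged sketch; the constant C absorbs the multiplier norm of σ supplied by A1(s, p) and Lemma 7):

```latex
\|\sigma^{k}(r)\,\partial_{k}u(r)\|_{s,p}
  \;\le\; C\,\|\partial_{k}u(r)\|_{s,p}
  \;\le\; C\,\|u(r)\|_{s+1,p},
\qquad r \le \tau,\ \mathbf{P}\text{-a.s.}
```

The first inequality is the pointwise-multiplier bound of Lemma 7, and the second is the boundedness of ∂_k from H^{s+1}_p into H^s_p.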
It is readily checked that the identity holds dr × dP-a.s.
Note that to prove the first equality one should first establish it for smooth functions and then obtain the general case by approximation. Thus, (4.5) implies (4.4). Now, by reversing the order of our arguments, one can easily show that (4.3) follows from (4.4).
The basic result of this section is given in the following theorem. The theorem will be proved in several steps. We begin with a simple particular case.
Then for each stopping time τ there is a unique H^s_p-solution u of the equation, and estimate (4.7) holds, where C = C(d, p, δ, K) does not depend on T and τ.
Proof The statement is a straightforward corollary of the results of [2]. Indeed, owing to our assumptions, one can treat each component u^l of u separately. The statement regarding existence follows directly from Theorem 4.10 in [2] applied with D(r) = D(r)1_{[[0,τ]]}(r). According to Lemma 4.7 in [2], the uniqueness is an obvious consequence of the corresponding result for the deterministic heat equation. In particular, we obtain (4.7) by taking λ = 1/p in (4.26) of [2].
To prove Theorem 3 in the general case, we will rely on two fundamental techniques: a partition of unity and the method of continuity. The same approach was used in [2] for scalar equations.
The next step is to derive a priori L_p-estimates for a solution of (4.1). Proof In order to use Theorem 8 we start with a standard partition of unity. Let ψ ∈ C^∞_0(R) be [0, 1]-valued and such that ψ(s) = 1 if |s| ≤ 5/8, and ψ(s) = 0 if |s| > 6/8. For an arbitrary but fixed κ > 0 we choose m such that κ < 2^{−m}. Consider a grid in R^d consisting of the points x_k with mesh 2^{−m}.

Lemma 8 Assume A, A1(s, p)-A3(s, p). Suppose that u is an H^s_p-solution of (4.1).
Obviously, Σ_k η_k = 1 in R^d, the derivatives ∂^µ η_k are bounded uniformly in k for every multi-index µ, and for each p ≥ 1 the corresponding estimate holds. We have the localized equation, where η̃_k(x) = η_k(5x/6) (notice that η̃_k(x) = 1 in V_k and η̃_k(x) = 0 if there is l such that |x^l − x^l_k| > 0.9 · 2^{−m}). According to Lemma 7, there are a constant C and s_0 < s such that the first commutator bound holds. Similarly, by Lemma 7 there is s_0 < s such that the second bound holds. It follows from the assumptions, (4.10), Lemma 7, and an interpolation theorem (see Lemma 6.7 in [2]) that for each ε there are κ > 0 and a constant C_ε such that the required estimate holds. Applying Lemma 8 to this equation we get v = 0, P-a.s.
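The normalization that makes the bumps η_k sum to 1 can be illustrated numerically. The sketch below uses a piecewise-linear cutoff as a stand-in for the C^∞ function ψ of the text (same plateau |s| ≤ 5/8 and support |s| ≤ 6/8); the construction of η_k by dividing each shifted bump by the total is the standard one.

```python
import numpy as np

def psi(s):
    """[0,1]-valued cutoff: psi = 1 for |s| <= 5/8, psi = 0 for |s| >= 6/8
    (piecewise-linear stand-in for the smooth psi of the text)."""
    return np.clip((6 / 8 - np.abs(s)) / (1 / 8), 0.0, 1.0)

m = 3
h = 2.0 ** (-m)                         # grid mesh 2^{-m}
centers = np.arange(-2, 2 + h, h)       # grid points x_k
x = np.linspace(-1, 1, 501)
bumps = np.array([psi((x - c) / h) for c in centers])
total = bumps.sum(axis=0)
eta = bumps / total                     # eta_k = psi((x - x_k)/h) / sum_j psi((x - x_j)/h)
assert np.all(total > 0)                # the bumps cover the region
assert np.allclose(eta.sum(axis=0), 1.0)
```

Since neighboring centers are only h apart while each bump is identically 1 on a plateau of radius (5/8)h, the denominator never vanishes, so the η_k are well defined and sum to 1.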

Remark 4
In fact, the uniqueness of the solution can be proved in a larger function class, similar to the one of Theorem 5.1 in [2]. For the sake of simplicity we will not address this problem in the present paper.
To complete the proof of Theorem 2 we apply the standard method of continuity (cf. Theorem 5.1 in [2]).
Proof of Theorem 2.
(Existence) Without any loss of generality we can assume u_0 = 0 (see the proof of Theorem 5.1 in [2]) and τ = ∞. Now let us take λ ∈ [0, 1] and consider equation (4.12) with zero initial condition. By Lemma 8, the a priori estimate (4.8) holds with the same constant C for all λ. Assume that for λ = λ_0 and any D, Q satisfying A3(s, p), equation (4.12) has a unique solution.
For other λ ∈ [0, 1] we rewrite (4.12) as follows. This equation can be solved by iteration: take u^0 = 0 and define u^{k+1} from u^k by (4.13). Fix an arbitrary stopping time τ ≤ T such that I(τ) is finite. Notice that u^1 and τ do not depend on λ, and (u^k) is a Cauchy sequence on [0, τ]. Therefore, there is a process u, continuous in t and H^s_p-valued, such that u^k → u. Obviously, u is a solution of (4.12) on [0, τ]. Since τ is an arbitrary stopping time such that I(τ) is finite, it follows that we have a solution for any |λ − λ_0| < C^{−1/p}/2 (assuming we have one for λ_0). For λ = 1 a solution exists by Theorem 3. So, in a finite number of steps starting from λ = 1, we reach λ = 0. This proves the statement.
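The method of continuity admits a transparent finite-dimensional sketch. Below, for a family A(λ) = (1 − λ)A_0 + λA_1 (the matrices, step size, and iteration count are illustrative assumptions), the equation A(λ)u = f is solved by the fixed-point iteration u ↦ A(λ_0)^{-1}(f + (A(λ_0) − A(λ))u), which uses only solvability at λ_0 and contracts when |λ − λ_0| is small, exactly as in the proof above.

```python
import numpy as np

A0 = np.eye(3)
B = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
A1 = A0 + 0.4 * B                      # ||0.4 B|| < 1, so every A(lam) is invertible
f = np.array([1.0, 2.0, 3.0])

def A(lam):
    return (1.0 - lam) * A0 + lam * A1

def step(lam, lam0, u, iters=200):
    """Solve A(lam) u = f using only a solver for A(lam0)."""
    inv0 = np.linalg.inv(A(lam0))      # the "solvability at lam0" hypothesis
    for _ in range(iters):
        u = inv0 @ (f + (A(lam0) - A(lam)) @ u)
    return u

# march lam across [0, 1] in steps small enough for the iteration to contract
u = np.zeros(3)
lam0 = 0.0
for lam in (0.25, 0.5, 0.75, 1.0):
    u = step(lam, lam0, u)
    lam0 = lam
assert np.allclose(A1 @ u, f)          # the lam = 1 equation is solved
```

The uniform a priori estimate of Lemma 8 plays the role of the uniform bound on A(λ_0)^{-1} here: it guarantees the admissible step size |λ − λ_0| does not shrink along the march, so finitely many steps suffice.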
Corollary 2 (cf. Corollary 5.11 in [2]) Assume A, A1(s, p)-A3(s, p). Assume further A1(s, q)-A3(s, q) for q ≥ 2, and suppose that |u_0|_{s+1−2/p,p} + |u_0|_{s+1−2/q,q} < ∞ P-a.s. Then the H^s_p-solution u of the equation is also an H^s_q-solution of the equation. Moreover, for each T > 0, there is a constant C such that the stated estimate holds for each stopping time τ ≤ T. Proof We follow the lines of the proof of Theorem 2 by introducing the parameter λ ∈ [0, 1] and considering equation (4.12). We can assume that u_0 = 0. The statement holds true for λ = 1 by Lemma 5.11 in [2] applied to each component of u. If it is true for λ_0, then (4.13) defines a sequence u^k of H^s_p-valued continuous processes that are H^s_q-valued and continuous as well, P-a.s. for all t.
For each T > 0, there are constants C_l = C(d, l, δ, K, T), l = p, q, such that the corresponding estimates hold for all stopping times τ ≤ T, l = p, q. Fix an arbitrary stopping time τ ≤ T such that the relevant quantity is finite. Therefore, there is a process u, continuous in t and H^s_p ∩ H^s_q-valued, such that u^k → u in both norms, and the statement follows.

Some estimates
where W(t) is a one-dimensional Wiener process, D(u) = ∂[f(u(x))] (= f′(u(x)) ∂u(x)), and f is a scalar Lipschitz function on R^1. Then A3(1, p) would require an estimate which is false in general, even if ∇f is Lipschitz.
On the other hand, the assumptions of the Proposition are satisfied for n = 0. Indeed, the Lipschitz property gives |f(u) − f(v)|_p ≤ C|u − v|_p, where C is the Lipschitz constant of f. Now, since ∂ is a bounded operator from H^s_p into H^{s−1}_p, we have the required estimate (the latter inequality follows from Remark 5.5 in [2]). Thus Assumption A3(0, p) is verified and we are done. Then for each T > 0, there is a constant C such that the stated estimate holds for each stopping time τ ≤ T. Proof Since the assumptions of Theorem 2 are satisfied, there is a unique H^s_2-solution u(t, x) of equation (4.1). Let s = 2m + 1, m = 0, 1, …. Then ũ = Λ^s u is L_2-valued and continuous and satisfies the equation with ũ_0 = Λ^s u_0. On the other hand, by Lemma 7, the corresponding bounds hold with some s_0 < s. By an interpolation theorem, for each ε there is a constant C_ε such that the interpolation estimate holds. Applying Itô's formula, we obtain the identity for |ũ(t)|^p_2, and, by A3(s, 2), for each ε there is a constant C_ε such that the drift terms are controlled. So y(t) = |ũ(t)|^p_2 is a semimartingale; using (4.15)-(4.17), we easily find that for each ε there is a constant C_ε such that the corresponding differential inequality holds. Let τ′ be a stopping time such that sup_{r≤τ′} y(r) is bounded. Fix an arbitrary stopping time τ, and let τ̄ = τ ∧ τ′. Then, by Burkholder's inequality and (4.18), for each ε there is a constant C_ε independent of T such that the maximal estimate holds, and we can choose ε appropriately. Now the desired estimate easily follows. Next we derive similar estimates for |u(t)|^p_{s,q}, q ≥ 2.
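Returning to the verification of A3(0, p) above: the pointwise Lipschitz bound immediately yields the L_p bound, and this can be sanity-checked numerically. In the sketch below, f = tanh serves as an illustrative Lipschitz function with constant C = 1 (an assumption for the example, not the paper's f), and normalized L_p means stand in for the integrals.

```python
import numpy as np

# For a Lipschitz f with constant C, |f(u) - f(v)| <= C |u - v| pointwise,
# hence |f(u) - f(v)|_p <= C |u - v|_p for every p (illustration, not proof).
f = np.tanh                              # Lipschitz with constant C = 1
rng = np.random.default_rng(1)
u = rng.standard_normal(10_000)
v = rng.standard_normal(10_000)
for p in (2.0, 4.0, 8.0):
    lhs = np.mean(np.abs(f(u) - f(v)) ** p) ** (1.0 / p)
    rhs = np.mean(np.abs(u - v) ** p) ** (1.0 / p)
    assert lhs <= rhs + 1e-12
```

This monotonicity under the p-th mean is the elementary fact behind |f(u) − f(v)|_p ≤ C|u − v|_p; the failure of A3(1, p) shows that no such pointwise mechanism is available once a derivative falls on f(u).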

R^d denotes the d-dimensional Euclidean space with elements x = (x_1, …, x_d); for x, y ∈ R^d, we write (x, y) = Σ_{i=1}^d x_i y_i and |x| = (x, x)^{1/2}.

For p ∈ [1, ∞) and s ∈ (−∞, ∞), we define the space H^s_p = H^s_p(R^d) as the space of generalized functions u with the finite norm |u|_{s,p} = |Λ^s u|_p, where |·|_p is the L_p-norm. Obviously, H^0_p = L_p. Note that if s ≥ 0 is an integer, the space H^s_p coincides with the Sobolev space W^s_p = W^s_p(R^d). For p ∈ [1, ∞) and s ∈ (−∞, ∞), H^s_p(Y) = H^s_p(R^d, Y) denotes the space of Y-valued functions g on R^d such that the norm ||g||_{s,p} = ||Λ^s g|_Y|_p is finite. We also write L_p(Y); in this case, the norm ||g||_{0,p} is denoted more briefly by ||g||_p. To forcefully distinguish L_p-norms in spaces of Y-valued functions, we write ||·||_p, while in all other cases a norm is denoted by |·|. The duality ⟨·,·⟩_s between H^s_q(R^d) and H^{−s}_p(R^d), where p ≥ 2 and q = p/(p − 1), is defined accordingly.

P-a.s.; then Q(v, t) = Q(v, t, x) is a predictable H^s_p(Y)-valued function, D(v, t) = D(v, t, x) is a predictable H^{s−1}_p-valued function, and ∫_0^t (…) dr < ∞ for all t > 0, P-a.s.