Support theorem for a stochastic Cahn-Hilliard equation

In this paper, we establish a Stroock-Varadhan support theorem for the global mild solution to a d-dimensional (d ≤ 3) stochastic Cahn-Hilliard partial differential equation driven by a space-time white noise.


Introduction and main result
In this paper, we consider the following stochastic Cahn-Hilliard equation:

∂u/∂t + ∆(∆u − f(u)) = σ(u)Ẇ(x, t), (t, x) ∈ ]0, T] × D, (1.1)

supplemented with the homogeneous Neumann boundary conditions ∂u/∂ν = ∂(∆u)/∂ν = 0 on ∂D and the initial condition u(0, ·) = ψ, where ∆ denotes the Laplace operator, the domain D = [0, π]^d (d = 1, 2, 3), and f : R → R is a polynomial of degree 3 with positive leading coefficient (a restriction coming from the material-science background of the equation). Assume that σ : R → R is a bounded and Lipschitzian function, and that Ẇ is a Gaussian space-time white noise on some complete probability space (Ω, ℱ, P) satisfying

E[Ẇ(x, t)Ẇ(y, s)] = δ(t − s)δ(x − y).

Here δ(·) is the Dirac delta function concentrated at the point zero.
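To make the structure of (1.1) concrete, here is a minimal numerical sketch of its deterministic part (σ ≡ 0) in dimension d = 1, with the illustrative choice f(u) = u³ − u; the scheme, mesh sizes and function names are our own illustration, not the paper's. A finite-volume Laplacian with homogeneous Neumann conditions makes the total mass ∫_D u dx exactly conserved at the discrete level, mirroring the conservative character of the Cahn-Hilliard flow.

```python
import numpy as np

def lap_neumann(v, dx):
    """Discrete Laplacian with homogeneous Neumann BCs (finite-volume form)."""
    out = np.empty_like(v)
    out[1:-1] = v[:-2] - 2.0 * v[1:-1] + v[2:]
    out[0] = v[1] - v[0]          # zero-flux left boundary
    out[-1] = v[-2] - v[-1]       # zero-flux right boundary
    return out / dx**2

def ch_step(u, dt, dx, f=lambda u: u**3 - u):
    # deterministic part of (1.1): u_t = Delta f(u) - Delta^2 u
    return u + dt * lap_neumann(f(u) - lap_neumann(u, dx), dx)

N, steps = 32, 500
dx, dt = np.pi / N, 1e-6          # dt below the ~dx^4/8 explicit stability limit
x = (np.arange(N) + 0.5) * dx
u = 0.1 * np.cos(x)               # smooth initial datum psi
mass0 = u.sum() * dx
for _ in range(steps):
    u = ch_step(u, dt, dx)
mass1 = u.sum() * dx
```

Mass is conserved because the rows of the discrete Laplacian sum to zero (the interior stencil telescopes against the one-sided boundary rows), the discrete counterpart of the zero-flux Neumann conditions.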
The (deterministic) Cahn-Hilliard equation (i.e., σ ≡ 0 in (1.1)) has been extensively studied (see, e.g., [2; 3; 4; 5; 10; 15; 18]) as a well-known model of the macro-phase separation that occurs in an isothermal binary fluid when a spatially uniform mixture is quenched below a critical temperature at which it becomes unstable. A stochastic version of the Cahn-Hilliard equation (with σ ≡ 1 in (1.1)) was first proposed by Da Prato and Debussche [8], where the existence, uniqueness and regularity of the global mild solution were explored. Cardon-Weber [6] considered this type of stochastic equation for a general σ; the equation is then equivalent to the following mild form:

u(t, x) = ∫_D G_t(x, y)ψ(y) dy + ∫_0^t ∫_D ∆G_{t−s}(x, y) f(u(s, y)) dy ds + ∫_0^t ∫_D G_{t−s}(x, y) σ(u(s, y)) W(dy, ds), (1.2)

where G_t(·, ∗) denotes the Green kernel corresponding to the operator ∂/∂t + ∆² with the homogeneous Neumann boundary condition as in (1.1).

Since Stroock and Varadhan [17] established their famous support theorem for diffusion processes, there have been many works on this issue; for example, a variety of support theorems for 1-dimensional second-order parabolic and hyperbolic stochastic partial differential equations (abbr. SPDEs) have been discussed in the literature (see, e.g., [1; 7; 13; 14]). Millet and Sanz-Solé [13] characterized the support of the law of the solution to a class of hyperbolic SPDEs, simplifying the proof in [17]. In Bally et al. [1], the authors proved a support theorem for a semi-linear parabolic SPDE. Moreover, a support result for a generalized Burgers SPDE (containing a quadratic term) was established in Cardon-Weber and Millet [7].

Here we aim to establish a support theorem for the law of the solution to Equation (1.1) in C([0, T]; L^p(D)) for p ≥ 4. The main strategy of this paper is an approximation procedure based on a space-time polygonal interpolation of the white noise; in particular, we adopt a localization argument, which was used in [7] to obtain a support theorem for a Burgers-type equation.
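Since all the estimates below are expressed through the Green kernel G_t of ∂/∂t + ∆², it may help to recall its spectral form: with the Neumann cosine eigenfunctions e_k of −∆ on [0, π] (we take d = 1 for simplicity), G_t(x, y) = Σ_k e^{−k⁴ t} e_k(x) e_k(y). The following sketch (grid size and truncation level are our own illustration, not the paper's) checks numerically that the truncated kernel integrates to 1 in y and satisfies the semigroup property.

```python
import numpy as np

M, K = 128, 40                        # grid points, truncation level
x = (np.arange(M) + 0.5) * np.pi / M  # midpoint grid on [0, pi]
w = np.pi / M                         # quadrature weight

# Neumann eigenfunctions of -Laplacian on [0, pi]: constant and cosines
E = np.ones((K, M)) / np.sqrt(np.pi)
for k in range(1, K):
    E[k] = np.cos(k * x) * np.sqrt(2.0 / np.pi)
lam = np.arange(K) ** 4               # eigenvalues of Delta^2 are k^4

def G(t):
    """Truncated Green kernel of d/dt + Delta^2: sum_k e^{-k^4 t} e_k(x) e_k(y)."""
    return (E.T * np.exp(-lam * t)) @ E

t, s = 0.05, 0.03
mass = G(t) @ np.ones(M) * w          # integral of G_t(x, .) over D
semi = (G(t) * w) @ G(s)              # discrete composition of G_t and G_s
```

The midpoint grid makes the cosine family exactly orthonormal for the discrete inner product, so the composition G_t G_s reproduces G_{t+s} to machine precision.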
However, here we need more technical estimates concerning the high-order Green kernel G_t(·, ∗), which sharpen the estimates in [6] (see the Appendix).
In what follows, we introduce the main result of this paper. To this end, we define the Cameron-Martin space

ℋ := { h : h(t, x) = ∫_0^t ∫_{[0,x]} ḣ(s, y) dy ds, ḣ ∈ L²([0, T] × D) }.

Let ℋ_b represent the subset of ℋ consisting of those h ∈ ℋ whose first-order derivative ḣ is bounded. For h ∈ ℋ, consider the following skeleton equation:

S(h)(t, x) = ∫_D G_t(x, y)ψ(y) dy + ∫_0^t ∫_D ∆G_{t−s}(x, y) f(S(h)(s, y)) dy ds + ∫_0^t ∫_D G_{t−s}(x, y) σ(S(h)(s, y)) ḣ(s, y) dy ds. (1.3)
Now we are in a position to state the main result of this paper.
The rest of this paper is organized as follows. In Section 2, we give a difference approximation to the (d + 1)-dimensional space-time white noise Ẇ(x, t) and study some concrete properties of the approximating noises. In Section 3, we introduce a localization framework as in [7] and reduce the proof of the support theorem to checking the conditions (C1) and (C2) (see Section 3). Sections 4 and 5 are devoted to checking the validity of the conditions (C1) and (C2), respectively. In Section 6, we prove the continuity of the solution S(h) to the skeleton equation (1.3) in ℋ_b and finally complete the proof of Theorem 1.1.

Difference approximation to white noise
In this section, we give a difference approximation to the (d + 1)-dimensional space-time white noise Ẇ, namely a space-time polygonal interpolation of Ẇ.
where |□_{j,k}| = Tπ^d (n^d 2^n)^{-1} is the volume of the cell □_{j,k} of the partition, for each j = 0, 1, . . . , 2^n − 1 and k ∈ I_n^d. Next we suppose that:

(H3) the mappings F, H, K : R → R are bounded and globally Lipschitzian, and H ∈ C³(R) with bounded derivatives up to order three.
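The construction can be illustrated on a simplified uniform space-time mesh (this is our own 1-d toy discretization, not the paper's partition): draw one Gaussian increment W(cell) ~ N(0, |cell|) per cell and define the approximating noise as the piecewise-constant density W(cell)/|cell|, whose per-cell variance is 1/|cell|.

```python
import numpy as np

rng = np.random.default_rng(0)
T, L = 1.0, np.pi
nt, nx = 64, 64
dt, dx = T / nt, L / nx
vol = dt * dx                      # volume of one space-time cell

# independent increments W(cell) ~ N(0, vol), one per cell
dW = rng.normal(0.0, np.sqrt(vol), size=(nt, nx))

# piecewise-constant approximating noise: density of the increment
W_dot_n = dW / vol                 # each cell value ~ N(0, 1/vol)

# sanity checks: vol * Var(W_dot_n) should be close to 1, and integrating
# W_dot_n over the whole domain recovers the total increment
var_est = vol * W_dot_n.var()
total = W_dot_n.sum() * vol
```

As the mesh is refined, vol → 0 and the per-cell variance 1/vol blows up, which is exactly why the moment and exponential bounds of Lemmas 2.1-2.2 below are needed to control Ẇ_n uniformly.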
We now consider the following equations for h ∈ ℋ_b and for each n ∈ N. For α_n(t, x) and β_n(t, x), by virtue of (A.4) in Lemma A.1, we claim the following bounds. Indeed, using (A.4), the first bound holds for each t ∈ [0, T], while the second follows from the equality (A.19).
In the following, let (ℱ_t)_{0≤t≤T} be the natural filtration generated by W. Then for every t ∈ [0, T] and each fixed n ∈ N, the field (Ẇ_n(x, t))_{x∈D} given by (2.1) is ℱ_t-adapted. More precisely, it is ℱ_{t_n}-adapted, while the corresponding future increment is independent of the information ℱ_{t_n}.
Lemma 2.1. For each fixed n ∈ N and p ≥ 1, we have the following moment bound.

Proof. By virtue of the definition (2.1), note that for each j = 1, . . . , 2^n − 1 and k ∈ I_n^d the corresponding increments are Gaussian. For any random variable Z ∼ N(0, σ²), it holds that

E|Z|^p = σ^p 2^{p/2} Γ((p + 1)/2) / √π,

where Γ denotes the Gamma function. This yields the desired estimates, which proves the lemma.
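The Gaussian absolute-moment identity used in the proof is classical and can be checked numerically against the familiar special cases (illustration only; the function name is ours):

```python
import math

def abs_moment(p, sigma):
    """E|Z|^p for Z ~ N(0, sigma^2): sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)."""
    return sigma**p * 2 ** (p / 2) * math.gamma((p + 1) / 2) / math.sqrt(math.pi)
```

For p = 2 this returns σ², for p = 4 it returns 3σ⁴, and for p = 1 it returns σ√(2/π), matching the standard Gaussian moments.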
Let n ∈ N be fixed. For α > 0 and t ∈ ]0, T], we now define an event Ω̄_{n,t}^α as follows. Note that E exp(|Z|²/4) is finite (equal to √2 for a standard normal Z). Then (2.7) further yields the desired bound. Thus the proof of the lemma is complete.

Localization framework
In this section, we adopt a localization method used in [7] to deal with Equation (1.1). In addition, we prove a key proposition which is useful in the proof of Theorem 1.1. We then give a sketch of the proof of conclusion (ii) in Proposition 3.1; a similar argument can be used to prove part (i).
Introduce an auxiliary ℱ_{t_n}-adapted process as follows. Recall the localization argument adopted in [7]. For γ ∈ (0, 1) and p ≥ 4, define the localizing sets, where ‖·‖_p denotes the norm of L^p(D) and, for δ > 0, the corresponding modulus is used. Then for t ∈ ]0, T], the required inclusion holds; in fact, it follows from the inequality |y| ≤ |x − y| + |x|. Recall the event Ω̄_{n,t}^α defined by (2.6) in Section 2 and that α > 2‖V(s, ·)‖_p + sup …. Then for q ≥ 1, the corresponding estimate holds. However, Lemma 2.2 and Lemma 3.1 (the latter will be proved below) yield the required bounds. Therefore, by Lemma A.1 in [7] and (3.6), in order to prove the claim it suffices to check that there exist q ≥ p and θ > qᾱ (we have set γ = ᾱ, where ᾱ is the exponent presented in Proposition 3.1) such that the moment estimate below holds. Here the event Ā_n^M(t) is defined accordingly, and satisfies the order relation:

Proof. Note that for
Therefore, it remains to prove the following. Define, for (t, x) ∈ T,
The following lemma tells us what needs to be checked. As for Γ_n^3(t, x), using (A.13) with κ = 1, and hence from (4.2), it follows that the required bound holds. Note that the following equivalence relations hold. Then the desired result follows from Gronwall's lemma.
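Gronwall's lemma, used here in its integral form, has a discrete analogue that is easy to verify numerically (an illustrative sketch under our own notation, not the paper's statement): if a_k ≤ c + Lh Σ_{j<k} a_j, then a_k ≤ c(1 + Lh)^k ≤ c e^{Lhk}.

```python
def gronwall_bound(c, L, h, k):
    """Discrete Gronwall: a_k <= c + L*h*sum_{j<k} a_j implies a_k <= c*(1 + L*h)**k."""
    return c * (1.0 + L * h) ** k

# worst case: the recursion that attains the hypothesis with equality
c, L, h, N = 1.0, 2.0, 0.01, 100
a = [c]
for k in range(1, N + 1):
    a.append(c + L * h * sum(a))   # a_k = c + L*h*(a_0 + ... + a_{k-1})
```

In the equality case the recursion gives a_k = (1 + Lh) a_{k-1}, so the bound c(1 + Lh)^k is attained exactly, which shows the discrete lemma is sharp.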
Recall the ℱ_{t_n}-adapted process X_n^-(t, x) defined by (3.3). Then we have
On the other hand, using (A.4) and the boundedness of F, we get the following. Further, the Hölder inequality, Lemma 2.1 and the boundedness of H jointly imply the next estimate. Note that f is a polynomial of degree 3; then, by virtue of (A.13) with κ = 1, and thanks to (4.4), we conclude the last bound. Thus the estimate (4.3) follows from (4.5)-(4.8).

Remark 4.1. We can easily check that there exists a constant C_M such that (4.9) holds. In fact, for the proof of (4.9), the only estimate that needs to be checked is the corresponding one on T_n^4, from which (4.9) follows.
Next we prove a useful lemma, which will be used frequently later.
follows from the fact that F(X_n^-(r, z)) is ℱ_{s_n}-measurable when r ≤ s. This proves the lemma. The following lemma shows the local Hölder continuity of the ℱ_t-adapted process X_n(t, x) defined by (2.2). Recall the assumption (H2), in which the initial function ψ is Hölder continuous with some exponent in ]0, 1].
We first estimate the term J^{1,1}, where λ_n^k(V)(t, s, x, y) is defined by (4.12). By virtue of the Burkholder inequality, (A.16) with κ = 2/q − 2/q + 1 = 1, and Lemma 4.2, we obtain (4.32). Next we turn to the term J^{1,2}. Applying the discrete Burkholder inequality and Jensen's inequality, we conclude the bound on E|J^{1,2}|. According to a proof similar to that of (4.31), for V(r, z) defined by (4.33), one gets the corresponding estimate. Also, using Lemma 4.3, we obtain the following.
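For the reader's convenience, the Burkholder-Davis-Gundy inequality invoked repeatedly in this section is, in its standard form for a continuous martingale (M_t)_{0≤t≤T} and p ≥ 2 (quoted as a reminder, not as the paper's statement):

```latex
\mathbb{E}\Big[\sup_{0\le t\le T}|M_t|^{p}\Big]\le C_p\,\mathbb{E}\Big[\langle M\rangle_T^{p/2}\Big],
\qquad\text{hence}\qquad
\mathbb{E}\Big|\int_0^t\!\!\int_D \varphi(s,y)\,W(dy,ds)\Big|^{p}
\le C_p\,\mathbb{E}\Big(\int_0^t\!\!\int_D \varphi^2(s,y)\,dy\,ds\Big)^{p/2}.
```

The second form, for stochastic integrals against the space-time white noise, is the one applied to the terms J above.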

E[1_{Ā_n^M} ...]. This implies the corresponding bound on J. From the above estimates, it follows that there exist ᾱ ∈ ]0, min{...}[ and β̄ such that (4.39) holds. Now we turn to the term J_3^2(t, s, x, y). The procedure is similar to that for J_3^1(t, s, x, y): replace F(X_n(r, z)) by H(X_n(r, z)), F(X_n^-(r, z)) by H(X_n^-(r, z)), W(dz, dr) by W_n(dz, dr), and α_n(r, z, X_n(r, z)) by β_n(r, z, X_n(r, z)), respectively. Then there exist ᾱ, β̄ as presented in (4.39) such that E[1_{Ā_n^M} ...] satisfies the analogous bound. As for the term J_3^3(t, s, x, y), we have E[1_{Ā_n^M} ...] with integrand [η(t, s, x, y)](r, z) Ḣ(X_n^-(r, z)) Ẇ_n(z, r). By virtue of (A.11) and (A.12) with κ = 1, and since X_n^-(u, v) = G_{u−u_n}(v, X_n(u_n, ·)), the required bound is obvious; by virtue of (4.7), the same holds for the next term. As for J_3^5(t, s, x, y), we use (A.11) and (A.12) with κ := 1. When d = 3, we can obtain a more precise estimate than in the cases d = 1, 2, which will be given in Lemma 6.1 of Section 6. Thus we complete the proof of the lemma.

The proof of (C1)
The condition (C1) presented in Section 3 is verified in this section. By Lemma 4.1, to check (C1) we only need to prove that the right-hand side of (4.1) tends to 0 as n → ∞. Note that for each fixed n ∈ N, the following holds. Let π_n be the orthogonal projection onto the above basis and, for any mapping g : R → R, define the associated operator. Then for each t ∈ [0, T] and every ℱ_t-predictable process (ψ(t, x); x ∈ D)_{0≤t≤T}, the identity below holds. Recall Λ_n(t, x) defined by (3.2); its decomposition involves the terms

[H(X(s, y)) − H(X_n(s, y))] W(dy, ds) and [α_n(s, y)F(X_n(s, y)) + β_n(s, y)H(X_n(s, y))] dy ds.
We begin with the estimation of the term Λ̄_n^1(t, x). Note the following decomposition. Then, by the Hölder inequality and the Burkholder inequality, and noting that π_n is an orthogonal projection of L²([0, t] × D), we have the bound below for q ≥ p. Taking the boundedness of the mapping H into account, it follows from (A.8) and (A.9) in Lemma A.2 that the corresponding estimate holds. On the other hand, applying (A.16) with κ = 2/q − 2/q + 1 = 1, the Hölder inequality, Lemma 4.2 and Lemma 6.1 (in Section 6), we conclude the next bound. As for the term Λ̄_n^{1,2}, the B-D-G inequality yields the estimate below for q ≥ p. By the boundedness of H and Dini's theorem, we have for t ∈ [0, T] the stated convergence, which follows from the fact that, as n → ∞, the convergence holds for all (t, x) ∈ T (a compact set in R^{d+1}). Because H is Lipschitz continuous, we use the Hölder inequality; then, from Lemma 4.2 and Lemma 6.1 (in Section 6), the required bound follows. Finally, we turn to the estimation of the term Λ̄_n^2(t, x). In light of the B-D-G inequality and (A.16), the last estimate holds. Thus we prove that the condition (C1) holds.
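The two properties of the orthogonal projection π_n used above, idempotence and the L² contraction, can be illustrated with a discrete L² projection onto finitely many Neumann cosine modes on [0, π] (a toy example of our own; the basis size and grid are illustrative):

```python
import numpy as np

M, n_basis = 200, 8
x = (np.arange(M) + 0.5) * np.pi / M      # midpoint grid on [0, pi]
w = np.pi / M                             # quadrature weight

# Neumann cosine basis, orthonormal for the discrete L^2 inner product
B = np.ones((n_basis, M)) / np.sqrt(np.pi)
for k in range(1, n_basis):
    B[k] = np.cos(k * x) * np.sqrt(2.0 / np.pi)

def pi_n(g):
    """L^2 projection onto span{e_0, ..., e_{n_basis-1}}: sum_k <g, e_k> e_k."""
    coeffs = (B @ g) * w
    return coeffs @ B

g = np.sign(x - np.pi / 2)                # a discontinuous test function
Pg = pi_n(g)
```

Idempotence (π_n ∘ π_n = π_n) and the Bessel inequality ‖π_n g‖₂ ≤ ‖g‖₂ are exactly what allow π_n to be discarded inside the moment estimates without loss.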

The proof of (C2)
The aim of this section is to check the validity of the condition (C2) presented in Section 3. Note that for all s < t ∈ [0, T], we have, for q ≥ p > 3d/2, the estimate below. Observing the forms of Equations (2.2) and (2.3), X is a particular case of X_n. Hence, in order to prove that (C2) holds, it suffices to check that for all 0 ≤ s < t ≤ T there exist q ≥ p and θ > qᾱ (ᾱ is presented in Theorem 1.1) such that for each n ∈ N the moment bound below holds. From (2.2), it follows that for (s, y), (t, x) ∈ T,

X_n(t, x) − X_n(s, y) = ... [η(t, s, x, y)](r, z) [F(X_n(r, z)) W(dz, dr) + H(X_n(r, z)) W_n(dz, dr)] + [η(t, s, x, y)](r, z) [K(X_n(r, z)) ḣ(r, z) − α_n(r, z)(FḢ)(X_n(r, z)) + β_n(r, z)(HḢ)(X_n(r, z))] dz dr + ...

To prove (C2), we sharpen the estimates in Lemma 4.5 by the following lemma, where ξ is the same as in (i).
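The moment bounds on increments required for (C2) are of the type appearing in the Kolmogorov-Centsov continuity criterion for random fields indexed by a subset of R^{d+1} (standard statement, quoted as a reminder; the paper's anisotropic space-time scaling refines it): if for some q > 0 and θ > d + 1,

```latex
\mathbb{E}\,\big|X(t,x)-X(s,y)\big|^{q}\;\le\; C\,\big(|t-s|+|x-y|\big)^{\theta},
```

then X admits a continuous modification which is Hölder continuous in (t, x) of any order less than (θ − (d + 1))/q. This is why the exponent condition θ > qᾱ above translates into ᾱ-Hölder regularity.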

Proof.
Recall the entire proof of Lemma 4.5. We only need to re-estimate J_3^1(t, s, x, y), J_3^2(t, s, x, y) and J_3^5(t, s, x, y) given there. To improve the estimate of J_3^1(t, s, x, y), it suffices to estimate the term J_3^{1,4}(t, s, x, y). Note that, for (s, y), (t, x) ∈ T, using an argument similar to that of (4.42), we get for α, β ∈ (0, 1),
Then, by virtue of (A.13) with κ = 1/q − 3/q + 1 = 1 − 2/q ∈ [0, 1], the bound follows. On the other hand, under (H1), the Schwarz inequality implies the estimate below, where the Hölder norm ‖·‖_{α,q} := ‖·‖_{α,q,∞} is defined in Section 3. Thus we complete the proof of the lemma. Now we are in a position to prove Theorem 1.1.
Proof of Theorem 1.1. We adopt the method used in Theorem 2.1 of [1] and only provide a sketch of the proof. For parsimony, we prove only part (b) of Theorem 1.1, since the proof of part (a) is similar. Given h ∈ ℋ_b, let X_n^h be the solution to Equation (2.2) with h = F = K = 0 and H = σ. Define H_n : Ω → ℋ_b by

Ḣ_n(s, y) = Ẇ_n(y, s) − σ(X_n(s, y))β_n(s, y).