Points of positive density for smooth functionals

In this paper we show that, under some regularity conditions, the set of points where the density of a Wiener functional is strictly positive is an open connected set.


Introduction
The stochastic calculus of variations has been applied to derive properties of the support of a given Wiener functional. In [3] Fang proved that the support of a smooth Wiener functional is a connected set, using the techniques of quasi-sure analysis and the Ornstein-Uhlenbeck process. A simple proof of the connectivity of the support for random vectors whose components belong to $D^{1,p}$ with $p > 1$ is given in [6, Proposition 4.1.1], using the Wiener chaos expansion.
An interesting question is to study the properties of the set where the density $p(x)$ of an $m$-dimensional Wiener functional $F$ is positive. In the one-dimensional case it is known that the density is always strictly positive in the interior of the support (which is a closed interval), provided the random variable belongs to $D^{1,p}$ with $p > 2$ and possesses a locally Lipschitz density ([5]). In dimensions higher than one this result is not true: in [4] the authors present a simple example of a two-dimensional nondegenerate smooth Wiener functional whose density vanishes in the interior of the support. As a consequence, the set $\Gamma = \{x : p(x) > 0\}$ is, in general, strictly included in the interior of the support of the law of the functional.
In [4], using the approach introduced by Fang in [3] to handle the connectivity of the support, Hirsch and Song proved that the open set $\Gamma$ is connected. The aim of this paper is to prove the same result using the ideas introduced in the proof of the one-dimensional case, under weaker regularity assumptions on the Wiener functional.

Preliminaries
We will first introduce the basic notations and present some preliminary results that will be needed later.
Suppose that $H$ is a real separable Hilbert space whose norm and inner product are denoted by $\|\cdot\|_H$ and $\langle \cdot, \cdot \rangle_H$, respectively. We associate with $H$ a Gaussian and centered family of random variables $W = \{W(h),\ h \in H\}$, defined on a complete probability space $(\Omega, \mathcal{F}, P)$, such that

$$E\big[W(h)\,W(g)\big] = \langle h, g \rangle_H$$

for all $h, g \in H$.
Let $S$ denote the class of smooth random variables of the form

$$F = f\big(W(h_1), \ldots, W(h_n)\big), \qquad (2.1)$$

where $h_1, \ldots, h_n \in H$ and $f$ belongs to $C^\infty_p(\mathbb{R}^n)$ (i.e., $f$ and all of its partial derivatives have polynomial growth order). If $F$ has the form (2.1) we define its derivative $DF$ as the $H$-valued random variable given by

$$DF = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\big(W(h_1), \ldots, W(h_n)\big)\, h_i.$$

For any real number $p \geq 1$ and any positive integer $k$ we will denote by $D^{k,p}$ the completion of $S$ with respect to the norm

$$\|F\|_{k,p}^p = E\big[|F|^p\big] + \sum_{j=1}^{k} E\big[\|D^j F\|_{H^{\otimes j}}^p\big],$$

where $D^j$ denotes the $j$th iteration of the operator $D$.
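For concreteness, the definitions above can be checked on a simple quadratic functional (an illustrative example, not taken from the references; we take $\|h\|_H = 1$, so that $W(h) \sim N(0,1)$):

```latex
F = W(h)^2, \qquad DF = 2\,W(h)\,h,
\\[4pt]
\|F\|_{1,2}^2 \;=\; E\big[W(h)^4\big] + E\big[\|2\,W(h)\,h\|_H^2\big]
            \;=\; 3 + 4 \;=\; 7 .
```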
For any separable real Hilbert space $V$ the spaces $D^{k,p}(V)$ of $V$-valued functionals are introduced in a similar way. We will denote by $\delta$ the adjoint of the operator $D$, which is continuous from $D^{1,2}(H)$ into $L^2(\Omega)$. The following results are proved in [6, Lemma 1.4.2, Lemma 2.4.2].
Lemma 2.1 Suppose that a set $A \in \mathcal{F}$ verifies $\mathbf{1}_A \in D^{1,1}$. Then $P(A)$ is zero or one.
Lemma 2.2 Let $\{F_n, n \geq 1\} \subset D^{1,p}$, $p > 1$, be a sequence of random variables converging to $F$ in $L^p$. Suppose that $\sup_n \|DF_n\|_{L^p(\Omega; H)} < \infty$. Then $F \in D^{1,p}$, and there exists a subsequence $\{F_{n(i)}, i \geq 1\}$ such that $DF_{n(i)}$ converges to $DF$ in the weak topology of $L^p(\Omega; H)$.
The spaces $D^{1,p}$ are stable under composition with Lipschitz functions. More precisely, we have the following result ([5, Proposition 1.2.3]):

Proposition 2.1 Let $\varphi : \mathbb{R}^m \to \mathbb{R}$ be a function such that $|\varphi(x) - \varphi(y)| \leq K |x - y|$ for all $x, y \in \mathbb{R}^m$. Suppose that $G = (G^1, \ldots, G^m)$ is a random vector whose components belong to the space $D^{1,p}$, $p > 1$. Then $\varphi(G)$ belongs to $D^{1,p}$, and there exists a random vector $S = (S_1, S_2, \ldots, S_m)$ bounded by $K$ such that

$$D\big(\varphi(G)\big) = \sum_{i=1}^{m} S_i \, DG^i.$$
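A minimal instance of this chain rule, with a Lipschitz but non-differentiable function (an illustrative sketch; here $K = 1$, and $P(W(h) = 0) = 0$ so the sign is defined a.s.):

```latex
\varphi(x) = |x|, \qquad G = W(h), \quad h \neq 0,
\\[4pt]
\varphi(G) = |W(h)| \in D^{1,p}, \qquad
D\,\varphi(G) = \operatorname{sign}\big(W(h)\big)\, h,
\\[4pt]
S_1 = \operatorname{sign}\big(W(h)\big), \qquad |S_1| \leq 1 = K .
```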

Connectivity of the set of positive density
Let us introduce the following condition on a random vector $F$:

(H): $F = (F^1, F^2, \ldots, F^m)$ possesses a $C^2$ density $p$ with respect to the Lebesgue measure such that

$$\int_{\mathbb{R}^m} \sup_{\{y : |y - x| \leq 1\}} |\nabla^2 p(y)|\, dx < \infty.$$

The main result of this section is the following theorem:

Theorem 3.1 Suppose that $F \in D^{1,r}$ for some $r > 2$ and that $F$ satisfies hypothesis (H). Then the open set $\Gamma = \{x : p(x) > 0\}$ is connected.

Proof: Let $A$ be a connected component of $\Gamma$. For each $\varepsilon > 0$, let $f_\varepsilon : \mathbb{R}^m \to \mathbb{R}_+$ be the function defined by

$$f_\varepsilon(x) = \frac{1}{\varepsilon}\, \big( d(x, A^c) \wedge \varepsilon \big).$$

That is, $f_\varepsilon$ vanishes on $A^c$, equals $1$ on $\{x : d(x, A^c) \geq \varepsilon\}$, and grows linearly with the distance to $A^c$ in between. This clearly implies that $f_\varepsilon$ is a Lipschitz function with Lipschitz constant $\frac{1}{\varepsilon}$. Using Proposition 2.1 with $\varphi = f_\varepsilon$, $G = F$ and $p = r$, the functional $\Phi_\varepsilon = f_\varepsilon(F)$ belongs to $D^{1,r}$ and its derivative is given by the formula

$$D\Phi_\varepsilon = \sum_{i=1}^{m} S_i \, DF^i,$$

where the $S_i$ verify $\sum_{i=1}^{m} S_i^2 \leq \frac{1}{\varepsilon^2}$. These random variables vanish almost surely outside the set $\{0 < d(F, A^c) < \varepsilon\}$, because $D\Phi_\varepsilon = 0$ a.s. on the two sets $\{F \in A^c\}$ and $\{d(F, A^c) \geq \varepsilon\}$, due to the local property of the derivative operator.
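To make the truncation concrete, here is a small numerical sketch. Both choices in it are illustrative assumptions: we take $f_\varepsilon(x) = \varepsilon^{-1} \min(d(x, A^c), \varepsilon)$, and $A$ equal to the open unit ball of $\mathbb{R}^2$, for which $d(x, A^c) = \max(0, 1 - |x|)$.

```python
import numpy as np

# Illustrative assumptions: A = open unit ball in R^2, and
# f_eps(x) = (1/eps) * min(d(x, A^c), eps).
# We check numerically that f_eps is (1/eps)-Lipschitz.
eps = 0.3

def dist_to_Ac(x):
    # For A the open unit ball, d(x, A^c) = max(0, 1 - |x|).
    return max(0.0, 1.0 - float(np.linalg.norm(x)))

def f_eps(x):
    # Vanishes on A^c, equals 1 where d(x, A^c) >= eps, linear in between.
    return min(dist_to_Ac(x), eps) / eps

rng = np.random.default_rng(0)
worst_ratio = 0.0
for _ in range(20_000):
    x, y = rng.uniform(-2.0, 2.0, size=(2, 2))
    gap = float(np.linalg.norm(x - y))
    if gap > 1e-12:
        worst_ratio = max(worst_ratio, abs(f_eps(x) - f_eps(y)) / gap)

print(worst_ratio)  # stays below the Lipschitz constant 1/eps
```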
Clearly $\Phi_\varepsilon$ converges a.s. and in $L^p$ for each $p \geq 1$ to $\mathbf{1}_A(F)$ as $\varepsilon$ goes to zero. Hence, if we prove that for some $p > 1$

$$\sup_{\varepsilon > 0} \|D\Phi_\varepsilon\|_{L^p(\Omega; H)} < \infty, \qquad (3.3)$$

Lemma 2.2 will imply that $\mathbf{1}_A(F)$ belongs to $D^{1,p}$. But then, according to Lemma 2.1, $P(F \in A)$ is zero or one. The definition of $A$ implies that $P(F \in A) = \int_A p(x)\, dx > 0$, since $p$ is strictly positive on the nonempty open set $A$. Hence $P(F \in A) = 1$. Since this holds for every connected component of $\Gamma$, and distinct components are disjoint, $\Gamma$ has a single component and the proof will be complete.
Let us prove the uniform estimate (3.3) for the derivatives. We have

$$\|D\Phi_\varepsilon\|_H \leq \frac{1}{\varepsilon}\, \|DF\|_H \, \mathbf{1}_{\{0 < d(F, A^c) < \varepsilon\}}.$$

Hölder's inequality implies that for every $1 \leq p \leq r$

$$E\big[\|D\Phi_\varepsilon\|_H^p\big] \leq \frac{1}{\varepsilon^p}\, E\big[\|DF\|_H^r\big]^{p/r}\, P\big(0 < d(F, A^c) < \varepsilon\big)^{1 - p/r}.$$

We can express

$$P\big(0 < d(F, A^c) < \varepsilon\big) = \int_{\{0 < d(x, A^c) < \varepsilon\}} p(x)\, dx.$$

Let $x \in \mathbb{R}^m$ be a point such that $0 < d(x, A^c) < \varepsilon$. The set $A^c$ being closed, we can find a point $\bar{x}$ in $A^c$ such that $d(x, A^c) = d(x, \bar{x})$. The point $\bar{x}$ belongs to the boundary of $A$. This implies $p(\bar{x}) = 0$, which corresponds to a minimum of the function $p$, so $\nabla p(\bar{x}) = 0$. Using the Taylor expansion, we can write

$$p(x) = \frac{1}{2}\, (x - \bar{x})^T\, \nabla^2 p(\xi)\, (x - \bar{x})$$

for some point $\xi$ on the segment joining $\bar{x}$ and $x$. This implies that for $0 < \varepsilon < 1$ one has the bound

$$p(x) \leq \frac{\varepsilon^2}{2}\, \sup_{\{y : |y - x| \leq 1\}} |\nabla^2 p(y)|.$$

Coming back to the estimate and using hypothesis (H), we have

$$E\big[\|D\Phi_\varepsilon\|_H^p\big] \leq C\, \varepsilon^{-p}\, \big(\varepsilon^2\big)^{1 - p/r},$$

for some constant $C$ not depending on $\varepsilon$. It remains to note that, given $r > 2$, the choice $p = \frac{2r}{r+2} > 1$ makes the total exponent of $\varepsilon$ equal to zero, proving the uniform estimate. $\square$
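The bookkeeping of the powers of $\varepsilon$ in the last step can be spelled out as follows:

```latex
\varepsilon^{-p}\,\big(\varepsilon^{2}\big)^{1-\frac{p}{r}}
  \;=\; \varepsilon^{\,2 - p\frac{r+2}{r}},
\qquad
2 - p\,\tfrac{r+2}{r} \;\geq\; 0
  \;\Longleftrightarrow\; p \;\leq\; \tfrac{2r}{r+2},
\qquad
\tfrac{2r}{r+2} > 1 \;\Longleftrightarrow\; r > 2 .
```

In particular $p = \frac{2r}{r+2}$ lies in $(1, r)$ exactly when $r > 2$, which is why the theorem requires $F \in D^{1,r}$ with more than two moments of the derivative.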

Appendix
We give here a sufficient condition for (H). Let $\gamma$ be the Malliavin covariance matrix of $F$:

$$\gamma^{ij} = \langle DF^i, DF^j \rangle_H, \qquad i, j = 1, \ldots, m.$$

Proposition 4.1 Suppose that there exist real numbers $s_1$ and $s_2$, depending on $m$, such that $F$ satisfies $F \in D^{m+2, s_1}(\mathbb{R}^m)$ and $E\big[(\det \gamma)^{-s_2}\big] < \infty$. Then (H) is fulfilled.
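In dimension $m = 1$ the mechanism behind Proposition 4.1 reduces to the classical integration-by-parts representation of the density and of its derivatives (a sketch, using the operator $H_1$ defined in the proof below):

```latex
m = 1: \quad \gamma = \|DF\|_H^2, \qquad
H_1(G) = \delta\big(G\,\gamma^{-1}\,DF\big),
\\[4pt]
p(z) = E\big[\mathbf{1}_{\{F > z\}}\, H_1(1)\big], \qquad
p'(z) = -\,E\big[\mathbf{1}_{\{F > z\}}\, H_1\big(H_1(1)\big)\big].
```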
Proof: We decompose the integral appearing in (H) into $2^m$ integrals, one over each of the $2^m$ quadrants of $\mathbb{R}^m$. We take $Q_n$ as a generic quadrant of $\mathbb{R}^m$ which, after a possible permutation of the coordinates, has its first $k$ coordinates negative and its $m - k$ remaining coordinates positive; its orientation is described by the point $\mathbf{1}_k$ whose first $k$ coordinates equal $-1$ and whose $m - k$ remaining coordinates equal $1$. Then we use a representation of $|\nabla^2 p(z)|$ well fitted to the quadrant, obtained by iterating the integration-by-parts operators $H_k$, defined for any $k = 1, \ldots, m$ and any $G \in \cup_{r > 1} D^{1,r}$ by the formula

$$H_k(G) = \delta\big(G\, T_k\big), \qquad T_k = \sum_{j=1}^{m} (\gamma^{-1})_{kj}\, DF^j.$$

Taking on $\mathbb{R}^m$ the norm defined by $|x| = \sup_i |x_i|$, we then have to evaluate, for all $i, j = 1, \ldots, m$, the integrals over the quadrant of the expectations appearing in this representation. By Fubini's theorem, and then by Hölder's inequality with exponents $s$ and $s'$ satisfying $\frac{1}{s} + \frac{1}{s'} = 1$, these integrals are bounded by moments of iterates of the operators $H_k$. Let us first estimate $\|H_k(G)\|_q$ for any $k = 1, \ldots, m$ and any random variable $G \in \cup_{r > 1} D^{1, 2q}$, for some $q > 1$. By Meyer's inequality we have

$$\|H_k(G)\|_q = \|\delta(G\, T_k)\|_q \leq c_q\, \|G\, T_k\|_{D^{1,q}(H)}.$$

As a consequence, $\|H_1(1)\|_q \leq c_q\, \|T_1\|_{D^{1,q}(H)}$, and by iteration one bounds the iterated operators $H_k(\cdots H_1(1) \cdots)$ in terms of the norms $\|T_k\|_{D^{m+2,\, 2^{m+1} q}(H)}$. This estimate completes the proof of the proposition.
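The one-dimensional representation $p(z) = E[\mathbf{1}_{\{F > z\}} H_1(1)]$ underlying the proof can be checked numerically in the simplest case. As an illustrative assumption we take $F = W(h)$ with $\|h\|_H = 1$, so that $\gamma = 1$, $H_1(1) = \delta(DF) = W(h)$, and the density of $F$ is the standard Gaussian one:

```python
import numpy as np

# Monte Carlo check of p(z) = E[ 1_{F > z} * H_1(1) ] for F = W(h),
# ||h||_H = 1 (illustrative choice: gamma = 1, H_1(1) = delta(DF) = W(h),
# and the density of F is the standard normal density phi).
rng = np.random.default_rng(42)
X = rng.standard_normal(2_000_000)   # samples of F = W(h), which also equals H_1(1)

def p_hat(z):
    # Empirical estimate of E[ 1_{F > z} * H_1(1) ].
    return np.mean((X > z) * X)

def phi(z):
    # Standard normal density, the exact value of p(z).
    return np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi)

for z in (-1.0, 0.0, 1.5):
    print(z, p_hat(z), phi(z))   # the two columns agree up to Monte Carlo error
```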
We could specify the exponents $s_1$ and $s_2$ appearing in the statement of Proposition 4.1. To do this it suffices to estimate $\|T_k\|_{D^{m+2,\, 2^{m+1} q}(H)}$ in terms of Sobolev norms of $F$ and $L^p$ norms of $(\det \gamma)^{-1}$.