Sharp estimates for the convergence of the density of the Euler scheme in small time

In this work, we approximate a diffusion process by its Euler scheme and we study the convergence of the density of the marginal laws. We improve previous estimates, especially for small time.


Introduction
Let us consider a d-dimensional diffusion process (X_s)_{0≤s≤T} and a q-dimensional Brownian motion (W_s)_{0≤s≤T}. X satisfies the following SDE

dX^i_s = b_i(s, X_s)ds + Σ_{j=1}^q σ_{ij}(s, X_s)dW^j_s, X^i_0 = x_i, ∀i ∈ {1, …, d}. (1.1)

We approximate X by its Euler scheme with N (N ≥ 1) time steps, say X^N, defined as follows. We consider the regular grid {t_k = kT/N : k = 0, …, N} of the interval [0, T]. We put X^N_0 = x and, for all i ∈ {1, …, d}, we define

X^{N,i}_{t_{k+1}} = X^{N,i}_{t_k} + b_i(t_k, X^N_{t_k})(t_{k+1} − t_k) + Σ_{j=1}^q σ_{ij}(t_k, X^N_{t_k})(W^j_{t_{k+1}} − W^j_{t_k}).

The continuous Euler scheme is an Itô process verifying

X^{N,i}_s = x_i + ∫_0^s b_i(φ(u), X^N_{φ(u)})du + Σ_{j=1}^q ∫_0^s σ_{ij}(φ(u), X^N_{φ(u)})dW^j_u,

where φ(u) := sup{t_k : t_k ≤ u}. If σ is uniformly elliptic, the Markov process X admits a transition probability density p(0, x; s, y). Concerning X^N (which is not Markovian except at the times (t_k)_k), X^N_s has a probability density p^N(0, x; s, y), for any s > 0. We aim at proving sharp estimates of the difference p(0, x; s, y) − p^N(0, x; s, y).
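To fix ideas, the Euler scheme defined above can be sketched in code. The following is a minimal illustration, not taken from the paper; the function name `euler_scheme` and the example coefficients b and σ are our own choices.

```python
import numpy as np

def euler_scheme(b, sigma, x0, T, N, rng):
    """Euler scheme with N steps on the regular grid t_k = k T / N.

    b(t, x) returns the (d,) drift, sigma(t, x) the (d, q) diffusion matrix;
    returns the (N+1, d) array of values X^N_{t_k}.
    """
    x0 = np.asarray(x0, dtype=float)
    d, q = len(x0), sigma(0.0, x0).shape[1]
    h = T / N
    X = np.empty((N + 1, d))
    X[0] = x0
    for k in range(N):
        t_k = k * h
        # Brownian increment W_{t_{k+1}} - W_{t_k} ~ N(0, h I_q)
        dW = rng.normal(0.0, np.sqrt(h), size=q)
        X[k + 1] = X[k] + b(t_k, X[k]) * h + sigma(t_k, X[k]) @ dW
    return X

rng = np.random.default_rng(0)
b = lambda t, x: -x                    # arbitrary example drift
sigma = lambda t, x: 0.5 * np.eye(2)   # arbitrary (elliptic) example diffusion
path = euler_scheme(b, sigma, np.array([1.0, 0.0]), T=1.0, N=100, rng=rng)
print(path.shape)  # (101, 2)
```

The continuous Euler scheme interpolates this path between grid points using the coefficients frozen at φ(u), as in the integral formula above.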
It is well known (see Bally and Talay (1996), Konakov and Mammen (2002), Guyon (2006)) that this difference is of order 1/N. However, the known upper bounds of this difference are too rough for small values of s. In this work, we provide upper bounds of |p(0, x; s, y) − p^N(0, x; s, y)| that are tight in s (see Theorem 3), so that we can estimate quantities like

E[f(X_T)] − E[f(X^N_T)] (1.3)

(without any regularity assumptions on f) more accurately than before (see Theorem 5).
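As a simple numerical illustration of the first-order convergence of the densities (this is our own toy computation, not part of the paper's argument), consider a one-dimensional Ornstein–Uhlenbeck process dX = −θX dt + σ dW. Both p and p^N are then explicit Gaussian densities, so the difference can be evaluated exactly; all names below are ours.

```python
import math

def normal_pdf(y, m, v):
    return math.exp(-(y - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def exact_density(x, y, T, theta, sigma):
    # X_T ~ N(x e^{-theta T}, sigma^2 (1 - e^{-2 theta T}) / (2 theta))
    m = x * math.exp(-theta * T)
    v = sigma ** 2 * (1 - math.exp(-2 * theta * T)) / (2 * theta)
    return normal_pdf(y, m, v)

def euler_density(x, y, T, theta, sigma, N):
    # The Euler scheme of an OU process is Gaussian, so p^N is explicit:
    # X^N_{k+1} = (1 - theta h) X^N_k + sigma sqrt(h) xi_k
    h = T / N
    a = 1 - theta * h
    m = x * a ** N
    v = sigma ** 2 * h * (1 - a ** (2 * N)) / (1 - a ** 2)
    return normal_pdf(y, m, v)

x, y, T, theta, sigma = 1.0, 0.3, 1.0, 0.8, 0.5
errs = [abs(exact_density(x, y, T, theta, sigma)
            - euler_density(x, y, T, theta, sigma, N)) for N in (10, 20, 40)]
print(errs)  # doubling N roughly halves the density error: order 1/N
```
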
For other applications, see Labart (2007). Unlike previous references, we allow b and σ to be time-dependent and assume that they are only C³ in space. Besides, we use tools from Malliavin calculus.

Background results
The difference p(0, x; s, y) − p^N(0, x; s, y) has been studied a lot. Several results on expansions w.r.t. N can be found in the literature. First, we mention a result from Bally and Talay (1996) (Corollary 2.7). The authors assume

Hypothesis 1. σ is elliptic (with σ only depending on x) and b, σ are C^∞(R^d) functions whose derivatives of any order greater than or equal to 1 are bounded.
By using Malliavin calculus, they show an expansion of the form

p(0, x; T, y) − p^N(0, x; T, y) = (1/N) π_T(x, y) + (1/N²) R^N_T(x, y), with |π_T(x, y)| + sup_N |R^N_T(x, y)| ≤ K(T) T^{−q} e^{−c|x−y|²/T}, (1.4)

where c > 0, q > 0 and K(•) is a non-decreasing function. We point out that q is unknown, which does not enable one to deduce the behaviour of p − p^N when T → 0.
Besides that, Konakov and Mammen (2002) have proposed an analytical approach based on the so-called parametrix method to bound p(0, x; 1, y) − p^N(0, x; 1, y) from above. They assume

Hypothesis 2. σ is elliptic and b, σ are C^∞(R^d) functions whose derivatives of any order are bounded.
For each pair (x, y), they get an expansion of arbitrary order j of p^N(0, x; 1, y), with coefficients depending on (x, y) but not on N:

p(0, x; 1, y) − p^N(0, x; 1, y) = Σ_{i=1}^j N^{−i} π_i(x, y) + O(N^{−(j+1)}). (1.5)

The coefficients have Gaussian tails: for each i, they find constants C_i ≥ 0 and c_i > 0 s.t. |π_i(x, y)| ≤ C_i e^{−c_i|x−y|²}. To do so, they use upper bounds for the partial derivatives of p (coming from Friedman (1964)) and prove analogous results on the derivatives of p^N. Strong though this result may be, nothing is said when 1 is replaced by t, for t → 0. That is why we now present the work of Guyon (2006), which improves (1.4) and (1.5) in the following way.
Definition 1. Let G_l(R^d), l ∈ Z, be the set of all measurable functions π : R^d × (0, 1] × R^d → R s.t.
• for all t ∈ (0, 1], π(•; t, •) is infinitely differentiable,
• for all α, β ∈ N^d, there exist two constants c_1 ≥ 0 and c_2 > 0 s.t. for all t ∈ (0, 1] and all x, y ∈ R^d,

|∂^α_x ∂^β_y π(x; t, y)| ≤ c_1 t^{−(l+|α|+|β|)/2} e^{−c_2|x−y|²/t}.

Under Hypothesis 2 and for T = 1, the author has proved expansions of the form

p^N(0, x; t, y) = p(0, x; t, y) + Σ_{i=1}^j N^{−i} π'_{N,i}(x; t, y) + N^{−(j+1)} π''_{N,j}(x; t, y),

where (π'_{N,i}, N ≥ 1) and (π''_{N,i}, N ≥ 1) are bounded families in G_{2i}(R^d). These expansions can be seen as improvements of (1.4) and (1.5): they also allow infinitely many differentiations w.r.t. x and y, and they make precise the way the coefficients explode when t tends to 0.
As a consequence (see Guyon (2006, Corollary 22)), one gets

|p(0, x; s, y) − p^N(0, x; s, y)| ≤ (c_1 / (N s^{(d+2)/2})) e^{−c_2|x−y|²/s}, (1.8)

for two positive constants c_1 and c_2, and for any x, y and s ≤ 1. This result should be compared with the one of Theorem 3 (when T = 1), in which the upper bound is tighter (s appears with a smaller negative power).

Main Results
Before stating the main result of the paper, we introduce the following notation.

Definition 2. C^{k,l}_b denotes the set of continuously differentiable bounded functions φ : [0, T] × R^d → R with uniformly bounded derivatives w.r.t. t (resp. w.r.t. x) up to order k (resp. up to order l).
The main result of the paper, whose proof is postponed to Section 4, is established under the following hypothesis.

Hypothesis 3. σ is uniformly elliptic, and b and σ belong to C^{1,3}_b.

Theorem 3. Assume Hypothesis 3. Then, there exist a constant c > 0 and a non-decreasing function K, depending on the dimension d and on the upper bounds of σ, b and their derivatives, s.t. for all s ∈ (0, T] and all x, y ∈ R^d,

|p(0, x; s, y) − p^N(0, x; s, y)| ≤ K(T) (T / (N s^{(d+1)/2})) e^{−c|x−y|²/s}.

Corollary 4. Assume Hypothesis 3. From the last inequality and Aronson's inequality (A.1), we deduce

|p(0, x; T, x) − p^N(0, x; T, x)| ≤ K(T) (√T / N) p(0, x; T, x).

This inequality yields p(0, x; T, x) ∼ p^N(0, x; T, x) when T → 0.
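The small-time equivalence of Corollary 4 can be observed numerically on a one-dimensional Ornstein–Uhlenbeck process, where both densities are explicit Gaussians; this is our own illustration with arbitrarily chosen parameters, not a computation from the paper.

```python
import math

def ou_densities_at_diag(x, T, theta, sigma, N):
    # Exact and Euler-scheme densities of a 1-d OU process, evaluated at y = x.
    m = x * math.exp(-theta * T)
    v = sigma ** 2 * (1 - math.exp(-2 * theta * T)) / (2 * theta)
    h = T / N
    a = 1 - theta * h
    mN = x * a ** N
    vN = sigma ** 2 * h * (1 - a ** (2 * N)) / (1 - a ** 2)
    pdf = lambda mm, vv: math.exp(-(x - mm) ** 2 / (2 * vv)) / math.sqrt(2 * math.pi * vv)
    return pdf(m, v), pdf(mN, vN)

rel_errs = []
for T in (1.0, 0.25, 0.0625):
    p, pN = ou_densities_at_diag(1.0, T, 0.8, 0.5, 10)
    rel_errs.append(abs(p - pN) / p)
print(rel_errs)  # decreasing: the relative error on the diagonal vanishes as T -> 0
```

With N fixed, the relative error shrinks as T decreases, consistent with p(0, x; T, x) ∼ p^N(0, x; T, x).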
Theorem 3 enables us to bound quantities like the one in (1.3) in the following way.

Theorem 5. Assume Hypothesis 3. For any bounded measurable function f,

|E[f(X_T)] − E[f(X^N_T)]| ≤ K(T) ‖f‖_∞ √T / N.

Had we used the results stated by Guyon (2006) (and more precisely the one recalled in (1.8)), we would have obtained a bound of order ‖f‖_∞ / (N T). Intuitively, this result is not optimal: the right-hand side does not tend to 0 when T goes to 0, while it should. Writing E[f(X^N_T)] − E[f(X_T)] = ∫_{R^d} f(y)(p^N(0, x; T, y) − p(0, x; T, y))dy and using Theorem 3 yield the first result.
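A toy illustration of this type of estimate (ours, not from the paper): for an Ornstein–Uhlenbeck process and the bounded but discontinuous function f = 1_{(−∞, K]}, both expectations are exact normal probabilities, so the weak error can be computed without Monte Carlo; it decays as N grows even though f has no regularity.

```python
import math

def Phi(z):  # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def weak_error_indicator(x, K, T, theta, sigma, N):
    # f = 1_{(-inf, K]}: E f(X_T) and E f(X^N_T) are exact normal probabilities
    # because both X_T and its Euler scheme are Gaussian for the OU process.
    m = x * math.exp(-theta * T)
    v = sigma ** 2 * (1 - math.exp(-2 * theta * T)) / (2 * theta)
    h = T / N
    a = 1 - theta * h
    mN = x * a ** N
    vN = sigma ** 2 * h * (1 - a ** (2 * N)) / (1 - a ** 2)
    return abs(Phi((K - mN) / math.sqrt(vN)) - Phi((K - m) / math.sqrt(v)))

errs = [weak_error_indicator(1.0, 0.5, 1.0, 0.8, 0.5, N) for N in (10, 40, 160)]
print(errs)  # decays roughly like 1/N despite f being discontinuous
```
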
Concerning the second result, we split the difference into two terms, using an elementary inequality on the first one; Proposition 13 then yields the required bound on the remaining term. In the next section, we give results related to Malliavin calculus that will be useful for the proof of Theorem 3.

Basic results on Malliavin's calculus
We refer the reader to Nualart (2006) for more details. Fix a filtered probability space (Ω, F, (F_t), P) and let (W_t)_{t≥0} be a q-dimensional Brownian motion. Let H := L²([0, T]; R^q); for h ∈ H, W(h) denotes the Wiener integral ∫_0^T h(t)·dW_t. For a smooth random variable of the form F = f(W(h_1), …, W(h_n)), where f is smooth with derivatives of polynomial growth, the Malliavin derivative DF is defined as the H-valued random variable given by

D_t F = Σ_{i=1}^n ∂_{x_i} f(W(h_1), …, W(h_n)) h_i(t).

The operator D is closable as an operator from L^p(Ω) to L^p(Ω; H), for p ≥ 1. Its domain, denoted by D^{1,p}, is the closure of the smooth random variables w.r.t. the norm ‖F‖_{1,p} = [E|F|^p + E(‖DF‖^p_H)]^{1/p}. We can define the iteration of the operator D, in such a way that for a smooth random variable F, the derivative D^k F is a random variable with values in H^{⊗k}. As in the case k = 1, the operator D^k is closable, and we denote its domain by D^{k,p}. Finally, set D^{k,∞} = ∩_{p≥1} D^{k,p} and D^∞ = ∩_{k,p≥1} D^{k,p}. One has the following chain rule property.

Proposition 6. Let φ ∈ C¹_b(R^d) and F = (F^1, …, F^d) with F^i ∈ D^{1,p} for all i. Then φ(F) ∈ D^{1,p} and D(φ(F)) = Σ_{i=1}^d ∂_{x_i} φ(F) DF^i.

We now introduce the Skorohod integral δ, defined as the adjoint operator of D.
Proposition 7. δ is a linear operator with values in L²(Ω) s.t.
• the domain of δ (denoted by Dom δ) is the set of processes u ∈ L²([0, T] × Ω; R^q) s.t. |E(∫_0^T D_t F · u_t dt)| ≤ c(u)‖F‖_{L²} for any F ∈ D^{1,2};
• if u ∈ Dom δ, then δ(u) is the one element of L²(Ω) characterized by the integration by parts formula: for any F ∈ D^{1,2},

E(F δ(u)) = E(∫_0^T D_t F · u_t dt).

Remark 8. If u is an adapted process belonging to L²([0, T] × Ω; R^q), then the Skorohod integral and the Itô integral coincide: δ(u) = ∫_0^T u_t dW_t, and the preceding integration by parts formula becomes

E(F ∫_0^T u_t dW_t) = E(∫_0^T D_t F · u_t dt). (3.1)

This equality is also called the duality formula.
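As a concrete sanity check of the duality formula (our own toy example, not from the paper), take q = 1, F = W_T² and the adapted integrand u_t = W_t, so that δ(u) is the Itô integral ∫_0^T W_t dW_t = (W_T² − T)/2. Using D_t(W_T²) = 2W_T for t ≤ T, E[W_T⁴] = 3T² and E[W_T W_t] = t, both sides can be computed in closed form:

```latex
E[F\,\delta(u)] = \tfrac{1}{2}\,E\!\left[W_T^2\,(W_T^2 - T)\right]
               = \tfrac{1}{2}\left(3T^2 - T^2\right) = T^2,
\qquad
E\!\left[\int_0^T D_t F\, u_t\,dt\right]
   = 2\int_0^T E[W_T W_t]\,dt = 2\int_0^T t\,dt = T^2 .
```

Both sides equal T², as predicted by the duality formula.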
This duality formula is the cornerstone for establishing general integration by parts formulas of the form E[∂_α f(F) g] = E[f(F) H_α], valid for non-degenerate random variables F. We only give the formulation in the case of interest, F = X^N_t.
Proposition 9. We assume that σ is uniformly elliptic and that b and σ are in C^{0,3}_b. For all p > 1, for all multi-indices α s.t. |α| ≤ 2, for all t ∈ (0, T], all u, r, s ∈ [0, T] and for any functions f and g in C^{|α|}_b, there exist a random variable H_α ∈ L^p and a non-decreasing function K(T) (uniform in N, x, s, u, r, t, f and g) s.t.

E[∂_α f(X^N_t) g(X^N_u, X^N_r, X^N_s)] = E[f(X^N_t) H_α], (3.2)

‖H_α‖_{L^p} ≤ K(T) t^{−|α|/2} ‖g‖_{C^{|α|}_b}. (3.3)
These results are obtained with the techniques given in the article of Kusuoka and Stroock (1984). Although the upper bound for p^N stated in Proposition 10 seems to be quite standard, to our knowledge such a result has not appeared in the literature before, except in the case of time-homogeneous coefficients (see Konakov and Mammen (2002), proof of Theorem 1.1).

Proof of Theorem 3
In the following, K(•) denotes a generic non-decreasing function (which may depend on b and σ). To prove Theorem 3, we take advantage of Propositions 9 and 10. The scheme of the proof is the following.
• Use a PDE and Itô calculus to write the difference p^N(0, x; s, y) − p(0, x; s, y) as

p^N(0, x; s, y) − p(0, x; s, y) = E_1 + E_2, (4.1)

where, with a := σσ*,

E_1 = Σ_{i=1}^d ∫_0^s E[(b_i(φ(r), X^N_{φ(r)}) − b_i(r, X^N_r)) ∂_{x_i} p(r, X^N_r; s, y)]dr,
E_2 = (1/2) Σ_{i,j=1}^d ∫_0^s E[(a_{ij}(φ(r), X^N_{φ(r)}) − a_{ij}(r, X^N_r)) ∂²_{x_i x_j} p(r, X^N_r; s, y)]dr.

• Prove the intermediate result

E[(s − r)^{−d/2} e^{−c|y−X^N_r|²/(s−r)}] ≤ K(T) s^{−d/2} e^{−c''|x−y|²/s}, (4.2)

where the constant c'' > 0 in the right-hand side depends on c and on the constant c' > 0 of Proposition 10.
• Use Malliavin calculus, Proposition 10 and the intermediate result (4.2) to show that each term E_1 and E_2 (see (4.1)) is bounded by K(T)(T / (N s^{(d+1)/2})) e^{−c|x−y|²/s}.

Definition 11. We say that a term E(x, s, y) satisfies property P if |E(x, s, y)| ≤ K(T)(T / (N s^{(d+1)/2})) e^{−c|x−y|²/s}.

4.1 Proof of equality (4.1)

First, the transition density function (r, x) → p(r, x; s, y) satisfies the backward PDE

∂_r p(r, x; s, y) + Σ_{i=1}^d b_i(r, x) ∂_{x_i} p(r, x; s, y) + (1/2) Σ_{i,j=1}^d a_{ij}(r, x) ∂²_{x_i x_j} p(r, x; s, y) = 0,

where a := σσ*. The function p, as well as its first derivatives, is uniformly bounded by a constant depending on ε as long as |s − r| ≥ ε (see Appendix A). Second, since p^N(0, x; s, y) is a continuous function in s and y (convolution of Gaussian densities), we observe that E[p(r, X^N_r; s, y)] tends to p^N(0, x; s, y) as r ↑ s, while it equals p(0, x; s, y) at r = 0. Then, for any ε > 0, Itô's formula applied to r → p(r, X^N_r; s, y) between 0 and s − ε leads to

E[p(s − ε, X^N_{s−ε}; s, y)] − p(0, x; s, y) = ∫_0^{s−ε} E[∂_r p(r, X^N_r; s, y) + Σ_i b_i(φ(r), X^N_{φ(r)}) ∂_{x_i} p(r, X^N_r; s, y) + (1/2) Σ_{i,j} a_{ij}(φ(r), X^N_{φ(r)}) ∂²_{x_i x_j} p(r, X^N_r; s, y)]dr.

From the PDE, the above equality becomes

E[p(s − ε, X^N_{s−ε}; s, y)] − p(0, x; s, y) = ∫_0^{s−ε} E[Σ_i (b_i(φ(r), X^N_{φ(r)}) − b_i(r, X^N_r)) ∂_{x_i} p(r, X^N_r; s, y) + (1/2) Σ_{i,j} (a_{ij}(φ(r), X^N_{φ(r)}) − a_{ij}(r, X^N_r)) ∂²_{x_i x_j} p(r, X^N_r; s, y)]dr.

To get (4.1), it remains to let ε tend to 0 and to prove that the integrand is integrable over [0, s]. We check it by looking at the rest of the proof.
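The backward PDE satisfied by the transition density can be verified symbolically in the simplest setting. The following sympy check (our own illustration, assuming d = 1 with constant drift b and constant diffusion coefficient a) confirms that the Gaussian kernel solves the equation:

```python
import sympy as sp

r, s, z, y = sp.symbols('r s z y', real=True)
a = sp.Symbol('a', positive=True)   # a = sigma^2, constant
b = sp.Symbol('b', real=True)       # constant drift
tau = s - r
# Gaussian transition density p(r, z; s, y) of dX = b dt + sqrt(a) dW
p = sp.exp(-(y - z - b * tau) ** 2 / (2 * a * tau)) / sp.sqrt(2 * sp.pi * a * tau)
# Backward Kolmogorov PDE: d_r p + b d_z p + (a/2) d_zz p = 0
pde = sp.diff(p, r) + b * sp.diff(p, z) + sp.Rational(1, 2) * a * sp.diff(p, z, 2)
print(sp.simplify(pde / p))  # expect 0: the kernel solves the backward PDE
```

Dividing by p before simplifying cancels the exponential factor, so the check reduces to a rational identity.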

Proof of the intermediate result (4.2)
We prove inequality (4.2). By Proposition 10, the left-hand side of (4.2) is bounded, up to constants, by a product over i ∈ {1, …, d} of one-dimensional integrals in z_i; each integral ∫_R (…)dz_i is the convolution product of the densities of two independent Gaussian random variables N(−x_i, r/(2c')) and N(y_i, (s−r)/(2c)), computed at 0. Hence, after factoring out the resulting Gaussian term in x − y, the remaining integral is equal to 1 and (4.2) follows.
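The Gaussian convolution step can be made explicit. If A ∼ N(m_1, v_1) and B ∼ N(m_2, v_2) are independent, then A + B ∼ N(m_1 + m_2, v_1 + v_2), and the convolution of the two densities evaluated at 0 is the density of A + B at 0. With m_1 = −x_i, v_1 = r/(2c'), m_2 = y_i, v_2 = (s − r)/(2c), as in the proof:

```latex
(f_A * f_B)(0) = f_{A+B}(0)
  = \frac{1}{\sqrt{2\pi (v_1 + v_2)}}\,
    \exp\!\Big(-\frac{(y_i - x_i)^2}{2 (v_1 + v_2)}\Big),
\qquad
v_1 + v_2 = \frac{r}{2c'} + \frac{s-r}{2c} \le \frac{s}{2\min(c, c')} .
```

Since v_1 + v_2 ≤ s/(2 min(c, c')), the exponential factor is bounded by e^{−min(c,c')(y_i−x_i)²/s}; taking the product over i produces the Gaussian term in x − y appearing in (4.2).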

Upper bound for E 1
We recall that E_1 = Σ_{i=1}^d ∫_0^s E[(b_i(φ(r), X^N_{φ(r)}) − b_i(r, X^N_r)) ∂_{x_i} p(r, X^N_r; s, y)]dr. For each i, we apply Itô's formula to b_i(u, X^N_u) between u = φ(r) and u = r. We get

b_i(r, X^N_r) − b_i(φ(r), X^N_{φ(r)}) = ∫_{φ(r)}^r α^i_u du + ∫_{φ(r)}^r β^i_u dW_u, (4.3)

where α^i_u depends on ∂_t b, ∂_x b, ∂²_x b, σ and b, where β^i_u is a row vector of q components, and where α^i and (β^{i,k})_{1≤k≤q} are uniformly bounded. Using (4.3) and the duality formula (3.1) yields E_1 = E_11 + E_12, with

E_11 = −Σ_i ∫_0^s E[(∫_{φ(r)}^r α^i_u du) ∂_{x_i} p(r, X^N_r; s, y)]dr,
E_12 = −Σ_i ∫_0^s ∫_{φ(r)}^r E[β^i_u · D_u(∂_{x_i} p(r, X^N_r; s, y))]du dr.

We upper bound E_11 and E_12.
Bound for E_11. Since α_u is uniformly bounded in u and r − φ(r) ≤ T/N, we have |E_11| ≤ (C T/N) ∫_0^s E|∂_x p(r, X^N_r; s, y)|dr. Besides that, from Proposition 13, |∂_x p(r, X^N_r; s, y)| ≤ (K(T)/(s − r)^{(d+1)/2}) e^{−c|y−X^N_r|²/(s−r)}. Using the intermediate result (4.2) yields

|E_11| ≤ K(T) (T/N) (∫_0^s (s − r)^{−1/2} dr) s^{−d/2} e^{−c''|x−y|²/s} ≤ K(T) (T / (N s^{(d−1)/2})) e^{−c''|x−y|²/s},

and thus E_11 satisfies property P (see Definition 11).
Bound for E_12. To rewrite E_12, we use the expression of β^i_u and Proposition 6, which gives D_u(∂_{x_i} p(r, X^N_r; s, y)) = ∇_x(∂_{x_i} p(r, X^N_r; s, y)) σ(φ(r), X^N_{φ(r)}). Then, using the integration by parts formula (3.2), we transfer the extra derivative of p onto a weight H_α whose L^p norm is controlled by (3.3). Hence, E_12 is bounded by a quantity of the same form as the bound on E_11. The corresponding integral in r again equals 2√s up to a constant, and E_12 satisfies property P.

Upper bound for E 2
We recall that E_2 = (1/2) Σ_{i,j=1}^d ∫_0^s E[(a_{ij}(φ(r), X^N_{φ(r)}) − a_{ij}(r, X^N_r)) ∂²_{x_i x_j} p(r, X^N_r; s, y)]dr. As we did for E_1, we apply Itô's formula to a_{ij}(u, X^N_u) between φ(r) and r. We get

a_{ij}(r, X^N_r) − a_{ij}(φ(r), X^N_{φ(r)}) = ∫_{φ(r)}^r γ^{ij}_u du + ∫_{φ(r)}^r δ^{ij}_u dW_u, (4.5)

where γ^{ij}_u and δ^{ij}_u are uniformly bounded, δ^{ij}_u being a row vector of q components. Then, the duality formula (3.1) leads to E_2 = E_21 + E_22.

Bound for E_21. The term coming from ∫_{φ(r)}^r γ^{ij}_u du in (4.5) is integrated against ∂²_{x_i x_j} p. Thus, E_21 can be treated as E_12 and satisfies the same estimate.

Bound for E_22

To rewrite E_22, we use the expression of δ^{ij}_u and Proposition 6, which asserts D_u(∂²_{x_i x_j} p(r, X^N_r; s, y)) = ∇_x(∂²_{x_i x_j} p(r, X^N_r; s, y)) σ(φ(r), X^N_{φ(r)}). To complete this proof, we split E_22 in two terms: E¹_22 (resp. E²_22) corresponds to the integral in r from 0 to s/2 (resp. from s/2 to s).
• On [0, s/2], E¹_22 …

A Bounds for the transition density function and its derivatives

We bring together classical results related to bounds for the transition probability density of X defined by (1.1).
Proposition 12 (Aronson (1967)). Assume that the coefficients σ and b are bounded measurable functions and that σ is uniformly elliptic. There exist positive constants K, α_0, α_1 s.t. for any x, y in R^d and any 0 ≤ t < s ≤ T, one has

(1/K) (s − t)^{−d/2} e^{−α_0|x−y|²/(s−t)} ≤ p(t, x; s, y) ≤ K (s − t)^{−d/2} e^{−α_1|x−y|²/(s−t)}. (A.1)

(3.3) is owed to Theorem 1.20 and Corollary 3.7 of Kusuoka and Stroock (1984). Another consequence of the duality formula is the derivation of an upper bound for p^N.

Proposition 10. Assume σ is uniformly elliptic and b and σ are in C^{0,2}_b. Then, for any x, y ∈ R^d and any s ∈ (0, T], one has p^N(0, x; s, y) ≤ (K(T)/s^{d/2}) e^{−c|x−y|²/s}, for a constant c and a non-decreasing function K, both depending on d and on the upper bounds for b, σ and their derivatives.