On Some Non-Asymptotic Bounds for the Euler Scheme

We obtain non-asymptotic bounds for the Monte Carlo algorithm associated with the Euler discretization of some diffusion processes. The key tool is the Gaussian concentration satisfied by the density of the discretization scheme. This Gaussian concentration is derived from a Gaussian upper bound of the density of the scheme and a modification of the so-called "Herbst argument" used to prove logarithmic Sobolev inequalities. We eventually establish a Gaussian lower bound for the density of the scheme that shows the concentration is sharp.


Statement of the problem
Let the $\mathbb{R}^d$-valued process $(X_t)_{t \ge 0}$ satisfy the dynamics
$$dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \qquad (1.1)$$
where $(W_t)_{t \ge 0}$ is a $d'$-dimensional ($d' \le d$) standard Brownian motion defined on a filtered probability space $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \ge 0}, \mathbb{P})$ satisfying the usual assumptions. The coefficients $b, \sigma$ are assumed to be Lipschitz continuous in space and $1/2$-Hölder continuous in time, so that there exists a unique strong solution to (1.1).
Let us fix $T > 0$ and introduce for $(t,x) \in [0,T] \times \mathbb{R}^d$
$$Q(t,x) := \mathbb{E}\left[f(T, X_T) \,\middle|\, X_t = x\right], \qquad (1.2)$$
where $f$ is a measurable function, bounded in time and with polynomial growth in space. The numerical approximation of $Q(t,x)$ appears in many applicative fields. In mathematical finance, $Q(t,x)$ can be related to the price of an option when the underlying asset follows the dynamics (1.1). In this framework we consider two important cases:
(a) If $d = d'$, $Q(t,x)$ corresponds to the price at time $t$, when $X_t = x$, of the vanilla option with maturity $T$ and pay-off $f$.
(b) If the last $d'$ components of $X$ are the running integrals of the first $d'$ ones, $Q(t,x)$ corresponds to the price of an Asian option.
It is also well known, see e.g. Friedman [Fri75], that $Q(t,x)$ is the Feynman-Kac representation of the solution of the parabolic PDE
$$\partial_t Q(t,x) + LQ(t,x) = 0,\quad (t,x) \in [0,T) \times \mathbb{R}^d, \qquad Q(T,x) = f(T,x),$$
where $L$ stands for the infinitesimal generator of (1.1). Hence, the quantity $Q(t,x)$ can also be related to problems of heat diffusion with Cauchy boundary conditions (case (a)) or to kinetic systems (case (b)).
The natural probabilistic approximation of $Q(t,x)$ consists in considering the Monte Carlo algorithm. This approach is particularly relevant compared to deterministic methods when the dimension $d$ is large. To this end we introduce some discretization schemes. For case (a) we consider the Euler scheme with time step $\Delta := T/N$, $N \in \mathbb{N}^*$. Set $\forall i \in \mathbb{N}$, $t_i = i\Delta$, and for $t \ge 0$ define $\phi(t) = t_i$ for $t_i \le t < t_{i+1}$. The Euler scheme writes
$$X^\Delta_t = x + \int_0^t b(\phi(s), X^\Delta_{\phi(s)})\,ds + \int_0^t \sigma(\phi(s), X^\Delta_{\phi(s)})\,dW_s. \qquad (1.3)$$
For case (b), we apply the Euler discretization to the first $d'$ (non-degenerate) components and take for the last $d'$ components the running time integral of the first ones; this defines the scheme (1.4). Equation (1.4) defines a completely simulatable scheme with Gaussian increments. On every time step, the last $d'$ components are the integral of a Gaussian process.
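For concreteness, the scheme (1.3) can be sketched as follows. The one-dimensional coefficients below are illustrative stand-ins (not taken from the paper), chosen Lipschitz in space and uniformly elliptic so that the standing assumptions hold.

```python
import numpy as np

def euler_scheme(x0, b, sigma, T, N, rng):
    """One path of the Euler scheme (1.3) on the grid t_i = i * T / N.

    For simplicity sigma is scalar/diagonal here; in the general matrix
    case one would use sigma(t, x) @ dW instead of the elementwise product.
    """
    dt = T / N
    X = np.empty((N + 1,) + np.shape(x0))
    X[0] = x0
    for i in range(N):
        dW = rng.normal(scale=np.sqrt(dt), size=np.shape(x0))
        # coefficients are evaluated at the last grid point phi(s) = t_i
        X[i + 1] = X[i] + b(i * dt, X[i]) * dt + sigma(i * dt, X[i]) * dW
    return X

# Illustrative coefficients: Lipschitz drift, uniformly elliptic diffusion.
b = lambda t, x: -x
sigma = lambda t, x: np.sqrt(1.0 + 0.5 * np.cos(x) ** 2)

rng = np.random.default_rng(0)
path = euler_scheme(np.array([1.0]), b, sigma, T=1.0, N=100, rng=rng)
print(path.shape)  # (101, 1): N + 1 grid points
```

The case (b) scheme would additionally accumulate, on each step, the (Gaussian) time integral of the simulated component.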
The weak error for the above problems has been widely investigated in the literature. Under suitable assumptions on the coefficients $b, \sigma$ and $f$ (namely smoothness), it is shown in Talay and Tubaro [TT90] that the discretization error satisfies
$$E_D(\Delta) := \mathbb{E}[f(T, X_T)] - \mathbb{E}[f(T, X^\Delta_T)] = C\Delta + O(\Delta^2).$$
Bally and Talay [BT96a] then extended this result to the case of bounded measurable functions $f$ in a hypoelliptic setting for time-homogeneous coefficients $b, \sigma$. Also, still for time-homogeneous coefficients, similar expansions have been derived for the difference of the densities of the process and the discretization scheme, see Konakov and Mammen [KM02] in case (a), Konakov et al. [KMM09] in case (b) for a uniformly elliptic diffusion coefficient $\sigma\sigma^*$, and eventually Bally and Talay [BT96b] for a hypoelliptic diffusion and a slight modification of the Euler scheme. The constant $C$ in the above development involves the derivatives of $Q$ and therefore depends on $f, b, \sigma, x$.
The expansion of $E_D(\Delta)$ gives a good control on the impact of the discretization procedure on the initial diffusion, and also permits to improve the convergence rate using e.g. Richardson-Romberg extrapolation (see [TT90]). Anyhow, to have a global sharp control of the numerical procedure it remains to consider the quantities
$$E_{MC}(M,\Delta) := \mathbb{E}[f(T, X^\Delta_T)] - \frac{1}{M}\sum_{i=1}^M f(T, (X^\Delta_T)^i). \qquad (1.5)$$
In the previous quantity, $M$ stands for the number of independent samples in the Monte Carlo algorithm and $((X^\Delta_t)^i_{t \ge 0})_{i \in [\![1,M]\!]}$ are independent sample paths. Indeed, the global error associated with the Monte Carlo algorithm writes
$$E(M,\Delta) = E_D(\Delta) + E_{MC}(M,\Delta),$$
where $E_D(\Delta)$ is the discretization error and $E_{MC}(M,\Delta)$ is the pure Monte Carlo error. The convergence of $E_{MC}(M,\Delta)$ to $0$ when $M \to \infty$ is ensured under the above assumptions on $f$ by the strong law of large numbers. A speed of convergence can also be derived from the central limit theorem, but these results are asymptotic, i.e. they hold for a sufficiently large $M$. On the other hand, a non-asymptotic result is provided by the Berry-Esseen theorem, which compares the distribution function of the normalized Monte Carlo error to the distribution function of the normal law at order $O(M^{-1/2})$.
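A minimal sketch of the Monte Carlo procedure behind (1.5), using an illustrative Ornstein-Uhlenbeck model (not from the paper) whose exact mean is known in closed form, so that the size of the global error $E_D(\Delta) + E_{MC}(M,\Delta)$ can be observed directly:

```python
import numpy as np

def euler_terminal(x0, b, sigma, T, N, M, rng):
    """M i.i.d. samples of X^Delta_T, the Euler scheme (1.3) at time T."""
    dt = T / N
    X = np.full(M, x0)
    for i in range(N):
        dW = rng.normal(scale=np.sqrt(dt), size=M)
        X = X + b(i * dt, X) * dt + sigma(i * dt, X) * dW
    return X

# Illustrative scalar model with a known mean: dX = -X dt + dW.
b = lambda t, x: -x
sigma = lambda t, x: np.ones_like(x)

rng = np.random.default_rng(1)
T, N, M, x0 = 1.0, 50, 20000, 1.0
samples = euler_terminal(x0, b, sigma, T, N, M, rng)

# With f(T, x) = x, the empirical mean estimates E[f(T, X^Delta_T)]; the gap
# to the exact value x0 * exp(-T) mixes the bias E_D(Delta) and the
# statistical error E_MC(M, Delta) of (1.5).
estimate = samples.mean()
print(abs(estimate - x0 * np.exp(-T)))  # global error E_D + E_MC
```

Increasing $N$ shrinks the discretization part, increasing $M$ the statistical part; the non-asymptotic bounds of this paper quantify the latter for fixed $M$.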
In the current work we are interested in giving, for functions $f$ Lipschitz continuous in space, non-asymptotic error bounds for the quantity $E_{MC}(M,\Delta)$. Similar issues had previously been studied by Malrieu and Talay [MT06]. In that work, the authors investigated the concentration properties of the Euler scheme and obtained logarithmic Sobolev inequalities, which imply Gaussian concentration (see e.g. Ledoux [Led99]), for multi-dimensional Euler schemes with constant diffusion coefficients. Their goal was in some sense different from ours since they were mainly interested in ergodic simulations. In that framework we also mention the recent work of Joulin and Ollivier for Markov chains [JO09].
Our strategy here is different. We are interested in the approximation of $Q(t,x)$, $t \le T$, where $T > 0$ is fixed. It turns out that the log-Sobolev machinery is in some sense too rigid and too ergodic-oriented. Also, as far as approximation schemes are concerned, it seems really difficult to obtain log-Sobolev inequalities in dimension greater than or equal to two without the constant diffusion assumption, see [MT06]. Anyhow, under suitable assumptions on $b, \sigma$ (namely uniform ellipticity of $\sigma\sigma^*$ and mild space regularity), the discretization schemes (1.3), (1.4) can be shown to have a density admitting a Gaussian upper bound. From this a priori control we can modify Herbst's argument to obtain the expected Gaussian concentration as well as the tensorization property (see [Led99]), which yields, for $r > 0$ and $f$ Lipschitz continuous in space, a deviation bound for $\mathbb{P}[E_{MC}(M,\Delta) \ge r + \delta]$ with Gaussian decay in $r$, uniformly in $\Delta$. Here $\delta \ge 0$ is a bias term (independent of $M$) depending on the constants appearing in the Gaussian domination (see Theorem 2.1) and on the Wasserstein distance between the law of the discretization scheme and the Gaussian upper bound. We also prove that a Gaussian lower bound holds true for the density of the scheme. Hence, the Gaussian concentration is sharp: for a function $f$ with suitable non-vanishing behavior at infinity, the concentration is at most Gaussian, i.e. the deviation probability is also bounded from below with Gaussian decay, for $r$ large enough, with $\delta$ depending on $f$ and on the Gaussian upper and lower bounds, and $\bar\alpha(T) > 0$ independent of $M$, uniformly in $\Delta = T/N$.
The paper is organized as follows. We first give our standing assumptions and some notations in Section 1.2. We state our main results in Section 2. Section 3 is dedicated to concentration properties and non-asymptotic Monte Carlo bounds for random variables whose law admits a density dominated by a probability density satisfying a log-Sobolev inequality. We also prove our main deviation results at the end of that section.
In Section 4 we show how to obtain the previously mentioned Gaussian bounds in the two cases introduced above. The main tool for the upper bound is a discrete parametrix representation of McKean-Singer type for the density of the scheme, see [MS67] and Konakov and Mammen [KM00] or [KM02]. The lower bound is then derived through suitable chaining arguments adapted to our non-Markovian setting.

Assumptions and Notations
We first specify some assumptions on the coefficients. Namely, we assume:
(UE) There exists $\lambda_0 \ge 1$ s.t. for all $(t,x)$ and all $\zeta$, $\lambda_0^{-1}|\zeta|^2 \le \langle a(t,x)\zeta, \zeta\rangle \le \lambda_0|\zeta|^2$, where $a(t,x) := \sigma\sigma^*(t,x)$ (the diffusion matrix of the non-degenerate components in case (b)) and $|\cdot|$ stands for the Euclidean norm.
(SB) The diffusion matrix $a$ is uniformly $\eta$-Hölder continuous in space, $\eta > 0$, uniformly in time, and the drift $b$ is bounded. That is, there exists $L_0 > 0$ s.t. $\sup_t |a(t,x) - a(t,y)| \le L_0|x-y|^\eta$ for all $x, y$, and $|b|_\infty \le L_0$.
Throughout the paper we assume that (UE), (SB) are in force.
In the following we will denote by $C$ a generic positive constant that can depend on $L_0, \lambda_0, \eta, d, T$. We reserve the notation $c$ for constants depending on $L_0, \lambda_0, \eta, d$ but not on $T$. In particular, the constants $c, C$ are uniform w.r.t. the discretization parameter $\Delta = T/N$, and the value of both $c$ and $C$ may change from line to line.
To establish concentration properties, we will work with the class of Lipschitz continuous functions satisfying the following growth assumption: for given $\rho_0 > 0$, $\beta > 0$ and a Borel set $A \subset S^{d-1}$ of positive measure,
$$(G_{\rho_0,\beta})\qquad F(y) \ge \beta|y| \ \text{ for all } y \text{ s.t. } |y| \ge \rho_0 \text{ and } y/|y| \in A.$$
Remark 1.1. The above assumption simply means that for $|y| \ge \rho_0$, along the directions of $A$, the graph of $F$ stays above a given hyperplane. In particular, for all $z \in A$, $F(rz) \to +\infty$ as $r \to +\infty$.
The bounds on the quantities $E_{MC}(M,\Delta)$ will be established for real-valued functions $f$ that are uniformly Lipschitz continuous in space and measurable bounded in time, such that for a fixed $T$, $F(\cdot) := f(T,\cdot)$ is Lipschitz continuous with $|\nabla F|_\infty \le 1$. Moreover, for the lower bounds, we will suppose that the above $F$ satisfies $(G_{\rho_0,\beta})$.

Results
Let us first justify that under the assumptions (UE), (SB) the discretization schemes admit a density. For all $x = x_0$ and $x' = x_N$ in $\mathbb{R}^d$,
$$p^\Delta(0,T,x,x') = \int_{(\mathbb{R}^d)^{N-1}} \prod_{i=0}^{N-1} p^\Delta(t_i, t_{i+1}, x_i, x_{i+1})\,dx_1\cdots dx_{N-1}, \qquad (2.1)$$
where the notation $p^\Delta(t_i, t_{i+1}, x_i, x_{i+1})$, $i \in [\![0,N-1]\!]$, stands in case (a) for the density at point $x_{i+1}$ of a Gaussian random variable with mean $x_i + b(t_i,x_i)\Delta$ and non-degenerate covariance matrix $a(t_i,x_i)\Delta$, whereas in case (b) it stands for the density of a Gaussian random variable whose mean and (non-degenerate as well) covariance matrix are induced by one step of the scheme (1.4); here $\forall y \in \mathbb{R}^d$, $y_{1,d'} = (y_1,\ldots,y_{d'})^*$ and $y_{d'+1,d} = (y_{d'+1},\ldots,y_d)^*$. Equation (2.1) therefore guarantees the existence of the density for the discretization schemes. From now on, we denote by $p^\Delta(t_j, t_{j'}, x, \cdot)$ the transition densities between times $t_j$ and $t_{j'}$, $0 \le j < j' \le N$, of the discretization schemes (1.3), (1.4). Let us denote by $\mathbb{P}_x$ (resp. $\mathbb{P}_{t_j,x}$, $0 \le j < N$) the conditional probability given $X^\Delta_0 = x$ (resp. $\{X^\Delta_{t_j} = x\}$), so that in particular $\mathbb{P}_x[X^\Delta_T \in A] = \int_A p^\Delta(0,T,x,x')\,dx'$. We have the following Gaussian estimates for the densities of the schemes.
Theorem 2.1 ("Aronson" Gaussian estimates for the discrete Euler scheme). Assume (UE), (SB). There exist constants $c > 0$, $C \ge 1$, s.t. for every $0 \le j < j' \le N$:
$$C^{-1} p_{c^{-1}}(t_{j'}-t_j, x, x') \le p^\Delta(t_j, t_{j'}, x, x') \le C\, p_c(t_{j'}-t_j, x, x'), \qquad (2.2)$$
where for all $0 \le s < t \le T$, in case (a),
$$p_c(t-s, x, x') := \left(\frac{c}{2\pi(t-s)}\right)^{d/2}\exp\left(-\frac{c|x'-x|^2}{2(t-s)}\right),$$
and in case (b), $p_c$ is the Gaussian density with the corresponding Kolmogorov-type two-scale structure (scale $(t-s)^{1/2}$ for the first $d'$ components and $(t-s)^{3/2}$ for the last ones). Note that $p_c$ enjoys the semigroup property, i.e. $\forall 0 < s < t$, $\int_{\mathbb{R}^d} p_c(t-s,x,u)\,p_c(s,u,x')\,du = p_c(t,x,x')$ (see Kolmogorov [Kol34] or [KMM09] for case (b)).
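The semigroup property of $p_c$ is easy to verify numerically in case (a); a minimal sketch in dimension one (the value $c = 1.5$ is an arbitrary placeholder):

```python
import numpy as np

def p_c(t, x, xp, c=1.5):
    """Gaussian kernel p_c(t, x, x') of Theorem 2.1, case (a), in dimension 1."""
    return np.sqrt(c / (2.0 * np.pi * t)) * np.exp(-c * (xp - x) ** 2 / (2.0 * t))

# Chapman-Kolmogorov / semigroup property:
#   int p_c(t - s, x, u) p_c(s, u, x') du = p_c(t, x, x').
s, t, x, xp = 0.3, 1.0, 0.2, -0.7
u = np.linspace(-15.0, 15.0, 20001)
du = u[1] - u[0]
lhs = np.sum(p_c(t - s, x, u) * p_c(s, u, xp)) * du  # Riemann sum quadrature
rhs = p_c(t, x, xp)
print(abs(lhs - rhs))  # quadrature-level agreement
```

The check simply reflects that convolving Gaussians of variances $(t-s)/c$ and $s/c$ gives variance $t/c$.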
Remark 2.1. The above upper bound can be found in [KM02] in the case of time-homogeneous Lipschitz continuous coefficients. Both bounds can be derived (for time-dependent coefficients) from the work of Gobet and Labart [GL08] under stronger smoothness assumptions. Here, our framework is the one of the "standard" PDE assumptions used to derive Aronson's estimates for the fundamental solution of non-degenerate non-divergence form second-order operators, see e.g. Sheu [She91] or [DM09]. In particular, no regularity in time is needed.
Our second result is the Gaussian concentration of the Monte Carlo error $E_{MC}(M,\Delta)$ defined in (1.5), for a fixed $M$, uniformly in $\Delta = T/N$, $N \ge 1$.
Theorem 2.2 (Gaussian concentration). Assume (UE), (SB). For the constants $c$ and $C$ of Theorem 2.1, we have for every $\Delta = T/N$, $N \ge 1$, and every function $f$ Lipschitz continuous in space and measurable bounded in time:
$$\forall r > 0,\quad \mathbb{P}_x\left[|E_{MC}(M,\Delta)| \ge r + \delta_{C,\alpha(T)}\right] \le 2\exp\left(-\frac{Mr^2}{\alpha(T)}\right), \qquad (2.3)$$
where $\alpha(T) = 2T/c$ is the log-Sobolev constant associated with the Gaussian upper bound and
$$\delta_{C,\alpha(T)} := \sqrt{\alpha(T)\log C}. \qquad (2.4)$$
Moreover, if $F(\cdot) := f(T,\cdot) \ge 0$ satisfies, for a given $\rho_0 > 0$ and $\beta > 0$, the growth assumption $(G_{\rho_0,\beta})$, a lower bound with Gaussian decay, stated as (2.5), holds as well, with bias
$$\delta_{c,C,T,f} = (1+\sqrt{2})\sqrt{\alpha(T)\log C} + \gamma_{c^{-1},T}(F) + \rho_0\beta - \underline{F},$$
where $\gamma_{c^{-1},T}(dx') = p_{c^{-1}}(T,x,x')\,dx'$ and $\underline{F} := \inf_{s \in S^{d-1}} F(s\rho_0)$. The constant $\bar\alpha(T)^{-1}$ appearing in (2.5) is explicit: in case (a) it involves, for odd $d$, a multiplicative factor $\theta > 1$ and the quantity $K(d,A)$ associated with the set $A \subset S^{d-1}$ appearing in $(G_{\rho_0,\beta})$, see (2.6); in case (b), $d$ is even and the analogous expression holds.
From Theorem 2.1 and our current assumptions on $f$, we can deduce from the central limit theorem that $\sqrt{M}\,E_{MC}(M,\Delta)$ is asymptotically normal with variance $\sigma^2(f,\Delta) := \mathrm{Var}(f(T,X^\Delta_T))$. From this asymptotic regime, we thus derive that for large $M$ the typical deviation rate $r$ (i.e. the size of the confidence interval) in (2.3) has order $c\sigma(f,\Delta)M^{-1/2}$, where for a given threshold $\alpha \in (0,1)$, $c := c(\alpha)$ can be deduced from the inverse of the Gaussian distribution function. In other words, $r$ is typically small for large $M$. On the other hand, we have a systematic bias $\delta_{C,\alpha(T)}$, independent of $M$. In whole generality, this bias is inherent to the concentration arguments used to derive the above bounds, see Section 3, and cannot be avoided. Hence, those bounds turn out to be particularly relevant for deriving non-asymptotic confidence intervals when $r$ and $\delta_{C,\alpha(T)}$ have the same order. In particular, the parameter $M$ is not meant to go to infinity. This kind of result can be useful if, for instance, the underlying Euler scheme is particularly heavy to simulate and only a relatively small number $M$ of samples is reasonably allowed. On the other hand, the smaller $T$ is, the bigger $M$ can be. Precisely, one can prove that the constant $C$ of Theorem 2.1 is bounded by $\exp(cL_0T)$ (see Section 4). Hence from (2.4), we have $\delta_{C,\alpha(T)} = O(T)$ for $T$ small.
Remark 2.2. For the lower bound, the "expected" value for $\bar\alpha(T)^{-1}$ would be $\lambda$, corresponding to the
largest eigenvalue of one half the inverse of the covariance matrix of the random variable with density $p_{c^{-1}}(T,x,\cdot)$ appearing in the lower bound of Theorem 2.1. There are two corrections with respect to this intuitive approach. First, in case (a) there is an additional multiplicative term $\theta > 1$ (that can be optimized) when $d$ is odd. This correction is unavoidable for $d = 1$; anyhow, for odd $d > 1$ it can be avoided up to an additional additive factor like the above $\chi$ (see the proof of Proposition 3.3 for details). We kept this presentation to be homogeneous across all odd dimensions. Also, an additive correction (or penalty) factor $\chi$ appears. It is mainly due to our growth assumption $(G_{\rho_0,\beta})$. Observe anyhow that, for given $T > 0$, $C \ge 1$, $\varepsilon > 0$ s.t. $|A| \ge \varepsilon$, if the dimension $d$ is large enough then, by definition of $K(d,A)$, we have $\chi = 0$. Still, for $d = 1$ (which can only occur in case (a)) we cannot avoid the correction factor $\chi$.
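To illustrate the interplay between the statistical radius $r$ and the $M$-independent bias discussed above, one can tabulate a bound of the concentration type. The precise shape and constants below (deviation probability $\exp(-Mr^2/\alpha(T))$ with $\alpha(T) = 2T/c$ and bias $\sqrt{\alpha(T)\log C}$) are our hedged reading of (2.3)-(2.4), and the numerical values of $c$, $C$ are placeholders:

```python
import math

def nonasymptotic_radius(M, T, c, C, eps):
    """Radius r s.t. exp(-M r^2 / alpha(T)) <= eps, plus the M-independent
    bias delta = sqrt(alpha(T) * log(C)); the exact shape of (2.3)-(2.4)
    is assumed here for illustration only."""
    alpha = 2.0 * T / c
    r = math.sqrt(alpha * math.log(1.0 / eps) / M)
    delta = math.sqrt(alpha * math.log(C))
    return r + delta, delta

# The statistical part r shrinks like M^{-1/2}; the bias delta does not.
# The bound is most informative when r and delta have the same order.
for M in (10, 100, 10000):
    total, delta = nonasymptotic_radius(M, T=1.0, c=1.0, C=1.2, eps=0.05)
    print(M, round(total, 4), round(delta, 4))
```

As the text notes, the non-asymptotic radius does not tend to zero with $M$: it saturates at the bias, which itself shrinks with $T$.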
Remark 2.3. Let us also specify that in the above definition of $\chi$, $\rho_0$ is not meant to go to zero, even though some useful functions like $|\cdot|$ satisfy $(G_{\rho_0,1})$ for any $\rho_0 > 0$. Actually, the bound is particularly relevant in "large regimes", that is, when $r/\beta$ is not assumed to be small. Also, we could replace $\rho_0$ by $R > 0$ in the above definition of $\chi$ as soon as $r/\beta \ge R$. In particular, if $F$ satisfies $(G_{\rho_0,\beta})$, then for $R \ge \rho_0$ it also satisfies $(G_{R,\beta})$. We gave the statement with $\rho_0$ in order to be uniform w.r.t. the threshold $\rho_0$ appearing in the growth assumption on $F$, but the correction term can be improved as a function of the deviation factor $r/\beta$.
Remark 2.4. Note that under (UE), (SB), in case (a), the martingale problem in the sense of Stroock and Varadhan is well posed for equation (1.1), see Theorem 7.2.1 in [SV79]. Also, from Theorem 2.1 and the estimates of Section 4, one can deduce that the unique weak solution of the martingale problem has a smooth density that satisfies Aronson-like bounds. Furthermore, a careful reading of [KM02] shows that the discretization error analysis carried out therein can be extended to our current framework, i.e. we only need boundedness of the drift and uniform spatial Hölder continuity of the (non-degenerate) diffusion coefficient to control $E_D(\Delta)$. Hence, the above concentration result gives that in case (a) one can control the global error $E(M,\Delta) := E_D(\Delta) + E_{MC}(M,\Delta)$. The well-posedness of the martingale problem in case (b) remains to our best knowledge an open question and will concern further research.
Remark 2.5. In case (b), the concentration regime in the above bounds highly depends on $T$. Since the two components do not have the same scale, we have that, in short time, the concentration regime is the one of the non-degenerate component in the upper bound (resp. of the degenerate component in the lower bound). For large $T$, the situation is reversed.
We now consider an important situation within case (b). Namely, in kinetic models (resp. in financial mathematics) it is often useful to evaluate the expectation of functions that involve the difference of the first component and its normalized average (which corresponds to a time normalization of the second component). This allows one to compare the velocity (resp. the price) at a given time $T$ with the averaged velocity (resp. averaged price) over the associated time interval. Obviously, the normalization is made so that the two components have time-homogeneous scales. We have the following result.
Corollary 2.1. Assume (UE), (SB) and consider, in case (b), $f(T,x) := g(T, x_{1,d'} - T^{-1}x_{d'+1,d})$, where $g$ is a Lipschitz continuous function in space and measurable bounded in time satisfying $|\nabla g(T,\cdot)|_\infty \le 1$. Then a concentration bound of the type (2.3) holds for every $\Delta = T/N$, $N \ge 1$. A lower bound could be derived similarly to Theorem 2.2. The proofs of Theorems 2.1 and 2.2 (as well as Corollary 2.1) are respectively postponed to Sections 4.2 and 3.3.

Gaussian concentration - Upper bound
We recall that a probability measure $\gamma$ on $\mathbb{R}^d$ satisfies a logarithmic Sobolev inequality with constant $\alpha > 0$ $(LSI_\alpha)$ if for all $f \in H^1(d\gamma)$,
$$\mathrm{Ent}_\gamma(f^2) \le \alpha \int |\nabla f|^2\,d\gamma,$$
where $\mathrm{Ent}_\gamma(\phi) = \int \phi\log(\phi)\,d\gamma - \int\phi\,d\gamma\,\log\left(\int\phi\,d\gamma\right)$ denotes the entropy of $\phi$ w.r.t. the measure $\gamma$. In particular, we have the following result (see [Led99], Section 2.2, eq. (2.17)).
Proposition 3.1. Let $V$ be a $C^2$ convex function on $\mathbb{R}^d$ with $\operatorname{Hess} V \ge \lambda I_{d\times d}$, $\lambda > 0$, and such that $e^{-V}$ is integrable with respect to the Lebesgue measure. Let $\gamma(dx) = \frac{1}{Z}e^{-V(x)}\,dx$ be a probability measure (Gibbs measure). Then $\gamma$ satisfies a logarithmic Sobolev inequality with constant $\alpha = 2/\lambda$.
Throughout this section we consider a probability measure $\mu$ with density $m$ with respect to the Lebesgue measure $\lambda_K$ on $\mathbb{R}^K$ (here we have in mind $K = d$ or $K = Md$, $M$ being the number of Monte Carlo paths). We assume that $\mu$ is dominated by a probability measure $\gamma$ in the following sense:
$$\gamma(dx) = q(x)\,dx \text{ satisfies } (LSI_\alpha) \text{ and } \exists \kappa \ge 1,\ \forall x \in \mathbb{R}^K,\ m(x) \le \kappa q(x).$$
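As a concrete instance of Proposition 3.1 (the Gaussian case used later for $\gamma_{c,T}$), the computation below just unwinds the definitions:

```latex
% Gaussian instance of Proposition 3.1.
\text{Let } \gamma = \mathcal{N}(m, \sigma^2 I_d), \text{ i.e. }
V(x) = \frac{|x-m|^2}{2\sigma^2}, \qquad Z = (2\pi\sigma^2)^{d/2}.
\text{Then } \operatorname{Hess} V = \sigma^{-2} I_{d\times d},
\text{ so } \lambda = \sigma^{-2} \text{ and } \gamma \text{ satisfies }
(LSI_\alpha) \text{ with } \alpha = \tfrac{2}{\lambda} = 2\sigma^2.
\text{For } \gamma_{c,T}(dx') = p_c(T,x,x')\,dx',
\text{ of covariance } (T/c)\,I_d, \text{ this gives } \alpha(T) = \tfrac{2T}{c}.
```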
Lemma 3.1. Assume that $\mu$ with density $m$ and $\gamma$ with density $q$ satisfy the domination condition with constants $(\alpha, \kappa)$. Then $\int f\,d\mu - \int f\,d\gamma \le \sqrt{\alpha\log\kappa}$ for every $1$-Lipschitz function $f$, i.e. $W_1(\mu,\gamma) \le \sqrt{\alpha\log\kappa}$.
Proof. Recall first that for a non-negative function $f$ we have the following variational formulation of the entropy:
$$\mathrm{Ent}_\gamma(f) = \sup\left\{\int fg\,d\gamma : \int e^g\,d\gamma \le 1\right\}.$$
Applying it with $g = \lambda(f - \int f\,d\gamma) - \log\int e^{\lambda(f - \int f\,d\gamma)}\,d\gamma$, $\lambda > 0$, and using the control of the Laplace transform deriving from $(LSI_\alpha)$, we then have
$$\int f\,d\mu - \int f\,d\gamma \le \frac{1}{\lambda}\,\mathrm{Ent}_\gamma\!\left(\frac{m}{q}\right) + \frac{\alpha\lambda}{4}.$$
An optimization in $\lambda$ yields $\int f\,d\mu - \int f\,d\gamma \le \sqrt{\alpha\,\mathrm{Ent}_\gamma(m/q)}$. Now, using the domination condition, one has $\mathrm{Ent}_\gamma\left(\frac{m}{q}\right) = \int \frac{m}{q}\log\frac{m}{q}\,d\gamma \le \log(\kappa)$, and the result follows.
Using the tensorization property of the logarithmic Sobolev inequality, we derive the following Corollary 3.1. Note that the term $\delta_{\kappa,\alpha}$ can be seen as a penalty term due, on the one hand, to the transport between $\mu$ and $\gamma$ and, on the other hand, to the explosion of the domination constant $\kappa^M$ between $\mu^{\otimes M}$ and $\gamma^{\otimes M}$ when $M$ tends to infinity. We emphasize that the bias $\delta_{\kappa,\alpha}$ is independent of $M$. Hence, the result below is especially relevant when $r$ and $\delta_{\kappa,\alpha}$ have the same order. In particular, the non-asymptotic confidence interval given by (3.5) cannot be compared to the asymptotic confidence interval deriving from the central limit theorem, whose size has order $O(M^{-1/2})$.
Proof. Let $r > 0$ and $M \ge 1$. Clearly, changing $f$ into $-f$, it suffices to prove the one-sided bound. By tensorization, the measure $\gamma^{\otimes M}$ satisfies an $(LSI_\alpha)$ with the same constant $\alpha$ as $\gamma$. Applying Proposition 3.2 with the measures $\mu^{\otimes M}$ and $\gamma^{\otimes M}$ and the function $(x_1,\ldots,x_M) \mapsto \frac{1}{M}\sum_{i=1}^M f(x_i)$, we easily conclude.
Remark 3.2. Note that to obtain the non-asymptotic bounds for the Monte Carlo procedure (3.5), we successively used the concentration properties of the reference measure $\gamma$, the control of the distance $W_1(\mu,\gamma)$ given by the variational formulation of the entropy (see Lemma 3.1), and the tensorization property of the functional inequality satisfied by $\gamma$. The same arguments can therefore be applied to a reference measure $\gamma$ satisfying a Poincaré inequality.

Gaussian concentration - Lower bound
Concerning the previous deviation rate of Proposition 3.2, a natural question is whether it is sharp. Namely, for a given function $f$ satisfying suitable growth conditions at infinity (without such growth we cannot see the asymptotic behavior), do we have a lower bound of the same order, i.e. with Gaussian decay at infinity? The next proposition gives a positive answer to that question.
For a $C^2$ function $V$ on $\mathbb{R}^d$ such that $e^{-V}$ is integrable with respect to $\lambda_d$ and s.t. $\exists\lambda \ge 1$, $\lambda I_{d\times d} \ge \operatorname{Hess}(V) \ge 0$, let $\gamma(dx) = e^{-V(x)}Z^{-1}\,dx$ be the associated Gibbs probability measure. We assume that $\exists\kappa \ge 1$ s.t. for $|x| \ge \rho_0$ the measures $\mu(dx) = m(x)\,dx$ and $\gamma(dx)$ satisfy the reverse domination $m(x) \ge \kappa^{-1}e^{-V(x)}Z^{-1}$.
We have the following result.
Proposition 3.3. Under the above assumptions, the deviation probabilities of $\mu$ admit a Gaussian lower bound at infinity.
Proof (sketch; the computations are carried out in polar coordinates). Writing the tail probability as an integral over rays, where $\sigma(ds)$ stands for the Lebesgue measure of $S^{d-1}$, the condition $\operatorname{Hess}(V) \le \lambda I_{d\times d}$ yields, for all $\rho \ge \rho_0$, a pointwise lower bound on the density along each ray, and therefore a Gaussian lower bound for the radial integral. An explicit computation, with the convention that $Y \sim \mathcal{N}(0_{2\times 1}, I_{2\times 2})$ is a standard two-dimensional Gaussian vector, then gives a closed-form expression of the resulting Gaussian tail for $d$ even, which plugged into (3.8) yields the stated bound.
Corollary 3.2. Under the assumptions of Proposition 3.3, let $Y_1,\cdots,Y_M$ be i.i.d. $\mathbb{R}^d$-valued random variables with law $\mu$. Then the deviations of the empirical mean admit a Gaussian lower bound, $\forall r > 0$, $\forall M \ge 1$.
Proof. We only consider $d$ even. Applying the bound of Proposition 3.3 to each sample and using the independence of the $(Y_i)_{i \in [\![1,M]\!]}$, we obtain the announced product bound, which completes the proof.
In case (a), the Gaussian probability $\gamma_{c,T}$ with density $p_c(T,x,\cdot)$ defined in Theorem 2.1 satisfies a logarithmic Sobolev inequality with constant $\alpha(T) = 2T/c$. The result then follows from Theorem 2.1 and Corollary 3.1.
In case (b), $\gamma_{c,T}(dx') = Z^{-1}e^{-V_{T,x}(x')}\,dx'$ for an explicit quadratic potential $V_{T,x}$, see (3.9). The Hessian matrix of $V_{T,x}$ can be bounded above and below explicitly, so that the assumptions of Proposition 3.1 are satisfied; for our Gaussian bounds the extreme eigenvalues $\Lambda$ and $\lambda$ of the Hessian are explicitly related, with $K(d,A)$ defined in (2.6).
Observe now that in case (a) the normalization factor $Z = Z(T,d)$ associated with $p_{c^{-1}}(T,x,\cdot)$ writes $Z = (2\pi cT)^{d/2}$. Hence, recalling that $\lambda = (cT)^{-1}$, we obtain the explicit value of the correction in this case. Finally, since in case (b) we always have $d$ even, the correction writes accordingly. This completes the proof.
Note that the random variable $Y^\Delta_T = \mathbb{T}_T^{-1}X^\Delta_T$, where $\mathbb{T}_T$ denotes the scale matrix of case (b), admits the density $p^\Delta_Y(T,y,y') = T^{d'}p^\Delta(0,T,\mathbb{T}_T y,\mathbb{T}_T y')$ with respect to $\lambda_d(dy')$. By Theorem 2.1, this density is dominated by $(ZT^{d'})^{-1}e^{-V_{T,\mathbb{T}_T y}(\mathbb{T}_T y')}$, where $V_{T,x}$ is defined in (3.9). The Hessian of $y' \mapsto V_{T,\mathbb{T}_T y}(\mathbb{T}_T y')$ satisfies the bounds required in Proposition 3.1. We still conclude by Proposition 3.1 and Corollary 3.1.

Parametrix representation of the densities
We first derive a parametrix representation of the densities of the schemes. The key idea is to express these densities in terms of iterated convolutions of the density of a scheme with frozen coefficients, which therefore admits a Gaussian density, and a suitable kernel that has an integrable singularity. These representations have previously been obtained in Konakov and Mammen [KM00] and Konakov et al. [KMM09].
We first need to introduce some objects and notations. Let us begin with the "frozen" inhomogeneous scheme. For fixed $x, x' \in \mathbb{R}^d$ and $0 \le j < j' \le N$, we define in case (a) the scheme (4.1), whose coefficients are frozen at the terminal point $x'$; we omit this dependence for notational convenience. In case (b) we define the frozen scheme (4.2); that is, in case (b) the frozen process also depends on $j'$ through an additional term in the diffusion coefficient. This correction term is needed in order to have good continuity properties w.r.t. the underlying metric associated with $p_c$ when performing differences of the coefficients at the current and frozen points, see the definition (4.7) and Sections 4.2 and 4.3 for details. From now on, $p^\Delta(t_j,t_{j'},x,\cdot)$ and $\tilde p^{\Delta,t_{j'},x'}(t_j,t_{j'},x,\cdot)$ denote the transition densities between times $t_j$ and $t_{j'}$ of the discretization schemes (1.3), (1.4) and of the "frozen" schemes (4.1), (4.2), respectively. Let us introduce a discrete "analogue" of the inhomogeneous infinitesimal generators of the continuous objects, from which we derive the kernel of the discrete parametrix representation. For a sufficiently smooth function $\psi: \mathbb{R}^d \to \mathbb{R}$ and fixed $x' \in \mathbb{R}^d$, $j' \in (0,N]$, define the families of one-step operators $(L^\Delta_{t_j})_{j\in[\![0,j')}$ and $(\tilde L^{\Delta,t_{j'},x'}_{t_j})_{j\in[\![0,j')}$ associated with the schemes (1.3)-(1.4) and (4.1)-(4.2). Using the notation $\tilde p^\Delta(t_j,t_{j'},x,x') = \tilde p^{\Delta,t_{j'},x'}(t_j,t_{j'},x,x')$, we now define in (4.3) the discrete kernel $H^\Delta$ as the difference of these one-step operators applied to the frozen density. Note carefully that the fixed variable $x'$ appears here twice: as the final point where we consider the density and as the freezing point in the previous schemes (4.1), (4.2). Note also that if $j' = j+1$, i.e. $t_{j'} = t_j + \Delta$, the transition probability $\tilde p^{\Delta,t_{j'},x'}(t_{j+1},t_{j+1},\cdot,x')$ is the Dirac mass $\delta_{x'}$. From the previous definition (4.3), for all $0 \le j < j' \le N$,
$$\tilde p^{\Delta,t_{j'},x'}(t_j,t_{j'},x,x') = \int_{\mathbb{R}^d} \tilde p^{\Delta,t_{j'},x'}(t_j,t_{j+1},x,u)\,\tilde p^{\Delta,t_{j'},x'}(t_{j+1},t_{j'},u,x')\,du.$$
Analogously to Lemma 3.6 in [KM00] we obtain the following result.

Proof of the Gaussian estimates of Theorem 2.1
The key argument for the proof is given in the following lemma whose proof is postponed to Section 4.3.
Lemma 4.1. There exist $c > 0$, $C \ge 1$ s.t. for all $0 \le j < j' \le N$ and all $r \in [\![0, j'-j]\!]$, the $r$-th term of the parametrix series is controlled by products of beta-function factors times $p_c$. In the above, $B(m,n) := \int_0^1 s^{m-1}(1-s)^{n-1}\,ds$ stands for the beta function. The upper bound in (2.2) then follows from Proposition 4.1 and the asymptotics of the beta function. It is also useful to achieve the first step of the lower bound.
Proof of the lower bound. We provide in this section the global lower bound in short time. W.l.o.g. we assume that $T \le 1$. This allows us to substitute the constant $C$ appearing in (4.5) by a constant $c_0 \le c\exp(|b|_\infty)$, uniformly for $t_{j'} - t_j \le T$. From the upper bound, we derive the lower bound in short time on the compact sets of the underlying metric, see (4.7) below. This gives the diagonal decay. To get the whole bound in short time it remains to obtain the "off-diagonal" bound. To this end a chaining argument is needed. In case (a) it is quite standard in the Markovian framework, see Chapter VII of Bass [Bas97] or Kusuoka and Stroock [KS87]. In case (b), the chaining in the appendix of [DM09] can be adapted to our discrete framework. We adapt below these arguments to our non-Markovian setting for the sake of completeness.
Eventually, to derive the lower bound for an arbitrary fixed $T > 0$ it suffices to use the bound in short time and the semigroup property of $p_{c^{-1}}$. Naturally, the bigger $T$ is, the worse the constant in the global lower bound.
From Proposition 4.1, using (4.5) (replacing $C$ by $c_0$) for the last inequality, we obtain a lower bound (4.6) on compact sets, provided that $T$ is small enough. Precisely, denoting in case (a)
$$\forall 0 \le s < t \le T,\ (x,x') \in (\mathbb{R}^d)^2,\quad d^2_{t-s}(x,x') := \frac{|x'-x|^2}{t-s}, \qquad (4.7)$$
we have that, for a given $R_0$, there exists $c_0 \ge 1$ s.t. if $d^2_{t_{j'}-t_j}(x,x') \le R_0$ then
$$p^\Delta(t_j,t_{j'},x,x') \ge c_0^{-1}(t_{j'}-t_j)^{-S}, \qquad (4.8)$$
where the parameter $S$ is the intrinsic scale of the scheme: in case (a) $S = d/2$, in case (b) $S = d$. Hence, up to a modification of $c_0^{-1}$, (4.8) holds on the compact sets of the metric.
Chaining in case (a). Equation (4.8) provides a lower bound for the density of the scheme when $s, t$ correspond to discretization times. For the chaining, the first step consists in extending this result to arbitrary times $0 \le s < t \le T$. Precisely, if $d^2_{t-s}(x,x') \le R_0/12$ we prove that
$$\exists c_0 \ge 1,\ \forall 0 \le s < t \le T,\ \forall y,\quad p^{\Delta,y}(s,t,x,x') \ge c_0^{-1}(t-s)^{-d/2}. \qquad (4.9)$$
If $\phi(t) = \phi(s)$, the above density is Gaussian and (4.9) holds. If $\phi(t) = \phi(s) + \Delta$, equation (4.9) directly follows from a convolution argument between two Gaussian random variables. Note anyhow carefully that the "crude" convolution argument cannot be iterated $L$ times for an arbitrarily large $L$: in that case the constants would have a geometric decay. Thus, for $\phi(t) - (\phi(s)+\Delta) \ge \Delta$, we decompose the transition over an intermediate set $B_R(s,t,x,x')$, for $R > 0$ to be specified later on. Now, for $(x_1,x_2) \in B_R(s,t,x,x')$, the intermediate transitions remain in the compact regime of the metric, where we used that $\phi(t) - (\phi(s)+\Delta) \ge \Delta$. We therefore derive from (4.8) and (4.10) that, for some $c_0 > 0$, the integrand in the decomposition is bounded from below. Since $\phi(t) - (\phi(s)+\Delta) \le t-s$ and there exists $c > 0$ s.t. $|B_R(s,t,x,x')| \ge c(t-s)^{d/2}$,
where $|\cdot|$ stands for the Lebesgue measure of a given set in $\mathbb{R}^d$, we derive (4.9) from the above equation up to a modification of $c_0$.
It now remains to do the chaining when, for $0 \le j < j' \le N$ and $(x,x') \in (\mathbb{R}^d)^2$, we have $d^2_{t_{j'}-t_j}(x,x') \ge 2R_0$, $2R_0 \le 1$. Set $L = \lceil Kd^2_{t_{j'}-t_j}(x,x')\rceil$, for $K \ge 1$ to be specified later on, and $h := (t_{j'}-t_j)/L$. Note that $L \ge 1$. For all $i \in [\![0,L]\!]$ we denote the intermediate times $(s_i)$ and sets $(B_i)$ as in (4.11). We can now choose $K$ large enough s.t. (4.12) holds.
To proceed we have to distinguish two cases: $h \ge \Delta$ and $h < \Delta$.
- If $h \ge \Delta$, write from (4.13) the decomposition of the transition density over the intermediate times. Since we consider the events $X^\Delta_{s_{L-1}} \in B_{L-1}$, we derive from (4.11), (4.12) that the intermediate points remain in the compact regime of (4.9). Hence, from (4.9) for the previous $R$, each intermediate transition density is bounded from below, and therefore (4.9) yields a lower bound for each factor of the decomposition. Iterating the process we finally get a product bound over the $L$ steps. Observing that each step contributes a factor of order $c_0^{-1}$, we obtain from the previous definitions of $h$ and $L$ the estimate (4.15), for a suitable $c$, up to a modification of $c_0$.
- If $h < \Delta$, we have to introduce, for all $k \in [\![j,j')$, the modified decomposition (4.16). Introducing the corresponding intermediate events, we derive from (4.11), (4.12) and (4.9) a lower bound for each transition. Plugging this estimate into (4.16), we obtain the same type of control, using once again (4.12) and (4.9) for the last inequality. Iterating this procedure we still obtain (4.15) and can conclude as in the previous case.
Chaining in case (b). If $d^2_{t-s}(x,x') \le c^{-1}R_0$, for $c$ large enough, we derive similarly to case (a) the bound (4.17). Similarly to the previous paragraph, we reduce to the case $\phi(t) - (\phi(s)+\Delta) \ge \Delta$. Then, equation (4.10) still holds for the previous set $B_R$, with the current definition of $d^2_\cdot(\cdot,\cdot)$. From standard computations, taking a suitable $R$, we derive that for all $z \in \bar B_R(u,y)$ the integrand is bounded from below, which plugged into (4.18) yields (4.17).
It now remains to do the chaining when $d^2_{t_{j'}-t_j}(x,x') \ge 2R_0$. The crucial point is to choose a "good" path between $x$ and $x'$. In the non-degenerate case it was naturally the straight line between the two points (Euclidean geodesic). In our current framework we can relate $d^2_{t_{j'}-t_j}(x,x')$ to a deterministic control problem (CD). Introduce the energy
$$I(t_{j'}-t_j,x,x') := \inf \int_0^{t_{j'}-t_j} |\varphi_s|^2\,ds,$$
the infimum being taken over the controls steering $x$ to $x'$ in (CD). This is a deterministic controllability problem that has a unique solution, reached for the control expressed through the resolvent and the Gram matrix, where $R$ stands for the resolvent, i.e. $\forall 0 \le t, t_0 \le t_{j'}-t_j$, $\partial_t R(t,t_0) = AR(t,t_0)$, $R(t_0,t_0) = I_{d\times d}$, and $Q_{t_{j'}-t_j} = \int_0^{t_{j'}-t_j} R(t_{j'}-t_j,s)BB^*R(t_{j'}-t_j,s)^*\,ds$ is the Gram matrix, see e.g. Theorem 1.11, Chapter 1 in Coron [Cor07]. For (CD) the resolvent is explicit, and therefore the Gram matrix of the control problem corresponds to the covariance matrix of the limit Gaussian process. Hence, explicit computations give (4.20), and thus $\frac{1}{2}I(t_{j'}-t_j,x,x') = d^2_{t_{j'}-t_j}(x,x')$, with $d^2$ defined in (4.7). Now we have a candidate for a deterministic curve around which we can do the chaining: it is simply the deterministic curve $(\phi_s)_{s\in[0,t_{j'}-t_j]}$ solution of (CD) for the above optimal control $(\varphi_s)_{s\in[0,t_{j'}-t_j]}$.
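The identification of the Gram matrix with a covariance matrix can be checked by hand in the simplest Kolmogorov example (our illustrative reading of (CD) with $d' = 1$: $\dot\phi^1_s = \varphi_s$, $\dot\phi^2_s = \phi^1_s$). The sketch below computes $Q_t$ by quadrature and compares it with the covariance matrix of $(W_t, \int_0^t W_s\,ds)$, which is $[[t, t^2/2],[t^2/2, t^3/3]]$.

```python
import numpy as np

# Kolmogorov example of the control problem (CD), d' = 1 (illustrative):
# dphi^1 = u dt, dphi^2 = phi^1 dt, i.e. A has a single 1 in position (2,1)
# and the control enters through B = (1, 0)^*.
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
B = np.array([[1.0],
              [0.0]])

def resolvent(t, t0):
    """R(t, t0) solving dR/dt = A R, R(t0, t0) = I; here A^2 = 0, so the
    matrix exponential series truncates after the linear term."""
    return np.eye(2) + A * (t - t0)

def gram(t, n=20000):
    """Gram matrix Q_t = int_0^t R(t,s) B B* R(t,s)* ds (midpoint rule)."""
    h = t / n
    s = (np.arange(n) + 0.5) * h
    Q = np.zeros((2, 2))
    for si in s:
        R = resolvent(t, si)
        Q += R @ (B @ B.T) @ R.T
    return Q * h

t = 1.0
Q = gram(t)
cov = np.array([[t, t**2 / 2], [t**2 / 2, t**3 / 3]])  # Cov(W_t, int_0^t W_s ds)
print(np.max(np.abs(Q - cov)))  # quadrature-level agreement
```

In particular, $Q_t$ is invertible for every $t > 0$, which reflects the controllability of (CD) and is consistent with the existence of a density for the degenerate scheme in case (b).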
To complete the proof of the chaining it remains to specify how to define the $(s_i)_{i\ge 1}$, $(y_i)_{i\ge 1}$ and the associated sets. Recall that $2R_0 \le 1$. We set here $\bar L := \lceil Kd^2_{t_{j'}-t_j}(x,x')\rceil$ for an integer $K \ge 3$ to be specified later on. In terms of the new distance, $\bar L$ is similar in its definition to the $L$ of the previous paragraph. Define $s_0 = 0$ and the subsequent $(s_i)_{i\ge 1}$ recursively along the optimal curve; the previous conditions on $R_0$, $K$ give the well-posedness of this definition.
Lemma 4.2 (Controls on the time step). Set for all $i \ge 0$, $\varepsilon_i := s_{i+1} - s_i$. There exist a constant $c_1 \ge 1$ and an integer $L \le \bar L$ s.t. the two-sided bound (4.21) on the $\varepsilon_i$ holds.
Proof. We first set $L = \inf\{k \ge 1 : s_k = t_{j'}-t_j\}$. The set $\{k \ge 1 : s_k = t_{j'}-t_j\}$ is clearly non-empty. The upper bound in (4.21) then follows from the definition of the family $(s_i)_{i\ge 1}$. Suppose now that $s_i < (t_{j'}-t_j)(1 - 2/\bar L)$ for a given $0 \le i \le L-2$. Assume also that $s_{i+1} - s_i < (t_{j'}-t_j)/\bar L$ (otherwise $\varepsilon_i = (t_{j'}-t_j)/\bar L$). Then $\int_{s_i}^{s_{i+1}} |\varphi_s|^2\,ds = I(t_{j'}-t_j,x,x')/\bar L$. From (4.19), (4.20), we deduce that the increments of the optimal control are bounded, with a constant $c_2 > 0$. Hence, we obtain a lower bound on $\varepsilon_i$. Recalling that $I(t_{j'}-t_j,x,x') = 2d^2_{t_{j'}-t_j}(x,x')$, the lower bound in (4.21) follows for all $i$ s.t. $s_i < T(1-2/\bar L)$. The bound for $L$ and the last time step are then easily derived.
Define now for all $i \in [\![0,L]\!]$, $y_i = \phi_{s_i}$ (in particular $y_0 = x$ and $y_L = x'$), and for all $i \in [\![1,L-1]\!]$ the associated sets, where $\rho := d_{t_{j'}-t_j}(x,x')(t_{j'}-t_j)^{1/2}/\bar L$. Because of the transport term, we are led to consider sets that involve the forward transport from the previous point on the optimal curve and the backward transport of the next point in the above definition. Equation (4.13) still holds with the $L$ of the previous paragraph replaced by the current $L$. Following the strategy of the previous paragraph concerning the conditioning, the end of the proof relies on the following Lemma 4.3 (Controls for the chaining): with the previous assumptions and definitions, for $K$ large enough and the same $c_1$ as in Lemma 4.2, the intermediate transitions satisfy the required analogues of the previous controls.
The key estimate is the following control of the convolution kernel $H^\Delta$: there exist $c > 0$, $C \ge 1$, s.t. for all $0 \le j < j' \le N$, the bound (4.24) holds. Indeed, this bound yields a control of the iterated convolutions for all $0 \le j < j' \le N$, using the inequality $p^\Delta(t_{j+k}-t_j,x,u) \le C p_c(t_j,t_{j+k},x,u)$ (cf. Lemma 3.1 of [KMM09] in case (b)) and the semigroup property of $p_c$ for the last-but-one inequality. The bound (4.5) then follows from the above control and (4.24) by induction.
Proof of (4.24). We consider two cases.
- $j' = j+1$. From (4.3), we have in this case, for all $x$, an explicit expression of $H^\Delta$ as a difference of Gaussian densities. In case (a) these write in terms of $G$, where $\forall z \in \mathbb{R}^d$, $G(z) = \exp(-|z|^2/2)(2\pi)^{-d/2}$ stands for the density of the standard Gaussian vector of $\mathbb{R}^d$. In case (b), the additional time dependence of the frozen scheme makes it possible to have good continuity properties, equilibrating the singularities coming from the one-step increments of order $\Delta$ with the terms appearing in the exponential. In all cases, tedious but elementary computations involving the mean value theorem yield that $\exists c > 0$, $C \ge 1$ s.t. $|H^\Delta(t_j,t_{j'},x,x')| \le C\Delta^{-1+\eta/2}\,p_c(\Delta,x,x')$.
Remark 4.1. Note that the time dependence in the frozen dynamics (4.2) somehow corresponds to the backward transport of the terminal condition. It is crucial in order to allow, from (4.31), the compensation of the exploding terms associated with derivatives in $x_{1,d'}$ of order greater than two and derivatives in $x_{d'+1,d}$ of order greater than one appearing in the kernel $H^\Delta$. A similar construction was used in [KMM09].