A converse comparison theorem for BSDEs and related properties of g-expectation

In [1], Z. Chen proved that if, for each terminal condition $\xi$, the solution of the BSDE associated with the standard parameter $(\xi, g_1)$ is equal at time $t=0$ to the solution of the BSDE associated with $(\xi, g_2)$, then we must have $g_1\equiv g_2$. This result raises a natural question: what happens when the equality is replaced by an inequality? In this paper, we investigate this question and we prove some properties of the ``$g$-expectation'', a notion introduced by S. Peng in [8].


Introduction
It is by now well known that there exists a unique adapted, square integrable solution to a backward stochastic differential equation (BSDE for short in the remainder of the paper) of the type
$$Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T, \qquad (1)$$
provided, for instance, that the generator $f$ is Lipschitz in both variables $y$ and $z$ and that $\xi$ and $\big(f(s,0,0)\big)_{s\in[0,T]}$ are square integrable. We refer of course to E. Pardoux and S. Peng [4, 5] and to N. El Karoui, S. Peng and M.-C. Quenez [2] for a survey of the applications of this theory in finance.
One of the great achievements of the theory of BSDEs is the comparison theorem for real-valued BSDEs, due to S. Peng [7] at first and then generalized by several authors, see e.g. N. El Karoui, S. Peng and M.-C. Quenez [2, Theorem 2.2]. It allows one to compare the solutions of two BSDEs whenever the terminal conditions and the generators can be compared. In this paper we investigate an inverse problem: if we can compare the solutions of two BSDEs (at time $t = 0$) with the same terminal condition, for all terminal conditions, can we compare the generators?
The result of Z. Chen [1] can be read as the first step in solving this problem. Indeed, using the language of ``$g$-expectation'' introduced by S. Peng in [8], he proved that, given two generators, say $g_1$ and $g_2$, if, for each $\xi \in L^2$, we have $Y^1_0(\xi) = Y^2_0(\xi)$, where $\big(Y^i_t(\xi), Z^i_t(\xi)\big)_{t\in[0,T]}$ stands for the solution of the BSDE
$$Y_t = \xi + \int_t^T g_i(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s, \qquad 0 \le t \le T,$$
for $i = 1, 2$, then $g_1 \equiv g_2$. With the above notations, the main issue of this paper is to address the following question: if, for each $\xi \in L^2$, we have $Y^1_0(\xi) \le Y^2_0(\xi)$, do we have $g_1 \le g_2$?
The paper is organized as follows: in section 2, we introduce some notations and state our assumptions. In section 3, we prove the result in the case of deterministic generators and give an application of these techniques to partial differential equations (PDEs for short in the rest of the paper). In section 4, we prove a converse to the comparison theorem for BSDEs and then, with the help of this result, we study the case when the generators do not depend on the variable $y$. Finally, in section 5, we discuss the Jensen inequality for ``$g$-expectation''.

Notations and assumptions
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space carrying a standard $d$-dimensional Brownian motion $(W_t)_{t \ge 0}$ starting from $W_0 = 0$, and let $(\mathcal{F}_t)_{t \ge 0}$ be the filtration generated by $(W_t)_{t \ge 0}$. We perform the usual $\mathbb{P}$-augmentation of each $\mathcal{F}_t$ so that $(\mathcal{F}_t)_{t \ge 0}$ is right continuous and complete. If $z$ belongs to $\mathbb{R}^d$, $\|z\|$ denotes its Euclidean norm. We define the following usual spaces of processes:
$$\mathcal{S}^2 = \Big\{ \psi \ \text{progressively measurable};\ \|\psi\|^2_{\mathcal{S}^2} := \mathbb{E}\Big[\sup_{0 \le t \le T} |\psi_t|^2\Big] < \infty \Big\},$$
$$\mathcal{H}^2 = \Big\{ \psi \ \text{progressively measurable};\ \|\psi\|^2_2 := \mathbb{E}\Big[\int_0^T \|\psi_t\|^2\,dt\Big] < \infty \Big\}.$$
Let us consider a function $g$, which in the following will be the generator of the BSDE, defined on $[0,T] \times \mathbb{R} \times \mathbb{R}^d$ with values in $\mathbb{R}$, such that the process $\big(g(t,y,z)\big)_{t\in[0,T]}$ is progressively measurable for each $(y,z)$ in $\mathbb{R} \times \mathbb{R}^d$. For the function $g$, we will use, throughout the paper, the following assumptions:
(A 1). $g$ is uniformly Lipschitz in $(y,z)$: there exists a constant $K \ge 0$ such that, for each $t$ and each $(y, z)$, $(y', z')$, $\big|g(t,y,z) - g(t,y',z')\big| \le K\big(|y - y'| + \|z - z'\|\big)$;
(A 2). the process $\big(g(t,0,0)\big)_{t\in[0,T]}$ belongs to $\mathcal{H}^2$;
(A 3). for each $(t, y)$, $g(t, y, 0) = 0$;
(A 4). for each $(y, z)$, the process $\big(g(t,y,z)\big)_{t\in[0,T]}$ is continuous.
It is by now well known that, under the assumptions (A 1) and (A 2), for any random variable $\xi$ in $L^2(\mathcal{F}_T)$, the BSDE (1) has a unique adapted solution, say $\big(Y_t, Z_t\big)_{t\in[0,T]}$, such that $Z$ is in the space $\mathcal{H}^2$. Actually, by classical results in this field, the process $Y$ belongs to $\mathcal{S}^2$.
In [8], S. Peng adopted a new point of view in the study of BSDEs. Indeed, if $g$ satisfies the assumptions (A 1) and (A 3), he introduced the function $\mathcal{E}_g$, defined on $L^2(\mathcal{F}_T)$ with values in $\mathbb{R}$, by simply setting $\mathcal{E}_g(\xi) := Y_0$, where $(Y, Z)$ is the solution of the BSDE (1) (since this solution is adapted, $Y_0$ is deterministic). He called $\mathcal{E}_g$ the $g$-expectation and proved that some properties of the classical expectation are preserved (monotonicity for instance) but, since $g$ is not linear in general, linearity is not preserved (we will see at the end of the paper that the Jensen inequality does not hold in general for $\mathcal{E}_g$). Related to this ``nonlinear expectation'', he also defined a conditional $g$-expectation by setting $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) := Y_t$, which is the unique random variable $\eta$, $\mathcal{F}_t$-measurable and square integrable, such that
$$\forall A \in \mathcal{F}_t, \qquad \mathcal{E}_g(\xi\,\mathbf{1}_A) = \mathcal{E}_g(\eta\,\mathbf{1}_A).$$
Z. Chen in [1] used these notions to prove the following result:
Theorem 2.1 Let the assumptions (A 1), (A 3) and (A 4) hold for $g_1$ and $g_2$ and let us assume moreover that, for each $\xi \in L^2(\mathcal{F}_T)$, $\mathcal{E}_{g_1}(\xi) = \mathcal{E}_{g_2}(\xi)$. Then $g_1 \equiv g_2$.
In this note, as mentioned in the introduction, we will investigate the following question: if, for each $\xi \in L^2(\mathcal{F}_T)$, $\mathcal{E}_{g_1}(\xi) \le \mathcal{E}_{g_2}(\xi)$, do we have $g_1(t,y,z) \le g_2(t,y,z)$? This problem is, roughly speaking, a converse to the comparison theorem for BSDEs, since if $g_1 \le g_2$ then $\mathcal{E}_{g_1}(\xi) \le \mathcal{E}_{g_2}(\xi)$.
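Before going on, it may help to recall the classical linear example (this computation is not in the paper, but it is standard and motivates the counterexample of section 3): for a linear generator, the $g$-expectation is an ordinary expectation under a Girsanov change of measure.

```latex
% Classical example (d = 1): take g(t, y, z) = \mu z with \mu \in \mathbb{R}.
% The BSDE Y_t = \xi + \int_t^T \mu Z_s \, ds - \int_t^T Z_s \, dW_s reads
%   Y_t = \xi - \int_t^T Z_s \, d\widetilde{W}_s,
%   \qquad \widetilde{W}_s := W_s - \mu s,
% and \widetilde{W} is a Brownian motion under the measure
%   d\mathbb{Q} = \exp\bigl( \mu W_T - \tfrac{1}{2} \mu^2 T \bigr) \, d\mathbb{P}.
% Taking (conditional) expectations under \mathbb{Q}:
\mathcal{E}_g(\xi) = \mathbb{E}_{\mathbb{Q}}[\xi],
\qquad
\mathcal{E}_g(\xi \mid \mathcal{F}_t) = \mathbb{E}_{\mathbb{Q}}[\xi \mid \mathcal{F}_t].
```

In particular $\mathcal{E}_g$ is monotone and additive for this linear $g$; it is precisely the nonlinearity of $g$ that makes the converse comparison question nontrivial.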
We close this subsection with a comment about the assumption (A 3). Under (A 3), if $\xi$ is an $\mathcal{F}_S$-measurable random variable with $S < T$, then we have $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = \xi$ if $S \le t \le T$, and $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = y_t$ for $0 \le t < S$, where $\big(y_r, z_r\big)_{r\in[0,S]}$ stands for the solution on $[0,S]$ of the BSDE
$$y_r = \xi + \int_r^S g(s, y_s, z_s)\,ds - \int_r^S z_s\,dW_s, \qquad 0 \le r \le S.$$
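The role played by (A 3) in this comment can be verified directly (a one-line check, using only uniqueness of solutions of the BSDE):

```latex
% If \xi is \mathcal{F}_S-measurable and g(t, y, 0) = 0 (assumption (A 3)),
% the pair (Y_s, Z_s) := (\xi, 0) is adapted on [S, T] and satisfies
\xi = \xi + \int_s^T g(u, \xi, 0) \, du - \int_s^T 0 \, dW_u,
\qquad S \le s \le T,
% so, by uniqueness of the solution,
% \mathcal{E}_g(\xi \mid \mathcal{F}_s) = Y_s = \xi for S \le s \le T.
```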

Technical Results
In this subsection, we establish a technical result which will be useful in the next section. We start by giving an a priori estimate for BSDEs, which is of standard type; throughout, $C$ denotes a universal constant.
Proof. We outline the proof for the convenience of the reader.
As usual, we start with Itô's formula applied to $e^{\beta s}|Y_s|^2$: writing $M_u$ for $2\int_u^T e^{\beta s} Y_s Z_s\,dW_s$, we see that, for each $u \in [0,T]$,
$$e^{\beta u}|Y_u|^2 + \int_u^T e^{\beta s}\|Z_s\|^2\,ds = e^{\beta T}|\xi|^2 + 2\int_u^T e^{\beta s} Y_s\,g(s, Y_s, Z_s)\,ds - \beta\int_u^T e^{\beta s}|Y_s|^2\,ds - M_u.$$
Using the Lipschitz assumption on $g$ and then the inequality $2K|y|\,\|z\| \le 2K^2|y|^2 + \tfrac12\|z\|^2$, we deduce, taking into account the definition of $\beta$, that
$$e^{\beta u}|Y_u|^2 + \frac12\int_u^T e^{\beta s}\|Z_s\|^2\,ds \le e^{\beta T}|\xi|^2 + 2\int_u^T e^{\beta s}|Y_s|\,|g(s,0,0)|\,ds - M_u. \qquad (2)$$
In particular, taking the conditional expectation with respect to $\mathcal{F}_t$ of the previous inequality written for $u = t$, we deduce, since the conditional expectation of the stochastic integral vanishes, that
$$\mathbb{E}\Big[\int_t^T e^{\beta s}\|Z_s\|^2\,ds \,\Big|\, \mathcal{F}_t\Big] \le 2\,\mathbb{E}\Big[e^{\beta T}|\xi|^2 + 2\int_t^T e^{\beta s}|Y_s|\,|g(s,0,0)|\,ds \,\Big|\, \mathcal{F}_t\Big]. \qquad (3)$$
Moreover, coming back to the inequality (2), we get
$$\sup_{t \le u \le T} e^{\beta u}|Y_u|^2 \le e^{\beta T}|\xi|^2 + 2\int_t^T e^{\beta s}|Y_s|\,|g(s,0,0)|\,ds + \sup_{t \le u \le T}|M_u|.$$
Taking the expectation in the previous inequality, we obtain, since, by the additional assumption, $\mathbb{E}\big[\sup_{0 \le s \le T}\big(|X^{t,x}_s|^2 + |g(s,0,0)|^2\big)\big] \le C(1 + |x|^2)$, the following estimate:
$$\mathbb{E}\Big[\sup_{t \le s \le t+\varepsilon} |\widetilde Y^\varepsilon_s|^2 + \int_t^{t+\varepsilon} \|\widetilde Z^\varepsilon_s\|^2\,ds\Big] \le C\varepsilon^2, \qquad (7)$$
where $C$ depends on $(x, y, p)$, which is not important here since $(x, y, p)$ is fixed. With this inequality in hand, it is easy to prove the result. Indeed, taking the conditional expectation in the BSDE (6), we get
$$\frac1\varepsilon\big(Y^\varepsilon_t - y\big) = \frac1\varepsilon \widetilde Y^\varepsilon_t = \frac1\varepsilon\,\mathbb{E}\Big[\int_t^{t+\varepsilon} \Big( g\big(u,\, y + p\cdot(X^{t,x}_u - x) + \widetilde Y^\varepsilon_u,\; {}^t\sigma(X^{t,x}_u)p + \widetilde Z^\varepsilon_u\big) + p\cdot b(X^{t,x}_u) \Big)\,du \,\Big|\, \mathcal{F}_t\Big].$$
We split the right-hand side of the previous equality as follows:
$$\frac1\varepsilon\big(Y^\varepsilon_t - y\big) = \frac1\varepsilon\,\mathbb{E}\Big[\int_t^{t+\varepsilon} \Big( g\big(u,\, y + p\cdot(X^{t,x}_u - x),\; {}^t\sigma(X^{t,x}_u)p\big) + p\cdot b(X^{t,x}_u) \Big)\,du \,\Big|\, \mathcal{F}_t\Big] + R_\varepsilon,$$
where $R_\varepsilon$ stands for
$$\frac1\varepsilon\,\mathbb{E}\Big[\int_t^{t+\varepsilon} \Big( g\big(u,\, y + p\cdot(X^{t,x}_u - x) + \widetilde Y^\varepsilon_u,\; {}^t\sigma(X^{t,x}_u)p + \widetilde Z^\varepsilon_u\big) - g\big(u,\, y + p\cdot(X^{t,x}_u - x),\; {}^t\sigma(X^{t,x}_u)p\big) \Big)\,du \,\Big|\, \mathcal{F}_t\Big].$$
It is very easy to check that $R_\varepsilon$ goes to $0$ in $L^2$ when $\varepsilon$ tends to $0^+$. Indeed, since $g$ is Lipschitz, we have
$$|R_\varepsilon| \le \frac{K}{\varepsilon}\,\mathbb{E}\Big[\int_t^{t+\varepsilon} \big( |\widetilde Y^\varepsilon_u| + \|\widetilde Z^\varepsilon_u\| \big)\,du \,\Big|\, \mathcal{F}_t\Big],$$
and then, using Hölder's inequality, we obtain
$$\mathbb{E}\big[|R_\varepsilon|^2\big] \le \frac{K^2}{\varepsilon^2}\,\mathbb{E}\Big[\Big(\int_t^{t+\varepsilon} \big(|\widetilde Y^\varepsilon_u| + \|\widetilde Z^\varepsilon_u\|\big)\,du\Big)^2\Big] \le \frac{2K^2}{\varepsilon}\,\mathbb{E}\Big[\int_t^{t+\varepsilon} \big(|\widetilde Y^\varepsilon_u|^2 + \|\widetilde Z^\varepsilon_u\|^2\big)\,du\Big].$$
Taking into account the estimate (7), the previous inequality yields $\mathbb{E}\big[|R_\varepsilon|^2\big] \le C(\varepsilon^2 + \varepsilon)$, which shows the convergence of $R_\varepsilon$ to $0$.
It remains only to check that, as $\varepsilon \to 0^+$,
$$\frac1\varepsilon\,\mathbb{E}\Big[\int_t^{t+\varepsilon} \Big( g\big(u,\, y + p\cdot(X^{t,x}_u - x),\; {}^t\sigma(X^{t,x}_u)p\big) + p\cdot b(X^{t,x}_u) \Big)\,du \,\Big|\, \mathcal{F}_t\Big] \longrightarrow g\big(t, y, {}^t\sigma(x)p\big) + p\cdot b(x)$$
in the sense of $L^2$. The integrand is dominated by a quantity which is assumed to be integrable; thus the result follows from Lebesgue's theorem. The proof is complete. $\Box$
Remark. As we can see in the proof, the continuity of the process $\big(g(t,y,z)\big)_{t\in[0,T]}$ (assumption (A 4)) is not really needed. We can prove the result if this process is only right-continuous.
Remark. It is worth noting that the assumption ``$\mathbb{E}\big[\sup_{0 \le t \le T} |g(t,0,0)|^2\big]$ finite'' holds when $g$ satisfies (A 3) or in the Markovian situation (see the subsection 3.2 below).

The deterministic case

Main result
This subsection is devoted to the study of the deterministic case, for which we can prove a useful property of $g$-expectation. In this paragraph, $g$ is defined from $[0,T] \times \mathbb{R} \times \mathbb{R}^d$ into $\mathbb{R}$ and satisfies (A 1) and (A 3). Since $\mathcal{E}_g$ is a generalization of the expectation, a natural question arises: if $\xi$ is independent of $\mathcal{F}_t$, do we have $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = \mathcal{E}_g(\xi)$? We claim the following proposition, which is mainly contained in [2].
Proposition 3.1 If $g$ is deterministic, then, for each $\xi \in L^2(\mathcal{F}_T)$, we have $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = \mathcal{E}_g(\xi)$ as soon as $\xi$ is independent of $\mathcal{F}_t$.
Proof. It is enough to check, from the assumption (A 3), that $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t)$ is deterministic. Indeed, by construction we have $\mathcal{E}_g(\xi) = \mathcal{E}_g\big(\mathcal{E}_g(\xi \,|\, \mathcal{F}_t)\big)$ and, under the assumption (A 3) (see S. Peng [8]), the $g$-expectation of a deterministic constant is the constant itself. Now, since $g$ is deterministic and $\xi$ is independent of $\mathcal{F}_t$, the solution on $[t,T]$ of the BSDE with terminal condition $\xi$ can be constructed from $\xi$ and the increments of the Brownian motion after time $t$ only; hence $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = Y_t$ is independent of $\mathcal{F}_t$ and, being $\mathcal{F}_t$-measurable, it is deterministic. $\Box$
We now give a counterexample when $g$ is not deterministic. Actually, even in the simplest case, the linear case, which corresponds to a Girsanov change of measure, the above property does not hold.
To show this, let us fix $T > 0$ and pick $t \in (0, T)$. Let $f : \mathbb{R} \to \mathbb{R}$ be a continuous and bounded function. We define, for each $(s, z) \in [0,T] \times \mathbb{R}$, $g(s, z) := f(W_{s \wedge t})\,z$. Moreover, let us set $\xi = W_T - W_t$, which of course is independent of $\mathcal{F}_t$. By classical results for linear BSDEs, see e.g. [2, Proposition 2.2], $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = \mathbb{E}_{\mathbb{Q}}(\xi \,|\, \mathcal{F}_t)$, where $\mathbb{Q}$ is the probability measure on $(\Omega, \mathcal{F}_T)$ whose density with respect to $\mathbb{P}$ is given by $\exp\big(\int_0^T f(W_{s \wedge t})\,dW_s - \tfrac12 \int_0^T |f(W_{s \wedge t})|^2\,ds\big)$. It follows immediately that $\mathcal{E}_g(\xi \,|\, \mathcal{F}_t) = (T - t) f(W_t)$ and thus, if $f$ is not constant, we see that $\mathcal{E}_g(W_T - W_t \,|\, \mathcal{F}_t)$ is not deterministic, which gives the desired result.
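The last identity can be checked as follows (a sketch; we only use Girsanov's theorem and the fact that $u \wedge t = t$ for $u \ge t$):

```latex
% Under \mathbb{Q}, \widetilde{W}_s := W_s - \int_0^s f(W_{u \wedge t}) \, du
% is a Brownian motion, hence
\mathbb{E}_{\mathbb{Q}}\bigl[ W_T - W_t \mid \mathcal{F}_t \bigr]
  = \mathbb{E}_{\mathbb{Q}}\Bigl[ \widetilde{W}_T - \widetilde{W}_t
      + \int_t^T f(W_{u \wedge t}) \, du \Bigm| \mathcal{F}_t \Bigr]
  = \int_t^T f(W_t) \, du
  = (T - t) f(W_t).
```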
We are now able to answer the question asked in the introduction in the deterministic context.
Let, for $i = 1, 2$, $g_i : [0,T] \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$. We claim the following result:
Theorem 3.2 Let the assumptions (A 1), (A 3) and (A 4) hold for $g_i$, $i = 1, 2$. Assume moreover that
$$\forall \xi \in L^2(\mathcal{F}_T), \qquad \mathcal{E}_{g_1}(\xi) \le \mathcal{E}_{g_2}(\xi).$$
Then, for each $(t, y, z) \in [0,T] \times \mathbb{R} \times \mathbb{R}^d$, $g_1(t, y, z) \le g_2(t, y, z)$.
Proof. Fix $(t, y, z)$ with $t < T$ and, for $n$ large enough, set $\xi_n = y + z \cdot (W_{t+1/n} - W_t)$. Since $\xi_n$ is independent of $\mathcal{F}_t$ and the $g_i$ are deterministic, Proposition 3.1 gives $\mathcal{E}_{g_i}(\xi_n \,|\, \mathcal{F}_t) = \mathcal{E}_{g_i}(\xi_n)$, and by Proposition 2.3 the quantity $n\big(\mathcal{E}_{g_i}(\xi_n \,|\, \mathcal{F}_t) - y\big)$ converges to $g_i(t, y, z)$. Passing to the limit when $n$ goes to infinity, we obtain, since $g_1$ and $g_2$ are deterministic, the inequality $g_1(t, y, z) \le g_2(t, y, z)$, which concludes the proof, since $(t, y, z)$ is arbitrary and both functions $g_1(\cdot, y, z)$ and $g_2(\cdot, y, z)$ are continuous at the point $T$. $\Box$
Remark. As we can see in the proof, we only need to assume that $\mathcal{E}_{g_1}(\xi) \le \mathcal{E}_{g_2}(\xi)$ for $\xi$ of the form $y + z \cdot (W_s - W_t)$, for each $(t, s, y, z)$, to get the result of this theorem. Actually, we can weaken the assumption a little bit more, since it is enough to have the property when $s - t$ is small enough, say less than $\delta$, and this $\delta$ may depend on $(y, z)$.
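The small-time expansion driving this proof can also be checked numerically. The sketch below is not from the paper: the generator $g(t,y,z) = |z|$ (deterministic, and satisfying (A 1), (A 3), (A 4)), the one-step backward Euler scheme, and all names are our own choices, made for illustration only. It estimates $\frac1\varepsilon\big(\mathcal{E}_g(y + z\,W_\varepsilon) - y\big)$ by Monte Carlo, which should be close to $g(0, y, z) = |z|$ for small $\varepsilon$.

```python
# Numerical illustration (not from the paper). For the deterministic generator
# g(t, y, z) = |z|, the small-time expansion predicts
#     (1/eps) * (E_g(y + z * W_eps) - y)  ->  g(0, y, z) = |z|   as eps -> 0+.
# We approximate the g-expectation with a single backward Euler step on [0, eps]:
#     Z_0 ~= E[(xi - E[xi]) * W_eps] / eps,   Y_0 ~= E[xi] + eps * g(0, y, Z_0).
import random

def g(t, y, z):
    return abs(z)  # deterministic generator; note g(t, y, 0) = 0, i.e. (A 3)

def small_time_rate(y, z, eps=0.01, n=200_000, seed=0):
    rng = random.Random(seed)
    w = [rng.gauss(0.0, eps ** 0.5) for _ in range(n)]  # samples of W_eps
    xi = [y + z * wi for wi in w]                       # terminal condition
    z0 = sum((x - y) * wi for x, wi in zip(xi, w)) / (n * eps)  # martingale part, ~= z
    y0 = sum(xi) / n + eps * g(0.0, y, z0)              # one backward Euler step
    return (y0 - y) / eps                               # should approach |z|

rate = small_time_rate(y=0.5, z=-1.5)  # expected to be close to |z| = 1.5
```

For this particular generator the BSDE with terminal condition $\xi = y + z\,W_\varepsilon$ is explicitly solvable ($Y_0 = y + |z|\,\varepsilon$, with $Z \equiv z$), so the Monte Carlo output can be checked by hand.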

An application to PDEs
We give in this subsection an application of the techniques described before to partial differential equations. Semilinear PDEs constitute one of the fields of application of the theory of BSDEs, as revealed by S. Peng [6] in the classical case and by E. Pardoux and S. Peng [5] for viscosity solutions of PDEs. Our setting is very close to that of F. Pradeilles [10].
Let $f : \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^d \to \mathbb{R}$ be a function satisfying:
(A 6). There exist two constants $K \ge 0$ and $q \ge 1$ such that:
1. for each $(y, z)$, $x \longmapsto f(x, y, z)$ is continuous;
2. for each $x$ and each $(y, z)$, $(y', z')$, $\big|f(x, y, z) - f(x, y', z')\big| \le K\big(|y - y'| + \|z - z'\|\big)$;
3. for each $x$, $\big|f(x, 0, 0)\big| \le K\big(1 + |x|^q\big)$.
In addition, let us consider a continuous function $h : \mathbb{R}^n \to \mathbb{R}$ with polynomial growth. It is by now well known that BSDEs in the Markovian context provide a nonlinear Feynman–Kac formula for semilinear PDEs, in the sense that the viscosity solution, say $u$, of the semilinear PDE
$$\partial_t u = \frac12\,\mathrm{tr}\big(\sigma\sigma^*(x)\nabla^2 u\big) + b(x)\cdot\nabla u + f\big(x, u, {}^t\sigma(x)\nabla u\big), \qquad u(0, \cdot) = h, \qquad (8)$$
is given by $u(t, x) = Y^{t,x}_0$, where $\big(Y^{t,x}_s, Z^{t,x}_s\big)_{s\in[0,t]}$ is the solution of the BSDE
$$Y^{t,x}_s = h\big(X^{0,x}_t\big) + \int_s^t f\big(X^{0,x}_u, Y^{t,x}_u, Z^{t,x}_u\big)\,du - \int_s^t Z^{t,x}_u\,dW_u, \qquad 0 \le s \le t,$$
where $X^{0,x}$ is the solution of the SDE (4).
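As a consistency check (classical, and not specific to this paper), note that for $f \equiv 0$ the nonlinear Feynman–Kac formula above collapses to the linear one:

```latex
% For f \equiv 0 the BSDE gives
%   Y_s^{t,x} = \mathbb{E}\bigl[ h(X_t^{0,x}) \mid \mathcal{F}_s \bigr],
% hence
u(t, x) = Y_0^{t,x} = \mathbb{E}\bigl[ h(X_t^{0,x}) \bigr],
% the classical probabilistic representation of the linear equation
% \partial_t u = \tfrac{1}{2} \mathrm{tr}\bigl( \sigma \sigma^*(x) \nabla^2 u \bigr)
%              + b(x) \cdot \nabla u, \qquad u(0, \cdot) = h.
```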
Using this formula, we claim the following proposition:
Proposition 3.3 Assume that the assumption (A 6) holds for two functions $f_1$ and $f_2$ and that $b$ and $\sigma$ satisfy the assumption (A 5).
If, for $(x, y, p) \in \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n$, there exists $\delta > 0$ such that, for all $\varepsilon < \delta$, $u_1(\varepsilon, x) \le u_2(\varepsilon, x)$, where $u_i$ is the viscosity solution of the PDE (8) with semilinear part $f_i$ and initial condition $h(\cdot) = y + p \cdot (\cdot - x)$, then $f_1\big(x, y, {}^t\sigma(x)p\big) \le f_2\big(x, y, {}^t\sigma(x)p\big)$.
Proof. Let us rewrite the assumption in a probabilistic way using the nonlinear Feynman–Kac formula mentioned above. Fix $(x, y, p)$ in the space $\mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n$. Then there exists $\delta > 0$ such that, for all $\varepsilon < \delta$, ${}^1Y^{\varepsilon,x}_0 \le {}^2Y^{\varepsilon,x}_0$, where $\big({}^iY^{\varepsilon,x}, {}^iZ^{\varepsilon,x}\big)$ is the solution of the BSDE
$$ {}^iY^{\varepsilon,x}_s = y + p \cdot \big(X^{0,x}_\varepsilon - x\big) + \int_s^\varepsilon f_i\big(X^{0,x}_u, {}^iY^{\varepsilon,x}_u, {}^iZ^{\varepsilon,x}_u\big)\,du - \int_s^\varepsilon {}^iZ^{\varepsilon,x}_u\,dW_u, \qquad 0 \le s \le \varepsilon.$$
This solution exists since, by classical results on SDEs (see e.g. H. Kunita), the terminal condition $y + p \cdot (X^{0,x}_\varepsilon - x)$ is square integrable.
On the other hand, by the hypothesis, we deduce that
$$n\big(Y^1_t(\xi_n) - y\big) \le n\big(Y^2_t(\xi_n) - y\big).$$
Extracting a subsequence to get the convergence $\mathbb{P}$-a.s. and then passing to the limit when $n$ goes to infinity, we obtain the inequality: $\mathbb{P}$-a.s., $g_1(t, y, z) \le g_2(t, y, z)$. By continuity, we finally obtain that $\mathbb{P}$-a.s.,
$$\forall (t, y, z) \in [0,T] \times \mathbb{R} \times \mathbb{R}^d, \qquad g_1(t, y, z) \le g_2(t, y, z).$$
The proof is complete. $\Box$
Remark. We give in this remark the main lines of a different proof of this result. This approach is based on a ``nonlinear decomposition theorem of Doob–Meyer's type'' due to S. Peng [9]. For a given $\xi$, the assumption of the theorem says that the process $Y^2(\xi)$ is a $g_1$-supermartingale, and actually it can be seen that it is a $g_1$-supermartingale in the strong sense. Therefore, we can apply Theorem 3.3 in [9] to see that this process is also a $g_1$-supersolution. From this we deduce easily a suitable inequality, for each $\xi$ and for each $0 \le t_0 < t \le T$; choosing then, as in Z. Chen [1], $\xi = X_T$, where, for a given $(t_0, y_0, z_0)$, $X$ is the solution of an SDE started from $y_0$ at time $t_0$, one concludes along the lines of [1].
$$\mathbb{E}\Big[\int_t^T e^{\beta s}\|Z_s\|^2\,ds \,\Big|\, \mathcal{F}_t\Big] \le C\,\mathbb{E}\Big[e^{\beta T}|\xi|^2 + \Big(\int_t^T e^{(\beta/2)s}\big|g(s,0,0)\big|\,ds\Big)^2 \,\Big|\, \mathcal{F}_t\Big].$$
Let the assumptions (A 1), (A 2) and (A 4) hold for the function $g$ and let the notation (A 5) hold. Let us assume moreover that $\mathbb{E}\big[\sup_{0 \le t \le T} |g(t,0,0)|^2\big]$ is finite. Then, for each $(t, x, y, p) \in [0,T) \times \mathbb{R}^n \times \mathbb{R} \times \mathbb{R}^n$, we have
$$\lim_{\varepsilon \to 0^+} \frac1\varepsilon\big(Y^\varepsilon_t - y\big) = g\big(t, y, {}^t\sigma(x)p\big) + p \cdot b(x) \qquad \text{in } L^2.$$
Combining the inequality (3) with the previous one, we easily derive that, for a universal constant $C$, we have the announced estimate; it then remains to change $C$ one more time to finish the proof of this proposition. $\Box$
Before stating our first result, we need some further notation:
(A 5). Let $b : \mathbb{R}^n \to \mathbb{R}^n$ and $\sigma : \mathbb{R}^n \to \mathbb{R}^{n \times d}$ be two Lipschitz functions.
A converse to the comparison theorem for BSDEs

Since the work of S. Peng [7], it is known that, under the usual assumptions, if $\mathbb{P}$-a.s. the inequality $g_1(t, y, z) \le g_2(t, y, z)$ holds, then for each $\xi$ we have $Y^1_t(\xi) \le Y^2_t(\xi)$. We prove in the next theorem a converse to this result under the additional assumption (A 3).
Theorem 4.1 Let the assumptions (A 1), (A 3) and (A 4) hold for $g_i$, $i = 1, 2$. Assume moreover that, for each $\xi \in L^2(\mathcal{F}_T)$ and each $t \in [0,T]$, $Y^1_t(\xi) \le Y^2_t(\xi)$. Then we have, $\mathbb{P}$-a.s.,
$$\forall t \in [0,T],\ \forall (y, z) \in \mathbb{R} \times \mathbb{R}^d, \qquad g_1(t, y, z) \le g_2(t, y, z).$$
Proof. Let us fix $(t, y, z) \in [0,T) \times \mathbb{R} \times \mathbb{R}^d$ and, for $n \in \mathbb{N}$ large enough, let us consider $\xi_n = y + z \cdot (W_{t+1/n} - W_t)$. We have, by Proposition 2.3,
$$n\big(Y^i_t(\xi_n) - y\big) \longrightarrow g_i(t, y, z) \qquad \text{in } L^2.$$