Global existence for quadratic FBSDE systems and application to stochastic differential games

In this note, we extend some recent results on systems of backward stochastic differential equations (BSDEs) with quadratic growth to the case of coupled forward-backward stochastic differential equations (FBSDEs). We work in a Markovian setting, and use results from the quadratic BSDE literature together with PDE techniques to obtain a-priori estimates which lead to an existence result. We also identify a general class of stochastic differential games whose corresponding FBSDE systems are covered by our main existence result. This leads to the existence of Markovian Nash equilibria for such games.


Introduction
Recent years have witnessed much activity and progress in the area of quadratic BSDE systems, i.e. systems of backward stochastic differential equations (BSDEs) whose driver f has quadratic growth in the control variable, typically denoted z. In the Markovian case, the most general global existence results appear in [XŽ18], while in the non-Markovian case global existence is obtained under various structural conditions in [HT16], [Nam19], and [JŽ21]. Fewer efforts have been made to understand quadratic systems of forward-backward stochastic differential equations (FBSDEs), possibly because existence for general FBSDEs is a very challenging problem even when all coefficients are Lipschitz. The works we are aware of which consider quadratic FBSDE systems are [AH06], [FI13], [KLT18] and [LT17], which all require either smallness or some type of monotonicity condition.
In this note, we consider the FBSDE

dX_t = b(t, X_t, Y_t, Z_t)dt + σ(t, X_t)dB_t,
dY_t = −f(t, X_t, Y_t, Z_t)dt + Z_t dB_t,
Y_T = g(X_T). (1.1)

We are particularly interested in the case that Y is multidimensional and the driver f = f(t, x, y, z) exhibits quadratic growth in the variable z. In particular, the first objective of this note is to extend the global existence results for quadratic BSDE systems obtained in [XŽ18] to the quadratic FBSDE (1.1). The FBSDE (1.1) is related, at least formally, to two other analytical objects: the BSDE (1.2) and the PDE system (1.4).

During the preparation of this work the first author has been supported by the National Science Foundation under Grant No. DGE1610403 (2020-2023). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation (NSF).
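Heuristically, the link between (1.1) and a PDE system can be sketched by applying Itô's formula to the Markovian ansatz Y_t = u(t, X_t). The following display is our own informal sketch, not one of the note's numbered equations, and we write σ^T Du where the note's element-wise conventions would write σDu:

```latex
% Heuristic: Itô's formula applied to Y_t^i = u^i(t, X_t) along the forward
% dynamics in (1.1) gives
dY_t^i = \Big(\partial_t u^i + Du^i\!\cdot b
        + \tfrac12\operatorname{tr}(\sigma\sigma^{T}D^2u^i)\Big)\,dt
        + (\sigma^{T}Du^i)\cdot dB_t .
% Matching with dY_t^i = -f^i\,dt + Z_t^i\cdot dB_t identifies Z = \sigma^{T}Du
% and suggests, formally, the quadratic PDE system
\partial_t u^i + \tfrac12\operatorname{tr}(\sigma\sigma^{T}D^2u^i)
 + Du^i\!\cdot b\big(t,x,u,\sigma^{T}Du\big)
 + f^i\big(t,x,u,\sigma^{T}Du\big) = 0,
\qquad u(T,\cdot)=g .
```

This is only formal: the quadratic growth of f in z is what makes the rigorous analysis delicate.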
The author wishes to thank Daniel Lacker and Ludovic Tangpi for helpful comments on an early version of this note.
1.1. Main results. The first contribution of this note is an existence result (Theorem 2.5) for (1.1) when f exhibits quadratic growth in z but satisfies the structural conditions (H_AB) and (H_BF), and the data σ, b, and g satisfy some minimal regularity conditions. The proof relies on a sequence of a-priori estimates. Together with a somewhat standard approximation procedure, these a-priori estimates allow us to produce a solution to (1.1) through a compactness argument. The first a-priori estimate is Lemma 2.1, which shows that the structural condition (H_AB) leads to L^∞ estimates on the decoupling field (see Definition 1.1) of (1.1). We emphasize that (H_AB) is only a convenient condition to guarantee a-priori estimates in L^∞; if such a-priori estimates are established through another method, the rest of the analysis goes through unchanged. Further estimates then produce a Markovian solution of (1.2) which is regular enough to also be a decoupling field for the FBSDE (1.1). We emphasize that we require very little regularity of the driver f to obtain our estimates and existence result; in particular, f need not even be locally Lipschitz in (y, z).

The second contribution is to apply our results to a class of stochastic differential games. Typically, quadratic BSDE systems arise when stochastic differential games (with uncontrolled drift and quadratic costs) are treated through the popular weak formulation. But if the same games are treated in the strong formulation, then a quadratic FBSDE arises in place of the quadratic BSDE: roughly speaking, in order to find a Markovian Nash equilibrium, one must solve (1.1) in place of (1.2). We emphasize that in this approach the FBSDE involved is not the one obtained from the stochastic maximum principle, but the one which represents the value of the game. We make this connection between Markovian Nash equilibria and FBSDEs precise under fairly general conditions in Proposition 3.2.
Then, we identify a general class of stochastic differential games whose corresponding FBSDEs have a structure covered by Theorem 2.5. These games are characterized by a diagonal cost structure (player i's control does not enter player j's running cost when i ≠ j) and a drift b = b(t, x, a_1, ..., a_n) which decomposes additively as b(t, x, a_1, ..., a_n) = Σ_{j=1}^n b_j(t, x, a_j) (see Section 4.1 for notation). This leads to an existence result for Markovian Nash equilibria, which is stated precisely in Proposition 3.3.
1.2. Preliminaries and notations. The dimensions n and d are fixed throughout the paper, as is the terminal time T ∈ (0, ∞). We also fix throughout the paper a probability space (Ω, F, P) hosting a d-dimensional Brownian motion B, whose augmented filtration is denoted by F = (F_t)_{0≤t≤T}. We use the usual notation L^p, 1 ≤ p ≤ ∞, for the space of p-integrable F_T-measurable random variables with norm ‖·‖_{L^p}. For a continuous and adapted process Y taking values in some Euclidean space, we define ‖Y‖_{S^p} = ‖sup_{0≤t≤T} |Y_t|‖_{L^p}, and we write bmo for the set of all adapted processes Z such that ‖Z‖_{bmo}^2 = sup_τ ‖E_τ[∫_τ^T |Z_s|^2 ds]‖_{L^∞} < ∞, the supremum being taken over all stopping times 0 ≤ τ ≤ T and E_τ[·] denoting conditional expectation with respect to F_τ. Finally, we mention that all the spaces and norms here can be extended in natural ways to include processes defined only on [t, T], for some t ∈ [0, T].
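As a simple illustration of the bmo norm (our own, with the usual squared-norm convention): any uniformly bounded process belongs to bmo, while the converse fails, which is precisely why bmo is the right class for Girsanov arguments with quadratic drivers.

```latex
% If |Z_s| \le M a.s. for all s, then for every stopping time \tau \le T,
E_\tau\!\Big[\int_\tau^T |Z_s|^2\,ds\Big] \le M^2\,(T-\tau) \le M^2 T,
\qquad\text{so}\qquad \|Z\|_{bmo}^2 \le M^2 T .
% By contrast, bmo contains unbounded processes; membership only requires a
% uniform conditional bound on the remaining quadratic variation.
```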
Let us mention that we will write Dv for the spatial gradient of a map v : [0, T] × R^d → R, and for u = (u_1, ..., u_n) : [0, T] × R^d → R^n, Du will denote (Du_1, ..., Du_n), viewed as an element of (R^d)^n. We will also view the unknown Z appearing in (1.1) (and (1.2)) as taking values in (R^d)^n. We will manipulate elements of (R^d)^n in a natural "element-wise" way as in [JŽ21], e.g. if p ∈ (R^d)^n and q ∈ R^{d×d}, then pq denotes the element of (R^d)^n whose i-th entry is qp^i. Likewise if p ∈ (R^d)^n and q ∈ R^d, then pq ∈ R^n and (pq)^i = p^i · q. This philosophy will in particular be used when interpreting the stochastic differential Z_t dB_t and expressions like Z_t σ(t, X_t).
We will be working with certain parabolic Hölder spaces, defined as follows. Fix α ∈ (0, 1). For a function v = v(t, x) : [0, T] × R^d → E, E being some Euclidean space with norm |·|, we define the Hölder seminorm

[v]_{C^α([0,T]×R^d)} = sup_{(t,x) ≠ (s,y)} |v(t,x) − v(s,y)| / (|t − s|^{1/2} + |x − y|)^α,

together with the norms ‖v‖_{C^α} = ‖v‖_{L^∞} + [v]_{C^α} and ‖v‖_{C^{1+α}} = ‖v‖_{C^α} + ‖Dv‖_{C^α}. Given an open set U ⊂ [0, T] × R^d, we define ‖u‖_{C^α(U)} and ‖u‖_{C^{1+α}(U)} similarly. We define the Hölder spaces of functions defined on R^d in the same way, i.e. for g : R^d → E, with the parabolic distance replaced by |x − y|. At this point, we need to make precise the notions of solutions we will be working with.
Definition 1.1 (decoupling field). A pair (u, v) is a decoupling field for (1.1) if, for each (t, x), the SDE (1.5) has a unique strong solution X^{t,x} on [t, T], and, with (Y^{t,x}, Z^{t,x}) defined by (Y^{t,x}, Z^{t,x}) = (u(·, X^{t,x}), v(·, X^{t,x})), the triple (X^{t,x}, Y^{t,x}, Z^{t,x}) solves the equation (1.6).

Remark 1.2. We note that our definition of decoupling field differs from the usual one in that we include the function v as part of the decoupling field. This is to make the relationship between the decoupling field for (1.1) and the Markovian solution of (1.2) easier to state. Moreover, we note that the existence of a decoupling field for (1.1) implies the existence, for any x ∈ R^d, of a strong solution to the equation (1.8), i.e. a pair of adapted processes (Y, Z) satisfying (1.8) pathwise a.s.
The following is a consequence of Itô's formula and the Girsanov transform.
Proposition 1.3. Any bmo Markovian solution of (1.2) is also a bmo decoupling field for (1.1). Conversely, any bmo decoupling field is also a bmo Markovian solution of (1.2).
Proof. Let us first assume that (u, v) is a bmo decoupling field for (1.1). For fixed t and x, let X^{t,x} be defined by (1.5) and set (Y^{t,x}, Z^{t,x}) = (u(·, X^{t,x}), v(·, X^{t,x})). By the definition of decoupling field, the triple (X^{t,x}, Y^{t,x}, Z^{t,x}) solves (1.6). Since Z^{t,x} ∈ bmo and |b(t, x, y, z)| ≤ C_0(1 + |z|), Girsanov's theorem yields a probability measure Q such that the law of X^{t,x} under Q is the same as the law of the drift-free forward process under P; rewriting (1.6) under Q shows that (u, v) is a bmo Markovian solution of (1.2).

Conversely, let (u, v) be a bmo Markovian solution of (1.2). For any (t, x) and ε > 0, the SDE (1.5) has a unique strong solution on [t, T − ε], and hence on [t, T), thanks to a classical result which can be traced to Veretennikov (see [Zha05] and the references therein for more information about the solvability of SDEs with irregular drift). The fact that (u, v) is a bmo Markovian solution implies a bmo bound on the drift process b̃_s = b(s, X^{t,x}_s, u(s, X^{t,x}_s), v(s, X^{t,x}_s)) on [t, T). Together with the boundedness of σ, this easily implies that, a.s., X^{t,x}_s has a limit as s ↑ T, which lets us extend X^{t,x} uniquely to all of [t, T]. Now we set (Y^{t,x}, Z^{t,x}) = (u(·, X^{t,x}), v(·, X^{t,x})). Checking that (Y^{t,x}, Z^{t,x}) solves (1.6) amounts to running the above change-of-measure argument in reverse.
1.3. Assumptions. We now describe some conditions on the data, which consist of the measurable maps b, σ, f, and g; these conditions will later be imposed in various combinations in order to obtain estimates and existence results. We start with the conditions on σ and b, which will be used throughout the paper.
The next condition will be used to guarantee an a-priori estimate on ‖Y‖_{S^∞} for the equation (1.1), provided that the terminal condition is bounded (see Lemma 2.1).
There exist a constant ρ and a finite collection {a_m} = (a_1, . . . , a_M) of vectors in R^n such that a_1, . . . , a_M positively span R^n, and

a_m^T f(t, x, y, z) ≤ ρ + |a_m^T z|^2 for each m and all (t, x, y, z). (H_AB)

The next condition states that the driver f has quadratic growth in z.
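For a concrete instance of the positive-spanning requirement (our own illustration): the coordinate directions {±e_1, …, ±e_n} positively span R^n, and one-sided bounds along a positively spanning family already confine Y, which is how Lemma 2.1 below converts submartingale bounds into an S^∞ estimate.

```latex
% \{\pm e_1,\dots,\pm e_n\} positively spans \mathbb{R}^n: for any x,
x = \sum_{i=1}^n x_i^+\, e_i + \sum_{i=1}^n x_i^-\,(-e_i),
\qquad x_i^\pm = \max(\pm x_i, 0) \ge 0 .
% Consequently, if a_m^T Y_t \le C for every a_m in a positively spanning
% family \{a_1,\dots,a_M\}, then Y_t lies in the bounded polytope
\{\,y\in\mathbb{R}^n : a_m^T y \le C,\ m=1,\dots,M\,\};
% for a_m = \pm e_i this reads |Y_t^i| \le C for each i.
```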
It is well-known that a quadratic growth assumption like (H_Q) is not enough to obtain regularity estimates on the PDE system (1.4), so we will impose the following structural condition. The condition can be traced back to [BF02], and a similar condition appeared in [XŽ18], where it was termed the Bensoussan-Frehse condition.
There exist a constant C_Q and a sub-quadratic function κ : [0, ∞) → [0, ∞) such that |f^i(t, x, y, z)| ≤ C_Q (1 + |z^i||z| + Σ_{j<i} |z^j|^2 + κ(|z|)) for each i and all (t, x, y, z). (H_BF)

A-priori estimates and existence
Lemma 2.1. Let (u, v) be a Markovian solution of (1.2), and suppose that (H_AB) holds. Then ‖u‖_{L^∞} ≤ C for a constant C depending only on {a_m}, ρ, and ‖g‖_{L^∞}.

Proof. Dropping the superscripts, we define X to be the solution of the equation X_s = x + ∫_t^s σ(r, X_r) dB_r. We then set (Y, Z) = (u(·, X), v(·, X)). Since (u, v) is a Markovian solution to (1.2), (Y, Z) solves the BSDE driven by B̃ = B − ∫ σ^{-1}(·, X) b(·, X, Y, Z) dt. Since σ^{-1} is bounded, |b(t, x, y, z)| ≤ C_0(1 + |z|), and Z ∈ bmo, we deduce that B̃ is a Brownian motion under the measure P̃, where dP̃ = E(∫ σ^{-1}(·, X) b(·, X, Y, Z) · dB) dP. Now we consider the process R_t := exp(2a_m^T Y_t + 2∫_0^t ρ ds). By (H_AB) we have a_m^T f(t, x, y, z) ≤ ρ + |a_m^T z|^2, and so −a_m^T f(t, x, y, z) + |a_m^T z|^2 + ρ ≥ 0. In particular, R is a P̃-submartingale with terminal element R_T = exp(2a_m^T g(X_T) + 2∫_0^T ρ ds), which satisfies ‖R_T‖_{L^∞} ≤ C, C = C(a_m, ‖g‖_{L^∞}, ρ). From the definition of R, we see that for each m we have a bound on sup_t a_m^T Y_t. Since {a_m} positively spans R^n, this gives us an estimate on ‖Y‖_{S^∞}, which transfers to the desired estimate on ‖u‖_{L^∞}.
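The submartingale claim in the proof can be expanded as follows (our own sketch of the one-line computation; signs follow the BSDE convention dY_t = −f dt + Z_t dB̃_t under P̃):

```latex
% Write V_t = 2a_m^T Y_t + 2\int_0^t \rho\,ds, so that R_t = e^{V_t}. Then
dV_t = \big(-2a_m^T f + 2\rho\big)\,dt + 2a_m^T Z_t\,d\tilde B_t,
\qquad
d\langle V\rangle_t = 4\,|a_m^T Z_t|^2\,dt,
% and Itô's formula for R = e^V (dR = R\,dV + \tfrac12 R\,d\langle V\rangle) gives
dR_t = 2R_t\Big(\big(-a_m^T f + \rho + |a_m^T Z_t|^2\big)\,dt
       + a_m^T Z_t\,d\tilde B_t\Big).
% The drift is nonnegative exactly when -a_m^T f + \rho + |a_m^T z|^2 \ge 0,
% which is the inequality invoked in the proof, so R is a submartingale.
```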
The following is a consequence of Theorem 2.5 of [XŽ18].
Proposition 2.2. Suppose that (H_BF) holds, g ∈ C^α for some α ∈ (0, 1), and that (u, v) is a Markovian solution of (1.2) with u bounded. Then for some β ∈ (0, 1) depending on α, ‖g‖_{C^α}, C_0, and ‖u‖_{L^∞}, we have the estimate (2.1).

Proof. The only thing to check is that if (H_BF) holds, then F has a decomposition as in (2.8) of [XŽ18], so that Proposition 2.11 of [XŽ18] implies the existence of an appropriate Lyapunov function. For this, we use (H_BF) to choose functions l^i, q^i, and s^i, with l^i growing at most linearly in z, q^i controlled by 1 + |z^i||z| + Σ_{j<i} |z^j|^2, and s^i controlled by κ(|z|)(1 + Σ_{j<i} |z^j|^2). Then some algebra shows that we have F^i(t, x, y, z) = z^i · l^i(t, x, y, z) + q^i(t, x, y, z) + s^i(t, x, y, z), and l, q and s satisfy the estimates appearing in Proposition 2.11 in [XŽ18]. Thus we can apply Theorem 2.5 of [XŽ18] to complete the proof.
Proposition 2.3. Suppose that (H_Q) holds and that u is a classical solution of (1.4) such that u ∈ C^α for some α ∈ (0, 1), and Du is bounded. Then for each β ∈ (0, 1) there is a constant C depending on β, α, ‖u‖_{C^α}, C_0 and C_Q such that the interior gradient estimate (2.3) holds. Moreover, if g is Lipschitz with Lipschitz constant L, then the global estimate (2.4) holds.

Proof. Fix p ∈ (1, ∞). Throughout this proof, C denotes a constant which can change from line to line but depends only on p, α, ‖u‖_{C^α}, and C_Q. We will introduce below parameters R > 0 and t_0 ∈ [0, T), and it is important that C does not depend on R or t_0. For constants which can depend on R (but not t_0) in addition to the constants p, α, ‖u‖_{C^α} and C_Q, we use C_R. We now fix a function ρ ∈ C_c^∞(R^d) such that 0 ≤ ρ ≤ 1, ρ(x) = 1 for |x| ≤ 1, ρ(x) = 0 for |x| > 2. Then we define, for each x_0 ∈ R^d and R > 0, the function ρ_{R,x_0}(x) = ρ((x − x_0)/R), and note that ρ_{R,x_0}(x) = 1 for x ∈ B_R(x_0) and ρ_{R,x_0}(x) = 0 for x ∈ B_{2R}(x_0)^c. Next, we fix a smooth function κ = κ(t) : [0, T] → [0, 1] with κ(t) = 1 for 0 ≤ t ≤ t_0 and κ(t) = 0 for t > (t_0 + T)/2. We can choose κ so that |κ′(t)| ≤ 3/(T − t_0). Next, we find the equation (2.5) satisfied by ũ^i(t, x) = κ(t)ρ_{R,x_0}(x)u^i(t, x). We use Young's inequality to estimate the right-hand side of (2.5), and then deduce from the theory of linear parabolic equations the existence of constants C and C_R such that the resulting local estimate over [0, T] × B_{2R}(x_0) holds for all R ≤ 1 and all x_0 ∈ R^d, t_0 ∈ [0, T); in the last line we increased C and C_R (and we recall that C and C_R may depend on p).
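The computations behind the equation for ũ^i amount to the product rule for the cutoff; a sketch (our own, with a = σσ^T and the operator L denoting the linear part of (1.4)):

```latex
% With \tilde u^i = \kappa(t)\,\rho_{R,x_0}(x)\,u^i and
% \mathcal{L}w = \partial_t w + \tfrac12\operatorname{tr}(a\,D^2 w),
\mathcal{L}\tilde u^i
 = \kappa\,\rho_{R,x_0}\,\mathcal{L}u^i
 + \kappa'(t)\,\rho_{R,x_0}\,u^i
 + \kappa\Big(\tfrac12\operatorname{tr}\big(a\,D^2\rho_{R,x_0}\big)\,u^i
 + (a\,D\rho_{R,x_0})\!\cdot\!Du^i\Big).
% The first term is controlled through the PDE and (H_Q); the cutoff terms
% carry factors |\kappa'| \le 3/(T-t_0), |D\rho_{R,x_0}| \le C/R and
% |D^2\rho_{R,x_0}| \le C/R^2, which is the source of the constants C_R.
```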
The next step is to follow a computation from [BF02], integrating by parts in space, and then applying Young's inequality to the right-hand side of the resulting estimate. We can combine this with (2.7) and, taking R sufficiently small, conclude the desired bound. The estimate (2.3) now follows from the Sobolev embedding. The proof that (2.4) holds when g is Lipschitz is entirely similar, so we provide only a brief description of the argument. First, we set v to be the unique solution to a linear parabolic equation with terminal condition g, and consider ũ = u − v, which satisfies a PDE of the form (1.4) driven by F̃, with ũ(T, x) = 0, where F̃^i(t, x, y, z) = F^i(t, x, y + v(t, x), z + σDv(t, x)) satisfies (H_Q) with a new constant C′_Q depending on C_Q, ‖v‖_{L^∞}, and ‖Dv‖_{L^∞}. Now we can repeat the same computations as above, but without multiplying by κ, to get an estimate on ‖Dũ‖_{C^β([0,T]×R^d)}, which implies the estimate (2.4).
Corollary 2.4. Under the same hypotheses as Proposition 2.3, for each ε > 0 there is a constant C depending on ε, β, α, ‖u‖_{C^α}, C_0 and C_Q such that the corresponding interpolated estimates hold.

Proof. Combine Proposition 2.3 with Exercise 3.2.6 of [Kry96].

Now we come to the main existence result.
Some computations show that the data (b^{(k),ε}, f^{(k),ε}, g^ε) satisfy the conditions (H_0), (H_AB), and (H_BF) uniformly in the parameters k and ε. Applying Propositions 2.2 and 2.3, we obtain a constant C > 0 such that the estimates ‖u^{(k),ε}‖_{C^α} ≤ C, ‖Du^{(k),ε}‖_{C^α([0,t_0]×R^d)} ≤ C/(T − t_0) hold for each t_0 < T and each k, ε. A standard compactness argument gives us a function u ∈ C^α([0, T] × R^d) ∩ C^{1+α}_loc([0, T) × R^d) satisfying the same estimates as the u^{(k),ε}, and such that for some k_j ↑ ∞, ε_j ↓ 0, we have u^{(k_j),ε_j} → u locally uniformly on [0, T] × R^d and Du^{(k_j),ε_j} → Du locally uniformly on [0, T) × R^d. Fix (t, x) ∈ [0, T] × R^d and define X = X^{t,x} by (1.5). By passing to the limit in the equation, we confirm that the pair (u, σDu) is a Markovian solution for (1.2). The boundedness of u and the fact that F admits a Lyapunov function can be used to verify that (u, σDu) is a bmo decoupling field, and hence by Proposition 1.3 a decoupling field for (1.1). It is clear that if g is Lipschitz, then by Proposition 2.3 the u^{(k),ε,i} are Lipschitz in space, uniformly in k and ε, from which it follows that Du (and hence v) is bounded.
Remark 2.6. Let (u, v) be the decoupling field produced by the above compactness argument. The convergence we obtain is strong enough to guarantee that u is in fact a weak solution of the PDE (1.4) in the sense of integration by parts, see e.g. Definition 4.1 in [FWZ18]. Verifying that any decoupling field of (1.1) corresponds to a weak solution of (1.4) and vice-versa is much more subtle, and relates to a line of research on the connection between BSDEs and weak solutions of PDEs (rather than viscosity solutions) that dates back to [BL97].
3. Application to stochastic differential games

3.1. Set-up and definition of Markovian Nash equilibrium. We consider a game in which players i = 1, ..., n choose controls α^1, ..., α^n, which take values in measurable sets A_i ⊂ R^{k_i} and influence the d-dimensional state process X through dynamics of the form (3.1) below. Here α denotes (α^1, ..., α^n). The goal of player i is to maximize the payoff functional J_i(α) = E[g_i(X_T) + ∫_0^T r_i(t, X_t, α(t, X_t)) dt]. More precisely, the game is specified by the following data:
• for each i, a number k_i ∈ N and a set A_i ⊂ R^{k_i} which represents the set of possible actions of player i (we could take A_i to be an arbitrary metric space, but we will use subsets of Euclidean space for simplicity of notation),
• measurable maps b, σ, r_i, and g_i as above.
We assume for the moment that we have for each t ∈ [0, T] and x ∈ R^d a unique strong solution to the SDE dX^{t,x}_s = b(s, X^{t,x}_s, α(s, X^{t,x}_s)) ds + σ(s, X^{t,x}_s) dB_s, X^{t,x}_t = x. (3.1) For each (t, x) ∈ [0, T] × R^d, player i has a payoff functional J^i_{t,x} : A → R, defined by replacing X with X^{t,x} in the payoff above. We also assume for the moment that the integrals appearing in the definition of J^i_{t,x} are well-defined for each α ∈ A.
Definition 3.1. We say that α = (α^1, ..., α^n) ∈ A is a Markovian Nash equilibrium (MNE) for the game with data (A_i, b, σ, r, g) if for each i ∈ {1, ..., n}, β ∈ A_i, and each (t, x) ∈ [0, T] × R^d, we have J^i_{t,x}(α) ≥ J^i_{t,x}(α^{-i}, β), where (α^{-i}, β) denotes the profile obtained from α by replacing α^i with β.

Our approach to producing Nash equilibria will be through an appropriate FBSDE system, which we describe here. We define for each i the (reduced) Hamiltonian H^i(t, x, p_i, a_1, ..., a_n) = b(t, x, a_1, ..., a_n) · p_i + r_i(t, x, a_1, ..., a_n).
(H_G) The generalized Isaacs condition holds with optimizer â, the map σ satisfies the conditions appearing in (H_0), g is bounded, and appropriate growth estimates on the data hold.

The following is a verification result, stated in terms of the FBSDE (3.3) instead of the PDE (3.4).
Proof. We will show that α is a closed loop Nash equilibrium in three steps.

3.2. Games with diagonal cost structures and additive drift. We now describe a general class of games to which our results on FBSDEs can be applied. We assume that the dynamics take the form dX_t = (Σ_{j=1}^n b_j(t, X_t, α^j_t)) dt + σ(t, X_t) dB_t, while the payoff for player i takes the form J^i(α) = E[g_i(X_T) + ∫_0^T r_i(t, X_t, α^i(t, X_t)) dt]. Player i's Hamiltonian in this case is given by H^i(t, x, p_i, a) = Σ_{j=1}^n b_j(t, x, a_j) · p_i + r_i(t, x, a_i).
In particular, the Isaacs condition holds as soon as there exists for each i a measurable map â_i = â_i(t, x, p_i) : [0, T] × R^d × R^d → A_i such that b_i(t, x, â_i(t, x, p_i)) · p_i + r_i(t, x, â_i(t, x, p_i)) = sup_{a_i ∈ A_i} [b_i(t, x, a_i) · p_i + r_i(t, x, a_i)] (3.6) for each (t, x, p_i). We note that in terms of the notation introduced in the previous subsection, we have b(t, x, a) = Σ_j b_j(t, x, a_j), r_i(t, x, a) = r_i(t, x, a_i). Let us list the necessary assumptions on the data.
(H_diag) The functions b_i, σ, r_i, g_i are all continuous, σ satisfies the conditions in (H_0), and there is a constant C_diag such that the estimates |b_i(t, x, a_i)| ≤ C_diag(1 + |a_i|) and |r_i(t, x, a_i)| ≤ C_diag(1 + |a_i|^2), together with corresponding continuity estimates in x, hold for all x ∈ R^d, t ∈ [0, T], a_i ∈ A_i. Moreover, there exist continuous functions â_i satisfying (3.6) and such that |â_i(t, x, p_i)| ≤ C_diag(1 + |p_i|).
Remark 3.4. It is natural to ask whether the equilibrium we produce is unique. If we only impose (H_G), we cannot expect uniqueness, in short because we cannot guarantee uniqueness of the FBSDE (3.7) (or of the corresponding PDE) without additional regularity conditions. Nevertheless, under appropriate technical conditions one can guarantee a one-to-one correspondence between Markovian Nash equilibria and certain generalized solutions of the HJB system by following the arguments in Proposition 6.27 in [CD18]. This gives one way to check that if (u, v) is a decoupling field for (3.7) with v bounded, then u must in fact solve the corresponding PDE in an appropriate sense. To make this rigorous requires a discussion of weak solutions for the PDE system (1.4), regularity properties of scalar Hamilton-Jacobi equations with irregular Hamiltonians, and the Itô-Krylov formula. We do not pursue this analysis for the sake of brevity.
Proof. This is a matter of checking that if (H_diag) holds, then the functions b, σ, f, g with b(t, x, z) = Σ_j b_j(t, x, â_j(t, x, σ^{-1}(t, x) z_j)), f_i(t, x, z) = r_i(t, x, â_i(t, x, σ^{-1}(t, x) z_i)) satisfy the conditions of Theorem 2.5. The only thing which is not obvious is (H_AB). For this, we note that we can easily check |f_i(t, x, z)| ≤ C(1 + |z_i|^2), which implies that the condition (H_AB) is satisfied with {a_m} = {±λe_m}_{m=1}^n, ρ = λ, where λ is a large enough positive constant and e_m is the m-th standard basis vector in R^n.
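As an illustration of this class (our own toy example, not from the note): take A_i = R^d, additive linear drift b_i(t, x, a_i) = a_i, and quadratic running cost r_i(t, x, a_i) = h_i(t, x) − ½|a_i|^2 with h_i bounded. Then the optimizers in (3.6) and the induced driver can be computed in closed form:

```latex
% Isaacs condition (3.6): for each i,
\sup_{a_i\in\mathbb{R}^d}\Big(a_i\cdot p_i + h_i(t,x) - \tfrac12|a_i|^2\Big)
 = h_i(t,x) + \tfrac12|p_i|^2,
\qquad \hat a_i(t,x,p_i) = p_i ,
% so |\hat a_i(t,x,p_i)| \le 1+|p_i| and (H_diag) holds, with C_{diag}
% depending on \sup_i \|h_i\|_{L^\infty}. The induced driver from the proof is
f_i(t,x,z) = h_i(t,x) - \tfrac12\big|\sigma^{-1}(t,x)\,z_i\big|^2 ,
% which is diagonally quadratic: |f_i(t,x,z)| \le C(1 + |z_i|^2), so (H_AB)
% holds with \{\pm\lambda e_m\} exactly as in the proof above.
```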