Sharpness of Lenglart's domination inequality and a sharp monotone version

We prove that the best constant known so far, $c_p=\frac{p^{-p}}{1-p},\, p\in(0,1)$, of a domination inequality that goes back to Lenglart, is sharp. In particular, we answer an open question posed by Revuz and Yor. Motivated by applications to maximal inequalities, such as the Burkholder-Davis-Gundy inequality, we also study the domination inequality under an additional monotonicity assumption. In this special case, a constant that stays bounded for $p$ near $1$ was obtained by Pratelli and Lenglart. We provide the sharp constant for this case.


Introduction
In this note, we prove that the best constant $c_p$ known so far for a domination inequality that goes back to Lenglart [6, Corollaire II] (see Theorem 1.1) is sharp. In particular, we answer an open question posed by Revuz and Yor [13, Question IV.1, p. 178]. Furthermore, motivated by the method of applying Lenglart's inequality to extend maximal inequalities to small exponents, we study Lenglart's domination inequality under an additional monotonicity assumption: a result by Pratelli [11] and Lenglart [6] implies, under this assumption, a constant that is bounded by 2 and hence considerably improves the constant of Lenglart's inequality for $p$ near $1$. We provide the sharp constant. The sharpness of our monotone version of Lenglart's inequality is related to a result by Wang [17].
Let $(\Omega, \mathcal F, P, (\mathcal F_t)_{t\ge0})$ be a filtered probability space satisfying the usual conditions. The following theorem is [8, Lemma 2.2 (ii)]:

Theorem 1.1 (Lenglart's inequality). Let $X$ and $G$ be non-negative adapted right-continuous processes, and let $G$ be in addition non-decreasing and predictable such that $E[X_\tau \mid \mathcal F_0] \le E[G_\tau \mid \mathcal F_0] \le \infty$ for any bounded stopping time $\tau$. Then for all $p \in (0,1)$,
$$E\Big[\big(\sup_{t\ge0} X_t\big)^p \,\Big|\, \mathcal F_0\Big] \le c_p\, E\Big[\big(\sup_{t\ge0} G_t\big)^p \,\Big|\, \mathcal F_0\Big], \qquad \text{where } c_p := \frac{p^{-p}}{1-p}.$$

Lenglart's inequality yields a very short proof of the Burkholder-Davis-Gundy (BDG) inequality for continuous local martingales for small exponents $q \in (0,2)$ (see e.g. [13, Theorem IV.4.1]): using the BDG inequality for $q = 2$, one obtains, for a continuous local martingale $M$, processes $X$ and $G$ built from $M$ and its quadratic variation $\langle M \rangle$ with $E[X_\tau] \le E[G_\tau]$ for any bounded stopping time $\tau$; applying Lenglart's inequality with $p = q/2$ then yields the BDG inequality for the exponent $q$ with constant $c_{q/2}$. For $q = 1$, this implies $c_{\mathrm{BDG},1} = c_{q/2} = 2\sqrt 2 \approx 2.8284$. The optimal BDG constant can be computed numerically for this case (see Schachermayer and Stebegg [14]) and is $c^{(\mathrm{opt})}_{\mathrm{BDG},1} \approx 1.2727$.

A better constant than $c_{q/2}$ can be achieved by applying the following proposition, due to Lenglart [6]: if $X$ and $G$ are as in Theorem 1.1 and both processes start in $0$, then $E[F(\sup_{t\ge0} X_t)]$ can be bounded in terms of $E[F(\sup_{t\ge0} G_t)]$ for a suitable class of functions $F$ (Proposition 1.2). Choosing $F(x) = x^p$ for some $p \in (0,1)$ and optimizing, Proposition 1.2 gives $c_{\mathrm{BDG},1} = 2$. We show that the constant of Proposition 1.2 in the special case $F(x) := x^p$, $p \in (0,1)$, can be improved to $p^{-p}$ (see Theorem 2.2), which is sharp. In particular, by the argument described above we now achieve $c_{\mathrm{BDG},1} = \sqrt 2 \approx 1.4142$. For the right-hand side of the BDG inequality, i.e. the bound of $E[\langle M\rangle_\infty^{q/2}]$ in terms of $E[\sup_{t\ge0}|M_t|^q]$, the sharp constant for $q = 1$ ($C^{(\mathrm{opt})}_{1,\mathrm{BDG}} \approx 1.4658$) was found by Osekowski [10]. Here, the monotone version of Lenglart's inequality does not yield a sharper constant than the standard Lenglart inequality.
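For concreteness, the arithmetic behind the constants quoted above is the evaluation of $c_p = \frac{p^{-p}}{1-p}$ and of the improved constant $p^{-p}$ at $p = q/2 = \tfrac12$:
$$c_{1/2} = \frac{(1/2)^{-1/2}}{1 - 1/2} = 2\sqrt 2 \approx 2.8284, \qquad (1/2)^{-1/2} = \sqrt 2 \approx 1.4142,$$
to be compared with the numerically computed optimal value $c^{(\mathrm{opt})}_{\mathrm{BDG},1} \approx 1.2727$ of Schachermayer and Stebegg [14].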

Main results
We assume, unless otherwise stated, that all processes are defined on an underlying filtered probability space $(\Omega, \mathcal F, P, (\mathcal F_t)_{t\ge0})$ which satisfies the usual conditions.

Theorem 2.1 (Sharpness of Lenglart's inequality). For all $p \in (0,1)$,
$$\sup \frac{E\big[(\sup_{t\ge0} X_t)^p\big]}{E\big[(\sup_{t\ge0} G_t)^p\big]} = \frac{p^{-p}}{1-p}, \qquad (1)$$
where the supremum is taken over all pairs of processes $X$ and $G$ satisfying the assumptions of Theorem 1.1 with $E[(\sup_{t\ge0} G_t)^p] \in (0,\infty)$. In particular, the constant $c_p = \frac{p^{-p}}{1-p}$ in Theorem 1.1 is sharp.

As explained in the introduction, the application to maximal inequalities motivates us to consider the following monotone version of Lenglart's inequality. We assume in addition that $X$ is non-decreasing and obtain a considerably improved constant for $p$ near $1$.
Theorem 2.2 (Sharp monotone Lenglart's inequality). Let $X$ and $G$ be non-decreasing non-negative adapted right-continuous processes, and let $G$ be in addition predictable such that $E[X_\tau \mid \mathcal F_0] \le E[G_\tau \mid \mathcal F_0] \le \infty$ for any bounded stopping time $\tau$. Then for all $p \in (0,1)$,
$$E\Big[\big(\sup_{t\ge0} X_t\big)^p \,\Big|\, \mathcal F_0\Big] \le p^{-p}\, E\Big[\big(\sup_{t\ge0} G_t\big)^p \,\Big|\, \mathcal F_0\Big].$$
Furthermore, for all $p \in (0,1)$ there exist continuous processes $\tilde X = (\tilde X_t)_{t\ge0}$ and $\tilde G = (\tilde G_t)_{t\ge0}$ satisfying the assumptions above for which the constant $p^{-p}$ is attained, i.e.
$$E\Big[\big(\sup_{t\ge0} \tilde X_t\big)^p\Big] = p^{-p}\, E\Big[\big(\sup_{t\ge0} \tilde G_t\big)^p\Big] \in (0,\infty).$$

Remark 2.5. In Theorem 2.2, the assumption that $G$ is right-continuous and predictable can be replaced by the assumption that $G$ is left-continuous.
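To put the improvement into perspective, the following elementary comparison of the two constants (a side calculation based only on the formulas above) quantifies the gain for $p$ near $1$:
$$\lim_{p \uparrow 1} \frac{p^{-p}}{1-p} = \infty, \qquad \lim_{p \uparrow 1} p^{-p} = 1, \qquad \sup_{p \in (0,1)} p^{-p} = e^{1/e} \approx 1.4447.$$
Thus the constant of Theorem 2.2 stays bounded on all of $(0,1)$, in particular below the bound $2$ from the Pratelli-Lenglart result mentioned in the introduction, whereas $c_p$ blows up as $p \uparrow 1$.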
Remark 2.6. A key part of the proof of Lenglart's inequality is an auxiliary estimate that holds for all $c, d > 0$. If $X$ is non-decreasing, this estimate can be improved, and the improved estimate is what is used to prove the monotone version of Lenglart's inequality.
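A classical estimate of this type, which presumably is the one meant in Remark 2.6 (this identification is an assumption on our part; see e.g. [6] for the original formulation), reads: for all $c, d > 0$,
$$P\Big(\sup_{t\ge0} X_t \ge c\Big) \le \frac1c\, E\Big[\Big(\sup_{t\ge0} G_t\Big) \wedge d\Big] + P\Big(\sup_{t\ge0} G_t \ge d\Big).$$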
Corollary 2.8. Let $(X_n)_{n\in\mathbb N_0}$ and $(G_n)_{n\in\mathbb N_0}$ be non-negative adapted processes such that $(G_n)_{n\in\mathbb N_0}$ is non-decreasing and predictable (i.e. $G_n$ is $\mathcal F_{n-1}$-measurable for all $n \in \mathbb N$) and $E[X_\tau \mid \mathcal F_0] \le E[G_\tau \mid \mathcal F_0] \le \infty$ for any bounded stopping time $\tau$. Then for all $p \in (0,1)$,
$$E\Big[\big(\sup_{n\in\mathbb N_0} X_n\big)^p \,\Big|\, \mathcal F_0\Big] \le c_p\, E\Big[\big(\sup_{n\in\mathbb N_0} G_n\big)^p \,\Big|\, \mathcal F_0\Big], \qquad (3)$$
where $c_p := \frac{p^{-p}}{1-p}$, and the constant $c_p$ is sharp.
If we assume in addition that $(X_n)_{n\in\mathbb N_0}$ is non-decreasing, then we have
$$E\Big[\big(\sup_{n\in\mathbb N_0} X_n\big)^p \,\Big|\, \mathcal F_0\Big] \le p^{-p}\, E\Big[\big(\sup_{n\in\mathbb N_0} G_n\big)^p \,\Big|\, \mathcal F_0\Big], \qquad (4)$$
and the constant $p^{-p}$ is sharp.

Proof of Theorem 2.1
Proof of Theorem 2.1. Choose an arbitrary $p \in (0,1)$ for the remainder of this proof. First, we define non-decreasing processes $\tilde X = (\tilde X_t)_{t\ge0}$ and $\tilde G = (\tilde G_t)_{t\ge0}$ which satisfy the assumptions of Theorem 1.1 and already account for the factor $p^{-p}$ in the constant $c_p$. To obtain the extra factor $(1-p)^{-1}$, we then modify $\tilde X$ and $\tilde G$ using an independent Brownian motion; this gives us the families $\{(X^{(n)}_t)_{t\ge0},\, n \in \mathbb N\}$ and $\{(G^{(n)}_t)_{t\ge0},\, n \in \mathbb N\}$.

Note that if we have non-negative random variables $X_{\mathrm{RV}} := 1$ and $G_{\mathrm{RV}}$ with $E[X_{\mathrm{RV}}] = E[G_{\mathrm{RV}}]$, then we can obtain $E[X_{\mathrm{RV}}^p] \gg E[G_{\mathrm{RV}}^p]$, for example by choosing $G_{\mathrm{RV}}$ to be very large on a set with small probability and $0$ everywhere else: taking $G_{\mathrm{RV}} := \varepsilon^{-1}\mathbf 1_A$ with $P(A) = \varepsilon$ gives $E[G_{\mathrm{RV}}^p] = \varepsilon^{1-p} \to 0$ as $\varepsilon \to 0$, while $E[X_{\mathrm{RV}}^p] = 1$.

Keeping this in mind, we construct $\tilde X$ and $\tilde G$ as follows. Let $Z$ be an exponentially distributed random variable on a complete probability space $(\Omega, \mathcal F, P)$ with $E[Z] = 1$, and define the processes $\tilde X_t$ and $\tilde G_t$, $t \ge 0$, in terms of $Z$. Choose $\tilde{\mathcal F}_t := \sigma(\{Z \le r\} \mid 0 \le r \le t)$ for all $t \ge 0$. Observe that $\tilde X$ and $\tilde G$ are non-decreasing non-negative adapted right-continuous processes, and that $\tilde G$ is in addition continuous, hence predictable. Furthermore, since $Z$ is exponentially distributed, $\tilde G$ is the compensator of $\tilde X$, which implies $E[\tilde X_\tau] = E[\tilde G_\tau]$ for all bounded stopping times $\tau$.

Now we use the processes $\tilde X$ and $\tilde G$ to construct the families $\{(X^{(n)}_t)_{t\ge0},\, n \in \mathbb N\}$ and $\{(G^{(n)}_t)_{t\ge0},\, n \in \mathbb N\}$: for $n \in \mathbb N$ we use functions $g_{n,n+1}$ with $g_{n,n+1}(t) = 0$ for all $t \le n$ and $g_{n,n+1}(t) = 1$ for all $t \ge n+1$, and define $X^{(n)}$, $G^{(n)}$ and a stopping time $\tau^{(n)}$; the stopping time $\tau^{(n)}$ ensures that $X^{(n)}_t$ is non-negative. By construction, we have $E[X^{(n)}_\tau] \le E[G^{(n)}_\tau]$ for every bounded $(\mathcal F_t)_{t\ge0}$-stopping time $\tau$. Hence the pair $(X^{(n)}, G^{(n)})$ satisfies the assumptions of Theorem 1.1.

We use the formulas (9) and (10) for positive random variables $Z$ (equation (10) is a direct consequence of (9); alternatively, see also [3, Theorem 20.1, pp. 38-39]), and we will apply (10) to $X_\infty$. To estimate $E[X_\infty \wedge t \mid \mathcal F_0]$, we fix some $t, \lambda > 0$ and define a stopping time $\tau$ in terms of $G$, $\lambda$ and $t$. Because $(G_t)_{t\ge0}$ is predictable, there exists a sequence of stopping times $(\tau^{(n)})_{n\in\mathbb N}$ that announces $\tau$. Therefore, on the set $\{G_0 \le \lambda t\}$ we obtain inequality (11). On $\{\tau = \infty\} \cup \{G_0 > \lambda t\}$ we have $\lim_{n\to\infty} X_{\tau^{(n)}} \wedge \lambda t = X_\infty \wedge \lambda t$, which implies inequality (12). Combining inequalities (11) and (12) gives (13). Applying (10) to $X_\infty$ and inserting (13) gives a further estimate; applying (9) and (10) to $G_\infty$ in the previous inequality then yields a bound that still depends on the parameter $\lambda$. Choosing $\lambda = p$ implies the assertion of the theorem.
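The identities (9) and (10) invoked above are standard layer-cake formulas; a form consistent with the truncated moments $E[X_\infty \wedge t \mid \mathcal F_0]$ appearing in the argument (the precise form used in the original displays is an assumption here) is, for a non-negative random variable $Z$,
$$E[Z \wedge t] = \int_0^t P(Z > \lambda)\, d\lambda \quad (t > 0), \qquad E[Z^p] = p(1-p) \int_0^\infty t^{p-2}\, E[Z \wedge t]\, dt \quad (p \in (0,1)),$$
where the second identity follows from Fubini's theorem together with $\int_0^\infty t^{p-2} (z \wedge t)\, dt = \frac{z^p}{p(1-p)}$ for every $z \ge 0$; conditional versions with $E[\,\cdot \mid \mathcal F_0]$ in place of $E[\,\cdot\,]$ hold by the same computation.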

Proof of Corollary 2.8
Proof of Corollary 2.8. We first prove inequalities (3) and (4). We turn the processes $(X_n)_{n\in\mathbb N_0}$ and $(G_n)_{n\in\mathbb N_0}$ into càdlàg processes in continuous time as follows: set, for all $n \in \mathbb N_0$ and $t \in [n, n+1)$,
$$X_t := X_n, \qquad G_t := G_n, \qquad \mathcal F_t := \mathcal F_n.$$
Since $(G_t)_{t\ge0}$ can be approximated by left-continuous adapted processes, it is predictable. Now Theorem 1.1 and Theorem 2.2 immediately imply inequalities (3) and (4).
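One step behind the word "immediately" is that the domination assumption carries over from discrete to continuous time; a short verification, under the hypotheses of Corollary 2.8 as stated above, is the following. For any bounded $(\mathcal F_t)_{t\ge0}$-stopping time $\tau$ we have $X_\tau = X_{\lfloor \tau \rfloor}$ and $G_\tau = G_{\lfloor \tau \rfloor}$ by construction, and $\lfloor \tau \rfloor$ is a bounded $(\mathcal F_n)_{n\in\mathbb N_0}$-stopping time, since $\{\lfloor \tau \rfloor \le n\} = \{\tau < n+1\} \in \mathcal F_n$ because $\mathcal F_t = \mathcal F_n$ for all $t \in [n, n+1)$. Hence
$$E[X_\tau \mid \mathcal F_0] = E[X_{\lfloor \tau\rfloor} \mid \mathcal F_0] \le E[G_{\lfloor\tau\rfloor} \mid \mathcal F_0] = E[G_\tau \mid \mathcal F_0],$$
so Theorem 1.1 and Theorem 2.2 indeed apply to the continuous-time processes.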
Sharpness of $p^{-p}$ follows from [17, Theorem 2]. We show that $\frac{p^{-p}}{1-p}$ is sharp. Let $X^{(n)}$, $G^{(n)}$, $A$ and $(\mathcal F_t)_{t\ge0}$ be as in the proof of Theorem 2.1. Fix some arbitrary $N \in \mathbb N$. Set for all $k, n \in \mathbb N$