SHARP INEQUALITY FOR BOUNDED SUBMARTINGALES AND THEIR DIFFERENTIAL SUBORDINATES

Let α be a fixed number from the interval [0, 1]. We obtain sharp probability bounds for the maximal function of a process which is α-differentially subordinate to a bounded submartingale. This generalizes previous results of Burkholder and Hammack.


Introduction
Let (Ω, F, P) be a probability space, equipped with a discrete filtration (F_n). Let f = (f_n)_{n=0}^∞, g = (g_n)_{n=0}^∞ be adapted integrable processes taking values in a certain separable Hilbert space H. The difference sequences df = (df_n), dg = (dg_n) of these processes are given by df_0 = f_0, df_n = f_n − f_{n−1}, dg_0 = g_0, dg_n = g_n − g_{n−1}, n = 1, 2, . . . .
Let g* stand for the maximal function of g, that is, g* = max_n |g_n|. The following notion of differential subordination is due to Burkholder. The process g is differentially subordinate to f (or, in short, subordinate to f) if for any nonnegative integer n we have, almost surely, |dg_n| ≤ |df_n|.
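The definitions above are elementary to check on a single path. The following minimal sketch (toy real-valued sequences, i.e. a one-dimensional Hilbert space; all names are ours, not the paper's) computes difference sequences, the maximal function, and the pathwise differential-subordination condition:

```python
# Toy illustration of the definitions: difference sequences
# d_0 = x_0, d_n = x_n - x_{n-1}; the maximal function g* = max_n |g_n|;
# and the differential-subordination check |dg_n| <= |df_n| for all n.

def differences(seq):
    """Difference sequence: d_0 = seq[0], d_n = seq[n] - seq[n-1]."""
    return [seq[0]] + [seq[n] - seq[n - 1] for n in range(1, len(seq))]

def maximal_function(seq):
    """g* = max_n |g_n|."""
    return max(abs(x) for x in seq)

def is_differentially_subordinate(g, f):
    """Check |dg_n| <= |df_n| for every index n along this path."""
    df, dg = differences(f), differences(g)
    return all(abs(b) <= abs(a) for a, b in zip(df, dg))

f = [0.5, 1.0, 0.25, 0.75]   # df = [0.5, 0.5, -0.75, 0.5]
g = [0.5, 0.0, 0.5, 0.25]    # dg = [0.5, -0.5, 0.5, -0.25]
print(is_differentially_subordinate(g, f))  # True: each |dg_n| <= |df_n|
print(maximal_function(g))                  # 0.5
```

Note that the condition is imposed on the differences, not on the processes themselves: |g_n| may well exceed |f_n| at individual times.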

Lemma 1.
For λ > 2, let φ_λ, ψ_λ denote the partial derivatives of U_λ with respect to x, y on the interiors of A_λ, B_λ, C_λ, D_λ, E_λ, extended continuously to the whole of these sets. The following statements hold. (iv) For any λ > 2 we have the inequality (v) For λ > 2 and any (x, y) ∈ S we have

Proof. We start by computing the derivatives. Let y′ = y/|y| stand for the sign of y. Finally, for λ ≥ 4, set Now the properties (i), (ii), (iii) follow by straightforward computation. To prove (iv), note first that for any λ > 2 the condition (9) is clearly satisfied on the sets A_λ and B_λ.
We will need the following fact, proved by Burkholder; see page 17 of [1].
defined on the set {t : |x + th| ≤ 1}. It is easy to check that G is continuous. As explained in [1], the inequality (11) follows once the concavity of G is established. This will be done by proving the inequality G′′ ≤ 0 at the points where G is twice differentiable, and by checking the appropriate inequality for one-sided derivatives at the points where G is not differentiable (even once). Note that we may assume t = 0, by the translation argument G′′_{x,y,h,k}(t) = G′′_{x+th, y+tk, h, k}(0), with analogous equalities for one-sided derivatives. Clearly, we may assume that h ≥ 0, changing the signs of both h, k if necessary. Due to the symmetry of U_λ, we are allowed to consider y ≥ 0 only. We start from the observation that G′′(0) = 0 on the interior of A_λ and G′ The latter inequality holds since U_λ ≡ 1 on A_λ and U_λ ≤ 1 on B_λ. For the remaining inequalities, we consider the cases λ ∈ (2, 4) and λ ≥ 4 separately.
which follows from |k| ≤ h and the fact that as |k| ≤ h. Finally, on E_λ, the concavity follows from Lemma 3. It remains to check the inequalities for one-sided derivatives. By Lemma 1 (ii), the points (x, y) for which G is not differentiable at 0 do not belong to S_λ. Since we have excluded the set A_λ ∩ B_λ, they lie on the line y = x − 1 + λ. For such points (x, y), the left derivative equals while the right one is given by In the first case, the inequality G′ while in the remaining one, Both inequalities follow from the estimate λ − y ≤ 2 and the condition |k| ≤ h.
The case λ ≥ 4. On the set B_λ the concavity is clear. For C_λ, we have that the formula (12) holds.
If (x, y) lies in the interior of D_λ, then The concavity on E_λ is a consequence of Lemma 3. It remains to check the inequality for one-sided derivatives. By Lemma 1 (ii), we may assume y = x + λ − 1, and the inequality G′ is an obvious one, as λ − y ≤ 2.
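The device used throughout this proof, reducing concavity of a function of two variables to the one-dimensional condition G″ ≤ 0 along every line section, is easy to sketch numerically. The function U below is a concave toy example of our own choosing (not the paper's U_λ, whose formula involves the elided case analysis), so every section should pass the check:

```python
# Line-section concavity check: for G(t) = U(x + t*h, y + t*k),
# approximate G''(t) by a central difference and verify G'' <= 0.
# U here is a toy concave function, NOT the special function U_lambda.

def section_second_derivative(U, x, y, h, k, t=0.0, dt=1e-4):
    """Central-difference approximation of G''(t), G(t) = U(x+th, y+tk)."""
    G = lambda s: U(x + s * h, y + s * k)
    return (G(t + dt) - 2.0 * G(t) + G(t - dt)) / dt ** 2

U = lambda x, y: -(x ** 2) - (y ** 2)   # concave, so all sections concave

# For this U one has G''(0) = -2*(h^2 + k^2) exactly.
vals = [section_second_derivative(U, 0.3, -0.1, h, k)
        for h, k in [(1, 0), (0, 1), (1, 1), (2, -1)]]
print(all(v <= 0 for v in vals))  # True
```

A genuinely piecewise-defined U, like U_λ, additionally requires the one-sided derivative comparison G′(t−) ≥ G′(t+) at the gluing lines, which is exactly what the last step of the proof above verifies.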

The main theorem
Now we may state and prove the main result of the paper.
Theorem 1. Suppose f is a submartingale satisfying ||f||_∞ ≤ 1 and g is an adapted process which is α-subordinate to f. Then for all λ > 0 we have

Proof. If λ ≤ 2, then this follows immediately from the result of Hammack [4]; indeed, note that U_λ coincides with Hammack's special function and, furthermore, since g is α-subordinate to f, it is also 1-subordinate to f. Fix λ > 2. We may assume α < 1. It suffices to show that for any nonnegative integer n, To see that this implies (13), fix ε > 0 and consider the stopping time τ = inf{k : |g_k| ≥ λ − ε}. By Doob's optional sampling theorem, the stopped process f^τ = (f_{τ∧n}) is a submartingale. Furthermore, we obviously have ||f^τ||_∞ ≤ 1, and the process g^τ = (g_{τ∧n}) is α-subordinate to f^τ. Therefore, by (14) and the left-continuity of U_λ as a function of λ, (13) follows. Thus it remains to establish (14). By Lemma 1 (v), (|g_n| ≥ λ) ≤ U_λ(f_n, g_n) and it suffices to show that for all 1 ≤ j ≤ n we have To do this, note that, since |dg_j| ≤ |df_j| almost surely, the inequality (11) yields By α-subordination, the condition (9) and the submartingale property, Therefore, it suffices to take the expectation of both sides of (16) to obtain (15). Thus we will be done if we show the integrability of In both cases λ ∈ (2, 4) and λ ≥ 4, all we need is the integrability of these variables on a set outside of which the derivatives φ_λ, ψ_λ are bounded by a constant depending only on α and λ, and |df_j|, |dg_j| do not exceed 2. The integrability is proved exactly in the same manner as in [4]. We omit the details.
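The stopping-time device in the proof above is purely pathwise and can be sketched in a few lines. Toy real-valued paths and the names `first_hit`/`stopped` are ours; the point is only that τ = inf{k : |g_k| ≥ λ − ε} freezes the path from the first hitting time onwards:

```python
# Sketch of the stopping-time reduction: tau = inf{k : |g_k| >= level},
# and the stopped path (x_{tau ^ n})_n, which is constant after time tau.

def first_hit(path, level):
    """tau = inf{k : |path[k]| >= level}; len(path) if the level is never hit."""
    for k, x in enumerate(path):
        if abs(x) >= level:
            return k
    return len(path)

def stopped(path, tau):
    """The stopped path (x_{min(tau, n)})_n."""
    return [path[min(n, tau)] for n in range(len(path))]

g = [0.2, 0.9, 1.6, 2.4, 3.1]
lam, eps = 2.0, 0.5
tau = first_hit(g, lam - eps)   # first index with |g_k| >= 1.5
print(tau)
print(stopped(g, tau))          # frozen at g[tau] from time tau on
```

Stopping preserves exactly the structure the proof needs: by optional sampling the stopped f is still a submartingale, the bound ||f^τ||_∞ ≤ 1 is inherited trivially, and the difference sequence of the stopped pair vanishes after τ, so α-subordination is inherited as well.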
We will now establish the following sharp exponential inequality.
Theorem 2. Suppose f is a submartingale satisfying ||f||_∞ ≤ 1 and g is an adapted process which is α-subordinate to f. In addition, assume that |g_0| ≤ |f_0| with probability 1. Then for λ ≥ 4 we have where

The inequality is sharp.
This should be compared to Burkholder's estimate (Theorem 8.1) in the case when f, g are Hilbert space-valued martingales and g is subordinate to f. For α = 1, we obtain the inequality of Hammack [4].

Proof of the inequality (18). We will prove that the maximum of U_λ on the set K = {(x, y) ∈ S : |y| ≤ |x|} is given by the right-hand side of (18). This, together with the inequality (13) and the assumption P((f_0, g_0) ∈ K) = 1, will imply the desired estimate. Clearly, by symmetry, we may restrict ourselves to the set K⁺ = K ∩ {y ≥ 0}. If (x, y) ∈ K⁺ and x ≥ 0, then it is easy to check that Furthermore, a straightforward computation shows that the function F : [0, 1] → R given by F(s) = U_λ(s, s) is nonincreasing. Thus we have U_λ(x, y) ≤ U_λ(0, 0). On the other hand, if (x, y) ∈ K⁺ and x ≤ 0, then it is easy to prove that U_λ(x, y) ≤ U_λ(−1, x + y + 1), and the function G : [0, 1] → R given by G(s) = U_λ(−1, s) is nondecreasing. Combining all these facts, we see that for any (x, y) Thus (18) holds. The sharpness will be shown in the next section.

Sharpness
Recall the function V_λ = V_{α,λ} defined by (1) in the introduction. The main result of this section is Theorem 3 below, which, combined with Theorem 1, implies that the functions U_λ and V_λ coincide. If we apply this at the point (−1, 1) and use the equality appearing in (19), we obtain that the inequality (18) is sharp.
The main tool in the proof is the following "splicing" argument. Assume that the underlying probability space is the interval [0, 1] with the Lebesgue measure.
Proof. Let N be such that (f_N, g_N) = (f_∞, g_∞) and fix ε > 0. With no loss of generality, we may assume that the σ-field generated by f, g is generated by the family of intervals {[a_i, a_{i+1}) : There exists a filtration and a pair (f^i, g^i) of adapted processes, with f^i being a submartingale bounded in absolute value by 1 and g^i being α-subordinate to f^i, which satisfy for k > N. It is easy to check that there exists a filtration relative to which the process F is a submartingale satisfying ||F||_∞ ≤ 1 and G is an adapted process which is α-subordinate to F. Furthermore, we have Since ε was arbitrary, the result follows.
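The splicing construction lives on the concrete probability space ([0, 1], Lebesgue measure), where the relevant σ-fields are generated by finitely many intervals [a_i, a_{i+1}). A minimal sketch of this representation, with a toy partition and toy values of our own choosing: a simple random variable is a step function on such a partition, and its expectation is the sum of value times interval length.

```python
# Simple random variables on ([0,1], Lebesgue): X = values[i] on the
# interval [partition[i], partition[i+1]).  Expectation is a finite sum.
# The partition endpoints a_i and the values are toy choices of ours.

def expectation(partition, values):
    """E[X] = sum_i values[i] * (a_{i+1} - a_i)."""
    return sum(v * (b - a)
               for v, a, b in zip(values, partition, partition[1:]))

a = [0.0, 0.25, 0.5, 1.0]   # endpoints a_0 < a_1 < ... of the intervals
x = [1.0, -2.0, 0.5]        # value of X on each interval
print(expectation(a, x))    # 1*0.25 - 2*0.25 + 0.5*0.5 = 0.0
```

Splicing then amounts to rescaling such interval pictures of the two given pairs of processes into disjoint subintervals of [0, 1], which is why the common underlying space can be fixed once and for all.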
Proof of Theorem 3. First note the following obvious properties of the functions V_λ, λ > 0: we have V_λ ∈ [0, 1] and V_λ(x, y) = V_λ(x, −y). The second property is an immediate consequence of the fact that if g is α-subordinate to f, then so is −g.