Local characteristics and tangency of vector-valued martingales

This paper is devoted to tangent martingales in Banach spaces. We provide the definition of tangency through local characteristics, basic $L^p$- and $\phi$-estimates, a precise construction of a decoupled tangent martingale, new estimates for vector-valued stochastic integrals, and several further results concerning tangent martingales and local characteristics in infinite dimensions. This work extends various real-valued and vector-valued results in this direction, e.g. due to Grigelionis, Hitczenko, Jacod, Kallenberg, Kwapie\'{n}, McConnell, and Woyczy\'{n}ski. The vast majority of the assertions presented in the paper are proven under the UMD assumption on the corresponding Banach space, which is both sufficient and necessary.

Theorem 1.1. Let $X$ be a Banach space, $1 \le p < \infty$. Then $X$ is UMD if and only if for any $X$-valued tangent martingale difference sequences $(d_n)_{n\ge1}$ and $(e_n)_{n\ge1}$ one has that
$$ \mathbb{E}\sup_{N\ge 1}\Bigl\|\sum_{n=1}^{N} d_n\Bigr\|^p \eqsim_{p,X} \mathbb{E}\sup_{N\ge 1}\Bigl\|\sum_{n=1}^{N} e_n\Bigr\|^p. \qquad (1.2) $$
(Note that the paper [73] did not cover the case $p = 1$, and [44] was never published. Nevertheless, the reader can find this case in [25, pp. 424–425] and in Theorem 5.9.)
A classical example of tangent martingale difference sequences is provided by independent mean-zero random variables. Let $(\xi_n)_{n\ge1}$ be real-valued mean-zero independent random variables, and let $(v_n)_{n\ge1}$ be $X$-valued bounded predictable (i.e. $v_n$ depends only on $\xi_1,\dots,\xi_{n-1}$). Then $(v_n\xi_n)_{n\ge1}$ is a martingale difference sequence. Moreover, $(v_n\widetilde\xi_n)_{n\ge1}$ is then a tangent martingale difference sequence of $(v_n\xi_n)_{n\ge1}$, where $(\widetilde\xi_n)_{n\ge1}$ is an independent copy of $(\xi_n)_{n\ge1}$ (see Example 2.28), so in the UMD case (1.2) yields
$$ \mathbb{E}\sup_{N\ge1}\Bigl\|\sum_{n=1}^{N} v_n\xi_n\Bigr\|^p \eqsim_{p,X} \mathbb{E}\sup_{N\ge1}\Bigl\|\sum_{n=1}^{N} v_n\widetilde\xi_n\Bigr\|^p. \qquad (1.3) $$
It turned out that (1.3) characterizes the UMD property if one sets $(\xi_n)_{n\ge1}$ to be Rademachers (see Bourgain [11] and Garling [36,37]), Gaussians (see Garling [36] and McConnell [73]), or Poissons (see Proposition 3.23). In the Gaussian and Poisson cases the equivalence of (1.3) and the UMD property basically says that the following estimates for $X$-valued stochastic integrals,
$$ \mathbb{E}\sup_{t\ge0}\Bigl\|\int_0^t \Phi\, dW\Bigr\|^p \eqsim_{p,X} \mathbb{E}\sup_{t\ge0}\Bigl\|\int_0^t \Phi\, d\widetilde W\Bigr\|^p, \qquad (1.4) $$
$$ \mathbb{E}\sup_{t\ge0}\Bigl\|\int_0^t F\, d\widetilde N\Bigr\|^p \eqsim_{p,X} \mathbb{E}\sup_{t\ge0}\Bigl\|\int_0^t F\, d\widetilde N^{\mathrm{ind}}\Bigr\|^p \qquad (1.5) $$
(here $\Phi$ and $F$ are $X$-valued elementary predictable, $W$ is a Brownian motion, $\widetilde N$ is a compensated standard Poisson process, and $\widetilde W$ and $\widetilde N^{\mathrm{ind}}$ are independent copies of $W$ and $\widetilde N$ respectively), which allow one to replace the driving Brownian or Poisson noise in a stochastic integral by an independent copy without losing control of the strong $L^p$-norm of the stochastic integral, are equivalent to the Banach space $X$ having the UMD property. Estimates of the form (1.4) turned out to be exceptionally important in vector-valued stochastic integration theory, as the right-hand side of (1.4) is nothing but a $\gamma$-norm (see Subsection 2.11) of $\Phi$, which is a natural extension of the Hilbert–Schmidt norm to general Banach spaces (see McConnell [73] and van Neerven, Veraar, and Weis [81]; see also [83,106,108] for a general continuous martingale case and Dirksen [30] for the Poisson case). Estimates (1.4) and (1.5) justify that tangent martingales are extremely important for vector-valued stochastic integration.
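For orientation, here is a sketch (not a quote from the paper; it uses only the standard definition of the $\gamma$-norm) of why the right-hand side of (1.4) is comparable to a $\gamma$-norm of $\Phi$. Fix $\omega$ and regard $\Phi = \Phi(\omega)$ as a deterministic element of $\gamma(L^2(\mathbb R_+), X)$; let $(f_k)_{k\ge1}$ be an orthonormal basis of $L^2(\mathbb R_+)$ and $(\widetilde\gamma_k)_{k\ge1}$ i.i.d. standard Gaussians.

```latex
% The Wiener integral of a deterministic \Phi against an independent
% Brownian motion \widetilde W is a Gaussian sum:
\int_0^\infty \Phi \, d\widetilde W
  \overset{d}{=} \sum_{k\ge1} \widetilde\gamma_k \, \Phi f_k .
% Hence, by the Kahane--Khintchine inequality,
\mathbb{E}\Bigl\| \int_0^\infty \Phi\, d\widetilde W \Bigr\|^p
  \eqsim_p \Bigl( \mathbb{E}\Bigl\| \sum_{k\ge1} \widetilde\gamma_k\, \Phi f_k \Bigr\|^2 \Bigr)^{p/2}
  = \|\Phi\|_{\gamma(L^2(\mathbb{R}_+),X)}^p .
```

For random $\Phi$ one first conditions on $\omega$; this is the mechanism behind the statement that the right-hand side of (1.4) is essentially an $L^p$-moment of the $\gamma$-norm of $\Phi$.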
The procedure of replacing the noise by an independent copy (in our case this was $(\xi_n) \to (\widetilde\xi_n)$) together with extending the filtration in the corresponding way (i.e. $\widetilde{\mathcal F}_n := \sigma(\mathcal F_n, \widetilde\xi_1,\dots,\widetilde\xi_n)$) creates a special tangent martingale difference sequence, namely a decoupled one, which can be defined in the following way: $(e_n)$ is a decoupled tangent martingale difference sequence of $(d_n)$ if $(e_n)$ are conditionally independent given $\mathcal G := \sigma((d_n))$, i.e. for any Borel $B_1,\dots,B_N \subset X$ a.s.
$$ \mathbb P(e_1\in B_1,\dots,e_N\in B_N\,|\,\mathcal G) = \mathbb P(e_1\in B_1\,|\,\mathcal G)\cdot\ldots\cdot\mathbb P(e_N\in B_N\,|\,\mathcal G), $$
and $\mathbb P(e_n|\mathcal F_{n-1}) = \mathbb P(e_n|\mathcal G)$ for any $n\ge1$. Note that such a martingale difference sequence might not exist on the probability space with the original filtration, so one may need to extend the probability space and filtration in such a way that $(d_n)$ preserves its martingale property. Existence and uniqueness of such a decoupled $(e_n)$ was proved by Kwapień and Woyczyński in [64] (see also de la Peña [28], de la Peña and Giné [29], especially [29, Section 6.1] for a detailed proof, Kallenberg [58], and S.G. Cox and Geiss [24]). The goal of the present paper is to extend Theorem 1.1 to the continuous-time setting and to discover in this case the explicit form of a decoupled tangent local martingale.
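In the weighted example above the decoupled tangent sequence can be written down explicitly. The following sketch (our notation, not a quote from the paper) uses the product extension $\Omega\times\widetilde\Omega$, where the independent copy $(\widetilde\xi_n)_{n\ge1}$ lives on $\widetilde\Omega$:

```latex
% decoupled tangent sequence for d_n = v_n \xi_n:
e_n(\omega,\widetilde\omega) := v_n(\omega)\,\widetilde\xi_n(\widetilde\omega),
  \qquad n \ge 1 .
% Given \mathcal G = \sigma((\xi_n)_{n\ge1}) the weights v_n are frozen,
% so the (e_n) are conditionally independent, and since v_n is
% \mathcal F_{n-1}-measurable while \widetilde\xi_n is independent of
% everything else,
\mathbb P\bigl(e_n \in \cdot \,\big|\, \mathcal F_{n-1}\bigr)
  = \mathbb P\bigl(v_n \xi_n \in \cdot \,\big|\, \mathcal F_{n-1}\bigr)
  = \mathbb P\bigl(e_n \in \cdot \,\big|\, \mathcal G\bigr)
  \quad \text{a.s.}
```

This is exactly the tangency plus conditional-independence pattern required in the definition above.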
Let us start with explaining what continuous-time tangent local martingales are. To this end we will need Lévy martingales. One of the most fundamental features of Lévy processes is the Lévy–Khinchin formula, which in the case of a Lévy martingale $L$ with $L_0 = 0$ has the following form (see e.g. [52,102]):
$$ \mathbb E\, e^{i\theta L_t} = \exp\Bigl( t\Bigl( -\tfrac{\sigma^2\theta^2}{2} + \int_{\mathbb R} \bigl(e^{i\theta x} - 1 - i\theta x\bigr)\, d\nu(x) \Bigr)\Bigr), \qquad \theta\in\mathbb R,\ t\ge0, \qquad (1.6) $$
for some fixed $\sigma \ge 0$ and some fixed measure $\nu$ on $\mathbb R$. It turns out that the pair $(\sigma,\nu)$ characterizes the distribution of a Lévy martingale, and it has the following analogue for a general real-valued martingale $M$: the local characteristics of $M$ are the pair $([M^c], \nu^M)$, where $M^c$ is the continuous local martingale part of $M$ and $\nu^M$ is the compensator of the jump measure
$$ \mu^M(A\times B) := \#\{t\in A : \Delta M_t \in B\setminus\{0\}\}, \qquad A\in\mathcal B(\mathbb R_+),\ B\in\mathcal B(\mathbb R), \qquad (1.7) $$
and two real-valued martingales are called tangent if their local characteristics coincide. Continuous-time tangent martingales and local characteristics were intensively studied by Jacod [48,49,50], Jacod and Shiryaev [52], Jacod and Sadi [51], Kwapień and Woyczyński [62,63,64,65], and Kallenberg [58] (see also [73,81,85,86]). In particular, Kallenberg proved in [58] that for any real-valued continuous-time tangent martingales $M$ and $N$ one has that
$$ \mathbb E\sup_{t\ge0} |M_t|^p \eqsim_p \mathbb E\sup_{t\ge0} |N_t|^p, \qquad 1\le p<\infty, \qquad (1.8) $$
with more general inequalities (including concave functions of moderate growth) under additional assumptions on $M$ and $N$ (e.g. conditional symmetry). Furthermore, in [50,51,58,64] it was shown that any real-valued martingale $M$ has a decoupled tangent local martingale $N$, i.e. a tangent local martingale $N$ defined on an enlarged probability space with an enlarged filtration such that $N(\omega)$ is a martingale with independent increments and with local characteristics $([M^c](\omega), \nu^M(\omega))$ for a.e. $\omega\in\Omega$ from the original probability space. Moreover, in the quasi-left continuous setting it was shown in [50,51,64] that such a martingale can be obtained via the following procedure: if we discretize $M$ on $[0,T]$, i.e. consider a discrete martingale $(f^n_k)_{k=1}^n = (M_{Tk/n})_{k=1}^n$, and consider a decoupled tangent martingale $\widetilde f^n := (\widetilde f^n_k)_{k=1}^n$, then $\widetilde f^n$ converges in distribution to $N$ as random variables with values in the Skorokhod space $D([0,T],\mathbb R)$ (see Definition 2.2) as $n\to\infty$. This in particular justifies the definition of a continuous-time decoupled tangent martingale.
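For orientation, here is a sketch (in our notation) of how the Lévy martingale $L$ from (1.6) fits this framework — its local characteristics are deterministic:

```latex
% continuous part and jump compensator of a Levy martingale:
[L^c]_t = \sigma^2 t, \qquad \nu^L(dt,dx) = dt \otimes \nu(dx),
% so any martingale tangent to L must carry the same deterministic pair.
```

This deterministic behaviour of the characteristics is precisely the phenomenon behind Grigelionis' characterization of martingales with independent increments discussed in Section 9 below.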
In the present paper we are going to explore various facts concerning vector-valued continuous-time tangent martingales. We will mainly focus on the following three questions:
• What do local characteristics look like in Banach spaces?
• What is a decoupled tangent martingale in this case?
• Can we extend decoupling inequalities (1.8) to infinite dimensions?
We will also address the supplementary and related questions that appear while working on these three questions. Let us outline the structure of the paper section by section.
In Section 2 we present some preliminaries to the paper, i.e. certain assertions (e.g. concerning martingales, random measures, stochastic integration, et cetera) which we will heavily need throughout the paper.
Our main Section 3 is devoted to the definition of vector-valued continuous-time tangent martingales, basic $L^p$-estimates for these martingales, and the construction of a decoupled tangent martingale. How do we define tangent martingales in the vector-valued case? As we saw in Theorem 1.1, a Banach space $X$ having the UMD property plays an important rôle for the existence of $L^p$-bounds for discrete tangent martingales. This also turns out to be equivalent to the existence of local characteristics of a general $X$-valued martingale $M$. Namely, due to [116,118], $X$ has the UMD property if and only if a general $X$-valued martingale $M$ has the Meyer–Yoeurp decomposition, i.e. it can be uniquely decomposed into a sum of a continuous local martingale $M^c$ and a purely discontinuous local martingale $M^d$ (see Remark 2.19). In this case we define the local characteristics of $M$ to be the pair $([\![M^c]\!], \nu^M)$, where $[\![M^c]\!]$ is the bilinear form-valued process satisfying $[\![M^c]\!]_t(x^*,x^*) = [\langle M^c, x^*\rangle]_t$, $t\ge0$, for any $x^*\in X^*$ a.s. (such a process exists because of Remark 2.13), and $\nu^M$ is a compensator of a random measure $\mu^M$ defined on $\mathbb R_+\times X$ analogously to (1.7) (see Subsections 2.6 and 2.8). Similarly to the real-valued case, two $X$-valued martingales are tangent if they have the same local characteristics.
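The bilinear form-valued process $[\![M^c]\!]$ is determined by its diagonal values $[\![M^c]\!]_t(x^*,x^*) = [\langle M^c,x^*\rangle]_t$ (a standard convention in this context); the following polarization identity (a sketch, not a quote from the paper) recovers the off-diagonal values:

```latex
% polarization: the off-diagonal values of the covariation form are
% recovered from quadratic variations of scalar projections
[\![M^c]\!]_t(x^*,y^*)
  = \tfrac14\Bigl( \bigl[\langle M^c,\, x^*+y^*\rangle\bigr]_t
                 - \bigl[\langle M^c,\, x^*-y^*\rangle\bigr]_t \Bigr),
  \qquad x^*, y^* \in X^*,\ t \ge 0 .
```

In particular, knowing the family of scalar quadratic variations $([\langle M^c,x^*\rangle])_{x^*\in X^*}$ is equivalent to knowing the form $[\![M^c]\!]$ itself.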
Next, we present $L^p$-estimates for UMD-valued tangent martingales. In Theorem 3.7 we extend the result (1.8) of Kallenberg to any UMD Banach space $X$, i.e. we prove that for any UMD Banach space $X$ and for any $X$-valued tangent martingales $M$ and $N$ one has that
$$ \mathbb E\sup_{t\ge0}\|M_t\|^p \eqsim_{p,X} \mathbb E\sup_{t\ge0}\|N_t\|^p, \qquad 1\le p<\infty. \qquad (1.9) $$
Let us say a couple of words about how we obtain (1.9). To this end we need the canonical decomposition. Thanks to Meyer [76] and Yoeurp [120], any real-valued martingale $M$ can be uniquely decomposed into a sum of a continuous local martingale $M^c$ (the Wiener-like part), a purely discontinuous quasi-left continuous local martingale $M^q$ (the Poisson-like part), and a purely discontinuous local martingale $M^a$ with accessible jumps (the discrete-like part). It turns out that this decomposition extends to the vector-valued case if and only if $X$ has the UMD property (see [116,118]). Moreover, as is shown in Subsection 3.2, if $M = M^c + M^q + M^a$ and $N = N^c + N^q + N^a$ are the canonical decompositions of tangent martingales $M$ and $N$, then $M^i$ and $N^i$ are tangent for any $i\in\{c,q,a\}$, and thus, by the strong $L^p$-estimates for the canonical decomposition presented in [119] (see Theorem 2.18), it suffices to show (1.9) separately for each of these three cases. The continuous case then immediately follows from the weak differential subordination inequalities obtained in [91,116,119], and the discrete-like case can be shown via a standard discretization trick (see Subsection B.1) and Theorem 1.1. The most complicated and mathematically most interesting is the Poisson-like case. First we show that (1.5) holds true not just for a compensated Poisson process, but for any stochastic integral with respect to a Poisson random measure (see Proposition 3.23). Next we prove that any UMD-valued quasi-left continuous purely discontinuous martingale can be represented as a stochastic integral with respect to a quasi-left continuous compensated random measure (see Theorem 3.30).
Finally, by exploiting a certain approximation argument, we may assume that this random measure is defined over a finite jump space, and hence is a time-changed Poisson random measure thanks to a fundamental result by Meyer [77] and Papangelou [92] (see e.g. also [1,12,56]), which says that any quasi-left continuous integer-valued random measure becomes a Poisson random measure after a certain time change. As this time change depends only on the compensator measure (which is one of the local characteristics and which is the same for $M^q$ and $N^q$), (1.5) immediately yields (1.9) in the quasi-left continuous purely discontinuous case.
Another highlight of Section 3 is the existence, uniqueness, and construction of a decoupled tangent martingale. First, in Theorem 3.8 we extend the result of Jacod [50], Kwapień and Woyczyński [64], and Kallenberg [58] on the existence of a decoupled tangent martingale to general UMD-valued martingales (recall that they have shown this existence only in the real-valued case). Next, in Subsection 3.8 we show that a decoupled tangent martingale is unique in distribution (which extends the discrete case, see [29,64]). Finally, in Subsection 3.9 we prove that if $N$ is a decoupled tangent martingale of $M$, then $N$ has independent increments given the local characteristics $([\![M^c]\!], \nu^M)$ of $M$, which e.g. generalizes [58, Theorem 3.1].
It is of interest to take a closer look at the structure of tangent martingales. Let us consider a particular case of (1.4) and (1.5). Intuitively it seems that the stochastic integrals $\int \Phi\, d\widetilde W$ and $\int F\, d\widetilde N^{\mathrm{ind}}$ occurring in (1.4) and (1.5) should be decoupled tangent martingales of $\int \Phi\, dW$ and $\int F\, d\widetilde N$ respectively. And this is true, as $\int \Phi(\omega)\, d\widetilde W$ is a.s. a martingale with independent increments and with the local characteristics $(\Phi(\omega)\Phi^*(\omega), 0)$ (here we can consider $\Phi \in \mathcal L(L^2(\mathbb R_+), X)$ instead of $\Phi : \mathbb R_+ \to X$ a.s. as $\Phi$ is elementary predictable, see Subsection 2.10 and Section 6), and $\int F(\omega)\, d\widetilde N^{\mathrm{ind}}$ a.s. has independent increments and the local characteristics $(0, \nu_F(\omega))$ for a suitable measure $\nu_F(\omega)$ on $\mathbb R_+\times X$ determined by $F(\omega)$. For a general martingale we have an expanded version of this construction. Recall that for a given UMD Banach space $X$ any $X$-valued martingale $M$ has the canonical decomposition $M = M^c + M^q + M^a$. Let us present a corresponding decoupled tangent martingale $N^c$, $N^q$, and $N^a$ for each of the cases separately (in the end we can simply sum up these cases, $N := N^c + N^q + N^a$, see Subsection 3.7). It turns out that by Subsection 3.3 we have that $M^c \circ \tau^c = \int \Phi\, dW_H$ for some time-change $\tau^c$, some Hilbert space $H$, some $H$-cylindrical Brownian motion $W_H$ (see Subsection 2.10), and some $\Phi : \Omega \to \gamma(L^2(\mathbb R_+; H), X)$ (see Subsection 2.11; we are allowed to integrate such functions due to [81]). Then it is sufficient to set $N^c := \int \Phi\, d\widetilde W_H \circ A^c$ (where $A^c$ is the time-change inverse to $\tau^c$, i.e. $\tau^c \circ A^c_t = A^c \circ \tau^c_t = t$ a.s. for any $t\ge0$) to be the corresponding decoupled tangent martingale $N^c$ of $M^c$, for some independent $H$-cylindrical Brownian motion $\widetilde W_H$. Therefore $N^c(\omega)$ is a time-changed Wiener integral with a deterministic integrand, which agrees with (1.4). The construction of a decoupled tangent martingale $N^a$ of $M^a$ simply copies the one done in the discrete case, due to the approximation argument presented in Proposition B.1 (see [28,29,64,65] and Subsection 3.6).
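Here is a sketch (our notation) of why $\int \Phi(\omega)\, d\widetilde W$ has the stated local characteristics $(\Phi(\omega)\Phi^*(\omega), 0)$. Fix $\omega$, so that $\Phi = \Phi(\omega) \in \mathcal L(L^2(\mathbb R_+), X)$ is deterministic; then for every $x^*\in X^*$,

```latex
% scalar projection of the decoupled Wiener integral:
\Bigl\langle \int_0^\infty \Phi\, d\widetilde W,\; x^* \Bigr\rangle
  = \int_0^\infty (\Phi^* x^*)\, d\widetilde W ,
% a continuous Gaussian martingale (hence no jumps, so \nu = 0), with
\Bigl[\bigl\langle \textstyle\int \Phi\, d\widetilde W,\, x^*\bigr\rangle\Bigr]_\infty
  = \|\Phi^* x^*\|_{L^2(\mathbb R_+)}^2
  = \langle \Phi\Phi^* x^*,\, x^* \rangle .
```

The continuity of the paths gives the vanishing jump characteristic, and the displayed quadratic variation identifies the continuous characteristic with the form $\Phi\Phi^*$.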
The most intriguing thing happens in the quasi-left continuous case. Recall that $M^q$ can be represented as an integral with respect to a compensated random measure, namely
$$ M^q_t = \int_{[0,t]\times X} x\, d\bar\mu^{M^q}(\cdot,x), \qquad t\ge0, \qquad (1.10) $$
where $\mu^{M^q}$ is defined by (1.7), $\nu^{M^q}$ is the corresponding compensator, and $\bar\mu^{M^q} = \mu^{M^q} - \nu^{M^q}$ (see Theorem 3.30). It turns out that in this case the corresponding decoupled tangent martingale has the form
$$ N^q_t = \int_{[0,t]\times X} x\, d\bar\mu^{M^q}_{\mathrm{Cox}}(\cdot,x), \qquad t\ge0, \qquad (1.11) $$
where $\mu^{M^q}_{\mathrm{Cox}}$ is a Cox process directed by $\nu^{M^q}$ and $\bar\mu^{M^q}_{\mathrm{Cox}} := \mu^{M^q}_{\mathrm{Cox}} - \nu^{M^q}$. Cox processes were introduced by D.R. Cox in [22]; in the present case this is a random measure on an enlarged probability space such that $\mu^{M^q}_{\mathrm{Cox}}(\omega)$ is a Poisson random measure on $\mathbb R_+\times X$ with the intensity (or compensator, see Subsection 2.9) $\nu^{M^q}(\omega)$ for a.e. $\omega\in\Omega$ from the original probability space. Thus $N^q(\omega)$ is a Poisson integral with a deterministic integrand, which corresponds to (1.5). The idea of employing Cox processes for creating decoupled tangent processes here is not new (see e.g. [58]), but the most difficult part in the vector-valued case is to show that both integrals (1.10) and (1.11) make sense and are tangent (see Subsection 3.5).
It is worth noticing that in Subsection 3.4 we discuss $L^p$-estimates for general vector-valued integrals with respect to general random measures. Recall that estimates of this type go back to Novikov [84], who bounded from above an $L^p$-moment of a real-valued stochastic integral $\int F\, d\bar\mu$ by integrals in terms of $F$ and the compensator $\nu$ of $\mu$ (here $\bar\mu = \mu - \nu$; see Lemma 3.4). Later on, sharp estimates of this form were proven by Marinelli and Röckner [71] in the Hilbert space case and by Dirksen and the author [32] in the $L^q$ case ($1<q<\infty$). In Theorem 3.22 we show that for any UMD-valued elementary predictable $F$ and for any quasi-left continuous random measure $\mu$ one has that
$$ \mathbb E\sup_{t\ge0}\Bigl\|\int_{[0,t]\times X} F\, d\bar\mu\Bigr\|^p \eqsim_{p,X} \mathbb E\sup_{t\ge0}\Bigl\|\int_{[0,t]\times X} F\, d\bar\mu_{\mathrm{Cox}}\Bigr\|^p, \qquad (1.12) $$
where $\nu$ is a compensator of $\mu$, $\bar\mu := \mu - \nu$, $\mu_{\mathrm{Cox}}$ is a Cox process directed by $\nu$, and $\bar\mu_{\mathrm{Cox}} := \mu_{\mathrm{Cox}} - \nu$. Note that though it seems that the right-hand side of (1.12) depends on $F$ and $\mu_{\mathrm{Cox}}$, the distribution of the Cox process depends entirely on $\nu$ (in particular, $\mu_{\mathrm{Cox}}(\omega)$ is a Poisson random measure with the intensity $\nu(\omega)$), and so on the right-hand side of (1.12) we in fact have $\mathbb E\|F\|^p_{p,X,\nu}$, where $\|F(\omega)\|_{p,X,\nu(\omega)}$ is the $L^p$-norm of a stochastic integral of a deterministic function $F(\omega)$ with respect to the corresponding compensated Poisson random measure (see Subsection 2.9 and [2,3]). Thus, even though (1.12) does not provide an explicit formula for a stochastic integral in terms of $F$ and $\nu$, as was done in [32,71,84], it nevertheless partly generalizes the papers [32,71,84], as it tells us that in order to get $L^p$-bounds for UMD-valued stochastic integrals with respect to a general random measure we only need to prove the corresponding estimates for the Poisson case with deterministic integrands (see e.g. Remark 3.26).
In Section 4 we show that if $X$ satisfies the so-called decoupling property (e.g. if $X = L^1$), then one-sided inequalities of the form
$$ \mathbb E\sup_{t\ge0}\|M_t\|^p \lesssim_{p,X} \mathbb E\sup_{t\ge0}\|N_t\|^p \qquad (1.13) $$
are possible for an $X$-valued martingale $M$ satisfying broad assumptions (see e.g. Remark 6.5), where $N$ is a corresponding decoupled tangent local martingale.
Recall that the decoupling property was introduced by S.G. Cox and Veraar in [25,26] as a natural property while working with discrete decoupled tangent martingales and stochastic integrals.
In [58] Kallenberg has also shown $\phi$-inequalities for tangent continuous martingales (where $\phi$ is a convex function of moderate growth; recall that one can even omit the convexity assumption for conditionally symmetric martingales). In Section 5 we extend these inequalities to full generality (i.e. to general martingales in UMD Banach spaces). Though [58] also treats the semimartingale case, it is not known to the author how to prove such inequalities for vector-valued semimartingales.
In Section 6 we present estimates for vector-valued stochastic integrals with respect to a general martingale which extend both (1.4) and (1.5). Namely, we show that for a general $H$-valued martingale $M$ (where $H$ is a Hilbert space) and an $\mathcal L(H,X)$-valued elementary predictable process $\Phi$, for any $1\le p<\infty$ the $L^p$-moment $\mathbb E\sup_{t\ge0}\|\int_0^t \Phi\, dM\|^p$ admits a two-sided estimate (1.14) in terms of the canonical decomposition $M = M^c + M^q + M^a$, the quadratic variation derivative $q_{M^c}$ of $M^c$ (see Subsection 2.6), and a decoupled tangent martingale $N^a$ of $M^a$. Note that the right-hand side of (1.14) can in fact be seen as an $L^p$-moment of a predictable process. Such estimates are in the spirit of the works of Novikov [84] and of Dirksen and the author [32], and they are very different from the classical vector-valued Burkholder–Davis–Gundy inequalities presented e.g. in [21,72,109,119]. Note that the upper bound of (1.14) characterizes the decoupling property (see Section 4 and Remark 6.5).
As discussed above, the notion of tangency heavily exploits the Meyer–Yoeurp decomposition, whose existence for a general $X$-valued martingale is equivalent to $X$ having the UMD property. But what if we have weak tangency, i.e. what if for a given Banach space $X$ and a pair of $X$-valued martingales $M$ and $N$ we have that $\langle M, x^*\rangle$ and $\langle N, x^*\rangle$ are tangent for any $x^*\in X^*$? How does this correspond to the tangency property, and will we then have $L^p$-estimates for a family of Banach spaces different from the UMD one? In Section 7 we show that in the UMD case weak tangency and tangency coincide. Moreover, in the non-UMD setting no estimate of the form (1.9) for weakly tangent martingales is possible.
In Section 8 we discuss for which Banach spaces it is possible to extend the definition of decoupled tangent local martingales (and prove their existence) by using weak local characteristics. It turns out that this is possible for Banach spaces with the so-called recoupling property, which is dual to the decoupling property (1.13) and which turns out to be equivalent to the well-studied UMD$^{+}$ property. Moreover, the converse holds true, i.e. the recoupling property of a Banach space $X$ is necessary for any $X$-valued local martingale to have a decoupled tangent local martingale (see Theorem 8.6 and Remark 8.7). It remains open whether recoupling and UMD are identical (see e.g. [46, Section O]).
In Section 9 we consider vector-valued martingales with independent increments. First recall that one of the inventors of local characteristics was Grigelionis (that is why local characteristics are sometimes called Grigelionis characteristics). In particular, in [42] he proved that a real-valued martingale has independent increments if and only if it has deterministic local characteristics (this result was extended by Jacod and Shiryaev in [52] to the multidimensional setting). In Section 9 we extend this celebrated result to infinite dimensions. In the preliminary Subsection 9.1 we show that for any Banach space $X$, an $X$-valued local martingale $M$ has independent increments if and only if it has deterministic weak local characteristics, i.e. the family $([\langle M, x^*\rangle^c], \nu^{\langle M, x^*\rangle})_{x^*\in X^*}$ is deterministic (such an object always exists since $\langle M, x^*\rangle$ has local characteristics as a real-valued local martingale). Next, in Subsection 9.2 we prove that if this is the case, then $M$ actually has local characteristics (which are of course deterministic), and moreover, $M$ has the canonical decomposition $M = M^c + M^q + M^a$ so that $M^c$, $M^q$, and $M^a$ are mutually independent, and there exists a deterministic time-change $\tau^c$ such that $M^c \circ \tau^c = \int \Phi\, dW_H$ is a stochastic integral of some deterministic $\Phi \in \gamma(L^2(\mathbb R_+; H), X)$ with respect to some $H$-cylindrical Brownian motion $W_H$, $M^q = \int x\, d\widetilde N(\cdot,x)$ for some fixed Poisson random measure $N$ on $\mathbb R_+\times X$, and $M^a$ is a sum of its independent jumps, which occur at a deterministic family of times $(t_n)_{n\ge1}$. Note that throughout Section 9 $X$ is a general Banach space and there is no need for the UMD property.
Recall that Jacod [50] and Kwapień and Woyczyński [64] proved that for a real-valued quasi-left continuous martingale $M$ a decoupled tangent martingale $N$ on $[0,T]$ is nothing but a limit in distribution of discrete decoupled tangent martingales $\widetilde f^n$ as $n\to\infty$, where for each $n\ge1$ the martingale $\widetilde f^n := (\widetilde f^n_k)_{k=1}^n$ is a decoupled tangent martingale of the discrete martingale $(f^n_k)_{k=1}^n = (M_{Tk/n})_{k=1}^n$, and the limit is taken in distribution of random variables with values in the Skorokhod space $D([0,T],\mathbb R)$ (see Definition 2.2). In Section 10 we extend this result to general UMD-valued martingales (thus in a sense combining the discrete works of McConnell [73], Hitczenko [44], and de la Peña [28] with the quasi-left continuous works of Jacod [49,50] and Kwapień and Woyczyński [64]). In our setting such a limit theorem is possible since we know what the limiting object is (i.e. what a decoupled tangent martingale looks like) thanks to Section 3, certain approximation techniques, and the properties of stochastic integrals and the canonical decomposition.
Section 11 is devoted to a characterization of the local characteristics of a general UMD-valued martingale via an exponential formula which can be considered as an extension of the Lévy–Khinchin formula. There we show that for any UMD-valued martingale $M$ with the local characteristics $([\![M^c]\!], \nu^M)$ and for any $x^*\in X^*$ the corresponding exponential process (1.15) is a local martingale on $[0,\tau_{G(x^*)})$, and that $([\![M^c]\!], \nu^M)$ is the unique pair of a bilinear form-valued predictable process and a predictable random measure with this property. This is a natural generalization of the Lévy–Khinchin formula (1.6): if we set $M$ to be quasi-left continuous with independent increments, then $\tau_{G(x^*)} = \infty$ and $G(x^*)$ is deterministic, and consequently (1.15) being a local martingale implies (1.6). The proof of the fact that (1.15) is a local martingale on $[0,\tau_{G(x^*)})$ presented in Section 11 follows directly from the multidimensional case shown by Jacod and Shiryaev in [52].

In Section 12 we discover $L^p$-inequalities for characteristically subordinated and characteristically dominated martingales. These notions are predictable versions of weak differential subordination of martingales (see [91,115,116,118]) and martingale domination (see [19,89,119]) and have the following form: for a Banach space $X$, an $X$-valued martingale $N$ is characteristically subordinate to an $X$-valued martingale $M$ if for any $x^*\in X^*$ we have that a.s.
and N is characteristically dominated by M if a.s.
(here $M^c$ and $N^c$ are the continuous parts of $M$ and $N$, see Subsection 2.7). In Subsection 12.1 we compare weak differential subordination and characteristic subordination (these properties turn out to be incomparable) and show inequalities (1.9) for characteristically subordinated martingales. In Subsection 12.2 we show inequalities (1.9) for quasi-left continuous characteristically dominated martingales (both estimates are proven in the UMD setting). $L^p$-estimates for general characteristically dominated martingales remain open (see Remark 12.10), as the author does not know how to obtain such estimates in the discrete case, though this case is very much in the spirit of the original work of Zinn [122].
At the end of the present paper there are appendix Sections A and B, where we collect some technical facts concerning tangency and martingale approximations.
Throughout this section we have repeatedly referred to the mysterious UMD spaces. Recall that UMD spaces were introduced by Burkholder in the 1980s in his work on martingale transforms (see e.g. [15,16,17,20]), and nowadays these spaces are used abundantly in vector-valued stochastic and harmonic analysis (see e.g. [11,39,46,81,101,115,119]). Let us shortly outline here where exactly the UMD property is needed and used in the present paper.
On the other hand, we obtain several new characterizations of the UMD property. This demonstrates once again that the UMD property is not just a technical assumption, but a key player in any game involving martingales in Banach spaces.

Preliminaries
Throughout the present article every Banach space is considered over the scalar field $\mathbb R$. (This is done as we are going to work with continuous-time martingales, whose properties are well understood only in the case of the real scalar field, see e.g. [52,56,95].) Let $X$ be a Banach space and let $B \subset X$ be Borel. Then we denote the σ-algebra of all Borel subsets of $B$ by $\mathcal B(B)$.
For $a, b \in \mathbb R$ we write $a \lesssim_A b$ if there exists a constant $c$ depending only on $A$ such that $a \le cb$; $a \gtrsim_A b$ is defined analogously. We write $a \eqsim_A b$ if both $a \lesssim_A b$ and $a \gtrsim_A b$ hold simultaneously.
We will need the following definitions. Recall that D(A, X) endowed with the sup-norm is a Banach space (see e.g. [105,115]).
For a Banach space X and for a measurable space (S, Σ) a function f : S → X is called strongly measurable if there exists a sequence (f n ) n≥1 of simple functions such that f n → f pointwise on S (see [46,Section 1.1]). In the sequel we will call a function f strongly predictable if it is strongly measurable with respect to the predictable σ-algebra (which is either P, see Subsection 2.5, or P, see Subsection 2.8, depending on the underlying S).
For a Banach space $X$ and a function $A : \mathbb R_+ \to X$ we set $A^* := \sup_{t\ge0}\|A_t\| \in [0,\infty]$.
Throughout the paper, unless stated otherwise, the probability space and filtration are assumed to be generated by all the processes involved.

Enlargement of a filtered probability space
We will need the following definition of an enlargement of a filtered probability space (see e.g. [64, pp. 172-174]).
Then a probability space $(\overline\Omega, \overline{\mathcal F}, \overline{\mathbb P})$ with a filtration $\overline{\mathbb F} = (\overline{\mathcal F}_t)_{t\ge0}$ is called an enlargement of $(\Omega, \mathcal F, \mathbb P)$ and $\mathbb F$ if there exists a measurable space $(\widehat\Omega, \widehat{\mathcal F})$ such that $\overline\Omega = \Omega\times\widehat\Omega$ and $\overline{\mathcal F} = \mathcal F\otimes\widehat{\mathcal F}$, if there exists a family of probability measures

and if for any $\omega\in\Omega$ there exists a filtration $\widehat{\mathbb F}^\omega = (\widehat{\mathcal F}^\omega_t)_{t\ge0}$ such that for any

Example 2.4. A classical example of an enlargement of a filtered probability space is a product space, i.e. the case when $\widehat{\mathbb P}_\omega = \widehat{\mathbb P}$ and $\widehat{\mathcal F}^\omega_t = \widehat{\mathcal F}_t$, $t\ge0$, for any $\omega\in\Omega$, for some fixed measure $\widehat{\mathbb P}$ and some fixed filtration $\widehat{\mathbb F} = (\widehat{\mathcal F}_t)_{t\ge0}$.

Conditional expectation on a product space. Conditional probability and conditional independence
Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, and assume that there exist probability spaces $(\Omega',\mathcal F',\mathbb P')$ and $(\Omega'',\mathcal F'',\mathbb P''_{\omega'})_{\omega'\in\Omega'}$ (where $\mathbb P''_{\omega'}$ depends on $\omega'\in\Omega'$ in an $\mathcal F'$-measurable way, see Subsection 2.1) such that
$$ \Omega = \Omega'\times\Omega'',\quad \mathcal F = \mathcal F'\otimes\mathcal F'',\quad \mathbb P(A'\times A'') = \int_{A'} \mathbb P''_{\omega'}(A'')\, d\mathbb P'(\omega'),\quad A'\in\mathcal F',\ A''\in\mathcal F''. \qquad (2.1) $$
A particular example would be if $\mathbb P''_{\omega'} = \mathbb P''$ is a probability measure which does not depend on $\omega'\in\Omega'$. Let $X$ be a Banach space, and let $f\in L^1(\Omega;X)$ (see [46, Section 1.2] for the definition of $L^p(\Omega;X)$). Then $\mathbb E(f|\mathcal F')$ is well defined (see [46, Section 2.6]; by $\mathbb E(\cdot|\mathcal F')$ here we mean $\mathbb E(\cdot|\mathcal F'\otimes\{\Omega'',\emptyset\})$), and moreover, by Fubini's theorem $f(\omega',\cdot)$ exists and is strongly measurable for a.e. $\omega'\in\Omega'$ (the proof is analogous to the one provided by [9, Section 3.4]). It is easy to see that for a.e. $\omega'\in\Omega'$
$$ \mathbb E(f|\mathcal F')(\omega') = \mathbb E_{\Omega''} f(\omega',\cdot), \qquad (2.2) $$
where the notation $\mathbb E_{\Omega''}$ means averaging, for every fixed $\omega'\in\Omega'$, over $\Omega''$; this follows from Fubini's theorem applied to an arbitrary $A\in\mathcal F'$.

Let $(\Omega,\mathcal F,\mathbb P)$ be a probability space, $(S,\Sigma)$ be a measurable space, and $\xi:\Omega\to S$ be a random variable. Let $\mathcal G\subset\mathcal F$ be a sub-σ-algebra. Then we define the conditional probability $\mathbb P(\xi|\mathcal G) : \Sigma\to L^1(\Omega)$ by
$$ \mathbb P(\xi|\mathcal G)(B) := \mathbb P(\xi\in B|\mathcal G) = \mathbb E(\mathbf 1_{\xi\in B}|\mathcal G), \qquad B\in\Sigma. $$
Now let $N\ge1$ and let $(\xi_n)_{n=1}^N$ be $S$-valued random variables. Then $\xi_1,\dots,\xi_N$ are called conditionally independent given $\mathcal G$ if for any sets $B_1,\dots,B_N\in\Sigma$ we have that
$$ \mathbb P(\xi_1\in B_1,\dots,\xi_N\in B_N|\mathcal G) = \mathbb P(\xi_1\in B_1|\mathcal G)\cdot\ldots\cdot\mathbb P(\xi_N\in B_N|\mathcal G) \quad\text{a.s.} $$
In the sequel we will need the following proposition.
Proof. By the definition of conditional independence we need to verify the product identity for any sets $B_1,\dots,B_N\in\Sigma$. To this end note that by (2.2), for $\mathbb P'$-a.e. $\omega'\in\Omega'$ the desired identity holds, which completes the proof.
We will also need the following consequence of the proposition.
Corollary 2.7. Let (S, Σ) and (T, T ) be measurable spaces, let (Ω, F, P) be defined by (2.1), and let ξ : Ω → S and η : Ω → T be measurable. Assume that Proof. The corollary follows from Proposition 2.6 if one sets Ω := A, P := L(ξ), and where the latter exists by [ We refer the reader to [46] for further details on vector-valued integration and vector-valued conditional expectation.

The UMD property
A Banach space $X$ is called a UMD space if for some (equivalently, for all) $p\in(1,\infty)$ there exists a constant $\beta>0$ such that for every $N\ge1$, every martingale difference sequence $(d_n)_{n=1}^N$ in $L^p(\Omega;X)$, and every $\{-1,1\}$-valued sequence $(\varepsilon_n)_{n=1}^N$ one has
$$ \Bigl(\mathbb E\Bigl\|\sum_{n=1}^N \varepsilon_n d_n\Bigr\|^p\Bigr)^{1/p} \le \beta\,\Bigl(\mathbb E\Bigl\|\sum_{n=1}^N d_n\Bigr\|^p\Bigr)^{1/p}. $$
The least admissible constant $\beta$ is denoted by $\beta_{p,X}$ and is called the UMD$_p$ constant of $X$ (or just the UMD constant of $X$ if the value of $p$ is understood). It is well known (see [46, Chapter 4]) that $\beta_{p,X}\ge p^*-1$ and that $\beta_{p,H} = p^*-1$ for a Hilbert space $H$ and any $1<p<\infty$ (here $p^* := \max\{p, p/(p-1)\}$). We will also frequently use the following equivalent definition of the UMD property: $X$ is UMD if and only if for any $1\le p<\infty$ and for any $(d_n)_{n=1}^N$ and $(\varepsilon_n)_{n=1}^N$ as above we have that
$$ \mathbb E\sup_{1\le M\le N}\Bigl\|\sum_{n=1}^M \varepsilon_n d_n\Bigr\|^p \eqsim_{p,X} \mathbb E\sup_{1\le M\le N}\Bigl\|\sum_{n=1}^M d_n\Bigr\|^p. $$
Note that a similar definition of the UMD property can be provided for a general convex function of moderate growth (see e.g. [15, p. 1000]). We refer the reader to [15,20,29,39,40,46,47,68,94,101,119] for details on UMD Banach spaces.
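As a sanity check (a sketch, with notation as above): for a Hilbert space $H$ and $p = 2$ the UMD inequality holds with constant $1 = p^*-1$, since martingale differences are orthogonal:

```latex
% orthogonality of martingale differences in a Hilbert space:
\mathbb E\Bigl\|\sum_{n=1}^N \varepsilon_n d_n\Bigr\|^2
  = \sum_{n=1}^N \mathbb E\|d_n\|^2
  = \mathbb E\Bigl\|\sum_{n=1}^N d_n\Bigr\|^2 ,
% where the cross terms vanish because, for m < n,
\mathbb E\langle d_m, d_n\rangle
  = \mathbb E\bigl\langle d_m,\; \mathbb E(d_n \,|\, \mathcal F_{n-1})\bigr\rangle
  = 0 .
```

The lower bound $\beta_{p,X}\ge p^*-1$ shows that no Banach space can do better than this Hilbert-space behaviour.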

Stopping times
A stopping time $\tau$ is called predictable if there exists a sequence of stopping times $(\tau_n)_{n\ge1}$ such that $\tau_n < \tau$ a.s. on $\{\tau > 0\}$ and $\tau_n \nearrow \tau$ a.s. as $n\to\infty$. A stopping time $\tau$ is called totally inaccessible if $\mathbb P(\tau = \sigma < \infty) = 0$ for any predictable stopping time $\sigma$.
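Two standard examples may help fix these notions (a sketch; the Poisson process $N$ below is introduced only for this illustration, and the second claim is a classical fact stated without proof):

```latex
% a deterministic time t_0 > 0 is predictable, announced by
\tau := t_0 : \qquad \tau_n := t_0 - 1/n \nearrow \tau ;
% the first jump time of a Poisson process N is totally inaccessible:
\tau := \inf\{t \ge 0 : N_t = 1\}, \qquad
  \mathbb P(\tau = \sigma < \infty) = 0
  \ \text{for every predictable stopping time } \sigma .
```

Intuitively, a predictable time can be announced in advance, while the jump of a Poisson process comes as a complete surprise.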
With a predictable stopping time $\tau$ we associate a σ-field $\mathcal F_{\tau-}$, which has the following form:
$$ \mathcal F_{\tau-} = \sigma\Bigl(\bigcup_{n\ge1}\mathcal F_{\tau_n}\Bigr), \qquad (2.5) $$
where $(\tau_n)_{n\ge1}$ is a sequence of stopping times announcing $\tau$ (see [56, p. 491] for details).
Later on we will work with different types of martingales based on the properties of their jumps, and in particular we will frequently use the following definition (see e.g. Subsection 2.7). Recall that for a càdlàg process $A$ and for a stopping time $\tau$ we set $\Delta A_\tau := A_\tau - A_{\tau-}$. We refer the reader to [32,52,56,116,118] for further details.

Martingales: real-and Banach space-valued
Let (Ω, F, P) be a probability space with a filtration F = (F t ) t≥0 which satisfies the usual conditions (see [52,56,95]). Then particularly F is right-continuous. A predictable σ-algebra P is a σ-algebra on R + × Ω generated by all predictable rectangles of the form (s, t] × B, where 0 ≤ s < t and B ∈ F s . Let X be a Banach space. An adapted process M : is called a local martingale if there exists a nondecreasing sequence (τ n ) n≥1 of stopping times such that τ n ∞ a.s. as n → ∞ and M τn is a martingale for any n ≥ 1 (recall that for a stopping time τ we set M τ t := M τ ∧t , t ≥ 0, which is a local martingale given M is a local martingale, see [52,56,95]). It is well known that in the real-valued case any local martingale is càdlàg (i.e. has a version which is right-continuous and that has limits from the left-hand side). The same holds for a general X-valued local martingale M as well (see e.g. [105,115]), so for any stopping time τ one can define ΔM τ : Since · : X → R + is a convex function, and M is a martingale, M is a submartingale by Jensen's inequality, and hence by Doob's inequality (see e.g. In fact, the following theorem holds for martingales having strong L p -moments (see e.g. [110,111] for the real-valued case, the infinite dimensional case can be proven analogously, see e.g. [32,105,115,116,117,119] X)). Indeed, set (τ n ) n≥1 be a localizing sequence and for each n ≥ 1 set σ n := inf{t ≥ 0 : M t ≥ n}. Then σ n → ∞ as n → ∞ a.s. since M has càdlàg paths, and thus τ n ∧ σ n ∧ n → ∞ as n → ∞ a.s. as well. On the other hand we have that for each n ≥ 1

Remark 2.10. Recall that any local martingale M : R_+ × Ω → X with E sup_{t≥0} ‖M_t‖ < ∞ is a martingale.
where we used the fact that M τn∧σn is a martingale as M τn is a martingale (see e.g. [56]).
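For reference, the version of Doob's maximal inequality invoked here (it is the inequality cited as (2.6) later in the text): for an X-valued martingale M and 1 < p < ∞,

```latex
\mathbb{E}\sup_{0\le s\le t}\|M_s\|^p \;\le\; \Bigl(\frac{p}{p-1}\Bigr)^p\,\mathbb{E}\|M_t\|^p,
\qquad t\ge 0 .
```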
Later we will need the following lemma, proven e.g. in [32, Subsection 5.3] (see also [56,119]). Lemma 2.11. Let X be a Banach space and let M : R_+ × Ω → X be a martingale such that lim sup_{t→∞} E‖M_t‖ < ∞. Let τ be a finite predictable stopping time. Then ΔM_τ is integrable and E(ΔM_τ | F_{τ−}) = 0 a.s. We refer the reader to [46,56,74,75,90,94,95,105,117] for further information on martingales.

Quadratic variation
Let H be a Hilbert space and let M : R_+ × Ω → H be a local martingale. We define the quadratic variation of M in the following way:
[M]_t := P-lim Σ_{n=1}^{N} ‖M_{t_n} − M_{t_{n−1}}‖²,
where the limit in probability is taken over partitions 0 = t_0 < … < t_N = t with mesh tending to zero. Note that [M] exists and is nondecreasing a.s. The reader can find more information on quadratic variations in [74,75,108] for the vector-valued setting, and in [56,75,95] for the real-valued setting.
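Two standard examples, not computed in the text but easily checked from the definition: for a real Brownian motion W and a compensated standard Poisson process N̄ = N − t one has

```latex
[W]_t = t \quad\text{a.s.},\qquad
[\bar N]_t = \sum_{0<s\le t}(\Delta \bar N_s)^2 = N_t \quad\text{a.s.},
```

so the quadratic variation of W is deterministic, while that of N̄ simply counts the jumps up to time t.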
As was shown in [78, Proposition 1] (see also [100, Theorem 2.13] and [108, Example 3.19] for the continuous case), for any H-valued martingale M there exists an adapted process q_M : R_+ × Ω → L(H), which we will call a quadratic variation derivative, such that the trace of q_M does not exceed 1 on R_+ × Ω, q_M is self-adjoint nonnegative on R_+ × Ω, and for any h, g ∈ H a.s.
[⟨M, h⟩, ⟨M, g⟩]_t = ∫_0^t ⟨q_{M,s} h, g⟩ d[M]_s, t ≥ 0.
For any martingales M, N : R_+ × Ω → H the covariation [M, N] is defined by polarization, [M, N] := ¼([M + N] − [M − N]). We refer the reader to [119] for further details.

The canonical decomposition
In this subsection we discuss the so-called canonical decomposition of martingales. First let us start with the following technical definitions. Recall that a càdlàg function A : R_+ → X is called pure jump if A_t = A_0 + Σ_{0<s≤t} ΔA_s for any t ≥ 0, where the latter sum converges absolutely. A local martingale M : R_+ × Ω → X is called purely discontinuous if [⟨M, x*⟩] is a.s. pure jump for any x* ∈ X* (see e.g. [32,116,118]). Remark 2.17. Note that by [52,56,116,118], if the canonical decomposition of a local martingale M exists, then M^q and M^a collect different jumps of M, i.e. a.s. (2.8) Then the following theorem holds, which was first proved in [76,120] in the real-valued case, and in [116,118,119] in the vector-valued case (see also [56, Chapter 25]). A closer look at each part of the canonical decomposition reveals that M^c is in fact a time-changed stochastic integral with respect to a cylindrical Brownian motion (see Subsection 3.3), M^q is a time-changed stochastic integral with respect to a Poisson random measure (see Subsection 2.9), while M^a can be represented as a discrete martingale if it has finitely many jumps (see Subsections 3.6 and B.1; see also [32,56,116]). Thus we often call M^c the Wiener-like part and M^q the Poisson-like part, while M^a is often called a discrete-like part of M: in many cases the corresponding techniques help in finding the required inequalities for M^c, M^q, and M^a.
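A simple illustration (standard, not taken from the text): let W be a Brownian motion, N̄ a compensated standard Poisson process, and M^a_t := Σ_{1≤n≤t} d_n a martingale jumping only at the deterministic (hence predictable) times n = 1, 2, … via a martingale difference sequence (d_n). Then for M = W + N̄ + M^a the canonical decomposition is

```latex
M^c = W,\qquad M^q = \bar N,\qquad M^a_t = \sum_{1\le n\le t} d_n ,
```

where M^c is continuous, M^q is purely discontinuous quasi-left continuous (its jump times are totally inaccessible), and M^a is purely discontinuous with accessible jumps.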
Later we will need the following lemma shown in [32, Subsection 5.1] (see [56] for the real-valued version). Recall that two stopping times τ and σ have disjoint graphs if P(τ = σ < ∞) = 0. Lemma 2.20. Let X be a Banach space and let M : R_+ × Ω → X be a purely discontinuous local martingale with accessible jumps. Then there exists a sequence (τ_n)_{n≥1} of finite predictable stopping times with disjoint graphs such that a.s.
M_t = Σ_{n≥1} ΔM_{τ_n} 1_{[0,t]}(τ_n), t ≥ 0.

Random measures
Let (J, J) be a measurable space such that J is countably generated. A family μ = {μ(ω; dt, dx), ω ∈ Ω} of nonnegative measures on (R_+ × J, B(R_+) ⊗ J) is called a random measure. A random measure μ is called integer-valued if it takes values in N ∪ {∞}, i.e. for each A ∈ B(R_+) ⊗ J one has that μ(A) ∈ N ∪ {∞} a.s., and if μ({t} × J) ∈ {0, 1} a.s. for all t ≥ 0 (so μ is a sum of atoms with a.s. disjoint supports, see [52, Proposition II.1.14]). We say that μ is non-atomic in time if μ({t} × J) = 0 a.s. for all t ≥ 0.
Let O be the optional σ-algebra on R_+ × Ω, i.e. the σ-algebra generated by all càdlàg adapted processes. Let Õ := O ⊗ J and P̃ := P ⊗ J (see Subsection 2.5 for the definition of P). A random measure μ is called optional (resp. predictable) if for any Õ-measurable (resp. P̃-measurable) nonnegative F : R_+ × Ω × J → R_+ the integral process t ↦ ∫_{[0,t]×J} F(s, ·, x) μ(·; ds, dx), as a function from R_+ × Ω to R_+, is optional (resp. predictable).
Let X be a Banach space. Then we can extend stochastic integration with respect to random measures to X-valued processes in the following way. Let F : R_+ × Ω × J → X be elementary predictable, i.e. there exist a partition B_1, …, B_N of J, times 0 = t_0 < t_1 < … < t_L, and simple X-valued random variables (ξ_{nℓ})_{n=1,ℓ=1}^{N,L} such that ξ_{nℓ} is F_{t_{ℓ−1}}-measurable for any 1 ≤ ℓ ≤ L and 1 ≤ n ≤ N and
F = Σ_{n=1}^{N} Σ_{ℓ=1}^{L} ξ_{nℓ} 1_{(t_{ℓ−1}, t_ℓ]} 1_{B_n}.
Let μ be a random measure. The integral
∫_{R_+×J} F dμ := Σ_{n=1}^{N} Σ_{ℓ=1}^{L} ξ_{nℓ} μ((t_{ℓ−1}, t_ℓ] × B_n)
is then well-defined, it is optional (resp. predictable) if μ is optional (resp. predictable), and ∫_{R_+×J} F dμ is a.s. bounded.
A random measure μ is called P̃-σ-finite if there exists an increasing sequence of sets (A_n)_{n≥1} ⊂ P̃ such that ∫_{R_+×J} 1_{A_n}(s, ω, x) μ(ω; ds, dx) is finite a.s. and ∪_n A_n = R_+ × Ω × J. According to [52, Theorem II.1.8] every P̃-σ-finite optional random measure μ has a compensator: a unique P̃-σ-finite predictable random measure ν such that
E ∫_{R_+×J} F dμ = E ∫_{R_+×J} F dν
for each P̃-measurable real-valued nonnegative F. For any optional P̃-σ-finite measure μ we define the associated compensated random measure by μ̄ := μ − ν.
For each P̃-strongly-measurable F : R_+ × Ω × J → X integrable with respect to μ̄ one has the Novikov-type estimate (2.13). For an X-valued martingale M we associate a jump measure μ^M, which is a random measure on R_+ × X that counts the jumps of M:
(2.14) μ^M(ω; A) := Σ_{t>0} 1_A(t, ΔM_t(ω)) 1_{ΔM_t(ω)≠0}, A ∈ B(R_+) ⊗ B(X).
Note that μ^M is P̃-σ-finite. We will frequently use the following fact, which was proved in [52, Corollary II.1.19] (see also [32,56,57]): M is quasi-left continuous if and only if the compensator ν^M of μ^M is non-atomic in time. We refer the reader to [32,41,50,52,56,57,69,71,84,85,119] for details on random measures and stochastic integration with respect to random measures.
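As a sanity check (a standard computation, not from the text): for a compensated standard Poisson process N̄ = N − t, the jump measure defined in (2.14) and its compensator are

```latex
\mu^{N}(\cdot\,;\mathrm{d}t\times\mathrm{d}x)
= \sum_{n\ge1}\delta_{(\tau_n,1)}(\mathrm{d}t\times\mathrm{d}x),
\qquad
\nu^{N}(\mathrm{d}t\times\mathrm{d}x)
= \mathrm{d}t\otimes\delta_{1}(\mathrm{d}x),
```

where (τ_n)_{n≥1} are the jump times of N; in particular ν^N is non-atomic in time, in accordance with the quasi-left continuity of N̄.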

Poisson random measures
An important example of random measures is a Poisson random measure.
In the sequel we will need the following definition of an integral with respect to a Poisson random measure. Definition 2.24. Let X be a Banach space, (S, Σ, ρ) be a measure space, and N_ρ be a Poisson random measure on S with intensity ρ. Then a strongly Σ-measurable function F : S → X is called integrable with respect to Ñ_ρ := N_ρ − ρ if there exists an increasing family of sets (A_n)_{n≥1} ⊂ Σ such that ∪_n A_n = S, ∫_{A_n} ‖F‖ dρ < ∞, and ∫_{A_n} F dÑ_ρ converges in L^1(Ω; X) as n → ∞. Remark 2.25. Let G : S → X be strongly Σ-measurable such that ∫_S ‖G‖ dρ < ∞. Then G ∈ L^1(S, ρ; X), and as for any step function H ∈ L^1(S, ρ; X) we have that E ∫_S ‖H‖ dN_ρ = ‖H‖_{L^1(S,ρ;X)} by the definition of N_ρ (in particular, EN_ρ(A) = ρ(A) for any A ∈ Σ), we can extend the stochastic integral definition to G by a standard extension procedure. Thus ∫_{A_n} F dÑ_ρ := ∫_{A_n} F dN_ρ − ∫_{A_n} F dρ, and hence it is independent of ξ_1, …, ξ_n. Thus we have that for any x* ∈ X*, E(⟨ξ, x*⟩ | σ(N_ρ|_{A_n})) = ⟨ξ_n, x*⟩ for any n ≥ 1 (which follows from [46, Theorem 3.3.2] and from the definition (2.15) of ξ), so ⟨ξ_n, x*⟩ converges to ⟨ξ, x*⟩ by [46, Theorem 3.3.2]; thus by the Itô–Nisio theorem [47, Theorem 6.4.1] ξ_n converges to ξ in L^1(Ω; X).
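Two classical identities consistent with Definition 2.24 and Remark 2.25 (Campbell's formulas for a Poisson random measure; standard facts, not stated above): for real-valued F ∈ L^1(S, ρ),

```latex
\mathbb{E}\int_S F\,\mathrm{d}N_\rho = \int_S F\,\mathrm{d}\rho,
\qquad
\mathbb{E}\exp\Bigl(i\int_S F\,\mathrm{d}N_\rho\Bigr)
= \exp\Bigl(\int_S\bigl(e^{iF}-1\bigr)\,\mathrm{d}\rho\Bigr).
```

In particular E ∫_S F dÑ_ρ = 0, which is the mean-zero property used repeatedly below.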

Stochastic integration
Let H be a Hilbert space and X be a Banach space. For each x ∈ X and h ∈ H we denote the linear operator g ↦ ⟨g, h⟩x, g ∈ H, by h ⊗ x. A process Φ : R_+ × Ω → L(H, X) is called elementary predictable if it is of the form
(2.16) Φ = Σ_{k=1}^{K} Σ_{ℓ=1}^{L} Σ_{n=1}^{N} 1_{(t_{k−1}, t_k] × B_{ℓk}} h_n ⊗ x_{kℓn},
where 0 = t_0 < … < t_K < ∞, for each k = 1, …, K the sets B_{1k}, …, B_{Lk} are in F_{t_{k−1}}, the vectors h_1, …, h_N are in H, and (x_{kℓn})_{k,ℓ,n=1}^{K,L,N} are elements of X. Let M : R_+ × Ω → H be a local martingale. Then we define the stochastic integral Φ · M : R_+ × Ω → X of Φ with respect to M as follows:
(Φ · M)_t := Σ_{k=1}^{K} Σ_{ℓ=1}^{L} Σ_{n=1}^{N} 1_{B_{ℓk}} ⟨M_{t_k∧t} − M_{t_{k−1}∧t}, h_n⟩ x_{kℓn}, t ≥ 0.
For an H-cylindrical Brownian motion W_H we can define a stochastic integral of Φ of the form (2.16) in the same way. Further, if X = R, then due to [27, Theorem 4.12] (see also [56,81,108]) it is known that a.s. (2.17), and in particular by the Burkholder–Davis–Gundy inequalities [56, Theorem 17.7] we have that for any 0 < p < ∞ the strong L^p-moments of Φ · W_H are comparable with the corresponding moments of the square root of the right-hand side of (2.17). We refer the reader to [27,32,52,56,74,75,76,81,108,119] for further details on stochastic integration and cylindrical Brownian motions.
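In the scalar case X = R, taking expectations in (2.17) yields the classical Itô isometry: identifying Φ ∈ L(H, R) with an element of H,

```latex
\mathbb{E}\bigl|(\Phi\cdot W_H)_t\bigr|^2
= \mathbb{E}\int_0^t \|\Phi_s\|_H^2\,\mathrm{d}s,
\qquad t\ge 0,
```

which is the p = 2 case of the Burkholder–Davis–Gundy estimate mentioned above.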

γ-radonifying operators
Let H be a separable Hilbert space and let X be a Banach space. Let T ∈ L(H, X). Then T is called γ-radonifying if
(2.19) ‖T‖_{γ(H,X)} := (E ‖Σ_{n≥1} γ_n T h_n‖²)^{1/2} < ∞,
where (h_n)_{n≥1} is an orthonormal basis of H, and (γ_n)_{n≥1} is a sequence of independent standard Gaussian random variables (if the series on the right-hand side of (2.19) does not converge, then we set ‖T‖_{γ(H,X)} := ∞). Note that ‖T‖_{γ(H,X)} does not depend on the choice of the orthonormal basis (h_n)_{n≥1} (see [47, Section 9.2] and [80] for details). Often we will call ‖T‖_{γ(H,X)} the γ-norm of T. Note that if X is a Hilbert space, then ‖T‖_{γ(H,X)} coincides with the Hilbert–Schmidt norm of T. γ-norms are exceptionally important in analysis as they are easily computable and enjoy a number of useful properties such as the ideal property, γ-multiplier theorems, Fubini-type theorems, etc., see [47,80].
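A worked example (standard, see [47, Chapter 9]): if X = L^q(S) for some 1 ≤ q < ∞, then the γ-norm is equivalent, with constants depending only on q, to a square function norm:

```latex
\|T\|_{\gamma(H,\,L^q(S))}\;\eqsim_{q}\;
\Bigl\|\Bigl(\sum_{n\ge1}|Th_n|^2\Bigr)^{1/2}\Bigr\|_{L^q(S)},
```

which reduces to the Hilbert–Schmidt norm when q = 2.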

Tangent martingales: the discrete case
Let X be a Banach space and let (d_n)_{n≥1} and (e_n)_{n≥1} be X-valued martingale difference sequences with respect to a filtration (F_n)_{n≥0}. Recall that (d_n)_{n≥1} and (e_n)_{n≥1} are called tangent if for each n ≥ 1 the conditional distributions of d_n and e_n given F_{n−1} coincide a.s. Example 2.28. Let (v_n)_{n≥1} be a predictable uniformly bounded X-valued sequence, and let (ξ_n)_{n≥1} and (ξ̃_n)_{n≥1} be adapted sequences of mean-zero real-valued independent random variables such that ξ_n and ξ̃_n are equidistributed, integrable, and independent of F_{n−1} for any n ≥ 1. Then the martingale difference sequences (ξ_n v_n)_{n≥1} and (ξ̃_n v_n)_{n≥1} are tangent. Indeed, for any n ≥ 1 and A ∈ B(X) we have that a.s.
P(ξ_n v_n ∈ A | F_{n−1}) = P(ξ_n ∈ A/v_n | F_{n−1}) =(*) P(ξ̃_n ∈ A/v_n | F_{n−1}) = P(ξ̃_n v_n ∈ A | F_{n−1}),
where (*) follows from the fact that ξ_n and ξ̃_n are equidistributed and independent of F_{n−1}, and the fact that v_n is F_{n−1}-measurable; here for A ⊂ X and x ∈ X we define A/x ⊂ R by A/x := {r ∈ R : rx ∈ A}. It was shown by Hitczenko in [44] (see also [24,28,29,32,46,64]) that any X-valued martingale difference sequence (d_n)_{n≥1} has a decoupled tangent martingale difference sequence on an enlarged probability space with an enlarged filtration, i.e. there exist an enlarged filtration F with respect to which (d_n) remains a martingale difference sequence, an F-adapted martingale difference sequence (e_n)_{n≥1} tangent to (d_n)_{n≥1}, and a σ-algebra G ⊂ F_∞ such that P(e_n|F_{n−1}) = P(e_n|G), n ≥ 1, and (e_n)_{n≥1} are conditionally independent given G (see Subsection 2.2). Moreover, (e_n)_{n≥1} is unique in distribution. Later, in Section 3, we will extend the construction of such a martingale to the continuous-time case.
Remark 2.29. Note that due to Proposition 2.6, the construction of a decoupled tangent martingale difference sequence [28,29,44], and the uniqueness of its distribution, we can give the following equivalent definition: (e_n)_{n≥1} is a decoupled tangent martingale difference sequence for (d_n)_{n≥1} if and only if for a.e. ω ∈ Ω the sequence (e_n(ω))_{n≥1} is a sequence of independent mean-zero random variables such that e_n(ω) has P(d_n|F_{n−1})(ω) as its distribution (see [28,29] or the proof of Theorem 3.39 for the construction of P(d_n|F_{n−1})(ω)).

Tangent martingales: the continuous-time case
This section is devoted to continuous-time tangent martingales and their properties. As the notion of tangency in the continuous-time case (see Definition 3.1 below) only concerns the jumps of a process and the quadratic variation of its continuous part, throughout this section we will assume that any martingale starts at zero. Also, in the sequel we will frequently use a stopping-time argument, which is allowed by Theorem A.3. In particular, while talking about tangent local martingales M and N we can automatically assume that these martingales have finite strong L^1-moments, i.e. E sup_{t≥0} ‖M_t‖ and E sup_{t≥0} ‖N_t‖ can be presumed finite unless stated otherwise (see Remark 2.10).

Local characteristics and tangency
In order to define tangent martingales in the continuous-time case we need local characteristics. Let

Remark 3.2.
Note that this definition of tangency agrees with the one for discrete martingales given in Subsection 2.12. Indeed, let (d_n)_{n≥1} and (e_n)_{n≥1} be tangent martingale difference sequences. Then they are tangent in the continuous-time sense if for any n ≥ 1 the compensators of the random measures μ^{d_n} and μ^{e_n} on X defined by μ^{d_n}(A) := 1_{d_n∈A} and μ^{e_n}(A) := 1_{e_n∈A}, A ∈ B(X), coincide. But these compensators coincide exactly with P(d_n|F_{n−1}) and P(e_n|F_{n−1}) respectively, since by the definition (2.3) of P(d_n|F_{n−1}) and P(e_n|F_{n−1}) and by (2.20) for any Borel set A ⊂ X one has that ν^{d_n}(A) = E(1_{d_n∈A}|F_{n−1}) = P(d_n|F_{n−1})(A) a.s. The converse direction can be shown similarly.
Now we are ready to define a decoupled tangent local martingale. Recall that conditional independence was defined in (2.4) and an enlargement of a filtered probability space was defined in Definition 2.3. Note that we can always set F := F_∞, and that N may be assumed to have independent increments given F due to Proposition 2.6. We refer the reader to [64, p. 174] and [24,28,29,51,58,63,65] for further details on decoupled tangent local martingales. While [[M]] does not depend on the enlargement (by [56, Theorem 26.14], [116], and its definition (2.7)), ν^M can change. E.g. let X = R and let M = N^1 − N^2, where N^1 and N^2 are independent standard Poisson processes. Let F = (F_t)_{t≥0} be generated by M and let F̃_t be generated by F_t and σ((τ_n)_{n≥1}), where (τ_n)_{n≥1} are all the jump times of M. Then M is both an F- and an F̃-martingale, but for any A ⊂ R and I ⊂ R_+ the compensators differ: in the first case we have where M̃ is an independent copy of M. As τ_n and M̃ are independent, we have that the martingale property holds for any t ≥ 0, and thus M is a martingale by a standard argument. Consequently, N(ω) in the definition above can be assumed to be a martingale instead of a local martingale.
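For the example above, the compensator of the jump measure with respect to the natural filtration F can be computed explicitly (a standard fact about Lévy processes): since M = N^1 − N^2 jumps by +1 and by −1, each at rate 1,

```latex
\nu^{M}(\mathrm{d}t\times\mathrm{d}x)
= \mathrm{d}t\otimes\bigl(\delta_{1}+\delta_{-1}\bigr)(\mathrm{d}x),
```

while with respect to the enlarged filtration that knows the jump times (τ_n)_{n≥1} in advance, the compensator charges the graphs of the τ_n and is no longer deterministic.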
Then M and N are tangent by (3.1), and N (ω) is a martingale with independent increments and local characteristics (0, ν M (ω)) as the same holds for (c n ) n≥1 thanks to Remark 2.29, so N is a decoupled tangent martingale to M . The converse can be shown analogously.
Now we are going to state two main results of the paper.

Theorem 3.7. Let X be a Banach space. Then X is UMD if and only if every X-valued local martingale has local characteristics. Moreover, if this is the case, then for any tangent X-valued local martingales M and N and for any 1 ≤ p < ∞ one has E sup_{t≥0} ‖M_t‖^p ≂_{p,X} E sup_{t≥0} ‖N_t‖^p.
Theorem 3.8. Let X be a UMD Banach space. Then for any X-valued local martingale there exists a decoupled tangent local martingale.
In order to prove these theorems we will treat each part of the canonical decomposition separately in Subsections 3.3, 3.5, and 3.6, and then combine the cases by means of Subsection 3.2 in Subsection 3.7.

Local characteristics and canonical decomposition
Let us first show that different parts of the canonical decomposition are responsible for different parts of the corresponding local characteristics, and in particular that if two martingales are tangent, then so are the corresponding parts of their canonical decompositions. We will prove the theorem using the following elementary propositions, such that for any A ∈ P we have that a.s.
For the proof we will need the following lemma, which follows from [32]. Lemma 3.13. Let (J, J) be a measurable space, let μ be an integer-valued random measure on R_+ × J, and let τ be a predictable stopping time such that μ1_τ = μ. Then for the corresponding compensator ν we have that ν1_τ = ν, i.e. {τ} × X is a set of full ν^M-measure a.s. Proof of Proposition 3.12. First notice that [[M^c]] = 0 analogously to Proposition 3.11. As M is purely discontinuous with accessible jumps, by Lemma 2.20

Continuous martingales
First let us consider continuous martingales. The theory of continuous martingales is already classical and in particular due to [36,81,119] the following proposition holds true.
What we are interested in is constructing for an X-valued continuous martingale M a decoupled tangent martingale N (see Definition 3.3), which we will need later in Subsection 3.7. For the proof we will need the following statement concerning Brownian representations.
where (*) holds by [56, Theorem 26.12]. Therefore the sum converges in measure, and as it is a sum of nonnegative random variables, it converges a.s. For a similar reason and by the continuity of the conditional expectation [46, Section 2.6] one has that M is an H-valued martingale, so that by e.g. [108, (3.8)] for any 0 ≤ s ≤ t a.s.
Consequently, by [108, Example 3.18 and Proposition 4.7] (see also [88, Theorem 2], [54,114], [27, Theorem 8.2], and [60, Theorem 3.4.2]) there exist an enlarged probability space (Ω, F, P) endowed with an enlarged filtration F = (F_t)_{t≥0}, an F-adapted cylindrical Brownian motion W_H, and a predictable process Φ : Notice that by the construction of Φ presented in [108, Proposition 4.7] it depends only on the family of processes ([⟨M, h⟩, ⟨M, g⟩])_{h,g∈H}, and the latter, by the bilinearity of the covariation, depends only on the family of processes The desired follows by setting f_n := √(a_n n) Φ*h_n for any n ≥ 1. Proof of Theorem 3.18. Without loss of generality, by the Pettis measurability theorem [46, Theorem 1.1.20], we may assume that X is separable (and as X is UMD, it is reflexive, so X* is separable as well). By a stopping-time argument we may assume that M is uniformly bounded a.e. on R_+ × Ω. Let (x*_n)_{n≥1} be a dense subset of the unit ball of X*, and let us define a random time-change (a.s. for any x* ∈ X*, ‖x*‖ ≤ 1; see Remark 2.13 for the definition of γ(·)). Note that, being strictly increasing, this time-change is invertible, i.e. for the time-changed filtration G = (G_t)_{t≥0} := (F_{τ_t})_{t≥0} there exists a strictly increasing F-predictable continuous process A : Thus by Lemma 3.19 we can show that there exist an enlarged probability space (Ω, F, P), an enlarged filtration G = (G_t)_{t≥0}, a Hilbert space H, a G-adapted cylindrical Wiener process W_H, and a family of Let us first show that for any x* ∈ X* there exists a G-predictable process By the definition of Y, by the linearity of the stochastic integral, and by (3.7), for each m ≥ 1 there exists a G-predictable process f_{y_m} : where the latter follows from the fact that M is uniformly bounded. Therefore the existence of the desired f_{x*} follows e.g. from (2.18) and the fact that the space L^p(Ω; L²(R_+; H)) restricted to a proper predictable σ-algebra is a Banach space.
Moreover, it follows from (2.17) that for any Let us now show that for a.e. ω ∈ Ω the mapping x* ↦ f_{x*}(ω) can be assumed to be linear. As Y is a linear subspace of X* generated by a countable set, it has a countable Hamel basis (z_n)_{n≥1} ⊂ X*. Let Z be the Q-span of (z_n)_{n≥1}. Then by the linearity of the stochastic integral and by the fact that Z is countable, for any z_1, …, z_k ∈ Z and any r_1, …, r_k ∈ Q we can assume that Moreover, without loss of generality, since Z is countable, we know that (3.8) holds a.s. for any z ∈ Z. Thus there exists Ω_0 ⊂ Ω of full measure such that for any ω ∈ Ω_0 we have a bounded linear operator T(ω) : By extending the operator T(ω) to the whole of X* and by the construction of f_{x*} for a general x* ∈ X* we have that T(ω)x* = f_{x*}(ω) for a.e. ω ∈ Ω, and the desired holds true.
and thus by [81, Theorem 3.5] and the fact that Finally, let us construct N. Let W̃_H be an independent cylindrical Brownian motion, let Ñ := Φ · W̃_H, and let N := Ñ ∘ A. Then N is a martingale on an enlarged probability space (Ω̃, F̃, P̃) with an enlarged filtration F̃ = (F̃_t)_{t≥0} by (2.17) and due to the fact that for any x* ∈ X* a.s.
Now let us prove that N (ω) is a martingale with independent increments and with the local characteristics ([[M ]](ω), 0). This directly follows from the construction of a stochastic integral (see Subsection 2.10 and [81]), the fact that Φ(ω) is deterministic and is in γ(L 2 (R + ; H), X) for a.e. ω ∈ Ω, the fact that the time change (τ t (ω)) t≥0 is deterministic for a.e. ω ∈ Ω, and the fact that W H does not depend on ω ∈ Ω.

Stochastic integrals with respect to random measures
Before treating the case of purely discontinuous quasi-left continuous martingales in Subsection 3.5 we will need to prove a similar result for stochastic integrals with respect to random measures (see Subsection 2.8). This case will be done via Cox processes.

Cox process
Let (J, J) be a measurable space, and let μ be an optional integer-valued random measure on R_+ × J with a compensator ν which is non-atomic in time (which is equivalent to μ being quasi-left continuous, see [57, Theorem 9.22] or [52, Corollary II.1.19]). Due to Cox [22] (see also [23,52,56,57,61]) it is known that there exist an enlarged probability space (Ω̄, F̄, P̄) = (Ω × Ω̃, F ⊗ F̃, P ⊗ P̃) (where (Ω̃, F̃, P̃) is an independent probability space on which the corresponding Poisson random measure lives, see Example 3.21), an enlarged filtration F̄ = (F̄_t)_{t≥0}, and a unique-in-distribution integer-valued random measure μ_Cox on R_+ × J, optional with respect to F̄ and having ν as a compensator, such that μ_Cox is conditionally Poisson given F, i.e. for any C ∈ P̃ (see Subsection 2.8) it is a Poisson random measure for almost any fixed ω ∈ Ω (see Subsection 2.9).
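Spelled out, the conditional-Poisson property means that, given the directing measure ν, the random variable μ_Cox(A) is Poisson distributed with parameter ν(A) (a sketch of the standard formulation, for A with ν(A) < ∞ a.s.):

```latex
\mathbb{P}\bigl(\mu_{\mathrm{Cox}}(A)=k \,\bigm|\, \nu\bigr)
= e^{-\nu(A)}\,\frac{\nu(A)^{k}}{k!},\qquad k = 0,1,2,\dots,
```

with conditional independence over disjoint sets A.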
Such a random measure μ Cox is called a Cox process directed by ν.
Then the desired measure has the following form. In the case of a general P̃-σ-finite compensator ν the latter can be expressed accordingly a.s., and then the Cox process μ_Cox will have the form constructed analogously to (3.9), but using independent standard Poisson random measures N_k on R_+ × J with compensators ν_k respectively.

Random measures: tangency and decoupling
It turns out that Cox processes play an important rôle in random measure theory: in particular, if one replaces a random measure by the corresponding Cox process, then the strong L^p-norm of a stochastic integral does not change much. Recall that a stochastic integral with respect to a random measure is defined by (2.10).
Then X is UMD if and only if for any measurable space (J, J) and any integer-valued random measure μ on R_+ × J with a compensator measure ν which is non-atomic in time one has that for any elementary predictable F : For the proof we will need the following proposition. Recall that a Poisson random measure N is called nontrivial if its compensator is nonzero (equivalently, if N itself is nonzero).

the corresponding compensated Poisson measure. Then X has the UMD property if and only if for any elementary predictable
where N ind is an independent copy of N .
Proof. First notice that as ν = λ ⊗ κ, N is time-homogeneous, i.e. the distributions of N and of the shifted measure N(·, · + t, ·) are the same for any t ≥ 0. The "only if" part follows from the inequalities (1.2) for discrete tangent martingales, Example 2.28, the definition of a stochastic integral with respect to a random measure (2.10), and the fact that by [47, Proposition 6.1.12] Let us show the "if" part. Let (3.11) be satisfied for some 1 ≤ p < ∞ and for any elementary predictable F. Then where N^1_ind and N^2_ind are independent copies of N, and (*) follows by the triangle inequality and the L^p-contractivity of the conditional expectation [46, Corollary 2.6.30]. Therefore for any predictable process a : where (*) follows from the fact that we are integrating both aF and F with respect to a symmetric independent noise, so a does not play any role. Let us show that (3.12) implies the UMD property. Without loss of generality, by assuming that J := A for some fixed A ∈ J with 0 < κ(A) < ∞ and that F has only steps of the form 1_A, we may assume that J consists of only one point and thus N is a standard compensated Poisson process with rate parameter κ(A) (so, by a time-change argument, the rate can be assumed to be 1); thus (3.12) implies that for any elementary predictable F : First assume that p > 1. Then due to Doob's maximal inequality (2.6), (3.13) is equivalent to (3.14). Let (r_n)_{n=1}^N be a sequence of independent Rademacher random variables; we will show (3.15). To this end we approximate in the L^p-sense the distributions of Σ_{n=1}^N r_n φ_n(r_1, …, r_{n−1}) and Fix ε > 0. Let A > 0 be such that for a stopping time Such an A exists since |ΔN̄_τ| ≤ 1 and τ < ∞ a.s., as N̄ is a.s. unbounded, so a.s.
and since by [56, Theorem 25.14] EN̄_τ = 0. (3.18) Let τ_0 = 0, τ_1 = τ, and for any 2 ≤ n ≤ N define By the strong Markov property of Lévy processes we have that these are independent, and thus by (3.16) and (3.18) there exists a sequence of independent Rademacher random variables, which we may without loss of generality denote by (r_n)_{n=1}^N, such that Then one has that F and a are predictable by [52, Theorem I.2.2], and moreover where L > 0 is such that ‖φ_n‖_∞ < L for any 1 ≤ n ≤ N, and (*) follows from (3.19). For the same reason we have that By letting ε → 0 and by (3.14) we obtain (3.15).

Now let p = 1. Then we need to use good-λ inequalities in order to show that (3.13) holds for any p > 1 (see Section 5).
Let us fix β > 1, δ > 0, and λ > 0, and let us define the stopping times so that by the definition of τ and ρ we have that ‖M‖ ≤ 2δλ (note that F is elementary predictable, so ρ is predictable, and hence ΔM_ρ = ΔL_ρ = 0 as M and L are quasi-left continuous), and thus by (3.13) Therefore, as ‖ΔM_t‖ ≤ ‖F_t‖ a.s. for any t ≥ 0, where (*) follows from the fact that if τ = ρ = ∞, then L coincides with L − L^{τ∧σ∧ρ} on R_+, and the fact that P(σ = ρ) = 0 as ρ is predictable and σ is totally inaccessible (see [56, Chapter 25]), while (**) holds by (3.20). On the other hand, as ‖M‖ ≤ 2δλ a.s. Consequently, one has that (3.13) holds for any p > 1 by Lemma 5.2 and by the fact that ΔM* ≤ 2M* a.s., so the UMD property follows from the case p > 1 considered above.
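For orientation, the generic good-λ scheme being used here (a standard Burkholder-type formulation; the precise variant applied in the text is Lemma 5.2): if for all λ > 0 and some β > 1, δ > 0 one has

```latex
\mathbb{P}\bigl(M^{*} > \beta\lambda,\; F^{*}\le\delta\lambda\bigr)
\;\le\; \varepsilon(\beta,\delta)\,\mathbb{P}\bigl(M^{*} > \lambda\bigr),
```

with ε(β, δ) → 0 as δ → 0 for fixed β, then, choosing δ small enough, E(M*)^p ≲_{p,β,δ} E(F*)^p for all 0 < p < ∞; this is how the p = 1 information is upgraded to all moments.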

For the proof of Theorem 3.22 we will also need the following technical lemma on approximation of continuous increasing predictable functions, whose simpler form was proven in [32, Subsection 5.5].
a.s. for any 0 ≤ s ≤ t and any C > 0, and such that F(0) = 0 a.s. Then for any p ∈ (0, 1], any T ≥ 0, and any δ > 0 there exists a natural number K_0 > 0 such that for any K > K_0 Proof. Let us first show the lemma for p = 1. As was shown in [32, Subsection 5.5], there exists a predictable process f : By the Fubini theorem and the fact that a conditional expectation is a contraction on L^1(Ω), Therefore by [34, Lemma 9.4.7] it is sufficient to show that T_K f_n → f_n for a sequence (f_n)_{n≥1} converging to f. To this end we set f_n(·) := f(· − 1/n) Therefore the desired follows. Let us now show the case p < 1. In this case it is sufficient to notice that for any where f is defined by (3.22), T_K is defined by (3.23), (i) follows from the triangle inequality, (ii) follows from the fact that F is nondecreasing (and the same holds for the conditional expectations), (iii) follows from the definitions of f and T_K f, and (iv) holds by the fact that f ∈ [0, C] a.e. on R_+ × Ω, by the definition (3.23) of T_K, and the fact that a conditional expectation is a contraction on L^∞ (so T_K f ∈ [0, C] a.e. on R_+ × Ω). Therefore (3.21) for p < 1 follows by the dominated convergence theorem and the case p = 1.
Proof of Theorem 3.22. The "if" part follows from Proposition 3.23. Let us show the "only if" part. As F is elementary predictable we may assume that J is finite, J = {1, …, n}, J is generated by all atoms, X is finite-dimensional, and F has the form (3.24). Let μ_Cox be as constructed in Example 3.21. Then we need to show (3.25), where ν_N is a compensator of N, N̄ := N − ν_N, E_N denotes the expectation in Ω_N (i.e. the expectation taken for a fixed ω ∈ Ω, see Example 2.5), and ν_j is a random measure on R_+ of the form In order to derive (3.25) we will use the fact that any random measure is Poisson after a certain time-change (see [56, Corollary 25.26]) and the decoupling inequality (3.11). The proof will be done in four steps. Step 1: a.s. for any j = 1, …, n, and 1 ≤ p ≤ 2. By the fact that any martingale has a càdlàg version (see Subsection 2.5) and by adding knots to the mesh we may assume that K is so big (or the mesh so small) that By [56, Corollary 25.26] the random measure χ defined on R_+ × Ω by is a standard Poisson random measure with a compensator Without loss of generality, by an approximation argument, we may assume that K in (3.24) is so large that there exists T > 0 such that t_0, …
Moreover, by considering a smaller mesh, for any δ > 0 we can assume that K is so large that, by Lemma 3.24, by the predictability and continuity of the process t ↦ ν([0, t)), and by (3.27), Therefore the integral on the left-hand side of (3.25) becomes As χ is a standard Poisson random measure, by (3.27), by adding some pieces of a standard Poisson random measure within stopping times, and by using the fact that a Poisson process is strong Markov and stationary, without loss of generality we may assume that there exists a standard Poisson random measure η on R_+ × J with a compensator measure ν_η = ν_χ such that and is copied from a standard Poisson random measure independent of χ. Then the integral above becomes as follows Note that L is a martingale with respect to an enlarged filtration F = (F_t)_{t≥0} of the following form L is a martingale with respect to F, but it can be decomposed into two parts L^1 and L^2 which are martingales in different filtrations, in the following way. First we introduce a stopping time where τ^j_s is as defined by (3.29). Then let us define for any t ≥ 0 which is a martingale with respect to the original filtration; here {j} is a σ-field depending on Ω̃, and ⊗ does not mean a direct product, see Subsection 2.2. Note that L^1 and L^2 are martingales in different scales, so Next, by Novikov's inequalities (2.13), the fact that X can be assumed finite-dimensional, the fact that F is uniformly bounded, the definition (3.33) of σ^j_k, and (3.30), On the other hand, for a similar reason and by the fact that ν_η(· × {j}) is the standard Lebesgue measure on R_+ for any j = 1, …, n

35)
where E_η is defined by Example 2.5, (*) follows from the fact that F is uniformly bounded, (2.13), and the fact that the random measure constructed from As we can choose K big enough (and δ small enough), it is sufficient to show that To this end first notice that, analogously to (3.35), where N is defined as in (3.25). Next note that by Theorem 1.1 (see also the proof of Proposition 3.23), (3.36), and the fact that holds by Lemma 9.3, and (**) holds for δ small enough by (3.30), e.g. analogously to (3.34). Therefore (3.28), and hence (3.10), follows. This terminates the proof of Step 1. Step 2: general 1 ≤ p < ∞. In this case we have exactly the same proof as in Step 1, but applying the more involved Novikov inequalities (2.13) for the case p > 2. Step 3: ν is infinite but finite on finite intervals. Then by a standard time-change argument (see [48, Theorems 10.27 and 10.28] or [32, Subsection 5.5]) we can assume that a.s.
which was considered in Step 2.
Step 4: ν is general, 1 ≤ p < ∞. If we have a general measure ν, then we use the following two tricks. First, instead of considering μ, we consider Indeed, though μ_m is finite a.s., we can add to μ_m another independent Poisson random measure εζ, ε > 0, where ζ is a standard Poisson random measure with a compensator ν_ζ satisfying ν_ζ((s, t] × {j}) = t − s for all 0 ≤ s ≤ t and j ∈ J. Then by Step 3 we have that and (3.37) follows by letting ε → 0, by the triangle inequality, and by the fact that (see also Section B); for the same reason and by the fact that we can set μ^m_Cox to be μ_Cox|_{A_m} (as they are equidistributed) Thus (3.10) follows as a limit of (3.37).
Indeed, let (Ω, F, P) and F be the original probability space and filtration, let (Ω̄, F̄, P̄) be the probability space extended by μ_Cox, and let F̄ = (F̄_t)_{t≥0} be such As F is elementary predictable (but the same can be proven for any strongly P̃-measurable F via an approximation argument, see Proposition 3.27 and Definition 3.28) and as J is countably generated, there exists an increasing for all A ∈ J_m (by approximating F as it was done in Step 4 of the proof of Theorem 3.22 we may assume that Eμ_Cox [46, Theorem 3.3.2]). The fact that M keeps the same local characteristics (0, ν^M) can be shown by (3.38) via proving that μ has the same compensator after enlarging the probability space and filtration. Assume that μ has a different F̄-compensator ν̄. Then for any (definition of a compensator), so a predictable finite-variation

Then thanks to Example 3.21 and the fact that the distribution of a Cox process is uniquely determined by its compensator we have that there exist independent standard Poisson processes
where the right-hand side is very much in the spirit of γ-radonifying operators (see Subsection 2.11; see also [3,98]). An example of such an R ν in the case of martingale type r spaces was presented in [43,121] ([43,121] work only with Poisson random measures with a compensator of the form λ R+ ⊗ ν 0 ; in the case of deterministic F these estimates can be generalized to a general compensated Poisson measure). Therefore (3.40) allows us to extend [43,121] to stochastic integrals with respect to general random measures for martingale type r Banach spaces with the UMD property.

Purely discontinuous quasi-left continuous martingales
The present subsection is devoted to the purely discontinuous quasi-left continuous case. Our goal is to show that any X-valued purely discontinuous quasi-left continuous martingale M coincides with x dμ M (·, x) (where μ M is defined by (2.14)), so that we can reduce this case to the one considered in Subsection 3.4. To this end we need to define an integral of a general predictable (i.e. not necessarily elementary predictable) process with respect to a random measure (as (t, x) → x, t ≥ 0, x ∈ X is not elementary predictable). Let us start with the following proposition.
is well defined and is a martingale. Moreover, Proof. First of all note that M is well defined by formula (3.41), as by Fubini's theorem F is a.s. B(R + ) ⊗ J -measurable and a.s. integrable with respect to μ and ν (the a.s. integrability w.r.t. ν holds since E ∫ R+×J ‖F ‖ dν = E ∫ R+×J ‖F ‖ dμ < ∞, see (2.11)). Also notice that (3.42) follows directly from (3.41), from the triangle inequality, and from (2.11).
As F is strongly P-measurable, as F ∈ L 1 (Ω × R + × J, P ⊗ ν; X), and as step functions are dense in L 1 (Ω × R + × J, P, P ⊗ ν; X) (here we choose the measure ν so that P ⊗ ν is a measure on P), there exist elementary predictable processes (F n ) n≥1 , F n : R + × Ω × J → X for any n ≥ 1, such that For each n ≥ 1 let

I. S. Yaroslavtsev
Then M n is a martingale. On the other hand by (3.42) we have that and thus, as martingales form a closed subset of L 1 (Ω; D(R + , X)) (see Definition 2.2 and Theorem 2.9), M is a martingale as well.
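The equality E ∫ F dν = E ∫ F dμ from (2.11), used in the proof above, can be illustrated numerically in the simplest scalar case: a standard Poisson random measure on R + with compensator dν = dt and a deterministic integrand. The following Python sketch (all names are ours, chosen for illustration) compares the Monte Carlo mean of the integral against μ with the integral against ν:

```python
import random

def poisson_jump_times(rate, horizon, rng):
    """Jump times of a standard Poisson process with the given rate on [0, horizon]."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

def integral_against_mu(f, jumps):
    """Integral of F against the Poisson random measure mu: sum of F over the atoms."""
    return sum(f(t) for t in jumps)

rng = random.Random(0)
T, n_paths = 2.0, 20000
f = lambda t: t
mc = sum(integral_against_mu(f, poisson_jump_times(1.0, T, rng))
         for _ in range(n_paths)) / n_paths
exact = T ** 2 / 2  # integral of F(t) = t against the compensator dnu = dt on [0, T]
assert abs(mc - exact) < 0.05
```

The Monte Carlo average of ∫ F dμ matches ∫ F dν, which is exactly the compensation property that makes ∫ F dμ̄ a (mean-zero) martingale.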
Now we are ready to define an integral of a general process with respect to a random measure.
Definition 3.28. Let (J, J ) be a measurable space, μ be an integer-valued optional random measure on R + × J, ν be its compensator, and μ̄ := μ − ν. Let X be a Banach space. A general strongly P-measurable process F : R + × Ω × J → X is said to be integrable with respect to μ̄ if for any increasing family (A n ) n≥1 of elements of P satisfying E ∫ An ‖F ‖ dμ < ∞ for any n ≥ 1 and ∪ n≥1 A n = R + × Ω × J, we have that the processes ∫ An F dμ̄ converge in L 1 (Ω; D(R + , X)) as n → ∞.
F is said to be locally integrable with respect to μ̄ if there exists an increasing sequence of stopping times (τ n ) n≥1 such that τ n → ∞ as n → ∞ and F 1 [0,τn ] is integrable with respect to μ̄ for any n ≥ 1.
This definition is very much in the spirit of Lebesgue integration (a function f can be integrable only if its restrictions f | Bn are integrable and the corresponding integrals converge as the restriction domains B n 's blow up) or vector-valued stochastic integration with respect to a Brownian motion, see [81].
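The Lebesgue analogy can be made concrete: integrability in the sense of Definition 3.28 mimics an improper integral, where one integrates over an increasing family of domains and asks the restricted integrals to converge. A toy numerical illustration (the helper names are ours):

```python
import math

def integral_on(f, a, b, n=20000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# exhaust R+ by the increasing domains B_n = [0, n]
f = lambda x: math.exp(-x)
restricted = [integral_on(f, 0.0, n) for n in range(1, 11)]
# the restricted integrals converge (here to 1) as the domains blow up
assert all(b - a > 0 for a, b in zip(restricted, restricted[1:]))
assert abs(restricted[-1] - 1.0) < 1e-3
```

If the restricted integrals failed to converge as the domains blow up, the function would not be integrable in this exhaustion sense, exactly as in Definition 3.28.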
Proof. We will separately prove the "if" and the "only if" parts.
The "only if " part. Let X be a UMD Banach space, M : R + × Ω → X be a purely discontinuous quasi-left continuous martingale with E sup t≥0 M t < ∞, and let (A n ) n≥1 be some increasing family from P satisfying the properties from Definition 3.28. For every n ≥ 1 define an X-valued martingale x dμ M , t ≥ 0.

We need to show that (M n ) n≥1 converges in L 1 (Ω; D(R + , X)) as n → ∞. Note that by the definition of M n we have that ΔM n t (ω) = ΔM t 1 An (t, ω, ΔM t (ω)) for a.e. ω ∈ Ω for any t ≥ 0, and thus, as M and M n are purely discontinuous (M n is purely discontinuous as it is an integral with respect to a random measure, see e.g. [52, §II.1d] or [116,118]), by [119, Subsection 6.1] we have that (see Subsection 2.11 for the definition of a γ-norm); also note that for any n ≥ 1 by Remark 3.29 we have that E sup t≥0 M n t < ∞. The "if " part. This part of the proof is based on the tricks from [118, Subsection 4.4]. Assume that X is not UMD. Our goal is to find a purely discontinuous quasi-left continuous martingale M and an increasing family of sets (A n ) n≥1 in P such that E ∫ An ‖x‖ dμ M < ∞ for any n ≥ 1, but ∫ x1 An dμ̄ M diverges in L 1 (Ω; D(R + , X)).
Due to the formula [ for any X-valued discrete martingales f = (f n ) n≥0 and g = (g n ) n≥0 with g 0 = f 0 = 0 and with g n − g n−1 = ε n (f n − f n−1 ), n ≥ 1, (3.45) for any fixed {0, 1}-valued sequence (ε n ) n≥1 . As X is not UMD, (3.44) does not hold. Therefore there exists a Paley-Walsh martingale (f n ) n≥0 (see [18,46] for why we can restrict ourselves to the Paley-Walsh case), i.e. a martingale (f n ) n≥0 such that there exists a sequence (r n ) n≥1 of Rademachers (see Definition 2.1) so that f n − f n−1 = r n φ n (r 1 , . . . , r n−1 ) for some φ n : {−1, 1} n−1 → X for every n ≥ 1 and f 0 = 0, and a {0, 1}-valued sequence (ε n ) n≥1 , such that E sup n f n = 1 and E sup n g n = ∞ for (g n ) n≥0 satisfying (3.45) and g 0 = 0. Let N 1 and N 2 be two independent standard Poisson processes (note that in this case N 1 − N 2 has a zero compensator and thus is a martingale). Let τ 0 = 0, and for each n ≥ 1 define Note that τ n < ∞ a.s. and that τ n → ∞ a.s. as n → ∞ (for the construction of the standard Poisson process we refer the reader e.g. to [56,61,102,103] or any other standard probability textbook), and that, as Poisson processes have the strong Markov property, are i.i.d. random variables. Moreover, as N 1 and N 2 can only have jumps of size 1 a.s., σ n ∈ {−1, 1} a.s., and as N 1 and N 2 are independent and equidistributed, (σ n ) n≥1 are independent Rademachers. In particular, for simplicity of the proof we identify (σ n ) n≥1 with (r n ) n≥1 . Now let us consider a martingale M : and the integral is defined in the Riemann-Stieltjes way (N 1 − N 2 is a.s. locally of finite variation). First of all, M is a local martingale since for any n ≥ 1 we have that Φ τn is bounded and takes values in a finite-dimensional subspace of X, so the stochastic integral M τn = Φ τn d(N 1 − N 2 ) is well defined by the classical finite-dimensional theory (see [52,56]). Moreover, as M τn = f n a.s. and as M is a.s.
a constant on [τ n−1 , τ n ) for any n ≥ 1, we have that EM * = E sup n f n = 1; thus by the dominated convergence theorem and by the fact that a conditional expectation is a contraction on L 1 (Ω; X) (see [46, Section 2.6]), M is a martingale with EM * < ∞.
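The mechanism by which (σ n ) n≥1 become independent Rademachers can be seen in a toy simulation: between consecutive jumps of N 1 − N 2 , two independent exponential clocks race, and the sign of the next jump records which clock fired first. A minimal Python sketch (the function name and setup are ours, not the paper's):

```python
import random

def next_jump_sign(rng, rate=1.0):
    """Sign of the next jump of N1 - N2: +1 if N1 fires first, else -1.
    The two exponential waiting times are i.i.d., so each sign has probability 1/2."""
    return 1 if rng.expovariate(rate) < rng.expovariate(rate) else -1

rng = random.Random(1)
signs = [next_jump_sign(rng) for _ in range(100000)]
freq = signs.count(1) / len(signs)
assert abs(freq - 0.5) < 0.01
# successive signs come from disjoint interarrival intervals, hence independent
corr = sum(a * b for a, b in zip(signs, signs[1:])) / (len(signs) - 1)
assert abs(corr) < 0.02
```

By symmetry of the two i.i.d. waiting times each sign is ±1 with probability 1/2, and signs over disjoint interarrival intervals are independent by the strong Markov property, matching the argument above.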
Since E sup n g n = ∞, there exists a sequence 0 = k 1 < . . . < k m < . . . such that E sup km<n≤km+1 g n − g km > 1 for each m ≥ 1. Set J = X. Define where ( * ) follows from the definition (2.14) of μ M and ( * * ) follows from the fact that M is a.s. constant on [τ n−1 , τ n ) for any n ≥ 1 and from the definition of M . Therefore x1 Am is integrable with respect to μ̄ M by Proposition 3.27. Let us now show that ∫ Am x dμ̄ M does not converge in L 1 (Ω; D(R + , X)). It is sufficient to show that ∫ A2m−1 x dμ̄ M − ∫ A2m−2 x dμ̄ M is big enough for any m ≥ 1: As F is stochastically integrable with respect to μ̄ 1 , M n is a Cauchy sequence in L 1 (Ω; D(R + , X)). By Theorem 3.22 and by the fact that μ 1 and μ 2 have the same compensator we have that for any m ≥ n ≥ 1 so N n is a Cauchy sequence in L 1 (Ω; D(R + , X)), and thus F is integrable with respect to μ̄ 2 by Definition 3.28. Let us show that M := ∫ F dμ̄ 1 and N := ∫ F dμ̄ 2 are tangent, i.e., as M and N are purely discontinuous, we need to show that the compensators ν M and ν N of μ M and μ N respectively coincide. Fix a predictable set A ⊂ R + × Ω × X. Then for any t ≥ 0 we have that a.s.
(the latter can be infinite), so by the definition of a compensator we have that The same can be shown for ν N . Therefore, as A was an arbitrary predictable set, ν M and ν N coincide, so M and N are tangent.

Purely discontinuous martingales with accessible jumps
The present subsection is devoted to L p estimates for purely discontinuous martingales with accessible jumps and to what a decoupled tangent martingale looks like in this case. We start with the following elementary proposition, which provides us with L p -bounds for tangent martingales. For the proof we will need the following technical lemma.
Proof. As M and N are tangent, ν M = ν N . In particular, since τ is predictable (and hence the process t → 1 τ (t) is predictable as well) we have that for any Borel set A ∈ B(X) a.s.
where ( * ) follows from the fact that a.s.
Indeed, first of all the latter is a filtration by [56, Lemma 25.2]. Next notice that for any even n = 2, . . . , 2m Let us now show that for any purely discontinuous martingale with accessible jumps taking values in a UMD Banach space there exists a decoupled tangent martingale.
Theorem 3.39. Let X be a UMD Banach space, M : R + × Ω → X be a purely discontinuous local martingale with accessible jumps. Then there exist an enlarged probability space (Ω, F, P) endowed with an enlarged filtration F = (F t ) t≥0 , and an F-adapted purely discontinuous local martingale N : R + × Ω → X with accessible jumps such that M is a local F-martingale with the same local characteristics, M and N are tangent and N (ω) is a martingale with independent increments and with the local characteristics (0, ν M (ω)) for a.e. ω ∈ Ω.
Proof. By Remark 2.10 and Theorem A.3 we may assume that E sup t≥0 M t < ∞. The proof will be based on the construction of a CI 5 tangent martingale difference sequence presented in the proof of [29, Proposition 6.1.5]. Let (τ n ) n≥1 be a sequence of predictable stopping times with disjoint graphs such that a.s. where B(X) is the Borel σ-algebra of X, and let for any t ≥ 0 a σ-algebra F t on Ω be generated by all the sets (F t sees x n if τ n ≤ t, and does not see it otherwise. Note that ⊗ in S t ⊗ F t does not mean the direct product of σ-algebras since S t by its definition (3.50) depends on ω, but in this case ⊗ means that the corresponding σ-algebra is generated by products of sets of the form (3.49)). Let F := (F t ) t≥0 . As (X, B(X)) is a Polish space (see [33, pp. 344, 386]), by [33, Theorem 10.2.2] for any n ≥ 1 and for almost any ω ∈ Ω there exists a probability measure P n ω on X such that for any B ∈ B(X) (see (2.5) for the definition). Now let us construct a càdlàg process N : (Spoiler: this is going to be our decoupled tangent martingale). We need to show that such a process exists P-a.s. and that it is an F-martingale. For each First note that N m is an F-adapted process with values in X as for any fixed (3.50), so N m is F-adapted as a sum of F-adapted processes. Let us show that N m is a purely discontinuous martingale with accessible jumps. N m has accessible jumps as by the definition (3.54) of N m it jumps only at the predictable stopping times {τ 1 , . . . , τ m } (which remain predictable stopping times with respect to the enlarged filtration F as they remain being announced by the same sequences of stopping times, see Subsection 2.4). Note that for any 1 ≤ n ≤ m we have that (here F τn− and S τn− are defined analogously to F τn− through an announcing sequence, as τ n is a predictable stopping time, see Subsection 2.4 and (3.55)) (here ⊗ again is not a product of σ-algebras, but a σ-algebra generated by products of sets similar to those of the form (3.49)).
We need to show that E(ΔN m τn |S τn− ⊗ F) = 0. It is sufficient to show that for P-almost any fixed ω ∈ Ω, E(ΔN m τn(ω) (ω)|S τn− ) = 0, because for any R ∈ F and A × R ∈ S τn− ⊗ F (where A depends on ω in a predictable way so that A × R has the form (3.49)) we have that so the first integral equals zero if ∫ A ΔN m τn d ⊗ n≥1 P n ω = 0 for a.e. ω ∈ Ω for any A ∈ S τn− . By the definition (3.50) of S t we have that for almost any fixed ω ∈ Ω so we have that (here we used the fact that (τ n ) n≥1 have a.s. disjoint graphs), so S τn is a.s. generated by two independent σ-algebras S τn− and ΔS τn (which are independent a.s. by the definition (3.52) of P), and hence, as ΔN m τn is a.s. ΔS τn -measurable, E(ΔN m τn (ω)|S τn− ) = E X N (ΔN m τn )(ω). Finally note that ΔN m τn (ω) has P n ω as its distribution by the definition (3.52) of P and the definition (3.54) of N m , and the latter distribution a.s. has mean zero by the definition (3.51), as ∫ X x dP n ω = E(ΔM τn |F τn− )(ω) = 0 for a.e. ω ∈ Ω. Therefore E(ΔN m τn |F τn− ) = 0, and hence N m is a martingale by [32, Subsection 5.3].
Let us now show that M is an F-martingale with the same local characteristics (0, ν M ). Fix n ≥ 1. Then for any Borel B ⊂ X and any F τn− -measurable bounded F : Ω → R we have that where E ω denotes the expectation w.r.t. ⊗ n≥1 P n ω for each fixed ω ∈ Ω. As F is F τn− -measurable and as F τn− ⊂ S ∞ ⊗ F τn− (this follows due to the fact that (ii) follows from the definition of P n ω , and (iii) follows from the definition of P and the definition of N m . Let us show that N m is a decoupled tangent martingale to M m , i.e. that N m (ω) has independent mean-zero increments and local characteristics (0, ν M m (ω)) for a.e. ω ∈ Ω. This easily follows from the fact that for a.e. fixed ω ∈ Ω the process N m (ω) has fixed jumps at {τ 1 (ω), . . . , τ m (ω)} and for every 1 ≤ n ≤ m we have that ΔN m τn (ω) is ΔS τn(ω) -measurable; as S ∞ = σ(ΔS τn(ω) , n ≥ 1) = ⊗ n≥1 B(X), the jumps (ΔN m τn (ω)) m n=1 are independent since (ΔS τn(ω) ) n≥1 are independent. The fact that N m (ω) has local characteristics (0, ν M m (ω)) follows from the construction of N m . Now let us show that N m converges as m → ∞, and that the limit coincides with the desired N , which thus exists. For any m 2 ≥ m 1 ≥ 1, by (3.48) and by the fact that N m1 − N m2 is a decoupled tangent martingale to M m1 − M m2 (which can be shown analogously to the considerations above), we have that Thus the martingales (N m ) m≥1 converge in L 1 (Ω; D(R + , X)) by (B.4). Let N be the limit. Note that by Theorem 2.9 N is an F-martingale. Let us show that N coincides with the desired N . For any n ≥ 1 we have that for ΔN τn defined by (3.53) (note that we still need to prove that N exists and that N = N ) and by the fact that ΔN τn = ΔN m τn for m ≥ n, where ( * ) follows by the triangle inequality. For the same reason we have that Δ N τ = 0 a.s. on {τ ∉ {τ 1 , . . . , τ n , . . .}} for any stopping time τ . Therefore N coincides with the desired N , so such an N exists.
N is a decoupled tangent martingale to M for the same reason as N m is a decoupled tangent martingale to M m for any m ≥ 1.

Remark 3.40. To sum up Theorem 3.39: any purely discontinuous martingale M with accessible jumps and with values in a UMD Banach space has a tangent martingale N on an enlarged probability space with an enlarged filtration such that for a.e. ω from the original probability space N is a martingale with fixed jump times coinciding with the jump times of M and with independent increments.
Remark 3.41. Note that N constructed in the proof of Theorem 3.39 has independent increments given ν M . Indeed, for a.e. fixed ν M we have that the set (τ n (·)) n≥1 is fixed and the distributions (P n ω ) n≥1 are fixed and mean zero, so (ΔN τn(·) ) n≥1 are independent mean-zero random variables. Consequently the desired independence follows from Corollary 2.7.
It remains to show that M and N are local F-martingales with F-local characteristics ([[M c ]], ν M ). Let us start with M . To this end recall that the new filtration F over the enlarged probability space (Ω, F, P) is generated by F, a time-changed independent cylindrical Wiener process W H (see the proof of Theorem 3.18), the Cox process μ Cox (see Remark 3.25), and the filtration (S t (ω)) t≥0 defined by (3.50). Let F : Ω → R be any bounded F-measurable random variable. Then for any fixed t ≥ 0 (3.59) where N is a σ-algebra generated by an independent sequence of standard Poisson processes (this sequence can be assumed finite thanks to Remark 3.25), and where ( * ) follows from the fact that F is independent of W H and N together with a trick similar to the computations (3.56) and (3.57). Hence, as F was general, M is a local F-martingale.
In order to show that M preserves its local characteristics we notice by Remark 3.4 that [[M c ]] stays the same, the predictable jumps (τ n ) n≥1 remain predictable (hence M a has accessible jumps and the local characteristics (0, ν M a ) do not change by (3.56) and (3.57)), and M q does not change its local characteristics as, analogously to Remark 3.25 with (3.59) exploited instead of (3.39), μ M q has the same compensator ν M q , so M q has the same local characteristics; so, as N c is a local F-martingale, it is a local F-martingale. The fact that N has the local characteristics ([[M c ]], ν M ) follows in the same way as for M .

Uniqueness of a decoupled tangent martingale
This subsection is devoted to showing that a decoupled tangent local martingale, if it exists, is unique in distribution. Proof. Suppose that N 1 and N 2 live on probability spaces (Ω 1 , F 1 , P 1 ) and (Ω 2 , F 2 , P 2 ) respectively, where both (Ω 1 , F 1 , P 1 ) and (Ω 2 , F 2 , P 2 ) are enlargements of (Ω, F, P) (see Definition 2.3). Then by Definition 3.3 for a.e. fixed ω ∈ Ω the processes N 1 (ω) and N 2 (ω) are local martingales with independent increments and local characteristics ([[M c ]](ω), ν M (ω)). Thus N 1 (ω) and N 2 (ω) are equidistributed by Corollary 9.8, and thus N 1 and N 2 are equidistributed, as we have that for any Borel set B ⊆ D(R + , X) where P 1 ω and P 2 ω are as in Definition 2.3. This terminates the proof.

Independent increments given the local characteristics
In fact, we can make Definition 3.3 stronger by proving the following theorem. Proof. The theorem follows directly from the construction of a decoupled tangent local martingale presented in Theorems 3.18, 3.22, and 3.39, from Remarks 3.20, 3.36, and 3.41, from the fact that we can consider an enlargement of (Ω, F, P) generated by W H , μ Cox , and P defined by (3.52), and from Corollary 2.7 on conditional independence with respect to a random variable.

Upper bounds and the decoupling property
As it was shown in Theorem 3.7, if X is UMD, then for any local martingale M and for a decoupled tangent local martingale N we have that for any 1 where ( * ) follows from Lemma 9.3 and the fact that N (ω) is a martingale with independent increments for a.e. ω ∈ Ω. But what if we are interested only in the upper bound of (4.1) (this is often the case, see Remark 6.5 on stochastic integration)? Can we have such estimates for non-UMD Banach spaces? Inequalities of this form have been discovered by Cox and Veraar in [25,26] (see also [24,46,73]) and they turn out to characterize the so-called decoupling property. Unlike UMD spaces, Banach spaces with the decoupling property might not be reflexive. For example, L 1 spaces have the decoupling property. Moreover, quasi-Banach spaces can also satisfy (4.2) (e.g. L q for q ∈ (0, 1), see [26]).
The goal of the present section is to extend (4.2) to the continuous-time case. Of course, for a general Banach space X with the decoupling property and for a general X-valued martingale we will not have a decoupled tangent local martingale thanks to Theorem 3.8, but nonetheless we are able to provide a continuous-time analogue of (4.2) in some special cases when such a decoupled tangent local martingale exists. Let us start with the continuous case, which is an elementary consequence of [26, Theorem 5.4]. Recall that for any time change (τ s ) s≥0 we have the inverse time change (A t ) t≥0 defined by A t := inf{s ≥ 0 : τ s ≥ t}, and that a process is in γ loc (L 2 (R + ; H), X) if it is locally in γ(L 2 (R + ; H), X) (see Subsection 2.11).
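The inverse time change A t := inf{s ≥ 0 : τ s ≥ t} recalled above is a generalized right-continuous inverse; for a continuous strictly increasing deterministic τ it is just the ordinary inverse. A small Python sketch (our own helper, assuming τ is given as a nondecreasing callable):

```python
def inverse_time_change(tau, t, hi=1e6, tol=1e-9):
    """A_t = inf{s >= 0 : tau(s) >= t} for a nondecreasing function tau,
    computed by bisection on [0, hi]."""
    lo = 0.0
    if tau(lo) >= t:
        return 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if tau(mid) >= t:
            hi = mid
        else:
            lo = mid
    return hi

tau = lambda s: s * s          # a deterministic time change
assert abs(inverse_time_change(tau, 4.0) - 2.0) < 1e-6
assert inverse_time_change(tau, 0.0) == 0.0
```

For τ s = s² one recovers A t = √t; bisection works because {s : τ s ≥ t} is a half-line whenever τ is nondecreasing.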
Then M c has a decoupled tangent local martingale N c which has the following form:

an independent copy of W H and (A c t ) t≥0 is the time change inverse to τ c . Moreover, if this is the case then for any
Proof. First note that by [ Proof. For each k ≥ 1 define a stopping time We can find such (possibly infinite) For the same reason we have that for any > m ≥ 1 Therefore in order to show (4.8) it is sufficient to show that E N a,m T −N a T p → 0 as m → ∞. This follows directly from the fact that N a (ω) has independent increments, hence N a,m (ω) = E(N a (ω)|σ(N a,m (ω))) due to the construction of N a,m , and so the desired holds true by [ It remains to show that which follows from the fact that N c T (ω), N q T (ω), and N a T (ω) are independent mean-zero for a.e. ω ∈ Ω due to Definition 3.3 and Theorem 9.2.

Convex functions with moderate growth
A function φ : R + → R + is said to be of moderate growth if there exists α > 0 such that φ(2x) ≤ αφ(x) for any x ≥ 0. The goal of the present section is to show the following result about tangent martingales and convex functions of moderate growth, which extends Theorem 1.1 to more general functions and to continuous-time martingales and also extends [58, Theorem 4.2] to infinite dimensions.
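For instance, every power φ(x) = x p is of moderate growth with α = 2 p , while exponential-type functions are not; this can be checked numerically (a toy illustration with our own helper name):

```python
import math

def doubling_ratio(phi, xs):
    """sup of phi(2x)/phi(x) over sample points -- finite for moderate growth."""
    return max(phi(2 * x) / phi(x) for x in xs)

xs = [0.1 * k for k in range(1, 200)]     # grid avoiding x = 0
p = 3
assert abs(doubling_ratio(lambda x: x ** p, xs) - 2 ** p) < 1e-9
# exponential-type functions fail the moderate growth condition:
assert doubling_ratio(lambda x: math.exp(x) - 1, xs) > 1e6
```

For x^p the ratio is the constant 2^p, whereas for e^x − 1 it blows up along the grid, so no single α works on all of R + .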
Theorem 5.1. Let X be a Banach space, φ : R + → R + be a convex function of moderate growth such that φ(0) = 0. Then X is UMD if and only if we have that for any tangent local martingales M, N : where M * := sup t≥0 M t and N * := sup t≥0 N t .
In order to prove the theorem we will need two components: the canonical decomposition and good-λ inequalities for each part of the canonical decomposition. Namely we will use the following lemma proven by Burkholder in [14, Lemma 7.1] (see also [18, pp. 88-90], [15, pp. 1000-1001], and [91, Section 4] for various forms of general good-λ inequalities).
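Schematically, a good-λ inequality of this type is converted into φ-moment domination via the distribution-function formula E φ(Z) = ∫_0^∞ P(Z > λ) dφ(λ); the notation ε(δ, β) and γ(·) below is ours, not the paper's:

```latex
% Assume P(g > \beta\lambda,\ f \le \delta\lambda) \le \varepsilon(\delta,\beta)\,P(g > \lambda)
% for all \lambda > 0, and let \phi be of moderate growth, so that
% \phi(c x) \le \gamma(c)\,\phi(x) for each fixed c > 1.
\begin{aligned}
P(g > \beta\lambda) &\le P(f > \delta\lambda) + \varepsilon(\delta,\beta)\,P(g > \lambda),\\
\mathbb{E}\,\phi(g/\beta) = \int_0^\infty P(g > \beta\lambda)\,\mathrm{d}\phi(\lambda)
  &\le \mathbb{E}\,\phi(f/\delta) + \varepsilon(\delta,\beta)\,\mathbb{E}\,\phi(g),\\
\mathbb{E}\,\phi(g) \le \gamma(\beta)\,\mathbb{E}\,\phi(g/\beta)
  &\le \gamma(\beta)\gamma(1/\delta)\,\mathbb{E}\,\phi(f)
     + \gamma(\beta)\,\varepsilon(\delta,\beta)\,\mathbb{E}\,\phi(g).
\end{aligned}
```

Choosing δ and β so that γ(β) ε(δ, β) < 1 and absorbing the last term into the left-hand side (after a localization ensuring E φ(g) < ∞) yields E φ(g) ≤ C E φ(f), which is the way the good-λ inequalities below are used.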

Good-λ inequalities
Let us start with good-λ inequalities for tangent continuous and purely discontinuous quasi-left continuous martingales. The following good-λ inequalities for continuous tangent martingales follow from the L p estimates (3.5) analogously to the good-λ inequalities presented in [14, Sections 8 and 9]. Proposition 5.3. Let X be a UMD Banach space, M, N : R + × Ω → X be tangent continuous local martingales. Then we have that for any 1 < p < ∞, δ > 0, and β > 1 where M * := sup t≥0 M t and N * := sup t≥0 N t .
Let us now show good-λ inequalities for stochastic integrals with respect to a random measure. First we will need the definition of a conditionally symmetric martingale. Remark 5.5. Note that in the discrete case, i.e. when we have an X-valued discrete martingale difference sequence (d n ) n≥1 , the latter definition is equivalent to the conditional distribution P(d n |F n−1 ) being symmetric a.s. for any n ≥ 1. Now let us state and prove the desired good-λ inequalities.
Proposition 5.6. Let X be a UMD Banach space, M and N be X-valued tangent purely discontinuous quasi-left continuous conditionally symmetric local martingales. Then for any δ > 0 and any β > δ + 1 we have that For the proof we will need the following elementary lemma.
where μ M and μ N are as defined by (2.14).
Proof. As M and N are conditionally symmetric and tangent, we may assume that ν = ν M = ν N is the compensator of both μ M and μ N , and that ν(· × B) = ν(· × (−B)) a.s. for any Borel set B ∈ B(X). Now let These processes are local martingales by the fact that x is locally stochastically integrable with respect to μ̄ M and μ̄ N thanks to Theorem 3.30; therefore x1 A is also locally integrable with respect to μ̄ M for any A ⊂ P by [119, Subsection 7.2] and γ-domination [47, Theorem 9.4.1]. On the other hand, as ν is symmetric in x ∈ X and as the function 1 ‖·‖>a (x) x1 [0,ρ] (s) is antisymmetric in x ∈ X, by the definition of ρ we have that

so the desired follows for M . The same can be done for N .
The second part of the lemma follows from the fact that a.s. on [0, ρ] and the fact that by the considerations above a.s.

Proof of Theorem 5.1
First we will prove each case of the canonical decomposition separately, and then compile them using the following proposition.
Let us show (5.3) with the constant c φ,X . This terminates the proof. Fix φ : R + → R + convex of moderate growth such that φ(0) = 0. Finally, (r n d n ) n≥1 and (r n e n ) n≥1 are tangent martingale difference sequences with respect to an enlarged filtration F = (F n ) n≥1 which is generated by the original filtration (F n ) n≥1 and by the Rademachers (r n ) n≥1 , as for any n ≥ 1 and for any Borel set A ∈ B(X) = P(r n e n |F n−1 )(A), (5.5) where (i) follows from the fact that r n is independent of d n and F n−1 , (ii) follows from the fact that d n is independent of σ(r 1 , . . . , r n−1 ), (iii) holds as (d n ) n≥1 and (e n ) n≥1 are tangent, and finally (iv) holds as (i), (ii), and (iii) can analogously be shown for e n . Moreover, r n d n and r n e n are conditionally symmetric given F n−1 for any n ≥ 1, so we have that For the proof of the theorem we will need the following lemma.
Proof. By a standard restriction to finite dimensions argument (see e.g. the proof of [115, Theorem 3.3]) and by the fact that AM and AN are tangent for any linear operator A ∈ L(X, Y ) (see Theorem A.1), we may assume that X is finite dimensional. Due to Theorem 3.9 we may assume that both M and N are purely discontinuous. Let M = M q + M a and N = N q + N a be the canonical decompositions of M and N . Then by (2.8) we have that a.s.
Thus in order to show (5.7) it is sufficient to prove that First notice that (5.9) follows from a standard discrete approximation of purely discontinuous martingales with accessible jumps (see e.g. the proof of Proposi- For each n ≥ 1, let (d̃ k n ) n k=1 be a decoupled tangent sequence of (d k n ) n k=1 and (ẽ k n ) n k=1 be a decoupled tangent sequence of (e k n ) n k=1 . Then by [29, Lemma 2.3.3] we have that Let M q be a local martingale decoupled tangent to both M q and N q . As M q , N q , and M q have càdlàg trajectories (see Subsection 2.5), we have the following convergences where the latter follows from Theorem 10.3; thus by (5.10) we have that so (5.8) (and consequently (5.7)) follows.
Proof of Theorem 5.11. First we prove the conditionally symmetric case, and then the general case.
Step 2: the general case. First of all, it is sufficient to assume that N is a decoupled tangent martingale to M . Let N be another decoupled tangent martingale to M conditionally independent of N given F. Then M − N and N − N are tangent martingales which are conditionally symmetric, and thus where (i) holds by the fact that a conditional expectation is a contraction and by the fact that φ is convex, (ii) follows from Step 1, and (iii) follows from the fact that φ is convex of moderate growth and that N and N are conditionally independent given F and equidistributed.

a.s. for any t ≥ 0, then there is no need for conditional symmetry in the proof of Proposition 5.6, and hence no need to use Section 10 in order to prove Theorem 5.11 (see e.g. the proof of Proposition 3.23).
Let us finally prove Theorem 5.1. By Proposition 5.8 it is sufficient to show that The inequality (5.14) follows from Theorems 3.9 and 5.10, (5.15) follows from Theorems 3.9 and 5.11, and finally (5.16) holds by Theorems 3.9 and 5.9 together with the approximation argument from the proofs of Propositions 3.37 and B.1.

Non-convex functions
It is of great interest whether it is possible to have an analogue of Theorem 5.1 for a general φ of moderate growth (e.g. φ(t) = √ t), as was done in the conditionally symmetric case in [58, Theorem 4.1]. In our case this is possible due to the following theorem.
Theorem 5.14. Let X be a UMD Banach space, φ : R + → R + be an increasing function of moderate growth such that φ(0) = 0. Then for any tangent conditionally symmetric martingales M, N : R + × Ω → X we have that Proof of Theorem 5.14. First note that one can show Proposition 5.6 and Lemma 5.7 for general conditionally symmetric M and N (modifying M and N by adding to them M c τ ∧ρ∧t − M c τ ∧σ∧ρ and N c τ ∧ρ∧t − N c τ ∧σ∧ρ respectively). Then thanks to (5.1) we get that for any λ > 0, δ > 0, and β > 1 + δ so by fixing p ≥ 1, δ = 2, and β > 4 we derive for some fixed C p,X > 0 by (5.7) (5.17) consequently in particular if φ(ex) ≤ αφ(x) for some α > 1 (hence φ(βx) ≤ β ln α+1 φ(x) for β big enough), which holds as φ is of moderate growth, then analogously to [58, p. 38], using the fact that φ is increasing (so that (φ −1 ) is nonnegative a.s. on R + ), the desired follows by choosing p > ln α + 2 and β big enough.
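The parenthetical bound φ(βx) ≤ β^(ln α + 1) φ(x) used above can be verified directly for increasing φ: iterating φ(ex) ≤ αφ(x) a total of ⌈ln β⌉ times gives, for β ≥ α,

```latex
\phi(\beta x) \le \phi\!\left(e^{\lceil \ln\beta\rceil} x\right)
  \le \alpha^{\lceil \ln\beta\rceil}\,\phi(x)
  \le \alpha^{\ln\beta + 1}\,\phi(x)
  = \alpha\, e^{(\ln\alpha)(\ln\beta)}\,\phi(x)
  = \alpha\,\beta^{\ln\alpha}\,\phi(x)
  \le \beta^{\ln\alpha + 1}\,\phi(x),
```

where the first inequality uses that φ is increasing and e^⌈ln β⌉ ≥ β, and the last step uses α ≤ β together with the identity α^(ln β) = β^(ln α).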
Unfortunately the author does not know whether Theorem 5.14 holds for general martingales. Nonetheless, the following upper estimate can be shown. Proof. Let N be another decoupled tangent local martingale to M which is conditionally independent of N given the local characteristics of M . Then by Theorem 5.14 we have that It remains to notice that P( (5.17), the fact that φ has moderate growth, and the equidistribution of N and N , and to note that, by the fact that N − N has increments which are independent and symmetric given the local characteristics of M , we have that thanks to (5.17) and [47, Proposition 6.1.12] where ( * ) follows from (5.17), the fact that and the fact that φ has moderate growth.

Integration with respect to a general martingale
The present section is devoted to new estimates for stochastic integrals with values in UMD Banach spaces. These are so-called predictable estimates: the right-hand side involves a process which depends only on the corresponding local characteristics and is therefore predictable. In particular, these estimates extend the sharp bounds for a stochastic integral with respect to a cylindrical Brownian motion obtained by van Neerven, Veraar, and Weis in [81,82] (see also [106,108] for the continuous martingale case). On the other hand, this section in some sense extends a recent work [32] by Dirksen and the author on stochastic integration in L q -spaces, though the latter publication provides precise formulas for the right-hand side of (6.1), i.e. formulas that do not depend on the decoupled tangent martingale or the corresponding Cox process, but only on ν M . We also wish to note that the estimates obtained below are very different from those proven in [119, Subsection 7.1]: the estimates (6.1) are more in the spirit of the works of Novikov [84], Burkholder [14], and Rosenthal [99], while [119, Subsection 7.1] is based on Burkholder-Davis-Gundy inequalities, which are similar to square function estimates (see e.g. also [109]).

Remark 6.2.
As on both the right- and the left-hand sides of (6.1) we have norms (strictly speaking, seminorms, but we can consider a quotient space and make these expressions norms), analogously to [119, Subsection 7.1] we can extend the definition of a stochastic integral to any strongly predictable Φ : Remark 6.4. Why can the expressions on the right-hand sides of (6.1) and (6.2) be useful? First, if one fixes ω ∈ Ω, then these expressions become stochastic integrals with respect to martingales with independent increments, which are easier to work with. Second, if we are in the quasi-left continuous setting (i.e. M a = 0 and we have only Poisson-like jumps), then we end up with γ-norms and the norms generated by Cox processes, which might be of γ-radonifying nature but with the Poisson distribution (see Remark 3.26).
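In a Hilbert space the γ-norm appearing in these right-hand sides is just the Hilbert-Schmidt norm (cf. Subsection 2.11): E ‖Σ k g k x k ‖² = Σ k ‖x k ‖² for i.i.d. standard Gaussians g k . A Monte Carlo sanity check in R² (all names are ours):

```python
import random

def gamma_norm_sq_mc(vectors, n_samples, rng):
    """Monte Carlo estimate of E || sum_k g_k x_k ||^2 for i.i.d. standard
    Gaussians g_k -- the square of the gamma-norm of the map e_k -> x_k."""
    total = 0.0
    for _ in range(n_samples):
        g = [rng.gauss(0.0, 1.0) for _ in vectors]
        v = [sum(gk * xk[i] for gk, xk in zip(g, vectors))
             for i in range(len(vectors[0]))]
        total += sum(c * c for c in v)
    return total / n_samples

rng = random.Random(7)
xs = [[1.0, 0.0], [0.5, 0.5], [0.0, 2.0]]
hs_sq = sum(c * c for x in xs for c in x)   # Hilbert-Schmidt norm squared
est = gamma_norm_sq_mc(xs, 40000, rng)
assert abs(est - hs_sq) < 0.2
```

Outside the Hilbert space setting this identity fails, which is exactly why the γ-norm is the right substitute for the Hilbert-Schmidt norm in general Banach spaces.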
Remark 6.5. Thanks to Theorem 4.6 both (6.2) and the upper bound of (6.1) hold true if X has the decoupling property.

Weak tangency versus tangency
A natural question arises while working with local characteristics in infinite dimensions: given a Banach space X (perhaps not UMD) and an X-valued martingale M , can we have results of the form (3.3) for more general Banach spaces by using a family of weak local characteristics ([ M, x * c ], ν M,x * ) x * ∈X * instead of the local characteristics discovered in Section 3 (note that the latter might not even exist by Theorem 3.7)? And how do these weak local characteristics correspond to those defined in Section 3? Let us answer these questions. First we will need the following definitions. Here we show that weak tangency coincides with tangency in the UMD case, so this approach cannot extend Theorem 3.7 in the UMD setting. For the proof we will need the following lemma.
is continuous for X* endowed with the weak* topology.
Proof. By a stopping time argument and by Remark 2.10 we may assume that E sup_{t≥0} ‖M_t‖ < ∞. Let (x*_n)_{n≥1} be a weak* Cauchy sequence with limit x*. By the definition of weak* convergence we have that ⟨x, x*_n⟩ → ⟨x, x*⟩ for any x ∈ X. Thus by [56, Theorem 26.6] a.s.
it is enough to show that ‖[⟨M, x* − x*_n⟩]_t‖_{L^{1/2}(Ω)} → 0 as n → ∞, which follows from the Burkholder-Davis-Gundy inequalities [56, Theorem 26.12]. (This decomposition exists as M and N have local characteristics.) First notice that for any t ≥ 0 and for any x* ∈ X* a.s.
where (*) follows from the fact that ⟨M^c, x*⟩ = ⟨M, x*⟩^c and ⟨N^c, x*⟩ = ⟨N, x*⟩^c a.s. (see [32,116,118]). Hence the processes coincide a.s. on a weak* dense subset of X*, which can be assumed countable by the sequential Banach-Alaoglu theorem, and thus they coincide on the whole of X* by Lemma 7.4. Now let us show that ν^M = ν^N a.s. Fix Borel sets B ⊂ X and A ⊂ R_+. It is sufficient to show that a.s. ν^M(A × B) = ν^N(A × B). (7.1) As X is separable, the Borel σ-algebra of X is generated by cylinders (see e.g. [8, Section 2.1]), so we may assume that B is a cylinder as well, i.e. there exist linear functionals x*_1, . . . , x*_m ∈ X* and a Borel set B̃ ⊂ R^m such that x ∈ B if and only if (⟨x, x*_1⟩, . . . , ⟨x, x*_m⟩) ∈ B̃. Let Y := span(x*_1, . . . , x*_m) and let (y_n)_{n≥1} be a dense sequence in Y. Then by the assumption of the theorem there exists Ω_0 ⊂ Ω of full measure such that on Ω_0 we have ν^{M,y_n} = ν^{N,y_n}, n ≥ 1, and as (y_n)_{n≥1} is dense in Y, by a continuity argument we have that on Ω_0

ν^{M,y} = ν^{N,y}, y ∈ Y. (7.2)

Let P : X → R^m be such that Px = (⟨x, x*_1⟩, . . . , ⟨x, x*_m⟩) ∈ R^m for any x ∈ X. Then by (7.2), by Lemma A.2, and by the Cramér-Wold theorem (see e.g. [7, Theorem 29.4] and [5]) we have the equality ν^M(A × B) = ν^N(A × B) on Ω_0, and thus (7.1) follows. Consequently, M and N have the same local characteristics, and thus they are tangent.
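The cylindrical step of the argument above can be recorded schematically as follows (notation as in the proof; a sketch rather than a verbatim restatement): with Px = (⟨x, x*_1⟩, . . . , ⟨x, x*_m⟩) and Y = span(x*_1, . . . , x*_m),

```latex
\nu^{M,y}=\nu^{N,y}\ \text{ for all } y\in Y
\;\Longrightarrow\;
\nu^{M}\bigl(A\times P^{-1}(\widetilde B)\bigr)
=\nu^{N}\bigl(A\times P^{-1}(\widetilde B)\bigr)
\quad\text{a.s.\ on }\Omega_0,
```

by the Cramér-Wold theorem applied to the R^m-valued projections; since cylinders generate the Borel σ-algebra of the separable space X, this identifies ν^M and ν^N.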
Assume now that inequalities of the form (3.3) hold for some Banach space X, for some 1 ≤ p < ∞, and for all weakly tangent martingales. Then in particular for any independent Brownian motions W and W̃ and for any elementary predictable Φ : R_+ × Ω → X, for the martingales M := ∫ Φ dW and N := ∫ Φ dW̃ we have by (2.17) that for any t ≥ 0 a.s.
so M and N are weakly tangent, and thus (3.3) holds for them, which implies UMD e.g. by [24,36,46] and by the good-λ inequalities (5.3).

Decoupled tangent martingales and the recoupling property
An interesting question is the following. Thanks to Theorem 3.8 we know that for any given UMD space X, any X-valued local martingale has a decoupled tangent local martingale. Using Section 7 one can try to extend the notion of a decoupled tangent local martingale by exploiting weak local characteristics in Definition 3.3: for a general Banach space X, a local martingale N defined on an enlarged probability space with an enlarged filtration is called decoupled tangent to a local martingale M if M is a local martingale with respect to the enlarged filtration having the same weak local characteristics ([⟨M, x*⟩^c], ν^{M,x*})_{x*∈X*}, and N(ω) is a local martingale with independent increments with weak local characteristics ([⟨M, x*⟩^c](ω), ν^{M,x*}(ω))_{x*∈X*} for a.e. ω from the original probability space. For which Banach spaces X can we guarantee the existence of such an object?
In order to answer this question we need the recoupling property, which is dual to the decoupling property (see Definition 4.1).

Definition 8.1. Let X be a Banach space. Then X is said to have the recoupling property if for some 1 ≤ p < ∞, for any X-valued martingale difference sequence (d_n)_{n≥1} and for a decoupled tangent martingale difference sequence (e_n)_{n≥1} one has that (8.1)

Let us first show the following elementary proposition demonstrating that we can take any 1 ≤ p < ∞ in Definition 8.1.
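The displayed inequality (8.1) is not reproduced above; judging from the duality with the decoupling property (Definition 4.1), it should read as follows, with a constant C_{p,X} independent of the sequences (an assumed form, to be checked against the official definition):

```latex
\mathbb{E}\Bigl\|\sum_{n\ge 1} d_n\Bigr\|^p
\;\le\; C_{p,X}^p\,
\mathbb{E}\Bigl\|\sum_{n\ge 1} e_n\Bigr\|^p,
```

i.e. the martingale is recovered ("recoupled") from its decoupled tangent counterpart, the reverse direction of the decoupling inequality.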

Proposition 8.2.
Let X be a Banach space, let (d_n) be an arbitrary X-valued martingale difference sequence, and let (e_n) be its decoupled tangent martingale difference sequence. Then the following are equivalent.
and (e n ).
and (e n ).
any (d_n) and (e_n).

Recall that X is called a UMD^+ Banach space if for some (equivalently, for all) p ∈ [1, ∞), for every martingale difference sequence (d_n)^∞_{n=1} in L^p(Ω; X) and every independent Rademacher sequence (r_n)^∞_{n=1} one has that (see [24,26,37,46,107]).

Remark 8.3. The recoupling property immediately yields the UMD^+ property for Paley-Walsh and Gaussian martingales (see [46, pp. 498-500], [91, Section 4.2], and [37]), and hence any Banach space X with the recoupling property is superreflexive, has finite cotype (see [37, Theorem 3.2]), and has nontrivial type (due to [47, Theorem 7.3.8] and the superreflexivity of X). It remains open whether the recoupling property implies UMD (note that the recoupling property is equivalent to UMD if X is a Banach lattice thanks to [59, Theorem 8.4]; see also the discussion in [38] and [46, Section O]). Nonetheless, one can show that the recoupling property is in fact equivalent to UMD^+ for general martingales, which is in some sense a dual result to [24, Theorem 6.6(iii)].

Proof. The "only if" part. Assume that X has the recoupling property. Let (d_n)^∞_{n=1} be an X-valued martingale difference sequence and let (e_n)^∞_{n=1} be a corresponding decoupled tangent sequence on an enlarged filtration. Then by Definition 8.1
Note that (d_n − e_n)_{n≥1} is conditionally symmetric. By an approximation argument (adding a sequence (εxr_n) for some x ∈ X, an independent Rademacher sequence (r_n), and some small enough ε > 0) we may assume that P(d_n − e_n = 0) = 0. Moreover, by the same approximation argument we may assume that there exists x* ∈ X* such that ⟨d_n − e_n, x*⟩ ≠ 0 a.s. Let r_n := ⟨d_n − e_n, x*⟩/|⟨d_n − e_n, x*⟩| and ξ_n := (d_n − e_n)/r_n. Let us show that r_n is independent of σ(ξ_n, F_{n−1}). Indeed, for any A ⊂ X with ⟨x, x*⟩ > 0 for any x ∈ A and any B ∈ F_{n−1}, by the conditional symmetry we have that
Therefore, by setting a new filtration (G_n)_{n≥1} := (σ(F_n, ξ_{n+1}))_{n≥1} and noticing that (r_nξ_n) is a martingale difference sequence with respect to this filtration, we can deduce from (8.1), Example 2.28, and the fact that a product of two Rademachers is a Rademacher that for any independent sequence (r_n)_{n≥1} of Rademachers
Finally, by applying a conditional expectation with respect to σ(F, (r_n)) and Jensen's inequality we obtain E‖Σ^∞_{n=1} r_nd_n‖^p ≤ E‖Σ^∞_{n=1} r_n(d_n − e_n)‖^p. By combining all the inequalities above, (8.3) follows.
The "if" part. Let X be UMD^+, let (d_n)_{n≥1} be an X-valued martingale difference sequence, and let (e_n)_{n≥1} be a decoupled tangent sequence. Given the UMD^+ property, we need to show (8.1) for some p ≥ 1. Without loss of generality we may assume that the filtration is generated by (d_n) and the enlarged filtration is generated by (d_n) and (e_n). First assume that (d_n)_{n≥1} is conditionally symmetric. In this case for any Borel A ⊂ X we have that
so (d_n) and (e_n) are tangent. Next, for any fixed ω ∈ Ω, the (e_n(ω)) are independent; hence (e_n) is decoupled. Thus (8.1) follows from (8.5).
Now let (d_n) be general. Then for any sequence of independent Rademachers (r_n), by (8.3) and [47, Proposition 6.1.12],
Let (e_n) be a decoupled tangent sequence to (d_n). Then (r_ne_n) is a decoupled tangent sequence to (r_nd_n) (see (5.5)), which is conditionally symmetric, and hence by the conditionally symmetric case we get

Remark 8.5. The same proof yields that the UMD^− property (the property which is inverse to UMD^+, see e.g. [46, Chapter 4]) implies the decoupling property. The converse statement can be shown for conditionally symmetric martingale difference sequences (which is a weaker form of [24, Theorem 6.6(iii)]), but unfortunately a similar technique seems unable to provide an extension to general martingales, as UMD^− constants heavily dominate the decoupling constants in the real-valued case (see the discussion in [26, pp. 346-348]). The equivalence of UMD^− and the decoupling property remains unknown to the author.
The following theorem is the main result of the section. Let us show that there exists a decoupled tangent martingale N. We will construct N^c, N^q, and N^a separately, and then sum them up. For N^c, notice that the corresponding continuous-part quadratic variation is finite and locally integrable. Therefore, analogously to the proof of Theorem 3.18, there a.s. exist an invertible time-change (τ_s)_{s≥0} with an inverse time-change (A_t)_{t≥0} (see the proof of Theorem 3.18), a Hilbert space H, a process Φ ∈ γ(L²(R_+; H), X) predictable with respect to (F_{τ_s})_{s≥0}, and a cylindrical Wiener process W_H such that for any x* ∈ X* a.s.
As Φ ∈ γ(L²(R_+; H), X), by [119, Subsection 3.2] (see also [81]) Φ is a.s. integrable with respect to an independent cylindrical Brownian motion W̃_H. Further, we obtain a family of independent Gaussians, which can be considered countable a.s. as M has countably many jumps a.s. Let μ_Cox be a Cox process directed by ν and let μ̄_Cox = μ_Cox − ν. Let us show that x is integrable with respect to μ̄_Cox(ω) for a.e. ω ∈ Ω. To this end notice that by [119, Proposition 6.8 and Subsection 7.2] and by the fact that μ̄_Cox(ω) is a compensated Poisson random measure (so it is a random measure with independent increments), x is integrable with respect to μ̄_Cox(ω) if E_Cox ‖x‖_{γ(L²(R_+×X; μ_Cox(ω)), X)} < ∞, which is a.s. satisfied, as for any p ≥ 1
where (A_n)_{n≥1} are defined analogously to the proof of Theorem 9.2 for the quasi-left continuous jumps of M, (γ_t)_{t≥0} are independent Gaussians, and a nondecreasing step function t ↦ n(t) is a discretization such that (γ_{n(t)})_{t≥0} includes finitely many Gaussians and n(t) → t as n → ∞; the filtration generated by (γ_t)_{t≥0} is not countably generated, so an approximation is needed, and such an approximation can be done analogously to Section 10. Therefore t ↦ N^q_t := ∫_{[0,t]×X} x dμ̄_Cox is a well-defined purely discontinuous quasi-left continuous martingale which has independent increments for a.e. ω ∈ Ω thanks to (8.9) (see Lemma 2.20 and [56, Proposition 25.4]). Let N^{a,m} be constructed for every m ≥ 1 similarly to (3.54). Let us show that N^{a,m} converges in strong L¹(Ω; X) as m → ∞, i.e.
for any n ≥ m ≥ 1. Note that by its construction N^{a,n}(ω) − N^{a,m}(ω) has independent increments; hence by [119, Subsection 6.2] and Remark 8.3
where (*) follows from (8.1), [66, Lemma 6.3], and the fact that (γ_kΔN_{τ_k})^n_{k=m+1} is a decoupled tangent sequence to (γ_kΔM_{τ_k})^n_{k=m+1} (this follows analogously to Theorem 3.39, where one needs to reorder (τ_k)^n_{k=m+1}, making it increasing, as was done in the proof of Proposition 3.37), and (**) holds true similarly to [119, Theorem 7.14]. Thus N^a := lim_{n→∞} N^{a,n} is a well-defined purely discontinuous martingale with accessible jumps and has independent increments for a.e. fixed ω ∈ Ω due to its construction (see the proof of Theorem 3.39), analogously to (*) in (8.11) (which holds for any power p ≥ 1).
First of all notice that N by its definition is a decoupled tangent martingale to M on [0, 1 − 1/2^n] for any n ≥ 1. But lim_{t↑1} N_t = Σ_{n≥1} e_n does not exist due to the construction, so it is not a local martingale and thus not a decoupled tangent local martingale to M. Assume that M has some decoupled tangent martingale Ñ. Then by Remark 3.6 for each ω ∈ Ω we have that (ΔN_t(ω))_{t≥0} and (ΔÑ_t(ω))_{t≥0} are equidistributed and independent, so N and Ñ are equidistributed; hence Ñ is not a local martingale, and the desired conclusion holds.

Independent increments
The present section is devoted to martingales with independent increments. As we will see below, in this case one can avoid the UMD assumption in order to show the existence of local characteristics. Moreover, in Subsection 9.2 we will show that such martingales have a precise form in terms of stochastic integrals with respect to cylindrical Brownian motions and Poisson random measures. Recall that we will be talking about martingales with independent increments without the localization assumption, which can be omitted due to Remark 3.5.

Weak local characteristics and independent increments
As was originally shown by Grigelionis in [42] (see also the multidimensional version in [52, p. 106]), a local martingale has independent increments if and only if its local characteristics are deterministic. Let us extend this result to infinite dimensions by using weak local characteristics. Proof. The "only if" part is simple and follows directly from the real-valued case [42] and the fact that if M has independent increments then ⟨M, x*⟩ has independent increments for any x* ∈ X* as well.
Let us show the "if" part. First we reduce to the finite dimensional case. By the Pettis measurability theorem [46, Theorem 1.1.20] we may assume that X is separable. Let (x_n)_{n≥1} be a dense sequence in X \ {0} and let (x*_n)_{n≥1} be a norming sequence, i.e. ⟨x_n, x*_n⟩ = ‖x_n‖ and ‖x*_n‖ = 1 for any n ≥ 1 (such linear functionals exist by the Hahn-Banach theorem). For each m ≥ 1 define Y_m := span(x*_1, . . . , x*_m) and let P_m : Y_m → X* be the corresponding inclusion operator. Then by the definition of (x_n)_{n≥1} and (x*_n)_{n≥1}, the Borel σ-algebra of X is generated by (x*_n)_{n≥1} (e.g. x lies in the unit ball of X if and only if |⟨x, x*_n⟩| ≤ 1 for all n ≥ 1), and so by the definition of P_m, M has independent increments if and only if P*_mM has independent increments for any m ≥ 1. So it suffices to prove the theorem for each m ≥ 1, which is equivalent to proving it in the finite dimensional case, as P*_mM takes values in the finite dimensional space ran(P*_m). Now let X be finite dimensional. Then the theorem follows from [52, Theorem II.4.15].
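In summary, the extension of Grigelionis' theorem established above can be recorded as follows (a schematic restatement in terms of weak local characteristics):

```latex
M\ \text{has independent increments}
\iff
\bigl([\langle M,x^*\rangle^c],\ \nu^{M,x^*}\bigr)
\ \text{is deterministic for every } x^*\in X^*.
```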

General form of a martingale with independent increments
Now we are going to show that any martingale with independent increments (with values in any Banach space) has local characteristics, so there is no need for weak local characteristics. Moreover, any such martingale has a very specific form, outlined in Theorem 9.2. Recall that a vector-valued stochastic integral of a deterministic function with respect to a compensated Poisson random measure was defined in Definition 2.24. Recall also that ν_na was defined in Lemma 3.15; a measure is quasi-left continuous if and only if the corresponding compensator is non-atomic in time by Remark 3.16 (see also [57, Theorem 9.22]). In order to prove Theorem 9.2 we will use these lemmas.
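The "very specific form" referred to here can be sketched as follows (in the notation used in the proof below; the time-change, the deterministic Φ, and the jump representation are as in Theorem 9.2 itself, so this is only an outline):

```latex
M = M^c + M^q + M^a,
\qquad
M^c\circ\tau^c = \Phi\cdot W_H,
\qquad
M^q = \int x\,\mathrm{d}\widetilde N_{\nu_{na}},
```

where Φ ∈ γ(L²(R_+; H), X) is deterministic, N_{ν_na} is a Poisson random measure whose (deterministic) compensator is ν_na, and M^a collects the jumps at the deterministic times (t_m)_{m≥1}.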
Moreover, by the conditional Jensen inequality [46, Proposition 2.6.29], by the triangle inequality, by the fact that φ has moderate growth, and by the fact that M and M̃ are equidistributed, we have that
Proof. The "only if" part of each of the statements is obvious. Let us show the "if" part. First let us start with (I). Assume that M is not continuous. Then there exists a stopping time τ such that P(ΔM_τ ≠ 0) > 0. Without loss of generality, by multiplying M by a constant, we may assume that P(‖ΔM_τ‖ > 1) > 0. Fix ε < 1/2. Then, as (x_n)_{n≥1} is dense in X, there exists n ≥ 1 such that ‖x_n‖ > 1 − ε and, for the ball B with centre x_n and radius ε, we have that P(ΔM_τ ∈ B) > 0 (such a ball exists as X can be covered by countably many such balls). Then ⟨M, x*_n⟩ is not continuous and the desired follows. Now let us turn to (II). Assume that M is not purely discontinuous. By [116, Subsection 2.5] (see also [32, Subsection 5.2]) this means that there exists a continuous uniformly bounded martingale N : R_+ × Ω → R such that N_0 = 0 and MN is not a martingale. Moreover, by exploiting the proof of [116, Proposition 2.10], we can even find such an N that EM_tN_t ≠ 0 for some t ≥ 0. On the other hand, if ⟨M, x*_n⟩ is purely discontinuous for any n ≥ 1, then by [116, Proposition 2.10] ⟨M, x*_n⟩N is a martingale starting at zero, so consequently EM_tN_t = 0 as (x*_n)_{n≥1} is a norming sequence, and thus M is purely discontinuous.
Let us show (III). Let τ be a predictable stopping time. Then it can be shown that ΔM_τ = 0 a.s. analogously to (I), so M is quasi-left continuous. (IV) follows similarly. Step 1: X is finite dimensional. First assume that X is finite dimensional. Then the existence of the canonical decomposition is guaranteed (see Remark 11.4), and by the fact that M, M^c, M^q, and M^a have independent increments we have that for any t_0 < t_1 < . . . < t_N, for any numbers (a_n)^N_{n=1}, (b_n)^N_{n=1}, and (c_n)^N_{n=1}, and for any vectors
Thus L is purely discontinuous with jumps of size (⟨ΔM_{t_m}, x*⟩)_{m≥1} at (t_m)_{m≥1} by the fact that purely discontinuous martingales with accessible jumps form a closed subspace of L¹(Ω) (see e.g. [116, Proposition 3.30] or [56]), and hence M^a is purely discontinuous with jumps of size (ΔM_{t_m})_{m≥1} at (t_m)_{m≥1}, so it has accessible jumps.
Step 2. Part 2. Construction of M^c. Let us now construct M^c. By the Pettis measurability theorem [46, Theorem 1.1.20], X can be presumed separable. Let (x_n)_{n≥1} be a dense sequence in X, and let (x*_n)_{n≥1} be a norming sequence in X*, i.e. ‖x*_n‖ = 1 and ⟨x_n, x*_n⟩ = ‖x_n‖ for any n ≥ 1. For each n ≥ 1 let M^n := ⟨M, x*_n⟩, and let M^n = M^{n,c} + M^{n,q} + M^{n,a} be the corresponding canonical decomposition. By a stopping time argument, by a rescaling argument, and by Lemma 9.3 we may assume that E sup_{t≥0} ‖M_t‖ ≤ 1. Then by (2.9) and [56, Theorems 26.12 and 26.14] we have that
hence M^c is a martingale. Let us show that it is continuous. As (x*_n)_{n≥1} is a norming sequence, by Lemma 9.4 it is sufficient to show that ⟨M^c, x*_n⟩ is continuous for any n ≥ 1, so it is enough to prove that ⟨M^c, x*_n⟩ = M^{n,c}. First notice that by the construction of W_H in Lemma 3.19 the latter depends only on (M^{n,c})_{n≥1}. Next note that the families (M^{n,q})_{n≥1} and (M^{n,a})_{n≥1} are independent of (M^{n,c})_{n≥1}, which follows from Step 1 of the present proof (Step 1 proves the independence directly for (M^{n,q})^N_{n=1}, (M^{n,a})^N_{n=1}, and (M^{n,c})^N_{n=1} for any N ≥ 1, and the desired independence follows by letting N → ∞). Finally, we have that for any n ≥ 1 and for any t ≥ 0 a.s.
where (*) follows from the fact that W_H may be assumed to depend only on (M^{n,c})_{n≥1} and the fact that M^{n,q} and M^{n,a} are independent of (M^{n,c})_{n≥1}. Now let us show that there exists Φ ∈ γ(L²(R_+; H), X) such that M^c ∘ τ^c = Φ · W_H. First notice that for any x* ∈ X* the martingale ⟨M^c, x*⟩ ∘ τ^c is adapted with respect to the filtration G := (G_s)_{s≥0} generated by W_H. Therefore by the martingale representation theorem (see [97, §V.3] for the case of finite dimensional H; the infinite dimensional case can be shown analogously) there exists a G-predictable process f_{x*} such that
Indeed, as X can be assumed separable, the unit ball of X* is sequentially weak* compact by the sequential Banach-Alaoglu theorem, so we may assume that (x*_n)_{n≥1} is weak* dense in the unit sphere of X*. So for a sequence (y_m)_{m≥1} ⊂ (x*_n)_{n≥1} weak* converging to x* we have, by the Burkholder-Davis-Gundy inequalities [56, Theorem 26.12], by Lemma 9.3, and by the dominated convergence theorem,
so f_{x*} is deterministic as the limit of the deterministic f_{y_m}. Also note that by our assumption from the very beginning of the proof, E‖M_∞‖ = E‖M_T‖ < ∞ for some fixed T > 0. Therefore we have that M^c is γ-radonifying by [119, Subsection 3.2], since the relevant bilinear form is the covariance bilinear form of the Gaussian random variable M^c_T, where (*) follows from Itô's isometry [27, Proposition 4.13] and the definition. Now, in order to show that Φ · W_H coincides with M^c ∘ τ^c, it is sufficient to notice that by (9.7) Φ*x* = f_{x*}, so
and thus the desired follows from [83, Theorem 6.1].
Step 2. Part 3. Construction of M^q. Now let us show that M^q := M − M^c − M^a is purely discontinuous quasi-left continuous and has the following form: M^q = ∫ x dμ̄^{M^q} = ∫ x dÑ_{ν_na} for some Poisson random measure N_{ν_na} with compensator ν_na. First notice that M^q is purely discontinuous quasi-left continuous by Corollary 9.5, as for the sequence (x*_n)_{n≥1} exploited in Step 2, Part 2, ⟨M^c, x*_n⟩ is the continuous part of ⟨M, x*_n⟩ for any n ≥ 1. Moreover, ⟨M^a, x*_n⟩ is the purely discontinuous part with accessible jumps of ⟨M, x*_n⟩, as M^a collects all the deterministic-time jumps of M. Indeed, since by Theorem 9.1 ν^{M,x*_n} is deterministic for any n ≥ 1, its atomic part ν^{M,x*_n}_a (which coincides with ν^{M^a,x*_n} by Proposition 3.14 and Remark 3.16) has a deterministic support, which is a subset of (t_m)_{m≥1} presented in Step 2, Part 1, because if P(Δ⟨M, x*_n⟩_t ≠ 0) > 0 for some t ≥ 0 then P(ΔM_t ≠ 0) > 0. So the jump times of ⟨M, x*_n⟩^a are covered by and coincide with the jump times of ⟨M^a, x*_n⟩; consequently ⟨M^a, x*_n⟩ is the purely discontinuous part with accessible jumps of ⟨M, x*_n⟩ for any n ≥ 1, and thus M^q is the purely discontinuous quasi-left continuous part of the canonical decomposition of M.
Next let us show that μ^{M^q} is a Poisson random measure with compensator ν^{M^q} = ν_na (the latter equality follows from Proposition 3.14, Lemma 3.15, and Remark 3.16). First note that M^q is independent of M^c and M^a and that M^q has independent increments. This follows from a standard finite dimensional argument (see the proof of Theorem 7.3), Step 1, and the Cramér-Wold theorem (see [7, Theorem 29.4]). Now let us fix disjoint cylindrical sets B_1, . . . , B_K ∈ B(X) (see the proof of Theorem 7.3) satisfying dist(B_k, {0}) > ε for any k = 1, . . . , K, for some fixed ε > 0. Then for any stopping time τ we have that (9.8)
and the latter is locally finite if one chooses τ to be the time of the nth jump of M^q of norm more than ε. Therefore we can define point processes L_1, . . . , L_K for any k = 1, . . . , K and any t ≥ 0. But then by [56, Corollary 25.26] and Step 1 these processes are time-changed Poisson processes, where the time-changes are deterministic as the processes ν_na([0, t] × B_k) are deterministic, since ν_na is so. Therefore μ^{M^q}|_{R_+×X\B(0,ε)} is a Poisson random measure with compensator ν_na|_{R_+×X\B(0,ε)} (here B(0, ε) ⊂ X is the ball in X with radius ε and centre 0), and then μ^{M^q} is Poisson, as we can send ε → 0 and use the fact that by (2.14) we have μ^{M^q}(R_+ × {0}) = 0 a.s. Therefore we can set N_{ν_na} := μ^{M^q} and Ñ_{ν_na} := μ̄^{M^q}.
Finally, let us prove the integrability; recall that the definition of such integrability was discussed in Subsection 2.9. Let us show that there exists an increasing family (A_n)_{n≥1} of elements of B(R_+) ⊗ B(X) such that ∪_nA_n = R_+ × X, ∫_{A_n} ‖x‖ dν_na < ∞ for any n ≥ 1, and ∫_{A_n} x dÑ_{ν_na} converges in L¹(Ω) to M^q_∞ = M^q_T. For every k ∈ Z let B_k := B(0, 2^k) \ B(0, 2^{k−1}). By (9.8) and the discussion thereafter we have that
Moreover, the process (9.9) is continuous as ν_na is non-atomic in time. Thus for any n ≥ 1 there exists t^k_n such that ∫_{[0,t^k_n]×B_k} ‖x‖ dν_na ≤ n2^{−k}. Without loss of generality we may assume that (t^k_n)_{n≥1} is an increasing sequence, and that t^k_n → ∞ as n → ∞ for any k ≥ 1. For each n ≥ 1 let us set
Then by the construction of (t^k_n)_{n,k≥1} and by the fact that ν_na(R_+ × {0}) = μ^{M^q}(R_+ × {0}) = 0 a.s. by (2.14), we have that ∫_{A_n} ‖x‖ dν_na < ∞ by (9.9) and (9.10). Let ξ_n := ∫_{A_n} x dÑ_{ν_na} for every n ≥ 1 (see Remark 2.25), and let ξ := M^q_∞. By [46, Theorem 3.3.2], in order to show that ξ_n → ξ in L¹(Ω; X) it is sufficient to prove that

ξ_n = E(ξ | σ(N_{ν_na}|_{A_n})), n ≥ 1. (9.11)

Fix n ≥ 1. To this end it is enough to show that ⟨ξ_n, x*⟩ = E(⟨ξ, x*⟩ | σ(N_{ν_na}|_{A_n})) for any x* ∈ X*. Fix x* ∈ X*. Then ⟨ξ, x*⟩ = ∫_{R_+×X} ⟨x, x*⟩ dÑ_{ν_na}(·, x), as by the Burkholder-Davis-Gundy inequalities [56, Theorem 26.12] and by the dominated convergence theorem
where A^c_n ⊂ R_+ × X is the complement of A_n. Therefore
where (*) holds by the fact that N_{ν_na} is a Poisson random measure, so N_{ν_na}|_{A_n} and N_{ν_na}|_{A^c_n} are independent, and the fact that E∫_{A^c_n} ⟨x, x*⟩ dÑ_{ν_na}(·, x) = 0. Therefore (9.11) holds true, and thus ξ_n → ξ in L¹(Ω; X) by the Itô-Nisio theorem [47, Theorem 6.4.1], so M^q = ∫ x dÑ_{ν_na}(·, x).
Step 3. Proving (9.1). Finally, let us show (9.1). These estimates follow analogously to the finite dimensional case proven in Step 1, exploiting the fact that M^c, M^q, and M^a are independent by the Cramér-Wold theorem (see [7, Theorem 29.4]) and by Step 1.

The approach of Jacod, Kwapień, and Woyczyński
In the present section we establish the infinite dimensional analogue of the celebrated result of Jacod [50] and Kwapień and Woyczyński [64], which says that if one discretizes a real-valued quasi-left continuous martingale M by creating a sequence d^n = (d^n_k)^n_{k=1} = (M_{Tk/n} − M_{T(k−1)/n})^n_{k=1}, and if one considers a decoupled tangent martingale difference sequence d̃^n = (d̃^n_k)^n_{k=1} to d^n, then d̃^n converges in distribution to a decoupled tangent martingale M̃. The goal of the present section is to extend this statement to UMD-valued general local martingales.
Before stating the main theorem of the section we will need the following definitions. First recall that D([0, T], X) denotes the Skorokhod space of all X-valued càdlàg functions on [0, T] (see Definition 2.2). Throughout this section we will assume the Skorokhod space to be endowed with the Skorokhod topology (instead of the sup-norm topology, see Remark 10.9), which is generated by the Skorokhod metric of the following form: for F, G ∈ D([0, T], X) one takes the infimum over all nondecreasing functions λ. We refer the reader to [6,7,53,104,112] for further information on Skorokhod spaces, and to [9,103] for further details on convergence in distribution and on weak convergence.
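The metric generating this topology is the standard Skorokhod J1 metric; for the reader's convenience, its usual form is the following (a standard formulation, cf. [6,53], rather than a verbatim quote of the text's display):

```latex
d(F,G)
=\inf_{\lambda\in\Lambda}
\max\Bigl(\sup_{t\in[0,T]}|\lambda(t)-t|,\;
\sup_{t\in[0,T]}\|F(\lambda(t))-G(t)\|\Bigr),
```

where Λ denotes the set of all nondecreasing continuous bijections λ : [0, T] → [0, T].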
Let X be a Banach space and let M : R_+ × Ω → X be a local martingale. Fix T > 0 and for each n ≥ 1 define
Proof. First notice that the nth power X × · · · × X of X (endowed with the ℓ^p product norm for any 1 < p < ∞) is a UMD Banach space (see e.g. [46, Proposition 4.2.17]). Then it is sufficient to consider an X × · · · × X-valued local martingale (M_1, . . . , M_n) and set M̃_1, . . . , M̃_n to be such that (M̃_1, . . . , M̃_n) is a decoupled tangent local martingale to (M_1, . . . , M_n). Then the lemma follows from Theorem A.1 and the fact that the relevant map is a bounded linear operator from X × · · · × X to X.
Assume the converse, i.e. that Ef(M̃) = lim_{n→∞} Ef(M̃^n). Then we can identify our sequence with a subsequence such that
for some universal constant C_X > 0. For this m, by the assumption of the theorem we can find n ≥ 1 such that
Without loss of generality assume that n = 1 (the proof for n > 1 is analogous). Let μ̃ = μ̃_1 and Ñ_t = μ̃([0, t] × {1}) − t, t ≥ 0. Then W is a standard Brownian motion by [56, Theorems 18.3 and 18.4] and Ñ is a standard compensated Poisson process by [56, Corollary 25.26], and for any 0 ≤ t_1 < . . . < t_N, 0 ≤ s_1 < . . . < s_N and for any numbers α_1, . . . , α_N and β_1, . . . , β_N we have that (here we set t_0 = s_0 = α_0 = β_0 = 0)
is a constant a.s., so by a stopping time argument and by [52, II.4.16] we have that
By [119, Subsection 7.5] we may assume that X is finite dimensional. Let M = M^c + M^q + M^a be the canonical decomposition. We will approximate each part of the canonical decomposition separately (we are allowed to do so by (2.9)). Our goal is to exploit Lemma 10.5 and approximate M̃ in such a way that M̃^n almost coincides with M̃ for any n big enough.
Step 1: approximation of M^c. By the proof of Theorem 3.18 we may assume that there exist an invertible time-change (τ_s)_{s≥0} with an inverse time-change (A_t)_{t≥0}, a separable Hilbert space H, an elementary (F_{τ_s})_{s≥0}-predictable process Φ : R_+ × Ω → L(H, X), and an (F_{τ_s})_{s≥0}-adapted cylindrical Brownian motion W_H such that M^c ∘ τ = Φ · W_H, or, in other words, M^c_t = ∫_0^{A_t} Φ dW_H. By an approximation argument and the definition of a stochastic integral (see Subsection 2.10) we may assume that Φ is elementary (F_{τ_s})_{s≥0}-predictable with respect to the mesh of step T/N_0 for some fixed big natural number N_0. Moreover, for N_0 big enough we can approximate M^c by a continuous martingale (adapted to another filtration) M^{c,N_0} in the following way.
By a stopping time argument and by the fact that A is continuous we may assume that A_T ≤ C a.s. for some fixed C > 0. Let W̃_H be a cylindrical Brownian motion such that
(Though the limit here cannot be considered literally, as Φ ∘ A is a step process with respect to the mesh (10.8) for some fixed N_0 but not for every big N_0, we still can send N_0 to infinity, as for any N ≥ N_0 we have that Φ
for any t ≥ 0.) Then
where (*) holds by (2.18), the fact that X is finite dimensional, and the fact that Φ is elementary predictable and hence bounded, while (**) follows from Lemma 3.24. Moreover, if for any t ≥ 0 we set b^{c,N_0}(t)
Fix ε > 0. Let us show that for N_0 big enough, for a decoupled tangent martingale difference sequence ẽ^{N_0} := (ẽ^{N_0}_k)^{N_0}_{k=1} the following holds true:
Then, similarly to (10.9) and (10.10), we have that for any N_0 big enough (here N^{c,N_0} and N^{q,N_0} are defined analogously to (10.14))
To this end notice that we can choose T and N_0 big enough so that T/N_0 < δ, and hence
(see [10,112] for the definition)? Then we will still have convergence in distribution in Theorem 10.4, as J_1 as a topology is stronger than any of the aforementioned (see e.g. [112, Subsection 11.5.2]).

Exponential formula
In the present section we are going to provide another elementary characterization of local characteristics. Namely, we will generalize a Lévy-Khinchin-type result for general martingales, which is of the form [52, Theorem II.2.47] (see also [49]). First recall that for a given predictable stopping time τ a process V is called a local martingale on [0, τ) if V^{τ_n} is a local martingale for any n ≥ 1 and for any announcing sequence (τ_n)_{n≥1} of τ (see Subsection 2.4 and [52, Definition II.2.46]). The main result of the section is as follows: such that for the process G(x*) : R_+ × Ω → R defined by

G_t(x*) = E(A(x*))_t := e^{A_t(x*)} ∏_{0≤s≤t} (1 + ΔA_s(x*))e^{−ΔA_s(x*)}, t ≥ 0, (11.2)

and for a predictable stopping time τ_{G(x*)}, is a local martingale on [0, τ_{G(x*)}).
Why is Theorem 11.1 connected to the Lévy-Khinchin formula? Assume for a moment that M is quasi-left continuous. Then ν^M is non-atomic in time, so A does not have jumps and G(x*) = e^{A(x*)} for any x* ∈ X*, and thus by Theorem 11.1 we have that τ_{G(x*)} = ∞; hence we obtain the Lévy-Khinchin formula (see e.g. [52,102]). Let us briefly recall the idea of the proof of Theorem 11.1 in the real-valued setting (for the full proof we refer the reader to [52, §II.2d]). We start with proving the "only if" part, i.e. first we show that (11.3) is a local martingale given the corresponding local characteristics. By (12.2) we have the required comparison of ν^N with ν^M on a set of positive probability: if we assume the converse, then for the sets C_n = 2^nB \ 2^{n−1}B, −∞ < n < ∞, where B ⊂ X is the unit ball, we have that C_n = 2C_{n−1}, and hence by (12.2) for any −∞ < n < ∞
so ν^M is infinite (as the C_n are disjoint, ∪C_n = X \ {0}, and as ν^M ≠ 0 there exist n and t such that ν^M([0, t] × C_n) > 0), which contradicts our assumption.
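In the quasi-left continuous case with deterministic characteristics (e.g. for M with independent increments), the resulting identity is presumably the classical Lévy-Khinchin-type formula consistent with [52, Theorem II.2.47] (a sketch; the exact displayed formula is the one referred to in the text above):

```latex
\mathbb{E}\,e^{i\langle M_t,x^*\rangle}
=\exp\Bigl(-\tfrac12\,[\langle M,x^*\rangle^c]_t
+\int_{[0,t]\times X}
\bigl(e^{i\langle x,x^*\rangle}-1-i\langle x,x^*\rangle\bigr)
\,\mathrm{d}\nu^M\Bigr).
```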

Proposition 12.3. Characteristic subordination does not imply weak differential subordination.
Proof. It is sufficient to consider two independent compensated standard Poisson processes Ñ_1 and Ñ_2: they are characteristically subordinate to each other (because they have the same local characteristics), but they are not weakly differentially subordinate to each other, as they have jumps at different times a.s., i.e. a.s. for any t ≥ 0 we have that ΔÑ_1(t) ≠ 0 implies ΔÑ_2(t) = 0 and vice versa.

Let us now formulate the main theorem of the present section.

Theorem 12.5. Let X be a Banach space. Then X is UMD if and only if for any 1 ≤ p < ∞ and for any local martingales M, N : R_+ × Ω → X such that N is characteristically subordinate to M one has that

The proof of the theorem is based on the canonical decomposition (see Subsection 2.7) and on treating each part of the canonical decomposition separately. Therefore we will need the following propositions. Let us now show (12.3). Fix ω ∈ Ω such that ν′(ω) ≤ ν(ω). Then both μ_Cox and μ′_Cox are time-changed Poisson. Let ν″ = ν − ν′ and let μ″_Cox be the Cox process directed by ν″. As ω is fixed, μ′_Cox and μ″_Cox are independent and μ′_Cox + μ″_Cox has the same compensator, and hence coincides in distribution with μ_Cox, so we can set μ_Cox = μ′_Cox + μ″_Cox and μ̄_Cox = μ̄′_Cox + μ̄″_Cox. Therefore (12.3) follows from the fact that for a fixed ω ∈ Ω the process F is deterministic, the fact that a conditional expectation operator is a contraction (see [46, Section 2.6]), and the fact that μ′_Cox and μ″_Cox are independent for any fixed ω ∈ Ω.
We will also need the following proposition, which in some sense extends the stochastic domination inequality [96, Theorem 2] (see also [79]).

Proposition 12.7. Let $X$ be a Banach space, $(\xi_n)_{n=1}^N$ and $(\xi'_n)_{n=1}^N$ be independent $X$-valued symmetric random variables such that for any Borel set $A \subset X \setminus \{0\}$ and for any $n = 1, \ldots, N$ one has that $\mathbb{P}(\xi_n \in A) \leq \mathbb{P}(\xi'_n \in A)$. Then for any convex symmetric function $\varphi : X \to \mathbb{R}_+$ one has that
$$\mathbb{E}\varphi\Bigl(\sum_{n=1}^N \xi_n\Bigr) \leq \mathbb{E}\varphi\Bigl(\sum_{n=1}^N r_n \xi'_n\Bigr), \qquad (12.5)$$
where $(r_n)_{n=1}^N$ is a sequence of i.i.d. Rademachers independent of everything else (see Definition 2.1).

Thus it is sufficient to show (12.5). By an approximation argument we may assume that $(\xi_n)_{n=1}^N$ and $(\xi'_n)_{n=1}^N$ take finitely many values. By the assumption of the proposition, for any $n = 1, \ldots, N$ the random variable $\xi_n$ has the same distribution as $\eta_n(\xi'_n)\xi'_n$, where for any $x \in X \setminus \{0\}$ we define a random variable $\eta_n(x)$ on an independent probability space $(\widetilde\Omega, \widetilde{\mathcal{F}}, \widetilde{\mathbb{P}})$ to be such that $\eta_n(x) \in \{0, 1\}$ a.s. and $\widetilde{\mathbb{E}}\eta_n(x) = \frac{\mathbb{P}(\xi_n = x)}{\mathbb{P}(\xi'_n = x)}$, where we set $\frac{0}{0} := 0$. Fix $\omega \in \Omega$ and $\widetilde\omega \in \widetilde\Omega$. Then in order to show (12.5) it remains to prove that
$$\mathbb{E}_r \varphi\Bigl(\sum_{n=1}^N r_n \eta_n(\xi'_n(\omega))(\widetilde\omega)\xi'_n(\omega)\Bigr) \leq \mathbb{E}_r \varphi\Bigl(\sum_{n=1}^N r_n \xi'_n(\omega)\Bigr),$$
and as all the coefficients $(\eta_n(\xi'_n(\omega))(\widetilde\omega))_{n=1}^N$ are either $0$ or $1$, the latter follows from Jensen's inequality (see [46, Proposition 2.6.29]), since $\sum_{n=1}^N r_n \eta_n(\xi'_n(\omega))(\widetilde\omega)\xi'_n(\omega)$ is a conditional expectation of $\sum_{n=1}^N r_n \xi'_n(\omega)$ given $\sigma\bigl((r_n \eta_n(\xi'_n(\omega))(\widetilde\omega))_{n=1}^N\bigr)$.

Proof of Theorem 12.5. Without loss of generality (as $N_0 \leq M_0$, see [115, Lemma 3.6]) we can set $M_0 = N_0 = 0$. Let $M = M^c + M^q + M^a$ and $N = N^c + N^q + N^a$ be the canonical decompositions. Note that due to Definition 12.1 and Subsection 3.2 we have that $\nu^{N^q, x^*} \leq \nu^{M^q, x^*}$ and $\nu^{N^a, x^*} \leq \nu^{M^a, x^*}$ a.s. for any $x^* \in X^*$, so $N^i$ is characteristically subordinate to $M^i$ for any $i \in \{c, q, a\}$. By (2.9) it is sufficient to show that
$$\mathbb{E} \sup_{t \geq 0} \|N^c_t\|^p \lesssim_{p,X} \mathbb{E} \sup_{t \geq 0} \|M^c_t\|^p, \qquad (12.6)$$
$$\mathbb{E} \sup_{t \geq 0} \|N^q_t\|^p \lesssim_{p,X} \mathbb{E} \sup_{t \geq 0} \|M^q_t\|^p, \qquad (12.7)$$
$$\mathbb{E} \sup_{t \geq 0} \|N^a_t\|^p \lesssim_{p,X} \mathbb{E} \sup_{t \geq 0} \|M^a_t\|^p. \qquad (12.8)$$
First of all, (12.6) follows from (12.1).
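The Bernoulli-thinning coupling $\xi_n \stackrel{d}{=} \eta_n(\xi'_n)\xi'_n$ used in the proof of Proposition 12.7 can be checked exactly on a finite support; the two laws below are hypothetical examples, chosen only to satisfy $\mathbb{P}(\xi = x) \leq \mathbb{P}(\xi' = x)$ for $x \neq 0$:

```python
from fractions import Fraction as F

# Hypothetical finite symmetric laws with p_xi(x) <= p_xi2(x) for x != 0.
p_xi = {-1: F(1, 8), 0: F(3, 4), 1: F(1, 8)}   # law of xi
p_xi2 = {-1: F(1, 4), 0: F(1, 2), 1: F(1, 4)}  # law of xi'

# Thinning: eta(x) ~ Bernoulli(p_xi(x) / p_xi2(x)) for x != 0, so that
# eta(xi')xi' keeps the value x with probability p_xi2(x) * p_xi(x)/p_xi2(x)
# = p_xi(x), and the discarded mass collapses to 0.
law = {0: F(0)}  # law of eta(xi')xi'
for x, q in p_xi2.items():
    if x == 0:
        law[0] += q
    else:
        keep = p_xi.get(x, F(0)) / q
        law[x] = law.get(x, F(0)) + q * keep
        law[0] += q * (1 - keep)
```

Exact arithmetic with `Fraction` confirms that the thinned variable has precisely the law of $\xi$.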
(12.7) follows from Proposition 12.6, the fact that $M^q = \int x \,\mathrm{d}\bar\mu^{M^q}$ and $N^q = \int x \,\mathrm{d}\bar\mu^{N^q}$ by Theorem 3.30, and the fact that $N^q$ is characteristically subordinate to $M^q$. Finally, (12.8) follows from a standard approximation argument (see e.g. Proposition B.1), the fact that any purely discontinuous martingale with finitely many predictable jumps has a discrete representation (see e.g. the proof of Proposition 3.37), Proposition 12.7, the construction of a decoupled tangent martingale from the proof of Theorem 3.39, the symmetrization argument (see the proof of Theorem 5.9), and the fact that $N^a$ is characteristically subordinate to $M^a$.

Characteristic domination
We can strengthen characteristic subordination in the following way. Let $X$ be a Banach space, $M$ and $N$ be $X$-valued martingales. For the proof we will need the following proposition.

Proof. Without loss of generality we may assume that $t \mapsto F(t, \cdot, \cdot)$ is constant a.e. on $\Omega \times J$, as otherwise we approximate $F$ by a step $\mathcal{F}_0$-measurable function and apply the whole proof below to each step of $F$ separately. The proposition follows analogously to Proposition 12.6, but now we need to show (12.3) in a different way. Fix $\omega \in \Omega$ such that $\nu'(\mathbb{R}_+ \times A) \leq \nu(\mathbb{R}_+ \times A) < \infty$, $A \in \mathcal{J}$.
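The claim used here and in Proposition 12.6 that a Cox process with a fixed $\omega$ (hence deterministic, finite directing measure) is a time-changed Poisson process can be sketched numerically; the intensity below is a hypothetical example:

```python
import random

def std_poisson_jumps(horizon, rng):
    """Jump times of a standard (rate 1) Poisson process on [0, horizon]."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(1.0)
        if t > horizon:
            return times
        times.append(t)

# A Cox process directed by a deterministic cumulative intensity Lam is the
# standard Poisson process run on the clock Lam: its jump times are
# Lam^{-1}(tau_k) for the standard Poisson jump times tau_k.
Lam = lambda t: t * t          # hypothetical cumulative intensity
Lam_inv = lambda s: s ** 0.5   # its inverse

rng = random.Random(1)
T = 5.0
tau = std_poisson_jumps(Lam(T), rng)    # standard Poisson on [0, Lam(T)]
cox_jumps = [Lam_inv(s) for s in tau]   # Cox process jump times on [0, T]
```

By construction the time change is monotone, so the jump structure (ordering and count) is preserved while the jump times are rescaled.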