Joint cumulants for natural independence

Many kinds of independence have been defined in non-commutative probability theory. Natural independence is an important class of independence; this class consists of five independences (tensor, free, Boolean, monotone and anti-monotone ones). In the present paper, a unified treatment of joint cumulants is introduced for natural independence. The way we define joint cumulants enables us not only to find the monotone joint cumulants but also to give a new characterization of joint cumulants for other kinds of natural independence, i.e., tensor, free and Boolean independences. We also investigate relations between generating functions of moments and monotone cumulants. We find a natural extension of the Muraki formula, which describes the sum of monotone independent random variables, to the multivariate case.


Introduction
Many kinds of independence are known in non-commutative probability theory. The most important example is the usual independence in probability theory, naturally extended to the non-commutative case. This is called tensor independence. Free independence is another famous example [17,18], and much research has been devoted to it (see [19] for early results). After the appearance of free independence, Boolean [16] and monotone independence [8] were found as other interesting examples of independence. To classify these independences, Speicher defined in [15] universal independence, which satisfies some nice properties such as associativity of independence. After that, Schürmann and Ben Ghorbal formulated universal independence in a categorical setting in [3]. In [9] Muraki defined quasi-universal independence, which allows non-commutativity of independence, by replacing partitions in the definition of universal independence with ordered partitions. Later Muraki introduced natural independence in [10] as a generalization of [3]. He proved that there are only five kinds of natural independence: tensor, free, Boolean, monotone and anti-monotone independence. Since no essential difference appears between monotone and anti-monotone independence for the purpose of this paper, we do not consider anti-monotone independence.
Let (A, ϕ) be an algebraic probability space, i.e., a pair of a unital *-algebra and a state on it. Let A_λ be *-subalgebras, where λ ∈ Λ are indices. The above-mentioned four independences are defined as rules to calculate moments ϕ(X_1 ⋯ X_n) for X_1, …, X_n ∈ ⋃_{λ∈Λ} A_λ.

TH is supported by Grant-in-Aid for JSPS Research Fellows.

Definition 1.1. (1) Tensor independence:
where −→∏_{i∈V} X_i denotes the product of the X_i, i ∈ V, in the same order as they appear in X_1 ⋯ X_n.
(2) Free independence [17]: We assume all A λ contain the unit of A.
Independence for subsets S λ ⊂ A is defined by taking the algebras A λ generated by S λ (without the unit of A in the case of monotone or Boolean independence).
Many probabilistic notions have been introduced for each kind of independence. In particular, analogues of cumulants are a central topic in this field. In usual probability theory, cumulants are extensively used, for instance in the study of the correlation functions of stochastic processes. When more than one random variable is concerned, cumulants of a single random variable are not adequate and their extension to the multivariate case is required. Cumulants for the multivariate case are called joint cumulants or sometimes multivariate cumulants. In free probability theory, Voiculescu introduced free cumulants in [17,18] for a single random variable in analogy with the cumulants of probability theory. Later Speicher defined free cumulants for the multivariate case [14]. Speicher also clarified that non-crossing partitions appear in the relation between moments and free cumulants. The reader is referred to [11] for further references. Boolean cumulants were introduced in [16] in the single-variable case and seemingly in [7] in the multivariate case.
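In the tensor (i.e., classical) case, joint cumulants admit the familiar Möbius inversion over all set partitions: K_n(X_1, …, X_n) = Σ_{π∈P(n)} (−1)^{|π|−1}(|π|−1)! Π_{V∈π} ϕ(Π_{i∈V} X_i). A minimal computational sketch of this known formula follows; the moment functional `m` and the function names are ours, not notation from the paper.

```python
from fractions import Fraction
from math import factorial

def set_partitions(elems):
    """Yield all set partitions of the list `elems` as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        # put `first` into each existing block, or into a new singleton block
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def joint_cumulant(m, variables):
    """Classical (tensor) joint cumulant via Moebius inversion on set partitions:
    K_n = sum over partitions pi of (-1)^(|pi|-1) (|pi|-1)! prod_{V in pi} m(V)."""
    total = Fraction(0)
    for part in set_partitions(list(range(len(variables)))):
        term = Fraction((-1) ** (len(part) - 1) * factorial(len(part) - 1))
        for block in part:
            term *= Fraction(m(tuple(variables[i] for i in block)))
        total += term
    return total

# Example: a single "standard Gaussian" variable with moments 0, 1, 0, 3.
gaussian_moments = {1: 0, 2: 1, 3: 0, 4: 3}
m = lambda idx: gaussian_moments[len(idx)]
print(joint_cumulant(m, ['X', 'X']))            # second cumulant (the variance)
print(joint_cumulant(m, ['X', 'X', 'X', 'X']))  # fourth cumulant vanishes
```

For the Gaussian moment sequence all cumulants beyond the second vanish, which the sketch reproduces exactly in rational arithmetic.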
Lehner unified many kinds of cumulants in non-commutative probability theory in terms of Good's formula. A crucial idea was a very general notion of independence called an exchangeability system [7]. Monotone cumulants, however, cannot be defined in Lehner's approach. This is because monotone independence is non-commutative: if X and Y are monotone independent, then Y and X are not necessarily monotone independent. Therefore, the concept of "mutual independence of random variables" fails to hold. In spite of this, we found a way to define monotone cumulants uniquely for a single variable in [6]. In the present paper, we generalize the method to define joint cumulants for monotone independence.
For tensor, free and Boolean cumulants, the following properties are considered to be basic.

(MK1) Multilinearity: K_n : A^n → C is multilinear.
(MK2) Polynomiality: there exists a polynomial P_n such that ϕ(X_1 ⋯ X_n) = K_n(X_1, …, X_n) + P_n(…), where the arguments of P_n are joint moments taken over nonempty, disjoint proper subsets of {X_1, …, X_n}.
(MK3) Vanishing of mixed cumulants: K_n(X_1, …, X_n) = 0 whenever {1, …, n} splits into two nonempty, disjoint subsets I and J such that {X_i}_{i∈I} and {X_j}_{j∈J} are independent.

Cumulants for a single variable can be defined from joint cumulants: K_n(X) := K_n(X, …, X). Clearly the additivity of cumulants for a single variable, K_n(X + Y) = K_n(X) + K_n(Y) for independent X and Y, follows from the property (MK3). The additivity of monotone cumulants for a single variable does not hold because of the non-commutativity of monotone independence. Instead, we proved in [6] that monotone cumulants for a single variable satisfy K_n(N.X) = N K_n(X). The notion of a dot operation is important throughout this paper. This notion was used in the classical umbral calculus [12]. Section 2 is devoted to the definition of the dot operation associated to each notion of independence.
In Section 3 we define joint cumulants for natural independence in a unified way, along an idea similar to [6]. The new notion here is the monotone joint cumulants, denoted K^M_n. The property (MK3), however, does not hold for the reason above. Alternatively, one might expect that (MK3) holds for identically distributed random variables, in view of the single-variable case. This is, however, not the case; as we shall see later, K^M_3(X, Y, X) ≠ 0 for monotone independent, identically distributed X and Y. To solve this problem, we generalize the condition (MK3) in Section 3. We can prove the uniqueness of joint cumulants under the generalized condition.
Then we prove the moment-cumulant formulae for natural independences in Sections 4 and 5. The formulae for universal independences (tensor, free, Boolean) are known facts, but our proof relates the highest coefficients and the moment-cumulant formulae. This proof is, however, not applicable to the monotone case, and the monotone moment-cumulant formula is proved in a more direct way.
In Section 6 we clarify the relation of generating functions for monotone independence. We need to introduce a parameter t which arises naturally from the dot operation. This parameter can be understood as the parameter of a formal convolution semigroup.

Dot operation
We used in [6] the dot operation associated to a given notion of independence. This is also crucial in the definition of joint cumulants for natural independence, that is, tensor, free, Boolean and monotone ones.

Definition 2.1. We fix a notion of independence among tensor, free, Boolean and monotone. Let (A, ϕ) be an algebraic probability space. We take copies {X^{(j)}}_{j≥1} in an algebraic probability space (Ã, ϕ̃) for every X ∈ A such that
(1) X ↦ X^{(j)} is a *-homomorphism from A to Ã for each j ≥ 1;
(2) ϕ̃(X^{(j)}) = ϕ(X) for every X ∈ A and j ≥ 1;
(3) the subalgebras Ã^{(j)} := {X^{(j)}}_{X∈A}, j ≥ 1, are independent.
Then we define the dot operation by N.X := X^{(1)} + X^{(2)} + ⋯ + X^{(N)} for X ∈ A and a natural number N ≥ 0, with the understanding that 0.X = 0. Similarly we can iterate the dot operation more than once; for instance, N.(M.X) can be defined (in a suitable space).
Remark 2.2. (1) The notation N.X is inspired by "the classical umbral calculus" [12]. Indeed, this notion can be used to develop some kind of umbral calculus in the context of quantum probability.
(2) In many cases, we denote ϕ̃ by ϕ for simplicity.
We can explicitly construct the above copies as follows. Let ⋆ be any one of the natural products of states (tensor, free, Boolean and monotone) on the free product of algebras, and let Λ be a suitable index set. For an algebraic probability space (A, ϕ), we prepare copies {(A_λ, ϕ_λ)}_{λ∈Λ} of it, i.e., (A_λ, ϕ_λ) = (A, ϕ) for any λ ∈ Λ. Let us define the free product of algebras Ã := *_{λ∈Λ} A_λ and the natural product of states ϕ̃, where X_λ is equal to X as an element of A = A_λ. We denote by the same symbol the map X ↦ X_λ, which can be extended to a *-homomorphism on A. Then iteration of dot operations, for instance N.(M.X), can be realized in this space.

Remark 2.3. While tensor, free and Boolean independences provide exchangeability systems, monotone independence does not. However, we can extend an exchangeability system to include monotone independence. More precisely, an exchangeability system for an algebraic probability space (A, ϕ) consists of copies {X^{(i)}}_{i≥1} of random variables X ∈ A such that the joint moments of arbitrary random variables are invariant under any permutation σ of N. Let us consider the weaker invariance that the joint moments are invariant under any order-preserving permutation σ, i.e., a permutation σ of N such that i < j implies σ(i) < σ(j). Then the copies in Definition 2.1 satisfy this weaker invariance for monotone independence as well as for the other three independences.
Proposition 2.4 (Associativity of the dot operation). We fix a notion of independence among the four. Then the dot operation satisfies N.(M.X) = (NM).X in the sense of joint moments: the families {X_i^{(j)}}_{i=1}^n are independent for each j, and the families for distinct j are independent. Since natural independence is associative, the random variables in (2.2) satisfy a stronger condition of independence than those in (2.1). The condition of independence in (2.1) is, however, enough to calculate the expectation only by sums and products of joint moments of the X_i.

Generalized cumulants
The following properties are basic for joint cumulants in tensor, free and Boolean independence.

(MK1) Multilinearity: K_n : A^n → C is multilinear.
(MK2) Polynomiality: there exists a polynomial P_n such that ϕ(X_1 ⋯ X_n) = K_n(X_1, …, X_n) + P_n(…), where the arguments of P_n are joint moments taken over nonempty, disjoint proper subsets of {X_1, …, X_n}.
(MK3) Vanishing of mixed cumulants: K_n(X_1, …, X_n) = 0 whenever {1, …, n} splits into two nonempty, disjoint subsets I and J such that {X_i}_{i∈I} and {X_j}_{j∈J} are independent.

Monotone cumulants do not satisfy (MK3), even if the X_i's are identically distributed. For instance, K^M_3(X, Y, X) ≠ 0 if X and Y are monotone independent and identically distributed (see Example 5.4 in Section 5). Instead we consider the following property.

(MK3') Extensivity: K_n(N.X_1, …, N.X_n) = N K_n(X_1, …, X_n).

The terminology of extensivity is taken from the property of Boltzmann entropy.
In the tensor, free and Boolean cases, it is well known that there exist cumulants which satisfy (MK1), (MK2) and (MK3), and hence generalized cumulants exist obviously. Here we discuss the uniqueness of generalized cumulants for all natural independences, including monotone independence.

Theorem 3.1. For any one of tensor, free, Boolean and monotone independence, joint cumulants satisfying (MK1), (MK2) and (MK3') are unique.
Proof. We fix a notion of independence. Let {K^{(1)}_n} and {K^{(2)}_n} be two families of cumulants, with possibly different polynomials in the conditions (MK1), (MK2) and (MK3'). By the recursive use of (MK2), ϕ(X_1 ⋯ X_n) can be expressed as a polynomial Q^{(1)} of the K^{(1)}_p's, and also as another polynomial Q^{(2)} of the K^{(2)}_p's. It follows from (MK1) that these polynomials Q^{(1)} and Q^{(2)} have no constant terms, and their linear terms are K^{(1)}_n(X_1, …, X_n) and K^{(2)}_n(X_1, …, X_n), respectively. Both the K^{(1)}_p's and the K^{(2)}_p's satisfy (MK3'). The coefficients of N in the above two expressions must be the same. Therefore, K^{(1)}_n = K^{(2)}_n for any n.

The above theorem implies that generalized cumulants coincide with the usual cumulants in tensor, free and Boolean independence, since (MK3') is weaker than (MK3). This is nothing but a new characterization of those cumulants.
The existence of cumulants is not trivial.A key fact is the following.
Proposition 3.2. For tensor, free, Boolean and monotone independence, ϕ(N.X_1 ⋯ N.X_n) is a polynomial of N without a constant term.

Proof. First we notice that, for any n ≥ 1, there exists a polynomial S_n (depending on the choice of independence) such that, if {X_i}_{i=1}^n and {Y_j}_{j=1}^n are independent, then ϕ((X_1 + Y_1) ⋯ (X_n + Y_n)) is given by S_n evaluated at joint moments of the X_i's and of the Y_j's. Let {X^{(j)}_i}_{j≥1} be the copies of X_i appearing in Definition 2.1. We prove the claim by induction on n. The claim is obvious for n = 1 since the expectation is linear. We assume that the claim holds for n ≤ k, and replace the X_i and Y_i above by the corresponding dot operations. The right-hand side is then a polynomial of L by assumption. Therefore, the sum is also a polynomial of N without a constant term.

Definition 3.3. We define the n-th monotone (resp. tensor, free, Boolean) cumulant K^M_n (resp. K^T_n, K^F_n, K^B_n) of (X_1, …, X_n) as the coefficient of N in the polynomial ϕ(N.X_1 ⋯ N.X_n).

It is easy to see from the proof of Proposition 3.2 that the multilinearity (MK1) and polynomiality (MK2) hold. The extensivity (MK3') comes from the associative law of the dot operation as follows.
Proof. The idea is the same as in [6]. We recall that the dot operation is associative, so that ϕ(M.(N.X_1) ⋯ M.(N.X_n)) = ϕ((MN).X_1 ⋯ (MN).X_n); comparing the coefficients of M on both sides yields (MK3').
We know that K^T, K^F and K^B are none other than the usual tensor, free and Boolean cumulants, respectively, because of Theorem 3.1. Therefore, it is obvious that the property (MK3) holds. However, we can also prove (MK3) directly on the basis of Definition 3.3 as follows.
Proof. We prove the claim for tensor independence; the other cases can be proved in the same way. Let (A_i, ϕ_i) be algebraic probability spaces for i = 1, 2, and let (A_3, ϕ_3) be their tensor product. Let ({X^{(k)}_i}_{k≥1}) be the tensor exchangeability system constructed in [7], with the natural inclusions into A_3. We shall prove that the corresponding subalgebras are tensor independent in (Ã_3, ϕ̃_3). This follows from the equality of states under the natural isomorphism, because the tensor product of states is commutative. Now we take the expectation; the sets {N.X_i ; i ∈ I} and {N.X_i ; i ∈ J} are independent. The definition of cumulants and the property (MK3') imply that the left-hand side contains the term N K_n(X_1, …, X_n), while the right-hand side has no linear term in N; hence K_n(X_1, …, X_n) = 0.

Corollary 3.6. For any one of tensor, free and Boolean independence, cumulants satisfying (MK1), (MK2) and (MK3) uniquely exist.

New look at moment-cumulant formulae for universal independences
Lehner proved in [7] the moment-cumulant formulae in a unified way for tensor, free and Boolean independence via Good's formula. Therefore, one may naturally expect that the moment-cumulant formulae can also be proved on the basis of Definition 3.3. In this section, the crucial concept is universal independence, or a universal product, introduced by Speicher in [15]. He proved that there are only three kinds of universal independence, i.e., tensor, free and Boolean.
We introduce preparatory notation and concepts. A collection π = {V_1, …, V_k} of nonempty, disjoint subsets of {1, …, n} is said to be a partition of {1, …, n} if V_1 ∪ ⋯ ∪ V_k = {1, …, n}. The number k of blocks of π is denoted by |π|. A partition π is said to be crossing if there are blocks V, W ∈ π and elements a, c ∈ V and b, d ∈ W satisfying a < b < c < d; π is said to be non-crossing if it is not crossing. Moreover, a non-crossing partition π is called an interval partition if there are natural numbers 0 = n_0 < n_1 < ⋯ < n_k = n such that π = {{n_0 + 1, …, n_1}, …, {n_{k−1} + 1, …, n_k}}. The sets of partitions, non-crossing partitions and interval partitions are denoted by P(n), NC(n) and I(n), respectively.
A partial order can be defined on P(n). For partitions π and σ, σ ≤ π means that for any block V ∈ σ there exists a block W ∈ π such that V ⊂ W. The partition consisting of the single block {1, …, n} is larger than any other partition.
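The definitions above are all mechanically checkable; a small sketch (the function names are ours) makes the crossing condition, the interval condition and the refinement order concrete.

```python
from itertools import combinations

def is_noncrossing(partition):
    """A partition (list of blocks) is crossing iff some blocks V, W contain
    elements a < b < c < d with a, c in V and b, d in W."""
    for V, W in combinations(partition, 2):
        for a, c in combinations(sorted(V), 2):
            for b, d in combinations(sorted(W), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def is_interval(partition):
    """An interval partition: every block is a run of consecutive integers."""
    return all(max(B) - min(B) + 1 == len(B) for B in partition)

def refines(sigma, pi):
    """sigma <= pi in the refinement order: every block of sigma lies in a block of pi."""
    return all(any(set(V) <= set(W) for W in pi) for V in sigma)

print(is_noncrossing([[1, 3], [2, 4]]))  # False: the classic crossing pairing
print(is_interval([[1, 4], [2, 3]]))     # False: non-crossing but not interval
```

Note that every interval partition is automatically non-crossing, since blocks of consecutive integers cannot interleave.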
For random variables {X_i}_{i=1}^n and a subset W = {w_1 < w_2 < ⋯ < w_p} ⊂ {1, …, n}, we write X_W := X_{w_1} X_{w_2} ⋯ X_{w_p}. We use the same notation for multilinear functionals: for multilinear functionals T_p : A^p → C (1 ≤ p ≤ n) and the subset W above, we define T(X_W) := T_p(X_{w_1}, …, X_{w_p}). Given a family (A_i, ϕ_i) and a partition π, we say that the random variables are arranged along π when X_i and X_j are in the same A_k if i and j are in the same block of π. Consider also a finer partition σ ≤ π. Let a product of states on (unital) algebras, (A_1, ϕ_1), (A_2, ϕ_2) ↦ (A_1 * A_2, ϕ_1 ⋆ ϕ_2), be given, where * denotes the free product (with identification of units in the case of unital algebras).
Definition 4.1.The product ⋆ is called a universal product if it satisfies the following properties.
We give a new proof of the moment-cumulant formulae obtained in the literature. The proof below makes it clear how a partition structure appears in a moment-cumulant formula. The following lemma is a simple consequence of condition (2) of a universal product and (MK2).

Lemma 4.2. Let ⋆ be a universal product, i.e., the tensor, free or Boolean product. Then there exist coefficients d(π), π ∈ P(n), in the expansion of moments in terms of cumulants.

Theorem 4.3. Let c(π; σ) be the universal coefficients for a given universal independence, and let d(π) be as in Lemma 4.2. Then d(π) = c(π; π).
+ a polynomial of N with degree more than |π|.
On the other hand, Lemma 4.2 implies a second expansion; we used (MK3), or the weaker (MK3'), in the second line. Then, by (MK3), which is stronger than (MK3'), this equals the same leading term plus a polynomial of N with degree more than |π|.
We have used the vanishing property (MK3) of joint cumulants, not only (MK3'), for universal independence. Therefore, we cannot apply the above proof to monotone independence. We prove a moment-cumulant formula for monotone independence in the next section.
The highest coefficients for tensor, free and Boolean products are known as follows.
The above result, combined with Theorem 4.3, completes the unified proof of the moment-cumulant formulae for universal products. Namely, we obtain the moments as sums of products of cumulants over P(n), NC(n) and I(n) in the tensor, free and Boolean cases, respectively.
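In the single-variable, identically distributed case the three formulae share one shape, m_n = Σ_π Π_{V∈π} K_{|V|}, with π running over P(n), NC(n) or I(n). A hedged sketch illustrating the three lattices side by side (function names are ours):

```python
from itertools import combinations

def set_partitions(elems):
    """Yield all set partitions of the list `elems` as lists of blocks."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [[first] + part[i]] + part[i + 1:]
        yield [[first]] + part

def is_noncrossing(partition):
    for V, W in combinations(partition, 2):
        for a, c in combinations(sorted(V), 2):
            for b, d in combinations(sorted(W), 2):
                if a < b < c < d or b < a < d < c:
                    return False
    return True

def is_interval(partition):
    return all(max(B) - min(B) + 1 == len(B) for B in partition)

def moment(n, K, kind):
    """n-th moment from single-variable cumulants K[1], K[2], ...:
    sum over all partitions ('tensor'), non-crossing ones ('free'),
    or interval ones ('boolean') of the product of block cumulants."""
    keep = {'tensor': lambda p: True,
            'free': is_noncrossing,
            'boolean': is_interval}[kind]
    total = 0
    for part in set_partitions(list(range(1, n + 1))):
        if keep(part):
            prod = 1
            for V in part:
                prod *= K[len(V)]
            total += prod
    return total

# Standardized case K[1] = 0, K[2] = 1, higher cumulants 0: the fourth moment
# counts pair partitions in each lattice (3 tensor, 2 free, 1 Boolean).
K = {1: 0, 2: 1, 3: 0, 4: 0}
print([moment(4, K, kind) for kind in ('tensor', 'free', 'boolean')])  # [3, 2, 1]
```

The outputs 3, 2 and 1 are the fourth moments of the standard Gaussian, semicircle and symmetric Bernoulli laws, the respective "Gaussian" distributions of the three universal products.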

The monotone moment-cumulant formula
We call a subset V ⊂ {1, …, n} a block of interval type if there exist i, j with 1 ≤ i ≤ j ≤ n such that V = {i, i + 1, …, j}. We denote by IB(n) the set of all blocks of interval type.
Let V be a subset of {1, …, n} written as V = {v_1 < v_2 < ⋯ < v_p}. The figure used in Theorem 6.1 is helpful to understand the situation.
Under the above notation, we can prove the following.
Proof. The subsets V_j play the role of choosing the positions of the Y_i's. Then the claim follows immediately.

Let us define a multilinear functional ϕ_N : A^n → C by ϕ_N(X_1, …, X_n) := ϕ(N.X_1 ⋯ N.X_n). Since this is a polynomial of N, we can replace N ∈ N by t ∈ R and then obtain a multilinear functional ϕ_t : A^n → C. The following is then immediate from Proposition 5.1.
Corollary 5.2. We have the following recurrence differential equations.

Proof. We replace X_i and Y_i in Proposition 5.1 by N.X_i and (N + M).X_i − N.X_i, respectively. We notice that {N.X_i}_{i=1}^n and {(N + M).X_i − N.X_i}_{i=1}^n are monotone independent and that (N + M).X_i − N.X_i is identically distributed to M.X_i. We replace N by t and M by s, and then the equality holds. Equations (1) and (2) follow from the derivatives d/dt|_{t=0} and d/ds|_{s=0}, respectively. We note that the coefficient of s appears only when V^c ∈ IB(n), and therefore we obtain (2) by replacing V^c by V.
Example 5.4. We show the monotone cumulants up to the fourth order.
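In the single-variable case, Definition 3.3 is effectively computable: the moments of the N-fold monotone convolution can be generated by iterating Muraki's composition rule for the F-transform (equivalently, composing the series f(w) = wM(w), where M(w) = Σ_{k≥0} m_k w^k), and K^M_n is then the coefficient of N in the polynomial m_n(N). A sketch under these assumptions; all function names are ours.

```python
from fractions import Fraction
from math import comb

def trunc_mul(a, b, deg):
    """Multiply power series (coefficient lists indexed by power), truncated at deg."""
    out = [Fraction(0)] * (deg + 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j > deg:
                    break
                out[i + j] += ai * bj
    return out

def compose(f, g, deg):
    """(f o g)(w) for series with zero constant term, truncated at degree deg."""
    out = [Fraction(0)] * (deg + 1)
    p = [Fraction(1)] + [Fraction(0)] * deg      # g^0
    for k in range(1, deg + 1):
        p = trunc_mul(p, g, deg)                 # g^k
        if f[k]:
            for i in range(deg + 1):
                out[i] += f[k] * p[i]
    return out

def linear_coeff(values):
    """Coefficient of N in a polynomial p with p(0) = 0, given p(1), ..., p(j),
    extracted via Newton's forward differences."""
    j = len(values)
    p = [Fraction(0)] + [Fraction(v) for v in values]
    c1 = Fraction(0)
    for d in range(1, j + 1):
        delta = sum((-1) ** (d - i) * comb(d, i) * p[i] for i in range(d + 1))
        c1 += Fraction((-1) ** (d - 1), d) * delta
    return c1

def monotone_cumulants(moments):
    """Monotone cumulants K^M_1, ..., K^M_n from moments m_1, ..., m_n."""
    n = len(moments)
    deg = n + 1
    f = [Fraction(0)] * (deg + 1)                # f(w) = w M(w) = w + m1 w^2 + ...
    f[1] = Fraction(1)
    for k, mk in enumerate(moments, start=2):
        f[k] = Fraction(mk)
    samples = [[] for _ in range(n)]             # samples[j-1][N-1] = m_j(N)
    g = f[:]                                     # g is the f-series of the N-fold sum
    for N in range(1, n + 1):
        for j in range(1, n + 1):
            if N <= j:
                samples[j - 1].append(g[j + 1])
        g = compose(f, g, deg)                   # pass to the (N+1)-fold sum
    return [linear_coeff(s) for s in samples]

print(monotone_cumulants([0, 1, 0]))  # arcsine law: K^M_2 = 1, others vanish
print(monotone_cumulants([1, 1, 1]))  # point mass at 1: K^M_1 = 1, others vanish
```

The two printed examples match the characterizations in the text: the arcsine law has vanishing monotone cumulants except K^M_2, while a point mass has only K^M_1.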
We define a generating function of the joint moments of X_1, …, X_r. First we show the following "multivariate Muraki formula" for generating functions.

Theorem 6.1. For any X, we compare the coefficients on both sides. On the left-hand side, the coefficient was calculated in Proposition 5.1. The right-hand side is expanded as a sum, where the summation is understood to be M_Y for k = 0. The question is when the relevant term appears; it can be described by a block V, and then the other blocks (V_i), as in Fig. 1, interpolate (j_1, …, j_k). From Proposition 5.1, the coefficients on both sides coincide.
It is worthwhile to compare Theorem 6.3(2) with the relation in free probability. Let R_X(z_1, …, z_r) be the generating function of free cumulants. Then the relation (6.1) is known; the reader is referred to Corollary 16.16 in [11]. The above relation can also be expressed as a differential equation for the reciprocal Cauchy transform; this is the basic relation of a monotone convolution semigroup, first obtained in [8].
Actually, a motivation of the paper [6] was the observation that the coefficients of A_X(z) had nice properties as cumulants. For instance, the arcsine law with mean 0 and variance 1 is characterized by A_X(z) = −1/z, or equivalently, K^M_1(X) = 0, K^M_2(X) = 1 and K^M_n(X) = 0 for n ≥ 3. Therefore, the problem was how to define cumulants for all probability measures. We can say that we defined monotone cumulants so that (6.2) holds. In a recent paper [5], another way is presented to define monotone cumulants and their generalization on the basis of the differential equation (6.2). However, it is difficult to generalize the method of [5] to the multivariate case. In this sense, the present method has an advantage. Theorem 6.3 extends (6.2) to the multivariate case.
As explained above, t is the parameter of a "formal" convolution semigroup. Let us focus on this point in more detail. Let X be bounded and self-adjoint for simplicity. Then M_X(t; z) may fail to be a moment generating function of a probability measure for general t ≥ 0 and X. More precisely, M_X(t; z) becomes a moment generating function of a probability measure for every t ≥ 0 if and only if the probability distribution of X is monotone infinitely divisible.
The reader might wonder if there is a relation between the moment and cumulant generating functions without the use of t. For instance, one does not need the parameter t in free probability theory [18]. In this case the cumulant generating function K_X is called an R-transform and is denoted by R_X. The basic relation is given by M_X(z) = 1 + R_X(zM_X(z)). Therefore, R_X can be expressed by using the inverse function of zM_X(z). However, such a relation does not exist for monotone cumulants because of the difficulty of the correspondence between a holomorphic map and its vector field [1,2,4].
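The free relation M_X(z) = 1 + R_X(zM_X(z)) can be inverted order by order, since the coefficient of z^n on the right-hand side is r_n plus a polynomial in r_1, …, r_{n−1} and lower moments. A minimal single-variable sketch of this inversion (the names are ours):

```python
from fractions import Fraction

def trunc_mul(a, b, deg):
    """Multiply power series (coefficient lists indexed by power), truncated at deg."""
    out = [Fraction(0)] * (deg + 1)
    for i, ai in enumerate(a[:deg + 1]):
        if ai:
            for j, bj in enumerate(b[:deg + 1 - i]):
                out[i + j] += ai * bj
    return out

def free_cumulants(moments):
    """Free cumulants r_1, ..., r_n from moments m_1, ..., m_n, solving
    M(z) = 1 + R(zM(z)) degree by degree, with M(z) = 1 + m_1 z + m_2 z^2 + ..."""
    n = len(moments)
    M = [Fraction(1)] + [Fraction(m) for m in moments]
    zM = [Fraction(0)] + M[:n]                 # z*M(z), truncated at degree n
    powers = []                                # powers[k-1] = (zM)^k
    p = [Fraction(1)] + [Fraction(0)] * n
    for _ in range(n):
        p = trunc_mul(p, zM, n)
        powers.append(p)
    r = []
    for d in range(1, n + 1):
        # [z^d] of sum_{k<d} r_k (zM)^k; the new r_d closes the equation at degree d,
        # since [z^d] of (zM)^d is 1.
        s = sum(r[k - 1] * powers[k - 1][d] for k in range(1, d))
        r.append(M[d] - s)
    return r

# Semicircle moments (Catalan numbers at even orders): 0, 1, 0, 2
print(free_cumulants([0, 1, 0, 2]))   # only r_2 = 1 survives
```

For the semicircle law only the second free cumulant is nonzero, the free analogue of the Gaussian case; a point mass at 1 (all moments equal to 1) similarly yields r_1 = 1 and nothing else.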
In spite of the above, we can also understand this difficulty in a positive way, since the use of the parameter t points to a new insight into the relationship between independence and differential equations.

Remark 6.4. In the previous paper [6], we obtained a differential equation similar to the differential equation in Theorem 6.3(2), but did not mention the relation between generating functions and cumulants. Now we explain the relation in detail. The differential equation becomes ∂/∂t M_X(t; z) = M_X(t; z) K^M_X(zM_X(t; z)) in the one-variable case. If we use A_X(z) := −zK^M_X(1/z) and the reciprocal Cauchy transform H_X(t; z) = z/M_X(t; 1/z), this becomes ∂/∂t H_X(t; z) = A_X(H_X(t; z)).