Multi-dimensional Gaussian fluctuations on the Poisson space

We study multi-dimensional normal approximations on the Poisson space by means of Malliavin calculus, Stein's method and probabilistic interpolations. Our results yield new multi-dimensional central limit theorems for multiple integrals with respect to Poisson measures -- thus significantly extending previous works by Peccati, Sol\'e, Taqqu and Utzet. Several explicit examples (including in particular vectors of linear and non-linear functionals of Ornstein-Uhlenbeck L\'evy processes) are discussed in detail.


Introduction
Let (Z, Z, µ) be a measure space such that Z is a Borel space and µ is a σ-finite non-atomic Borel measure. We set Z_µ = {B ∈ Z : µ(B) < ∞}. In what follows, we write N̂ = {N̂(B) : B ∈ Z_µ} to indicate a compensated Poisson measure on (Z, Z) with control µ. In other words, N̂ is a collection of random variables defined on some probability space (Ω, F, P), indexed by the elements of Z_µ and such that: (i) for every B, C ∈ Z_µ such that B ∩ C = ∅, the random variables N̂(B) and N̂(C) are independent; (ii) for every B ∈ Z_µ, N̂(B) (law)= N(B) − µ(B), where N(B) is a Poisson random variable with parameter µ(B). A random measure verifying property (i) is customarily called "completely random" or, equivalently, "independently scattered" (see e.g. [24]).

Now fix d ≥ 2, let F = (F_1, . . . , F_d) ⊂ L^2(σ(N̂), P) be a vector of square-integrable functionals of N̂, and let X = (X_1, . . . , X_d) be a centered Gaussian vector. The aim of this paper is to develop several techniques allowing one to assess quantities of the type

d_H(F, X) = sup_{g∈H} |E[g(F)] − E[g(X)]|,    (1)

where H is a suitable class of real-valued test functions on R^d. As discussed below, our principal aim is the derivation of explicit upper bounds in multi-dimensional central limit theorems (CLTs) involving vectors of general functionals of N̂. Our techniques rely on a powerful combination of Malliavin calculus (in a form close to Nualart and Vives [14]), Stein's method for multivariate normal approximations (see e.g. [4,10,22] and the references therein), as well as some interpolation techniques reminiscent of Talagrand's "smart path method" (see [25], and also [3,9]). As such, our findings can be seen as substantial extensions of the results and techniques developed e.g. in [8,10,17], where Stein's method for normal approximation is successfully combined with infinite-dimensional stochastic analytic procedures (in particular, with infinite-dimensional integration by parts formulae).
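For intuition, the two defining properties can be checked by simulation in a toy case; the sketch below (with Z = [0, 1] and µ a multiple of Lebesgue measure, purely illustrative choices) verifies the centering, the variance identity Var N̂(B) = µ(B), and the complete randomness of disjoint sets.

```python
import numpy as np

# Toy illustration of properties (i)-(ii). All concrete choices below
# (Z = [0, 1], µ = intensity × Lebesgue) are assumptions made for the sketch.
rng = np.random.default_rng(0)
intensity = 50.0

def hatN(B, n_samples):
    """Sample N̂(B) = N(B) − µ(B), with N(B) ~ Poisson(µ(B)) and µ(B) = intensity·|B|."""
    a, b = B
    mu_B = intensity * (b - a)
    return rng.poisson(mu_B, size=n_samples) - mu_B

samples_B = hatN((0.0, 0.5), 200_000)   # µ(B) = 25
samples_C = hatN((0.5, 1.0), 200_000)   # disjoint from B, hence independent

print(abs(samples_B.mean()) < 0.5)                # centered
print(abs(samples_B.var() - 25.0) < 1.0)          # Var N̂(B) = µ(B)
print(abs(np.mean(samples_B * samples_C)) < 0.5)  # ≈ 0: complete randomness
```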
The main findings of the present paper are the following: (I) We shall use both Stein's method and interpolation procedures in order to obtain explicit upper bounds for distances such as (1). Our bounds will involve Malliavin derivatives and infinite-dimensional Ornstein-Uhlenbeck operators. A careful use of interpolation techniques also allows us to consider Gaussian vectors whose covariance matrix is not necessarily positive definite. As seen below, our estimates are the exact Poisson counterpart of the bounds deduced in a Gaussian framework in Nourdin, Peccati and Réveillac [10] and Nourdin, Peccati and Reinert [9].
(II) The results at point (I) are applied in order to derive explicit sufficient conditions for multivariate CLTs involving vectors of multiple Wiener-Itô integrals with respect to N̂. These results extend to arbitrary orders of integration and arbitrary dimensions the CLTs deduced by Peccati and Taqqu [18] in the case of single and double Poisson integrals (note that the techniques developed in [18] are based on decoupling). Moreover, our findings partially generalize to a Poisson framework the main result by Peccati and Tudor [19], where it is proved that, on a Gaussian Wiener chaos (and under adequate conditions), componentwise convergence to a Gaussian vector is always equivalent to joint convergence. (See also [10].) As demonstrated in Section 6, this property is particularly useful for applications.
The rest of the paper is organized as follows. In Section 2 we discuss some preliminaries, including basic notions of stochastic analysis on the Poisson space and Stein's method for multi-dimensional normal approximations. In Section 3, we use Malliavin-Stein techniques to deduce explicit upper bounds for the Gaussian approximation of a vector of functionals of a Poisson measure. In Section 4, we use an interpolation method (close to the one developed in [9]) to deduce some variants of the inequalities of Section 3. Section 5 is devoted to CLTs for vectors of multiple Wiener-Itô integrals. Section 6 focuses on examples, involving in particular functionals of Ornstein-Uhlenbeck Lévy processes. An Appendix (Section 7) provides the precise definitions and main properties of the Malliavin operators that are used throughout the paper.

Poisson measures
As in the previous section, (Z, Z, µ) is a Borel measure space, and N̂ is a compensated Poisson measure on Z with control µ.
Remark 2.1 Due to the assumptions on the space (Z, Z, µ), we can always set (Ω, F, P) and N̂ to be such that Ω is the collection of all discrete measures of the form ω = Σ_j δ_{z_j}, with z_j ∈ Z, where δ_z denotes the Dirac mass at z, and N̂(B)(ω) = ω(B) − µ(B) for B ∈ Z_µ is the compensated canonical mapping (see e.g. [20] for more details). For the rest of the paper, we assume that Ω and N̂ have this form. Moreover, the σ-field F is supposed to be the P-completion of the σ-field generated by N̂.
Throughout the paper, the symbol L^2(µ) is shorthand for L^2(Z, Z, µ). For n ≥ 2, we write L^2(µ^n) and L^2_s(µ^n), respectively, to indicate the space of real-valued functions on Z^n which are square-integrable with respect to the product measure µ^n, and the subspace of L^2(µ^n) composed of symmetric functions. Also, we adopt the convention L^2(µ) = L^2_s(µ) = L^2(µ^1) = L^2_s(µ^1), and we denote by ⟨·,·⟩_{L^2(µ^n)} and ‖·‖_{L^2(µ^n)} the usual inner product and norm on L^2(µ^n). For every f ∈ L^2(µ^n), we denote by f̃ the canonical symmetrization of f, that is,

f̃(z_1, . . . , z_n) = (1/n!) Σ_σ f(z_{σ(1)}, . . . , z_{σ(n)}),

where σ runs over the n! permutations of the set {1, . . . , n}. Note that, e.g. by Jensen's inequality, ‖f̃‖_{L^2(µ^n)} ≤ ‖f‖_{L^2(µ^n)}. For every f ∈ L^2_s(µ^n), n ≥ 1, and every fixed z ∈ Z, we write f(z, ·) to indicate the function defined on Z^{n−1} given by (z_1, . . . , z_{n−1}) → f(z, z_1, . . . , z_{n−1}). Accordingly, f̃(z, ·) stands for the symmetrization of the function f(z, ·) (in n − 1 variables). Note that, if n = 1, then f(z, ·) = f(z). For h ∈ L^2(µ), we write N̂(h) = I_1(h) to indicate the (single) Wiener-Itô integral of h with respect to N̂. For every n ≥ 2 and every f ∈ L^2(µ^n), we denote by I_n(f) the multiple Wiener-Itô integral, of order n, of f with respect to N̂. We also set I_n(f) = I_n(f̃), for every f ∈ L^2(µ^n), and I_0(C) = C for every constant C.
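The canonical symmetrization and the accompanying Jensen-type inequality can be illustrated concretely when Z is a finite set, so that kernels become arrays; the following sketch (a discrete stand-in for the non-atomic setting of the paper) computes f̃ and checks both properties.

```python
import itertools
import numpy as np

# A minimal sketch with Z a finite set and µ the counting measure, so that a
# kernel f on Z^n is an n-dimensional array (these discrete choices are
# illustrative assumptions; in the paper µ is non-atomic).
def symmetrize(f):
    """Canonical symmetrization: average of f over all permutations of its arguments."""
    perms = list(itertools.permutations(range(f.ndim)))
    return sum(np.transpose(f, p) for p in perms) / len(perms)

f = np.arange(8.0).reshape(2, 2, 2)   # a non-symmetric kernel, n = 3
f_sym = symmetrize(f)

# f̃ is invariant under any permutation of its arguments,
print(np.allclose(f_sym, np.transpose(f_sym, (2, 0, 1))))
# and Jensen's inequality gives ||f̃|| ≤ ||f|| in L²(µ³).
print(bool(np.linalg.norm(f_sym) <= np.linalg.norm(f) + 1e-12))
```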
The reader is referred e.g. to Privault [21] for a complete discussion of multiple Wiener-Itô integrals and their properties (including the forthcoming Proposition 2.3 and Proposition 2.4) -see also [14,24].

Proposition 2.3
The following properties hold for every n, m ≥ 1, every f ∈ L^2_s(µ^n) and every g ∈ L^2_s(µ^m): The Hilbert space composed of the random variables of the form I_n(f), where n ≥ 1 and f ∈ L^2_s(µ^n), is called the nth Wiener chaos associated with the Poisson measure N̂. The following well-known chaotic representation property is essential in this paper. Proposition 2.4 (Chaotic decomposition) Every random variable F ∈ L^2(F, P) = L^2(P) admits a (unique) chaotic decomposition of the type

F = E[F] + Σ_{n≥1} I_n(f_n),

where the series converges in L^2(P) and, for each n ≥ 1, the kernel f_n is an element of L^2_s(µ^n).

Malliavin operators
For the rest of the paper, we shall use definitions and results related to Malliavin-type operators defined on the space of functionals of the Poisson measure N̂. Our formalism is analogous to the one introduced by Nualart and Vives [14]. In particular, we shall denote by D, δ, L and L^{-1}, respectively, the Malliavin derivative, the divergence operator, the Ornstein-Uhlenbeck generator and its pseudo-inverse. The domains of D, δ and L are written domD, domδ and domL. The domain of L^{-1} is given by the subclass of L^2(P) composed of centered random variables, denoted by L^2_0(P). Although these objects are fairly standard, for the convenience of the reader we have collected some crucial definitions and results in the Appendix (see Section 7). Here, we just recall that, since the underlying probability space Ω is assumed to be the collection of discrete measures described in Remark 2.1, one can meaningfully define the random variable ω → F_z(ω) = F(ω + δ_z), ω ∈ Ω, for every given random variable F and every z ∈ Z, where δ_z is the Dirac mass at z. This yields the following neat representation of D as a difference operator.
A proof of Lemma 2.5 can be found e.g. in [14,17]. Also, we will often need the forthcoming Lemma 2.6, whose proof can be found in [17] (it is a direct consequence of the definitions of the operators D, δ and L). Lemma 2.6 One has that F ∈ domL if and only if F ∈ domD and DF ∈ domδ, and in this case δDF = −LF.
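The difference-operator representation of D can be made concrete in a toy setting; in the sketch below (the functional F and the configuration are hypothetical choices), adding a point inside the relevant set changes F, while a point elsewhere leaves it unchanged.

```python
# A minimal sketch (discrete, hypothetical setup) of the representation of D as
# an add-one-point difference operator: D_z F(ω) = F(ω + δ_z) − F(ω), where ω is
# a point configuration and F a functional of it.
def F(points, B=(0.2, 0.8)):
    """F(ω) = N(B)²: the squared number of points of ω falling in B."""
    count = sum(1 for z in points if B[0] <= z < B[1])
    return count ** 2

def D(F, points, z):
    """Difference operator: evaluate F on ω + δ_z, minus F on ω."""
    return F(points + [z]) - F(points)

omega = [0.1, 0.3, 0.5, 0.9]          # a configuration with N(B) = 2
print(D(F, omega, 0.4))               # (2+1)² − 2² = 5: the added point falls in B
print(D(F, omega, 0.95))              # 0: a point outside B leaves F unchanged
```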
Remark 2.7 For every F ∈ L 2 0 (P), it holds that L −1 F ∈ domL, and consequently
By using the Cauchy-Schwarz inequality, one sees immediately that the contraction f ⋆_r^r g is square-integrable for any choice of r = 0, . . . , p ∧ q, and every f ∈ L^2_s(µ^p), g ∈ L^2_s(µ^q).
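In a discrete toy model this square-integrability follows from exactly the pointwise Cauchy-Schwarz argument; the sketch below (finite Z, counting measure — illustrative assumptions) computes a contraction f ⋆_r^r g as a tensor contraction and checks the resulting norm bound.

```python
import numpy as np

# Discrete sketch (Z finite, µ = counting measure): the contraction f ⋆_r^r g
# integrates out r shared arguments,
#   (f ⋆_r^r g)(z, w) = ∫_{Z^r} f(x, z) g(x, w) µ^r(dx),
# and Cauchy-Schwarz gives ||f ⋆_r^r g|| ≤ ||f|| · ||g||.
rng = np.random.default_rng(1)
m, p, q, r = 4, 3, 2, 2               # |Z| = 4, f on Z^3, g on Z^2, contract r = 2

f = rng.standard_normal((m,) * p)
g = rng.standard_normal((m,) * q)

contraction = np.tensordot(f, g, axes=([0, 1], [0, 1]))   # p + q − 2r free variables
print(bool(np.linalg.norm(contraction)
           <= np.linalg.norm(f) * np.linalg.norm(g) + 1e-12))
```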
As e.g. in [17,Theorem 4.2], we will sometimes need to work under some specific regularity assumptions for the kernels that are the object of our study.

The kernel f is said to satisfy
2. The kernel f is said to satisfy Assumption B if every contraction of the type (z_1, ..., z_{2p−r−l}) → |f| ⋆_l^r |f|(z_1, ..., z_{2p−r−l}) is well-defined and finite for every r = 1, ..., p, every l = 1, ..., r and every (z_1, ..., z_{2p−r−l}) ∈ Z^{2p−r−l}.
The following statement will be used in order to deduce the multivariate CLT stated in Theorem 5.7. The proof is left to the reader: it is a consequence of the Cauchy-Schwarz inequality and of the Fubini theorem (in particular, Assumption A is needed in order to implicitly apply a Fubini argument -- see step (S4) in the proof of Theorem 4.2 in [17] for an analogous use of this assumption). Lemma 2.9 Fix integers p, q ≥ 1, as well as kernels f ∈ L^2_s(µ^p) and g ∈ L^2_s(µ^q) satisfying Assumption A in Definition 2.8. Then, for any integers s, t satisfying 1 ≤ s ≤ t ≤ p ∧ q, one has that f ⋆_t^s g ∈ L^2(µ^{p+q−t−s}), and moreover Remark 2.10 2. One should also note that, for every 1 ≤ p ≤ q and every r = 1, ..., p, for every f ∈ L^2_s(µ^p) and every g ∈ L^2_s(µ^q), not necessarily verifying Assumption A. Observe that the integral on the RHS of (5) is well-defined, since f ⋆_p^{p−r} f ≥ 0 and g ⋆_q^{q−r} g ≥ 0.
To conclude the section, we present an important product formula for Poisson multiple integrals (see e.g. [6,23] for a proof).

Stein's method: measuring the distance between random vectors
We write g ∈ C^k(R^d) if the function g : R^d → R admits continuous partial derivatives up to the order k. 2. The operator norm of a d × d real matrix A is given by ‖A‖_op := sup_{‖x‖_{R^d}=1} ‖Ax‖_{R^d}. 3. For every function g : R^d → R, let ‖g‖_Lip := sup_{x≠y} |g(x) − g(y)| / ‖x − y‖_{R^d}, where ‖·‖_{R^d} is the usual Euclidean norm on R^d. If g ∈ C^1(R^d), we also write M_2(g) := sup_{x≠y} ‖∇g(x) − ∇g(y)‖_{R^d} / ‖x − y‖_{R^d}; if g ∈ C^2(R^d), then M_2(g) = sup_{z∈R^d} ‖Hess g(z)‖_op, where Hess g(z) stands for the Hessian matrix of g evaluated at a point z.
4. For a positive integer k and a function g ∈ C^k(R^d), we set ‖g^{(k)}‖_∞ := max_{1≤i_1≤···≤i_k≤d} sup_{x∈R^d} |∂^k g(x)/∂x_{i_1} ··· ∂x_{i_k}|. In particular, by specializing this definition to g^{(2)} = g″ and g^{(3)} = g‴, we obtain the norms ‖g″‖_∞ and ‖g‴‖_∞. Remark 2.13 1. The norm ‖g‖_Lip is written M_1(g) in [4].
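The norms just introduced are straightforward to evaluate numerically; a quick sketch (the matrix A and the function g below are hypothetical examples):

```python
import numpy as np

# Numerical sketch of the norms above. The operator norm of a matrix is its
# largest singular value, which numpy computes via ord=2.
A = np.array([[3.0, 0.0],
              [4.0, 0.0]])
print(np.isclose(np.linalg.norm(A, 2), 5.0))   # ||A||_op = 5

# For g(x) = x1·x2 the Hessian is the constant matrix [[0, 1], [1, 0]], so
# M_2(g) = sup_z ||Hess g(z)||_op = 1.
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(np.isclose(np.linalg.norm(H, 2), 1.0))
```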
Definition 2.14 The distance d_2 between the laws of two R^d-valued random vectors X and Y such that E‖X‖_{R^d}, E‖Y‖_{R^d} < ∞, written d_2(X, Y), is given by

d_2(X, Y) = sup_{g∈H} |E[g(X)] − E[g(Y)]|,

where H indicates the collection of all functions g ∈ C^2(R^d) such that ‖g‖_Lip ≤ 1 and M_2(g) ≤ 1.

Definition 2.15
The distance d_3 between the laws of two R^d-valued random vectors X and Y such that E‖X‖_{R^d}, E‖Y‖_{R^d} < ∞, written d_3(X, Y), is given by

d_3(X, Y) = sup_{g∈H} |E[g(X)] − E[g(Y)]|,

where H indicates the collection of all functions g ∈ C^3(R^d) such that ‖g″‖_∞ ≤ 1 and ‖g‴‖_∞ ≤ 1.

Remark 2.16
The distances d_2 and d_3 are related, respectively, to the estimates of Section 3 and Section 4. Let j = 2, 3. It is easily seen that, if d_j(F_n, F) → 0, where F_n, F are random vectors in R^d, then necessarily F_n converges in distribution to F. It will also become clear later on that, in the definition of d_2 and d_3, the choice of the constant 1 as a bound for ‖g‖_Lip, M_2(g), ‖g″‖_∞, ‖g‴‖_∞ is arbitrary and immaterial for the derivation of our main results (indeed, we defined d_2 and d_3 in order to obtain bounds as simple as possible). See the two tables in Section 4.2 for a list of available bounds involving more general test functions.
The following result is a d-dimensional version of Stein's Lemma; analogous statements can be found in [4,10,22] -- see also Barbour [1] and Götze [5], in connection with the so-called "generator approach" to Stein's method. As anticipated, Stein's Lemma will be used to deduce an explicit bound on the distance d_2 between the law of a vector of functionals of N̂ and the law of a Gaussian vector. To this end, we need the two estimates (7) (which is proved in [10]) and (8) (which is new).
From now on, given a d × d nonnegative definite matrix C, we write N d (0, C) to indicate the law of a centered d-dimensional Gaussian vector with covariance C.

Lemma 2.17 (Stein's Lemma and estimates) Fix an integer d ≥ 2 and let
2. Assume in addition that C is positive definite and consider a Gaussian random vector X ∼ N_d(0, C). Let g : R^d → R belong to C^2(R^d) with first and second bounded derivatives. Then, the function U_0(g) defined by

U_0(g)(x) = ∫_0^1 (1/(2t)) E[g(√t x + √(1 − t) X) − g(X)] dt

is a solution to the following partial differential equation (with unknown function f):

g(x) − E[g(X)] = ⟨x, ∇f(x)⟩_{R^d} − ⟨C, Hess f(x)⟩_{H.S.}.

Moreover, one has that sup_{x∈R^d} and Proof. We shall only show relation (8), as the proof of the remaining points in the statement can be found in [10]. Since C is a positive definite matrix, there exists a non-singular symmetric matrix A such that A² = C. Setting h(x) = g(Ax), the function h solves the Stein equation associated with Y ∼ N_d(0, I_d), where ∆ is the Laplacian. On the one hand, as Hess g_A(x) = A Hess g(Ax) A (recall that A is symmetric), we have where the inequality above follows from the well-known relation it is easily seen that
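The characterization underlying Stein's Lemma — that X ∼ N_d(0, C) balances ⟨C, Hess f(X)⟩_{H.S.} against ⟨X, ∇f(X)⟩ in expectation — can be checked by Monte Carlo; the test function and covariance matrix below are hypothetical choices.

```python
import numpy as np

# Monte Carlo sketch of the Stein characterization: if X ~ N_d(0, C), then
# E⟨C, Hess f(X)⟩_H.S. = E⟨X, ∇f(X)⟩ for every smooth test function f.
# We use the hypothetical choice f(x) = x1²·x2².
rng = np.random.default_rng(5)
C = np.array([[1.0, 0.5],
              [0.5, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], C, size=400_000)
x1, x2 = X[:, 0], X[:, 1]

# Hess f = [[2x2², 4x1x2], [4x1x2, 2x1²]] and ∇f = (2x1x2², 2x1²x2).
lhs = np.mean(2 * x2**2 + 4 * x1 * x2 + 2 * x1**2)   # E⟨C, Hess f(X)⟩_H.S.
rhs = np.mean(4 * x1**2 * x2**2)                     # E⟨X, ∇f(X)⟩

print(abs(lhs - rhs) < 0.15)   # both sides ≈ 6, by Isserlis' theorem
```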

Upper bounds obtained by Malliavin-Stein methods
We will now deduce one of the main findings of the present paper, namely Theorem 3.3. This result allows one to estimate the distance between the law of a vector of Poisson functionals and the law of a Gaussian vector, by combining the multi-dimensional Stein's Lemma 2.17 with the algebra of the Malliavin operators. Note that, in this section, all Gaussian vectors are supposed to have a positive definite covariance matrix. We start by proving a technical lemma, which is a crucial element in most of our proofs.
where the mappings R ij satisfy Proof. By the multivariate Taylor theorem and Lemma 2.5, where the term R represents the residue: and the mapping (y 1 , y 2 ) → R ij (y 1 , y 2 ) verifies (9).
Remark 3.2 Lemma 3.1 is the Poisson counterpart of the multi-dimensional "chain rule" verified by the Malliavin derivative on a Gaussian space (see [8,10]). Notice that the term R does not appear in the Gaussian framework.
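The mechanism of Lemma 3.1 can also be seen numerically: because D is a difference operator, D_z g(F) is an exact increment of g, and the residue R is controlled by a second-order Taylor bound. The values below are hypothetical.

```python
import numpy as np

# Numerical sketch of Lemma 3.1 (g, F and D_zF are hypothetical values):
#   D_z g(F) = g(F + D_zF) − g(F) = Σ_i ∂_i g(F) · D_zF_i + R,
# where the residue satisfies the Taylor bound |R| ≤ (1/2) M_2(g) ||D_zF||².
g = lambda x: np.sin(x[0]) + np.cos(x[1])
grad_g = lambda x: np.array([np.cos(x[0]), -np.sin(x[1])])

F = np.array([0.3, -0.7])     # value of (F1, F2) at a configuration ω
DF = np.array([0.2, 0.5])     # value of (D_zF1, D_zF2) at (ω, z)

Dg = g(F + DF) - g(F)         # the difference operator applied to g(F)
R = Dg - grad_g(F) @ DF       # the residue of Lemma 3.1
print(abs(R) <= 0.5 * np.sum(DF**2))   # here M_2(g) ≤ 1
```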
The following result uses the two Lemmas 2.17 and 3.1, in order to compute explicit bounds on the distance between the laws of a vector of Poisson functionals and the law of a Gaussian vector.

Theorem 3.3 (Malliavin-Stein inequalities on the Poisson space)
Proof. If either of the expectations in (10) and (11) is infinite, there is nothing to prove: we shall therefore work under the assumption that both expressions (10)-(11) are finite. By the definition of the distance d_2, and by using an interpolation argument (identical to the one used at the beginning of the proof of Theorem 4 in [4]), we need only show the following inequality: for any g ∈ C^∞(R^d) with first and second bounded derivatives, such that ‖g‖_Lip ≤ A and M_2(g) ≤ B. To prove (12), we use Point (ii) in Lemma 2.17 to deduce that It follows that Note that (7) implies that from which we deduce the desired conclusion. Now recall that, for a random variable F = N̂(h) = I_1(h) in the first Wiener chaos of N̂, one has that DF = h and L^{-1}F = −F. By virtue of Remark 2.16, we immediately deduce the following consequence of Theorem 3.3.
Corollary 3.4 For a fixed d ≥ 2, let X ∼ N_d(0, C), with C positive definite, and let F_n = (F_{n,1}, ..., F_{n,d}) = (N̂(h_{n,1}), ..., N̂(h_{n,d})), n ≥ 1, be a collection of d-dimensional random vectors living in the first Wiener chaos of N̂. Call K_n the covariance matrix of F_n, that is: K_n(i, j) = E[N̂(h_{n,i}) N̂(h_{n,j})] = ⟨h_{n,i}, h_{n,j}⟩_{L^2(µ)}. In particular, if K_n(i, j) → C(i, j) (as n → ∞ and for every i, j = 1, ..., d), then d_2(F_n, X) → 0 and F_n converges in distribution to X.
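A quick Monte Carlo sketch of the situation of Corollary 3.4 (with hypothetical choices of Z, µ and the kernels h_i): the empirical covariance of a first-chaos vector indeed matches the inner products ⟨h_i, h_j⟩_{L²(µ)}.

```python
import numpy as np

# Sketch of a first-chaos vector in a toy case (all concrete choices are
# assumptions): Z = [0, 1], µ = λ·Lebesgue, h1 ≡ 1, h2(z) = z. The vector
# (N̂(h1), N̂(h2)) has covariance K(i, j) = ⟨h_i, h_j⟩_{L²(µ)} = λ ∫ h_i h_j dz.
rng = np.random.default_rng(2)
lam, n_sim = 400.0, 20_000

def sample_first_chaos():
    n_pts = rng.poisson(lam)                 # number of Poisson points on [0, 1]
    x = rng.uniform(0.0, 1.0, n_pts)
    return np.array([n_pts - lam, x.sum() - lam * 0.5])   # (N̂(h1), N̂(h2))

F = np.array([sample_first_chaos() for _ in range(n_sim)])
K_emp = np.cov(F.T)
K = lam * np.array([[1.0, 1.0 / 2.0],
                    [1.0 / 2.0, 1.0 / 3.0]])

print(np.allclose(K_emp, K, rtol=0.05))   # empirical covariance ≈ ⟨h_i, h_j⟩
```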
Remark 3.5 1. The conclusion of Corollary 3.4 is by no means trivial. Indeed, apart from the requirement on the asymptotic behavior of covariances, the statement of Corollary 3.4 does not contain any assumption on the joint distribution of the components of the random vectors F n . We will see in Section 5 that analogous results can be deduced for vectors of multiple integrals of arbitrary orders. We will also see in Corollary 4.3 that one can relax the assumption that C is positive definite.
2. The inequality appearing in the statement of Corollary 3.4 should also be compared with the following result, proved in [10], yielding a bound on the Wasserstein distance between the laws of two d-dimensional Gaussian vectors with positive definite covariance matrices K and C, where d_W denotes the Wasserstein distance between the laws of random variables with values in R^d.

Main estimates
In this section, we deduce an alternate upper bound (similar to the ones proved in the previous section) by adopting an approach based on interpolations. We first prove a result involving Malliavin operators.
As anticipated, we will now use an interpolation technique inspired by the so-called "smart path method", which is sometimes used in the framework of approximation results for spin glasses (see [25]). Note that the computations developed below are very close to the ones used in the proof of Theorem 7.2 in [9].
Proof. We will work under the assumption that both expectations in (15) and (16) are finite. By the definition of the distance d_3, we need only show the following inequality: for any φ ∈ C^3(R^d) with second and third bounded derivatives. Without loss of generality, we may assume that F and X are independent. For t ∈ [0, 1], we set We have immediately Indeed, due to the assumptions on φ, the function t → Ψ(t) is differentiable on (0, 1), and one has also On the one hand, we have On the other hand, We now write φ_{t,b}^i(·) to indicate the function on R^d defined by By using Lemma 4.1, we deduce that Thus, Putting the estimates on A and B together, we infer We notice that and also To conclude, we can apply inequality (17) and deduce the estimates thus concluding the proof.
The following statement is a direct consequence of Theorem 4.2, as well as a natural generalization of Corollary 3.4.

Corollary 4.3
For a fixed d ≥ 2, let X ∼ N_d(0, C), with C a generic covariance matrix. Let F_n = (F_{n,1}, ..., F_{n,d}) = (N̂(h_{n,1}), ..., N̂(h_{n,d})), n ≥ 1, be a collection of d-dimensional random vectors in the first Wiener chaos of N̂, and denote by K_n the covariance matrix of F_n. Then, In particular, if relation (13) is verified for every i, j = 1, ..., d (as n → ∞), then d_3(F_n, X) → 0 and F_n converges in distribution to X.

Stein's method versus smart paths: two tables
In the two tables below, we compare the estimates obtained by the Malliavin-Stein method with those deduced by interpolation techniques, both in a Gaussian and in a Poisson setting. Note that the test functions considered below have (partial) derivatives that are not necessarily bounded by 1 (as is the case in the definition of the distances d_2 and d_3), so that the L^∞ norms of various derivatives appear in the estimates. In both tables, d ≥ 2 is a given positive integer. We write (G, G_1, . . . , G_d) to indicate a vector of centered Malliavin differentiable functionals of an isonormal Gaussian process over some separable real Hilbert space H (see [11] for definitions). We write (F, F_1, ..., F_d) to indicate a vector of centered functionals of N̂, each belonging to domD. The symbols D and L^{-1} stand for the Malliavin derivative and the inverse of the Ornstein-Uhlenbeck generator: plainly, both are to be regarded as defined either on a Gaussian space or on a Poisson space, according to the framework. We also consider the following Gaussian random elements: X ∼ N(0, 1), X_C ∼ N_d(0, C) and X_M ∼ N_d(0, M), where C is a d × d positive definite covariance matrix and M is a d × d covariance matrix (not necessarily positive definite).
In Table 1, we present all estimates on distances involving Malliavin differentiable random variables (in both cases of an underlying Gaussian and Poisson space), that have been obtained by means of Malliavin-Stein techniques. These results are taken from: [8] (Line 1), [10] (Line 2), [17] (Line 3) and Theorem 3.3 and its proof (Line 4).
In Table 2, we list the parallel results obtained by interpolation methods. The bounds involving functionals of a Gaussian process come from [9], whereas those for Poisson functionals are taken from Theorem 4.2 and its proof.
Observe that:
• in contrast to the Malliavin-Stein method, the covariance matrix M is not required to be positive definite when using the interpolation technique;
• in general, the interpolation technique requires more regularity on test functions than the Malliavin-Stein method.

CLTs for Poisson multiple integrals
In this section, we study the Gaussian approximation of vectors of Poisson multiple stochastic integrals by an application of Theorem 3.3 and Theorem 4.2. To this end, we shall explicitly evaluate the quantities appearing in formulae (10)-(11) and (15)-(16).
Remark 5.1 (Regularity conventions) From now on, every kernel f ∈ L^2_s(µ^p) is supposed to verify both Assumptions A and B of Definition 2.8. As before, given f ∈ L^2_s(µ^p), and for a fixed z ∈ Z, we write f(z, ·) to indicate the function defined on Z^{p−1} as (z_1, . . . , z_{p−1}) → f(z, z_1, . . . , z_{p−1}). The following convention will also be in force: given a vector of kernels (f_1, ..., f_d) such that f_i ∈ L^2_s(µ^{p_i}), i = 1, ..., d, we will implicitly set for every z ∈ Z belonging to the exceptional set (of µ-measure 0) such that for at least one pair (i, j) and some r = 0, ..., p_i ∧ p_j − 1 and l = 0, ..., r. See Point 3 of Remark 2.10.

5.1 The operators G_k^{p,q} and Ĝ_k^{p,q}

Fix integers p, q ≥ 0 and |q − p| ≤ k ≤ p + q, consider two kernels f ∈ L^2_s(µ^p) and g ∈ L^2_s(µ^q), and recall the multiplication formula (6). We will now introduce an operator G_k^{p,q}, transforming the function f, of p variables, and the function g, of q variables, into a "hybrid" function G_k^{p,q}(f, g), of k variables. More precisely, for p, q, k as above, we define the function (z_1, . . . , z_k) → G_k^{p,q}(f, g)(z_1, . . . , z_k), from Z^k into R, as follows: where the tilde ∼ means symmetrization, and the star contractions are defined in formula (4) and the subsequent discussion. Observe the following three special cases: (i) when p = q = k = 0, then f and g are both real constants, and G_0^{0,0}(f, g) = f × g; (ii) when p = q ≥ 1 and k = 0, then G_0^{p,p}(f, g) = p! ⟨f, g⟩_{L^2(µ^p)}; (iii) when p = 0 and k = q > 0 (so that f is a constant), G_q^{0,q}(f, g)(z_1, ..., z_q) = f × g(z_1, ..., z_q). By using this notation, (6) becomes The advantage of representation (19) (as opposed to (6)) is that the RHS of (19) is an orthogonal sum, a feature that will greatly simplify our forthcoming computations.
For two functions f ∈ L^2_s(µ^p) and g ∈ L^2_s(µ^q), we define the function (z_1, . . . , z_k) → Ĝ_k^{p,q}(f, g)(z_1, . . . , z_k), from Z^k into R, as follows: or, more precisely, Note that the implicit use of a Fubini theorem in the equality (20) is justified by Assumption B -- see again Point 3 of Remark 2.10.
The following technical lemma will be applied in the next subsection.
Lemma 5.2 Consider three positive integers p, q, k such that p, q ≥ 1 and |q − p| ≤ k ≤ p + q − 2. For any two kernels f ∈ L^2_s(µ^p) and g ∈ L^2_s(µ^q), both verifying Assumptions A and B, we have where s(t, k) = p + q − k − t for t = 1, . . . , p ∧ q. Also, C is the constant given by Proof. We rewrite the sum in (20) as where the elementary inequality (Σ_i x_i)² ≤ n Σ_i x_i² has been used in the above deduction.

Some technical estimates
As anticipated, in order to prove the multivariate CLTs of the forthcoming Section 5.3, we need to establish explicit bounds on the quantities appearing in (10)-(11) and (15)-(16), in the special case of chaotic random variables.

Definition 5.3 Let the integers
Remark 5.4 By using (18), one sees that (23) is implied by the following stronger condition: for every k = |q − p| ∨ 1, . . . , p + q − 2, and every (r, l) satisfying p + q − 2 − r − l = k, one has One can easily write down sufficient conditions on f and g ensuring that (24) is satisfied. Proposition 5.5 Fix integers p, q ≥ 1, and let F = I_p(f) and G = I_q(g) be such that the kernels f ∈ L^2_s(µ^p) and g ∈ L^2_s(µ^q) verify Assumptions A, B and C. If where s(t, k) = p + q − k − t for t = 1, . . . , p ∧ q. Finally, the constant C is given by Proof. We select two versions of the derivatives D_zF = pI_{p−1}(f(z, ·)) and D_zG = qI_{q−1}(g(z, ·)), in such a way that the conventions pointed out in Remark 5.1 are satisfied. By using the definition of L^{−1} and (19), we have Notice that, for i ≠ j, the two random variables ∫_Z µ(dz) I_i(G_i^{p−1,q−1}(f(z, ·), g(z, ·))) and ∫_Z µ(dz) I_j(G_j^{p−1,q−1}(f(z, ·), g(z, ·))) are orthogonal in L^2(P). It follows that for p ≠ q, and, for p = q, We shall now assess the expectations appearing on the RHS of (25) and (26). To do this, fix an integer k and use the Cauchy-Schwarz inequality together with (23) to deduce that Relation (27) justifies the use of a Fubini theorem, and we can consequently infer that The remaining estimates in the statement follow (in order) from Lemma 5.2 and Lemma 2.9, as well as from the fact that ‖f̃‖_{L^2(µ^n)} ≤ ‖f‖_{L^2(µ^n)} for all n ≥ 2.
The next statement will be used in the subsequent section.
Proposition 5.6 Let F = (I_{q_1}(f_1), ..., I_{q_d}(f_d)) be a vector of Poisson multiple integrals, such that the kernels f_j verify Assumptions A and B. Then, writing q_* := min{q_1, ..., q_d}, Proof. One has that To conclude, use the inequality which is proved in [17, Theorem 4.2] (see in particular formulae (4.13) and (4.18) therein).

Central limit theorems with contraction conditions
We will now deduce the announced CLTs for sequences of vectors of the type F^{(n)} = (I_{q_1}(f_1^{(n)}), ..., I_{q_d}(f_d^{(n)})), n ≥ 1. As already discussed, our results should be compared with other central limit results for multiple stochastic integrals in a Gaussian or Poisson setting -- see e.g. [8,10,12,13,18,19].
The following statement, which is a genuine multi-dimensional generalization of Theorem 5.1 in [17], is indeed one of the main achievements of the present article.
Then, F (n) converges to X in distribution as n → ∞. The speed of convergence can be assessed by combining the estimates of Proposition 5.5 and Proposition 5.6 either with Theorem 3.3 (when C is positive definite) or with Theorem 4.2 (when C is merely nonnegative definite).
Proof. By Theorem 4.2, it suffices to show that, under the assumptions in the statement, both (30) and (31) tend to 0 as n → ∞. That (30) tends to 0 is a direct consequence of the estimates in Proposition 5.5, whereas Proposition 5.6 shows that (31) converges to 0. This concludes the proof.
If the matrix C is positive definite, then one could alternatively use Theorem 3.3 instead of Theorem 4.2.
Remark 5.8 Apart from the asymptotic behavior of the covariances (29) and the presence of Assumption C, the statement of Theorem 5.7 does not contain any requirements on the joint distribution of the components of F^{(n)}. Besides the technical requirements in Condition 1 and Condition 2, the joint convergence of the random vectors F^{(n)} only relies on the 'one-dimensional' Conditions 3 and 4, which are the same as conditions (II) and (III) in the statement of Theorem 5.1 in [17]. See also Remark 3.5.

Examples
In what follows, we provide several explicit applications of the main estimates proved in the paper. In particular: • Section 6.1 focuses on vectors of single and double integrals.
• Section 6.2 deals with three examples of continuous-time functionals of Ornstein-Uhlenbeck Lévy processes.
The proof, which is based on a direct computation of the general bounds proved in Theorem 3.3, serves as a further illustration (in a simpler setting) of the techniques used throughout the paper. Some of its applications will be illustrated in Section 6.2.
Proposition 6.1 Fix integers n, m ≥ 1, let d = n + m, and let C be a d × d nonnegative definite matrix. Let X ∼ N_d(0, C). Assume that the vector in (32) is such that are well-defined and finite for every value of their arguments and for every 1 ≤ i_1, i_2 ≤ n, (d) every pair (h_i, h_j) verifies Assumption C, which in this case is equivalent to requiring that Then, Proof. Assumptions 1 and 2 in the statement ensure that each integral appearing in the proof is well-defined, and that the use of Fubini arguments is justified. In view of Theorem 4.2, our strategy is to study the quantities in line (15) and line (16) separately. On the one hand, we know that: for 1 ≤ i ≤ m, 1 ≤ j ≤ n, Then, for any given constant a, we have: where S_1, S_2, S_3 are defined as in the statement of the proposition.
On the other hand, As the following inequality holds for all positive reals a, b: By applying the Cauchy-Schwarz inequality, one infers that We have We will now apply Lemma 2.9 to further assess some of the summands appearing in the definition of S_2, S_3. Indeed, by using the equality g^{(k)} Consequently, Remark 6.2 If the matrix C is positive definite, then we have The following result can be proved by means of Proposition 6.1, or as a particular case of Theorem 5.7.
Assume that where for all k, the kernels satisfy respectively the technical Conditions 1 and 2 in Proposition 6.1. Assume moreover that the following conditions hold for each k ≥ 1: or equivalently lim_{k→∞} ‖g^{(k)}‖ 2. For every i = 1, . . . , m and every j = 1, . . . , n, the following conditions are satisfied as k → ∞: Then F^{(k)} → X in law, as k → ∞. An explicit bound on the speed of convergence in the distance d_3 is provided by Proposition 6.1.

Vector of functionals of Ornstein-Uhlenbeck processes
In this section, we study CLTs for some functionals of Ornstein-Uhlenbeck Lévy processes. These processes have been intensively studied in recent years, and applied to various domains such as e.g. mathematical finance (see [15]) and non-parametric Bayesian survival analysis (see e.g. [2,16]). We denote by N̂ a centered Poisson measure over R × R, with control measure given by ds ν(du), where ν(·) is positive, non-atomic and σ-finite. For every positive real number λ, we define the stationary Ornstein-Uhlenbeck Lévy process with parameter λ as

Y_t^λ = √(2λ) ∫_{−∞}^t ∫_R u e^{−λ(t−s)} N̂(ds, du).

We make the following technical assumptions on the measure ν: ∫_R u^j ν(du) < ∞ for j = 2, 3, 4, 6, and ∫_R u^2 ν(du) = 1, to ensure among other things that Y_t^λ is well-defined. These assumptions yield in particular that E[(Y_t^λ)²] = 1 for every t and every λ > 0. We shall obtain central limit theorems for three kinds of functionals of Ornstein-Uhlenbeck Lévy processes. In particular, each of the forthcoming examples corresponds to a "realized empirical moment" (in continuous time) associated with Y^λ, namely: Example 1 corresponds to an asymptotic study of the mean, Example 2 concerns second moments, whereas Example 3 focuses on joint second moments of shifted processes.
Observe that all kernels considered in the rest of this section automatically satisfy our Assumptions A, B and C.
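Before turning to the examples, the process Y^λ can be simulated directly from a Poisson representation of the above type; the sketch below uses the hypothetical Lévy measure ν = (δ_{−1} + δ_{+1})/2, which satisfies the moment assumptions above, and checks the stationarity of the second moment.

```python
import numpy as np

# Simulation sketch of a stationary Ornstein-Uhlenbeck Lévy process
#   Y_t = √(2λ) Σ_{s_k ≤ t} u_k e^{−λ(t−s_k)},
# under the hypothetical choice ν = (δ_{−1} + δ_{+1})/2, so ∫u²ν(du) = 1 and
# ∫uν(du) = 0 (the compensator contribution then vanishes). The stationary
# variance is 2λ ∫u²ν(du) ∫_0^∞ e^{−2λr} dr = 1, for every λ.
rng = np.random.default_rng(3)
lam, T = 2.0, 2_000.0

n_jumps = rng.poisson(T)                     # arrival rate ν(R) = 1 on [0, T]
s = np.sort(rng.uniform(0.0, T, n_jumps))    # jump times
u = rng.choice([-1.0, 1.0], size=n_jumps)    # jump sizes drawn from ν

burn = 5.0                                   # skip the start-up transient
t_grid = np.linspace(burn, T, 4_000)
Y = np.array([np.sqrt(2 * lam) * np.sum(u[s <= t] * np.exp(-lam * (t - s[s <= t])))
              for t in t_grid])

print(abs(Y.mean()) < 0.2)          # centered
print(abs(Y.var() - 1.0) < 0.25)    # unit stationary variance
```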

Example 1 (Empirical Means)
We first recall the definition of the Wasserstein distance. Definition 6.4 The Wasserstein distance between the laws of two R^d-valued random vectors X and Y with E‖X‖_{R^d}, E‖Y‖_{R^d} < ∞, written d_W(X, Y), is given by

d_W(X, Y) = sup_{g∈H} |E[g(X)] − E[g(Y)]|,

where H indicates the collection of all functions g ∈ C^1(R^d) such that ‖g‖_Lip ≤ 1.
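In dimension one this distance admits the classical quantile representation, which makes it easy to estimate from samples; a quick sketch (the distributions and tolerance are illustrative choices):

```python
import numpy as np

# Sketch: in dimension d = 1 the Wasserstein distance can be written as
#   d_W(X, Y) = ∫_0^1 |F_X^{-1}(t) − F_Y^{-1}(t)| dt,
# which for two equal-size samples reduces to the mean absolute difference of
# the sorted values. We check it on a pure mean shift, where
# d_W(N(0, 1), N(0.1, 1)) = 0.1 exactly.
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 100_000)
y = rng.normal(0.1, 1.0, 100_000)

d_w = np.mean(np.abs(np.sort(x) - np.sort(y)))
print(abs(d_w - 0.1) < 0.02)
```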
We define the functional A(T, λ) by A(T, λ) = (1/√T) ∫_0^T Y_t^λ dt. We recall the following limit theorem for A(T, λ), taken from Example 3.6 in [17].
Theorem 6.5 As T → ∞, √(λ/2) A(T, λ) converges in law to N(0, 1), and there exists a constant 0 < α(λ) < ∞, independent of T and such that Here, we present a multi-dimensional generalization of the above result.
where X_B is a centered d-dimensional Gaussian vector with covariance matrix B = (B_ij)_{d×d}, with B_ij = 2/√(λ_i λ_j), 1 ≤ i, j ≤ d. Moreover, there exists a constant 0 < α = α(λ) = α(λ_1, . . . , λ_d) < ∞, independent of T and such that By applying the multiplication formula (6) and a Fubini argument, we deduce that Q(T, λ) = I_1(√T H^⋆_{λ,T}) + I_2(√T H_{λ,T}), which is the sum of a single and a double Wiener-Itô integral. Instead of deducing the convergence for (Q(T, λ_1), . . . , Q(T, λ_d)), we prove the stronger result: as T → ∞. Here, X_D is a centered 2d-dimensional Gaussian vector with covariance matrix D defined as: We prove (35) in two steps (by using Corollary 6.3). Firstly, we aim at verifying Indeed, by standard calculations, we have Secondly, we use the fact that for λ = λ_1, . . . , λ_d, the following asymptotic relations hold as T → ∞:

Appendix: Malliavin operators on the Poisson space
We now define some Malliavin-type operators associated with a Poisson measure N̂, on the Borel space (Z, Z), with non-atomic control measure µ. We follow the work by Nualart and Vives [14], which is in turn based on the classic definition of Malliavin operators on the Gaussian space (see e.g. [7,11]).
(I) The derivative operator D.
For every F ∈ L^2(P), the derivative of F, written DF, is defined as an element of L^2(P; L^2(µ)), that is, of the space of jointly measurable random functions u : Ω × Z → R such that E ∫_Z u_z^2 µ(dz) < ∞.
Definition 7.1 1. The domain of the derivative operator D, written domD, is the set of all random variables F ∈ L^2(P) admitting a chaotic decomposition as in Proposition 2.4 such that Σ_{k≥1} k·k! ‖f_k‖²_{L^2(µ^k)} < ∞. 2. For any F ∈ domD, the random function z → D_zF is defined by D_zF = Σ_{k≥1} k I_{k−1}(f_k(z, ·)). (II) The divergence operator δ. Thanks to the chaotic representation property of N̂, every random function u ∈ L^2(P; L^2(µ)) admits a unique representation of the type

u_z = Σ_{k≥0} I_k(f_k(z, ·)), z ∈ Z,    (38)

where the kernel f_k is a function of k + 1 variables, and f_k(z, ·) is an element of L^2_s(µ^k). The divergence operator δ maps a random function u in its domain to an element of L^2(P).

Definition 7.2
1. The domain of the divergence operator, denoted by domδ, is the collection of all u ∈ L^2(P; L^2(µ)) having the above chaotic expansion (38) and satisfying the condition Σ_{k≥0} (k + 1)! ‖f̃_k‖²_{L^2(µ^{k+1})} < ∞. 2. For u ∈ domδ, the random variable δ(u) is given by δ(u) = Σ_{k≥0} I_{k+1}(f̃_k), where f̃_k is the canonical symmetrization of the (k + 1)-variable function f_k.
As made clear in the following statement, the operator δ is indeed the adjoint operator of D.