Inequalities for permanental processes

Permanental processes are a natural extension of squared Gaussian processes. Each one-dimensional marginal of a permanental process is a squared Gaussian variable, but the process as a whole does not always carry a Gaussian structure. The interest in understanding them better is strongly motivated by the connection, established by Eisenbaum and Kaspi, between infinitely divisible permanental processes and the local times of Markov processes. Unfortunately, the lack of Gaussian structure for general permanental processes makes their behavior hard to handle. We present here analogues, for infinitely divisible permanental vectors, of some well-known inequalities for Gaussian vectors.


Introduction
A real-valued positive vector (ψ_i, 1 ≤ i ≤ n) is a permanental vector if its Laplace transform satisfies, for every (α_1, α_2, ..., α_n) in R_+^n,

E[exp(−(1/2) Σ_{i=1}^n α_i ψ_i)] = |I + αG|^{−β},   (1.1)

where I is the n × n identity matrix, α is the diagonal matrix diag(α_i, 1 ≤ i ≤ n), G = (G(i, j))_{1≤i,j≤n} and β is a fixed positive number. Such a vector (ψ_i, 1 ≤ i ≤ n) is called a permanental vector with kernel (G(i, j), 1 ≤ i, j ≤ n) and index β.
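As a quick sanity check of this determinantal Laplace transform in the simplest Gaussian case, the following sketch compares the closed form |I + αG|^{−β} with a Monte Carlo estimate for β = 1/2, where the permanental vector is realized as the coordinatewise square of a centered Gaussian vector with covariance G. The kernel, the weights and the normalization E[exp(−(1/2) Σ α_i ψ_i)] = |I + αG|^{−β} are assumptions of this illustration; conventions in the literature differ by a factor of 2.

```python
import numpy as np

# Sketch: for a symmetric positive definite kernel G and index beta = 1/2,
# a permanental vector can be realized as psi_i = eta_i**2 with eta ~ N(0, G).
# Its Laplace transform is then E[exp(-(1/2) sum alpha_i psi_i)] = |I + alpha G|^(-1/2).
# (Normalization assumed here; G and alpha are hypothetical examples.)

rng = np.random.default_rng(0)
G = np.array([[1.0, 0.4], [0.4, 1.0]])   # symmetric kernel
alpha = np.array([0.7, 1.3])             # fixed positive weights

# Closed form: det(I + diag(alpha) G)^(-beta) with beta = 1/2
closed = np.linalg.det(np.eye(2) + np.diag(alpha) @ G) ** -0.5

# Monte Carlo with psi = eta^2
eta = rng.multivariate_normal(np.zeros(2), G, size=200_000)
mc = np.exp(-0.5 * (eta**2 @ alpha)).mean()

print(closed, mc)   # the two estimates should agree to within a few 1e-3
```

The same determinant formula, with a general exponent β, is exactly the Laplace transform in (1.1); only the realization through a Gaussian vector is specific to β = 1/2 and symmetric G.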
Necessary and sufficient conditions for the existence of permanental vectors have been established by Vere-Jones [14].
The recent extension of Dynkin's isomorphism theorem [5] (recalled at the beginning of Section 2) to not necessarily symmetric Markov processes suggests that the path behavior of local times of Markov processes should be closely related to the path behavior of infinitely divisible permanental processes. The problem is that permanental processes are new objects of study. The original version of Dynkin's isomorphism theorem connects local times of symmetric Markov processes to squared Gaussian processes. The successful uses of this identity (see [1], [12] or [3]) are mostly based on inequalities specific to Gaussian vectors, such as Slepian's lemma, Sudakov's inequality, or concentration inequalities. Hence the preliminary question to face, in order to exploit the extended Dynkin isomorphism theorem, is the existence of analogous inequalities for permanental vectors.
Here we provide some answers to this first question. We establish in Section 2 a tool (Lemma 2.2) to stochastically compare permanental vectors with index 1/4. The choice of the index is due to technical reasons (see Lemma 2.1), but one notes that infinitely divisible permanental processes are related to local times independently of their indices. This tool then allows us to present, in Section 3, inequalities analogous to Slepian's lemma for infinitely divisible permanental vectors, and a weak version of Sudakov's inequality in Section 4. In Section 5, some concentration inequalities are proved.

A tool
We will use the extension of Dynkin's isomorphism theorem [4] to not necessarily symmetric Markov processes established in [5]. Consider a transient Markov process X with state space E and Green function g = (g(x, y), (x, y) ∈ E × E). We have shown that there exists a permanental process (φ_x, x ∈ E), independent of X, with kernel g and index 2. We have proved that infinite divisibility characterizes the permanental processes admitting the Green function of a Markov process as kernel. Let a and b be elements of E. Denote by (L^{ab}_x, x ∈ E) the process of the total accumulated local times of X conditioned to start at a and killed at its last visit to b. Then the process (L^{aa}_x + (1/2)φ_x, x ∈ E) has the law of the process ((1/2)φ_x, x ∈ E) under the probability (φ_a / E[φ_a]) dP.

Now let (ψ_x, x ∈ E) denote a permanental process, independent of X, with kernel g and index β (such a process exists thanks to the infinite divisibility of φ). Then, similarly to the above relation, one shows that for every β > 0, the process (L^{aa}_x + ψ_x, x ∈ E) has the law of (ψ_x, x ∈ E) under the probability (ψ_a / E[ψ_a]) dP.

We start by showing the existence of a nice density with respect to the Lebesgue measure for permanental vectors with index 1/4. We denote by C_{kj}(G) the entry G_{kj}. We now compute the derivatives of F with respect to C_{kj}. We have the following lemma.
Lemma 2.2. Let ψ = (ψ_{x_k})_{1≤k≤n} be a permanental vector with kernel (G(x_k, x_j), 1 ≤ k, j ≤ n) and index 1/4. Let F be a bounded real-valued function on R^n_+ admitting bounded second order derivatives. We then have: )]. (2.1) Assume moreover that ψ is infinitely divisible. For k ≠ j, we have: where L^{x_j x_k} is a vector independent of ψ with the law of the total accumulated local time of an associated Markov process conditioned to start at x_j and killed at its last visit to x_k.
Remark 2.2.1: Note that (2.2) completes the extended version of Dynkin's isomorphism theorem presented above. Indeed this extended version involves the process L^{ab} only for a = b, whereas in the symmetric case, according to the isomorphism theorem for a ≠ b, (L^{ab} + (1/2)η²) has the same law as ((1/2)η²) under the probability (η_a η_b / G(a, b)) dP, where η is a centered Gaussian process with covariance G.
Proof of Lemma 2.2: Thanks to Lemma 2.1, we know that ψ/2 admits a nice density h(z, G) with respect to the Lebesgue measure on R^n. Moreover, we have: Expanding the determinant along the k-th line and then differentiating with respect to C_{kk} gives where, for any square matrix A, we denote by |A|_{kj} the determinant of the matrix obtained by deleting the k-th line and the j-th column. We remark then that

EJP 18 (2013), paper 99.
(2.2) For k ≠ j, we have: (2.6) Indeed, we have We expand first along the j-th column and differentiate with respect to C_{kj} to obtain: Since ψ is infinitely divisible, we know that there exists a diagonal matrix D = Diag(D(i), 1 ≤ i ≤ n) with positive entries on the diagonal such that G̃ = DGD^{−1} is a potential matrix (see [5]). Denote by L^{x_j x_k} the local time process of the Markov process X with Green function G̃, conditioned to start at x_j and killed at x_k. This is actually the local time process of the h-path transform of X with the function h(x) = G̃(x, x_k), conditioned to start at x_j. The Green function of this last process is To compute the Laplace transform of L^{x_j x_k} we make use of a well-known formula (see e.g. [12] (2.173)) but for the Green function (G̃(x_p, x_q) ]. (2.7) Making use of (2.7), we have: where h_{jk} is the density of the vector ( 2) after two integrations by parts. □

Slepian lemmas for permanental vectors
In view of Lemma 2.2, we see that in order to stochastically compare two permanental vectors, it is better to choose them infinitely divisible. The problem is to find a path from one vector to the other that stays in the set of infinitely divisible permanental vectors. From the definition (1.1), one remarks that the kernel of a permanental vector is not unique. For an infinitely divisible permanental vector with kernel G one can always choose a nonnegative kernel. Indeed, there exists an n × n signature matrix σ such that σGσ is the inverse of an M-matrix (see [5]). Recall that a signature matrix is a diagonal matrix with its diagonal entries in {−1, 1}. A nonsingular matrix A is an M-matrix if its off-diagonal entries are nonpositive and the entries of A^{−1} are nonnegative. In particular all the entries of σGσ are nonnegative. We can choose (|G(i, j)|, 1 ≤ i, j ≤ n) to be the kernel of ψ. Given two inverse M-matrices, the problem then becomes to find a nice path from one to the other that stays in the set of inverse M-matrices. Unlike for positive definite matrices, linear interpolations between two inverse M-matrices are not always inverse M-matrices. This limits the use of the presented tool.
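The matrix classes used above are easy to test numerically. The sketch below (an illustration, not part of the paper) checks the two defining properties of an M-matrix, nonpositive off-diagonal entries and an entrywise nonnegative inverse, and derives from it a test for inverse M-matrices; the example matrix is hypothetical.

```python
import numpy as np

def is_M_matrix(A, tol=1e-10):
    """Nonsingular M-matrix: off-diagonal entries <= 0 and inverse >= 0 entrywise."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):                   # an off-diagonal entry is positive
        return False
    try:
        inv = np.linalg.inv(A)
    except np.linalg.LinAlgError:           # singular: not an M-matrix
        return False
    return bool(np.all(inv >= -tol))        # inverse must be entrywise nonnegative

def is_inverse_M_matrix(G, tol=1e-10):
    """G is an inverse M-matrix iff G is nonsingular and G^{-1} is an M-matrix."""
    try:
        return is_M_matrix(np.linalg.inv(np.asarray(G, dtype=float)), tol)
    except np.linalg.LinAlgError:
        return False

# A classical M-matrix and its inverse (hypothetical 2x2 example):
A = np.array([[2.0, -1.0], [-1.0, 2.0]])
print(is_M_matrix(A))                          # True
print(is_inverse_M_matrix(np.linalg.inv(A)))   # True
```

Checking a linear interpolation between two inverse M-matrices with such a test makes the failure mentioned above concrete: one simply evaluates `is_inverse_M_matrix(t*G1 + (1-t)*G2)` along the segment.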
Here are some comparison results for infinitely divisible permanental processes. The proofs are presented at the end of the section.

Lemma 3.1. Let ψ and ψ̃ be two infinitely divisible permanental vectors with index 1/4 and respective nonnegative kernels G and G̃ such that for every i, j (3.1) Then for every function The proof of Lemma 3.1 will show that (3.1) implies that for every i, j: G(i, j) ≤ G̃(i, j).

Lemma 3.2. Let ψ and ψ̃ be two infinitely divisible permanental vectors with index 1/4 and respective nonnegative kernels G and G̃ such that:
for every 1 ≤ i, j ≤ n. Then for every positive s_1, s_2, ..., s_n, we have: Under the assumptions of Lemma 3.2, we obtain for example: for every increasing function F on R_+, and when moreover G(i, i) = G̃(i, i) for every i, As a direct consequence of the work of Fang and Hu [8], one can stochastically compare two infinitely divisible squared Gaussian processes. Indeed let (η for every function F on R^n_+ increasing in each variable. With elementary considerations, this comparison extends to permanental vectors with symmetric kernels and index 1/4. The above lemmas can be seen as extensions of this relation to infinitely divisible permanental vectors with nonsymmetric kernels.

Lemma 3.3. Let ψ be an infinitely divisible permanental vector with kernel G and index 1/4. Then for every diagonal matrix D with nonnegative entries, there exists an infinitely divisible permanental vector ψ̃ with kernel (G + D). Moreover for every positive s_1, s_2, ..., s_n, we have: and

The following lemma is an immediate consequence of the fact that infinite divisibility implies positive correlation (see [2]).
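The Fang–Hu comparison in the symmetric case can be observed numerically: for a two-dimensional squared Gaussian vector with unit variances, raising the (nonnegative) correlation raises the expectation of any function increasing in each variable, for instance a joint exceedance probability. The sketch below is a Monte Carlo illustration under these assumptions (the correlations and the threshold are arbitrary choices), not a proof.

```python
import numpy as np

# Monte Carlo illustration: with symmetric nonnegative kernels and equal
# diagonals, increasing the off-diagonal entry increases
# E[F(eta_1^2, eta_2^2)] for F increasing in each variable.
rng = np.random.default_rng(1)
n_samples = 400_000

def joint_exceed(rho, s=1.0):
    """Estimate P(eta_1^2 > s, eta_2^2 > s) for eta ~ N(0, [[1, rho], [rho, 1]])."""
    G = np.array([[1.0, rho], [rho, 1.0]])
    eta = rng.multivariate_normal(np.zeros(2), G, size=n_samples)
    # F(x1, x2) = 1{x1 > s} 1{x2 > s} is increasing in each variable
    return np.mean((eta[:, 0]**2 > s) & (eta[:, 1]**2 > s))

low, high = joint_exceed(0.2), joint_exceed(0.8)
print(low, high)   # the estimate for rho = 0.8 should be clearly larger
```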
Lemma 3.4. Let ψ be an n-dimensional infinitely divisible permanental vector with index β and nonnegative kernel G. Let ψ̃ be an n-dimensional permanental vector with index β and kernel D defined by D(i, i) = G(i, i) and D(i, j) = 0 for i ≠ j. Then for every positive s_1, s_2, ..., s_n, we have: and

Proof of Lemma 3.1: Write G = c(I − P)^{−1} and G̃ = c̃(I − P̃)^{−1}, where c and c̃ are positive numbers and P and P̃ are convergent matrices (i.e. nonnegative matrices such that ρ(P), ρ(P̃) < 1). Note that c and P are not unique in the decomposition of G. One can hence choose c small enough to have c ≤ c̃. Consequently, G^{−1}_{ij} ≥ G̃^{−1}_{ij} implies that P_{ij} ≤ P̃_{ij} for every i, j. For θ in [0, 1], define the convergent matrix P(θ) by P(θ)_{ij} = θP̃_{ij} + (1 − θ)P_{ij}, and the constant c_θ by c_θ = θc̃ + (1 − θ)c, and set G(θ) = c_θ(I − P(θ))^{−1}. (3.5) The matrix G(θ) is the kernel of an infinitely divisible permanental vector with index 1/4. Set: f(θ) = E_{G(θ)}[F(ψ)]. We have: Note that ∂P(θ)/∂θ(i, j) = P̃_{ij} − P_{ij} ≥ 0. Hence for every integer k, (P(θ))^k(i, j) is an increasing function of θ. Since c_θ is also an increasing function of θ, we obtain: and the assumptions on F then lead to f′(θ) ≥ 0. In particular f(0) ≤ f(1), which means that:

Proof of Lemma 3.2 (3.3): Let N be a real standard Gaussian variable and p the density with respect to the Lebesgue measure of N 1_{N<0}. For ε > 0, set: As ε tends to 0, f_{ε,c} converges pointwise to 1_{[c,+∞)}. Note that on (−∞, c], f_{ε,c} is C² with f′_{ε,c} ≥ 0 and f″_{ε,c} ≥ 0.
Define the function F_ε on R^n by One cannot directly use Lemma 2.2 for F_ε, but thanks to (2.5), for any kernel C of an infinitely divisible permanental vector with index 1/4, we have: Note that we have:
by performing two integrations by parts. We note then that the two densities h and h_{11} are connected as follows: 4C_{1,1} h_{11}(z, C) = z_1 h(z, C). One obtains in particular: , which leads to: Since h(z, C)|_{z_1=0} = 0, one obtains: for every 1 ≤ k ≤ n.
Thanks to (2.8), one computes: Indeed, denote by L^{ab} the local time process of the Markov process associated to C, conditioned to start at a and to die at its last visit to b. Then we have: L^{ab}_a > 0 a.s. and L^{ab}_b > 0 a.s. We hence obtain which leads to: )] ≥ 0. One then uses the matrices G(θ) defined in (3.5) to obtain the conclusion, similarly as in the proof of Lemma 3.1, by dominated convergence.
where f_{ε,c} is given by (3.6). Denote by f̃_{ε,c} the function (1 − f_{ε,c}). As ε tends to 0, F̃(x) For every i, we have: Indeed, thanks to (2.8), for any kernel C of an infinitely divisible permanental vector with index 1/4, we have, making use of the computations in the proof of (3.3). More generally, we obtain for every We keep definition (3.5) for G(θ). Set: and for any kernel M of an n-dimensional permanental vector with index 1/4: F̃_ε(M) = E_M[F̃_ε(ψ)]. We have:
For i ≠ j, we have: By letting ε tend to 0, we finally obtain:

Proof of Lemma 3.3: First we use the fact that G is an inverse M-matrix, hence for every diagonal matrix D, (G + D) is still an inverse M-matrix (see e.g. [10]). Then for θ in [0, 1], define the inverse M-matrix G(θ) by G(θ) = θG + (1 − θ)(G + D), and the associated function f on [0, 1]: where F̃_ε is defined by (3.10). Thanks to (3.11), one obtains the first inequality by letting ε tend to 0. The second one is obtained similarly with F̃_ε replaced by F_ε (defined by (3.7)). One concludes thanks to (3.8). □

A weak Sudakov inequality
Let (η x ) x∈E be a centered Gaussian process with covariance function G.
Suppose that there exists a finite subset S of E such that for every distinct elements x and y of S, d_η(x, y) > u; then according to Sudakov's inequality We now consider a kernel G = (G(x, y), (x, y) ∈ E × E) such that G is a bipotential. This means that both G and G^t are Green functions of transient Markov processes. This is equivalent (see [6]) to the assumption that for any finite subset S of E, both G|_{S×S} and G^t|_{S×S} are inverses of diagonally dominant M-matrices (a matrix (A_{ij})_{1≤i,j≤n} is diagonally dominant if for every i, A_{ii} ≥ Σ_{j≠i} |A_{ij}|). As a consequence of [6], we know that d_G is a pseudo-distance on E. When there is no ambiguity, d_G will be denoted by d. Following [11], we define E[sup_{x∈E} ψ_x] as sup{E[sup_{x∈F} ψ_x], F finite subset of E}.
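On a finite set, the bipotential criterion quoted from [6] reduces to finite-dimensional linear algebra and can be checked directly. The sketch below is an illustration under that reading of the criterion; the example matrix is hypothetical.

```python
import numpy as np

def is_diagonally_dominant_M_matrix(A, tol=1e-10):
    """M-matrix check plus diagonal dominance: A_ii >= sum_{j != i} |A_ij|."""
    A = np.asarray(A, dtype=float)
    off = A - np.diag(np.diag(A))
    if np.any(off > tol):                       # off-diagonal entries must be <= 0
        return False
    if np.any(np.linalg.inv(A) < -tol):         # inverse must be entrywise >= 0
        return False
    return bool(np.all(np.diag(A) + tol >= np.abs(off).sum(axis=1)))

def is_bipotential(G, tol=1e-10):
    """Sketch of the finite-dimensional criterion: both G and G^t are inverses
    of diagonally dominant M-matrices."""
    G = np.asarray(G, dtype=float)
    return (is_diagonally_dominant_M_matrix(np.linalg.inv(G), tol)
            and is_diagonally_dominant_M_matrix(np.linalg.inv(G.T), tol))

# Hypothetical example of a symmetric Green-function-like kernel:
G = np.array([[1.0, 0.3], [0.3, 1.0]])
print(is_bipotential(G))   # True
```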

Assume that:
(1) G is a bipotential and that for every x in S, G(x, x) = 1.
(2) S is a finite subset of E such that for every distinct x and y elements of S: G(x, y) ≤ a.
Set: u = (2 − 2a)^{1/2}.

Indeed, one already knows that G + D is an inverse M-matrix; this is Theorem 1.6 in [10]. Making use of its proof, one easily shows that the M-matrix (G + D)^{−1} is diagonally dominant.
Define the kernel G on S × S as follows: G(i, i) = 1 and for i ≠ j, G(i, j) = a.
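For this constant kernel the inverse M-matrix property can be verified by direct computation, as the small numerical check below illustrates (the dimension and the value of a are arbitrary choices for the illustration).

```python
import numpy as np

# Sketch: the kernel with unit diagonal and constant off-diagonal entry a,
# 0 < a < 1, is an inverse M-matrix: its inverse has nonpositive off-diagonal
# entries, and the kernel itself is entrywise nonnegative.
n, a = 4, 0.3
G = (1 - a) * np.eye(n) + a * np.ones((n, n))   # G(i,i) = 1, G(i,j) = a for i != j

inv = np.linalg.inv(G)
off = inv - np.diag(np.diag(inv))               # off-diagonal part of G^{-1}
print(np.all(off <= 1e-12), np.all(G >= 0))     # True True: G is an inverse M-matrix
```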
) is a Green function (thanks to Lemma 4.2). Since this is also true for its transpose, it remains a potential if we add the nonnegative constant (1 − θ)a to each entry (see e.g. [6]).
We now use the functions F̃_ε defined by (3.10) to define H̃_ε((y_i)_{1≤i≤n}) = F̃_ε((√y_i)_{1≤i≤n}). Thanks to Lemma 2.2, we have: Consequently we have for every ε > 0: and in particular, as ε tends to 0, one obtains: Now, G is a covariance matrix, and the corresponding vector ψ is half the sum of eight i.i.d. squared centered Gaussian vectors with covariance G. Denote by η a centered Gaussian vector with covariance G. We have: Note that for every distinct i and j in S:

Concentration inequalities for permanental processes
Here is a well-known concentration inequality for Gaussian vectors. There exists a universal constant K such that for every centered Gaussian vector (η_i)_{1≤i≤n} and every r > 0,

P( |sup_{1≤i≤n} η_i − E[sup_{1≤i≤n} η_i]| > r ) ≤ K exp(−r²/(K σ²)),   (5.1)

where σ² = sup_{1≤i≤n} E[η_i²]. The following two subsections present partial extensions of (5.1) to infinitely divisible permanental vectors.
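For intuition, the concentration phenomenon behind (5.1) is easy to observe by simulation: the fluctuations of the supremum of a Gaussian vector around its mean are sub-Gaussian at scale σ, whatever the dimension. The sketch below is a Monte Carlo illustration only; the equicorrelated covariance is a hypothetical choice.

```python
import numpy as np

# Monte Carlo illustration of Gaussian concentration of the supremum:
# sup_i eta_i fluctuates around its mean on a scale governed by
# sigma = max_i sd(eta_i), not by the dimension n.
rng = np.random.default_rng(2)
n, n_samples = 50, 100_000

# Hypothetical covariance: equicorrelated with unit variances, so sigma = 1
rho = 0.5
G = (1 - rho) * np.eye(n) + rho * np.ones((n, n))
eta = rng.multivariate_normal(np.zeros(n), G, size=n_samples)

sup = eta.max(axis=1)            # sup_{1<=i<=n} eta_i, one value per sample
mean_sup = sup.mean()            # Monte Carlo proxy for E[sup eta]
tails = [np.mean(np.abs(sup - mean_sup) > r) for r in (1.0, 2.0, 3.0)]
print(tails)                     # rapidly decaying tail probabilities
```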
Note that G 1 is the kernel of an infinitely divisible permanental process.
Lemma 2.1. A permanental vector (ψ_1, ψ_2, ..., ψ_n) with index 1/4 admits a density h with respect to the Lebesgue measure on R^n. Moreover h is C² with first and second derivatives converging to 0 as |z| tends to ∞.

Proof: Denote by μ the law of a permanental vector with index 2 and by μ̂(z) its Fourier transform. Then one checks that ∫_{R^n} |μ̂(z)|² dz < ∞. Hence μ*μ*μ*μ admits a continuous density with respect to the Lebesgue measure. We note then that ∫_{R^n} |μ̂(z)|⁴ |z|² dz < ∞, which, thanks to Proposition 28.1 in Sato's book [13] (p. 190), implies that μ*⁸ admits a C² density with first and second derivatives converging to 0 as |z| tends to ∞. □

We define a functional F̄ as follows. Let G be an n × n matrix such that there exists a permanental vector with index 1/4 and kernel G. For any measurable function F on R^n_+, E_G[F(ψ)] denotes the expectation with respect to a permanental vector with kernel G and index 1/4.