The Barnes G-function and its relations with sums and products of generalized gamma convolution variables

We give a probabilistic interpretation for the Barnes G-function, which appears in random matrix theory and in analytic number theory in the important moments conjecture due to Keating and Snaith for the Riemann zeta function, via the analogy with the characteristic polynomial of random unitary matrices. We show that the Mellin transform of the characteristic polynomial of random unitary matrices and the Barnes G-function are intimately related to products and sums of gamma, beta and log-gamma variables. In particular, we show that the law of the modulus of the characteristic polynomial of random unitary matrices can be expressed with the help of products of gamma or beta variables, and that the reciprocal of the Barnes G-function has a Lévy-Khintchine type representation. These results lead us naturally to the so-called generalized gamma convolution (GGC) variables.


Introduction, motivation and main results
The Barnes G-function, which was first introduced by Barnes in [3] (see also [1]), may be defined via its infinite product representation:

(1.1) G(1+z) = (2π)^{z/2} exp( −(z + z^2(1+γ))/2 ) ∏_{n=1}^∞ (1 + z/n)^n exp( −z + z^2/(2n) ),

where γ is the Euler constant.
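As a numerical aside (added here, not in the original), the product (1.1) can be truncated to evaluate G; the values G(2) = G(3) = 1 and G(4) = 2, which follow from the recurrence G(1+z) = Γ(z)G(z) with G(1) = 1, serve as sanity checks. A sketch:

```python
import math

# Numerical sketch (added): truncation of the infinite product (1.1),
#   G(1+z) = (2*pi)^(z/2) * exp(-(z + z^2*(1+gamma))/2)
#            * prod_{n>=1} (1 + z/n)^n * exp(-z + z^2/(2*n)),
# where gamma is Euler's constant.
EULER_GAMMA = 0.5772156649015329

def barnes_G(z, terms=200_000):
    """Approximate G(1+z), z > -1, by truncating the product (1.1)."""
    log_g = 0.5 * z * math.log(2.0 * math.pi) - 0.5 * (z + z * z * (1.0 + EULER_GAMMA))
    for n in range(1, terms + 1):
        log_g += n * math.log1p(z / n) - z + z * z / (2.0 * n)
    return math.exp(log_g)

# Exact values from the recurrence G(1+z) = Gamma(z) G(z), G(1) = 1:
# G(2) = G(3) = 1 and G(4) = 2.
print(barnes_G(1.0), barnes_G(2.0), barnes_G(3.0))
```

The tail of the truncated sum behaves like z^3/(3n^2), so a few hundred thousand terms already give several correct digits.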
From (1.1), one can easily deduce the following (useful) development of the logarithm of G(1+z) for |z| < 1:

(1.2) log G(1+z) = (z/2) log(2π) − z(z+1)/2 − γ z^2/2 + ∑_{n=3}^∞ (−1)^{n−1} ζ(n−1) z^n / n,

where ζ denotes the Riemann zeta function. The Barnes G-function has recently occurred in the work of Keating and Snaith [14] in their celebrated moments conjecture for the Riemann zeta function. More precisely, they consider the set of unitary matrices of size N, endowed with the Haar probability measure, and they prove the following results:

Proposition 1.1 (Keating-Snaith [14]). If Z denotes the characteristic polynomial of a generic random unitary matrix, considered at any point of the unit circle (for example 1), then the following hold:
(1) For λ any complex number satisfying Re(λ) > −1:

E[ |Z|^{2λ} ] = ∏_{j=1}^N Γ(j) Γ(j + 2λ) / (Γ(j + λ))^2.

(2) For Re(λ) > −1:

(1.4) lim_{N→∞} N^{−λ^2} E[ |Z|^{2λ} ] = (G(1+λ))^2 / G(1+2λ).

Then, using a random matrix analogy (now called the "Keating-Snaith philosophy"), they make the following conjecture for the moments of the Riemann zeta function (see [14], [16]):

(1/(log T)^{λ^2}) (1/T) ∫_0^T |ζ(1/2 + it)|^{2λ} dt →_{T→∞} M(λ) A(λ),

where M(λ) is the "random matrix factor"

M(λ) = (G(1+λ))^2 / G(1+2λ),

and A(λ) is the arithmetic factor

A(λ) = ∏_{p∈P} (1 − 1/p)^{λ^2} ∑_{m=0}^∞ ( Γ(m+λ) / (m! Γ(λ)) )^2 p^{−m},

where, as usual, P is the set of prime numbers. Due to the importance of this conjecture, as discussed in several papers in [16], it seems interesting to obtain probabilistic interpretations of its non-arithmetic part. More precisely, the aim of this paper is twofold:
• to give a probabilistic interpretation of the "random matrix factor" M(λ), and more generally of the Barnes G-function;
• to better understand the nature of the limit theorem (1.4) and its relations with (generalized) gamma variables.
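The development (1.2) can be cross-checked numerically against the product (1.1); the following sketch (added) compares the two evaluations of log G(1+z) at z = 0.5, computing ζ(m) by a partial sum with an Euler-Maclaurin tail correction:

```python
import math

# Cross-check (added): the development (1.2),
#   log G(1+z) = (z/2) log(2*pi) - z*(z+1)/2 - gamma*z^2/2
#                + sum_{n>=3} (-1)^(n-1) * zeta(n-1) * z^n / n,  |z| < 1,
# against the product representation (1.1).
EULER_GAMMA = 0.5772156649015329

def zeta(m, K=10_000):
    """zeta(m) for integer m >= 2: partial sum plus Euler-Maclaurin tail."""
    return sum(k ** (-m) for k in range(1, K + 1)) + K ** (1 - m) / (m - 1) - 0.5 * K ** (-m)

def logG_series(z, terms=60):
    s = 0.5 * z * math.log(2 * math.pi) - 0.5 * z * (z + 1) - 0.5 * EULER_GAMMA * z * z
    return s + sum((-1) ** (n - 1) * zeta(n - 1) * z ** n / n for n in range(3, terms + 1))

def logG_product(z, terms=100_000):
    s = 0.5 * z * math.log(2 * math.pi) - 0.5 * (z + z * z * (1 + EULER_GAMMA))
    for n in range(1, terms + 1):
        s += n * math.log1p(z / n) - z + z * z / (2 * n)
    return s

print(logG_series(0.5), logG_product(0.5))
```

For |z| < 1 the series terms decay geometrically, so sixty terms are more than enough.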
To this end, we first give, in Theorem 1.2, a probabilistic translation of the infinite product (1.1) in terms of a limiting distribution involving gamma variables (we note that, concerning the Gamma function, similar translations have been presented in [10] and [9]). Let us recall that a gamma variable γ_a with parameter a > 0 is distributed as:

P(γ_a ∈ dt) = t^{a−1} e^{−t} dt / Γ(a),  t ≥ 0,

and has Laplace transform and Mellin transform:

(1.7) E[ exp(−λ γ_a) ] = (1 + λ)^{−a} (λ > −1),  E[ γ_a^s ] = Γ(a+s)/Γ(a) (Re(s) > −a).

Theorem 1.2. If (γ_n)_{n≥1} are independent gamma random variables with respective parameters n, then, for z such that Re(z) > −1:

(1.8) E[ exp( −z ∑_{n=1}^N (γ_n − n)/n ) ] = e^{Nz} ∏_{n=1}^N (1 + z/n)^{−n},

and, as N → ∞,

(1.9) E[ exp( −z ∑_{n=1}^N (γ_n − n)/n ) ] ∼ N^{z^2/2} (2π)^{z/2} e^{−z(z+1)/2} / G(1+z).

The next theorem gives an identity in law for the characteristic polynomial, which shall lead to a probabilistic interpretation of the "random matrix factor":

Theorem 1.3. Let Λ denote the generic matrix of U(N), the set of unitary matrices of size N, fitted with the Haar probability measure, and let Z_N(Λ) = det(I − Λ). Then the following hold:
(1) For Re(t) > −1, we have:

(1.10) E[ |Z_N|^t ] = ∏_{j=1}^N Γ(j) Γ(j + t) / (Γ(j + t/2))^2.

(2) Equivalently, in probabilistic terms:

(1.11) |Z_N| ∏_{j=1}^N (γ_j γ'_j)^{1/2} (law)= ∏_{j=1}^N γ_j,

where all variables in sight are assumed to be independent, and γ_j, γ'_j are gamma random variables with parameter j.
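Since E[exp(−(z/n)γ_n)] = (1+z/n)^{−n}, the Laplace transform of the sum in Theorem 1.2 is available in closed form, and the N^{z^2/2} normalization can be observed numerically. In the sketch below (added), the limiting constant (2π)^{z/2} e^{−z(z+1)/2}/G(1+z) is our reading of the reconstructed right-hand side, with G evaluated through the product (1.1):

```python
import math

# Sketch (added): E[exp(-(z/n)*gamma_n)] = (1+z/n)^(-n), so the Laplace
# transform of S_N = sum_{n<=N} (gamma_n - n)/n is explicit, and
# N^(-z^2/2) * E[exp(-z*S_N)] stabilizes as N grows.  The constant below,
# (2*pi)^(z/2) * exp(-z*(z+1)/2) / G(1+z), is our reading of the
# reconstructed limit in Theorem 1.2.
EULER_GAMMA = 0.5772156649015329

def laplace_S(z, N):
    """E[exp(-z * sum_{n<=N} (gamma_n - n)/n)], computed in closed form."""
    return math.exp(sum(z - n * math.log1p(z / n) for n in range(1, N + 1)))

def limit_constant(z, terms=100_000):
    log_g = 0.5 * z * math.log(2 * math.pi) - 0.5 * (z + z * z * (1 + EULER_GAMMA))
    for n in range(1, terms + 1):
        log_g += n * math.log1p(z / n) - z + z * z / (2 * n)
    return math.exp(0.5 * z * math.log(2 * math.pi) - 0.5 * z * (z + 1) - log_g)

z = 0.7
for N in (100, 1000, 10_000):
    print(N, laplace_S(z, N) / N ** (z * z / 2))
print("limit:", limit_constant(z))
```

The rescaled quantity converges at speed O(1/N), which is visible across the three printed values.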
The Barnes G-function now comes into the picture via the following limit results:

Theorem 1.4. Let (γ_n)_{n≥1} be independent gamma random variables with respective parameters n; then the following hold:
(1) for any λ with Re(λ) > −1, we have:

(1.12) lim_{N→∞} N^{−λ^2/2} exp( −λ ∑_{j=1}^N ψ(j) ) E[ ∏_{j=1}^N γ_j^λ ] = (2π)^{λ/2} e^{−λ/2} / G(1+λ),

where ψ = Γ'/Γ denotes the digamma function;
(2) consequently, from (1.12), together with (1.11), we recover the limit theorem (1.4) of Keating and Snaith: for Re(λ) > −1,

lim_{N→∞} N^{−λ^2} E[ |Z_N|^{2λ} ] = (G(1+λ))^2 / G(1+2λ).

We then naturally extend Theorem 1.2 to the more general case of sums of the form:

(1.13) S_N = ∑_{n=1}^N ( (1/n) ∑_{i=1}^n y_i^{(n)} − E[Y] ),

where the (y_i^{(n)})_{1≤i≤n<∞} are independent, with the same distribution as a given random variable Y, where Y is a generalized gamma convolution variable (in short, GGC), that is, an infinitely divisible R_+-valued random variable whose Lévy measure is of the form

ν(dx) = (dx/x) ∫_0^∞ e^{−xξ} μ(dξ),  x > 0,

where μ(dξ) is a Radon measure on R_+, called the Thorin measure associated with Y. We shall further assume that

∫_0^∞ μ(dξ)/ξ^2 < ∞,

which, as we shall see, is equivalent to the existence of a second moment for Y. The GGC variables have been studied by Thorin [19] and Bondesson [6]; see, e.g., [12] for a recent survey of this topic.

Theorem 1.5. Let Y be a GGC variable with a second moment, and let (S_N) be as in (1.13). We note σ^2 := Var(Y) = ∫_0^∞ μ(dξ)/ξ^2. Then the following limit theorem for (S_N) holds: if λ > 0,

(1.15) E[ exp(−λ S_N) ] ∼_{N→∞} N^{λ^2 σ^2/2} H(λ),

where the function H(λ) is given explicitly in Section 3. Limit results such as (1.15) are not standard in probability theory, and we intend to develop a systematic study of them in a forthcoming paper [11].

The rest of the paper is organized as follows:
• in Section 2, we prove Theorems 1.2, 1.3 and 1.4;
• in Section 3, we prove Theorem 1.5.

2. Proofs of Theorems 1.2, 1.3 and 1.4

2.1. Proof of Theorem 1.2. Since E[exp(−(z/n)γ_n)] = (1 + z/n)^{−n} by (1.7), we have, for z ≥ 0, the quantity

E[ exp( −z ∑_{n=1}^N (γ_n − n)/n ) ] = e^{Nz} ∏_{n=1}^N (1 + z/n)^{−n}.

We then write

e^{Nz} ∏_{n=1}^N (1 + z/n)^{−n} = ( ∏_{n=1}^N (1 + z/n)^{−n} e^{z − z^2/(2n)} ) exp( (z^2/2) ∑_{n=1}^N 1/n ),

which proves formula (1.8), for z ≥ 0, and thus for Re(z) > −1 by analytic continuation. Formula (1.9) then follows from the product representation (1.1), together with ∑_{n=1}^N 1/n = log N + γ + o(1).
2.2. An interpretation of Theorem 1.2 in terms of Bessel processes. Let (R_{2n}(t))_{t≥0} denote a BES(2n) process, starting from 0, with dimension 2n; we need to consider a sequence (R_{2n})_{n=1,2,...} of such independent processes. It is well known (see [17] for example) that:

R_{2n}^2(t) (law)= 2t γ_n.

Moreover, we have: We now write Theorem 1.2, for Re(z) > −1/t, as: We now wish to write the LHS of (2.1) in terms of a functional of a sum of squared Ornstein-Uhlenbeck processes; indeed, if we write: the RHS appears as a martingale in t, with increasing process and we obtain that (2.1) may be written as: where, under P^{(z)}, the process (R_{2n}(t))_{t≥0} satisfies (by Girsanov's theorem) That is, under P^{(z)}, (R_{2n}^2(t))_{t≥0} now appears as the square of a one-dimensional Ornstein-Uhlenbeck process, with parameter −z/n.
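The well-known identity invoked in this subsection, namely that R_{2n}^2(t) is distributed as 2t γ_n (the squared norm at time t of a 2n-dimensional Brownian motion started at 0), can be illustrated by simulation; a small sketch (added):

```python
import math, random

# Simulation sketch (added): the squared norm at time t of a (2n)-dimensional
# Brownian motion started at 0 is a BESQ(2n) variable, distributed as 2*t*gamma_n;
# we check the first two moments (gamma_n has mean n and variance n).
random.seed(7)

def besq_sample(n, t):
    """R_{2n}^2(t) as a sum of 2n independent squared N(0, t) coordinates."""
    return sum(random.gauss(0.0, math.sqrt(t)) ** 2 for _ in range(2 * n))

n, t, trials = 3, 2.0, 50_000
normalized = [besq_sample(n, t) / (2.0 * t) for _ in range(trials)]
mean = sum(normalized) / trials
var = sum((x - mean) ** 2 for x in normalized) / trials
print(mean, var)   # both should be close to n = 3
```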
where all the variables in sight are assumed to be independent beta variables.
Proof. To deduce the result stated in the theorem from (1.11), we use the following factorization: where on the RHS, we have: Indeed, starting from (2.3), then multiplying both sides by γ j γ ′ j and using the beta-gamma algebra, we obtain that (2.3) is equivalent to: which easily follows from the duplication formula for the gamma function (again, see, e.g., Chaumont-Yor [8], p.93-98).
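The beta-gamma computations above rest on the duplication formula for the gamma function, whose probabilistic counterpart is the identity in law γ_{2a} (law)= 2 (γ_a γ'_{a+1/2})^{1/2}, with independent factors on the right-hand side. A Monte Carlo sanity check of the first two moments (added; the identity itself is classical beta-gamma algebra):

```python
import math, random

# Monte Carlo sanity check (added): the duplication formula
#   Gamma(2a) = 2^(2a-1) * Gamma(a) * Gamma(a+1/2) / sqrt(pi)
# corresponds to the identity in law gamma_{2a} = 2*sqrt(gamma_a * gamma'_{a+1/2})
# with independent factors; we compare empirical moments at a = 1.5.
random.seed(1)

a, trials = 1.5, 200_000
lhs = [random.gammavariate(2 * a, 1.0) for _ in range(trials)]
rhs = [2.0 * math.sqrt(random.gammavariate(a, 1.0) * random.gammavariate(a + 0.5, 1.0))
       for _ in range(trials)]

def moments(xs):
    m1 = sum(xs) / len(xs)
    m2 = sum(x * x for x in xs) / len(xs)
    return m1, m2

print(moments(lhs))   # exact: mean 3, second moment 12 (gamma with parameter 3)
print(moments(rhs))
```

Both samples should exhibit mean 2a = 3 and second moment 2a(2a+1) = 12, matching the Mellin transforms on both sides.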
Remark 2.2. In the above theorem, the factor 2^N can be explained by the fact that the characteristic polynomial of a unitary matrix is, in modulus, smaller than 2^N. Hence the products of beta variables appearing on the RHS of formula (2.2) measure how the modulus of the characteristic polynomial deviates from the largest value it can take.

2.4.
A Lévy-Khintchine type representation of 1/G(1+z). In this subsection, we give a Lévy-Khintchine type representation formula for 1/G(1+z), which will then be used to prove Theorem 1.4.
Proposition 2.3. For any z ∈ C such that Re(z) > −1, one has:

(2.4) 1/G(1+z) = (2π)^{−z/2} exp( z(z+1)/2 + γ z^2/2 ) exp( ∫_0^∞ ( e^{−zu} − 1 + zu − z^2 u^2/2 ) du / ( u (2 sinh(u/2))^2 ) ).

Before proving Proposition 2.3, a few remarks are in order. The integrand in (2.4) is integrated against the measure du/(u (2 sinh(u/2))^2), which is not a Lévy measure. Indeed, Lévy measures integrate (u^2 ∧ 1), which is not the case here because of the equivalence u (2 sinh(u/2))^2 ∼ u^3 as u → 0. Also, due to this singularity, one cannot integrate exp(−zu) − 1 + zu 1_{(u≤1)} with respect to this measure, and one is forced to "bring in" under the integral sign the companion term u^2 z^2/2.

Proof. From the consideration of the series

(2.5) L(z) := ∑_{n=3}^∞ (−1)^{n−1} ζ(n−1) z^n / n

featured in (1.2), it seems natural to introduce a random variable Q taking values in R_+, with Mellin transform: where e denotes a standard exponential variable. A little more analytically, formula (2.6) may be presented as We first show the existence of the random variable Q by computing its density (for an example of occurrence of the random variable Q in the theory of stochastic processes, and some relations with the theory of the Riemann zeta function, see, e.g., [5] and [4]). From the definition of Q via its Mellin transform, we get, for f : R_+ → R_+: Hence, by some elementary changes of variables, we get: Now, we consider the following series development, valid for |z| < 1: hence, in comparison with formula (2.5), we obtain: Now, formula (2.4), for |z| < 1, follows from (2.8) and the fact that:

∫_0^∞ u^{n−1} du / (2 sinh(u/2))^2 = Γ(n) ζ(n−1),  n ≥ 3.

The formula extends by analytic continuation to the case Re(z) > −1.
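According to the proof, the series L(z) = ∑_{n≥3} (−1)^{n−1} ζ(n−1) z^n/n should coincide with minus the integral of exp(−zu) − 1 + zu − z^2 u^2/2 against the measure du/(u (2 sinh(u/2))^2) discussed in the remark. A quadrature sketch (added) checking this at z = 0.5:

```python
import math

# Quadrature sketch (added): check numerically that, at z = 0.5,
#   -L(z) = int_0^inf (exp(-z*u) - 1 + z*u - z^2*u^2/2) du / (u * (2*sinh(u/2))^2),
# where L(z) = sum_{n>=3} (-1)^(n-1) * zeta(n-1) * z^n / n.

def zeta(m, K=10_000):
    return sum(k ** (-m) for k in range(1, K + 1)) + K ** (1 - m) / (m - 1) - 0.5 * K ** (-m)

def L_series(z, terms=60):
    return sum((-1) ** (n - 1) * zeta(n - 1) * z ** n / n for n in range(3, terms + 1))

def integrand(u, z):
    num = math.expm1(-z * u) + z * u - 0.5 * (z * u) ** 2   # exp(-zu)-1+zu-z^2*u^2/2
    return num / (u * (2.0 * math.sinh(0.5 * u)) ** 2)

def integral(z, a=1e-3, b=40.0, steps=40_000):
    """Composite Simpson rule on [a, b]; on [0, a] the integrand is ~ -z^3/6."""
    h = (b - a) / steps
    s = integrand(a, z) + integrand(b, z)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * integrand(a + i * h, z)
    return s * h / 3.0 - (z ** 3 / 6.0) * a

print(integral(0.5), -L_series(0.5))
```

Using expm1 avoids the catastrophic cancellation in exp(−zu) − 1 + zu near u = 0, where the integrand tends to the finite value −z^3/6.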

2.5.
Proof of Theorem 1.4. To prove Theorem 1.4, we shall use the following lemma:

Lemma 2.5. For any a > 0, the random variable log(γ_a) is infinitely divisible, and its Lévy-Khintchine representation is given, for Re(λ) > −a, by:

E[ γ_a^λ ] = Γ(a+λ)/Γ(a) = exp( λ ψ(a) + ∫_0^∞ ( e^{−λu} − 1 + λu ) e^{−au} du / ( u (1 − e^{−u}) ) ),

where ψ = Γ'/Γ denotes the digamma function.

Proof. This is classical: it follows from (1.7) and an integral representation of the ψ-function; see, e.g., Lebedev [15] and Carmona-Petit-Yor [7], where this lemma is also used.
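The representation in Lemma 2.5 amounts to the classical formula log Γ(a+λ) − log Γ(a) − λψ(a) = ∫_0^∞ (e^{−λu} − 1 + λu) e^{−au} du/(u(1 − e^{−u})) (our reading of the lost display). It can be checked numerically; the sketch below (added) uses a = 3, for which ψ(3) = 3/2 − γ is exact:

```python
import math

# Numerical check (added) of the classical representation behind Lemma 2.5:
#   log Gamma(a+lam) - log Gamma(a) - lam*psi(a)
#     = int_0^inf (exp(-lam*u) - 1 + lam*u) * exp(-a*u) du / (u*(1 - exp(-u))),
# here with a = 3 (so psi(3) = 3/2 - gamma exactly) and lam = 0.5.
EULER_GAMMA = 0.5772156649015329
a, lam = 3.0, 0.5

def integrand(u):
    num = (math.expm1(-lam * u) + lam * u) * math.exp(-a * u)
    return num / (u * (-math.expm1(-u)))     # denominator: u * (1 - exp(-u))

def simpson(f, lo, hi, steps=20_000):
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

# head [0, 1e-3]: the integrand tends to lam^2/2 as u -> 0
lhs = simpson(integrand, 1e-3, 50.0) + 0.5 * lam * lam * 1e-3
rhs = math.lgamma(a + lam) - math.lgamma(a) - lam * (1.5 - EULER_GAMMA)
print(lhs, rhs)
```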
We are now in a position to prove Theorem 1.4. We start by proving the first part, i.e. formula (1.12). Let us write Thus with the help of Lemma 2.5, we obtain: and we may now write: log N (2.11) Next, we shall show that: together with some integral expression for J ∞ (λ), from which it will be easily deduced how J ∞ (λ) and G (1 + λ) are related thanks to Proposition 2.3.
We now write (2.11) in the form: Now letting N → ∞, we obtain a limit which we shall show to exist and identify as 1 + γ (= C), with the following lemma: Lemma 2.6. We have: Consequently, we have: Now, we write: The result now follows easily from the facts: We have thus proved so far that: We can further rewrite J_∞(λ) as: To prove the second part of Theorem 1.4, we use formula (1.11) together with formula (1.12). Formula (1.11) yields: Multiplying both sides by exp( −2λ ∑_{j=1}^N ψ(j) ) and using (1.12), we obtain: which completes the proof of Theorem 1.4.
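The Keating-Snaith limit recovered here can also be observed numerically: the rescaled moments N^{−λ^2} ∏_{j=1}^N Γ(j)Γ(j+2λ)/(Γ(j+λ))^2 approach (G(1+λ))^2/G(1+2λ), with G evaluated through the product (1.1). A deterministic sketch (added):

```python
import math

# Deterministic sketch (added): the rescaled Keating-Snaith moments
#   N^(-lam^2) * prod_{j=1}^N Gamma(j)*Gamma(j+2*lam) / Gamma(j+lam)^2
# approach (G(1+lam))^2 / G(1+2*lam); G(1+z) is evaluated via the product (1.1).
EULER_GAMMA = 0.5772156649015329

def logG1p(z, terms=100_000):
    """log G(1+z) from the product representation (1.1)."""
    s = 0.5 * z * math.log(2 * math.pi) - 0.5 * (z + z * z * (1 + EULER_GAMMA))
    for n in range(1, terms + 1):
        s += n * math.log1p(z / n) - z + z * z / (2 * n)
    return s

def rescaled_moment(lam, N):
    log_m = sum(math.lgamma(j) + math.lgamma(j + 2 * lam) - 2 * math.lgamma(j + lam)
                for j in range(1, N + 1))
    return math.exp(log_m - lam * lam * math.log(N))

lam = 0.6
limit = math.exp(2 * logG1p(lam) - logG1p(2 * lam))
for N in (50, 500, 5000):
    print(N, rescaled_moment(lam, N))
print("limit:", limit)
```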

3.1. Definition and examples.
We recall the definition of a GGC variable.
Definition 3.1. A random variable Y is called a GGC variable if it is infinitely divisible, with Lévy measure ν of the form:

ν(dx) = (dx/x) ∫_0^∞ e^{−xξ} μ(dξ),  x > 0,

for some Radon measure μ(dξ) on R_+ (the Thorin measure of Y).

Now we give some examples of GGC variables. Of course, γ_a falls into this category, with μ(dξ) = a δ_1(dξ), where δ_1(dξ) is the Dirac measure at 1.
More generally, the next proposition provides a large class of such variables:

Proposition 3.2. Let f : R_+ → R_+ be a Borel function such that ∫_0^∞ log(1 + f(u)) du < ∞, and let (γ_u)_{u≥0} denote the standard gamma process. Then the variable Y defined as

Y = ∫_0^∞ f(u) dγ_u

is a GGC variable.

Proof. It is easily shown, by approximating f by simple functions, that

E[ exp(−λ Y) ] = exp( − ∫_0^∞ log(1 + λ f(u)) du ),  λ ≥ 0,

which yields the result.
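The construction above can be illustrated by simulation: discretizing Y = ∫_0^∞ f(u) dγ_u with independent Gamma(du) increments of the standard gamma process, the empirical Laplace transform should match exp(−∫ log(1 + λf(u)) du). A Monte Carlo sketch (added; the choice f(u) = e^{−u} and the truncation at u = 10 are illustrative assumptions):

```python
import math, random

# Monte Carlo sketch (added; f(u) = exp(-u) and the truncation at u = 10 are
# illustrative assumptions): discretize Y = int_0^inf f(u) d(gamma_u) with
# independent Gamma(du) increments of the standard gamma process, and compare
# the empirical Laplace transform with exp(-int log(1 + lam*f(u)) du).
random.seed(3)

du = 0.05
grid = [du * (i + 0.5) for i in range(200)]          # midpoints covering [0, 10]
f_vals = [math.exp(-u) for u in grid]

def sample_Y():
    return sum(fi * random.gammavariate(du, 1.0) for fi in f_vals)

lam, trials = 1.0, 10_000
ys = [sample_Y() for _ in range(trials)]
emp_mean = sum(ys) / trials
emp_laplace = sum(math.exp(-lam * y) for y in ys) / trials
th_mean = sum(fi * du for fi in f_vals)                                   # ~ int f(u) du
th_laplace = math.exp(-sum(math.log(1 + lam * fi) * du for fi in f_vals))
print(emp_mean, th_mean)
print(emp_laplace, th_laplace)
```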
For many more details on GGC variables, see [12].

3.2.
Proof of Theorem 1.5. We now prove Theorem 1.5, which is a natural extension of Theorem 1.2. Recall that in (1.13) we have defined S_N as: where the y_i^{(n)} are independent, with the same distribution as a given GGC variable Y, which has a second moment. For any λ ≥ 0, we have: where: Moreover, from Lemma 2.6, we have: