Fractional extreme distributions

We consider three classes of linear differential equations on distribution functions, with a fractional order $\alpha\in [0,1]$. The integer case $\alpha =1$ corresponds to the three classical extreme families. In general, we show that there is a unique distribution function solving these equations, whose underlying random variable is expressed in terms of an exponential random variable and an integral transform of an independent $\alpha$-stable subordinator. From the analytical viewpoint, this law is in one-to-one correspondence with a Kilbas-Saigo function for the Weibull and Fr\'echet cases, and with a Le Roy function for the Gumbel case. The stochastic representation allows us to derive several analytical properties of the latter special functions, extending known features of the classical Mittag-Leffler function and dealing with monotonicity, complete monotonicity, infinite divisibility, asymptotic behaviour at infinity, and uniform hyperbolic bounds.


Introduction and statement of the main results
The classical Fisher-Tippett-Gnedenko theorem states that the limit distributions arising from $a_n(\max(X_1, \ldots, X_n) - b_n)$, with $a_n > 0$, $b_n \in \mathbb{R}$ and $(X_1, \ldots, X_n)$ an i.i.d. real sample, can be classified up to positive affine transformation into three families,
$$W_\rho \,\stackrel{d}{=}\, L^{1/\rho} \quad(\text{Weibull}), \qquad F_\rho \,\stackrel{d}{=}\, L^{-1/\rho} \quad(\text{Fr\'echet}), \qquad G \,\stackrel{d}{=}\, \log L \quad(\text{Gumbel}),$$
where $L$ is the unit exponential random variable, $\rho$ is a positive parameter and, with an abuse of notation which we will make throughout the paper, we have identified a random variable with its law. From the distribution function viewpoint, the three above extreme laws can also be obtained as the unique solutions to certain ordinary differential equations. More precisely, if $F(x)$ stands for a distribution function on $\mathbb{R}$ and $\bar F(x) = 1 - F(x)$ denotes its associated survival function, the corresponding equations, for $x > 0$ and with the initial condition $F(0) = 0$, each have a unique solution which is respectively given by the Weibull distribution $W_\rho$, the Fréchet distribution $F_\rho$ and the Gumbel distribution $G$. Notice that since the derivative of $\bar F$ is the negative of that of $F$, and vice versa, those three equations involve a logarithmic derivative and are solved via the exponential function. In this paper, we will consider some extensions of these equations in the context of fractional calculus. There are many types of fractional operators, and throughout the paper we shall only consider Liouville fractional integrals and derivatives, which are the most common ones - see Appendix A for all precise definitions and notations. In Liouville fractional calculus, the classical Mittag-Leffler function
$$E_\alpha(z) \;=\; \sum_{n\ge 0} \frac{z^n}{\Gamma(1 + n\alpha)}, \qquad \alpha > 0, \; z \in \mathbb{C},$$
which can be viewed as a generalization of the exponential function, plays a fundamental role. We refer to Chapter 3 in [12] for a modern account on this function, and also to Chapter 2 therein for an interesting historical overview. Let us first discuss an example.
It is well-known from the general results of Barrett [3] - see also Lemma 3.24 and the inversion formula (E.1.10) in [12] - that for every $\alpha, \lambda > 0$ the function $x \mapsto E_\alpha(-\lambda x^\alpha)$ solves on $\mathbb{R}_+$ the following fractional differential equation
$$D^\alpha_{0+}(1 - f)(x) \;=\; \lambda f(x), \eqno(1.1)$$
where $D^\alpha_{0+}$ is the progressive Liouville fractional derivative on the half-axis. Besides, it follows from the works of Pillai [17] that for every $\alpha \in (0, 1]$ the function $x \mapsto E_\alpha(-x^\alpha)$ is the survival function of a distribution on $\mathbb{R}_+$. More precisely, one has
$$\mathbb{P}[Z_\alpha \times L^{1/\alpha} > x] \;=\; E_\alpha(-x^\alpha), \qquad x \ge 0, \eqno(1.2)$$
where $Z_\alpha$ has a standard positive $\alpha$-stable distribution with the normalization $\mathbb{E}[e^{-xZ_\alpha}] = e^{-x^\alpha}$ and, here and throughout, the terms of a product are assumed to be independent.
This shows that the distribution function of the random variable $Z_\alpha \times L^{1/\alpha}$ solves the fractional differential equation (1.3) on $(0, \infty)$, with the initial condition $F(0) = 0$.
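Both ingredients above - the Mittag-Leffler series and Pillai's identity (1.2) - are easy to check numerically. Below is a minimal sketch (function names and truncation levels are ours), using the classical Kanter/Chambers-Mallows-Stuck representation to simulate $Z_\alpha$ with the normalization $\mathbb{E}[e^{-xZ_\alpha}] = e^{-x^\alpha}$:

```python
import math, random

def mittag_leffler(alpha, z, terms=100):
    """Partial sum of E_alpha(z) = sum_n z^n / Gamma(1 + n*alpha)."""
    return sum(z ** n / math.gamma(1 + n * alpha) for n in range(terms))

def sample_positive_stable(alpha, rng):
    """Kanter/Chambers-Mallows-Stuck sample of Z_alpha, E[exp(-x Z_alpha)] = exp(-x^alpha)."""
    u = rng.uniform(0.0, math.pi)
    e = rng.expovariate(1.0)
    return (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)
            * (math.sin((1.0 - alpha) * u) / e) ** ((1.0 - alpha) / alpha))

rng = random.Random(1)
alpha, x, n = 0.6, 1.0, 200_000
# Monte Carlo frequency of the event {Z_alpha * L^{1/alpha} > x}
hits = sum(
    sample_positive_stable(alpha, rng) * rng.expovariate(1.0) ** (1.0 / alpha) > x
    for _ in range(n)
)
print(hits / n, mittag_leffler(alpha, -x ** alpha))  # the two values should be close
```

With the fixed seed, the Monte Carlo frequency and the truncated series agree to about two decimal places; this is a sanity check only, not part of the proofs.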

On the other hand, the above result of Pillai shows that for every $\alpha \in (0, 1]$, one obtains a distribution function on $\mathbb{R}_+$, where the second equality follows from an elementary transformation of (1.2) involving the size-bias $(Z_\alpha^{-1})^{(\alpha)}$ of order $\alpha$ of the inverse positive $\alpha$-stable random variable. Recall that for $t \in \mathbb{R}$ and a positive random variable $X$ such that $\mathbb{E}[X^t] < \infty$, the size-bias of order $t$ of $X$ is the random variable $X^{(t)}$ whose law is defined by
$$\mathbb{E}[f(X^{(t)})] \;=\; \frac{\mathbb{E}[X^t f(X)]}{\mathbb{E}[X^t]}$$
for all $f : \mathbb{R}_+ \to \mathbb{R}$ bounded continuous, and that $\mathbb{E}[Z_\alpha^{-\alpha}] = 1/\Gamma(1+\alpha)$. All of this shows that the distribution function of the random variable $(Z_\alpha^{-1})^{(\alpha)} \times L^{-1/\alpha}$ solves the fractional differential equation (1.4) on $(0, \infty)$, with the initial condition $F(0) = 0$.
In this paper, we wish to study more general fractional equations than (1.3) and (1.4), which are natural extensions of the above differential equations characterizing the classical extreme distributions. Our findings involve the $\alpha$-stable subordinator $\{\sigma^{(\alpha)}_t,\ t \ge 0\}$, which is the real Lévy process starting from zero such that $\mathbb{E}[e^{-\lambda\sigma^{(\alpha)}_t}] = e^{-t\lambda^\alpha}$. Observe that $\sigma^{(\alpha)}$ is a pure drift for $\alpha = 1$, that is $\sigma^{(1)}_t = t$, and a pure killing at an independent exponential time $L$ for $\alpha = 0$, that is $\sigma^{(0)}_t = +\infty \times 1_{\{t \ge L\}}$.

Theorem 1.1. For every $\lambda, \rho > 0$ and $\alpha \in [0, 1]$, there exists a unique distribution function solving the fractional differential equation (1.5) on $(0, \infty)$, with the initial condition $F(0) = 0$. The corresponding random variable $W_{\alpha,\lambda,\rho}$ is given explicitly in terms of a Riemann integral of the $\alpha$-stable subordinator.

In the above statement, we have used the standard notation $x_+ = \max(x, 0)$ for $x \in \mathbb{R}$. Observe that the integral on the right-hand side is finite a.s. for every $\rho > 0$ and $\alpha \in [0, 1]$: this is clear for $\alpha = 1$ since $\rho > 0$, and when $\alpha < 1$ it follows from the fact that $\sigma^{(\alpha)}$ is a non-decreasing càdlàg process which crosses the level 1 a.s. by a jump - see e.g. Theorem III.4 in [5]. The above result shows that the fractional index $\alpha \in [0, 1]$ of the derivative $D^\alpha_{0+}$ gives rise to a non-trivial multiplicative perturbation of the Weibull random variable $W_\rho$, expressed as the power of a certain Riemann integral of the stable subordinator, whereas the parameter $\lambda$ is simply a scaling constant with $W_{\alpha,\lambda,\rho} \stackrel{d}{=} \lambda^{-1/\rho}\, W_{\alpha,1,\rho}$. One also has further identities in law in which, here and throughout, the terms of a quotient are assumed to be independent. The random variable on the right-hand side has a Pareto distribution of type III - see [2] for a study of the latter distribution - and the mapping in law $\alpha \mapsto W_{\alpha,\rho^\alpha,\rho}$ can be viewed as a parametrized arc connecting this Pareto III distribution and the Weibull distribution, the parameter being the index of the underlying stable subordinator. Our second main result gives a fractional extension of the Fréchet distribution.
Theorem 1.2. For every $\lambda, \rho > 0$ and $\alpha \in [0, 1]$, there exists a unique distribution function solving the fractional differential equation (1.6) on $(0, \infty)$, with the initial condition $F(0) = 0$. The corresponding random variable $F_{\alpha,\lambda,\rho}$ is given explicitly in terms of a Riemann integral of the $\alpha$-stable subordinator.

In the above statement, the integral on the right-hand side is finite a.s. by the law of the iterated logarithm at infinity for $\sigma^{(\alpha)}$ - see e.g. Theorem III.11 in [5]. As above, the index $\alpha$ of the derivative $D^\alpha_-$ produces a multiplicative perturbation of the Fréchet random variable $F_\rho$ via a Riemann integral of $\sigma^{(\alpha)}$, whereas the parameter $\lambda$ is a scaling constant with $F_{\alpha,\lambda,\rho} \stackrel{d}{=} \lambda^{1/\rho} F_{\alpha,1,\rho}$. One also has analogous further identities in law. It is interesting to observe from the two above theorems that the two mappings in law, with a sign switch at $\alpha = 0$ corresponding to the trivial equation $F = x^\rho \bar F$, produce a parametrized arc connecting the classical extreme random variables $W_\rho$ and $F_\rho$. The traditional role of the $\alpha$-stable subordinator is to define a fractional Laplacian via the underlying subordinated semi-group, whose marginals are the symmetric $\beta$-stable distributions with $\beta = 2\alpha$, the densities of the latter being, up to some multiplicative constant, the solutions to the fractional Cauchy problem $\frac{\partial f}{\partial t} = \frac{\partial^\beta f}{\partial x^\beta}$ on $(0, \infty) \times \mathbb{R}$ - see [11] and the references therein for more on this standard subject. The above results show that the $\alpha$-stable subordinator is also involved, by means of its Riemann integrals, in the solution to some fractional differential equations naturally associated to the Weibull or the Fréchet distribution.
The classical Gumbel distribution is the limit in law of either $\rho(W_\rho - 1)$ or $\rho(1 - F_\rho)$ as $\rho \to \infty$. In order to define a fractional Gumbel distribution, it is natural from the above to introduce, for every $\alpha \in [0, 1]$, a random variable defined through an integral transform of $\sigma^{(\alpha)}$, the a.s. convergence of the defining integral being well-known. We then have the following result involving the progressive Liouville fractional derivative on the line $D^\alpha_+$, which can be guessed by taking the formal limit $\rho \to \infty$ after a logarithmic change of variables in Theorem 1.1 or Theorem 1.2, and whose derivation is actually rigorous.
on $\mathbb{R}$, the corresponding random variable being denoted by $G_{\alpha,\lambda}$. Unlike the fractional Weibull or Fréchet distributions, the perturbation of the standard Gumbel distribution induced by the parameter $\alpha$ of the progressive derivative $D^\alpha_+$ is linear and not multiplicative. Again, the parameter $\lambda$ is a scaling constant with $G_{\alpha,\lambda} \stackrel{d}{=} \lambda^{-1} G_{\alpha,1}$. One also has further identities in law; the random variable on the right-hand side has a standard logistic distribution - see [18] for an account of the latter distribution - and the mapping in law $\alpha \mapsto G_{\alpha,1}$ can be viewed as a parametrized arc connecting the logistic distribution and the Gumbel distribution. The proof of the three above theorems is divided into two parts. In Section 2 we prove the uniqueness of the solutions to a more general class of fractional equations. These results are of independent interest, because the uniqueness problem is not always addressed in the literature on fractional calculus. It is classical in analysis to show that the solution of a differential equation must be the fixed point of an integral equation, and we use the same method in the framework of fractional calculus. In Section 3 we show the existence of a probability law solving the above equations, and we establish the explicit multiplicative, respectively additive, factorizations. This is done via a one-to-one correspondence with a Kilbas-Saigo function, respectively a Le Roy function. These two unusual special functions lead to a family of positive random variables characterized by their entire moments and previously studied in [16], in a more general context. The results of [16] then imply the required factorizations in terms of an exponential random variable and an independent stable subordinator.
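The classical Gumbel limit invoked above can be illustrated numerically. The sketch below (function names are ours, and the convention that the Gumbel survival function is $e^{-e^{x}}$, as suggested by the equations of the text, is an assumption) compares the survival function of $\rho(W_\rho - 1)$ with its limit:

```python
import math

def weibull_shifted_sf(x, rho):
    """Survival function of rho*(W_rho - 1): P[W_rho > 1 + x/rho] = exp(-(1 + x/rho)^rho)."""
    t = 1.0 + x / rho
    return math.exp(-t ** rho) if t > 0 else 1.0

def gumbel_sf(x):
    """Survival e^{-e^x} of the Gumbel variable G = log L in the convention assumed here."""
    return math.exp(-math.exp(x))

grid = [i / 10 for i in range(-50, 21)]
errors = {rho: max(abs(weibull_shifted_sf(x, rho) - gumbel_sf(x)) for x in grid)
          for rho in (10, 100, 1000)}
print(errors)  # the sup-distance over the grid shrinks as rho grows
```

The decay of the sup-distance with $\rho$ mirrors the affine renormalization in the Fisher-Tippett-Gnedenko theorem.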

Some uniqueness results on fractional hazard rates
In this section we prove the uniqueness of distribution functions solving fractional equations of the type (1.5), (1.6) or (1.7), where the power function is replaced by a more general hazard rate. We repeat that all definitions and notations for the fractional operators that we will consider here can be found in Appendix A.

The Weibull case
We consider the equation
$$D^\alpha_{0+}F(x) \;=\; h(x)\,\bar F(x), \qquad x > 0, \quad F(0) = 0, \eqno(2.1)$$
where $F$ is a distribution function and $h : (0, \infty) \to \mathbb{R}_+$ is measurable and locally bounded. In the case $\alpha = 1$, there exists a solution to (2.1) if and only if $h$ is locally integrable with $\int_0^\infty h(t)\,dt = \infty$, with a unique solution given by $\bar F(x) = \exp(-\int_0^x h(t)\,dt)$. Recall that the function $h$ is called either the reliability function or the hazard rate of the underlying positive random variable. In the case $\alpha = 0$, there exists a solution to (2.1) if and only if $h$ is non-decreasing, $h(0) = 0$ and $h(x) \to \infty$ as $x \to \infty$, with a unique solution given by $\bar F(x) = 1/(1 + h(x))$. In order to state our result in the fractional case $\alpha \in (0, 1)$, we introduce the linear operator $A^{\alpha,h}_{0+}(f) = I^\alpha_{0+}(hf)$, which is well-defined on measurable functions from $(0, \infty)$ to $\mathbb{R}_+$. Then, if it exists, the distribution function satisfying (2.1) is uniquely defined by the convergent series
$$\bar F(x) \;=\; \sum_{n\ge 0} (-1)^n\, (A^{\alpha,h}_{0+})^n(\mathbf{1})(x). \eqno(2.2)$$

Proof. We begin by transforming (2.1) into an integral equation. Since $F$ is a distribution function on $\mathbb{R}_+$, there exists a probability measure $\mu$ on $\mathbb{R}_+$ such that
$$F(x) \;=\; \mu[0,x] \;=\; I^\alpha_{0+}(F_{\alpha,\mu})(x), \qquad x > 0, \eqno(2.3)$$
where we have set $F_{\alpha,\mu}(x) = \frac{1}{\Gamma(1-\alpha)}\int_{[0,x]} (x-t)^{-\alpha}\,\mu(dt)$ for $x > 0$, and the second equality in (2.3) is a direct consequence of Fubini's theorem and of the evaluation of the Beta integral of the first kind. Moreover, it is easy to see that the function $F_{\alpha,\mu}$, which might take infinite values, is nevertheless locally integrable at zero since $\mu$ is a probability. Hence, applying $I^\alpha_{0+}$ on both sides of (2.1), we can use the inversion formula (A.1) and get $F = I^\alpha_{0+}(h\bar F) = A^{\alpha,h}_{0+}(\bar F)$. This leads to the fixed point equation $\bar F = 1 - A^{\alpha,h}_{0+}(\bar F)$ and, by the linearity of $A^{\alpha,h}_{0+}$, an immediate induction based on the Beta integral of the first kind shows that the remainder term of the alternating series converges to zero, as follows e.g. from (1.1.5) in [1]. This completes the proof.

Remark 2.2. (a) One can check directly that the series (2.2) is indeed a survival function on $(0, \infty)$.
(b) In a different direction, sharing a certain analogy with the previous item, the authors of [19, 20] have introduced generalized fractional distributions, which are non-conventional distributions defined through fractional hazard rates. In [21], the stochastic approximation of fractional probability distributions has been studied.
(c) The above proof shows the more general fact that, under the same assumption on $h$, for every $\alpha \in (0, 1)$ there exists a unique bounded solution to the equation $D^\alpha_{0+}(1 - f)(x) = h(x)f(x)$ on $(0, \infty)$. This can be viewed as an extension of (1.1), which corresponds to the case where $h$ is a positive constant.
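In the integer case $\alpha = 1$, the fixed-point scheme underlying the proof can be run numerically: iterating $\bar F \mapsto 1 - \int_0^x h(t)\bar F(t)\,dt$ converges to $\exp(-\int_0^x h)$. A minimal sketch (discretization, function names and the choice $h(t) = \rho t^{\rho-1}$ with $\rho \ge 1$ are ours):

```python
import math

def picard_weibull(rho, x_max=1.5, n_grid=3001, n_iter=60):
    """Iterate F_{k+1}(x) = 1 - int_0^x h(t) F_k(t) dt with h(t) = rho*t^(rho-1);
    this is the alpha = 1 fixed-point equation, whose limit is exp(-x^rho)."""
    dx = x_max / (n_grid - 1)
    xs = [i * dx for i in range(n_grid)]
    h = [rho * t ** (rho - 1) if t > 0 else 0.0 for t in xs]
    f = [1.0] * n_grid
    for _ in range(n_iter):
        g = [h[i] * f[i] for i in range(n_grid)]
        integral, new_f = 0.0, [1.0]
        for i in range(1, n_grid):
            # trapezoidal running integral of h * f
            integral += 0.5 * (g[i - 1] + g[i]) * dx
            new_f.append(1.0 - integral)
        f = new_f
    return xs, f

xs, f = picard_weibull(rho=2.0)
err = max(abs(fi - math.exp(-x ** 2)) for x, fi in zip(xs, f))
print(err)  # discrepancy with the Weibull survival exp(-x^2), small
```

The successive iterates are the partial sums of the exponential series, so convergence on a compact interval is very fast.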

The Fréchet case
We consider the equation
$$D^\alpha_{-}\bar F(x) \;=\; h(x)\,F(x), \qquad x > 0, \eqno(2.4)$$
where $F$ is a distribution function and $h : (0, \infty) \to \mathbb{R}_+$ is measurable and locally bounded. In the case $\alpha = 0$, there exists a solution to (2.4) if and only if $h$ is non-increasing, $h(0+) = \infty$ and $h(x) \to 0$ as $x \to \infty$, with a unique solution given by $F(x) = 1/(1 + h(x))$. In order to state our result in the fractional case $\alpha \in (0, 1)$, we introduce the linear operator $f \mapsto I^\alpha_-(hf)$, which is well-defined on measurable functions from $(0, \infty)$ to $\mathbb{R}_+$.
Then, if it exists, the distribution function satisfying (2.4) is uniquely defined by the corresponding convergent alternating series.

Proof. The proof is analogous to the previous one, except that we deal with survival functions.
Since $\bar F$ is a survival function on $\mathbb{R}_+$, there exists a probability measure $\mu$ on $\mathbb{R}_+$ yielding an analogous integral representation for $x > 0$, where we have used Fubini's theorem together with the evaluation of the Beta integral of the second kind. The function $\bar F_{\alpha,\mu}$ is locally integrable at infinity. Fixing now $x > 0$, the assumption made on $h$ implies that there exists a constant $c > 0$ such that $h(t) \le c\,t^{-\rho-\alpha}$ for every $t > x$. Since moreover $F(t) \le 1$ for every $t > x$, an induction based on the Beta integral of the second kind produces a bound of order $c^n x^{-\rho n}$, up to multiplicative Beta factors, which tends to zero as $n \to \infty$ and completes the proof. The same argument shows that, under the same assumption on $h$, for every $\alpha \in (0, 1)$ there exists a unique bounded function having a given limit $\ell$ at infinity and solving the corresponding equation. Observe that this solution is zero if $\ell = 0$.

The Gumbel case
We consider the equation
$$D^\alpha_{+}F(x) \;=\; h(x)\,\bar F(x), \qquad x \in \mathbb{R}, \eqno(2.6)$$
given on distribution functions, where $h : \mathbb{R} \to \mathbb{R}_+$ is measurable and locally bounded.
In the case $\alpha = 1$, there exists a solution to (2.6) if and only if $h$ is locally integrable at $-\infty$ with $\int_{-\infty}^{\infty} h(t)\,dt = \infty$, with a unique solution given by $\bar F(x) = \exp(-\int_{-\infty}^x h(t)\,dt)$. In the case $\alpha = 0$, there exists a solution to (2.6) if and only if $h$ is non-decreasing, $h(x) \to 0$ as $x \to -\infty$ and $h(x) \to \infty$ as $x \to \infty$, with a unique solution given by $\bar F(x) = 1/(1 + h(x))$. In order to state our result in the fractional case $\alpha \in (0, 1)$, we introduce the linear operator $f \mapsto I^\alpha_+(hf)$, which is well-defined on measurable functions from $\mathbb{R}$ to $\mathbb{R}_+$. The following result is a simple variation on Theorem 2.3.
Changing the variable backwards, we obtain the required (2.7).

Proof of the main theorems
In this section we show the existence of the real random variables associated to the fractional differential equations (1.5), (1.6) and (1.7), and we express them in terms of the unit exponential random variable and an integral transform of an independent $\alpha$-stable subordinator. An important ingredient in the proof is the infinite independent product
$$T(a, b, c) \;=\; \prod_{n\ge 0} \left(\frac{a + nb + c}{a + nb}\right) B_{a+nb,\,c}$$
where, here and throughout, $B_{a,b}$ denotes a standard Beta random variable. We refer to Section 2.1 in [16] for more details on this product, including the fact that it is a.s. convergent for every $a, b, c > 0$; we also mention from Proposition 2 in [16] that its Mellin transform is explicit. The Kilbas-Saigo function is defined by the convergent series
$$E_{\alpha,m,l}(z) \;=\; \sum_{n\ge 0} c_n z^n, \qquad c_0 = 1, \quad c_n \;=\; \prod_{k=0}^{n-1} \frac{\Gamma(1 + \alpha(km + l))}{\Gamma(1 + \alpha(km + l + 1))},$$
for $\alpha, m > 0$ and $l > -1/\alpha$, with the convention made here and throughout that an empty product always equals 1. Note that $E_\alpha = E_{\alpha,1,0}$ and that $\Gamma(\beta)E_{\alpha,\beta} = E_{\alpha,1,\frac{\beta-1}{\alpha}}$. We refer to Chapter 5.2 in [12] for an extensive account, including an extension to complex values of the parameter $l$. The Le Roy functions are simple generalizations of the exponential function, defined by
$$L_\alpha(z) \;=\; \sum_{n\ge 0} \frac{z^n}{(n!)^\alpha}$$
for $\alpha > 0$. Introduced in [15] in the context of analytic continuation, Le Roy functions are much less studied than Mittag-Leffler's - see however the recent paper [10] and the references therein.
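Both special functions enter only through their power series and are straightforward to evaluate numerically. A sketch (function names and truncation are ours), which also checks the degenerate cases against the Mittag-Leffler and exponential functions:

```python
import math

def kilbas_saigo(alpha, m, l, z, terms=60):
    """Partial sum of E_{alpha,m,l}(z) = sum_n c_n z^n with c_0 = 1 and
    c_n = prod_{k=0}^{n-1} Gamma(1 + alpha(km + l)) / Gamma(1 + alpha(km + l + 1))."""
    total, c = 0.0, 1.0
    for n in range(terms):
        total += c * z ** n
        c *= math.gamma(1 + alpha * (n * m + l)) / math.gamma(1 + alpha * (n * m + l + 1))
    return total

def le_roy(alpha, z, terms=60):
    """Partial sum of the Le Roy function sum_n z^n / (n!)^alpha."""
    return sum(z ** n / math.factorial(n) ** alpha for n in range(terms))

# degenerate cases: E_{alpha,1,0} is Mittag-Leffler (telescoping product), L_1 is exp
ml = sum((-1.0) ** k / math.gamma(1 + 0.7 * k) for k in range(60))
print(kilbas_saigo(0.7, 1.0, 0.0, -1.0) - ml)  # difference ~ 0
print(le_roy(1.0, -1.0) - math.exp(-1.0))      # difference ~ 0
```

For $m = 1$, $l = 0$ the coefficient product telescopes to $1/\Gamma(1 + \alpha n)$, which is what the first check exploits.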
It thus remains to prove that $x \mapsto E_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1}(-\lambda x^\rho)$ is a survival function on $\mathbb{R}_+$ and to identify the underlying positive random variable. For every $z \in \mathbb{C}$, one has
$$E_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1}(z) \;=\; \sum_{n\ge 0} a_n(\alpha,\rho)\,\frac{z^n}{n!} \qquad\text{with}\qquad a_n(\alpha,\rho) \;=\; \rho^{-n}\prod_{k=1}^{n} \frac{\Gamma(k\rho + 1 - \alpha)}{\Gamma(k\rho)}\cdot$$
Let us now consider the positive random variable whose positive entire moments, by the aforementioned Proposition 2 in [16] and the concatenation formula (B.1), are given by the sequence $\{a_n(\alpha,\rho),\ n \ge 0\}$. These moments grow like $\kappa^n (n!)^{1-\alpha}$ as $n \to \infty$ for some positive constant $\kappa$, so that Carleman's criterion $\sum_{n\ge 1} a_n(\alpha,\rho)^{-\frac{1}{2n}} = \infty$ is fulfilled, and the law of the latter random variable is determined by its positive entire moments. Finally, it follows from Theorem (b) (i) in [16] with $q = \rho - \alpha$ that the required factorization holds. All in all, we have shown that the moment sequence identifies the random variable of the statement, and this implies $\mathbb{P}[W_{\alpha,\lambda,\rho} > x] = E_{\alpha,\frac{\rho}{\alpha},\frac{\rho}{\alpha}-1}(-\lambda x^\rho)$ for every $x \ge 0$, which completes the proof for $\alpha \in (0, 1)$. The case $\alpha = 1$ is that of the classical Weibull distribution and was already discussed in the introduction. Finally, the case $\alpha = 0$ amounts to solving $F = \lambda x^\rho(1 - F)$, which yields $\bar F(x) = 1/(1 + \lambda x^\rho)$. The main result of [13] implies the identification where $G(m, a)$ is the generalized stable random variable with parameters $m > a > 0$, whose density is, up to normalization, the unique positive solution to (3.2) on $(0, \infty)$. This yields a further identity in law. In particular - see the introduction in [13] for the third identity - this is in accordance with the Pillai distribution mentioned in the introduction. Notice that the integro-differential equation (3.2) shares some formal similarities with (1.5). Nevertheless, it is essentially different because it deals with densities, whereas (1.5) deals with distribution functions. (b) There exist unique solutions to fractional differential equations of the type (1.5) without the restriction to distribution functions.
The main result of [4] states that for every $\alpha \in (0, 1]$, there is a unique solution to such an equation with a prescribed behaviour at zero, which is the density function of the running maximum of a spectrally positive $(\alpha + 1)$-stable Lévy process starting from zero.
(c) With the above notation, one has $a_n(\alpha,\rho) = \frac{n!}{\Phi_{\alpha,\rho}(1)\cdots\Phi_{\alpha,\rho}(n)}$ for a certain Bernstein function $\Phi_{\alpha,\rho}$, and this leads to a representation as the perpetuity of a subordinator. Let us notice that the identification (3.3) can also be deduced from Corollary 5 in [16], which is the case $q = \rho - \alpha > -\alpha$ and $\hat\rho = 1$ with the notation therein. Observe finally that this is consistent with the limiting case $\alpha = 1$. (d) The above proof shows that $x \mapsto E_{\alpha,m,m-1}(-x)$ is completely monotone (CM) for every $\alpha \in (0, 1]$ and $m > 0$. This can be viewed as a generalization of the classic result by Pollard that $E_\alpha(-x) = E_{\alpha,1,0}(-x)$ is CM for every $\alpha \in (0, 1]$ - see e.g. Proposition 3.23 in [12]. This also solves a conjecture stated in [9] - see Section 4 and equations (10) and (11) therein. The case $m = 1$ is a well-known fact following from our discussion prior to the statement of Theorem 1.1.
In a forthcoming paper, we will generalize this fact and show some further analytical properties of the Kilbas-Saigo function $E_{\alpha,m,m-1}$.
We end this section with a convergent series representation, in the non-explicit case $\alpha \in (0, 1)$, for the density $f_{W_{\alpha,\lambda,\rho}}$ of $W_{\alpha,\lambda,\rho}$. This is an immediate consequence of a term-by-term differentiation of the survival function obtained during the proof of Theorem 1.1.

Proof of Theorem 1.2
The proof is analogous to that of Theorem 1.1, except that we deal with distribution functions instead of survival functions. We begin with the case $\alpha \in (0, 1)$, where the uniqueness follows from Theorem 2.3. Besides, we know by (2.5) that a distribution function solving (1.6), if it exists, must be given by the corresponding convergent series; since the relevant bound holds for every $\beta > \alpha$, an induction implies the identification for every $x > 0$. Alternatively, the fact that $E_{\alpha,\frac{\rho}{\alpha},\frac{\rho-1}{\alpha}}(-\lambda x^{-\rho})$ is a solution to (1.6) follows from Theorem 5.30 and the inversion formula (E.1.10) in [12]. It remains to prove that this function is a distribution function on $(0, \infty)$ and to identify the underlying positive random variable. For every $z \in \mathbb{C}$, one has a series representation with coefficients $b_n(\alpha,\rho)$, and reasoning exactly as above implies that $\{b_n(\alpha,\rho),\ n \ge 0\}$ is the determinate integer moment sequence of a positive random variable, where the identity in law follows from Corollary 3 in [16]. We have hence shown the required identification, which implies the statement for every $x > 0$ and completes the proof for $\alpha \in (0, 1)$. As for Theorem 1.1, the remaining cases $\alpha = 0$ and $\alpha = 1$ are elementary and we leave the details to the reader. On the other hand, this factor can also be viewed as the perpetuity of some subordinator: rewriting $b_n(\alpha,\rho) = \frac{n!}{\Psi_{\alpha,\rho}(1)\cdots\Psi_{\alpha,\rho}(n)}$ with a Bernstein function $\Psi_{\alpha,\rho}$ leads to the corresponding representation. Let us again notice that the identification (3.5) can be deduced from Corollary 5 in [16], which is the case $q = -\rho - \alpha < -\alpha$ and $\hat\rho = 0$ - see also Remark 10 therein. Observe also that the limiting cases $\alpha = 0$ and $\alpha = 1$ are consistent. One has moreover a moment identity valid for all $\alpha \in [0, 1]$, $\rho > 0$ and $n \ge 1$; by moment determinacy, this implies the factorization (3.6) of the unit exponential law, which is valid for all $\alpha \in [0, 1]$ and $\rho > 0$. For $\rho = \alpha$, this factorization can be written as a pair of identities in law, the first of which follows from Remark 3.1 (d) and the second from (3.4) in [16].
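The factorization (3.6) of the unit exponential law can be verified on integer moments. The sketch below (function names are ours) uses the classical negative-moment formula $\mathbb{E}[Z_\alpha^{-s}] = \Gamma(1+s/\alpha)/\Gamma(1+s)$ - not restated in the text, but consistent with $\mathbb{E}[Z_\alpha^{-\alpha}] = 1/\Gamma(1+\alpha)$ recalled in the introduction - for the special case $\rho = \alpha$, where the factorization reduces to the Shanbhag-Sreehari identity $L \stackrel{d}{=} L^\alpha \times Z_\alpha^{-\alpha}$:

```python
import math

def neg_moment_stable(alpha, s):
    """Classical formula E[Z_alpha^{-s}] = Gamma(1 + s/alpha) / Gamma(1 + s), s > -alpha."""
    return math.gamma(1 + s / alpha) / math.gamma(1 + s)

def check_factorization(alpha, n_max=10):
    """Compare moments of both sides of L = L^alpha x Z_alpha^{-alpha} (independent factors):
    E[L^n] = n!  versus  E[L^{alpha n}] * E[Z_alpha^{-alpha n}]."""
    worst = 0.0
    for n in range(1, n_max + 1):
        lhs = math.factorial(n)
        rhs = math.gamma(1 + alpha * n) * neg_moment_stable(alpha, alpha * n)
        worst = max(worst, abs(lhs - rhs) / lhs)
    return worst

print(check_factorization(0.3), check_factorization(0.8))  # both relative errors ~ 0
```

The Gamma factors cancel exactly, which is precisely the moment-determinacy argument of the text in miniature.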
The simple identity $L \stackrel{d}{=} L^\alpha \times Z_\alpha^{-\alpha}$ is well-known as Shanbhag-Sreehari's identity. It has been thoroughly discussed in Section 3 of [6] from the point of view of perpetuities of subordinators and their associated remainders. Observe also that changing the variable and letting $\rho \to \infty$ in (3.6) leads to another classic identity obtained in [8] - see Example E therein. Last, it is interesting to mention a further identity, which follows at once from (3.6), Theorem 1.1 and Theorem 1.2. (c) The above proof shows that the function $x \mapsto E_{\alpha,m,m-\frac{1}{\alpha}}(-x)$ is CM for every $\alpha \in (0, 1]$ and $m > 0$. This is a generalization of the fact that $E_{\alpha,1,1-\frac{1}{\alpha}}(-x) = \Gamma(\alpha)E_{\alpha,\alpha}(-x)$ is CM for every $\alpha \in (0, 1]$, which is itself a direct consequence of the aforementioned Pollard theorem because $E_{\alpha,\alpha}(-x) = \alpha E'_\alpha(-x)$. The formula (3.4) then implies the corresponding Bernstein representation.
For $m = 1$, with the notation of Remark 3.1 (d), we obtain that the underlying factor is the size-bias of order 1 of $T_\alpha$. This implies a curious integral identity.
In a different direction, it is worth recalling that for every $\beta > \alpha$ with $\alpha \in (0, 1]$, the function $E_{\alpha,\beta}(-x)$ is also CM, where the Bernstein representation, involving the $\alpha$-power of a standard Beta distribution, follows directly from Lemma 4.26 in [12] - see also the references therein.
In a companion paper, we will come back to this example together with further analytical properties of the Kilbas-Saigo functions $E_{\alpha,m,m-\frac{1}{\alpha}}$.
We end this section with a convergent series representation, in the non-explicit case $\alpha \in (0, 1)$, for the density $f_{F_{\alpha,\lambda,\rho}}$ of $F_{\alpha,\lambda,\rho}$. This is a consequence of a term-by-term differentiation of the distribution function obtained during the proof of Theorem 1.2.

Proof of Theorem 1.3
The argument is shorter than for Theorems 1.1 and 1.2.
A direct integration based on the Gamma integral and Fubini's theorem shows finally that $D^\alpha_+ F(x) = \lambda^\alpha e^{\lambda x}\bar F(x)$ for every $x \in \mathbb{R}$, as required. The case $\alpha = 1$ was already discussed in the introduction, with a unique solution $F(x) = \mathbb{P}[G \le \lambda x] = \mathbb{P}[G_{1,\lambda} \le x]$, whereas the unique solution in the case $\alpha = 0$ is obviously $F(x) = 1/(1 + e^{-\lambda x})$, which is the distribution function of $\lambda^{-1}(G - G') = G_{0,\lambda}$, with $G'$ an independent copy of $G$. Remark 3.5. (a) The above proof also identifies the unique distribution function solving the corresponding fractional differential equation. (b) It is easy to deduce, from the representations of the fractional extreme distributions in terms of integrals of the stable subordinator, the corresponding convergences in law as $\rho \to \infty$, for every $\alpha \in [0, 1]$ and $\lambda > 0$. Observe that the case $\alpha = \lambda = 1$ amounts to the aforementioned convergences in law of $\rho(W_\rho - 1)$ and $\rho(1 - F_\rho)$. As above, we finish this paragraph with a convergent series representation, in the non-explicit case $\alpha \in (0, 1)$, for the density $f_{G_{\alpha,\lambda}}$ of $G_{\alpha,\lambda}$, which is a consequence of a term-by-term differentiation of the survival function $\mathbb{P}[G_{\alpha,\lambda} > x] = L_\alpha(-e^{\lambda x})$. Corollary 3.6. For every $\alpha \in (0, 1]$, the density of $G_{\alpha,\lambda}$ has a convergent series representation on $\mathbb{R}$.

A Fractional integrals and derivatives
In this paragraph, we fix the notation for the fractional operators used throughout the paper. This is an excerpt from the beginning of Chapter 2 in [14]. We will consider only the three kinds of such operators which are the most familiar, and our fractional parameter $\alpha$ will always be supposed to lie in $[0, 1]$. There are many other fractional operators with a larger family of fractional parameters, and we refer to the whole of Chapter 2 in [14] for an account.

A.1 Progressive Liouville operators on the half-axis
For every $\alpha \in (0, 1)$, the operator $f \mapsto I^\alpha_{0+}(f)$ with
$$I^\alpha_{0+}(f)(x) \;=\; \frac{1}{\Gamma(\alpha)}\int_0^x (x-t)^{\alpha-1} f(t)\, dt$$
is well-defined, taking possibly infinite values, on measurable functions $f : (0, \infty) \to \mathbb{R}_+$. It is easy to see that if $f$ is integrable at zero, then so is $I^\alpha_{0+}(f)$ - see Lemma 2.1 in [14] for a more general result. The corresponding fractional derivative $f \mapsto D^\alpha_{0+}(f)$ is such that
$$D^\alpha_{0+}(f) \;=\; \frac{d}{dx}\, I^{1-\alpha}_{0+}(f)$$
and is well-defined almost everywhere as soon as $f = I^\alpha_{0+}(g)$ for some $g$ integrable at zero. Moreover, for such functions there is an inversion formula
$$I^\alpha_{0+} D^\alpha_{0+}(f) \;=\; f, \eqno(A.1)$$
which is valid almost everywhere - see Lemma 2.5 in [14]. These operators are extended to the boundary cases $\alpha = 0$ with $I^0_{0+} = D^0_{0+} = \mathrm{Id}$ and $\alpha = 1$ with $I^1_{0+}$ and $D^1_{0+}$ being respectively the usual running integral and derivative - see (2.1.7) in [14].
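The operator $I^\alpha_{0+}$ can be evaluated numerically; the classical power rule $I^\alpha_{0+}(t^\beta)(x) = \frac{\Gamma(\beta+1)}{\Gamma(\alpha+\beta+1)}\,x^{\alpha+\beta}$ gives a convenient check. A minimal sketch (discretization and function names are ours), in which the substitution $v = (x-t)^\alpha$ removes the endpoint singularity:

```python
import math

def liouville_integral_0plus(f, alpha, x, n=20000):
    """Numerical I^alpha_{0+}(f)(x) = (1/Gamma(alpha)) int_0^x (x-t)^(alpha-1) f(t) dt.
    Substituting v = (x-t)^alpha turns the weight into a constant; midpoint rule."""
    v_max = x ** alpha
    dv = v_max / n
    total = sum(f(x - ((i + 0.5) * dv) ** (1.0 / alpha)) for i in range(n)) * dv
    return total / (alpha * math.gamma(alpha))

alpha, beta, x = 0.5, 2.0, 1.3
approx = liouville_integral_0plus(lambda t: t ** beta, alpha, x)
exact = math.gamma(beta + 1) / math.gamma(alpha + beta + 1) * x ** (alpha + beta)
print(approx, exact)  # the two values should agree closely
```

The same substitution works for any locally bounded $f$, since the transformed integrand is smooth up to the endpoint.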

A.2 Regressive Liouville operators on the half-axis
For every $\alpha \in (0, 1)$, the operator $f \mapsto I^\alpha_-(f)$ with
$$I^\alpha_{-}(f)(x) \;=\; \frac{1}{\Gamma(\alpha)}\int_x^\infty (t-x)^{\alpha-1} f(t)\, dt$$
is well-defined on measurable functions $f : (0, \infty) \to \mathbb{R}_+$. It is easy to see that if $f$ is integrable at infinity, then so is $I^\alpha_-(f)$ - see again Lemma 2.1 in [14]. The corresponding fractional derivative $f \mapsto D^\alpha_-(f)$ is such that
$$D^\alpha_{-}(f) \;=\; -\,\frac{d}{dx}\, I^{1-\alpha}_{-}(f)$$
and is well-defined as soon as $f = I^\alpha_-(g)$ for some $g$ integrable at infinity. Moreover, for such functions there is an inversion formula $I^\alpha_- D^\alpha_-(f) = f$, which is valid almost everywhere. These operators are extended to the boundary cases $\alpha = 0$ with $I^0_- = D^0_- = \mathrm{Id}$ and $\alpha = 1$ with $I^1_-$ and $D^1_-$ being respectively the usual running integral and the opposite of the usual derivative - see again (2.1.7) in [14].

A.3 Progressive Liouville operators on the real axis
For every $\alpha \in (0, 1)$, the operator $f \mapsto I^\alpha_+(f)$ with
$$I^\alpha_{+}(f)(x) \;=\; \frac{1}{\Gamma(\alpha)}\int_{-\infty}^x (x-t)^{\alpha-1} f(t)\, dt$$
is well-defined on measurable functions $f : \mathbb{R} \to \mathbb{R}_+$. Observe that $I^\alpha_+(f)(-x) = I^\alpha_-(g)(x)$ with $g(x) = f(-x)$, so that we can transfer to $I^\alpha_+$ the properties of $I^\alpha_-$. In particular, if $f$ is integrable at $-\infty$, then so is $I^\alpha_+(f)$. The corresponding fractional derivative $f \mapsto D^\alpha_+(f)$ is such that
$$D^\alpha_{+}(f) \;=\; \frac{d}{dx}\, I^{1-\alpha}_{+}(f)$$
and is well-defined as soon as $f = I^\alpha_+(g)$ for some $g$ integrable at $-\infty$. Moreover, for such functions there is an inversion formula $I^\alpha_+ D^\alpha_+(f) = I^\alpha_+(g) = f$, which is valid almost everywhere. These operators are extended to the boundary cases $\alpha = 0$ with $I^0_+ = D^0_+ = \mathrm{Id}$ and $\alpha = 1$ with $I^1_+$ and $D^1_+$ being respectively the usual running integral and derivative.

B Barnes' double Gamma function
In this paragraph, extracted from [7] to which we refer for further results, we collect a few facts about Barnes' double Gamma function $G(z; \delta)$. For every $\delta > 0$, this function is defined as the unique solution to a functional equation of Gamma type, and an asymptotic expansion is valid for $|z| \to \infty$ with $|\arg(z)| < \pi$, for some real constants $A$, $B$ and $C$ which are given in (4.5) of [7]. In this paper we make extensive use of the Pochhammer type symbol
$$[a;\delta]_s \;=\; \frac{G(a+s;\delta)}{G(a;\delta)},$$
which is well-defined for every $a, \delta > 0$ and $s > -a$.