Sequences of expected record values

We investigate conditions for deciding whether a given sequence of real numbers represents the expected record values arising from an independent, identically distributed sequence of random variables. The main result provides a necessary and sufficient condition, relating any expected record sequence to the Stieltjes moment problem. The results are proved by means of a useful transformation on random variables. Some properties of this mapping, and of its inverse, are discussed in detail, and, under mild conditions, an explicit inversion formula for the random variable that admits a given expected record sequence is obtained.

Key words and phrases: characterizations; expected record values; Stieltjes moment problem; transformation of random variables; inversion formula.

AMS subject classification: Primary 60E05, 62G30; Secondary 44A60.


Introduction
Let X be a random variable (r.v.) with distribution function (d.f.) F, and suppose that X_1, X_2, . . . is an independent, identically distributed (i.i.d.) sequence from F. The usual record times, T_n, and (upper) record values, R_n, corresponding to the i.i.d. sequence X_1, X_2, . . ., are defined by T_1 = 1, R_1 = X_1, and, inductively, by (1.1). It is obvious that (1.1) produces an infinite sequence of records (= record values) if and only if F does not have an atom at its upper end-point (if this end-point is finite). In a similar manner, one can define the so-called weak (upper) records, W_n, by T_1 = 1, W_1 = X_1, and (1.2); clearly, the sequence W_n in (1.2) is non-terminating for every d.f. F. These models have been studied extensively in the literature. The interested reader is referred to the books by Ahsanullah (1995), Arnold et al. (1998) and Nevzorov (2001).
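For the reader's convenience, the inductive definitions can be written in the usual textbook form (cf. Arnold et al., 1998); the notation T^w_n for the weak record times is ours:

```latex
% Ordinary (upper) record times and record values, as in (1.1):
T_{n+1} = \min\{\, j > T_n : X_j > X_{T_n} \,\}, \qquad
R_{n+1} = X_{T_{n+1}}, \qquad n \ge 1.
% Weak (upper) record times and weak record values, as in (1.2) -- ties are allowed:
T^{w}_{n+1} = \min\{\, j > T^{w}_n : X_j \ge X_{T^{w}_n} \,\}, \qquad
W_{n+1} = X_{T^{w}_{n+1}}, \qquad n \ge 1.
```

Since the weak-record inequality is non-strict, the minimum above is attained for every d.f. F, which is why the sequence W_n never terminates.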
Clearly, the record processes (1.1) and (1.2) coincide with probability (w.p.) 1 whenever F is continuous (i.e., free of atoms). In that case, the record process (R_1, R_2, . . .) has the same distribution as the sequence (F^{-1}(U_1), F^{-1}(U_2), . . .) of (1.3), where U_1 < U_2 < · · · is the record process from the standard uniform d.f., U(0, 1). It should be noted, however, that the records, as defined by (1.3), are in general neither weak nor ordinary records (when F is arbitrary). To illustrate the situation, consider the case where F is symmetric Bernoulli, b(1/2), that is, X = 0 or 1 w.p. 1/2 each, so that F^{-1}(u) = I(u > 1/2). Table 1 provides a realization of the corresponding i.i.d. and record processes; it shows that W_2 = F^{-1}(U_2) = 0 while R_2 = 1. Also, W_4 = 0 while F^{-1}(U_4) = 1 (and R_4 is undefined). From now on we shall constantly use the notation R_n for F^{-1}(U_n), where {U_n}_{n=1}^∞ is the sequence of uniform records; the effect is not essential in applications, where it is customarily assumed that F is absolutely continuous. Of course, the three notions of records coincide (w.p. 1) if and only if F^{-1}(u) is strictly increasing in u ∈ (0, 1), and this is equivalent to the fact that P(X = x) = 0 for all x.
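In the b(1/2) example, where F^{-1}(u) = I(u > 1/2), the expected record sequence in the sense of (1.3) can even be computed in closed form, using the standard fact (see Appendix A) that −log(1 − U_n) follows the Erlang(n, 1) distribution:

```latex
\operatorname{E} R_n
  \;=\; \Pr\bigl(F^{-1}(U_n) = 1\bigr)
  \;=\; \Pr\bigl(U_n > \tfrac12\bigr)
  \;=\; \Pr\bigl(S_n > \log 2\bigr)
  \;=\; \frac12 \sum_{k=0}^{n-1} \frac{(\log 2)^k}{k!},
  \qquad n \ge 1,
```

where S_n denotes an Erlang(n, 1) r.v.; in particular E R_1 = 1/2 = E X, and E R_n increases to 1 as n → ∞.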
The present work concentrates on questions of the following form: does a given real sequence {ρ_n}_{n=1}^∞ represent an expected record sequence (ERS) of some r.v. X?
That is, can we find an r.v. X such that E R_n = ρ_n for all n, where the record process R_n is defined by (1.3)? Moreover, if the answer is in the affirmative, is this r.v. unique? How can we reconstruct it from its ERS?
One of the central results of the paper reads as follows.
Theorem 1.1. A real sequence {ρ_n}_{n=1}^∞ is an expected record sequence corresponding to a non-degenerate r.v. X if and only if (1.4) holds for some r.v. T, with P(T > 0) = 1, possessing finite moments of any order.

Characterizations of the parent distribution through its expected records (under mild additional assumptions, like continuity and a finite moment of order greater than one) have been present in the bibliography for a long time, the most relevant being those given by Kirmani and Beg (1984) and Lin (1987); see also Lin and Huang (1987). However, these authors do not provide an explicit connection to the Stieltjes moment problem. On the contrary, the corresponding theory for an expected maxima sequence, EMS, μ_n = E max{X_1, . . . , X_n}, is well understood from Kadane (1971, 1974). Namely, Kadane showed that {μ_n}_{n=1}^∞ represents an EMS (of a non-degenerate, integrable, parent population) if and only if there exists a random variable T, with P(0 < T < 1) = 1, such that (1.5) holds. The representation (1.5) is closely connected to the Hausdorff (1921) moment problem, and improves on Hoeffding's (1953) characterization. The above kind of results enables further applications in the theory of maxima and order statistics; see, e.g., Hill and Spruill (1994, 2000), Huang (1998), Kolodynski (2000). Moreover, the r.v. T in (1.5), (the distribution of) which is clearly unique, admits the representation T = F(V), where F is the parent d.f. and V has the density given in Papadatos (2017). Conversely, the parent distribution is characterized from the sequence {μ_n}_{n=1}^∞, and its location-scale family from T.
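Although (1.5) is not displayed here, the computation behind Kadane's representation can be sketched as follows. Assume E|X| < ∞ and write G = F^{-1} for the inverse d.f., so that μ_n = ∫_0^1 G(u) d(u^n); then an integration by parts gives

```latex
\mu_{n+1}-\mu_n
  \;=\; \int_0^1 G(u)\,\bigl[(n+1)u^{n}-n\,u^{n-1}\bigr]\,du
  \;=\; \int_0^1 G(u)\, d\!\left(u^{n}(u-1)\right)
  \;=\; \int_0^1 u^{n}\,(1-u)\, dG(u),
```

the boundary terms vanishing under E|X| < ∞. Hence the differences μ_{n+1} − μ_n are, up to the normalizing constant ∫_0^1 (1 − u) dG(u), the moments of an r.v. supported in (0, 1): this is exactly the Hausdorff moment-problem connection mentioned above.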
In the case of a record process we would like to verify similar results, guaranteeing that the theory of maxima can be suitably adapted to that of records. However, there are essential differences between these two models; see, e.g., Resnick (1973, 1987), Nagaraja (1978), Tryfos and Blackmore (1985), Embrechts et al. (1997), Papadatos (2012) or Barakat et al. (2019); see also Appendix A. In this spirit, (1.4) can be viewed as the natural record-analogue of (1.5).
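The parallel computation on the record side, which lies behind the form of (1.4), can be sketched as follows. Recall (see Appendix A) that R_n = H(S_n), where H(y) = F^{-1}(1 − e^{−y}) and S_n is Erlang(n, 1), so that ρ_n = ∫_0^∞ H(y) y^{n−1} e^{−y}/(n−1)! dy; integrating by parts,

```latex
\rho_{n+1}-\rho_n
 \;=\; \int_0^\infty H(y)\,\frac{y^{\,n-1}(y-n)\,e^{-y}}{n!}\,dy
 \;=\; \frac{1}{n!}\int_0^\infty H(y)\, d\!\left(-\,y^{\,n}e^{-y}\right)
 \;=\; \frac{1}{n!}\int_0^\infty y^{\,n}\, e^{-y}\, dH(y),
```

the boundary terms vanishing for X in the appropriate space. Thus n!(ρ_{n+1} − ρ_n) is the moment sequence of a measure on (0, ∞), which is the Stieltjes moment-problem connection asserted in Theorem 1.1; after suitable normalization, the measure e^{−y} dH(y) corresponds to (the distribution of) the r.v. T.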
The results presented here are based on a suitable mapping ϕ on the distribution of a random variable. Using ϕ, the location-scale family of any suitable X is transformed to (the distribution of) a unique positive random variable T with finite moments of any order. The mapping is one-to-one and onto (hence, invertible), and several properties of the expected record sequence of X are easily extracted from the behavior of T = ϕ(X). The basic properties of the mapping ϕ are discussed in Section 2. Using them, we provide a complete description of the class of r.v.'s that are characterized from their expected record sequence; see Theorem 2.3. Moreover, under mild assumptions, an inversion formula for the distribution function of the random variable that admits a given expected record sequence is obtained; see Theorem 2.4. The main results are presented in Section 2, and the proofs, together with some auxiliary lemmas, are postponed to the appendices.
Throughout the rest of the article, X = Y for r.v.'s X, Y means that X and Y are identically distributed, and inverse d.f.'s are always taken to be left-continuous, namely, F^{-1}(u) := inf{x : F(x) ≥ u}, u ∈ (0, 1).
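For later use, recall the standard "switching" property of the left-continuous inverse, which is used repeatedly in what follows (e.g., in the computation of the d.f. of R_n in Appendix A):

```latex
F^{-1}(u) \le x \;\Longleftrightarrow\; u \le F(x),
\qquad u \in (0,1),\ x \in \mathbb{R},
```

so that, in particular, F^{-1}(U) has d.f. F whenever U is uniform on (0, 1).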

The mapping ϕ with applications to characterizations
For the investigation of the mapping ϕ it is necessary to introduce two suitable spaces of r.v.'s.
Notice that X ∈ H * if and only if X is non-degenerate and the corresponding record process in (1.3) satisfies E |R_n| < ∞ for all n (see Proposition A.1).

Definition 2.2. The space T consists of all r.v.'s T, with P(T > 0) = 1, possessing finite moments of any order, where identically distributed r.v.'s are considered as equal. We customarily write F_T ∈ T in order to denote T ∈ T, where F_T is the d.f. of T.
We are now ready to define the mapping ϕ and its inverse ϕ′. What we shall prove in the sequel is that, essentially, the spaces H_0 and T are identified through the restriction of ϕ to H_0.
, where F is the d.f. of X and the r.v. V has density (with respect to Lebesgue measure) given by f_V. The mapping ϕ is well-defined because X ∈ H *, so that f_V is integrable, strictly positive on the non-empty interval {x : 0 < F(x) < 1}, and zero otherwise.

Definition 2.4. For any T ∈ T with d.f. F_T we define X_0 = ϕ′(T) ∈ H_0 to be the r.v. with inverse d.f. G_0, for which the function H_0(y) = G_0(1 − e^{−y}), y > 0, is given by the formula (2.1). In this formula, F_T(y−) = P(T < y), and c_T is the unique constant (depending only on T) for which ∫_0^∞ e^{−y} H_0(y) dy = 0. We shall prove in Lemma D.7 that c_T admits the explicit expression (2.2), where I denotes an indicator function.
Theorem 2.1. The transformation ϕ : H_0 → T of Definition 2.3 is one-to-one and onto, with inverse ϕ^{−1} : T → H_0, where ϕ^{−1} = ϕ′ is given by Definition 2.4. Theorem 1.1 holds true, since it is an immediate corollary of the following result, in which the r.v. T of (1.4) is obtained as T = ϕ(X), where the mapping ϕ is given by Definition 2.3.
Corollary 2.1. If the r.v. T ∈ T has a density f_T, then the function H_0 in (2.1) is given by (2.3).

Remark 2.2. The formula (2.3) is unable to describe several continuous r.v.'s in H_0 for which, however, the ordinary record process {R_n}_{n≥1} is well-defined. This is so because any r.v. T ∈ T with dense support in (0, ∞) will produce a continuous r.v.
This observation is a consequence of (D.3), which implies that, for such an r.v. T, H_0 is strictly increasing and, hence, its d.f. is continuous. It is obvious that we can find discrete r.v.'s T with dense support and finite moment generating function in a neighborhood of zero. As a concrete example, set T = T_1 + T_2, where T_1 follows a Poisson distribution with mean 1, P(T_2 = r_n) = 2^{−n} (n = 1, 2, . . .), with {r_1, r_2, . . .} being an enumeration of the rationals of the interval (0, 1], and assume that T_1, T_2 are independent. Set also X = ϕ^{−1}(T). The following theorem shows that this particular continuous r.v. X is, indeed, characterized from its ERS.
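That this particular T has a finite moment generating function everywhere (in particular, in a neighborhood of zero) is immediate from independence, since T_1 is Poisson(1) and 0 < T_2 ≤ 1:

```latex
\operatorname{E} e^{aT}
  \;=\; \operatorname{E} e^{aT_1}\,\operatorname{E} e^{aT_2}
  \;\le\; e^{\,e^{a}-1}\cdot e^{\,|a|} \;<\; \infty,
  \qquad a \in \mathbb{R},
```

because E e^{aT_1} = exp(e^a − 1), while E e^{aT_2} = Σ_{n≥1} 2^{−n} e^{a r_n} ≤ e^{|a|} as r_n ∈ (0, 1]. In particular, T is characterized from its moments.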
With the aid of the mapping ϕ, a complete characterization result based on the expected record sequence becomes possible, as follows.
Theorem 2.3. A random variable X ∈ H * is characterized from its expected record sequence if and only if the random variable T = ϕ(X) ∈ T is characterized from its moments, where the mapping ϕ is given by Definition 2.3.
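For concreteness, recall a classical sufficient condition for moment-determinacy on (0, ∞) which can be combined with Theorem 2.3 (see, e.g., Stoyanov, 2013): the Carleman condition for the Stieltjes case,

```latex
\sum_{n=1}^{\infty}\bigl(\operatorname{E} T^{\,n}\bigr)^{-1/(2n)} \;=\; \infty
\quad\Longrightarrow\quad
T \ \text{is characterized from its moments.}
```

Thus any X ∈ H * for which T = ϕ(X) satisfies Carleman's condition is characterized from its expected record sequence.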
Suppose that, for a given (non-degenerate) r.v. X, E X^− < ∞ and E(X^+)^p < ∞ for some p > 1. According to Theorem 2.4 below, the transformation T = ϕ(X) of any such r.v. has a finite moment generating function in a neighborhood of zero; hence T is characterized from its moments, and we obtain the following result.
Corollary 2.2 (Kirmani and Beg, 1984). Every random variable X with finite absolute moment of order p > 1 is characterized from its expected record sequence.
However, we emphasize that the Kirmani-Beg characterization does not extend to H *:

Example 2.1. There exist different r.v.'s in H_0 with identical expected record sequences.
A concrete example leading to absolutely continuous r.v.'s can be constructed by means of the classical example due to Stieltjes, as follows. Let T be the lognormal r.v. with density f_T(t) = e^{−(log t)^2/2}/(t √(2π)), t > 0, and moments E T^n = e^{n^2/2}. Each density in the set f_λ(t) := (1 + λ sin(π log t)) f_T(t), −1 ≤ λ ≤ 1, admits the same moments as T; see Stoyanov (2013) or Stoyanov and Tolmatz (2005). Assume that T_λ has density f_λ, and consider the r.v. X_λ = ϕ^{−1}(T_λ), with inverse d.f. given by the corresponding formula of Corollary 2.1. Using an obvious notation, it is clear from Theorem 2.2(ii) and Corollary 2.1 that the sequence ρ_n^{(λ)} := E R_n(X_λ) satisfies (1.4) with T_λ in place of T. Thus, each X_λ, −1 ≤ λ ≤ 1, has the same expected record sequence, namely (2.4), where an empty sum should be treated as zero.
The difference of the corresponding inverse d.f.'s is non-zero (when λ_1 ≠ λ_2) on a set of positive measure, and it is easily checked that every X_λ admits a density.
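The fact that every f_λ has the same moments as f_T reduces to the following classical computation (substitute t = e^x, complete the square, and then shift x = u + n):

```latex
\int_0^\infty t^{\,n}\sin(\pi\log t)\,f_T(t)\,dt
 \;=\; \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{\,nx-x^2/2}\,\sin(\pi x)\,dx
 \;=\; \frac{e^{\,n^2/2}}{\sqrt{2\pi}}\,\cos(\pi n)\int_{-\infty}^{\infty} e^{-u^2/2}\,\sin(\pi u)\,du
 \;=\; 0,
```

since sin(π(u + n)) = cos(πn) sin(πu) + sin(πn) cos(πu), sin(πn) = 0 for integer n, and the last integrand is odd. Hence E T_λ^n = E T^n = e^{n^2/2} for every λ ∈ [−1, 1].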
In fact, the Kirmani-Beg characterization holds true because the corresponding system of functions L is complete in the appropriate space, as shown by Lin (1987), while (2.4) implies that L is not complete in the larger space H(0, 1).
Our final result is applicable to most practical situations regarding characterizations (and inversions) in terms of the expected record sequence.
It is well known that any r.v. T is uniquely determined from its moments if it admits a finite moment generating function in a neighborhood of zero. On the other hand, it is also known that we can find several r.v.'s T ∈ T that are characterized from their moment sequence although E e^{aT} = ∞ for all a > 0. A large family of such r.v.'s is the so-called Hardy class; see Stoyanov (2013) and Lin and Stoyanov (2016) for more details. Clearly, the corresponding r.v.'s ϕ^{−1}(T) are not treated by Kirmani and Beg's (1984) characterization, showing that the proposed method, based on the transformation ϕ, is quite efficient.

In (A.1), I denotes the indicator function. We may use (A.1) to calculate the d.f. F_n of R_n as follows. Recall that R_n = H(S_n), where S_n = E_1 + · · · + E_n and E_1, . . . , E_n are i.i.d. from the standard exponential, Exp(1). From the well-known relationship regarding waiting times for the standard (intensity one) Poisson process, {Y_t, t ≥ 0}, we have P(S_n ≤ t) = P(Y_t ≥ n). Therefore, with t = L(F(x)), we obtain (A.2) (cf. Nagaraja, 1978). In the above sum, the term L(F(x))^0 should be treated as 1 for all x; moreover, the product (1 − F(x)) L(F(x))^k should be treated as 0 whenever k ≥ 1 and F(x) = 1. Hence, (A.2) yields, e.g., F_1(x) = F(x). Since our problem concerns the expectations E R_n for all n, we have to define an appropriate space to work with, that is, to guarantee that these expectations are all finite. The natural space is given by Definition 2.1, since the next proposition holds true.
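In explicit form, the distribution function of R_n referred to as (A.2) reads, with L(u) := −log(1 − u) and under the conventions just stated (cf. Nagaraja, 1978):

```latex
F_n(x) \;=\; \Pr(R_n \le x)
 \;=\; 1 \;-\; \bigl(1 - F(x)\bigr)\sum_{k=0}^{n-1}\frac{L(F(x))^{\,k}}{k!},
 \qquad n \ge 1,
```

since e^{−L(F(x))} = 1 − F(x). For n = 1 this gives F_1(x) = F(x), and for n = 2, F_2(x) = F(x) − (1 − F(x)) L(F(x)).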
Proposition A.1. The following statements are equivalent: (i) X ∈ H, i.e., H ∈ H, where H(y) := F^{−1}(1 − e^{−y}), y > 0, and F is the d.f. of X.

These results are due to Nagaraja (1978) in the particular case where X has a density and/or is non-negative, but his proofs continue to hold in our case too.
Obviously, the results extend to a t-interval −ε_0 < t < ε_0 by analytic continuation.

D Construction of X
We shall provide a detailed proof of Theorem 2.2(ii), which also verifies one half of Proposition 2.1, showing that the mapping ϕ′ is well-defined, with domain T and range into H_0, as stated. We notice that the present appendix is self-contained; it does not require any further results from the present article. Suppose we are given an r.v. T ∈ T with d.f. F_T, i.e., F_T(0) = 0 and E T^n < ∞ for all n. Define H as in (D.1), so that H and H_0 differ by the constant c_T, where H_0 is as in (2.1) and c_T as in (2.2), and rewrite (2.1) as (D.2). From (D.2) we see that H(y) ≤ 0 for y ≤ 1 and H(y) ≥ 0 otherwise.
Lemma D.1.H is non-decreasing and left-continuous.
Proof. Left-continuity is obvious. Also, H is non-positive on (0, 1] and non-negative on [1, ∞). Choose now y_1, y_2 with 0 < y_1 < y_2 ≤ 1. Then H(y_1) ≤ H(y_2) follows, and a similar argument applies to the case 1 ≤ y_1 < y_2 < ∞.

The preceding identity holds for almost all (y, x) ∈ (0, ∞) × (0, 1). Interchanging the order of integration according to Tonelli's theorem, we get the double integral of e^{−y} (F_T(x) − F_T(y)) in dy dx. Obviously, I_2 is finite, and it remains to verify that I_1 < ∞. In view of (D.4), the last equation shows that I_1 is finite, because the inner integral is less than e^t/t. Thus I_1 < ∞, and note that the function T → (e^T − 1) I(T ≤ δ)/T is (non-negative and) bounded.
where we made use of the substitution t = a_k^{−1}(y). On the other hand, Lemma D.2(ii) shows that I_1 is finite, and we proceed to verify that I_2 is also finite. Using (D.2), we have the stated estimate for almost all (y, x) ∈ (m, ∞) × (1, ∞). It remains to show that I_3 < ∞. Using Tonelli's theorem, we see that J_1 is obviously finite, because the inner integral is less than k! and the function x → (x − 1)e^x/x^2 is bounded for x ∈ [1, m]. Applying Lemma D.3 to the inner integral in J_2, and observing that the inner integral is less than e^t/t, we are left with a bound involving only the moments E T^k and the factorials k!, and this is finite because T has been assumed to possess finite moments of any order.
From Lemmas D.1 and D.4 we conclude that H ∈ H, so that H_0 ∈ H (since these functions differ by a constant; see (D.1)). We now proceed to show that H_0 ∈ H_0.
where a k is given by (D.5).
Now we compute these three integrals: the first from (D.2), and the other two similarly. The above calculation shows the stated identity, with the remaining terms being double integrals in dx dy. The integrand in J_2 is non-negative, so we can change the order of integration. In order to justify that this is also permitted for J_3, we compute the corresponding absolute integral and observe that it is finite, since T has finite moments of any order. Thus, using the identity ∫_t^∞ y^k e^{−y} dy − k! e^{−t} = k! e^{−t} Σ_{j=1}^{k} t^j/j!, we obtain (after changing the order of integration once again) the companion expression. Subtracting the above equations, we deduce the desired result.

Table 1.

Recall that every X ∈ H satisfies E |R_n| < ∞ for all n; see Proposition A.1. It is worth pointing out that every X ∈ H admits the equivalent representation X = H(E), where the function H belongs to H and E is a standard exponential r.v. This says that a left-continuous, non-decreasing function H belongs to H if and only if E |H(S_m)| < ∞ for all m ≥ 1, where S_m follows the Erlang distribution with parameters m and 1, i.e., S_m is the sum of m i.i.d. standard exponential r.v.'s. The subspace H_0 consists of those H ∈ H for which E H(E) = 0. Note that the integral in (D.8) is finite; see Lemma D.4. Clearly, y^k > k! for y ∈ (β_k, ∞) and y^k < k! for y ∈ (0, β_k), where β_k = (k!)^{1/k}. We split the integral in (D.8) as follows: