The m(n) out of k(n) bootstrap for partial sums of St. Petersburg type games

This paper illustrates that the bootstrap of a partial sum of i.i.d. copies of a random variable X_1 has to be handled with care in general. It turns out that in various cases a whole spectrum of different limit laws of the m(n) out of k(n) bootstrap can be obtained for different choices of m(n)/k(n) → 0 whenever X_1 does not lie in the domain of attraction of a stable law. As a concrete example we study bootstrap limit laws for the cumulated gain sequence of repeated St. Petersburg games. It is shown that here a continuum of different semi-stable bootstrap limit laws occurs.


1 Introduction
Consider an arbitrary i.i.d. sequence (X_i)_{i∈N} of real valued random variables (r.v.).
At finite sample size k(n) the following question is of high interest. What can be said about the distribution of the partial sum S_{k(n)} = X_1 + ... + X_{k(n)} (or about S_{k(n)}/k(n) − µ if the mean µ exists) given the realizations of X_1, ..., X_{k(n)}?
In many cases this question can be attacked by Efron's bootstrap, see Efron (1979), or existing bootstrap modifications which are widely used tools in modern statistics.
However, as is well known, Efron's bootstrap may fail for heavy tailed X_1. This paper points out that bootstrap modifications like the m(n) out of k(n) bootstrap also cannot solve the problem without further assumptions on X_1 or on the bootstrap sample size m(n). In conclusion we will see that the bootstrap has to be applied with care in general, because it may fail even for partial sums of i.i.d. random variables.
To judge the quality of its approximation, the bootstrap should at least be able to reproduce a limit distribution, when it exists, of a suitably normalized S_{k(n)}. This question is discussed throughout. To be concrete, the r.v. X_1 could be the famous St. Petersburg game, which describes the gain of 2^k dollars in a fair coin tossing experiment when "head" falls first at time k, see e.g. Feller (1968) and Section 4 for details. This popular example will be used to explain how the bootstrap works. Note that in connection with our general question the distribution of X_1 will of course be unknown. Now let S_n denote the total gain of n repeated St. Petersburg games. It is worth noting that these St. Petersburg sums do not possess a limit distribution in the usual sense, i.e. there exists no random variable S such that a suitably normalized S_n converges in distribution to S. However, along subsequences such limit distributions exist. In particular, Martin-Löf (1985) proved the convergence in distribution

    S_{2^n}/2^n − n  d→  W,

where W = W_1 is a member of a semi-stable Lévy process which is not stable since its Lévy measure is discrete. Here and throughout the paper " d→ " and " =d " denote convergence and equality in distribution, respectively. The asymptotics of the partial sums S_{k(n)} were clarified in Csörgő and Dodunekova (1991) and Csörgő (2002, 2007, 2010).
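The game and the Martin-Löf normalization above can be simulated directly; a minimal sketch (function names are ours, not taken from the paper):

```python
import random

def st_petersburg_gain(rng):
    """One St. Petersburg game: the player wins 2**k dollars when
    "head" falls first at toss k, so P(gain = 2**k) = 2**(-k)."""
    k = 1
    while rng.random() < 0.5:  # coin shows "tail", keep tossing
        k += 1
    return 2 ** k

def martin_loef_normalized(n, rng):
    """S_{2**n} / 2**n - n for the total gain S_{2**n} of 2**n games;
    by Martin-Loef (1985) this converges in distribution to W."""
    total = sum(st_petersburg_gain(rng) for _ in range(2 ** n))
    return total / 2 ** n - n

rng = random.Random(0)
gains = [st_petersburg_gain(rng) for _ in range(1000)]
```

Every realized gain is a power of 2, and the heavy tail (E X_1 = ∞) shows up in the instability of empirical means of `gains` as the sample grows.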
There (see e.g. Theorem 1.1 in combination with the remark on page 241 in Csörgő, 2010) it is shown that the cardinality of the set of all non-trivial distributional cluster points of the normalized partial sums is the continuum. In particular, the limit laws W_γ, γ ∈ (1/2, 1], can be described via their characteristic functions, in which log_2(·) denotes the logarithm to base 2 and i the imaginary unit. In Section 4 we will see that all limit laws W_γ can be reached by the m(n) bootstrap of the partial sum of k(n) = 2^n consecutive St. Petersburg games. In Section 3 a whole spectrum of conditional bootstrap limit laws is derived in the general set-up of i.i.d. variables. Let

    X*_1, ..., X*_{m(n)}    (1.4)

be an m(n) out of k(n) bootstrap sample drawn with replacement from the entries of the vector X_{k(n)} := (X_1, ..., X_{k(n)}). Below we will focus on the bootstrap of partial sums, where the central question can be discussed in detail. In Efron's case with m(n) = k(n), Giné and Zinn (1989) showed that the bootstrap of the sample mean is consistent if and only if X_1 is in the domain of attraction of a normal law, see also Csörgő and Mason (1989) and Mammen (1992). For non-normal but stable limit laws the low intensity bootstrap can be consistent, but the size m(n) may depend on the index α of stability. In any case, Janssen and Pauls (2003) showed under mild conditions that all conditional limit laws for the m(n) out of k(n) bootstrap are infinitely divisible.
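Drawing the resample (1.4) is elementary; a sketch with illustrative names:

```python
import random

def m_out_of_k_bootstrap(data, m, rng):
    """Draw an m out of k bootstrap sample with replacement from the
    observed vector data = (X_1, ..., X_k); m may be much smaller than k."""
    return [rng.choice(data) for _ in range(m)]

rng = random.Random(0)
data = [rng.gauss(0.0, 1.0) for _ in range(1024)]  # placeholder observations
resample = m_out_of_k_bootstrap(data, 32, rng)     # low intensity: m << k
```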

2 Notation and unconditional results
The next definition fixes all limit laws of our partial sums via a suitable normalization.

Definition 2.1. (a) The domain of partial limit laws PLL(X_1) of X_1 is the set of all non-constant random variables Y such that there exist a sequence k(n) → ∞ and normalizing sequences β_n > 0 and α_n with

    S_{k(n)}/β_n − α_n  d→  Y.    (2.1)

(b) The r.v. X_1 is called universal (of Doeblin's class) if PLL(X_1) consists of all non-constant infinitely divisible random variables.
The introduced term P LL(X 1 ) shall not be confused with the common notion of partial domain of attraction of a random variable.
Remark 2.2. It is well known that all members Y of PLL(X_1) are infinitely divisible.
The existence of universal random variables X 1 has been shown by Doeblin, see Doeblin (1940) and Feller (1971).
In the following, a fixed sequence k(n) → ∞ and the portion X_{k(n)} = (X_1, ..., X_{k(n)}) of our i.i.d. sequence are always regarded. In case that (2.1) holds, one would like to bootstrap the partial sum S_{k(n)} = Σ_{i=1}^{k(n)} X_i. A suitably normalized bootstrap should then reproduce the limit Y given in (2.1). However, in case that the limit Y is not normal, Efron's bootstrap cannot reproduce this law, and the same holds true for the m(n) bootstrap with moderate resampling size. In contrast, the low intensity bootstrap with m(n)/k(n) → 0 can reach the whole domain of partial limit laws PLL(X_1). Note that for all Z ∈ PLL(X_1) there exist some non-decreasing sequences m(n) with this property, which can in general not be chosen strictly increasing. Next, we show the equivalence between (2.2) and its unconditional bootstrap counterpart for the same sequences m(n), β_n and α_n.
Note that in most applications the limit (2.1) is typically assumed to exist. However, the m(n) bootstrap under study only uses the vector X_{k(n)} from which the resample is drawn. Below we therefore state the results as generally as possible, and (2.1) is only supposed to hold if explicitly specified.
Proposition 2.3. (a) Consider the bootstrap sample (1.4) and suppose that (2.2) holds. Then we also have unconditional convergence in distribution with the same normalizing sequences β_n and α_n as in (2.2).

For the proof note that, since X_1, ..., X_{m(n)} are i.i.d., well known properties of characteristic functions yield a product representation with factors Y_j = ϕ(jt/β_n)^{µ_n(j)} for j ≥ 2 and Y_j = 1 for µ_n(j) = 0. By assumption, the convergence Φ_n(t) → Φ(t) holds for all t ∈ R as n → ∞. Fix t ∈ R and observe that the complex random variables ϕ(t/β_n)^{µ_n(1)} and e^{−iα_n t} ∏_{j=2}^{m(n)} Y_j are tight since their absolute values are bounded. Hence, by Prohorov's theorem, any subsequence contains a further subsequence, again denoted by n, such that both converge jointly in distribution to some random variables A(t) and B(t). But then, by (2.4), ϕ(t/β_n)^{m(n)} d→ A(t) and, by dominated convergence, it follows that the limit of E(∏_{j=2}^{m(n)} Y_j) is a characteristic function. We can therefore apply Theorem 4.9, p. 26, in Araujo and Giné (1980) to conclude that, for some sequence α̃_n, the sequence S*_{m(n)}/β_n − α̃_n is tight.

Part (c) shows that the m(n) bootstrap can reproduce the limit law along subsequences of k(n). However, if m(n) is not a subsequence of k(n), this is not true in general. In particular, part (a) shows that if X_1 is of Doeblin's class, the general m(n) bootstrap of the partial sum can reproduce all infinitely divisible laws in the limit. Thus the statistician has to handle the bootstrap with care in general.

Recall that the bootstrap is a two step procedure: first the data are observed and then the bootstrap sample is drawn. Hence there are good reasons to study the conditional bootstrap distribution of the underlying statistics given the data. For instance, the quality of bootstrap tests or confidence intervals with estimated bootstrap quantiles relies heavily on the asymptotic correctness of the conditional bootstrap distribution, confer for instance Janssen and Pauls (2003). In the next section a conditional version of Proposition 2.3 is obtained.
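The two step procedure can be sketched as a Monte Carlo approximation of the conditional law given the data (all names are ours):

```python
import random

def conditional_bootstrap_law(data, m, beta, alpha, reps, rng):
    """Given the observed data (step 1), approximate the conditional
    distribution of sum(X*_1, ..., X*_m)/beta - alpha (step 2)
    by repeated resampling with replacement."""
    values = []
    for _ in range(reps):
        s = sum(rng.choice(data) for _ in range(m))
        values.append(s / beta - alpha)
    return sorted(values)  # empirical bootstrap quantiles can be read off

rng = random.Random(0)
data = [float(x) for x in range(100)]          # placeholder observations
law = conditional_bootstrap_law(data, m=10, beta=10.0, alpha=0.0,
                                reps=200, rng=rng)
```

The sorted values are exactly what estimated bootstrap quantiles for tests and confidence intervals are computed from.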

3 Conditional partial limit laws for the low intensity bootstrap
The following notation is used throughout; the main theorem below shows that the conditional distribution of the suitably normalized low intensity bootstrap sum can reach every limit law L(Z) with Z ∈ PLL(X_1). Note that for the scheme (2.2) we may assume without loss of generality the representation of the infinitely divisible limit distribution as L(Z) = N(0, σ²) ∗ c_τ Pois(η), where c_τ Pois(η), τ > 0, denotes a suitably centered compound Poisson law associated with the Lévy measure η; here τ is a continuity point of η. Moreover, we introduce the centering variables in (3.1).

Theorem 3.1. Consider the bootstrap sample (1.4). Then the following conditional bootstrap limit laws (a) and (b) hold, with the same normalizing coefficients β_n and α_n as in (2.2), in probability as n → ∞.
(b) The centering variables (3.1) can be substituted by α_n, i.e. we have the corresponding convergence in probability. To prove (b) we first show that m(n)X̄_{n,τ} can be substituted by its mean. Recall that a necessary condition for the convergence (2.2) is the asymptotic negligibility of the summands, see Gnedenko and Kolmogorov (1968, p. 116). Hence Var(m(n)X̄_{n,τ}) → 0, so that m(n)X̄_{n,τ} − E(m(n)X̄_{n,τ}) → 0 in probability as n → ∞. Together with part (a) this implies the assertion as n → ∞, according to a twofold application of the subsequence principle for convergence in probability, see e.g. Theorem 9.2.1 in Dudley (2002). Note that this is first achieved along almost surely convergent subsequences of (3.4).
To prove (c), note that the conditional convergence (3.3) implies the unconditional convergence (2.3), since conditional convergence implies unconditional convergence. Hence the result follows from the converse part of Proposition 2.3. Now suppose that (3.2) holds. In this case the above computations prove that the centering part of (3.2) can be substituted by its mean, and the result follows as for (3.3).
Remark 3.2. It is remarkable that in the conditional set-up again all variables Z ∈ PLL(X_1) can be reached by suitable m(n) out of X_{k(n)} bootstrap schemes. In particular, all infinitely divisible Z will be bootstrap limit laws if X_1 is of Doeblin's class. This observation supplements the results of Janssen and Pauls (2003), who showed that conditional bootstrap limit laws are always infinitely divisible. (b) Now consider the special case of the Martin-Löf limit law L(W_1). In this case the m(n) out of k(n) = 2^n bootstrap can reach every distribution in {L(W_γ) : γ ∈ (1/2, 1]} ⊂ PLL(X_1) in the limit for adequate choices of subsequences m(n), where X_1 is as in (1.1).
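The index γ ∈ (1/2, 1] attached to a sample size m can be computed explicitly; a sketch, under the assumption (consistent with the subsequence results cited above) that γ(m) is the position of m below the next power of two:

```python
import math

def gamma_position(m):
    """gamma(m) = m / 2**ceil(log2(m)), the position of m relative to the
    next power of two; it always lies in the interval (1/2, 1]."""
    return m / 2 ** math.ceil(math.log2(m))

def subsequence_for(gamma, n):
    """A hypothetical choice of bootstrap sizes m(n) = round(gamma * 2**n),
    for which gamma_position(m(n)) -> gamma as n grows."""
    return round(gamma * 2 ** n)
```

For example, `gamma_position(2**n) == 1.0` for every n, matching the Martin-Löf case k(n) = 2^n, while `subsequence_for(0.75, n)` targets γ = 3/4.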
Example 4.3 (Generalized St. Petersburg games). Csörgő (2002) has analyzed a broader class of games, the so-called generalized St. Petersburg(α, p) games, which allow for biased coins and different payoffs, see also Csörgő (2007), Gut (2010) and Pap (2011) for other occurrences of this generalization. Similarly to the classical case, semi-stable infinitely divisible laws show up as cluster points of the normalized partial sums. As above, the m(n) bootstrap may reproduce these limits.
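The classical simulation extends to this example; a sketch, assuming the common parametrization P(X = q^(−k/α)) = p·q^(k−1) with q = 1 − p (this parametrization is our assumption, not quoted from the text; α = 1, p = 1/2 recovers the classical game):

```python
import random

def generalized_st_petersburg(alpha, p, rng):
    """One generalized St. Petersburg(alpha, p) game: toss a coin with
    head-probability p until the first "head" at time k, then pay
    q**(-k/alpha) with q = 1 - p (assumed parametrization)."""
    q = 1.0 - p
    k = 1
    while rng.random() < q:  # "tail" with probability q
        k += 1
    return q ** (-k / alpha)

rng = random.Random(0)
classical = [generalized_st_petersburg(1.0, 0.5, rng) for _ in range(500)]
```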
Finally, we would like to discuss the bootstrap for general semi-stable limit distributions.
Let (X_t)_{t≥0} be an r-semi-stable Lévy process, 0 < r < 1, i.e. a Lévy process with the property that there exists some 0 < c ≠ 1 such that for each n ∈ N there exists d_n ∈ R satisfying the defining semi-stability relation.

ECP 18 (2013), paper 91.

where N(m(n)) is the number of different variables in the bootstrap sample. Moreover, by Lemma 2.1 in del Barrio et al. (2009a) we have N(m(n))/m(n) →p 1 as n → ∞, where here and throughout " →p " stands for convergence in probability. On the other hand we can write m(n) as m(n) = Σ_{j=1}^{m(n)} j µ_n(j), from which it follows that

    m(n) ≥ µ_n(1) + 2 Σ_{j=2}^{m(n)} µ_n(j) = N(m(n)) + Σ_{j=2}^{m(n)} µ_n(j)

holds. Combining this with the above implies the convergence Σ_{j=2}^{m(n)} µ_n(j)/m(n) →p 0. Let us now write ϕ for the characteristic function of the X_j's and Φ, Φ_n and ϕ_n for the characteristic functions of Z and of the correspondingly normalized sums, respectively.
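The multiplicities µ_n(j), the count N(m(n)) of distinct values, and the displayed identity and inequality can be checked numerically (a sketch; names are ours):

```python
import random
from collections import Counter

def multiplicities(indices):
    """Return mu with mu[j] = number of data points drawn exactly j times,
    and N = number of different variables in the bootstrap sample."""
    counts = Counter(indices)
    mu = Counter(counts.values())
    return mu, len(counts)

rng = random.Random(0)
k, m = 1024, 64
indices = [rng.randrange(k) for _ in range(m)]  # which X_i each draw picks
mu, N = multiplicities(indices)
assert sum(j * c for j, c in mu.items()) == m              # m = sum_j j*mu(j)
assert m >= N + sum(c for j, c in mu.items() if j >= 2)    # the inequality
```

The inequality holds because every point drawn at least twice contributes at least 2 to m but only 1 to N.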

S*_{m(n)}/β_n − α̃_n is tight. Passing again to convergent subsequences, we could use part (a) to see that we can take α̃_n = α_n and conclude that the convergence in distribution S*_{m(n)}/β_n − α_n d→ Z holds by the convergence of types theorem along this subsequence. Noting that the limit does not depend on the particular subsequence, the proof is completed.

Remark 2.4. Proposition 2.3 has positive and negative aspects; see the discussion of parts (a) and (c) above.

Proposition 4.1.
Suppose that (2.1) holds and consider an arbitrary m(n) out of X_{k(n)} bootstrap scheme with m(n)/k(n) → 0, such that Σ_{i=1}^{m(n)} X*_i has a non-degenerate unconditional limit law ξ (depending on the choice of m(n)) after some suitable normalization. Suppose that L(Y) in (2.1) can be reproduced up to a shift and scale transformation by L(ξ) for all possible m(n) out of k(n) bootstrap limit laws, i.e. we have equality in distribution Y =d ξ/b − a for some a = a(ξ), b = b(ξ).

(4.1)
Then Y is already a stable r.v.

Proof of Proposition 4.1. Recall that Y = Y_1 is stable whenever Y =d Y_t/b_t − a_t holds for suitable coefficients a_t, b_t, so that the result follows from Theorem 3.1. Observe that in the stable case the conditional correctness of the m(n) out of k(n) bootstrap is only a matter of proper normalization. Now we turn to the famous St. Petersburg game. Consider (1.3) for γ ∈ (1/2, 1], where we may assume without restriction that r(n)/k(n) → 0 holds. We now put m(n) = r(n).
As a positive result it is shown that the bootstrap can reproduce the limit distribution up to a shift and scale adjustment provided that m(n) is a subsequence of k(n).
L(X) stands for the distribution of the random variable X and L(Y|X) for the conditional distribution of the random variable Y given X. Let d be a distance on the set of distributions on R that metrizes weak convergence, e.g. the Prohorov distance, see Section 11.3 in Dudley (2002) for more details. Given the data, our main theorem shows that also the conditional distribution of the low resampling intensity bootstrap can reach every limit law L(Z) with Z ∈ PLL(X_1).

4 Bootstrap limit laws for the St. Petersburg game
From del Barrio et al. (2009a) we see that (2.2) may hold with L(Z) = L(Y_t) for suitable slowly increasing sample sizes m(n) → ∞ with m(n)/k(n) → 0. The following result is already hidden in del Barrio et al. (2009a) and explains the popularity of the m(n) out of k(n) bootstrap for stable limit laws.
Note that the limit r.v. Y in (2.1) is a member Y = Y_1 of a uniquely determined Lévy process (Y_t)_{t≥0}. Obviously, every Y_t is a member of the set of partial limit laws PLL(X_1).