On the asymptotic of the maximal weighted increment of a random walk with regularly varying jumps: the boundary case

Let (X_i)_{i≥1} be i.i.d. random variables with E X_1 = 0, regularly varying with exponent a > 2, i.e. t^a P(|X_1| > t) ∼ L(t) with L slowly varying as t → ∞. We give the limit distribution of T_n(γ) = max_{0≤j<k≤n} |X_{j+1} + ··· + X_k|/(k − j)^γ in the threshold case γ_a := 1/2 − 1/a, which separates the Brownian phase, corresponding to 0 ≤ γ < γ_a, where the limit of T_n(γ) is σT(γ), with σ² = E X_1² and T(γ) the γ-Hölder norm of a standard Brownian motion, from the Fréchet phase, corresponding to γ_a < γ < 1, where the limit of T_n(γ) is Y_a with Fréchet distribution P(Y_a ≤ x) = exp(−x^{−a}), x > 0. We prove that c_n^{−1}(T_n(γ_a) − μ_n) converges in distribution to some random variable Z if and only if L has a limit τ^a ∈ [0, ∞] at infinity. In this case, there are A > 0, B ∈ R such that Z = A V_{a,σ,τ} + B in distribution, where for 0 < τ < ∞, V_{a,σ,τ} := max(σT(γ_a), τY_a) with T(γ_a) and Y_a independent, and V_{a,σ,0} := σT(γ_a), V_{a,σ,∞} := Y_a. When τ < ∞, a possible choice for the normalization is c_n = n^{1/a} and μ_n = 0, with Z = V_{a,σ,τ}. We also build an example where L has no limit at infinity and (T_n(γ_a))_{n≥1} has, for each τ ∈ [0, ∞], a subsequence converging after normalization to V_{a,σ,τ}.

Asymptotic properties of various extremal statistics based on the random walk (S_k, k ≥ 0) and the random field (S_k − S_j, 0 ≤ j < k ≤ n, n ≥ 1) of its increments are important from both theoretical and practical points of view and have been widely discussed in the literature. We refer to Darling and Erdős [6], Einmahl [8], Bertoin [3], Révész [21], Csörgő and Révész [5], Shao [22], Kabluchko [13], Račkauskas and Suquet [17], and the references therein for comprehensive information on the subject.
The main object of this paper is the maximal weighted increment of the random walk (S_n, n ≥ 0), defined by T_n(γ) := max_{0≤j<k≤n} (k − j)^{−γ} |S_k − S_j|, where S_0 = 0 and S_k = X_1 + ··· + X_k. Throughout, we denote by X a generic random variable which is identically distributed with each X_k and we assume that X is regularly varying with index a > 0, denoted X ∈ RV_a, in the sense that the tail balance condition P(X > x) ∼ p x^{−a} L(x) and P(X ≤ −x) ∼ q x^{−a} L(x), as x → ∞, (1.1) is satisfied, where L is a slowly varying function and p, q ∈ (0, 1), p + q = 1. We refer to [4] for an encyclopedic treatment of regular and slow variation. For the reader's convenience, we have gathered in Appendix A.4 the basics on slow variation used in this paper. In what follows, we denote by RV_a(τ), for τ ∈ [0, ∞], the class of X ∈ RV_a whose slowly varying factor satisfies L(x) → τ^a as x → ∞. Condition (1.1) imposes a priori two requirements on the choice of L. First, x^{−a} L(x) has to be equivalent to a nonincreasing function with limit 0 at infinity, which is automatically satisfied, see Cor. A.12. Next, x^{−a} L(x) has to be equivalent to a left continuous function, which excludes no slowly varying L, since L is always equivalent to a C^∞ function [4, Th. 1.3.3]. As L does not necessarily have a limit at infinity, see the example built in the proof of Th. 2.5, it is clear that ⋃_{0≤τ≤∞} RV_a(τ) ⊊ RV_a.
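To fix ideas, the tail balance condition (1.1) can be sampled from directly in the model case L ≡ 1 (two-sided Pareto tails). The sketch below is our own illustration with hypothetical helper names; the centering E X = 0 assumed in the paper is ignored here, since only the tails are examined:

```python
import random

def sample_rv(a, p, rng):
    """One draw from a two-sided Pareto-type law with
    P(X > x) = p * x**(-a) and P(X <= -x) = (1 - p) * x**(-a), x >= 1,
    i.e. the tail balance condition (1.1) with L == 1."""
    u = 1.0 - rng.random()                 # u in (0, 1]
    sign = 1.0 if rng.random() < p else -1.0
    return sign * u ** (-1.0 / a)          # inverse-transform Pareto(a) magnitude

def empirical_tail(samples, x):
    """Empirical estimate of P(X > x)."""
    return sum(1 for s in samples if s > x) / len(samples)

rng = random.Random(0)
a, p = 2.5, 0.7
xs = [sample_rv(a, p, rng) for _ in range(200_000)]
print(empirical_tail(xs, 2.0), p * 2.0 ** (-a))   # the two should be close
```

The empirical right tail matches p·x^{−a}, and by symmetry of the construction the left tail matches q·x^{−a} with q = 1 − p.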
Finally, we complete the picture by proving in Th. 2.6 that if, for some increasing positive sequence (c_n)_{n≥1} and some sequence of reals (μ_n)_{n≥1}, c_n^{−1}(T_n(γ_a) − μ_n) converges in distribution to some random variable Z, then L has a limit τ^a ∈ [0, ∞] at infinity, hence X ∈ RV_a(τ), and Z = A V_{a,σ,τ} + B in distribution for some A > 0, B ∈ R. Section 2 contains the statements of our results. Theorem 2.1, Corollary 2.2 and Theorem 2.3, dealing with versions of T_n(γ_a) truncated with respect to the length of the increments S_{k+ℓ} − S_k, are preparatory for Theorem 2.4.
The proofs are presented in Section 3. Auxiliary material and results are deferred to the Appendix in the hope of keeping the proofs to a tolerable size.

Statement of results
Throughout the paper we will need to split the range of lengths of the increments S_{k+ℓ} − S_k into two or three consecutive intervals. This leads us to introduce the following generic notation for the induced blocks of weighted increments in T_n(γ). For any real numbers u, v such that 0 ≤ u < v ≤ n, we set T_n^{u,v}(γ) := max{(k − j)^{−γ} |S_k − S_j| : 0 ≤ j < k ≤ n, u ≤ k − j ≤ v}. As γ = γ_a almost everywhere in the sequel, we abbreviate T_n^{u,v}(γ_a) as T_n^{u,v}.
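For concreteness, T_n(γ) and the truncated blocks T_n^{u,v}(γ) can be computed directly from the prefix sums of the sample; the sketch below (our own illustration with hypothetical function names, direct O(n²) evaluation with no attempt at efficiency) follows the definition above:

```python
def weighted_increment_max(x, gamma, u=0, v=None):
    """T_n^{u,v}(gamma): maximum of |S_k - S_j| / (k - j)**gamma over
    0 <= j < k <= n with u <= k - j <= v, where S is the partial-sum
    sequence of the sample x (S_0 = 0).  Direct O(n^2) evaluation."""
    n = len(x)
    if v is None:
        v = n                      # no truncation: the full T_n(gamma)
    s = [0.0]                      # prefix sums: s[k] = x[0] + ... + x[k-1]
    for xi in x:
        s.append(s[-1] + xi)
    best = 0.0
    for j in range(n):
        for k in range(j + 1, n + 1):
            if u <= k - j <= v:
                best = max(best, abs(s[k] - s[j]) / (k - j) ** gamma)
    return best

x = [1.0, -2.0, 3.0, -1.0]
print(weighted_increment_max(x, 0.0))   # gamma = 0: the largest |S_k - S_j|
```

With γ = 0 this is the largest absolute increment; the weight (k − j)^{−γ} penalizes long increments, which is what creates the Brownian/Fréchet dichotomy at γ_a.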
i) There is an increasing sequence of integers (n_i)_{i≥1} such that
ii) For every τ ∈ (0, ∞), there is an increasing sequence of integers (n_i)_{i≥1} such that
iii) There is an increasing sequence of integers (n_i)_{i≥1} such that n_i^{−1/a} T_{n_i}(γ_a) is not stochastically bounded and, denoting by a_{n_i} the 1 − 1/n_i quantile of |X|, (2.14)
Theorem 2.6. Assume that X ∈ RV_a with a > 2 and E X = 0, and that there exist an increasing positive sequence (c_n)_{n≥1} and a sequence of reals (μ_n)_{n≥1} such that
In what follows, we denote by k_n the number of elements of I_n. It is easily seen that 2k_n = (n − [d_n])(n − [d_n] + 1), so k_n < n²/2. Let us choose and fix some enumeration of I_n. Then we introduce, for j = 1, …, n, the random vector X_{n,j} = n^{−1/2+γ}(X_j δ_j(ℓ, k), (ℓ, k) ∈ I_n), viewed as a random vector in R^{k_n}. This leads to the representation
Following the Lindeberg method, we substitute step by step each X_{n,j} by Y_{n,j}, where Y_{n,j} = n^{−1/2+γ}(Y_j δ_j(ℓ, k), (ℓ, k) ∈ I_n) ∈ R^{k_n}, in order to compare the distribution of T_n^{d_n,n} with that of
To this aim we consider, for each r > 0 and ε > 0, a function f_{r,ε} : R^{k_n} → R which enjoys the following two properties: EJP 26 (2021), paper 122.
(ii) the function f_{r,ε} is three times differentiable and for each m = 1, 2, 3 there exists an absolute constant c_m > 0 such that, recalling that k_n + 1 < n²,
where f^{(m)}_{r,ε} denotes the m-th derivative of the function f_{r,ε} and f^{(m)}_{r,ε}(x)(h)^m the corresponding differential. Such functions are constructed in Bentkus [2]. By using (i), we have
Now, we introduce the hybrid sums Z_{n,j} with a hole at index j,
Z_{n,j} := Σ_{1≤i<j} X_{n,i} + Σ_{j<i≤n} Y_{n,i}, j = 0, …, n, n + 1,
with the convention Σ_{i∈∅} := 0. In particular Σ_{i=1}^{n} X_{n,i} = Z_{n,n+1} and Σ_{i=1}^{n} Y_{n,i} = Z_{n,0}. Defining X_{n,0} := 0 and Y_{n,n+1} := 0, we notice also that Z_{n,j} + Y_{n,j} = Z_{n,j−1} + X_{n,j−1}, j = 1, …, n + 1.
With these notations, we can describe the progressive substitution of the X_{n,j}'s by the Y_{n,j}'s as
valid for any 2 < β ≤ 3. Recalling that E|X_j|^b < ∞ for some b > 2, we choose from now on β = min(b, 3). For each j = 1, …, n, the random vectors Z_{n,j} and X_{n,j} of R^{k_n} are independent, and the same holds for Z_{n,j} and Y_{n,j}. By Lemma A.2 in Appendix A.1, this implies E f′_{r,ε}(Z_{n,j})(X_{n,j}) = (E f′_{r,ε}(Z_{n,j}))(E X_{n,j}), and similarly with Y_{n,j} instead of X_{n,j}. As X_{n,j} and Y_{n,j} have null expectation, this gives E f′_{r,ε}(Z_{n,j})(X_{n,j}) = E f′_{r,ε}(Z_{n,j})(Y_{n,j}) = 0. As moreover X_{n,j} and Y_{n,j} have the same covariance matrix, Lemma A.3 also gives E f″_{r,ε}(Z_{n,j})(X_{n,j}, X_{n,j}) = E f″_{r,ε}(Z_{n,j})(Y_{n,j}, Y_{n,j}). Now, applying Taylor's formula (3.4) to each term in (3.3) and accounting for (3.5) gives
Noticing that X_{n,j} is the product of the (scalar) real random variable X_j by the deterministic vector n^{−1/2+γ}(δ_j(ℓ, k), (ℓ, k) ∈ I_n) of R^{k_n}, it is clear that
Therefore, putting C_β :=
By condition (2.6), Δ(n, ε) → 0 as n → ∞ and we find from (3.1), for each ε > 0,
From the weak invariance principle in Hölder spaces, see [16] and Appendix A.2 below, for any sequence d_n such that d_n/n → 0 as n → ∞. Since the distribution function of T(γ) is continuous, the lim sup in the right-hand side of (3.9) is a true limit, equal to P(T(γ) ≤ r + ε), and we obtain lim sup_{n→∞} P(n^{−1/2+γ} T_n^{d_n,n} ≤ r) ≤ P(T(γ) ≤ r + ε). To find a lower bound for lim inf_{n→∞} P(n^{−1/2+γ} T_n^{d_n,n} ≤ r), we consider ε > 0 such that 0 < ε < r and the function f_{r−ε,ε}, which gives
Acting as above, we estimate
Since the distribution function of T(γ) is continuous, see Appendix A.3, the proof is completed by letting ε → 0 in (3.12).
Proof of Corollary 2.2. If the tail function of |X| is regularly varying with index −a, a > 2, then E|X|^b < ∞ for any 0 ≤ b < a. As a > 2, we can choose 2 < b < a and apply Th. 2.1 with β = min(3, b) = 2 + δ, which gives the result.
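The moment fact used above can be checked on the model case where |X| is Pareto(a) on [1, ∞): then E|X|^b = a/(a − b) for b < a, and the integral diverges for b ≥ a. The sketch below (our own illustration; plain trapezoidal rule after the substitution x = e^t) recovers the closed form:

```python
import math

def pareto_abs_moment(a, b, t_max=30.0, steps=30_000):
    """E|X|^b for |X| Pareto(a) on [1, inf), P(|X| > x) = x**(-a):
    with x = exp(t) the integral becomes int_0^inf a*exp((b-a)*t) dt,
    finite iff b < a, with closed form a / (a - b).  Trapezoidal rule
    on [0, t_max]; the tail beyond t_max is exponentially small."""
    h = t_max / steps
    f = lambda t: a * math.exp((b - a) * t)
    total = 0.5 * (f(0.0) + f(t_max)) + sum(f(i * h) for i in range(1, steps))
    return total * h

a, b = 3.0, 2.2                  # 2 < b < a, as in the proof above
print(pareto_abs_moment(a, b), a / (a - b))   # both close to 3.75
```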

On L-subsequences
Before presenting the proofs of Theorems 2.3 and 2.4, it seems convenient to make some remarks on the use of subsequences in this paper. In the first part of our contribution, i.e. until Th. 2.4, we obtain the limiting behavior of T_n(γ_a) for X ∈ ⋃_{0≤τ≤∞} RV_a(τ). Next we investigate the complementary case where L has no limit at all at infinity. Our main tool is then the exploitation of L-subsequence versions of all the convergence theorems leading to Th. 2.4, see for instance the subsequence version of the Hölderian invariance principle, Th. A.4 in A.2. By L-subsequence we mean a subsequence indexed by an infinite subset I of N whose construction depends on the asymptotic behavior of L. Our main example is I such that
The convergence of (n^{−1/a} T_n(γ_a))_{n∈I} cannot be inherited from the whole sequence, since (3.13) is in general weaker than X ∈ RV_a(τ). This announces the tedious task of a careful rereading of the proofs of Th. 2.1 to Th. 2.4, and also of the convergence results for the special cases τ = 0, τ = ∞, to check whether an adaptation to some L-subsequence is possible. In order to minimize this burden, we will write the forthcoming proofs of Theorems 2.3 and 2.4 for a generic subsequence indexed by an infinite subset I of N, replacing the hypothesis L(x) → τ^a by (3.13). The proof of the theorem is then just the special case I = N, where (3.13) is automatically satisfied when X ∈ RV_a(τ). The reader interested in the proof of Th. 2.4 only can ignore the mention n ∈ I everywhere in the proof. To keep the typesetting light, we adopt the notation →_{n→∞, n∈I}, which avoids double indexing by n_i.
Before proceeding, let us remark that Th. 2.1 and Cor. 2.2 remain valid for any L-subsequence, because they are established under the hypothesis E|X|^b < ∞ for some 2 < b < a, which does not involve L at all.
The following auxiliary results are used in the proof.
This lemma is proved in [15], see Lemma 3.3 therein. The next one extends and completes Lemma 2.4 in [19]. In view of its role in the present work, we provide a detailed proof below.

Lemma 3.2.
Assume that X satisfies for some a > 1 and some slowly varying L, Denote by a n the 1 − 1/n quantile defined by (1.4) and by (c n ) n≥1 a nondecreasing sequence of positive reals such that for n large enough, uniformly in y ∈ [1, ∞).
ii) For any s > a,
iii) The inequalities (3.16) and (3.17) remain valid if we replace c n by b n ∼ a n . Moreover if (c n ) n≥1 satisfies (3.15), then c n ∼ a n as n → ∞.
iv) a_n = n^{1/a} l(n), where l is a slowly varying function.
Proof. By (3.14), L(
To prove i), a Fubini argument gives, with the notations G_s and G*_s defined by (A.19),
For y ≥ 1, c_n y ≥ c_n → ∞, so the above convergence is obviously uniform in y ∈ [1, ∞). Hence, when n tends to infinity,
The proof of ii) is completely similar and will be omitted.
To prove iii), we note first that an obvious choice is c n = a n . If we replace c n by b n in the proof of i), everything works identically until the first equivalence in (3.19), noting that the equivalence a n ∼ b n implies that b n tends to infinity. To obtain the second equivalence in (3.19) leading to (3.16), we note that L(a n ) To check the second assertion in iii), we use the fact that (3.15) and (3.16) imply Indeed if F and F n are the distribution functions of |X| and max 1≤k≤n |X k |, F n = F n and with some function u(x) → 1 at infinity and for any fixed x > 0, (3.20) works with the two sequences of normalizing constants (c n ) n≥1 and (a n ) n≥1 , so by the convergence of types, see [11,Th.1,2, or [20, Prop.0.2], there is a constant A > 0 such that c n /a n converges to 1 and P (Y a ≤ x) = P (Y a ≤ Ax), so A = 1, that is c n ∼ a n .
To prove iv), we first recall that the left continuous inverse of a nondecreasing function H is H^←(y) := inf{x : H(x) ≥ y}. In particular, it is easily seen that a_n = (1/G)^←(n), noticing that the slowly varying L in (3.14) is positive on some neighborhood of infinity, which prevents G from vanishing at any real x. Now 1/G(x) = x^a/L(x), so 1/G varies regularly with exponent a. By [20, Prop. 0.8(v), p. 23], this entails that (1/G)^← varies regularly with exponent 1/a.
Proof of Th. 2.3. Let X be in RV_a and I be an infinite subset of N such that
We prove the weak convergence of (n^{−1/a} T_n^{0,d_n})_{n∈I} to τY_a under (2.9) restricted to I and (3.21). Th. 2.3 follows when X ∈ RV_a(τ), since this membership allows the choice I = N. Using (3.21), one sees that P(|X| > τn^{1/a}) ∼ n^{−1} as n → ∞, n ∈ I. Hence, by Lemma 3.2, a_n ∼ τn^{1/a} as n → ∞, n ∈ I, whence (n^{−1/a} T_n^{0,d_n})_{n∈I} converges to τY_a. So by Lemma 3.1, we only need to prove that for each ε > 0,
Until the end of the proof, n belongs to I. Consider the truncated random variables X̃_j, j = 1, …, n, n ∈ I, and the corresponding partial sums
By Lemma 3.2 (i), applied with c_n = τn^{1/a}, n ∈ I, and noticing that, because E X = 0,
Moreover, as j ≤ log₂(n/h), we have 2^{−j} ≤ 1 ≤ (n2^{−j})^{γ_a} since γ_a ≥ 0. Hence, for h large enough and uniformly in j ≤ log₂(n/h),
Hence, for h large enough and uniformly in j such that 2^j ≤ n/h,
Fix p > a. Since (|S̃_k|, k ≥ 1) is a submartingale, by the Doob and Markov inequalities we
By the Rosenthal inequality,
where the constant c_p > 0 depends on p only. Applying Lemma 3.2, we obtain
This leads to
Hence, recalling that γ_a = 1/2 − 1/a and p > a, we obtain
This completes the proof of (3.23), since K_n → 0 as n → ∞, n ∈ I, because d_n/n → 0.

Proof of Theorem 2.4
We proceed as in the proof of Th. 2.3, proving the convergence (2.11) for X ∈ RV_a and a subsequence indexed by I verifying (3.21). Again, a_n ∼ τn^{1/a} as n → ∞, n ∈ I. Throughout the proof, n belongs to I. Choose d_n = cn^κ for n ≥ n_0, with κ ∈ (2(a − 2 − δ)/((2 + δ)(a − 2)), 1), where 0 < δ < a − 2 if a ≤ 3 and δ = 1 if a > 3. Then d_n → ∞, d_n/n → 0 as n → ∞, n ∈ I, and (2.8) is satisfied. For 1 < h < d_n < n, recalling the notation (2.1), T_n(γ_a) can be expressed as
This will enable us to show that n^{−1/a} T_n(γ_a) and n^{−1/a} max{T′_n, T″_n} have the same limiting distribution. For the moment, we just note for later use the related inequalities (3.25) and (3.26) below. First we notice that for any r > 0,
P(n^{−1/a} T_n(γ_a) ≤ r) = P(n^{−1/a} max{T′_n, T″_n, T_n^{h,d_n}} ≤ r) ≤ P(n^{−1/a} max{T′_n, T″_n} ≤ r).
For a lower bound, we write
P(n^{−1/a} max{T′_n, T″_n} ≤ r − ε, n^{−1/a} T_n^{h,d_n} ≤ ε) ≤ P(n^{−1/a} max{T′_n, T″_n} + n^{−1/a} T_n^{h,d_n} ≤ r) ≤ P(n^{−1/a} T_n(γ_a) ≤ r),
whence for any r > 0, ε > 0, (3.26)
Now we analyse max{T′_n, T″_n}. To this aim, introducing the random vectors of R^h
we consider the random measure N_n on (the Borel σ-field of) R^h defined by
where δ_y denotes the Dirac mass at the point y of R^h. Write for r > 0,
Then {N_n(B_r^c) = 0} = {n^{−1/a} T′_n ≤ r}. Indeed, N_n(B_r^c) = 0 if and only if for each k = 1, …, n, δ_{n^{−1/a} U_k}(B_r^c) = 0, or equivalently n^{−1/a}(X_k, X_k + X_{k+1}, …, X_k + ··· + X_{k+h−1}) ∈ B_r. This means that for each k = 1, …, n, n^{−1/a} i^{−γ_a} |X_k + ··· + X_{k+i−1}| ≤ r for i = 1, …, h. Summing up, N_n(B_r^c) = 0 if and only if n^{−1/a} i^{−γ_a} |X_k + ··· + X_{k+i−1}| ≤ r for i = 1, …, h and k = 1, …, n − h + 1, that is, if and only if n^{−1/a} T′_n ≤ r.
Introducing the interval of integers ⟦u, v⟧ := [u, v] ∩ N, with the usual convention [u, v] = ∅ when v < u, consider for j = 1, …, n the sets
In what follows we use the functions f_{r,ε} and the random vectors X_{n,i}, Y_{n,i}, Z_{n,i} introduced in the proof of Th. 2.1. We have
P(n^{−1/a} max{T′_n, T″_n} ≤ r) = P(n^{−1/a} T′_n ≤ r, n^{−1/a} T″_n ≤ r)
To estimate P_{n1} we use Taylor's expansion, which gives
Since {N_n^{(j)}(B_r^c) = 0} and X_{n,j} are independent, 1{N_n^{(j)}(B_r^c) = 0} f′(Z_{n,j}) and 1{N_n^{(j)}(B_r^c) = 0} f″(Z_{n,j}) are respectively a random linear form on R^{k_n} and a random bilinear symmetric form on R^{k_n} × R^{k_n}, both independent of X_{n,j}. By Lemmas A.2 and A.3 in the Appendix, recalling that X_{n,j} and Y_{n,j} have the same (null) expectation and covariance matrix, one sees that
This yields
By our choice of d_n, we have for each r > 0 and ε > 0, lim sup_{n→∞, n∈I} |P_{n1}(r, ε)| = 0. To estimate P_{n2}(r, ε), we introduce
and notice that
As Y_j and N_n^{(j)} are independent, P_{n2}(r) = Σ_{j=1}^{n} P(N_n^{(j)}(B_r^c) = 0) E|Y_j|.
Similarly we estimate
Since the sequences (X_k)_{k≥1} and (Y_k)_{k≥1} are independent,
In a similar way we prove, for any r > 0 and 0 < ε < r/2, lim inf_{n→∞, n∈I}
Accounting for (3.26), this implies, for any h > 1, r > 0 and 0 < ε < r/2, lim inf_{n→∞, n∈I}
In this inequality, only the lim sup term depends on h. So, letting h tend to infinity, we obtain by (3.24) lim inf_{n→∞, n∈I}

Proof of Theorem 2.5
As a preliminary, we construct a sequence of reals (m_i)_{i≥0} increasing to infinity and a slowly varying function L = L_a such that
α) for every a > 2, x^{−a} L_a(x) decreases from 1 to 0 on [m_0, ∞);
β) (L_a(m_{2i}))_{i≥1} increases to infinity and (L_a(m_{2i+1}))_{i≥0} decreases to zero.
We start with an arbitrary sequence (m_i) ↑ ∞ on which we will progressively put some constraints. Choosing m_0 ≥ e^{1/2}, we define the function i:
By Th. A.11 in A.4 below, L is clearly a slowly varying function. We can already check α) without additional conditions on (m_i)_{i≥1}. Obviously m_0^{−a} L_a(m_0) = 1. Writing
For u ≥ e^{1/2}, −a + (−1)^{i(u)}/log u ≤ −a + 2 < 0, which implies that x^{−a} L_a(x) is decreasing on [m_0, ∞) and converges to 0 at infinity. To find conditions on m_i implying β), we note that for i ≥ 2,
Hence the increasingness of (L_a(m_{2i}))_{i≥1} as well as the decreasingness of (L_a(m_{2i+1}))_{i≥0} require that (m_i)_{i≥2} satisfies the condition
This means that the sequence (log log m_i)_{i≥0} has to be strictly convex. A simple choice satisfying this condition is log log m_i = i^b, with b > 1. In particular, with b = 2, m_i = exp(exp(i²)); since i² + (i − 2)² − 2(i − 1)² = 2, we find
To prove ii), we can build the sequence n_i = n_i(a, τ) as follows. First we note that for
Recalling that m_i = exp(exp(i²)), we note also that
Therefore L_a(m_{2i−1}^a) tends to 0 as i tends to infinity. As moreover L_a(m_{2i}) tends to infinity, L_a(m_{2i−1}^a) < τ^a < L_a(m_{2i}) for i large enough. Then by increasingness and continuity of
By independence and identical distribution of the X_k's and (3.38), for every r > 0,
Suppose that λ is positive. Then there exists a subsequence (n_{i_j}) of (n_i) such that n_{i_j}^{1/a}/a_{n_{i_j}} > λ/2, whence
which is contradictory. Therefore λ = 0 and the proof of iii) is complete.
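Our reading of the construction above can be made concrete: with log log m_i = i², the function log L_a is piecewise linear in s = log log x with slopes alternating between +1 and −1, so that log L_a(m_i) = (−1)^i i, giving α) and β) simultaneously. The sketch below implements this reading (the paper's exact formula for L_a may differ in inessential details):

```python
import math

def log_L(s):
    """log L_a(x) as a function of s = log(log(x)) >= 0, for an
    oscillating slowly varying L_a with nodes at s = i**2 (that is,
    m_i = exp(exp(i**2))): piecewise linear with slope (-1)**(i+1) on
    [i**2, (i+1)**2] and value (-1)**i * i at the node s = i**2."""
    i = int(math.sqrt(s))
    if (i + 1) ** 2 <= s:          # guard against sqrt rounding down
        i += 1
    return (-1) ** i * i + (-1) ** (i + 1) * (s - i * i)

# L_a(m_{2i}) = exp(2i) -> infinity while L_a(m_{2i+1}) = exp(-(2i+1)) -> 0,
# yet the slope of log L_a in s is bounded by 1, so L_a is slowly varying.
print([log_L(float(k * k)) for k in range(6)])   # [0.0, -1.0, 2.0, -3.0, 4.0, -5.0]
```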

Proof of Theorem 2.6
Assume that L(x) has no limit as x → ∞. Then
In the case where θ = 0, n_i^{−1/a} T_{n_i}(γ_a) converges in distribution to σT(γ_a) by Th. A.4 and the continuous mapping theorem. If 0 < θ < ∞, as (3.21) is satisfied by (n_i)_{i≥1}, the corresponding subsequence version of Th. 2.4 gives the weak convergence of n_i^{−1/a} T_{n_i}(γ_a) to V_{a,σ,θ}. When θ′ < ∞, the same argument applied with n′_i := [t′_i^a] gives the weak convergence of n′_i^{−1/a} T_{n′_i}(γ_a) to V_{a,σ,θ′}. In the special case where θ′ = ∞, we have to modify the definition of n′_i in the following way. As the quantile sequence (a_n)_{n≥1} is nondecreasing and tends to infinity, we set n′_i := max{k ≥ 1 : a_k ≤ t′_i}. As (t′_i)_{i≥1} is increasing, (a_{n′_i})_{i≥1} is nondecreasing and satisfies a_{n′_i} ≤ t′_i < a_{1+n′_i}. As, by Lemma 3.2 iv), a_n = n^{1/a} l(n) with l slowly varying, this implies
so a_{n′_i} ∼ t′_i and L(a_{n′_i}) → ∞. To deduce from this the desired convergence, it remains only (see the proof of Th. 2.5 iii)) to check that n′_i^{1/a} = o(a_{n′_i}). This follows from P(|X| > a_{n′_i}) ∼ 1/n′_i and P(|X| > a_{n′_i}) ∼ L(a_{n′_i}) a_{n′_i}^{−a}, which give
So we have found two nondecreasing sequences of integers (n_i)_{i≥1} and (n′_i)_{i≥1} such that
• θ = 0 < θ′ ≤ ∞. Then (3.44) is impossible, because V_{a,σ,0} has a subgaussian tail by (1.8), while αV_{a,σ,θ′} has a heavy tail, equivalent to (αθ′)^a x^{−a} when 0 < θ′ < ∞ by (1.10), or to α^a x^{−a} when θ′ = ∞.
• 0 < θ < θ′ < ∞. Looking again at the tails, we see by (1.10) that necessarily θ = αθ′, which reformulates (3.44) as V_{a,σ,θ} =_d (θ/θ′) V_{a,σ,θ′}. Noticing that for every x > 0,
we see by comparison with (1.9) that necessarily,
It is elementary to check that if a nonnegative random variable T, not degenerate at 0, has the same distribution as cT for some constant c, then c = 1. Therefore, when 0 < θ < θ′, (3.44) is impossible.
Now if θ > α, G is not a distribution function. If θ = α, then G(x) = 1 for every x > 0, which is clearly not true for the left-hand side of (3.46). If θ < α, G is the distribution function of a Fréchet law with scale parameter (α^a − θ^a)^{1/a}, hence heavy tailed, while the d.f. in the left-hand side of (3.46) is subgaussian, so (3.46) is false. Finally, in this third case, (3.44) cannot be true.
To conclude, we have proved that θ = θ′, i.e. that L(x) has a limit τ^a ∈ [0, ∞] as x tends to infinity. By [16], [12, Th. 5b)] and Th. 2.4, b_n^{−1} T_n(γ_a) converges in distribution to V_{a,σ,τ}, where b_n is defined as in (3.42). By the convergence of types theorem applied to the whole sequence, we obtain that c_n ∼ A b_n for some positive constant A. This shows that the only possible limits in distribution of T_n(γ_a) under affine normalization are the random variables Z = A V_{a,σ,τ} + B, A > 0, B ∈ R.

A.1 Taylor expansions and Lindeberg method
The following special Taylor expansion is useful when applied to functions of random variables having moments of order r ∈ [2, 3). We applied it in the proofs of Th.
Proof. By the Taylor formula at order 2 with integral remainder,
The Taylor formula at order 1 with integral remainder provides another bound for R.
The bound (A.3) seems preferable for "small" values of ‖h‖_E, while (A.4) can be favored for "large" values of ‖h‖_E. More formally, for an arbitrary parameter t > 0, to be specified later, and 2 < β ≤ 3, we get the bound
To unify these two bounds, we remark that for a, b > 0, a t^{3−β} = b t^{2−β} for t = b/a. With a = (1/6)‖f‴‖ and b = ‖f″‖, this choice of t gives (A.2).
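The balancing remark above, a t^{3−β} = b t^{2−β} at t = b/a, holds for every β and can be checked numerically; the constants below are placeholders, not the actual derivative norms from the proof:

```python
def bound_small(a, beta, t):
    """The 'small h' remainder bound behaves like a * t**(3 - beta)."""
    return a * t ** (3 - beta)

def bound_large(b, beta, t):
    """The 'large h' remainder bound behaves like b * t**(2 - beta)."""
    return b * t ** (2 - beta)

a, b = 1 / 6, 2.0          # placeholder values for the two constants
t = b / a                  # the balancing choice from the proof
for beta in (2.1, 2.5, 3.0):
    print(beta, bound_small(a, beta, t), bound_large(b, beta, t))
```

At t = b/a both expressions equal a^{β−2} b^{3−β}, which is how the unified bound arises.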
Before providing the justifications of (3.6) and (3.7), we need to introduce some notation. For m ≥ 1, we denote by L_m(R^k, R) the space of m-linear forms on (R^k)^m. A norm ‖x‖ being chosen on R^k, we denote by ‖g‖*_m the corresponding operator norm of g ∈ L_m(R^k, R), that is, ‖g‖*_m := sup{|g(x_1, …, x_m)| : x_i ∈ R^k, ‖x_i‖ ≤ 1, i = 1, …, m}.
As we work with finite dimensional spaces, Pettis and Bochner integrals coincide and we say that a random element in R k or in L m (R k , R) is integrable if its norm is an integrable random variable in the usual sense.
Lemma A.2. Let f be a measurable map from R^k into its dual L_1(R^k, R), and let X and Z be two independent random vectors in R^k. Assume moreover that X and f(Z) are integrable. Then f(Z)(X) is an integrable real random variable and
E f(Z)(X) = (E f(Z))(E X). (A.5)
According to its order of appearance in (A.5), the expectation symbol E denotes successively the expectation of a real valued random variable, of a random linear form on R^k (or random element of L_1(R^k, R)), and of a random vector in R^k.
Proof. Denote by P_{(Z,X)}, P_X and P_Z the respective distributions of (Z, X), X and Z. By independence of Z and X, P_{(Z,X)} is the product measure P_Z ⊗ P_X. The real valued random variable f(Z)(X) is integrable since
recalling that if g = f(z) is a linear form on R^k and E‖X‖ < ∞, then g(X) is integrable and E g(X) = g(E X). So (A.5) is established.
Lemma A.3. Let f be a measurable map from R^k into L_2(R^k, R) and let X, Y and Z be random vectors in R^k such that
b) X and Y have the same covariance matrix;
c) X and Z are independent, Y and Z are independent;
Proof. As f(z) is a bilinear form on R^k for each z ∈ R^k, it admits the representation
f(z)(x, y) = Σ_{1≤i,j≤k} a_{i,j}(z) x_i y_j, x = (x_1, …, x_k), y = (y_1, …, y_k),
and the integrability of f(Z) implies the integrability of the k² random variables a_{i,j}(Z), because a_{i,j}(Z) = f(Z)(e_i, e_j), where e_1, …, e_k denotes the canonical basis of R^k. Writing X = (X_1, …, X_k) and Y = (Y_1, …, Y_k), we see that
where the last equality uses the square integrability of X, which gives the integrability of X_i X_j, the integrability of a_{i,j}(Z), and the independence of X and Z. Obviously the same equality holds when substituting Y for X, and we conclude by b).

A.2 On the use of invariance principles
Proof of (2.2). We introduce first some notations.
with a > 2 and L slowly varying, E X_1 = 0, σ² = E X_1². Let ξ_n be the polygonal process built on the partial sums of (X_k)_{k≥1}. Assume that for some increasing sequence of integers (n_i)_{i≥1}, L(n_i^{1/a}) → 0.
Proof. We refer to [16], where the Lamperti invariance principle is proved for the whole sequence n^{−1/2}ξ_n under the assumption lim_{x→∞} x^a P(|X_1| > x) = 0, that is, in our notation, lim_{x→∞} L(x) = 0. The convergence of finite dimensional distributions follows from the assumption a > 2 and does not involve L. Before looking at the tightness, we note that for all t > 0, n_i P(|X_1| > t n_i^{1/a})
Now for the tightness, following step by step the proof exposed in [16], with the same notations, we obtain first
This upper bound tends to 0 by (A.13). The same works for
The hard part of the proof is the treatment of P_{1,1}(J, n_i, ε), where the X_k are truncated at the level δn_i^{1/a}. Everything around the use of the Rosenthal inequality works with n replaced by n_i, except maybe the control of the moments of the truncated variables X̃_k = X_k 1{|X_k| ≤ δn_i^{1/a}}, which can be achieved by the following adaptations.
The first term in the right-hand side above tends to 0 by (A.13). To treat the second term, we cannot use here sup_{s>0} s^a P(|X_1| > s) < ∞ as in [16], but we can exploit the slow variation of L (which was not assumed in [16]).
The other truncated moment we have to take care of is E|X̃_k|^q, for q > a. We use again the Karamata theorem and the slow variation of L as follows. As L(n_i^{1/a}) tends to zero, one can again find a constant c such that E|X̃_k|^q ≤ c δ^{q−a} n_i^{q/a−1}, as in [16].
No other modification is needed in the proof presented in [16].

A.3 Continuity of the distribution function of T (γ)
Here we justify the continuity of the distribution function F(r) = P(‖W‖_γ ≤ r) for 0 < γ < 1/2.
Let us check first the continuity at r = 0. Since obviously F(r) = 0 for r < 0 and F is right continuous everywhere, this continuity is equivalent to F(0) = 0. As ω_γ(x, 1) = 0 implies that x(1) = x(0),
Next we claim that F(r) > 0 for every r > 0. To see this, it is convenient to use the equivalent Ciesielski sequential norm ‖W‖^seq_γ, built on the weighted dyadic second differences of W, see e.g. [18]. In particular, for 0 < γ < 1/2, there is a positive constant c_γ such that ‖W‖^seq_γ ≥ c_γ ‖W‖_γ. By
where erf t := (2/√π) ∫_0^t exp(−s²) ds, and the support of the distribution of ‖W‖^seq_γ is [0, ∞) because lim_{j→∞} j^{1/2} 2^{j(γ−1/2)} = 0. In particular, G(t) > 0 for every t > 0. Applying this with t = c_γ r, r > 0, we obtain 0 < P(‖W‖^seq_γ ≤ c_γ r) ≤ P(‖W‖_γ ≤ r), that is, F(r) > 0 for every r > 0.
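In the Lévy construction of W, the dyadic second differences at level j are independent centered Gaussians with variance 2^{−j−2}, so the sequential norm can be simulated without building the whole path. The sketch below is our own illustration (finite-level truncation, and the weight 2^{jγ} is our reading of the Ciesielski norm in [18]); it merely illustrates that the norm is positive and finite, in line with F(r) > 0 for every r > 0:

```python
import random

def seq_holder_norm(gamma, levels, rng):
    """Simulate a Ciesielski-type sequential Hölder norm of a Brownian
    path on [0, 1].  In the Lévy construction, the dyadic second
    differences at level j are independent N(0, 2**-(j+2)), so we
    sample them directly instead of building the path."""
    norm = abs(rng.gauss(0.0, 1.0))              # the |W(1) - W(0)| term
    for j in range(levels):
        sd = 2.0 ** (-(j + 2) / 2.0)             # std dev at level j
        level_max = max(abs(rng.gauss(0.0, sd)) for _ in range(2 ** j))
        norm = max(norm, 2.0 ** (j * gamma) * level_max)
    return norm

rng = random.Random(1)
samples = [seq_holder_norm(0.3, 10, rng) for _ in range(200)]
# The simulated norm is positive and finite for gamma < 1/2.
print(min(samples) > 0.0, max(samples) < float("inf"))
```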

A.4 Slow variation
We gathered here some properties of slow variation used in the paper. Then L is said to be slowly varying.
Remark A.7. Every measurable function equivalent, as x tends to infinity, to a slowly varying function is itself slowly varying on some [B, ∞). ii) If L varies slowly and a > 0, then L(x^a) varies slowly.
Corollary A.12. If L varies slowly, then for every positive real a, x^a L(x) is equivalent to a function which ultimately increases to infinity, and x^{−a} L(x) is equivalent to a function which ultimately decreases to 0.
A positive measurable function G defined on some neighborhood [A, ∞) (A ≥ 0) of infinity and satisfying, for some real p and every y > 0,
is said to be regularly varying (at infinity) with exponent p. In the special case p = 0, G is slowly varying. It is easily seen that each regularly varying function G with exponent p can be written as G(x) = x^p L(x), where L is slowly varying. Assuming for notational simplicity that A = 0 and that G is locally bounded and regularly varying with exponent p, let us define for real r, G_r(x) =