Moments of the superdiffusive elephant random walk with general step distribution

We consider the elephant random walk with general step distribution. We calculate the first four moments of the limiting distribution of the position rescaled by $n^\alpha$ in the superdiffusive regime where $\alpha$ is the memory parameter. This extends the results obtained by Bercu.


Introduction and results
The elephant random walk (ERW) is a one-dimensional discrete-time random walk with memory. With probability α the walker repeats one of its previous steps, chosen uniformly at random, and with probability 1 − α the next step is sampled independently from the past, where α ∈ [0, 1] is the memory parameter.
The ERW was first introduced in 1993 by Drezner and Farnum [DF93] as a correlated Bernoulli process with Bernoulli step distribution and time-dependent memory parameter. For the case of a time-homogeneous memory parameter and Bernoulli step distribution it was proved in [Hey04] that the behaviour shows a phase transition in the value of the memory parameter α. In the diffusive regime (α < 1/2) asymptotic normality is proved after diffusive scaling; in the critical regime (α = 1/2) normality remains valid with a logarithmic correction in the scaling. In the superdiffusive regime (α > 1/2), after scaling with $n^\alpha$, the limiting distribution is found to be non-degenerate. It was also stated without proof that the limiting distribution is different from the normal distribution. The proof uses the martingale which naturally appears in the problem. In the case of a general time-dependent memory parameter, sufficient conditions for the law of large numbers, central limit theorem and law of iterated logarithm were given in [JJQ08] using the martingale method.
The same model with +1 and −1 jumps was first named the elephant random walk in [ST04] and the probability distribution of its position after n steps was analysed. The connection of the ERW with Pólya-type urns was exploited in [BB16] to prove process convergence of the ERW trajectory using known results on urns. The fact that the limiting distribution of the superdiffusive ERW is not Gaussian was first proved rigorously in [Ber17] by computing its first four moments using martingales. New hypergeometric identities are obtained in [BCR19] by computing these moments in two different ways. The number of zeros in the elephant random walk is analysed in [Ber22b]. The generalization when zero jumps are also allowed is called the delayed ERW, see [GS21,Ber22a]. In [Bus18] the steps of the ERW are sampled from the β-stable distribution with parameter β ∈ (0, 2] and the phase transition in the memory parameter is proved to happen at the value α = 1/β using the connection with random recursive trees. In the superdiffusive regime the fluctuations after subtracting the non-Gaussian limit are proved to be normal in [KT19].
In the present note we consider the ERW with general step distribution, which is defined as follows. Let $\alpha \in [0,1]$ be the memory parameter of the ERW. Let $\xi_1, \xi_2, \dots$ be an arbitrary i.i.d. sequence of random variables with certain moment conditions imposed later. We denote by $X_n$ the $n$th step of the random walk. We suppose that the random walk starts from the origin, i.e., $S_0 = 0$. The first step is $X_1 = \xi_1$. Every further step is defined as
$$X_{n+1} = \begin{cases} X_K & \text{with probability } \alpha, \\ \xi_{n+1} & \text{with probability } 1 - \alpha, \end{cases} \tag{1.1}$$
where the index $K$ has uniform distribution on the index set $\{1, 2, \dots, n\}$, that is, with probability $\alpha$ one of the previous steps is repeated and otherwise the step is an independent new sample from the step distribution. Note that the steps $X_1, X_2, \dots$ are not independent but the walk has a long memory. The position of the ERW is denoted by
$$S_n = \sum_{k=1}^n X_k. \tag{1.2}$$
Let
$$m_k = E(\xi_1^k), \qquad M_k = E\big((\xi_1 - m_1)^k\big) \tag{1.3}$$
for $k = 1, 2, \dots$ denote the moments and centered moments of the step distribution. For the ERW with general step distribution the same phase transition appears in the value of the memory parameter $\alpha$ as for the original model, since the martingale method used in the majority of the previous literature on the ERW extends naturally to our model, as we explain below. We believe that the proofs of the law of large numbers and of the central limit theorem in the diffusive and critical regimes carry over to the case of general step distribution after appropriate modifications, with the same Gaussian limits. Hence in the present note we focus on the most interesting superdiffusive regime, where the limiting distribution is different from Gaussian. Our main results for the superdiffusive ERW with general step distribution are the following.
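The dynamics (1.1) are straightforward to simulate. The following sketch is an illustration only, not part of the results; the function name `erw_steps` and the choice of ±1 steps are ours.

```python
import random

def erw_steps(n, alpha, step_sampler, rng=random):
    """Simulate the first n steps X_1, ..., X_n of the elephant random walk.

    With probability alpha the next step repeats a past step chosen uniformly
    at random; with probability 1 - alpha it is a fresh draw from step_sampler.
    """
    steps = [step_sampler()]                  # X_1 = xi_1
    for k in range(1, n):
        if rng.random() < alpha:
            steps.append(rng.choice(steps))   # repeat X_K, K uniform on {1, ..., k}
        else:
            steps.append(step_sampler())      # independent new sample xi_{k+1}
    return steps

random.seed(1)
walk = erw_steps(1000, 0.75, lambda: random.choice([-1, 1]))
position = sum(walk)                          # S_n = X_1 + ... + X_n
```

Note the two extreme cases: for α = 1 every step repeats $X_1$, so $S_n = n X_1$; for α = 0 the walk is a plain i.i.d. random walk.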
Theorem 1.1. Let $(S_n)$ denote the elephant random walk with memory parameter $\alpha$. Assume that $\alpha \in (1/2, 1]$, that is, we consider the superdiffusive regime.

1. Suppose that the step distribution has finite variance, that is, $m_2 < \infty$. Then
$$\lim_{n \to \infty} \frac{S_n - m_1 n}{n^\alpha} = Q \qquad \text{almost surely} \tag{1.4}$$
with some non-degenerate random variable $Q$.

2. Let $p$ be a positive even integer. Assume that the $p$th absolute moment of the step distribution is finite, that is, $m_p < \infty$. Then the convergence in (1.4) also holds in $L^p$, which means that
$$\lim_{n \to \infty} E\left( \left| \frac{S_n - m_1 n}{n^\alpha} - Q \right|^p \right) = 0. \tag{1.5}$$

Theorem 1.2. Assume that the step distribution of the elephant random walk $(S_n)$ has finite fourth moment, that is, $m_4 < \infty$. Then the first four moments of the random variable $Q$ which arises as the limit in (1.4)–(1.5) are given by the explicit formulas (1.6)–(1.9) in terms of the moments of the step distribution.
Theorem 1.1 follows from the application of the martingale method to the case of general step distribution. The almost sure convergence in (1.4) was already proved in Theorem 1.1 of [Ber21a]. The $L^p$ convergence was established for $p = 2$ in Theorem 3.2 of [Ber21c] and for general $p$ in Theorem 2.2 of [BCR19] for the standard elephant random walk. See also [Ber21b] for other generalizations of these convergence results. We provide a simple and elementary proof of the almost sure and $L^p$ convergence results of Theorem 1.1 in Section 2, which relies on proving the $L^p$ boundedness of the natural martingale if the step distribution has a finite $p$th absolute moment. In particular we prove the $L^p$ boundedness of a sequence of martingale differences in Lemma 2.1.
Theorem 1.2 is proved in Section 3 by solving the recursions for the mixed moments of the centered ERW. The moments in (1.6)-(1.9) generalize the formulas found in [Ber17] in the case of symmetric first step. We mention that higher moments of Q could in principle be determined using the method presented here but the recursions are much more complicated beyond the fourth moment.

Martingale method and convergence
We assume that the first two moments of the step distribution are finite. Let
$$\hat S_n = S_n - m_1 n \tag{2.1}$$
denote the centered ERW. Then by the definition (1.1) we have for any $n = 1, 2, \dots$ that
$$E(X_{n+1} \mid \mathcal{F}_n) = \frac{\alpha}{n} S_n + (1 - \alpha) m_1 \tag{2.2}$$
where $\mathcal{F}_n = \sigma(X_1, \dots, X_n)$ is the natural filtration. As a consequence,
$$E(\hat S_{n+1} \mid \mathcal{F}_n) = \left(1 + \frac{\alpha}{n}\right) \hat S_n \tag{2.3}$$
holds and the process
$$Q_n = a_n \hat S_n \tag{2.4}$$
is a martingale with respect to $\mathcal{F}_n$ where the sequence $(a_n)$ is given by
$$a_n = \frac{1}{\Gamma(1+\alpha)} \prod_{j=1}^{n-1} \frac{j}{j+\alpha} = \frac{\Gamma(n)}{\Gamma(n+\alpha)} \sim n^{-\alpha} \tag{2.5}$$
as $n \to \infty$ with the empty product understood to be equal to 1 in the definition of $a_1 = \Gamma(1+\alpha)^{-1}$. We mention that our definition (2.5) of $a_n$ compared to the literature is simplified by a factor $\Gamma(1+\alpha)$, see e.g. [Ber17]. The martingale $(Q_n)$ can be written as
$$Q_n = \sum_{k=1}^n a_k \varepsilon_k \tag{2.6}$$
where $\varepsilon_1 = X_1 - m_1$ and for all $k = 2, 3, \dots$,
$$\varepsilon_k = X_k - E(X_k \mid \mathcal{F}_{k-1}). \tag{2.7}$$

Lemma 2.1. Let $p$ be a positive integer and assume that the $p$th absolute moment of the step distribution is finite. Then the martingale differences $(\varepsilon_n)$ are bounded in $L^p$ and
$$E(|\varepsilon_n|^p) \le 2^p E(|\xi_1|^p). \tag{2.8}$$

Proof of Lemma 2.1. We first use induction to see that $E(|X_n|^p) = E(|\xi_1|^p)$. The statement is clear for $n = 1$ and for $n = 2, 3, \dots$, one can write by the law of total expectation that
$$E(|X_n|^p) = \frac{\alpha}{n-1} \sum_{k=1}^{n-1} E(|X_k|^p) + (1 - \alpha) E(|\xi_1|^p)$$
which is equal to $E(|\xi_1|^p)$ by the induction hypothesis.
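As a quick numerical sanity check of (2.5) — an illustration with an arbitrarily chosen α, not part of the proofs — one can verify that the product form of $a_n$ agrees with the gamma-ratio form, that $a_{n+1}(1 + \alpha/n) = a_n$ (which is exactly the martingale property of $Q_n = a_n \hat S_n$ in view of (2.3)), and that $a_n n^\alpha \to 1$:

```python
from math import gamma, isclose

def a(n, alpha):
    """a_n = Gamma(1+alpha)^{-1} * prod_{j=1}^{n-1} j/(j+alpha), cf. (2.5)."""
    prod = 1.0
    for j in range(1, n):
        prod *= j / (j + alpha)
    return prod / gamma(1 + alpha)

alpha = 0.75
# product form equals Gamma(n)/Gamma(n+alpha)
assert all(isclose(a(n, alpha), gamma(n) / gamma(n + alpha), rel_tol=1e-12)
           for n in range(1, 30))
# normalization making (Q_n) a martingale: a_{n+1} (1 + alpha/n) = a_n
assert all(isclose(a(n + 1, alpha) * (1 + alpha / n), a(n, alpha), rel_tol=1e-12)
           for n in range(1, 30))
# a_n ~ n^{-alpha}
assert isclose(a(5000, alpha) * 5000 ** alpha, 1.0, rel_tol=1e-3)
```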
On the other hand, Jensen's inequality implies that $|E(X_n \mid \mathcal{F}_{n-1})|^p \le E(|X_n|^p \mid \mathcal{F}_{n-1})$, which after taking expectation yields that $E(|E(X_n \mid \mathcal{F}_{n-1})|^p) \le E(|\xi_1|^p)$. Then by applying the Minkowski inequality for $\varepsilon_n = X_n - E(X_n \mid \mathcal{F}_{n-1})$ from (2.7), we have that
$$\big(E(|\varepsilon_n|^p)\big)^{1/p} \le \big(E(|X_n|^p)\big)^{1/p} + \big(E(|E(X_n \mid \mathcal{F}_{n-1})|^p)\big)^{1/p} \le 2 \big(E(|\xi_1|^p)\big)^{1/p}$$
which proves (2.8).
Proof of Theorem 1.1. 1. It is clear from the representation (2.6) and from Lemma 2.1 that the expectation of the predictable quadratic variation process can be bounded as
$$E(\langle Q \rangle_n) = \sum_{k=1}^n a_k^2 E(\varepsilon_k^2) \le 4 E(\xi_1^2) \sum_{k=1}^n a_k^2$$
which remains bounded in $n$ exactly in the superdiffusive regime $\alpha \in (1/2, 1]$, since $a_k^2 \sim k^{-2\alpha}$ is summable precisely when $2\alpha > 1$. As a consequence, the increasing limit $\lim_{n \to \infty} \langle Q \rangle_n$ is an almost surely finite random variable and the martingale $(Q_n)$ converges almost surely to its limit $Q = \sum_{k=1}^\infty a_k \varepsilon_k$.
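The dichotomy behind this argument, namely that $\sum_k a_k^2 < \infty$ precisely when $2\alpha > 1$, can be illustrated numerically. The sketch below is an illustration only; the two values of α are arbitrary examples, and $a_k$ is computed recursively up to the constant factor $\Gamma(1+\alpha)^{-1}$, which is irrelevant for summability.

```python
def tail_sum_of_squares(alpha, lo, hi):
    """Sum of a_k^2 over lo <= k < hi, with a_k = Gamma(k)/Gamma(k+alpha)
    computed via the recursion a_{k+1} = a_k * k/(k+alpha), a_1 = 1."""
    a, total = 1.0, 0.0
    for k in range(1, hi):
        if k >= lo:
            total += a * a
        a *= k / (k + alpha)   # a_{k+1} = a_k * k/(k+alpha)
    return total

# superdiffusive alpha = 0.75: a_k^2 ~ k^{-1.5} is summable, tiny tail
# diffusive alpha = 0.4: a_k^2 ~ k^{-0.8} is not summable, large tail
sup_tail = tail_sum_of_squares(0.75, 10_000, 20_000)
dif_tail = tail_sum_of_squares(0.4, 10_000, 20_000)
assert sup_tail < 0.01 * dif_tail
```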
2. The conditional expectation of the $p$th power of $Q_{n+1}$ using $Q_{n+1} = Q_n + a_{n+1} \varepsilon_{n+1}$ from (2.6) can be written as
$$E(Q_{n+1}^p \mid \mathcal{F}_n) = \sum_{k=0}^p \binom{p}{k} a_{n+1}^k Q_n^{p-k} E(\varepsilon_{n+1}^k \mid \mathcal{F}_n). \tag{2.12}$$
Note that the $k = 1$ term above vanishes since $E(\varepsilon_{n+1} \mid \mathcal{F}_n) = 0$. The absolute value of the expectation of the random variable which appears in the $k = 2, \dots, p$ terms on the right-hand side of (2.12) can be upper bounded as
$$\left| E\big( Q_n^{p-k} E(\varepsilon_{n+1}^k \mid \mathcal{F}_n) \big) \right| \le E\big( |Q_n|^{p-k} |E(\varepsilon_{n+1}^k \mid \mathcal{F}_n)| \big) \le \big(E(|Q_n|^p)\big)^{\frac{p-k}{p}} \big(E(|E(\varepsilon_{n+1}^k \mid \mathcal{F}_n)|^{p/k})\big)^{\frac{k}{p}} \le \big(E(|Q_n|^p)\big)^{\frac{p-k}{p}} \big(E(|\varepsilon_{n+1}|^p)\big)^{\frac{k}{p}} \tag{2.13}$$
where we used Hölder's inequality in the second inequality above and Jensen's inequality for conditional expectations in the last one. By taking expectation in (2.12) we get that
$$E(Q_{n+1}^p) \le E(Q_n^p) + a_{n+1}^2 \sum_{k=2}^p \binom{p}{k} \big(E(|\varepsilon_{n+1}|^p)\big)^{k/p} \big(E(|Q_n|^p)\big)^{(p-k)/p} \le E(Q_n^p) + a_{n+1}^2 \, 2^p \big(1 + E(|\varepsilon_{n+1}|^p)\big) \big(1 + E(Q_n^p)\big) \tag{2.14}$$
where we used (2.13) and the fact that $a_{n+1} \in (0, 1]$ in the first inequality above and the upper bounds $(E(|\varepsilon_{n+1}|^p))^{k/p} \le 1 + E(|\varepsilon_{n+1}|^p)$ and $(E(Q_n^p))^{(p-k)/p} \le 1 + E(Q_n^p)$ in the second one. Note also that since $p$ is even, we have $E(Q_n^p) = E(|Q_n|^p)$. By Lemma 2.1, we have $E(|\varepsilon_{n+1}|^p) \le 2^p E(|\xi_1|^p)$ where the upper bound does not depend on $n$. By Lemma 2.2 below with $\beta = 2\alpha$ and $c = 2^p(1 + 2^p E(|\xi_1|^p))$, the expectations $E(Q_n^p)$ remain bounded in $n$, that is, the martingale $(Q_n)$ is bounded in $L^p$, hence it converges to its limit $Q$ also in $L^p$.

Lemma 2.2. Let $\beta > 1$ and $c > 0$, and let $(x_n)$ be a sequence of non-negative numbers which satisfies $x_{n+1} \le x_n + c \, n^{-\beta} (1 + x_n)$ for all $n = 1, 2, \dots$. Then
$$x_n \le (1 + x_1) \prod_{j=1}^{n-1} \big(1 + c \, j^{-\beta}\big) - 1 \tag{2.16}$$
holds for all $n = 1, 2, \dots$. The upper bound on the right-hand side of (2.16) is increasing in $n$ and its $n \to \infty$ limit is finite since $\beta > 1$.

Limiting moments
We give the proof of Theorem 1.2 in this section. For this we introduce the sums
$$T_n = \sum_{k=1}^n X_k^2, \qquad U_n = \sum_{k=1}^n X_k^3$$
together with their recentered versions $\hat T_n = T_n - m_2 n$ and $\hat U_n = U_n - m_3 n$. We define the mixed moments of the step distribution
$$M_{k_1, \dots, k_r} = E\left( \prod_{i=1}^r \big( \xi_1^{k_i} - m_{k_i} \big) \right),$$
e.g. $M_{1,2} = E((\xi_1 - m_1)(\xi_1^2 - m_2)) = m_3 - m_1 m_2$. Note that the moments $M_k$ given in (1.3) can be expressed in terms of the moments $m_k$ as
$$M_2 = m_2 - m_1^2, \qquad M_3 = m_3 - 3 m_1 m_2 + 2 m_1^3, \qquad M_4 = m_4 - 4 m_1 m_3 + 6 m_1^2 m_2 - 3 m_1^4.$$
The idea to compute the moments of the limit $Q$ in Theorem 1.2 is to use the convergence in $L^p$ from Theorem 1.1 with $p = 4$ and to write down and solve recursions for the mixed moments of the elephant random walk, see Propositions 3.1 and 3.2 below.
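The identities expressing the centered moments $M_k$ through the raw moments $m_k$ follow from expanding $E((\xi_1 - m_1)^k)$ binomially. A quick check with a two-point step distribution (the distribution is an arbitrary example of ours, not from the text):

```python
# two-point step distribution: xi = 2 w.p. 1/2, xi = -1 w.p. 1/2 (arbitrary example)
vals, probs = [2.0, -1.0], [0.5, 0.5]
m = {k: sum(p * v ** k for v, p in zip(vals, probs)) for k in range(1, 5)}
M = {k: sum(p * (v - m[1]) ** k for v, p in zip(vals, probs)) for k in range(2, 5)}

# centered moments expressed through raw moments, cf. (1.3)
assert abs(M[2] - (m[2] - m[1] ** 2)) < 1e-12
assert abs(M[3] - (m[3] - 3 * m[1] * m[2] + 2 * m[1] ** 3)) < 1e-12
assert abs(M[4] - (m[4] - 4 * m[1] * m[3] + 6 * m[1] ** 2 * m[2] - 3 * m[1] ** 4)) < 1e-12
```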
Proposition 3.1. The mixed moments of $\hat S_n$, $T_n$ and $U_n$ satisfy the recursions (3.9)–(3.15) in $n$.
Proof of Proposition 3.1. We start by writing
$$S_{n+1} = S_n + X_{n+1}, \qquad T_{n+1} = T_n + X_{n+1}^2, \qquad U_{n+1} = U_n + X_{n+1}^3.$$
We use these formulas on the left-hand side of the recursions (3.9)–(3.15) and we expand the products under the expectation. Then we get the sum of several expectations involving products with combinations of $S_n$, $T_n$, $U_n$ multiplied by powers of $X_{n+1}$. The expectation of such a product is computed by taking the conditional expectation of the factor involving $X_{n+1}$ with respect to $\mathcal{F}_n$ first and then by taking expectation, see e.g. (3.26).
There are two types of terms in the resulting expressions: mixed terms including powers of $X_{n+1}$ multiplied by an expression of $S_n$, $T_n$ or $U_n$ under the expectation (the $k = 1, 2, 3, 4$ terms in (3.26)) and pure terms being the expectation of a polynomial of $X_{n+1}$ only (the $k = 0$ term in (3.26)). In order to compute the expectation appearing in the mixed terms, we use the conditional expectations of powers of $X_{n+1}$ given in (3.27)–(3.32) below. For the pure terms, the computation of the conditional expectation of the appropriate polynomial is not needed; the expectations given in (3.36) are enough to obtain the recursions (3.9)–(3.15).
Further using the definitions (3.8), (3.4), (3.3) and (3.5), we can see by induction on $n$, similarly to the proof of Lemma 2.1, the equality of expectations
$$E(X_n^k) = m_k \qquad \text{for } k = 1, \dots, 4. \tag{3.36}$$
For the expectation of the recentered sums,
$$E(\hat S_n) = E(\hat T_n) = E(\hat U_n) = 0$$
holds.
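These equalities of expectations can be checked by brute force for small $n$: the following sketch (the two-point step law and the parameter values are arbitrary illustrations of ours) enumerates all trajectories of the ERW together with their exact probabilities and verifies $E(S_n) = m_1 n$ and $E(T_n) = m_2 n$.

```python
def enumerate_walks(n, alpha, vals, probs):
    """All length-n ERW trajectories with their exact probabilities, for a
    finitely supported step distribution given by (vals, probs)."""
    paths = [((v,), p) for v, p in zip(vals, probs)]          # X_1 = xi_1
    for k in range(1, n):
        new_paths = []
        for traj, p in paths:
            for x in traj:                                    # repeat X_j, prob alpha/k each
                new_paths.append((traj + (x,), p * alpha / k))
            for v, q in zip(vals, probs):                     # fresh sample, prob (1-alpha)*q
                new_paths.append((traj + (v,), p * (1 - alpha) * q))
        paths = new_paths
    return paths

vals, probs = [2.0, -1.0], [0.5, 0.5]
m1 = sum(q * v for v, q in zip(vals, probs))        # m_1 = 0.5
m2 = sum(q * v * v for v, q in zip(vals, probs))    # m_2 = 2.5
n, alpha = 4, 0.7
paths = enumerate_walks(n, alpha, vals, probs)
ES = sum(p * sum(traj) for traj, p in paths)                   # E(S_n)
ET = sum(p * sum(x * x for x in traj) for traj, p in paths)    # E(T_n)
assert abs(sum(p for _, p in paths) - 1.0) < 1e-12
assert abs(ES - n * m1) < 1e-10
assert abs(ET - n * m2) < 1e-10
```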
Then we are ready to verify the recursions (3.9)–(3.15). We rewrite the $(n+1)$st terms on the left-hand side by the expansions above; taking conditional expectations and using (3.27)–(3.32) together with (3.36) yields the recursions.

For the proof of Proposition 3.2 about the solutions of the recursions in Proposition 3.1 one uses the following two lemmas. The first one provides the general solution of the recursions which the moments of the elephant random walk satisfy, the second one contains two useful identities about sums of gamma ratios.
Lemma 3.4. Let $a$ and $b$ be two arbitrary non-negative real numbers such that $b \neq a + 1$. Then for all $n = 1, 2, \dots$, identities of the following telescoping type hold:
$$\sum_{j=1}^n \frac{\Gamma(j+a)}{\Gamma(j+b)} = \frac{1}{b-a-1} \left( \frac{\Gamma(1+a)}{\Gamma(b)} - \frac{\Gamma(n+1+a)}{\Gamma(n+b)} \right).$$

In the proof of (3.19) we used the solutions (3.17) and (3.18) in the second equality and the identity $M_{1,2} - 2 m_1 M_2 = M_3$ in the last one. With this value of $c_n$, the summation on the right-hand side of (3.39) is computed in (3.44), which contains the ratio $\Gamma(n+1)/\Gamma(n+3\alpha)$, with the use of (3.41) from Lemma 3.4 in the last equality. Substituting it into the right-hand side of (3.39) one arrives at (3.19) after the simplification of the leading term.
The proof of (3.22) is similar: there $\beta = 3\alpha$ and $b_1 = M_{1,1,2}$.

In the proof of the fourth moment formula (3.23), the coefficient $c_n$ can be given as in (3.47), where the asymptotic equality follows since (3.48) holds as $n \to \infty$, and from three other similar asymptotic equalities corresponding to the summations in the further terms of (3.47). These asymptotics can be seen from Lemma 3.4 by neglecting the terms vanishing in the $n \to \infty$ limit. By substituting (3.47) into (3.39) we see that as $n \to \infty$, $E(\hat S_n^4)$ is asymptotically equal to a constant times $\Gamma(n + 4\alpha)/\Gamma(n) \sim n^{4\alpha}$. The value of the constant is obtained by adding $b_1/\Gamma(4\alpha + 1) = M_4/\Gamma(4\alpha + 1)$ to the expression in (3.47). This verifies that the vanishing terms in (3.47) can be disregarded. Straightforward simplification of the sum of $M_4/\Gamma(4\alpha + 1)$ and the right-hand side of (3.47) yields the coefficient of $\Gamma(n + 4\alpha)/\Gamma(n)$ on the right-hand side of (3.23), which completes the proof.
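As a numerical sanity check of the telescoping gamma-ratio summation used repeatedly above — stated here in the form $\sum_{j=1}^n \Gamma(j+a)/\Gamma(j+b) = \big(\Gamma(1+a)/\Gamma(b) - \Gamma(n+1+a)/\Gamma(n+b)\big)/(b-a-1)$ for $b \neq a+1$, which is our reading of Lemma 3.4 — one can compare both sides for a few parameter choices:

```python
from math import gamma, isclose

def gamma_ratio_sum(n, a, b):
    """Left-hand side: sum of Gamma(j+a)/Gamma(j+b) for j = 1, ..., n."""
    return sum(gamma(j + a) / gamma(j + b) for j in range(1, n + 1))

def closed_form(n, a, b):
    """Telescoping closed form, valid for b != a + 1."""
    return (gamma(1 + a) / gamma(b) - gamma(n + 1 + a) / gamma(n + b)) / (b - a - 1)

for a, b in [(0.0, 1.5), (0.3, 2.0), (1.0, 2.7)]:
    for n in (1, 5, 50):
        assert isclose(gamma_ratio_sum(n, a, b), closed_form(n, a, b), rel_tol=1e-10)
```

For instance, with $a = 0$ and $b = 2$ the sum is $\sum_{j=1}^n 1/(j(j+1))$, and the closed form reduces to the familiar value $1 - 1/(n+1)$.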