Limit Theorems for Trawl Processes

In this work we derive limit theorems for trawl processes. First, we study the asymptotic behaviour of the partial sums of the discretized trawl process $(X_{i\Delta_{n}})_{i=0}^{\lfloor nt\rfloor-1}$, under the assumption that as $n\uparrow\infty$, $\Delta_{n}\downarrow0$ and $n\Delta_{n}\rightarrow\mu\in[0,+\infty]$. Second, we derive a functional limit theorem for trawl processes as the L\'{e}vy measure of the trawl seed grows to infinity and show that the limiting process has a Gaussian moving average representation.


Introduction
In this paper, we study probabilistic limit theorems for a class of stationary infinitely divisible stochastic processes called trawl processes, which were introduced for the first time in 2011 by Barndorff-Nielsen [2]. By construction, a trawl process allows for both a very flexible autocorrelation structure and the possibility of generating any kind of marginal distribution within the class of infinitely divisible distributions. In particular, the marginal distribution and the autocorrelation structure can be modelled independently of each other. Often the marginal distribution is chosen among infinitely divisible distributions on positive integers, with a view to applying the process as a model of serially correlated temporal count data, although in general such an assumption is not necessary.
Barndorff-Nielsen et al. [5] provide the first systematic study of trawl processes, investigating their probabilistic properties and analysing volatility modulation within this framework. Since this paper appeared, there has been an increasing interest in trawl processes, covering a wide range of issues ranging from applications to theoretical investigations, and for the convenience of the reader we provide here a brief review of the recent literature on these processes.
Prior to the present paper, several limit theorems for trawl processes have been derived. Doukhan et al. [11] characterize a class of discrete time stationary trawl processes and study the functional limits of their partial sums. Grahovac et al. [14] investigate the intermittency property of trawl processes, while Paulauskas [19] investigates trawl processes (and general linear processes) with tapered innovations. Additionally, Talarczyk and Treszczotko [27] study limit theorems for integrated trawl processes with symmetric Lévy bases.
In a more applied realm, Noven et al. [18] develop a latent trawl process model for extreme values and apply it to environmental time series. This work is partially extended by Courgeau and Veraart [9], who derive an asymptotic theory for inference on the latent trawl model for extreme values. Further work in the direction of extreme values has been done by Bacro et al. [1], who propose hierarchical space-time modelling of asymptotically independent exceedances based on a space-time extension of the trawl process and apply their model to precipitation data. In finance, Shephard and Yang [26] and Veraart [30] adapt the trawl process to provide a coherent statistical model of high-frequency data, while the suitability of trawl processes for the modelling of high-frequency data is further corroborated by the results of Rossi and Santucci de Magistris [22].
With regards to estimation methodology for trawl processes, in addition to the aforementioned works [1,5,9,18,26], Doukhan [12] introduces spectral estimation for non-linear long range dependent discrete time trawl processes, and Shephard and Yang [25] develop likelihood inference for exponential-trawl processes.
In our paper we study two types of limit theorems for trawl processes. Our first main result concerns the asymptotic behaviour of the partial sums of the discretized trawl process as the size of the discretization step goes to zero. In particular, let $L$ be a homogeneous Lévy basis on $\mathbb{R}^{2}$, let $a:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be a nonincreasing integrable function and let $A=\{(r,y): r\leq0,\ 0\leq y\leq a(-r)\}$. Then $X_{t}:=L(A_{t})$, $t\in\mathbb{R}$, where $A_{t}:=A+(t,0)$, is termed the trawl process. Let $(\Delta_{n})_{n\in\mathbb{N}}$ be a sequence of non-negative constants such that $\Delta_{n}\downarrow0$ and $n\Delta_{n}\rightarrow\mu\in[0,+\infty]$ as $n\uparrow\infty$. We study the asymptotic behaviour of the properly rescaled partial-sum functional built from $(X_{i\Delta_{n}})_{i=0}^{\lfloor nt\rfloor-1}$. The asymptotic behaviour of this functional depends on the value of $\mu$; thus, we divide our analysis into three cases: $0<\mu<\infty$, $\mu=0$ and $\mu=+\infty$. When $0<\mu<\infty$, the above functional becomes a Riemann sum and we obtain functional convergence in probability to $\int_{0}^{t\mu}\left(X_{s}-\mathrm{E}(X_{s})\right)\mathrm{d}s$. In the case $\mu=0$ it turns out that the behaviour of $S_{n}$ depends on the increments of $X$ around $0$. Based on this, we show that $S_{n}$, after centering and proper rescaling, converges stably to a certain stochastic integral driven by two independent Lévy processes.
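To fix ideas, the construction above can be sketched in a few lines of code. The choices below (a Poisson Lévy seed, the exponential trawl function $a(s)=e^{-s}$, and the truncation of the strip) are purely illustrative and not part of the paper's general setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative choices: Poisson seed with intensity lam, trawl function a(s) = e^{-s}.
lam = 2.0
a = lambda s: np.exp(-s)

# Homogeneous Poisson basis on the strip [-T0, T1] x [0, a(0)]; truncating at -T0
# is harmless here since the trawl function's tail mass beyond T0 is e^{-T0}.
T0, T1 = 30.0, 10.0
N = rng.poisson(lam * (T0 + T1) * a(0.0))
r = rng.uniform(-T0, T1, N)
y = rng.uniform(0.0, a(0.0), N)

def X(t):
    """Trawl process X_t = L(A_t), A_t = {(r, y): r <= t, 0 <= y <= a(t - r)}."""
    return int(np.sum((r <= t) & (y <= a(t - r))))

# Partial sum S_n(t) = sum_{i=0}^{[nt]-1} X_{i*Delta_n} of the discretized process.
n, Delta_n, t = 200, 0.05, 1.0
S = sum(X(i * Delta_n) for i in range(int(n * t)))
print(S / (n * t))  # hovers around the stationary mean lam * ∫_0^∞ a(s) ds = 2
```

The time average of the discretized path fluctuates around the stationary mean $\mathrm{E}(X_t)=\lambda\int_0^\infty a(s)\,\mathrm{d}s$, in line with the law-of-large-numbers regime described above.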
Lastly, when µ = +∞ the limit depends on whether the trawl process X has short or long memory. Under short memory, we show that, when properly scaled, S n converges to a Brownian motion. In contrast, when X exhibits long memory we have to further distinguish whether the Gaussian component of the trawl process is present or not. If the Gaussian component is present, then S n under proper scaling converges towards a fractional Brownian motion with Hurst parameter H > 1/2. Interestingly, if the Gaussian component is absent, the limit is no longer Gaussian and the rate of convergence of S n is governed by the Blumenthal-Getoor index of the trawl process. We note that these findings agree with those obtained by Grahovac et al. [13] on superpositions of Ornstein-Uhlenbeck type processes.
Our second main result is a functional limit theorem for trawl processes that links them with stationary Gaussian processes. In particular, we show that the sequence of scaled trawl processes converges in distribution, as their Lévy measures tend to infinity, to a limiting process which admits a Gaussian moving average representation. Moreover, we are able to show explicitly the relation between the function that determines the upper boundary of the trawl set, namely the function a(·) introduced above, and the kernel function of the Gaussian moving average. We stress that the existence of the Gaussian moving average representation of the limiting process is explicitly proved.
The paper is structured as follows. Section 2 lays out the notation used throughout the paper and discusses some essential preliminaries. In Sections 3 and 4 we formulate the main results of the paper, concerning the asymptotics of partial sums of trawl processes and the convergence of a sequence of trawl processes to a Gaussian moving average, respectively. For ease of exposition, we defer the proofs of these results to the end of the paper, namely to Section 5. Finally, the Appendix contains the computation of the fourth moment of the trawl process.

Preliminaries
This part is devoted to introducing the basic notation as well as recalling several basic results and concepts that will be used throughout this paper.

Functions of regular variation
A measurable function $g:(0,\infty)\rightarrow(0,\infty)$ is called regularly varying at infinity with index $\alpha\in\mathbb{R}$ if, for every $\lambda>0$, $g(\lambda t)/g(t)\rightarrow\lambda^{\alpha}$ as $t\rightarrow\infty$. In this case we will write $g\in RV^{\infty}_{\alpha}$. If we replace $t\rightarrow\infty$ by $t\rightarrow0^{+}$ in the previous equation, then $g$ is called regularly varying at $0$, and in this case we denote this as $g\in RV^{0}_{\alpha}$. If in the previous definitions $\alpha=0$, we will refer to $g$ as slowly varying. It is well known that if $g\in RV^{\infty}_{0}$, then $x^{\varepsilon}g(x)\rightarrow+\infty$ and $x^{-\varepsilon}g(x)\rightarrow0$ as $x\rightarrow\infty$, for every $\varepsilon>0$. One of the key results for functions of regular variation is the so-called Karamata's Theorem (KT for short), which states that if $g\in RV^{\infty}_{\alpha}$ and $g$ is locally bounded on $[x_{0},+\infty)$, then:
1. For every $\rho>-(\alpha+1)$, we have that $\int_{x_{0}}^{x}s^{\rho}g(s)\,\mathrm{d}s\sim\frac{x^{\rho+1}g(x)}{\rho+\alpha+1}$, as $x\rightarrow\infty$;
2. For every $\rho<-(\alpha+1)$, we have that $\int_{x}^{\infty}s^{\rho}g(s)\,\mathrm{d}s\sim-\frac{x^{\rho+1}g(x)}{\rho+\alpha+1}$, as $x\rightarrow\infty$.
For a complete exposition of the basic properties of functions of regular variation we refer the reader to [7].
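As a quick numerical illustration of the second Karamata estimate (for a hypothetical regularly varying function with a logarithmic slowly varying factor; the closed-form tail integral is obtained by integration by parts):

```python
import math

# Hypothetical example: g(s) = s**alpha * log(s) with alpha = -1.5, so g ∈ RV_alpha.
# KT (item 2): for rho < -(alpha + 1), ∫_x^∞ s^rho g(s) ds ~ -x^{rho+1} g(x)/(rho+alpha+1).
alpha, rho = -1.5, 0.0   # here -(alpha + 1) = 0.5, so rho = 0 qualifies
g = lambda s: s**alpha * math.log(s)

def tail_integral(x):
    # closed form of ∫_x^∞ s^{-1.5} log(s) ds, by integration by parts
    return 2 * x**-0.5 * math.log(x) + 4 * x**-0.5

def karamata(x):
    # Karamata asymptotic equivalent of the same tail integral
    return -x**(rho + 1) * g(x) / (rho + alpha + 1)

for x in (1e2, 1e4, 1e8):
    print(x, tail_integral(x) / karamata(x))  # ratio tends to 1 (like 1 + 2/log x)
```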

Stable convergence
For the rest of this paper, $(\Omega,\mathcal{F},\mathbb{P})$ will denote a complete probability space. The notations $\overset{\mathbb{P}}{\rightarrow}$ and $\overset{d}{\rightarrow}$ stand, respectively, for convergence in probability and in distribution of random vectors (r.v.'s for short).
We say that a sequence of r.v.'s $(X_{n})_{n\geq1}$ converges stably in distribution to $X$ with respect to a $\sigma$-field $\mathcal{G}\subseteq\mathcal{F}$ if $(X_{n},U)\overset{d}{\rightarrow}(X,U)$ for every bounded $\mathcal{G}$-measurable r.v. $U$. A Lévy basis $L$ on $\mathbb{R}^{d}$ is an independently scattered infinitely divisible (ID) random measure, i.e. a collection of r.v.'s indexed by $\mathcal{B}^{\mu}_{b}(\mathbb{R}^{d})$ such that for any $A,B\in\mathcal{B}^{\mu}_{b}(\mathbb{R}^{d})$, $L(A)$ and $L(B)$ are ID r.v.'s that are independent whenever $A\cap B=\emptyset$. The cumulant of a r.v. $\xi$, in case it exists, will be denoted by $C(z\ddagger\xi):=\log\mathrm{E}(e^{iz\xi})$. We will say that $L$ is separable with control measure $\mu$ if $C(z\ddagger L(A))=\mu(A)\psi(z)$, where $\psi(z)=iz\gamma-\frac{b}{2}z^{2}+\int_{\mathbb{R}}\left(e^{izx}-1-izx\mathbf{1}_{[-1,1]}(x)\right)\nu(\mathrm{d}x)$, with $\gamma\in\mathbb{R}$, $b\geq0$ and $\nu$ a Lévy measure, i.e. $\nu(\{0\})=0$ and $\int_{\mathbb{R}\backslash\{0\}}(1\wedge|x|^{2})\,\nu(\mathrm{d}x)<\infty$. When $\mu=\mathrm{Leb}$, in which $\mathrm{Leb}$ represents the Lebesgue measure on $\mathbb{R}^{d}$, $L$ is called homogeneous. The ID r.v. associated to the characteristic triplet $(\gamma,b,\nu)$ is called the Lévy seed of $L$ and will be denoted by $L'$. As usual, $(\gamma,b,\nu)$ will be called the characteristic triplet of $L$ and $\psi$ its characteristic exponent. The Blumenthal-Getoor index of an ID distribution with triplet $(\gamma,b,\nu)$ is defined and denoted as $\beta_{\nu}:=\inf\{\theta\geq0:\int_{|x|\leq1}|x|^{\theta}\,\nu(\mathrm{d}x)<\infty\}$. Within this framework, we will also refer to $\beta_{\nu}$ as the Blumenthal-Getoor index of a homogeneous Lévy basis with characteristic triplet $(\gamma,b,\nu)$. In this paper, the sigma-field generated by $L$ is denoted by $\mathcal{F}^{L}$.

Trawl processes
Let $L$ be a homogeneous Lévy basis on $\mathbb{R}^{2}$ with characteristic triplet $(\gamma,b,\nu)$. In addition, let $a:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be a non-increasing integrable function and put $A:=\{(r,y): r\leq0,\ 0\leq y\leq a(-r)\}$. The process defined by $X_{t}:=L(A_{t})$, $t\in\mathbb{R}$, where $A_{t}:=A+(t,0)$, is termed a trawl process. From now on, we will refer to $A$ and $a$ as the trawl set and the trawl function, respectively. It is well known that $X$ is strictly stationary and, in the case when $L$ is square integrable, its auto-covariance function is given by $\Gamma_{X}(h):=\mathrm{Cov}(X_{0},X_{h})=\mathrm{Var}(L')\int_{h}^{\infty}a(s)\,\mathrm{d}s$, $h\geq0$. Moreover, $\Gamma_{X}$ uniquely characterizes $a$. More precisely, if $L$ is square integrable, and $X$ and $\tilde{X}$ are two trawl processes associated to $L$ with trawl functions $a$ and $\tilde{a}$, respectively, then $a=\tilde{a}$ a.e. if and only if $\Gamma_{X}=\Gamma_{\tilde{X}}$. For a detailed exposition of the basic properties of trawl processes we refer to [5] and [4].
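The set-overlap identity behind the autocovariance formula, $\mathrm{Leb}(A\cap A_{h})=\int_{h}^{\infty}a(s)\,\mathrm{d}s$, can be checked numerically; the exponential trawl function below is an illustrative choice:

```python
import math

# With Var(L') = 1, Γ_X(h) = Leb(A ∩ A_h) = ∫_h^∞ a(s) ds.  For a non-increasing a
# and h >= 0, the overlap is ∫_{-R}^0 a(h - r) dr up to a tail of mass e^{-R}.
a = lambda s: math.exp(-s)

def overlap_area(h, R=40.0, m=20_000):
    dr = R / m  # midpoint rule over r in [-R, 0]
    return sum(a(h - (-R + (j + 0.5) * dr)) * dr for j in range(m))

for h in (0.0, 0.5, 2.0):
    print(h, overlap_area(h), math.exp(-h))  # ∫_h^∞ e^{-s} ds = e^{-h}
```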

Limit theorems for partial sums of trawl processes
In this section we focus on limit theorems for the partial sums of $(X_{i\Delta_{n}})_{i=0}^{n-1}$ under the assumption that as $n\uparrow\infty$, $\Delta_{n}\downarrow0$ and $n\Delta_{n}\rightarrow\mu\in[0,+\infty]$. More specifically, we study the asymptotic behavior of the process $S_{n}=(S^{\Delta_{n}}_{[nt]})_{t\geq0}$, where $S^{\Delta}_{m}:=\sum_{i=0}^{m-1}\left(X_{i\Delta}-\mathrm{E}(X_{i\Delta})\right)$, $m\in\mathbb{N}$, $\Delta>0$, with $X$ as in (2.2). Note that we will always assume that the associated Lévy basis $L$ has characteristic triplet $(\gamma,b,\nu)$, that $\mathrm{E}(|L'|)<\infty$ and that $a$ is continuous on $[0,\infty)$. Furthermore, for the sake of exposition, all of our proofs are presented in Section 5.

Main results
Throughout this part we state our main results concerning $S_{n}$. As expected, the rate of convergence will depend entirely on the sampling scheme, which is in turn represented by $\mu$. In what follows we will use the notation $\bar{X}_{s}:=X_{s}-\mathrm{E}(X_{s})$, $s\geq0$.

The case when $0<\mu<\infty$

Let us start by assuming that $n\Delta_{n}\rightarrow\mu\in(0,\infty)$. In this situation the points $t_{i}:=i\Delta_{n}$, $i=0,\dots,[nt]-1$, form a partition of (approximately) $[0,t\mu]$. Consequently, $\Delta_{n}S^{n}_{t}$ becomes a Riemann sum for the mapping $s\mapsto\bar{X}_{s}$. Based on this observation, the following result is not surprising.

Proposition 1. Suppose that $\mathrm{E}(|L'|)<\infty$ and that $n\Delta_{n}\rightarrow\mu\in(0,\infty)$. Then $\Delta_{n}S^{n}_{t}\overset{\mathbb{P}}{\rightarrow}\int_{0}^{t\mu}\bar{X}_{s}\,\mathrm{d}s$, $t\geq0$.
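The Riemann-sum heuristic can be illustrated with a deterministic stand-in for the path $s\mapsto\bar{X}_{s}$ (the function f below is purely illustrative):

```python
import math

# Deterministic stand-in f for the path s -> Xbar_s: with t_i = i*Delta_n and
# n*Delta_n = mu, the quantity Delta_n * sum_{i < nt} f(t_i) is a left Riemann
# sum for ∫_0^{t*mu} f(s) ds.
f = lambda s: math.sin(3 * s) + 2.0
mu, t = 1.0, 1.0

def riemann(n):
    Delta_n = mu / n
    return Delta_n * sum(f(i * Delta_n) for i in range(int(n * t)))

exact = 2.0 + (1.0 - math.cos(3.0)) / 3.0  # ∫_0^1 (sin(3s) + 2) ds
for n in (100, 1000, 10000):
    print(n, riemann(n), exact)
```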
Let us now turn our attention to the case when $\mu=0$. Intuitively, when this occurs one should expect that $X_{i\Delta_{n}}\approx X_{0}$, $i=0,1,\dots,n-1$, for $n$ large, which suggests that $\frac{1}{n}S^{\Delta_{n}}_{n}\approx\bar{X}_{0}$.
This turns out to be true, as the following result shows.

Proposition 2.
Suppose that $\mathrm{E}(|L'|)<\infty$ and that $a$ is continuously differentiable on a neighborhood of $0$. If $\Delta_{n}n\rightarrow0$ as $n\rightarrow\infty$, then $\frac{1}{n}S^{n}_{t}\overset{\mathbb{P}}{\rightarrow}t\bar{X}_{0}$, $t\geq0$.
Next, we proceed to derive second-order asymptotics for $S_{n}$ when $\mu=0$. Following the previously discussed heuristic, one should expect that, for large $n$, the fluctuations of $S_{n}$ are driven by the increments of $X$; therefore, in this case the asymptotics are determined by the behavior of the increments of $X$. Before presenting our results in this framework, we introduce our working assumption, which reads as follows.

Assumption 1. There is $0<\beta<2$ such that (see Section 2) $\bar{\nu}_{\pm}(x)\sim\bar{K}_{\pm}x^{-\beta}$ as $x\rightarrow0^{+}$, with $\bar{K}_{+}+\bar{K}_{-}>0$. Furthermore, if $\beta=1$ we assume in addition that $\bar{K}_{+}=\bar{K}_{-}$ and that the principal value $\mathrm{PV}\int_{|x|\leq1}x\,\nu(\mathrm{d}x)$ exists.

Then the following holds, where $B^{(1)},B^{(2)}$ are two independent Brownian motions which are in turn independent of $L$, and $\sigma^{2}=b^{2}a(0)$.
iii. Suppose that $b=0$ and that Assumption 1 holds. Then, as $n\rightarrow\infty$, the properly rescaled $S_{n}$ converges stably, where $Y^{(1)}$ and $Y^{(2)}$ are two i.i.d. strictly $\beta$-stable Lévy processes independent of $L$.

The case when µ = +∞
Suppose now that $n\Delta_{n}\rightarrow\mu=+\infty$ as $n\uparrow\infty$ and $\Delta_{n}\downarrow0$. In order to get some intuition of what one should expect in this situation, first let $\Delta_{n}=\Delta$ be fixed, i.e. the spacing between observations is fixed. Obviously, $\Delta_{n}n\rightarrow+\infty$ and the process $(X_{\Delta n})_{n\geq1}$ is strictly stationary. In this situation $S_{n}$ becomes the partial sum of a discrete-time stationary process. In view of this, when properly scaled, $S_{n}$ typically converges to either a Brownian motion or a fractional Brownian motion (fBm for short), depending on whether $(X_{\Delta n})_{n\geq1}$ has short memory or long memory, respectively. It turns out that in our general setup the same result holds in the former case, while in the latter $S_{n}$ will converge to an fBm only when $L$ has a Gaussian component. Before presenting our results for this sampling scheme, we introduce our working assumptions.
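The short/long memory dichotomy can be made concrete through the summability of $\Gamma_{X}$; the two trawl functions below are illustrative choices:

```python
import math

# Γ_X(h) = ∫_h^∞ a(s) ds with Var(L') = 1 (illustrative trawl functions).
# Exponential trawl: summable autocovariance (short memory). Power trawl
# a(s) = (1+s)^{-kappa}, 1 < kappa < 2: Γ_X(h) ~ h^{1-kappa}/(kappa-1), not summable.
kappa = 1.5
gamma_exp = lambda h: math.exp(-h)                        # ∫_h^∞ e^{-s} ds
gamma_pow = lambda h: (1 + h)**(1 - kappa) / (kappa - 1)  # ∫_h^∞ (1+s)^{-kappa} ds
H = 10_000
s_short = sum(gamma_exp(h) for h in range(H))
s_long = sum(gamma_pow(h) for h in range(H))
print(s_short, s_long)  # s_short stabilises; s_long grows like H^{2 - kappa}
```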
Our first result concerns the short memory case.

Theorem 2. Suppose that $\mu=+\infty$ and that Assumption 2 is fulfilled. Then, as $n\rightarrow\infty$,
where $\sigma_{a}^{2}=\mathrm{Var}(L')\int_{0}^{\infty}a(s)\,\mathrm{d}s$ and $B$ is a Brownian motion independent of $\mathcal{G}^{X}$.

Remark 1. By the independently scattered property of $L$, the limiting process appearing in Theorem 2 is not only independent of $\mathcal{G}^{X}$, but also of $\mathcal{F}^{L}$. Furthermore, in view of the fact that the array of $\sigma$-fields $\mathcal{F}^{X}_{j,n}:=\sigma(X_{0},X_{\Delta_{n}},\dots,X_{j\Delta_{n}})$, $j=0,1,\dots,n-1$, is "almost nested", we conjecture that the convergence in Theorem 2 also holds stably with respect to $\mathcal{F}^{X}$.

The asymptotic behavior changes drastically when $X$ has long memory. For a better exposition of our results, we split our findings into two theorems that distinguish whether the Gaussian component of $L$ is present or not.
When the Gaussian component is not present, the limit is no longer Gaussian and the rate of convergence of $S_{n}$ varies according to the behavior of $\beta_{\nu}$, the Blumenthal-Getoor index of $L$. More precisely:

Theorem 4. Let Assumption 4 hold. Suppose that $b=0$, $\mathrm{E}(|L'|^{2})<\infty$, and that $\mu=+\infty$. The following holds:
i. where $Y$ is a strictly $(\kappa-1)$-stable Lévy process satisfying (see (2.1));
ii. when $2>\beta_{\nu}>\kappa-1>1$, further assume that for some $\bar{K}$ the corresponding condition holds; then the limit is driven by a strictly $\beta_{\nu}$-stable r.v. $\xi$, in which $K_{\pm,\kappa,\beta_{\nu}}=a_{\beta_{\nu}}\bar{K}_{\pm}$.

Remark 2. The previous property only concerns the behavior of the Lévy measure of $L$ around zero. Hence, one can simultaneously have that this condition is satisfied and that the second moment of $L'$ is finite. An example of such an infinitely divisible distribution is the normal inverse Gaussian distribution (see [24]).
Most of the estimates used in the proof of the previous theorem rely heavily on the square integrability of $L'$. Thus, it is natural to consider the situation in which this condition no longer holds. The following result gives a partial answer to this matter.

Convergence to a Gaussian moving average
In this section we show that, under certain assumptions, a sequence of trawl processes converges to a Gaussian moving average. In particular, the main theorem of this section explores the case where the Lévy measure of the Lévy seed of the trawl process explodes as $n\rightarrow\infty$.
Let $T>0$, let $r,s,t\in[0,T]$ with $r\leq s\leq t$, and let $\tilde{B}_{t,s,r}:=A_{s}\setminus A_{t}\setminus A_{r}$ (see also Figure 1). Consider the following general assumption on the behaviour of the trawl set.
Assumption 5 (On the behaviour of trawl sets). We assume that $a$ is monotone and that the required bound holds for any given $t,s\in\mathbb{R}$, where $t-s$ can take any value in $(0,\infty)$.
Remark 3. This assumption is only needed to prove tightness in the proof of Theorem 6, since the convergence of the finite-dimensional distributions does not rely on it.
Example 1 (Exponential). For $p\geq0$ consider $a(p):=C_{T}e^{-p}$ with $C_{T}\in(0,T^{-2}]$; then the required bound follows from the mean value theorem.

Example 2 (Bounded first derivative on compact intervals). Consider a monotone function $f$ whose first derivative is bounded on compact intervals; then by the mean value theorem we obtain the corresponding estimate. Notice, however, that when $s=(t+r)/2$ (namely $t-s=s-r$), denoting $x:=t-s$ leads to an expression which does not satisfy the desired condition of Assumption 5.
In the above examples we have given a particular structure to the trawl function $a$ and then checked whether such an $a$ satisfies Assumption 5 or not. However, there is another modelling point of view we can take. Imagine that we would like to approximate a moving average with a particular kernel by a sequence of trawl processes. How can we choose the right $a$? In other words, how do we choose the right sequence of trawl processes? In the next result, which is the main result of this section, we answer this question too; namely, we provide a link between $a$ and the kernel function of the moving average.
Let $X^{(n)}$ be the associated trawl process with characteristics $(\gamma^{(n)},b^{(n)},\nu^{(n)})$, $n\in\mathbb{N}$, and assume that Assumption 5 holds.

Remark 4. Observe that in case $g$ is differentiable and positive monotone (bounded), the stated convergence follows by the monotone (bounded) convergence theorem. The theorem can be generalised using a general sequence of real numbers $(a_{n})_{n\in\mathbb{N}}$ instead of $(\frac{1}{\sqrt{n}})_{n\in\mathbb{N}}$. However, since the computations are exactly the same, we decided to keep the more natural normalisation $\frac{1}{\sqrt{n}}$.

Example 4 (The Poisson case). In this example we are going to show that the assumptions of Theorem 6 are satisfied when the Lévy seed is Poisson, i.e. $L'^{(n)}\sim\mathrm{Poisson}(\lambda^{(n)})$ with intensity parameter $\lambda^{(n)}$. In order to satisfy the assumptions we have to impose that $\lambda^{(n)}=n+o(n)$ (e.g. $\lambda^{(n)}=n+bn^{\gamma}$ for $b\in\mathbb{R}$ and $\gamma<1$). Indeed, the required convergences hold as $n\rightarrow\infty$.
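A small simulation sketch of the Poisson example (the parameter values below are illustrative): with $\lambda^{(n)}=n+bn^{\gamma}$ and $\gamma<1$, the centred seed scaled by $1/\sqrt{n}$ has variance $\lambda^{(n)}/n\rightarrow1$.

```python
import numpy as np

rng = np.random.default_rng(0)

# lambda_n = n + b*n**gamma with gamma < 1, so lambda_n = n + o(n); the centred
# Poisson seed scaled by 1/sqrt(n) then has variance lambda_n / n -> 1.
n, b, gamma = 10_000, 2.0, 0.5
lam_n = n + b * n**gamma
Z = (rng.poisson(lam_n, size=200_000) - lam_n) / np.sqrt(n)
print(Z.mean(), Z.var())  # close to 0 and 1, respectively
```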
Example 5. Let $g:\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be integrable, monotonically decreasing and twice differentiable. Then the resulting trawl function satisfies the assumptions of Theorem 6. Indeed, it is possible to see that $a$ is positive (since $g'$ is negative), monotonically decreasing (since $g'$ is monotonically increasing), and satisfies Assumption 5.

Existence of the limiting moving average
In this subsection we answer the following question: does our limiting process have a moving average representation? A priori, we only know that there is a limiting process which is a stationary centred Gaussian process whose covariance structure is expressed in terms of a continuous function $g$ such that $g(x)=0$ for $x<0$. The answer is positive and is given by the following proposition.
Proposition 3. Let $(Y_{t})$ be a stationary Gaussian process with covariance $r(u)=\int_{\mathbb{R}}e^{iu\lambda}f(\lambda)\,\mathrm{d}\lambda$, where $f$ is the spectral density. Then the limiting object admits a moving average representation $Y_{t}=\int_{-\infty}^{t}\bar{g}(t-s)\,\mathrm{d}\bar{B}_{s}$, where the integral is well defined since $\bar{B}$ is a one-dimensional Brownian motion and $\bar{g}\in L^{2}(\mathbb{R})$ with $\bar{g}(x)=0$ for $x<0$.
Proof. The first identity follows by approximating $g$ with continuous functions with compact support (see Remark 2.4 of [8]). Moreover, we know that $Y_{t}$ has an absolutely continuous spectral distribution (see [10], page 532). Then, using the second part of assumption (4.1) together with Satz 5 of [17], we conclude the proof.
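As a sanity check of the moving average representation, for a hypothetical kernel $\bar{g}(x)=e^{-x}\mathbf{1}_{\{x\geq0\}}$ the covariance $r(u)=\int_{0}^{\infty}\bar{g}(x)\bar{g}(x+u)\,\mathrm{d}x$ has the closed form $e^{-u}/2$, which a direct numerical integration reproduces:

```python
import numpy as np

# Hypothetical kernel gbar(x) = exp(-x) for x >= 0 (zero otherwise); the covariance
# of Y_t = ∫ gbar(t - s) dB_s is r(u) = ∫_0^∞ gbar(x) gbar(x + u) dx = exp(-u)/2.
dx = 1e-3
x = (np.arange(30_000) + 0.5) * dx          # midpoint grid on (0, 30]
gbar = np.exp(-x)

def r_num(u):
    return float(np.sum(gbar * np.exp(-(x + u))) * dx)

for u in (0.0, 1.0, 2.5):
    print(u, r_num(u), np.exp(-u) / 2)
```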
Notice that the spectral density $f$ is in our case given by the Fourier transform of the covariance function; since we are in the real-valued framework, it reduces to a cosine transform. Further, it can be expressed in terms of the trawl function.
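In our notation, and consistently with the autocovariance formula of Section 2, the chain from covariance to spectral density can be summarised as follows (a sketch; $r$ denotes the covariance function):

```latex
f(\lambda) \;=\; \frac{1}{2\pi}\int_{\mathbb{R}} e^{-iu\lambda}\, r(u)\,\mathrm{d}u
\;=\; \frac{1}{\pi}\int_{0}^{\infty} \cos(u\lambda)\, r(u)\,\mathrm{d}u,
\qquad
r(u) \;=\; \operatorname{Var}(L')\int_{|u|}^{\infty} a(s)\,\mathrm{d}s .
```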

Proofs
Throughout all our proofs, non-random positive constants will be denoted by the generic symbol $C>0$, and they may change from line to line. Additionally, for simplicity and without loss of generality, we may and do assume that $\mathrm{E}(L')=0$ and $\mathrm{Var}(L')=1$, in such a way that $\Gamma_{X}(h)=\int_{h}^{\infty}a(s)\,\mathrm{d}s$ for $h\geq0$. We note that below we will use the notation $T_{n}=n\Delta_{n}$.

Technical lemmas
We start by analyzing the variance of $S^{\Delta}_{m}$.
ii. If $0<\mu<\infty$, then the stated limit holds. iii. If $\mu=+\infty$, then, in a similar way, we obtain the analogous bound. All of the above implies the claimed estimate, which together with (5.2) gives (5.1). Now assume that $\Delta m\rightarrow\beta\in[0,+\infty]$. Parts i., ii. and part a) of iii. follow immediately from (5.1) and the Dominated Convergence Theorem. Therefore, for the rest of the proof we will assume that $\Delta m\rightarrow+\infty$ and that $a\in RV^{\infty}_{\alpha}$ with $1<\alpha<2$. By KT we get the corresponding asymptotics. Since $a\in RV^{\infty}_{\alpha}$, it admits the representation $a(x)=x^{-\alpha}l(x)$, with $l$ a slowly varying function at $\infty$. Thus, the required estimate follows, where we have used that, for any slowly varying function $l$, $l(x)x^{\rho}\rightarrow+\infty$ as $x\uparrow\infty$ whenever $\rho>0$. Consequently, by (5.1), we deduce the remaining claim, which completes the proof.
Next, we find a very useful decomposition for $S^{\Delta}_{m}$: for any $\Delta>0$, $S^{\Delta}_{m}$ splits into the components $S^{\Delta,l}_{m}$, $l=1,\dots,4$. Based on these observations, the following result is obvious: the decomposition holds almost surely. When $\beta=+\infty$, it turns out that in the short memory case $S^{\Delta,1}_{m}$ dominates the asymptotics.
Lemma 3. Let $m_{n}\in\mathbb{N}$ be such that $m_{n}\uparrow\infty$, $\Delta_{n}m_{n}\rightarrow\infty$ and $\Delta_{n}\downarrow0$ as $n\rightarrow\infty$. Suppose that $\mathrm{E}(|L'|^{2})<\infty$ and that $\int_{0}^{\infty}\int_{r}^{\infty}a(s)\,\mathrm{d}s\,\mathrm{d}r<\infty$. Then the corresponding terms are negligible. The proof of Lemma 3 relies heavily on the next property.
Lemma 4. Let $f\geq0$ be an integrable continuous function such that $\int_{0}^{\infty}F(s)\,\mathrm{d}s<\infty$, where $F(x):=\int_{x}^{\infty}f(s)\,\mathrm{d}s$; then, in particular, $xF(x)\rightarrow0$ as $x\rightarrow+\infty$.

Proof. If $f\equiv0$ a.e. the result is trivial, so assume that $f>0$. For $x\geq0$, put $F(x):=\int_{x}^{\infty}f(s)\,\mathrm{d}s$. Integration by parts, combined with the fact that $f>0$ and $\int_{0}^{\infty}F(s)\,\mathrm{d}s<\infty$, allows the Dominated Convergence Theorem to be applied, and this shows in particular that the limit $\ell:=\lim_{x\rightarrow+\infty}xF(x)$ exists. Observe that if $\ell>0$, then $F(x)\geq\ell/(2x)$ for all $x$ large enough, which contradicts the integrability of $F$. Hence, $\ell=0$ as required. To show the last part, observe first that, from the first part of the proof, the corresponding averaged quantity vanishes as $x\rightarrow+\infty$. Moreover, by L'Hospital's Rule and the continuity of $f$, we obtain the complementary limit, which applied to (5.7) concludes the proof.
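A numerical illustration of the lemma's conclusion, for the hypothetical choice $f(s)=(1+s)^{-3}$ (so that $F$ is integrable):

```python
# f(s) = (1+s)^{-3} is integrable and F(x) = ∫_x^∞ f(s) ds = (1+x)^{-2}/2 is itself
# integrable, so Lemma 4 predicts x * F(x) -> 0 as x -> ∞.
F = lambda x: 0.5 * (1 + x)**-2
vals = [x * F(x) for x in (1e1, 1e3, 1e5)]
print(vals)  # decreasing towards 0
```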
Proof of Lemma 3. Since $L$ is independently scattered, we get by (5.4) that the variance splits over the components for any $m\in\mathbb{N}$ and $\Delta>0$. Moreover, in view of the fact that the trawl function is non-negative, continuous and such that $\int_{0}^{\infty}\int_{x}^{\infty}a(s)\,\mathrm{d}s\,\mathrm{d}x<\infty$, Lemma 4 can be applied in order to obtain that $\Delta_{n}m_{n}\,\mathrm{Var}(S^{\Delta_{n},4}_{m_{n}})=\Delta_{n}m_{n}\int_{\Delta_{n}m_{n}}^{\infty}a(s)\,\mathrm{d}s\rightarrow0$.
We proceed now to show that, for every $m\in\mathbb{N}$ and $\Delta>0$, the analogous bound holds, which is exactly (5.8); in a similar way we obtain (5.9). We proceed now to find some estimates for the characteristic function of $S^{\Delta,l}_{m}$, for $l=3,4$. To do this, the following result is essential; its proof follows the lines of the proof of Proposition 3.6 in [21], as well as the well-known inequality $|e^{izx}-1|\leq2\left(|zx|\mathbf{1}_{\{|zx|\leq1\}}+\mathbf{1}_{\{|zx|>1\}}\right)$.
Lemma 5. Let $\psi$ be the characteristic exponent of an ID distribution with mean $0$. Then $\psi$ is continuously differentiable and there is a constant $C>0$, depending only on $(\gamma,b,\nu)$, such that $|\psi(z)|\leq\frac{b}{2}|z|^{2}+C\int_{\mathbb{R}}(1\wedge|xz|^{2})\,\nu(\mathrm{d}x)$, $z\in\mathbb{R}$ (5.10). Then the following estimates hold. Proof. Recall that we assume that $L'$ is centered. By the independently scattered property of $L$, the representation (5.14) follows. The claimed estimates are easily obtained by noting that the corresponding bounds follow from Lemma 6 and the Mean Value Theorem.

Proof of Propositions 1 and 2
Proof of Proposition 1. For simplicity we will assume that $\mu=1$. Following the reasoning in Section 3 of [3], we can always find a measurable modification of $X$, so without loss of generality we may and do assume that $X$ is measurable and that almost surely $\int_{0}^{t}X_{s}^{2}\,\mathrm{d}s<\infty$ for all $t\geq0$. Thus, using the well-known bound and Jensen's inequality, we see that the required estimate holds for any $V>0$ and $t\leq V$, where we have used that $n\Delta_{n}$ is bounded. From this estimate we deduce the claimed convergence as $n\rightarrow\infty$. The result now follows by observing that $|t-[nt]\Delta_{n}|\leq\Delta_{n}+V|1-\Delta_{n}n|$.

Proof of Proposition 2. Thanks to (5.5), we only need to check that $\frac{1}{n}S^{\Delta_{n},l}_{n}\overset{\mathbb{P}}{\rightarrow}0$ for $l=1,2,3$. To see this, observe that the claim follows from equations (5.12)-(5.14) and the continuity of $\psi$, where we have further used that $a$ is continuously differentiable in a neighborhood of $0$. This completes the proof.

Proof of Theorem 1
Our proof in this case relies heavily on the asymptotic behavior of the Lévy measure of $L$ around $0$. It is worth noting that if $L$ is deterministic, then almost surely $Z^{n}_{t}\equiv0$, so by the Lévy-Itô decomposition of Lévy bases (see [20]), in our proof we will always assume that $\gamma=\int_{|x|\leq1}x\,\nu(\mathrm{d}x)$ or $\gamma=0$, depending on whether $\int_{\mathbb{R}}(1\wedge|x|)\,\nu(\mathrm{d}x)<\infty$ or not. In this situation, under Assumption 1, Theorem 2 in [16] establishes that, as $\varepsilon\rightarrow0$, the rescaled characteristic exponent converges: to a Gaussian exponent if $b>0$ and $\beta=2$, and to $\psi(z;\beta,K_{+,\beta},K_{-,\beta},\tilde{\gamma})$ under iii. and $0<\beta<2$ (5.16), where $\psi(\cdot\,;\beta,K_{+,\beta},K_{-,\beta},\tilde{\gamma})$ is as in (2.1). Note that the convergence takes place uniformly on compacts. The proof is divided into several steps: in the first step we show that $S^{\Delta_{n},1}_{n}=o_{\mathbb{P}}(n(n\Delta_{n})^{1/\beta})$; in the second step we argue that $L$ can be assumed to be strictly $\beta$-stable; finally, we show that i. and ii. hold.
Step 1: $S^{\Delta_{n},1}_{n}=o_{\mathbb{P}}(n(n\Delta_{n})^{1/\beta})$. Assume that (5.16) holds, and adopt the notation introduced there. The $C^{1}$ property of $a$ and the fact that $0\leq t_{j}/T_{n}\leq1$ then lead us to the required bound, where we have also used the fact that $\psi_{\beta}$ is strictly stable and continuous. This is enough for the negligibility of $\frac{1}{n(n\Delta_{n})^{1/\beta}}S^{\Delta_{n},1}_{n}$.
Step 3: Proof of i. and ii. We start by showing that the finite-dimensional distributions (f.d.d.) of $Z^{n}$ converge to those stated in the theorem. In the last part we show that the convergence in distribution can be strengthened to stable convergence.
Assume that $b>0$. In this case, by virtue of Step 2, we may and do assume that $\gamma=0$ and $\nu\equiv0$. Accordingly, $Z^{n}$ is a centered Gaussian process satisfying (5.17). Thus, by the independently scattered property of $L$, the convergence in i. is achieved whenever (5.18) holds. To see that this is the case, take $t\geq u\geq0$; then (5.18) follows from an easy application of the Dominated Convergence Theorem. Suppose now that $b=0$ and that Assumption 1 holds. By the previous step, we may and do assume that $L$ is strictly stable with characteristic exponent $\psi_{\beta}$. Therefore, under the notation of Step 2, the strict stability of $\psi_{\beta}$ yields the limiting cumulants as $n\rightarrow\infty$. We have therefore shown that the f.d.d. of $Z^{n}$ converge weakly to those stated in the theorem. Hence, in order to conclude the proof, it remains to verify that the convergence also takes place stably and that the limit is independent of $L$. Let $B$ be a bounded Borel set. Since for every $n\in\mathbb{N}$, $Z^{n}$ is $\mathcal{F}^{L}$-measurable, thanks to Theorem 3.2 in [15] it is sufficient to show (5.19) and the corresponding identity for all $z_{1},\dots,z_{q+1}\in\mathbb{R}$.

Proof of Theorem 2
Here we show the validity of Theorem 2. The proof will be divided into three steps. We first show the convergence of the finite-dimensional distributions; secondly, we verify that our sequence is tight; we conclude by proving that the convergence is also stable. Therefore, for the rest of this subsection we will let Assumption 2 hold. We finally emphasize that, thanks to the Lévy-Itô decomposition of Lévy bases (see [20]) and Lemma 1, we may and do assume that $L$ has no Gaussian component, i.e. $b=0$.
Therefore, from Corollary 1.2.7 in [29], there is a constant $C>0$, depending only on $p$ and $\nu(\cdot)$, such that $I_{n,p,1}\leq C(I^{(1)}_{n,p,1}+I^{(2)}_{n,p,1})$ (5.24), with the obvious notation. Thus, (5.23) is obtained whenever $I^{(1)}_{n,p,1}+I^{(2)}_{n,p,1}=o\left((m_{n}\Delta_{n})^{p/2}\right)$. Observe that the corresponding estimate holds for any $j=1,\dots,m_{n}-1$, $t_{j-1}\leq\zeta\leq t_{j}$ and $p_{0}>q\geq2$. Furthermore, since the distribution function is continuous and bounded, it is also uniformly continuous on $\mathbb{R}$. Using these properties and the fact that $\mu_{p,a}(\mathbb{R})<\infty$, one easily deduces the required negligibility for any $p_{0}>p>2$, which concludes the argument for (5.21). Now, let $\lambda_{0},\dots,\lambda_{r}\in\mathbb{R}$ and $0=t_{0}<t_{1}<\dots<t_{r}=1$. To show the convergence of the finite-dimensional distributions, we are going to verify (5.26). For $p_{0}\wedge3>p>2$, where we have used (5.23), and in view of the fact that $L$ is independently scattered, (5.26) is obtained whenever (5.28) holds. Indeed, for any $N>M>K>U=0$, the relevant cross terms vanish unless $l=q$.
Combining Lemmas 1 and 3, and the stationarity of X, (5.28) follows immediately.
Step 2: Tightness. Using arguments similar to those in the proof of Lemma 2.1 in [28] and Lemma 1, we deduce that $\sqrt{\frac{\Delta_{n}}{n}}S_{n}$ is tight if, for any sequence $m_{n}\in\mathbb{N}$ such that $m_{n}\uparrow\infty$, $\Delta_{n}m_{n}\rightarrow\infty$ and $\Delta_{n}\downarrow0$ as $n\rightarrow\infty$, the corresponding moment bounds hold, where $I_{n,p,1}$ is as in Step 1. We have already seen that $I_{n,p,1}\rightarrow0$ as $n\rightarrow\infty$. On the other hand, invoking once again Corollary 1.2.7 in [29] and using (5.4), we get the analogous bound. Similarly as in Step 1, we deduce that for $p_{0}\wedge3>p>2$ and $n$ large enough the following estimates are valid. Assumption 2 now asserts that, as $n\rightarrow\infty$, $I_{p,4}\leq C(\Delta_{n}m_{n})^{p/2-p_{0}+1}\rightarrow0$, because $2<p<p_{0}<2(p_{0}-1)$. Furthermore, arguments analogous to those used to establish (5.8) and (5.9) can be applied in order to obtain the remaining estimates.

Step 3: Stability. From Step 2, Proposition 3.9 in [15] and its subsequent remark, the stable convergence in $D([0,1])$ will be obtained if (5.28) can be strengthened to $\mathcal{G}^{X}$-stable convergence. Consider the filtration $(\mathcal{F}^{n}_{i})_{i\geq0}$; each summand is $\mathcal{F}^{n}_{i}$-measurable and independent of $\mathcal{F}^{n}_{i-1}$ for all $i=1,\dots,n-1$. Consequently, thanks to (5.28), (5.27) and Theorem 6.1 in [15], $\sum_{q=1}^{r}\lambda_{q}(S^{n}_{t_{q}}-S^{n}_{t_{q-1}})$ converges $\mathcal{G}$-stably, where $\mathcal{G}:=\sigma(\cup_{n\geq1}\cap_{N\geq n}\mathcal{F}^{N}_{n})$; but in view of the fact that $\sigma(X_{0},X_{\Delta_{n}},\dots,X_{i\Delta_{n}})\subseteq\mathcal{F}^{n}_{i}$, $i=0,1,\dots,m_{n}-1$, it follows immediately that $\mathcal{G}^{X}\subseteq\mathcal{G}$, which concludes the proof.

Proof of Theorems 3-5
In this subsection we justify the statements of Theorems 3-5. In what follows, $(\gamma,b,\nu)$ and $\psi$ will denote, respectively, the characteristic triplet and the characteristic exponent of $L$. We note that, for the sake of exposition, the proof of each theorem is given in a corresponding subsubsection.

b > 0
For every $n\in\mathbb{N}$ we will let $r_{n}=\frac{1}{n\sqrt{a(n\Delta_{n})n\Delta_{n}}}$. Observe that the same argument used in Step 2 of the proof of Theorem 2, together with Lemma 1, automatically gives that $r_{n}S_{n}$ is tight in $D([0,1])$ if $\mathrm{E}(|L'|^{2})<\infty$. Therefore, we only need to show that, within the framework of Assumption 3, the finite-dimensional distributions of $r_{n}S_{n}$ converge stably to those of the fBm with index $H=2-\kappa/2$.
Before proceeding with the proof, we would like to emphasize that in this situation the Lyapunov condition is not satisfied in general; indeed, if for instance $L$ is symmetric, then Rosenthal's inequality only yields, for any $p>2$, a bound that is too weak for our purposes. Nevertheless, from the proof of Theorem 3 ii. below and the Lévy-Itô decomposition for Lévy bases, it follows that the non-Gaussian component of $r_{n}S_{n}$ is negligible. Consequently, we may and do assume in this part that $\gamma=0$ and $\nu\equiv0$.
Thus, thanks to the Lindeberg-Feller Theorem and Theorem 6.1 in [15], for the stable convergence of the f.d.d. of $r_{n}S_{n}$ to those of the fBm we only need to check the asymptotic behavior of the corresponding variances as $n\rightarrow\infty$; for this, we can use estimates analogous to those derived in the proof of Lemma 1. Relation (5.33) now follows easily from this and KT.

b = 0
In this part, unless otherwise stated, we will always assume that $b=0$. Observe that under the assumptions of Theorem 4 ii., (5.16) is once again valid. We recall to the reader that we are also assuming that $L'$ has mean zero. Finally, we would like to stress that, by replacing $\beta_{\nu}$ by $2$ below, it follows that $\frac{1}{n\left(a(n\Delta_{n})n\Delta_{n}\right)^{1/\beta_{\nu}}}S^{n}_{t}\overset{\mathbb{P}}{\rightarrow}0$, as $n\rightarrow\infty$.
We now proceed to present a proof for Theorem 4.
Proof of Theorem 4. The proof is organized as follows: first, based on our assumptions, we derive some preliminary estimates; secondly, using (5.5), we approximate the characteristic function of $S^{n}_{t}$ by means of $I^{\Delta,1}_{n}(\cdot)$ and $I^{\Delta,2}_{n}(\cdot)$, where the latter are as in Lemma 6; we conclude by applying such an approximation to obtain the desired result. For the rest of the proof we will use the notation $T_{n}=n\Delta_{n}$, $r_{n}=a(T_{n})T_{n}$ and $\bar{\alpha}=\kappa-1$.

Preliminary estimates: First, by means of Assumption 4, we can invoke the so-called Potter's bounds (see Theorem 1.5.6 in [7]). Such a result provides the existence of a positive constant, depending only on $\varepsilon>0$, such that for all $0\leq r\leq1$ and $s>0$
$$\frac{a(T_{n}s)}{a(T_{n})}\leq C\left(s^{-\bar{\alpha}-\varepsilon}\vee s^{-\bar{\alpha}+\varepsilon}\right),\qquad\frac{a(r\Delta_{n}+T_{n}s)}{a(T_{n})}\leq C\left(s^{-\kappa-\varepsilon}\vee s^{-\kappa+\varepsilon}\right).\quad(5.34)$$
From Lemma 5, for every $2\geq\theta>\beta_{\nu}$, we have that for $|z|>1$
$$|\psi(z)|\leq C\left(|z|^{2}\wedge|z|^{\theta}\right),\quad(5.35)$$
while for $|z|\leq1$, by the square integrability of $L'$, we easily obtain that $|\psi(z)|\leq C|z|^{2}$. On the other hand, using that $\mathrm{E}(|L'|^{2})<\infty$, it follows that, as $x\rightarrow\infty$, $\bar{\nu}_{\pm}(x)=O(x^{-\theta})$ for all $\theta\leq2$.
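Potter's bounds can be checked numerically; the regularly varying function below (with a logarithmic slowly varying factor) and the constants are illustrative choices:

```python
import math

# Potter's bounds (Theorem 1.5.6 in [7]): for a regularly varying a and any eps > 0
# there is C > 0 with a(T*s)/a(T) <= C * (s**(-alpha-eps) v s**(-alpha+eps)), T large.
alpha, eps, T, C = 1.2, 0.1, 1e6, 2.0
a = lambda x: x ** -alpha * math.log(math.e + x)
for s in (0.01, 0.5, 1.0, 10.0, 1000.0):
    ratio = a(T * s) / a(T)
    bound = C * max(s ** (-alpha - eps), s ** (-alpha + eps))
    print(s, ratio <= bound)
```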
Furthermore, in view of (5.29) and (5.30), $(S^{\Delta_{n}}_{[nt]})_{t\geq0}$ has independent increments. From this observation and the stationarity of $X$, in order to finish the proof we only need to check the convergence of the one-dimensional marginals as $n\rightarrow\infty$, with $Y$ as in the theorem. For simplicity we let $t=1$. Suppose now that $z>0$; then, by the change of variables $x=\left(\frac{sz}{c_{a}T_{n}}\right)^{\frac{1}{\kappa-1}}$, we see that the relevant integral can be rewritten accordingly. An argument similar to (5.38) shows that, for $\varepsilon$ small enough, the Dominated Convergence Theorem can be applied. The result now follows from Fubini's Theorem and the relation (see Lemma 14.11 in [23])
$$\int_{0}^{\infty}\left(e^{\pm ir}\mp ir-1\right)r^{-\bar{\alpha}-1}\,\mathrm{d}r=\frac{\Gamma(2-\bar{\alpha})}{\bar{\alpha}(\bar{\alpha}-1)}e^{\mp i\frac{\pi\bar{\alpha}}{2}},\qquad1<\bar{\alpha}<2.$$
Hence, regarding tightness, using the fact that $L^{(n)}$ is an independently scattered random measure, the relevant fourth-moment bound is controlled, up to a constant, by
$$n^{2}\Big[\mathrm{Var}\big(L^{(n)}(\tilde{A}_{t,s})\big)\mathrm{Var}\big(L^{(n)}(C_{t,s,r})\big)+\mathrm{Var}\big(L^{(n)}(\tilde{A}_{t,s})\big)\mathrm{Var}\big(L^{(n)}(D_{s,r})\big)+\mathrm{Var}\big(L^{(n)}(\tilde{A}_{t,s})\big)\mathrm{Var}\big(L^{(n)}(B_{t,s,r})\big)+\mathrm{Var}\big(L^{(n)}(\tilde{E}_{t,s,r})\big)\mathrm{Var}\big(L^{(n)}(C_{t,s,r})\big)+2\,\mathrm{Var}\big(L^{(n)}(\tilde{E}_{t,s,r})\big)\mathrm{Var}\big(L^{(n)}(D_{s,r})\big)+\mathrm{Var}\big(L^{(n)}(\tilde{E}_{t,s,r})\big)\mathrm{Var}\big(L^{(n)}(B_{t,s,r})\big)+\mathrm{Var}\big(L^{(n)}(B_{t,s,r})\big)\mathrm{Var}\big(L^{(n)}(C_{t,s,r})\big)+\mathrm{Var}\big(L^{(n)}(B_{t,s,r})\big)\mathrm{Var}\big(L^{(n)}(D_{s,r})\big)\Big].$$
Let us concentrate first on $\mathrm{Var}\big(L^{(n)}(\tilde{A}_{t,s})\big)\mathrm{Var}\big(L^{(n)}(D_{s,r})\big)$.
We have $\mathrm{Var}\big(L^{(n)}(\tilde{A}_{t,s})\big)\mathrm{Var}\big(L^{(n)}(D_{s,r})\big)=\frac{9}{n^{2}}\big(b^{(n)}\big)^{2}+\cdots$ for some positive constant $K$. Therefore, we conclude the proof of tightness.

Appendix: The fourth moment of the trawl process