On discrete-time self-similar processes with stationary increments

In this paper we study self-similar processes with stationary increments in a discrete-time setting. In contrast to the continuous-time case, it is shown that the scaling function of such a process need not take the form of a power function $b(a)=a^H$. More precisely, its scaling function can belong to one of three types, among which one type is degenerate, one type has a continuous-time counterpart, while the third type is new and unique to the discrete-time setting. We then focus on this last type of processes, construct two classes of examples, and prove a special spectral representation result for processes of this type. We also derive basic properties of discrete-time self-similar processes with stationary increments of the different types.


Introduction
Self-similar processes have been an important research topic in stochastic processes for a long time, due to their technical tractability and their various applications in areas such as finance and physics. A general introduction to self-similar processes can be found, for example, in [2] and [8].
Among self-similar processes, those having stationary increments, abbreviated as "ss-si processes", often attract special attention from researchers. The ss-si processes combine two types of probabilistic symmetry: self-similarity, corresponding to the invariance of the distribution under rescaling, and stationarity of the increments, corresponding to the invariance of the distribution of the increments under translation. As a result, they possess many desirable properties and include commonly used processes such as fractional Brownian motion and stable Lévy processes.
The classical setting for self-similar processes is continuous time, i.e., $t \in [0, \infty)$. In this case, if a process $X = \{X(t)\}_{t\ge 0}$ satisfies that for any $a > 0$, there exists $b(a) > 0$ such that $\{X(at)\}_{t\ge 0} \stackrel{d}{=} \{b(a)X(t)\}_{t\ge 0}$, then $X$ is said to be self-similar. It is easy to show that if the process is in addition nontrivial and stochastically continuous at $0$, then the only possible function $b$ making this condition hold is $b(a) = a^H$ for some $H \ge 0$ ([2]). Consequently, self-similar processes are also often defined as processes $X$ such that $\{X(at)\}_{t\ge 0} \stackrel{d}{=} \{a^H X(t)\}_{t\ge 0}$. It should be noted, however, that this second definition is not what the term "self-similar" originally or literally means. It is taken as a definition simply because of the equivalence between the two definitions of self-similarity. Logically, if one takes the first definition as the original one, then the second definition should be regarded as a property of self-similar processes.
In this paper we consider dt-ss-si processes, the self-similar processes with stationary increments defined on $\mathbb{N}_0$, the set of all non-negative integers, instead of on $[0, \infty)$. Self-similarity in discrete time becomes $\{X_{nm}\}_{m\in\mathbb{N}_0} \stackrel{d}{=} \{b(n)X_m\}_{m\in\mathbb{N}_0}$, where $n$ can now only be a positive integer. Interestingly, it turns out that when the process is defined on $\mathbb{N}_0$, the two definitions of self-similarity are no longer equivalent. More precisely, besides the case $b(n) = n^H$, $H > 0$ and the degenerate case $b(n) \equiv 1$, a new possibility $b(n) = (|n|_p)^H$ arises, where $H > 0$ and $|n|_p$ is the $p$-adic norm of $n$. As one can see from the later parts of this paper, this departure from the continuous-time case is mainly due to the discretization of the possible rescaling factors and the dropping of the continuity requirement, which no longer makes sense in the discrete-time setting.
As the new, nondegenerate type in the discrete-time setting, the case $b(n) = (|n|_p)^H$, $H > 0$ is studied further in this paper. Two classes of dt-ss-si processes having such a scaling function $b$ are constructed. Moreover, we find that the dt-ss-si processes of this type which are in $L^2$ admit a very particular spectral representation. Roughly speaking, such a process can always be decomposed into waves with periods equal to different powers of $p$ and with magnitudes decreasing in the period.
The rest of this paper is organized as follows: Section 2 introduces basic settings and notations. Section 3 establishes the classification result, followed by an embedding result for dt-ss-si processes with $b(n) = n^H$ and basic properties of dt-ss-si processes of the different types. Sections 4 and 5 focus on the dt-ss-si processes with $b(n) = (|n|_p)^H$. We give two classes of such processes in Section 4, then state and prove the spectral representation using the notion of almost periodic functions in Section 5.

Basic settings and notations
Let $\mathbb{N}_0 = \{0, 1, \dots\}$ be the set of non-negative integers and $\mathbb{N} = \{1, 2, \dots\}$ be the set of positive integers. We first extend the definition of self-similarity to discrete time. In order to ensure that the rescaled process is comparable to the original process, the scaling factor must be a positive integer in this case. Therefore, we say that a process $X = \{X_m\}_{m\in\mathbb{N}_0}$ is self-similar if for every $n \in \mathbb{N}$ there exists $b(n) > 0$ such that
$$\{X_{nm}\}_{m\in\mathbb{N}_0} \stackrel{d}{=} \{b(n)X_m\}_{m\in\mathbb{N}_0}. \tag{2.1}$$
Here and later, "$\stackrel{d}{=}$" means equality in the sense of distribution, i.e., the two sides of this symbol have the same distribution.
Recall that a discrete-time stochastic process $X = \{X_m\}_{m\in\mathbb{N}_0}$ is said to have stationary increments if its increment process is stationary. In other words, for any $k \in \mathbb{N}$,
$$\{X_{m+k} - X_k\}_{m\in\mathbb{N}_0} \stackrel{d}{=} \{X_m - X_0\}_{m\in\mathbb{N}_0}. \tag{2.2}$$
In this paper we are mainly interested in the discrete-time self-similar processes with stationary increments, abbreviated as dt-ss-si. Thus, the dt-ss-si processes are the processes that satisfy both (2.1) and (2.2).
Denote by $P = \{2, 3, 5, \dots\}$ the set of all primes. It can easily be seen from (2.1) that the scaling function $b(n)$ of a discrete-time self-similar process must be completely multiplicative, i.e., $b(m)b(n) = b(mn)$ for all $m, n \in \mathbb{N}$. Consequently, for $n = \prod_{p_i \in P} p_i^{r_i}$ we have $b(n) = \prod_{p_i \in P} (b(p_i))^{r_i}$, hence $b$ is determined by its values on $P$. On the other hand, any completely multiplicative function $b: \mathbb{N} \to \mathbb{R}_+$ is a legitimate scaling function for some discrete-time self-similar process. A simple example is given by $X_0 = 0$, $X_n = b(n)X_1$ for $n \ge 1$.
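As a quick sanity check on these notions, the following Python sketch (our own illustration, not part of the paper) computes the $p$-adic norm and verifies the complete multiplicativity $b(m)b(n)=b(mn)$ for representative scaling functions of the three types that appear later; the function names and the sample parameters $H=1$, $p=3$ are our choices.

```python
from fractions import Fraction

def p_adic_norm(n, p):
    """|n|_p = p^{-r}, where p^r is the largest power of p dividing n (n >= 1)."""
    if n == 0:
        raise ValueError("|0|_p = 0 by convention; excluded here")
    r = 0
    while n % p == 0:
        n //= p
        r += 1
    return Fraction(1, p ** r)

# Candidate scaling functions for the three types (illustrative H = 1, p = 3):
H, p = 1.0, 3
b_I   = lambda n: 1.0                              # degenerate type
b_II  = lambda n: float(p_adic_norm(n, p)) ** H    # p-adic type
b_III = lambda n: n ** H                           # power-function type

# Complete multiplicativity b(m)b(n) = b(mn) holds for all three candidates:
for b in (b_I, b_II, b_III):
    for m in range(1, 30):
        for n in range(1, 30):
            assert abs(b(m) * b(n) - b(m * n)) < 1e-9
```

For instance, $|18|_3 = 1/9$ since $3^2$ is the largest power of $3$ dividing $18$.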

Classification and properties of dt-ss-csi processes
In this part we show that dt-ss-si processes can be classified into three types according to their scaling functions. Among these three types, one has a continuous-time counterpart, one is degenerate, while the third exists only in the discrete-time case. It turns out that the results in this section actually hold for a larger family of processes, for which both the self-similarity and the stationarity of the increments are only required to hold marginally. Moreover, the stationarity of the increments can be relaxed to cyclostationarity with some integer period. We begin this section by generalizing these notions, and will work with processes which are marginally self-similar with marginally cyclostationary increments throughout this section.
Definition 3.1. Let $X = \{X_m\}_{m\in\mathbb{N}_0}$ be a discrete-time stochastic process. $X$ is said to be marginally self-similar if (2.1) holds marginally, i.e., for any $m \in \mathbb{N}_0$ and $n \in \mathbb{N}$,
$$X_{nm} \stackrel{d}{=} b(n)X_m.$$
Definition 3.2. Let $\tau \in \mathbb{N}$. A stochastic process $X = \{X_m\}_{m\in\mathbb{N}_0}$ is said to have marginally cyclostationary increments with period $\tau$ if for any $k, m \in \mathbb{N}_0$,
$$X_{m+\tau+k} - X_{m+\tau} \stackrel{d}{=} X_{m+k} - X_m.$$
A discrete-time, marginally self-similar process with marginally cyclostationary increments with period $\tau$ is denoted as dt-ss-csi($\tau$), or dt-ss-csi when it is not necessary to specify the value of $\tau$.
The case $X \equiv 0$ being trivial, we can assume that the distribution of $X_1$ is nondegenerate. Denote by $F_1$ its distribution function. Since $-X$ is dt-ss-csi if and only if $X$ is dt-ss-csi, we can assume without loss of generality that $P(X_1 > 0) > 0$. This is equivalent to assuming that $F_1$ is not identically $1$ on $(0, \infty)$. Moreover, for any probability distribution $F$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ and $a \in \mathbb{R}$, denote by "$Y/a \sim F$" the relation $P(Y \in B) = F(aB)$ for any Borel set $B \subset \mathbb{R}$, where $aB = \{ax : x \in B\}$. Here and later, we always identify a probability distribution on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ with its distribution function.
The main result of this section is the following Ostrowski-type classification theorem. Note that since the marginal distributions of $X$ are not necessarily in $L^1$, we do not have the triangle inequality required for a direct application of the classical Ostrowski theorem.
Theorem 3.3. The scaling function $b(n)$ of a dt-ss-csi($\tau$) process must be one of the following:
(1) $b(n) = 1$ for all $n \in \mathbb{N}$.
(2) There exist a unique prime $p$ and $H > 0$ such that $b(n) = (|n|_p)^H$. In other words, $b(p) < 1$ and $b(q) = 1$ for all $q \in P$, $q \neq p$.
(3) There exists $H > 0$ such that $b(n) = n^H$ for all $n \in \mathbb{N}$.
Moreover, for any completely multiplicative function b(n) on N satisfying one of the conditions above, there exists a non-trivial dt-ss-si process having b(n) as its scaling function.
Some preparatory results are needed to prove Theorem 3.3.
In the proof of Proposition 3.4, since $Q_1$ is non-decreasing and not constantly $0$ on $(0, 1)$, there exists $s \in \mathbb{R}$ such that $Q_1(s) > 0$ and $Q_1$ is continuous at $s$. As a result, there exists $\epsilon \in (0, \min\{s, 1-s\})$ satisfying It suffices to take

As immediate consequences of Propositions 3.4 and 3.5, we have

Corollary 3.6. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-csi($\tau$) process, and $b(n)$, $n \in \mathbb{N}$ be its scaling function. Then for any $m \in \mathbb{N}$, $k > 1$, there exists $c_m \in \mathbb{R}$, such that

Proof. By the cyclostationarity of the increments, $X_{n+m} - X_n \stackrel{d}{=} X_{n'+m} - X_{n'}$, where $n' \in \{0, \dots, \tau-1\}$ and $n \equiv n' \pmod{\tau}$. Then apply Proposition 3.5 with $G_1$, $G_2$, $G_3$ being the distributions of

Corollary 3.7. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-csi($\tau$) process, and $b(n)$, $n \in \mathbb{N}$ be its scaling function, which is not identically $1$. Then for any $m \in \mathbb{N}$, there exists

Proof. Similarly to the proof of Corollary 3.6, where the second equality holds since the (marginal) self-similarity clearly implies $X_0 = 0$ almost surely when $b(n) \not\equiv 1$. Applying Proposition 3.4 with $G_1$, $G_2$ and $G_3$ being the distributions of $X_m$, $X_1$, $-X_1$ respectively, and $a_1 = 1$, $a_2 = b(n\tau)$, $a_3 = b(n\tau+m)$, there exist constants $k_{2,m}$ and $k_{3,m}$ such that $1 \le k_{2,m}b(n\tau) + k_{3,m}b(n\tau+m)$. Moreover, as $b(n\tau)$ and $b(n\tau+m)$ are non-negative, $k_{2,m}$ and $k_{3,m}$ can be chosen to be strictly positive.

Let $A$ be the set of possible values of $f$ at the prime numbers. We first prove by contradiction that $A$ is bounded from above. Suppose $\sup(A) = \infty$. Then for any $L > 0$, $P_L := \{p \in P : f(p) \ge L\}$ is not empty. Choose $L$ large enough such that $2 \notin P_L$. Denote by $p_L$ the smallest element of $P_L$. Then for any $n < p_L$, $b(n) < n^L$. Hence contradicting Corollary 3.6. Hence $A$ must be bounded from above.

Next we show that if $\sup(A) > 0$, then this supremum must be achieved by some prime. Write $H = \sup(A)$. We prove that $f(2) = H$, which will give a contradiction. Suppose $f(2) = H - \varepsilon < H$ for some $\varepsilon > 0$. For each $n \in \mathbb{N}$, let $p_n$ be the smallest prime such that $f(p_n) > H - \frac{1}{n}$.
Thus the sequences $\{p_n\}$ and $\{f(p_n)\}$ are both non-decreasing, with limits $\infty$ and $H$ respectively. Let $N \in \mathbb{N}$ be such that $1/N < \varepsilon$ and $H - 1/N > 0$; then for $n \ge N$, we have, by a similar argument as in (3.7), This contradicts Corollary 3.6, as for $k \in (1, K)$ and any As a result, if $\sup(A) > 0$, there must exist $p \in P$ such that $f(p) = \sup(A)$. We show that in this case $f(q) = \sup(A)$ for every $q \in P$; as a result, $b(n) = n^H$ for $n \in \mathbb{N}$ with $H = \sup(A) > 0$. Suppose this is not true; then there exists $q \in P$ satisfying $f(q) < f(p)$. For each $r \in \mathbb{N}$ satisfying $p^r > q$, there exists $m \in \{1, \dots, q-1\}$ such that $q \mid p^r - m$. By Corollary 3.6, for any $k > 1$, we have By the choice of $q$, $b(q)q^{-H} < 1$. Hence (3.9) cannot hold for $k \in (1, q^H/b(q))$ and $r$ large enough. Thus we conclude that $f(q) = \sup(A)$, and consequently $b(n) = n^H$.
It remains to consider the case $\sup(A) \le 0$, which is equivalent to $b(n) \le 1$ for all $n \in \mathbb{N}$. Suppose there exist two distinct primes $p, q \in P$ such that $b(p) < 1$ and $b(q) < 1$. Let $d_m$, $m = 1, \dots, \tau$ be as given in Corollary 3.7 and define $d =$ Therefore there exists at most one prime $p$ such that $b(p) < 1$. This leads to cases (1) and (2).
Finally, for any completely multiplicative function $b(n)$ satisfying one of the three conditions in Theorem 3.3, there exists a non-trivial dt-ss-si process having $b(n)$ as its scaling function, by Examples 3.8, 4.1 and 4.5 and Theorem 3.9 below.
It should be pointed out that a similar result was obtained in [3] for second-order dt-ss-si processes, i.e., processes in $L^2$ whose covariance functions satisfy properties related to the self-similarity and the stationarity of the increments of the process. In this sense, Theorem 3.3 can be regarded as a generalization of that result to general dt-ss-si processes which are not necessarily in $L^2$.
Example 3.8. Let $X_n$, $n \in \mathbb{N}_0$ be independent and identically distributed random variables. Then $\{X_n\}_{n\in\mathbb{N}_0}$ is a trivial example of a dt-ss-si process with $b(n) = 1$, $n \in \mathbb{N}$.
We call the dt-ss-si processes with scaling functions satisfying the three cases of Theorem 3.3 processes of types I, II and III, respectively. Type III is the type familiar from the continuous-time ss-si processes. The following theorem shows that there is indeed a correspondence between the continuous-time ss-si processes and the dt-ss-si processes of type III.

Theorem 3.9. If $\{X(t)\}_{t\ge 0}$ is an ss-si process, then $\{X_n\}_{n\in\mathbb{N}_0}$ given by $X_n = X(n)$, $n \in \mathbb{N}_0$ is a dt-ss-si process. Conversely, if $\{X_n\}_{n\in\mathbb{N}_0}$ is a dt-ss-si process with scaling function $b(n) = n^H$ for some $H > 0$, then there exists an ss-si process, unique in distribution, which has the same distribution on $\mathbb{N}_0$.

Proof. An ss-si process observed at the discrete times $\mathbb{N}_0$ is clearly a dt-ss-si process, so we focus on the other direction. For that purpose, we derive the distribution of the ss-si process $Y = \{Y(t)\}_{t\ge 0}$ from an arbitrary dt-ss-si process $X = \{X_n\}_{n\in\mathbb{N}_0}$, so that the two have the same distribution on $\mathbb{N}_0$. This distribution does not depend on the choice of $s_i$ and $t_i$, $i = 1, \dots, n$. Moreover, since the original finite-dimensional distributions on $\mathbb{N}_0$ are consistent, the finite-dimensional distributions on $\mathbb{Q}^+$ are also consistent. By Kolmogorov's extension theorem, there exists a process $\{Y_r\}_{r\in\mathbb{Q}^+}$ whose distribution is given by (3.10). Such a process is ss-si on $\mathbb{Q}^+$. Indeed, for any $n \in \mathbb{N}$, $p, s_1, \dots, s_n \in \mathbb{N}_0$ and $q, t_1, \dots, t_n \in \mathbb{N}$, (3.11) Also, by the stationarity of the increments of $\{X_n\}_{n\in\mathbb{N}_0}$, (3.12) Finally, as it is proved in [9] that every ss-si process with $H > 0$ is stochastically continuous, the distribution on $\mathbb{Q}^+$ extends uniquely to a distribution on $[0, \infty)$. The self-similarity and the stationarity of the increments are naturally inherited. Thus, we conclude that any dt-ss-si process with $b(n) = n^H$, $H > 0$ determines an ss-si process, unique in distribution, which has the same distribution on $\mathbb{N}_0$ as the dt-ss-si process.
The following proposition collects several basic properties of dt-ss-si processes of type III. They are direct consequences of Theorem 3.9 and the corresponding results in continuous time, which we cite individually.
More interestingly, for the dt-ss-si processes of type II, which have no continuous-time counterparts, we have the following.

Proposition 3.11. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-si process of type II with $b(n) = (|n|_p)^H$ for some $H > 0$. Then:
(1) $X_0 = 0$ almost surely.
(2) $\{X_n\}_{n\in\mathbb{N}_0}$ is recurrent, in the sense that each $X_n$ is a limit point of $\{X_n\}_{n\in\mathbb{N}_0}$ almost surely.
Proof. (1) and (2) are immediate from the definition. For (3), note that by the stationarity of the increments, for any $n \in \mathbb{N}$, Since $X_{p^n} \stackrel{d}{=} p^{-nH}X_1$ and $H > 0$, $X_{p^n} \to 0$ in distribution and hence in probability as $n \to \infty$. We thus have (4) We have for $m, n \in \mathbb{N}_0$ and any $M > 0$, thus equation (14) in [4] can be replaced by $\lim_{n\to\infty} P(X_m = X_{p^n} = 0) = 0$.
The rest of the proof follows as in Lemma 3 of [4].

Examples of dt-ss-si processes of type II
As shown in the previous section, the dt-ss-si processes can be classified into three types. Type I is degenerate and type III has continuous-time counterparts. Type II, for which $b(n) = (|n|_p)^H$, exists only in the discrete-time setting and is therefore of special interest. Sections 4 and 5 are mainly dedicated to the study of this type. In this section we give two classes of examples of dt-ss-si processes of type II.
Example 4.1. Let $p \in P$ and $0 < b < 1$. Let $\{Y^k_n\}_{k\in\mathbb{N}_0,\, 0\le n\le p^{k+1}-1}$ be a sequence of independent and identically distributed random variables having any non-degenerate distribution such that $E(|Y^0_0 - Y^1_0|^q) < \infty$ for some $q > 0$. A sufficient condition for this is, for example, that $Y^0_0 \in L^1(\Omega, \mathcal{F}, P)$, or that $Y^0_0$ is $\alpha$-stable with $0 < \alpha \le 2$. Extend the sequence periodically to $\{Y^k_n\}_{k,n\in\mathbb{N}_0}$ by defining $Y^k_\ell = Y^k_n$ for $\ell \equiv n \pmod{p^{k+1}}$. Define It is easy to see that the above summation converges almost surely for any $n \in \mathbb{N}_0$, so $\{X_n\}_{n\in\mathbb{N}_0}$ is well-defined. Indeed, $\{X_n\}_{n\in\mathbb{N}_0}$ is a dt-ss-si process of type II; we show this in the following proposition.

Proof. We first show that $\{X_{qn}\}_{n\in\mathbb{N}_0} \stackrel{d}{=} \{X_n\}_{n\in\mathbb{N}_0}$ for $q \in P$, $q \ne p$. Note that for any fixed $k \in \mathbb{N}_0$, by the periodicity of $Y^k_n$, $\{Y^k_{qn}\}_{0\le n<p^{k+1}}$ is just a permutation of $\{Y^k_n\}_{0\le n<p^{k+1}}$, hence also a sequence of independent and identically distributed random variables. Moreover, both $\{Y^k_n\}_{n\in\mathbb{N}_0}$ and $\{Y^k_{qn}\}_{n\in\mathbb{N}_0}$ have period $p^{k+1}$ with respect to $n$. Thus, Since the sequences with different values of $k$ are independent, To show $\{X_{pn}\}_{n\in\mathbb{N}_0} \stackrel{d}{=} \{bX_n\}_{n\in\mathbb{N}_0}$, note that by independence, where the last equality follows from $Y^0_{np} = Y^0_0$, $n \in \mathbb{N}_0$. Therefore, according to (4.1), we have $\{X_{pn}\}_{n\in\mathbb{N}_0} \stackrel{d}{=} \{bX_n\}_{n\in\mathbb{N}_0}$. Finally, for the stationarity of the increments, note that the process $\{Y^k_n\}_{n\in\mathbb{N}_0}$ is stationary, so $\{Y^k_n - Y^k_0\}_{n\in\mathbb{N}_0}$ has stationary increments for all $k \in \mathbb{N}_0$. Again by the independence of the components with different values of $k$, $\{X_n\}_{n\in\mathbb{N}_0}$ has stationary increments.

In view of [4] and [9], it is not clear whether, for a continuous-time ss-si process $\{X(t)\}_{t\ge 0}$ with $0 < H < 1$, the support of $X(1)$ must be unbounded. By Example 4.1, we know this is not true for dt-ss-si processes of type II when the support of $Y^0_0$ is bounded. Another open problem raised in [4] asks whether the distribution of $X(1)$ must be absolutely continuous on $\mathbb{R} \setminus \{0\}$ when $H > 0$.
The answer is also negative for our dt-ss-si processes of type II. It is easy to see that $X_n$ can be expressed in a form where the summands indexed by $k \in \mathbb{N}_0$ are independent and identically distributed random variables. When the support of $Y^0_0$ is finite, this corresponds to an extension of the Bernoulli convolutions in [5]. When $b$ is the reciprocal of a Pisot number in a certain interval, the distribution of $X_n$ is singular. This is also the case when $b$ is close enough to $0$, where the support of $X_n$ is a Cantor-type set, again provided that the support of $Y^0_0$ is finite.
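Since the defining summation of Example 4.1 was lost in the display above, the following Python sketch illustrates one natural reading of the construction, $X_n=\sum_{k\ge 0}b^k(Y^k_n-Y^k_0)$ with $Y^k$ extended periodically with period $p^{k+1}$; the explicit formula, the truncation level $K$, and the choice of bounded $\pm 1$ level variables are our assumptions, made purely for illustration. With bounded levels, the sample paths are uniformly bounded by $2/(1-b)$, matching the bounded-support discussion above.

```python
import numpy as np

rng = np.random.default_rng(0)
p, b, K, N = 2, 0.5, 12, 64          # truncate the infinite sum at K levels

def sample_path(N):
    """One truncated sample path X_0, ..., X_{N-1} of the level construction."""
    X = np.zeros(N)
    for k in range(K):
        period = p ** (k + 1)
        Yk = rng.choice([-1.0, 1.0], size=period)   # i.i.d. bounded level-k draws
        Y = Yk[np.arange(N) % period]               # periodic extension, period p^{k+1}
        X += b ** k * (Y - Yk[0])                   # level-k contribution
    return X

X = sample_path(N)
assert X[0] == 0.0                       # X_0 = 0, as required for type II
assert np.max(np.abs(X)) <= 2 / (1 - b)  # bounded support with bounded levels
```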
Example 4.5. Fix $p \in P$, $b \in (0, 1)$. Let $u$ be a $p$-dimensional random vector whose entries $u_0, \dots, u_{p-1}$ sum to $0$. For $k \in \mathbb{N}_0$, let $\{V_k(n)\}_{n\in\mathbb{N}_0}$ be the stochastic process given by Let $\{U^j_k\}_{k\in\mathbb{N}_0,\, 0<j<p^{k+1}}$ be a sequence of independent random variables such that It is easy to see that $\{Y^j_k(n)\}_{n\in\mathbb{N}_0}$ is stationary, has period $p^{k+1}$, and sums to zero over each period, since the entries of $u$ sum to zero. Moreover, these stationary sequences are independent conditionally on $u$, by the independence of $\{U^j_k\}_{k\in\mathbb{N}_0,\, 0<j<p^{k+1}}$. For $j = 1, 2, \dots, p^{k+1}-1$, define which again has period $p^{k+1}$ and zero sum over each period. Let $\{J_k\}_{k\in\mathbb{N}_0}$ be another sequence of independent random variables, with $J_k$ uniform on $\{0, 1, \dots, p^{k+1}-1\}$, independent of $u$ and $\{U^j_k\}_{k\in\mathbb{N}_0,\, 0<j<p^{k+1}}$. Finally, define the random sequence for $n \in \mathbb{N}_0$, which converges almost surely since $0 < b < 1$. Note that if $u$ is bounded, then $X_n$ is bounded uniformly in $n$.

Proof. As a mixture of dt-ss-si processes with a common scaling function is again a dt-ss-si process with the same scaling function, it suffices to prove the result for the case where $u$ is deterministic.
The stationarity of the increments of $\{X_n\}_{n\in\mathbb{N}_0}$ follows directly from the stationarity of $\{Y^j_k(n)\}_{n\in\mathbb{N}_0}$, hence also of $\{Y_{k,j}(n)\}_{n\in\mathbb{N}_0}$, and the independence of the sequences with different values of $k$ and $j$.
To show the self-similarity, first note that where the last equality in distribution follows from the fact that $J_k$ is uniformly distributed and independent of everything else. Since the components with different values of $k$ are independent, we must have For $\{X_{pn}\}_{n\in\mathbb{N}_0}$, note that by the construction of $V_k$, for any $i \in \mathbb{N}_0$, where $\lfloor\cdot\rfloor$ denotes the floor function, the largest integer smaller than or equal to its argument. Hence Moreover, because $J_k$ is uniformly distributed on $\{0, \dots, p^{k+1}-1\}$, $\lfloor J_k/p \rfloor$ is uniformly distributed on $\{0, \dots, p^k-1\}$. Hence by the independence of the $U^j_k$ with different values of $k$ and $j$, Again by independence, a change of index $k' = k-1$ leads to where the term with $k = 0$ on the right-hand side of the first line can be dropped, since $Y_{0,j}$ has period $p$ and the entries in one period sum to $0$. Therefore, $\{X_n\}_{n\in\mathbb{N}_0}$ is dt-ss-si with scaling function given by $b(p) = b$ and $b(q) = 1$ for all $q \in P$, $q \ne p$.
Remark 4.7. In the case where $u$ is deterministic, one can show that the distribution of $X_n$ is also of Bernoulli convolution type; that is, with suitable notation, the components $\{X^{(k)}_n\}_{k\in\mathbb{N}_0}$ are independent and identically distributed. One can also prove that the class of marginal distributions given here belongs to the class given in Example 4.1, by letting $Y^0_0$ follow the same distribution as $\sum_{k=0}^{J_0} u_k$. However, the joint distributions differ when $p > 2$, except in certain trivial cases, which is not hard to see from the dependence structures of $\{X^{(k)}_n\}_{1\le n<p}$. The proof is purely combinatorial and omitted here.
Remark 4.8. In Example 4.5 the processes $\{Y^j_k(n)\}_{n\in\mathbb{N}_0}$ with different values of $k$ and $j$ share a common $u$. Following the same derivation as in the proof of Proposition 4.6, one easily sees that the result still holds if $u$ is replaced by a sequence of independent copies $\{u_k\}_{k\in\mathbb{N}_0}$, as long as the summation in (4.2) converges. For such processes, the $\{Y^j_k(n)\}_{n\in\mathbb{N}_0}$ with different values of $k$ are independent, while in Example 4.5 they are only conditionally independent given $u$.

Spectral representation
Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-si process of type II, with scaling function $b(n) = (|n|_p)^H$ for some $H > 0$. Intuitively, since $b(p^i) = (b(p))^i \to 0$ as $i \to \infty$, the distributions of $X_p, X_{p^2}, \dots$ become more and more concentrated around $0$. By the stationarity of the increments, this implies that $X_{n+p^i} - X_n$ is small when $i$ is large. This observation leads to the following spectral representation result.
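This intuition can be seen concretely in our reading of the construction of Example 4.1 (the explicit formula and the $\pm 1$ level variables are illustrative assumptions, as before): every level whose period $p^{k+1}$ divides $p^i$ cancels exactly in $X_{n+p^i}-X_n$, so the increment over a lag of $p^i$ is of order $b^i$, bounded by $2b^i/(1-b)$ for bounded levels.

```python
import numpy as np

rng = np.random.default_rng(1)
p, b, K = 2, 0.5, 14

# Build one truncated path of the level construction (our illustrative reading):
N = p ** 10
X = np.zeros(N)
for k in range(K):
    period = p ** (k + 1)
    Yk = rng.choice([-1.0, 1.0], size=period)       # i.i.d. bounded level-k draws
    X += b ** k * (Yk[np.arange(N) % period] - Yk[0])

# Levels with p^{k+1} | p^i drop out of X_{n+p^i} - X_n, so increments over a
# lag of p^i are O(b^i): the process is a sum of "waves" whose amplitude
# decreases with their period.
for i in range(1, 6):
    lag = p ** i
    inc = np.abs(X[lag:] - X[:-lag]).max()
    assert inc <= 2 * b ** i / (1 - b) + 1e-12
```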
Here and later, we use the notation $e(x) = e^{i2\pi x}$. In Theorem 5.1, $\{A^{(m)}_\ell\}_{m\in\mathbb{N},\, 0<\ell<p^m,\, p\nmid\ell}$ is an orthogonal sequence in $L^2(\Omega, \mathcal{F}, P)$ and satisfies: (1) (2) for $q \in P$, $q \ne p$, Several preparatory results are needed for the proof of Theorem 5.1. We start by introducing the notion of almost periodic functions with values in Banach spaces, which can be found, for example, in [1].
Definition 5.2. Let $(\mathbb{X}, \|\cdot\|)$ be a Banach space. A sequence $f: \mathbb{Z} \to \mathbb{X}$ is almost periodic if for all $\varepsilon > 0$ there exists $N(\varepsilon) > 0$ such that any $N(\varepsilon)$ consecutive integers contain an integer $T$ with

Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-si process of type II. By the stationarity of the increments, $\{Y_n := X_{n+1} - X_n\}_{n\in\mathbb{N}_0}$ is a stationary process. Kolmogorov's extension theorem allows us to extend this sequence to $\mathbb{Z}$ while keeping the stationarity. That is, there exists a stationary process then $\{X'_n\}_{n\in\mathbb{Z}}$ is clearly a dt-ss-si process on $\mathbb{Z}$, in the sense that it has stationary increments, and for any $n \in \mathbb{N}$, there exists $b(n) > 0$ such that Moreover, by the stationarity of the increments, $\{X'_n\}_{n\in\mathbb{Z}}$ is an almost periodic sequence in $L^2(\Omega, \mathcal{F}, P)$ if $\{X_n\}_{n\in\mathbb{N}_0}$ is in $L^2(\Omega, \mathcal{F}, P)$.

Proposition 5.3. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-si process of type II satisfying $E(X_1^2) < \infty$. Then it has an extension to $\mathbb{Z}$, denoted by $\{X'_n\}_{n\in\mathbb{Z}}$, which is an almost periodic sequence in $L^2(\Omega, \mathcal{F}, P)$.
Proof. Let $\{X'_n\}_{n\in\mathbb{Z}}$ be the extension of $\{X_n\}_{n\in\mathbb{N}_0}$ to $\mathbb{Z}$ given in the paragraph above Proposition 5.3. For any $\varepsilon > 0$, take where $\lceil\cdot\rceil$ is the smallest integer larger than or equal to its argument. Then every closed interval of length $N(\varepsilon)$ contains a number $\tau$ satisfying $N(\varepsilon) \mid \tau$. We now have By [1] (Sections 6.3, 1.3), we can associate with an almost periodic sequence in $L^2(\Omega, \mathcal{F}, P)$, hence also with $\{X_n\}_{n\in\mathbb{N}_0}$, a Fourier series $\sum_{k=1}^{\infty} A_k e(n\lambda_k)$, $n \in \mathbb{N}_0$, for some countable set of real numbers $\{\lambda_k\}_{k=1}^{\infty}$. The coefficients $\{A_k\}_{k\in\mathbb{N}} \subset L^2(\Omega, \mathcal{F}, P)$ are given by
$$A_k = \lim_{N\to\infty} \frac{1}{N}\sum_{n=0}^{N-1} X_n e(-n\lambda_k), \quad k \in \mathbb{N}, \tag{5.1}$$
in $L^2(\Omega, \mathcal{F}, P)$. We denote this relation by
$$X_n \sim \sum_{k=1}^{\infty} A_k e(n\lambda_k), \quad n \in \mathbb{N}_0. \tag{5.2}$$
If, moreover, the right-hand side of (5.2) converges uniformly in $L^2(\Omega, \mathcal{F}, P)$, then the "$\sim$" can be replaced by "$=$", where the infinite sum is in the sense of $L^2(\Omega, \mathcal{F}, P)$. We do not have this convergence at the moment, but will establish it using the properties of the process $\{X_n\}_{n\in\mathbb{N}_0}$.
The following lemma shows that the coefficient $A_k$ can be nonzero only if the corresponding $\lambda_k$ is a $p$-adic rational.
Proof. It suffices to show that $A_k = 0$ in (5.2) whenever $\lambda_k$ is not of the form $\ell p^{-m}$ with $\ell \in \mathbb{N}_0$, $m \in \mathbb{N}$. Let $\lambda \in \mathbb{R}$ be such that $p^m\lambda$ is not an integer for any $m \in \mathbb{N}$. Using (5.1), for every $m \in \mathbb{N}$, the coefficient corresponding to $\lambda$, denoted by $a(\lambda)$, satisfies By the Cauchy-Schwarz inequality, As $p^m\lambda$ is not an integer, it is easy to see that which converges to $0$ as $N \to \infty$. Therefore $E(|a(\lambda)|^2) \le p^{-2mH}E(X_1^2)$. Since this holds for all $m \in \mathbb{N}$, letting $m \to \infty$ leads to the conclusion that $A_k$ can be non-zero only if the corresponding $\lambda_k$ is a $p$-adic rational. Finally, since $e(x)$ has period $1$, $\{e(n\lambda)\}_{n\in\mathbb{N}_0} = \{e(n(\lambda+1))\}_{n\in\mathbb{N}_0}$. Hence we only need $p$-adic rationals in $[0, 1)$.
Remark 5.5. The above lemma also holds in the Banach space $L^p(\Omega)$ if $E(|X_1|^p) < \infty$ for $p \ge 1$ (here $p$ denotes the integrability exponent, not the prime). The proof is essentially the same, with the Cauchy-Schwarz inequality replaced by Hölder's inequality. For simplicity, we only consider the $L^2$ case. Note also that for $1 \le p < 2$ the convergence of the associated Fourier series is not guaranteed; hence, although still valid, the result of Lemma 5.4 becomes less useful.
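The conclusion of Lemma 5.4 can also be observed numerically through the empirical coefficient $a(\lambda)=\frac{1}{N}\sum_{n<N}X_n e(-n\lambda)$. The following sketch uses a deterministic toy instance of the level construction (our own choice of levels, purely illustrative and not the random construction of Example 4.1): frequencies of the form $\ell/p^m$ carry mass, while a non-$p$-adic frequency such as $1/3$ carries essentially none.

```python
import numpy as np

# Deterministic toy levels: Y^k has period p^{k+1}, equals 1 at residue 1 and
# 0 elsewhere, so X_n = sum_k b^k * 1{n = 1 mod p^{k+1}} (illustrative choice).
p, b, K = 2, 0.5, 10
N = p ** 12                                  # every level period divides N
n = np.arange(N)
X = sum(b ** k * (n % p ** (k + 1) == 1) for k in range(K)).astype(float)

def a(lam):
    """Empirical Fourier coefficient |(1/N) sum_n X_n e(-2 pi i n lam)|."""
    return abs(np.mean(X * np.exp(-2j * np.pi * lam * n)))

# Coefficient mass sits on the p-adic rationals l / p^m:
assert a(1 / 3) < 1e-2                  # 1/3 is not 2-adic: coefficient ~ 0
assert abs(a(1 / 2) - 2 / 3) < 1e-2     # l/p^m frequencies carry the mass
```

For this toy path the value at $\lambda=1/2$ can be computed by hand, $\sum_{k<K} b^k 2^{-(k+1)} \approx 2/3$, which is what the last assertion checks.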
Lemma 5.4 allows us to further explore the impact of the stationarity of the increments and of the self-similarity of the process on the representation (5.2). We start with the following simple observation about the increment process.
Lemma 5.6. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be a dt-ss-si process of type II satisfying $E(X_1^2) < \infty$ and Then its increment process $\{\tilde{X}_n\}_{n\in\mathbb{N}_0}$, given by $\tilde{X}_n = X_{n+1} - X_n$, is almost periodic in $L^2(\Omega, \mathcal{F}, P)$ and stationary. Moreover,

Proof. Stationarity is trivial, and almost periodicity follows directly from The representation is obvious from the relation $\tilde{X}_n = X_{n+1} - X_n$.
As a consequence of Lemma 5.6, the increment process $\{\tilde{X}_n\}_{n\in\mathbb{N}_0}$ is associated with the Fourier series $\sum_{m\in\mathbb{N}}\sum_{0<\ell<p^m,\, p\nmid\ell} \tilde{A}^{(m)}_\ell e(n\ell p^{-m})$.

Proof. Assume $\{Y_n\}_{n\in\mathbb{N}_0}$ is stationary. Since the process $\{Y_{n+1}\}_{n\in\mathbb{N}_0}$ is also almost periodic and in $L^2(\Omega, \mathcal{F}, P)$, it is associated with a Fourier series as well. The coefficient $A'_k$ corresponding to $\lambda_k$ is given by By the uniqueness of the associated Fourier series, the coefficients of the corresponding terms must also have the same distribution. Hence (5.3) holds.
Lemma 5.4 also allows us to rewrite the representation (5.2) directly as where $A_1$ is the coefficient corresponding to $\lambda_1 = 0$, i.e., the constant term. As a result, Lemma 5.7 has the following simple corollary for processes with stationary increments.
Corollary 5.8. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be an almost periodic process in $L^2(\Omega, \mathcal{F}, P)$ with the representation Then $\{A^{(m)}_\ell\}$ is an orthogonal sequence in $L^2(\Omega, \mathcal{F}, P)$.
The proof of this corollary is immediate, since $A^{(m)}_\ell$ and $\tilde{A}^{(m)}_\ell$ differ only by a deterministic multiplicative factor.
We have seen how the stationarity of the increments constrains the coefficients of the increment process and therefore also the coefficients of the original process. Next, we discuss the impact of the self-similarity on the coefficients in the representation.

Lemma 5.9. Let $\{X_n\}_{n\in\mathbb{N}_0}$ be an almost periodic process with $E(X_1^2) < \infty$ and the representation

Proof. For any $m \in \mathbb{N}$ and $\ell$ satisfying $0 < \ell < p^m$, $p \nmid \ell$,

Corollary 5.8 and Lemma 5.9 together guarantee a very important result: the convergence in $L^2(\Omega, \mathcal{F}, P)$ of the Fourier series associated with a dt-ss-si process of type II. which converges uniformly to $0$ as $M, N \to \infty$. Hence the Fourier series converges uniformly in $L^2(\Omega, \mathcal{F}, P)$.
As a direct consequence of Proposition 5.10, all the Fourier series discussed in this section converge, and hence are equal to the original sequences. In other words, the "$\sim$" can now be replaced by "$=$" in the sense of $L^2(\Omega, \mathcal{F}, P)$. This allows us to easily extend Corollary 5.8 to a two-directional result.
Combining the results of Lemma 5.4 and Propositions 5.10, 5.11 and 5.12 immediately yields Theorem 5.1.