Long-range Dependence through Gamma-mixed Ornstein–Uhlenbeck Process

The limit process of two aggregation models, (i) the sum of random coefficient AR(1) processes with independent Brownian motion (BM) inputs, and (ii) the sum of AR(1) processes with Gamma distributed random coefficients and a common BM input, proves to be Gaussian and stationary, and its transfer function is the mixture of the transfer functions of Ornstein–Uhlenbeck (OU) processes by a Gamma distribution. We call it the Gamma-mixed Ornstein–Uhlenbeck process (ΓMOU). For independent Poisson alternating 0–1 reward processes with suitable random intensity it is shown that the standardized sum of the processes converges to the standardized ΓMOU process. The ΓMOU process has various interesting properties and is a new candidate for the successful modelling of Gaussian stationary data with long-range dependence. Possible applications and problems are also considered.


Introduction
Long-range dependence or long-memory is receiving increasing emphasis in several fields of science. Recently it has been discovered also in biological systems, [HP96], [PHSG95], [SBG + 93], [SBG + 94], in the traffic of high speed networks, [LTWW94], [WTSW95], [WTSW97], in economic processes, and in a variety of pieces of music, [BI98], as well. The question arises naturally how micro level processes imply long-range dependence at macro level. The Brownian motion (BM) can be constructed as the result of infinitely many effects acting at the same time at micro level, while the basic long-range dependent processes, such as fractional Brownian motion (FBM), fractional Gaussian noise (FGN) and fractional ARIMA processes, are constructed by time-aggregation (e.g. fractional integration). We are interested in constructions in which the long-range dependence arises not by aggregation in time but, similarly to the BM, by aggregating copies of a short-range dependent process, each copy taken at the same time point. The different copies are the micro level processes. Surprisingly, long-range dependence may come into being in this manner, as the following examples show. (i) The most famous paper in this field is the one by Granger [Gra80], who aggregated random coefficient AR(1) time series to get an approximation of long-range dependence. (ii) Other interesting models, due to Willinger et al. [WTSW95], [WTSW97] and Taqqu et al.
[TWS97], involve the aggregation of independent ON/OFF processes with heavy tailed ON or OFF time periods. They are based on the idea of Mandelbrot [Man69], namely that the superposition of many sources exhibiting the infinite variance syndrome results in long-range dependence. (iii) Lowen, Teich and Ryu [LT93], [RL98] study essentially the same processes under the names of superposition of fractal renewal processes and alternating fractal renewal process. (iv) Recently Carmona, Coutin and Montseny ([CCM], [CC98]) discovered that the Riemann-Liouville fractional integral operator can be represented as a mixture of AR(1) operators. Though [CCM] and [CC98] deal with nonstationary processes, for which long-range dependence is not defined, they are very closely related to our results. (v) A mixture of two AR(1)s has been used for modeling the traffic of high speed networks in [AZN95].
In this paper we apply the ideas of Granger [Gra80], of Carmona, Coutin and Montseny [CCM], [CC98], and of Mandelbrot [Man69], and construct a continuous time stationary process with long-range dependence. In Section 2 we deal with the sum of random coefficient AR(1) processes. In Granger's models the squares of the AR(1) coefficients are Beta distributed. In our first model the AR(1) coefficients are doubly stochastic, that is, they are Gamma distributed with a Pareto distributed random scale parameter, and the input BM's are independent. In our second model the AR(1) coefficients are Gamma distributed and the input Brownian motion is common. We show that the two aggregation models lead to the same limit process. It is Gaussian and stationary, and its transfer function ϕ(ω) is the mixture of the transfer functions of Ornstein-Uhlenbeck (OU) processes by a Gamma distribution, i.e.
It is long-range dependent; we shall call it the Gamma-mixed Ornstein-Uhlenbeck process with long-range parameter h and scale parameter λ, and denote it by ΓMOU(h, λ).
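The display following "i.e." above did not survive extraction. Based on the mixture description here and the time domain kernel appearing in Subsection 2.2, the lost formula presumably has the form below (a hedged reconstruction; the mixing law Γ(1 − h, λ) is an assumption consistent with the λ → 0 limit of Property 4):

```latex
\varphi(\omega) \;=\; \int_0^{\infty} \frac{1}{x + i\omega}\, f_{\Gamma(1-h,\,\lambda)}(x)\, dx ,
\qquad 0 < h < \tfrac{1}{2},\ \lambda > 0 .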
The third aggregation model of Section 2 diverges from the previous two in its starting point, i.e., the type of the micro level processes. In particular, we take independent random intensity Poisson alternating 0-1 reward processes and show that if the intensity parameter follows the Gamma distribution with a Pareto distributed random scale parameter, then the standardized sum of the processes converges to the standardized ΓMOU process.
The ΓMOU process is a new candidate for the successful modelling of several Gaussian stationary data sets with long-range dependence. In Section 3 we show several interesting properties of it. Namely, it is asymptotically self-similar; it converges to the continuous time FGN when λ → 0 and to the BM when λ → ∞; its position between the continuous time FGN and the BM can be controlled simply by the scaling parameter λ. The ΓMOU process has a simple time domain moving average representation, almost the same as that of the continuous time FGN. Moreover, it is a semimartingale, so it is possible to integrate with respect to it without any difficulties.
Section 4 is a digression in which we look at what happens if the limiting procedures leading to the ΓMOU process are taken in reverse order.
Possible applications and problems are considered in Section 5. Modelling the heart interbeat intervals time series by the ΓMOU process may lead closer to an understanding of the mysterious phenomenon of heart rate variability. Another possibility of making use of the ΓMOU process is that it can serve as the input to different chaotic models. It also suggests a new definition of stochastic differential equations with FBM input. That new definition, which is somewhat handier than the ones known at present, is based on almost sure (a.s.) smooth approximations of the FBM. The properly scaled integral process of the ΓMOU process serves as a natural example of such an approximating process.
For the sake of uniformity, let us agree on the parametrization of the distributions we shall deal with.
The probability density function of the Gaussian distribution N(µ, σ²) is denoted by f_{N(µ,σ²)}(x). The Γ distribution with shape parameter p > 0 and scale parameter λ > 0, Γ(p, λ) for short, has the probability density function f_{Γ(p,λ)}(x). The Pareto distribution with scale parameter µ > 0 and shape parameter q > 0, P(q, µ) for short, has the probability density function f_{P(q,µ)}(x).

Representations by aggregation
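The display formulas for the Gamma and Pareto densities of the parametrization above were lost in extraction. Under the standard shape/rate convention, which is the one consistent with the rest of the paper, they presumably read:

```latex
f_{\Gamma(p,\lambda)}(x) \;=\; \frac{\lambda^{p}\, x^{p-1}\, e^{-\lambda x}}{\Gamma(p)}, \quad x > 0;
\qquad
f_{\mathcal{P}(q,\mu)}(x) \;=\; \frac{q\,\mu^{q}}{x^{q+1}}, \quad x \ge \mu .
```

This is a reconstruction, not a verified copy of the original displays.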

The limit of aggregated random coefficient independent Ornstein-Uhlenbeck processes
Let us consider a sequence of stationary OU processes X_k(t), k ∈ N, with random coefficients, i.e., X_k is the stationary solution of the corresponding stochastic differential equation. The processes B_k(t), k ∈ N, are independent BM's with time parameter t ∈ R. Let us denote the probability space on which they are defined by (Ω_B, F_B, P_B). The autoregression coefficients α_k, k ∈ N, are independent identically distributed random variables on the probability space (Ω_α, F_α, P_α). We consider them as extended versions to the space (Ω_α, F_α, P_α) × (Ω_B, F_B, P_B), i.e., the two sets {B_k(t) : t ∈ R, k ∈ N} and {α_k : k ∈ N} are independent. Moreover, let us fix parameters 0 < h < 1/2 and λ > 0, and suppose that the distribution of the random variables −α_k is the mixture of the Gamma Γ(2 − h, z) distributions by a Pareto weight function (Pareto-mixed Gamma, for short). In other words, the conditional distributions P_{−α_k | θ_k} are Gamma distributions, and the scale parameter θ_k has a Pareto distribution. The dependence structure of the random variables θ_k, k ∈ N, is arbitrary. For example, the θ_k's may be independent or even all the same. It must be stressed again that for the parameter h we specify the range 0 < h < 1/2; otherwise the process we shall deal with would not be L²-stationary and long-range dependent.

Actually, the density function f_{−α}(x) of −α_k can be given explicitly.
Let Z_n(t) denote the √n-standardized aggregated process, i.e., the sum of X_1(t), . . ., X_n(t) divided by √n. X_k(t) and Z_n(t) can be considered P_α-almost surely (a.s.) to be random processes on (Ω_B, F_B, P_B). Since P_α-a.s. the processes X_k(t), k ∈ N, are independent, the spectral density of Z_n(t) is, P_α-a.s., the average of those of X_k(t), k = 1, . . ., n.
Theorem 1 Let Y(t) be a zero mean Gaussian process on (Ω_B, F_B, P_B) with spectral density (3). Then the processes Z_n(t) converge weakly in C[a, b], for arbitrary a < b ∈ R, to Y(t). For the proof see the Appendix.
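A minimal numerical sketch of this first aggregation model (the concrete Pareto parameters q and µ are illustrative assumptions, and the exact OU transition is used for the simulation):

```python
import numpy as np

rng = np.random.default_rng(0)

h, lam = 0.3, 1.0          # long-range parameter h in (0, 1/2), scale lambda > 0
n, steps, dt = 200, 2000, 0.05

# Pareto(q, mu) scale parameters theta_k; q and mu are chosen for illustration only
q, mu = 1.0, lam
theta = mu * (1.0 - rng.random(n)) ** (-1.0 / q)   # inverse-CDF sampling

# -alpha_k | theta_k ~ Gamma(2 - h, theta_k) in the shape/rate convention
a = rng.gamma(shape=2.0 - h, scale=1.0 / theta)    # a_k = -alpha_k > 0

# exact stationary OU recursion: X(t+dt) = e^{-a dt} X(t) + sqrt((1-e^{-2a dt})/(2a)) xi
X = rng.normal(0.0, np.sqrt(1.0 / (2.0 * a)))      # stationary initial law N(0, 1/(2a))
phi = np.exp(-a * dt)
sig = np.sqrt((1.0 - phi ** 2) / (2.0 * a))
Z = np.empty(steps)
for t in range(steps):
    X = phi * X + sig * rng.normal(size=n)
    Z[t] = X.sum() / np.sqrt(n)                    # sqrt(n)-standardized aggregate

# sanity check: a finite, zero-mean stationary aggregate
print(Z.mean(), Z.var())
```

This is only a sketch of the construction; the convergence to Y(t) itself is the content of Theorem 1.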

The limit of aggregated random coefficient Ornstein-Uhlenbeck processes with a common input
It is interesting to note that the process Y(t) defined in the previous section arises also as the limit of aggregated random coefficient OU processes the input of which is the same BM.
Let us consider the random coefficient stationary OU processes X_k(t), k ∈ N, i.e., the stationary solutions of the corresponding stochastic differential equations. The coefficients α_k, k ∈ N, are random variables on the space (Ω_α, F_α, P_α); for example, they can be considered to be the increments of a Gamma field. We use the word "field" to indicate that the index k of the coefficients does not mean time.
We consider the BM B(t) and the random variables α_k, k ∈ N, to be extended versions to the space (Ω_α, F_α, P_α) × (Ω_B, F_B, P_B), i.e., they are independent. The ranges of h and λ are the same as everywhere in this paper, i.e., 0 < h < 1/2 and λ > 0, respectively. We know both the time domain and the frequency domain spectral representations of the stationary OU process, P_α-a.s., where W(dω) is the complex Gaussian spectral measure connected with B(t). From (4), P_α-a.s., by the law of large numbers we obtain the limit of the averaged processes. The time domain transfer function has a very simple explicit form. The frequency domain transfer function also has an explicit representation by some special functions, but for our immediate purposes the trivial representation suffices.

Definition 1 The process Y(t) defined above will be called the Gamma-mixed Ornstein-Uhlenbeck process with parameters h and λ, ΓMOU(h, λ) for short. We also use the notation Y_λ(t) where it is necessary to emphasize the dependence on λ.
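The "very simple explicit form" of the time domain transfer function can be checked numerically: mixing the OU kernels e^{−xt} by a Gamma(p, λ) density gives the shifted power law (λ/(λ + t))^p. A small sketch (the shape p = 1 − h and the shape/rate parametrization are assumptions, chosen to be consistent with the FGN-like kernel of Property 4):

```python
import math
import numpy as np

h, lam = 0.3, 1.5
p = 1.0 - h            # assumed mixing shape; it reproduces an FGN-like kernel

# Mix the OU kernels e^{-x t} over x ~ Gamma(p, lam).  The substitution x = s**2
# removes the x**(p-1) singularity at 0, so the trapezoid rule is accurate:
#   int_0^oo e^{-x t} f(x) dx = (2 lam^p / Gamma(p)) int_0^oo s^{2p-1} e^{-(lam+t) s^2} ds
s = np.linspace(0.0, 12.0, 1_000_000)
ds = s[1] - s[0]
results = {}
for t in [0.0, 0.5, 2.0, 10.0]:
    y = s ** (2.0 * p - 1.0) * np.exp(-(lam + t) * s * s)
    mixed = (2.0 * lam ** p / math.gamma(p)) * np.sum(y[1:] + y[:-1]) * ds / 2.0
    closed = (lam / (lam + t)) ** p      # = (1 + t/lam)^(h-1), the shifted power law
    results[t] = (mixed, closed)
    print(t, mixed, closed)
```

The numerical mixture and the closed form agree to high accuracy, which is just the Laplace transform identity behind the explicit kernel.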
Remark 1 The appearance of the Gamma distribution in the role of the mixing distribution is not an artificial assumption. E.g., there exist practical applications of the so-called Gamma-mixed Poisson process, see "mixed Poisson process" in [DVJ88]. It is also called the negative binomial process and the Pólya process.
Theorem 2 This process Y and the one in Theorem 1, denoted also by Y, are the same in the weak sense, i.e., they have the same distributions on C[a, b], for arbitrary a < b ∈ R. Moreover, the convergence holds P_α-a.s.
For the proof see the Appendix.
Remark 2 Here almost sure convergence no longer holds; the convergence holds only P_α-a.s. in the L²(Ω_B, F_B, P_B) sense.
Remark 3 While the conditional distribution of the processes X_k(t) to be aggregated is Gaussian, the unconditional distribution of X_k(t) is not Gaussian. Our processes are all stationary, as the lower limit of the integral is −∞ in our case. Stationarity is necessary because long-range dependence is defined only for stationary processes. This is the main difference between the subject of [CCM], [CC98] and our paper; the main correspondence is the following. Let µ be a probability measure on R_+, and let us define, as in [CC98], for every fixed t ∈ R_+ the L¹(R_+, µ)-valued process Z_t = Z_t(•), i.e., for every fixed x > 0, Z_x(•) is the OU process with AR(1) coefficient x starting from zero. This process is convenient because it is a Markov process, though infinite dimensional, while the finite dimensional processes are not Markovian in general; here g is the Laplace-Stieltjes transform of ϕ. Now it is easy to see that the ΓMOU process can be represented as the weak limit, as r → ∞, of the processes v_r. In other words, the ΓMOU process Y_t can be represented as a mixture of stationary OU processes, i.e., for every fixed x > 0, X_x(•) is the stationary OU process with AR(1) coefficient x.

The limit of aggregated random intensity Poisson alternating 0-1 reward processes
Let us take the extended Poisson process N(t), t ∈ R, on a probability space (Ω_N, F_N, P_N). We denote the intensity parameter by κ. The stationary Poisson alternating 0-1 reward process X(t) with intensity parameter κ is defined with the help of a random variable ν, independent of the process N(t) and symmetric Bernoulli distributed, i.e., P_N(ν = 0) = P_N(ν = 1) = 1/2. The addition of ν to N(t) is needed to achieve the stationarity of X(t). The spectral density is exactly that of an OU process with AR coefficient 2κ and innovation variance κ, so we can proceed in the same way as in Subsection 2.1. In order to get the same limit process, the parametrization of the distribution of the random parameter κ will be somewhat different from that of the AR coefficient α.
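A quick Monte Carlo check of the covariance structure just described. The explicit form X(t) = (N(t) + ν) mod 2 is an assumption suggested by the construction; with it, the covariance at lag τ is e^{−2κτ}/4, matching an OU process with AR coefficient 2κ:

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, tau, m = 1.0, 0.5, 200_000

nu = rng.integers(0, 2, size=m)           # symmetric Bernoulli start, P(nu=0)=P(nu=1)=1/2
N_tau = rng.poisson(kappa * tau, size=m)  # Poisson number of switches on (0, tau]
X0 = nu                                   # X(0) = nu
X_tau = (nu + N_tau) % 2                  # X(tau) = (nu + N(tau)) mod 2

est_cov = np.mean(X0 * X_tau) - 0.25      # E[X(0)] = E[X(tau)] = 1/2
theory = np.exp(-2.0 * kappa * tau) / 4.0
print(est_cov, theory)
```

The estimate agrees with e^{−2κτ}/4 up to Monte Carlo error, since P(N(τ) even) = (1 + e^{−2κτ})/2.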
Thus, let us take the independent stationary Poisson alternating 0-1 reward processes X_k(t), k ∈ N, on the probability space (Ω_N, F_N, P_N), and assume that the intensity parameters κ_k, k ∈ N, are independent and identically distributed random variables on the probability space (Ω_κ, F_κ, P_κ). As in Subsection 2.1, assume also that the parameters and the processes are independent, i.e., we consider everything on the product space (Ω_κ, F_κ, P_κ) × (Ω_N, F_N, P_N). Moreover, let the distribution of the random parameters κ_k, k ∈ N, be Pareto-mixed Gamma, as in (8). Actually, the density function f_κ(x) of κ_k can be given explicitly. Since P_κ-a.s. the processes X_k(t), k ∈ N, are independent, the spectral density of Z_n(t) is, P_κ-a.s., the average of the individual spectral densities; the last row of the corresponding computation contains exactly the spectral density of the standardized ΓMOU(h, λ) process, see (3). Thus we have obtained a further aggregation model for approaching the ΓMOU process.
Theorem 3 Let X_k(t), k ∈ N, be a series of independent stationary Poisson alternating 0-1 reward processes the intensity parameters κ_k of which are independent random variables with the Pareto-mixed Gamma distribution given by (8). Then the series Z_n of the standardized sums of the processes X_k(t) converges P_κ-a.s. to the standardized ΓMOU(h, λ) process in the sense of weak convergence of finite dimensional distributions.
Remark 5 For each k ∈ N, the conditional distribution of the interevent times of the process X_k(t), i.e., that of the interevent times of the background Poisson process N_k(t), is exponential with intensity parameter κ_k. Because of (8), the unconditional distribution is Pareto-mixed Gamma-mixed exponential, that is, Pareto-mixed shifted Pareto. Specifically, if τ(j) denotes the time between the (j − 1)-th and the j-th events, then its unconditional distribution can be written down explicitly. Nevertheless, the process X_k(t) cannot be derived from a renewal process with this interevent time distribution, since the times τ(j), j ∈ Z, are only conditionally independent. For this reason our setting is somewhat different from those described in [TWS97], [LT93] and [RL98].

Properties of the ΓMOU process

Property 1. Conditional expectation
An expressive form of the ΓMOU process is a conditional expectation of the random coefficient OU process, Y(t) = E_α[X_α(t) | F_B(t)], where X_α(t) is the random coefficient OU process defined in Subsection 2.2, the lower index denotes the dependence of the process on α, and F_B(t) is the σ-field generated by the random variables B_s, 0 < s ≤ t.

Property 2. Stationarity
The ΓMOU process Y(t) is stationary, regular (physically realizable), zero-mean and Gaussian. The frequency domain transfer function of Y(t) can be expressed by the upper incomplete complex-argument Γ function. An explicit form of the autocovariance function is also available; the last equality in it is the consequence of an identity of Euler concerning the Gaussian hypergeometric function, so the autocorrelation function is just a hypergeometric function. A form of the spectral density, useful for describing the next property, follows as well.

Property 3. Long-range dependence and asymptotic self-similarity
The most interesting property of the ΓMOU process Y(t) is the long-range dependence. Because the phenomenon of long-memory or long-range dependence is ubiquitous and already widely known, we only recall what it really is. The main point is that the dependence on the past values of the process decreases so slowly that the sum of the autocovariance series becomes infinite, or, equivalently, the spectral density S(ω) has a pole at zero. In the case of very small and very large values one usually changes to logarithmic scale, and since the linear function is the simplest there, the so-called power law (11) becomes the characteristic behavior when describing long-range dependence. (Generally, in (11) there is a slowly varying function in place of the constant c, but it has no significant role, so we simply omit it.) The quantity h, called the long-range parameter, must be in the range 0 < h < 1/2. Now it follows from (10) that the ΓMOU(h, λ) process Y(t) is long-range dependent with parameter h, since (11) holds for S_Y(ω). This also appears from the asymptotic behavior of the autocovariance function, where B denotes the Beta function.
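The displays defining the power law (11) were lost in extraction; the standard equivalent formulations of long-range dependence intended here are presumably:

```latex
S(\omega) \sim c\,|\omega|^{-2h} \quad (\omega \to 0), \qquad
r(t) \sim c'\, t^{\,2h-1} \quad (t \to \infty), \qquad
\int_0^{\infty} |r(t)|\, dt = \infty \quad \bigl(0 < h < \tfrac{1}{2}\bigr),
```

a reconstruction consistent with the decay order s^{2h−1} quoted for the autocorrelation in the next paragraph.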
A property expressing the fractal-like feature of the process Y(t), closely related to long-range dependence, is asymptotical self-similarity. This means that, as T → ∞, the series of the autocorrelation functions ρ(T)(s) of the processes arising by averaging Y(u) over the intervals (tT, (t + 1)T), t ∈ R, converges to an autocorrelation function ρ(∞)(s); moreover, ρ(∞) and ρ_Y are equivalent at infinity, i.e., ρ(∞)(t) and ρ_Y(t) converge to zero in the same order as t → ∞. The order of the convergence is s^{2h−1} in both cases, as s → ∞. This type of self-similarity, mentioned in [Cox91], is the weakest, i.e., the most general among the various other self-similarity concepts. It is the one that always follows from long-range dependence.
Property 4. Convergence to the continuous time fractional Gaussian noise when λ → 0.
The fractional Brownian motion (FBM) B^{(h)}(t), where h ∈ (−1/2, 1/2), has a central role in the field of long-range dependent processes. Its difference process is the fractional Gaussian noise (FGN). The BM is of almost surely unbounded variation, and this is the reason for the existence of the stochastic integro-differential calculi. The FBM also has this property, and it is not even a semimartingale, since its quadratic variation is zero. Hence integration with respect to the FBM, that is, stochastic differential equations with FBM input, are not obvious things. There exist various approaches to them, see [Ber89], [Lyo98], [DH96], [DÜ99], [CD], [Zäh98], [KZ99] and [IT99]; however, the most natural one has remained hidden until now. Later, in Subsection 5.2, we shall try to bring the reader around to our point of view and demonstrate the concept for treating stochastic differential equations with FBM input.
One possibility to find a process easier to handle, which is arbitrarily close to the continuous time FGN on the one hand and preserves the long-range dependent feature on the other hand, is the following. Let us consider the continuous time FGN, i.e., the informal derivative b^{(h)}(t) of the FBM. Unfortunately, b^{(h)}(t) does not exist in L², for the same reason that the usual continuous time white noise does not exist. However, if we cut off that part of the integral which causes the trouble, that is, the upper end, then we get a new process b^{(h)}_λ(t), where λ > 0 is preferably small. The integral in (13) does exist in L². This is because we cut off only the present and the recent past; the effect of the long past, from which the long-range dependence arises, is left untouched. From another point of view, what we do is, in fact, the retardation of the effect of the noise: at the point of time t, only the noise effects acting up to an earlier point of time are taken into account. It is easy to calculate the standard deviation of the stationary process b^{(h)}_λ(t), and (13) is just the shifted ΓMOU(h, λ) process multiplied by a constant. This property can be described also in terms of the FBM and the integrated ΓMOU process as follows.
The ΓMOU(h, λ) process Y_λ(t) is close to the continuous time FGN in the sense that the integral process of the properly scaled process Y_λ(t) converges in L², as λ → 0, to the FBM.
Almost sure uniform convergence also holds; the first equation in the corresponding display arises by integrating by parts.
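A numerical sketch of this λ → 0 behavior, under the assumption (consistent with Property 4 and the moving average representation mentioned in the Introduction) that the moving average kernel of Y_λ is proportional to (λ + u)^{h−1}, u > 0. The squared L² norm of the kernel is λ^{2h−1}/(1 − 2h): finite for every λ > 0, but blowing up as λ → 0, exactly the divergence that keeps the untruncated FGN kernel u^{h−1} out of L²:

```python
import numpy as np

h = 0.3                      # long-range parameter, 0 < h < 1/2

def kernel_sq_norm(lam, upper=1e7, n=500_000):
    # squared L2 norm of the (assumed) MA kernel (lam + u)^(h-1), u > 0,
    # on a geometric grid that resolves both the head and the long tail
    u = np.geomspace(1e-9, upper, n)
    y = (lam + u) ** (2.0 * h - 2.0)
    return np.sum((y[1:] + y[:-1]) * np.diff(u)) / 2.0   # trapezoid rule

results = {}
for lam in [2.0, 0.5, 0.1]:
    num = kernel_sq_norm(lam)
    exact = lam ** (2.0 * h - 1.0) / (1.0 - 2.0 * h)     # closed form of the integral
    results[lam] = (num, exact)
    print(lam, num, exact)
```

The numerical and closed-form values agree, and the norm grows like λ^{2h−1} as λ decreases.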

The weak convergence in (16) follows from (15).
In fact the convergence in (16) is also a consequence of Taqqu's time-aggregation limit theorem, which states that properly scaled time-averaged functions of the process converge weakly in C[a, b] to certain self-similar processes with stationary increments, the Hermite order of which is the Hermite rank of the function, see [Taq79]. The limit processes have the form (18). This theorem is applicable in full to our process, where d= means that the finite dimensional distributions are the same. Thus, by Taqqu's theorem in [Taq79], if G is a function with m-th coefficient in its Hermite expansion, the limit is an Hermite order m self-similar process with stationary increments, subordinated to B(t). Here the integrals are multiple Wiener-Itô and multiple Wiener-Itô-Dobrushin integrals, see [Taq79] and, for an introductory level, [Ter99]. Since Z^{(h)} is such a process, (16) is a special case of (17). It is interesting to note that Taqqu's theorem states what happens to the scaled aggregated process when the time unit expands to infinity, while (17) means that we arrive at the same limit process when compressing the intensity parameter λ to zero, i.e., when expanding its reciprocal, the space unit, to infinity. Note that for the model of Subsection 2.2 λ is the intensity parameter of a Gamma field; recall that it is the field the increments of which are the AR(1) coefficients α_k. See also Property 7.
Remark 6 Formula (7) is the key formula leading to the most remarkable properties of the ΓMOU process. It shows that the ΓMOU process and the informal derivative (12) of the FBM are closely related. The idea leading to (7), and to finding the proper distribution of the AR(1) coefficients, is based on two facts. The first is that the time domain transfer function of an OU process is almost the same as the exponential density function, and the time domain transfer function of (12) is almost the same as the Pareto density function. The second fact is that the exponential distribution with a Gamma distributed random intensity parameter is the same as the Pareto distribution shifted to zero.

Property 5. Properties of paths and semimartingale
The ΓMOU process Y(t) is almost surely continuous, as Theorem 1 states. It is a.s. nowhere differentiable, because it is a semimartingale with a nonzero martingale component. That is, in the semimartingale decomposition into an a.s. bounded variation part V(t) and a martingale part M(t), both adapted to the basic filtration (B(t), F(t)) of the BM, X_α(t) is the random coefficient OU process defined by (4). This can be seen if one considers the spectral representation of Y(t) and takes the L²-derivative. The transfer function appearing there, φ(ω)iω − 1, is really square-integrable, since it is the Fourier transform of a square-integrable function.
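The displays (19) and (20) did not survive extraction. From the surrounding description (martingale part M(t) = B(t); bounded variation part V(t) acting as an average feedback driven by the random coefficient OU process X_α of (4)), the decomposition presumably reads:

```latex
Y(t) \;=\; Y(0) + V(t) + M(t), \qquad M(t) = B(t),
\qquad
V(t) \;=\; \int_0^{t} E_{\alpha}\!\left[\, \alpha\, X_{\alpha}(s) \,\middle|\, \mathcal{F}_B(s) \right] ds .
```

This is a hedged reconstruction, consistent with Property 1 and with the absolute continuity of V(t) noted in the discussion of the figures.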
The a.s. differentiability follows from the fact that the second moment of the L²-derivative process (21) is constant; thus it is integrable over every finite interval, so the L²-derivative process (21) is a.s. integrable over every finite interval and its integral a.s. coincides with its L²-integral Y(t) − Y(0) − B(t). The upper parts of the figures illustrate the trajectories of the standardized ΓMOU(h, λ) processes Y(t)/DY(t). The middle and lower parts demonstrate the martingale components M(t) = B(t) and the components V(t) of bounded variation (even absolutely continuous in this case), respectively. All of the processes in the four figures are generated from the same realization of B(t); therefore the differences between them arise from the different values of h and λ alone.
Figure 1 refers to the case of a small h, thus slight long-range dependence, and a small λ, i.e., closeness to dB^{(h)}(t)/dt in the sense of Property 4. In such a case the ΓMOU(h, λ) process Y(t) is close to the white noise. Figure 2 refers to the case of a large h, thus very long-range dependence, and a small λ, i.e., when the properly scaled ΓMOU(h, λ) process is close to dB^{(h)}(t)/dt in the sense of Property 4. The slow fluctuation, called pseudotrend, shows up; it is the manifestation of the high intensity of the low frequency components, just the spectral characteristic of long-range dependence. On the other hand, the fluctuation can be explained by the fact that the component V(t) of bounded variation, which means something like an average feedback (see (20)), cannot completely balance the random walk M(t) = B(t). The decrease of the feedback is caused by the decreased modulus of the AR(1) coefficients α for the OU processes of Subsections 2.1 and 2.2, respectively. Thus, for both models, the larger the h, the smaller the feedback and the smoother the V(t).
Property 6. Convergence to the Brownian motion when λ → ∞
The semimartingale decomposition makes it possible to obtain the asymptotic behavior of the ΓMOU(h, λ) process Y_λ(t) as λ → ∞. In particular, the component V_λ(t) of bounded variation converges to zero as λ → ∞, as follows from (23), and (22) implies the convergence of the martingale part, because the function (1 + u)^{h−2} is square integrable. On the other hand, also from (22), by applying Lebesgue's theorem we obtain from (26) the convergence for arbitrary a < b ∈ R. This means that the ΓMOU(h, λ) process, fixed to zero at t = 0, converges to the BM. The corresponding weak convergence also easily follows.
Figure 3 is very spectacular. Its origin is that the role of the parameter λ is the scaling of both the Gamma and the Pareto distributions, and thus eventually of the AR coefficients of the micro level OU processes. Actually, there is not too much special in (27), as the OU processes also have a similar property. What makes it interesting is that, by Properties 4 and 6, as λ varies between 0 and ∞, the scaled ΓMOU(h, λ) process ranges from FGN(h) = I^h(WN) to BM = I(WN), meanwhile remaining stationary. (Here I^h means the fractional integral of order h, and WN denotes the white noise.) This means that from a certain viewpoint the process behaves as if its long-range parameter were larger, between h and 1. Thus in the case of h ≈ 1/2, the long-range parameter seems to be between 1/2 and 1. This phenomenon is really noticeable when one treats certain data sets; the ΓMOU process can be a possible interpretation for it.

Digression: taking the limits in reverse order
One of the interesting properties of the ΓMOU process is that it is an approximation of the continuous time FGN in the sense of Property 4, Section 3. It means that if we consider the random coefficient OU processes X_{k,λ}(t), those of either Subsection 2.1 or Subsection 2.2, take their scaled infinite sum over k, take the scaled time-integral, and take the limit as λ → 0, in this order, we obtain the FBM. (In X_{k,λ}(t) the index λ denotes the fact that the process also depends on it.) Starting from the stationary Poisson alternating 0-1 reward processes X_{k,λ}(t) of Subsection 2.3, we can get the FBM similarly. The question is whether it is important to take the limits in this order or not. The answer is yes, because the following statements hold. Let us denote the characteristic function of the α-stable distribution S_α(γ, β, c) by ϕ_{S_α(γ,β,c)}. In the reversed order the limit involves a random variable ζ, independent of the BM B(t) and (1 − h)-stable, or a random variable η, independent of the BM B(t) and (1 − h)-stable, respectively. The above two limit processes are non-Gaussian, they have no finite second moment, and ηB(t) does not even have an expectation. They have dependent increments. They are still not too interesting, because their paths are those of the BM.
In the case of λ → ∞, changing the order of the limits does not matter; the limit is the same for both processes X_{k,λ}(t).
Taking the limits in reverse order in the case of the stationary Poisson alternating 0-1 reward processes X_{k,λ}(t) of Subsection 2.3 yields an even less interesting result, valid for all t ∈ R and k ∈ N, where the limit means stochastic convergence. Thus, in practice, the ΓMOU process occurs in all three cases typically when λ is neither too large nor too small and n/(1/λ) = nλ is large.

Heart rate variability
One of the many stochastic processes in nature in which modelling fractal-like features by long-range dependence has proved to be successful is the time series of the heartbeat intervals. It has been shown ([SBG + 94]) that healthy hearts and diseased hearts produce different patterns of interbeat interval variability, see Figure 5. The graph in the healthy case is patchy, the pseudotrend shows up, nevertheless the process appears to be stationary. On the contrary, the time series in the diseased case is rather nonstationary, even after filtering out some large frequency periodic components, the concomitants of various special heart failures. The trajectory is smoother and much more reminiscent of a random walk than the healthy one. The estimated values of the long-range parameter are h ≈ 1/2 in the healthy case and 1/2 < h ≤ 1, or even h ≈ 1, in the pathologic case. These facts are taken for granted in the literature.
Healthy heart activity requires strong feedbacks, well balanced among themselves, in the various regulating processes of the autonomic peripheral nervous system ([HP96], [PHSG95], [SBG + 94]). These regulating feedback processes can be random coefficient OU processes where the strength of the feedbacks depends on the AR(1) coefficients. [HP96] put the question of what the distribution of the moduli of the AR(1) coefficients should be for the aggregated OU processes to be long-range dependent with parameter h ≈ 1/2: "Perhaps some relatively simple processes are responsible for this puzzling behavior." ([HP96]) In the authors' opinion these relatively simple processes could be either OU processes with Pareto-mixed Gamma distributed moduli of the AR(1) coefficients and independent inputs, see Subsection 2.1, or OU processes with Gamma distributed moduli of the AR(1) coefficients and common inputs, see Subsection 2.2. We know that the limit of the properly scaled aggregated processes is the ΓMOU process in both cases. In Figure 5 the trajectory belonging to a healthy heart looks like a ΓMOU(h, λ) process for large h and small λ, see the upper graph in Figure 2, while that of a diseased heart is like a ΓMOU(h, λ) process for some large λ, see Figure 4.
Modelling the heart interbeat intervals time series by the discretized ΓMOU process makes it possible to explain the heart diseases simply by an unsatisfactory feedback. Thus a small λ involves a large feedback and health, and a large λ involves a small feedback and disease; see also Section 3, Properties 5 and 6, and the interpretation of the figures. Moreover, Property 7 means that after centering to zero, i.e., after subtracting the expectations, the difference between the healthy and the diseased heart interbeat interval time series arises only from the difference of scales. When the two coordinate axes are shrunk and stretched properly, the diseased heart time series seems just as stationary as the healthy one. The estimated values h ≈ 1 for the diseased data set are far too large, since 1/2 < h means nonstationarity, while h ≈ 1 is typical of a random walk. However, obtaining large estimated values for h agrees well with our finding that for a large λ the ΓMOU(h, λ) process is close to the BM, see Property 6 for the precise statement. As λ increases, the ΓMOU(h, λ) process remains stationary and retains its fixed long-range parameter h ∈ (0, 1/2) in spite of the fact that it becomes increasingly similar to the BM; thus it becomes more and more difficult to estimate h. Furthermore, by the scaling Property 7, increasing λ means stretching the process out along the time axis, losing information about large scale properties, while the estimation methods of the long-range parameter are based on just those large scale properties. Thus the authors suggest that it is not h, i.e., not the degree of long-range dependence, but λ, i.e., the scaling, that lies behind the difference between the healthy and the diseased heart functions. It is simply the insufficient feedback that is responsible for the pathologic heart rate variability.
The ΓMOU model can answer the question why the healthy heartbeat time series is so strongly long-range dependent, i.e., why h ≈ 1/2. The argument is based on the semimartingale decomposition, see Property 5. If we also indicate the dependence on the parameters h and λ, the expected value of the total variation of the bounded variation component (20) is given by (28). Increasing λ involves a decrease of the process V_{h,λ}(t) in the sense of (28), and thus the martingale component, i.e., the random walk process B(t), becomes the determining term in (19), and this happens to be the sign of disease. However, (28) also means that the larger the h, the less sensitive the process V_{h,λ}(t), and thus also the ΓMOU(h, λ) process, to an increase of λ. (From (24) and (25) one can come to a similar conclusion with respect to the AR(1) coefficient α and thus to the feedback.) Therefore, the strong long-range dependence may be the result of some evolutionary adaptation process, the effort of the organism to become more resistant to the pathologic condition of an increasing λ, in order to slow down the deterioration.
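The micro-level mechanism invoked above, many AR(1) (discretized OU) feedback loops whose mean-reversion rates are drawn from a Gamma-type distribution, can be sketched numerically. The following is a minimal illustration, not the paper's exact construction: the shape and rate of the mixing distribution, the step size, and the number of copies are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregated_ar1(n_copies=500, n_steps=4000, shape=0.6, rate=1.0, dt=1.0):
    """Aggregate n_copies AR(1) processes x_k[t] = a_k x_k[t-1] + noise,
    where a_k = exp(-alpha_k * dt) and the mean-reversion rates alpha_k
    are Gamma(shape, 1/rate) distributed (illustrative parameters)."""
    alpha = rng.gamma(shape, 1.0 / rate, size=n_copies)
    a = np.exp(-alpha * dt)
    # start each copy from its stationary law: Var x_k = 1 / (1 - a_k^2)
    x = rng.standard_normal(n_copies) / np.sqrt(1.0 - a**2)
    z = np.empty(n_steps)
    for t in range(n_steps):
        x = a * x + rng.standard_normal(n_copies)  # independent inputs
        z[t] = x.sum() / np.sqrt(n_copies)         # scaled aggregate
    return z

z = aggregated_ar1()
```

A single AR(1) copy forgets its past exponentially fast, but the aggregate mixes all decay rates, including arbitrarily slow ones when the Gamma shape parameter is below 1, and it is this mixing that produces the slowly decaying covariance.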

Stochastic differential equations with ΓMOU process input and with FBM input
The fact that the ΓMOU process is a semimartingale (see Property 5) can be utilized mainly for going beyond the linear processes to the chaotic ones. Now we are interested in stationary long-range dependent chaotic processes (see [IT99], [Ter99]). One of the simplest of these processes is the process F(t). Here Y(t), as everywhere in this paper, is the ΓMOU process. We also define the exponential process G(t) of the semimartingale γY(t); G(t) could also be called a geometric ΓMOU process, on the analogy of the geometric BM. The process F(t) appears because it is the stationary one. It is long-range dependent and chaotic at the same time. We briefly outline how the long-range dependence follows. Consider the spectral domain chaotic representation with ω^(k) = (ω_1, ..., ω_k) and Σω^(k) = Σ_{j=1}^k ω_j, where the integrals are multiple Wiener–Itô–Dobrushin integrals, see [IT99], [Ter99]. The terms with different orders k are orthogonal processes, thus the spectral density of F(t) is the spectral density of the first order term plus the sum of the spectral densities of the other terms. The first order term is a constant times Y(t), the spectral density of which has a pole at zero. Since all of the spectral densities are nonnegative, their sum, i.e., the spectral density of F(t), must also have a pole at zero, which means exactly the long-range dependence of the process F(t). The ΓMOU process also leads to an idea to find a natural meaning of stochastic differential equations with FBM input. Let B^(h)(t) be the FBM process. The question is what (29) should mean. There exist various definitions and relevant theories for (29), see [Ber89], [Lyo98], [DH96], [DÜ99], [CD], [Zäh98], [KZ99] and [IT99]. However, in the authors' opinion the following interpretation of (29) is the most natural one.
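The pole at zero of the ΓMOU spectral density, which drives the argument above, can also be seen numerically. Below is a rough sketch under the assumption that the spectral density is (up to normalization) the squared modulus of the Gamma(1 − h, λ) mixture of OU transfer functions 1/(α + iω); the constants and the integration grid are illustrative, not the paper's formula (3) verbatim.

```python
import numpy as np
from math import gamma as gamma_fn

def gmou_spectral_density(omega, h=0.3, lam=1.0):
    """Squared modulus of the Gamma(1-h, lam) mixture of OU transfer
    functions 1/(a + i*omega), integrated on a log grid (illustrative)."""
    a = np.logspace(-8, 2, 20001)                     # mixing variable grid
    pdf = lam**(1 - h) * a**(-h) * np.exp(-lam * a) / gamma_fn(1 - h)
    y = pdf / (a + 1j * omega)                        # mixed transfer function
    m = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(a))   # trapezoid rule
    return abs(m)**2 / (2 * np.pi)

vals = [gmou_spectral_density(w) for w in (1e-3, 1e-2, 1e-1, 1.0)]
```

The values blow up as ω → 0, roughly like |ω|^{-2h}: a pole of order 2h at zero, which is the spectral signature of long-range dependence with parameter h ∈ (0, 1/2).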
Recall that by Property 4, for every initial condition x(t_0) = x_0 (where x_0 is a random variable) and for every λ > 0, (30) a.s. has a well-defined unique solution x_λ(t), for which x_λ(t_0) = x_0. And now comes the key, a result of Sussmann, [Sus78], Theorem 9, according to which, under mild growth and smoothness assumptions prescribed for the functions f and g, the a.s. convergence in C[a, b] of the input processes, as λ → 0, entails the a.s. convergence in C[a, b] of the solution processes. Hence we can clearly define the pathwise solution x(t) of (29) with initial condition x(t_0) = x_0 as the pathwise uniform limit of the solutions of (30). In other words, the pathwise solution x(t) is the element of C[a, b] for which this uniform convergence holds. Strictly speaking, it is not necessary to include the ΓMOU process in this definition. Obviously, the family of processes {J(Y_λ)(t) : λ → 0} might be replaced by any sequence of a.s. continuously differentiable processes a.s. converging to B^(h)(t). The stochastic differential equation (29) refers to the FBM rather than to the ΓMOU process. The role of the ΓMOU process is that its set of scaled integral functions {J(Y_λ)(t) : λ → 0} serves as an example of an a.s. continuously differentiable approximation of the FBM. We note here that for interpreting the stochastic differential equation (29) we did not have to define the integral with respect to B^(h)(t).
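The definition above can be illustrated numerically in the spirit of Sussmann's result: drive an ordinary differential equation with smoother and smoother versions of a fixed continuous path and watch the solutions stabilize in the sup norm. In this rough sketch the driver is a Brownian path, the mollification is a moving average playing the role of J(Y_λ), and the choices of f, g, and all the numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# a fixed continuous driving path on [0, 1]; in the text this role is
# played by the FBM B^(h)(t), approximated by the smooth J(Y_lambda)(t)
N = 20000
dt = 1.0 / N
w = np.concatenate([[0.0], np.cumsum(rng.standard_normal(N) * np.sqrt(dt))])

def smooth(path, width):
    """Moving-average mollification; stands in for J(Y_lambda)."""
    k = max(1, int(width / dt))
    return np.convolve(path, np.ones(k) / k, mode="same")

def solve(driver, x0=1.0):
    """Euler scheme for x' = f(x) + g(x) driver'(t), with illustrative
    f(x) = -x and g(x) = 0.1 cos(x)."""
    x = np.empty(len(driver))
    x[0] = x0
    for i in range(len(driver) - 1):
        dx = -x[i] * dt + 0.1 * np.cos(x[i]) * (driver[i + 1] - driver[i])
        x[i + 1] = x[i] + dx
    return x

# solutions for shrinking mollification widths (lambda -> 0)
sols = [solve(smooth(w, lam)) for lam in (0.05, 0.01, 0.002)]
```

As the mollification width shrinks the solution paths settle toward a common limit, which is exactly the mode of convergence used above to define the pathwise solution of (29).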

Appendix
Lemma 4 Let 0 < α, 0 < β < 1 + α, 0 < λ be real numbers. The density function of the mixture of the Γ(β, θ), θ > 0, distributions by the Pareto density function f_{P(α,λ)}(θ) is given by the displayed formula, in which η_1, η_2 ∼ Γ(1 − h, λ) and they are independent. But then the relevant integral is finite, because h < 1/2. Thus, we have proved that (3) is a spectral density function. Moreover, since P_α-a.s. Z_n(t) is a zero-mean Gaussian process, its finite dimensional distributions are determined by its autocovariance function, that is, eventually by its spectral density. This also applies to Y(t). Hence (2) and (3) imply that P_α-a.s. the finite dimensional distributions of Z_n(t) converge to the corresponding finite dimensional distributions of Y(t).
We know that, since P_α-a.s. the OU processes X_k(t) have P_B-a.s. continuous modifications, so do, P_α-a.s., the processes Z_n(t), n ∈ N.
P_α-a.s. Because of (34), for the convergence of (33) to zero it is necessary and sufficient that the sequence of functions to integrate be uniformly integrable. Because it is uniformly bounded (by 0 and 1), only an integrable dominating function is needed for the uniform integrability. Let us deal with the second statement. Because of the isometric isomorphism between the Hilbert spaces of the time domain transfer functions and the frequency domain ones, the first statement of this theorem, (5), and (6) together yield the frequency domain representation of Y(t), P_α-a.s. Now, using the notation p := 1 − h, we obtain the spectral density of Y(t) by a direct computation whose last equality is the consequence of Lemma 1. The resulting expression for the spectral density is exactly the spectral density of the process we studied in Subsection 2.1, see (3).
being the same. (The lower index indicates the dependence of Y(t) on λ.) In other words, the process converges to the FBM B^(h)(t) uniformly over every compact interval [a, b]. The proof is straightforward, so we include it here: one shows that lim_{λ→0} sup_{t∈[a,b]} |J(Y_λ)(t) − B^(h)(t)| = 0 a.s.

Figure 5: Interbeat intervals from a diseased heart (up) and from a healthy heart (down) from [Buc98].

If we prove the P_α-a.s. tightness of the sequence of distributions induced by Z_n(t), n ∈ N, on C[a, b], then, considering the P_α-a.s. convergence of the finite dimensional distributions of Z_n(t), n ∈ N, to those of Y(t), both the P_α-a.s. weak convergence of Z_n to Y as n → ∞ and the P_B-a.s. continuity of Y(t) follow. Hence, let us now prove the P_α-a.s. tightness. The autocovariance is r_{Z_n}(t) = (1/n) Σ_{k=1}^n r_{X_k}(t) = (1/n) Σ_{k=1}^n e^{-α_k|t|}/(2α_k), therefore E_B (Z_n(t) − Z_n(0))^2 = 2 (r_{Z_n}(0) − r_{Z_n}(t)) = (1/n) Σ_{k=1}^n (1 − e^{-α_k|t|})/α_k (31) P_α-a.s., where E_B is the expectation on (Ω_B, F_B, P_B). Because of the P_α-a.s. Gaussianity and zero mean of Z_n(t), for any δ > 0, E_B |Z_n(t) − Z_n(0)|^{2+δ} = c_1 (E_B (Z_n(t) − Z_n(0))^2)^{1+δ/2} (32) P_α-a.s., where the constant c_1 > 0 does not depend on t or n. Thus from (31), (32) and the inequality 1 − e^{-x} ≤ x we have E_B |Z_n(t) − Z_n(0)|^{2+δ} ≤ c_1 |t|^{1+δ/2} P_α-a.s. If we take the P_α-a.s. stationarity into account, the P_α-a.s. tightness follows from the last inequality. Proof of Theorem 2. We estimate E_B (Y_n(t) − Y(t))^2 P_α-a.s., where E_B denotes the expectation on (Ω_B, F_B, P_B). The autocovariances involved are already known, and the inequality between the arithmetic and geometric means gives the required bound. The P_α-a.s. weak convergence of Y_n to Y in C[a, b] as n → ∞ follows from the P_α-a.s. continuity of Y(t) and from the convergence of the finite dimensional distributions. The latter convergence is the consequence of the P_α-a.s. Gaussianity and the pointwise L^2-convergence.
The relevant integral is finite, P_α-a.s. We have now proved that E_B (Y_n(t) − Y(t))^2 → 0 as n → ∞, P_α-a.s. As this mean square does not depend on t, the first statement of the theorem is also proven.