Fractional Poisson field and Fractional Brownian field: why are they resembling but different?

The fractional Poisson field (fPf) is constructed by considering the number of balls falling down on each point of R^D, when the centers and the radii of the balls are thrown at random following a Poisson point process in R^D × R_+ with an appropriate intensity measure. It provides a simple description of a non-Gaussian random field that is centered, has stationary increments and has the same covariance function as the fractional Brownian field (fBf). The present paper is concerned with specific properties of the fPf, comparing them to their analogues for the fBf. On the one hand, we concentrate on the finite-dimensional distributions, which reveal strong differences between the Gaussian world of the fBf and the Poissonian world of the fPf. We provide two different representations for the marginal distributions of the fPf: as a Chentsov field, and on a regular grid in R^D with a numerical procedure for simulations. On the other hand, we prove that the Hurst index estimator based on quadratic variations, which is commonly used for the fBf, is still strongly consistent for the fPf. However, the computations involved in the proof are very different from the usual ones.


Introduction
In the last decades, a lot of papers have been dedicated to the sum of an infinite number of Poisson sources. The seminal ideas of Mandelbrot of adding Poisson sources in order to get a fractional limit are described for instance in [6]. More recently, this subject became popular for the modeling of Internet traffic and telecommunications (see [7, 12]), providing processes with heavy tails or long-range dependence. In higher dimension, throwing Euclidean balls at random following a specific Poisson distribution for the centers and the radii, and counting how many balls fall down on each point, provides a random field defined on R^D. In [11], with an appropriate scaling, a generalized random field is obtained as an asymptotic limit. It has a Poisson structure and exhibits a kind of self-similarity index H greater than 1/2. The case of H less than 1/2 is studied in [2], where a pointwise representation (F_H(y))_{y ∈ R^D} of the generalized field is given. It is proved that F_H may be written as an integral with respect to a Poisson random measure, and F_H is called the fractional Poisson field (fPf). At this point, the reader should be aware that the fPf we are dealing with has no relation -- except the name -- with the fractional Poisson process introduced in [4], for instance, as a 1D Poisson process in a random time.
Actually, the fPf is of interest in its own right since it is centered, has stationary increments and the same covariance function as the fractional Brownian field (fBf), but is not Gaussian. Moreover, let us mention the opportunity of obtaining many other models following the same scheme. For instance, one can build anisotropic fields by replacing the Euclidean balls by more general convex sets [3], and natural images can be simulated [5].
The present paper focuses on the comparison between both fractional fields, fPf and fBf. It is organized as follows. In the first section, we concentrate on the finite-dimensional distributions of the fPf and on its moments. From this point of view, there are obvious differences between fPf and fBf. We exhibit a representation of F_H similar to the Chentsov one (see [16], Chapter 8). In particular, we establish that all the finite-dimensional distributions are determined by the (D + 1)-dimensional marginal distributions. We also give a representation of the fPf on a finite regular grid Γ ⊂ R^D. We use it to get simulations of the fPf in dimension D = 1. In the second section, we investigate the estimation of the Hurst index H. We prove that a ratio of two different quadratic variations of F_H yields a strongly consistent estimator of H. Note that a similar result holds for the fractional Brownian field, but our proof needs new arguments since we are no longer dealing with a Gaussian framework.
To end this section, let us give the notations used in the sequel. We consider R^D endowed with the Euclidean norm ‖·‖. We write B(x, r) for the closed ball of center x and radius r > 0 with respect to the Euclidean norm. Without any risk of confusion, the notation |·| will denote either the absolute value of a real number or the D-dimensional Lebesgue measure of a measurable subset of R^D. In what follows, we will write V_D for |B(0, 1)|, the volume of the unit Euclidean ball in R^D, and S^{D−1} for the unit sphere in R^D.

Stochastic integral representation.
Let us recall the precise definition of the fractional Poisson field as introduced in [2]. Let H ∈ (0, 1/2) and λ ∈ (0, +∞). We consider Φ_{λ,H} a Poisson point process in R^D × R_+ with intensity measure

ν_{λ,H}(dx, dr) = λ r^{−D−1+2H} dx dr,    (1)

and associate with Φ_{λ,H} a Poisson random measure N_{λ,H} on R^D × R_+ with the same intensity measure.
For any y in R^D, we consider the stochastic integral

F_{λ,H}(y) = ∫_{R^D × R_+} ( 1_{B(x,r)}(y) − 1_{B(x,r)}(0) ) N_{λ,H}(dx, dr),    (2)

and finally we introduce the fractional Poisson field with Hurst index H and intensity λ as the random field F_{λ,H} = (F_{λ,H}(y))_{y ∈ R^D}, which is clearly centered with stationary increments. Heuristically, F_{λ,H}(y) may be seen as the difference between the number of balls B(x, r) with (x, r) ∈ Φ_{λ,H} covering the point y and the number of balls covering the origin. However, the number of balls covering one particular point is infinite. Nevertheless, the stochastic integral (2) is well defined since (x, r) ↦ 1_{B(x,r)}(y) − 1_{B(x,r)}(0) belongs to L^1(R^D × R_+, ν_{λ,H}(dx, dr)). Actually, for any y ∈ R^D, one can find a constant C(y) ∈ (0, +∞) such that for any r ∈ R_+,

|B(y, r) △ B(0, r)| ≤ C(y) min(r^{D−1}, r^D),    (3)

where A △ B stands for the symmetric difference between A and B, two subsets of R^D. Furthermore, for any y ∈ R^D, (x, r) ↦ 1_{B(x,r)}(y) − 1_{B(x,r)}(0) also belongs to L^2(R^D × R_+, ν_{λ,H}(dx, dr)) and, by using the rotation invariance of the Lebesgue measure, we obtain

E( F_{λ,H}(y)^2 ) = ∫_{R^D × R_+} ( 1_{B(x,r)}(y) − 1_{B(x,r)}(0) )^2 ν_{λ,H}(dx, dr) = λ c_H ‖y‖^{2H},    (4)

with c_H = ∫_{R_+} |B(e_1, r) △ B(0, r)| r^{−D−1+2H} dr and e_1 being any point of S^{D−1}. The constant c_H can be explicitly computed in dimension D = 1 as c_H = 2^{1−2H} / (H(1−2H)). In higher dimension, explicit formulas for |B(e_1, r) △ B(0, r)| can be found for instance in [17].
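In dimension D = 1, the symmetric difference of two intervals of radius r centered at 0 and at e_1 = 1 has length min(4r, 2), so that c_H = ∫_0^∞ min(4r, 2) r^{2H−2} dr, whose closed form is the constant stated above. A quick numerical cross-check (the helper names c_H_numeric and c_H_closed_form are ours; a midpoint rule on a geometric grid tames the integrable singularity at r = 0):

```python
import math

def c_H_numeric(H, r_min=1e-12, r_max=1e12, n=200_000):
    """Approximate c_H = integral over (0, inf) of |B(1, r) sym-diff B(0, r)| * r^(2H-2) dr
    in dimension D = 1, where |B(1, r) sym-diff B(0, r)| = min(4r, 2):
    the two intervals of length 2r are disjoint for r <= 1/2 and overlap on 2r - 1 otherwise.
    A midpoint rule on a geometric grid handles the singularity at 0."""
    ratio = (r_max / r_min) ** (1.0 / n)
    total, r_lo = 0.0, r_min
    for _ in range(n):
        r_hi = r_lo * ratio
        r_mid = math.sqrt(r_lo * r_hi)          # geometric midpoint of the panel
        total += min(4.0 * r_mid, 2.0) * r_mid ** (2.0 * H - 2.0) * (r_hi - r_lo)
        r_lo = r_hi
    return total

def c_H_closed_form(H):
    """Closed form c_H = 2^(1-2H) / (H (1-2H)) in dimension D = 1, valid for H in (0, 1/2)."""
    return 2.0 ** (1.0 - 2.0 * H) / (H * (1.0 - 2.0 * H))
```

With the truncation levels above, the two expressions agree to within about one percent on (0.1, 0.4); the agreement degrades near H = 0 and H = 1/2, where the integral concentrates at very small and very large radii respectively.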
Equation (4) shows that the covariance of F_{λ,H} is

Cov( F_{λ,H}(y), F_{λ,H}(z) ) = (λ c_H / 2) ( ‖y‖^{2H} + ‖z‖^{2H} − ‖y − z‖^{2H} ),

which, up to a constant, is the covariance of the fractional Brownian field. Consequently, one can get the fBf through a central limit theorem procedure, starting from copies of the fPf. For such an approach, see [9].
On the other hand, using a Gaussian random measure with control measure ν_{λ,H} instead of the Poisson measure N_{λ,H} in (2) would directly provide a Gaussian field that, up to a constant, is the fractional Brownian field of index H. Let us denote it for a while by B_{λ,H}. Contrarily to this last field, the fPf is neither Gaussian nor self-similar. However, it is second-order self-similar and presents what is sometimes called an aggregate similarity property (see [11]):

( F_{mλ,H}(y) )_{y ∈ R^D}  (fdd)=  ( Σ_{k=1}^m F^{(k)}_{λ,H}(y) )_{y ∈ R^D}  for any positive integer m,    (6)

where the F^{(k)}_{λ,H}, k ≥ 1, are iid copies of F_{λ,H}. The fPf also clearly satisfies the following scaling property:

( F_{λ,H}(ay) )_{y ∈ R^D}  (fdd)=  ( F_{a^{2H}λ,H}(y) )_{y ∈ R^D}  for any a > 0.    (7)

Identities (6) and (7) are also shared by B_{λ,H}, whereas the next proposition, concerning higher-order moments, has no analogue for B_{λ,H}. Actually, for any positive even integer q, E( B_{λ,H}(y)^q ) = (q − 1)!! (λ c_H ‖y‖^{2H})^{q/2}, and for any real number r ≥ 2, E( |B_{λ,H}(y)|^r ) ≍_{‖y‖→0} ‖y‖^{rH} (where the notation f(ε) ≍_{ε→0} g(ε) means that there exist two constants 0 < c < C < +∞ such that c g(ε) ≤ f(ε) ≤ C g(ε) for all ε > 0 small enough, f and g being two positive functions).
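The scaling property can be justified by a one-line change of variables in the intensity measure; the following sketch assumes the intensity ν_{λ,H}(dx, dr) = λ r^{−D−1+2H} dx dr recalled at the beginning of this section.

```latex
% Change of variables x = a x', r = a r' (a > 0) in the intensity measure:
\nu_{\lambda,H}(dx\,dr)
  = \lambda\, r^{-D-1+2H}\, dx\, dr
  = \lambda\, (a r')^{-D-1+2H}\, a^{D}\, dx'\, a\, dr'
  = a^{2H}\,\lambda\, r'^{-D-1+2H}\, dx'\, dr'
  = \nu_{a^{2H}\lambda,\,H}(dx'\,dr').
```

Since a Poisson point process is characterized by its intensity, the image of Φ_{λ,H} under (x, r) ↦ (x/a, r/a) is a Poisson point process with the intensity of Φ_{a^{2H}λ,H}, and a ball B(x, r) covers the point ay exactly when B(x/a, r/a) covers y; hence F_{λ,H}(a·) and F_{a^{2H}λ,H} share the same finite-dimensional distributions.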
(i) For all even integers q ≥ 2, one has

E( F_{λ,H}(y)^q ) = P_q( λ c_H ‖y‖^{2H} ),

where P_q is a polynomial of degree q/2 and valuation 1 (odd-order moments vanish by symmetry).
(ii) For all real numbers r ≥ 2, one has

E( |F_{λ,H}(y)|^r ) ≍_{‖y‖→0} ‖y‖^{2H}.    (8)

Proof. (i) Note that the random variable F_{λ,H}(y) has a symmetric distribution whatever y ∈ R^D is, so that E(F_{λ,H}(y)^q) = 0 if q is odd. Suppose that q = 2p is even. Let us write, for all (y, x, r) ∈ R^D × R^D × R_+,

ψ_y(x, r) = 1_{B(x,r)}(y) − 1_{B(x,r)}(0),

so that ψ_y takes its values in {−1, 0, 1}, ∫ ψ_y^{2k+1} dν_{λ,H} = 0 by symmetry, and ∫ ψ_y^{2k} dν_{λ,H} = λ c_H ‖y‖^{2H} for every integer k ≥ 1 by (4). Then, according to [1] (with the convention that 0^0 = 1), we have

E( F_{λ,H}(y)^{2p} ) = Σ_{(r_1, …, r_p) ∈ I(p)} b_{r_1, …, r_p} Π_{k=1}^p ( ∫ ψ_y^{2k} dν_{λ,H} )^{r_k},

where I(p) = {(r_1, …, r_p) ∈ N^p : Σ_{k=1}^p k r_k = p} and b_{r_1, …, r_p} > 0 for all (r_1, …, r_p) ∈ I(p). Thus, there is a polynomial P_q such that E(F_{λ,H}(y)^q) = P_q(λ c_H ‖y‖^{2H}). Note that 1 ≤ Σ_{k=1}^p r_k ≤ p for all (r_1, …, r_p) ∈ I(p). Thus, P_q(0) = 0 and, by choosing (r_1, …, r_p) = (0, …, 0, 1) ∈ I(p), we see that the valuation of P_q is 1. Finally, by choosing (r_1, …, r_p) = (p, 0, …, 0) ∈ I(p), we see that the degree of P_q is p.
(ii) If r is an even integer, the result follows from point (i). To continue, notice that, for 1 ≤ p ≤ s ≤ p′ and α ∈ [0, 1] such that 1/s = α/p + (1 − α)/p′, Hölder's inequality gives

E( |F(y)|^s )^{1/s} ≤ E( |F(y)|^p )^{α/p} E( |F(y)|^{p′} )^{(1−α)/p′}.    (9)

Let us prove the rhs of (8). Let q be the even integer such that q ≤ r < q + 2. By applying (9) with s = r, p = q and p′ = q + 2, then (i), we obtain

E( |F(y)|^r ) ≤ E( F(y)^q )^{αr/q} E( F(y)^{q+2} )^{(1−α)r/(q+2)} ≤ C ‖y‖^{2H(αr/q + (1−α)r/(q+2))} = C ‖y‖^{2H},

for all y ∈ R^D such that ‖y‖ ≤ min(δ_q, δ_{q+2}), where δ_q > 0 denotes a radius below which the two-sided bound c_q ‖y‖^{2H} ≤ E(F(y)^q) ≤ C_q ‖y‖^{2H} given by (i) holds, and where we used αr/q + (1 − α)r/(q + 2) = 1. Now, let us prove the lhs of (8). Let q be an even integer such that 2 ≤ r ≤ q. By applying (9) with s = q, p = r and p′ = q + 2, then (i), we obtain

c_q ‖y‖^{2H} ≤ E( F(y)^q ) ≤ E( |F(y)|^r )^{αq/r} E( F(y)^{q+2} )^{(1−α)q/(q+2)} ≤ C E( |F(y)|^r )^{αq/r} ‖y‖^{2H(1−α)q/(q+2)},

for all y ∈ R^D such that ‖y‖ ≤ min(δ_q, δ_{q+2}). Hence, since αq/r = 1 − (1 − α)q/(q + 2),

E( |F(y)|^r ) ≥ c ‖y‖^{2H}.

Finally, we have (8) by taking δ_r = min(δ_q, δ_{q+2}).
Since the values of H and λ are fixed in this section, we will no longer mention the dependence on H and λ, and we will drop all the H and λ indices, writing Φ, N, ν, F instead of Φ_{λ,H}, N_{λ,H}, ν_{λ,H}, F_{λ,H}.

Chentsov representation.
We notice that for x, y ∈ R^D and r ∈ R_+ we have

1_{B(x,r)}(y) = 1_{C(y)}(x, r),

when defining C(y), the cone over y, by

C(y) = { (x, r) ∈ R^D × R_+ : ‖x − y‖ ≤ r }.    (10)

A computation similar to the one in (4) gives

ν( C(y) \ C(0) ) = ν( C(0) \ C(y) ) = (λ c_H / 2) ‖y‖^{2H}.

Then, we can write

F(y) = N( C(y) \ C(0) ) − N( C(0) \ C(y) ),    (11)

and observe that F(y) follows a Skellam distribution: it is the difference of two iid Poisson random variables with parameter (λ c_H / 2) ‖y‖^{2H}.
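Since the marginal F(y) is Skellam, its one-dimensional law is straightforward to sample. A minimal sketch in dimension D = 1 (the helper names are ours; the Poisson draws use Knuth's multiplication method, adequate for small means), checking empirically that the mean is 0 and the variance is λ c_H ‖y‖^{2H}:

```python
import math
import random

def poisson_knuth(rng, mean):
    """Sample a Poisson(mean) variable by Knuth's multiplication method."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_fpf_marginal(rng, lam, H, norm_y, c_H):
    """F(y) as the difference of two iid Poisson(lam * c_H * |y|^(2H) / 2) variables."""
    mu = lam * c_H * norm_y ** (2 * H) / 2.0
    return poisson_knuth(rng, mu) - poisson_knuth(rng, mu)

rng = random.Random(0)
lam, H, norm_y = 1.0, 0.25, 1.0
c_H = 2 ** (1 - 2 * H) / (H * (1 - 2 * H))     # closed form for D = 1
samples = [sample_fpf_marginal(rng, lam, H, norm_y, c_H) for _ in range(20_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Skellam law: mean 0, variance lam * c_H * |y|^(2H)
```

The empirical mean and variance land close to 0 and λ c_H ‖y‖^{2H} respectively, and the sampled law is symmetric, in line with the vanishing odd moments used in the proof above.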
This formulation invites us to link the fPf to more general fields G which can be written as

G(y) = M( C(y) \ C(0) ) − M( C(0) \ C(y) ),    (12)

where M is any random measure on R^D × R_+ such that (12) makes sense, and C(y) is the cone over y as in (10). When M is a symmetric α-stable random measure, the resulting field is a so-called 'H-sssis (H self-similar with stationary increments in the strong sense) SαS Chentsov field' as introduced in [16], with the resulting consequence that H ≤ 1/α. Going further, M still being a symmetric α-stable random measure, if we replace the difference in (12) by a sum, the resulting field is a Takenaka random field [18].
We borrow some tricky notations from [15] and use them in the case of M being a Poisson random measure. Meanwhile, we get a representation for the fdd's of F. For any finite tuple T = (t_i)_{i ∈ I} of points of R^D and any map e : I → {0, 1}, we define

C(T, e) = ∩_{i ∈ I} C(t_i)^{e(i)},    (13)

where C(y) still stands for the cone over y and the following convention is used: C(y)^1 = C(y) and C(y)^0 = C(y)^c. Let y_1, y_2, …, y_m be fixed in R^D \ {0} and write T = (y_1, y_2, …, y_m). We also denote T̃ = (0, y_1, y_2, …, y_m) and E_m = {e : [[0, m]] → {0, 1}}, so that, using (13), for any k = 1, …, m,

C(y_k) \ C(0) = ∪ { C(T̃, e) ; e ∈ E_m, e(0) = 0, e(k) = 1 },   C(0) \ C(y_k) = ∪ { C(T̃, e) ; e ∈ E_m, e(0) = 1, e(k) = 0 },

the unions being disjoint.
Hence, using (11), we obtain a representation of the random vector (F(y_1), …, F(y_m)) as stated in the next proposition. Actually, for the random fields defined by (12) with a Poisson random measure M, the following proposition holds. It should be compared with the fact that all the fdd's of a Gaussian field are determined by the family of the 2-dimensional marginal distributions.
Proposition 1.3. Let G be defined by (12), where M is a Poisson random measure. Let y_1, y_2, …, y_m be m points in R^D \ {0} with m > D. Then the distribution of (G(y_1), …, G(y_m)) is determined by the (D + 1)-dimensional marginal distributions of G.
A similar result was originally established by Sato in [15] for Takenaka fields. We will not detail the proof of Proposition 1.3 since ideas similar to the original ones can be used in our case. As a consequence of Proposition 1.3, if a field G associated with an unknown Poisson measure M has the same (D + 1)-dimensional marginal distributions as the fPf F, then realizations of G may be obtained by choosing M as the particular Poisson measure with intensity (1).

Representation on a grid.
Let us fix 0 < δ < R and consider the finite subset of R^D with J_{R,δ} ∈ N points

Γ_{R,δ} = B(0, R) ∩ δZ^D = { y_j ; 1 ≤ j ≤ J_{R,δ} }.

We discuss here the possibility of representing the discrete field (F(y))_{y ∈ Γ_{R,δ}} by a simpler field which could be more relevant for the structure of F. The idea is to come back to the number of balls B(x, r) falling down on the points of Γ_{R,δ}. For any fixed y ∈ R^D, the function (x, r) ↦ 1_{B(x,r)}(y) is not integrable with respect to ν given by (1), due to the high number of very large balls. It is possible to classify the balls according to their influence on the finite set Γ_{R,δ}. Notice that 0 ∈ Γ_{R,δ}. One can check that, for all (x, r), one of the following occurs: B(x, r) contains B(0, R) or does not intersect B(0, R), in which case 1_{B(x,r)}(y) − 1_{B(x,r)}(0) = 0 for every y ∈ Γ_{R,δ}; or B(x, r) does not cover B(0, R) but has a non-empty intersection with B(0, R), so that 1_{B(x,r)}(y) − 1_{B(x,r)}(0) ∈ {−1, 0, 1}. Each type of ball corresponds to a Poisson point process (PPP) with a suitable intensity and, by superposition, the original PPP Φ corresponds to their independent union. Only the balls that have a non-trivial intersection with B(0, R) are of interest. They are related to a PPP in R^D × R_+ whose intensity measure ν_0 is the restriction of ν to { (x, r) : B(x, r) ∩ B(0, R) ≠ ∅ and B(0, R) ⊄ B(x, r) }. In order to deal separately with the balls with large radii (greater than δ/2), we use the independence and superposition properties, splitting the intensity ν_0 as ν^(1) + ν^(2), with ν^(1) the restriction of ν_0 to R^D × [δ/2, +∞) and ν^(2) the remainder.
Balls with large radii. Let us consider a global PPP Φ^(1) with intensity ν^(1). The number of associated balls is a.s. finite and Poisson distributed with parameter

λ_1 = ν^(1)( R^D × [δ/2, +∞) ) = λ ∫_{δ/2}^{+∞} C_1(r) r^{−D−1+2H} dr,

where C_1(r) denotes the Lebesgue measure of the set of centers x such that B(x, r) meets B(0, R) without containing it. Note that, since R is fixed, as r tends to infinity, C_1(r) r^{−D−1+2H} behaves like r^{−2+2H}. Hence, since H < 1/2, the last integral converges and λ_1 < ∞. Therefore we can decompose the intensity measure ν^(1)(dx, dr) as the product of the distribution of the radii and the distribution of the centers conditionally on the radii. Thus we define a random field T^(1) by

T^(1)(y_j) = Σ_{n=1}^{Λ^(1)} 1_{B(x_n, r_n)}(y_j),

where Λ^(1) is a Poisson random variable with parameter λ_1, the radii r_n are iid with distribution λ λ_1^{−1} C_1(r) r^{−D−1+2H} dr, and, given r_n, the center x_n is distributed in R^D according to the conditional density (with respect to the Lebesgue measure) proportional to the indicator of { x : B(x, r_n) ∩ B(0, R) ≠ ∅ and B(0, R) ⊄ B(x, r_n) }.

Balls with small radii. Now we focus on the intensity measure ν^(2)(dx, dr), supported by R^D × [0, δ/2). For (x, r) with r < δ/2, either (x, r) ∈ ∩_j C(y_j)^c and the ball B(x, r) has no contribution on the set Γ_{R,δ}, or (x, r) ∈ C(y_j) for exactly one j. Indeed, since ‖y_j − y_i‖ ≥ δ for all pairs (y_i, y_j) of different points in Γ_{R,δ}, the J_{R,δ} sets (R^D × [0, δ/2)) ∩ C(y_j) are disjoint. Therefore the PPP with intensity ν^(2) is the superposition of J_{R,δ} independent PPP's Φ^(2)_1, …, Φ^(2)_{J_{R,δ}}, where Φ^(2)_j has intensity the restriction of ν^(2) to C(y_j), and the balls B(x, r) associated with Φ^(2)_j cover y_j and no other point of the grid. Their number is Λ^(2)_j, a Poisson random variable with parameter λ V_D (δ/2)^{2H} / (2H). To conclude, we define a random field T^(2) over Γ_{R,δ} by T^(2)(y_j) = Λ^(2)_j (note that the T^(2)(y_j) are independent). Finally, by superposing all the previous independent PPP's and by adding their related fields, we obtain the following proposition.

Proposition 1.4. The field (F(y))_{y ∈ Γ_{R,δ}} has the same distribution as (T^(1)(y) − T^(1)(0) + T^(2)(y) − T^(2)(0))_{y ∈ Γ_{R,δ}}.
This description shows that the restriction to Γ_{R,δ} of the field F is essentially made up of:
- a field T^(1) which is a simple 'ball-counting field': random balls are built by picking the radii first in [δ/2, ∞) and the centers next; then T^(1)(y_j) counts the number of these balls above each y_j;
- a field T^(2) whose values at the points y_j form a collection of iid Poisson random variables with parameter λ V_D (δ/2)^{2H} / (2H).
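The Poisson parameter of the Λ^(2)_j is simply ν({(x, r) : y_j ∈ B(x, r), r < δ/2}) = λ ∫_0^{δ/2} V_D r^D · r^{−D−1+2H} dr = λ V_D (δ/2)^{2H} / (2H). A quick numerical check of this identity (the helper names are ours; the quadrature reuses a geometric-grid midpoint rule to handle the singular density near r = 0):

```python
import math

def unit_ball_volume(D):
    """V_D = pi^(D/2) / Gamma(D/2 + 1)."""
    return math.pi ** (D / 2) / math.gamma(D / 2 + 1)

def small_ball_mass_numeric(lam, H, delta, D=1, r_min=1e-12, n=100_000):
    """Integrate lam * V_D * r^D * r^(-D-1+2H) = lam * V_D * r^(2H-1) over (0, delta/2)
    with a midpoint rule on a geometric grid."""
    V_D = unit_ball_volume(D)
    ratio = ((delta / 2) / r_min) ** (1.0 / n)
    total, r_lo = 0.0, r_min
    for _ in range(n):
        r_hi = r_lo * ratio
        r_mid = math.sqrt(r_lo * r_hi)
        total += lam * V_D * r_mid ** (2 * H - 1) * (r_hi - r_lo)
        r_lo = r_hi
    return total

def small_ball_mass_closed(lam, H, delta, D=1):
    """Closed form lam * V_D * (delta/2)^(2H) / (2H)."""
    return lam * unit_ball_volume(D) * (delta / 2) ** (2 * H) / (2 * H)
```

For moderate H the two values agree to high accuracy; for H very close to 0 the mass concentrates at tiny radii and the lower truncation r_min must be reduced accordingly.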
In Figure 1 we show exact simulations of the fPf and the fBf on [0, 1] ∩ δZ with δ = 2^{−11} for different values of H. The fPf is simulated by using Proposition 1.4 and the fBf is obtained with the Circulant Embedding Method (see [14]).

Estimation of the H index
Quadratic variations are successfully used in the fractional Brownian motion framework to build estimators of the Hurst index [10, 8]. When considering B_H a fractional Brownian field on R^D, D ≥ 2, the results of the one-dimensional setting may be applied, using the fact that the line processes {B_H(t_0 + tθ) − B_H(t_0); t ∈ R} are one-dimensional fractional Brownian motions. We consider the same estimators in our non-Gaussian context. Similarly, by computing its characteristic function, one can prove that the line process {F_{λ,H}(t_0 + tθ) − F_{λ,H}(t_0); t ∈ R} is equal in law to a one-dimensional fractional Poisson process with Hurst parameter H and intensity λ ∫_{R^{D−1}} (1 − ‖y‖^2)^{1/2−H} 1_{‖y‖≤1} dy. Therefore, in the rest of this section, we assume that D = 1.
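For D = 2, the intensity factor of the line process reduces to ∫_{−1}^{1} (1 − y²)^{1/2−H} dy, which equals √π Γ(3/2 − H) / Γ(2 − H) by the Beta integral B(1/2, 3/2 − H). A numerical cross-check of this identity (the function names are ours):

```python
import math

def line_intensity_factor_numeric(H, n=200_000):
    """Trapezoid rule for the D = 2 factor: integral of (1 - y^2)^(1/2 - H) over [-1, 1]."""
    a = 0.5 - H                         # exponent, in (0, 1/2) for H in (0, 1/2)
    h = 2.0 / n
    total = 0.0
    for i in range(n + 1):
        y = -1.0 + i * h
        w = 0.5 if i in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * max(1.0 - y * y, 0.0) ** a * h
    return total

def line_intensity_factor_closed(H):
    """Beta integral: sqrt(pi) * Gamma(3/2 - H) / Gamma(2 - H)."""
    return math.sqrt(math.pi) * math.gamma(1.5 - H) / math.gamma(2.0 - H)
```

The integrand vanishes at y = ±1 (the max(…, 0) guard only absorbs floating-point rounding there), so the trapezoid rule converges comfortably; multiplying the result by λ gives the intensity of the induced one-dimensional fractional Poisson process.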

Quadratic variations.
For a positive integer u, we consider the quadratic variations of F_{λ,H} with step u:

V_{λ,n}(u) = (1/n) Σ_{k=0}^{n−1} ( F_{λ,H}(k + u) − F_{λ,H}(k) )^2.

Note that, by stationarity, one has E( V_{λ,n}(u) ) = E( (F_{λ,H}(u) − F_{λ,H}(0))^2 ) = λ c_H u^{2H}.

Theorem 2.1. Let u be a positive integer. Then, there exist v_{1,u}(H) > 0 and v_{2,u}(H) > 0 such that, as n → +∞,

n Var( V_{λ,n}(u) ) → λ v_{1,u}(H) + λ^2 v_{2,u}(H),

and consequently V_{λ,n}(u) → λ c_H u^{2H} almost surely.

Proof. In order to compute the variance of V_{λ,n}(u), we follow the framework of [13]. We can write F_{λ,H}(k + u) − F_{λ,H}(k) = I_1(ψ_{u,k}), the Wiener-Itô integral, with respect to the compensated Poisson random measure N_{λ,H} − ν_{λ,H} on R × R_+, of the kernel function

ψ_{u,k} = 1_{C(k+u)} − 1_{C(k)},

where C(·) is the cone defined by (10). Since H is fixed, we simply write λν for ν_{λ,H} in the sequel. According to the product formula (see Equation (14) of [13]), we have

I_1(ψ_{u,k})^2 = I_2( ψ_{u,k} ⊗ ψ_{u,k} ) + I_1( ψ_{u,k}^2 ) + ∫ ψ_{u,k}^2 d(λν).

Therefore, by linearity,

Var( V_{λ,n}(u) ) = (1/n^2) Σ_{k,l=0}^{n−1} ( 2 λ^2 ( ∫ ψ_{u,k} ψ_{u,l} dν )^2 + λ ∫ ψ_{u,k}^2 ψ_{u,l}^2 dν ).

Let us compute the first term. By stationarity it suffices to study ∫ ψ_{u,k} ψ_{u,0} dν. We set T = (0, u, k, u + k) so that, according to (13), we can write the integrand as a signed sum of indicator functions of the sets C(T, (0, 1, 0, 1)), C(T, (0, 1, 1, 0)), C(T, (1, 0, 0, 1)) and C(T, (1, 0, 1, 0)). When |k| > u, each of them is empty except C(T, (0, 1, 1, 0)) (see the figure below), and one obtains that Σ_k ( ∫ ψ_{u,k} ψ_{u,0} dν )^2 converges, which yields the constant v_{2,u}(H). Now, let us compute the second term: using the fact that ψ_{u,k} takes its values in {−1, 0, 1}, one obtains similarly that Σ_k ∫ ψ_{u,k}^2 ψ_{u,0}^2 dν converges, which yields v_{1,u}(H). This proves the first assertion of Theorem 2.1. Now, let us turn to the almost sure convergence. By Markov's inequality, for all ε > 0, the probabilities P( |V_{λ,n}(u) − λ c_H u^{2H}| > ε ) are summable in n. Therefore, by the Borel-Cantelli lemma, (V_{λ,n}(u))_{n ≥ 1} converges a.s. to λ c_H u^{2H}. This concludes the proof.

Estimation of the H index on a fixed interval.
We assume here that F_{λ,H} is observed on a fixed interval. Instead of considering V_{λ,n}(u), we work with W_{λ,n}(u), the analogous quadratic variation computed from the increments with step u 2^{−n} along the dyadic grid of mesh 2^{−n}. Observe that E( W_{λ,n}(u) ) = λ c_H 2^{−2nH} u^{2H} → 0 as n tends to infinity. However, one can still build an estimator of H from a ratio of two such quadratic variations and state the following theorem. We illustrate numerically Theorem 2.2 by performing, on [0, 1] ∩ δZ with δ = 2^{−11}, 100 realizations of the fields F_{λ,H} and B_{λ,H} with λ = 1, with 9 values of H from 0.05 to 0.45 and with two different choices of (u, v): (u, v) = (1, 2) and (u, v) = (1, 4) (see Figure 2). We remark that (u, v) = (1, 4) seems to be a better choice. Moreover, contrarily to the fBf, the standard deviation obtained for the fPf depends on H, which is consistent with the fact that the variance given by (20) also depends on H. In particular, the standard deviation increases when H goes to 1/2.
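In substance, the ratio estimator reads Ĥ = log( W_{λ,n}(v) / W_{λ,n}(u) ) / (2 log(v/u)); this exact form is our reading of the scaling E(W_{λ,n}(u)) ∝ u^{2H}, not a quotation of Theorem 2.2. The following self-contained sketch tests it on a fractional Brownian motion simulated by a Cholesky factorization of the covariance (a simpler, O(n³), alternative to the circulant embedding used for Figure 1); all function names are ours.

```python
import math
import random

def fbm_cholesky(H, n, seed=0):
    """Sample (B(k/n))_{k=1..n} of a fractional Brownian motion with Hurst index H
    via a Cholesky factorization of Cov(B(s), B(t)) = (s^2H + t^2H - |s-t|^2H)/2."""
    t = [(k + 1) / n for k in range(n)]
    cov = [[0.5 * (t[i] ** (2 * H) + t[j] ** (2 * H) - abs(t[i] - t[j]) ** (2 * H))
            for j in range(n)] for i in range(n)]
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):                      # standard Cholesky, lower triangular
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][i] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(n)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(n)]

def quad_var(path, u):
    """Mean squared increment with step u on the observed grid."""
    d = [(path[k + u] - path[k]) ** 2 for k in range(len(path) - u)]
    return sum(d) / len(d)

def hurst_ratio_estimator(path, u=1, v=4):
    """Ratio estimator: quad_var(v)/quad_var(u) scales like (v/u)^(2H)."""
    return math.log(quad_var(path, v) / quad_var(path, u)) / (2 * math.log(v / u))
```

With H = 0.3 and n = 256 observation points, the estimate typically lands within roughly 0.1 of the true value for a single path; the choice (u, v) = (1, 4) mirrors the better of the two choices reported above.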