Noise Reinforced Lévy Processes: Lévy-Itô Decomposition and Applications

A step reinforced random walk is a discrete-time process with memory: at each time step, with fixed probability $p \in (0,1)$, it repeats a previously performed step chosen uniformly at random, while with complementary probability $1-p$ it performs an independent step with fixed law. In the continuum, the main result of Bertoin in [7] states that the random walk constructed from the discrete-time skeleton of a Lévy process for a time partition of mesh size $1/n$ converges, as $n \uparrow \infty$, in the sense of finite-dimensional distributions to a process $\hat{\xi}$ referred to as a noise reinforced Lévy process. Our first main result states that a noise reinforced Lévy process has rcll paths and satisfies a $\textit{noise reinforced}$ Lévy-Itô decomposition in terms of the $\textit{noise reinforced}$ Poisson point process of its jumps. We introduce the joint distribution of a Lévy process and its reinforced version $(\xi, \hat{\xi})$ and show that the pair formed by the skeleton of the Lévy process and its step reinforced version converges towards $(\xi, \hat{\xi})$ as the mesh size tends to $0$. As an application, we analyse the rate of growth of $\hat{\xi}$ at the origin and identify its main features as an infinitely divisible process.


Introduction
The Lévy-Itô decomposition is one of the main tools for the study of Lévy processes. In short, any real Lévy process ξ has rcll sample paths and its jump process induces a Poisson random measure, called the jump measure N of ξ, whose intensity is described by its Lévy measure Λ. Moreover, it states that ξ can be written as the sum $\xi_t = \xi^{(1)}_t + \xi^{(2)}_t + \xi^{(3)}_t$, $t \ge 0$, of three processes of radically different nature. More precisely, the continuous part of ξ is given by $\xi^{(1)} = (at + qB_t : t \ge 0)$ for a Brownian motion B and reals a, q, while $\xi^{(2)}$ is a compound Poisson process with jump sizes greater than 1 and $\xi^{(3)}$ is a purely discontinuous martingale with jump sizes smaller than 1. Moreover, the processes $\xi^{(2)}$, $\xi^{(3)}$ can be reconstructed from the jump measure N. It is well known that N is characterised by the following two properties: for any Borel set A with Λ(A) < ∞, the counting process of jumps $\Delta\xi_s \in A$, which we denote by $N_A$, is a Poisson process with rate Λ(A); and for any disjoint Borel sets $A_1, \dots, A_k$ with $\Lambda(A_i) < \infty$, the corresponding Poisson processes $N_{A_1}, \dots, N_{A_k}$ are independent. We refer e.g. to [5, 16, 23] for a complete account of the theory of Lévy processes.
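To keep a concrete picture of this decomposition in mind, it can be mimicked in simulation. The sketch below is illustrative only (the function name is ours, and the compensated small-jump part $\xi^{(3)}$ is truncated away); it assembles one increment from a drift, a Gaussian part and compound-Poisson jump classes:

```python
import math, random

def levy_increment(dt, a, q, jumps, rng=random):
    """One increment over a time step dt of a Lévy process, assembled from
    the parts of the Lévy-Itô decomposition (the compensated small-jump
    part is truncated in this sketch): drift a*dt, Gaussian part q*dB, and
    compound-Poisson parts given as (rate, sampler) pairs."""
    inc = a * dt + q * rng.gauss(0.0, math.sqrt(dt))
    for rate, sampler in jumps:
        # jumps of this class arrive with exponential inter-arrival times
        # of parameter `rate`; count those falling inside the step dt
        t = rng.expovariate(rate)
        while t <= dt:
            inc += sampler()
            t += rng.expovariate(rate)
    return inc
```

For instance, with drift a = 1, no Gaussian part and jumps of size 1/2 at rate 2, the mean increment over dt = 0.1 is (1 + 2 · 0.5) · 0.1 = 0.2.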
In this work, we shall give an analogous description for noise reinforced Lévy processes (abbreviated NRLPs). This family of processes has been recently introduced by Bertoin in [7] and corresponds to weak limits of step reinforced random walks built from skeletons of Lévy processes. In order to be more precise, let us briefly recall the connection between these discrete objects and our continuous-time setting. Fix a Lévy process ξ and denote, for each fixed n, by $X^{(n)}_k := \xi_{k/n} - \xi_{(k-1)/n}$ the k-th increment of ξ for a partition of mesh size 1/n. The process of partial sums $(S^{(n)}_k)_{k \ge 1}$ is a random walk, also called the n-skeleton of ξ. Now, fix a real number p ∈ (0, 1) that we call the reinforcement or memory parameter and let $\hat{S}^{(n)}_1 := X^{(n)}_1$. Then, define recursively $\hat{S}^{(n)}_k$ for k ≥ 2 according to the following rule: for each k ≥ 2, set $\hat{S}^{(n)}_k := \hat{S}^{(n)}_{k-1} + \hat{X}^{(n)}_k$, where, with probability 1 − p, the step $\hat{X}^{(n)}_k$ is a new independent step with law $\xi_{1/n}$, and hence independent from the previously performed steps, while with probability p, $\hat{X}^{(n)}_k$ is a step chosen uniformly at random from the previous ones $\hat{X}^{(n)}_1, \dots, \hat{X}^{(n)}_{k-1}$.
When the former occurs, the step is called an innovation, while in the latter case it is referred to as a reinforcement. The process $(\hat{S}^{(n)}_k)$ is called the step-reinforced version of $(S^{(n)}_k)$. It was shown in [7] that, under appropriate assumptions on the memory parameter p, we have convergence in the sense of finite-dimensional distributions, as the mesh size tends to 0, towards a process $\hat{\xi}$ identified in [7] and called a noise reinforced Lévy process. It should be noted that the process $\hat{\xi}$ constructed in [7] is a priori not even rcll, and this will be one of our first concerns.
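The reinforcement dynamics just described translate directly into a simulation. The sketch below is illustrative (the function name is ours, and Gaussian steps are chosen purely as an example of a fixed step law):

```python
import random

def step_reinforced_walk(n_steps, p, step_sampler=lambda: random.gauss(0.0, 1.0),
                         rng=random):
    """Simulate a step reinforced random walk: at each step, with
    probability p repeat a uniformly chosen previous step (a
    'reinforcement'); otherwise draw a fresh independent step with the
    fixed law (an 'innovation')."""
    steps = [step_sampler()]                 # the first step is an innovation
    for _ in range(n_steps - 1):
        if rng.random() < p:
            steps.append(rng.choice(steps))  # reinforcement
        else:
            steps.append(step_sampler())     # innovation
    # the walk itself is the sequence of partial sums
    walk, s = [], 0.0
    for x in steps:
        s += x
        walk.append(s)
    return walk
```

In the degenerate case p = 1 every step repeats the first one, so the walk is linear; p = 0 recovers an ordinary random walk.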
We are now in position to briefly state the main results of this work. First, we shall prove the existence of a rcll modification of $\hat{\xi}$. In particular, this allows us to consider the jump process $(\Delta\hat{\xi}_s)$; a proper understanding of its nature will be crucial for this work. In this direction, we introduce a new family of random measures in $\mathbb{R}_+ \times \mathbb{R}$ of independent interest, under the name noise reinforced Poisson point processes (abbreviated NRPPPs), and we study their basic properties. This leads us to our first main result, which is a version of the Lévy-Itô decomposition in the reinforced setting. More precisely, we show that the jump measure of $\hat{\xi}$ is a NRPPP and that $\hat{\xi}$ can be written as the sum $\hat{\xi}^{(1)} + \hat{\xi}^{(2)} + \hat{\xi}^{(3)}$, where now $\hat{\xi}^{(1)} = (at + q\hat{B}_t : t \ge 0)$ for a continuous Gaussian process $\hat{B}$, the process $\hat{\xi}^{(2)}$ is a reinforced compound Poisson process with jump sizes greater than one, while $\hat{\xi}^{(3)}$ is a purely discontinuous semimartingale. The continuous Gaussian process $\hat{B}$ is the so-called noise reinforced Brownian motion, a Gaussian process introduced in [8] with law singular with respect to that of B, arising as the universal scaling limit of noise reinforced random walks when the law of the typical step is in $L^2(\mathbb{P})$, and hence playing the role of Brownian motion in the reinforced setting; see also [4] for related results. Needless to say, if the starting Lévy process ξ is a Brownian motion, the limit $\hat{\xi}$ obtained in (1.1) is a noise reinforced Brownian motion. As in the non-reinforced case, $\hat{\xi}^{(2)}$ and $\hat{\xi}^{(3)}$ can be recovered from the jump measure $\hat{N}$, but in contrast, they are not Markovian. The terminology used for the jump measure of $\hat{\xi}$ is justified by the following remarkable property: for any Borel set A with Λ(A) < ∞, the counting process of jumps $\Delta\hat{\xi}_s \in A$, which we denote by $\hat{N}_A$, is a reinforced Poisson process and, more precisely, it has the law of the noise reinforced version of $N_A$ (hence, the terminology $\hat{N}_A$ is consistent). Moreover, for any disjoint Borel sets $A_1, \dots, A_k$ with $\Lambda(A_i) < \infty$, the corresponding $\hat{N}_{A_1}, \dots, \hat{N}_{A_k}$ are independent noise reinforced Poisson processes. Informally, the reinforcement induces memory on the jumps of $\hat{\xi}$, and these are repeated at the jump times of an independent counting process. When working on the unit interval, this counting process is the so-called Yule-Simon process.
The second main result of this work consists in defining, pathwise, the noise reinforced version $\hat{\xi}$ of the Lévy process ξ. We always denote such a pair by $(\xi, \hat{\xi})$. This is mainly achieved by transforming the jump measure of ξ into a NRPPP, by a procedure that can be interpreted as the continuous-time analogue of the reinforcement algorithm we described for random walks. More precisely, the steps $X^{(n)}_k$ of the n-skeleton are replaced by the jumps $\Delta\xi_s$ of the Lévy process; each jump of ξ is shared with its reinforced version $\hat{\xi}$ with probability 1 − p, while with probability p it is discarded and remains independent of $\hat{\xi}$. We then proceed to justify our construction by showing that the pair formed by the skeleton of ξ and its reinforced version converges weakly towards $(\xi, \hat{\xi})$, strengthening (1.1) considerably. Section 6 is devoted to applications: on the one hand, in Section 6.1 we study the rates of growth at the origin of $\hat{\xi}$ and prove that well-known results established by Blumenthal and Getoor in [9] for Lévy processes still hold for NRLPs. On the other hand, in Section 6.2 we analyse NRLPs under the scope of infinitely divisible processes in the sense of [21]. We shall give a proper description of $\hat{\xi}$ in terms of the usual terminology of infinitely divisible processes, as well as an application, by making use of the so-called Isomorphism theorem for infinitely divisible processes.
Let us mention that in the discrete setting, reinforcement of processes and models has been a subject of active research for a long time; see for instance the survey by Pemantle [19] as well as e.g. [6, 3, 1, 18, 2, 11] and references therein for related work. However, reinforcement of continuous-time stochastic processes, which is the topic of this work, remains a rather unexplored subject.
The rest of the work is organised as follows: in Section 2 we recall the basic building blocks needed for the construction of NRLPs and recall the main results that will be needed. Notably, we give a brief overview of the features of the Yule-Simon process and present some important examples of NRLPs. In Section 3 we show that a NRLP has a rcll modification. In Section 4 we construct NRPPPs, study their main properties of interest, and in Section 4.3 we prove that the jump measure of a NRLP is a NRPPP, a result that we refer to as the "reinforced Lévy-Itô decomposition". In Section 5 we show that the pair formed by the n-skeleton of a Lévy process and its reinforced version converges in distribution, as the mesh size tends to 0, towards $(\xi, \hat{\xi})$. To achieve this, we first prove in Section 5.1 that a NRLP can be reconstructed from its jump measure, a result that we refer to as the "reinforced Lévy-Itô synthesis". Making use of this result, in Section 5.2 we define the joint law $(\xi, \hat{\xi})$ and in Section 5.3 we establish our convergence result. Finally, Section 6 is devoted to applications. Particular attention is given throughout this work to comparing, when possible and pertinent, our results for NRLPs with the classical ones for Lévy processes.

Preliminaries

Yule-Simon processes
In this section, we recall several results from [7] concerning Yule-Simon processes that are needed for defining NRLPs. These results will be used frequently in this work and are restated for ease of reading.
A Yule-Simon process on the interval [0, 1] is a counting process, started from 0, with first jump time uniformly distributed in [0, 1], and behaving afterwards as a (deterministically) time-changed standard Yule process. More precisely, for fixed p ∈ (0, 1), if U is a uniform random variable in [0, 1] and Z a standard Yule process,
$$Y(t) := \mathbf{1}_{\{U \le t\}} Z_{p(\ln(t) - \ln(U))}, \qquad t \in [0, 1], \tag{2.1}$$
is a Yule-Simon process with parameter 1/p. Its law in D[0, 1], the space of R-valued rcll functions on the unit interval endowed with the Skorokhod topology, will be denoted by Q. It readily follows from the definition that this is a time-inhomogeneous Markov process, with time-dependent birth rates given at time t by $\lambda_0(t) = 1/(1-t)$ and $\lambda_k(t) = pk/t$ for k ∈ {1, 2, ...}. Remark as well that we have $P(Y(t) \ge 1) = t$. In our work, only p ∈ (0, 1) will be used, and it always corresponds to the reinforcement parameter. The Yule-Simon process with parameter 1/p is closely related to the Yule-Simon distribution with parameter 1/p, i.e. the probability measure supported on {1, 2, ...} with probability mass function given in terms of the Beta function B(x, y) by
$$P(Y = k) = \frac{1}{p}\, \mathrm{B}(k, 1/p + 1), \qquad k \in \{1, 2, \dots\}. \tag{2.2}$$
The relation with the Yule process is simply that Y(1) is distributed Yule-Simon with parameter 1/p. In this work, we refer to p ∈ (0, 1) as a reinforcement or memory parameter, for reasons that will be explained shortly. In the following lemma we state for further use the conditional self-similarity property of the Yule-Simon process, a key feature that will be used frequently.
In particular, conditionally on {Y(t) ≥ 1}, Y(t) is distributed Yule-Simon with parameter 1/p, and it follows that for every t ∈ [0, 1], Y(t) has finite moments only of order r < 1/p. Moreover, by the previous lemma and the Markov property of the standard Yule process Z, we deduce a corresponding identity for a Yule-Simon process Y with parameter 1/p, p ∈ (0, 1), and k ≥ 1. More details on these statements can be found in Section 2 of [7].
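The definition (2.1) translates directly into a sampler for the one-dimensional marginals. The following sketch is illustrative (the function names are ours) and simulates the standard Yule process through its exponential holding times:

```python
import math, random

def yule_at(s, rng=random):
    """Value at time s of a standard Yule process started from 1: with k
    individuals, each giving birth at rate 1, the holding time in state k
    is Exp(k)."""
    t, k = rng.expovariate(1), 1
    while t <= s:
        k += 1
        t += rng.expovariate(k)
    return k

def yule_simon_at(t, p, rng=random):
    """Sample Y(t) = 1_{U <= t} * Z_{p(ln t - ln U)} for the Yule-Simon
    process with parameter 1/p on [0, 1], following (2.1)."""
    u = rng.random()                  # first jump time, uniform on [0, 1]
    if u > t:
        return 0
    return yule_at(p * (math.log(t) - math.log(u)), rng)
```

As a sanity check, the identity P(Y(t) ≥ 1) = t from the text can be verified by Monte Carlo.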

Noise reinforced Lévy processes
Now, we turn our attention to the main ingredients involved in the construction of NRLPs. For the rest of the section, fix a real-valued Lévy process ξ with characteristic triplet (a, q², Λ), where Λ is the Lévy measure, and recall that its characteristic exponent $\Psi(\lambda) := \log E[e^{i\lambda\xi_1}]$ is given by the Lévy-Khintchine formula. The constraints on the reinforcement parameter p are given in terms of the following two indices introduced by Blumenthal and Getoor: the Blumenthal-Getoor (upper) index β(Λ) of the Lévy measure Λ is defined as
$$\beta(\Lambda) := \inf\Big\{ r > 0 : \int_{(-1,1)} |x|^r \, \Lambda(dx) < \infty \Big\},$$
while the Blumenthal-Getoor index β of the Lévy process ξ is defined through its characteristic exponent. When ξ has no Gaussian component, we have β = β(Λ) and both notations will be used indifferently. We say that a memory parameter p ∈ (0, 1) is admissible for the triplet (a, q², Λ) if pβ < 1. Now, fix p an admissible memory parameter for ξ. If $(S^{(n)}_k)$ is the n-skeleton of the Lévy process ξ, the sequence of reinforced versions with parameter p converges in the sense of finite-dimensional distributions, as the mesh size tends to 0, towards a process whose law was identified in [7] and called the noise reinforced Lévy process $\hat{\xi}$ of characteristics (a, q², Λ, p).
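As a concrete example (standard, and stated here only for illustration): for a symmetric α-stable Lévy measure the index can be computed directly,

```latex
\Lambda(dx) = c\,|x|^{-1-\alpha}\,dx
\quad\Longrightarrow\quad
\int_{(-1,1)} |x|^{r}\,\Lambda(dx) = 2c\int_{0}^{1} x^{\,r-1-\alpha}\,dx < \infty
\iff r > \alpha ,
```

so that β(Λ) = α, and a memory parameter p is admissible precisely when p < 1/α; in particular, for α ≤ 1 every p ∈ (0, 1) is admissible.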
In the sequel, when considering a NRLP with parameter p, it will be implicitly assumed that p is admissible for the corresponding triplet. For instance, when working with a memory parameter p ≥ 1/2, it is implicitly assumed that q = 0. It was shown in [7, Corollary 2.11] that the finite-dimensional distributions of $\hat{\xi}$ can be expressed in terms of the Yule-Simon process Y with parameter 1/p and the characteristic exponent Ψ. Now we turn our attention to defining NRLPs on $\mathbb{R}_+$. Notice that the construction given on the unit interval in [7] cannot be directly extended to the real line, since it relies on Poissonian sums of Yule-Simon processes, and these are only defined on the unit interval.
Proposition 2.2. (NRLPs on $\mathbb{R}_+$) Let (a, q², Λ) be the triplet of a Lévy process with exponent Ψ and consider an admissible memory parameter p ∈ (0, 1). There exists a process $\hat{\xi} = (\hat{\xi}_s)_{s \in \mathbb{R}_+}$ whose finite-dimensional distributions satisfy the identity (2.9) for any $0 < s_1 < \dots < s_k \le t$, where the right-hand side does not depend on the choice of t. The process $\hat{\xi}$ is called a noise reinforced Lévy process with characteristics (a, q², Λ, p).
Proof. First, let us show that the right-hand side of (2.9) does not depend on t. To prove this, pick another arbitrary T > t; conditioning on $\{Y(t/T) \ge 1\}$, an event with probability t/T, by Lemma 2.1 we obtain the claim, where in the second equality we used that Ψ(0) = 0. Now, let us establish the existence of a process with finite-dimensional distributions characterised by (2.9). Remark that by Kolmogorov's consistency theorem, it suffices to show that for arbitrary 1 ≤ S < T, there exist processes $\hat{X}^S = (\hat{X}^S_t)_{t \in [0,S]}$, $\hat{X}^T := (\hat{X}^T_t)_{t \in [0,T]}$ with finite-dimensional distributions characterised by the identity (2.9) for $(s_i)$ in [0, S], t = S and for $(s_i)$ in [0, T], t = T respectively, and hence satisfying that $\hat{X}^T$ restricted to [0, S] has the same law as $\hat{X}^S$. Denoting by $(\hat{\xi}^S_t)_{t \in [0,1]}$ the reinforced version of the Lévy process $(\xi_{tS})_{t \in [0,1]}$, remark that the latter has characteristic exponent SΨ, and set $(\hat{X}^S_t)_{t \in [0,S]} := (\hat{\xi}^S_{t/S})_{t \in [0,S]}$. From the identity (2.8), we deduce that the identity (2.11) holds for any $0 < s_1 < \dots < s_k$ in the interval [0, S]. In particular, $\hat{X}^S$ restricted to the interval [0, 1] has the same distribution as $(\hat{\xi}_t)_{t \in [0,1]}$ by the first part of the proof and (2.8). If we consider the restriction of $(\hat{X}^T_t)_{t \in [0,T]}$ to the interval [0, S], we obtain similarly, by applying (2.10), the analogous identity for any $0 < s_1 < \dots < s_k \le S$, and it follows that $\hat{X}^T$ restricted to [0, S] has the same distribution as $\hat{X}^S$. Since this holds for any 1 ≤ S < T, we deduce by Kolmogorov's consistency theorem the existence of a process satisfying, for any $0 < s_1 < \dots < s_k \le t$, the identity (2.9). In particular, taking the value t = 1, it follows that the restriction of this process to [0, 1] has the same law as $\hat{\xi}$ by (2.8).
For later use, notice from (2.9) that for any fixed t ∈ R₊ we have the equality in law (2.12), where the right-hand side stands for the noise reinforced version of the Lévy process $(\xi_{st})_{s \in [0,1]}$. In particular, $(\hat{\xi}_{st})_{s \in [0,1]}$ is the NRLP associated with the exponent tΨ and the same reinforcement parameter.

Building blocks: noise reinforced Brownian motion and noise reinforced compound Poisson process
The characteristic exponent Ψ can be naturally decomposed into three terms, $\Psi = \Phi^{(1)} + \Phi^{(2)} + \Phi^{(3)}$. This decomposition yields that the Lévy process ξ can be written as the sum of three independent Lévy processes of radically different nature. Namely, we have $\xi_t = (at + qB_t) + \xi^{(2)}_t + \xi^{(3)}_t$, for t ≥ 0, where B is a Brownian motion, $\xi^{(2)}$ is a compound Poisson process with exponent $\Phi^{(2)}$ and $\xi^{(3)}$ is the so-called compensated sum of jumps with characteristic exponent $\Phi^{(3)}$. In the reinforced setting, it readily follows from the identity (2.9) that an analogous decomposition holds for NRLPs. More precisely, the NRLP $\hat{\xi}$ of characteristics (a, q², Λ, p) can be written as a sum of three independent NRLPs, the equality holding in law, where we denote respectively by $\hat{B}$, $\hat{\xi}^{(2)}$, $\hat{\xi}^{(3)}$ independent reinforced versions of the Lévy processes B, $\xi^{(2)}$, $\xi^{(3)}$. Notice that their respective characteristics are given by (a, q², 0, p), $(0, 0, \mathbf{1}_{(-1,1)^c}\Lambda, p)$ and $(0, 0, \mathbf{1}_{(-1,1)}\Lambda, p)$. Let us now give a brief description of these three building blocks separately:

• Noise reinforced Brownian motion: Assume p < 1/2, consider a Brownian motion B and set ξ := B.
In that case, we simply have Ψ(λ) = −λ²/2 and we write $\hat{B}$ for the corresponding noise reinforced Lévy process $\hat{\xi}$. The process $\hat{B}$ is the so-called noise reinforced Brownian motion (abbreviated NRBM) with reinforcement parameter p, a centred Gaussian process with covariance given by (2.15). Indeed, recalling (2.4), observe first that for any 0 ≤ t, s < T the covariance (2.15) can be written in terms of the Yule-Simon process Y with parameter 1/p. It is now straightforward to deduce from (2.9) with Ψ(λ) = −λ²/2 that the noise reinforced version of B corresponds to the Gaussian process with covariance (2.15). The noise reinforced Brownian motion admits a simple representation as a Wiener integral: more precisely, the process defined in (2.17) has the law of a noise reinforced Brownian motion with parameter p. Remark that when p = 0, there is no reinforcement and we recover a Brownian motion in (2.17). As was already mentioned, noise reinforced Brownian motion plays the role of Brownian motion in the reinforced setting, since it is the scaling limit of noise reinforced random walks under mild assumptions on the law of the typical step. We refer to [8, 4] for a detailed discussion.
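To make the Wiener-integral representation concrete: assuming, as in [8], that it takes the form $\hat B_t = t^p \int_0^t s^{-p}\, dB_s$, a naive discretisation reads as follows (an illustrative sketch; the function name and the midpoint evaluation are ours):

```python
import math, random

def nrbm_path(T, n, p, rng=random):
    """Discretise \\hat B_t = t^p * int_0^t s^{-p} dB_s (p < 1/2) on the
    grid k*T/n, k = 1..n, evaluating s^{-p} at the midpoint of each cell
    to tame the integrable singularity at 0."""
    dt = T / n
    integral, path = 0.0, []
    for k in range(1, n + 1):
        s_mid = (k - 0.5) * dt
        integral += s_mid ** (-p) * rng.gauss(0.0, math.sqrt(dt))
        path.append((k * dt) ** p * integral)
    return path
```

Under this representation the variance at time t is $t^{2p}\int_0^t s^{-2p}\,ds = t/(1-2p)$, which the simulation reproduces approximately; setting p = 0 recovers a standard Brownian path.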
• Noise reinforced compound Poisson process: If ξ is a compound Poisson process with rate c > 0 and jumps with law $P_X$, then its Lévy measure is just $\Lambda(dx) = cP_X(dx)$, and any p ∈ (0, 1) is admissible. When working in [0, 1], the noise reinforced compound Poisson process $\hat{\xi}$ admits a simple representation in terms of Poissonian sums of Yule-Simon processes. In this direction, let Q be the law of the Yule-Simon process with parameter 1/p and consider an appropriate Poisson random measure; the resulting process (2.18) has the law of the noise reinforced version of ξ with reinforcement parameter p, as can be easily verified by Campbell's formula and was already established in [7, Corollary 2.11]. Notice that (2.18) is a finite variation process and its jump sizes are dictated by $P_X(dx)$. Getting back to (2.14), it readily follows from our discussion that the NRLP $\hat{\xi}^{(2)}$ associated with the exponent $\Phi^{(2)}$ is a reinforced compound Poisson process and its jump sizes are greater than one. Finally, notice that if $P_X = \delta_1$, the Lévy process ξ is just a Poisson process with rate c and we deduce from the last display a simple representation for the reinforced Poisson process $\hat{N}$ in [0, 1]. Observe that it is a counting process, since the atoms $x_i$ are then identically equal to 1.
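A one-dimensional marginal of this representation can be sampled by summing weighted Yule-Simon counts over a Poisson number of atoms. The sketch below is ours and assumes, consistently with the role of innovations in the text, that atoms arrive with intensity (1 − p)c:

```python
import math, random

def _yule_simon_at(t, p, rng):
    # Y(t) = 1_{U <= t} * Z_{p(ln t - ln U)}, Z a standard Yule process
    u = rng.random()
    if u > t:
        return 0
    horizon, clock, k = p * (math.log(t) - math.log(u)), rng.expovariate(1), 1
    while clock <= horizon:
        k += 1
        clock += rng.expovariate(k)
    return k

def nr_compound_poisson_at(t, c, p, jump_sampler, rng=random):
    """Sample \\hat xi_t, t in [0, 1]: a Poisson((1-p)c) number of atoms,
    each carrying an independent jump size x_i and an independent
    Yule-Simon count Y_i(t); return sum_i x_i * Y_i(t)."""
    total, clock = 0.0, rng.expovariate((1 - p) * c)
    while clock <= 1.0:                  # Poisson count via arrival times
        total += jump_sampler() * _yule_simon_at(t, p, rng)
        clock += rng.expovariate((1 - p) * c)
    return total
```

One can check that the mean comes out as ct E[X], matching the non-reinforced compound Poisson process, since E[Y(t)] = t/(1 − p) cancels the thinning factor (1 − p).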
• Noise reinforced compensated compound Poisson process: Let us now introduce properly $\hat{\xi}^{(3)}$, viz. the noise reinforced version of the compensated martingale $\xi^{(3)}$. When working in [0, 1], this process also admits a representation in terms of random series of Yule-Simon processes. In this direction, consider for a ∈ (0, 1) the process $\hat{\xi}^{(3)}_{a,1}$ defined in (2.19). In the terminology of [7, Section 2], the process $\hat{\xi}^{(3)}_{a,1}$ is a Yule-Simon compensated series, and note that $E[\hat{\xi}^{(3)}_{a,1}(t)] = 0$ for every t ∈ [0, 1]. Moreover, the family indexed by a ∈ (0, 1) is a collection of NRLPs with memory parameter p and Lévy measure $\mathbf{1}_{\{a \le |x| < 1\}}\Lambda(dx)$. Notice that for each a > 0, the process $\hat{\xi}^{(3)}_{a,1}$ is rcll and with jump sizes in [a, 1]. Now, the process $\hat{\xi}^{(3)}$, defined at each fixed t as the pointwise and $L^1(\mathbb{P})$-limit of $\hat{\xi}^{(3)}_{a,1}(t)$ as a ↓ 0, is a NRLP with characteristics $(0, 0, \mathbf{1}_{\{|x|<1\}}\Lambda, p)$. In contrast with $\xi^{(3)}$, the noise reinforced version $\hat{\xi}^{(3)}$ is no longer a martingale; we shall discuss this point in detail in the next section. For later use, we point out from [7, Section 2] that the convergence in the previous display also holds in $L^r(\mathbb{P})$, for r chosen according to (2.22). In particular, we have $\hat{\xi}^{(3)}_t \in L^r(\mathbb{P})$ and $E[\hat{\xi}^{(3)}_t] = 0$ for every t. We refer to [7] for a complete account of this construction and for a proof of the convergence in (2.21). The convergence in (2.21) will be strengthened in the sequel, by showing that it holds uniformly in [0, 1]. At this point, we have introduced the main ingredients needed for this work.
The jumps of $\hat{\xi}$ of size greater than ε are precisely the jumps of $\hat{\xi}^{(2)} + \hat{\xi}^{(3)}_{\varepsilon,1}$. Hence, when working in [0, 1], the jumps of $\hat{\xi}^{(3)}$ are precisely the jumps of the weighted Yule-Simon processes $x_i Y_i(t)$; heuristically, this is the continuous-time analogue of the dynamics described for the noise reinforced random walk. This fact will be used in Section 4.3. Moreover, (3.1) allows us to improve the convergence stated in (2.21) towards $\hat{\xi}^{(3)}$. Namely, it follows that for some subsequence $(a_n)$ with $a_n \downarrow 0$ as n ↑ ∞, the convergence holds a.s. uniformly in [0, 1]. Remark that the convergence in the previous display was only stated when working in [0, 1] since, so far, the only explicit construction of NRLPs is the one on the unit interval we recalled from [7]. In Section 5.1 we shall address this point. The rest of the section is devoted to the proof of Theorem 3.1. Recalling the building blocks introduced in Section 2.3 and the identity in distribution (2.14), $\hat{\xi}^{(2)}$ is a reinforced compound Poisson process and hence has finite variation rcll trajectories, while $\hat{B}$ is continuous. It is then clear that the only difficulty consists in establishing the regularity of the process $\hat{\xi}^{(3)}$, and we rely on a remarkable martingale associated with centred NRLPs, which we now introduce. This martingale will play a key role in this work.

Proposition 3.2. Consider a Lévy process ξ with characteristic exponent Ψ satisfying Ψ′(0) = 0 and Lévy measure fulfilling the integrability condition $\int_{\{|x| \ge 1\}} |x|\, \Lambda(dx) < \infty$. Then, the process $M = (M_t)_{t \in \mathbb{R}_+}$ defined as $M_0 = 0$ and, for t > 0, $M_t = t^{-p}\hat{\xi}_t$, is a martingale. Consequently, M has a rcll modification.
Proof. Recall from (2.14) that in this case, $\hat{\xi}$ can be written as a sum of two independent processes $\hat{\xi} = q\hat{B} + \hat{\xi}^{(3)}$, where $\hat{B}$ is a noise reinforced Brownian motion. Recalling the representation (2.17) for $\hat{B}$, it follows that $(t^{-p}\hat{B}_t)_{t \in \mathbb{R}_+}$ is a continuous martingale, and we may therefore assume that q = 0.
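For the Gaussian part this can also be seen by direct computation: assuming the Wiener-integral representation (2.17) takes the form $\hat B_t = t^p\int_0^t u^{-p}\,dB_u$, the Itô isometry gives, for $0 < s \le t$,

```latex
\mathbb{E}\big[\hat B_s \hat B_t\big]
  = (st)^{p}\int_{0}^{s} u^{-2p}\,du
  = \frac{(st)^{p}}{1-2p}\,s^{1-2p},
\qquad\text{so}\qquad
\mathbb{E}\big[t^{-p}\hat B_t \cdot s^{-p}\hat B_s\big]
  = \frac{s^{1-2p}}{1-2p}
  = \mathbb{E}\big[\big(s^{-p}\hat B_s\big)^{2}\big].
```

Hence the centred Gaussian process $(t^{-p}\hat B_t)$ has increments orthogonal to its past, which for Gaussian processes yields the martingale property.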
Turning our attention to $\hat{\xi}^{(3)}$, notice that $M_t$ is in $L^r(\mathbb{P})$ for r chosen according to (2.22) and that $E[M_t] = 0$ since, as we discussed after (2.22), we have $E[\hat{\xi}^{(3)}_t] = 0$. Now, it remains to show that $(M_t)_{t \in (0,1]}$ satisfies the martingale property. In this direction it is enough to check the identity (3.2) for any $0 < t_1 < \dots < t_k$. On the one hand, under our standing assumptions, the left-hand side of (3.2) corresponds to the derivative at $\lambda_k = 0$ of (2.9) multiplied by $-it_k^{-p}$, and hence equals the corresponding expression in terms of H. Remark that this is a $\sigma(Y(s) : s \le t_{k-1}/t)$-measurable random variable. On the other hand, the right-hand side of (3.2) corresponds to the derivative with respect to $\lambda_{k-1}$ of (2.9) at $\lambda_k = 0$, multiplied by $-it_{k-1}^{-p}$, and similarly we deduce an analogous expression for the right-hand side of (3.2). Now, it only remains to identify the two expressions, proving the claim.
Let us now conclude the proof of Theorem 3.1.
Proof of Theorem 3.1. The first assertion is now a consequence of the following simple observation: denoting by $\tilde{M}$ the rcll modification of the martingale $M = (t^{-p}\hat{\xi}^{(3)}_t)_{t \in \mathbb{R}_+}$, it is then clear that the process $(t^p \tilde{M}_t)_{t \ge 0}$ is a rcll modification of $\hat{\xi}^{(3)}$. Notice, by integrating by parts, that the process $\hat{\xi}^{(3)}$ is consequently a semimartingale; this will be needed in Section 4.3. To prove the second claim, remark that by the observation right after (2.12), it suffices to work on the time interval [0, 1]. Moreover, by Proposition 3.2, for each ε > 0, the process $M^{(\varepsilon)}$ with $M^{(\varepsilon)}_0 = 0$ is an $L^r(\mathbb{P})$ rcll martingale in [0, 1], for r chosen according to (2.22). Since r > 1, by Doob's inequality at time t = 1 we obtain a bound with some constant $C_r > 0$, and it remains to show that the right-hand side converges to 0 as ε ↓ 0. However, this is a consequence of (2.21). More precisely, recalling the construction detailed in (2.19), note that $\hat{\xi}^{(3)}_{\varepsilon,1}(t)$ has the same distribution as $\hat{\xi}^{(3)}_{0,\varepsilon}(t)$ for every t ∈ [0, 1] and ε > 0. Since the convergence (2.21) still holds in $L^r(\mathbb{P})$, the result follows by taking the limit as ε ↓ 0.

Now that we have established that a NRLP is a rcll process, in the next section we study the structure of its jump process $(\Delta\hat{\xi}_t)$. Since it will share striking similarities with the jump process of a Lévy process, before concluding the section we recall well-known results on $(\Delta\xi_t)$. Namely, if ξ is a Lévy process with Lévy measure Λ, its jump measure is a homogeneous Poisson point process (abbreviated PPP) with characteristic measure Λ(dx). Such a PPP can be constructed by decorating the point process of jumps of a Poisson process, and it is classical that (3.5) is determined by the following two properties: (i) For any Borel set A with Λ(A) < ∞, the counting process of jumps $\Delta\xi_s \in A$ occurring up to time t, defined as $N_A(t) := \#\{s \le t : \Delta\xi_s \in A\}$, is a Poisson process with rate Λ(A).
In particular, from (i), it follows that $(N_A(t) - t\Lambda(A))_{t \in \mathbb{R}_+}$ is a martingale.

The jumps of noise reinforced Poisson processes
Let us start by introducing the basic building block of this section.
• Noise reinforced Poisson process: When ξ is a Poisson process N with rate c, any reinforcement parameter p ∈ (0, 1) is admissible, and recall from the discussion following (2.18) that $\hat{N}$ is a counting process. Moreover, the corresponding noise reinforced Poisson process (abbreviated NRPP) with rate c has finite-dimensional distributions characterised, for any $0 < s_1 < \dots < s_k \le t$ and $\lambda_j \in \mathbb{R}$, by the corresponding identity. A Poisson process with rate c has associated to it the random measure $dN_s$, also called its point process of jumps. This is a Poisson random measure in $\mathbb{R}_+$ with intensity $c\,dt$, and it has a natural reinforced counterpart: namely, the random measure $d\hat{N}_s$, which we shall now study in detail.
To do so, we start by introducing some standard notation for point processes. We shall identify discrete random sets $D = \{t_1, t_2, \dots\} \subset \mathbb{R}$ with counting measures $\sum_{t \in D} \delta_t$ and, for $f : \mathbb{R} \to \mathbb{R}$, we use the notation $\langle D, f \rangle$ for $\sum_{t \in D} f(t)$. The collection of counting measures in $\mathbb{R}$ is denoted by $\mathcal{M}_c$. We will make use of the following two basic transformations: for $x \in \mathbb{R}$, we denote by $T_x D$ the translated point process $\{t + x : t \in D\}$ and, for $f : \mathbb{R} \to \mathbb{R}$, we write $D \circ f^{-1}$ for the push-forwarded point process $\{f(t) : t \in D\}$. Now, consider an increasing sequence of random times $0 = T_0 < T_1 < T_2 < \dots$ such that the increments $(T_n - T_{n-1} : n \ge 1)$ are independent and, for any n ≥ 1, $T_n - T_{n-1}$ is exponentially distributed with parameter pn. Write $D := \{0, T_1, T_2, \dots\}$ for the point process associated to this family, and denote its law in $\mathcal{M}_c$ by $\mathcal{D}(d\mu)$. From these ingredients, we define a decorated measure as follows: first, consider a Poisson point process E with intensity $c(1-p)e^t dt$ in $\mathbb{R}$ and, for each atom u ∈ E, let $D_u$ be an independent copy of D. Then, we set L to be the superposition of the translated decorations $(T_u D_u)_{u \in E}$. Remark that if $(Z_t)$ is a standard Yule process started from 1, D has the same law as the point process induced by the jump times of $(Z_{tp})$, together with a Dirac mass at 0. The next proposition shows that the law of the point process of jump times of a noise reinforced Poisson process with rate c is precisely $L \circ \exp^{-1}$, the push-forward of L by the exponential function.
Proposition 4.2. The following properties hold: (i) Let $\hat{N}$ be a noise reinforced Poisson process with rate c and write $P := d\hat{N}_s$ for the point process of its jump times in $\mathbb{R}_+$. Then, we have the equality in distribution $P \overset{\mathcal{L}}{=} L \circ \exp^{-1}$. We will still refer to P as a reinforced Poisson process with rate c and reinforcement parameter p.
(ii) If Y is a Yule-Simon process with parameter 1/p, then for any $f : \mathbb{R}_+ \to \mathbb{R}_+$ the identity (4.3) holds. In particular, from (4.3) and (i) we deduce a further identity in distribution, where P is a Poisson process in $\mathbb{R}_+$ with intensity $c(1-p)dt$. Roughly speaking, the jumps of $\hat{N}$ consist of Poissonian jumps u ∈ P, which, in analogy with the discrete setting, we refer to as innovations, and each u has attached to it a family $\{ue^t : t \in D_u, t \ne 0\}$ which should be interpreted as repetitions of the original u through time. Notice that the time at which u occurs affects the rate of the subsequent repetitions, slowing them down as u grows. This is closely related to what happens to the rate at which a step is repeated in a step reinforced random walk, depending on its first time of appearance. For later use, remark that for fixed $u \in \mathbb{R}_+$, the atoms of $\sum_{t \in D} \delta_{ue^t}$ are distributed as the jump times of the counting process (4.6).

Proof. To establish the identity in distribution stated in (i), we compute the respective Laplace functionals of both random measures. Starting with P, fix t ≥ 0 and recall from the identity in distribution (2.12) that $(\hat{N}_{ts})_{s \in [0,1]}$ has the same law as a noise reinforced Poisson process with the same reinforcement parameter p and rate tc. This NRLP is defined in [0, 1] and hence admits a simple representation in terms of Poisson random measures by (2.18). Putting everything together, we deduce (4.4) by making use of the Laplace formula for integrals with respect to Poisson random measures (we invite the reader to compare (4.4) with the identity (2.9) for the finite-dimensional distributions of NRLPs), and it remains to show that the Laplace functional of $L \circ \exp^{-1}$ coincides with this expression.
In this direction, recall the observation made in (4.6) and denote by Z the law of the standard Yule process Z. It follows that the law of $\langle L \circ \exp^{-1}, \mathbf{1}_{(0,t]} f \rangle$ can be expressed in terms of the Poisson random measure E, where the corresponding integrals are taken with respect to the Stieltjes measure associated to the counting process $s \mapsto \mathbf{1}_{\{u_i \le s\}} Z^{(i)}_{p(\ln(s) - \ln(u_i))}$. The claim now follows by the exponential formula. Finally, for later use we state the following equivalent expression for the Laplace functional associated to the random measure $L \circ \exp^{-1}$.

Lemma 4.3. For any measurable $f : \mathbb{R}_+ \to \mathbb{R}_+$, the identity (4.8) holds.

Proof. The proof follows from the equality $\langle L \circ \exp^{-1}, f \rangle = \langle L, f \circ \exp \rangle$ together with an identity holding for any measurable $h : \mathbb{R}_+ \to \mathbb{R}_+$. The proof of the latter is just a straightforward consequence of (4.3) and the exponential formula for Poisson random measures.
Remark 4.4. Notice from (4.5) that the reinforced Poisson process with rate c can be interpreted as a Yule-Simon process with immigration: that is, a process modelling the evolution of a population where new independent immigrants arrive according to a Poisson point process with intensity $(1-p)c\,dt$ and reproduce according to a time-changed Yule process, independently of the rest.
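The immigration description of Remark 4.4 suggests a direct way to sample the atoms of P on a bounded horizon. The sketch below is ours (as are the function name and the truncation at a horizon T): innovations arrive at rate c(1 − p), and an innovation at time u is repeated at times $ue^{T_1}, ue^{T_2}, \dots$ with $T_n - T_{n-1} \sim \mathrm{Exp}(pn)$:

```python
import math, random

def reinforced_poisson_atoms(T, c, p, rng=random):
    """Atoms in (0, T] of the reinforced Poisson point process: innovation
    times arrive at rate c(1-p); an innovation at time u is repeated at
    times u*e^{T_n}, where T_n - T_{n-1} ~ Exp(p*n)."""
    atoms = []
    u = rng.expovariate(c * (1 - p))
    while u <= T:
        atoms.append(u)                  # the innovation itself (t = 0 in D)
        t, n = rng.expovariate(p), 1     # T_1 ~ Exp(p * 1)
        while u * math.exp(t) <= T:
            atoms.append(u * math.exp(t))
            n += 1
            t += rng.expovariate(p * n)
        u += rng.expovariate(c * (1 - p))
    return sorted(atoms)
```

Consistently with Lemma 4.7 below, the expected number of atoms in (0, T] equals cT, exactly as for an ordinary Poisson process with rate c.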

Construction of noise reinforced Poisson point processes by decoration
This section is devoted to the construction of noise reinforced Poisson point processes and to establishing their first properties. From here on, we fix p ∈ (0, 1).

• Step 1: Suppose first that Λ(R) < ∞ and consider the point process L from (4.3) with c := Λ(R), marked by a collection of i.i.d. random variables with law Λ(dx)/Λ(R). Formula (4.9) defines a random measure $L^x$ in $\mathbb{R} \times \mathbb{R}$ and, considering its push-forward by $(t, x) \mapsto (\exp(t), x)$, which we denote by $\hat{N} := L^x \circ (\exp, \mathrm{Id})^{-1}$, we obtain a measure in $\mathbb{R}_+ \times \mathbb{R}$, written $\hat{N}(ds, dx)$. We refer to this measure as a NRPPP with (finite) characteristic measure Λ and reinforcement parameter p.
• Step 2: If we no longer assume Λ(R) < ∞, we proceed by superposition. More precisely, let (Aj)j∈I be a partition of R \ {0} into disjoint sets with Λ(Aj) < ∞. Consider a collection of independent NRPPPs (Nj(ds, dx) : j ∈ I) with respective characteristic measures (Λ(· ∩ Aj) : j ∈ I), constructed as in (4.10), respectively in terms of:
– independent collections (Du)u∈Pj of i.i.d. copies of D.
Finally, set P := ∪j Pj. We are now in a position to introduce NRPPPs with sigma-finite characteristic measures: From the identity in the previous display, and recalling that the first element of D is just 0, the measure N naturally decomposes as N = N + N, where N is a PPP with intensity (1 − p)dt ⊗ Λ. Moreover, the following properties readily follow from our construction: Lemma 4.6. Let N be a NRPPP with characteristic measure Λ and reinforcement parameter p.
The following lemma shows that the intensity measure of a NRPPP with characteristic measure Λ and parameter p coincides with that of a PPP with characteristic measure Λ.
Lemma 4.7. Let N be a NRPPP with characteristic measure Λ and reinforcement parameter p. For any measurable f : R+ × R → R+, Proof. Suppose first that Λ(R) < ∞ and recall from (4.6) that, for fixed u ∈ R+, the atoms of the measure Σt∈D δue^t are precisely the jumps of the time-changed Yule process (4.6). Hence, if Σu∈P δ(u,xu) is a Poisson random measure with intensity (1 − p)dt ⊗ Λ(dx) and (Z(u))u∈P is an independent collection with law Z, it is clear from our construction in the finite case (4.10) that we can write and we deduce that the intensity measure of N is given by dt ⊗ Λ. When Λ(R) = ∞, we proceed by superposition.
We now identify the law of N by computing its exponential functionals. Proposition 4.8. Let N be a NRPPP with characteristic measure Λ and reinforcement parameter p.
(i) For every measurable f : R+ × R → R+ and t ≥ 0 we have (ii) If we no longer assume that f is non-negative, under the condition Proof. (i) We start by considering the finite case Λ(R) < ∞ and we make use of the notation introduced in (4.9); for instance, recall that ⟨N, f⟩ = ⟨Lx, f ∘ (exp, Id)⟩. We first show the result for f of the form f(s, x) = h(s)g(x), for non-negative h : R+ → R+ and g : R → R+, in which case we can write Now, we deduce from the formula for the Laplace transform of Poisson integrals and a change of variables that If we now replace h by h1{·≤t}, making use of the equivalent identities (4.8) and (4.4), we obtain that the previous display reads: Recall from Lemma 4.6 that the restrictions 1A1 N, . . ., 1An N are independent NRPPPs with respective characteristic measures Λ(· ∩ Ai). By independence, applying the previous case to each gj, we deduce that and once again we recover (4.12). Finally, if f is non-negative and bounded with support in [0, t] × R, it can be approximated by a bounded sequence of functions (fn) of the form (4.15), the convergence holding dtΛ(dx)-a.e. For each n, we have and by Lipschitz continuity, it follows that In the last equality we used Lemma 4.7. By the same arguments we also obtain that as n ↑ ∞. Now, taking the limit as n ↑ ∞ in (4.16), we deduce that the identity (4.12) also holds for f. If we suppose that Λ(R) = ∞, the proof follows by superposition. Namely, with the same notation used for constructing (4.11), the random measures (Nj)j∈I are independent NRPPPs with respective finite characteristic measures Λ(· ∩ Aj) and by definition we have N = Σj Nj. From the formula for the Laplace transform just proved in the finite case, together with independence, it follows that, proving (i). Finally, (ii) follows from similar arguments, making use of the formula for the characteristic function of Poissonian integrals and the inequality |e^{ib} − e^{ia}| ≤ |a − b| for a, b ∈ R; we omit the details.
The following result is the reinforced analogue of the well-known characterisation of Poisson point processes. The arguments we use are similar to those of the non-reinforced case. Proposition 4.9. Let N be a point process on R+ × R and, for any Borel set A ⊂ R, set Then, N is a noise reinforced Poisson point process with characteristic measure Λ and parameter p if and only if the two following conditions are satisfied: (i) For any Borel set A with Λ(A) < ∞, the process NA is a noise reinforced Poisson process with rate Λ(A) and reinforcement parameter p.
Proof. First, let us prove that NRPPPs do satisfy (i) and (ii). Remark that (ii) is just a consequence of Lemma 4.6-(ii), so we focus on (i). Fix A as in (i) as well as times 0 < t1 < · · · < tk ≤ t, and let us compute the characteristic function of the finite-dimensional distributions of NA. This can be done by considering the function f(s, x) := Σki=1 λi 1{s≤ti} 1A(x) and applying the exponential formula (4.13), yielding Recalling the identity (4.2), we deduce that NA is a noise reinforced Poisson process with rate Λ(A) and reinforcement parameter p. Now, we argue that if N is a random measure satisfying (i) and (ii), then it is a NRPPP. We will establish this claim by showing that N satisfies the exponential formula (4.13). First, observe that (i) implies that E[NA(t)] = tΛ(A), for example by making use of Lemma 4.7 and the fact that, if M is a NRPPP with characteristic measure Λ and parameter p, then (M([0, t] × A) : t ≥ 0) is a reinforced Poisson process with rate Λ(A) and parameter p. We deduce by a monotone class argument that N satisfies, for any measurable f : R+ × R → R+, the identity: (4.17) Still for A as in (i) and for an arbitrary collection of times 0 Since by hypothesis (NA(t))t∈R+ is a NRPP with rate Λ(A), by the formula (4.2) for the characteristic function of the finite-dimensional distributions of reinforced Poisson processes, we obtain that Remark that this is precisely the identity (ii) of Proposition 4.8 for our choice of g. Making use of the independence hypothesis on NA1, . . ., NAk for disjoint A1, . . ., Ak with Λ(Ai) < ∞, we can also show that the identity holds for f as in (4.15) for such a collection of sets. Now, if f is non-negative, bounded and supported on [0, t] × A with Λ(A) < ∞, making use of (4.17) we can proceed as in (4.16) in the proof of Proposition 4.8, approximating f by a bounded sequence of functions of the form (4.15), and show that the exponential formula (4.13) still holds. The general case follows by sigma-finiteness of Λ, and we deduce that N is a NRPPP with the desired parameters.

Proof of Theorem 4.1 and compensator of the jump measure
Let us now establish Theorem 4.1. Remark that, paired with Proposition 4.9, it entails that the role of the counting process of jumps ∆ξs ∈ A, for fixed A ∈ B(R), is played precisely by noise reinforced Poisson processes, in analogy with the non-reinforced setting.
Proof of Theorem 4.1. The result will follow as soon as we establish (i) and (ii) of Proposition 4.9 for where A is an arbitrary Borel set satisfying Λ(A) < ∞. By the identity in distribution (2.12), we can restrict our arguments to the unit interval and hence make use of the explicit construction of NRLPs on [0, 1] recalled in Section 2.3, in terms of Yule–Simon series. Denote by M := Σi δ(xi,Yi) the Poisson random measure with intensity (1 − p)Λ ⊗ Q and recall the discussion following Theorem 3.1. If (xi, Yi) is an atom of M, then at time Ui = inf{t ≥ 0 : Yi(t) = 1} the process ξ performs the jump xi for the first time, i.e. ∆ξUi = xi, and this same jump xi is repeated in the interval [0, 1] at each jump time of Yi. It follows that for any f : R → R+ we have: and in particular, we get: Hence, by the independence property of Poisson random measures, the processes NA1, . . ., NAn are independent as soon as the sets A1, . . ., An are disjoint. Moreover, we deduce from the formula for the characteristic function of Poisson integrals the equality: Comparing with (4.2), we see that the right-hand side in the previous display is precisely the characteristic function of the finite-dimensional distributions at times t1, . . ., tk of a reinforced Poisson process with rate Λ(A) and parameter p.
Recalling the explicit construction of NRPPPs from Definition 4.5, we stress that Theorem 4.1 formalises the idea that the jumps of NRLPs are jumps that are repeated through time, similarly to the dynamics of noise reinforced random walks; we refer to the beginning of Section 5.2 for a brief introduction to the latter. Our terminology and notation for the reinforced measure μ can now be justified as follows: if µ is the jump measure of ξ, the counting process (µ([0, t] × A) : t ≥ 0) is a Poisson process with rate Λ(A), while (μ([0, t] × A) : t ≥ 0) is a reinforced Poisson process with rate Λ(A). Said otherwise, the following identity holds in distribution: Now that the key result of the section has been established, we continue our study of the jump process of NRLPs. In this direction, we start by briefly recalling the notions of semimartingale theory that will be needed. Let X be a semimartingale defined on a probability space (Ω, F, (Ft), P). Its jump measure µX is an integer-valued random measure on (R+ × R, B(R+) ⊗ B(R)). Denote the predictable sigma-field on Ω × R+ by Pr. If H is a Pr ⊗ B(R)-measurable function, we simply write H ∗ µX for the process defined at each t ∈ R+ as and ∞ otherwise. Both notations for the integral will be used interchangeably. Further, we denote by A+ the class of increasing, adapted, rcll finite-variation processes (At) with and by A+loc its localisation class. The jump measure µX possesses a predictable compensator, that is, a random measure µpX on (R+ × R, B(R+) ⊗ B(R)), unique up to a P-null set, characterised as the unique predictable random measure (in the sense of [12, Chapter II-1.6]) satisfying, for any non-negative H ∈ Pr ⊗ B(R), the equality Recall that by Proposition 3.2 the process ξ is a semimartingale. Hence, we can consider μp, the predictable compensator of its jump measure μ, and our purpose is to identify μp explicitly. In contrast, it might be worth mentioning that if ξ is a Lévy process with Lévy measure Λ, the
compensator of its jump measure µ is just the deterministic measure µp = dt ⊗ Λ(dx). The first step consists in observing the following: Lemma 4.10. Let A ∈ B(R) be a Borel set that does not intersect some open neighbourhood of the origin. If we denote by (FAt) the natural filtration of NA, then the process MA = (MA(t))t∈R+ defined as MA(0) = 0 and Remark that this is just a special case of Proposition 3.2 for a Lévy measure of the form Λ(A)δ1 with q = 0. Now we can state: Proposition 4.11. (Compensation formula) Denote by (Ft) the natural filtration of ξ and by μ its jump measure. The predictable compensator μp of μ is given by where Et(dx) = Σs<t δ∆ξs(dx) is the empirical measure of the jumps that occurred strictly before time t.
Consequently, for any predictable process H ∈ Pr ⊗ B(R) such that |H| ∗ μ ∈ A+loc, we have |H| ∗ μp ∈ A+loc and the following process is a local martingale: The first compensating term appearing in (4.24) compensates innovations, i.e. atoms appearing for the first time, while the second one should be interpreted as the compensator of the memory part of μ. Notice that Proposition 4.11 still holds if p = 0. Indeed, in that case ξ is a Lévy process and its jump process µ is the Poisson point process (3.5). The compensator (4.23) is then just the deterministic compensator dt ⊗ Λ(dx) of a Poisson point process with characteristic measure Λ, and in (4.24) we recover the celebrated compensation formula, see e.g. [5, Chapter 1]. Remark that since the intensity of both µ and μ is dt ⊗ Λ(dx), we have, for X both a Lévy process and its associated NRLP, the equality for any f : R × R → R+. When X := ξ, by the compensation formula, this identity also holds if we replace f by a non-negative predictable process H ∈ Pr ⊗ B(R), viz. (4.25) However, we point out that if in (4.25) we replace the Lévy process by its reinforced version ξ, the identity no longer holds. Indeed, if such a formula were satisfied, the exact same proof of the exponential formula for PPPs of XII-1.12 in [20] would carry over to our reinforced setting and, since random measures are characterised by their Laplace functionals, this would lead us to the conclusion that the law of μ coincides with the law of µ, a contradiction.
Proof. (i) In order to establish (4.23), by (i) of Theorem II-1.8 of [12] it suffices to show that for any non-negative predictable process and the first step consists in showing the result for deterministic Maintaining the notation introduced in Lemma 4.10 for the process NB, consider B an arbitrary interval not containing a neighbourhood of the origin, as well as the associated martingale, Integrating by parts, we get and consequently, is a martingale. Since (NB(ω; s))s∈R+ and (NB(ω; s−))s∈R+ differ on a set of null Lebesgue measure, the equality still holds replacing ∫t0 NB(s)ps−1 ds by ∫t0 NB(s−)ps−1 ds, and we obtain precisely (4.24) for Hs(ω, x) = 1B(x). Now we can proceed as in the proof of II-2.21 of [12]. Concretely, pick any positive Borel-measurable deterministic function h = h(x), x ∈ R, such that h ∗ μ − h ∗ μp is a local martingale, and let T be an arbitrary stopping time. With the same terminology as in I.1.22 of [12], denote by ⟦0, T⟧ the subset of Ω × R+ defined by ⟦0, T⟧ = {(ω, s) : 0 ≤ s ≤ T(ω)}.
In particular, (h ∗ μ)T = 1⟦0,T⟧ h ∗ μ, where the process 1⟦0,T⟧ is predictable (since left-continuous) and moreover, by Theorem I-2.2 of [12], the sigma-field generated by the collection {A × {0} : A ∈ F0} together with the sets ⟦0, T⟧, for T an arbitrary (Ft)-stopping time, is precisely the predictable sigma-field Pr. Then, if (Tn) is a localising sequence for the local martingale h ∗ μ − h ∗ μp, it follows from Doob's stopping theorem that for each n, Consequently, taking the limit as n ↑ ∞, we deduce by monotone convergence that which in turn implies that (4.26) holds for any predictable process H = 1B 1⟦0,T⟧, where B is any closed interval not containing the origin and T an arbitrary stopping time. The claim now follows by a monotone class argument.
We close our discussion of the jump process of NRLPs with the property at the heart of the infinite divisibility of ξ as a stochastic process, a topic that will be studied in Section 6.2. We claim that, for A ∈ B(R) with Λ(A) < ∞, is an infinitely divisible point process. More precisely, the measure νA is a reinforced Poisson point process P with rate Λ(A) on R+ and, if we consider n independent copies ν1A, . . ., νnA of the reinforced Poisson process (4.27) but with rate n−1Λ(A), we have the equality in distribution To see this, consider f : R+ → R+ a positive function with support in [0, t] and observe that Now the claim follows by computing the Laplace functionals of νA and νiA respectively, applying the exponential formula (4.12), and comparing with (4.4). For a more detailed discussion of infinitely divisible point processes we refer to page 5 of [17].
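The superposition identity above can be probed numerically. The sketch below is ours, not from the text: it samples the total number of atoms on [0, T] of a reinforced Poisson process with rate c, using the standard fact that the value of a Yule process at time s is geometric with success probability e^{−s}, and then compares one process of rate c with the sum of four independent copies of rate c/4, whose first two moments should agree if the infinite divisibility claim holds.

```python
import random

def nrpp_count(c, p, T, rng):
    """Number of atoms on [0, T] of a reinforced Poisson process with
    rate c and memory parameter p. Immigrants arrive at rate (1-p)*c;
    the immigrant arriving at time u contributes Z_{p*ln(T/u)} atoms in
    total, where Z is a standard Yule process, whose value at that time
    is geometric with success probability (u/T)**p."""
    n, u = 0, 0.0
    while True:
        u += rng.expovariate((1.0 - p) * c)
        if u > T:
            return n
        q = (u / T) ** p          # success probability of the geometric
        k = 1
        while rng.random() > q:   # sample a geometric by repeated trials
            k += 1
        n += k
```

Both the single process and the superposition should have mean count c·T, and (under the claimed equality in distribution) matching variances as well.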

Weak convergence of the pair of skeletons
Before stating the first result of the section, let us briefly recall the statement of the Lévy–Itô synthesis for Lévy processes: a Lévy process ξ with triplet (a, q2, Λ) can be written as ξ = ξ(1) + ξ(2) + ξ(3), where ξ(1) = (at + qBt : t ≥ 0) is a Brownian motion with drift, while ξ(2) + ξ(3) is a purely discontinuous process that can be explicitly built from the jump measure µ defined in (3.5). More precisely, if we denote by µ(sc) the compensated measure of jumps µ(sc) = µ − dtΛ(dx), we can write The reinforced Lévy–Itô synthesis, which is the first main result of the section, states that the analogous result holds for NRLPs, where now the PPP µ in (5.1) has been replaced by the reinforced version μ, and the Brownian motion B by its reinforced version B (if p < 1/2). More precisely, after properly defining the space-compensated measure μ(sc), we prove: Theorem 5.1. (Reinforced Itô synthesis) Let μ be the jump measure of a NRLP ξ with characteristics (a, q2, Λ, p). Then, a.s. we have xμ(sc)(ds, dx), t ≥ 0, for some noise reinforced Brownian motion B, with the convention that the process B is null if p ≥ 1/2. Moreover, the integrals in the previous display are NRLPs with respective characteristics (0, 0, 1(−1,1)cΛ, p) and (0, 0, 1(−1,1)Λ, p).
Remark 5.2. Beware of the notation: μ(sc) stands for the space-compensated jump measure μ and should not be confused with the time-compensated measure (µ − µp) in the sense of [12, Chapter II-1.27]. For instance, we stress that ξ(3) is not a local martingale. Remark that for Lévy processes the time and space compensations of the jump measure coincide, since the compensating measure is the same.
After proving this result, we start setting the ground for the main result of the section. First, making use of Theorem 5.1, we define the joint law of a Lévy process and its reinforced version by introducing an appropriate coupling (ξ, ξ). We then characterise its law by computing the characteristic function of its finite-dimensional distributions: Proposition 5.3. There exists a pair (ξ, ξ), where ξ has the law of a NRLP with characteristics (a, q2, Λ, p), with law determined by the following: for all k ≥ 1 and real numbers λ1, . . ., λk, β1, . . ., βk, and where U is a uniform random variable on [0, 1]. A pair of processes with such a distribution will always be denoted by (ξ, ξ). Now, we connect the distribution of the pair (ξ, ξ) with the discrete setting. In this direction, consider the Lévy process ξ and for each fixed n ∈ N set ( For each n, the sequence (Xk) is identically distributed with law ξ1/n, and the random walk = 0 built from these increments for a mesh of size 1/n is referred to as the n-skeleton of the Lévy process ξ. This process consists of the positions of ξ observed at discrete time intervals and, if we write D(R+) for the space of R+-indexed rcll functions into R endowed with the Skorokhod topology, we have → ξ as n ↑ ∞. Now, fix a memory parameter p ∈ (0, 1) and, for each n, consider the associated noise reinforced random walk (Ŝ(n)k) with parameter p built from the same collection of increments: where we set Ŝ(n)0 := 0. For a detailed account of noise reinforced random walks, we refer to the beginning of Section 5.2. The main result in [7] states that Ŝn• f.d.d.
→ ξ, the convergence holding in the sense of finite-dimensional distributions, and we shall now strengthen this result. To simplify notation, write D2(R+) for the product space D(R+) × D(R+) endowed with the product topology. Now we can state the main result of the section: Theorem 5.4. Let ξ be a Lévy process with characteristic triplet (a, q2, Λ), fix an admissible memory parameter p ∈ (0, 1/2) and, for each n, let (S be the pair formed by the n-skeleton of ξ and its reinforced version. Then, there is weak convergence in where (ξ, ξ) is a pair of processes with law (5.2).
The section is organised as follows: in Section 5.1, after introducing the (space-)compensated integral with respect to NRPPPs, we establish Theorem 5.1. Making use of this result, in Section 5.2 we define the joint law of a Lévy process and its reinforced version (ξ, ξ). More precisely, by the Lévy–Itô synthesis and its reinforced version of Theorem 5.1, it suffices to define the joint laws of (µ, μ) and of (B, B). This is, respectively, the content of the construction detailed in 5.2.1 and of Definition 5.8. The construction of μ is carried out explicitly in terms of the jump measure of ξ, by a procedure that should be interpreted as the continuous-time analogue of the reinforcement algorithm for random walks. We then introduce the joint law (ξ, ξ) in Definition 5.10 and prove Proposition 5.3. Finally, Section 5.3 is devoted to the proof of Theorem 5.4.

Proof of Theorem 5.1
Let us start by introducing the (space-)compensated integral with respect to NRPPPs. Recall the identity of Lemma 4.7 for the intensity measure of NRPPPs and, for fixed t ∈ R+, let f : R+ × R → R be a measurable function satisfying, for all 0 < a < b, the integrability condition Next, we set f(s, x) ds Λ(dx).
(5.6) This is a centred random variable and, if we denote it by Σ ,∞) has independent increments and hence is a martingale. When the limit of this martingale exists, we will write (5.7) Recall that the characteristics of a NRLP are considered with respect to the cutoff function x1{|x|<1}, as well as the notation f ∗ N from (4.22). The following lemma shows that sums of atoms of NRPPPs are precisely purely discontinuous NRLPs: Lemma 5.5. Fix a Lévy measure Λ, a parameter p ∈ (0, 1) such that β(Λ)p < 1, and let N be a NRPPP with characteristic measure Λ and reinforcement parameter p.
Proof. (i) If we consider ξ a reinforced compound Poisson process with these characteristics and denote by μ its jump measure, then ξ is a pure-jump process and we can write it as the sum of its jumps. Our claim can now be proved directly from the identity ξ = (x ∗ μ)L = (1(a≤|x|<b)x ∗ N), since by Lemma 4.6-(i) the restriction 1(a≤|x|<b)N has the same distribution as μ. Alternatively, this can be established by means of the exponential formulas obtained in Proposition 4.8, by fixing 0 < t1 < · · · < tk < t and computing the characteristic function of the finite-dimensional distributions at times t1, . . ., tk of The claim follows by comparing with the identity (2.12) for the characteristic function of the finite-dimensional distributions of ξ. (ii) Recall the notation introduced before (5.7) for the martingale (Σ(c)e−r,1(f, t))r≥0. In our case f(s, x) = x, and we simply write (Σ(c)e−r,1(t))r≥0. The fact that the martingale (Σ(c)e−r,1(t))r≥0 converges as r ↑ ∞, and that the limit is a NRLP with characteristics (0, 0, 1(−1,1)Λ), can be achieved by arguments similar to those in [7] after a couple of observations. Starting with the former, recall the definition of N from (4.11), and remark that for each r > 0 we have From the discussion right after Definition 4.5, we infer that if we consider (Zu)u∈P an independent collection of independent standard Yule processes, the family {ue^s : s ∈ Du} has the same distribution as the collection of jump times of the counting process 1{u≤t}Zu p(ln(t)−ln(u)), t ≥ 0.
Hence the previous display can also be written as and now the convergence as r ↑ ∞ of (Σ(c)e−r,1(t))r≥0 follows by the same arguments as in [7, Lemma 2.6]. Alternatively, one can make use of (2.12) to restrict the arguments to the interval [0, 1] and apply [7, Lemma 2.6]. Next, to see that the process 1(−1,1)x ∗ N(sc) defines a NRLP with characteristics (0, 0, Recalling the formula (4.13) for the characteristic function of integrals with respect to NRPPPs, we deduce, by considering the function f(s, x) := Σkj=1 λj1{s≤tj}x1{ε≤|x|<1}, that Now we can apply the exact same reasoning as in the proof of Corollary 2.8 in [7], writing sj = tj/t ∈ [0, 1] and taking the limit as ε ↓ 0. The uniform convergence on compact intervals towards the rcll modification of 1(−1,1)x ∗ N(sc) follows from the second statement of Theorem 3.1, since for every ε ∈ (0, 1) the process x N(sc)(ds, dx), t ≥ 0, is a NRLP with characteristics (0, 0, 1{|x|<ε}Λ).
It immediately follows from the previous lemma that if N is a NRPPP with characteristic measure Λ and parameter p and, when p < 1/2, Ŵ is an independent NRBM with the same parameter, then x N(sc)(ds, dx), t ≥ 0, (5.9) defines a NRLP with characteristics (a, q2, Λ, p). To obtain the a.s. statement of Theorem 5.1, we still need a short argument.

The joint law (ξ, ξ) of a Lévy process and its reinforced version
In this section we construct explicitly, for an arbitrary fixed Lévy process ξ, a process ξ in terms of ξ that will be referred to as the noise reinforced version of ξ. This yields a definition for the joint law (ξ, ξ). Our construction will be justified by the weak convergence of Theorem 5.4. Let us start by recalling the discrete setting, since our construction is essentially the continuous-time analogue of the dynamics that we now describe.
• The noise reinforced random walk. Given a collection of i.i.d. random variables (Xn) with law X, denote by Sn := X1 + · · · + Xn, for n ≥ 1, the corresponding random walk. We construct, simultaneously with (Sn), a noise reinforced version using the same sample of random variables, performing the reinforcement algorithm at each discrete time step. In this direction, consider (εn) and (U[n]) independent sequences of Bernoulli random variables with parameter p ∈ (0, 1) and of uniform random variables on {1, . . ., n}, respectively. Set X1 := X1 and, for n ≥ 1, define Finally, we denote the corresponding partial sums by Ŝn := X1 + · · · + Xn, n ≥ 1. The process (Ŝn) is the so-called noise reinforced random walk with memory parameter p, and we refer to this particular construction of (Ŝn) as the noise reinforced version of (Sn). The process (Ŝn) can be written in terms of the individual contributions made by each one of the steps. In this direction, let us introduce a counting process keeping track of the number of times each step Xk is repeated up to time n. If the law of X has atoms we have P(X1 = X2) > 0, and we thus need to perform a slight modification of our algorithm. Namely, for each n ≥ 1 we write Xn := (Xn, n) and perform the reinforcement algorithm on the pairs (Xn). This yields a sequence that, with a slight abuse of notation, we still denote by (Xn). If for every k, n ≥ 1 we set: we can write: (5.13) For convenience, we always set S0 = 0 = Ŝ0 and, when working with pairs of the form (S, Ŝ), it will always be implicitly assumed that the noise reinforced version has been constructed by the algorithm just described. For instance, it is clear that at each discrete time step n, with probability 1 − p, Sn and Ŝn share the same increment, while with complementary probability p they perform different steps.
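The reinforcement algorithm just described is straightforward to implement. The following sketch (ours; the function name is illustrative) builds the pair (S, Ŝ) from a given sample of steps: the first reinforced step is X1, and at each later step, with probability p a uniformly chosen previous reinforced step is repeated, while with probability 1 − p the fresh step is shared.

```python
import random

def reinforced_pair(steps, p, rng):
    """Run the reinforcement algorithm on a given sample of steps:
    hatX_1 = X_1 and, for n >= 2, with probability p repeat a uniformly
    chosen previous reinforced step, otherwise take the fresh step X_n.
    Returns the paths (S_n) and (hatS_n), both started at 0."""
    hat = [steps[0]]
    for n in range(1, len(steps)):
        if rng.random() < p:
            hat.append(rng.choice(hat))   # memory: repeat a past step
        else:
            hat.append(steps[n])          # innovation: share the fresh step
    S, hatS = [0.0], [0.0]
    for x, hx in zip(steps, hat):
        S.append(S[-1] + x)
        hatS.append(hatS[-1] + hx)
    return S, hatS
```

As a rough sanity check: for centred unit-variance steps and p < 1/2, the reinforced walk stays centred but is super-diffusive, with Var(Ŝn)/n approaching 1/(1 − 2p) (the variance of the limiting noise reinforced Brownian motion obtained from the representation Bt = t^p ∫_0^t s^{−p} dβs of Section 5.2.2), instead of 1 for the plain walk.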
Roughly speaking, in the continuum the steps (Xn) are replaced by the jumps ∆ξs of the Lévy process ξ. With probability 1 − p, a jump is shared with its reinforced version ξ, while with complementary probability p it is discarded and remains independent of ξ. The jumps that are not discarded are then repeated at each jump time of an independent counting process attached to them. Discarding jumps with probability p translates into a thinning of the jump measure of ξ. Let us now give a formal description of this heuristic discussion.

Construction of the pair (N , N )
For the rest of the section, we fix a Lévy process ξ with non-trivial Lévy measure Λ, denote the set of its jump times by I := {u ∈ R+ : ∆ξu ≠ 0} and let N(ds, dx) := Σu∈I δ(u,∆ξu) be its jump measure. By the Lévy–Itô decomposition, this is a PPP with characteristic measure Λ and we can write ξ = ξ(1) + J, where ξ(1) is a continuous process while J is a process that can be explicitly recovered from N, as recalled in (5.1).
If ξ has the law of the reinforced version of ξ, by Theorem 5.1 it can also be written as ξ = ξ(1) + Ĵ, where Ĵ is a functional of a NRPPP N with characteristic measure Λ. Hence, the main step in defining the law of the pair (J, Ĵ) consists in appropriately defining (N, N). Recalling the construction of NRPPPs by superposition detailed before Definition 4.5, this can be achieved as follows: first, set A0 := {1 ≤ |x|} and, for each j ≥ 1, let Aj := {1/(j + 1) ≤ |x| < 1/j}. Next, for j ≥ 0, consider the point process remark that Ij is a PPP with intensity Λ(Aj)dt, and write I := ∪jIj. Maintaining the notation of Section 4, consider (Du)u∈I a collection of i.i.d. copies of D and, for each j ≥ 0, set Nj(ds, dx) := Σu∈Ij Σt∈Du δ(ue^t,∆ξu).
The measure Nj is a NRPPP with characteristic measure (1 − p)−1Λ(· ∩ Aj), and we can now proceed as in Section 4.2 to construct the following NRPPP with parameter p by superposition of (Nj)j≥0, Σu∈I Σt∈Du δ(ue^t,∆ξu). (5.14) Notice however that its characteristic measure is (1 − p)−1Λ. To remedy this, we consider a sequence of independent Bernoulli random variables (εu)u∈I with parameter 1 − p and apply a thinning: Now, N is a NRPPP with characteristic measure Λ and reinforcement parameter p, built explicitly from the jump process of ξ. By construction, if a jump ∆ξu occurs at time u, with probability 1 − p it is kept and repeated at each ue^t for t ∈ Du, while with complementary probability p it is discarded and remains independent of N. From now on, we always consider the pair (N, N) constructed by this procedure. Then, by definition of N we can write xN(sc)(ds, dx), t ≥ 0, while on the other hand, by Theorem 5.1, the process defined as x N(ds, dx) + ∫[0,t]×(−1,1) x N(sc)(ds, dx), t ≥ 0, ( is a NRLP with characteristics (0, 0, Λ, p). From our construction, the random measures N, N can be encoded in terms of a single Poisson random measure Σu∈I δ(u,∆ξu,Du,εu), allowing us to compute explicitly the characteristic function of the finite-dimensional distributions of (ξ(2), ξ(2)) and (ξ(3), ξ(3)).
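The thinning-and-decoration procedure above can be sketched numerically as follows (illustrative Python, ours; we take as input a compound Poisson ξ with rate lam and Exp(1) jump sizes): each atom (u, ∆ξu) of the jump measure is kept with probability 1 − p and, if kept, is shared with the reinforced measure and repeated at the times u·exp(σk/p) produced by an independent standard Yule process; discarded atoms never enter the reinforced measure.

```python
import math
import random

def coupled_jump_measures(lam, p, T, rng):
    """Couple the jump measure N of a compound Poisson process on [0, T]
    (rate lam, Exp(1) jump sizes) with its reinforced version: each atom
    (u, x) of N is kept with probability 1 - p and, if kept, is repeated
    at the times u * exp(sigma_k / p) given by an independent standard
    Yule process; discarded atoms do not enter hatN."""
    N, hatN = [], []
    u = 0.0
    while True:
        u += rng.expovariate(lam)
        if u > T:
            return N, hatN
        x = rng.expovariate(1.0)           # jump size, mark of the PPP
        N.append((u, x))
        if rng.random() < 1.0 - p:         # thinning: keep with prob 1 - p
            hatN.append((u, x))            # the jump itself is shared
            k, sigma = 1, 0.0
            while True:
                sigma += rng.expovariate(k)
                s = u * math.exp(sigma / p)
                if s > T:
                    break
                hatN.append((s, x))        # the same jump size is repeated
                k += 1
```

Every jump size appearing in the reinforced measure is a jump size of ξ and, in line with Lemma 4.7, the mean number of atoms of the reinforced measure on [0, T] is lam·T, just as for N.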
Let us briefly comment on this expression. The first exponential term in (5.18) corresponds to the characteristic function of the finite-dimensional distributions of a Lévy process with law (ξ where U is a uniform random variable on [0, 1] (recall that the first jump time of a Yule–Simon process is uniformly distributed on [0, 1]). More precisely, this Lévy process is built from the discarded jumps Σu 1{εu=0}δ(u,∆ξu) and is consequently independent of ξ(2) and of Σu 1{εu=1}δ(u,∆ξu), which explains the form of the identity (5.18).
Proof. We can assume that tk < 1, by working with t1/t < · · · < tk/t and with the pair (ξst, ξst)s∈[0,1], which now has Lévy measure tΛ. The proof then follows by a rather long but straightforward application of the formula for the characteristic function of integrals with respect to Poisson random measures.
Proof. By the usual scaling argument we may suppose that tk < 1 = t. The proof is now similar to that of Corollary 2.8 in [7]. In this direction, notice that the processes ξ(3) = 1(−1,1)x ∗ N(sc) and ξ(3) = 1(−1,1)x ∗ N(sc) are respectively the limits as ε ↓ 0 of the convergence holding uniformly on compact intervals. The characteristic function of the finite-dimensional distributions of the pair (1{ε≤|x|<1}x ∗ N, 1{ε≤|x|<1}x ∗ N) can be computed by the same arguments as in Lemma 5.6, and we obtain for each 0 < ε < 1 that In order to establish that this expression converges as ε ↓ 0 towards (5.20), recall that since It follows that for all 0 < ε < 1 and λ ∈ R we can bound Moreover, by the remark following Lemma 2.1, the random variable Y(t) belongs to Lr(P) for any r < 1/p, and it follows that the term is in Lr(P). Hence, by dominated convergence, (5.23) converges towards (5.20) as ε ↓ 0. On the other hand, since (ξ )tj) as ε ↓ 0, we obtain the desired result.

5.2.2 The distribution of (B, B) and proof of Proposition 5.3
The last ingredient needed to define the joint distribution of (ξ, ξ) is the joint distribution of a Brownian motion B and its reinforced version B, which we denote by (B, B). Recall from [8] that B has the same law as the solution of the SDE dXt = dBt + (p/t)Xt dt, (5.24) and that X can be written explicitly in terms of the stochastic integral (2.17) with respect to the driving Brownian motion B. We also recall from (2.16) that, for 0 < s, t < T, the covariance of B can be expressed in terms of the Yule–Simon process as follows: and, for later use, we observe that We stress that the right-hand side in the previous display does not depend on the choice of T. The proof of this identity is a consequence of the representation (2.1) of Y in terms of a standard Yule process and an independent uniform random variable.
Definition 5.8. Let (B, B) be a pair of Gaussian processes and fix a parameter 0 < p < 1/2. We say that the pair (B, B) has the law of a Brownian motion and its reinforced version if the respective covariances are given by for any s, t ∈ R+.
Let us briefly explain where this definition comes from: for fixed p < 1/2, by [4, Theorem 1.1] the law of the pair (B, B̂) is universal, in the sense that it is the joint weak scaling limit of a random walk paired with its step reinforced version with memory parameter p, when the typical step is in L²(P). For more details, we refer to [8, 4].
Given a fixed Brownian motion B, it is clear that we cannot expect an explicit construction of the reinforced version B̂ in terms of B similar to the one performed for (J, Ĵ). However, we can make use of the SDE (5.24) to get an explicit construction of (B, B̂) with the right covariance structure. This can easily be achieved as follows: first, let W be an independent copy of B; if we set then B̂ has the law of a noise reinforced Brownian motion with reinforcement parameter p, and can be written explicitly as B̂_t = t^p ∫_0^t s^{−p} dβ_s. Moreover, it readily follows that the covariance of the pair of Gaussian processes (B, B̂) satisfies (5.27). The decorrelation applied when constructing β plays the role of the thinning in the construction of (J, Ĵ).
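As a quick sanity check of the explicit formula B̂_t = t^p ∫_0^t s^{−p} dβ_s, one can simulate the stochastic integral on a grid and compare the empirical variance at t = 1 with the value that the formula implies, namely Var(B̂_t) = t^{2p} ∫_0^t s^{−2p} ds = t/(1 − 2p) for p < 1/2. The sketch below is only an illustration, not part of the argument; all variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                              # memory parameter, p < 1/2
n_steps, n_paths = 1000, 10000
dt = 1.0 / n_steps
t = np.arange(1, n_steps + 1) * dt   # grid starts at dt, avoiding s = 0

# Riemann-Ito approximation of int_0^t s^{-p} dbeta_s on the grid,
# then multiply by t^p to obtain the reinforced Brownian motion.
dbeta = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
stoch_int = np.cumsum(t ** (-p) * dbeta, axis=1)
B_hat = t ** p * stoch_int

emp_var = B_hat[:, -1].var()
print(emp_var, 1.0 / (1.0 - 2.0 * p))   # both close to 1/(1-2p) = 2.5
```

The right-endpoint evaluation of the singular integrand near 0 slightly underestimates the variance, so the agreement is only up to discretisation and Monte Carlo error.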
Finally, for the proof of Proposition 5.3 we will need the following representation of the characteristic function of the finite-dimensional distributions of the pair (B, B̂) in terms of the Yule-Simon process: Lemma 5.9. Let (B, B̂) be a Brownian motion with its reinforced version, for a memory parameter p < 1/2. For all k ≥ 1, real numbers λ_1, . . ., λ_k, β_1, . . ., β_k and 0 < t (5.30) Proof. Since the NRBM satisfies the same scaling property as Brownian motion (see page 3 of [8]), from (5.27) we deduce (B_tc, B̂_tc)_{t∈R} Hence, as usual we can suppose that t_k < 1 and we take t := 1. To simplify notation we also suppose that q = 1. Now, the left-hand side of (5.30) writes where we used, for each of the covariances in order of appearance, that: the first jump time of a Yule-Simon process is uniformly distributed, (5.25), and (5.26). This is precisely the right-hand side of (5.30). Now that all the ingredients have been introduced, we define the law of (ξ, ξ̂).
Recipe for reinforcing Lévy processes: consider a starting Lévy process ξ with triplet (a, q², Λ) and denote by ξ_t = at + qB_t + J_t, for t ≥ 0, its Lévy-Itô decomposition, where B and J are respectively a Brownian motion and a Lévy process with triplet (0, 0, Λ). Further, fix an admissible parameter p ∈ (0, 1) for the triplet, denote the jump measure of ξ by N = Σ δ_(u,∆ξ_u), and consider the NRPPP N̂ with characteristic measure Λ and reinforcement parameter p, as constructed in (5.15) in terms of N. Denote by Ĵ := 1_(−1,1) x ∗ N̂(sc) + 1_{(−1,1)^c} x ∗ N̂ the corresponding NRLP with characteristics (0, 0, Λ, p) and, finally, consider a NRBM B̂ independent of (J, Ĵ), such that (B, B̂) has the law of a Brownian motion with its reinforced version (for example by proceeding as in (5.29)). Definition 5.10. We call the noise reinforced Lévy process ξ̂_t := at + qB̂_t + Ĵ_t, for t ≥ 0, with characteristics (a, q², Λ, p), the noise reinforced version of ξ, uniqueness only holding in distribution. From now on, every time we consider a pair (ξ, ξ̂), it will be implicitly assumed that ξ̂ has been constructed from ξ by the procedure we just described.
Let us now conclude the proof of Proposition 5.3.
From the construction of (N , N ), we can sketch a sample path of (ξ, ξ), where the jumps that are not appearing on the path of ξ are precisely the ones deleted by the thinning:

Proof of Theorem 5.4
Let us outline the proof of Theorem 5.4. First, by (2.12), it suffices to prove the convergence in [0, 1] and we therefore work with ξ = (ξ_t)_{t∈[0,1]}. Next, since we are working in D²[0, 1], it suffices to establish tightness coordinate-wise to obtain tightness for the sequence of pairs. The first coordinate in (5.5) converges a.s. towards ξ in D[0, 1] (and in particular is tight) and hence it remains to establish tightness for the sequence of n-skeletons. This is the content of Section 5.3.2 and, more precisely, of Proposition 5.13. This is achieved by means of the celebrated Aldous tightness criterion, and our arguments rely on the discrete counterpart of the remarkable martingale from Proposition 3.2. This discrete martingale is introduced in Lemma 5.11 and we recall from [4, 3] its main features; this is the content of Section 5.3.1. Finally, the joint convergence in the sense of finite-dimensional distributions towards (ξ, ξ̂) is proved in Proposition 5.16, by establishing the convergence of the corresponding characteristic functions.

The martingale associated with a noise reinforced random walk
• The elephant random walk and its associated martingale. Let us start with some historical context. In [2], Bercu was interested in establishing asymptotic convergence results for a particular random walk with memory, called the elephant random walk. This process is defined as follows: for a fixed q ∈ (0, 1) that we still call the reinforcement parameter, we set E_0 := 0 and let Y_1 be a random variable with Y_1 ∈ {−1, 1}. Then, the position of our elephant at time n = 1 is given by E_1 = Y_1 and for n ≥ 2, it is defined recursively by the relation E_{n+1} := E_n + Y_{n+1}, for Y_{n+1} constructed by selecting uniformly at random one of the previous increments {Y_1, . . ., Y_n}, and changing its sign with probability 1 − q. The analysis of Bercu relies on a martingale associated with the elephant random walk, defined as M_1 = E_1 and for n ≥ 2 as where Γ stands for the Euler Gamma function. This martingale had already made its appearance in the literature in Coletti, Gava and Schütz [10]. As was pointed out by Kürsten [15], the key is that when q ∈ [1/2, 1), the elephant random walk is a version of the noise reinforced random walk in which the typical step X has distribution P(X = 1) = P(X = −1) = 1/2, with memory parameter p = 2q − 1.
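Bercu's recursion is straightforward to simulate, and the martingale normalisation can be checked empirically: with p = 2q − 1 one has E[E_{n+1} | F_n] = ((n + p)/n) E_n, so that E[E_n] = Γ(n + p)/(Γ(n)Γ(1 + p)) when the first step is fixed to Y_1 = 1. The following sketch is our illustration (not code from the text); all names are ours.

```python
import numpy as np
from math import lgamma, exp

rng = np.random.default_rng(1)
q = 0.7                  # ERW reinforcement parameter
p = 2 * q - 1            # corresponding memory parameter
n, n_paths = 300, 20000

Y = np.empty((n_paths, n), dtype=np.int8)
Y[:, 0] = 1                                       # first step fixed to +1
for k in range(1, n):
    idx = rng.integers(0, k, size=n_paths)        # uniform past increment
    past = Y[np.arange(n_paths), idx]
    keep = rng.random(n_paths) < q                # keep its sign with prob. q
    Y[:, k] = np.where(keep, past, -past)
E_n = Y.sum(axis=1, dtype=np.int64)               # elephant position at time n

theory = exp(lgamma(n + p) - lgamma(n) - lgamma(1 + p))   # E[E_n] = 1/a_n
print(E_n.mean(), theory)
```

The empirical mean matches Γ(n + p)/(Γ(n)Γ(1 + p)) up to Monte Carlo error, which is the statement that a_n E_n has constant expectation.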
Getting back to our setting, we maintain the notation introduced at the beginning of Section 5.2 for the noise reinforced random walk with memory parameter p ∈ (0, 1). Our first observation is that the martingale (5.31) associated with the elephant random walk is still a martingale in our setting (we stress that the reinforcement parameter q in [2] corresponds to the parameter p = 2q − 1 in our context). This martingale plays a fundamental role in our reasoning, and also played a central role in [4, 3]. More precisely, let a_1 := 1 and for n ≥ 2 we set for γ_n := (n + p)/n. We write F̂_n := σ(X̂_1, . . ., X̂_n) for the filtration generated by the reinforced steps. The following lemma is taken from [4].
Lemma 5.11. [4, Proposition 2.1] Suppose that the typical step X is centred and in L²(P). Then, the process M defined by M_0 = 0 and M_n = a_n Ŝ_n for n ≥ 1 is a square-integrable martingale with respect to the filtration (F̂_n). Now, for each n, let M^n be the continuous-time version of the martingale of Lemma 5.11 associated with the n-reinforced skeleton (Ŝ(n)_k)_{k∈N}, i.e.
and remark that by Lemma 5.12, the predictable quadratic variation of M^n_{n•} is given by It follows that for each n ∈ N, the following process is also a martingale, and its predictable quadratic variation writes: (5.36) Moreover, by Stirling's formula, we have which gives: This yields the claimed equivalence between (5.34) and (5.35) under our current restrictions on ξ. For technical reasons, we shall first prove that the convergence of the martingales (N^n) towards N holds on the interval [ε, 1], for any ε > 0. This leads us to the following lemma: Lemma 5.14. For any ε > 0, the sequence (N^n_t)_{t∈[ε,1]}, n ∈ N, is tight.
Proof. We denote by (F^n_t) the natural filtration of N^n. By Aldous's tightness criterion (see e.g. Kallenberg [14], Theorem 16.11), it is enough to show that for any sequence (τ_n) of (bounded) (F^n_t)-stopping times in [ε, 1] and any sequence of positive real numbers (h_n) converging to 0, we have By Rebolledo's theorem (see e.g. Theorem 2.3.2 in Joffe and Métivier [13]), it is enough to show that the sequence of associated predictable quadratic variations (⟨N^n, N^n⟩) satisfies Aldous's tightness criterion, i.e. that lim In this direction, by (5.36), we have and it remains to show that both terms on the right-hand side converge to 0 in probability as n ↑ ∞.
The key now lies in the asymptotic behaviour of the series Σ_{k=1}^n a_k². As was already pointed out in [2], for p ∈ (0, 1/2), we have Furthermore, since the Lévy measure of ξ is compactly supported, it holds as n ↑ ∞ that (5.39) Now, from (5.38) and (5.39) it follows that lim_{n↑∞} and a fortiori in probability, which entails that the first term in (5.37) converges in probability to 0 as n ↑ ∞. In order to show that the second term in (5.37) also converges in probability to 0, we need to proceed more carefully. First, since τ_n ∈ [ε, 1], we can bound the second term in (5.37) by Next, since n^{2p}/⌊nε⌋ ∼ n^{2p−1} ε^{−1}, in order to proceed as before we need to show that sup_{⌊nε⌋≤k≤n} i.e. that the sequence is stochastically bounded. To do so we proceed as follows: for each n, notice that the process where the (X(n)_i)² are i.i.d. variables distributed as ξ²_{1/n}. In order to have a centred noise reinforced random walk, for k ≥ 1 we set and we introduce: Now, the process (Ŵ(n)_k)_{k∈N} is the noise reinforced version of the centred random walk defined for k ≥ 1 as Once again, since ξ has compactly supported Lévy measure, E[ξ²_{1/n}] and E[ξ⁴_{1/n}] are both O(1/n) as n ↑ ∞ and we deduce that Hence, by Markov's inequality the sequence (sup_{k≤n} ) converges in probability and we can conclude as before by bounding, for L > 0, as follows: We shall now conclude the proof of Proposition 5.13 under our standing assumptions, and in this direction recall our discussion prior to Lemma 5.14. To extend the convergence to the interval [0, 1] we use a truncation argument similar to the one employed in Section 4.3 of [8]. For each ε > 0, we (identically as the constant N_1) deduce, by metrisability of weak convergence, that there exists some sequence (ε(n))_{n∈N} converging to 0 slowly enough as n ↑ ∞ such that ( 1] and we only need to show: In this direction, notice the inequality an application of Doob's inequality and the previous display yields that, for any δ > 0, we have From the asymptotics, we deduce that, as n ↑ ∞, the convergence (5.40) holds and we can conclude by an application of Lemma 3.31, Chapter VI, of Jacod and Shiryaev [12].
Remark 5.15. Before proceeding, we point out that our proof no longer works for p ≥ 1/2: indeed, one might notice that the change in the asymptotic behaviour of the series Σ_{k=1}^n a_k² for p ≥ 1/2 makes the preceding reasoning unfruitful. Let us be more precise: this series possesses three different asymptotic regimes depending on p, and these are the reason behind the different regimes appearing in the behaviour of the elephant random walk, see e.g. [2]. More generally, they are behind the three regimes appearing in the invariance principles [8, 4]. When p ≥ 1/2, there is no Brownian component and the martingale t^{−p} ξ̂(3)_t is no longer in L²(P), because Y(t) ∈ L^q(P) only for q < 1/p. Since N^n converges weakly towards t^{−p} ξ̂(3)_t by (5.35), working with the sequence of quadratic variations ⟨N^n, N^n⟩ might not be the right approach to obtain tightness.
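The regimes of the series Σ_{k≤n} a_k² mentioned in the remark are easy to observe numerically. Writing a_n = Γ(n)Γ(1 + p)/Γ(n + p) (consistent with a_1 = 1 and the recursion a_{n+1} = a_n/γ_n), one has a_n ∼ Γ(1 + p) n^{−p}, so for p < 1/2 the partial sums grow like n^{1−2p}, while for p > 1/2 the series converges. A small sketch (our illustration):

```python
from math import lgamma, exp

def a(n, p):
    """a_n = prod_{k=1}^{n-1} k/(k+p) = Gamma(n)Gamma(1+p)/Gamma(n+p)."""
    return exp(lgamma(n) + lgamma(1 + p) - lgamma(n + p))

def S(n, p):
    """Partial sum of a_k^2 up to n."""
    return sum(a(k, p) ** 2 for k in range(1, n + 1))

# Doubling ratios reveal the regimes: for p < 1/2 the series grows like
# n^{1-2p} (ratio -> 2^{1-2p}); for p > 1/2 it converges (ratio -> 1).
for p in (0.3, 0.7):
    print(p, [round(S(2 * n, p) / S(n, p), 4) for n in (100, 1000, 10000)])
```

At p = 1/2 (not printed above) the growth is logarithmic, giving the intermediate critical regime.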
Proof of Proposition 5.13, general case.
Let us start by introducing some notation. First, if N is the jump measure of ξ, we shorten our notation for the compensated integrals and simply write ξ It will also be convenient to introduce the following notation for the sums of jumps: for fixed 0 < a < b, we write Σ_{a,b}(t) := Σ_{s≤t} so that in particular we have ξ(2) = Σ_{1,∞}. Next, if ξ can be decomposed as ξ = L(1) + L(2) for independent Lévy processes L(1), L(2), we denote by Ŝ(n) the reinforced skeleton for the decomposition that is naturally induced. More precisely, the two noise reinforced random walks on the right-hand side of the previous display are built with the same sequence of Bernoulli random variables as Ŝ(ξ), and just result from decomposing each increment accordingly.
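The naturally induced decomposition can be made concrete in a toy simulation: when the same Bernoulli variables and uniform indices drive the reinforcement, the step-reinforced walk is additive in the increments, i.e. Ŝ(n)(L(1) + L(2)) = Ŝ(n)(L(1)) + Ŝ(n)(L(2)) path by path. The sketch below (our illustration; all names are ours) checks this up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(2)
p, n = 0.3, 500

# Shared reinforcement randomness: at step k >= 1, repeat a uniformly
# chosen previous reinforced step with probability p, otherwise innovate.
eps = rng.random(n) < p
unif = np.array([0] + [rng.integers(0, k) for k in range(1, n)])

def reinforce(steps):
    """Step-reinforce the increment sequence using the shared (eps, unif)."""
    out = np.empty(n)
    out[0] = steps[0]
    for k in range(1, n):
        out[k] = out[unif[k]] if eps[k] else steps[k]
    return out.cumsum()              # the reinforced random walk

L1 = rng.normal(size=n)              # increments of two independent walks
L2 = rng.standard_exponential(n) - 1.0

lhs = reinforce(L1 + L2)
rhs = reinforce(L1) + reinforce(L2)
print(np.abs(lhs - rhs).max())       # tiny: equality up to rounding
```

The equality holds because, with the reinforcement randomness fixed, the recursion defining the reinforced steps is linear in the increments.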
Now, we proceed by progressively lifting the restrictions imposed in 5.3.2, as follows: Step 1: First, if ξ satisfies ξ = M^{≤K}, where M^{≤K} is the sum of a Brownian motion with diffusion coefficient q and a compensated martingale with jumps smaller than K, then by 5.3.2 the following convergence holds in distribution: Step 2: If b is a deterministic constant, let b • Id := (bt : t ≥ 0) and suppose now that ξ can be written as ξ = b • Id + M^{≤K}. Then, we can write where the sequence of processes (Ŝ(n)(b • Id) : n ≥ 1) is deterministic and converges uniformly to the continuous function b • Id. Indeed, notice that the reinforcement does not affect the drift term, since Ŝ(n)(b • Id)_t = b⌊nt⌋/n. We deduce from [12, Lemma 3.33] that, as n ↑ ∞, we still have From here, we work with the Lévy process ξ with triplet (a, q², Λ) and Lévy-Itô decomposition given by: and we denote its jump measure by N; in particular, we have We can rearrange the triplet by compensating and modifying appropriately the drift coefficient, in such a way that: where ξ^{≥K} := 1_{(−K,K)^c} x ∗ N. Before moving to Step 3, let us make the following two remarks.
• First, notice that for each fixed n, Ŝ(n)(ξ^{≥K}) → 0 uniformly in probability as K ↑ ∞. Indeed, we have where the right-hand side can be written in terms of the jump measure N of ξ as The right-hand side in the previous display converges to 0 as K ↑ ∞, and notice that the bound does not depend on n.
• Let ξ̂ be the noise reinforced Lévy process with characteristics (a, q², Λ, p) and denote its jump measure by N̂. Again, we can rewrite ξ̂, by compensating appropriately and modifying the drift coefficient, as follows: Arguing as before, we have the uniform convergence in probability b_K Id + qB̂ + ξ̂(3)_{0,K} → ξ̂ as K ↑ ∞, since, by the description of N̂ given in Definition 4.5, we have Step 3: To conclude, for K > 1, we write respectively the Lévy process and the corresponding NRLP without their jumps of size greater than K as In (5.42) we already proved that, for each fixed K, we have while by our second remark, it holds that ξ̂^{≤K} → ξ̂ in distribution as K ↑ ∞.
Since convergence in distribution is metrisable, there exists an increasing sequence (K(n) : n ≥ 1), tending to infinity slowly enough as n ↑ ∞, such that Moreover, we can write where for each ε > 0, by (5.43), we have: We can now apply [12, Lemma 3.31, Chapter VI] to deduce the convergence Ŝ(n)(ξ) With this last result, the proof of Proposition 5.13 is complete.

Convergence of finite-dimensional distributions
We maintain the notation and setting introduced at the beginning of Section 5.
Proposition 5.16. Let ξ be a Lévy process with characteristic triplet (a, q², Λ) and denote its characteristic exponent by Ψ. Fix an admissible memory parameter p ∈ (0, 1) and, for each n, let (S(n), Ŝ(n)) be the sequence of n-skeletons and their corresponding reinforced versions as defined in (5.4). Then, there is weak convergence in the sense of finite-dimensional distributions, where we denote by (ξ, ξ̂) a pair of processes with law characterised by (5.2).
Remark that since the convergence is in the sense of finite-dimensional distributions, the restriction p < 1/2 is dropped. Our proof relies on two results taken respectively from [7] and [9]; we state them without proof for ease of reading. Corollary 3.7 of [7]: Let F be a continuous functional on counting functions such that F(0) = 0, where, with a slight abuse of notation, we still write 0 for the identically zero trajectory. Further, suppose that there exist c > 0 and 1 ≤ γ < 1/p such that |F(ω)| ≤ c ω(1)^γ for every counting function ω : [0, 1] → N. Then, if Y is a Yule-Simon process with parameter 1/p, the following convergence holds in L¹(P): (5.45) The second result concerns the asymptotic behaviour of Ψ.
Now we have all the ingredients needed for the proof of Proposition 5.16.
Proof. We fix k ≥ 1, 0 < λ_1 < · · · < λ_k ≤ 1, and let β_1, . . ., β_k be real numbers. In order to establish the finite-dimensional convergence, it suffices to show that converges as n ↑ ∞ towards (5.2). In this direction, for each n, we write (N^(n)_ℓ(k))_{k≥1,ℓ≥1} for the counting processes of repetitions of Ŝ(n) introduced in (5.12). Recalling the identity (5.13), we can write, and with E[e^{iλX^(n)_ℓ}] = e^{Ψ(λ)/n} for every ℓ. Then, by independence of the counting processes (N^(n)_ℓ) from the sequence (X^(n)_ℓ)_{ℓ≥0}, the characteristic function (5.46) can be written as follows Remark that since the law of (N^(n)_ℓ(k))_{k≥1,ℓ≥1} does not depend on n, we can drop the superscript (n) in the last display. Next, recall that N_ℓ(⌊nt⌋) = 0 for all t ∈ [0, 1] if ε_ℓ = 1, while on the other hand, if ε_ℓ = 0, then N_ℓ(⌊ns⌋) = 0 for ⌊ns⌋ < ℓ, and N_ℓ(⌊ns⌋) ≥ 1 if ⌊ns⌋ ≥ ℓ. Hence, we have: By the previous observations, we can write: Now, let us establish the convergence in probability of both terms in the previous display separately.
Starting with the first one, we introduce the functional F : D[0, 1] → C defined as follows: for ω : [0, 1] → N a generic counting function. This is a Q-a.s. continuous functional, since ω ↦ 1_{ω(s)∈[1,∞]} can be written as ω ↦ ω(s) ∧ 1, which is the composition of a Q-a.s. continuous functional with the continuous mapping x ↦ x ∧ 1. Moreover, we have F(0) = 0, and notice that we can bound: by monotonicity of ω and the inequality 1_{ω(s)∈[1,∞]} ≤ ω(s). Now, by Lemma 3.1 of [9], we deduce that F satisfies the hypotheses of Corollary 3.7 of [7], since for a constant K that only depends on β(Λ) and q. From an application of Corollary 3.7 of [7], we obtain the following convergence: (5.50) Turning our attention to the second term, we claim similarly that: (5.51) Indeed, if for each n we denote by u(n) a uniform random variable on {1, . . ., n}, independent of the i.i.d. sequence (ε_n)_n of Bernoulli random variables with parameter p, we have since ε_{u(n)} is independent of u(n) for each n. Further, since u(n)/n converges in law towards a uniform random variable on [0, 1], the sequence of step processes (1_{u(n)≤⌊n•⌋})_{n∈N} converges weakly towards 1_{U≤•}. Consequently, as n ↑ ∞, (5.52) converges towards where we recall that 1_{U≤t} has the same distribution as 1_{Y(t)≥1} by the description (2.1). Finally, recall the identity of Proposition 5.3 for the characteristic function of the finite-dimensional distributions of the pair (ξ, ξ̂). It follows from (5.47) and the limits (5.50), (5.51) that, as n ↑ ∞, we have convergence towards the characteristic function of the finite-dimensional distributions of (ξ, ξ̂). This result, paired with the tightness established in Proposition 5.13, proves Theorem 5.4.

Applications
We conclude this work with two sections devoted to applications.

Rates of growth at the origin
In this section we turn our attention to the trajectorial behaviour of noise reinforced Lévy processes at the origin. In this direction, let us start by recalling a well-known result established by Blumenthal and Getoor [9] for Lévy processes. Let ξ be a Lévy process with characteristic triplet (a, q², Λ) and no Gaussian component, viz. q = 0; in particular β(Λ) = β. Further, we make the following hypothesis: • If ∫_{|x|≤1} |x| Λ(dx) = ∞, the characteristic exponent can be written as follows: Observe that in this case, we have β(Λ) ∈ [1, 2].
• If ∫_{|x|≤1} |x| Λ(dx) < ∞, which can happen for β(Λ) ∈ [0, 1], we suppose that Ψ takes the following form: That is, when ∫_{|x|≤1} |x| Λ(dx) < ∞ we are assuming that the Lévy process has no linear drift, the reason being that otherwise the behaviour at 0 is dominated by the drift term. We insist on the fact that when β(Λ) = 1 the integral ∫_{|x|≤1} |x| Λ(dx) can be finite or infinite. We work for the rest of the section under these hypotheses, to which we refer as hypothesis (H). It was established by Blumenthal and Getoor in [9] that under (H), the behaviour at zero of a Lévy process is dictated by the Blumenthal-Getoor index of the Lévy measure Λ. More precisely, almost surely, we have: We will show that the same result still holds if we replace the Lévy process ξ by its noise reinforced version. Concretely, the main result of this section is the following: Proposition 6.1. Let ξ be a Lévy process with triplet (a, q², Λ) satisfying hypothesis (H), and consider its noise reinforced version ξ̂ for an admissible parameter p. Then, almost surely, (6.1) holds, while the lim sup in (6.2) is infinite. The rest of the section is devoted to the proof of Proposition 6.1, which is achieved in several steps. We start by proving the second statement (6.2); in Lemma 6.2 we prove (6.1) for β(Λ) ≥ 1, ∫_{|x|≤1} |x| Λ(dx) = ∞, and the case β(Λ) ≤ 1, ∫_{|x|≤1} |x| Λ(dx) < ∞ is treated separately in Lemma 6.4.
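As a concrete illustration of the dichotomy (a standard example, not taken from the text): for a strictly α-stable subordinator with 0 < α < 1, hypothesis (H) holds and the statements read as follows.

```latex
% Stable subordinator, 0 < \alpha < 1:
\Lambda(dx) = c\, x^{-1-\alpha}\, dx \quad (x > 0), \qquad \beta(\Lambda) = \alpha .
% Blumenthal--Getoor rates at the origin, almost surely:
\lim_{t \downarrow 0} t^{-\gamma}\, \xi_t = 0 \quad \text{for } \gamma < 1/\alpha,
\qquad
\limsup_{t \downarrow 0} t^{-\gamma}\, \xi_t = \infty \quad \text{for } \gamma > 1/\alpha .
% Proposition 6.1 asserts the same rates for the reinforced version
% \hat{\xi}, provided p is admissible, i.e. \alpha < 1/p.
```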
Proof of (6.2). It suffices to prove that for some r > 0 and ε > 0, almost surely there exists a sequence of jumps occurring in [0, ε], at times that we denote by (t_i), satisfying Now, recall from the discussion following (4.11) that the jump measure N̂ of ξ̂ dominates a Poisson point process with intensity (1 − p)(du ⊗ Λ), say N′. If we denote the atoms of N′ by (u_i, x_i), we deduce that Now, take r > 0 small enough so that the inequality 1/(γ − r) < β(Λ) still holds. For such a choice of r, the integral (6.3) is infinite by definition of the index β(Λ), and the claim follows.
Now we focus on showing that lim_{t↓0} t^{−γ} |ξ̂_t| = 0 for γ ∈ (0, 1/β(Λ)). In this direction, let us start by introducing some notation and making some preliminary remarks. First, notice that since we are interested in the behaviour of ξ̂ at the origin, we can rely on the original construction in [7] in terms of Poissonian sums of Yule-Simon processes, which we recalled in Section 2.3. Next, under (H), ξ̂ can be written either as the sum of a compensated integral ξ̂(3) and a reinforced compound Poisson process ξ̂(2), viz.
The statement (6.1) of Proposition 6.1 remains to be proved only when the Lévy measure fulfils the integrability condition ∫_{|x|≤1} |x| Λ(dx) < ∞. Recalling the discussion prior to Lemma 6.2, we henceforth assume that the Lévy process is a driftless subordinator with jumps smaller than one, say (T_t), and we denote by (T̂_t) the corresponding reinforced version for a memory parameter p ∈ (0, 1). It is then convenient to work with its Laplace transform at time t ∈ [0, 1]: for λ ≥ 0, for Φ(λ) := (1 − p) ∫_{R_+} (1 − e^{−λx}) Λ(dx), where Y is a Yule-Simon process with parameter 1/p. The following result from [9] will be needed, and we state it for the reader's convenience: Theorem 6.3. [Blumenthal, Getoor] [9] If Φ(λ) is the Laplace exponent of a driftless subordinator with Lévy measure Λ, then for any ε > 0, Let ε > 0, fix λ > 0 and observe from Theorem 6.3 that for t ∈ (0, 1), there exist positive constants K and R such that Consequently, for t ∈ (0, 1) the following bound holds: Proof. Consider t ∈ [0, 1] and fix a > 0.
An application of Markov's inequality for g(r) = 1 − e^{−r}, together with the inequality g(r) ≤ r for r ≥ 0, yields Since Φ(0) = 0 and Y(t) conditioned on Y(t) ≥ 1 follows the Yule-Simon distribution with parameter 1/p, for a constant C we deduce the bound: where we denote by η a Yule-Simon random variable with parameter 1/p. Now, let h be an increasing function with lim_{t↓0} h(t) = 0, and consider a = h(2^{−n}), t = 2^{−(n−1)}. Then, by (6.12) and summing over n ∈ N, we deduce In order to apply a Borel-Cantelli argument, we specialise to our case of interest: we set h(t) := t^γ and show that the right-hand side of (6.13) is finite. From the first inequality in (6.11) with λ = 1, we get For ε small enough, we have both η^{β(Λ)+ε} ∈ L¹(P) (since η is in L^q(P) for any q < 1/p and β(Λ) < 1/p) and 1 − γβ(Λ) − γε > 0, by our standing assumption 1 > γβ(Λ). Consequently, we have which entails by Borel-Cantelli that T̂_{2^{−(n−1)}} < (2^{−n})^γ holds for all n large enough, a.s. By a monotonicity argument, it follows that a.s. T̂_t < t^γ for all t small enough, and in consequence lim sup_{t↓0} t^{−γ} T̂_t ≤ 1. If we now take h(t) = δt^γ for δ ∈ (0, 1), by the same reasoning we obtain lim sup_{t↓0} t^{−γ} T̂_t ≤ δ, which leads to the desired result.
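The moment condition on η used above (η ∈ L^r(P) precisely when r < 1/p) can be explored numerically via a standard mixture representation of the Yule-Simon law: if E is exponential with rate ρ and, given E, η is geometric on {1, 2, . . .} with success probability e^{−E}, then P(η = k) = ρ B(k, ρ + 1). The sketch below is our illustration (the representation is a classical fact, not taken from the text); it checks the mean ρ/(ρ − 1) for ρ = 1/p > 1.

```python
import numpy as np

rng = np.random.default_rng(3)
p = 0.4
rho = 1.0 / p                    # Yule-Simon parameter rho = 1/p

# Mixture representation: eta | E ~ Geometric(e^{-E}) with E ~ Exp(rate rho)
# gives P(eta = k) = rho * B(k, rho + 1), the Yule-Simon distribution.
E = rng.exponential(scale=1.0 / rho, size=200_000)
eta = rng.geometric(np.exp(-E))

# eta has finite moments of order r exactly for r < rho; for rho = 2.5
# the mean is finite and equals rho / (rho - 1).
print(eta.mean(), rho / (rho - 1))
```

The tail P(η ≥ k) decays like k^{−ρ}, which is the source of the restriction r < 1/p in the argument above.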
Finally, our proof of Proposition 6.1 is complete.

Noise reinforced Lévy processes as infinitely divisible processes
As was already mentioned in Section 4.3, NRLPs are infinitely divisible processes, abbreviated ID processes. In this final section, we study their properties under this new scope. In this direction, we start by giving a brief overview of the theory; our exposition mainly follows Rosiński [21] and Chapter 3 of Samorodnitsky [22]. Then, we identify the features of NRLPs in this setting; more precisely, we identify the functional triplet of NRLPs, in the sense of ID processes. The objective is hence to place Lévy processes and their NRLP counterparts in the context of ID processes and compare them through this new lens. As an application, making use of the Isomorphism Theorem for ID processes [21, Theorem 4.4], we establish the following result: Proposition 6.5. Let ξ̂ be a noise reinforced Lévy process with characteristics (a, 0, Λ, p). Let f : R → R_+ be a bounded, continuous function with f(x) = O(x²) at 0. Then, we have Note that the probability distribution appearing in the previous display is the Yule-Simon distribution (2.2). For an analogous result in the setting of Lévy processes we refer to [21, Proposition 4.13], and in our proof we shall use similar types of arguments. To simplify notation, for the rest of the section we work with NRLPs on [0, 1], but our exposition can be adapted to R_+ with slight changes. Hence, we can make use of the construction of NRLPs from [7] in terms of Poissonian Yule-Simon series that we recalled at the end of Section 3; this construction will be used for the rest of the section.

Preliminaries on infinitely divisible processes
Let us introduce some standard notation, mostly taken from [21]. For T a nonempty set, we denote by R^T the set of R-valued functions indexed by t ∈ T. If S ⊂ T is an arbitrary subset and e = (e(t))_{t∈T} ∈ R^T, we write e_S for the restriction of e to S. Further, let π_S : R^T → R^S be the canonical projection from R^T onto R^S, viz. the function defined as π_S(e) := e_S. For finite subsets of T of the form I := {t_1, . . ., t_k} ⊂ T, the space R^I is identified with R^k and we write e_I = (e(t_1), . . ., e(t_k)) ∈ R^I.
As usual, the space R^T is equipped with the cylindrical sigma-field B^T := σ(π_t : t ∈ T) generated by the projection mappings. For any arbitrary S ⊂ T, we denote by 0_S the zero element of R^S and we write B^S_0 := {A ∈ B^S : 0_S ∉ A}. Consequently, Notice however that this subset is not B^T-measurable when S is uncountable. Finally, for x ∈ R we set x̄ := x 1_{|x|≤1}, and if x = (x_1, . . ., x_k) ∈ R^k, the term x̄ should be interpreted component-wise, viz.
x̄ := (x̄_1, . . ., x̄_k). Now let us start with the following definition: Definition 6.6. An R-valued stochastic process X = (X_t)_{t∈T} is said to be infinitely divisible (in law) if for any n ∈ N there exist independent and identically distributed processes Y^(n,1), . . ., Y^(n,n) such that When T = {1} is a singleton, this is just the definition of a real-valued infinitely divisible random variable, in which case the characteristic function of X_1 takes the Lévy-Khintchine form: for q, b ∈ R and ν a Lévy measure. Further, it is well known that the set of infinitely divisible random variables and the set of distributions of Lévy processes are in bijection, and it is clear that if X is a Lévy process with characteristic exponent as in the previous display, we have where for each i ∈ {1, . . ., n}, Y^(n,i) is an independent copy of a Lévy process with characteristic triplet (b/n, q/n, ν/n). Said otherwise, Lévy processes are infinitely divisible processes. Moreover, from the formula for the characteristic function in Proposition 2.2, it is clear that NRLPs are in turn infinitely divisible. Now, recall that a Gaussian process X = (X_t)_{t∈T} is a T-indexed process satisfying that, for any I = {t_1, . . ., t_k} ⊂ T, the vector X_I = (X_{t_1}, . . ., X_{t_k}) is Gaussian. In the sequel we also assume that the Gaussian processes we work with are centred. Gaussian processes are characterised by their covariance function, in the sense that the law of X is completely determined by the positive semi-definite function Γ : The following characterisation of infinitely divisible stochastic processes shows that they are the natural generalisation of Gaussian processes: Proposition 6.7. [22, Proposition 3.1.3] An R-valued stochastic process X = (X_t)_{t∈T} is infinitely divisible if and only if for any finite collection of indices I = {t_1, . . ., t_k} ⊂ T, the random vector Hence, if X is an infinitely divisible process, by the Lévy-Khintchine representation and the previous proposition, for every I = {t_1, . . ., t_k} there exist: an R^k-valued measure ν_I(dx) verifying and ν_I({0_I}) = 0, a positive semi-definite I × I matrix Γ_I and an R^k-vector, that we denote by b(I), satisfying for every θ ∈ R^I the identity: where ν satisfies some regularity and integrability conditions that we now introduce: Definition 6.8. A measure ν on R^T is called a path Lévy measure if it satisfies the following two conditions: Moreover, we consider the following third condition: (iii) There exists a countable subset T_0 ⊂ T such that ν(π_{T_0}^{−1}(0_{T_0})) = 0.
Then, (iii) is a stronger statement than (ii), and it has been shown that a path Lévy measure is σ-finite if and only if (iii) holds; see e.g. [21]. Condition (ii) states, roughly speaking, that ν "does not charge the origin". As we already mentioned, in general 0_T is not measurable, and hence we cannot state this condition as in the finite-dimensional case of Lévy measures. One of the main results of the theory states that infinitely divisible processes are in bijection with functional triplets (b, Γ, ν); we refer to [21] for the proof: Theorem 6.9. For every infinitely divisible stochastic process X = (X_t)_{t∈T} there exists a unique generating triplet (b, Γ, ν), consisting of a path b ∈ R^T, a covariance function Γ on T × T and a path Lévy measure ν on R^T, such that for any finite I ⊂ T, ∫ (e^{i⟨θ,e_I⟩} − 1 − i⟨θ, ē_I⟩) ν(de). (6.16) Conversely, for every generating triplet (b, Γ, ν) there exists an infinitely divisible process satisfying (6.16).
Maintaining the notation of Theorem 6.9, it follows in particular that the law of any ID process X can be written as the sum of two independent processes, X = G + P in law, where G is Gaussian with covariance Γ and P is a so-called Poissonian ID process. When the equality X = G + P holds almost surely, we call G and P respectively the Gaussian part and the Poissonian part of X. Let us conclude our presentation with the following notion that will be of use: Definition 6.10. A process V = (V_t)_{t∈T} defined on a measure space (S, S, n) is called a representant of a path Lévy measure ν if for any finite I ⊂ T, we have That is, if V is only a representant, the measure n ◦ V^{−1} might not be a path Lévy measure, since it might "charge the origin". In the situations we are interested in, the representants will always be exact, and we only state the weaker definition in order to present the results we need in their full generality. Representants allow one to build Poissonian ID processes explicitly in terms of Poisson random measures; for more details we refer to [21], see also our brief discussion before the proof of Proposition 6.5 below.

The characteristic triplet of a NRLP
We can now start investigating Lévy processes and their reinforced counterparts as ID processes, and we begin with a basic analysis of the former. More precisely, we identify the path Lévy measure of Lévy processes as well as an exact representant. These results are known [21, Example 2.23] and the statements are only included to contrast with the analogous results for NRLPs; see Lemma 6.13 below. Lemma 6.11. The following assertions hold: (i) Let ξ be a Lévy process with characteristic triplet (a, q, Λ). The path Lévy measure ν of ξ is given by ν(de) := (dt ⊗ Λ) ◦ V^{−1}(de), where we denote by V the mapping V : R_+ × R → R^{R_+} defined as V(s, x) := x 1_{s≤•}.
It now follows that we can take q close enough to 1/p so that the integral in (6.19) is finite, and we deduce (6.17). Next, we identify the path Lévy measure of NRLPs.
In particular, from (i) we get that V is an exact representant of ν, on $(S, \mathcal{S}, n) = (D[0,1] \times \mathbb{R},\, \mathcal{B}(D[0,1]) \otimes \mathcal{B}(\mathbb{R}),\, Q \otimes (1-p)\Lambda)$. On the other hand, (ii) gives a natural interpretation, in the terminology of ID processes, of the admissibility of p for Λ.
Proof. To identify the Lévy measure, let us write the characteristic function of the finite-dimensional distributions of ξ in the form (6.16); to simplify notation, we suppose that a = q = 0. In this direction, consider a finite I ⊂ T, $\theta = (\theta_{t_1}, \ldots, \theta_{t_k}) \in \mathbb{R}^I$, and denote by $y = (y(t))_{t\in[0,1]}$ an arbitrary counting function. Recall the formula for the finite-dimensional distributions of ξ from Proposition 2.2, for t = 1. It now follows by Lemma 6.12 and the triangle inequality that we have: To conclude, let us show that ν satisfies the integrability condition (i) of Definition 6.8 if p is an admissible memory parameter for Λ, viz. if β(Λ) < 1/p, while when β(Λ) > 1/p the condition fails. By definition of ν, we have

Let us state the last two results that we need for the proof of Proposition 6.5. First, the Poissonian part of an ID process consists, roughly speaking, of Poissonian sums of i.i.d. trajectories; for instance, remark that for NRLPs those trajectories are the weighted Yule–Simon processes; for more examples see e.g. [21, Section 3]. More precisely, let $X = (X_t)_{t\in T}$ be an infinitely divisible process with characteristic triplet (b, Σ, ν) and suppose that $V = (V_t)_{t\in T}$ is a representant of ν defined on a σ-finite measure space $(S, \mathcal{S}, n)$. To simplify notation, set $\chi(u) := u\mathbf{1}_{\{|u|\le 1\}}$ and consider M a Poisson random measure on $(S, \mathcal{S})$ with intensity n. Then, the following process has the same distribution as X,
$$b_t + G_t + \int_S \big( V_t(s)\, M(ds) - \chi(V_t(s))\, n(ds) \big), \qquad t \in T, \qquad (6.24)$$
where $G = (G_t)_{t\in T}$ is an independent Gaussian process with covariance Σ. The integration in the previous display should be read as a compensated integral, and for a detailed statement we refer to [21, Proposition 3.1]. For example, notice that if X is a Lévy process, M is Poisson with intensity $dt \otimes \Lambda(dx)$ and replacing V by $x\mathbf{1}_{\{s\le\cdot\}}$ yields a Lévy–Itô representation. Finally, we give the statement of the Isomorphism Theorem for infinitely divisible processes that we use in our proof.
This allows us, for instance, to study the law of X under different conditionings, for appropriate choices of q. This will be used in our reasoning below. Now, let us conclude the proof of Proposition 6.5.
Proof of Proposition 6.5. To simplify notation, we will perform a slight abuse of notation by writing Λ instead of (1 − p)Λ. We start by fixing δ ∈ (0, 1) small enough such that m = Λ(|x| > δ) > 0. Now, let h > 0 and, as usual, write $y = (y(t))_{t\in[0,1]}$ for a generic counting trajectory in D[0, 1]. Recall the result of Lemma 6. for $G_h(z) = E[f(\xi_h + z)\,1_{S_h+1}]$, and notice that $\lim_{h \downarrow 0} G_h(z) = f(z)$ by right-continuity; remark that the previous display can be interpreted as the law of $\xi_h$ conditioned on having at least one jump of size greater than δ before time h. If we let η be a random variable distributed Yule–Simon with parameter 1/p under P, this entails that we can write: where in the last equality we used that the law of y(h) is the Yule–Simon distribution with parameter 1/p by Lemma 2.1, and that Q(y(h) ≥ 1) = h. Consequently, we deduce that

Now, we study the limit as h ↓ 0 of these three terms separately, starting with $K_1(h, \delta)$. Recall the notation introduced in Section 5.3.2 for compensated integrals, as well as $\Sigma^{\delta,\infty} := \mathbf{1}_{(-\delta,\delta)^c}\, x * N$ for the process obtained by adding the jumps of size greater than δ > 0. Recall that on $\{S_h = 0\}$, the process ξ has no jumps of size greater than δ before time h. It now follows that, restricted to $\{S_h = 0\}$, the following equality holds: for $c_\delta := -a + (1-p)^{-1}\int_{\{\delta \le |x| \le 1\}} x\,\Lambda(dx)$, and denote the right-hand side of (6.25) by $\xi^{\delta}_h$. Now, let us first consider the case β(Λ) < 2. Since f is bounded and $O(|x|^2)$ at the origin, for any $q \in (\beta(\Lambda) \vee 1,\, 1/p \wedge 2)$ satisfying q < r we can bound $|f(x)| \le C|x|^q$ for all x ∈ R, for some constant C large enough. Then, for a constant C that only depends on q we have

Now, arguing as in (6.7), (6.8), recall that for $q \in (\beta(\Lambda) \vee 1,\, 1/p \wedge 2)$ we have the following bound for the compensated sum of Yule–Simon processes, with $\int_{\{|x|\le\delta\}} |x|^q\,\Lambda(dx) < \infty$.
(6.26) Since q − 1 > 0, we have $\limsup_{h\downarrow 0} K_1(h, \delta) \le E[\eta^q] \int_{\{|x|\le\delta\}} |x|^q\,\Lambda(dx)$, which can be made arbitrarily small by an appropriate choice of δ. Remark that the same reasoning applies to $K_2(\delta)$, making use once again of the bound $|f(x)| \le C|x|^q$. Finally, since for any choice of δ, $K_3(h, \delta) \downarrow 0$ as h ↓ 0, we obtain the desired result. If β(Λ) = 2, we set q = 2 and once again recall from page 9 of Bertoin [7] that the inequality (6.26) still holds. In this case, since pβ(Λ) < 1, p must be smaller than 1/2 and consequently $E[\eta^2] < \infty$, while of course $\int_{\{|x|\le\delta\}} |x|^2\,\Lambda(dx) < \infty$ by definition of a Lévy measure. We can then proceed as before.
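For completeness, the moment bound for η invoked here can be read off the tail of the Yule–Simon distribution; this is a standard computation, not taken from the paper:

```latex
P(\eta = k) = \rho\, \mathrm{B}(k, \rho + 1)
  \sim \Gamma(\rho + 1)\, k^{-(1+\rho)}
  \quad (k \uparrow \infty), \qquad \rho = 1/p,
```

hence $E[\eta^q] = \sum_{k\ge 1} k^q\, P(\eta = k) < \infty$ precisely when $q < 1/p$; in particular, when p < 1/2 one may indeed take q = 2.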

Appendix
This short section is devoted to proving a technical identity needed for the proof of Lemma 4.2.The proof was omitted from the main discussion for readability purposes.
Fix a Lévy measure Λ on R, p ∈ (0, 1), and denote by $\mathbf{Z}$ the law of the standard Yule process $Z = (Z(t))_{t\in\mathbb{R}_+}$ started at $Z_0 = 1$. We write D[0, ∞) for the space of $\mathbb{R}_+$-indexed, R-valued rcll functions. Since Z is supported on the subset of D[0, ∞) of counting functions, in the sequel $z = (z_t)_{t\in\mathbb{R}_+}$ stands for a generic counting function. Moreover, if $F : \mathbb{R}_+ \times D[0,\infty) \to \mathbb{R}_+$ is a measurable function, we write $\mathbf{Z}^\bullet$ for the measure on $\mathbb{R}_+ \times D[0,\infty)$ defined as $\mathbf{Z}^\bullet(F) := \int_{\mathbb{R}_+} du\, E[F(u, Z)]$. Roughly speaking, the objective is to describe the law of the following "process":
$$(u, z) \mapsto \Big( \mathbf{1}_{\{u \le t\}}\, z_{p(\ln t - \ln u)} : t \in \mathbb{R}_+ \Big) \in D[0, \infty). \qquad (7.1)$$
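A minimal simulation sketch of the map (7.1); the function names `yule_path` and `reinforced_atom` are ours, and we use the standard description of the Yule process as a pure birth chain whose jump rate equals the current population:

```python
import math
import random

def yule_path(t_max, rng):
    """Jump times of a standard Yule process Z started at Z_0 = 1 on [0, t_max];
    Z(t) = 1 + #{jump times <= t}."""
    jumps, pop, t = [], 1, 0.0
    while True:
        t += rng.expovariate(pop)  # exponential holding time with rate = population
        if t > t_max:
            return jumps
        jumps.append(t)
        pop += 1

def reinforced_atom(u, jumps, p, t):
    """Evaluate the path in (7.1) at time t: 1_{u <= t} * Z(p(ln t - ln u))."""
    if t < u:
        return 0
    s = p * (math.log(t) - math.log(u))
    return 1 + sum(1 for j in jumps if j <= s)
```

Note that the time change $p(\ln t - \ln u)$ freezes the path at 0 until time u, after which it grows logarithmically slowly along the Yule trajectory.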

Figure 1: Sketch of the jumps of a noise reinforced Poisson process. Jumps corresponding to innovations are marked with x, while each linked o is a repetition of the former.
With the same notation as in Section 4.1, denote by $\mathscr{E}$ a Poisson random measure on R with intensity $\Lambda(\mathbb{R})(1-p)e^t\,dt$ and consider the Poisson point process $\sum_{u\in\mathscr{E}} \delta_{(u, x_u)}$ on R × R with intensity $(1-p)e^t\,dt \otimes \Lambda(dx)$. Now, for each $u \in \mathscr{E}$, consider an independent copy $D_u$ of D and set
$$\mathcal{L}(ds, dx) := \sum_{u\in\mathscr{E}} \sum_{t\in D_u} \delta_{(u+t,\, x_u)}. \qquad (4.9)$$
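The two-layer construction in (4.9) can be sketched as follows. This is a toy version with finitely many atoms; `decorated_ppp` and the samplers are illustrative names of ours, and `decoration_sampler` stands in for the law of the decoration D:

```python
import random

def decorated_ppp(innovation_times, mark_sampler, decoration_sampler, rng):
    """Sketch of (4.9): each atom u of the point process E carries a mark x_u
    drawn from the normalised jump law, plus an independent decoration D_u
    (here a finite list of nonnegative time shifts). Every decorated point
    (u + t, x_u) repeats the mark x_u of its innovation."""
    atoms = []
    for u in innovation_times:
        x = mark_sampler(rng)              # jump size x_u ~ Lambda(dx) / Lambda(R)
        for t in decoration_sampler(rng):  # time shifts t in D_u
            atoms.append((u + t, x))
    return atoms
```

The key structural point, visible in the code, is that all points descending from the same innovation u share the same jump size $x_u$; only the times are shifted.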

Figure 2: Sample path of a Lévy process and its reinforced version.
) and remark that $E[Y(u)^r] = u \cdot E[\eta^r]$, where η stands for a Yule–Simon random variable with parameter 1/p. It now follows that we can bound $E|\xi$

It is possible to show that one can recover the collection of triplets $((b(I), \Gamma_I, \nu_I) : I \subset T,\, |I| < \infty)$ from a so-called functional triplet (b, Γ, ν), consisting of a path $b \in \mathbb{R}^T$, a covariance function $\Gamma : T \times T \to \mathbb{R}$ and a path-valued measure ν defined on $\mathbb{R}^T$, satisfying for any finite I ⊂ T that b