Numerical aspects of shot noise representation of infinitely divisible laws and related processes

The ever-growing appearance of infinitely divisible laws and related processes in various areas, such as physics, mathematical biology, finance and economics, has fuelled an increasing demand for numerical methods of sampling and sample path generation. In this survey, we review shot noise representation with a view towards sampling infinitely divisible laws and generating sample paths of related processes. In contrast to many conventional methods, the shot noise approach remains practical even in the multidimensional setting. We provide a brief introduction to shot noise representations of infinitely divisible laws and related processes, and discuss the truncation of such series representations towards the simulation of infinitely divisible random vectors, Lévy processes, infinitely divisible processes and fields and Lévy-driven stochastic differential equations. Essential notions and results towards practical implementation are outlined, and summaries of simulation recipes are provided throughout along with numerical illustrations. Some future research directions are highlighted.


Introduction
Infinitely divisible laws have long been investigated actively in the literature due to their fascinatingly rich structure. The corresponding class of stochastic processes is the class of Lévy processes, and moreover, infinitely divisible laws also bear deep relations to infinitely divisible processes and fields and Lévy-driven stochastic differential equations. Over the last half-century, such stochastic processes have grown to become widely popular across many domains. The attractiveness of such stochastic processes may be attributed to two reasons. Firstly, they are able to capture jump discontinuities with great versatility. Through the theory of these stochastic processes, one can construct stochastic processes with flexible jump structures under mild technical conditions. Secondly, stochastic processes related to infinitely divisible laws can capture dynamics beyond Gaussianity. For example, the subclass of such stochastic processes with heavy-tailed marginal laws offers an easy solution for the modelling of heavy-tailed dynamics. To mention only a handful of applications, these stochastic processes have appeared in physical modelling for diffusion and transport [17,98], degradation modelling [1,49] and various applications in finance and insurance [20,23,77].
As stochastic processes relating to infinitely divisible laws find increasing applications in the literature, there is a clear demand for methods of sampling infinitely divisible laws and generating sample paths of related processes. One potential solution is the approximation of sample paths based on classical deterministic time discretisation. However, a major drawback is that many application contexts require simulation techniques which specify individual jumps [49,77]. For example, in insurance, claims can be modelled as downward jumps, so preserving jumps is necessary for observing the ruin time. This need to preserve all or some of the jumps of the stochastic process serves as an additional hurdle in our quest for appropriate numerical schemes, and eliminates conventional methods based upon increments from consideration. Therefore, simulation of infinitely divisible laws and related stochastic processes by generating individual jumps seems not only ideal, but perhaps necessary in many practical scenarios.
In physics, the term shot noise is used to describe noise resulting from the discreteness of charge carriers. In electrical circuits, shot noise manifests as the sporadic fluctuations of current, particularly when the current is low [7,82]. In optics, shot noise manifests as the fluctuations of the number of photons detected, most apparent in a low light environment [5,29]. Such discrete noise is modelled by the shot noise process, in which the arrival times of the shots or jumps follow a Poisson process. More precisely, if X_t denotes the system's state at time t, then it is most commonly expressed through one of the two following representations:
• X_t = ∫_{R^d_0 × [0,+∞)} H(t, s, z) µ(dz, ds), where µ(dz, ds) is a marked Poisson random measure which has a weight at (z, s) if there is a jump at time s ≥ 0 with size z ∈ R^d\{0};
• X_t = ∑_{k∈N} H(t, Γ_k, Z_k), where {Γ_k}_{k∈N} is the sequence of standard Poisson arrival times independent of the iid marks {Z_k}_{k∈N} corresponding to µ(dz, ds).
Here, the kernel H(t, s, z) of the shot noise process describes the level, at an observation time t, of the shot that occurred at a previous time s. In physical applications, it often makes sense that the influence of the shot on the physical system decays with the passage of time. In those settings, the magnitude of the kernel is nonincreasing in the observation time t, and often taken as exponential decay. The shot noise phenomenon has been investigated in a wide variety of applications, for example, in metallic conductors [7], quantum systems [6,82] and optics [5,29,97,105]. The modelling of shot noise has also been extended mathematically, for example, to Cox processes [15], capturing long-range dependence [13] and nonlinearity [30]. Shot noise processes have been shown to relate to other important stochastic processes in the asymptotic regime, such as the fractional Brownian motion [59] and nonstationary Gaussian processes [76].
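To make the series form of the shot noise process concrete, the following minimal Python sketch (all parameter values and the choice of exponential marks are illustrative, not from the survey) simulates a path with the exponentially decaying kernel H(t, s, z) = z e^{−λ(t−s)} 1(s ≤ t):

```python
import numpy as np

def shot_noise_path(T, rate, decay, t_grid, rng):
    """Simulate X_t = sum_k Z_k * exp(-decay * (t - G_k)) over arrivals G_k <= t,
    where {G_k} are Poisson arrival times with intensity `rate` on [0, T] and
    the marks Z_k are iid unit-mean exponentials (an illustrative choice)."""
    n = rng.poisson(rate * T)                    # number of shots on [0, T]
    arrivals = np.sort(rng.uniform(0.0, T, n))   # arrival times given the count
    marks = rng.exponential(1.0, n)              # iid mark sizes Z_k
    # kernel H(t, s, z) = z * exp(-decay * (t - s)) for s <= t, and 0 otherwise
    active = t_grid[:, None] >= arrivals[None, :]
    kernel = marks[None, :] * np.exp(
        -decay * np.maximum(t_grid[:, None] - arrivals[None, :], 0.0))
    return (active * kernel).sum(axis=1)

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 10.0, 201)
x = shot_noise_path(T=10.0, rate=2.0, decay=1.5, t_grid=t_grid, rng=rng)
```

Each shot contributes its mark at its arrival time and then decays geometrically, so the path exhibits the characteristic sawtooth pattern of shot noise.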
Crucially, these shot noise processes have a profound connection to infinitely divisible laws [65]. By considering the shot noise kernel as a cumulative integral of a Lévy measure and the underlying Poisson arrival times as random steps over the domain of the kernel rather than over time, one obtains a shot noise series representation of the infinitely divisible law characterised solely by a Lévy measure. This straightforwardly leads to shot noise representations for Lévy processes and infinitely divisible processes, which are in some sense analogous to the well-known Karhunen-Loève expansion for Gaussian processes.
Paralleling advancements in the study of shot noise processes [24,35,40,83,103], shot noise representations of infinitely divisible laws and processes have been investigated as early as the 1970s. Ferguson and Klass [31] led the seminal effort in establishing an initial method of representing independent increment processes without Gaussian components as random series. In response, Kallenberg [50] investigated their convergence properties, while Resnick [81] demonstrated their derivation via the Lévy-Itô decomposition of such stochastic processes.
Their theoretical development and appearance have since gradually expanded, for instance, [36,93,101,102]. In particular, Rosiński established general necessary and sufficient conditions for almost sure convergence of shot noise series to infinitely divisible random vectors without Gaussian components [85]. Shot noise representation was used in [84] to study path properties of Lévy-driven stochastic integrals. Almost sure uniform convergence for shot noise representations of Lévy processes was investigated in the general setting in [87]. Since then, shot noise representation has been at the forefront of the study of a variety of relevant stochastic processes, such as the stable process [26,68], its generalisations via tempering [11,42,88], fractional stable motions [19,41,53,67] and Lévy processes of type G [104].
Perhaps, the greatest gift of shot noise representation to the study of stochastic processes in the age of unprecedented computational power is its elegant and simple solution to sample path generation. In the case of Lévy and infinitely divisible processes, by truncating the series representation to a finite sum, one obtains a straightforward approximation of the stochastic process from which approximate sample paths can be generated. Among the few sample path generation schemes [89], this has been the go-to numerical method for contemporary applications involving Lévy processes, [49,52,66,77] to name a few more examples. Moreover, truncation of shot noise can be naturally extended to the multidimensional setting, including Lévy copulas [37,100]. As one would expect, the widespread use of the numerical technique demands analyses of the associated truncation error. To give some examples of particular stochastic processes, error analyses have been performed for the stable process [12], gamma process [49], tempered stable process [45], higher order fractional stable motion [53] and Lévy-driven CARMA processes [54]. More general treatments of error analysis have been studied, for example, in terms of moments [46] and Gaussian approximation [3,22]. As such, using numerical schemes based on shot noise representations carries the advantages of tractable generalisability to multidimensional settings and error analysis. Through the present survey, we hope to clearly establish the practical value of numerical methods for simulating infinitely divisible laws and related processes based on shot noise representations. In doing so, we demonstrate the viability of using jump models in applied contexts, and encourage further enrichment of the technique.
This survey aims to summarise shot noise representation with a view towards sampling infinitely divisible laws and generating sample paths of related processes. We review some preliminary notions of infinitely divisible laws and related processes in Section 2, and offer several important examples of infinitely divisible laws. Section 3 outlines shot noise representations of infinitely divisible laws via the Lévy-Itô decomposition. Examples with various infinitely divisible laws are provided. We describe the approximation of infinitely divisible laws via truncation of shot noise representation in Section 4, along with important results on the error. In Section 5, we discuss the truncation scheme for Lévy processes, infinitely divisible processes and fields and Lévy-driven stochastic differential equations. Various examples of error analysis and numerical illustrations are presented, along with summaries of simulation recipes. Section 6 briefly visits some practical numerical topics for computing expectations via shot noise representations, such as various variance reduction methods and using quasi-Monte Carlo methods. Finally, we summarise our discourse in Section 7 along with brief suggestions for future research directions.

Preliminaries
We begin by reviewing some preliminaries of the theory of infinitely divisible laws and related processes. Some well-known examples of particular interest to the literature are given.
We introduce some notations which will be used throughout. In what follows, we will be working under the probability space (Ω, F, P). We denote the Borel σ-algebra over a space S as B(S). Let N := {1, 2, · · ·} and N_0 := {0, 1, 2, · · ·}. Denote ⟨·, ·⟩ as the inner product, ‖·‖ as the Euclidean norm on R^d for any d ∈ N, and R^d_0 := R^d\{0}. The Dirac delta measure concentrated at x ∈ R^d is denoted by δ_x. We denote the positive part of functions as (f(x))_+ := 0 ∨ f(x). Let =_L represent equality in law and L(·) the law of a random vector. We denote the indicator function of a set A as 1_A(·) and sometimes as 1(· ∈ A). We refer to the uniform distribution over (0, 1) and the exponential distribution with unit rate as the standard uniform and exponential distributions, respectively. The sequence {Γ_k}_{k∈N} will be used throughout to denote the arrival times of the standard Poisson process.

Infinitely divisible laws and related processes
Perhaps the most familiar definition of infinite divisibility is as follows. A law F is said to be infinitely divisible if for every n ∈ N, there exists a sequence of iid random vectors {X_{k,n}}_{k=1,...,n} such that F = L(∑_{k=1}^n X_{k,n}). Immediately from this definition, we see that if ϕ is the characteristic function of a random vector X, then X is infinitely divisible if and only if there exists a characteristic function ϕ_n for every n ∈ N such that (ϕ_n)^n ≡ ϕ. This criterion is often useful in determining the infinite divisibility of a law given that we know its characteristic function. Furthermore, the characteristic function of infinitely divisible laws can provide even more insight through the following celebrated result on their characterisation. Theorem 2.1 (Lévy-Khintchine representation). A probability law F on R^d is infinitely divisible if and only if there exists a triple (a, S, ν), where a ∈ R^d, S ∈ R^{d×d} is a symmetric nonnegative-definite matrix and ν(dz) is a measure on R^d_0 such that

∫_{R^d_0} (1 ∧ ‖z‖²) ν(dz) < +∞,  (2.1)

and the characteristic function of F is given by

ϕ(θ) = exp( i⟨a, θ⟩ − (1/2)⟨θ, Sθ⟩ + ∫_{R^d_0} ( e^{i⟨θ, z⟩} − 1 − i⟨θ, z⟩1(‖z‖ ≤ 1) ) ν(dz) ), θ ∈ R^d.  (2.2)

Moreover, if it exists, the triple (a, S, ν) is unique.
A rigorous proof of Theorem 2.1 can be found in [95, Section 8]. We call the measure ν(dz) which satisfies the integrability condition (2.1) a Lévy measure. A stochastic process {X_t : t ≥ 0} in R^d with X_0 = 0 a.s. is a Lévy process if (i) it has stationary and independent increments, (ii) it is continuous in probability, that is, for any ε > 0 and t ≥ 0, it holds that lim_{∆→0} P(‖X_{t+∆} − X_t‖ > ε) = 0, and (iii) t → X_t is càdlàg.
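Returning to the convolution-root criterion (ϕ_n)^n ≡ ϕ stated before Theorem 2.1, it can be checked numerically for a simple case: the Poisson law with rate λ has characteristic function exp(λ(e^{iθ} − 1)), and its n-th convolution root is again a Poisson characteristic function with rate λ/n. A short Python check (the rate and grid are illustrative values):

```python
import numpy as np

# Characteristic function of the Poisson law with rate lam:
# phi(theta) = exp(lam * (exp(i*theta) - 1)).
def phi_poisson(theta, lam):
    return np.exp(lam * (np.exp(1j * theta) - 1.0))

lam, n = 3.0, 7
theta = np.linspace(-5.0, 5.0, 101)
# The n-th convolution root is itself a Poisson characteristic function with
# rate lam / n, so (phi_n)^n == phi: the Poisson law is infinitely divisible.
root = phi_poisson(theta, lam / n)
assert np.allclose(root ** n, phi_poisson(theta, lam))
```

The same identity in the exponent, nλ/n = λ, is exactly the Lévy-Khintchine exponent scaling underlying Theorem 2.1.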
While the general infinitely divisible law and Lévy process may contain a Gaussian component, our focus is restricted to the absence of such components, that is, when S = 0 in the Lévy-Khintchine formula (2.2). Among the most elementary examples of Lévy processes are the Poisson process and its generalisation, the compound Poisson process. The (compound) Poisson process is a Lévy process with a (compound) Poisson marginal law. A random vector X is distributed under a compound Poisson law if and only if it can be expressed as the random sum X = ∑_{k=1}^N Y_k, where N is a Poisson random variable with rate λ > 0 and {Y_k}_{k∈N} is a sequence of iid random vectors with law ρ(dz) independent of N, with the characteristic function

ϕ_X(θ) = exp( λ ∫_{R^d} (e^{i⟨θ, z⟩} − 1) ρ(dz) ), θ ∈ R^d.  (2.3)

The connection between Lévy processes and infinitely divisible laws runs deep. Specifically, the relationship is a correspondence. Where {X_t : t ≥ 0} is a Lévy process, it holds that ϕ_{X_t} ≡ (ϕ_{X_1})^t, so every increment of a Lévy process is infinitely divisible. Conversely, every infinitely divisible law admits the existence of a Lévy process with a matching marginal law. To interpret the Lévy-Khintchine representation in the context of Lévy processes, note that the characteristic function ϕ in (2.2) corresponds to a convolution of a Brownian motion and many Poisson processes of different jump sizes and intensities governed by the Lévy measure ν(dz). The vector a corresponds to a linear drift while the matrix S corresponds to the diffusion matrix of the Brownian motion. With respect to the Poisson component, the Lévy measure ν(B), for any B ∈ B(R^d_0), corresponds to the expected number of jumps with sizes in B in a unit time interval. The term i⟨θ, z⟩1(‖z‖ ≤ 1) in the integrand is a centring term for the convolved Poisson processes with small jump sizes, which ensures that the integral exists in the case where ν(dz) is an infinite Lévy measure with a heavy density near the origin.
Specifically, since |e^{i⟨θ, z⟩} − 1| ∼ |⟨θ, z⟩| as ‖z‖ → 0, if linear functions are not ν-integrable near the origin, then the compensation terms are necessary and cannot be integrated separately in general. An exception is the so-called subordinator, where the Lévy measure has positive support and finite first moment around the origin. The subordinator forms an important subclass of Lévy processes, which can be employed to express random clocks. In this survey, the stable process (Lévy measure (2.5)) with stability α ∈ (0, 1) without negative jumps, the tempered stable process (Lévy measure (2.6)) with stability α ∈ (0, 1) without negative jumps and the gamma process (Lévy measure (2.9)) are examples of subordinators.
To summarise, a Lévy process is generally comprised of a diffusion component, which is Gaussian, and a jump component, which can be decomposed into compound and compensated Poisson components for large and small jumps, respectively. For a comprehensive review of the theory of Lévy processes, we refer the reader to [10,95]. An important takeaway relevant to our discussion later is as follows: an infinitely divisible random vector X characterised by the triplet (0, 0, ν) can be understood as the (possibly infinite) sum of the jumps of a Lévy process characterised by the same Lévy-Khintchine triple over the unit time interval. This allows us to make sense of the concept of jumps in the context of an infinitely divisible random vector.
A related concept to infinitely divisible laws and Lévy processes is the notion of infinitely divisible processes [80,84,99]. A stochastic process is infinitely divisible if its finite dimensional distributions are infinitely divisible. Naturally, all Lévy processes are infinitely divisible. However, the class of infinitely divisible processes is more general, as it includes the following. A stochastic process {X_t : t ≥ 0} in R^d is a stochastic integral process if it can be represented in the stochastic integral form

X_t = ∫_S f(t, s) Λ(ds),  (2.4)

where f is a suitable deterministic function and Λ(ds) is an independently scattered infinitely divisible measure on a suitable space S. We refer the reader to Theorems 4.11 and 5.2 in [80] for details regarding the stochastic integral form (2.4). An important example of such a random measure is that generated by the increments of an additive process {Z_t : t ∈ S}, where S is a possibly unbounded interval. Moreover, by imposing stationary increments, the infinitely divisible measure corresponds to a Lévy process, which is most relevant to our discussion. Stochastic integral processes driven by Lévy processes of infinite jump activity are of interest in Section 5.2. We exclude the case of finite jump activity from the discussion, as approximations are not required in that setting. We mention that stochastic integral processes of the form (2.4) where the integrator is not independently scattered are also of interest in the literature, such as in the case of a cluster compound Poisson random measure [92].

Examples of infinitely divisible laws without Gaussian components
We provide some examples of infinitely divisible laws, and equivalently, of Lévy processes. In particular, our definitions will be provided at the level of the Lévy measure, as this not only exemplifies the immense utility of the Lévy-Khintchine formula (2.2), but also sets up for their later use as demonstrations of shot noise representations in Section 3.
We begin by noting that a compound Poisson law is an infinitely divisible law without Gaussian components where the Lévy measure ν(dz) is finite. For example, if ν ≡ δ 1 , then we have the standard Poisson distribution. Of course, we obtain the compound Poisson process and standard Poisson process when we consider the Lévy-Khintchine triple in the context of Lévy processes. The compound Poisson distribution and process have found use in a variety of applications [49,77].
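The compound Poisson law is also the simplest to sample directly, by drawing the Poisson count and summing iid jumps. A minimal Python sketch (the rate and the choice of standard normal jumps are illustrative, not from the survey):

```python
import numpy as np

def compound_poisson(lam, jump_sampler, size, rng):
    """Sample X = sum_{k=1}^{N} Y_k with N ~ Poisson(lam) and iid jumps Y_k
    drawn by `jump_sampler` (an empty sum is zero)."""
    counts = rng.poisson(lam, size)
    return np.array([jump_sampler(n, rng).sum() for n in counts])

rng = np.random.default_rng(1)
# Illustrative choice: rate 2 with standard normal jumps, so E[X] = lam * E[Y] = 0
# and Var(X) = lam * E[Y^2] = 2.
x = compound_poisson(2.0, lambda n, r: r.standard_normal(n), 50_000, rng)
```

This direct scheme is exact for finite Lévy measures; the shot noise representations of Section 3 recover it as a special case and extend it to infinite Lévy measures.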
We now define the stable law according to its Lévy measure [95, Section 14]. A stable law with stability α ∈ (0, 2) and scale a > 0 is an infinitely divisible law without Gaussian components and with its Lévy measure given by

ν(B) = ∫_{S^{d−1}} ∫_0^{+∞} 1_B(rξ) (a / r^{α+1}) dr σ(dξ), B ∈ B(R^d_0),  (2.5)

where S^{d−1} is the unit sphere of R^d and σ(dξ) is a probability measure on S^{d−1}. The stable law, which contains and generalises the Cauchy (α = 1) and Gaussian (α → 2) laws, has received widespread attention in the literature due to its usefulness as a heavy-tailed distribution. Similarly, the corresponding stable process has also appeared in wide-ranging applications [77,98], to name a couple. See [94] for a comprehensive review of its properties. The simulation of stable processes has been thoroughly studied in the literature, for example, in [47,94]. It is important to note that the Lévy measure (2.5) is expressed in polar form, that is, the integrand a/r^{α+1} captures the expected number of jumps with magnitude r over any unit time interval, while σ(C) captures the proportion of jumps with directions in the set C ⊆ S^{d−1}.
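Direct samplers for one-dimensional stable laws, as studied in the simulation literature cited above, classically rely on the Chambers-Mallows-Stuck transformation. The following Python sketch implements its symmetric (skewless), unit-scale case; the stability index and sample size are illustrative choices:

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for the one-dimensional symmetric
    alpha-stable law (unit scale, no skew), valid for alpha in (0, 2)."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform angle
    W = rng.exponential(1.0, size)                # unit-rate exponential
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

rng = np.random.default_rng(2)
x = symmetric_stable(1.5, 200_000, rng)
```

For α = 1 the formula collapses to tan(V), recovering the Cauchy law, consistent with the remark above that the stable family contains the Cauchy case.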
The shot noise representation for the stable law is provided in Example 3.3 and error analysis of its truncation is given in Example 4.1.
As a heavy-tailed distribution, the stable law does not have second-order moments for α ∈ (0, 2), and no first-order moment for α ∈ (0, 1]. One can construct a law which preserves all moments and yet resembles many of the properties of the stable law by truncating the Lévy density [71]. Another elegant approach is through the exponential tempering [62,88] of the Lévy measure (2.5), which we summarise in the following. Suppose we apply a polar decomposition to the stable Lévy measure ν_S(dz) to obtain ν_S(dr, dξ) = h(dr, ξ)σ(dξ), where {h(·, ξ)}_{ξ∈S^{d−1}} is an appropriate family of Lévy measures defined on (0, +∞) and σ(dξ) remains a probability measure on S^{d−1}. Then, the infinitely divisible law without Gaussian components with Lévy measure ν(dr, dξ) = q(r, ξ)ν_S(dr, dξ) is called a tempered law, where r → q(r, ξ) is completely monotone and lim_{r→+∞} q(r, ξ) = 0 for every ξ ∈ S^{d−1}. By the complete monotonicity, the tempering function can be represented as q(r, ξ) = ∫_{0+}^{+∞} e^{−rs} Q(ds, ξ), where {Q(ds, ξ)}_{ξ∈S^{d−1}} is a family of finite Borel measures on (0, +∞). Tempering the stable Lévy measure (2.5) with the function q(r, ξ), we obtain the Lévy measure

ν(B) = ∫_{R^d_0} ∫_0^{+∞} 1_B(rv) r^{−α−1} e^{−r} dr ρ(dv), B ∈ B(R^d_0),  (2.6)

where the measure ρ(dv) satisfies

∫_{R^d_0} ‖v‖^α ρ(dv) < +∞.  (2.7)

We refer the reader to [88] for the proofs of the formulas (2.6) and (2.7). Thus, we call an infinitely divisible law without Gaussian components with Lévy measure of the form (2.6) a tempered stable law, where α ∈ (0, 2) is the stability parameter and the measure ρ(dv) satisfies (2.7). A fascinating attribute of the tempered stable process is that it behaves like a stable process in short time and a Brownian motion in long time. The tempered stable process has found applications in financial modelling [20,66] and statistical mechanics [17,19].
Shot noise representations of the tempered stable law are presented in Example 3.7 and comparison of truncation errors for the representations are provided in Example 4.3.
In the one-dimensional setting, the tempered stable law is related to the CGMY law, which is associated with the CGMY process introduced in [20] as a model for asset returns, governed by the Lévy measure

ν(dz) = C ( e^{−G|z|} |z|^{−1−Y} 1(z < 0) + e^{−Mz} z^{−1−Y} 1(z > 0) ) dz,  (2.8)

where C > 0, G ≥ 0, M ≥ 0 and Y < 2. Clearly, the CGMY Lévy measure (2.8) with G, M > 0 and Y ∈ (0, 2) can be expressed in terms of the tempered stable Lévy measure (2.6). We mention here that shot noise representations for the tempered stable law (Example 3.7) do not cover CGMY laws with Y ≤ 0. An important case of the CGMY law is when Y = 0. A gamma law with shape a > 0 and scale β > 0 is an infinitely divisible law without Gaussian components and with the Lévy measure

ν(dz) = a z^{−1} e^{−z/β} dz,  (2.9)

defined on (0, +∞). The gamma process has seen applications in various areas including degradation modelling [49] and statistical mechanics [17]. We emphasize the distinctness of properties between the gamma and tempered stable laws, despite both being cases of the CGMY law with finite moments of all polynomial orders. For example, we will later see immense differences in their shot noise representation (Examples 3.8 and 3.7), and the inapplicability of Gaussian approximation for the truncation error in the case of the gamma law (Section 4.2).
Following a similar idea of tempering the stable Lévy measure (2.5), another direction of generalising the stable law is through the layered stable law [42]. An infinitely divisible law without Gaussian components is a layered stable law if its Lévy measure satisfies

ν(B) = ∫_{S^{d−1}} ∫_0^{+∞} 1_B(rξ) q(r, ξ) dr σ(dξ), B ∈ B(R^d_0),

where σ(dξ) is a probability measure on the unit sphere S^{d−1} of R^d, and q : (0, +∞) × S^{d−1} → (0, +∞) is a locally integrable function such that for almost every ξ ∈ S^{d−1},

q(r, ξ) ∼ c_1(ξ) r^{−α−1} as r → 0+, and q(r, ξ) ∼ c_2(ξ) r^{−β−1} as r → +∞,

where c_1 and c_2 are σ-integrable positive functions on S^{d−1}, and where (α, β) ∈ (0, 2) × (0, +∞) are the inner and outer stability indices, respectively. Similarly to the tempered stable process, the layered stable process exhibits transient behaviour across different time scales. Specifically, it behaves as an α-stable process in short time, and as a β-stable process in long time. When β > 2, the long time behaviour resembles a Brownian motion. Shot noise representations of the layered stable law are presented in Example 3.9.

Shot noise representation of infinitely divisible laws
We work towards shot noise series representations of infinitely divisible laws without Gaussian components. That is, we seek to represent such laws using series with summands dependent on Poisson arrival times and random shot markings. We build up our understanding of series representations from the least general case to the most, with the most general formulation being the generalised shot noise method [85] (Theorem 3.4). We motivate series representations of infinitely divisible laws via the Lévy-Itô decomposition [81,87], and demonstrate that such representations are possible precisely because the laws can be summarised entirely by the jumps of the corresponding Lévy process.

The case of finite Lévy measure
As a motivating example, we begin by offering a simple shot noise representation for the infinitely divisible random variable characterised by a finite and absolutely continuous Lévy measure with bounded positive support [31]. Recall that the case of a finite Lévy measure corresponds to a compound Poisson distribution. Such a random variable can be thought of as the position of the corresponding compound Poisson process at unit time. Let {Γ_k}_{k∈N} be the arrival times of a standard Poisson process. Consider a nonnegative intensity function h such that ∫_0^T h(s) ds < +∞ for a fixed truncation time T > 0, and define

H_I(t) := inf{u ∈ [0, T] : ∫_0^u h(s) ds ≥ t}

as the generalised inverse of the corresponding mean value function. Then, the random variable

X := ∑_{k=1}^{+∞} H_I(Γ_k) 1(Γ_k ≤ ∫_0^T h(s) ds)  (3.1)

is well-defined and infinitely divisible with Lévy measure h(z)dz defined on (0, T]; the indicator ensures that only the almost surely finitely many arrival times not exceeding the total mass ∫_0^T h(s) ds contribute. This result can be verified by checking the characteristic function of the series, which is easily computed by conditioning on the number of jumps of the standard Poisson process over [0, ∫_0^T h(s) ds]. With the shot noise representation (3.1), we have represented every univariate infinitely divisible law with a finite and absolutely continuous Lévy measure defined on (0, T] as a series based on Poisson arrival times. Specifically, if X is an infinitely divisible random variable in R without Gaussian components and with a finite Lévy measure ν(dz) = h(z) dz whose density has support over a positive bounded interval (0, T], then X is equal in distribution to the series (3.1). So indeed, by setting H_I as the generalised inverse of the cumulative integral of the Lévy measure ν(dz) and interpreting the Poisson arrival times as random steps over the domain of H_I, a transformed state space, rather than over time, we obtain a shot noise representation of the infinitely divisible law without Gaussian components. This idea naturally leads to the more general inverse Lévy measure method discussed later in Section 3.3.
The subscript for the kernel H I is used to distinguish between the kernel of the inverse Lévy measure method and that of the generalised shot noise method in Section 3.4.
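As a sanity check of the representation (3.1), consider the constant Lévy density h(s) = c on (0, T], for which the mean value function m(u) = cu and its generalised inverse H_I(t) = t/c are available in closed form. The following Python sketch is illustrative (the function name and parameter values are ours, not from the survey):

```python
import numpy as np

# Shot noise sampling of the compound Poisson law with Lévy density h(z) = c
# on (0, T]. The mean value function m(u) = c*u has generalised inverse
# H_I(t) = t/c, and only arrivals with Gamma_k <= m(T) = c*T contribute.
def sample_via_shot_noise(c, T, size, rng):
    total_mass = c * T
    out = np.empty(size)
    for i in range(size):
        x = 0.0
        gamma = rng.exponential(1.0)        # first standard Poisson arrival time
        while gamma <= total_mass:
            x += gamma / c                  # H_I(Gamma_k)
            gamma += rng.exponential(1.0)   # next arrival time
        out[i] = x
    return out

rng = np.random.default_rng(3)
x = sample_via_shot_noise(c=2.0, T=1.0, size=50_000, rng=rng)
# Target law: compound Poisson with rate c*T = 2 and Uniform(0, T] jumps,
# so E[X] = c*T^2/2 = 1 and Var(X) = c*T^3/3 = 2/3.
```

Conditioned on the number of arrivals below cT, the summands Γ_k/c are uniform over (0, T], which recovers exactly the compound Poisson structure described above.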

Lévy-Itô decomposition
We would like to introduce the Lévy-Itô decomposition as a means to work towards a general approach for shot noise series representation, which extends to infinitely divisible random vectors with infinite Lévy measures. Recall that we can interpret infinitely divisible random variates as the unit-time increment X 1 − X 0 of its corresponding Lévy process. This interpretation will prove to be fruitful for more general shot noise representations for infinitely divisible laws through the Lévy-Itô decomposition for Lévy processes, as first demonstrated by Resnick [81].
For a fixed T > 0, consider a Lévy process {X_t : t ∈ [0, T]} in R^d without Gaussian components and with a general Lévy measure ν(dz) satisfying (2.1). Let us define X_{0−} := X_0 and a measure µ(dz, ds) on R^d_0 × [0, +∞) such that

µ(B, J) := #{s ∈ J : X_s − X_{s−} ∈ B}, B ∈ B(R^d_0), J ∈ B([0, +∞)).

That is, µ(B, J) is a random variable that counts the number of jumps with sizes in B during the time interval J. Define S_{t,n} := {s ∈ [0, t] : ‖X_s − X_{s−}‖ > 1/n} to be the set of points at which jumps with sizes greater than 1/n occur in [0, t]. Since X is almost surely finite on [0, t], we must have that |S_{t,n}| < +∞ almost surely, so the set of all jump times S_t := ∪_{n∈N} S_{t,n} over [0, t] must be almost surely at most countable. Thus, we can express

µ(B, [0, t]) = ∑_{s∈S_t} 1_B(X_s − X_{s−})

for every B ∈ B(R^d_0). Clearly, {µ(·, [0, t])}_{t≥0} forms a family of random counting measures. We state a couple of results from [2]. Firstly, if {N_t : t ≥ 0} is a Lévy process that is nondecreasing and N_t − N_{t−} takes values in {0, 1} for every t ≥ 0, then it is a Poisson process. This naturally leads to the following result. Let B ∈ B(R^d_0) be such that ν(B) < +∞. Then, {µ(B, [0, t]) : t ≥ 0} is a Poisson process with intensity ν(B). Additionally, we can include the case of infinite Lévy measures by replacing the assumption ν(B) < +∞ with the requirement that B does not include zero in its closure, as this avoids the accumulation of infinitely many small jumps. The upshot is that µ(dz, ds) is a Poisson random measure on R^d_0 × [0, +∞) with intensity measure (ν × m)(dz, ds), where m(ds) is the Lebesgue measure on [0, +∞). Note that this verifies the interpretation of ν(B) as the expected number of jumps with sizes in B ∈ B(R^d_0) over the unit interval, as

E[µ(B, [0, 1])] = (ν × m)(B × [0, 1]) = ν(B).

Moreover, the stated result on counting jump discontinuities shows that every Lévy process admits a Poisson random measure on R^d_0 × [0, +∞) with intensity measure (ν × m)(dz, ds). With this understanding of the connection between Poisson random measures and Lévy processes, it is reasonable now to present the following case of the Lévy-Itô decomposition.
Theorem 3.1 (Lévy-Itô decomposition). Let {X_t : t ≥ 0} be a Lévy process in R^d without Gaussian components and with Lévy-Khintchine triple (a, 0, ν). Where µ(dz, ds) is the Poisson random measure on R^d_0 × [0, +∞) with intensity measure (ν × m)(dz, ds) associated with X, it holds that

X_t = ta + ∫_{0<‖z‖≤1} ∫_0^t z (µ(dz, ds) − ν(dz)ds) + ∫_{‖z‖>1} ∫_0^t z µ(dz, ds).  (3.2)

We refer the reader to [95, Chapter 4] for details. One must take care when working with the double integral term in the Lévy-Itô decomposition; the linearity of the integral cannot be applied if the integrand z is not integrable with respect to the relevant measures. This highlights the importance of the compensation term ν(dz)ds. In the case that z is integrable, linearity of the integral can be applied to (3.2) to obtain

X_t = t ( a − ∫_{0<‖z‖≤1} z ν(dz) ) + ∫_{R^d_0} ∫_0^t z µ(dz, ds).

In the case where z is not integrable, define

X_t^{(n)} := ta + ∫_{1/n<‖z‖≤1} ∫_0^t z (µ(dz, ds) − ν(dz)ds) + ∫_{‖z‖>1} ∫_0^t z µ(dz, ds), n ∈ N.

Then, we are guaranteed that z is integrable for every n ∈ N and hence we can apply linearity of the integral to obtain

X_t^{(n)} = ta − t ∫_{1/n<‖z‖≤1} z ν(dz) + ∫_{‖z‖>1/n} ∫_0^t z µ(dz, ds).

It is clear that X_t^{(n)} → X_t pointwise as n → +∞. Thus, we can derive a series representation for Lévy processes by finding an appropriate representation of µ(dz, ds) as a random sum, which is possible in principle as it is a random counting measure. For the moment, suppose that µ(dz, ds) = ∑_{k=1}^{+∞} δ_{(J_k, T_k)}(dz, ds) almost surely, where {J_k}_{k∈N} and {T_k}_{k∈N} are suitable independent sequences of random vectors in R^d and [0, T], respectively. Then, we have that

X_t^{(n)} = ta − t ∫_{1/n<‖z‖≤1} z ν(dz) + ∑_{k=1}^{+∞} J_k 1(‖J_k‖ > 1/n, T_k ≤ t).

Finally, by letting n → +∞, we obtain the shot noise series representation [87, Section 1]

{X_t : t ∈ [0, T]} =_L { ta + ∑_{k=1}^{+∞} ( J_k 1(T_k ≤ t) − (t/T) c_k ) : t ∈ [0, T] },  (3.3)

where {c_k}_{k∈N} is a sequence of suitable compensation vectors in R^d (sometimes referred to as centres if the expectation of each term is used) which guarantee the convergence of the series. In the case where the Lévy process is a subordinator, convergence without compensation vectors is ensured, as its Lévy measure has finite first moment about the origin. Where X_1 is our infinitely divisible random vector of interest (taking T = 1 and a = 0), its shot noise representation is given by ∑_{k=1}^{+∞} (J_k − c_k) in law.
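In practice, a truncated version of such a series is evaluated on a time grid: given finitely many jump pairs (J_k, T_k) from any representation of µ(dz, ds), the approximate path t ↦ ∑_k J_k 1(T_k ≤ t) can be assembled directly. The Python helper below is an illustrative sketch of this bookkeeping step (names and the compound Poisson example are ours, not from the survey); it omits the compensation terms, as for a subordinator:

```python
import numpy as np

def path_from_jumps(jumps, jump_times, t_grid):
    """Evaluate the truncated series X_t = sum_k J_k * 1{T_k <= t} on a time
    grid, given jump sizes J_k and jump times T_k (no compensation terms,
    as for a subordinator)."""
    order = np.argsort(jump_times)
    times, sizes = jump_times[order], jumps[order]
    cum = np.concatenate(([0.0], np.cumsum(sizes)))
    # number of jumps that have occurred by each grid time (càdlàg evaluation)
    idx = np.searchsorted(times, t_grid, side="right")
    return cum[idx]

# Illustrative use with compound Poisson jumps on [0, 1]:
rng = np.random.default_rng(5)
n = rng.poisson(10.0)
jumps = rng.exponential(1.0, n)
jump_times = rng.uniform(0.0, 1.0, n)
t_grid = np.linspace(0.0, 1.0, 11)
path = path_from_jumps(jumps, jump_times, t_grid)
```

The `side="right"` convention makes the evaluated path right-continuous, matching the càdlàg property of Lévy processes.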
As demonstrated, the crux of this series representation based upon the Lévy-Itô decomposition is the representation of the Poisson random measure associated with a Lévy process as a random sum of Dirac delta measures. That is to say, this approach has reduced the problem of deriving the shot noise representation of Lévy processes in general settings to finding expressions of µ(dz, ds) as random sums. We explore one such method in the following.

Inverse Lévy measure method
We would like to generalise the shot noise representation in Section 3.1 to infinitely divisible random variables with infinite Lévy measures whose support may extend to the negative real numbers. In Section 3.1, we assumed ∫ 0 T h(s) ds < +∞ so that the generalised inverse of the mean value function H I is well-defined. However, a simple extension to infinite Lévy measures is to instead define the kernel H I (r) := inf{u ∈ (0, +∞) : ν((u, +∞)) < r}, which runs down from infinity instead. This kernel is well-defined even in the case of infinite Lévy measures since, by definition, the tail of the Lévy measure is finite. Let {X t : t ∈ [0, 1]} be a Lévy process in (0, +∞) without Gaussian components with Lévy measure ν(dz). Unlike the setting of Section 3.1, we do not assume the Lévy measure ν(dz) to admit a density on (0, +∞). As mentioned in the previous section, our goal is to represent the Poisson random measure µ(dz, ds) with intensity measure ν × m associated with the Lévy process X as a random sum. We state the following result from [87, Proposition 2.1], which will shortly prove essential.
If, in addition, the Poisson random measure N(dz) is defined on a probability space which admits the existence of a standard uniform random variable independent of N(dz), and there exists a sequence of random elements {Y k } k∈N on S such that then there exists a sequence of random elements {X k } k∈N defined on a common probability space as N(dz) that is identical in law to {Y k } k∈N and With this, we will now present the inverse Lévy measure method for computing a shot noise representation in the one-dimensional setting. Let {Γ k } k∈N be a sequence of standard Poisson arrival times. Then, its corresponding Poisson random measure with intensity measure m(dz) can be expressed by ∑ +∞ k=1 δ Γ k (dz). Define the marked Poisson random measure Substituting this representation for µ(dz, ds) into the Lévy-Itô decomposition of X t and following the derivation of (3.3), we obtain a shot noise representation for X via the inverse Lévy measure method as [31,87] where {c k } k∈N is a sequence of suitable centres. Where the unit-time marginal X 1 is the infinitely divisible random variable of our interest, its shot noise representation is given by ∑ +∞ k=1 (H I (Γ k ) − c k ) in law. This is almost identical to the series (3.1) except for the appearance of compensating constants, which should be expected given the possibility of heavy intensity of the Lévy measure about the origin.
We wish to extend the inverse Lévy measure method to the multidimensional setting with jumps in any direction. We will loosely present LePage's approach [69]. Let {X t : t ∈ [0, 1]} now be a Lévy process in R d without Gaussian components with Lévy measure ν(dz). We seek a representation of the associated Poisson random measure µ(dz, ds) on R d 0 × [0, 1] with intensity measure ν × m as a series like (3.4). Consider a radial disintegration of ν(dz) in which σ (dξ ) is some probability measure on the unit sphere S d−1 of R d and {h(·, ξ )} ξ ∈S d−1 is a measurable family of Lévy measures on (0, +∞). The idea behind this disintegration is to decompose the intensities of jumps by magnitude and direction; specifically, the intensity of jumps of magnitude r > 0 in the direction ξ ∈ S d−1 is represented by h(dr, ξ ), while the proportion of all jumps with directions in the set C ⊆ S d−1 is given by σ (C). We define H I (·, ξ ), the generalised inverse of the tail h((u, +∞), ξ ) in the direction ξ , accordingly. Let {U k } k∈N be a sequence of iid random vectors with law σ (dξ ) on S d−1 , independent of {Γ k } k∈N and {T k } k∈N . Similarly to before, we define the marked Poisson random measure and substitute the resulting expression for µ(dz, ds) into the Lévy-Itô decomposition. Where the unit-time marginal X 1 is our infinitely divisible random vector of interest, its shot noise representation follows. By comparing the shot noise representations of infinitely divisible laws and their corresponding Lévy processes, we see a correspondence in which the series pertaining to the latter is merely the former but with uniform scattering of summands. Intuitively, the uniform scattering is necessary to preserve the stationarity of increments. An alternative derivation of the inverse Lévy measure method in the multidimensional case can be obtained via the generalised shot noise method of Section 3.4 in the following.
We provide the shot noise representation of the stable law obtained via the inverse Lévy measure method [70], which is crucial from a theoretical perspective [26,47,93,94] as well as for practical use [53,54,68], to name a few examples. Example 3.3 (Inverse Lévy measure method for stable random vector). Let X be a stable law with Lévy measure (2.5). Then, by the inverse Lévy measure method, it holds that where {U k } k∈N is a sequence of iid random vectors with distribution σ (dξ ) and {c k } k∈N is a sequence of suitable centres given in Theorem
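For illustration, the following is a minimal numerical sketch of the one-dimensional symmetric special case (an assumption beyond the multivariate statement above): with σ placing mass 1/2 on each of ±1 and a = α, the series reduces, up to a scale constant, to ∑ k U k Γ k −1/α with U k = ±1, and no centres are needed by symmetry. The truncation level and parameter values below are arbitrary choices for demonstration.

```python
import numpy as np

def symmetric_stable_shot_noise(alpha, n_trunc, n_samples, rng):
    """Sample symmetric alpha-stable variates (up to a scale constant) by
    truncating the LePage series  sum_k U_k * Gamma_k**(-1/alpha),
    where Gamma_k are standard Poisson arrival times and U_k = +/-1.
    Summands with Gamma_k > n_trunc are discarded (Poisson truncation)."""
    out = np.empty(n_samples)
    for i in range(n_samples):
        arrivals = []
        t = rng.exponential()
        while t <= n_trunc:          # arrival times of a unit-rate Poisson process
            arrivals.append(t)
            t += rng.exponential()
        g = np.asarray(arrivals)
        signs = rng.choice((-1.0, 1.0), size=g.size)
        out[i] = float(np.sum(signs * g ** (-1.0 / alpha)))
    return out

rng = np.random.default_rng(0)
x = symmetric_stable_shot_noise(alpha=0.7, n_trunc=50.0, n_samples=2000, rng=rng)
```

The heavy tails of the stable law are visible in the extreme values of the sample, while symmetry keeps the sample median near zero.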

Generalised shot noise method
We will now present the generalised shot noise method [85,87], which generalises the inverse Lévy measure method of Section 3.3.
Theorem 3.4 (Generalised shot noise method). Suppose a Lévy measure ν(dz) on R d 0 can be decomposed as

ν(B) = ∫ 0 +∞ P(H(r, U) ∈ B) dr, B ∈ B(R d 0 ), (3.9)

where U is a random vector in some space U and H : (0, +∞) × U → R d 0 is such that, for every u ∈ U , r → ‖H(r, u)‖ is nonincreasing. Then, the following statements hold.
(i) It holds that

X L = ∑ k=1 +∞ (H(Γ k , U k ) − c k ), (3.10)

where X is an infinitely divisible random vector without Gaussian components with the Lévy measure ν(dz), {Γ k } k∈N are the arrival times of the standard Poisson process, {U k } k∈N are iid copies of U independent of {Γ k } k∈N , and {c k } k∈N is a sequence of suitable centres in R d ; an explicit choice of the centres is given in [87]. If, additionally, the Lévy measure ν(dz) on R d 0 satisfies ∫ ‖z‖>1 ‖z‖ ν(dz) < +∞, then the centres can instead be chosen so that, with a suitable shift a ∈ R d , the resulting sum is an infinitely divisible random vector characterised by the Lévy-Khintchine triplet (a, 0, ν).
A rigorous proof of the generalised shot noise method can be found in [87], along with the almost sure convergence of the series (3.10). Moreover, there exist independent random sequences {Γ k } k∈N and {U k } k∈N such that the equality (3.10) holds almost surely. This result not only generalises the inverse Lévy measure method, but as the decomposition (3.9) of the Lévy measure is not unique, the generalised shot noise method can be used to derive distinct shot noise representations of the same infinitely divisible random vector. In particular, Theorem 3.4 can be used to derive the inverse Lévy measure, rejection, thinning and Bondesson's methods for shot noise representation of an infinitely divisible random vector, as follows.
Proposition 3.5 (Inverse Lévy measure, rejection, thinning and Bondesson's methods). Let ν(dz) be a Lévy measure on R d 0 admitting a polar decomposition in which σ (dξ ) is a probability measure on the unit sphere S d−1 of R d and {h(·, ξ )} ξ ∈S d−1 is a measurable family of Lévy measures on (0, +∞). Let {Γ k } k∈N be a sequence of standard Poisson arrival times, and {U k } k∈N a sequence of iid random vectors under σ (dξ ) independent of {Γ k } k∈N . Then, an infinitely divisible random vector X without Gaussian components with the Lévy measure ν(dz) has the following representations.
Thus, we see that the generalised shot noise method allows us to choose from different shot noise representations of the same infinitely divisible law due to the nonuniqueness of the decomposition (3.9) of the Lévy measure. We remark that alternative series representations of Lévy processes exist. One such example is via the Karhunen-Loève expansion [38], leading to a Fourier-like series with infinitely divisible coefficients, which can be approximated via the truncation of shot noise representation (Section 4).

Examples
Armed with several different shot noise representation methods, we provide examples of shot noise representations of infinitely divisible laws. We begin by revisiting Example 3.3 of the stable law.
Example 3.6 (Shot noise representations for the stable law). The shot noise representation (3.8) can also be obtained via Bondesson's method, for instance, with G ≡ δ {+1} and h(r, ξ ) = a/(αr α ), as well as via the rejection method with the trivial choice h p (·) ≡ h(·). While the thinning method can be applied, with F(dr, ξ ) = e −r dr for example, it is significantly eclipsed by the representation (3.8) in terms of elegance and practicality.
Next, in contrast to this lack of choice regarding shot noise representations for the stable law, we consider several shot noise representations for the tempered stable law.
Example 3.7 (Shot noise representations of tempered stable law). We first consider shot noise representations of the tempered stable law based on the thinning, rejection and inverse Lévy measure methods which we have seen previously in Proposition 3.5 [45]. For every α ∈ (0, 2) and r > 0, define the relevant kernels as in [45], let there be given a sequence of independent exponential random variables with rate λ /V k and a sequence {W k } k∈N of independent gamma random variables with shape λ 1 and scale V k /λ 2 , and define the series (3.18)-(3.21), where {c j,k } k∈N , j = 1, 2, 3, 4, are sequences of suitable centres in R d . Then, for j = 1, 2, 3, 4, the series X j converges almost surely and is equal in law to the tempered stable law with parameters α and ρ(dv). The first two representations are derived from the thinning method, and the latter two are derived from the rejection and inverse Lévy measure methods, respectively.
Yet another shot noise representation for the tempered stable law is Rosiński's representation [88], which can be verified via the generalised shot noise method (Theorem 3.4) but falls outside of the methods described in Proposition 3.5. Let {W k } k∈N , {U k } k∈N and {V k } k∈N be mutually independent sequences of iid standard exponential random variables, standard uniform random variables and random vectors in R d 0 with distribution v α ρ(dv)/m α,ρ , respectively. Then, with {Γ k } k∈N a sequence of standard Poisson arrival times and k 0 and z 0 suitable constants depending only on α and ρ, the series (3.22) converges almost surely and is equal in law to the tempered stable law with parameters α and ρ(dv). Along with the aforementioned representations (3.18)-(3.21), the shot noise representation (3.22) shares the advantages of being explicit and exact.
Note that the shot noise representations for the tempered stable law in Example 3.7 do not cover the case of the CGMY law with Y ≤ 0. With Y < 0, the CGMY law becomes compound Poisson. For the case Y = 0, which leads to the gamma law, we present its shot noise representations [87, Section 6] as follows.
Example 3.8 (Shot noise representations of gamma law). Let X be a gamma law with Lévy measure (2.9), and let {Γ k } k∈N be standard Poisson arrival times. Then, it holds that
(i) by the inverse Lévy measure method, where E 1 (x) := ∫ x +∞ u −1 e −u du denotes the exponential integral function and E −1 1 its inverse;
(ii) by the rejection method with h(r) = ae −β r /r and h p (r) = a/(r(1 + β r)), where {V k } k∈N is a sequence of iid standard uniform random variables;
(iii) by the thinning method with F(dr) = β e −β r dr, where {V k } k∈N is a sequence of iid standard exponential random variables;
(iv) by Bondesson's method with G(du) = β e −β u du and g(r) = e −r/a , where {V k } k∈N is a sequence of iid standard exponential random variables.
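As an illustration of Bondesson's method for the gamma law, the following is a minimal sketch assuming the standard form of Bondesson's series for shape a and rate β, namely X = ∑ k V k e −Γ k /a with V k iid exponential of rate β; the truncation level is an arbitrary choice, and the mass neglected by truncation has expectation (a/β)e −n/a.

```python
import numpy as np

def gamma_bondesson(a, beta, n_trunc, n_samples, rng):
    """Sample Gamma(shape=a, rate=beta) variates by truncating Bondesson's
    series  X = sum_k V_k * exp(-Gamma_k / a),  with V_k iid Exp(rate beta)
    and Gamma_k standard Poisson arrival times.  Arrivals beyond n_trunc
    are discarded; the neglected mass has mean (a/beta)*exp(-n_trunc/a)."""
    out = np.empty(n_samples)
    for i in range(n_samples):
        s = 0.0
        t = rng.exponential()
        while t <= n_trunc:
            s += rng.exponential(1.0 / beta) * np.exp(-t / a)
            t += rng.exponential()
        out[i] = s
    return out

rng = np.random.default_rng(1)
x = gamma_bondesson(a=2.0, beta=1.0, n_trunc=40.0, n_samples=5000, rng=rng)
```

A quick sanity check: the sample mean and variance should be close to a/β = 2 and a/β² = 2, respectively.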
Of the shot noise representations of the gamma law above, the easiest series to work with is perhaps the one associated with Bondesson's method (3.26), first appearing in [12]. By contrast, the most difficult series from an implementation point of view is the one resulting from the inverse Lévy measure method. Comparing this to the case of the stable law (Example 3.3), where we saw that the inverse Lévy measure method yields the most convenient representation, it is clear that there is an advantage to having a variety of shot noise representation methods such as those in Theorem 3.4 and Proposition 3.5. We mention that the inverse Lévy measure method (3.23) is employed in [63] to describe shot noise representations for the variance gamma law and process.

Next, we present shot noise representations of a layered stable law [42]. Example 3.9 (Shot noise representations of a layered stable law). Let X be a layered stable law with the Lévy measure (2.10). Let {Γ k } k∈N be standard Poisson arrival times independent of a sequence {V k } k∈N of iid random vectors distributed under σ (dξ ). Denote z 0 := E[V 1 ] and let {b k } k∈N be a sequence of suitable centring constants depending only on β . Then, it holds that
(i) by the inverse Lévy measure method;
(ii) assuming α < β , by the rejection method, where {U k } k∈N is a sequence of iid standard uniform random variables independent of the other random sequences;
(iii) assuming α < β , by the rejection method, where {U k } k∈N is a sequence of iid standard uniform random variables independent of the other random sequences.
These examples of shot noise representations are only a handful of the applications of shot noise methods in the literature, and the derivation of shot noise representations and their usage in sampling and simulation are still ongoing topics of research.
We remark that the shot noise series is, in many cases, the only known representation of an infinitely divisible law, with the multivariate stable law and its generalisations as such examples. In the following, we discuss a truncation scheme for sampling via shot noise representations and the associated error analysis.

Truncation of shot noise representations
We have seen in Section 3 that for shot noise representations, the summand H(Γ k ,U k ) in (3.10) corresponds to jumps associated with the Lévy measure. In particular, as H(·, ξ ) is nonincreasing in norm, the summands are expressed in descending order of jump magnitude. Naturally, this implies that the first finitely many (large) jumps account for significantly more variation of the infinitely divisible random vector than the remaining smaller jumps [44]. With the validated notion that the shot noise series can be reasonably approximated by its partial sums, the shot noise representations established previously provide us with a powerful method for approximating infinitely divisible laws for sampling. Namely, we do so by truncating the series representation to a finite sum. Simulation via shot noise scales well to higher dimensional settings (for example, see [96]). Simulation via a finite truncation approach in the case of infinite jump intensity is typical for shot noise processes [73,74]. In what follows, we describe a particular truncation scheme in more detail and provide examples and error analysis.
Let X be an infinitely divisible random vector without Gaussian components and with an infinite Lévy measure ν(dz). To provide intuition for approximations via shot noise representation, we describe two finite truncation schemes with the inverse Lévy measure method. Suppose a shot noise representation for X is given with the random sequences as in Proposition 3.5 (i). While we have assumed a scenario in which the compensation constants are not required for simplicity, it should be noted that their presence does not lead to any substantial difference in the analysis. Immediately, one may approximate X by only including the summands corresponding to the index set {1, 2, · · · , n} for a fixed truncation parameter n ∈ N rather than the entire infinite series. This is simple to implement, and it is clear that greater accuracy can be obtained by increasing n. However, there are two important aspects of this deterministic truncation scheme to consider. Firstly, by fixing n, we are conditioning on the number of jumps of the approximation, which may be undesirable for certain applied contexts in which the number of jumps is required to remain random. Secondly, while truncating the series discards all jumps below some magnitude, for the direction ξ ∈ S d−1 this threshold magnitude is given by H I (Γ n , ξ ), which is random.
An alternative truncation scheme is to instead perform the summation with respect to the random index set {k ∈ N : Γ k ≤ n}, where n > 0 is the truncation parameter. We refer to this framework as the Poisson truncation approximation, which differs from the deterministic truncation scheme described previously by allowing the index set to depend on the underlying Poisson arrival times. In this way, we no longer condition on the number of jumps; rather, for each direction ξ ∈ S d−1 , we include all jumps with magnitudes greater than or equal to the deterministic threshold H I (n, ξ ). This fixed threshold immediately gives an indication of the error. Equivalently, in every direction ξ ∈ S d−1 , this truncation method exactly simulates the tail of the Lévy measure ν(dz) over (H I (n, ξ ), +∞).
In the case that the shot noise representation does not correspond to the inverse Lévy measure method, the shape of the domain simulated via Poisson truncation may not be as simple, but the simulated region remains deterministic with finite measure and thus can still provide an indication of the error. We see that the Poisson truncation approximation of an infinitely divisible random vector is in essence an approximation by a compound Poisson random vector. In the form of the generalised shot noise method of Theorem 3.4, the partial Lévy measure described via the Poisson truncation {k ∈ N : Γ k ≤ n}, say ν n (dz), is given by ν n (B) = ∫ 0 n P(H(r, U) ∈ B) dr for n > 0. Hence, the total mass that the Poisson truncation describes is ν n (R d 0 ) = n. Hereafter, we focus on the setting of the Poisson truncation method and reserve the notation ν n for such a truncated Lévy measure via the Poisson truncation approximation.
Note that the average number of summands under Poisson truncation is n, since the Γ k 's in the index set {k ∈ N : Γ k ≤ n} correspond to the arrival times of the standard Poisson process on [0, n]. However, this may merely be an upper bound for the average number of jumps since, for example, some summands may evaluate to zero in the case of the rejection and thinning methods (see (3.14) and (3.15), respectively). Consider the case where the infinitely divisible random vector without Gaussian components has a finite Lévy measure ν(dz), that is, it is a compound Poisson random vector. As the total jump intensity ν(R d 0 ) < +∞ is finite, the shot noise representation must almost surely have a finite number of nonzero terms. So in this case, there is no need to artificially truncate the series representation (3.10); the full sum (4.1) can be simulated exactly, where N is a Poisson random variable with rate ν(R d 0 ) and {V (k) } k∈{1,··· ,N} is a sequence of order statistics of N iid uniform random variables on (0, ν(R d 0 )). The representation (4.1) provides an alternative random sum representation of compound Poisson random vectors, compared to the familiar iid sum X = ∑ k=1 N Y k . While the latter representation presents the jump structure as iid random vectors, the shot noise representation (4.1) decomposes the jumps in decreasing order of contribution to variation. For the sampling of the compound Poisson law, the random sum (4.1) may be more advantageous to implement if sampling the random sequence {Y k } k∈N is not computationally convenient, which is often the case in multidimensional settings.
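The claim that the number of retained indices under Poisson truncation is Poisson with mean n can be checked directly; a small sketch (the values of n and the number of repetitions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20.0        # Poisson truncation parameter
reps = 4000

counts = np.empty(reps)
for i in range(reps):
    # count arrivals of a unit-rate Poisson process in (0, n]
    c, t = 0, rng.exponential()
    while t <= n:
        c += 1
        t += rng.exponential()
    counts[i] = c
# the count is Poisson(n), so its mean and variance are both n
```

Both the empirical mean and the empirical variance of the counts should be close to n.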
One should also note that by the memoryless property of the exponential distribution, the two sets of Poisson arrival times {Γ k : Γ k ≤ n, k ∈ N} and {Γ k : Γ k ∈ (n, m], k ∈ N} are independent for every n < m. Consequently, we can consider the Lévy measures (ν − ν n )(dz) and (ν n − ν m )(dz) as corresponding to independent components of the Lévy process with Lévy measure ν(dz). This fact may be useful for incrementally simulating a Lévy process based on the domain of its Lévy measure, as well as for the analysis of error (Section 5.2).
As an example of error analysis, we consider the mean-squared error associated with the Poisson truncation of shot noise representations. Denote by ν(dz) and µ(dz, ds) the Lévy measure and Poisson random measure associated with the infinitely divisible random vector of interest, and by ν n (dz) and µ n (dz, ds) those of its Poisson truncation approximation X n. Assume the Lévy measure ν(dz) is isotropic. The truncation error for the shot noise representation (3.10) is expressed as the tail series of discarded summands. By Theorem 3.1 and the Itô-Wiener isometry, the mean-squared error is given by

E[ ‖X − X n ‖ 2 ] = ∫ R d 0 ‖z‖ 2 (ν − ν n )(dz) (4.2)

for every n > 0. For the case where the Lévy measure ν(dz) is not isotropic, the application of the Itô-Wiener isometry in (4.2) can still be invoked in the case of the inverse Lévy measure method for n sufficiently large such that the support of (ν − ν n )(dz) is contained in the unit ball. As an example, we present the evaluation of the mean-squared truncation error (4.2) for the isotropic stable law in the following.
Example 4.1 (Mean-squared error for the stable law). For the isotropic stable law with Lévy measure (2.5), the mean-squared error (4.2) of the Poisson truncation approximation is given by

(a/(2 − α)) (αn/a) (α−2)/α (4.3)

for every n > 0. As mentioned previously, the mean-squared error (4.3) still holds if σ (dξ ) is not isotropic for sufficiently large n > 0 satisfying (αn/a) −1/α < 1. We see that in the case of the stable random vector, mean-squared convergence of the Poisson truncation approximation is very fast for α close to zero, so truncating the series (3.8) to a relatively small number of terms provides a good approximation. However, for α close to two, convergence is much slower and thus, to achieve a given level of accuracy, significantly more summands must be computed. This is intuitive; increasing α leads to thinner tails and hence the magnitudes of the largest jumps decrease. Consequently, the variation explained by the largest jumps becomes diluted. In the univariate case, it is suggested in [12] that the approximation can be improved by the inclusion of a normal random variable with variance given by (4.3). This idea is the basis for the Gaussian approximation of the truncation error (Section 4.2), which can be extended to the multivariate setting. Alternatively, an investigation of the truncation error for the representation of stable laws can be found in [8,9] in terms of optimal bounds for the variation.
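The isometry-based error analysis can be checked numerically in a simple one-dimensional symmetric case (an illustrative assumption, not the general statement): for the series ∑ k U k Γ k −1/α with U k = ±1, the mean-squared tail beyond the truncation level n equals, by Campbell's formula, ∫ n +∞ s −2/α ds = αn 1−2/α /(2 − α). In the sketch below, the arrivals in (n, m] are generated as a Poisson number of uniform points, and arrivals beyond the cutoff m are neglected (their contribution is negligible for the chosen parameters).

```python
import numpy as np

alpha, n, m = 0.8, 5.0, 500.0   # truncation level n; arrivals beyond m neglected
rng = np.random.default_rng(3)

reps = 5000
tails = np.empty(reps)
for i in range(reps):
    # unit-rate Poisson arrivals in (n, m]: Poisson(m - n) many uniform points
    num = rng.poisson(m - n)
    pts = rng.uniform(n, m, size=num)
    signs = rng.choice((-1.0, 1.0), size=num)
    tails[i] = float(np.sum(signs * pts ** (-1.0 / alpha)))

mse_hat = np.mean(tails ** 2)
mse_exact = alpha * n ** (1.0 - 2.0 / alpha) / (2.0 - alpha)
```

The empirical mean-squared tail `mse_hat` should match the Campbell-formula value `mse_exact`.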
The idea that the first few summands of the shot noise representation (3.10) accounts for a large amount of variation has useful applications outside of sampling (Section 6). For instance, it has been shown in [14] that absolute continuity of the law of the first few summands guarantees the absolute continuity of the entire shot noise series.

Comparisons among shot noise representations
We compare the errors from Poisson truncation approximations of the various shot noise representation methods established in Proposition 3.5. Consider the following general result [46], in which item (iii) requires q ≥ 0 such that ∫ ‖z‖>1 ‖z‖ q ν(dz) < +∞. Note that the result holds for a general Lévy measure ν(dz). The result (i) echoes our understanding of Poisson truncation as simulating a mass n irrespective of the underlying shot noise representation, with truncations of distinct representations corresponding to the simulation of different regions of the Lévy measure ν(dz). By (ii), it is clear that under the Poisson truncation method, no method presented in Proposition 3.5 can simulate the tail of the Lévy measure more accurately than the inverse Lévy measure method. Moreover, in light of (i), we deduce that the other methods simulate regions of the Lévy measure closer to the origin. By (iii), we see that the inverse Lévy measure method captures more variation of the jumps than the other methods. The result (iv) is also useful, as for q = 2, the integrals correspond to the variances of discarded jumps, which can be viewed as representative of the error associated with the approximation: the smaller the variance, the better the approximation. Thus, we see that, at least under the Poisson truncation scheme, the inverse Lévy measure method is preferred over the others. The main drawback of the inverse Lévy measure method is that the tail of the Lévy measure may not be conveniently invertible (for instance, see Example 3.8); however, in such cases one may still numerically invert the tail of the Lévy measure as required [46]. If the numerical inversion of the tail of the Lévy measure is computationally expensive, then one would prefer not to use the inverse Lévy measure method. Additionally, the error of numerical inversion may accumulate in the series.
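When the tail u ↦ ν((u, +∞)) is not invertible in closed form, H I can be obtained numerically by bisection, since the tail is nonincreasing. A minimal sketch, validated here against the stable tail ν((u, +∞)) = au −α /α whose inverse (αr/a) −1/α is known in closed form; the bracketing interval and iteration count are arbitrary choices.

```python
import numpy as np

def inverse_levy_tail(tail, r, lo=1e-12, hi=1e12, iters=200):
    """Compute H_I(r) = inf{u > 0 : tail(u) < r} for a nonincreasing tail
    function by bisection on the bracketing interval (lo, hi)."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)      # bisect on a log scale for wide brackets
        if tail(mid) < r:
            hi = mid
        else:
            lo = mid
    return hi

a, alpha = 1.0, 0.5
tail = lambda u: a * u ** (-alpha) / alpha
r = 3.0
h_numeric = inverse_levy_tail(tail, r)
h_exact = (alpha * r / a) ** (-1.0 / alpha)
```

The bisection estimate should agree with the closed-form inverse to high precision.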
While Theorem 4.2 only addresses the inverse Lévy measure, rejection, thinning and Bondesson's methods for any Lévy measure, it appears that the result extends beyond those methods. As an example, the following demonstrates that the result holds in the case of Rosiński's series representation for the tempered stable law [45].
(ii) It holds that, for every q ≥ 0 such that ∫ ‖z‖>1 ‖z‖ q ν(dz) < +∞, the comparison of Theorem 4.2 continues to hold. We observe that the inverse Lévy measure method outperforms not only the methods of Proposition 3.5, as expected due to Theorem 4.2, but also Rosiński's shot noise representation. We also state the Lévy measure ν 5,n (dz) as follows. Recall from Section 2.2 that the Lévy measure of the tempered stable law can be written in polar form as ν(dr, dξ ) = r −α−1 q(r, ξ )dr σ (dξ ), which is often provided as such. Then, it holds that [22]. From this, one can proceed with error analysis by investigating the Lévy measure corresponding to the truncation error. (i) For every x > 0, it holds that (iii) the variance of the discarded jumps is given by where σ 2 := ∫ 0+ +∞ z 2 ν(dz) = a/β 2 and we denote by γ(a, x) := ∫ 0+ x u a−1 e −u du the lower incomplete gamma function for a > 0 and x > 0.
The result (iii) corresponds to Theorem 4.2 (iv) on the variance of the discarded jumps. Without direct comparison, we already know from Theorem 4.2 that the variance associated with the inverse Lévy measure method cannot be outperformed by the other methods examined.
To mention a more recent example of Poisson truncation error analysis, it is found in [72] that, in the case of the t-distribution, the mean-squared truncation errors for the inverse Lévy measure and rejection methods are both bounded by 2ν/(π(n − 1)), where ν here denotes the degrees of freedom parameter.

Gaussian approximation of small jumps
In the Poisson truncation approximation of an infinitely divisible random vector, as the norm of the kernel r → H(r, v) in (3.10) is nonincreasing, we discard jumps of sizes within some neighbourhood about the origin. Under an appropriate method (for example, the inverse Lévy measure method), suppose the truncation parameter is sufficiently large so that only jumps of magnitudes less than a small ε > 0 are discarded. Then, the variance of the discarded jumps is given by

σ 2 (ε) := ∫ 0<‖z‖≤ε ‖z‖ 2 ν(dz). (4.4)

A natural idea to pursue is to approximate the discarded jumps by a Gaussian random vector [12]. This is particularly practical when the shot noise representation of the infinitely divisible random vector converges slowly, that is, when r → H(r, v) decreases slowly in norm, as is the case with stable random vectors with stability index α close to two (Example 4.1).
In what follows, we restrict ourselves to the one-dimensional setting. Specifically, let X be an infinitely divisible random variable with Lévy measure ν(dz) and X ε the truncation approximation of X with jump sizes |z| < ε discarded. Define the error, consisting of the discarded small jumps, as

X̂ ε := X − X ε . (4.5)

We seek conditions under which it is valid to approximate X̂ ε /σ (ε) by a standard normal random variable; such a Gaussian approximation cannot be applied for every infinitely divisible law, with the Poisson distribution being a simple counterexample. We examine some notions from [3] on this matter. A necessary and sufficient condition for the Gaussian approximation of the discarded jumps to hold is as follows.
Theorem 4.5 (Gaussian approximation of discarded jumps). Let σ (ε) and X̂ ε be defined as in (4.4) and (4.5), respectively. Then, X̂ ε /σ (ε) converges in law to the standard normal distribution as ε → 0 if and only if, for every c > 0, the condition (4.6) holds as ε → 0.
A sufficient yet more verifiable condition is given in [3, Proposition 2.1], which shows that a Gaussian approximation of the discarded jumps is valid when its variation decreases at a rate slower than its upper bound. If σ (ε)/ε → +∞ as ε → 0, then the condition (4.6) holds. Moreover, if the Lévy measure ν(dz) does not have any atoms in a neighbourhood about the origin, then this condition is equivalent to the condition (4.6).
There is a profound relation between the decay of the generalised inverse of the tail of the Lévy measure and the validity of approximating the discarded jumps by a normal random variable [3, Proposition 2.2]. Suppose the Lévy measure ν(dz) is symmetric and infinite. By symmetry, the kernel (3.12) for the inverse Lévy measure method is independent of the direction and simplifies to H I (r) = inf{u ∈ (0, +∞) : 2ν((u, +∞)) < r} for r > 0. If, for every a > 0, it holds that lim t→+∞ H I (t + a)/H I (t) = 1, then X̂ ε /σ (ε) converges in law to the standard normal distribution as ε → 0. For example, it is straightforward to show that the stable distribution satisfies this condition; thus, its error from truncation asymptotically behaves like a normal random variable. However, not every infinitely divisible distribution enjoys the benefit of Gaussian approximation: the gamma law is a counterexample, where the small jumps are infinitely active yet not intense enough to resemble Brownian infinite variation [3,12]. For the Gaussian approximation of discarded jumps in higher dimensions with more general methods than the inverse Lévy measure method, we refer the reader to [22].
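The sufficient condition σ(ε)/ε → +∞ can be examined numerically; a small sketch contrasting a symmetric stable-type measure (for which the condition holds) with the gamma measure (for which it fails), using the closed forms σ 2 (ε) = 2ε 2−α /(2 − α) for ν(dz) = |z| −α−1 dz and σ 2 (ε) = a ∫ 0 ε z e −βz dz = a(1 − e −βε (1 + βε))/β 2 for the gamma Lévy measure; parameter values are arbitrary.

```python
import numpy as np

alpha, a, beta = 1.2, 1.0, 1.0
eps = np.array([1e-1, 1e-2, 1e-3, 1e-4])   # decreasing truncation thresholds

# stable-type: sigma^2(eps) = 2 * eps**(2 - alpha) / (2 - alpha)
sigma_stable = np.sqrt(2.0 * eps ** (2.0 - alpha) / (2.0 - alpha))
ratio_stable = sigma_stable / eps          # diverges like eps**(-alpha/2)

# gamma: sigma^2(eps) = a * (1 - exp(-beta*eps) * (1 + beta*eps)) / beta**2
sigma_gamma = np.sqrt(a * (1.0 - np.exp(-beta * eps) * (1.0 + beta * eps)) / beta ** 2)
ratio_gamma = sigma_gamma / eps            # stays bounded, tending to sqrt(a/2)
```

The stable ratio grows without bound as ε decreases, so the Gaussian approximation is valid; the gamma ratio remains bounded, consistent with the gamma law being a counterexample.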
In the light of Theorem 4.2 (iv), the error analysis of Poisson truncation can be revisited from the perspective of Gaussian approximation of the error [46]. Suppose that the Poisson truncation error of an infinitely divisible random variable can be approximated by a normal random variable. Denote the Lévy measure associated with the Poisson truncation of each of the shot noise representation methods in Proposition 3.5 as ν k,n (dz), k = 1, 2, 3, 4. Then, for each of the methods, the normal random variable which approximates the discarded jumps has the variance Recall Theorem 4.2 (iv), in which we have seen that σ 1,n ≤ σ k,n holds for k = 2, 3, 4, where k = 1 corresponds to the inverse Lévy measure method. Thus, no normal random variable used in approximating the error of the methods examined has a lower variance than that associated with the inverse Lévy measure method. This can be interpreted as the inverse Lévy measure method being an optimal approximation in the L 2 sense, in comparison to the other methods under Poisson truncation. It is worth noting that this discussion is also important in higher dimensions.

Numerical schemes via truncation of shot noise representation
So far, we have focused primarily on infinitely divisible laws without Gaussian components, with glimpses of shot noise representations for Lévy processes in Sections 3.2 and 3.3. We have seen the correspondence between infinitely divisible laws and Lévy processes reflected in their shot noise representations. In Section 4, we discussed the approximation of infinitely divisible random vectors via the truncation of shot noise representations. In this section, we focus entirely on the technique of Poisson truncation of shot noise representations for simulating Lévy processes (Section 5.1), infinitely divisible processes (Section 5.2) and fields (Section 5.3) and Lévy-driven stochastic differential equations (Section 5.4). For simulation purposes, while the truncation of shot noise representations is useful for infinitely divisible laws and Lévy processes, it is often merely one option among a wider array of available numerical methods. For infinitely divisible processes and Lévy-driven stochastic differential equations, however, the truncation of shot noise representations offers a very effective approach, due to the necessity of jump-based approximation in distilling the structure of the stochastic process in general. As a means of demonstrating the effectiveness of the truncation of shot noise representations, we provide various numerical illustrations along the way.

Simulating Lévy processes
The inverse Lévy measure method established in Section 3.3 presents us with a shot noise representation (3.7) for a Lévy process {X t : t ∈ [0, 1]} in R d without Gaussian components. Recall that this is indeed a shot noise representation for the infinitely divisible random vector X 1 , except that the summands are scattered uniformly and the centres are subtracted in proportion over the unit time interval. We begin by briefly describing the general shot noise method for additive processes, which generalise Lévy processes by relaxing the property of stationary increments. Consider an additive process described by a more general Lévy-Itô decomposition [95, Chapter 4], where λ(·) ≥ 0 is a nonnegative function whose support includes [0, T ] for a fixed T > 0 and µ(dz, ds) is a Poisson random measure on R d 0 × [0, +∞) with intensity measure ν(dz)λ(s)ds. Suppose the Lévy measure ν(dz) satisfies the decomposition (3.9). By a change of variable, the two intensity measures can be matched on every Borel product set. Thus, by scattering the summands in (3.10) according to the density λ(·)/∫_0^T λ(s) ds instead of uniformly, and replacing Γ k by Γ k /∫_0^T λ(s) ds, we obtain the shot noise representation (5.1) for the additive process, in which {S k } k∈N is a sequence of iid random variables with density λ(·)/∫_0^T λ(s) ds and {c k } k∈N is a suitable sequence of centres. In particular, by considering the case λ ≡ 1, we obtain the series representation (5.2) for any Lévy process over [0, T ] based on the generalised shot noise method (Theorem 3.4), in which {T k } k∈N is a sequence of iid uniform random variables on (0, T ) independent of the other random sequences. It is known [87, Theorem 5.1] that the infinite series (5.2) converges almost surely and uniformly on [0, T ]. Moreover, there exists a version of the random sequences on the right hand side of (5.2) such that the equality holds almost surely.
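To make the truncation of (5.2) concrete, the following sketch simulates a one-sided α-stable subordinator on [0, T ] (tail mass ν((z, +∞)) = a z −α with α ∈ (0, 1), so that the inverse Lévy measure kernel is explicit and no centres are needed); the function name and parametrisation are illustrative assumptions rather than notation from the text.

```python
import numpy as np

def stable_subordinator_path(alpha, a, T, n, rng):
    """Approximate sample path of a one-sided alpha-stable subordinator
    (tail mass nu((z, inf)) = a * z**(-alpha), alpha in (0, 1)) on [0, T],
    via the Poisson truncation {Gamma_k <= n * T} of the series (5.2)."""
    # successive exponential sampling of the standard Poisson arrival times
    gammas = []
    g = rng.exponential()
    while g <= n * T:
        gammas.append(g)
        g += rng.exponential()
    gammas = np.array(gammas)
    # inverse Levy measure kernel: the process over [0, T] has jump sizes
    # H(Gamma_k / T) = (a * T / Gamma_k)**(1 / alpha)
    jumps = (a * T / gammas) ** (1.0 / alpha)
    times = rng.uniform(0.0, T, size=gammas.size)   # uniform scattering of jumps
    order = np.argsort(times)
    return times[order], np.cumsum(jumps[order])    # jump times, path values
```

Here the truncation level n plays the role of the simulated mass of the Lévy measure per unit time, in the sense of Theorem 4.2 (i).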
We also mention Lévy processes of type G, which admit special shot noise representations related to the inverse Lévy measure method of Section 3.3. A Lévy process {X t : t ≥ 0} in R is said to be of type G if its increments can be represented in law as X t 2 − X t 1 L = V 1/2 G for t 1 < t 2 , where V is a nonnegative infinitely divisible random variable with Lévy measure ν(dz) and G is a standard normal random variable. With H I (·) defined in (3.6) but independent of the directional argument, it holds that [86] where {Γ k } k∈N is a sequence of standard Poisson arrival times, {G k } k∈N is a sequence of iid standard normal random variables and {T k } k∈N is a sequence of iid uniform random variables over (0, T ) such that the random sequences are mutually independent. We refer to [104] for results on error analysis for Poisson truncation of the shot noise representation (5.3). We also mention that the Lévy process of type G admits the subordinated Brownian motion representation X t = B V t , where {B t : t ≥ 0} is a standard Brownian motion in R and {V t : t ≥ 0} is a subordinator with Lévy measure ν(dz). Thus, an alternative simulation method to the Poisson truncation of the shot noise representation (5.3) is via the truncation of a shot noise representation for the subordinator {V t : t ≥ 0}.
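As a small illustration of the subordinated Brownian motion route for type G processes, the sketch below forms increments V 1/2 G from increments of a simulated subordinator; the choice of a gamma subordinator (yielding a variance gamma type process) and all function names are illustrative assumptions, not constructions taken from the references.

```python
import numpy as np

def type_g_increments(subordinator_increments, rng):
    """Given increments of a nonnegative subordinator {V_t}, return the
    corresponding type G increments, distributed as V**0.5 * G with G
    standard normal (conditionally Gaussian given the subordinator)."""
    dv = np.asarray(subordinator_increments)
    return np.sqrt(dv) * rng.standard_normal(dv.size)

# Illustration with a gamma subordinator on a uniform grid of [0, 1]:
rng = np.random.default_rng(1)
dt = 1.0 / 500
dv = rng.gamma(shape=2.0 * dt, scale=1.0, size=500)  # Gamma(2t, 1) marginals
x = np.concatenate([[0.0], np.cumsum(type_g_increments(dv, rng))])
```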
With the knowledge of shot noise representations of Lévy and additive processes and Poisson truncation from Section 4, we have a powerful and general method for simulating their sample paths. All general as well as specific Poisson truncation error analyses for infinitely divisible laws from Section 4 carry over to the case of Lévy and additive processes. In particular, Gaussian approximation of discarded jumps holds for the case of Lévy and additive processes by including a Brownian motion (and its deterministically time-changed version), instead of a normal random vector, scaled by the variance (4.4) of the small jumps [3]. The conditions for the one-dimensional setting as well as their multidimensional generalisations (Section 4.2) apply in this setting.
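The Gaussian refinement just described can be sketched as follows: the truncated large-jump path is complemented by an independent Brownian motion whose per-unit-time variance matches that of the discarded small jumps. The helper name and the choice of inputs (precomputed jump times and sizes as arrays, a scalar sigma_n) are our own illustrative assumptions.

```python
import numpy as np

def gaussian_refined_path(jump_times, jump_sizes, sigma_n, grid, rng):
    """Approximate path on an increasing time grid starting after 0:
    truncated large jumps plus an independent Brownian motion with
    variance sigma_n**2 per unit time, standing in for the small jumps."""
    grid = np.asarray(grid)
    # large-jump component: sum of jumps occurring up to each grid time
    large = np.array([jump_sizes[jump_times <= t].sum() for t in grid])
    # Brownian component with independent increments of variance sigma_n^2 * dt
    dt = np.diff(grid, prepend=0.0)
    bm = np.cumsum(sigma_n * np.sqrt(dt) * rng.standard_normal(grid.size))
    return large + bm
```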

Simulation recipes
The implementation of Poisson truncation of shot noise representation is attractively clear-cut. Hereafter, we focus on Lévy processes, as the generalisation to additive processes is rather trivial in light of (5.1) and (5.2). Due to the uniform scattering of jumps along the time interval [0, T ], it is little more than a matter of independently sampling the random sequences appearing in the series (5.2) and evaluating the shot noise kernel H. To generate the sequence of Poisson arrival times {Γ k } k∈N , one may take advantage of the fact that the interarrival times are iid standard exponential random variables {E k } k∈N . Based on the Poisson truncation of the shot noise representation (5.2), we describe the simulation recipe for generating approximate sample paths of the Lévy process in R d over the time interval [0, T ] as follows.
Step 1. Generate a standard exponential random variable E 1 . If E 1 ≤ nT , then assign Γ 1 ← E 1 . Otherwise, return the degenerate zero process as the approximate sample path and terminate the algorithm.
In the case where the centres {c k } k∈N are not necessary for the almost sure convergence of the shot noise series (5.2), the assignments in step 7 simplify to X k ← X k−1 + J (k) . We can alternatively sample the Poisson arrival times in the spirit of the compound Poisson representation (4.1), as follows.
Step 1. Generate a Poisson random variable N with rate nT . If N = 0, then return the degenerate zero process as the approximate sample path and terminate the algorithm.
Step 2. Generate N iid uniform random variables {V k } k∈{1,...,N} on (0, nT ) and sort them in ascending order to obtain {V (k) } k∈{1,...,N} .
Note that in the above simulation recipes, we work under the truncation {k ∈ N : Γ k ≤ nT } instead of {k ∈ N : Γ k ≤ n} to simulate a mass n of the Lévy measure per unit time (recall Theorem 4.2 (i)). While the two simulation recipes offered only differ in the method for sampling Poisson arrival times, we highlight some contrasting features which may be important in practice. The second simulation recipe, based on the conditional uniformity of Poisson arrival times, is simpler to implement. In particular, the lack of a need for a while loop makes this recipe more natural to implement from a functional programming perspective. In contrast, the first recipe based on successive exponential sampling carries the advantage of easy modification for adaptive truncation. That is, one may continually sample summands until some criterion, which may dynamically update, is reached. The second recipe based on conditional uniformity of Poisson arrivals does not share this advantage, as resampling the Poisson random variable for the number of jumps with a larger rate does not guarantee a greater number of jumps. Additionally, previous jump timings cannot be reused and would require complete resampling from scratch, which would become a source of inefficiency. Hence, in some scenarios, there are grounds to prefer one recipe over the other.
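The second recipe's sampling of arrival times by conditional uniformity can be sketched in a few lines; the function name is an illustrative assumption.

```python
import numpy as np

def poisson_arrivals_by_conditional_uniformity(n, T, rng):
    """Sample the truncated Poisson arrival times {Gamma_k <= n*T} by first
    drawing their number N ~ Poisson(n*T), then sorting N iid uniforms on
    (0, n*T) -- the conditional-uniformity recipe of the second algorithm."""
    N = rng.poisson(n * T)
    if N == 0:
        return np.empty(0)          # degenerate zero process
    return np.sort(rng.uniform(0.0, n * T, size=N))
```

As noted above, if the truncation level n is later increased, this batch cannot be reused and must be redrawn from scratch, unlike the successive exponential sampling of the first recipe.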

Numerical illustrations
In what follows, we provide numerical examples of approximate sample paths for some representative Lévy processes of both theoretical and practical interest. We first demonstrate the truncation scheme for the gamma process in Figure 1 based on Bondesson's method described in Example 3.8 (iv), with different truncation parameters to illustrate the convergence and error of the Poisson truncation method. The estimates for the unit-time marginal in Figure 1 suggest that the approximate sample paths based on the truncation of Bondesson's shot noise representation for the gamma process (3.26) converge very fast in n. This has already been verified by Example 4.4 (the k = 4 case), in which the mean and variance of the truncation error converge to zero exponentially fast. Thus, despite the inapplicability of Gaussian approximation of the discarded jumps for the gamma process (Section 4.2), the exponential mean-squared convergence of this method makes such accuracy considerations superfluous.
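A minimal sketch of the truncated Bondesson scheme for the gamma process is given below; the jump form λ −1 e −Γ k /(αT ) E k is the form we assume for (3.26), and the function name is illustrative.

```python
import numpy as np

def gamma_process_bondesson(alpha, lam, T, n, rng):
    """Approximate sample path on [0, T] of a gamma process with shape alpha
    and rate lam per unit time, by truncating a Bondesson-type shot noise
    series with jump sizes lam**-1 * exp(-Gamma_k / (alpha * T)) * E_k,
    E_k iid standard exponential (the form assumed here for (3.26))."""
    gammas = []
    g = rng.exponential()
    while g <= n * T:
        gammas.append(g)
        g += rng.exponential()
    gammas = np.array(gammas)
    jumps = np.exp(-gammas / (alpha * T)) * rng.exponential(size=gammas.size) / lam
    times = rng.uniform(0.0, T, size=gammas.size)
    order = np.argsort(times)
    return times[order], np.cumsum(jumps[order])
```

With this jump form, the discarded mass beyond the truncation level n decays like e −n/α , consistent with the exponentially fast convergence noted above.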
Recall that the shot noise representation of the stable law is provided in Example 3.3, and the discussion of the mean-squared error of its truncation is given in Example 4.1. We provide sample paths of the 2-dimensional stable process with the isotropic Lévy measure in Figure 2. From our numerical illustration, we see that lower values of the stability parameter correspond to larger jumps (Figure 2 (a)), while higher values of the stability parameter correspond to smaller jumps and more closely resemble a Brownian motion (Figure 2 (c)). In contrast to the case of the gamma process, the mean-squared convergence remains slower than exponential. As such, Gaussian approximation of the discarded jumps (Section 4.2) can be practical for enhancing the accuracy of the simulation in this case.
We provide sample paths of the tempered stable process with the isotropic Lévy measure based on Rosiński's series representation (3.22) in Figure 3 below. Comparing with Figure 2, we see that the jumps of the tempered stable sample paths tend to be smaller, which is expected from the exponential tempering of the Lévy measure. This is most prominent for α = 0.5, where the stable sample paths (Figure 2 (a)) see jumps with magnitudes easily exceeding 100, while the tempered stable counterparts (Figure 3 (a)) do not exhibit jumps with magnitudes greater than one, due to the random truncation W k U 1/α k V k in every summand of (3.22). We mention here that for the stable process, allowing the stability parameter α : [0, +∞) → (0, 2) to vary with time leads to the multistable process. Shot noise representations for multistable processes can be found in [66,67]. Defined similarly is the tempered multistable process, of which the CGMY process [20] with Lévy measure (2.8) is an example. Shot noise representations for tempered multistable processes are provided in [66]. Another example of representing Lévy processes via their shot noise series is the case of t-distributed increments [72].
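For the one-sided tempered stable case, a Rosiński-type truncation can be sketched as follows; the specific jump form min{(αΓ k /(aT )) −1/α , E k U 1/α k /β } is a commonly stated variant which we assume here for illustration, and the function name is our own.

```python
import numpy as np

def tempered_stable_subordinator(alpha, a, beta, T, n, rng):
    """Approximate path on [0, T] of a tempered stable subordinator with
    Levy density a * z**(-alpha-1) * exp(-beta*z) on (0, inf), alpha in
    (0, 1), via truncation of a Rosinski-type series; the jump form below
    is the variant assumed in this sketch."""
    gammas = []
    g = rng.exponential()
    while g <= n * T:
        gammas.append(g)
        g += rng.exponential()
    gammas = np.array(gammas)
    k = gammas.size
    stable_part = (alpha * gammas / (a * T)) ** (-1.0 / alpha)
    # exponential-uniform cap randomly truncates (tempers) the large jumps
    cap = rng.exponential(size=k) * rng.uniform(size=k) ** (1.0 / alpha) / beta
    jumps = np.minimum(stable_part, cap)
    times = rng.uniform(0.0, T, size=k)
    order = np.argsort(times)
    return times[order], np.cumsum(jumps[order])
```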

Discussion
We discuss some aspects of simulation by Poisson truncation of shot noise representations. To begin with, we remark that simulating Lévy processes based on the thinning method (Proposition 3.5 (iii)) may be computationally taxing when a large number of jumps is required. This is due to the acceptance probability decreasing as the summation index k increases. For example, in the case of the thinning method for the gamma process (Example 3.8), the acceptance probability for taking the k-th summand as a jump is given by P(Γ k V k < α). As a result, significantly more computations are required to obtain a large quantity of jumps compared to, say, the inverse Lévy measure method. This explains the absence for the thinning method (3.25) of the exponential convergence rates in the case of the gamma process (Example 4.4) that are enjoyed by the other methods (3.23), (3.24) and (3.26). Nevertheless, the thinning method plays a crucial role from a theoretical point of view. For instance, it is used in [99] to discuss the boundedness of infinitely divisible processes. As such, different shot noise representations have different potential and uses. For example, while the truncation of the shot noise representation by the inverse Lévy measure method simulates a Lévy process by discarding its smallest jumps, the truncation of a representation by the rejection method illustrates the relationship between one Lévy process and another.
Figure 3: Examples of approximate sample paths of the 2-dimensional tempered stable process with the isotropic Lévy measure based on the truncation of Rosiński's series representation (3.22). The other parameters are a = 1, n = 10 5 and T = 1. Each plot contains 10 iid sample paths.
The Poisson truncation approximation of a Lévy process is able to preserve various key properties of the original process. For example, the discontinuity of sample paths clearly holds for both the original Lévy process and its Poisson truncation approximation. As the largest jumps of a Lévy process without Gaussian components account for the majority of the variation, some key moment properties are retained by the Poisson truncation approximation. In the case of the tempered stable and gamma processes, the sample paths resulting from Poisson truncation retain the finiteness of moments of all polynomial orders. Meanwhile, for the stable process, marginal moments of order in [α, +∞) remain infinite even after Poisson truncation. Similarly, the extremal behaviour remains unchanged after Poisson truncation, as it is attributed to the largest jumps [43].
It is well-known that the finiteness of the total variation of a Lévy process without Gaussian components with Lévy measure ν(dz) depends on the finiteness of the integral ∫ R d 0 (‖z‖ ∧ 1) ν(dz). In particular, a sufficiently intense activity of small jumps is the only possible source of infinite variation. Truncation of shot noise representation cuts off all small jumps and thus necessarily leads to sample paths of finite variation, so this property is preserved for finite variation Lévy processes. However, sample paths of infinite variation, such as those of stable and tempered stable processes with stability α ∈ [1, 2), will result in finite variation after applying Poisson truncation. We mention that in this case, the total variation of the Poisson truncation approximation diverges fastest in the case of the inverse Lévy measure method, echoing the dominance we saw in Theorem 4.2 (iii) but with the integrand ‖z‖ ∧ 1 instead. The lack of preservation of infinite variation means that even though the truncation of shot noise representation leads to sample paths based on individual jumps, the resulting paths cannot be employed in full for investigating sample path properties. From the perspective of simulation, this is not an issue, as the compromise of generating finite variation approximations of the Lévy process is implied by the very nature of numerical investigation.
Before closing this subsection, we mention a possible disadvantage of the truncation method for sample path generation of Lévy processes. When sample path generation via increments is possible, one important distinction between such a scheme and that of a Poisson truncation method is that in the latter, a terminal time T is fixed beforehand and cannot be extended during the simulation, whereas in the former, one can keep piling on as many increments as desired, possibly until some condition is fulfilled. An example of when this distinction is important is through a comparison between the settings of [106] and [18]. In the former, Poisson truncation of shot noise representation is used to generate sample paths of an inverse subordinator. In that setting, the goal of evaluating immobility periods requires the observation of jumps, thus rendering the truncation method ideal and the incremental method inappropriate. In [18], however, evaluations of the inverse subordinator at any time t > 0 are required, so if the subordinator (before inversion) has yet to reach t, one must keep adding increments until that level is reached. This is illustrated in Figure 4. Thus, in such a scenario, sample path generation by increments should be preferred.

Simulating infinitely divisible processes
So far, we have studied approximations of Lévy processes via truncation of shot noise representations. We will now generalise the technique further to approximate stochastic processes governed by Lévy-driven stochastic integrals of the form (2.4), which is a large class of infinitely divisible processes. We emphasise that our focus is on the case where the driving Lévy process is of infinite jump activity, as otherwise, exact simulation methods are readily available. The technique revolves around partitioning the stochastic process based on the sizes and timings of the underlying jumps, as we will present shortly in (5.5). Shot noise representation is perhaps even more pertinent to the construction and approximation of infinitely divisible processes, which often requires the consideration of individual jumps more so than Lévy processes for both theoretical and numerical purposes.
Suppose we want to simulate the stochastic process {X t : t ∈ [0, T ]} in R d such that the marginal is described by the stochastic integral X t = ∫ T f (t, s) dL s , where T ⊆ R and {L s : s ∈ T } is a Lévy process with an infinite Lévy measure ν(dz) with a decomposition (3.9). This corresponds to the stochastic integral form (2.4) where the integrator is a Lévy process. Naturally, each jump of the underlying Lévy process at time s is modulated by s → f (t, s), so we have a Lévy-Itô decomposition of the form (5.4), where µ(dz, ds) is the Poisson random measure on R d 0 × T associated with {L s : s ∈ T } and ν(dz)ds is the corresponding compensator. Various theoretical developments for infinitely divisible processes and their series representations have been established, such as their spectral representations [80] and path properties [84,99]. Much like in the case of Lévy processes, shot noise representations for infinitely divisible processes converge almost surely uniformly under suitable technical conditions [4]. Moreover, if the probability space is rich enough, then one can choose the random sequences such that the shot noise representation is almost surely equal to the infinitely divisible process [90].
We look to truncate the infinitely divisible process {X t : t ∈ [0, T ]} (5.4) based on jump timings and sizes of the underlying Lévy process {L s : s ∈ T } via shot noise representation in order to obtain independent simulatable and error components for analysis. If Leb(T ) < +∞, that is, when the domain of the jump timings is bounded, we require no truncation on the index set. Otherwise, we introduce a nondecreasing sequence {T n } n∈N of connected subintervals of T such that ∪ n∈N T n = T and Leb(T n ) < +∞. That is, the parameter n represents a truncation based on jump timings. For the truncation over jump sizes, we note that the finite measure ν m (dz) := ∫ (0,m] P(H(r,U) ∈ dz) dr is the Lévy measure of the Poisson truncation approximation of the Lévy process {L s : s ∈ T n } with respect to the index set {k ∈ N : Γ k ≤ mLeb(T n )}, for every n ∈ N. That is, m ∈ N fulfils the role of a finite truncation parameter on the Lévy measure ν(dz). We remark that for any fixed m, the impact of the truncation on the sizes of the surviving jumps depends on the shot noise representation used, in the same vein as our discussion in Section 4.1. We decompose the stochastic process as in (5.5) [55]. Intuitively, the component X t (m, n) corresponds to large jumps over T n , Q t (m) corresponds to small jumps over T , and R t (m, n) corresponds to large jumps over T \T n . Since the regions of jump sizes and timings simulated by each component form a disjoint union, the stochastic processes on the right hand side of (5.5) can be treated independently thanks to the independent scattering of the Poisson random measure µ(dz, ds).
In (5.6), {T k } k∈N is a sequence of iid uniform random variables on T n , and the other random sequences are as in (3.10). For generating sample paths of the infinitely divisible process based on the Poisson truncation approximation (5.6) over sample points 0 = t 0 < t 1 < · · · < t J−1 < t J = T , we provide the numerical recipe as follows.
Step 1. Generate a standard exponential random variable E 1 . If E 1 ≤ mLeb(T n ), then assign Γ 1 ← E 1 . Otherwise, return the degenerate zero process as the approximate sample path and terminate the algorithm.
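Once the jump times and sizes of the driver have been generated as in the recipe above, the large-jump component X t (m, n) is just a kernel-weighted sum over the simulated jumps. The helper below is an illustrative sketch with assumed names; the exponential kernel used in the illustration is one convenient choice.

```python
import numpy as np

def shot_noise_integral(f, jump_times, jump_sizes, grid):
    """Evaluate the large-jump component X_t(m, n) = sum_k f(t, T_k) * J_k
    of the decomposition (5.5), given simulated jump times and sizes of
    the driving Levy process, at each time t in the grid."""
    return np.array([
        sum(f(t, s) * j for s, j in zip(jump_times, jump_sizes))
        for t in grid
    ])

# Illustration with an exponential (OU-type) kernel f(t, s) = e^{-lam(t-s)} 1{s<=t}:
lam = 2.0
f = lambda t, s: np.exp(-lam * (t - s)) if s <= t else 0.0
```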
We can consider the residual components {Q t (m) : t ∈ [0, T ]} and {R t (m, n) : t ∈ [0, T ]} as the error processes to analyse. In particular, if the kernel f (t, ·) is square-integrable and essentially uniformly bounded for every t ∈ [0, T ], and the kernel of the shot noise representation H(·, ξ ξ ξ ) satisfies some suitable technical conditions, then the stochastic process {Q t (m) : t ∈ [0, T ]} comprising the small jumps can be approximated by a Gaussian process in the same vein as Section 4.2. For more details, we refer the reader to [55].
In what follows, we demonstrate this approximate sample path generation method along with error analysis using the examples of higher order fractional stable motion (Section 5.2.1) and Lévy-driven continuous-time autoregressive moving average (CARMA) processes (Section 5.2.2). In both of those cases, for simplicity we restrict ourselves to the univariate setting with T = R, and typically consider the Poisson truncation based on the inverse Lévy measure method, so ν m (dz) = 1 (η(m),+∞) (|z|) ν(dz), where η(m) = sup{r > 0 : ∫ |z|>r ν(dz) > m}. We also discuss Lévy-driven Ornstein-Uhlenbeck processes as interesting cases in which simulation by increments may be preferred over truncation of shot noise representation (Section 5.2.3).
Using the framework of the decomposition (5.5) based on the inverse Lévy measure method with the truncation on jump timings T κ := (−κ, T ] (where we denote the truncation parameter as κ instead of n to avoid notational conflict), we decompose the higher order fractional stable motion as L H,α,n t = L H,α,n t (m, κ) + Q t (m) + R t (m, κ). For generating sample paths of the higher order fractional stable motion based on the Poisson truncation approximation (5.9), we follow the numerical recipe described previously in Section 5.2 but with the shot noise representation (5.9) instead. Examples of this sample path generation scheme for the higher order fractional stable motion are provided in Figure 5 below. We observe the versatility of the higher order fractional stable motion for generating rough paths (Figure 5 (a)) as well as aggregated smooth paths (Figure 5 (c)). We remark on the scenario where the sample paths of the stochastic integral process are almost surely unbounded on every finite interval of positive length, which happens whenever the integrand of the stochastic integral is explosive. For instance, in the case of the higher order fractional stable motion (5.7), this occurs when α ∈ (0, 2 ∧ (1/(n − 1))) and H ∈ (n − 1, n ∧ (1/α)) [53, Theorem 4.1]. In such a case, it is nonsensical to use truncation of shot noise representation to simulate the stochastic integral process, and even misleading to do so without the knowledge of sample path unboundedness. This is because truncation to a finite Lévy measure will almost surely produce a bounded sample path, which would otherwise be unbounded in the absence of truncation. For this reason, we advise checking that the stochastic process is almost surely bounded over [0, T ] prior to generating its sample paths.
We now turn our attention to the stochastic process {Q t (m) : t ∈ [0, T ]} consisting of the small jumps over (−∞, T ]. Let us further decompose this stochastic process so as to isolate a component Z t (m) with a smooth kernel. Since the kernel f (t, s; H, α) in Z t (m) can be written without (·) + , it is continuously differentiable with respect to t. The following result gives us the asymptotic behaviour of Q t (m), and validates its approximation via a fractional Brownian motion [53, Theorem 6.3]. Let {B t : t ∈ R} be a temporally extended standard Brownian motion in R and denote σ (m) := (∫ |z|≤η(m) |z| 2 ν(dz)) 1/2 .
(i) The finite dimensional distributions of {R t (κ) : t ∈ [0, T ]} converge in probability to zero as κ → +∞. Moreover, if α ∈ (1, 2), then the convergence can be strengthened to convergence in probability to the degenerate zero process uniformly on [0, T ]. (ii) For every κ > 0, the tail probability of sup t∈[0,T ] |R t (κ)| decays at a polynomial (Pareto) rate as λ → +∞. Moreover, the supremum over [0, T ] can be replaced by a maximum over a finite number of observation times in [0, T ].
Thus, we see the error process can be made arbitrarily close to the degenerate zero process by increasing κ, and that the tail of sup t∈[0,T ] |R t (κ)| resembles that of a Pareto distribution. We have therefore successfully approximated the higher order fractional stable motion.
This method of approximation for infinitely divisible processes is rather systematic. To summarise, we firstly restrict the Lévy measure to a domain on which it is finite, which is analogous to an approximation via Poisson truncation of shot noise representation. If possible, Gaussian approximation can be additionally used on the remaining components to obtain greater accuracy. The remaining stochastic process {R t (m, κ) : t ∈ [0, T ]} is treated as the error of the approximation, for which we analyse its properties.
We briefly mention the tempered stable generalisation of the higher order fractional stable motion. By replacing the Lévy measure of the driving process in the stochastic integral (5.7) with a symmetric tempered stable Lévy measure (5.10), where β > 0 is a tempering parameter, we obtain the higher order fractional tempered stable motion. Its properties, such as persistent autocorrelations and behaviour in short and long time regimes, are investigated in [19]. A shot noise representation in the vein of (5.9) is given in [19, Section 6], in which {E k } k∈N is a sequence of iid standard exponential random variables and {R k } k∈N is a sequence of iid standard uniform random variables, such that all random sequences are mutually independent. This shot noise truncation is not surprising at all, given Rosiński's series representation (3.22) for the tempered stable law. An alternative fractional tempered stable motion is studied in [41], where the Lévy measure of the driving process in the stochastic integral (5.7) is once again replaced by a tempered stable Lévy measure (5.10), but the first-order moving average kernel f 1 is also replaced by a Volterra kernel (5.12), where H ∈ (1/α − 1/2, 1/α + 1/2), α ∈ (0, 2) and c H,α is a constant depending only on H and α. Note that as the kernel in this fractional tempered stable motion is distinct from that of the higher order fractional tempered stable motion discussed in [19, Section 6], the former is not simply a lower-order version of the latter. The fractional tempered stable motion based on the Volterra kernel (5.12) does not integrate over negative time, so within the context of the decomposition (5.5), the truncation parameter on time is degenerate (κ = 0). Consequently, the error analysis is simplified, as the stochastic process {Q t (m) : t ∈ [0, T ]} of the small jumps also has no negative time component. Similarly to the moving average kernel, the Volterra kernel can capture selfsimilar dynamics. However, unlike the moving average kernel, there is no obvious generalisation of the Volterra kernel that can preserve its key properties.

Lévy-driven CARMA processes
We now turn to a numerical scheme for generating approximate sample paths of Lévy-driven CARMA processes via Poisson truncation of shot noise representation [54]. Lévy-driven CARMA processes naturally generalise Gaussian CARMA processes so as to capture asymmetry and heavy tails in a variety of physical and social science settings. We begin by defining the Lévy-driven CARMA process as follows. Fix a 1 , · · · , a p , b 0 , · · · , b p−1 ∈ R such that b q = 1, q ≤ p − 1 and b k = 0 for k > q, and define the polynomials a(z) := z p + a 1 z p−1 + · · · + a p and b(z) := b 0 + b 1 z + · · · + b q z q in such a way that a(z) and b(z) have no common roots. Define A ∈ R p×p as the companion matrix associated with a(z). Denote the eigenvalues of A as λ 1 , · · · , λ p , that is, a(z) = ∏ p k=1 (z − λ k ). Let {L t : t ∈ R} be a temporally extended univariate Lévy process in the same sense as in Section 5.2.1. Define e p ∈ R p as the unit vector in the p-th direction and b := [b 0 , b 1 , · · · , b p−1 ] ∈ R p . A Lévy-driven CARMA process in R of order (p, q) with p > q is defined as {Y t : t ∈ R}, where Y t := b, X t and {X t : t ∈ R} is a stochastic process in R p satisfying the state space equation dX t = AX t dt + e p dL t . Under suitable technical conditions, the Lévy-driven CARMA process {Y t : t ∈ R} is strictly stationary, and can be expressed as a linear combination of dependent Lévy-driven Ornstein-Uhlenbeck processes. Note that the CARMA(1, 0) process corresponds to the Lévy-driven Ornstein-Uhlenbeck process, which we will discuss exclusively in Section 5.2.3 due to its special features with respect to sample path generation.
We first consider the stable CARMA process. Specifically, the driver {L t : t ∈ R} is a temporally extended stable process with stability α ∈ (0, 2), whose skewness parameter is taken to be zero if α = 1. We define a constant λ α,β ,m := β c α η(m) 1−α α/(α − 1), which is used in a correction term to centre the error term Q t (m) when α ∈ (0, 1). Approximation and error analysis for this case is provided as follows. Define the function g(u) := ∑ p k=1 1(u ≥ 0)e λ k u b(λ k )/a ′ (λ k ). In the context of the decomposition (5.5) for the stable CARMA process based on the inverse Lévy measure method and the truncation on the jump timings T n := (−n, T ], the following statements hold [54, Section 4]: the error component {Q t (m) : t ∈ [0, T ]}, suitably scaled, converges to a Gaussian process in finite dimensional distributions as m → +∞; if q < p − 1, then the convergence can be strengthened to weak convergence in C ([0, T ]; R). (iii) The finite dimensional distributions of {R t (n) : t ∈ [0, T ]} converge in probability to zero as n → +∞. If α ∈ (1, 2), then the convergence can be strengthened to convergence in probability uniformly on [0, T ]. (iv) For every n > 0, the tail probability of the supremum of {R t (n) : t ∈ [0, T ]} decays at a polynomial (Pareto) rate as λ → +∞. So clearly, the error component {Q t (m) : t ∈ [0, T ]} is asymptotically Gaussian while the error component {R t (n) : t ∈ [0, T ]} asymptotically has a Pareto-tailed distribution, just as for the higher order fractional stable motion in Section 5.2.1. The latter is unsurprising due to the presence of the stable driver. Following the simulation recipe outlined in Section 5.2 but based on the shot noise representation (5.13) instead, we provide approximate sample paths in Figure 6. From the generated sample paths, we observe the mean-reverting property of the CARMA process in diminishing the impact of jumps over time.
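The kernel g(u) = ∑ k 1(u ≥ 0) e λ k u b(λ k )/a ′ (λ k ) defined above can be assembled numerically from the polynomial coefficients; the sketch below assumes distinct real eigenvalues for simplicity, and the helper name and interface are illustrative.

```python
import numpy as np

def carma_kernel(eigvals, b_coeffs, a_coeffs):
    """Build the CARMA kernel g(u) = sum_k 1(u >= 0) e^{lam_k u} b(lam_k)/a'(lam_k)
    from distinct (real, for this sketch) eigenvalues of A; b_coeffs and
    a_coeffs are the coefficients of b(z) and a(z) in ascending degree order."""
    a_prime = np.polynomial.polynomial.Polynomial(a_coeffs).deriv()
    b_poly = np.polynomial.polynomial.Polynomial(b_coeffs)
    weights = np.array([b_poly(l) / a_prime(l) for l in eigvals])

    def g(u):
        u = np.asarray(u, dtype=float)
        # indicator 1(u >= 0) times the exponential mixture with the weights above
        out = np.where(u[..., None] >= 0, np.exp(np.multiply.outer(u, eigvals)), 0.0)
        return out @ weights
    return g
```

For instance, with a(z) = (z + 1)(z + 2) and b(z) = z + 2, the partial fraction weights reduce the kernel to g(u) = e −u for u ≥ 0.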
Next, we consider the case where the driving Lévy process {L t : t ∈ R} without Gaussian components is centred and has finite second-order moments. Specifically, we demand that the characteristic function takes a form such that the infinite Lévy measure satisfies ∫ R 0 z 2 ν(dz) < +∞. We now deviate from the inverse Lévy measure method and leave the underlying shot noise representation general. We state the results on the error processes from the decomposition (5.5) as follows [54]. (i) As m → +∞, the suitably scaled error process {Q t (m) : t ∈ [0, T ]} converges to a Gaussian process in finite dimensional distributions. Additionally, if q < p − 1, then the convergence can be strengthened to weak convergence in the space C ([0, T ]; R). (ii) It holds that for every m > 0, the error process {R t (m, n) : t ∈ [0, T ]} converges in probability to the zero process uniformly on [0, T ] as n → +∞.
From the above, we see that the error component {Q t (m) : t ∈ [0, T ]} can be approximated by a Gaussian process. Moreover, it has been shown that, when possible, including the Gaussian approximation in the simulation of {Y t : t ∈ [0, T ]} not only improves precision, but is also necessary to preserve the second-order structure [54, Proposition 5.3].
In conclusion, we see that the general idea of decomposing a Lévy measure in terms of its jump magnitudes and timings is a powerful tool that can be applied systematically for approximating sample paths of infinitely divisible processes.

Lévy-driven Ornstein-Uhlenbeck process
We now consider the Lévy-driven Ornstein-Uhlenbeck (OU) process, which can be thought of as a special case of the Lévy-driven CARMA process with (p, q) = (1, 0) but with an additional centring parameter. This class of infinitely divisible processes deserves special attention in the context of numerical methods, as exact simulation methods by increments are available in some cases, which may be preferred over the Poisson truncation of shot noise representation.
Let {L t : t ≥ 0} be a Lévy process without Gaussian components and with Lévy measure ν(dz). The Lévy-driven OU process {X t : t ≥ 0} is described by the stochastic differential equation (5.14), where λ > 0 and µ ∈ R d , and the explicit solution is given by (5.15). In light of this stochastic integral representation, the Lévy-driven OU process is an infinitely divisible process. The OU process is typically defined by its invariant law lim t→+∞ L (X t ). In the case where the invariant law is Gaussian, the driver of the OU process (5.14) is a Brownian motion, which is outside the scope of the present survey. It is known [95, Theorem 17.5] that if the Lévy measure ρ(dz) of the driving process {L t : t ≥ 0} satisfies the integrability condition ∫ ‖z‖>2 ln ‖z‖ ρ(dz) < +∞, then there exists a Lévy-driven OU process (5.15) such that the invariant law lim t→+∞ L (X t ) exists and is selfdecomposable and infinitely divisible, with its Lévy measure ν determined by ρ(dz) and λ. Conversely, every selfdecomposable law admits the unique existence of a Lévy-driven OU process such that the Lévy measure of its Lévy driver satisfies the aforementioned integrability condition [95]. For simplicity, we focus on the univariate setting. Where w(z) is the Lévy density of the unit-time marginal L 1 of the driving Lévy process, it is related to the Lévy density u(z) of the invariant law by the equation w(z) = −λ (u(z) + zu′(z)) (5.16). A (non-Gaussian) stable OU process is defined as the Lévy-driven OU process such that its invariant law lim t→+∞ L (X t ) is a (non-Gaussian) stable law. If the invariant law is a stable law with the one-sided Lévy density u(z) = az −α−1 on (0, +∞) with α ∈ (0, 1), then the driving process admits the Lévy density w(z) = λ aαz −α−1 on (0, +∞), according to the relation (5.16); that is, the driving process is a stable subordinator with stability α, but with scale λ aα.
Combining results from Example 3.3 and Section 5.2, we have the shot noise representation
X_t = µ + e^{−λt}(X_0 − µ) + Σ_{k=1}^{∞} (λaT/Γ_k)^{1/α} e^{−λ(t−T_k)} 1{T_k ≤ t}, t ∈ [0, T], (5.17)
where {Γ_k}_{k∈N} is a sequence of standard Poisson arrival times independent of {T_k}_{k∈N}, a sequence of iid uniform random variables on (0, T). Sample paths based on the Poisson truncation of the shot noise representation (5.17) are provided in Figure 7 below. Similarly to our numerical illustrations of the stable CARMA process in Figure 6, we observe mean-reverting behaviour in the sample paths of the stable OU process. In contrast, the driving Lévy process in this case is a stable subordinator, so all jumps in the sample paths of Figure 7 are in the positive direction. Similarly to the case of the stable process in Figure 2, the jumps associated with a lower stability parameter (Figure 7 (a)) are significantly larger. An interesting property of the stable OU process is that the Lévy measures of the invariant law and the driving process both correspond to a stable law with stability α, though with different scales. This invariance of the stability parameter between the Lévy measures does not hold in general, as we see in the following.
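As an illustration, a stable OU sample path can be generated by retaining the first n terms of the series. The sketch below is our own minimal implementation, assuming jump sizes of the form (λaT/Γ_k)^{1/α} obtained by inverting the tail mass of the driver's Lévy density λaα z^{−α−1} over the window [0, T]; all function names are hypothetical.

```python
import math
import random

# Minimal sketch of Poisson truncation for a stable OU path on [0, T]:
# jump times T_k are iid uniform on (0, T), jump sizes invert the tail
# lam*a*T*x**(-alpha) of the driver's Levy measure at the arrival time Gamma_k.

def stable_ou_path(alpha, a, lam, mu, x0, T, n, grid, rng):
    gammas, g = [], 0.0
    for _ in range(n):                     # standard Poisson arrival times
        g += rng.expovariate(1.0)
        gammas.append(g)
    times = [rng.uniform(0.0, T) for _ in range(n)]
    jumps = [(lam * a * T / g) ** (1.0 / alpha) for g in gammas]
    path = []
    for t in grid:
        x = mu + (x0 - mu) * math.exp(-lam * t)    # deterministic part
        for tk, jk in zip(times, jumps):
            if tk <= t:                             # kernel e^{-lam(t-s)}1{s<=t}
                x += jk * math.exp(-lam * (t - tk))
        path.append(x)
    return path

rng = random.Random(42)
grid = [i / 100 for i in range(101)]
path = stable_ou_path(0.7, 1.0, 1.0, 0.0, 0.0, 1.0, 500, grid, rng)
```

Larger truncation levels n retain ever smaller jumps, in line with the discussion of truncation error below.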
Consider a tempered stable OU process such that its invariant law lim_{t→+∞} L(X_t) is a tempered stable law with Lévy density u(z) = a e^{−βz} z^{−α−1} on (0, +∞) with α ∈ (0, 1). From the relation (5.16), the Lévy density of the unit-time marginal L_1 of the driving process is given by
w(z) = λaα e^{−βz} z^{−α−1} + λaβ e^{−βz} z^{−α}. (5.18)
Thus, for α ∈ (0, 1), the Lévy driver {L_t : t ≥ 0} is the superposition of a tempered stable subordinator with stability α and scale λaα, and a compound Poisson process with Lévy density λaβ z^{−α} e^{−βz}, which is a gamma density with shape 1−α and rate β scaled by the factor λaβ^α Γ(1−α). This leads to the shot noise representation (5.19) [58, 88], where, in addition to the random sequences appearing in Rosiński's series representation (3.22), {Γ̃_k}_{k∈N} is a sequence of Poisson arrival times with intensity λaβ^α Γ(1−α) independent of {Γ_k}_{k∈N}, and {G_k}_{k∈N} is a sequence of iid gamma random variables with shape 1−α and rate β. Examples of sample paths based on the Poisson truncation of the shot noise representation (5.19) are provided in Figure 8. As expected, the jumps of the tempered stable OU process tend to be smaller than those of its stable counterpart in Figure 7 (a). We see from the plots in Figure 8 that the parameter λ controls the intensity of the mean-reverting property. For α ∈ (1, 2), by the relation (5.16), the driving process {L_t : t ≥ 0} is instead a superposition of two independent tempered stable processes, one with stability α and scale λaα, and another with stability α − 1 and scale λaβ. A shot noise representation similar to (5.19) is available, although intricate centring terms must appear. For the sake of simplicity, consider the case where the invariant law of the OU process lim_{t→+∞} L(X_t) is a symmetric tempered stable law, so that the independent tempered stable processes forming the driving Lévy process are also symmetric.
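The two-component structure of the driver for α ∈ (0, 1) can be sketched directly: temper the stable jumps by rejection with acceptance probability e^{−βx}, and superpose an independent compound Poisson stream with gamma jump sizes. This is our own hedged sketch, not the exact recipe of [58, 88]; names and normalisations are illustrative.

```python
import math
import random

# Hedged sketch of the driver's jumps for the tempered stable OU process with
# alpha in (0,1): (i) a tempered stable subordinator via thinning of stable
# jumps with acceptance probability exp(-beta*x), plus (ii) a compound Poisson
# process with Gamma(1-alpha, beta) jump sizes at the stated intensity.

def tempered_driver_jumps(alpha, a, beta, lam, T, n, rng):
    jumps = []  # list of (time, size) pairs
    # (i) thinned stable jumps, inverting the tail lam*a*T*x**(-alpha)
    g = 0.0
    for _ in range(n):
        g += rng.expovariate(1.0)
        x = (lam * a * T / g) ** (1.0 / alpha)
        if rng.random() < math.exp(-beta * x):      # tempering by rejection
            jumps.append((rng.uniform(0.0, T), x))
    # (ii) compound Poisson part, intensity lam*a*beta**alpha*Gamma(1-alpha)
    rate = lam * a * beta ** alpha * math.gamma(1.0 - alpha)
    t = rng.expovariate(rate)
    while t < T:
        # gammavariate takes (shape, scale); rate beta means scale 1/beta
        jumps.append((t, rng.gammavariate(1.0 - alpha, 1.0 / beta)))
        t += rng.expovariate(rate)
    return sorted(jumps)
```

The thinning step reflects the elementary factorisation λaα e^{−βz} z^{−α−1} = e^{−βz} × λaα z^{−α−1} of the first term of the driver's Lévy density.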
As almost sure convergence without centring terms is guaranteed in the symmetric case, the shot noise series for the symmetric tempered stable OU process with stability α ∈ (1, 2) is given by (5.20), where the familiar random sequences are the same as those in the representation (5.19), and the random sequences distinguished by a tilde are independent copies of the corresponding sequences.
With the availability of shot noise series representations for the stable (5.17) and tempered stable OU processes (5.19), we immediately have an easy method for sample path generation by Poisson truncation. As usual, this entails truncation errors due to the discarded jumps, and a decomposition of the stochastic integral process in the same vein as (5.5) can be performed to analyse the error. The kernel s ↦ e^{−λ(t−s)} 1_{[0,t]}(s) is square-integrable and uniformly bounded for every t, which conveniently permits Gaussian approximation of the discarded jumps. Alternatively, as the domain of integration over time is bounded, it may be easier to investigate the mean-squared error. Consider an infinitely divisible process X_t = ∫_0^T f(t, s) dL_s, t ∈ [0, T], where {L_t : t ∈ [0, T]} is a Lévy process with an isotropic Lévy measure ν(dz) such that it has the shot noise representation (5.2) without the centring terms. Then, a shot noise representation for the infinitely divisible process can be obtained by simply including f(t, T_k) as a factor in each of the summands. For the Poisson truncation approximation, the Itô-Wiener isometry can be applied in the same vein as (4.2) to express the mean-squared error E[|Q_t(m)|^2], where Q_t(m), previously defined in (5.5), now integrates over [0, T] instead of (−∞, T]. This result can be straightforwardly generalised to the case of a multivariate Lévy integrator similarly to (4.2). We see that as long as the integrand f(t, s) is square-integrable, the quality of the Poisson truncation is characterised by the Lévy integrator.
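For the OU kernel, the resulting mean-squared error is explicitly computable. The sketch below is our own illustration, assuming the factorised isometry E[|Q_t(m)|^2] = ∫_0^t f(t,s)^2 ds · ∫_{|z|≤ε} z^2 ν(dz) for the compensated discarded jumps, with f(t,s) = e^{−λ(t−s)} and a one-sided stable driver of Lévy density λaα z^{−α−1}; the cutoff ε stands in for the truncation level ε(m).

```python
import math

# Illustrative closed-form evaluation (our own sketch) of the truncation MSE
# for the OU kernel f(t,s) = exp(-lam*(t-s)) on [0,t] and a one-sided stable
# driver with Levy density lam*a*alpha*z**(-alpha-1), jumps below eps discarded.

def kernel_energy(lam, t):
    # int_0^t exp(-2*lam*(t-s)) ds
    return (1.0 - math.exp(-2.0 * lam * t)) / (2.0 * lam)

def small_jump_second_moment(lam, a, alpha, eps):
    # int_0^eps z^2 * lam*a*alpha*z**(-alpha-1) dz, finite since alpha < 2
    return lam * a * alpha * eps ** (2.0 - alpha) / (2.0 - alpha)

def truncation_mse(lam, a, alpha, eps, t):
    return kernel_energy(lam, t) * small_jump_second_moment(lam, a, alpha, eps)

mse = truncation_mse(1.0, 1.0, 0.5, 0.01, 1.0)
```

As expected from the discussion above, the error vanishes as the cutoff ε shrinks, at the rate ε^{2−α} governed by the Lévy integrator alone.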
Yet another representation is available via the Markov property, for which exact sample path generation of the Lévy-driven OU process is possible in some cases. Specifically, where ∆ denotes the size of the time step, it holds that
X_{t+∆} = µ + e^{−λ∆}(X_t − µ) + ∫_t^{t+∆} e^{−λ(t+∆−s)} dL_s =ᵈ µ + e^{−λ∆}(X_t − µ) + ∫_0^∆ e^{−λ(∆−s)} dL_s, (5.21)
where the equality in law holds by the independence and stationarity of the increments of the Lévy driver. In the one-dimensional setting, exact time-discretisation schemes based on (5.21) for the stable OU process with stability α ∈ (0, 2) and the tempered stable OU process with stability α ∈ (0, 1) are possible, as a stable random variable S_{α,a} in R with stability α and scale a can be exactly simulated through the well-known Chambers-Mallows-Stuck-type representation (5.22), where V is a uniform random variable on (−π/2, π/2) independent of E, a standard exponential random variable. For α ∈ (0, 1), the transition law of the tempered stable OU process consists of a compound Poisson random variable and a tempered stable random variable with stability α; the latter can be generated exactly by an acceptance-rejection algorithm on S_{α,a} (5.22), so an exact time-discretisation scheme for sample path generation is readily available [57]. Therefore, we see that there is a reasonable basis for turning to classical recursive sampling by increments as opposed to Poisson truncation of the shot noise representation. However, there are two reasons why one may nevertheless prefer Poisson truncation of the shot noise representation. Firstly, if the application context requires the observation of jumps, such as in insurance mathematics, then simulation via increments cannot be applied. Secondly, the Markovian recursive representation (5.21) may not offer an exact sampling scheme for all OU processes. For instance, there is as yet no exact sampling method for tempered stable random variables with stability α ∈ (1, 2), which appear in the transition law of the tempered stable OU process with α ∈ (1, 2).
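For reference, a standardised symmetric α-stable draw via the Chambers-Mallows-Stuck-type representation can be sketched as follows. The mapping between the survey's scale parameter a and this standard form depends on the chosen parametrisation, so our own sketch below only draws the standardised variable.

```python
import math
import random

# Sketch of the Chambers-Mallows-Stuck-type representation for a standard
# symmetric alpha-stable random variable (alpha != 1): with V uniform on
# (-pi/2, pi/2) and E a standard exponential, independent of each other,
#   X = sin(alpha*V)/cos(V)**(1/alpha) * (cos((1-alpha)*V)/E)**((1-alpha)/alpha).

def symmetric_stable(alpha, rng):
    V = rng.uniform(-math.pi / 2, math.pi / 2)  # uniform angle
    E = rng.expovariate(1.0)                    # standard exponential
    return (math.sin(alpha * V) / math.cos(V) ** (1.0 / alpha)
            * (math.cos((1.0 - alpha) * V) / E) ** ((1.0 - alpha) / alpha))
```

One-sided (subordinator) variants use a shifted angle in the same formula; we omit them here to avoid committing to a particular scale convention.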
As a result, the simulation method by increments for this case does not carry the advantage of being exact [58]. Moreover, the problem of sampling multivariate stable random vectors hinders multidimensional generalisations of the recursive scheme based on the Markovian representation (5.21) in practice.
Before moving on, we briefly mention that stochastic integral processes with respect to Lévy processes of type G also admit shot noise representations based on (5.3). Thus, similar Poisson truncation schemes can be used for their sample path generation. Mean-squared error bounds for the truncation of shot noise representation for such stochastic integral processes are provided in [86,104], including for schemes based upon subordinated Gaussian representations of Lévy processes of type G.

Simulating infinitely divisible fields
We consider the real harmonisable multifractional Lévy motion (RHMLM) [27,64] as an example of simulating an infinitely divisible field via a shot noise representation. The RHMLM is defined by
X(x) = Re ∫_{R^d} (e^{i⟨x,ξ⟩} − 1) ‖ξ‖^{−h(x)−d/2} L(dξ), x ∈ R^d,
where L(dξ) is a Lévy random measure without Gaussian components with control measure ν(dz) on C, and the function h : R^d → (0, 1) plays the role of a spatially varying Hölder exponent. If the control measure ν(dz) is finite, then the RHMLM {X(x) : x ∈ R^d} admits the shot noise representation (5.23) [27], where {Γ_k}_{k∈N} is a sequence of arrival times of the standard Poisson process, {U_k}_{k∈N} is a sequence of iid uniform random vectors on S^{d−1}, and {Z_k}_{k∈N} is a sequence of iid random vectors distributed according to ν(dz)/ν(C), such that all random sequences are mutually independent, and c_d := 2π^{d/2}/(dΓ(d/2)). Note that the shot noise representation (5.23) is distinct from the shot noise representations we have seen thus far. In particular, in the generalised shot noise representation (3.10), the Poisson arrival times simulate the Lévy measure through a random walk on the space of jump sizes, while in (5.23), the Poisson arrival times simulate a random walk over the space of jump positions. Armed with the shot noise representation (5.23) for the RHMLM, a straightforward sample path generation method is to simulate the first n summands. Denoting this approximation by {X^{(n)}(x) : x ∈ R^d}, it was shown in [64] that, for q ≥ p and every n ≥ q/2 + q(max_K h)/d + 1, the truncation error ‖X − X^{(n)}‖_{p,K} admits a bound in terms of C_q D_{n,q}(max_K h), where ‖·‖_{p,K} denotes the L^p norm over a compact set K ⊂ R^d, C_q is a constant independent of n, and D_{n,q}(y) := Γ(n+1−q/2−qy/d)/Γ(n+1). When the control measure is infinite, it was suggested in [64] to separate the RHMLM into its large jumps and small jumps, where the former component can be simulated via the shot noise representation (5.23), while, under certain technical conditions, the latter can be approximated by a Gaussian field in the same vein as Sections 4.2 and 5.2.
We remark that the Poisson truncation scheme described in Section 4 can be applied if the removal of the conditioning on the number of jumps is desired.

Simulating Lévy-driven stochastic differential equations
As we have applied the theory of shot noise representations to approximate a large class of infinitely divisible processes (Section 5.2), the next natural step is to generalise the method to Lévy-driven stochastic differential equations (SDEs). Working towards a Poisson truncation method for approximating Lévy-driven SDEs, we first establish the setting of multivariate Lévy-driven SDEs of the form dX_t = µ(t, X_t) dt + σ(t, X_t) dB_t + θ(t, X_{t−}) dL_t, t ≥ 0, (5.24) where {B_t : t ≥ 0} is an l-dimensional Brownian motion, {L_t : t ≥ 0} is an l-dimensional Lévy process with Lévy measure ν(dz), and the coefficients µ : [0, +∞) × R^d → R^d and σ, θ : [0, +∞) × R^d → R^{d×l} are continuous. A sufficient condition for the existence and uniqueness of the solution to the SDE (5.24) is that the coefficients satisfy linear growth and Lipschitz conditions [2, Section 6.2]. Sample path generation for solutions to the general SDE is more difficult than for stochastic integral processes. In short, this is because the stochastic integral processes discussed in Section 5.2 are described explicitly, whether Markovian or not, over the time interval of interest, leading to straightforward implementation and error analysis. In contrast, the solution to the general SDE can only be described implicitly, that is, the state X_t appears on both sides of (5.24), thus requiring a recursive scheme. The exception is the rare case where a closed-form solution of the SDE is available, such as for the Lévy-driven OU process and the Doléans-Dade stochastic exponential. We have seen the explicitness of the OU process in Section 5.2.3, which leads to its simulation in the form of a stochastic integral process (2.4). For the latter, consider the Lévy-driven Doléans-Dade stochastic exponential described by the SDE dX_t = X_{t−} dL_t.
By Itô's lemma for discontinuous semimartingales, the explicit solution is available as
X_t = X_0 exp(L_t − (1/2)⟨L^c⟩_t) ∏_{s≤t} (1 + ∆L_s) e^{−∆L_s},
provided the support of the Lévy measure ν(dz) is contained in (−1, +∞), so that every factor in the product is positive. The stochastic exponential can be simulated through a shot noise representation of the jumps appearing on the right-hand side (see [51, Section 4.1]). More recently, exact methods based on rejection sampling have been developed [34,79], which can exactly simulate a class of univariate jump-diffusion processes with finite jump intensity without the need for recursion.
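The explicit solution can be checked numerically in the simplest finite-activity setting. The toy sketch below, entirely our own, takes L_t = bt plus finitely many jumps J_k > −1 (so the continuous quadratic variation vanishes and the exponential reduces to X_T = X_0 e^{bT} ∏_k (1 + J_k)), and compares it against a fine jump-adapted Euler discretisation of dX = X_{−} dL.

```python
import math
import random

# Toy check (our own, finite-activity case) of the Doleans-Dade exponential:
# for L_t = b*t + sum of jumps J_k with J_k > -1, the solution of dX = X- dL
# is X_T = X_0 * exp(b*T) * prod_k (1 + J_k).

def stochastic_exponential(x0, b, jump_sizes, T):
    x = x0 * math.exp(b * T)
    for j in jump_sizes:
        x *= 1.0 + j
    return x

def euler_exponential(x0, b, jump_times, jump_sizes, T, nsteps):
    grid = sorted(set([i * T / nsteps for i in range(nsteps + 1)] + jump_times))
    jumps = dict(zip(jump_times, jump_sizes))
    x = x0
    for s, u in zip(grid, grid[1:]):
        x += x * b * (u - s)       # Euler step for the drift part of dL
        if u in jumps:
            x += x * jumps[u]      # jump of the driver: X += X- * (Delta L)
    return x

rng = random.Random(5)
jt = sorted(rng.uniform(0.0, 1.0) for _ in range(4))
js = [rng.uniform(-0.5, 0.5) for _ in range(4)]
exact = stochastic_exponential(1.0, 0.1, js, 1.0)
approx = euler_exponential(1.0, 0.1, jt, js, 1.0, 200000)
```

With jumps handled exactly at their own times, the only discretisation error comes from the drift, so the two values agree to high accuracy.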
As usual with the study of differential equations, the immediate approach when met with the problem of investigating solutions to the SDE (5.24) is via the deterministic time-discretisation paradigm. The shortfall here is that simulation of the underlying Lévy integrator via increments does not observe jumps, which renders such a method inappropriate for certain practical scenarios, such as in the case of insurance models where individual claims often need to be observed. In order to incorporate the information of individual jumps for approximations of better quality, jump-adapted time-discretisation, where the discretisation of time includes jump timings, is preferred [16,75].
With jump-adapted methods, the necessity for computing a finite number of time steps requires the underlying jump component to correspond to a finite Lévy measure. In the case of the infinite Lévy measure, the only solution is via truncation to obtain a compound Poisson approximation of the jump component [61]. When possible, the accuracy of the numerical scheme can be improved via a Gaussian approximation of the discarded jumps [3,22]. In the case where the discretisation of time includes deterministic and random jump times, careful balance between the order of the scheme for the Gaussian component and the truncation of the Lévy measure for the jump component is desired, as the overall rate of convergence is only as fast as the slowest component [60]. In the case where the driver is a subordinated Lévy process, the Euler method in which the subordinator is approximated by truncation of its shot noise representation is studied in [91]. We also mention here the recent emergence of an alternative approach to approximating SDEs driven by Marcus-type Lévy noise via homogenisation of deterministic maps [21].
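A minimal jump-adapted Euler scheme in the finite-intensity case can be sketched as follows: merge a regular grid with the simulated jump times, take Euler steps for the continuous part between consecutive grid points, and apply θ(X_{−}) times the jump size at each jump time. All coefficients and names below are toy choices of our own.

```python
import math
import random

# Hedged sketch of a jump-adapted Euler scheme for
# dX = mu(X) dt + sigma(X) dB + theta(X-) dJ, with J a compound Poisson
# process of finite intensity; the time grid is the union of a regular grid
# and the simulated jump times.

def jump_adapted_euler(x0, T, nsteps, rate, jump_size, mu, sigma, theta, rng):
    # simulate the jump times of the compound Poisson driver on (0, T)
    jump_times, t = [], rng.expovariate(rate)
    while t < T:
        jump_times.append(t)
        t += rng.expovariate(rate)
    grid = sorted(set([i * T / nsteps for i in range(nsteps + 1)] + jump_times))
    jumps = {s: jump_size(rng) for s in jump_times}
    x, path = x0, [(0.0, x0)]
    for s, u in zip(grid, grid[1:]):
        dt = u - s
        dB = rng.gauss(0.0, math.sqrt(dt))
        x = x + mu(x) * dt + sigma(x) * dB   # Euler step, continuous part
        if u in jumps:                        # jump applied exactly at its time
            x = x + theta(x) * jumps[u]
        path.append((u, x))
    return path

rng = random.Random(7)
path = jump_adapted_euler(1.0, 1.0, 100, 3.0,
                          jump_size=lambda r: r.expovariate(2.0),
                          mu=lambda x: -x, sigma=lambda x: 0.2,
                          theta=lambda x: 1.0, rng=rng)
```

For an infinite Lévy measure, the same scheme applies once the small jumps have been truncated (or replaced by a Gaussian correction) as described above.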
The basic framework for simulating SDEs via Poisson truncation is described as follows [48]. Denote by ν_n(dz) the Lévy measure corresponding to the Poisson truncation of the shot noise series (5.2). For example, if the shot noise series we truncate corresponds to the inverse Lévy measure method in the isotropic case, then ν_n(dz) = 1_{[ε(n),+∞)}(‖z‖) ν(dz), where the cutoff threshold ε(n) for the magnitude of jumps decreases towards zero in n by the definition of the kernel (3.6). Thus, the corresponding approximation of the solution to the Lévy-driven SDE (5.24) is described by the following SDE, a mixture of integrals and a summation:
X_t^{(n)} = X_0 + ∫_0^t µ(s, X_s^{(n)}) ds + ∫_0^t σ(s, X_s^{(n)}) dB_s + Σ_{k: T_k ≤ t} θ(T_k, X_{T_k−}^{(n)}) ∆L_{T_k},
where {(T_k, ∆L_{T_k})}_k denote the jump times and jump sizes of the compound Poisson process with the finite Lévy measure ν_n(dz). Numerical methods and error analysis for Lévy-driven SDEs via truncation of the Lévy measure in full generality are still the subject of ongoing research. For other promising directions of future research, we conjecture that the shot noise representation has the potential to offer effective numerical methods for stochastic delay differential equations with jumps [25], backward stochastic differential equations with jumps [32,33] and stochastic partial differential equations with jumps [28,39].
Approximating expectations
As the summands of a shot noise series typically decay rapidly, much of the variability of the series is governed by its first few terms, highlighting the importance of the initial arrival times [14]. In our context, this fact can be effectively exploited for numerically approximating expectations [44,56]. Moreover, techniques involving changing the underlying probability measure [52,56] are also useful. In what follows, we discuss some topics relevant to shot noise representations for approximating expectations.
Throughout, we assume that an infinitely divisible random vector of our interest admits the shot noise representation (3.10). Crucially for the succeeding discussion, recall that the arrival times {Γ_k}_{k∈N} of the standard Poisson process equal in law the successive partial sums of iid standard exponential random variables, that is,
Γ_k = Σ_{j=1}^{k} E_j, k ∈ N, (6.1)
where {E_k}_{k∈N} is a sequence of iid standard exponential random variables. As the first (inter-)arrival time E_1 appears in all summands {H(Γ_k, U_k)}_{k∈N}, it accounts for a large portion of the variation of the underlying randomness. Similar statements can be made of the first few exponential interarrival times. We remark that Lévy processes and more general infinitely divisible processes are within our scope through the arrival times {Γ_k}_{k∈N} in their shot noise representations, such as (5.2), (5.9) and (5.13). Moreover, as information on individual jumps is perhaps even more crucial in the case of infinitely divisible processes, the following discussions are all the more pertinent in that more general setting.

Effective dimension on interarrival exponentials
Intuitively, in the case of a shot noise representation (3.10), the faster the kernel H(·, ξ) decays, the greater the proportion of variation explained by the first few exponential random variables {E_1, ..., E_n} in (6.1). This leads to the notion that, in many cases, the variation can be captured by considering only a small number of the exponential interarrival times {E_k}_{k∈N}. This idea is formalised in [44] through the cumulative explanatory ratio (CER), interpreted as the proportion of the variance explained by the first few dimensions. Let X be an infinitely divisible random vector in R^d without a Gaussian component and f : R^d → R a continuous function, and suppose we want to compute E[f(X)] provided that Var(f(X)) is finite. The CER associated with the first n interarrival times is defined by
CER_n := Cov(X_f(n), X_f(0)) / Var(X_f(0)), (6.2)
where, for n ∈ {1, ..., N}, X_f(n) is built from the shot noise series with part of the interarrival sequence replaced by {Ẽ_k}_{k∈N}, an iid copy of the sequence of exponential interarrival times {E_k}_{k∈N}. A high CER (6.2) corresponds to a low effective dimension of the shot noise representation. This is a desirable property: rather than using Monte Carlo methods with convergence rate O(n^{−1/2}), quasi-Monte Carlo methods can be reliably applied to the first few interarrival times to achieve the faster convergence rate O(n^{−1}(ln n)^d) in practice. This is in contrast to high-dimensional problems, for which the speed improvement is often too marginal to justify the use of quasi-Monte Carlo methods. The CER (6.2) for stable random variables and a modified CER for the randomised quasi-Monte Carlo method for the gamma random variable were investigated in [44] and found to be remarkably high with only the first few terms of the respective shot noise series. Thus, quasi-Monte Carlo methods are applicable for greater accuracy in expectation computations.

Stratification on interarrival exponentials
We describe the variance reduction method of stratifying the exponential interarrival times {E_k}_{k∈N} of (6.1), as investigated in [56]. We present the technique of stratified sampling for computing the expectation of a random variable F involving a shot noise representation, say (3.10), which is built upon the sequence {E_k}_{k∈N}. For simplicity, we only consider the stratification of the first interarrival time E_1. It should be noted that the stratified sampling technique can be extended to further interarrival times. However, the extension may be computationally taxing with only marginal returns, due to the low effective dimension of the shot noise representation (Section 6.1) and the rapid growth of the number of strata with the Monte Carlo dimension [44].
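Stratifying E_1 amounts to drawing the underlying uniform variate from each of m equiprobable strata of (0, 1) and mapping it through the inverse exponential distribution function. The following toy sketch, our own, illustrates this on the functional f(E_1) = e^{−E_1}, whose exact expectation under a standard exponential law is 1/2.

```python
import math
import random

# Our own toy sketch of stratified sampling on the first interarrival time:
# draw U uniformly within each of m equiprobable strata of (0, 1) and set
# E_1 = -ln(1 - U), the inverse-cdf map to a standard exponential.

def stratified_estimate(f, m, rng):
    total = 0.0
    for j in range(m):
        u = (j + rng.random()) / m          # uniform draw on stratum (j/m, (j+1)/m)
        total += f(-math.log(1.0 - u))      # evaluate f at the stratified E_1
    return total / m

rng = random.Random(1)
est = stratified_estimate(lambda x: math.exp(-x), 1000, rng)
```

Since f here is monotone, the stratified estimator's error is bounded deterministically by the total within-stratum variation divided by m, far below the O(m^{−1/2}) Monte Carlo rate.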

Importance sampling on all individual jumps
Yet another importance sampling method, different from the one considered in Section 6.3, can be constructed using more information of the sample path via density transformations between individual jumps [52]. Let ({X_t : t ≥ 0}, P) and ({X_t : t ≥ 0}, Q) be Lévy processes in R^d characterised by the triplets (γ_P, A, ν_P) and (γ_Q, A, ν_Q), respectively. Given some technical conditions on the distance between the two laws, it holds that the probability measures P and Q are equivalent and dP/dQ restricted to F_t equals e^{U_t}, Q-a.s., where the stochastic process {U_t : t ≥ 0} in R satisfies
U_t = ⟨η, X_t^c⟩ − (t/2)⟨η, Aη⟩ − t⟨γ_Q, η⟩ + lim_{ε→0} ( Σ_{s∈(0,t]: ‖∆X_s‖>ε} ϕ(∆X_s) − t ∫_{‖z‖>ε} (e^{ϕ(z)} − 1) ν_Q(dz) ), Q-a.s., (6.5)
where X_t^c := X_t − Σ_{s∈(0,t]} ∆X_s denotes the continuous part of the sample path with ∆X_s := X_s − X_{s−}, ϕ := ln(dν_P/dν_Q), and the vector η ∈ R^d satisfies γ_Q − γ_P − ∫_{‖z‖≤1} z (ν_Q − ν_P)(dz) = Aη. Moreover, the stochastic process {U_t : t ≥ 0} converges uniformly in t on any bounded interval Q-a.s., and E_Q[e^{U_t}] = E_P[e^{−U_t}] = 1 for every t ≥ 0. For simplicity, we present the one-dimensional case. Suppose we want to evaluate E_P[F], where F is a random variable involving the sample path of the Lévy process {X_t : t ∈ [0, T]}. As it holds that E_P[F] = E_Q[e^{U_T} F], we can estimate our quantity of interest via Monte Carlo iteration of the latter, that is, lim_{n→+∞} (1/n) Σ_{k=1}^n e^{U_{k,T}} F_k = E_P[F], Q-a.s., where {F_k}_{k∈N} is a sequence of iid copies of the random variable F and {U_{k,t} : t ∈ [0, T]}_{k∈N} is a sequence of iid copies of {U_t : t ∈ [0, T]}. Observe from (6.5) that generating the sample path {U_t : t ≥ 0} requires the individual jumps of the Lévy process, so the implementation generally benefits from, and often outright requires, a shot noise representation, for instance when F involves infinitely divisible processes driven by the Lévy process {X_t : t ≥ 0}.
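The identity E_P[F] = E_Q[e^{U_T} F] is easiest to see in the finite-activity case. In the toy sketch below, entirely our own, X is a Poisson process with intensity λ_P under P and λ_Q under Q with a common jump law, so that ϕ = ln(λ_P/λ_Q) and U_T = N_T ln(λ_P/λ_Q) − T(λ_P − λ_Q); we estimate E_P[N_T] = λ_P T while simulating under Q. All parameter names are ours.

```python
import math
import random

# Hedged illustration of the measure-change identity E_P[F] = E_Q[e^{U_T} F]
# for Poisson processes with intensities lam_p (under P) and lam_q (under Q):
# here phi = ln(lam_p/lam_q) and U_T = N_T*ln(lam_p/lam_q) - T*(lam_p - lam_q),
# and we take F = N_T, the number of jumps on [0, T].

def importance_estimate(lam_p, lam_q, T, nsim, rng):
    total = 0.0
    for _ in range(nsim):
        # simulate the jump count under Q via exponential interarrival times
        n, t = 0, rng.expovariate(lam_q)
        while t < T:
            n += 1
            t += rng.expovariate(lam_q)
        u_T = n * math.log(lam_p / lam_q) - T * (lam_p - lam_q)
        total += math.exp(u_T) * n          # weight e^{U_T} times F = N_T
    return total / nsim

rng = random.Random(0)
est = importance_estimate(1.0, 3.0, 1.0, 20000, rng)   # targets lam_p * T = 1
```

Simulating under the higher intensity λ_Q while reweighting leaves the estimator unbiased for the P-expectation, which is the mechanism exploited by the jump-level density transformation above.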

Concluding remarks
In this survey, we have summarised shot noise representations with a view towards sampling infinitely divisible laws and generating sample paths of related processes. In particular, we reviewed the important aspects of shot noise representation through the rather scenic route of the Lévy-Itô decomposition approach. We provided shot noise representations of various popular laws and stochastic processes in the literature. Through our description of the truncation of shot noise representations, a general and systematic method for simulation and computation of expectations was discussed. Examples of simulation recipes were provided, and the key results on error analysis together with our numerical demonstrations should provide confidence that truncation of shot noise representations not only meets the need for accuracy, but also the desire for a straightforward and computationally feasible approach to simulation in practice.
We hope that the present survey makes clear the practicality of numerical methods for simulating infinitely divisible laws and related processes based on shot noise representations, and encourages future development expanding the technique. We reiterate that the approximation of Lévy-driven SDEs and more general shot noise processes via truncation of shot noise representations is still the subject of further research. As mentioned previously, future directions for this area of research include numerical schemes for stochastic delay differential equations, backward stochastic differential equations and stochastic partial differential equations with jumps based on shot noise representations. As shot noise representation not only yields a viable method of sample path generation for a wide range of stochastic processes, but can also provide insights into their properties, we expect further studies of such stochastic processes to continue to invoke shot noise representation techniques. For this reason, the investigation of shot noise representations and their truncation will remain a priority for the foreseeable future.