Extremes of Gaussian Processes with Random Variance

Let ξ(t) be a standard locally stationary Gaussian process with covariance function satisfying 1 − r(t, t + s) ∼ C(t)|s|^α as s → 0, where 0 < α ≤ 2 and C(t) is a positive bounded continuous function. We are interested in the exceedance probabilities of ξ(t) with a random standard deviation η(t) = η − ζt^β, where η and ζ are non-negative bounded random variables. We investigate the asymptotic behavior of the extreme values of the process ξ(t)η(t) under certain conditions which depend on the relation between α and β.


Introduction and Main Results
Let (X(t), Y), t ∈ R, be a random element, where X(t) is a random process taking values in R and Y is an arbitrary random element. We say X(t) is a conditionally Gaussian process if the conditional distribution of X(·) given Y is Gaussian. We investigate the probabilities of large extremes

P_u(T) := P( sup_{t∈[0,T]} X(t) > u ), as u → ∞, where T > 0.

Denote the random mean of X conditioned on Y by m(t, Y) := E(X(t) | Y) and the random covariance by cov(X(t), X(s) | Y); the conditional variance var(X(t) | Y) is the random variance of X.
Such processes were introduced in applications in finance, optimization and control problems. To the best of our knowledge, the paper by Adler et al. [1] was the first mathematical work where probabilities of large extremes of conditionally Gaussian processes were considered. The authors considered sub-Gaussian processes as an example of stable processes, that is, processes of the type X(t) = ζξ(t), where ξ(t) is a stationary Gaussian process and ζ is a stable random variable, independent of ξ(·). In our notation, Y = ζ and X(t) = Y ξ(t), so we have a Gaussian process with random variance. That paper dealt with the mean number of upcrossings of a level u, as in the Rice formula, which can be applied to smooth Gaussian processes. Further results on this problem are given in [9], [2], [7], [8]. For example, Doucet et al. [2] modeled the behavior of latent variables in neural networks by Gaussian processes with random parameters. Lototsky [7] studied stochastic parabolic equations whose solutions are Gaussian processes, where the coefficients are modeled by a dynamic system. In this paper we consider more general Gaussian processes.
The aim of the present paper, and of subsequent ones in preparation, is to develop asymptotic methods for large extremes of conditionally Gaussian processes. Our intention is to extend the Gaussian tools to a wider class of random processes. The asymptotic theory for large extremes of Gaussian processes and fields is already well developed; see [11], [3] and the references therein.
A good part of this asymptotic theory for large extremes of conditionally Gaussian processes is based on the corresponding theory for Gaussian processes. The latter began with the celebrated Pickands theorem [10] on large extremes of stationary Gaussian processes and its extensions to non-stationary Gaussian processes, as in Hüsler [5] for certain types of non-stationarity, and in Piterbarg and Prisyazhn'uk [12], where the non-stationary process has a non-constant variance with a unique point of maximum. In [6] we also considered processes of the type ξ(t)η(t), but with smooth processes η(t). In this paper we investigate the case of less smooth processes η. We also allow ξ(t) to be a locally stationary Gaussian process, instead of the stationary Gaussian process in [6].
We study the probability

P( ξ(t)η(t) > u for some t ∈ [0, T] ) as u → ∞,

where T < ∞; here η(t) can be interpreted as the random standard deviation of the Gaussian process ξ(t). In this paper we further assume that

η(t) = η − ζt^β,

where η and ζ are non-negative bounded random variables, independent of ξ(·), and that η ≥ s_0 (a.s.) for some s_0 > 0.
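As a rough illustration (not part of the original argument), the exceedance probability above can be estimated by a crude grid-based Monte Carlo sketch. All concrete choices below are our own assumptions, not taken from the paper: ξ is an Ornstein–Uhlenbeck process with r(s) = exp(−|s|), hence α = 1 and C(t) ≡ 1; η is uniform on [1, 2], so η ≥ s_0 = 1 a.s.; ζ is uniform on [0, 1]; and a grid maximum necessarily underestimates the true supremum.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_exceedance(u, T=1.0, beta=0.5, n_grid=200, n_mc=2000):
    """Crude Monte Carlo sketch of P(sup_{[0,T]} xi(t)*eta(t) > u) on a grid.

    Illustrative assumptions (not from the paper): xi is Ornstein-Uhlenbeck,
    r(s) = exp(-|s|), so alpha = 1, C(t) = 1; eta ~ U[1, 2] (eta >= s_0 = 1),
    zeta ~ U[0, 1], both independent of xi.
    """
    t = np.linspace(0.0, T, n_grid)
    cov = np.exp(-np.abs(t[:, None] - t[None, :]))       # OU covariance matrix
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_grid))
    xi = L @ rng.standard_normal((n_grid, n_mc))         # columns: paths of xi
    eta = rng.uniform(1.0, 2.0, n_mc)                    # random standard deviation
    zeta = rng.uniform(0.0, 1.0, n_mc)
    sd = eta[None, :] - zeta[None, :] * t[:, None] ** beta   # eta(t) per path
    return float(np.mean((xi * sd).max(axis=0) > u))

p = mc_exceedance(u=3.0)   # decreases rapidly as u grows
```

Note that with these bounds (η ≥ 1, ζ ≤ 1, T = 1) the random standard deviation η(t) stays non-negative on [0, T].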
We approximate the tail of the standard normal distribution by the well-known relation

Ψ(u) = (1/(√(2π) u)) exp(−u²/2)(1 + o(1)) as u → ∞.

We use the Pickands constant H_α, which is defined by

H_α = lim_{T→∞} T^{−1} E exp( max_{t∈[0,T]} χ(t) ),

where the process χ(t) is a shifted fractional Brownian motion with expectation Eχ(t) = −|t|^α and covariance function cov(χ(t), χ(s)) = |t|^α + |s|^α − |t − s|^α.

First we assume that the conditional expectation E(ζ^{−1/β} | η) is bounded for almost all given η, which implies that ζ is strictly positive with probability one. If the Gaussian process is stationary, we note that for almost all given η and ζ the conditions of Theorem D.3 of Piterbarg [11] also hold for the conditional probability

P_{u,β} := P( ξ(t)(η − ζt^β) > u for some t ∈ [0, T] | η, ζ ).

It can be considered as a ruin probability for Gaussian processes with deterministic variance. In the following theorems we show that, under the condition above, the results can be generalized to locally stationary Gaussian processes with a random variance.

Theorem 1.1. Let ξ(t) be a standard locally stationary Gaussian process with α ∈ (0, 2]. Suppose that the random variable η has a bounded density function f_η(y) which is k times continuously differentiable in a neighborhood of σ = σ(η), for some k = 0, 1, 2, . . ., and satisfies f_η^{(r)}(σ) = 0 for r = 0, 1, . . . , k − 1 and f_η^{(k)}(σ) ≠ 0. Further, suitable conditions on the function E(ζ)(y) are assumed.

(a) If α, β ∈ (0, 1), then condition R is additionally assumed.

These results show that the exact asymptotic behavior of the ruin probability P_{u,β} depends on the local behavior of the marginal density f_η(y) at σ(η) and on the relation between α and β: β < α, β = α and β > α. We also notice that the impact of the function C(t) of the locally stationary Gaussian process is restricted in some cases to C(0), namely if α ≤ β or β ≥ 1, and that the whole function C(t) plays a role only in the case β < 1.
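The tail relation for Ψ(u) above can be checked numerically with the standard library alone; the sketch below (our own illustration, with function names that are not from the paper) compares the exact normal tail, computed via the complementary error function, with its leading-order asymptotic.

```python
import math

def Psi(u):
    """Exact standard normal tail P(N(0,1) > u), via erfc."""
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def Psi_asym(u):
    """Leading-order asymptotic exp(-u^2/2) / (u * sqrt(2*pi))."""
    return math.exp(-0.5 * u * u) / (u * math.sqrt(2.0 * math.pi))

# The ratio tends to 1 as u -> infinity (Mills' ratio: 1 - 1/u^2 + O(u^-4)).
r8 = Psi(8.0) / Psi_asym(8.0)
r20 = Psi(20.0) / Psi_asym(20.0)
```

The relative error shrinks like u^{−2}, consistent with the next term of Mills' ratio.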

Remark:
In the case C(t) ≡ C, as for stationary Gaussian processes ξ(t), the integral over the function C(t) in (a) and (c) simplifies to C^{1/α} T^{1−β}/(1 − β).

In the next section we introduce some necessary lemmas; Theorem 1.1 is proved in Section 3 and Theorem 1.2 in Section 4.

Lemmas
For our derivations, some useful lemmas are stated in this section.
The first lemma is a reformulation of Lemma 6.1 of Piterbarg [11] for the case of a stationary Gaussian process with general C(t) = C > 0, by use of a time transformation.
Lemma 2.1. For any z > 0 and h > 0, the stated asymptotic relation holds as u → ∞.

The more general random-variance case with (1 − ζt^β)² is dealt with in Lemma 3.1. For the derivation of the asymptotic behavior, we state a common result based on the saddle-point approximation in the following proposition.
To prove this, make the change of variable y = u²(x − σ) in the integral and use the saddle-point approximation; see Fedoruk [4].
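A Laplace-type computation of this kind can be verified numerically. The sketch below is our own illustration, not the proposition itself: it takes a density g(y) = y²(1 + y), whose first non-vanishing derivative at 0 has order k = 2 with g''(0) = 2, and checks that u^{2(k+1)} ∫_0^b g(y) exp(−c u² y) dy tends to g''(0)/c³ = 2/c³ as u → ∞ (Watson's lemma for a linear exponent).

```python
import math
import numpy as np

def scaled_integral(u, c=1.5, b=1.0, n=200001):
    """u^6 * int_0^b y^2 (1 + y) exp(-c u^2 y) dy, via the trapezoidal rule."""
    y = np.linspace(0.0, b, n)
    f = y ** 2 * (1.0 + y) * np.exp(-c * u * u * y)
    dy = y[1] - y[0]
    trap = dy * (f.sum() - 0.5 * (f[0] + f[-1]))   # composite trapezoidal rule
    return u ** 6 * trap

limit = 2.0 / 1.5 ** 3        # g''(0) / c^3 for c = 1.5
val = scaled_integral(u=30.0)  # close to the limit for large u
```

The next-order term is 6/(c⁴u²), so the deviation from the limit decays like u^{−2}.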
Another asymptotic approximation concerns a particular case of "delta-wise" sequences.
Lemma 2.3. Let g(x) be a non-negative and bounded function on [0, b], b > 0, which is positive and continuous at 0. Then for any h > 1, a > 0 and any α ∈ (0, 1], the stated limit holds.

Proof: Choose ε > 0 arbitrarily small, and let δ > 0 be such that |g(x) − g(0)| ≤ ε for all x ∈ [0, δ]. Change the variable y = a x u² in the first integral, and check the limit. If α < 1, we can bound the second integral by an expression involving some constant. If α = 1, it is easy to see that the second integral is bounded by a constant. Then the statement follows by letting ε → 0.

For the proof of Theorem 1.2 we need a further result, which is necessary in other cases too: Lemma 2.4, an extension of Lemma 6.3 in Piterbarg [11] to a stationary zero-mean Gaussian process ξ(t), t ∈ [0, T], with the usual correlation assumption. Then there exists some ε > 0 such that the stated bound holds, where C_0 is the absolute constant from Lemma 6.3 of [11] and where the intervals Δ_{l,K_l}, l = 1, 2, cover the points t_1 and t_2, respectively. Apply Lemma 6.3 in [11] to each term p_{ij} in the sum. The distance between Δ_{1,i} and Δ_{2,j} is bounded below for all u sufficiently large. To get the second inequality we use the inequality (x + y)^α ≥ a(x^α + y^α), valid for all positive x, y with some constant a > 0 depending only on α. By summing the bounds, we get the stated assertion.
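The "delta-wise" mechanism behind Lemma 2.3 is that x ↦ a u² exp(−a x u²) concentrates at 0 as u → ∞, so only g(0) survives in the limit. A minimal numerical sketch of this (our own illustrative choice g(x) = 1 + x, so g(0) = 1; not the lemma's exact statement):

```python
import numpy as np

def delta_integral(u, a=2.0, b=1.0, n=400001):
    """a u^2 * int_0^b g(x) exp(-a x u^2) dx with g(x) = 1 + x, trapezoidal rule.

    As u grows, the exponential kernel acts like a delta function at 0,
    so the value approaches g(0) = 1.
    """
    x = np.linspace(0.0, b, n)
    f = (1.0 + x) * np.exp(-a * x * u * u)
    dx = x[1] - x[0]
    return a * u * u * dx * (f.sum() - 0.5 * (f[0] + f[-1]))
```

For this g the exact value is 1 + 1/(a u²) up to an exponentially small truncation term, so the convergence rate u^{−2} is visible directly.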

Proof of Theorem 1.1
We approximate the locally stationary Gaussian process on small intervals by stationary Gaussian processes. Since C(t) is positive and continuous at 0, for any small ε > 0 and δ sufficiently small we have sup_{[0,δ]} |C(t) − C(0)| ≤ ε. Let X⁺(t) and X⁻(t) be two standard stationary Gaussian processes with covariance functions r⁺(t) and r⁻(t), respectively, where the stated inequalities hold for all t, s ≥ 0. Such stationary Gaussian processes exist. We apply Slepian's lemma (cf. Theorem C.1 of Piterbarg [11]) to derive the bounds. In the same way we define further stationary Gaussian processes X_k⁺(t) and X_k⁻(t). These processes approximate the locally stationary Gaussian process ξ(t) on the intervals I_k via Slepian's inequality.
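The Slepian comparison used here can be seen in a small simulation: if two centered stationary Gaussian processes satisfy r⁺(s) ≥ r⁻(s) pointwise, the process with the larger correlation has the smaller exceedance probability for the maximum. The two Ornstein–Uhlenbeck kernels below are our own illustrative choices, not the processes constructed in the proof.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_exceed_prob(corr, u=2.0, n_grid=100, dt=0.05, n_mc=5000):
    """Monte Carlo estimate of P(max_k X(t_k) > u) for a stationary
    Gaussian process sampled on a grid, given its correlation function."""
    t = np.arange(n_grid) * dt
    cov = corr(np.abs(t[:, None] - t[None, :]))
    L = np.linalg.cholesky(cov + 1e-9 * np.eye(n_grid))
    X = L @ rng.standard_normal((n_grid, n_mc))
    return float(np.mean(X.max(axis=0) > u))

p_plus = max_exceed_prob(lambda s: np.exp(-0.5 * s))   # larger correlation
p_minus = max_exceed_prob(lambda s: np.exp(-2.0 * s))  # smaller correlation
# Slepian's inequality predicts p_plus <= p_minus.
```

Here exp(−0.5 s) ≥ exp(−2 s) for all s ≥ 0, so the ordering of the two estimates illustrates the inequality.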
Proof: (a) We use the intervals I_k as a partition of the interval [0, T]. Since an interval of length smaller than u^{−2/β} has no asymptotic effect on the probability, we assume without loss of generality that the number n of intervals is an integer. For any given ζ, by Theorem D.2 of Piterbarg [11], the stationarity of ξ(t) and the time transformation making C = 1 as in Lemma 2.1, we get the upper bound for the conditional probability, which is used for the dominated convergence:
where C 1 is some constant, not depending on ζ and k.
By a reformulation of Theorem D.3 (i) in Piterbarg [11] for stationary Gaussian processes (there the variance attains its maximum at an interior point of the segment [0, δ], with some δ > 0, whereas here our variance attains its maximum at 0, a boundary point of [0, δ], so the factor 2 in that theorem is replaced by the factor 1), and with the time transformation standardizing C(0) + ε to 1 as in Lemma 2.1, we know that the stated asymptotics hold for any given ζ > 0. The analogous result holds for the X⁻(t) processes with C(0) − ε. With Slepian's inequality we get the bounds for the conditional probability of the analogous event with ξ(t), for any ζ > 0. Similar inequalities hold for the other processes X_k⁺(t) and X_k⁻(t), as mentioned. This implies the upper bound. The first term is approximated in (3). Each term of the sum can be bounded from above as in the domination argument above.
Taking the expectation over ζ, the integral term with the factor ζ^{−1/β} admits an integrable dominating bound. Furthermore, since the integral converges pointwise to 0 for any ζ > 0 as u → ∞, the sum is bounded by o(u^{2/α−2/β} Ψ(u)). Hence the first term in (4) dominates.
For the lower bound we use the corresponding lower approximation. Since Eζ^{−1/β} < ∞, we get the stated result by dominated convergence and letting ε → 0.
(b) Split the interval [0, T] into subintervals of length u^{−2/β}, with again n := ⌊Tu^{2/β}⌋ = Tu^{2/β} = Tu^{2/α} assumed to be an integer. The proof follows the steps of the proof of part (a). However, since α = β, we need to apply Lemma D.1 of Piterbarg [11] for any given ζ to show the domination. By a reformulation of Theorem D.3 (ii) in Piterbarg [11], and with the time transformation standardizing C(0) + ε to 1, as above, we obtain the stated asymptotics for any given ζ. The analogous result holds for the lower approximation with X⁻(t) and C(0) − ε. The approximation for the maximum of the process on the interval [δ, T] from part (a) can be used again. Since Eζ^{−1/α} = Eζ^{−1/β} < ∞, we get the stated result by dominated convergence and letting ε → 0. The domination argument also yields the corresponding bound for the remainder terms.

(c) For this case, we split the interval [0, T] into subintervals I_k = [kδ, (k + 1)δ] of length δ with 0 < δ < min(1, T), and define new standard stationary Gaussian processes. Then with the result of part (b) we get the stated bound by stationarity, where C > 0 is some constant. We already mentioned that E(H) is finite. By Theorem D.3 (iii) of Piterbarg [11], the stated asymptotics hold for any given ζ. The same result holds for the lower approximation with X₀⁻(t). The interval (δ, T] plays no role in the asymptotic result since, using Theorem D.2 of Piterbarg [11] and the domination argument from the proof of part (a), the corresponding bound holds for some constant C > 0. We note that E exp(−ζδ^β u²) → 0 as u → ∞. Then by dominated convergence, the third statement follows.
This lemma is now applied in combination with Proposition 2.2 to prove the first main theorem.

Proof of Theorem 1.2
In the following lemmas we always assume that ζ is a non-negative bounded random variable, independent of ξ(·), with density function f_ζ. As stated in the theorem, we have to discuss different cases, depending on whether β <, =, or > α and whether α <, =, or > 1, as shown in Figure 1. The following lemmas deal with the different cases in the given order (a)-(f), applying similar ideas. We begin with case (a).
In each interval Δ_k we approximate the locally stationary Gaussian process ξ(t) by the stationary Gaussian processes X_k⁺(t) and X_k⁻(t), for some ε₁ → 0 as u → ∞. We apply Slepian's lemma again, as in Lemma 3.1, to get the following approximations.
First, we estimate the upper bound of the probability, using the stationarity of X_k⁺(t) and Theorem D.2 in Piterbarg [11], with the time transformation, since C⁺(t_k) is in general not equal to 1. For u sufficiently large we have the stated bound, where γ(u) ≤ γ̄(u) ↓ 0 as u → ∞ and does not depend on ζ and k, because C(t) is continuous and bounded.
By the assumptions on f_ζ(x), for any arbitrarily small ε > 0 there exists some δ > 0 such that the stated inequality holds for all small arguments. Hence the expectation in (5) is bounded, using the transformation y = T^β u² z, where C is some constant for the remaining integral on [δ, σ(ζ)]. By Fubini's theorem and dominated convergence, the above double integral can be evaluated. Let us consider the inner integral; we use the variable transformation s = (v/y)^{1/β} T, and the last integral tends to the constant ∫_0^T s^{−β}(C⁺(s))^{1/α} ds = J⁺(T). Thus we split the outer integral in (6) into two parts, v ≤ g(u)u² and v > g(u)u², with g(u) → 0 such that g(u)u² → ∞. The integral over v > g(u)u² is of smaller order than the first part because of the exponential function. Thus we approximate the first part. Since the first term of (5) equals o(u^{2/α−2} Ψ(u)) as u → ∞, we obtain the stated asymptotics for the maximum by combining the approximations, where γ̃(u) ↓ 0 as u → ∞. Note also that J⁺(T) → J(T) = ∫_0^T s^{−β} C^{1/α}(s) ds as u → ∞, by letting ε₁ → 0.

For the lower bound, with the same intervals Δ_k and Bonferroni's inequality, we proceed as follows. With the approximating stationary Gaussian process X_k⁻(t) and Theorem D.2 of Piterbarg [11], the expectation of the first sum in (7) is bounded below in a similar way, by interchanging the integration and using again Fubini's theorem and dominated convergence, where γ₁(u) ↓ 0 as u → ∞, not depending on ζ, and where δ* = min{δ, vu^{2β/α−2}/(log u)^β}. Transforming the variable z to s = (v/zu²)^{1/β}, we get the stated limit by letting u → ∞ and ε₁ → 0. Now we consider the approximation of the integral in (8) from below, similarly to the upper approximation, with g(u) → 0 such that g(u)u² → ∞ as u → ∞.
Combining the bounds, we note that the lower bound of the first sum in (7) converges to the same limit as the corresponding upper approximation by letting u → ∞ and ε₁, ε → 0.
It remains to bound the double sum in (7) from above. For the third sum of (10), note that for all k + N ≤ l ≤ n, considering the variance of the corresponding Gaussian field, there exists a constant b satisfying the stated inequality. Therefore, by the Borel theorem (cf. Theorem D.1 in Piterbarg [11]), we get the stated bound as u → ∞, where δ := 2 min_{|t−s|≥ε/4} (1 − r(t, s)) > 0.
For the conditional probabilities involving neighboring intervals Δ_k and Δ_l with l − k ≤ N, we approximate ξ(t) by X⁺(t) with C = C_max + ε₁ instead of C⁺(t_k) and C⁺(t_l), not depending on k. By stationarity and Theorem D.2 of Piterbarg [11], we estimate the first sum of (10), where γ(u), γ̄(u) ↓ 0 as u → ∞, not depending on k and ζ. Then, as in the proof of the upper bound of the probability, we obtain that the expectation of the first term of (10) equals o(u^{2/α−2} Ψ(u)) as u → ∞.
For the second term of (10), we apply Lemma 2.4. We have the stated bound for a suitable constant C₂. The expectation of the last sum is then asymptotically negligible as u → ∞.
Since ε and ε₁ are arbitrarily small, the assertion follows as u → ∞.

Now we deal with the case (b).
Lemma 4.2. Let α ∈ (0, 1] and β = 1. Suppose that the density f_ζ(x), x ≥ 0, is positive and continuous at 0. Then the stated asymptotics hold for any T ∈ (0, (σ(ζ))^{−1}).

Proof: For any h > 1 and u sufficiently large, we split the interval [0, T] into subintervals of length hu^{−2/α}, denote them by Δ_k = [khu^{−2/α}, (k + 1)hu^{−2/α}], and assume again n = T/(hu^{−2/α}) ∈ N without loss of generality. Let t_k = khu^{−2/α}. Using the approximation as above with the stationary Gaussian processes X_k⁺(t) with C⁺(t_k), their stationarity, Slepian's lemma, Lemma 2.1 and finally Lemma 2.3 in the last step, we estimate the upper bound of the probability in a similar way, where γ(u) ↓ 0 as u → ∞, not depending on k and ζ. Now we split the sum into two parts, k ≤ εn and k > εn, with some ε > 0. In the first partial sum we use the stated bound; hence the first partial sum is bounded above accordingly. The second partial sum is of smaller order since exp(−εT zu²) → 0 and also E(exp(−εT zu²)) → 0.

For the lower bound, by Bonferroni's inequality, choose ε > 0 small and let u be large enough. Then with Lemma 2.1, the first sum in (11) is bounded below, where γ₁(u) ↓ 0 as u → ∞, not depending on k and ζ, and using 2 − 2/α ≤ 0. Note that the relevant factor is positive and continuous at 0, hence so is g(z)f_ζ(z). Then with Lemma 2.3, we obtain that (12) is bounded below accordingly, where γ₁(u) ↓ 0 as u → ∞, not depending on ζ.
Note that E exp( max_{t∈[0, h(C⁻(0))^{1/α}]} χ(t) ) is finite. The double sum in (11) is split again into three parts. The third sum in (13) can be estimated similarly as in the proof of Lemma 4.1, i.e. with Borel's lemma. We estimate the expectation of the second sum in (13) by Lemma 6.3 of Piterbarg [11], since its conditions are satisfied by X⁺(t) with C.
as h → ∞, where C₁ and C₂ are some constants not depending on h, and by using the concavity of the function x^α. We also applied Lemma 2.3 in the second-to-last step.
The next lemma considers the case (c) of Figure 1.
Proof: Note that where δ is chosen in such a way that 0 < δ < 2 − 2β.
a) Considering the first term of (14), we split [0, T] into subintervals of length u^{−2} log u, denote them by Δ_k = [ku^{−2} log u, (k + 1)u^{−2} log u], and without loss of generality assume again n = T/(u^{−2} log u) ∈ N. We use again the approximating Gaussian processes X_k⁺(t) with C⁺(t_k) on Δ_k, where t_k = ku^{−2} log u. Then with Theorem D.2 of Piterbarg [11] and the fact that H₁ = 1, the first term of (14) has the stated upper bound, where γ(u) ↓ 0 as u → ∞, not depending on k and ζ = z, and we write C*(t) := C^{1/α}(t) + 2ε₁.
For any small ε > 0, let u be sufficiently large so that the stated inequality holds for all small arguments, since the density f_ζ is positive and continuous at 0. Hence by Fubini's theorem and dominated convergence, we bound the integral in (15). As in the proof of Lemma 4.1, we evaluate the inner integral as u → ∞, since the lower limit of integration tends to 0 for v ≤ g(u)u^{2β+δ}, with g(u) → 0 such that g(u)u^{2β+δ} → ∞, and ε₁ is small. Therefore we obtain the upper bound for the first part (v ≤ g(u)u^{2β+δ}) of the outer integral. The second part of the outer integral is of much smaller order because of the exponential term, which implies the stated bound for the first term of (14) as u → ∞, where γ̃(u) ↓ 0 as u → ∞.
To derive the lower bound of the first term of (14), we use Bonferroni's inequality. From (8) and (9) in the proof of Lemma 4.1, we know the lower bound of the first term in (16); setting the upper endpoint of the integration interval to u^{2β−2+δ}, we derive the lower bound similarly to the upper bound, where γ₁(u) ↓ 0 as u → ∞. As in (10), we divide the double sum in the second term of (16) into three parts. Then from the proof of Lemma 4.1 we know that the integrand in the second term of (16) can be bounded by C₁Ψ(u), where C₁ is some constant.

b) For the second term of (14), we use the following derivation, which is also needed in the proof of Lemma 4.5 dealing with case (f); therefore we formulate it for both cases together, assuming α ≥ 1 and α > β, where 0 < δ < 2 − 2β/α. Since u^{−2/α−δ/β} = o(u^{−2/α}) as u → ∞, we get for any ε > 0, with Lemma D.1 of Piterbarg [11] using X⁺(t) and C⁺(0), that the first term of (17) is bounded as stated, where γ₁(u) ↓ 0 as u → ∞, not depending on ζ. Since H_α(ε) → 1 as ε → 0, the estimate for the upper bound is obtained.
The lower bound of the probability is obvious for any ζ > 0, and thus the asymptotic behavior of the maximum follows as u → ∞.
c) Finally, putting the derived bounds together and letting ε, ε₁ → 0, we conclude the assertion as u → ∞.
In the next lemma we consider the two cases (d) and (e) of Figure 1 together. Hence the conditions of Lemma 3.1 are fulfilled, and the results follow.
It remains to consider the case (f) in Figure 1.
For the first term in (20), we split [0, T] into subintervals of length u^{−2/α} and assume again n = Tu^{2/α} ∈ N without loss of generality. On the subintervals we use X⁺(t) with C = C_max + ε₁ as the approximating stationary Gaussian process, and use stationarity and Theorem D.2 of Piterbarg [11] to bound the first term of (20) for u large.

Figure 1: The six different domains of α and β dealt with in Theorem 1.2.