The expected Euler characteristic approximation to excursion probabilities of smooth Gaussian random fields with general variance functions

Consider a centered smooth Gaussian random field $\{X(t), t\in T \}$ with a general (nonconstant) variance function. In this work, we demonstrate that as $u \to \infty$, the excursion probability $\mathbb{P}\{\sup_{t\in T} X(t) \geq u\}$ can be approximated by $\mathbb{E}\{\chi(A_u)\}$ with an error decaying at a super-exponential rate. Here, $A_u = \{t\in T: X(t)\geq u\}$ is the excursion set above $u$, and $\mathbb{E}\{\chi(A_u)\}$ is the expectation of its Euler characteristic $\chi(A_u)$. This result substantiates the expected Euler characteristic heuristic for a broad class of smooth Gaussian random fields with diverse covariance structures. In addition, we employ the Laplace method to derive explicit approximations to the excursion probabilities.


Introduction
Let X = {X(t), t ∈ T} be a real-valued Gaussian random field defined on the probability space (Ω, F, P), where T denotes the parameter space. The study of excursion probabilities, P{sup_{t∈T} X(t) ≥ u}, is a classical and fundamental problem in both probability and statistics, with extensive applications across numerous domains, including p-value computation, risk control and the analysis of extreme events.
In statistics, excursion probabilities play a critical role in tasks such as controlling family-wise error rates [13,14], constructing confidence bands [10], and detecting signals in noisy data [8,13]. However, except in a few special cases, computing the exact values of these probabilities is essentially impossible. To address this challenge, many researchers have developed methods for precise approximations of P{sup_{t∈T} X(t) ≥ u}, including the double sum method [6], the tube method [9] and the Rice method [3,4]. For comprehensive theoretical background and related applications, we refer readers to the survey by Adler [1] and the monographs by Piterbarg [6], Adler and Taylor [2], and Azaïs and Wschebor [4], as well as the references therein.
In recent years, the expected Euler characteristic (EEC) method has emerged as a powerful tool for approximating excursion probabilities. This method, originating from the works of Taylor et al. [12] and Adler and Taylor [2], provides the following approximation: there exists a constant α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = E{χ(A_u)} + o(e^{−αu² − u²/(2σ_T²)}),   (1.1)

where χ(A_u) represents the Euler characteristic of the excursion set A_u = {t ∈ T : X(t) ≥ u}.
The approximation (1.1) is highly elegant and accurate, primarily because the principal term E{χ(A_u)} is computable and the error term decays exponentially faster than the major component. However, it is essential to note that this method assumes a Gaussian field with constant variance, which limits its applicability in many scenarios.
In this paper, we extend the EEC method to accommodate smooth Gaussian random fields with general (nonconstant) variance functions. Our main objective is to demonstrate that the EEC approximation (1.1) remains valid under these conditions, with the error term exhibiting super-exponential decay; see Theorem 3.1 below for a precise statement. Our approximation result shows that the maximum variance of X(t), denoted by σ_T² (see (2.1) below), plays a pivotal role in both E{χ(A_u)} and the super-exponentially small error. In our analysis, we observe that the points where σ_T² is attained make the most substantial contributions to E{χ(A_u)}. Building on this observation, we establish two simpler approximations: one in Theorem 3.2, which incorporates a boundary condition on nonzero derivatives of the variance function at the points where σ_T² is attained, and another in Theorem 3.3, assuming that σ_T² is attained at only a single point. In general, the EEC approximation can be expressed as an integral via the Kac–Rice formula, as outlined in (3.2) in Theorem 3.1. While [12,2] provided an elegant expression for E{χ(A_u)}, termed the Gaussian kinematic formula, that expression relies heavily on the assumption of unit variance, which simplifies the calculation. In our case, where the variance function of X(t) varies across T, deriving an explicit expression for E{χ(A_u)} is challenging. Instead, we apply the Laplace method to extract the term of leading order in u from the integral, leaving a remaining error that is E{χ(A_u)}o(1/u). For details, we provide specific calculations in Sections 8 and 9. To grasp the EEC approximation intuitively, one can roughly regard the major term as g(u)e^{−u²/(2σ_T²)}, while the error term diminishes as o(e^{−u²/(2σ_T²) − αu²}), where g(u) is a polynomial in u and α > 0 is a constant.

The structure of this paper is as follows. We introduce the notation and assumptions in Section 2. In Section 3, we present our main results, Theorems 3.1, 3.2 and 3.3. We outline the main ideas of our approach in Section 4 and analyze the super-exponentially small errors in Sections 5 and 6. We provide the proofs of the main results in Section 7. In Section 8, we apply the Laplace method to derive explicit approximations (Theorems 8.3 and 8.4) for the case where the variance has a unique maximum point. In Section 9, we present several examples illustrating the evaluation of the EEC and the subsequent approximation of excursion probabilities.
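The role of σ_T² as the driver of the exponential decay rate can be checked numerically in a toy one-dimensional setting. The sketch below is purely illustrative and not part of the theory developed in this paper: it assumes a hypothetical field X(t) = σ(t)Z(t) on T = [0,1], where Z is a smooth stationary Gaussian process with unit variance and squared-exponential correlation; all grid sizes and parameter values are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: X(t) = sigma(t) * Z(t) on T = [0, 1].
n = 200
t = np.linspace(0.0, 1.0, n)
sigma = 1.0 + 0.5 * np.exp(-((t - 0.3) ** 2) / 0.02)    # nonconstant std dev
rho = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01)   # correlation of Z
C = np.outer(sigma, sigma) * rho                         # covariance of X
L = np.linalg.cholesky(C + 1e-10 * np.eye(n))            # jitter for stability

sigma_T2 = sigma.max() ** 2                              # maximum variance over T
reps = 50_000
X = L @ rng.standard_normal((n, reps))                   # columns are sample paths
sups = X.max(axis=0)

for u in (2.5, 3.0, 3.5):
    p_hat = (sups >= u).mean()
    # log P{sup X >= u} should be dominated by -u^2 / (2 sigma_T^2) as u grows;
    # the polynomial factor g(u) explains the remaining gap at moderate levels.
    print(u, p_hat, np.log(max(p_hat, 1e-300)), -u**2 / (2 * sigma_T2))
```

Only the exponential rate is being checked here; matching the polynomial factor g(u) requires the explicit approximations of Section 8.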

Notations and assumptions
Let {X(t), t ∈ T} be a real-valued, centered Gaussian random field, where T is a compact rectangle in R^N. We define

ν(t) := Var(X(t)),   σ_T² := sup_{t∈T} ν(t).   (2.1)

Here, ν(·) represents the variance function of the field and σ_T² is the maximum variance over T. For a function f(·) ∈ C²(R^N) and t ∈ R^N, we introduce the following notation for derivatives:

f_i(t) = ∂f(t)/∂t_i,   f_ij(t) = ∂²f(t)/(∂t_i ∂t_j),   ∇f(t) = (f_1(t), ..., f_N(t)),   ∇²f(t) = (f_ij(t))_{1≤i,j≤N}.

Let B ≺ 0 (negative definite) and B ⪯ 0 (negative semi-definite) denote that a symmetric matrix B has all negative or all nonpositive eigenvalues, respectively. Additionally, we use Cov(ξ_1, ξ_2) and Corr(ξ_1, ξ_2) to represent the covariance and correlation between two random variables ξ_1 and ξ_2. The density of the standard normal distribution is denoted by φ(x), and its tail probability is Ψ(x) = ∫_x^∞ φ(y) dy. Let S^j be the j-dimensional unit sphere. Consider the rectangle

T = ∏_{i=1}^N [a_i, b_i].

We draw on the notation established by Adler and Taylor [2] to decompose T into the union of its interior and its lower-dimensional faces. This decomposition forms the basis for calculating the Euler characteristic of the excursion set A_u, as elaborated in Section 3.
Each face K of dimension k is defined by fixing a subset τ(K) ⊂ {1, ..., N} of size k and a sequence ε(K) = {ε_j : j ∉ τ(K)} ∈ {0,1}^{N−k}, so that

K = {t = (t_1, ..., t_N) ∈ T : a_j < t_j < b_j if j ∈ τ(K); t_j = a_j if j ∉ τ(K) and ε_j = 0; t_j = b_j if j ∉ τ(K) and ε_j = 1}.

Denote by ∂_k T the collection of all k-dimensional faces in T. The interior of T is the unique N-dimensional face, for which τ(K) = {1, ..., N}. This allows us to partition T in the following manner:

T = ⋃_{k=0}^N ⋃_{K∈∂_kT} K.   (2.3)

For each K ∈ ∂_k T, we define the number of extended outward maxima above u on face K as

M_u^E(K) := #{t ∈ K : X(t) ≥ u, ∇X_{|K}(t) = 0, ∇²X_{|K}(t) ≺ 0, ε_j^* X_j(t) ≥ 0 for all j ∉ τ(K)},

where ε_j^* = 2ε_j − 1 and X_{|K} denotes the restriction of X to the face K, and define the number of local maxima above u on face K as

M_u(K) := #{t ∈ K : X(t) ≥ u, ∇X_{|K}(t) = 0, ∇²X_{|K}(t) ≺ 0}.

For t ∈ T, we define the index set

I(t) := {i ∈ {1, ..., N} : ν_i(t) = 0}.

If t ∈ K ∈ ∂_k T attains the maximum variance, i.e. ν(t) = σ_T², then τ(K) ⊂ I(t), since ν_ℓ(t) = 0 for all ℓ ∈ τ(K). It is worth noting that, since ν_i(t) = 2E{X_i(t)X(t)}, we can also express this index set as I(t) = {i ∈ {1, ..., N} : E{X_i(t)X(t)} = 0}.
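The bookkeeping of faces is purely combinatorial, and it may help to see it enumerated explicitly. The following sketch is an illustration only (the function name and output format are ours); it lists every face of a rectangle together with its τ(K) and ε(K).

```python
from itertools import combinations, product

def faces(a, b):
    """Enumerate the faces of T = prod_i [a[i], b[i]]: each face K is given
    by the set tau(K) of free coordinates and by eps(K), which pins every
    remaining coordinate j to a[j] (eps_j = 0) or b[j] (eps_j = 1)."""
    N = len(a)
    for k in range(N + 1):                      # k = dim(K)
        for tau in combinations(range(N), k):
            fixed = [j for j in range(N) if j not in tau]
            for eps in product((0, 1), repeat=len(fixed)):
                yield {"dim": k, "tau": tau, "eps": dict(zip(fixed, eps))}

# For T = [0,1]^2 this prints 3^2 = 9 faces: 4 vertices, 4 open edges and
# the interior; in general a compact rectangle in R^N has 3^N faces.
for K in faces([0, 0], [1, 1]):
    print(K)
```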
Our analytical framework relies on the following smoothness condition (H1) and regularity condition (H2), in addition to the curvature conditions (H3) or (H3′).

(H1) X ∈ C²(R^N) almost surely and the second derivatives satisfy the uniform mean-square Hölder condition: there exist constants C, δ > 0 such that

E(X_ij(t) − X_ij(s))² ≤ C‖t − s‖^{2δ}   for all i, j = 1, ..., N and s, t ∈ T.

(H2) For every pair (t, t′) ∈ T² with t ≠ t′, the Gaussian vector

(X(t), ∇X(t), X_ij(t), 1 ≤ i ≤ j ≤ N, X(t′), ∇X(t′), X_ij(t′), 1 ≤ i ≤ j ≤ N)

is non-degenerate.

(H3) For every t ∈ ∂_k T with k ≤ N − 2 such that ν(t) = σ_T² and I(t) contains at least two indices, we have

(E{X(t)X_ij(t)})_{i,j∈I(t)} ≺ 0.   (2.4)

(H3′) For every t ∈ ∂_k T with k ≤ N − 2 such that ν(t) = σ_T² and I(t) contains at least two indices, we have

(ν_ij(t))_{i,j∈I(t)} ⪯ 0.   (2.5)

Conditions (H3) and (H3′) involve the behavior of the variance function ν(t) at critical points, and they are closely related, as shown in Proposition 2.1 below. Here we provide some additional insight into (H3′). Despite its technical appearance, (H3′) is in fact a mild condition that applies only to lower-dimensional boundary points t where ν(t) = σ_T². In essence, it requires the variance function to have a negative semi-definite Hessian (restricted to the indices in I(t)) at those boundary critical points attaining ν(t) = σ_T² while concurrently exhibiting at least two vanishing partial derivatives.
For example, in the one-dimensional case, since I(t) contains at most one index, there is no need to check (H3′). Similarly, in the two-dimensional case, we only need to check (H3′), i.e. (2.5), when σ_T² is achieved at a corner point t ∈ ∂_0 T with I(t) = {1, 2}. Moreover, if the variance function ν(t) is strictly monotone in every direction across R^N, then I(t) = ∅ and there is no need to verify (H3′) at all.
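In concrete examples, checking (H3′) is a routine symbolic computation. The sketch below is illustrative only: it assumes a hypothetical variance function ν(t) = (1 − t_1²)(1 − t_2²) on T = [0,1]², whose maximum is attained at the corner t* = (0, 0) with I(t*) = {1, 2}.

```python
import sympy as sp

# Hypothetical variance function on T = [0,1]^2 with maximum at t* = (0,0).
t1, t2 = sp.symbols("t1 t2")
nu = (1 - t1**2) * (1 - t2**2)

tstar = {t1: 0, t2: 0}
grad = [sp.diff(nu, v).subs(tstar) for v in (t1, t2)]
I_t = [i for i, g in enumerate(grad) if g == 0]        # index set I(t*)
print("I(t*) =", I_t)                                  # both partials vanish

if len(I_t) >= 2:                                      # (H3') is only needed here
    H = sp.Matrix([[sp.diff(nu, r, c).subs(tstar) for c in (t1, t2)]
                   for r in (t1, t2)])                 # Hessian of nu at t*
    H_I = H.extract(I_t, I_t)                          # restricted to I(t*)
    print("eigenvalues:", H_I.eigenvals())             # all <= 0, so (2.5) holds
```

Here the restricted Hessian is −2I_2, which is negative definite, so (H3′), and hence (H3) by Proposition 2.1 below, is satisfied.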
Proposition 2.1. Assume (H1) and (H2). Then (H3′) implies (H3), and (H3) implies the following condition:

E{X(t)∇²X_{|K}(t)} ≺ 0 for every K ∈ ∂_k T with k ≥ 1 and every t ∈ K̄ such that ν(t) = σ_T² and τ(K) ⊂ I(t).   (2.6)

Proof. Note first the identity

ν_ij(t) = 2E{X_i(t)X_j(t)} + 2E{X(t)X_ij(t)},   (2.7)

which, combined with the non-degeneracy of (X_i(t))_{i∈I(t)} guaranteed by (H2), shows that (2.5) implies (2.4); in particular, (H3′) implies (H3). Next we demonstrate that (H3) implies (2.6). Since the matrix in (2.6) is a principal submatrix of that in (2.4) whenever τ(K) ⊂ I(t), it suffices to show (2.4) for k = N − 1 and k = N, and for the case where I(t) contains at most one index; these cases complement those covered by (H3).
(i) If k = N, then t is a maximum point of ν within the interior of T, so ∇²ν(t) ⪯ 0, and hence (2.4) holds by (2.7).
(ii) For k = N − 1, we consider two scenarios. If I(t) = τ(K), then t is a maximum point of ν restricted to K, and hence (2.4) is satisfied as discussed above. If τ(K) ⊊ I(t), then it follows from Taylor's formula, together with the fact that ν(t) = σ_T² is the maximum value of ν, that ∇²ν(t) cannot have any positive eigenvalue; thus (2.5), and hence (2.4), holds.
(iii) Finally, it is evident from the one-dimensional Taylor formula that (2.5) is valid when I(t) contains only one index.
The condition (2.6) established in Proposition 2.1 is the fundamental requirement for our main results, Theorems 3.1, 3.2 and 3.3 below. As seen from Proposition 2.1, we can reduce (2.6) to condition (H3), so our main results will be stated under (H3). Furthermore, it is worth highlighting that, in practical applications, verifying (H3′) is often more straightforward, since it pertains directly to the variance function ν(t). Thus Proposition 2.1 provides the flexibility to check (H3′) instead of (H3), which simplifies the verification procedure and enhances the practical applicability of our results.

Main results
Here we present our main results, Theorems 3.1, 3.2 and 3.3, whose proofs are given in Section 7. Define the number of extended outward critical points of index i above level u on face K as

μ_i(K) := #{t ∈ K : X(t) ≥ u, ∇X_{|K}(t) = 0, index(∇²X_{|K}(t)) = i, ε_j^* X_j(t) ≥ 0 for all j ∉ τ(K)}.

Recall that ε_j^* = 2ε_j − 1, and that the index of a matrix is defined as the number of its negative eigenvalues. It is evident that μ_k(K) = M_u^E(K) for K ∈ ∂_k T. It follows from (H1), (H2) and the Morse theorem (see Corollary 9.3.5 or pages 211–212 in Adler and Taylor [2]) that the Euler characteristic of the excursion set A_u can be represented as

χ(A_u) = Σ_{k=0}^N Σ_{K∈∂_kT} (−1)^k Σ_{i=0}^k (−1)^i μ_i(K).
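For intuition, the alternating count above can be compared with a direct computation of the Euler characteristic of a discretized excursion set. The sketch below is a toy illustration on a pixel grid, not the method of this paper: it builds the cubical complex of a thresholded 2D array and computes χ = V − E + F. At high levels, A_u is a union of small disjoint blobs, so χ simply counts them.

```python
import numpy as np

def euler_char_2d(mask):
    """chi = V - E + F for the cubical complex on a boolean mask: vertices at
    True pixels, edges between 4-adjacent True pixels, squares on 2x2 blocks."""
    m = mask.astype(bool)
    V = m.sum()
    E = (m[:, :-1] & m[:, 1:]).sum() + (m[:-1, :] & m[1:, :]).sum()
    F = (m[:-1, :-1] & m[:-1, 1:] & m[1:, :-1] & m[1:, 1:]).sum()
    return int(V - E + F)

# Crude smooth field on a grid: repeatedly average white noise with its
# neighbours (periodic boundary), then standardize. Illustrative only.
rng = np.random.default_rng(1)
z = rng.standard_normal((128, 128))
for _ in range(10):
    z = (z + np.roll(z, 1, 0) + np.roll(z, -1, 0)
           + np.roll(z, 1, 1) + np.roll(z, -1, 1)) / 5.0
z = (z - z.mean()) / z.std()
print([euler_char_2d(z >= u) for u in (0.5, 1.5, 2.5)])  # chi of A_u per level
```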
Now we state the following general result on the EEC approximation for the excursion probability.

Theorem 3.1. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and (H3). Then there exists a constant α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = E{χ(A_u)} + o(e^{−αu² − u²/(2σ_T²)}),   (3.1)

where, by the Kac–Rice formula,

E{χ(A_u)} = Σ_{k=0}^N Σ_{K∈∂_kT} (−1)^k ∫_K E{det∇²X_{|K}(t) 1_{X(t)≥u} ∏_{j∉τ(K)} 1_{ε_j^* X_j(t)≥0} | ∇X_{|K}(t) = 0} p_{∇X_{|K}(t)}(0) dt,   (3.2)

with p_{∇X_{|K}(t)}(·) denoting the density of ∇X_{|K}(t).

In general, computing the EEC approximation E{χ(A_u)} is a challenging task because it involves conditional expectations of the field and its Hessian given a zero gradient, whose joint covariances vary across T. However, one can apply the Laplace method to extract the term of the largest order in u from E{χ(A_u)}, so that the remaining error is E{χ(A_u)}o(1/u). Examples demonstrating the Laplace method are presented in Section 9.
It is important to note that, in the expression (3.2), when k = 0 all terms involving ∇X_{|K}(t) and ∇²X_{|K}(t) vanish. Consequently, for k = 0 we interpret the corresponding integrals in (3.2) as the usual Gaussian tail probabilities. This convention is also adopted in the results presented in Theorems 3.2 and 3.3 below.
The proof of Theorem 3.1 reveals that the points where the maximum variance σ_T² is attained make the most significant contribution to E{χ(A_u)}. Therefore, in many cases, the general EEC approximation E{χ(A_u)} can be simplified. The following result is based on the boundary condition (3.3) and is applicable at boundary points where nonzero partial derivatives of the variance function occur when σ_T² is reached.

Theorem 3.2. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and the following boundary condition:

ν_i(t) ≠ 0 for all i ∉ τ(J), for every face J of T and every t ∈ J with ν(t) = σ_T².   (3.3)

Then there exists a constant α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = Σ_{k=0}^N Σ_{J∈∂_kT} E{M_u(J)} + o(e^{−αu² − u²/(2σ_T²)}).

In other words, the boundary condition (3.3) indicates that, for any point t ∈ J attaining the maximum variance σ_T², there must be ν_i(t) ≠ 0 for all i ∉ τ(J); equivalently, I(t) = τ(J). In particular, as an important property, we observe that (3.3) implies the condition (H3′), and hence (H3). The following result provides an asymptotic approximation for the special case where the variance function attains its maximum σ_T² at only a single point.

Theorem 3.3. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and (H3). Suppose ν(t) attains its maximum σ_T² at only a single point t* ∈ K, where K ∈ ∂_k T with k ≥ 0. Then there exists a constant α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = Σ_J E{M_u^E(J)} + o(e^{−αu² − u²/(2σ_T²)}),

where the sum is taken over all faces J of T such that t* ∈ J̄ and τ(J) ⊂ I(t*).
Employing the Laplace method, we provide refined explicit approximation results in Section 8 under the assumptions of Theorem 3.3. Furthermore, we present several examples in Section 9 that illustrate the evaluation of the resulting approximations to excursion probabilities.

Outline of the proofs
Here we describe the main ideas behind the proofs of the results above. Let f be a smooth real-valued function; then sup_{t∈T} f(t) ≥ u if and only if there exists at least one extended outward local maximum above u on some face of T. Thus, under conditions (H1) and (H2), the following relation holds for each u ∈ R:

{sup_{t∈T} X(t) ≥ u} = ⋃_{k=0}^N ⋃_{K∈∂_kT} {M_u^E(K) ≥ 1}.   (4.1)

This says that the supremum of the Gaussian random field exceeds u exactly when there is at least one extended outward local maximum above u on some face K of T. Therefore, we obtain the following upper bound for the excursion probability:

P{sup_{t∈T} X(t) ≥ u} ≤ Σ_{k=0}^N Σ_{K∈∂_kT} P{M_u^E(K) ≥ 1} ≤ Σ_{k=0}^N Σ_{K∈∂_kT} E{M_u^E(K)}.   (4.2)

On the other hand, notice that

P{M_u^E(K) ≥ 1} ≥ E{M_u^E(K)} − (1/2)E{M_u^E(K)(M_u^E(K) − 1)}

and

P{M_u^E(K) ≥ 1, M_u^E(K′) ≥ 1} ≤ E{M_u^E(K)M_u^E(K′)}.

Applying the Bonferroni inequality to (4.1) and combining these two inequalities, we obtain the following lower bound for the excursion probability:

P{sup_{t∈T} X(t) ≥ u} ≥ Σ_{k=0}^N Σ_{K∈∂_kT} [E{M_u^E(K)} − (1/2)E{M_u^E(K)(M_u^E(K) − 1)}] − Σ_{(K,K′)} E{M_u^E(K)M_u^E(K′)},   (4.3)

where the last sum is taken over all possible pairs of distinct faces (K, K′).
Remark 4.1. Note that, following the same arguments as above, the expectations of the numbers of extended outward maxima M_u^E(·) in both (4.2) and (4.3) can be replaced by the expectations of the numbers of local maxima M_u(·).
We call a function h(u) super-exponentially small (when compared with the excursion probability P{sup_{t∈T} X(t) ≥ u} or with E{χ(A_u)}) if there exists a constant α > 0 such that

h(u) = o(e^{−αu² − u²/(2σ_T²)})   as u → ∞.

The main idea for proving the EEC approximation in Theorem 3.1 consists of two steps: (i) show that all terms in the lower bound (4.3), other than those appearing in the upper bound (4.2), are super-exponentially small; and (ii) demonstrate that the difference between the upper bound (4.2) and E{χ(A_u)} is also super-exponentially small. The proofs of Theorems 3.2 and 3.3 follow the same ideas, establishing super-exponential smallness for the terms involved in the lower bounds, as well as for the difference between the upper bound and the EEC.
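The sandwich between (4.2) and (4.3) is elementary and can be seen in isolation. The following toy computation is ours, with a Poisson count standing in for the number of extended outward maxima (an arbitrary illustrative choice): it verifies the inequalities P{M ≥ 1} ≤ E{M} and P{M ≥ 1} ≥ E{M} − E{M(M−1)}/2, and shows the gap becoming negligible as the count gets rare, mirroring how the factorial-moment terms in (4.3) become super-exponentially small at high levels.

```python
import numpy as np

# For M ~ Poisson(lam): P{M >= 1} = 1 - exp(-lam), E M = lam,
# and the second factorial moment is E{M(M-1)} = lam**2.
for lam in (0.5, 0.1, 0.01):
    p = 1.0 - np.exp(-lam)           # P{M >= 1}
    upper = lam                      # E M, cf. the upper bound (4.2)
    lower = lam - lam**2 / 2.0       # E M - E{M(M-1)}/2, cf. (4.3)
    print(f"lam={lam}: {lower:.6f} <= {p:.6f} <= {upper:.6f}, "
          f"relative gap {(upper - lower) / upper:.3g}")
```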
Estimation of super-exponential smallness for terms in the lower bound

Factorial moments
We first state the following result, a version of Lemma 4 in Piterbarg [7] restricted to a face K, which characterizes the decay rate of the factorial moments of the number of critical points above a high level for Gaussian fields.
Lemma 5.1. Assume (H1) and (H2). Then there exists a positive constant C such that for any ε > 0 one can find a number ε_1 > 0 such that the factorial moment of the number of critical points of X above level u on any subset of a face K of diameter at most ε_1 admits the bound of Lemma 4 in Piterbarg [7].

The following result shows that the factorial moments in (4.3) are super-exponentially small under our assumptions.
Proposition 5.2. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and (H3). Then there exists α > 0 such that, as u → ∞,

E{M_u(K)(M_u(K) − 1)} = o(e^{−αu² − u²/(2σ_T²)})   for every face K of T.

Proof. Due to Lemma 5.1, it suffices to verify its non-degeneracy requirements on each face K. Suppose one of these requirements fails at some t ∈ K̄; then necessarily ν(t) = σ_T² and Σ_K(t)e = 0 for some e ∈ S^{k−1}. But t is a point with ν(t) = σ_T², thus Σ_K(t) := E{X(t)∇²X_{|K}(t)} ≺ 0 by Proposition 2.1, implying Σ_K(t)e ≠ 0 for all e ∈ S^{k−1} and causing a contradiction.
On the other hand, suppose Var(X(t) | ∇X_{|K}(t)) degenerates at some t ∈ K̄; then ν(t) = σ_T² and hence ν_i(t) = 0 for all i ∈ τ(K), implying Σ_K(t) ≺ 0 by Proposition 2.1. Similarly to the previous argument, this leads to a contradiction. The proof is completed.

Non-adjacent faces
The following result demonstrates that the last two sums in (4.3), involving the joint moments over two non-adjacent faces, are super-exponentially small.

Proposition 5.3. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1) and (H2). Then there exists α > 0 such that, as u → ∞,

E{M_u(X, K)M_u(X, K′)} = o(e^{−αu² − u²/(2σ_T²)}),

where K and K′ are distinct faces of T with d(K, K′) > 0.
Proof. Consider first the case where dim(K) = k ≥ 1 and dim(K′) = k′ ≥ 1. By the Kac–Rice formula for higher moments [2], the joint moment E{M_u(X, K)M_u(X, K′)} can be written as an integral over K × K′; call this representation (5.4). Notice that two inequalities hold: one bounding the integrand in terms of constants a_{i_1} and b_{i_2}, and, for any Gaussian variable ξ and positive integer m, by Jensen's inequality,

E|ξ|^m ≤ B_m (|E ξ|^m + (Var ξ)^{m/2}),

where B_m is a constant depending only on m. Combining these two inequalities with the well-known conditioning formula for Gaussian variables, we obtain that there exist positive constants C_1 and N_1 such that, for sufficiently large x and x′, the bound (5.5) holds. Further, there exists a constant C_2 such that (5.6) holds. Plugging (5.5) and (5.6) into (5.4), we obtain that there exists C_3 such that, for u large enough, the desired super-exponential bound holds with any ε > 0, where ρ := sup_{t∈K, t′∈K′} Corr(X(t), X(t′)) < 1 due to (H2). The case when one of the dimensions of K and K′ is zero can be proved similarly.

Adjacent faces
The following result shows that the last two sums in (4.3), involving the joint moments over two adjacent faces, are super-exponentially small.

Proposition 5.4. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and (H3). Then there exists α > 0 such that, as u → ∞,

E{M_u^E(X, K)M_u^E(X, K′)} = o(e^{−αu² − u²/(2σ_T²)}),

where K and K′ are distinct faces of T with d(K, K′) = 0.
Proof. Let I := K̄ ∩ K̄′, which is nonempty since d(K, K′) = 0. To simplify notation, assume without loss of generality that τ(K) = {1, ..., k}, that τ(K′) = {k − m + 1, ..., k + k′ − m}, where m denotes the number of coordinate directions shared by the two faces, and that all elements in ε(K) and ε(K′) are 1.
We first consider the case when k ≥ 1 and k′ ≥ 1. By the Kac–Rice formula, the joint moment E{M_u^E(X, K)M_u^E(X, K′)} can be written as an integral against p_{t,t′}(x, x′, 0, z_{k+1}, ..., z_{k+k′−m}, 0, w_{m+1}, ..., w_k), the density of the joint distribution of the variables involved in the given condition. We define M_0 as the set of pairs (t, t′) at which the conditional variance Var(X(t) | ∇X_{|K}(t), ∇X_{|K′}(t′)) attains the maximum σ_T², and consider two cases for M_0.

If M_0 = ∅, then this conditional variance is bounded away from σ_T² uniformly over K̄ × K̄′, and it follows that E{M_u(X, K)M_u(X, K′)} is super-exponentially small.   (5.12)

If M_0 ≠ ∅, we split the domain of integration into I_1(u) and I_2(u), where each product set entering I_1(u) consists of two sets with a positive distance. It then follows from Proposition 5.3 that I_1(u) is super-exponentially small. On the other hand, by (5.13) combined with (5.11), we conclude that I_2(u), and hence E{M_u^E(X, K)M_u^E(X, K′)}, is super-exponentially small.

It is similar to show that the integrals ∫_{D_i} A(t, t′, u) dt dt′ are super-exponentially small for i = k + 1, ..., k + k′ − m. For the case k = 0 or k′ = 0, the argument is even simpler when applying the Kac–Rice formula; the details are omitted here. The proof is finished.
In the proof of Proposition 5.4, we showed in (5.12) that, if M_0 = ∅, then the moment E{M_u(X, K)M_u(X, K′)} is super-exponentially small. It is important to note that the boundary condition (3.3) implies (and generalizes) the condition M_0 = ∅, yielding the following result.

Proposition 5.5. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and the boundary condition (3.3). Then there exists α > 0 such that, as u → ∞,

E{M_u(X, K)M_u(X, K′)} = o(e^{−αu² − u²/(2σ_T²)})

for any distinct faces K and K′ of T.

Estimation of the difference between EEC and the upper bound
In this section, we demonstrate that the difference between E{χ(A_u)} and the expected number of extended outward local maxima, i.e. the upper bound in (4.2), is super-exponentially small.

Proposition 6.1. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and (H3). Then there exists α > 0 such that, for any K ∈ ∂_k T,

E{M_u^E(K)} = (−1)^k Σ_{i=0}^k (−1)^i E{μ_i(K)} + o(e^{−αu² − u²/(2σ_T²)})
            = (−1)^k ∫_K E{det∇²X_{|K}(t) 1_{X(t)≥u} ∏_{j∉τ(K)} 1_{ε_j^* X_j(t)≥0} | ∇X_{|K}(t) = 0} p_{∇X_{|K}(t)}(0) dt + o(e^{−αu² − u²/(2σ_T²)}).   (6.1)

Proof. The second equality in (6.1) arises from applying the Kac–Rice formula to the signed count Σ_{i=0}^k (−1)^i μ_i(K). To prove the first approximation in (6.1) and convey the main idea, we start with the case when the face K is the interior of T.
Case (i): k = N. By the Kac–Rice formula, E{M_u^E(K)} admits an integral representation over K. Let M_1 := {t ∈ K̄ : ν(t) = σ_T²} and let B(M_1, δ_1) denote the δ_1-neighborhood of M_1 in K, where δ_1 is a small positive number to be specified. Then we only need to estimate the integral over B(M_1, δ_1), since the integral with B(M_1, δ_1) replaced by K\B(M_1, δ_1) is super-exponentially small, owing to the fact that sup_{t∈K\B(M_1,δ_1)} Var(X(t) | ∇X(t) = 0) < σ_T². Notice that, by Proposition 2.1, E{X(t)∇²X(t)} ≺ 0 for all t ∈ M_1. Thus there exists δ_1 small enough such that E{X(t)∇²X(t)} ≺ 0 for all t ∈ B(M_1, δ_1). In particular, let λ_0 be the largest eigenvalue of E{X(t)∇²X(t)} over B(M_1, δ_1); then λ_0 < 0 by uniform continuity.
Also note that E{X(t)∇X(t)} tends to 0 as δ_1 → 0. Therefore, as δ_1 → 0, the effect of conditioning on ∇X(t) = 0 becomes uniformly negligible over B(M_1, δ_1). Thus, for all x ≥ u and t ∈ B(M_1, δ_1) with δ_1 small enough, the conditioned Hessian concentrates where its expectation is negative definite. Indeed, centering the conditioned Hessian yields a Gaussian random matrix not depending on x, written in terms of (v_ij), the abbreviation for the matrix v = (v_ij)_{1≤i,j≤N}. There exists a constant c > 0 such that, for δ_1 small enough, all t ∈ B(M_1, δ_1) and all x ≥ u, the probability that this centered matrix leaves the region on which the Hessian remains negative definite is super-exponentially small; consequently, the integral in (6.5), with the domain of integration replaced by the complement of that region, is bounded by e^{−α′u²}, where α′ is a positive constant. As a result, we conclude that, uniformly for all t ∈ B(M_1, δ_1) and x ≥ u, substituting this result into (6.4) shows that the indicator function 1_{∇²X(t)≺0} in (6.3) can be eliminated, causing only a super-exponentially small error. Thus, for sufficiently large u, there exists α > 0 such that the first approximation in (6.1) holds for the interior face.

Case (ii): k ≤ N − 1. It is worth noting that when k = 0 the terms in (6.1) related to the Hessian vanish, simplifying the proof. Therefore, without loss of generality, let k ≥ 1, τ(K) = {1, ..., k}, and assume all elements in ε(K) are 1. By the Kac–Rice formula, E{M_u^E(K)} again admits an integral representation. Let

M_2 := {t ∈ K̄ : ν(t) = σ_T² and τ(K) ⊂ I(t)}   (6.6)

and let B(M_2, δ_2) be its δ_2-neighborhood in K, where δ_2 is another small positive number to be specified. Here, we only need to estimate the integral over B(M_2, δ_2), since the integral with B(M_2, δ_2) replaced by K\B(M_2, δ_2) is super-exponentially small, owing to the fact that sup_{t∈K\B(M_2,δ_2)} Var(X(t) | ∇X_{|K}(t) = 0) < σ_T².

On the other hand, following arguments similar to those in the proof of Case (i), removing the indicator functions 1_{∇²X_{|K}(t)≺0} in (6.7) causes only a super-exponentially small error. Combining these results, we conclude that the first approximation in (6.1) holds, completing the proof.
From the proof of Proposition 6.1, it is evident that the same reasoning extends readily to E{M_u(X, K)}, leading to the following result.
Proposition 6.2. Let {X(t), t ∈ T} be a centered Gaussian random field satisfying (H1), (H2) and (H3). Then there exists a constant α > 0 such that, for any K ∈ ∂_k T,

E{M_u(K)} = (−1)^k ∫_K E{det∇²X_{|K}(t) 1_{X(t)≥u} | ∇X_{|K}(t) = 0} p_{∇X_{|K}(t)}(0) dt + o(e^{−αu² − u²/(2σ_T²)}).

Proofs of the main results

Proof of Theorem 3.1. By Propositions 5.2, 5.3 and 5.4, together with the fact that M_u^E(K) ≤ M_u(K), the factorial moments and the last two sums in (4.3) are super-exponentially small. Therefore, from (4.2) and (4.3), it follows that there exists a constant α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = Σ_{k=0}^N Σ_{K∈∂_kT} E{M_u^E(K)} + o(e^{−αu² − u²/(2σ_T²)}).

The desired result follows as an immediate consequence of Proposition 6.1.
Proof of Theorem 3.2. Remark 4.1 indicates that both inequalities (4.2) and (4.3) hold with M_u^E(·) replaced by M_u(·). Therefore, the corresponding factorial moments and the last two sums in (4.3), with M_u^E(·) replaced by M_u(·), are super-exponentially small by Propositions 5.2, 5.3 and 5.5. Consequently, there exists a constant α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = Σ_{k=0}^N Σ_{J∈∂_kT} E{M_u(J)} + o(e^{−αu² − u²/(2σ_T²)}).

The desired result follows directly from Proposition 6.2.
Proof of Theorem 3.3. Note that, in the proof of Theorem 3.1, we have seen that the points in M_2 defined in (6.6) make the major contribution to the excursion probability. That is, up to a super-exponentially small error, we may focus only on those faces J whose closure J̄ contains the unique point t* with ν(t*) = σ_T² and which satisfy τ(J) ⊂ I(t*) (i.e., the partial derivatives of ν restricted to J vanish at t*). To formalize this, we define the collection of faces

T* := {J : t* ∈ J̄ and τ(J) ⊂ I(t*)}.

For each J ∈ T*, let M_u^{E*}(J) denote the corresponding number of extended outward maxima above u of X restricted to a small neighborhood of t* in J. Note that both inequalities (4.2) and (4.3) remain valid when we replace M_u^E(J) with M_u^{E*}(J) for faces J belonging to T*, and with M_u(J) otherwise. Employing reasoning analogous to that used in the derivations of Theorems 3.1 and 3.2, we obtain that there exists α > 0 such that, as u → ∞,

P{sup_{t∈T} X(t) ≥ u} = Σ_{J∈T*} E{M_u^E(J)} + o(e^{−αu² − u²/(2σ_T²)}).

The desired result is then deduced from Proposition 6.1.
Gaussian fields with a unique maximum point of the variance

In this section, we examine the EEC approximations more closely when the variance function ν(t) attains its maximum value σ_T² at a single point t*. While Theorem 3.3 provides an implicit formula for such scenarios, our objective here is to obtain explicit formulae by employing integral approximation techniques based on the Kac–Rice formula. To facilitate this, we begin by presenting some auxiliary results on the Laplace method for integral approximations.

Auxiliary lemmas on Laplace approximation
The following two lemmas state the Laplace approximation results we need. Lemma 8.1 can be found in many books on the approximation of integrals; here we refer to Wong [15]. Lemma 8.2 can be derived by following arguments similar to the proof of the Laplace method for the case of boundary points in [15].

Lemma 8.1 (Laplace method for interior points). Let t_0 be an interior point of T. Suppose the following conditions hold: (i) g(t) ∈ C(T) and g(t_0) ≠ 0; (ii) h(t) ∈ C²(T) and attains its minimum only at t_0; and (iii) ∇²h(t_0) is positive definite. Then, as u → ∞,

∫_T g(t)e^{−uh(t)} dt = (2π)^{N/2} / (u^{N/2} (det ∇²h(t_0))^{1/2}) · g(t_0)e^{−uh(t_0)} (1 + o(1)).
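As a quick numerical sanity check of Lemma 8.1, consider the following sketch (ours, with the arbitrary illustrative choices g(t) = 1 + t_1, h(t) = t_1² + t_1t_2 + t_2² and t_0 = (0,0), so that N = 2, h(t_0) = 0, g(t_0) = 1 and det ∇²h(t_0) = 3).

```python
import numpy as np
from scipy import integrate

g = lambda t1, t2: 1.0 + t1
h = lambda t1, t2: t1**2 + t1*t2 + t2**2   # unique minimum 0 at (0, 0)

for u in (50.0, 200.0, 800.0):
    # dblquad integrates func(y, x); here y = t2 and x = t1, both over [-1, 1].
    val, _ = integrate.dblquad(lambda t2, t1: g(t1, t2) * np.exp(-u * h(t1, t2)),
                               -1.0, 1.0, lambda t1: -1.0, lambda t1: 1.0)
    laplace = (2.0 * np.pi) / (u * np.sqrt(3.0))  # (2pi)^{N/2} u^{-N/2} (det)^{-1/2} g(t0)
    print(u, val, laplace, val / laplace)          # ratio tends to 1 as u grows
```

The ratio approaches 1 as u increases, consistent with the (1 + o(1)) factor in the lemma.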

Gaussian fields satisfying the boundary condition (3.3)
For t ∈ T, we define the corresponding notation for the conditional variances of the field given its restricted gradient. The following result provides explicit approximations to the excursion probabilities when the maximum of the variance is attained at only a single point and the boundary condition (3.3) is satisfied.
Proof. If k = 0, then ν_i(t*) ≠ 0 for all i ≥ 1, and hence I(t*) = ∅. The first line of (8.2) then follows from Theorem 3.3, since the vertex {t*} is the only face contributing to the sum. Now, let us consider the case k ≥ 1. Note that the assumption on the partial derivatives of ν(t) implies I(t*) = τ(K). By Theorem 3.3, we obtain the corresponding sum of Kac–Rice integrals over the contributing faces. Applying the Laplace method in Lemma 8.1, and noting that the Hessian matrix of 1/(2ν_{|K}(t)) evaluated at t* is −∇²ν_{|K}(t*)/(2σ_T⁴), we obtain the expansion (8.5), whose leading coefficient is given in (8.6). Here, noting that Σ_K(t*) = E{X(t*)∇²X_{|K}(t*)} ≺ 0 by Proposition 2.1, we let Q in (8.6) be a k × k positive definite matrix such that Q(−Σ_K(t*))Q = I_k, where I_k is the k × k identity matrix. Noticing that E{X(t*)∇X_{|K}(t*)} = 0, since ∇ν_{|K}(t*) = 0, the conditioned Hessian can be written in terms of ∆(t*), a centered Gaussian random matrix whose covariance is independent of x. According to the Laplace expansion of the determinant, E{det(∆(t*) − (x/σ_T²)I_k)} is a polynomial in x whose highest-order term is (−1)^k σ_T^{−2k} x^k. Plugging this into (8.6) and (8.5), and combining the resulting displays, yields (8.2), completing the proof.
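The normalizing matrix Q used in the proof can be realized as the inverse symmetric square root of −Σ_K(t*). A small illustration, with an arbitrary negative definite matrix standing in for Σ_K(t*):

```python
import numpy as np

Sigma = np.array([[-2.0, 0.3],
                  [0.3, -1.0]])         # stand-in for Sigma_K(t*), neg. definite
A = -Sigma                              # positive definite
w, U = np.linalg.eigh(A)                # A = U diag(w) U^T with all w > 0
Q = U @ np.diag(w ** -0.5) @ U.T        # Q = A^{-1/2}, symmetric positive definite
print(np.allclose(Q @ A @ Q, np.eye(2)))   # verifies Q(-Sigma)Q = I_k
```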

Gaussian fields not satisfying the boundary condition (3.3)
We consider here the other case, in which ν_i(t*) = 0 for some i ∉ τ(K). For a symmetric matrix B = (B_ij)_{1≤i,j≤N}, we write (B_ij)_{i,j∈I} for the matrix B with indices restricted to I.
where Z is a centered Gaussian variable.
Denote R_+^n := (0, ∞)^n. To simplify the statement of Theorem 8.4, we present below another version with lighter notation on faces.