Averaging Gaussian functionals

This paper consists of two parts. In the first part, we focus on the average of a functional over shifted Gaussian homogeneous noise and, as the averaging domain covers the whole space, we establish a Breuer-Major type Gaussian fluctuation under various assumptions on the covariance kernel and/or the spectral measure. Our methodology for the first part begins with the application of Malliavin calculus around the Nualart-Peccati Fourth Moment Theorem; in addition, we apply Fourier techniques as well as a soft approximation argument based on Bessel functions of the first kind. The same methodology leads us to investigate a closely related problem in the second part. We study the spatial average of a linear stochastic heat equation driven by space-time Gaussian colored noise. The temporal covariance kernel γ0 is assumed to be locally integrable in this paper. If the spatial covariance kernel is nonnegative and integrable on the whole space, then the spatial average exhibits Gaussian fluctuation; with an extra mild integrability condition on γ0, we are able to provide a functional central limit theorem. These results complement recent studies on spatial averages for SPDEs. Our analysis also allows us to consider the case where the spatial covariance kernel is not integrable: for example, in the case of the Riesz kernel, the first chaotic component of the spatial average is dominant, so that the Gaussian fluctuation still holds true.


Introduction
Motivated by the Breuer-Major central limit theorem (CLT) [2] and recent studies on the spatial averages of SPDEs [14,15,7], we devote this paper to seeking general conditions that lead to the Gaussian fluctuations of averages of Gaussian functionals.
It is clear that (1.1) defines an inner product, under which the space C_c^∞(R^d) can be extended into a real Hilbert space H. Furthermore, the mapping φ ∈ C_c^∞(R^d) → W(φ) extends to a linear isometry between H and the Gaussian Hilbert space spanned by W. We write W(φ) = ∫_{R^d} φ(x) W(dx) and E[W(φ)W(ϕ)] = ⟨φ, ϕ⟩_H, for any φ, ϕ ∈ H. This gives us an isonormal Gaussian process over H. Now consider a real random variable F ∈ L²(Ω) that is measurable with respect to W and has the following Wiener chaos expansion: (1.2) where I^W_p(·) denotes the pth multiple stochastic integral with respect to W and f_p belongs to the symmetric subspace H^{⊙p} of the pth tensor product H^{⊗p}, for every p ∈ N; see [21] for more details. Throughout the paper we will denote by Π_p F the orthogonal projection of F onto the pth Wiener chaos.
(1.3) with f_p^x(y_1, . . . , y_p) := f_p(y_1 − x, . . . , y_p − x) for any x, y_1, . . . , y_p ∈ R^d and p ∈ N (see Footnote 1). Here is another look at the above definition. For any x ∈ R^d and any ϕ ∈ C_c^∞(R^d), we write ϕ^x(y) = ϕ(y − x) and we introduce the shifted Gaussian field W^x, defined by W^x(φ) = W(φ^x), for any φ ∈ C_c^∞(R^d), and by extension for any φ ∈ H. The family W^x has the same covariance structure as W and the associated multiple stochastic integrals satisfy I^{W^x}_p(f) = I^W_p(f^x) for any f ∈ H^{⊙p}, so that U_x F(W) := F(W^x) gives us (1.3). Let F be given as in (1.2). We are interested in the spatial averages of U_x F over B_R = {x ∈ R^d : ‖x‖ ≤ R}, with the particular aim of finding general conditions on the kernels {f_p, p ∈ N} and on the covariance kernel γ (and/or the associated spectral measure µ) under which (1.4) holds, where σ(R) is a normalization constant and N(m, v²) stands for a real normal distribution with mean m and variance v².
To illustrate how this spatial averaging is related to the aforementioned Breuer-Major theorem and to give a flavor of our results, we provide below a particular case (see Example 1.2) and refer to Section 2 for more general results. Let us first recall the continuous-time Breuer-Major theorem (in a slightly different form). Theorem 1.1. Suppose g ∈ L²(R, e^{−x²/2}dx) has the following orthogonal expansion in Hermite polynomials {H_p(x) = (−1)^p e^{x²/2} (d^p/dx^p) e^{−x²/2}, p ∈ N}: g = Σ_{p≥m} c_p H_p with c_m ≠ 0, where m ≥ 1 is known as the Hermite rank of g.
Let Y = {Y_x, x ∈ R^d} be a centered Gaussian stationary process with covariance function E[Y_a Y_b] = ρ(a − b) such that ρ(0) = 1. Under the condition ρ ∈ L^m(R^d, dx), we have, as R → +∞, that R^{−d/2} ∫_{B_R} g(Y_x) dx converges in law to N(0, σ²), with σ² = ω_d Σ_{p≥m} p! c_p² ∫_{R^d} ρ(x)^p dx and ω_d being the volume of B_1; see also [3,25].
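The Hermite expansion in Theorem 1.1 can be illustrated numerically. The following sketch (our own illustration; the helper `hermite_coeffs` and all numerical choices are assumptions, not from the paper) computes the coefficients c_p = E[g(Z)H_p(Z)]/p! by Gauss-Hermite quadrature and reads off the Hermite rank of g(x) = x² − 1 = H_2(x):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from numpy.polynomial.hermite import hermgauss

def hermite_coeffs(g, max_p=8, n_quad=64):
    # c_p = E[g(Z) H_p(Z)] / p! for Z ~ N(0,1), via Gauss-Hermite quadrature;
    # numpy's "HermiteE" basis matches the probabilists' Hermite polynomials H_p.
    t, w = hermgauss(n_quad)              # nodes/weights for the weight e^{-t^2}
    x = np.sqrt(2.0) * t                  # substitute so the weight becomes Gaussian
    prob_w = w / np.sqrt(np.pi)           # Gaussian probability weights (sum to 1)
    H = hermevander(x, max_p)             # columns: H_0(x), ..., H_{max_p}(x)
    raw = (prob_w[:, None] * H * g(x)[:, None]).sum(axis=0)
    facts = np.array([math.factorial(p) for p in range(max_p + 1)])
    return raw / facts

c = hermite_coeffs(lambda x: x**2 - 1)    # g = H_2, so c_0 = c_1 = 0, c_2 = 1
rank = next(p for p, cp in enumerate(c) if abs(cp) > 1e-8)
```

Since g = H_2 here, the computed Hermite rank is 2, matching the definition in Theorem 1.1.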
Example 1.2. Now fix a unit vector e ∈ H and put F = g(W(e)); then U_x F = g(W^x(e)) = g(Y_x), with Y_x = W(e^x). If g ∈ L²(R, e^{−x²/2}dx) has Hermite rank m ≥ 1 and ρ(x) := E[Y_0 Y_x] belongs to L^m(R^d, dx), then Theorem 1.1 produces an example of (1.4). Note that in this example the Gaussian functional F = g(W(e)) depends only on one coordinate, while our principal concern is Gaussian functionals that may depend on infinitely many coordinates.
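The Gaussian fluctuation in Theorem 1.1 can also be observed in simulation. Below is a Monte Carlo sketch in a discrete one-dimensional analogue (the AR(1) correlation ρ(k) = a^{|k|} and all sample sizes are our choices, not from the paper): for g = H_2, the normalized partial sums have variance close to 2 Σ_k ρ(k)² and fourth moment close to the Gaussian value 3.

```python
import numpy as np

rng = np.random.default_rng(42)
a, n, reps = 0.5, 2000, 4000
# Stationary AR(1) Gaussian sequence: rho(k) = a^{|k|} is square-summable,
# so the Hermite-rank-2 condition rho in l^2 holds.
Y = np.empty((reps, n))
Y[:, 0] = rng.standard_normal(reps)
for k in range(1, n):
    Y[:, k] = a * Y[:, k - 1] + np.sqrt(1 - a**2) * rng.standard_normal(reps)
S = (Y**2 - 1).sum(axis=1) / np.sqrt(n)       # g = H_2, normalized partial sums
sigma2 = 2 * (1 + a**2) / (1 - a**2)          # limit variance 2 * sum_k rho(k)^2
kurt = np.mean(S**4) / np.mean(S**2) ** 2     # should approach 3 (Gaussian limit)
```

With a = 0.5 the limit variance is 10/3, and the empirical kurtosis is close to 3, consistent with asymptotic normality.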
Recall the chaos expansions (1.2) and (1.3); from now on, we consider the case where F has Hermite rank m ≥ 1, meaning that f_p = 0 for all p < m and f_m ≠ 0. (Footnote 1: For a generalized function f ∈ H, we can define f^x as follows. Let {f_n, n ∈ N} ⊂ C_c^∞(R^d) be an approximating sequence of f in H; we can define f_n^x for each n ∈ N and f^x to be the limit of the Cauchy sequence {f_n^x, n ∈ N} in H. It is routine to verify that the definition of f^x does not depend on the particular choice of the approximating sequence; in this case, we write f^x for the limit.) In view of Hu and Nualart's chaotic central limit theorem [11], based on the Fourth Moment Theorems of Nualart, Peccati and Tudor [23,26], it is enough to look for conditions that guarantee the central limit theorem on each fixed chaos, provided one has some uniform control of the variance of each chaotic component. More precisely, we have the following general result. Theorem 1.3. Consider a sequence of centered square-integrable random variables (F_n, n ∈ N) with Wiener chaos expansions F_n = Σ_{q≥1} I^W_q(f_{q,n}), where f_{q,n} ∈ H^{⊙q} for each q, n ∈ N. Suppose that: (i) for every q ≥ 1, q! ‖f_{q,n}‖²_{H^{⊗q}} → σ²_q, as n → +∞; (ii) for every q ≥ 2 and every r ∈ {1, . . . , q − 1}, ‖f_{q,n} ⊗_r f_{q,n}‖_{H^{⊗(2q−2r)}} → 0, as n → +∞; (iii) lim_{N→+∞} lim sup_{n→+∞} Σ_{q≥N} q! ‖f_{q,n}‖²_{H^{⊗q}} = 0.
Then, as n → ∞, F_n converges in law to N(0, σ²), with σ² = Σ_{q≥1} σ²_q. We refer to [20,22] for more details on this result and to Section 2 for the definition of the r-contraction ⊗_r. Now let us look at the central limit theorem on each chaos. We fix an integer p ≥ 2 and put G_{p,R} = I^W_p(g_{p,R}) with σ²_{p,R} := Var(G_{p,R}). Assume σ_{p,R} > 0 for large R; then, according to the Fourth Moment Theorem of Nualart and Peccati [23], we know that G_{p,R}/σ_{p,R} converges in law to N(0,1) provided the contraction condition (1.5) holds. Moreover, we have the following rate of convergence in the total variation distance, as a consequence of the Nourdin-Peccati bound (see [20, Chapter 5]): d_{TV}(G_{p,R}/σ_{p,R}, N(0,1)) ≤ C σ_{p,R}^{−2} Σ_{r=1}^{p−1} ‖g_{p,R} ⊗_r g_{p,R}‖_{H^{⊗(2p−2r)}}. (1.6) Throughout this paper, we write C for immaterial constants that may vary from line to line.
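The fourth-moment phenomenon behind (1.6) can be seen in the simplest second-chaos example. The sketch below (our own illustration: F_n = (2n)^{−1/2} Σ_i H_2(X_i) for i.i.d. standard Gaussians X_i, so that E[F_n⁴] = 3 + 12/n exactly) checks numerically that the fourth moment approaches the Gaussian value 3.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 20000
X = rng.standard_normal((reps, n))
# Second-chaos random variables F_n = (2n)^{-1/2} sum_i H_2(X_i); Var(F_n) = 1 exactly.
F = (X**2 - 1).sum(axis=1) / np.sqrt(2.0 * n)
m2, m4 = np.mean(F**2), np.mean(F**4)
# Exact computation gives E[F_n^4] = 3 + 12/n, so m4 is close to 3 for large n,
# in line with the fourth-moment criterion for asymptotic normality.
```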
In the first part of this paper (Section 2), we will exploit the above ideas to derive sufficient conditions for (1.4) to hold, with σ(R) growing like C R^{d/2}. Note that this order of σ(R) matches the result in Theorem 1.1. Without introducing further notation, we provide another example of (1.4), which is a corollary of our main result (Theorem 2.15); see Remark 2.16. Theorem 1.4. Let the above notation prevail. Assume γ(0) ∈ (0, ∞) and γ ∈ L^m(R^d, dx), where m ≥ 1 is the Hermite rank of F. If we assume in addition that the kernels f_p ∈ L¹(R^{pd}) ∩ H^{⊙p}, p ≥ m, satisfy Σ_{p≥m} p! γ(0)^p ‖f_p‖²_{L¹(R^{pd})} < +∞, (1.7) then, as R → +∞, R^{−d/2} ∫_{B_R} U_x F dx converges in law to a centered Gaussian; here and in the sequel we write s_p = (s_1, . . . , s_p) and dt_p = dt_1 · · · dt_p. One may want to compare our Theorem 1.4 with Theorem 1.1 and Example 1.2. We refer the readers to Section 2 for more results of this flavor, and here we briefly give a literature overview: 1. To the best of our knowledge, problem (1.4) first received attention in the 1976 paper [18] by Maruyama, using the method of moments. Proofs and extensions of Maruyama's CLT were published in his 1985 paper [19].
2. In 1983, Breuer and Major provided a CLT [2], motivated by the non-central limit theorems of Dobrushin, Major, Rosenblatt and Taqqu during 1977-1981 (see [8,17,27,28,29]). Unlike these works, Breuer and Major were interested in the asymptotic normality of nonlinear functionals over stationary Gaussian fields whose correlation function decays fast enough. Although the Breuer-Major theorem (see Theorem 1.1) takes a simpler form than Maruyama's CLT, it has found a tremendous number of applications in theory and practice.
3. Chambers and Slud established further extensions of Maruyama's CLT in [4] and obtained the Breuer-Major theorem as a corollary (assuming the existence of a spectral density). In both [4] and Maruyama's work [18,19], the starting point is a real stationary Gaussian process with time shifts {U_s, s ∈ R}, and the chaos expansion is formulated in terms of the spectral (probability) measure.
4. In the present work, we provide sufficient conditions for (1.4) in terms of the spectral measure. Comparing our assumptions based on the spectral measure with those in [4], both sets of assumptions essentially cover our Theorem 1.4 as a particular case, while they are different in their full generality. Moreover, we also provide sufficient conditions for (1.4) in terms of the covariance kernel.
Our methodology from the first part can be applied to the study of spatial averages of the stochastic heat equation driven by Gaussian colored noise, and this constitutes the second part of our paper. More precisely, we consider the following stochastic heat equation with a multiplicative Gaussian colored noise on R_+ × R^d: (1.8), where the Laplacian Σ_i ∂²_{x_i} acts only on the space variables and the initial condition is fixed to be u_{0,x} ≡ 1.
for any φ, ψ ∈ C_c^∞(R_+ × R^d), where F denotes the Fourier transform with respect to the spatial variables and the following two conditions are satisfied: (1) γ_0 is locally integrable and nonnegative-definite; (2) γ_1 is a measure such that γ_1 = F µ_1 for some nonnegative tempered measure µ_1, called the spectral measure, satisfying Dalang's condition (see e.g. [6]) (1.10). If γ_1 is absolutely continuous with respect to the Lebesgue measure on R^d, we still denote by γ_1 its density, and then γ_1(R^d) = ∫_{R^d} γ_1(z) dz; we will use this notation even if γ_1 is a measure. The basic example is d = 1 and γ_1 = δ_0, in which case µ_1 is (2π)^{−1} times the Lebesgue measure. We point out that (1.9) defines an inner product, under which C_c^∞(R_+ × R^d) can be extended into a Hilbert space H. As we did before, we can build an isonormal process, where the sum runs over the permutation group S_p of {1, . . . , p}. Quite often in this paper, we write f(s_p, y_p) for f(s_1, y_1, . . . , s_p, y_p), whenever it is convenient. For each t ≥ 0, let F_t be the σ-algebra generated by {W(φ) : φ is continuous with support contained in [0, t] × R^d}. We say that a random field u = {u_{t,x}, (t, x) ∈ R_+ × R^d} is adapted if, for each (t, x), the random variable u_{t,x} is F_t-measurable.
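Dalang's condition (1.10) can be checked concretely for the Riesz kernel considered later, for which µ_1 has density proportional to ‖ξ‖^{β−d}. In radial coordinates the condition reduces, up to constants, to ∫_0^∞ r^{β−1}(1 + r²)^{−1} dr < ∞, which holds exactly when β ∈ (0, 2). A small numerical sketch (our own illustration; the integration cutoffs are arbitrary choices):

```python
import numpy as np

def radial_integral(beta, M, n=1_000_000):
    # Trapezoid-rule approximation of int_0^M r^{beta-1} / (1 + r^2) dr,
    # the radial form of Dalang's condition for the Riesz spectral density.
    r = np.linspace(1e-8, M, n)
    f = r ** (beta - 1.0) / (1.0 + r**2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))

# beta = 1 < 2: the integral stabilizes as the cutoff grows (condition holds);
# beta = 2: the integral keeps growing like log(M) (condition fails).
growth_ok = radial_integral(1.0, 1e4) - radial_integral(1.0, 1e2)
growth_bad = radial_integral(2.0, 1e4) - radial_integral(2.0, 1e2)
```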
We interpret equation (1.8) in the Skorokhod sense and recall the definition of a mild solution from [9, Definition 3.1]. Definition 1.5. An adapted random field u = {u_{t,x}, t ≥ 0, x ∈ R^d} such that E[u²_{t,x}] < +∞ for all (t, x) is said to be a mild solution to equation (1.8) with initial condition u_{0,·} = 1 if, for any t ∈ R_+ and x ∈ R^d, the process The above stochastic heat equation has a unique mild solution u with explicit Wiener chaos expansion (see [9]), where f_{t,x,n}(s_n, y_n) = (1/n!) Π_{i=0}^{n−1} G(s_{σ(i)} − s_{σ(i+1)}, y_{σ(i)} − y_{σ(i+1)}), (1.11) with σ ∈ S_n being such that t > s_{σ(1)} > · · · > s_{σ(n)} > 0. In the above expression we have used the convention s_{σ(0)} = t and y_{σ(0)} = x. We also refer interested readers to [10,13] for more general noises.
Notice that u_{t,x} − E[u_{t,x}] has Hermite rank 1, and it is known that, for any fixed t ∈ R_+, {u_{t,x} : x ∈ R^d} is strictly stationary, meaning that the finite-dimensional distributions of the process {u_{t,x+y}, x ∈ R^d} do not depend on y. So the following integral resembles the object in (1.4), and we are able to establish its Gaussian fluctuation under some mild assumptions. The spatial averages (1.12) have been studied in the recent articles [14,15,7]: (i) Huang, Nualart and Viitasaari [14] initiated this line of study by looking at the one-dimensional (nonlinear) stochastic heat equation driven by a space-time white noise.
(ii) Huang, Nualart, Viitasaari and Zheng [15] continued to study the d-dimensional stochastic heat equation driven by Gaussian noise that is white in time and colored in space, with the spatial covariance described by the Riesz kernel.
In the above references, the Gaussian noise is assumed to be white in time, which gives rise to a martingale structure. This is important for applying Itô calculus (e.g. Burkholder-Davis-Gundy inequality and Clark-Ocone formula) to obtain quantitative central limit theorems for (1.12).
In the present paper, we consider a linear stochastic heat equation driven by space-time colored noise, so Itô calculus cannot be applied anymore; however, due to the linearity, an explicit chaos expansion of the solution is available, which allows us to apply the chaotic central limit theorem (Theorem 1.3).
We define and let Π p A t (R) be the projection of A t (R) on the pth Wiener chaos, that is, Throughout this paper, we assume that γ 0 , γ 1 are nontrivial, meaning that for any t > 0. The following is our main result.
with X 1 , X 2 two independent standard Brownian motions on R d . If in addition, there exist some t 0 > 0 and some α ∈ (0, 1/2) such that Notice that (1.14) is satisfied when γ 0 = δ 0 . In this case γ 0 is not a function but the result can be properly formulated.
One may ask what happens if γ_1(R^d) is not finite; this includes an important example, the Riesz kernel γ_1(z) = ‖z‖^{−β} with β ∈ (0, 2 ∧ d).
(1) Assume that µ_1 admits a density ϕ_1 that satisfies As a consequence, we have (2) When γ_1(z) = ‖z‖^{−β} for some β ∈ (0, 2 ∧ d), we have Note that the Riesz kernel in part (2) satisfies (1.17). In particular, in dimension one, β ∈ (1/2, 1) corresponds to the fractional noise with Hurst parameter H ∈ (1/2, 3/4). Remark 1.8. Unlike previous studies, we consider a noise that is colored in time, and our results complement, in particular, those in [14,15]. In [14], where the noise is white in space and time, the authors were able to obtain the chaotic central limit theorem for the linear equation (parabolic Anderson model), proving also a rate of convergence in the total variation distance. The quantitative CLT in the case γ_0 = δ_0 and γ_1(z) = ‖z‖^{−β} was obtained in [15] for the nonlinear equation, and the authors of [15] also proved that for the linear equation the first chaos is dominant, so the central limit theorem is not chaotic.
We point out that in both parts of Theorem 1.7 the first chaos dominates, that is, the central limit theorem is not chaotic. Moreover, we are able to provide the following functional version of Theorem 1.7. Theorem 1.9. Suppose γ 0 : R → R + ∪ {∞} is locally integrable and γ 1 (R d ) = +∞.
(1) Let the assumptions in part (1) of Theorem 1.7 hold and assume in addition that condition (1.14) is satisfied. We put then, as R → ∞, the process {R^{−d/2} A_t(R) : t ∈ R_+} converges in law to a centered continuous Gaussian process G with covariance given by (2) If condition (1.14) is satisfied for some α ∈ (0, 1/2) and γ_1(z) = ‖z‖^{−β} for some β ∈ (0, 2 ∧ d), then the process {R^{−d+β/2} A_t(R) : t ∈ R_+} converges in law to a centered continuous Gaussian process G, as R → ∞. Here the covariance structure of G is given We will organize the rest of our article into three sections. Section 2 begins with a subsection on some preliminary knowledge, where we provide some important lemmas for our later analysis. We devote Section 2.2 to the investigation of the central limit theorems on a fixed chaos, by looking at assumptions on the covariance kernel and on the spectral measure separately. We derive the corresponding chaotic central limit theorems in Section 2.3. Section 3 is devoted to the proof of Theorems 1.6, 1.7 and 1.9. For Theorem 1.6, we show the convergence of the finite-dimensional distributions and the tightness. Theorem 1.7 and Theorem 1.9 are proved as by-products of the estimates in the proof of Theorem 1.6. Finally, Section 4 provides the proofs of some technical results stated in previous sections.

Preliminaries
In this section, we introduce some notation for later reference and we provide several lemmas needed for our proofs.
Recall from our introduction that {W(h), h ∈ H} is an isonormal Gaussian process such that for any φ, ψ ∈ H, where γ is the covariance kernel and µ is the spectral measure whose Fourier transform is γ, understood in the generalized sense. Let H_µ be the Hilbert space of functions g : R^d → C such that g(−x) coincides with the complex conjugate of g(x) for µ-almost every x ∈ R^d and We write z̄ for the complex conjugate of z ∈ C. It is clear that the Fourier transform stands as a linear isometry from H to H_µ.
For any integer p ≥ 2, let H^{⊗p} (resp. H^{⊙p}) be the pth tensor product (resp. symmetric tensor product) of H. Note that for any integer p ≥ 2, the pth multiple stochastic integral I^W_p is a linear and continuous operator from H^{⊗p} into L²(Ω). We can define the spaces H^{⊗p}_µ and H^{⊙p}_µ in the obvious manner. To simplify the display, we introduce some compact notation below.
Notation A: For any R > 0, B R (x) stands for the d-dimensional Euclidean (closed) ball centered at x with radius R and we have used B R for B R (0). We write vol(A) for the volume of A ⊂ R d and ω d = vol(B 1 ). We use · to denote the Euclidean norm in any dimension.
For p positive, we denote by J_p the Bessel function of the first kind with order p: J_p(x) = ((x/2)^p / (Γ(p + 1/2)Γ(1/2))) ∫_0^π (sin θ)^{2p} cos(x cos θ) dθ, x ∈ R; (2.1) see [16, (5.10.4)]. Here Γ denotes Euler's Gamma function. Let us also record (2.2) here.
where J d/2 is the Bessel function of the first kind with order d/2.
(2) Given a positive real number p, we have As a consequence, we have sup{|J_p(x)| : x ∈ R_+} < +∞ and |J_p(x)| ≤ C|x|^{−1/2} for any x ∈ R, where C is some absolute constant.
is an approximation of the identity.
Proof. (1) Let us first suppose that R = 1. In this case, one sees that the Fourier transform of 1_{‖u‖≤1} is rotationally symmetric, so without loss of generality we assume ξ = (0, . . . , 0, ρ) with ρ = ‖ξ‖ > 0. Then for d ≥ 2, where the last equality follows from the expressions (2.2) and (2.1). That is, for d ≥ 2, The above equality also holds true for d = 1, as one can verify by a direct computation of both sides. So the result in part (1) is established for R = 1. The general case follows from a change of variables.
(2) The asymptotic behavior of Bessel functions can be found in, e.g., [16, page 134]. The uniform boundedness of J_p on R_+ follows immediately from this asymptotic behavior. By (2.3), we can find some L > 0 such that |J_p(x)| ≤ C|x|^{−1/2} for |x| ≥ L. (3) It suffices to show that the total mass equals 1. It follows from part (1) that where the interchanges of integrals and limits are valid due to the dominated convergence theorem. Our proof of this lemma is finished.
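Lemma 2.1(1) can be verified numerically in dimension one, where the closed form J_{1/2}(x) = √(2/(πx)) sin x is available. The sketch below (our own illustration) evaluates the Poisson integral representation (2.1) by quadrature and checks it against the Fourier transform of the interval indicator:

```python
import numpy as np
from math import gamma, pi, sqrt

def bessel_J(p, x, n=20000):
    # Poisson integral representation, cf. (2.1):
    # J_p(x) = (x/2)^p / (Gamma(p+1/2) Gamma(1/2)) * int_0^pi (sin t)^{2p} cos(x cos t) dt
    theta = np.linspace(0.0, pi, n)
    f = np.sin(theta) ** (2.0 * p) * np.cos(x * np.cos(theta))
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(theta))  # trapezoid rule
    return (x / 2.0) ** p / (gamma(p + 0.5) * gamma(0.5)) * integral

x = 2.7
closed = sqrt(2.0 / (pi * x)) * np.sin(x)     # closed form of J_{1/2}(x)
# Lemma 2.1(1) in d = 1: int_{-1}^{1} e^{i xi u} du = 2 sin(xi)/xi
#                        = (2 pi)^{1/2} xi^{-1/2} J_{1/2}(xi)
lhs = 2.0 * np.sin(x) / x
rhs = sqrt(2.0 * pi) * x ** -0.5 * bessel_J(0.5, x)
```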
The following lemma has its discrete analogue in [20, (7.2.7)] and, for the sake of completeness, we provide a short proof; see also [25, (3.3)]. Lemma 2.2. Suppose φ : R^d → R belongs to L^p(R^d, dx) for some positive number p. Then for any r ∈ (0, p), one has Proof. Fix δ ∈ (0, 1). We deduce from Hölder's inequality that Note that for any fixed δ ∈ (0, 1), the second term goes to zero as R → +∞, while the first term can be made arbitrarily small by choosing δ sufficiently small.
At the end of this section, we record a consequence of Young's inequality.
Proof. Young's convolution inequality states that As a consequence, we obtain the following inequalities: This completes the proof of (2.5).
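Young's convolution inequality ‖f ∗ g‖_r ≤ ‖f‖_p ‖g‖_q, with 1 + 1/r = 1/p + 1/q, can be sanity-checked on a grid. The following discrete sketch (our own illustration; the test functions are arbitrary choices) approximates the convolution and the norms in d = 1:

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 8001)
dx = x[1] - x[0]
f = np.exp(-np.abs(x))            # f in L^1, ||f||_1 = 2 (up to truncation)
g = 1.0 / (1.0 + x**2)            # g in L^2
conv = np.convolve(f, g, mode="same") * dx        # (f * g) on the same grid

def lp_norm(h, p):
    return (np.sum(np.abs(h) ** p) * dx) ** (1.0 / p)

p, q, r = 1.0, 2.0, 2.0           # 1 + 1/2 = 1/1 + 1/2
lhs, rhs = lp_norm(conv, r), lp_norm(f, p) * lp_norm(g, q)
# Young's inequality predicts lhs <= rhs
```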
Recall from our introduction that we consider the case where F = k≥m I W k (f k ) has Hermite rank m ≥ 1 with f k ∈ H k for each k ≥ m. We write In what follows, we first investigate the central limit theorem on each chaos based on two sets of assumptions. One involves the covariance kernel γ and the other is based on the spectral measure µ. This is the content of Section 2.2, and in Section 2.3, we consider the case where F has a general chaos expansion. In each situation, the random variable may depend on infinitely many coordinates, which shall be distinguished from the classical Breuer-Major theorem.

Central limit theorems on a fixed chaos
Fix an integer p ≥ 2 and note that the random field we have, with the notation G_{p,R} = I^W_p(g_{p,R}), Indeed, since the relevant factor is bounded by one and converges to one as R → +∞, (2.7) follows from (2.6) and the dominated convergence theorem. This fact leads us to focus on the situation where the normalization σ(R) in (1.4) is of order R^{d/2}, as R → +∞.
Such an order is also consistent with the Breuer-Major theorem (see Theorem 1.1).
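The order R^{d/2} of σ(R) can be made concrete in dimension one, where Var(∫_{B_R} Y_x dx) = ∫ ρ(z)(2R − |z|) dz for a stationary field with integrable covariance ρ (here B_R = [−R, R], of volume 2R). A small numerical sketch (our own, with a toy Gaussian covariance as an assumption) shows the normalized variance approaching ∫_R ρ(z) dz:

```python
import numpy as np

rho = lambda z: np.exp(-z**2 / 2)         # toy integrable covariance, integral sqrt(2 pi)
z = np.linspace(-200.0, 200.0, 400001)
dz = z[1] - z[0]

def var_BR(R):
    # Var(int_{-R}^{R} Y_x dx) = int rho(z) * (2R - |z|)_+ dz
    w = np.clip(2 * R - np.abs(z), 0.0, None)
    return np.sum(rho(z) * w) * dz

# Dividing by vol(B_R) = 2R, the ratio increases to int rho = sqrt(2 pi),
# so the standard deviation grows like R^{1/2} = R^{d/2} for d = 1.
ratios = [var_BR(R) / (2 * R) for R in (10.0, 100.0, 1000.0)]
```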

CLT under assumptions on the covariance kernel
We write Therefore, a sufficient condition for (2.6) to hold is the following hypothesis: Then, under (H1), Suppose that γ ∈ L^p(R^d) and f_p ∈ L¹(R^{pd}). Then hypothesis (H1) is satisfied. In fact, using Hölder's inequality, we obtain Under these necessary conditions, it is clear that (ii) Here is an example of a non-integrable covariance kernel: γ(x) = ‖x‖^{−β}, with β ∈ (0, d). Now let us search for a sufficient condition for κ_p to be well defined. Notice that which is finite. Thus, we only need to control the integral at infinity. Notice that for L > 0 large (possibly depending on the a_i's), there exist two constants C_1, C_2 such that Then the finiteness of the integral at infinity is equivalent to p > d/β. In other words, the function κ_p, given in (2.8), makes sense only for p > d/β. This forces us to consider chaoses of order at least d/β. The following result is a central limit theorem under some restrictions on γ.
Theorem 2.5. Fix an integer p ≥ 2, f_p ∈ H^{⊙p}, and assume that hypothesis (H1) holds. Moreover, suppose that one of the following two conditions holds true: Proof. In view of the Fourth Moment Theorem of Nualart and Peccati [23], to prove this central convergence it suffices to establish (Footnote 2: If h_1, . . . , h_p ∈ H, we denote by sym(h_1 ⊗ · · · ⊗ h_p) the symmetrization of the tensor product h_1 ⊗ · · · ⊗ h_p: where S_p is the permutation group on the first p positive integers.)
As a consequence, Shifting the variables from the kernels to the covariance, we write Making the change of variables The rest of our proof will be split into two cases.
Proof under (i). Using the tensor-product structure of the kernels, we can further bound (2.11) by In view of (2.9), the function φ belongs to L^p(R^d). It follows immediately from Hölder's inequality that Then we can conclude the proof under condition (i) by using Lemma 2.2.
Proof under (ii). Note first that due to Hölder's inequality, which implies that (2.11) can be further bounded by Note that by Hölder's inequality and Lemma 2.2, Thus, it follows from the dominated convergence theorem that, as R → ∞, This completes the proof.

CLT under assumptions on the spectral measure
Let us first study the asymptotic variance using the Fourier transform. Throughout this section, we are going to assume that µ(dξ) = ϕ(ξ)dξ, that is, the spectral measure is absolutely continuous with respect to the Lebesgue measure on R d . Note that ϕ(ξ) = ϕ(−ξ).
We first write, where τ(ξ_p) := ξ_1 + · · · + ξ_p. As a consequence of Lemma 2.1, we obtain We remark that Ψ_p is defined almost everywhere on R^d and recall that ℓ_R(x)dx is an approximation of the identity. Therefore, it is natural to introduce the following hypothesis: (H2) Ψ_p, defined in (2.13), is uniformly bounded on R^d and continuous at zero. (2.14) Note that in the particular case p = 1, if ϕ is uniformly bounded and continuous at zero, then the function Ψ_1 is uniformly bounded and continuous at zero.
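The approximation-of-identity mechanism behind (H2) can be illustrated in dimension one: by Lemma 2.1, R^{−1}|F 1_{[−R,R]}(ξ)|² = 4 sin²(Rξ)/(Rξ²), which, after division by its total mass 4π, gives the Fejér-type kernel K_R(ξ) = sin²(Rξ)/(πRξ²). The sketch below (our own illustration; the test function is an arbitrary choice) checks that ∫ K_R f → f(0):

```python
import numpy as np

N = 1_200_000
dxi = 120.0 / N
xi = -60.0 + (np.arange(N) + 0.5) * dxi       # midpoint grid on [-60, 60], avoids xi = 0
f = np.cos(xi) * np.exp(-xi**2 / 8.0)         # bounded, continuous at 0, f(0) = 1

def smoothed(R):
    # K_R integrates to 1 and concentrates at 0 as R grows
    K = np.sin(R * xi) ** 2 / (np.pi * R * xi**2)
    return np.sum(K * f) * dxi

v5, v50 = smoothed(5.0), smoothed(50.0)       # both near f(0) = 1, v50 closer
```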

Remark 2.6.
(1) Heuristically, we can rewrite Ψ_p(0) as follows: where ν is the surface measure on the hyperplane {τ(ξ_p) = 0}. This is an informal expression, because the trace of F f_p on the hyperplane {τ(ξ_p) = 0} is not properly defined for an arbitrary kernel f_p.
(2) Notice that the quantity is integrable with respect to the probability measure ℓ_R(x)dx. We can also read from (2.14) that the function To obtain the Gaussian fluctuation of G_{p,R}, one should first establish the order of the variance and then compute the contractions. Our hypothesis (H2) gives the exact asymptotic behavior of Var(G_{p,R}). In fact, it is enough to impose a weaker condition concerning the variance, known as Maruyama's condition; see [18].
We will provide a proof of Proposition 2.7 in Section 4, see also [4, Corollary 2.2].
The following lemma provides sufficient conditions for (H2) to hold. One of the conditions is ϕ ∈ L q (R d ), which is the condition imposed on the spectral density in the version of the classical Breuer-Major theorem proved in [1, Theorem 2.10].
The proof of Lemma 2.8 is given in Section 4.

Remark 2.9.
It is worth comparing the sufficient conditions for hypotheses (H1) and (H2) here: This is natural in view of the Hausdorff-Young inequality.
Note that both hypotheses imply that the fluctuation of G p,R is of order R d/2 ; moreover, as we will see shortly, both hypotheses (γ ∈ L p (R d ) and ϕ ∈ L q (R d )) imply that the fluctuation of G p,R is Gaussian, as R tends to infinity.
Let us introduce the following hypothesis, which can be seen as the contraction analogue of (H2).
For the sake of completeness, we provide a proof in Section 4. Theorem 2.11. Fix an integer p ≥ 2 and f p ∈ H p satisfying hypotheses (H2) and (H3). Then, with σ p,R being the standard deviation of G p,R .
We will omit the proof of this corollary, as it follows simply from Proposition 2.7 and the following proof of Theorem 2.11.
Proof of Theorem 2.11. It suffices to show the contraction condition (1.5). We split the proof into several steps. We will use the Fourier transform to rewrite (2.10) in Steps 1-3 and carry out the asymptotic analysis in Step 4.
Step 1: Plancherel's formula implies where F r denotes the Fourier transform with respect to the right-most r variables.
Step 2: Similarly, we have where F p−r denotes the Fourier transform with respect to the left-most p − r variables. It is clear that the composition of F p−r and F r is the usual Fourier transform.
Step 3: Using basic properties of the Fourier transform and combining the facts from the above steps, we deduce that the second integral in (2.10) is equal to It follows from Lemma 2.1 that Thus, we have for r ∈ {1, . . . , p − 1}, Step 4: In what follows, we prove that lim_{R→+∞} I_R = 0.
We decompose the above integral into two parts. To ease the presentation, we introduce, for every δ ∈ [0, ∞), Note that, by (2.12) and the symmetry of µ, we have which, under hypothesis (H2), converges to ω_d Ψ_p(0), as R → +∞. Now on R^{pd} × D_δ, we can write, using the Cauchy-Schwarz inequality, We claim that for any fixed δ > 0, T_δ(R) → 0, as R → +∞.
2 converges to zero, as R → +∞; and clearly, so claim (2.18) follows from the dominated convergence theorem. Therefore, the first part R pd ×D δ goes to zero, as R tends to infinity.
Then, it remains to estimate the integral over R pd × D c δ . Similarly, we obtain, by applying Cauchy-Schwarz inequality, Recall that µ is symmetric. We can write, after the change of variable ( η p−r and then applying Cauchy-Schwarz inequality, From previous discussion, it holds under hypothesis (H2) that sup T 0 (R) : R > 0 < +∞.
So it remains to show that K R → 0, as R → +∞.
Making the following change of variables which converges to zero, as δ ↓ 0. This concludes our proof.
Recall the Hilbert-space notation H_µ and H^{⊗p}_µ from the beginning of Section 2. It is clear that belongs to H^{⊗p}_µ for each R > 0, since F f_p ∈ H^{⊗p}_µ and ‖τ(ξ_p)‖^{−d/2} J_{d/2}(R‖τ(ξ_p)‖) is uniformly bounded for any given R > 0 (see Lemma 2.1). We can also define the corresponding contractions in this framework. For h_1 ∈ H^{⊗p}_µ and h_2 ∈ H^{⊗q}_µ (p, q ∈ N), their r-contraction, with 0 ≤ r ≤ p ∧ q, belongs to H^{⊗(p+q−2r)}_µ and is defined by One should not confuse this notion with the one introduced in Notation A.
With the notation F_R and ⊗_{r,µ}, we can rewrite I_R in (2.17) as follows: where we used the symmetry of F_R ⊗_{r,µ} F_R in its two groups of variables η_{p−r}, which follows simply from the definition of the contraction. Hence, we can formulate the following Fourth Moment Theorem.
Theorem 2.13. Fix an integer p ≥ 2 and f_p ∈ H^{⊙p}. Assume (H2), which implies, in view of (2.12), Then the following statements are equivalent: Therefore, we obtain the following estimates: if ‖F_R‖_{H^{⊗p}_µ} < ∞ and µ admits a spectral density, then by the dominated convergence theorem we have ‖F_R ⊗_{r,µ} F_R‖_{H^{⊗(2p−2r)}_µ} → 0, which implies the Gaussian fluctuation. So one may be tempted to assume (2.20), which, however, is not reasonable in our framework. In fact, (2.19) and (2.4) tell us that then the integral over {‖τ(ξ_p)‖ > 0} vanishes asymptotically, so that we can write This forces the integral in (2.21) to be zero by dominated convergence, so that σ²_p = 0.

Chaotic central limit theorems
As a continuation of the previous section, we consider the case of infinitely many chaoses and derive a chaotic central limit theorem. Recall that F ∈ L²(Ω) admits the following chaos expansion (1.2) with Hermite rank m ≥ 1: Let us introduce the following natural hypothesis: ∫ |γ|(t_i − s_i + z) dz < ∞.
Recall the notation κ_p from (2.8) and put So, under (H4), Note that an immediate consequence of hypothesis (H4) is the following result. In fact, one can write, similarly as before, Now we state our main result as a consequence of (2.23) and Theorems 2.5 and 1.3.
We can formulate another chaotic CLT based on the spectral measure.
Theorem 2.17. Suppose that F ∈ L²(Ω) admits the chaos expansion (1.2) with Hermite rank m ≥ 1. Assume that the spectral measure has a density. Suppose that for each p ≥ m, the function Ψ_p defined in (2.13) is continuous at zero and the following boundedness condition holds (which implies (H2) for each p): Assume additionally that hypothesis (H3) holds for each p ≥ m. Then, Proof. For m = 1, we need to consider the first chaos, and it is clear that R^{−d/2} G_{1,R} is centered Gaussian with variance tending to ω_d (2π)^d Ψ_1(0). Now let us consider the higher-order chaoses. For each p ≥ m ∨ 2, hypotheses (H2) and (H3) hold true. This implies that R^{−d/2} G_{p,R} converges in law to N(0, σ²_p), with σ_p introduced in Theorem 2.5. In view of the chaotic central limit theorem (Theorem 1.3), it remains to check condition (2.23). We can bound the tail sum over p ≥ N + 1 as follows: where the last inequality follows from the fact that ℓ_R(x)dx is a probability measure on R^d; so hypothesis (H4') implies (2.23). Hence, our proof is finished.
Corollary 2.18. Suppose that F ∈ L²(Ω) admits the chaos expansion (1.2) with Hermite rank m ≥ 1 and that for each p ≥ m, the kernel f_p belongs to L¹(R^{pd}) ∩ H^{⊙p}. Assume that the spectral measure µ is finite with spectral density ϕ such that ϕ is uniformly bounded and continuous at zero, and (2.24) Proof. Note that µ is finite, which is equivalent to ϕ ∈ L¹(R^d). Together with the boundedness of ϕ, this implies that ϕ ∈ L^q(R^d) for any q > 1. It is clear that for any p ≥ 2 ∨ m, f_p ∈ L¹(R^{pd}) ∩ H^{⊙p} and γ ∈ L^{p/(p−1)}(R^d), so Lemmas 2.10 and 2.8 ensure that hypotheses (H2) and (H3) are valid on the pth chaos. If F has a first-chaos component with f_1 ∈ L¹(R^d), then Ψ_1 is uniformly bounded and continuous at zero (the continuity of ϕ at zero is only required at this point). Therefore, R^{−d/2} G_{1,R} converges in law to a centered Gaussian random variable with variance (2π)^d Ψ_1(0).

It remains to notice that
so that (H4') holds in this setting. To see this, we write that is, (H4') is implied by (2.24). Hence, the proof is done by applying Theorem 2.17.
Proof of Theorems 1.6, 1.7 and 1.9

Let u_{t,x} be the mild solution to the linear stochastic heat equation (1.8) with initial condition u_{0,x} = 1 for all x ∈ R^d, driven by a Gaussian noise with temporal and spatial covariance kernels γ_0 and γ_1, respectively. We assume that γ_0 : R → [0, ∞] is locally integrable and that the Fourier transform of γ_1 is a nonnegative tempered measure µ_1 satisfying Dalang's condition (1.10).
Recall that, for any integer p ≥ 1, f t,x,p denotes the kernel appearing in the Wiener chaos expansion of u t,x (see (1.11)). Let us introduce some notation for later convenience.
Notation B. For given t > 0 and p ∈ N, we introduce the following object, which may be a generalized function.
Here is the plan for the proofs of Theorems 1.6 and 1.7. Section 3.1 deals with computing the limit of the covariance function of the process A t (R) as R → +∞, provided that γ 1 (R d ) is finite. Section 3.2 is devoted to the proof of the convergence of the finite-dimensional distributions, and in Section 3.3 we prove tightness under the extra assumption (1.14). As a by-product of the computations in Section 3.1, we provide a proof of Theorem 1.7 in Section 3.4.

Limiting covariance structure in Theorem 1.6
The main ingredient is the following Feynman-Kac representation.
where X 1 , X 2 are two independent standard Brownian motions on R d that start at zero.
We refer to [9, Theorem 3.6] for the proof of a more general statement. We point out that in this reference, the moment formula is stated for x = y and t = s (see equation (3.21) therein); the case x ≠ y or t ≠ s can be proved verbatim.
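In the spirit of [9, Theorem 3.6], the two-point moment formula meant here should read as follows; this display is a reconstruction under the paper's standing assumptions (the normalization of γ 0 , γ 1 and the absence of extra constants are assumptions):

```latex
\[
\mathbb{E}\big[u_{t,x}\,u_{s,y}\big]
\;=\; \mathbb{E}\exp\left(
\int_0^t\!\!\int_0^s \gamma_0(r-r')\,
\gamma_1\big(X^1_r - X^2_{r'} + x - y\big)\,\mathrm{d}r'\,\mathrm{d}r
\right),
\]
```

with X 1 , X 2 the two independent standard Brownian motions from the lemma.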
It follows from Lemma 3.1 that the covariance admits the expression displayed. Note that in our setting φ t,s (z) ≥ 1 for every z ∈ R d . Note also that, since γ 1 is integrable, the stated equality follows from Fubini's theorem, and the object β t,s (z) can be understood as the "weighted" intersection local time of the two independent Brownian motions X 1 and X 2 .
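Consistently with Lemma 3.1 and the Feynman-Kac representation, the quantities β t,s and φ t,s used below should be of the following form (a hedged reconstruction; the exact normalization is an assumption):

```latex
\[
\beta_{t,s}(z) \;:=\; \int_0^t\!\!\int_0^s \gamma_0(r-r')\,
\gamma_1\big(X^1_r - X^2_{r'} + z\big)\,\mathrm{d}r'\,\mathrm{d}r,
\qquad
\varphi_{t,s}(z) \;:=\; \mathbb{E}\big[e^{\beta_{t,s}(z)}\big]
\;=\; 1 + \sum_{p\ge 1} \frac{\mathbb{E}\big[\beta_{t,s}(z)^p\big]}{p!}.
\]
```

In particular, the integrability of φ t,s − 1 reduces to moment estimates on β t,s , which is exactly the route taken below.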
In order to show that ∫ R d (φ t,s (z) − 1) dz < ∞, we first estimate the pth moment of β t,s (z). Without loss of generality, we assume s ≤ t. Using that γ 1 is the Fourier transform of the spectral density ϕ 1 , which is continuous and bounded due to the finiteness of γ 1 (R d ), we can write E[β s,t (z) p ] as a quantity that is nonnegative, uniformly continuous and uniformly bounded in z. Indeed, it is clear that 0 ≤ E[β s,t (z) p ] ≤ E[β s,t (0) p ] < +∞, and the uniform continuity follows from the dominated convergence theorem. Then, by the monotone convergence theorem, we can pass to the limit. Recall from (3.2) that the finiteness of E[β s,t (0) p ] allows us to apply Fubini's theorem to get, for any ε > 0, the quantity T p,ε , which is finite.
Consider first the case p ≥ 2. Using that s ≤ t, we can bound T p,ε as follows, where Γ t := ∫ t −t γ 0 (u)du is finite for each t > 0 in view of the local integrability of γ 0 . Making the change of variables ξ p = (η 1 −η 2 , . . . , η p−1 −η p , η p ) yields, with the conventions s p+1 = 0 and η 0 = 0, the displayed expression. In the following, we will prove that Q p (η p ) is uniformly bounded and provide an estimate for it. We rewrite Q p (η p ) with h j (η) = exp(−w j η 2 /2). Using that ϕ 1 is bounded, we get a first bound; on the other hand, using (4.3), we obtain a second one, where the last inequality follows from Lemma 3.3 in [9], with the notation C N and D N . Notice that these quantities are finite for any N > 0 by condition (1.10). We fix N such that 0 < 4Γ t C N < 1. This gives us the uniform boundedness of Q p ; moreover, the resulting series is finite, since 0 < 4Γ t C N < 1.
To show the integrability of φ s,t − 1, it remains to check one more bound, which follows from (3.1). As a consequence, we have proved the limiting covariance formula for any s, t ∈ R + .

Convergence of the finite-dimensional distributions in Theorem 1.6
Fix 0 < t 1 < · · · < t n < ∞ and, for j = 1, . . . , n, define A R,j by the displayed integral; it admits the chaos expansion A R,j = Σ q≥1 I W q (g q,j,R ) with symmetric kernels g q,j,R .
Suppose that the following conditions (a)-(d) hold, where (a) concerns all i, j ∈ {1, . . . , n} and all q ≥ 1. Then A R converges in law to N (0, Σ) as R → +∞, where Σ = (σ i,j ) n i,j=1 is given by the displayed formula.

Proof of conditions (a), (b) and (d): It suffices to prove that, for any t, s ∈ R + and any p ≥ 1, p! g p,R (t), g p,R (s) H ⊗p converges to some limit, denoted by σ p (t, s); that for each t ≥ 0, Σ p≥1 σ p (t, t) < +∞ (3.9); and that the tail condition (3.10) holds as N → +∞. It is well known in the literature that the pth moment of β t,t (0) coincides with the variance of the pth chaotic component of the solution u t,x ; see for instance [12]. It is then natural to expect that our verification of condition (a) in Proposition 3.2 will resemble the computations done for E[β t,s (z) p ]. Moreover, we will see that condition (3.9) is a consequence of the finiteness of the integral ∫ R d (φ t,s (z) − 1) dz proved in Section 3.1. The verification of condition (3.10) will be straightforward, as a by-product of the computations in Section 3.1.
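Although the displays carrying conditions (a)-(d) of Proposition 3.2 are not reproduced above, the structure of the proof suggests they are of the following standard multivariate chaotic-CLT type; the list below is a reconstruction (normalizations and quantifiers are assumptions), not the verbatim statement:

```latex
\begin{itemize}
\item[(a)] $q!\,\langle g_{q,i,R},\, g_{q,j,R}\rangle_{H^{\otimes q}}
  \to \sigma_q(t_i,t_j)$ as $R\to+\infty$, for all $i,j$ and $q\ge 1$;
\item[(b)] $\sum_{q\ge 1} \sigma_q(t_i,t_i) < +\infty$ for each $i$;
\item[(c)] $\|g_{q,i,R} \otimes_r g_{q,i,R}\|_{H^{\otimes 2(q-r)}} \to 0$
  for all $i$, all $q\ge 2$ and all $1\le r\le q-1$;
\item[(d)] $\displaystyle\lim_{N\to\infty}\sup_{R>0}\sum_{q\ge N+1}
  q!\,\|g_{q,i,R}\|_{H^{\otimes q}}^2 = 0$ for each $i$.
\end{itemize}
```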
Let us start with the case p = 1. By an easy computation, we obtain (3.11), where the kernel is the approximation of the identity introduced in Point (3) of Lemma 2.1. Since γ 1 is integrable on R d , ϕ 1 is uniformly continuous and uniformly bounded. Then, taking the limit as R → +∞ in (3.11) yields the limit of σ 1 (t, s). Now let us consider the higher-order chaoses. For a fixed p ≥ 2, the kernel f t,x,p is a nonnegative function on R p + × R pd , so f t,x,p , f s,y,p H ⊗p ≥ 0. We first write, by using the Fourier transform in space, the corresponding expression. Note that for s σ p ∈ ∆ p (t), by the change of variables y 1 = x σ 1 − x, y j = x σ j − x σ j−1 for j ≥ 2, we can write, with X 1 a standard Brownian motion on R d as before, the displayed identity, so that, keeping in mind the above expressions and making the time changes in (3.12) (from s j to t − s j and from s j to s − s j , for j = 1, . . . , p), we arrive at (3.14); in particular, the inner product in (3.12) is indeed a function that depends only on the difference x − y. Furthermore, a quick comparison between (3.2) and (3.14) reveals that the only difference is that the variables inside the temporal covariance kernel are γ 0 (s j − r j ) in (3.2) and γ 0 (t − s j − s + r j ) in (3.14). Going through the same arguments that lead to (3.6) and (3.7), we get (with s ≤ t) the analogous estimates. This completes the verification of condition (a).

Proof of condition (c): Given t > 0 and 1 ≤ r ≤ p − 1, we need to prove that the corresponding contraction norm vanishes asymptotically. We follow the same routine that leads to (2.17). We put f(s p , y p ) = f t,0,p (s p , y p ), and in this way we have f t,x,p = f x , with f x being the spatially shifted version of f. Now we write (notice that we have the extra temporal variables now) the corresponding expression, where F f stands for the Fourier transform with respect to the spatial variables and we have used the short-hand notation a = τ (ξ r ), b = τ (η p−r ), a = τ ( ξ r ) and b = τ ( η p−r ).
Recall from previous steps that, with X 1 a standard Brownian motion on R d , the quantity displayed is a positive, bounded and uniformly continuous function in ξ p . As in the proof of Theorem 2.11 (Step 4), we decompose the integral in the spatial variable into two parts; that is, for any given δ > 0, we write the corresponding decomposition. Similarly to the arguments in Step 4 of the proof of Theorem 2.11, by using the Cauchy-Schwarz inequality several times, we obtain the stated bounds. We claim that V 11 is uniformly bounded and that V 12 vanishes asymptotically as R → +∞. In view of (3.16), making the change of variables t j = t − s j and η j = ξ 1 + · · · + ξ j for each j = 1, . . . , p, with η 0 = 0, we obtain, using (3.4), the required bound. In the same way, we can bound V 12 by a quantity that converges to zero as R tends to infinity. By the same arguments, we get the uniform boundedness of V 2 as R tends to infinity. Thus, the term (3.17) does not contribute to the limit of the contraction norm of g p,R (t) ⊗ r g p,R (t). By the previous arguments, and taking into account the estimates above, the remaining term converges to zero as δ ↓ 0. This concludes the proof of condition (c).

Proof of tightness in Theorem 1.6
In this section, we are going to prove the tightness part of Theorem 1.6 under the extra assumption (1.14). Under this condition, one can easily verify the displayed estimate for any t > 0.
Recall that α ∈ (0, 1/2) is fixed. For any T > 0, we will show that, for any 0 < s < t ≤ T and any integer k ∈ [2, ∞), the estimate (3.19) holds, where C = C T,k,α is a constant that depends on T, k and α. If we pick k large enough that kα > 2, we get the desired tightness by Kolmogorov's criterion. To show (3.19), we first derive the Wiener chaos expansion of A t (R) − A s (R) and then apply the hypercontractivity property of the Ornstein-Uhlenbeck semigroup (see e.g. [21]), which allows us to estimate the L k (Ω)-norm by the L 2 (Ω)-norm on a fixed Wiener chaos.
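The two standard facts used in this step are hypercontractivity on a fixed Wiener chaos and Kolmogorov's tightness criterion; in their usual form (cf. [20, 21]; the exponent 1 + δ below corresponds to kα > 2 in our setting) they read:

```latex
\[
\big\|I_p(f)\big\|_{L^k(\Omega)} \;\le\; (k-1)^{p/2}\,
\big\|I_p(f)\big\|_{L^2(\Omega)},
\qquad k\ge 2,\ f\in H^{\odot p},
\]
\[
\mathbb{E}\big[|A_t(R)-A_s(R)|^k\big] \;\le\; C\,|t-s|^{1+\delta}
\ \text{ for some } \delta>0 \text{ uniformly in } R
\ \Longrightarrow\ \text{tightness in } C([0,T]).
\]
```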
We can write d(s, t, x; s 1 , y 1 ) = d 1 (s, t, x; s 1 , y 1 ) + d 2 (s, t, x; s 1 , y 1 ), with d 1 defined as displayed and d 2 (s, t, x; s 1 , y 1 ) = 1 [s,t) (s 1 )G(t − s 1 , x − y 1 ). (3.21) According to [5, Lemma 3.1], there exists some constant C α depending on α such that the stated bound holds. Now we can express A t (R) − A s (R) as a sum of two chaos expansions that correspond to d 1 and d 2 , with ∆ p (s, t) = {t > s 1 > · · · > s p > s}. Let us first estimate the L 2 (Ω)-norm of J 2,p,R in several familiar steps. As in (3.12), (3.13) and (3.14), we write, for p ≥ 1, with X 1 , X 2 independent standard Brownian motions on R d , an inner product that is a nonnegative function of x, y depending only on the difference x − y. Observe that this inner product coincides with (1/(p!) 2 ) E[β t−s,t−s (x − y) p ] for every p ≥ 1; see (3.2). Therefore, for p ≥ 2, we can write, using (3.6), the corresponding bound. Hence, as a consequence of the hypercontractivity property (see e.g. [20, Corollary 2.8.14]), we have, for k ≥ 2, (3.24). Then we can write, for p ≥ 2, a further estimate. By the same trick of inserting exp(−ε 2 z 2 ), we have (3.28). Therefore, for p ≥ 2, the desired bound follows. For p = 1, it is easier to get the desired bound: indeed, it follows from (3.27), provided 0 < 4(k − 1)Γ T C N < 1, which can always be arranged for some N > 0.

Proof of Theorem 1.7
We are going to show that, under the hypotheses of Theorem 1.7, the first chaos dominates and, as a consequence, the proof of the central limit theorem reduces to the computation of the limit variance of the first chaos. The proof will be done in several steps.
Step 1. We have shown in the proof of Theorem 1.6 that, if γ 0 is locally integrable, γ 1 is integrable and Dalang's condition (1.10) is satisfied, then for any integer p ≥ 2, Var Π p A t (R) ∼ σ p (t, t)R d as R → +∞ and Σ p≥2 σ p (t, t) < ∞. The above results also hold true provided γ 0 is locally integrable and the modified version of Dalang's condition (1.15) is satisfied. To see the latter point, it is enough to proceed with the same arguments, replacing the estimate (3.3) by the one obtained from (4.2). Then we can use the same arguments as in the proof of [9, Lemma 3.3], with C N and D N replaced by suitably modified quantities. In this way, instead of the inequality (3.4), we get a modified bound; choosing N large enough that 0 < 4Γ t C N < 1, we obtain the analogue of (3.6) and, as a result, an estimate equivalent to (3.31).
Step 2. We next determine the asymptotic behavior of the first chaotic component. This observation, together with Step 1, justifies part (1) of Theorem 1.7.
Step 3. When γ 1 (z) = |z| −β for some β ∈ (0, 2 ∧ d), let us first compute the variance of Π 1 A t (R). We have, for some constant c d,β , the displayed expression. Then making the change of variables (x, y, ξ) → (Rx, Ry, ξ/R) yields an expression that is increasing in R and converges, as R → +∞, to a finite limit. Then, by similar arguments as before, we obtain the corresponding bounds. By the usual change of variables η j = ξ 1 + · · · + ξ j , with η 0 = 0, and (x, y, η p ) → (Rx, Ry, η p /R), we obtain the expression in (3.35). Let us first analyze the part in the display (3.35), which can be rewritten in terms of the function U R defined there. The function U R is uniformly bounded by c −1 d,β ∫ B 1 ×B 1 |x − y| −β dx dy and, for η p−1 ≠ 0, by the Riemann–Lebesgue lemma, 0 ≤ U R (η p−1 ) converges to zero as R → +∞. By using (4.3) for the integration with respect to dη p−1 , . . . , dη 3 , dη 2 inductively, we get a series that converges by the previous discussion. Then, by dominated convergence and the Riemann–Lebesgue lemma, we conclude that the first chaos is indeed dominant, so that the desired Gaussian fluctuation (1.16) holds. This concludes the proof of Theorem 1.7.
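Two classical facts drive the Step 3 computation above: the Fourier transform of the Riesz kernel and the Riemann–Lebesgue lemma. With the convention $\widehat{f}(\xi)=\int_{\mathbb{R}^d}e^{-i\langle x,\xi\rangle}f(x)\,dx$ (the value of $c_{d,\beta}$ below depends on this assumed normalization):

```latex
\[
\widehat{|\cdot|^{-\beta}}(\xi) \;=\; c_{d,\beta}\,|\xi|^{\beta-d}
\quad\text{as a tempered distribution},\qquad
c_{d,\beta} \;=\; 2^{d-\beta}\pi^{d/2}\,
\frac{\Gamma\big(\tfrac{d-\beta}{2}\big)}{\Gamma\big(\tfrac{\beta}{2}\big)},
\]
\[
f\in L^1(\mathbb{R}^d)\ \Longrightarrow\ \widehat{f}(\xi)\to 0
\ \text{ as } |\xi|\to\infty
\qquad\text{(Riemann--Lebesgue)},
\]
```

the latter being applied to U R at each fixed η p−1 ≠ 0.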

Proof of Theorem 1.9
Part (1): The proof of the functional CLT for A t (R) can be carried out by exactly the same arguments as in Sections 3.1, 3.2 and 3.3, except that (3.32) and (3.33) are used instead of (3.4) and (3.6). We leave the details to interested readers and refer to the forthcoming work [24] for a similar situation, dealing with the parabolic Anderson model driven by rough noise.
Part (2): By the results in part (2) of Theorem 1.7, R −d+β/2 (A t (R) − Π 1 A t (R)) converges to the zero process in finite-dimensional distributions. So our proof consists of two parts: (i) we establish the functional CLT for the first chaos Π 1 A t (R) : t ∈ R + ; (ii) we prove that R −d+β/2 (A t (R) − Π 1 A t (R)) : t ≥ 0 converges in law (hence in probability) to the zero process as R → ∞; this will follow from tightness together with the convergence of the finite-dimensional distributions. Note that Π 1 A t (R) : t ∈ R + is a centered Gaussian process. Hence, given T ∈ (0, ∞), we have for any 0 < s < t ≤ T and for any k ∈ [2, ∞) the displayed moment bound, where c k is the L k (Ω)-norm of Z ∼ N (0, 1) and the constant C does not depend on R, s or t. This gives us the desired tightness and hence leads to the functional CLT for Π 1 A t (R) : t ∈ R + .
It follows from (4.1) that the displayed bound holds. By Lemma 2.1, there exists some absolute constant C such that the approximation-of-identity kernel is bounded by C(R/n) d n −1 for n ≤ R x < n + 1. Therefore, with δ = d/(d + 1), we obtain the stated estimate. This finishes our proof.
Proof of Lemma 2.8. Notice that the condition f p ∈ L 1 (R pd ) implies that F f p is uniformly continuous and bounded. We fix a generic z ∈ R d and write the corresponding decomposition. Put ϕ y (x) = ϕ(x − y), so that we can rewrite the second term; it is bounded by a quantity tending to zero, that is, A 22,n → 0 as n → +∞. The same arguments also imply that A 21,n → 0 as n → +∞. This concludes our proof.