Forgetting of the initial condition for the filter in general state-space hidden Markov chains: a coupling approach

We give simple conditions that ensure exponential forgetting of the initial condition of the filter for general state-space hidden Markov chains. The proofs are based on a coupling argument applied to the posterior Markov kernels. These results are useful both for filtering hidden Markov models using approximation methods (e.g., particle filters) and for proving asymptotic properties of estimators. The results are general enough to cover models such as the Gaussian state space model, without using the special structure that permits the application of the Kalman filter.


Introduction and Notation
We consider the filtering problem for a Markov chain {X_k, Y_k}_{k≥0} with state X_k and observation Y_k. The state process {X_k}_{k≥0} is a homogeneous Markov chain taking values in a measurable set X equipped with a σ-algebra B(X). We let Q be the transition kernel of the chain. The observations {Y_k}_{k≥0} take values in a measurable set Y (B(Y) is the associated σ-algebra). For i ≤ j, denote Y_{i:j} def= (Y_i, Y_{i+1}, ..., Y_j); similar notation will be used for other sequences. We assume furthermore that, for each k ≥ 1 and given X_k, Y_k is independent of X_{1:k−1}, X_{k+1:∞}, Y_{1:k−1}, and Y_{k+1:∞}. We also assume that, for each x ∈ X, the conditional law of Y_k given X_k = x has a density g(x, ·) with respect to some fixed σ-finite measure on the σ-field B(Y).
We denote by φ_{ξ,n}[y_{0:n}] the distribution of the hidden state X_n conditionally on the observations y_{0:n} def= (y_0, ..., y_n), which is given by

φ_{ξ,n}[y_{0:n}](A) = ∫_{X^{n+1}} ξ(dx_0) g(x_0, y_0) ∏_{k=1}^{n} Q(x_{k−1}, dx_k) g(x_k, y_k) 1_A(x_n) / ∫_{X^{n+1}} ξ(dx_0) g(x_0, y_0) ∏_{k=1}^{n} Q(x_{k−1}, dx_k) g(x_k, y_k).   (1)

In practice the model is rarely known exactly, and therefore suboptimal filters are computed by replacing the unknown transition kernel, likelihood function and initial distribution by approximations.
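To make the recursion behind (1) concrete, here is a minimal numerical sketch for a finite state space, where the integrals reduce to matrix-vector products. The two-state model, its parameters, and the helper name filter_dist are illustrative inventions, not taken from the paper.

```python
import numpy as np

# Illustrative 2-state HMM: Q is the transition matrix, g[x, y] the
# observation density g(x, y) on a binary observation alphabet.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = np.array([[0.7, 0.3],    # g(0, .)
              [0.4, 0.6]])   # g(1, .)

def filter_dist(xi, ys):
    """Forward recursion: phi_k is proportional to (phi_{k-1} Q) * g(., y_k)."""
    phi = xi * g[:, ys[0]]
    phi /= phi.sum()
    for y in ys[1:]:
        phi = (phi @ Q) * g[:, y]
        phi /= phi.sum()
    return phi

ys = np.random.default_rng(0).integers(0, 2, size=50)   # arbitrary record
phi1 = filter_dist(np.array([1.0, 0.0]), ys)            # filter started at xi
phi2 = filter_dist(np.array([0.0, 1.0]), ys)            # filter started at xi'
print(0.5 * np.abs(phi1 - phi2).sum())                  # total variation gap
```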
The choice of these quantities plays a key role both when studying the convergence of sequential Monte Carlo methods and when analysing the asymptotic behaviour of the maximum likelihood estimator (see e.g. (8) or (5) and the references therein). A key point when analyzing the maximum likelihood estimator or the stability of the filter over an infinite horizon is whether φ_{ξ,n}[y_{0:n}] and φ_{ξ′,n}[y_{0:n}] are close (in some sense) for large values of n and two different choices of the initial distributions ξ and ξ′.
The forgetting property of the initial condition of the optimal filter in nonlinear state space models has attracted many research efforts, and it is impossible to give credit to every contributor. The purpose of the short presentation of existing results below is mainly to allow comparison of the assumptions and results of this contribution with those previously reported in the literature. The first result in this direction was obtained by (13), who established L_p-type convergence of the optimal filter initialised with the wrong initial condition to the filter initialised with the true initial distribution; their proof does not provide a rate of convergence. A new approach based on the Hilbert projective metric was later introduced in (1) to establish the exponential stability of the optimal filter with respect to its initial condition. However, their results are based on stringent mixing conditions for the transition kernels; these conditions state that there exist positive constants ε_− and ε_+ and a probability measure λ on (X, B(X)) such that, for f ∈ B_+(X),

ε_− λ(f) ≤ Qf(x) ≤ ε_+ λ(f),   for all x ∈ X.   (2)

This condition implies in particular that the chain is uniformly geometrically ergodic. Similar results were obtained independently by (9) using the Dobrushin ergodicity coefficient (see (10) for further refinements of this result). The mixing condition was later weakened by (6), under the assumption that the kernel Q is positive recurrent and is dominated by some reference measure λ:

sup_{(x,x′)∈X×X} q(x, x′) < ∞   and   essinf_{x′} ∫ q(x, x′) π(x) λ(dx) > 0,

where q(x, ·) = dQ(x, ·)/dλ, essinf is the essential infimum with respect to λ, and π dλ is the stationary distribution of the chain Q. Although the upper bound is reasonable, the lower bound is restrictive in many applications and fails to be satisfied, e.g., for the linear Gaussian state space model.
In (12), the stability of the optimal filter is studied for a class of kernels referred to as pseudo-mixing. The definition of a pseudo-mixing kernel is adapted to the case where the state space is X = R^d, equipped with the Borel σ-field B(X). A kernel Q on (X, B(X)) is pseudo-mixing if for any compact set C with a diameter d large enough, there exist positive constants ε_−(d) > 0 and ε_+(d) > 0 and a measure λ_C (which may be chosen to be finite without loss of generality) such that

ε_−(d) λ_C(A) ≤ Q(x, A) ≤ ε_+(d) λ_C(A),   for all x ∈ C and A ∈ B(X).   (3)

This condition implies that, for any (x, x′) ∈ C × C,

0 < ε_−(d)/ε_+(d) ≤ essinf q(x, ·)/q(x′, ·) ≤ esssup q(x, ·)/q(x′, ·) ≤ ε_+(d)/ε_−(d),

where q(x, ·) def= dQ(x, ·)/dλ_C, and esssup and essinf denote the essential supremum and infimum with respect to λ_C. This condition is obviously more general than (2), but it is still not satisfied in the linear Gaussian case (see (12, Example 4.3)).
Several attempts have been made to establish stability under the so-called small noise condition. The first result in this direction was obtained by (1) (in continuous time), who considered an ergodic diffusion process with constant diffusion coefficient and linear observations: when the variance of the observation noise is sufficiently small, (1) established that the filter is exponentially stable. Small noise conditions also appeared (in a discrete time setting) in (4) and (14). These results do not allow one to consider the linear Gaussian state space model with arbitrary noise variance.
More recently, (7) proved that the nonlinear filter forgets its initial condition in mean over the observations for functions satisfying some integrability conditions. The main result of that paper relies on the martingale convergence theorem rather than on a direct analysis of the filtering equations. Unfortunately, this method of proof cannot provide any rate of convergence.
It is tempting to assume that forgetting of the initial condition should be true in general, and that the lack of proofs for the general state-space case is only a matter of technicalities. The heuristic argument says that either
• the observations Y's are informative, and we learn about the hidden state X from the Y's around it, and forget the initial starting point; or
• the observations Y's are non-informative, and then the X chain is moving by itself, and by itself it forgets its initial condition, for example if it is positive recurrent.
Since we expect the forgetting of the initial condition to hold in these two extreme cases, one might conclude that it holds under any condition. However, this argument is false, as is shown by examples where the conditional chain does not forget its initial condition whereas the unconditional chain does. Conversely, it can happen that the observed process {Y_k}_{k≥0} is not ergodic, while the conditional chain uniformly forgets its initial condition.
Here is a slightly less extreme example. Consider a Markov chain on the unit circle; all values below are taken modulo 2π. We assume that X_i = X_{i−1} + U_i, where the state noise variables {U_k}_{k≥0} are i.i.d. The chain is hidden by additive noise, Y_i = X_i + πW_i + V_i, where W_i is a Bernoulli random variable independent of V_i. Suppose now that U_i and V_i are symmetric and supported on some small interval. The hidden chain does not forget its initial distribution under this model: in fact, the support of the distribution of X_i given Y_{0:n} and X_0 = x_0 is disjoint from the support of its distribution given Y_0:n and X_0 = x_0 + π.
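The phenomenon can be checked numerically. The sketch below assumes the observation equation Y_i = X_i + πW_i + V_i reconstructed above, discretizes the circle, and runs the same grid filter from x_0 and from x_0 + π on a common observation record; all noise widths, the grid size, and the helper names are illustrative choices (the grid filter deliberately uses slightly wider noise supports than the simulated noise, purely for numerical robustness).

```python
import numpy as np

rng = np.random.default_rng(1)
N = 720                                      # grid on the unit circle
theta = 2 * np.pi * np.arange(N) / N

def circ_dist(z):
    return np.abs((z + np.pi) % (2 * np.pi) - np.pi)

# Grid transition: X_i = X_{i-1} + U_i with U_i small and symmetric.
T = (circ_dist(theta[None, :] - theta[:, None]) < 0.15).astype(float)
T /= T.sum(axis=1, keepdims=True)

# Simulate the chain and observations Y_i = X_i + pi*W_i + V_i.
n, x, ys = 30, 0.0, []
for _ in range(n):
    x = (x + rng.uniform(-0.05, 0.05)) % (2 * np.pi)
    ys.append((x + np.pi * rng.integers(0, 2) + rng.uniform(-0.1, 0.1))
              % (2 * np.pi))

def filter_support(x0_idx):
    phi = np.zeros(N)
    phi[x0_idx] = 1.0
    for y in ys:
        # the likelihood of y has two modes, pi apart, because of W_i
        lik = ((circ_dist(theta - y) < 0.3) |
               (circ_dist(theta - y + np.pi) < 0.3)).astype(float)
        phi = (phi @ T) * lik
        phi /= phi.sum()
    return phi > 0

s1 = filter_support(0)           # filter started at x0 = 0
s2 = filter_support(N // 2)      # filter started at x0 + pi
print(np.any(s1 & s2))           # False: the two supports never intersect
```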
On the other hand, let {Y_k}_{k≥0} be an arbitrary process. Suppose it is modeled (incorrectly!) by an autoregressive process observed in additive noise. We will show that, under different assumptions on the distributions of the state and observation noise, the conditional chain (given the observations Y's, which are not necessarily generated by the model) forgets its initial condition geometrically fast.
The proofs presented in this paper are based on a generalization of the notion of small sets and on coupling of two (non-homogeneous) Markov chains sampled from the distribution of X_{0:n} given Y_{0:n}. The coupling argument is based on constructing two chains {X_k} and {X′_k} which marginally follow the same sequence of transition kernels but have different initial distributions for the starting state. The chains move independently until they couple at a random time T, and from that time on they remain equal.
Roughly speaking, the two copies of the chain may couple at a time k if they stand close to one another. Formally, we mean by this that the pair of states of the two chains at time k belongs to some set, which may depend on the current, but also on past and future, observations. The novelty of the current paper lies in considering sets which are genuinely defined by the pair of states. For example, the set can be defined as {(x, x′) : |x − x′| < c}; that is, close in the usual sense of the word.
The prototypical example we use is the nonlinear state space model

X_k = a(X_{k−1}) + U_k,   Y_k = b(X_k) + V_k,

where the filtering problem for the linear version of this model with independent Gaussian noise is solved explicitly by the Kalman filter. But this is one of the few nontrivial models which admit a simple solution. Under the Gaussian linear model, we argue that, whatever the values of Y_{0:n}, two independent chains drawn from the conditional distribution will remain close to each other even if the Y's are drifting away. Any time they are close, they will be able to couple, and this will happen quite frequently.
Our approach for proving that a chain forgets its initial conditions can be decomposed into two stages. We first argue that there are coupling sets (which may depend on the observations, and may also vary with the iteration index) on which we can couple two copies of the chain, drawn independently from the conditional distribution given the observations and started from two different initial conditions, with a probability which is an explicit function of the observations. We then argue that a pair of chains is likely to drift frequently towards these coupling sets.
The first group of results identifies situations in which the coupling set is given in product form, and in particular situations where X × X is a coupling set. In the typical situation, many values of Y_i entail that X_i is in some set with high probability, and hence the two conditionally independent copies are likely to be in this set and close to each other. In particular, this enables us to prove the convergence of (nonlinear) state space processes with bounded noise and, more generally, in situations where the tails of the observation errors are thinner than those of the dynamics innovations.
The second argument generalizes the standard drift condition to the coupling set. The general argument, specialized to the linear Gaussian state model, is surprisingly simple. We generalize this argument to the linear model where both the dynamics innovations and the measurement errors have strongly unimodal densities.

Notations and definitions
Let n be a given positive index and consider the finite-dimensional distributions of {X_k}_{k≥0} given Y_{0:n}. It is well known (see (5, Chapter 3)) that, for any positive index k, the distribution of X_k given X_{0:k−1} and Y_{0:n} reduces to that of X_k given X_{k−1} only and Y_{0:n}. The following definitions will be instrumental in decomposing the joint posterior distributions.
Definition 1 (Backward functions). For k ∈ {0, . . ., n}, the backward function β_{k|n} is the non-negative measurable function on Y^{n−k} × X defined by

β_{k|n}(y_{k+1:n}, x) = ∫ ⋯ ∫ Q(x, dx_{k+1}) g(x_{k+1}, y_{k+1}) ∏_{j=k+2}^{n} Q(x_{j−1}, dx_j) g(x_j, y_j)

for k ≤ n − 1 (with the convention that the rightmost product is empty for k = n − 1); β_{n|n}(·) is set to the constant function equal to 1 on X.
The term "backward variables" is part of the HMM credo and dates back to the seminal work of Baum and his colleagues (2, p. 168). The backward functions may be obtained, for all x ∈ X, by the recursion operating on decreasing indices k = n − 1 down to 0,

β_{k|n}(x) = ∫ Q(x, dx′) g(x′, y_{k+1}) β_{k+1|n}(x′),

from the initial condition β_{n|n} ≡ 1.

Definition 2 (Forward Smoothing Kernels). Given n ≥ 0, define for indices k ∈ {0, . . ., n − 1} the transition kernels

F_{k|n}(x, A) def= ∫_A Q(x, dx′) g(x′, y_{k+1}) β_{k+1|n}(x′) / β_{k|n}(x)

for any point x ∈ X and set A ∈ B(X). For indices k ≥ n, simply set F_{k|n} = Q, where Q is the transition kernel of the unobservable chain {X_k}_{k≥0}.
Note that for indices k ≤ n − 1, F_{k|n} depends on the future observations Y_{k+1:n} through the backward variables β_{k|n} and β_{k+1|n} only. The subscript n in the F_{k|n} notation is meant to underline the fact that, like the backward functions β_{k|n}, the forward smoothing kernels F_{k|n} depend on the final index n where the observation sequence ends. For any x ∈ X, A ↦ F_{k|n}(x, A) is a probability measure on B(X), and because the functions x ↦ β_{k|n}(x) are measurable on (X, B(X)), for any set A ∈ B(X) the function x ↦ F_{k|n}(x, A) is measurable. Given n, for any index k ≥ 0 and bounded measurable function f, E_ξ[f(X_{k+1}) | X_{0:k}, Y_{0:n}] = F_{k|n}(X_k, f). More generally, for any integers n and m, function f ∈ F_b(X^{m+1}) and initial probability ξ on (X, B(X)),

E_ξ[f(X_k, . . ., X_{k+m}) | Y_{0:n}] = ∫ ⋯ ∫ φ_{ξ,k|n}(dx_k) F_{k|n}(x_k, dx_{k+1}) ⋯ F_{k+m−1|n}(x_{k+m−1}, dx_{k+m}) f(x_k, . . ., x_{k+m}),

where the {F_{k|n}}_{k≥0} are defined by (8) and (9) and φ_{ξ,k|n} is the marginal smoothing distribution of the state X_k given the observations Y_{0:n}. Note that φ_{ξ,k|n} may be expressed, for any A ∈ B(X), as

φ_{ξ,k|n}(A) = ∫_A β_{k|n}(x) φ_{ξ,k}(dx) / ∫_X β_{k|n}(x) φ_{ξ,k}(dx),   (11)

where φ_{ξ,k} is the filtering distribution defined in (1) and β_{k|n} is the backward function.
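For a finite state space, the backward recursion and the forward smoothing kernels can be computed directly. The following sketch reuses the illustrative two-state model from the previous section; the helper names are ours, not the paper's.

```python
import numpy as np

# Same illustrative 2-state model as above.
Q = np.array([[0.9, 0.1],
              [0.2, 0.8]])
g = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def backward_functions(ys):
    """beta_{k|n}(x) = sum_{x'} Q(x, x') g(x', y_{k+1}) beta_{k+1|n}(x')."""
    n = len(ys) - 1
    beta = np.ones((n + 1, Q.shape[0]))
    for k in range(n - 1, -1, -1):
        beta[k] = Q @ (g[:, ys[k + 1]] * beta[k + 1])
    return beta

def forward_smoothing_kernel(beta, ys, k):
    """F_{k|n}(x, x') is proportional to Q(x, x') g(x', y_{k+1}) beta_{k+1|n}(x')."""
    F = Q * (g[:, ys[k + 1]] * beta[k + 1])[None, :]
    return F / F.sum(axis=1, keepdims=True)

ys = np.random.default_rng(0).integers(0, 2, size=20)
beta = backward_functions(ys)
F0 = forward_smoothing_kernel(beta, ys, 0)
print(F0, F0.sum(axis=1))   # each row of F_{0|n} is a probability vector
```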

Coupling constant and the coupling construction
As outlined in the introduction, our proofs are based on coupling two copies of the conditional chain started from two different initial conditions. For any two probability measures µ_1 and µ_2 on (X, B(X)), we define the total variation distance

‖µ_1 − µ_2‖_TV def= sup_{A∈B(X)} |µ_1(A) − µ_2(A)|.

Let n and m be integers, and let k ∈ {0, . . ., n − m}. Define the m-skeleton of the forward smoothing kernels as

F_{k,m|n} def= F_{k|n} F_{k+1|n} ⋯ F_{k+m−1|n}.

Definition 3 (Coupling constant of a set). Let n and m be integers, and let k ∈ {0, . . ., n − m}. The coupling constant of the set C̄ ⊂ X × X is defined as

ε_{k,m|n}(C̄) def= inf_{(x,x′)∈C̄} [F_{k,m|n}(x, ·) ∧ F_{k,m|n}(x′, ·)](X).

The definition of the coupling constant implies that, for any (x, x′) ∈ C̄ and any A ∈ B(X),

F_{k,m|n}(x, A) ∧ F_{k,m|n}(x′, A) ≥ ε_{k,m|n}(C̄) ν_{k,m|n}(A),

where

ν_{k,m|n} def= [F_{k,m|n}(x, ·) ∧ F_{k,m|n}(x′, ·)] / [F_{k,m|n}(x, ·) ∧ F_{k,m|n}(x′, ·)](X),   (15)

and where, for any measures µ and ν on (X, B(X)), µ ∧ ν is the largest measure for which (µ ∧ ν)(A) ≤ min(µ(A), ν(A)) for all A ∈ B(X).
We may now proceed to the coupling construction. Let n be an integer and, for any k ∈ {0, . . ., ⌊n/m⌋}, let C̄_{k|n} be a set-valued function, C̄_{k|n} : Y^n → B(X) ⊗ B(X), where B(X) ⊗ B(X) is the smallest σ-algebra containing the sets A × B with A, B ∈ B(X). We define R̄_{k|n} as the Markov transition kernel satisfying, for all (x, x′) ∈ C̄_{k|n} and all A, A′ ∈ B(X),

R̄_{k|n}(x, x′; A × A′) def= [F_{k,m|n}(x, A) − ε_{k,m|n} ν_{k,m|n}(A)] [F_{k,m|n}(x′, A′) − ε_{k,m|n} ν_{k,m|n}(A′)] / (1 − ε_{k,m|n})²,   (16)

where we have omitted the dependence upon the set C̄_{k|n} in the definition of the coupling constant ε_{k,m|n} and of the minorizing probability ν_{k,m|n}. For all (x, x′) ∈ X × X, we define

F̄_{k|n}(x, x′; ·) def= F_{k,m|n} ⊗ F_{k,m|n}(x, x′; ·),   (17)

where, for two kernels K and L on X, K ⊗ L is the tensor product of the kernels K and L, i.e., for all (x, x′) ∈ X × X and A, A′ ∈ B(X), K ⊗ L(x, x′; A × A′) = K(x, A) L(x′, A′). Define the product space Z = X × X × {0, 1} and the associated product σ-algebra B(Z). Define on the space (Z^N, B(Z)^⊗N) a Markov chain Z_i = (X̃_i, X̃′_i, d_i), i ∈ {0, . . ., ⌊n/m⌋}, as follows. If d_i = 1, draw X̃_{i+1} according to F_{i,m|n}(X̃_i, ·), set X̃′_{i+1} = X̃_{i+1} and d_{i+1} = 1. If d_i = 0 and (X̃_i, X̃′_i) ∈ C̄_{i|n}, then with probability ε_{i,m|n} draw X̃_{i+1} from ν_{i,m|n}, set X̃′_{i+1} = X̃_{i+1} and d_{i+1} = 1; with the complementary probability, draw (X̃_{i+1}, X̃′_{i+1}) according to the kernel R̄_{i|n}(X̃_i, X̃′_i; ·) and set d_{i+1} = 0. Otherwise, draw (X̃_{i+1}, X̃′_{i+1}) according to the kernel F̄_{i|n}(X̃_i, X̃′_i; ·) and set d_{i+1} = 0. For µ a probability measure on B(Z), denote by P^Y_µ the probability measure induced by the Markov chain Z_i, i ∈ {0, . . ., n}, with initial distribution µ. It is then easily checked that, for any i ∈ {0, . . ., ⌊n/m⌋}, any initial distributions ξ and ξ′, and any A, A′ ∈ B(X),

P^Y_µ(X̃_i ∈ A) = φ_{ξ,im|n}(A)   and   P^Y_µ(X̃′_i ∈ A′) = φ_{ξ′,im|n}(A′),   with µ = φ_{ξ,0|n} ⊗ φ_{ξ′,0|n} ⊗ δ_0,

where δ_x is the Dirac measure, ⊗ is the tensor product of measures, and φ_{ξ,k|n} is the marginal posterior distribution given by (11). Note that d_i is the bell variable, which indicates whether the chains have coupled (d_i = 1) or not (d_i = 0) by time i. Define the coupling time T def= inf{i ≥ 0 : d_i = 1}, with the convention inf ∅ = ∞. By the Lindvall inequality, the total variation distance between the filtering distributions associated with two different initial distributions ξ and ξ′ is bounded by the tail probability P^Y_µ(T > ⌊n/m⌋) of the coupling time. In the following section, we consider several conditions allowing one to bound the tail distribution of the coupling time.
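The construction can be illustrated numerically. The sketch below implements one version of the coupling construction with m = 1 and C̄ = X × X for the toy two-state model (so ε is the Doeblin constant of each forward smoothing kernel); all parameters and helper names are illustrative, and the residual draws are made independently as in (16).

```python
import numpy as np

# Toy 2-state model (illustrative parameters), as in the previous sketch.
Q = np.array([[0.9, 0.1], [0.2, 0.8]])
g = np.array([[0.7, 0.3], [0.4, 0.6]])
rng = np.random.default_rng(2)
ys = rng.integers(0, 2, size=40)
n = len(ys) - 1
beta = np.ones((n + 1, 2))
for k in range(n - 1, -1, -1):
    beta[k] = Q @ (g[:, ys[k + 1]] * beta[k + 1])

def F(k):
    """Forward smoothing kernel F_{k|n} as a stochastic matrix."""
    M = Q * (g[:, ys[k + 1]] * beta[k + 1])[None, :]
    return M / M.sum(axis=1, keepdims=True)

def coupled_step(Fk, x, xp):
    """One step of the coupling construction with m = 1 and C = X x X."""
    nu_un = Fk.min(axis=0)             # unnormalized minorizing measure
    eps = nu_un.sum()                  # coupling constant of X x X
    if rng.uniform() < eps:            # bell rings: draw a common value
        z = rng.choice(2, p=nu_un / eps)
        return z, z, True
    rx = (Fk[x] - nu_un) / (1 - eps)   # residual kernels R(x, .)
    rxp = (Fk[xp] - nu_un) / (1 - eps)
    return rng.choice(2, p=rx), rng.choice(2, p=rxp), False

x, xp = 0, 1                           # the two initial conditions
for i in range(n):
    x, xp, bell = coupled_step(F(i), x, xp)
    if bell:
        print("coupling time T =", i)
        break
```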

Coupling sets
Of course, the construction above is of interest only if we may find set-valued functions C̄_{k|n} whose coupling constants ε_{k,m|n}(C̄_{k|n}) are non-zero 'most of the time'.
Recall that these quantities are typically functions of the whole trajectory y_{0:n}. It is not always easy to find such sets, because the definition of the coupling constant involves the m-fold product F_{k,m|n} of forward smoothing kernels, which is not easy to handle. In some situations (but not always), it is possible to identify appropriate sets from the properties of the unconditional transition kernel Q.
Definition 4 (Strong small set). A set C ∈ B(X) is a strong small set for the transition kernel Q if there exist a measure ν_C and constants σ_−(C) > 0 and σ_+(C) < ∞ such that, for all x ∈ C and A ∈ B(X),

σ_−(C) ν_C(A) ≤ Q(x, A) ≤ σ_+(C) ν_C(A).

The following proposition helps to characterize, from products of strong small sets, appropriate sets where coupling may occur with positive probability.
Proposition 1. Assume that C is a strong small set. Then, for any n and any k ∈ {0, . . ., n}, C × C is a coupling set for the forward smoothing kernels F_{k|n}; more precisely, there exists a probability distribution ν_{k|n} such that, for any x ∈ C and A ∈ B(X),

F_{k|n}(x, A) ≥ [σ_−(C)/σ_+(C)] ν_{k|n}(A).

Proof. The proof is postponed to the appendix.
Assume that X = R^d and that the kernel satisfies the pseudo-mixing condition (3). Let C be a compact set with diameter d = diam(C) large enough that (3) is satisfied. Then, for any n and any k ∈ {0, . . ., n}, C̄ = C × C is a coupling set for F_{k|n}, and ε(C̄) may be chosen equal to ε_−(d)/ε_+(d). (12) gives nontrivial examples of pseudo-mixing Markov chains which are not uniformly ergodic. Nevertheless, although the existence of small sets is automatically guaranteed for φ-irreducible Markov chains, the conditions imposed for the existence of a strong small set are much more stringent. As shown below, it is sometimes worthwhile to consider coupling sets which are much larger than products of strong small sets.
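As a hedged numerical illustration of Definition 4 (our own toy kernel, not one of the paper's examples): for a kernel q(x, z) = p_U(z − a(x)) with Laplace innovations and a bounded drift a, any compact set C is a strong small set, with ν_C(dz) = q(x_0, z) dz for a fixed x_0 ∈ C and σ_±(C) = e^{±d}, where d = sup_{x∈C} |a(x) − a(x_0)|.

```python
import numpy as np

def q(x, z):
    """Laplace(0, 1) innovation with bounded drift a(x) = tanh(x)."""
    return 0.5 * np.exp(-np.abs(z - np.tanh(x)))

C = np.linspace(-2.0, 2.0, 81)     # a compact set of states
z = np.linspace(-30, 30, 4001)     # effective support of the kernel
ratio = q(C[:, None], z[None, :]) / q(0.0, z)[None, :]
d = np.abs(np.tanh(C) - np.tanh(0.0)).max()
# the density ratio stays within [exp(-d), exp(d)] uniformly in z
print(ratio.min(), np.exp(-d))     # approximately equal
print(ratio.max(), np.exp(d))      # approximately equal
```

The same computation fails for Gaussian innovations: the ratio q(x, z)/q(x′, z) is then unbounded in z, which is why the linear Gaussian model does not admit strong small sets.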

Coupling over the whole state-space
The easiest situation is when the coupling constant of the whole state space, ε_{k,m|n}(X × X), is bounded away from zero for sufficiently many trajectories y_{0:n}; for unconditional Markov chains, this property occurs when the chain is uniformly ergodic (i.e., satisfies the Doeblin condition). This is still the case here, though now the constants may depend on the observations Y's. As stressed in the discussion, perhaps surprisingly, we will find nontrivial examples where the coupling constant ε_{k,m|n}(X × X) is bounded away from zero for all y_{0:n}, whereas the underlying unconditional Markov chain is not uniformly geometrically ergodic. We state without proof the following elementary result.
Theorem 2. Let n be an integer and m ≥ 1. Then,

‖φ_{ξ,n}[y_{0:n}] − φ_{ξ′,n}[y_{0:n}]‖_TV ≤ ∏_{k=0}^{⌊n/m⌋−1} (1 − ε_{km,m|n}(X × X)).

Remark 1. Consider the case where the kernel is uniformly ergodic, i.e., satisfies the mixing condition (2), so that X is a strong small set. One may thus take m = 1 and, using Proposition 1, bound the coupling constant ε_{k,1|n}(X × X) from below by ε_−/ε_+, uniformly in k and in the observations. To go beyond this example, we have to find verifiable conditions upon which we may ascertain that X × X is an m-coupling set.
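A product bound of this type can be checked against the exact filter in the finite-state toy model; the sketch below (illustrative parameters, our helper names) computes both sides with m = 1.

```python
import numpy as np

# Toy 2-state model again (illustrative parameters). Compares the product
# bound prod_k (1 - eps_k) with the exact total variation distance between
# filters started from the two extreme initial distributions.
Q = np.array([[0.9, 0.1], [0.2, 0.8]])
g = np.array([[0.7, 0.3], [0.4, 0.6]])
ys = np.random.default_rng(3).integers(0, 2, size=30)
n = len(ys) - 1

beta = np.ones((n + 1, 2))
for k in range(n - 1, -1, -1):
    beta[k] = Q @ (g[:, ys[k + 1]] * beta[k + 1])

bound = 1.0
for k in range(n):
    F = Q * (g[:, ys[k + 1]] * beta[k + 1])[None, :]
    F /= F.sum(axis=1, keepdims=True)
    bound *= 1.0 - F.min(axis=0).sum()   # 1 - eps_{k,1|n}(X x X)

def filt(xi):
    phi = xi * g[:, ys[0]]; phi /= phi.sum()
    for y in ys[1:]:
        phi = (phi @ Q) * g[:, y]; phi /= phi.sum()
    return phi

tv = 0.5 * np.abs(filt(np.array([1., 0.])) - filt(np.array([0., 1.]))).sum()
print(tv, "<=", bound)
```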
Definition 5 (Uniform accessibility). Let k, ℓ, n be integers satisfying ℓ ≥ 1 and k ∈ {0, . . ., n − ℓ}. A set C is uniformly accessible for the forward smoothing kernels F_{k,ℓ|n} if there exists a constant κ_{k,ℓ}(C) > 0 satisfying

inf_{x∈X} F_{k,ℓ|n}(x, C) ≥ κ_{k,ℓ}(C).

The next step is to find conditions under which a set is uniformly accessible. For any set A ∈ B(X), define the function α(y_{1:ℓ−1}; A) and its companion ᾱ(y_{1:ℓ−1}; A), which bound from below and from above the unnormalized ℓ-step transition into A along the intermediate observations y_{1:ℓ−1}. The situations of interest are those where α(y_{1:ℓ−1}; A) is positive or, equivalently, ᾱ(y_{1:ℓ−1}; A) < ∞. In such cases, we may prove the following uniform accessibility result.

Proposition 3. For any integer n and any k ∈ {0, . . ., n − ℓ}, C is uniformly accessible for F_{k,ℓ|n}. If in addition C is a strong small set for Q, then X × X is an (ℓ + 1)-coupling set. The proof is given in Section 6.

Bounded noise
Assume that a Markov chain {X_k}_{k≥0} in X = R^{d_X} is observed in bounded noise. The case of bounded observation error is of course special, because the observed Y's allow one to locate the corresponding X's within a set. More precisely, we assume that {X_k}_{k≥0} is a Markov chain with transition kernel Q having density q with respect to the Lebesgue measure, and Y_k = b(X_k) + V_k, where:
• {V_k} is an i.i.d. sequence, independent of {X_k}, with density p_V; in addition, p_V(x) = 0 for |x| ≥ M;
• the transition density (x, x′) → q(x, x′) is strictly positive and continuous.
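A small sketch of the localization effect (the map b and all parameters are illustrative choices, not the paper's): given an observation y, the state is known to lie in the set C(y) = {x : |b(x) − y| < M}.

```python
import numpy as np

M = 0.5
b = np.tanh                              # illustrative observation map

def state_set(y, grid):
    """Grid approximation of C(y) = b^{-1}((y - M, y + M))."""
    return grid[np.abs(b(grid) - y) < M]

grid = np.linspace(-5, 5, 2001)
rng = np.random.default_rng(4)
x = rng.normal()                         # hidden state
y = b(x) + rng.uniform(-M, M)            # bounded-noise observation
C = state_set(y, grid)
print(C.min(), "<= x <=", C.max(), "; contains x:", C.min() <= x <= C.max())
```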

Functional autoregressive in noise
It is also of interest to consider cases where both the X's and the Y's are unbounded. We consider a nonlinear non-Gaussian state space model (borrowed from (12, Example 5.8)). We assume that X_0 ∼ ξ and, for k ≥ 1,

X_k = a(X_{k−1}) + U_k,   Y_k = b(X_k) + V_k,

where {U_k} and {V_k} are two independent sequences of i.i.d. random variables with probability densities p_U and p_V with respect to the Lebesgue measure on X = R^{d_X} and Y = R^{d_Y}, respectively. In addition, we assume that, for any y ∈ Y = R^{d_Y}, p_V(y) = p̄_V(|y|), where p̄_V is a bounded, positive, lower semi-continuous function which is non-increasing on [M, ∞[ and whose tails are dominated by those of p_U, in the sense of condition (29), in which b_− denotes the lower bound for the Jacobian of the function b.
The condition on the state noise {U_k} is satisfied by Pareto-type, exponential and logistic densities, but obviously not by the Gaussian density, whose tails are too light.
The fact that the tails of the state noise U are heavier than the tails of the observation noise V (see (29)) plays a key role in the derivations that follow. In Section 5 we consider a case where this restriction is not needed (e.g., the normal case).
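A quick numerical look at this tail comparison, for one admissible pair under the assumption above: logistic state noise and Gaussian observation noise.

```python
import numpy as np

# Log-density tails: logistic p_U vs. standard normal p_V.
t = np.array([5.0, 10.0, 20.0, 40.0])
log_pU = -t - 2 * np.log1p(np.exp(-t))           # log logistic density
log_pV = -0.5 * t**2 - 0.5 * np.log(2 * np.pi)   # log standard normal density
print(log_pV - log_pU)   # diverges to -inf: p_V(t)/p_U(t) vanishes in the tails
```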
The following technical lemma (whose proof is postponed to Section 7) shows that any set with finite diameter is a strong small set.
where γ is defined in (28) and z_0 is an arbitrary element of C. In addition, a similar bound holds for all x_0 ∈ X, where z_1 is an arbitrary point in C. By Lemma 4, the denominator of (25) is bounded from below, and we may therefore bound α(y_1, C), defined in (25). In the sequel, we choose C_K(y) def= {x : |x − b^{−1}(y)| ≤ K}, where K is a constant which will be chosen later. Since, by construction, the diameter of the set C_K(y) is 2K uniformly with respect to y, the constants ε(C_K(y)) (defined in (31)) and ν(C_K(y)) (defined in (34)) are functions of K only and are therefore uniformly bounded from below with respect to y. We will first show that, for K large enough, ∫_{C_K(y)} g(x_1, y) dx_1 is uniformly bounded from below, as shown in the following lemma (whose proof is postponed to Section 7). The two lemmas after it bound the terms appearing in the RHS of (37).

Lemma 5. lim inf_{K→∞} inf_y ∫_{C_K(y)} g(x_1, y) dx_1 > 0.

We set z_0 = b^{−1}(y) in the definition (32) of h_{C(y)} and z_1 = b^{−1}(y) in the definition (35), and denote by I_K(x_0, x_2, y) the resulting quantity. The following lemma shows that K may be chosen large enough that I_K(x_0, x_2, y) is uniformly bounded over x_0, x_2 and y.
The proof is postponed to Section 7.

The pair-wise drift condition
In situations where coupling over the whole state-space leads to trivial results, one may still use the coupling argument, but this time over smaller sets. In such cases, however, we need a device to control the return time of the joint chain to the set where the two chains are allowed to couple. In this section we obtain results that are general enough to include the autoregression model with Gaussian innovations and Gaussian measurement error. Drift conditions are used to obtain bounds on the coupling time. Consider the following drift condition:
there exist a measurable function V : X × X → [1, ∞) and constants b_{k|n} < ∞ and λ_{k|n} < 1 such that

sup_{(x,x′)∈C̄_{k|n}} R̄_{k|n} V(x, x′) ≤ b_{k|n},   (40)

F̄_{k|n} V(x, x′) ≤ λ_{k|n} V(x, x′)   for (x, x′) ∉ C̄_{k|n},   (41)

where R̄_{k|n} is defined in (16) and F̄_{k|n} is defined in (17).
We set ε_{k|n} = ε_{k|n}(C̄_{k|n}), the coupling constant of the set C̄_{k|n}. For any vector {a_{i,n}}_{1≤i≤n}, denote by [↓a]_{(i,n)} the i-th largest order statistic.

Theorem 7. Let n be an integer. Assume that for each k ∈ {0, . . ., n − 1} there exists a set-valued function C̄_{k|n} : Y^{n+1} → B(X) ⊗ B(X) such that the forward smoothing kernel F_{k|n} satisfies the pairwise drift condition (40)-(41) toward the set C̄_{k|n}. Then, for any probabilities ξ, ξ′ on (X, B(X)), the total variation distance ‖φ_{ξ,n}[Y_{0:n}] − φ_{ξ′,n}[Y_{0:n}]‖_TV is bounded by an explicit function of the coupling constants {ε_{k|n}} and of the drift coefficients. The proof is in Section 6.1.

Corollary 9. If there exists a sequence {m(n)} of integers such that m(n) ≤ n for any integer n, lim inf m(n)/n = α > 0 and, P^Y-a.s.,

Gaussian autoregression
Consider the first-order autoregression observed in additive noise, X_k = αX_{k−1} + U_k and Y_k = X_k + V_k, where {U_k} and {V_k} are independent i.i.d. zero-mean Gaussian sequences with variances σ² and τ², respectively. The associated backward quantities m_{k|n} and ρ_{k|n} can be computed for k ∈ {0, . . ., n − 2} using backward recursions (see (6)) initialized with m_{n−1|n} = Y_n and ρ²_{n−1|n} = σ² + τ². The conditional transition kernel F_{i|n}(x, ·) has a density with respect to the Lebesgue measure given by φ(·; µ_{i|n}(x), γ²_{i|n}), where φ(z; µ, σ²) is the density of a Gaussian random variable with mean µ and variance σ². From (46), it follows that for any i ∈ {0, . . ., n − 1}, σ² ≤ ρ²_{i|n} ≤ σ² + τ². This implies that, for any (x, x′) ∈ X × X and any i ∈ {0, . . ., n − 1}, the function µ_{i|n} is Lipschitz with a Lipschitz constant uniformly bounded by some β < |α|, and that the variance γ²_{i|n} is uniformly bounded. Therefore, for any c < ∞, all sets of the form

C̄ = {(x, x′) ∈ X × X : |x − x′| ≤ c}   (49)

are coupling sets; the coupling constant may be expressed through the error function erf. More precisely, for any (x, x′) ∈ C̄, any integer n and any i ∈ {0, . . ., n − 1},

F_{i|n}(x, A) ≥ ε ν_{i,1,n}(A)   for all A ∈ B(X),   (50)

where ν_{i,1,n} is defined as in (15). For c large enough, the drift condition is satisfied with V(x, x′) = 1 + (x − x′)². The condition (40) holds with a bound depending on the width c of the coupling set in (49). The condition (41) is satisfied with λ_{i|n} = β̃² for any β̃ and c satisfying β < β̃ < 1 and c² > (1 − β̃² + γ²_+)/(β̃² − β²). It is worthwhile to note that all these bounds are uniform with respect to n, i ∈ {0, . . ., n − 1} and the realization of the observations y_{0:n}. Therefore, for any m ∈ {0, . . ., n}, we may upper bound A_{m,n} (defined in (44)) by an expression involving ε, defined in (50), and ρ, defined in (51). Taking m = [δn] for some δ > 0 such that B_δ β̃² < 1, this upper bound may be shown to go to zero exponentially fast, uniformly with respect to the observations y_{0:n}.
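Although the point of this section is to prove forgetting without exploiting the Kalman structure, in this model the exact filter is available, so the geometric forgetting can be observed directly. A minimal sketch with illustrative parameters, the standard Kalman recursions, and an arbitrary (drifting) observation record, echoing the remark in the introduction that the chains stay close even when the Y's drift away:

```python
import numpy as np

# X_k = a X_{k-1} + U_k, U ~ N(0, s2);  Y_k = X_k + V_k, V ~ N(0, t2)
a, s2, t2 = 0.95, 1.0, 1.0
rng = np.random.default_rng(5)

def kalman_means(mu, P, ys):
    """Standard Kalman filter; returns the sequence of filtering means."""
    out = []
    for y in ys:
        mu, P = a * mu, a * a * P + s2           # predict
        K = P / (P + t2)                         # gain
        mu, P = mu + K * (y - mu), (1 - K) * P   # correct
        out.append(mu)
    return np.array(out)

# arbitrary observation record (need not come from the model)
ys = rng.standard_normal(40).cumsum()
m1 = kalman_means(-10.0, 1.0, ys)   # filter started from xi
m2 = kalman_means(+10.0, 5.0, ys)   # filter started from xi'
print(np.abs(m1 - m2)[:10])          # the gap shrinks geometrically in k
```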

State space models with strongly unimodal distributions
The Gaussian example can be generalized to the case where the distributions of the state noise and the measurement noise are strongly unimodal. Recall that a density is strongly unimodal if its logarithm is concave.
First note that if f and g are two strongly unimodal densities, then the density h = fg/∫fg is also strongly unimodal, with a mode that lies between the two modes; the second-order derivative of log h is the sum of the second-order derivatives of log f and log g, and is hence smaller than each of them. Let the state noise density be denoted by p_U(·) = e^{ϕ(·)} and that of the measurement errors by p_V(·) = e^{ψ(·)}. Define, by the recursion operating on decreasing indices,

β̃_{i|n}(x) = p_V(y_i − x) ∫ q(x, x_{i+1}) β̃_{i+1|n}(x_{i+1}) dx_{i+1},

with the initial condition β̃_{n|n}(x) = p_V(y_n − x). These functions are the conditional densities of the observations Y_{i:n} given X_i = x. They are related to the backward functions through the relation β̃_{i|n}(x) def= β_{i|n}(x) p_V(y_i − x). We denote ψ_{i|n}(x) def= log β̃_{i|n}(x). Under the stated assumptions, the forward smoothing kernel F_{i|n} has a density with respect to the Lebesgue measure which is proportional to

x_{i+1} ↦ exp(ϕ(x_{i+1} − αx) + ψ_{i+1|n}(x_{i+1})).

Denote by Cov_{i|n,x} the covariance with respect to the forward smoothing kernel density. We recall that, for any probability distribution P on (X, B(X)) and any two increasing measurable functions f and g which are square integrable with respect to P, the covariance of f and g with respect to P is non-negative. Hence the curvature of ψ_{i|n} may be controlled, using direct differentiation, integration by parts, and the fact that both ϕ′ and ψ′_{i+1|n} are monotone non-increasing functions (the last statement follows by applying (54) inductively from n backward).
We conclude that ψ_{i|n} is strongly unimodal, with curvature at least that of the original likelihood function. Hence the curvature of the logarithm of the forward smoothing density is bounded by the sum of the curvatures of the state and measurement noises, and Lemma 10 shows that the variance of X_{i+1} given X_i and Y_{i+1:n} is uniformly bounded by c^{−1}, where c is defined in (56). Now let e_{i|n}(x) denote the mean of the forward smoothing density F_{i|n}(x, ·). Similarly as above, note that x_{i+1} ↦ e_{i|n}(x) − x_{i+1}, x_{i+1} ↦ ϕ′(x_{i+1} − αx), and x_{i+1} ↦ ψ′_{i+1|n}(x_{i+1}) are monotone non-increasing, and therefore their pairwise covariances are non-negative with respect to any probability measure. Hence, by integration by parts, |e_{i|n}(x) − e_{i|n}(x′)| ≤ α|x − x′|. Put, as before, V(x, x′) = 1 + (x − x′)². It follows from the discussion above that F̄_{i|n}V(x, x′) may be bounded in terms of v_{i|n}(x), v_{i|n}(x′) and (e_{i|n}(x) − e_{i|n}(x′))², where the conditional variances v_{i|n}(x) and v_{i|n}(x′) are uniformly bounded with respect to x and x′ and |e_{i|n}(x) − e_{i|n}(x′)| ≤ α|x − x′|. The rest of the argument is like that for the normal-normal case.
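Before turning to Lemma 10, here is a numerical check of the closure property used at the start of this section, under illustrative choices of f (Gaussian) and g (Laplace): the normalized product has its mode between the two modes, and the second derivative of its logarithm stays non-positive.

```python
import numpy as np

# Product of two strongly unimodal (log-concave) densities: the normalized
# product h = f g / int(f g) is again log-concave, mode between the modes.
x = np.linspace(-10, 10, 20001)
logf = -0.5 * (x - 2.0) ** 2          # N(2, 1): mode at 2
logg = -np.abs(x + 1.0)               # Laplace(-1, 1): mode at -1
logh = logf + logg                    # log h up to an additive constant
mode = x[np.argmax(logh)]
curv = np.diff(logh, 2)               # discrete second derivative
print(mode, -1.0 <= mode <= 2.0)      # mode lies between -1 and 2
print(curv.max() <= 1e-9)             # log-concavity preserved
```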
We conclude the argument by stating and proving a lemma which was used above.
Lemma 10. Suppose that Z is a random variable with probability density function f satisfying sup_x (∂²/∂x²) log f ≤ −c. Then Z is square integrable and Var(Z) ≤ c^{−1}.
Proof. Suppose, w.l.o.g., that the maximum of f is at 0. Under the stated assumption, there exist constants a ≥ 0 and b such that f(x) ≤ a e^{−c(x−b)²}. This implies that Z is square integrable. Denote ζ(z) def= log f(z) + cz²/2, which by assumption is a concave function. Let m be the mean of Z.
By construction, z ↦ ζ′(z) is a non-increasing function. Since the inequality Cov(ϕ(Z), ψ(Z)) ≥ 0 holds for any two non-decreasing functions ϕ and ψ with finite second moments, the second term on the RHS of the previous equation is non-positive. Since (cz − ζ′(z)) e^{ζ(z)−cz²/2} = −f′(z), the proof follows by integration by parts:
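A numerical sanity check of Lemma 10, under an illustrative log-concave density whose log has curvature bounded by −c:

```python
import numpy as np

# If (d^2/dx^2) log f <= -c everywhere, Lemma 10 gives Var(Z) <= 1/c.
c = 2.0
x = np.linspace(-10, 10, 200001)
dx = x[1] - x[0]
logf = -0.5 * c * x**2 - np.abs(x)   # curvature -c plus a concave kink
f = np.exp(logf)
f /= f.sum() * dx                    # normalize to a probability density
mean = (x * f).sum() * dx
var = ((x - mean) ** 2 * f).sum() * dx
print(var, "<=", 1.0 / c, var <= 1.0 / c)
```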