Consistency of the posterior distribution and MLE for piecewise linear regression

We prove the weak consistency of the posterior distribution and that of the Bayes estimator for a two-phase piecewise linear regression model where the break-point is unknown. The non-differentiability of the likelihood of the model with regard to the break-point parameter induces technical difficulties that we overcome by creating a regularised version of the problem at hand. We first recover the strong consistency of the quantities of interest for the regularised version, using results about the MLE, and we then prove that the regularised version and the original version of the problem share the same asymptotic properties.


Introduction
We consider a continuous segmented regression model with 2 phases, one of them (the rightmost) being zero. Let u be the unknown breakpoint and γ ∈ R be the unknown regression coefficient of the non-zero phase. The observations X_{1:n} = (X_1, . . . , X_n) depend on an exogenous variable that we denote t_{1:n} = (t_1, . . . , t_n) via the model given for i = 1, . . . , n by

X_i = γ(t_i − u) 1_{[t_i ≤ u]} + ξ_i,    (1.1)

where (ξ_i)_{i∈N} is a sequence of independent and identically distributed (i.i.d.) random variables with a common centred Gaussian distribution of unknown variance σ², N(0, σ²), and where 1_A denotes the indicator function of a set A. Such a model is for instance used in practice to estimate and predict the heating part of the electricity demand in France. See Bruhns et al. (2005) for the definition of the complete model and Launay et al. (2012) for a Bayesian approach. In this particular case, u corresponds to the heating threshold above which the temperatures t_{1:n} do not have any effect on the electricity load, and γ corresponds to the heating gradient, i.e. the strength of the described heating effect.
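To fix ideas, the model (linear below the breakpoint, identically zero above it, plus Gaussian noise) can be simulated in a few lines. All numerical values below, including the uniform law of the temperatures, are hypothetical and chosen for illustration only.

```python
import numpy as np

# Hypothetical values for illustration: breakpoint u0, heating gradient
# gamma0 (negative: load rises as temperature drops) and noise s.d. sigma0.
rng = np.random.default_rng(0)
n = 500
u0, gamma0, sigma0 = 15.0, -2.0, 1.0
t = rng.uniform(-5.0, 35.0, size=n)  # exogenous temperatures (illustrative law)

# Two-phase regression: gamma0 * (t - u0) below the breakpoint, zero above,
# observed through centred Gaussian noise.
x = gamma0 * (t - u0) * (t <= u0) + rng.normal(0.0, sigma0, size=n)
```

Above the threshold u0 the observations are pure noise, which is the "rightmost phase being zero" of the model.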
The work presented in this paper is most notably inspired by the results developed in Ghosh et al. (2006) and Feder (1975).
Feder proved the weak consistency of the least squares estimator in segmented regression problems with a known finite number of phases under the hypotheses of his Theorem 3.10 and some additional assumptions disseminated throughout his paper, amongst which we find that the empirical cumulative distribution functions of the temperatures at the n-th step, t_{n1}, . . . , t_{nn}, are required to converge to a cumulative distribution function, say F_n converges to F, which is of course to be compared to our own Assumption (A1). Feder also derived the asymptotic distribution of the least squares estimator under the same set of assumptions. Unfortunately there are a few typographical errors in his paper (most notably resulting in the disappearance of σ²_0 from the asymptotic variance matrix in his main theorems), and he did not include σ̂²_n in his study of the asymptotic distribution.
The asymptotic behaviour of the posterior distribution is a central question that has already been raised in the past. For example, Ghosh et al. worked out the limit of the posterior distribution in a general and regular enough i.i.d. setup. In particular they manage to derive the asymptotic normality of the posterior distribution under third-order differentiability conditions. There are also a number of works dealing with some kind of non-regularity, like those of Sareen (2003), which consider data whose support depends on the parameters to be estimated, or those of Ibragimov and Has'minskii (1981), which offer the limiting behaviour of the likelihood ratio for a wide range of i.i.d. models whose likelihood may present different types of singularity. Unfortunately, the heating part model presented here does not fall into any of these already studied categories.
In this paper, we show that the results of Ghosh et al. can be extended to a non-i.i.d. two-phase regression model. We do so by using the original idea found in Sylwester (1965), later reprised by Feder: we introduce a new, regularised version of the problem called the pseudo-problem. The pseudo-problem consists in removing a fraction of the observations in the neighbourhood of the true parameter to obtain a differentiable likelihood function. We first recover the results of Ghosh et al. for this pseudo-problem and then extend these results to the (full) problem by showing that the estimates for the problem and the pseudo-problem have the same asymptotic behaviour.
From this point on, we shall denote the parameters θ = (γ, u, σ²) = (η, σ²) and θ_0 will denote the true value of θ. We may also occasionally refer to the intercept of the model as β = −γu. The log-likelihood of the n first observations X_{1:n} of the model will be denoted

l_{1:n}(X_{1:n}|θ) = Σ_{i=1}^{n} l_i(X_i|θ),    (1.2)

where l_i(X_i|θ) designates the log-likelihood of the i-th observation X_i, i.e.
l_i(X_i|θ) = −(1/2) log(2πσ²) − (1/(2σ²)) (X_i − γ(t_i − u) 1_{[t_i ≤ u]})².    (1.4)

Notice that we do not mention explicitly the link between the likelihood l and the sequence of temperatures (t_n)_{n∈N} in these notations, so as to keep them as minimal as possible. The least squares estimator θ̂_n of θ being also the maximum likelihood estimator of the model, we refer to it as the MLE. Throughout the rest of this paper we work under the following assumptions.

Assumption (A1). The sequence of temperatures (exogenous variable) (t_n)_{n∈N} belongs to a compact set [u̲, ū] and the sequence of the empirical cumulative distribution functions (F_n)_{n∈N} of (t_1, . . . , t_n), defined by

F_n(t) = (1/n) Σ_{i=1}^{n} 1_{[t_i ≤ t]},

converges pointwise to a function F, where F is a cumulative distribution function itself, which is continuously differentiable over [u̲, ū].

Remark 1. Due to a counterpart to Dini's Theorem (see Theorem 7.1, taken from Polya and Szegö, 2004, p. 81), F_n converges to F uniformly over [u̲, ū].
Remark 2. Let h be a continuous, bounded function on [u̲, ū]. As an immediate consequence of this assumption, for any interval I ⊂ [u̲, ū], we have, as n → +∞,

(1/n) Σ_{i=1}^{n} h(t_i) 1_{[t_i ∈ I]} → ∫_I h(t) dF(t),

the convergence holding true by definition of the convergence of probability measures (see Billingsley, 1999, pages 14-16). In particular, for I = [u̲, ū] and I = ]−∞, u] we get, as n → +∞,

(1/n) Σ_{i=1}^{n} h(t_i) → ∫ h(t) dF(t)  and  (1/n) Σ_{i=1}^{n} h(t_i) 1_{[t_i ≤ u]} → ∫_{−∞}^{u} h(t) dF(t).

Remark 3. It is a general enough assumption which encompasses both the common cases of i.i.d. continuous random variables and periodic (non-random) variables under a continuous (e.g. Gaussian) noise.

Assumption (A2). θ_0 ∈ Θ, where the parameter space Θ is defined (for identifiability) as

Θ = R* × [u̲, ū] × R*_+,

where R* = {x ∈ R, x ≠ 0} and R*_+ = {x ∈ R, x > 0}.

Assumption (A3). f = F′ does not vanish (i.e. is positive) on ]u̲, ū[.

Assumption (A4). There exists K ⊂ Θ, a compact subset of the parameter space Θ, such that θ̂_n ∈ K for any n large enough.
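The convergence in Remark 2 can be checked numerically. In the sketch below F is taken to be a uniform distribution and h = cos, both hypothetical choices made only so that the limiting integral has a closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = -5.0, 35.0            # support of F: uniform law, dF = dt / (b - a)
I = (0.0, 10.0)              # the interval I of Remark 2
h = np.cos                   # a continuous bounded test function

# Closed-form limit: integral of cos over I with respect to F.
exact = (np.sin(I[1]) - np.sin(I[0])) / (b - a)
for n in (10**3, 10**6):
    t = rng.uniform(a, b, size=n)
    empirical = np.mean(h(t) * ((t >= I[0]) & (t <= I[1])))
    print(n, empirical, exact)
```

The empirical average approaches the integral as n grows, as the remark states.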
The paper is organised as follows. In Section 2, we present the Bayesian consistency (the proofs involved there rely on the asymptotic distribution of the MLE) and introduce the concept of pseudo-problem. In Section 3, we prove that the MLE for the full problem is strongly consistent. In Section 4 we derive the asymptotic distribution of the MLE using the results of Section 3: to do so, we first derive the asymptotic distribution of the MLE for the pseudo-problem and then show that the MLEs for the pseudo-problem and the problem share the same asymptotic distribution. We discuss these results in Section 5. The extensive proofs of the main results are found in Section 6 while the most technical results are pushed back into Section 7 at the end of this paper. Notations. Whenever mentioned, the O and o notations will be used to designate a.s. O and a.s. o respectively, unless they are indexed with P as in O_P and o_P, in which case they will designate O and o in probability respectively.
Hereafter we will use the notation A^c for the complement of the set A and B(x, r) for the open ball of radius r centred at x, i.e. B(x, r) = {x′, ‖x′ − x‖ < r}.

Bayesian consistency
In this Section, we show that the posterior distribution of θ given (X_1, . . . , X_n) asymptotically favours any neighbourhood of θ_0 as long as the prior distribution itself charges a (possibly different) neighbourhood of θ_0 (see Theorem 2.1). We then present in Theorem 2.2 the main result of this paper, i.e. the convergence of the posterior distribution, with suitable normalisation, to a Gaussian distribution.
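For intuition, this posterior concentration is easy to visualise by simulation. The sketch below is a deliberate simplification of the actual setting: it uses hypothetical parameter values, fixes γ and σ² at their true values, puts a flat prior on u alone, and evaluates the posterior of u on a grid.

```python
import numpy as np

# Simulated data with hypothetical true values (illustration only).
rng = np.random.default_rng(3)
n, u0, gamma0, sigma0 = 2000, 15.0, -2.0, 1.0
t = rng.uniform(-5.0, 35.0, size=n)
x = gamma0 * (t - u0) * (t <= u0) + rng.normal(0.0, sigma0, size=n)

# Grid posterior of the breakpoint u under a flat prior, with gamma and
# sigma^2 held at their true values for simplicity.
grid = np.linspace(5.0, 25.0, 2001)
loglik = np.array([
    -0.5 * np.sum((x - gamma0 * np.where(t <= u, t - u, 0.0)) ** 2) / sigma0**2
    for u in grid
])
post = np.exp(loglik - loglik.max())
post /= post.sum()
post_mean = float((grid * post).sum())
post_sd = float(np.sqrt(((grid - post_mean) ** 2 * post).sum()))
print(post_mean, post_sd)
```

The posterior mass piles up in a shrinking neighbourhood of u_0, which is the qualitative content of Theorem 2.1.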
To prove (2.2) we bound its numerator from above and its denominator from below. The upper bound mainly relies on Proposition 7.11, while the lower bound is derived without any major difficulty. The comprehensive proof of (2.2) can be found in Section 6.1 on page 11.
Let θ ∈ Θ; we now define I(θ), the asymptotic Fisher information matrix of the model, as the symmetric matrix given in (2.3). It is positive definite since all its leading principal minors are positive. The proof of the fact that it is indeed the limiting matrix of the Fisher information matrix of the model is deferred to Lemma 7.10.
The proof of Theorem 2.2 relies on the consistency of the pseudo-problem, first introduced in Sylwester (1965), that we define in the next few paragraphs.

Pseudo-problem
The major challenge in proving Theorem 2.2 is that the typical arguments usually used to derive the asymptotic behaviour of the posterior distribution (see Ghosh et al., 2006, for example) do not directly apply here. The proof provided by Ghosh et al. requires a Taylor expansion of the likelihood of the model up to the third order at the MLE, and the likelihood of the model we consider here at the n-th step is obviously not continuously differentiable w.r.t. u at each observed temperature t_i, i = 1, . . . , n. Note that the problem only grows worse as the number of observations increases.
To overcome this difficulty we follow the original idea first introduced in Sylwester (1965), and later used again in Feder (1975): we introduce a pseudo-problem for which we are able to recover the classical results, and we show that the differences between the estimates for the problem and the pseudo-problem are, in a sense, negligible. The pseudo-problem is obtained by deleting all the observations within intervals D_n of respective sizes d_n centred around u_0, i.e.

D_n = [u_0 − d_n/2 , u_0 + d_n/2],

where the sizes d_n are chosen such that conditions (2.6) hold as n → +∞. This new problem is called a pseudo-problem because the value of u_0 is unknown and we therefore cannot in practice delete these observations. Note that the actual choice of the sequence (d_n)_{n∈N} does not influence the rest of the results in any way, as long as it satisfies conditions (2.6): it thus does not matter whether one chooses (for instance) d_n = n^{−1/4} or d_n = (log n)^{−1}. Let us denote by n** the number of observations deleted from the original problem, and by n* = n − n** the sample size of the pseudo-problem. Generally speaking, quantities annotated with a single asterisk * will refer to the pseudo-problem; l*_{1:n}(X_{1:n}|θ) will thus designate the likelihood of the pseudo-problem, i.e. (reindexing observations whenever necessary) the sum (2.7) over the n* remaining observations. On the one hand, from an asymptotic point of view, the removal of those n** observations should not have any impact on the distribution theory. The intuitive idea is that deleting n** observations takes away only a fraction n**/n of the information, which asymptotically approaches zero as will be shown below. The first condition of (2.6) seems only a natural requirement if we ever hope to prove that the MLEs for the problem and the pseudo-problem behave asymptotically in a similar manner (we will show they do in Theorem 4.2, see equation (4.1)).
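The construction of the pseudo-problem is easy to mimic in code, pretending u_0 were known: delete every observation whose temperature falls in D_n. The sketch below uses the admissible choice d_n = n^{−1/4} and a hypothetical uniform temperature law.

```python
import numpy as np

rng = np.random.default_rng(2)
u0 = 15.0                                    # true breakpoint (known only here)
fracs = []
for n in (10**4, 10**6):
    t = rng.uniform(-5.0, 35.0, size=n)      # hypothetical temperature law
    d_n = n ** (-0.25)                       # one admissible choice of d_n
    # Keep only observations outside D_n = [u0 - d_n/2, u0 + d_n/2].
    keep = (t < u0 - d_n / 2) | (t > u0 + d_n / 2)
    frac_deleted = 1.0 - keep.mean()         # this is n** / n
    fracs.append(frac_deleted)
    print(n, frac_deleted)
```

The deleted fraction n**/n behaves like f(u_0) d_n and therefore vanishes as n grows, which is the intuition behind the pseudo-problem losing no information asymptotically.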
On the other hand, assuming the MLE is consistent (we will show it is, in Theorem 3.3) and assuming that the sizes d_n are carefully chosen so that the sequence (û_n)_{n∈N} falls into the designed sequence of intervals (D_n)_{n∈N} (see Proposition 4.1, whose proof the second condition of (2.6) is tailored for), these regions will provide open neighbourhoods of the MLE over which the likelihood of the pseudo-problem will be differentiable. The pseudo-problem can therefore be thought of as a locally regularised version of the problem (locally because we are only interested in the differentiability of the likelihood over a neighbourhood of the MLE). We should thus be able to retrieve the usual results for the pseudo-problem with a bit of work. It will be shown that this is indeed the case (see Theorem 2.3).
If the sequence (d_n)_{n∈N} satisfies conditions (2.6), then n**/n → 0 as n → +∞. Using the uniform convergence of F_n to F over any compact subset (see Assumption (A1) and its Remark 1), we indeed find via a Taylor-Lagrange approximation

n**/n = F_n(u_0 + d_n/2) − F_n(u_0 − d_n/2) = f(ũ_n) d_n + o(d_n),

where ũ_n ∈ D_n, so that in the end, since ũ_n → u_0 and f is continuous and positive at u_0, we have a.s. n**/n ∼ f(u_0) d_n → 0.
We now recover the asymptotic normality of the posterior distribution for the pseudo-problem.
Proof of Theorem 2.3. The extensive proof, to be found in Section 6.1, was inspired by that of Theorem 4.2 in Ghosh et al. (2006), which deals with the case where the observations X_1, . . . , X_n are independent and identically distributed and where the (univariate) log-likelihood is differentiable in a fixed small neighbourhood of θ_0. We tweaked the original proof of Ghosh et al. so that we could deal with independent but not identically distributed observations and a (multivariate) log-likelihood that is guaranteed to be differentiable only on a shrinking neighbourhood of θ_0.

From the pseudo-problem to the original problem
We now give a short proof of Theorem 2.2. As we previously announced, it relies upon its counterpart for the pseudo-problem, i.e. Theorem 2.3.
Proof of Theorem 2.2. Recalling the definitions of t and t* given in (2.4) and (2.8), we observe that t* = t − α_n. Thus the posterior distribution of t* and that of t, given X_{1:n}, are linked together via

π_n(t|X_{1:n}) = π*_n(t − α_n|X_{1:n}).    (2.10)

Relationship (2.10) allows us to bound the quantity of interest by a sum of two integrals. Theorem 2.3 ensures that the first integral on the right-hand side of this last inequality goes to zero in probability. It therefore suffices to show that the second integral goes to zero in probability to end the proof, i.e. that (2.11) holds as n → +∞. But the proof of (2.11) is straightforward knowing that α_n →P 0 (see (4.1)) and using dominated convergence.
As an immediate consequence of Theorem 2.2, we mention the weak consistency of the Bayes estimator.
Observe that, under conditions (2.6), the same arguments naturally apply to the pseudo-problem and lead to the strong consistency (a.s. convergence) of its associated Bayes estimator due to Theorem 2.3, thus recovering the results of Ghosh et al. (2006) for the regularised version of the problem.

Strong consistency of the MLE
In this Section we prove the strong consistency of the MLE over any compact set including the true parameter (see Theorem 3.1). It is a prerequisite for a more accurate version of the strong consistency (see Theorem 3.3) which lies at the heart of the proof of Theorem 2.3.
Theorem 3.1. Under Assumptions (A1)-(A4), we have a.s., as n → +∞, θ̂_n → θ_0.

Proof of Theorem 3.1. Recall that K is a compact subset of Θ such that θ̂_n ∈ K for any n large enough. We denote

l_{1:n}(X_{1:n}|S) = sup_{θ∈S} l_{1:n}(X_{1:n}|θ), for any S ⊂ K,
K_n(a) = {θ ∈ Θ, l_{1:n}(X_{1:n}|θ) ≥ log a + l_{1:n}(X_{1:n}|K)}, for any a ∈ ]0, 1[.

All we need to prove is (3.1), since for any n large enough we have θ̂_n ∈ K_n(a) for any a ∈ ]0, 1[. We control the likelihood on the complement of a small ball in K and prove the contrapositive of (3.1) using compactness arguments. The extensive proof of (3.1) is to be found in Section 6.2.
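As a numerical companion to this consistency result, the MLE can be computed by profiling: for a fixed candidate breakpoint u, the least squares slope γ has a closed form, so a one-dimensional search over u suffices. The grid search and all data-generating values below are hypothetical illustrative devices, not the paper's estimator definition.

```python
import numpy as np

def mle_profile(x, t, grid):
    """Profile least squares for the two-phase model: for each candidate
    breakpoint u, the slope gamma is the closed-form LS solution; return
    the (gamma, u, sigma^2) triple minimising the residual sum of squares."""
    best = None
    for u in grid:
        z = np.where(t <= u, t - u, 0.0)      # regressor of the non-zero phase
        denom = z @ z
        if denom == 0.0:
            continue                          # no observation below u
        gamma = (x @ z) / denom
        rss = np.sum((x - gamma * z) ** 2)
        if best is None or rss < best[0]:
            best = (rss, gamma, u)
    rss, gamma, u = best
    return gamma, u, rss / len(x)             # sigma^2 MLE is RSS / n

# Hypothetical simulation to exercise the estimator.
rng = np.random.default_rng(3)
n, u0, gamma0, sigma0 = 2000, 15.0, -2.0, 1.0
t = rng.uniform(-5.0, 35.0, size=n)
x = gamma0 * (t - u0) * (t <= u0) + rng.normal(0.0, sigma0, size=n)
g_hat, u_hat, s2_hat = mle_profile(x, t, np.linspace(5.0, 25.0, 801))
print(g_hat, u_hat, s2_hat)
```

With a sample of this size all three coordinates land close to the true values, in line with the a.s. convergence stated above.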
We strengthen the result of Theorem 3.1 by giving a rate of convergence for the MLE (see Theorem 3.3). This requires a rate of convergence for the image of the MLE through the regression function of the model, which we give in Proposition 3.2 below.
Proof of Proposition 3.2. The proof is given in Section 6.2.
Theorem 3.3. Under Assumptions (A1)-(A4), we have a.s., as n → +∞, (3.2).

Proof of Theorem 3.3. We show that a.s. (3.2) holds for each coordinate of θ̂_n − θ_0. The calculations for the variance σ² are pushed back into Section 6.2. We now prove the result for the parameters γ and u. It is more convenient to use a reparametrisation of the model in terms of slope γ and intercept β, where β = −γu.

Slope γ and intercept β. Let V_1 and V_2 be two non-empty open intervals of ]u̲, u_0[ such that their closures V̄_1 and V̄_2 do not overlap. For any (t_1, t_2) ∈ V̄_1 × V̄_2, observe that for any τ = (β, γ), by some basic linear algebra tricks we are able to write a product inequality. Thus, using the equivalence of norms and a simple domination of the first term of the product in that inequality, we find that there exists a constant C ∈ R*_+ such that (3.3) holds for any (t_1, t_2) ∈ V̄_1 × V̄_2. Taking advantage of Proposition 3.2, we are able to exhibit two sequences of points (t_{1,n})_{n∈N} in V_1 and (t_{2,n})_{n∈N} in V_2 such that a.s., for i = 1, 2, (3.4) holds. Combining (3.3) and (3.4) together (using t_i = t_{i,n} for every n), it is now easy to see that a.s.
which immediately implies the result for the γ and β components of θ. Break-point u. Recalling that u = −βγ^{−1} and thanks to the result we just proved, we find that a.s.

Asymptotic distribution of the MLE
In this Section we derive the asymptotic distribution of the MLE for the pseudo-problem (see Proposition 4.1) and then show that the MLE of the pseudo-problem and that of the problem share the same asymptotic distribution (see Theorem 4.2).
Proof of Proposition 4.1. The proof is divided in two steps. We first show that the likelihood of the pseudo-problem is a.s. differentiable in a neighbourhood of the MLE θ̂*_n for n large enough. We then recover the asymptotic distribution of the MLE following the usual scheme of proof, with a Taylor expansion of the likelihood of the pseudo-problem around the true parameter. The details of these two steps are given in Section 6.3.
where the asymptotic Fisher Information Matrix I(·) is defined in (2.3).
Proof of Theorem 4.2. It is a direct consequence of Proposition 4.1 as soon as we show that (4.1) holds as n → +∞. To prove (4.1), we study each coordinate separately. For γ and u, we apply Lemmas 4.12 and 4.16 found in Feder (1975) with a slight modification: the rate of convergence d_n he uses may differ from ours, but it suffices to formally replace (log log n)^{1/2} by (log n)^{1/2} all throughout his paper and the proofs he provides go through without any other change. We thus get (4.2). It now remains to show (4.3). To do so, we use (4.2) and the decomposition (6.40). The details of this are available in Section 6.3.

Discussion
In this Section, we summarise the results presented in this paper. The consistency of the posterior distribution for a piecewise linear regression model is derived, as well as its asymptotic normality with suitable normalisation. The proofs of these convergence results rely on the convergence of the MLE, which is also proved here. In order to obtain all the asymptotic results, a regularised version of the problem at hand, called the pseudo-problem, is first studied, and the difference between this pseudo-problem and the (full) problem is then shown to be asymptotically negligible.
The trick of deleting observations in a diminishing neighbourhood of the true parameter, originally found in Sylwester (1965), allows the likelihood of the pseudo-problem to be differentiated at the MLE, once the MLE is shown to asymptotically belong to that neighbourhood (this requires at least some control of the rate of convergence of the MLE). This is the key argument needed to derive the asymptotic distribution of the MLE through the usual Taylor expansion of the likelihood at the MLE. Extending the results of Ghosh et al. (2006) to a non-i.i.d. setup, the asymptotic normality of the posterior distribution for the pseudo-problem is then recovered from that of the MLE, and passes on almost naturally to the (full) problem.
The asymptotic normality of the MLE and of the posterior distribution are proved in this paper in a non-i.i.d. setup with a non continuously differentiable likelihood. In both cases we obtain the same asymptotic results as for an i.i.d. regular model: the rate of convergence is √n and the limiting distribution is Gaussian (see Ghosh et al., 2006; Lehmann, 2004). For the piecewise linear regression model, the exogenous variable t_{1:n} does not appear in the expression of the rate of convergence, as opposed to what is known for the usual linear regression model (see Lehmann, 2004): this is due to our own Assumption (A1), which implies that t′_{1:n} t_{1:n} is equivalent to n. Note that for a simple linear regression model, we also obtain the rate √n under Assumption (A1). In the literature, several papers have already highlighted the fact that the rate of convergence and the limiting distribution (when it exists) may be different for non-regular models, in the sense that the likelihood is either non-continuous, or non continuously differentiable, or admits singularities (see Dacunha-Castelle, 1978; Ghosh et al., 1994; Ghosal and Samanta, 1995; Ibragimov and Has'minskii, 1981). For the piecewise regression model, the likelihood is continuous but non continuously differentiable on a countable set (though the left and right derivatives exist and are finite): the rate of convergence √n is not so surprising in our case, because this rate was already obtained for a univariate i.i.d. model whose likelihood has the same non-regularity at a single point. When the likelihood is discontinuous, by contrast, the rate of convergence of the MLE is shown to be n (see Dacunha-Castelle, 1978, for instance).
Our aim is to show that a.s. (6.27) holds.
Notice that (6.27) follows from (6.29) and (6.30) if we manage to show that a.s.
It thus now suffices to prove that a.s., for any g ∈ G,

|⟨ζ, g⟩| = ‖ζ‖ ‖g‖ · o(1),    (6.32)

where the o(1) mentioned in (6.32) is uniform in g over G (i.e. a.s. ζ is asymptotically uniformly orthogonal to G), for (6.31) is a direct consequence of (6.32) and Lemma 6.1, whose proof is found in Feder (1975).
Lemma 6.1. Let X and Y be two linear subspaces of an inner product space E. If there exists α < 1 such that where x * (resp. y * ) is the orthogonal projection of x + y onto X (resp. Y).
We immediately deduce that a.s. (6.32) holds i.e. a.s. ζ is asymptotically uniformly orthogonal to G, which completes the proof.

Proofs of Section 4
Proof of Proposition 4.1. We proceed as announced.
Step 1. We first prove that a.s.
Let us notice that anything proven for the problem remains valid for the pseudo-problem. Because n * ∼ n, we have a.s., thanks to Theorem 3.3 and conditions (2.6), as n − → +∞ and thus deduce from the ratio of these two quantities that and this directly implies the desired result.
Since θ̂*_n → θ_0, we also have θ_n → θ_0, and using both Lemmas 7.9 and 7.10 we immediately find the convergence below as n → +∞, which means, remembering both that n* ∼ n and that I(θ_0) is positive definite and thus invertible, that the claimed limit holds as n → +∞.

Proof of Theorem 3.3. We now prove that (3.2) holds for the variance of the noise σ². Observe the decomposition below, where we denote the summands for i = 1, . . . , n. It is thus easy to see that a.s. (6.44) holds and also that, via Corollary 7.7, a.s.
Proof of Theorem 4.2. To finish the proof, we need to show (4.3). We use the decomposition (6.40). Having proved in Proposition 4.1 the relevant limits, we add these relationships to those from (4.2) and find the stated convergences. Using (6.47) together with (6.42), we are able to write the corresponding bounds. It is hence easy to see which terms vanish, which, once both substituted into (6.40), yields the result. What was done above with the problem and σ̂²_n can be done with the pseudo-problem and σ̂*²_n without any modification, so that the same limit holds. We observe that, using the Central Limit Theorem, and in the end we get

Technical results
Theorem 7.1 (Polya's Theorem). Let (g_n)_{n∈N} be a sequence of non-decreasing (or non-increasing) functions defined over I = [a, b] ⊂ R. If g_n converges pointwise to g (i.e. g_n(x) → g(x) as n → +∞, for any x ∈ I) and g is continuous, then g_n converges to g uniformly over I.

Proof of Theorem 7.1. Assume the functions g_n are non-decreasing over I (if not, consider their opposites −g_n). g is continuous over I and thus bounded since I is compact. g is also non-decreasing over I as the limit of a sequence of non-decreasing functions. Let ǫ > 0 and k > (g(b) − g(a))/ǫ be such that there exist a = a_0 < . . . < a_k = b in I^{k+1} with g(a_{i+1}) − g(a_i) < ǫ for all i = 0, . . . , k − 1.

Now let x ∈ I and let i ∈ N be such that a_i ≤ x ≤ a_{i+1}. Since g_n and g are non-decreasing, we find that g_n(a_i) ≤ g_n(x) ≤ g_n(a_{i+1}) and g(a_i) ≤ g(x) ≤ g(a_{i+1}). The pointwise convergence of g_n to g and the finiteness of k together ensure that max_{0≤i≤k} |g_n(a_i) − g(a_i)| → 0, which implies, with both of the inequalities mentioned above, that the convergence is uniform over I.

Lemma 7.2. Let k ∈ N*; there exists a constant C ∈ R*_+ such that for any (u, u′) ∈ [u̲, ū]², (7.1) holds. The mean value theorem guarantees that there exists v between u and u′ such that (7.2) holds. We thus have a bound on sup_{t∈[u̲,ū]} |t − u|, and now (7.1) is a simple consequence of (7.2), (7.3) and (7.4), while (7.5) is a simple consequence of Lemma 7.2.
and then use the triangle inequality. To see that the claim holds, it suffices, thanks to Lemma 7.3, to exhibit a finite and tight enough grid of A such that any point of A lies close enough to a point of the grid. The existence of such a grid is guaranteed since A ⊂ R² is bounded.
Proof of (7.10). Thanks to Assumption (A1), it is easy to see that (7.10) holds.

Lemma 7.6. Let A ⊂ R × [u̲, ū] be a bounded set, and let η_0 ∈ A; then under Assumptions (A1)-(A4), the stated convergence holds.

Proof of Lemma 7.6. Let ǫ > 0, η ∈ A, and apply Lemma 7.4 to get the corresponding m(ǫ) ∈ N, {η_1, . . . , η_{m(ǫ)}} ⊂ A, and j, j′ ∈ {1, . . . , m(ǫ)}. We can write, with the triangle inequality, the bound (7.12). Let us now recall Kolmogorov's criterion, a proof of which is available in Section 17 of Loève (1991), pages 250-251. This criterion guarantees a strong law for any sequence (Y_i)_{i∈N} of independent random variables and any suitable numerical sequence. For each couple (j, j′) ∈ {1, . . . , m(ǫ)}², Kolmogorov's criterion ensures the corresponding a.s. convergence. Having only a finite number of couples (j, j′) ∈ {1, . . . , m(ǫ)}² to consider allows us to write (7.13). By (7.13), the first term on the right-hand side of (7.12) converges almost surely to zero. The Strong Law of Large Numbers ensures that the second term on the right-hand side of (7.12) converges almost surely to ǫ · (2π^{−1}σ²)^{1/2}, and the result follows, since all the work done above for (ξ_n)_{n∈N} can be done again for (−ξ_n)_{n∈N}.
Lemma 7.7. Let (Z_i)_{i∈N} be a sequence of independent identically distributed random variables such that for all i ∈ N, either Z_i ∼ N(0, σ²) with σ² > 0, or Z_i ∼ χ²(k) with k > 0. Then a.s., as n → +∞, Z_n = O(log n).

Proof of Lemma 7.7. Denote Y_n = Z_n when the random variables are Gaussian, and Y_n = Z_n/5 when the random variables considered are chi-squared (so that E e^{2Y_1} and E e^{−2Y_1} are both finite). We will show that a.s. Y_n = O(log n).
For any ǫ > 0, from Markov's inequality we get: From there it is easy to see that for any ǫ > 0 we have which directly implies via Borel-Cantelli's Lemma (see for example Billingsley, 1995, Section 4, page 59) that a.s.
In particular, a.s. for any n large enough, Y_n ≤ log n.
What was done with (Y_n)_{n∈N} can be done again with (−Y_n)_{n∈N}, so that in the end we have a.s., for any n large enough, −log n ≤ Y_n ≤ log n.
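Lemma 7.7 is easy to visualise by simulation: with standard Gaussian draws (σ² = 1, an illustrative choice), even the largest of the first n values stays below log n for moderate n, since the maximum of n Gaussians only grows like (2 log n)^{1/2}.

```python
import numpy as np

rng = np.random.default_rng(4)
for n in (10**3, 10**5):
    y = rng.normal(0.0, 1.0, size=n)
    # Compare the largest |Y_i| among the first n draws with log n.
    print(n, np.abs(y).max(), np.log(n))
```

The printed maxima sit well below log n, consistent with the a.s. bound −log n ≤ Y_n ≤ log n for n large.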
Lemma 7.8. Under Assumptions (A1)-(A4), for any η_0 ∈ R × [u̲, ū], there exists C ∈ R*_+ such that the stated bound holds for any n large enough and for any η.

Proof of Lemma 7.8. We have already almost proved this result in (3.3) (see Theorem 3.3). There is however a small difficulty, since the bound was obtained for τ = (β, γ) and not η = (γ, u). Let V_1 and V_2 be two non-empty open intervals of ]u̲, u_0[ such that their closures V̄_1 and V̄_2 do not overlap. Using the same arguments we used to prove (3.3), we find that there exists C ∈ R*_+ such that (remembering the definition of the intercept β of the model) the corresponding bound holds, and since the analogous bounds hold for j = 1, 2, there exists C ∈ R*_+ such that the bound holds for any n large enough. From here, since u ∈ [u̲, ū] is bounded, it is straightforward that there exists C ∈ R*_+ such that the claimed inequality holds for any n large enough, which ends the proof.
Let us now check that the random variables Z_i meet the requirements of Lyapounov's Theorem (see Billingsley, 1995, page 362) before wrapping up this proof. The random variables Z_i are independent and trivially L². We denote V*²_n = Σ_{i=1}^{n*} Var Z_i and claim that Lyapounov's condition holds (with δ = 1). Indeed, the first term of the last product is O(n*^{−1/2}) thanks to (7.17), and recalling the definition of Z_i from (7.15), there is no difficulty in showing that the last term of the product, namely (1/n*) Σ_{i=1}^{n*} E|Z_i|³, converges to a finite limit; we find this using trivial dominations and Assumption (A1) once again. Lyapounov's Theorem thus applies here and leads to the convergence of ⟨α, A*_{1:n}(θ_0)⟩ normalised by its standard deviation; multiplying numerator and denominator by σ_0^{−2}, we get

⟨α, A*_{1:n}(θ_0)⟩ / (n*^{1/2} ⟨α, I*_{1:n}(θ_0) α⟩^{1/2}) →d N(0, 1),

and because of (7.17) we can also write the same convergence with the limiting matrix, which, remembering that a.s. n* ∼ n, is equivalent to (7.14).
Proof of Lemma 7.10. We will prove each claim separately.
Proof of (7.18). Differential calculus provides the following expressions for the coefficients of 1 n * B * 1:n (θ).
The convergence we claim is then a direct consequence of Assumption (A1), the fact that n* ∼ n and, depending on the coefficient, either the Strong Law of Large Numbers or Kolmogorov's criterion. Notice that (1/n*) B*_{1:n}(θ_0) − I*_{1:n}(θ_0) →a.s. 0, which will end the proof since n* ∼ n. We will consider each coefficient of C*_{1:n}(θ) in turn, making use of Assumption (A1) once again, and apply repeatedly the Strong Law of Large Numbers and Kolmogorov's criterion, as well as Lemma 7.2, whenever needed.
the last equality holding true because of Lemma 7.2.
the last equality holding true because of the uniform convergence of F n * to F over any compact subset such as [u, u] (see Assumption (A1), and its Remark 1).
where the two last o(1) are direct consequences of Lemmas 7.3 and 7.6.
Those same Lemmas used together with Lemma 7.2, the Strong Law of Large Numbers as well as the well-known Cauchy-Schwarz inequality imply that a.s.
and also that a.s.
and finally that a.s.
(7.22)

sup_{θ∈B^c(θ_0,δρ_n)} (1/(nρ_n²)) [l_{1:n}(X_{1:n}|θ) − l_{1:n}(X_{1:n}|θ_0)] ≤ −ǫ.    (7.23)

Proof of Proposition 7.11. This proposition is to be compared to the regularity condition imposed in Ghosh et al. (2006) (see their condition (A4) in Chapter 4). The aim of this proposition is to show that our model satisfies a somewhat stronger version of that condition. Let δ > 0. Notice first that, similarly to what was done in (6.20), we are able to deduce that a.s.
Step 1 shows that for a given n the supremum considered is reached on a point θ n .
Steps 2 and 3 focus on obtaining useful upper bounds for the supremum.
Step 4 is dedicated to proving that the sequence θ_n admits an accumulation point (the coordinates of which satisfy some conditions), while Step 5 makes use of this last fact to effectively dominate the supremum.
Step 6 wraps up the proof.
Step 1. We first show that a.s. for any n there exists θ_n ∈ R × [u̲, ū] × R*_+ such that ‖θ_n − θ_0‖ ≥ δρ_n and

i_n(θ_n) = sup_{θ∈B^c(θ_0,δρ_n)} i_n(θ).    (7.27)

Let n ∈ N and let (θ_{n,k})_{k∈N} be a sequence of points in B^c(θ_0, δρ_n) such that lim_{k→+∞} i_n(θ_{n,k}) = sup_{θ∈B^c(θ_0,δρ_n)} i_n(θ). From (7.25) it is obvious that σ²_{n,k} is bounded: if it were not, we would be able to extract a subsequence such that σ²_{n,k_j} would go to +∞ and thus i_n(θ_{n,k_j}) would go to −∞. For the very same reason, γ_{n,k} too is bounded. Recalling that u_{n,k} is bounded too by definition, we now see that there exist a subsequence (θ_{n,k_j})_{j∈N} in B^c(θ_0, δρ_n) and a point θ_n in B^c(θ_0, δρ_n) (i.e. in R × [u̲, ū] × R_+, with ‖θ_n − θ_0‖ ≥ δρ_n) such that θ_{n,k_j} → θ_n as j → +∞.
Finally, from (7.25) again it is easy to see that σ²_n > 0, for if it were not, i_n(θ_{n,k_j}) would go to −∞ once again, unless (by continuity of µ with regard to η) ξ_i + µ(η_0, t_i) − µ(η_n, t_i) = 0 for all i ≤ n, which a.s. does not happen.
Step 5. We will now end the proof by showing that there exists ǫ > 0 such that for any n large enough,

i_n(θ_n) ≤ −ǫρ_n².    (7.33)

We consider the two following mutually exclusive situations. Situation A: σ²_∞ ≠ σ²_0. In this situation, from (7.29) we get the bound below. There hence exists ǫ > 0 such that for any n large enough, i_n(θ_n) ≤ −ǫ.