On clustering procedures and nonparametric mixture estimation

This paper deals with nonparametric estimation of conditional densities in mixture models in the case when additional covariates are available. The proposed approach consists of performing a preliminary clustering algorithm on the additional covariates to guess the mixture component of each observation. Conditional densities of the mixture model are then estimated using kernel density estimates applied separately to each cluster. We investigate the expected L1-error of the resulting estimates and derive optimal rates of convergence over classical nonparametric density classes provided the clustering method is accurate. The performance of a clustering algorithm is measured by its maximal misclassification error. We obtain upper bounds on this quantity for a single linkage hierarchical clustering algorithm. Lastly, applications of the proposed method to mixture models involving electricity distribution data and simulated data are presented.


Introduction
Finite mixture models are widely used to account for population heterogeneities. In many fields such as biology, econometrics and social sciences, experiments are based on the analysis of a variable characterized by a different behavior depending on the group of individuals. A natural way to model heterogeneity for a real random variable Y is to use a mixture model. In this case, the density f of Y can be written as

f(t) = Σ_{i=1}^{M} α_i f_i(t),  t ∈ R.   (1.1)

Here M is the number of subpopulations, and α_i and f_i are respectively the mixture proportion and the probability density function of the i-th subpopulation. We refer the reader to Everitt and Hand (1981), McLachlan and Basford (1988) and McLachlan and Peel (2000) for a broader picture of mixture density models as well as for practical applications.
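As a quick illustration (ours, not from the paper), a draw from model (1.1) can be produced by first sampling the latent component I with probabilities α_i and then sampling Y from f_I; a minimal Python sketch, with illustrative names:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mixture(n, alphas, samplers):
    """Draw n observations (Y_k, I_k) from the mixture (1.1):
    first the latent label I with P(I = i) = alpha_i, then Y ~ f_I."""
    labels = rng.choice(len(alphas), size=n, p=alphas)
    ys = np.array([samplers[i]() for i in labels])
    return ys, labels

# Example: two Gaussian components with weights 0.3 and 0.7.
alphas = [0.3, 0.7]
samplers = [lambda: rng.normal(-2.0, 1.0), lambda: rng.normal(2.0, 1.0)]
y, labels = sample_mixture(500, alphas, samplers)
```

The marginal density of the draws is exactly the weighted sum of the component densities, which is the object the paper sets out to estimate componentwise.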
When dealing with mixture density models such as (1.1), some issues arise.
In some cases, the number of components M is unknown and needs to be estimated. To this end, some algorithms have been developed to provide consistent estimates of this parameter. For instance, when M corresponds to the number of modes of f, Cuevas et al. (2000) and Biau et al. (2007) propose an estimator based on the level sets of f. Model identifiability is an additional issue that has received some attention in the literature. Actually, model (1.1) is identifiable only by imposing restrictions on the vector (α_1, ..., α_M, f_1, ..., f_M). In order to provide the minimal assumptions such that (1.1) becomes identifiable, Celeux and Govaert (1995) and Bordes et al. (2006) (see also the references therein) assume that the density functions f_i belong to some parametric or semiparametric density families. However, in a nonparametric setting, it turns out that identifiability conditions are more difficult to provide. Hall and Zhou (2003) define mild regularity conditions to achieve identifiability in a multivariate nonparametric setting while Kitamura (2004) considers the case where appropriate covariates are available.
When the model (1.1) is identifiable, the statistical problem consists of estimating the mixture proportions α_i and density functions f_i. In the parametric case, some algorithms have been proposed, such as maximum likelihood techniques (Lindsay (1983a,b), Redner and Walker (1984)) as well as Bayesian approaches (Diebolt and Robert (1994), Biernacki et al. (2000)). When the f_i belong to nonparametric families, it is often assumed that training data are observed, i.e., the component of the mixture from which Y is distributed is available. In that case, the model is identifiable and some algorithms make it possible to estimate both the α_i and the f_i (see Titterington (1983), Hall and Titterington (1984, 1985), Cerrito (1992)). However, as pointed out by Hall and Zhou (2003), inference in nonparametric mixture density models becomes more difficult without training data. These authors introduce consistent nonparametric estimators of the conditional distributions in a multivariate setting. We also refer to Bordes et al. (2006), who provide efficient estimators under the assumption that the unknown mixed distribution is symmetric. These estimates are extended by Benaglia et al. (2009, 2011) to multivariate mixture models.
The framework we consider takes place between the two above situations. More precisely, training data are not observed but we assume to have at hand some covariates that may provide information on the components of the mixture from which Y is distributed. Our approach consists of performing a preliminary clustering algorithm on these covariates to guess the mixture component of each observation. Density functions f_i are then estimated using a nonparametric density estimate based on the predictions of the clustering method.
Many authors have already proposed to carry out a preliminary clustering step to improve density estimates in mixture models. Ruzgas et al. (2006) conduct a comprehensive simulation study and conclude that a preliminary clustering using the EM algorithm improves, to some extent, the performance of some density estimates (see also Jeon and Landgrebe (1994)).
However, to our knowledge, no work has been devoted so far to measuring the effects of the clustering algorithm on the resulting estimates of the distribution functions f_i. This paper proposes to fill this gap by studying the L1-error of these estimates. To do so, we measure the performance of clustering methods by the maximal misclassification error (2.3). This criterion allows us to derive optimal rates of convergence over classical nonparametric density classes, provided the clustering method used in the first step performs well with respect to this notion.
The paper is organized as follows. In Section 2, we present the two-step estimator and give the main results. Examples of clustering algorithms are worked out in Section 3. In particular, the maximal misclassification error of a hierarchical clustering algorithm is studied under mild assumptions on the model. Applications on simulated and real data are presented in Sections 4 and 5. A short conclusion including a discussion of the implications of the work is given in Section 6, and proofs are gathered in Section 7.

The statistical problem
Our focus is on the estimation of conditional densities in a univariate mixture density model. Formally, we let (Y, I) be a random vector taking values in R × {1, ..., M}, where M ≥ 2 is a known integer. We assume that the distribution of Y is characterized by a density f defined, for all t ∈ R, by

f(t) = Σ_{i=1}^{M} α_i f_i(t),

where, for all i ∈ {1, ..., M}, α_i = P(I = i) are the prior probabilities (or the weights of the mixture) and f_i are the densities of the conditional distributions L(Y|I = i) (or the components of the mixture).
If we have at hand n observations (Y_1, I_1), ..., (Y_n, I_n) drawn from the distribution of (Y, I), one can easily find efficient estimates for both the α_i and the f_i. For example, if we denote N_i = #{k ∈ {1, ..., n} : I_k = i}, then we can estimate α_i by the empirical proportion ᾱ_i = N_i/n and f_i by the kernel density estimate f̄_i defined, for all t ∈ R, by

f̄_i(t) = (1/N_i) Σ_{k : I_k = i} K_h(t − Y_k).   (2.1)

For the definiteness of f̄_i, we conventionally set f̄_i(t) = 0 if N_i = 0.
Here K is a kernel which belongs to L1(R, R) and such that ∫K = 1, h > 0 is a bandwidth, and

K_h(t − ·) = (1/h) K((t − ·)/h)   (2.2)

is the classical convolution kernel located at point t (see Rosenblatt (1956) and Parzen (1962) for instance). Estimate (2.1) is just the usual kernel density estimate defined from observations in the i-th subpopulation. It follows that, under classical assumptions regarding the smoothing parameter h and the kernel K, f̄_i has similar properties to those of the well-known kernel density estimate. In particular, the expected L1-error achieves optimal rates when f_i belongs to regular density classes such as Hölder or Lipschitz classes (see Devroye and Györfi (1985)).
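For concreteness, here is a small Python sketch (ours; the names are illustrative) of the estimate (2.1) with a Gaussian kernel, including the convention that the estimate is identically zero on an empty class (N_i = 0):

```python
import numpy as np

def kde(t, sample, h):
    """Kernel density estimate (2.1) with a Gaussian kernel:
    (1 / (N h)) * sum_k K((t - Y_k) / h); returns 0 when the class is empty."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    sample = np.asarray(sample, dtype=float)
    if sample.size == 0:                     # convention: f_i = 0 if N_i = 0
        return np.zeros_like(t)
    u = (t[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (sample.size * h * np.sqrt(2 * np.pi))
```

Applied to the sub-sample {Y_k : I_k = i}, this is exactly the oracle estimate f̄_i; its output integrates to one whenever the class is nonempty.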
The problem is more complicated when the random variable I is not observed. In this situation, ᾱ_i and f̄_i are not computable and one has to find another way to define efficient estimates for both α_i and f_i. In this work, we assume that one can obtain information on I through another covariate X which takes values in R^d, where d ≥ 1. This random variable is observed and its conditional distribution L(X|I = i) is characterized by a density g_i = g_{i,n} : R^d → R which may depend on n. In this framework, the statistical problem is to estimate both the components and the weights of the mixture model (1.1) using the n-sample (X_1, Y_1), ..., (X_n, Y_n).

Discussion on the model
Estimating components of a mixture model is a classical statistical problem.
The new feature proposed here is to include covariates in the model which can potentially improve traditional algorithms. These covariates are represented by a random vector X which provides information on the unobserved group I. This model includes many practical situations. Three examples are provided in this section.
The classical mixture problem without covariates. A traditional problem in mixture models is the estimation of the components f_i, i ∈ {1, ..., M}, in (1.1) from (only) an i.i.d. sample Y_1, ..., Y_n drawn from f: no covariates are available. In this context, many parametric methods such as the EM algorithm (and its derivatives) as well as nonparametric procedures (under suitable identifiability constraints) can be used and are widely studied. Even if this model is formally a particular case of ours (we just have to take X = Y), the approach presented in this paper is not designed to be competitive in this situation with dedicated parametric or nonparametric methods. Indeed, our model focuses on practical situations where covariates can be used to obtain useful information about the hidden variable I. Below, we offer two realistic situations where such covariates are naturally available.

Medical example.
Many diseases evolve over time and exhibit different stages of development, which can be represented by a variable I that takes a finite number of values. In many situations, the problem is not to study the stage I but some variables that can potentially have different behavior according to I. For instance, the survival time Y and its conditional distributions with respect to I are typically of interest in many situations. In practice, the stage I is generally not observed. It is assessed by the medical team from several items such as physiological data, medical examinations, interviews with the patient (and so on) that can be represented by covariates X in our model.

Electricity distribution.
A distribution network may locally experience minor problems, due for example to bad weather, that may affect some customers during a fixed period of time in a given geographical area. To better understand the origin and/or consequences of the dysfunctions, and thus better forecast network operations, electricity distributors are interested in the distribution behavior of several quantities Y for two different groups of customers: those affected by the malfunction and the others. Variables Y may for instance represent averages or variations of consumption after the disruption period. In this situation, the group is represented by a variable I: I = 1 for the users affected by the disruption and I = 2 for the others. This binary variable I is not directly observed but it can be guessed from individual consumption curves recorded during the disruption period. In our framework, discrete versions of these curves correspond to the covariate X. This example is explained in depth and analyzed in Section 5 using real data from ERDF, the main French distributor of electricity.

A kernel density estimate based on a clustering approach
To estimate the densities f_i of the conditional distributions L(Y|I = i), i ∈ {1, ..., M}, we propose a two-step algorithm that can be summarized as follows.
1. Apply a clustering algorithm on the sample X_1, ..., X_n to predict the label I_k of each observation X_k;
2. Estimate the conditional densities f_i by kernel density estimates (2.1) where unobserved labels are substituted by predicted labels.
Formally, we first perform a given clustering algorithm to split the sample X_1, ..., X_n into M + 1 clusters 𝒳_0, 𝒳_1, ..., 𝒳_M. We do not specify the clustering method here; some examples are discussed in Sections 3 and 4. Observe that we define M + 1 clusters instead of M.
The cluster 𝒳_0 (which could be empty) contains the observations for which the clustering procedure is not able to predict the label. For example, if the clustering procedure reveals some outliers, they are collected in 𝒳_0 and we do not use these outliers to estimate the f_i.
Once the clustering step is performed, we define the predicted labels Î_k by

Î_k = i  if X_k ∈ 𝒳_i,  i ∈ {0, 1, ..., M}.

Observation X_k is not correctly assigned to its group with probability P(Î_k ≠ I_k). We measure the performance of the clustering algorithm by the maximal probability of not correctly attributing an observation:

φ_n = max_{1≤k≤n} P(Î_k ≠ I_k).   (2.3)

We call this error term the maximal misclassification error. It will be studied for two clustering algorithms in Section 3.
To define our estimates, we just replace in (2.1) the true labels I_k by the predicted labels Î_k. Formally, the prior probabilities α_i are estimated by

α̂_i = #𝒳_i / n,

while for the conditional densities f_i we consider the kernel density estimator

f̂_i(t) = (1/#𝒳_i) Σ_{k : Î_k = i} K_h(t − Y_k),   (2.4)

with kernel K : R → R and bandwidth h > 0, where K_h is defined in (2.2). Observe that since for all i ∈ {1, ..., M} the clusters 𝒳_i are nonempty, the estimates f̂_i are well defined.
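The two-step procedure can be sketched in Python as follows. This is a simplified version under our own choices, not the paper's exact implementation: SciPy's single linkage stands in for the clustering step, SciPy's Gaussian KDE with its default bandwidth replaces the kernel/bandwidth pair of (2.4), and no outlier cluster 𝒳_0 is produced.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import gaussian_kde

def two_step_estimates(x, y, M):
    """Step 1: cluster the covariates X_1..X_n into M groups.
    Step 2: estimate alpha_i and f_i from the Y_k of each cluster."""
    x = np.asarray(x, dtype=float).reshape(len(x), -1)
    Z = linkage(x, method="single")                    # single linkage tree
    pred = fcluster(Z, t=M, criterion="maxclust") - 1  # predicted labels 0..M-1
    alpha_hat = np.array([(pred == i).mean() for i in range(M)])
    f_hat = [gaussian_kde(np.asarray(y)[pred == i]) for i in range(M)]
    return alpha_hat, f_hat, pred
```

Any clustering routine returning M groups could be substituted for the linkage step, which is exactly the modularity the two-step scheme is designed for.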
Kernel estimates f̂_i are defined from observations in cluster 𝒳_i. The underlying assumption is that, for all i ∈ {1, ..., M}, each cluster 𝒳_i collects almost all of the observations X_k such that Y_k is drawn from f_i. Under this assumption, φ_n is expected to be small and f̂_i to be close to the oracle estimate f̄_i defined by equation (2.1). This closeness is measured in the following theorem, which makes the connection between the expected L1-errors of f̄_i and f̂_i.
Theorem 2.1. There exist positive constants A_1–A_3 such that, for all n ≥ 1 and i ∈ {1, ..., M}, inequalities (2.5) and (2.6) hold.

Constants A_1–A_3 are specified in the proof of the theorem. We emphasize that inequalities (2.5) and (2.6) are non-asymptotic, that is, the bounds are valid for all n. If we intend to prove any consistency result regarding f̂_i and α̂_i, inequality (2.5) says that the maximal misclassification error φ_n should tend to zero. Moreover, if φ_n tends to zero much faster than the L1-error of f̄_i, then the asymptotic performance of f̂_i is guaranteed to be equivalent to that of the oracle estimate f̄_i. The L1-error of f̄_i, with properly chosen bandwidth h and kernel K, is known to go to zero, under standard smoothness assumptions, at rate n^{−s/(2s+1)}, where s > 0 is typically an index representing the regularity of f_i. For example, when we consider Lipschitz or Hölder classes of functions with compact supports, s corresponds to the number of absolutely continuous derivatives of the functions.

Remark 2.1. Note that even if the clusters 𝒳_1, ..., 𝒳_M are arbitrarily indexed, inequalities (2.5) and (2.6) are true whatever the choice of the indexes. However, when indexes are not chosen according to the true labels, φ_n could be large even if the clustering procedure performs well. In this situation there exists a permutation of the indexes such that, after this permutation, the maximal misclassification error is small. More precisely, it can be readily seen, using Theorem 2.1, that the bounds hold with φ_n replaced by min_{π∈Π_M} φ_n(π), where Π_M denotes the set of all permutations of {1, ..., M} and φ_n(π) is the maximal misclassification error of the clustering method after the permutation of the indexes:

φ_n(π) = max_{1≤k≤n} P(π(Î_k) ≠ I_k).

Remark 2.2. As usual, the choice of the bandwidth h proves crucial for the performance of the kernel density estimates. However, this paper does not provide any theory to select this parameter. If automatic or adaptive procedures are needed, they can be obtained by adjusting traditional automatic selection procedures for classical nonparametric estimators (see for example Berlinet and Devroye (1994) or Devroye and Lugosi (2001)).
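An empirical counterpart of the minimization over permutations in Remark 2.1 can be computed by brute force over the M! relabelings; a small Python sketch (ours, with illustrative names):

```python
import itertools
import numpy as np

def best_permutation_error(true_labels, pred_labels, M):
    """Empirical misclassification rate minimized over all permutations
    pi of the predicted labels, mirroring min over pi of phi_n(pi)."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    best = 1.0
    for pi in itertools.permutations(range(M)):
        relabeled = np.array([pi[i] for i in pred_labels])
        best = min(best, float(np.mean(relabeled != true_labels)))
    return best
```

For moderate M this brute force is cheap; for large M the optimal matching can instead be found in polynomial time with an assignment (Hungarian-type) algorithm on the confusion matrix.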

Clustering procedures
The proposed procedure requires a preliminary clustering algorithm performed on the sample X_1, ..., X_n. Even if any clustering algorithm could be applied in practice, it should be chosen according to the conditional distributions L(X|I = i), i ∈ {1, ..., M}. More precisely, each cluster should match up with observations drawn from one of those conditional distributions. From a theoretical point of view, for a given clustering procedure, the problem is to find upper bounds for the maximal misclassification error φ_n in order to apply Theorem 2.1. In a parametric setting, i.e., when conditional distributions are identified by unknown parameters, clustering algorithms are often based on efficient estimators of these unknown parameters. We provide an example in Section 3.1. Without parametric assumptions on the distribution, the problem is more complicated. Contrary to data analysis tasks such as regression or classification, there are many ways to define clustering. One of the most popular approaches consists of defining clusters as the connected components of the level sets of the density (see Hartigan (1975)). This amounts to saying that clusters represent high-density regions of the data separated by low-density regions. In this context, many authors have studied theoretical performances of clustering algorithms based on neighborhood graphs, such as hierarchical or spectral clustering algorithms. In Section 3.2, we extend results of Maier et al. (2009) and Arias-Castro (2011) to our framework for a hierarchical clustering algorithm based on pairwise distances. This procedure is compared with other clustering methods in the simulation part.

A parametric example
We consider a mixture of two uniform univariate densities, where we recall that g_{i,n} is the density of the conditional distribution L(X|I = i), i = 1, 2. Here (λ_n)_n is a non-increasing sequence which tends to 0 as n goes to infinity. In this parametric situation, a natural way to guess the unobserved label I_k of the observation X_k is to find an estimator λ̂_n of λ_n and to predict the labels accordingly (see Figure 1). The accuracy of these predictions depends on the choice of the estimator λ̂_n.
Here we choose λ̂_n = 2 − X_(n), where X_(n) = max_{1≤k≤n} X_k. Note that in this situation, for i = 1, 2, every observation assigned to 𝒳_i actually belongs to group i. It means that all classified observations (with non-zero estimated label) are well classified and that misclassified observations are collected in 𝒳_0 (see Figure 1).
The following proposition establishes a performance bound for the maximal misclassification error ϕ n of this clustering procedure.
Proposition 3.1. There exists a positive constant A_4 such that, for all n ≥ 1, the following bound on φ_n holds.

Unsurprisingly, φ_n decreases as λ_n decreases. Moreover, since in most cases of interest the expected L1-error of f̄_i tends to zero much slower than 1/√n, this property means that, asymptotically, the expected L1-error of f̂_i is of the same order as the expected L1-error of f̄_i, provided λ_n tends to zero fast enough.

A hierarchical clustering algorithm
Assuming that clusters are defined as connected components of level sets of a density, many authors have studied theoretical properties of various clustering algorithms. For instance, Maier et al. (2009) and Arias-Castro (2011) prove that algorithms based on pairwise distances (k-nearest neighbor graph, spectral clustering, ...) are efficient as soon as these connected components are separated enough. In this section, we extend results of these authors to bound the maximal misclassification error φ_n for a hierarchical clustering algorithm.

The clustering algorithm
Given X_1, ..., X_n, we consider a single linkage hierarchical clustering algorithm based on pairwise distances to extract exactly M disjoint clusters 𝒳_1, ..., 𝒳_M from the observations (see Arias-Castro (2011)). This algorithm consists of finding a data-driven radius r̂_n > 0 such that the set

∪_{k=1}^{n} B(X_k, r̂_n)   (3.2)

has exactly M connected components. Here B(x, r) stands for the closed Euclidean ball with center x ∈ R^d and radius r > 0. Cluster 𝒳_i is then naturally composed of the observations X_k which belong to the i-th connected component of the set (3.2).
The radius r̂_n can be defined in a formal way to derive statistical properties of the clustering procedure. To this end, we define for each positive real number r the n × n affinity matrix A_r with entries

A_r(k, ℓ) = 1 if ‖X_k − X_ℓ‖_2 ≤ r, and 0 otherwise,

where ‖x‖_2 stands for the Euclidean norm of x ∈ R^d. This matrix induces a non-oriented graph on the set {1, ..., n}, and two different observations X_k and X_ℓ belong to the same cluster if k and ℓ belong to the same connected component of the graph. We let M_r be the number of connected components of the graph and we denote by 𝒳_1(r), ..., 𝒳_{M_r}(r) the associated clusters. The radius is selected as

r̂_n = inf{r > 0 : M_r ≤ M}.

Note that r̂_n is well defined since the random set R_M = {r > 0 : M_r ≤ M} is bounded from below (by 0) and non-empty, since r* = max_{k,ℓ} ‖X_k − X_ℓ‖_2 always belongs to this set (M_{r*} = 1). Moreover, since r ↦ M_r is non-increasing and right-continuous, one can easily prove that r̂_n = min R_M and M_{r̂_n} = M almost surely when n ≥ M. Let 𝒳_1(r̂_n), ..., 𝒳_M(r̂_n) be the M clusters induced by A_{r̂_n}; the aim is to study the maximal misclassification error (2.3) of this clustering algorithm.
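Since r ↦ M_r is a non-increasing step function that only jumps at pairwise distances, r̂_n can be found by scanning the sorted pairwise distances; a direct O(n²)-per-radius Python sketch (ours), using union-find for the connected components:

```python
import numpy as np

def n_components(x, r):
    """Number of connected components M_r of the graph linking
    X_k and X_l whenever ||X_k - X_l||_2 <= r (union-find)."""
    n = len(x)
    parent = list(range(n))

    def find(k):
        while parent[k] != k:
            parent[k] = parent[parent[k]]  # path halving
            k = parent[k]
        return k

    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    for k in range(n):
        for l in range(k + 1, n):
            if d[k, l] <= r:
                parent[find(k)] = find(l)
    return len({find(k) for k in range(n)})

def select_radius(x, M):
    """Smallest radius r with M_r <= M: it suffices to scan the
    sorted pairwise distances, where r -> M_r jumps."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=2)
    for r in np.unique(d[np.triu_indices(len(x), 1)]):
        if n_components(x, r) <= M:
            return float(r)
    return 0.0  # unreachable for M >= 1: the largest distance gives one component
```

A production version would compute the whole single linkage tree once (as Remark 3.1's graph-search discussion suggests) rather than re-running union-find per candidate radius; the sketch favors transparency over speed.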
Remark 3.1. The algorithm requires that the connected components of the graph induced by the n × n matrix A_r be computed for different values of r. Several algorithms can be used to obtain these connected components. For instance, we can use the depth-first search algorithm (see Cormen et al. (1990)), which runs in O(V_n + E_n) operations, where V_n and E_n denote respectively the number of vertices and edges of the graph.

The clustering model
Recall that the clustering algorithm is performed on the sample X_1, ..., X_n.
To study the maximal misclassification error, some assumptions on the distribution of X are needed.
Assumption 1. Let g_n denote the probability density of X. We assume that there exists a positive sequence (t_n)_n such that the set

S_n = {x ∈ R^d : g_n(x) ≥ t_n}   (3.4)

has exactly M connected components S_{1,n}, ..., S_{M,n}, indexed so that condition (3.5) relates each S_{i,n} to the conditional density g_{i,n} of the distribution L(X|I = i), where distances between sets are measured through the Euclidean norm ‖x − y‖_2.
Assumption 2. There exist two positive constants c_1 and c_2, and a family of N ∈ N Euclidean balls {B_ℓ}_{ℓ=1,...,N} with radius r_n/2, such that S_n ⊂ ∪_{ℓ=1}^{N} B_ℓ and Leb(S_n ∩ B_ℓ) ≥ c_2 r_n^d for all ℓ, where Leb denotes the Lebesgue measure on R^d and r_n is a deterministic radius depending on n.

Assumption 1 is classical to study the performance of clustering algorithms (see Maier et al. (2009)) or to estimate the number of clusters (see Biau et al. (2007)). It implies that clusters reflect high-density regions separated by low-density regions. Condition (3.5) is required to be sure that the connected components of (3.4) are correctly indexed. It makes it possible to avoid that most of the observations in S_{i,n} are drawn from g_{j,n} with j ≠ i. Assumption 2 is more technical and pertains to the diameter and regularity of the sets S_{i,n}.
Our approach consists of identifying the sets S_{i,n} with the connected components of ∪_{k=1}^{n} B(X_k, r). Thus, when the diameter of S_{i,n} increases, large values of the radius r are necessary to connect observations in S_{i,n}. However, for too large values of r, the number of connected components of ∪_{k=1}^{n} B(X_k, r) becomes smaller than M and the method fails. Consequently, we need to constrain the diameter of S_{i,n}. This is ensured by Assumption 2, since it implies that S_n can be covered by N Euclidean balls of radius r_n/2. Finally, the inequality Leb(S_n ∩ B_ℓ) ≥ c_2 r_n^d in Assumption 2 can be seen as a smoothness assumption on the boundaries of S_n (see Biau et al. (2008)).

Remark 3.2
In dimension 1, since each S_{i,n} is connected, it is a segment of the real line. Thus, under Assumption 1, its diameter is bounded by 1/t_n and Assumption 2 is satisfied. For higher dimensions, things turn out to be more complicated. Indeed, even if the measure of the compact set S_n is bounded by 1/t_n, its diameter can be as large as we want. Consider for example a density whose level set S_n has diameter of order a_n, where a_n > 1. Since a_n could be chosen arbitrarily large, the diameter of S_n could also be arbitrarily large and Assumption 2 does not hold. This assumption restricts to some extent the shape of S_n. It is satisfied for regular sets whose diameter does not increase too quickly as n goes to infinity. For example, consider the two-dimensional situation where S_n is a rectangle with length u_n and width v_n. In such a scenario, one can easily prove that if there exist two positive constants a_1 and a_2 such that u_n ≥ a_1 r_n and v_n ≥ a_2 r_n, then Assumption 2 holds. Note also that this assumption is verified for sets S_n that do not depend on the sample size n and have smooth boundaries (see Biau et al. (2007), Maier et al. (2009)).
Remark 3.3. Assumption 1 is clearly satisfied when the supports of the conditional densities g_{i,n} are disjoint. This assumption can also be verified when these supports overlap. As an example, consider the Laplace mixture model

g_{i,n}(x) = (1/(2σ_n)) exp(−|x − µ_{i,n}|/σ_n),  i = 1, 2,

where σ_n > 0 and µ_{i,n} ∈ R (see Figure 2). Let ε_n = |µ_{1,n} − µ_{2,n}| be the distance between the two location parameters µ_{1,n} and µ_{2,n}. Then direct calculations yield that Assumption 1 holds for a suitable choice of (t_n)_n provided the ratio ε_n/σ_n is large enough.

The maximal misclassification error
The algorithm described in Section 3.2.1 provides a partition 𝒳_1(r̂_n), ..., 𝒳_M(r̂_n) of the sample. To apply Theorem 2.1, we have to find an upper bound of the maximal misclassification error for the prediction rule Î_k = i if X_k ∈ 𝒳_i(r̂_n). Observe that, for this clustering algorithm, the clusters 𝒳_1(r̂_n), ..., 𝒳_M(r̂_n) defined in Section 3.2.1 are arbitrarily indexed. Thus there is no guarantee that the predicted labels are correctly indexed. To circumvent this problem, as suggested in Remark 2.1, we study the maximal misclassification error up to a permutation of the indexes.
The proposed clustering algorithm has been studied by Maier et al. (2009) and Arias-Castro (2011). They prove that each cluster corresponds to one of the connected components of (3.4) with high probability in a model similar to ours. In other words, clusters make it possible to identify each connected component of (3.4). Even if the identification of these connected components is important in our setting, it is not sufficient, since our goal is to find an upper bound of the misclassification error (2.8). Moreover, since the supports of the conditional densities g_{i,n} can overlap, observations in the connected components S_{i,n} of (3.4) are not guaranteed to emerge from the distribution L(X|I = i). This leads us to define

ψ_n = max_{1≤i≤M} P(X ∉ S_{i,n} + r_n | I = i),

where, for S ⊂ R^d and r > 0, S + r denotes the set of points within distance r of S. Observe that ψ_n is the maximal probability that an observation from the i-th group does not belong to S_{i,n} + r_n. This parameter reflects the degree of difficulty of correctly predicting the labels of the observations in the model: the larger ψ_n, the more difficult it is. We can now set forth the main result of this section.
Theorem 3.1. Suppose that Assumption 1 and Assumption 2 hold. Moreover, if the separation condition (3.7) is satisfied, then for all 0 < a ≤ c_2 τ − 1 the bound (3.8) holds, where A_5 is a positive constant.
This theorem provides minimal assumptions to make accurate predictions of the labels I_k. Inequality (3.7) gives the minimum distance between the connected components S_{i,n} needed to make the clustering method efficient. When the supports of the conditional densities g_{i,n} are disjoint, it is easily seen that ψ_n = 0, and Î_k = I_k almost surely for n large enough, provided inequality (3.7) is satisfied. When the supports overlap, inequality (3.8) ensures that the algorithm performs well provided the probability ψ_n tends to zero much faster than 1/n. In the Laplace example presented in Remark 3.3, it can be easily seen that as soon as the location separation ε_n = |µ_{1,n} − µ_{2,n}| satisfies ε_n/σ_n ≥ 3 log(n)/2, we have nψ_n ≤ n^{−1/2}, and the expected L1-error of the kernel density estimates defined in (2.4) is then asymptotically of the same order as that of the oracle estimates. Finally, note that when ψ_n = 0, inequality (3.8) implies that each cluster 𝒳_i(r̂_n) belongs to one of the connected components of (3.4) with high probability. This result was obtained by Arias-Castro (2011) in a context similar to ours under assumption (3.7). Theorem 3.1 extends this result to ψ_n > 0. Note also that the proof of this theorem (see Section 7) is different from that of Arias-Castro (2011) and relies on support density estimation tools proposed by Biau et al. (2008).

Simulation study
In this section, we provide simulation results illustrating the efficiency of the proposed estimator. To this end, Y is simulated from mixtures of univariate Gaussian distributions, whereas several scenarios for the distribution of X are considered.
To illustrate Theorem 2.1 and Theorem 3.1, we compare the accuracy of our two-step estimate f̂_i (see (2.4)) with the accuracy of the oracle estimate f̄_i (see (2.1)). Such comparisons are made in both Sections 4.1 and 4.2. However, each of these sections focuses on specific points.
In Section 4.1, the two-step estimate is also compared with the classical EM algorithm. Even if this algorithm is known to be efficient under the parametric assumption made on the distribution of Y, it does not take advantage of the presence of the covariates X. This allows our method to outperform the EM algorithm in favorable situations.
In Section 4.2, different clustering procedures on X are considered on several classical data sets. In particular, the behavior of the spectral clustering and k-means algorithms is studied. Both of them are compared with the hierarchical method studied in Section 3.2.

Comparison with the EM algorithm
In this simulation section, the density of Y is a two-component mixture of f_1 and f_2, where f_1 and f_2 stand for the densities of the normal distributions with means −∆ and ∆ and variance 1. The parameter ∆ measures the separation between the components f_1 and f_2 (see Figure 3). Two scenarios are considered for the distribution of X. In the first one, the conditional densities g_{i,n}, i = 1, 2, are uniform univariate densities, where δ_n > 0 measures the distance between the supports of g_{1,n} and g_{2,n}. In the second one, we consider the mixture of Laplace distributions discussed in Section 3.2.2: the conditional densities g_{i,n}, i = 1, 2, are the Laplace densities of Remark 3.3 with σ_n = 1, µ_{1,n} = 1 and µ_{2,n} = µ_{1,n} + ε_n, where ε_n > 0. Observe that the supports of the g_{i,n} are disjoint in the uniform scenario while they overlap in the Laplace example. The separation between these conditional distributions is represented by the parameters δ_n and ε_n.
For the two proposed scenarios, the estimators f̂_1 and f̂_2 defined in (2.4) are computed using the hierarchical clustering procedure proposed in Section 3.2. These estimates are compared in terms of L1-error with the oracle (but unobservable) estimates f̄_1 and f̄_2 defined in (2.1). The nonparametric kernel estimates f̄_i and f̂_i are computed with a Gaussian kernel. Recall that this paper does not put forth any theory for selecting the bandwidth h in an optimal way (see Remark 2.2). Here we use the default data-driven procedure proposed in the GNU-R library np (see Hayfield and Racine (2008)). In addition, these nonparametric density estimates are compared with the EM algorithm (Dempster et al. (1977)), known to perform well for estimating parameters in a Gaussian mixture model. Formally, we run this algorithm on the sample Y_1, ..., Y_n to estimate the Gaussian parameters of f_1 and f_2. We use the GNU-R library mclust and denote by f̂_1^em and f̂_2^em the resulting estimates. They are used as a benchmark. We set n = 300 and, for the sake of clarity, we present the results regarding f_1 only, since the conclusions are the same for f_2. Table 1 presents, for different values of ∆, δ_n and ε_n (the separation |µ_{1,n} − µ_{2,n}| of the Laplace locations), the ratio of the expected L1-error of f̃_1 to that of the benchmark f̂_1^em, where f̃_1 is either f̂_1 or f̄_1. Expectations are evaluated over 500 Monte Carlo replications.
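The L1-errors behind such ratios can be approximated numerically on a fine grid; a small Python sketch (ours, with illustrative names) using a trapezoidal rule:

```python
import numpy as np

def l1_error(est, truth, grid):
    """Trapezoidal approximation of the L1-distance between two densities,
    the empirical counterpart of the expected L1-error studied above."""
    diff = np.abs(est(grid) - truth(grid))
    dx = np.diff(grid)
    return float(np.sum(dx * (diff[:-1] + diff[1:]) / 2.0))

# Example: L1-distance between two unit-variance Gaussians, N(0,1) and N(1,1).
def normal_pdf(m):
    return lambda t: np.exp(-0.5 * (t - m) ** 2) / np.sqrt(2 * np.pi)

grid = np.linspace(-10.0, 10.0, 4001)
err = l1_error(normal_pdf(0.0), normal_pdf(1.0), grid)
```

In a Monte Carlo study, averaging `l1_error` over replications estimates the expectation, and dividing two such averages gives the reported ratios.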
Table 1: Error ratios R(f̄_1) and R(f̂_1) for δ_n ∈ {0.03, 0.05, 0.1} (uniform scenario) and ε_n ∈ {4.5, 5.5, 6.5} (Laplace scenario), for ∆ ranging from 0.1 to 2.

As expected, the performance of the EM algorithm clearly depends on the separation distance between the target densities f_1 and f_2. For large values of ∆, the parametric estimates resulting from the EM algorithm outperform the nonparametric estimates proposed in this paper (e.g., ∆ = 2 in Figure 4). This is not the case when f_1 is close to f_2: the L1-performance of f̂_1 relative to f̂_1^em is significantly better for ∆ = 0.1 and ∆ = 0.5, and roughly similar for ∆ = 1. Note also that the L1-error of f̂_1 does not depend on ∆ (see Figure 4).

Figure 4: L1-errors of the EM estimate f̂_1^em (EM), the oracle estimate f̄_1 (OR) and the two-step estimate f̂_1 (TS) for the Laplace example. The separation distance ∆ between f_1 and f_2 varies from 0.1 (left) to 2 (right), with the Laplace separation ε_n = |µ_{1,n} − µ_{2,n}| equal to 5.5.

Figure 5 displays scatterplots of the L1-error of f̂_1 versus that of the oracle f̄_1 for ∆ = 1. As proved in Theorem 2.1, most points are above the diagonal. The distance from a point to the first bisector measures to some extent the distance between f̂_1 and f̄_1 in terms of L1-error: the closer to the bisector, the better f̂_1. In other words, this distance represents the performance of the clustering algorithm. We observe that points move closer to the first bisector as the separation parameters δ_n and ε_n increase. As explained in Theorem 3.1, the performance of the hierarchical clustering algorithm depends on these separation parameters: when they increase, the performance of f̂_1 becomes similar to that of the oracle f̄_1. Indeed, in our simulations, we observe that the L1-errors of f̂_1 and f̄_1 are essentially the same for δ_n = 0.1 (resp. ε_n = 6.5) in the uniform case (resp. the Laplace case).

A comparison of clustering algorithms
As discussed in Section 3, any clustering algorithm could be applied in practice. However, the L1-performance of the proposed estimate depends to a large extent on the performance of the clustering method. The problem is thus to find an appropriate clustering algorithm for the covariates X. In this section, we compare three standard clustering procedures: the hierarchical clustering algorithm presented in Section 3.2, the spectral clustering algorithm performed with a Gaussian kernel (see Arias-Castro (2011)), and the k-means algorithm.

Figure 5: L1-error of the oracle estimate f̃1 (x-axis) and the two-step estimate f̂1 (y-axis) for the uniform (top) and Laplace (bottom) examples.
The model is as follows. The density of Y is now given by a two-component mixture, where f1 and f2 stand for the densities of the normal distributions with means −1 and 1 and common variance 1. Here the random variable X takes values in R² and we again consider two scenarios for its distribution:
• "Circle-Square" model (see Baudry (2009)): g1,n is the density of the Gaussian distribution with mean (a, 0) and identity variance-covariance matrix; g2,n is the density of the uniform distribution over the square [−1, 1]² (see Figure 6).
The difficulty encountered in identifying each group depends on the parameters a and r2: the smaller a and r2, the harder it is to identify the clusters. For the two examples described, we compute the two-step kernel density estimator with each of the three clustering algorithms: hierarchical, spectral and k-means. The resulting estimates are compared with the oracle estimates f̃1 and f̃2. We keep the same setting as above to compute the estimates f̂1 and f̂2: Gaussian kernel and bandwidth selected with the library np. For the sake of clarity, we again present only the results for f̂1, since the conclusions are the same for f̂2. Tables 2 and 3 present the ratio (4.2) for several values of a, r2 and n. Expectations are evaluated over 500 Monte Carlo replications, and Figure 8 presents boxplots of the L1-errors of the different estimates. For each replication, we also compute the error of the clustering procedure, (1/n) Σ_{k=1}^n 1{Îk ≠ Ik}, and we report in Tables 2 and 3 this error term averaged over the 500 replications (denoted err_n). Observe that this term is closely related to the maximal misclassification error ϕn.

Table 2: Error ratio (4.2) evaluated over 500 Monte Carlo replications for the "Circle-Square" example.

As proved in Theorem 2.1, the performance of f̂1 depends on the accuracy of the clustering approach: the lower err_n, the better f̂1. For the "Circle-Square" dataset, the k-means algorithm unsurprisingly outperforms the two other clustering methods. Indeed, k-means is well suited to this dataset, since the clusters can be identified by their distances to two particular points (the centers of the uniform and Gaussian distributions). This is not the case for the "Concentric circles" dataset, where the estimates built from the hierarchical and spectral clustering algorithms achieve the best estimated L1-error.
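The comparison above can be sketched as follows, assuming synthetic "Circle-Square" covariates with a = 4 and scikit-learn's implementations as stand-ins for the three algorithms; err computes the empirical clustering error err_n up to label permutation:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans, SpectralClustering

rng = np.random.default_rng(2)

# Synthetic "Circle-Square" covariates: one group Gaussian around (a, 0),
# the other uniform on the square [-1, 1]^2.
n, a = 250, 4.0
labels = rng.integers(0, 2, size=n)
gaussian = rng.normal(0.0, 1.0, (n, 2)) + np.array([a, 0.0])
square = rng.uniform(-1.0, 1.0, (n, 2))
X = np.where(labels[:, None] == 0, gaussian, square)

def err(pred, truth):
    # Empirical clustering error up to label permutation (err_n of Table 2).
    mis = np.mean(pred != truth)
    return min(mis, 1.0 - mis)

algos = {
    "hier": AgglomerativeClustering(n_clusters=2, linkage="single"),
    "spect": SpectralClustering(n_clusters=2, affinity="rbf", random_state=0),
    "km": KMeans(n_clusters=2, n_init=10, random_state=0),
}
errors = {name: err(algo.fit_predict(X), labels) for name, algo in algos.items()}
print(errors)
```

On this dataset k-means is expected to do well, since the two groups are identified by their distances to two centers; on concentric-circle data, distance-to-center clustering fails and the single-linkage and spectral methods take over.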

Context of the study
ERDF is the contract-holder of the public electricity distribution network in France and is in charge of operating, maintaining and developing the network. With 36,000 employees and 35 million customers served over 34,220 communes, ERDF is the largest electricity distributor in Europe. It operates more than 1.3 million km of power lines and runs more than 11 million operations per year. ERDF also plays an essential role in ensuring the proper functioning of the competitive electricity market, by providing an electricity supply whose quality is among the best in Europe and by serving all network users without discriminatory practices.
In recent years, the electricity sector has entered a period of profound change resulting from the emergence of decentralized and intermittent (wind, solar) means of production and of new electricity uses (e.g. the electric vehicle). The increasing integration of these new means of production and new uses has a major impact on ERDF's core business: connecting new users (producers, electric-vehicle charging terminals) and adapting the rules of network operation, planning and investment to meet the new specifications. ERDF has initiated its digital transformation plan so as to take advantage of new information technologies and, by meeting these new challenges, to offer a better public service.
ERDF launched "smart grid" experimental programs in order to run the network with more flexibility and efficiency. To do so, these programs use detailed network status data and mine information produced by different users. These more detailed data (including data from a new generation of electricity meters, called smart meters) will accordingly be used to improve network monitoring (predictive maintenance).
In this section we focus on the detection of customers who experience a significant decrease in consumption over a given period of time, i.e., a period when an overall malfunction of the network could be observed. This will make it possible to better understand the origin of dysfunctions and thus to better forecast network operation. For this study, we have the benefit of a set of consumption curves for 226 customers, with observations taken at regularly spaced instants. Based on the individual consumption curves, we can cluster the individuals into two groups (those who have suffered an abnormal decline and the others) and estimate, in each group, the distributions of several variables using the approach proposed in this paper.

Application of the two-step estimator
The consumption curves of n = 226 of ERDF's customers are observed at 9 regularly spaced instants t1, . . ., t9. The time interval [t1, t9] covers a known period of disruption between times t4 and t6. The observations consist of n vectors Zk = (Zk1, . . ., Zk9) ∈ R⁹, where Zkj stands for the consumption of user k at time tj.
Since ERDF is interested in comparing the behavior of the customers of both sub-populations (those who have suffered from the disruption and the others) before and after the disruption period, we consider 6 different variables Y(j) related to the consumption around the disruption period. These variables, presented below, are observed for each customer and thus are defined for any k ∈ {1, . . ., n}.
1. Average consumptions before, during and after the disruption period. 2. Evolutions of consumption around the disruption period. Let I be the random variable taking value 1 if a customer has been affected by the disruption and 2 otherwise. If we denote by f(j)1 and f(j)2 the conditional densities of L(Y(j) | I = 1) and L(Y(j) | I = 2), the problem is to compare f(j)1 and f(j)2 for each j ∈ {1, . . ., 6}. Even if ERDF can measure consumptions during the disruption period (between t4 and t6), it does not have the capacity to identify the consumers affected by the perturbation; that is, the random variables Ik, k = 1, . . ., n, are not observed. However, we know that the users impacted by the disruption posted a decline in consumption between t4 and t6. Figure 9 provides examples of customers potentially affected by the disruption (for confidentiality reasons, the representations are anonymous and the power scales are not specified).

Figure 9: Consumptions of users suspected to be affected (top) or not (bottom) by the perturbation.
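The display equations defining the six variables do not survive in this version of the text. The sketch below encodes one plausible reading, averages over the windows t1–t3, t4–t6, t7–t9 and ratios across windows; this is an assumption rather than the paper's exact definition, and the data are a synthetic stand-in for the confidential ERDF curves:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the (confidential) ERDF curves: Z[k, j] is the
# consumption of user k at time t_{j+1}; the disruption lies between t4 and t6.
n = 226
Z = rng.uniform(1.0, 2.0, (n, 9))

# One plausible reading of the six variables Y^(j) (assumption: the exact
# displayed definitions are lost in this version of the text).
Y_before = Z[:, 0:3].mean(axis=1)    # average consumption before (t1..t3)
Y_during = Z[:, 3:6].mean(axis=1)    # average consumption during (t4..t6)
Y_after = Z[:, 6:9].mean(axis=1)     # average consumption after (t7..t9)
Y_evol_in = Y_during / Y_before      # evolution entering the disruption
Y_evol_out = Y_after / Y_during      # evolution leaving the disruption
Y_evol_all = Y_after / Y_before      # evolution across the whole period
print(Y_before.shape, Y_evol_all.shape)
```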
Using the approach developed in this paper, we first have to identify the users impacted by the disruption with a clustering algorithm. Since the disruption influences the consumption of user k between t4 and t6, we define Xk = (Xk1, Xk2), k = 1, . . ., n, in terms of the quantities vk,ij, where vk,ij measures the relative variation of consumption of user k between ti and tj. It follows that Xk = (Xk1, Xk2) captures the development of consumption of user k during the disruption period. We use these covariates to cluster the users into two groups: the first contains the consumers assumed to be affected by the disruption, the second contains the others.
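A sketch of the covariate construction and clustering step, assuming the relative variation v_{k,ij} = (Z_kj − Z_ki)/Z_ki and illustrative index pairs around the disruption window; both are assumptions, the displayed definitions being lost, and the data are again synthetic:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(4)
n = 226
Z = rng.uniform(1.0, 2.0, (n, 9))    # synthetic stand-in for the ERDF curves

def v(Z, i, j):
    # Relative variation of consumption between t_i and t_j (1-based indices).
    # The formula is an assumption: the displayed definition is lost.
    return (Z[:, j - 1] - Z[:, i - 1]) / Z[:, i - 1]

# Covariates capturing the development of consumption around the disruption
# period t4..t6 (the index pairs (3, 5) and (5, 7) are illustrative choices).
X = np.column_stack([v(Z, 3, 5), v(Z, 5, 7)])

# Cluster the users into two groups; single linkage mirrors Section 3.2.
groups = AgglomerativeClustering(n_clusters=2, linkage="single").fit_predict(X)
print(np.bincount(groups))
```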
Two clustering algorithms have been tested: the hierarchical method studied in Section 3.2 and the k-means algorithm. Since these methods lead to approximately the same clusters, we only present the results for the hierarchical method. Figures 10 and 11 present the kernel density estimates (2.4) of the conditional densities f(j)1 and f(j)2 for j ∈ {1, . . ., 6}. The parameters (bandwidth and kernel) of the kernel estimates are chosen as in the simulation part. For confidentiality reasons, the power scales are again not specified.
Figure 10 strongly supports the idea that the clustering procedure correctly identifies the users impacted by the disruption. Indeed, we observe that the average consumption during the disruption period is lower for the consumers in the first group (second graph in Figure 10). We also observe that the average consumptions are quite similar for the two groups before and after the disruption period, which means that the users impacted by the perturbation do not over-consume after the disruption period. This conclusion is also supported by the second graph in Figure 11: the distributions representing the evolution of consumption are similar for the two clusters.

Conclusion
This paper provides a new framework to estimate conditional densities in mixture models in the presence of covariates. To our knowledge, no clear probabilistic model had previously been proposed to take into account the presence of covariates. The model we consider includes such covariates, and Theorem 2.1 precisely describes the interest of a preliminary clustering step on these covariates for estimating the components of the mixture model. It is shown that the performance of these estimates depends on the maximal misclassification error (2.3) of the clustering algorithm. This criterion is natural for measuring the performance of clustering algorithms but, as far as we know, it had not been addressed before. We obtain non-asymptotic upper bounds on this error term in Section 3.2 for a particular hierarchical algorithm. The algorithm is not new, but it had not been studied in this context. The results obtained for it could be extended to other clustering algorithms based on pairwise distances, such as spectral clustering (Arias-Castro (2011)), or to clustering methods based on neighborhood graphs (Maier et al. (2009)). Even if the main contributions of this work are theoretical, both the simulation study and the application to real data highlight the efficiency of the proposed estimator in the presence of covariates.

Acknowledgement
We would like to thank the editor, an associate editor as well as two anonymous referees for very thoughtful and detailed comments.We are also grateful to Datastorm and ERDF for providing us with the dataset used in the case study.

Proof of Theorem 2.1
We first prove inequality (2.5). It suffices to find an upper bound for the second term on the right-hand side of the previous inequality. Since f̂i = 0 when Ni = 0 and ‖f̂i‖1 = ‖K‖1, such a bound follows from the decomposition below. For the sake of readability, let Ē denote the conditional expectation with respect to (I1, . . ., In) and Ẽ the conditional expectation with respect to (I1, . . ., In, X1, . . ., Xn). Using these notations, it is easily seen that (7.1) holds. Moreover, inserting Ni 1{i}(Îk) in the previous expectation, we obtain (7.2) and (7.3). Combining (7.1), (7.2) and (7.3) leads to (7.4). The expectation on the right-hand side of this inequality can be bounded by the sum (7.5). For the first term of this bound we have (7.6), while for the second term, Hölder's inequality yields (7.7). Now, (7.8) can easily be checked, where the last inequality follows from Hengartner and Matzner-Løber (2009). Using Hoeffding's inequality (see Hoeffding (1963)), we obtain the bound (7.9) for the second term in (7.5). From (7.4)-(7.9) the announced inequality follows, and putting all of the pieces together concludes the first part of the proof. Inequality (2.6) is proved along the same lines.
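Since the displayed bounds are lost in this version, the Hoeffding step can at least be recalled explicitly; the following is the standard inequality being invoked, under the assumption that Ni counts the observations assigned to cluster i and that the summands are independent:

```latex
% Hoeffding's inequality for N_i = \sum_{k=1}^{n} \mathbf{1}\{\hat I_k = i\},
% a sum of n independent random variables with values in [0, 1]
% (assumption: the indicators are independent, as in the model of Section 2):
\mathbb{P}\bigl( \lvert N_i - \mathbb{E}[N_i] \rvert \ge t \bigr)
  \;\le\; 2 \exp\!\left( -\frac{2t^2}{n} \right), \qquad t > 0 .
```

The precise form used for (7.9) depends on the elided display; the statement above is the generic concentration bound it relies on.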

Proof of Proposition 3.1
Let k be an arbitrary integer in {1, . . ., n}. We have to bound P(Îk ≠ i | Ik = i) for i = 1, 2. To do so, we first consider the case i = 2: since, by definition, Îk = 2 ⇐⇒ Xk < 1, we obtain (7.10). Next, if i = 1, it is easy to see that P(Îk = 1 | Ik = 1) = P(Xk ≥ 1 − λn | Ik = 1). With suitable notations, this leads to the inequality (7.11). Since Xℓ and Ik are independent for k ≠ ℓ, we obtain a bound for the last probability; the independence of the Xℓ's and simple calculations then lead to (7.12), where the last inequality follows, for n ≥ 2, from the fact that 1 − u ≤ e^(−u) for all u ≥ 0. Taking equations (7.11) and (7.12) together, we finally obtain (7.13). The proposition follows from equations (7.10) and (7.13).

Proof of Theorem 3.1
Since δn > 2rn, the separation property holds for all (i, j) ∈ {1, . . ., M}² with i ≠ j; then M_rn = M and the two affinity matrices defined in (3.3) induce the same clusters X1(rn), . . ., XM(rn). Furthermore, if (7.15) is verified, it is easily seen that for all i ∈ {1, . . ., M} there exists j ∈ {1, . . ., M} such that the clusters match. For simplicity, when (7.15) is satisfied, we index the clusters X1(rn), . . ., XM(rn) accordingly. According to Assumption 2 and inequality (3.6), we obtain the bound (7.17). Since c2τ ≥ 1 + a, we have (7.18). For the second term on the right-hand side of (7.17), we have (7.19). Taking (7.16), (7.18) and (7.19) together, the result follows.

Figure 2 :
Figure 2: Connected components of level sets for a mixture of Laplace distributions.

Figure 3 :
Figure 3: Density of Y for various values of ∆.

Figure 4 :
Figure 4: Boxplots of the L1-error for the EM estimate f1^em (EM), the oracle estimate f̃1 (OR) and the two-step estimate f̂1 (TS) for the Laplace example. The separation distance ∆ between f1 and f2 varies from 0.1 (left) to 2 (right) and εn = 5.5.

Figure 8 :
Figure 8: Boxplots of the L1-error for the oracle estimate f̃1 (OR) and the two-step estimator f̂1 using the hierarchical algorithm (Hier), the spectral clustering algorithm (Spect) and the k-means algorithm (KM). Results are for the "Circle-Square" dataset with a = 4 and n = 500 (left) and the "Concentric circles" dataset with r2 = 0.75 and n = 500 (right).

Figure 11 :
Figure 11: Kernel estimates f(j)1 (solid lines) and f(j)2 (dashed lines) for j = 4 (left), 5 (center) and 6 (right).

For S ⊂ R^d and r > 0, recall that S + r = {x ∈ R^d : ∃ y ∈ S such that ‖x − y‖2 ≤ r}. Inclusion (7.14) implies M_rn ≥ M. Moreover, observe that if

rn ∈ R_M = {r > 0 : M_r ≤ M},   (7.15)

then

P(Îk ≠ Ik) ≤ P({Îk ≠ Ik} ∩ {rn ∈ R_M}) + P(rn ∉ R_M)
≤ P({Îk ≠ Ik} ∩ {rn ∈ R_M} ∩ {Xk ∈ (Sn + rn)}) + P(Xk ∉ (Sn + rn)) + P(rn ∉ R_M)
≤ Σ_{i=1}^{M} P(Xk ∉ (S_{i,n} + rn) | Ik = i) P(Ik = i) + ψn + P(rn ∉ R_M)
≤ 2ψn + P(rn ∉ R_M),   (7.16)

since P(Xk ∉ (Sn + rn)) ≤ ψn. To complete the proof, we have to find an upper bound for the probability of the event {rn ∉ R_M}. Let κn = {k ∈ {1, . . ., M} : Xk ∈ Sn}. For the first term on the right-hand side of the above equation, remark that the inclusion Sn ⊆ ∪_{k∈κn} B(Xk, rn) holds as soon as, for all ℓ ∈ {1, . . ., N}, the ball Bℓ defined in Assumption 2 contains at least one observation among {Xk, k ∈ κn}. Thus

P(Sn ⊄ ∪_{k∈κn} B(Xk, rn)) ≤ P(∃ ℓ ∈ {1, . . ., N}, ∀ k ∈ κn, Xk ∉ Bℓ) ≤ Σ_{ℓ=1}^{N} P(∀ k ∈ κn, Xk ∉ Bℓ),

and, using {Xk ∉ Bℓ} ⊆ ({Xk ∈ Sn} ∩ {Xk ∉ Bℓ}) ∪ {Xk ∉ Sn}, each term is bounded through P({Xk ∈ Sn} ∩ {Xk ∉ Bℓ}).