Adaptive Density Estimation in the Pile-up Model Involving Measurement Errors

Motivated by fluorescence lifetime measurements, this paper considers the problem of nonparametric density estimation in the pile-up model. Adaptive nonparametric estimators are proposed for the pile-up model in its simple form as well as in the case of additional measurement errors. Furthermore, oracle-type risk bounds for the mean integrated squared error (MISE) are provided. Finally, the estimation methods are assessed by a simulation study and by an application to real fluorescence lifetime data.


Introduction
This paper is concerned with nonparametric density estimation in a specific inverse problem. Observations are not directly available from the target distribution, but suffer from both measurement errors and the so-called pile-up effect. The pile-up effect refers to a form of right-censoring, since an observation is defined as the minimum of a random number of i.i.d. variables from the target distribution. The pile-up distribution is thus the result of a nonlinear distortion of the target distribution. In our setting we also take into account measurement errors, that is, the pile-up effect applies to the convolution of the target density and a known error distribution. The aim is to estimate the target density in spite of the pile-up effect and additive noise.
The pile-up model is encountered in time-resolved fluorescence when lifetime measurements are obtained by the technique called Time-Correlated Single-Photon Counting (TCSPC) (O'Connor and Phillips, 1984). The fluorescence lifetime is the duration that a molecule stays in the excited state before emitting a fluorescence photon (Lakowicz, 1999; Valeur, 2002). The distribution of the fluorescence lifetimes associated with a sample of molecules provides precious information on the underlying molecular processes. Lifetimes are used in various applications, e.g. to determine the speed of rotating molecules or to measure molecular distances. This means that the knowledge of the lifetime distribution is required to obtain information on physical and chemical processes.
In the TCSPC technique, a short laser pulse excites a random number of molecules, but for technical reasons only the arrival time of the very first fluorescence photon striking the detector can be measured, while the arrival times of the other photons are unobservable. The arrival time of a photon is the sum of the fluorescence lifetime and some noise, which is a random delay due to the measuring instrument, such as the time of flight of the photon in the photomultiplier tube. Hence, TCSPC observations can be described by a pile-up model with measurement errors. The goal is to recover the distribution of the lifetimes of all fluorescence photons from the piled-up observations.
Until recently, TCSPC was operated in a mode where the pile-up effect is negligible. However, a shortcoming of this mode is that the acquisition time is very long. Recent studies have made clear that, from an information viewpoint, it is a better strategy to operate TCSPC in a mode with considerable pile-up effect (Rebafka et al., 2010, 2011). Consequently, an estimation procedure is required that takes the pile-up effect into account. The concern of this paper is to provide such a nonparametric estimator of the target density and, furthermore, to include measurement errors in the model in order to deal with real fluorescence data. Therefore, we develop adequate deconvolution strategies for the correction in the pile-up model and test those methods on simulated data as well as on real fluorescence data.
It is noteworthy that the pile-up model is connected to survival analysis, since it can be considered as a special case of the nonlinear transformation model (Tsodikov, 2003). Indeed, it is straightforward to extend the methods proposed in this paper to that more general case. Moreover, the model can also be viewed as a biased data problem with known bias (Brunel et al., 2005). As a consequence, the first part of the study is rather classical. Nonetheless, the consideration of measurement errors in the second part is new and fruitful. Indeed, we show that deconvolution methods can be used to complete the study in the spirit of Comte et al. (2006). These techniques are of unusual use in both survival analysis and pile-up model studies. Numerical results confirm the adequacy of these methods in practice.
In Section 2 a nonparametric estimation strategy for the pile-up model (without measurement errors) is presented to recover the target density. More precisely, a projection estimator is developed based on finite-dimensional functional spaces, and a tool is proposed to automatically select the model dimension achieving the best possible rate of convergence. In Section 3 additional measurement errors are taken into consideration, leading to an estimator based on Fourier deconvolution methods. The rates obtained in this framework depend on the smoothness of the error density and on the choice of a cut-off parameter. Furthermore, a cut-off selection strategy is proposed to achieve an adequate bias-variance trade-off. In Section 4 the performance of the methods is assessed via simulations and by an application to a dataset of fluorescence lifetime measurements. All proofs are relegated to Section 5.

The pile-up model
Let {Y_k, k ≥ 1} be a sequence of independent positive random variables with target probability density function (pdf) f_Y and cumulative distribution function (cdf) F. Moreover, let N be a random variable taking its values in N* = {1, 2, ...} independently of this sequence. Then an observation of the pile-up model is distributed as the random variable Z taking values in R_+ defined by Z = min{Y_1, ..., Y_N}. In Rebafka et al. (2010) it is shown that the cdf G of Z, referred to as the pile-up distribution function, is given by

G(z) = 1 − M(1 − F(z)),   (1)

where M is the probability generating function associated with N, defined as M(u) = E[u^N] for all u ∈ [0, 1]. If F admits the density f_Y, the pile-up density g is given by

g(z) = f_Y(z) Ṁ(1 − F(z)),   (2)

where Ṁ denotes the derivative of M. Note that the generating function M : [0, 1] → [0, 1] is bijective for any distribution of N, and we denote its inverse function by M⁻¹. If E[N²] < ∞ and P(N = 1) ≠ 0, then the functions Ṁ and M̈ are bounded by some constants 0 < a < b < +∞ satisfying

a ≤ Ṁ(u) ≤ b and M̈(u) ≤ b for all u ∈ [0, 1].   (3)

Remark 2.1 In the more general nonlinear transformation model the function M : [0, 1] → [0, 1] is not necessarily a probability generating function, but any function M such that G given by (1) is a cdf (Tsodikov, 2003). That is, G is still the result of a distortion of the target distribution F, but the interpretation as a minimum is no longer valid. Those models are studied in survival analysis. The estimators proposed in this paper for the pile-up model are also applicable to nonlinear transformation models.
Main example. In the fluorescence application it is assumed that the number N of photons per excitation cycle follows a Poisson distribution with known parameter µ.
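As a concrete illustration of the model, the following sketch simulates pile-up observations. Conditioning the Poisson variable on N ≥ 1 (so that N takes values in N*) and the function names are assumptions of this example, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pileup(n, mu, sample_y):
    """Draw n pile-up observations Z = min{Y_1, ..., Y_N}.

    N is modeled here as Poisson(mu) conditioned on N >= 1, an assumption
    consistent with N taking values in {1, 2, ...} in the fluorescence example.
    """
    z = np.empty(n)
    for i in range(n):
        N = 0
        while N == 0:            # rejection step: condition on N >= 1
            N = rng.poisson(mu)
        z[i] = sample_y(N).min()
    return z

# Example: exponential lifetimes with mean 2.0 and a strong pile-up effect.
z = sample_pileup(5000, mu=2.0, sample_y=lambda k: rng.exponential(2.0, size=k))
```

Piling up pushes mass toward zero, so the empirical mean of Z falls below the mean of Y; this distortion is exactly what the weighting of Section 2 corrects.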

Estimator of the target density in the pile-up model
The goal is to estimate the target density f_Y from i.i.d. observations Z_1, ..., Z_n of the pile-up distribution G. We propose a nonparametric estimator by searching in a collection of functions the one that best fits the data or, in other words, the orthogonal projection of f_Y onto the function space. If S is an adequate subspace of L², the orthogonal projection of f_Y on S in the L²-sense is the minimizer of ||f_Y − h||² for h in S, or equivalently, the minimizer of ||h||² − 2E[h(Y)]. To evaluate this criterion, we need an approximation of the moments E[h(Y)] based on pile-up observations. We note that inverting relation (1) yields F(z) = 1 − M⁻¹(1 − G(z)).
This allows us to relate moments of the target distribution F with moments of the pile-up distribution G. More precisely, for any bounded function h the following equality holds

E[h(Y)] = E[w(G(Z)) h(Z)],  where  w(u) = 1/Ṁ(M⁻¹(1 − u)).   (4)

To construct an estimator of the moment E[h(Y)] based on pile-up observations, relation (4) suggests to replace the distribution function G by its empirical version Ĝ_n(z) = (1/n) Σ_{i=1}^n 1{Z_i ≤ z}. Then an estimator of E[h(Y)] is given by

(1/n) Σ_{i=1}^n w(i/n) h(Z_(i)),   (5)

as w(Ĝ_n(Z_(i))) = w(i/n), and where Z_(i) denotes the i-th order statistic associated with (Z_1, ..., Z_n), satisfying Z_(1) ≤ ... ≤ Z_(n). In the literature such weighted sums of order statistics are known as L-statistics.
The approximation of moments E[h(Y)] by an L-statistic is the key property used in the nonparametric estimation strategy that is proposed in the following. In the pile-up model the weights w(i/n) can be viewed as "corrections" of the observations Z_i, as they do not follow the target distribution F but the pile-up distribution G. The weights are bounded, because inequality (3) ensures that there exist constants 0 < w_0 ≤ w_1 < ∞ such that

w_0 ≤ w(u) ≤ w_1 for all u ∈ [0, 1].   (6)

The computation of the estimator in (5) requires the knowledge of the weight function w, which is entirely determined by the distribution of N. Hence, in the example above on the Poisson distribution, w writes

w(u) = (e^µ − 1) / (µ[e^µ − u(e^µ − 1)]),   (7)

with corresponding constants w_0 = (1 − e^{−µ})/µ and w_1 = (e^µ − 1)/µ.
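A minimal sketch of the L-statistic moment estimator (5) in the Poisson case. The closed form of the weight below is an assumption chosen to be consistent with the boundary values w_0 = (1 − e^{−µ})/µ and w_1 = (e^µ − 1)/µ stated above; `w_poisson` and `moment_lstat` are hypothetical names.

```python
import numpy as np

def w_poisson(u, mu):
    """Assumed weight w(u) for N ~ Poisson(mu) conditioned on N >= 1;
    it satisfies w(0) = (1 - e^{-mu})/mu and w(1) = (e^mu - 1)/mu."""
    emu = np.exp(mu)
    return (emu - 1.0) / (mu * (emu - u * (emu - 1.0)))

def moment_lstat(h, z, mu):
    """L-statistic (1/n) sum_i w(i/n) h(Z_(i)) estimating E[h(Y)]
    from pile-up observations Z_1, ..., Z_n."""
    z_sorted = np.sort(np.asarray(z))
    n = z_sorted.size
    weights = w_poisson(np.arange(1, n + 1) / n, mu)
    return float(np.mean(weights * h(z_sorted)))
```

Note that (1/n) Σ_i w(i/n) ≈ ∫_0^1 w(u) du = 1, so taking h ≡ 1 returns a value close to one, as it should for a probability mass.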
A standard estimation approach of the target density f_Y consists in approximating the orthogonal projection of f_Y onto some function space. More precisely, we suppose that the restriction of f_Y to some interval A is square integrable, i.e. f_Y 1_A ∈ L²(A). For a given orthonormal sequence (φ_λ)_{λ∈Λ_m} in L²(A) define the subspace S_m = Span(φ_λ, λ ∈ Λ_m). The cardinality of Λ_m (which is also the dimension of S_m) is denoted by D_m and supposed to be finite.
By using the moment estimator proposed in (5), an approximation of the projection of f_Y onto S_m can be defined as the minimizer over h ∈ S_m of the contrast γ_n(h) = ||h||² − (2/n) Σ_{i=1}^n w(i/n) h(Z_(i)). Note that the explicit formula of the estimate is given by

f̂_m = Σ_{λ∈Λ_m} â_λ φ_λ  with  â_λ = (1/n) Σ_{i=1}^n w(i/n) φ_λ(Z_(i)).   (8)

For this estimator the following risk bound is shown in Section 5.
Assume moreover that there exists Φ_0 > 0 such that, for all m ∈ M_n,

sup_{t∈S_m, ||t||=1} ||t||²_∞ ≤ Φ_0² D_m.   (10)

Then

E||f_Y 1_A − f̂_m||² ≤ ||f_Y 1_A − f_m||² + C D_m/n,   (11)

where C depends on Φ_0, w_1 and the Lipschitz constant c_w of w.
Remark 2.2 It follows from equation (3) that the Lipschitz constant c_w verifies c_w ≤ b/a³. In the Poisson example where w is given by (7) we have c_w = (e^µ − 1)²/µ.
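The coefficients (8) with the trigonometric basis [T] can be computed as sketched below, assuming the interval [0, A] covers the observations; the basis layout and the zero-truncated Poisson weight are illustrative assumptions.

```python
import numpy as np

def w_poisson(u, mu):
    # assumed weight for Poisson(mu) conditioned on N >= 1; matches the
    # constants w(0) = (1 - e^{-mu})/mu and w(1) = (e^mu - 1)/mu of the text
    emu = np.exp(mu)
    return (emu - 1.0) / (mu * (emu - u * (emu - 1.0)))

def trig_basis(x, A, D):
    """First D functions of the trigonometric basis [T] on [0, A]."""
    funcs = [np.full_like(x, 1.0 / np.sqrt(A))]
    j = 1
    while len(funcs) < D:
        funcs.append(np.sqrt(2.0 / A) * np.cos(2.0 * np.pi * j * x / A))
        if len(funcs) < D:
            funcs.append(np.sqrt(2.0 / A) * np.sin(2.0 * np.pi * j * x / A))
        j += 1
    return np.vstack(funcs)            # shape (D, x.size)

def projection_estimator(z, mu, A, D):
    """Coefficients a_hat[l] = (1/n) sum_i w(i/n) phi_l(Z_(i)) as in (8),
    and the resulting density estimate evaluated on a grid."""
    z_sorted = np.sort(np.asarray(z))
    n = z_sorted.size
    wts = w_poisson(np.arange(1, n + 1) / n, mu)
    a_hat = trig_basis(z_sorted, A, D) @ wts / n
    grid = np.linspace(0.0, A, 400)
    return a_hat, grid, a_hat @ trig_basis(grid, A, D)
```

Since the cosine and sine functions integrate to zero over [0, A], the total mass of the estimate equals â_0 √A ≈ (1/n) Σ_i w(i/n) ≈ 1.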

Examples of model collections
Our goal is the estimation of f_Y in a nonparametric setting without knowledge of the best approximation space. Instead of a single space S_m, we rather consider a collection {S_m, m ∈ M_n} of models, and we thus have to face the problem of model selection. Before presenting a selection procedure for the model m, we give some illustrating examples of model collections S_m and discuss some general conditions on the approximation spaces under which our estimation approach performs well.
We now give the key properties that a general model collection {S m , m ∈ M n } must fulfill to fit into our framework.
Let (φ_λ)_{λ∈Λ_m} be an orthonormal basis of S_m, where |Λ_m| = D_m. It follows from Birgé and Massart (1997) that Property (12) in the context of (H_1) is equivalent to (10) for all m ∈ M_n. This condition is easily checked for collection [T] with Φ_0 = 1. For collection [DP], see the detailed description in Birgé and Massart (1997), Section 2.2, showing that condition (10) holds with Φ_0² = r + 1. It is known that (10) is also satisfied for wavelet bases [W].
Additionally, for results concerning adaptive estimators the following assumption is required.
This condition ensures that D_m ≤ N_n for all m ∈ M_n.
Another key property of those spaces lies in the bias evaluation. Indeed, if we assume that f_Y 1_A belongs to a Besov ball of regularity α, then ||f_Y 1_A − f_m|| = O(D_m^{−α}) (see Barron et al., 1999, Lemma 12). Thus, choosing D_{m*} = O(n^{1/(2α+1)}) in Inequality (11) yields that the mean square risk satisfies E||f_Y 1_A − f̂_{m*}||² = O(n^{−2α/(2α+1)}). This rate is known to be optimal in the minimax sense for density estimation from direct observations (Donoho et al., 1996).

Adaptive estimator
From the risk bound (11) it is clear that a bias-variance trade-off must be achieved. The idea consists in searching the model m that minimizes the risk bound (11). As the bias term is unknown, it is replaced by an empirical counterpart, leading to the selection rule (13), where the penalty term pen(m) is of the same order as the variance, i.e. C D_m/n. Using this approach the following result can be shown.
Theorem 2.1 Let m̂ be defined by (13) with penalty

pen(m) = κ D_m/n.   (14)

Then there exists a numerical constant κ such that

E||f_Y 1_A − f̂_m̂||² ≤ C inf_{m∈M_n} (||f_Y 1_A − f_m||² + pen(m)) + K ln²(n)/n,   (15)

where C is a numerical constant and K depends on c_w, ||f_Y||_∞ and the basis.
Risk bounds of the form (15) are often called oracle inequalities. Note that the last term of order ln²(n)/n is clearly negligible with respect to the order of the infimum (in particular, in all Besov cases described above). In practice, the numerical constant κ is calibrated by simulation experiments based on a few samples. The selection of m̂ in (13) is numerically easy, since the values of γ_n(f̂_m) are given by γ_n(f̂_m) = −Σ_{λ∈Λ_m} â²_λ, with â_λ defined in (8). The proof of the theorem relies on Talagrand's inequality and follows the line of the proof of Theorem 4.2 in Brunel and Comte (2005). Therefore, only a sketch of the proof is provided in Section 5.
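The selection rule (13) can be sketched as follows. The penalty is simplified here to pen(m) = κ D/n; the paper's penalty (14) may involve further constants depending on w, so κ = 0.5 (the calibrated value reported in the simulation section) and all function names are assumptions.

```python
import numpy as np

def w_poisson(u, mu):
    # assumed weight for Poisson(mu) conditioned on N >= 1 (see text)
    emu = np.exp(mu)
    return (emu - 1.0) / (mu * (emu - u * (emu - 1.0)))

def trig_basis(x, A, D):
    funcs = [np.full_like(x, 1.0 / np.sqrt(A))]
    j = 1
    while len(funcs) < D:
        funcs.append(np.sqrt(2.0 / A) * np.cos(2.0 * np.pi * j * x / A))
        if len(funcs) < D:
            funcs.append(np.sqrt(2.0 / A) * np.sin(2.0 * np.pi * j * x / A))
        j += 1
    return np.vstack(funcs)

def select_dimension(z, mu, A, D_max, kappa=0.5):
    """Minimize gamma_n(f_hat_m) + pen(m) over D = 1..D_max, with
    gamma_n(f_hat_m) = -sum of squared coefficients and pen = kappa*D/n.
    Nested models: all coefficients are computed once, then reused."""
    z_sorted = np.sort(np.asarray(z))
    n = z_sorted.size
    wts = w_poisson(np.arange(1, n + 1) / n, mu)
    a_hat = trig_basis(z_sorted, A, D_max) @ wts / n
    crit = [-np.sum(a_hat[:D] ** 2) + kappa * D / n
            for D in range(1, D_max + 1)]
    return int(np.argmin(crit)) + 1, a_hat
```

Because the models are nested, increasing the dimension only subtracts the squares of the newly added coefficients from the contrast, which is what makes the search over all dimensions cheap.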

Pile-up Model with Measurement Errors
In this section we consider the context where the random variables Y_i are affected by additional measurement errors. More precisely, the observations have the form Z = min{Y_1 + η_1, ..., Y_N + η_N}, where the measurement errors η_i are i.i.d., independent of the Y_i, and have known density f_η. Throughout, u*(t) = ∫ e^{itx} u(x) dx denotes the Fourier transform of an integrable function u.

Estimation procedure and risk bound
In the context of piled-up observations with measurement errors, a two-step procedure seems natural: first estimate the density f_X of X = Y + η by the method of Section 2, yielding an estimator f̂_m, and then deconvolve, since obviously

f*_X(u) = f*_Y(u) f*_η(u),   (16)

so that f_Y(x) = (1/2π) ∫ e^{−iux} f*_X(u)/f*_η(u) du, provided that the Fourier transform of f̂_m exists. However, this approach leads to an accumulation of the estimation errors of the two stages. It is known that especially the application of the inverse Fourier transform is particularly unstable. Hence a better solution may be obtained by a direct approach.
To this end we note that in this set-up the "pile-up property" given by (4) holds for the noisy variables X_k = Y_k + η_k, that is, E[h(X)] = E[w(G(Z)) h(Z)] for any bounded function h, where G now denotes the cdf of Z. Combining the resulting estimator of f*_X with Fourier inversion on [−πm, πm], an estimator of the target density f_Y can be defined as

f̂_m(x) = (1/2π) ∫_{−πm}^{πm} e^{−iux} f̂*_X(u)/f*_η(u) du,  with  f̂*_X(u) = (1/n) Σ_{k=1}^n w(k/n) e^{iuZ_(k)}.   (17)

For this estimator, the following risk bound can be shown.
Proposition 3.1 Assume that w satisfies (6) and (9). Let f_{Y,m} denote the function whose Fourier transform is f*_Y 1_{[−πm,πm]}. Then

E||f_Y − f̂_m||² ≤ ||f_Y − f_{Y,m}||² + C Δ_η(m)/n,  with  Δ_η(m) = (1/2π) ∫_{−πm}^{πm} |f*_η(u)|^{−2} du,

and C depends on ∫_0^1 w²(u) du and on the Lipschitz constant c_w of w.
Obviously, the variance depends crucially on the rate of decrease of f*_η to 0 near infinity. For instance, if f_η is the standard normal density, the variance involves ∫_{|u|≤πm} e^{u²} du/n and thus grows exponentially in m, whereas for the Laplace distribution (i.e. f_η(x) = e^{−|x|}/2) we have 1/f*_η(u) = 1 + u² and a variance of order O(m^5/n).
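These decay rates are easy to verify numerically. The sketch below approximates the characteristic functions of the Laplace and standard normal densities by a Riemann sum (for symmetric densities the transform reduces to a cosine integral); grid sizes are illustrative choices.

```python
import numpy as np

# Wide, fine grid: accurate to roughly four digits for these densities.
x = np.linspace(-60.0, 60.0, 600001)
dx = x[1] - x[0]
laplace = 0.5 * np.exp(-np.abs(x))                   # f_eta(x) = e^{-|x|}/2
gauss = np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)   # standard normal

def char_fun(density, u):
    """Riemann-sum approximation of f*_eta(u) = ∫ e^{iux} f_eta(x) dx."""
    return float(np.sum(np.cos(u * x) * density) * dx)

u = 2.5
cf_laplace = char_fun(laplace, u)   # theory: 1/(1 + u^2)
cf_gauss = char_fun(gauss, u)       # theory: e^{-u^2/2}
```

So 1/f*_η(u) grows only polynomially (like 1 + u²) in the Laplace case, but like e^{u²/2} in the Gaussian case, which explains the dramatic difference between the two variance orders.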

Other ways to view the estimator
The estimator f̂_m can also be derived in a different way. Recall that in Subsection 2.2 we defined an estimator by minimizing the contrast γ_n(h), which is an approximation of ||h||² − 2⟨h, f_Y⟩. In the present setting the corresponding contrast is

γ†_n(h) = ||h||² − (1/π) ∫ h*(−u) f̂*_X(u)/f*_η(u) du,

where f*_X is given by (16). Therefore, f̂_m = argmin_{h ∈ S_m} γ†_n(h). Another expression of the estimator is obtained by describing more precisely the functional spaces S_m on which the minimization is performed. To that aim, let us define the sinc function and its translated-dilated versions by

φ(x) = sin(πx)/(πx)  and  φ_{m,j}(x) = √m φ(mx − j),  j ∈ Z,   (19)

where m is an integer that can be taken equal to 2^ℓ. It is well known that {φ_{m,j}}_{j∈Z} is an orthonormal basis of the space of square integrable functions having Fourier transforms with compact support in [−πm, πm] (Meyer, 1990, p. 22). This yields that the estimator f̂_m can be written in the following convenient way:

f̂_m = Σ_{j∈Z} ā_{m,j} φ_{m,j}  with  ā_{m,j} = (1/2π) ∫ φ*_{m,j}(−u) f̂*_X(u)/f*_η(u) du.   (20)

Consequently ||f̂_m||² = Σ_j |ā_{m,j}|². Finally, one can see that Σ_{j∈Z} φ*_{m,j}(u) φ_{m,j}(x) = e^{−ixu} 1_{|u|≤πm}. This is another way to see that (20) and (17) actually define the same estimator.
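A quick numerical check of the orthonormality of the sinc system in (19); note that NumPy's `np.sinc(t)` uses the normalized convention sin(πt)/(πt), matching φ above. Grid sizes are illustrative.

```python
import numpy as np

def phi_mj(x, m, j):
    """Translated-dilated sinc function phi_{m,j}(x) = sqrt(m) phi(m x - j),
    with phi(x) = sin(pi x)/(pi x); np.sinc already includes the factor pi."""
    return np.sqrt(m) * np.sinc(m * x - j)

# Sinc functions decay slowly (like 1/x), so a wide grid is needed and the
# quadrature is only accurate to a few digits.
x = np.linspace(-400.0, 400.0, 1600001)
dx = x[1] - x[0]
m = 4
g0 = phi_mj(x, m, 0)
g1 = phi_mj(x, m, 1)
norm_sq = float(np.sum(g0 * g0) * dx)   # should be close to 1
cross = float(np.sum(g0 * g1) * dx)     # should be close to 0
```

The slow spatial decay is harmless here because all computations with this basis are done in the Fourier domain, where each φ*_{m,j} is compactly supported in [−πm, πm].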
Remark 3.1 An interesting remark follows from equation (20). In the case where no noise has to be taken into account, i.e. f*_η ≡ 1, the coefficients reduce to ā_{m,j} = (1/n) Σ_{k=1}^n w(k/n) φ_{m,j}(Z_(k)). We recognize the coefficients of the estimator given by formula (8) of the setting in Subsection 2.2, when the orthonormal basis (φ_λ)_λ is the sinc basis.

Discussion on the type of noise
To determine the rate of convergence of the MISE, it is necessary to specify the type of the noise distribution. Here two cases are considered. First, the noise distribution can be exponential with density given by f_η(x) = θe^{−θx} 1_{x>0}, for some θ > 0. Then we have f*_η(u) = θ/(θ − iu), so that |1/f*_η(u)|² = 1 + u²/θ². In the fluorescence setting, we found that TCSPC noise distributions can be approximated by densities of the form (21), depending on parameters α, β, ν, τ with constraints α > β, ν < τ, βτ/(αν) ≥ 1. Figure 1 presents a dataset with 259,260 measurements from the noise distribution of a TCSPC instrument (recorded independently from the fluorescence measurements) and the corresponding estimated density of form (21) obtained by least squares fitting. Even though the fit is not perfect, the estimated density captures the main features of the dataset. Thus densities of the form (21) can be considered as a good approximative model of the noise distribution in the fluorescence setting. In the simulation study we will consider a noise distribution of the form (21) with parameters α = 2, β = 1, ν = 1, τ = 2; in this case the characteristic function decreases like |u|^{−2}. From the application viewpoint it is hence interesting to consider the class of noise distributions η whose characteristic functions decrease in the ordinary smooth way of order γ, denoted by η ∼ OS(γ), defined by c(1 + u²)^{−γ/2} ≤ |f*_η(u)| ≤ C(1 + u²)^{−γ/2} for some constants 0 < c ≤ C.

Rates of convergence on Sobolev spaces
In classical deconvolution the regularity spaces used for the functions to estimate are Sobolev spaces, defined by C(a, L) = {f : ∫ |f*(u)|² (1 + u²)^a du ≤ L}. For f_Y ∈ C(a, L) and η ∼ OS(γ), the bias term in the risk bound of Proposition 3.1 is of order m^{−2a} and the variance term of order m^{2γ+1}/n. The optimization of this upper bound provides the optimal choice of m by m_opt = O(n^{1/(2a+2γ+1)}), with resulting rate O(n^{−2a/(2a+2γ+1)}). More formally, one can show the following result.
Proposition 3.2 Assume that the assumptions of Proposition 3.1 are satisfied and that f_Y ∈ C(a, L) and η ∼ OS(γ). Then for m_opt = O(n^{1/(2a+2γ+1)}) we have E||f_Y − f̂_{m_opt}||² = O(n^{−2a/(2a+2γ+1)}). Obviously, in practice the optimal choice m_opt is not feasible, since a is unknown and part of the constants involved in the order are unknown. Therefore, another model selection device is required to choose a relevant f̂_m in the collection.

Model selection
The general method consists in finding a data-driven penalty pen(·) such that the model

m̂ = argmin_{m∈M_n} [γ†_n(f̂_m) + pen(m)]   (23)

achieves a bias-variance trade-off, where M_n has to be specified. In contrast to this general approach, our result involves an additional ln(n)-factor in the penalty compared to the variance order, which implies a loss with respect to the expected rate derived in Section 3.4.
Theorem 3.1 Assume that f_Y is square integrable on R, η ∼ OS(γ) and w satisfies (6) and (9). Consider the estimator f̂_m̂ with model m̂ defined by (23) and penalty pen(m) given by (24), where κ′ and κ′′ are numerical constants. Then

E||f_Y − f̂_m̂||² ≤ C inf_{m∈M_n} (||f_Y − f_{Y,m}||² + pen(m)) + C′/n,   (25)

where C is a numerical constant and C′ depends on c_w and the bounds on w.
As previously, the numerical constants κ′ and κ′′ are calibrated via simulations. In practice, to compute m̂ by (23), we approximate γ†_n(f̂_m) by −Σ_{|j|≤K_n} |ā_{m,j}|², where the sum is truncated at K_n of order n.
In the fluorescence set-up, the noise distribution f_η is generally unknown. However, independent, large samples of the noise distribution are available. Hence one may still use the procedure proposed above by replacing f*_η with the estimate f̂*_η(u) = (1/M) Σ_{k=1}^M e^{iuη_{−k}}, where (η_{−k})_{1≤k≤M} denotes the independent noise sample. In Comte and Lacour (2009) the same substitution is considered for deconvolution methods. It is shown that for ordinary smooth noise this leads to a risk bound exactly analogous to the one given in (25). The main constraint given in Comte and Lacour (2009) is that M ≥ n^{1+ε} for some ε > 0. As the noise samples provided in fluorescence have huge size, this condition is certainly fulfilled in our practical examples. In the following numerical study we consider the estimator with both the exact f*_η and an estimated f̂*_η.
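The plug-in of the empirical characteristic function can be sketched as follows; the sign convention e^{+iux} and the exponential test noise are assumptions of this example.

```python
import numpy as np

rng = np.random.default_rng(4)

def empirical_cf(eta, u):
    """Empirical characteristic function (1/M) sum_k e^{iu eta_k}: the
    plug-in estimate of f*_eta used when the noise distribution is only
    known through an independent sample (eta_k)."""
    u = np.atleast_1d(np.asarray(u, dtype=float))
    return np.exp(1j * np.outer(u, eta)).mean(axis=1)

# Exponential noise with rate theta: f*_eta(u) = theta/(theta - iu),
# hence |f*_eta(u)| = theta / sqrt(theta^2 + u^2).
theta = 2.0
eta = rng.exponential(1.0 / theta, size=200_000)
u = np.array([0.0, 1.0, 3.0])
cf_hat = empirical_cf(eta, u)
```

With M = 200,000 the estimation error is of order 1/√M, i.e. a few thousandths, which illustrates why the huge noise samples available in fluorescence make the substitution essentially free.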

Numerical results for simulated and real data
In this section we first give details on the practical implementation of the estimation methods. Then a simulation study is conducted to test the performance of the methods in different settings. Finally, an application to a sample of fluorescence data shows that the estimation method gives satisfying results on real measurements.

Practical computation of estimators
In the case of no additional noise, we apply the method described in Section 2 with the trigonometric basis [T]. To determine the best model m̂ we compute γ_n(f̂_m) + pen(m) for all m = 1, ..., [n/2] − 1. This is computationally easy, since γ_n(f̂_m) = −Σ_{λ∈Λ_m} â²_λ can be updated recursively: passing from m to m + 1 simply subtracts the squares of the newly added coefficients, which are given by (8). Then m̂ is the value where γ_n(f̂_m) + pen(m) achieves its minimum. Finally, the estimator of f_Y is given by f̂_m̂ = Σ_{λ∈Λ_m̂} â_λ φ_λ.
In the case of additional noise, we use the estimator proposed in Section 3 based on the sinc basis. Its computation is more intensive, as no similar recursive relation holds. First one has to compute the coefficients ā_{m,j} defined in (20). For j ≥ 0 they can be approximated by ă_{m,j} = (−1)^j √m (IFFT(H))_j, where IFFT(H) is the inverse fast Fourier transform of the T-vector H whose t-th entry equals f̂*_X(πm(2t/T − 1))/f*_η(πm(2t/T − 1)). Similarly, for j < 0 the coefficients ā_{m,j} are approximated by ă_{m,j} = (−1)^j √m (IFFT(H))_j, the index being taken modulo T.
The integral Δ_η(m) appearing in the penalty term pen(m) defined in (24) is explicitly known if f_η is known (see Section 3.3). In the case when we only have an estimator f̂_η, Δ_η(m) can be approximated by a Riemann sum of the form (m/S) Σ_s |f̂*_η(u_s)|^{−2} over a grid (u_s) of S points in [−πm, πm]. Then the best model m̂ is selected as the point of minimum of the criterion given in (23). Finally, we obtain the estimator f̂_m̂ = Σ_{j=−T}^{T} ă_{m̂,j} φ_{m̂,j} with the sinc functions φ_{m,j} defined in (19).

Simulation study
When no noise is added, we applied the method described in Section 2 with the simple trigonometric basis. The numerical constant κ of the penalty (14) is set to 0.5, resulting from a previous calibration by simulation. The Poisson parameter varies from 0.01 over 0.5 to 2. The mean MISE over 25 paths is computed on the intervals of representation. From Figure 2 one can see that the results are rather good, in spite of small side effects which would be avoided with piecewise polynomial bases.
From this point of view all representations in Figure 2 are cut on the right.We see that the estimator performs well for a large range of values of the Poisson parameter.
The first row corresponds to data where the pile-up effect is negligible, as the Poisson parameter is equal to 0.01, and hence serves as a benchmark. Here estimation errors are mainly due to the choice of a trigonometric basis, which easily recovers the Gamma density, while the Weibull density is much harder to approximate in this basis. In the other rows the pile-up effect is considerably increased; however, the accuracy is hardly affected and the estimator remains rather stable. The pile-up effect is hence correctly taken into account in the estimation procedure.
The adaptive estimator described in Section 3 is tested with the numerical constants κ′ = 1 and κ′′ = 0.001 in (24). The value of κ′′ is very small and makes the logarithmic term in general negligible, except when c_w² is large (for instance c_w² ≈ 416 for µ = 2). The results are given in Figure 3. Now the observations are X = Y + η, where η = σε. In the first row, the pile-up effect is almost negligible (µ = 0.01), but σ is rather large. That is, the first row illustrates the performance of the deconvolution step of the estimation procedure. In contrast, for the last row σ is taken to be small, but the pile-up effect is significant (µ = 2), to see how the estimator copes with the pile-up effect. The second row is an intermediate situation, illustrating how the estimator performs when the variance of the noise and the pile-up effect are both non-negligible.
The 25 curves indicate variability bands for the estimation procedure. They show that the estimator is quite stable, especially in the last rows. Moreover, the selected model order m̂ differs from one example to the other. Globally, the dimension m̂ increases when going from example 1 to 4. That means that the estimator adapts to peaks that are more and more difficult to recover.
In Table 1 the MISE of the estimation procedure is analyzed. The table gives the empirical mean and standard deviation of the MISE obtained over 100 simulated datasets. This is done for the same four examples of distributions as above. We compare the error of the estimator using the exact noise distribution to that of the estimator based on an approximation of the noise distribution from an independent noise sample of size 500. Moreover, we study the influence of the noise distribution on the estimator. Therefore, we consider, on the one hand, exponential noise with variances σ² ∈ {0.2, 1}, and on the other hand, density (21) with α = 2, β = 1, ν = 1, τ = 2 (multiplied by adequate constants to obtain the same variances σ² as for the exponential distributions).
From Table 1 it is clear that increasing the variance of the noise distribution increases the error. Furthermore, changing the type of the noise does not influence the estimation procedure much. Indeed, the second case (21) is just slightly less favorable than the exponential distribution. This difference is in accordance with Proposition 3.2, which holds with γ = 1 for the exponential and with γ = 2 for the other density. The comparison with the results based on an approximated noise distribution (second lines) reveals that there is rarely a difference between the two methods. Indeed, using an approximation of the noise does not corrupt the results; in some cases we even observe an improvement of the error. We conclude from these simulation results that, in the fluorescence setting, it is justified to use an estimate of the noise instead of the theoretical distribution. Finally, we show in Figure 4 that it is indispensable to take into account both the pile-up correction (which is omitted in (b), where w(i/n) is replaced by i/n) and the deconvolution correction (which is omitted in (c), where the estimation is done with the method of Section 2 and the trigonometric basis).

Application to Fluorescence Measurements
We finally applied the estimation procedure to real fluorescence lifetime measurements obtained by TCSPC. The data analyzed here are graphically presented in Figure 5. The same sample of the noise distribution has already been considered in Figure 1, where it is compared to the parameterized density given by (21). In this setting the true density is known to be an exponential distribution with mean 2.54 nanoseconds, and the Poisson parameter equals 0.166. The knowledge of the true density allows us to evaluate the performance of our estimator. More details on the data and their acquisition can be found in Patting et al. (2007). We applied the estimator from Section 3 with the sinc basis to this dataset. The numerical constants are κ′ = 1 and κ′′ = 0.001. Figure 5 (b) shows the estimation result in comparison to the exponential density with mean 2.54. We observe that the estimated function is quite close to the 'true' one. This indicates that the estimation procedure takes the errors present in the real data adequately into account and that the modeling by the pile-up distortion and additive measurement errors is appropriate.
We conclude that the estimation methods proposed in this paper show satisfactory behavior in various settings and give rather good results on both synthetic and real data. Nevertheless, we observed that the performance depends on the choice of the basis and on the smoothness of the target density. Here only two bases are considered, but others should work as well and may improve the results in certain settings.

Sketch of proof of Theorem 2.1
We can write γ_n(t) − γ_n(s) = ||t − f_Y||² − ||s − f_Y||² − 2ν_n(t − s) − 2R_n(t − s), where ν_n and R_n are defined by (26) and (27). By definition of f̂_m̂ we have, for all m ∈ M_n, γ_n(f̂_m̂) + pen(m̂) ≤ γ_n(f_m) + pen(m). Using this and the fact that 2xy ≤ x²/θ + θy² for all nonnegative x, y, θ, we obtain a bound on E||f̂_m̂ − f_Y 1_A||² involving the supremum of the centered empirical process ν_n. The term E[sup_{t∈S_m+S_m̂, ||t||=1} ν_n(t)² − (pen(m) + pen(m̂))/4]_+ is then bounded by C/n by applying Talagrand's inequality in a standard way (see e.g. Brunel et al., 2005). For the remainder term R_n, we know from Massart (1990) that P(√n ||Ĝ_n − G||_∞ ≥ λ) ≤ 2 exp(−2λ²), which yields a contribution of order ln²(n)/n and ends the proof.

Proof of Proposition 3.1.
We decompose the risk of f̂_m into an approximation error and a stochastic part, the latter splitting into the two terms of (32). The expectation of the first term on the right-hand side of (32) is controlled by using E(||Ĝ_n − G||^{2k}_∞) ≤ c_k/n^k (see e.g. Lemma 6.1, p. 462, in Brunel and Comte (2005), which is a straightforward consequence of Massart (1990)). Here c_k is a numerical constant that depends on k only. The expectation of the second term on the right-hand side of (32) is a variance term of order Δ_η(m)/n. Gathering the terms completes the proof of Proposition 3.1.
We have the following decomposition of the contrast for functions s, t in S_m: γ†_n(t) − γ†_n(s) = ||t − f_Y||² − ||s − f_Y||² − 2ν̃_n(t − s) − 2R̃_n(t − s), where ν̃_n and R̃_n are defined by (34) and (35). We start with decomposition (33). We take t = f̂_m̂ and s = f_{Y,m}. Since γ†_n(f̂_m̂) + pen(m̂) ≤ γ†_n(f_m) + pen(m), we get (36), where B_m = {t ∈ S_m, ||t|| = 1} and B_{m,m′} = {t ∈ S_m + S_{m′}, ||t|| = 1}. Following a classical application of Talagrand's inequality in the deconvolution context for ordinary smooth noise (Comte et al., 2006), we deduce the following Lemma. Then Parseval's formula gives ||t*||² = 2π||t||². Next, we write sup_{t∈B_{m,m̂}} |R̃_n(t)|² ≤ R_1 + R_2 by inserting the indicator functions 1_{Ω_G} and 1_{Ω_G^c}, where Ω_G is defined by (30). The term (||Ĝ_n − G||²_∞ 1_{Ω_G} − ln(n)/n) is nonpositive by definition of Ω_G, which handles the first right-hand-side term of (37). For the second term, Δ(m_n) ≤ n by the definition of m_n, ||Ĝ_n − G||_∞ ≤ 1, and it follows from (31) that P(Ω_G^c) ≤ 2/n². Gathering the bounds gives the result of Lemma 5.2.

Proposition 2.1
Let f_m be the orthogonal projection in the L²-sense of f_Y on S_m. Assume that (6) holds and that w is Lipschitz continuous, i.e. there exists c_w > 0 such that |w(x) − w(y)| ≤ c_w |x − y|. (9)
The term −||f_m||² can be estimated by −||f̂_m||² = γ_n(f̂_m). Consequently, we propose the following model selection device: m̂ = argmin_{m∈M_n} [γ_n(f̂_m) + pen(m)]. (13)

Figure 2: True density and 25 estimated curves without measurement errors. Estimation with the trigonometric basis for different levels of the pile-up effect. Numbers below the figures are the MISE.

Table 1: 100 × mean MISE and standard deviation in parentheses. First lines correspond to the exact noise distribution, second lines give results obtained with the estimated noise distribution.