Semiparametric Minimax Rates

We consider the minimax rate of testing (or estimation) of nonlinear functionals defined on semiparametric models. Existing methods appear incapable of determining a lower bound on the minimax rate of testing (or estimation) for certain functionals of interest, in particular when the semiparametric model is indexed by several infinite-dimensional parameters. To cover these examples we extend the approach of [1], which is based on comparing a "true distribution" to a convex mixture of perturbed distributions, to a comparison of two convex mixtures. The first mixture is obtained by perturbing a first parameter of the model, and the second by perturbing in addition a second parameter. We apply the new result to two examples of semiparametric functionals: the estimation of a mean response when response data are missing at random, and the estimation of an expected conditional covariance functional.


Introduction
Let X_1, X_2, ..., X_n be a random sample from a density p relative to a measure ν on a sample space (X, A). It is known that p belongs to a collection P of densities, and we wish to estimate the value χ(p) of a functional χ: P → R. In this setting the minimax rate of estimation of χ(p) relative to squared error loss can be defined as the square root ε_n of

inf_{T_n} sup_{p ∈ P} E_p (T_n − χ(p))^2,

where the infimum is taken over all estimators T_n = T_n(X_1, ..., X_n). Determination of a minimax rate in a particular problem often consists of proving a "lower bound", showing that the mean square error of no estimator tends to zero faster than some rate ε_n^2, combined with the explicit construction of an estimator with mean square error ε_n^2. The lower bound is often proved by a testing argument, which tries to separate two subsets of the set {P^n : p ∈ P} of possible distributions of the observation (X_1, ..., X_n). Even though testing is a statistically easier problem than estimation under quadratic loss, the corresponding minimax rates are often of the same order. The testing argument can be formulated as follows. If P_n and Q_n are in the convex hulls of the sets {P^n : p ∈ P, χ(p) ≤ 0} and {P^n : p ∈ P, χ(p) ≥ ε_n}, respectively, and there exists no sequence of tests of P_n versus Q_n with both error probabilities tending to zero, then the minimax rate is not faster than a multiple of ε_n. Here the existence of a sequence of tests with errors tending to zero (a perfect sequence of tests) is determined by the asymptotic separation of the sequences P_n and Q_n, and can be described, for instance, in terms of the Hellinger affinity

ρ(P_n, Q_n) = ∫ √(dP_n dQ_n).
If ρ(P_n, Q_n) is bounded away from zero as n → ∞, then no perfect sequence of tests exists (see e.g. Section 14.5 in [2]).
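The role of the affinity in this argument can be illustrated numerically. For product measures the affinity factorizes, ρ(P^n, Q^n) = ρ(P, Q)^n, and the minimal sum of the two testing error probabilities equals 1 − TV(P_n, Q_n), which is bounded below by ρ(P_n, Q_n)^2/2. The sketch below checks both facts for an arbitrary pair of discrete distributions (chosen here only for illustration):

```python
import numpy as np

# Two arbitrary discrete distributions P and Q on a three-point sample space.
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.35, 0.25])

def affinity(p, q):
    # Hellinger affinity rho(P, Q) = sum_x sqrt(p(x) q(x))
    return np.sum(np.sqrt(p * q))

rho = affinity(p, q)

# For n i.i.d. observations the affinity of the product measures factorizes:
# rho(P^n, Q^n) = rho(P, Q)^n.
n = 4
pn = np.einsum('i,j,k,l->ijkl', p, p, p, p).ravel()
qn = np.einsum('i,j,k,l->ijkl', q, q, q, q).ravel()

# The minimal sum of the two testing error probabilities is 1 - TV(P^n, Q^n),
# and 1 - TV >= 1 - sqrt(1 - rho^2) >= rho^2 / 2, so an affinity bounded away
# from zero rules out a perfect sequence of tests.
tv = 0.5 * np.sum(np.abs(pn - qn))
print(affinity(pn, qn), rho ** n, 1 - tv)
```

The inequality 1 − TV ≥ ρ^2/2 used in the comment follows from the classical bound TV ≤ √(1 − ρ^2) together with 1 − √(1 − x) ≥ x/2.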
One difficulty in applying this simple argument is that the relevant (least favorable) two sequences of measures P_n and Q_n need not be product measures, but can be arbitrary convex combinations of product measures. In particular, it appears that for nonlinear functionals at least one of the two sequences must be a true mixture. This complicates the computation of the affinity ρ(P_n, Q_n) considerably. [1] derived an elegant lower bound on the affinity when P_n is a product measure and Q_n a convex mixture of product measures, and used it to determine the testing rate for functionals of the type ∫ f(p) dν, for a given smooth function f: R → R, the function f(x) = x^2 being the crucial example.
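Why mixtures are the harder case can be seen in a toy discrete version of such a construction (the perturbation scheme below is illustrative only, not the one used later in the paper): since the affinity is concave in each argument, replacing a single perturbed product Q_λ^n by the mixture ∫ Q_λ^n dπ(λ) over Rademacher perturbations can only increase the affinity, i.e. make the testing problem harder.

```python
import itertools
import numpy as np

# Toy construction: sample space {0,...,2k-1}; cell j is the pair {2j, 2j+1};
# the base density p is uniform, and q_lambda shifts mass h/(2k) within cell j
# in the direction of the Rademacher coordinate lambda_j. (Illustrative only.)
k, n, h = 3, 4, 0.3
p = np.full(2 * k, 1.0 / (2 * k))

def q_lam(lam):
    q = p.copy()
    for j, s in enumerate(lam):
        q[2 * j] += s * h / (2 * k)
        q[2 * j + 1] -= s * h / (2 * k)
    return q

def product_density(w):
    # Joint density of n i.i.d. draws, enumerated over all (2k)^n outcomes.
    out = w
    for _ in range(n - 1):
        out = np.multiply.outer(out, w).ravel()
    return out

pn = product_density(p)
lams = list(itertools.product([-1, 1], repeat=k))
qbar = np.mean([product_density(q_lam(l)) for l in lams], axis=0)

aff_mix = np.sum(np.sqrt(pn * qbar))            # affinity to the mixture
aff_one = np.sum(np.sqrt(pn * product_density(q_lam(lams[0]))))
print(aff_one, aff_mix)
```

By concavity of the square root, aff_mix is at least the average of the single-λ affinities, which are all equal here by symmetry, so aff_mix ≥ aff_one.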
In this paper we are interested in structured models P that are indexed by several subparameters and where the functional is defined in terms of the subparameters. It appears that testing a product versus a mixture is often not least favorable in this situation, but testing two mixtures is. Thus we extend the bound of [1] to the case that both P_n and Q_n are mixtures. In our examples P_n is equal to a convex mixture obtained by perturbing a first parameter of the model, and Q_n is obtained by perturbing in addition a second parameter. We also refine the bound in other, less essential directions.
The main general results of the paper are given in Section 2. In Section 3 we apply these results to two examples of interest.
Theorem 2.1. For p_λ and q_λ densities of the measures P_λ and Q_λ that are jointly measurable in the parameter λ and the observation, and π a probability measure on Λ, define p̄ = ∫ p_λ dπ(λ) and q̄ = ∫ q_λ dπ(λ). Suppose that … for all j and that B ≤ p_λ ≤ B̄ for positive constants A, B, B̄. Then there exists a constant C that depends only on A, B, B̄ such that, for any product probability measure π, … .

Proof. We use that ∏_{j=1}^k (1 − a_j) ≥ 1 − Σ_{j=1}^k a_j for any nonnegative numbers a_1, ..., a_k. The expected values of the binomial variables N_j can be evaluated explicitly, using the identities, for N a binomial variable with parameters n and p, … . Under the assumption that np(1 ∨ a ∨ b ∨ c) ≲ 1, the right-hand sides of these expressions can be seen to be bounded by multiples of (npb)^2, np and (np)^2 a, respectively. We substitute these bounds in the first display of the proof, and use the equality Σ_j p_j = 1 to complete the proof.

Remark 2.1. If min_j p_j ∼ max_j p_j ∼ 1/n^{1+ε} for some ε > 0, which arises for equiprobable partitions in k ∼ n^{1+ε} sets, then there exists a number n_0 such that P(max_j N_j > n_0) → 0. (Indeed, this probability is bounded by k(n max_j p_j)^{n_0+1}.) Under this slightly stronger assumption the computations need only address N_j ≤ n_0 and hence can be simplified.
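The tail bound of Remark 2.1 can be checked by simulation; the particular values of n, ε and n_0 below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equiprobable partition in k ~ n^(1+eps) sets, as in Remark 2.1.
n, eps, n0 = 200, 0.2, 8
k = int(n ** (1 + eps))
p = np.full(k, 1.0 / k)

# Union bound plus binomial tail: P(max_j N_j > n0) <= k (n max_j p_j)^(n0+1).
bound = k * (n * p.max()) ** (n0 + 1)

# Monte Carlo estimate of P(max_j N_j > n0).
counts = rng.multinomial(n, p, size=2000)
emp = np.mean(counts.max(axis=1) > n0)
print(emp, bound)
```

With these values the analytic bound is already below 1 and the event max_j N_j > n_0 is essentially never observed, consistent with the remark.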
The proof of Theorem 2.1 is based on two lemmas. The first lemma factorizes the affinity into the affinities of the restrictions to the partitioning sets, which are next lower bounded using the second lemma. The reduction to the partitioning sets is useful, because it reduces the n-fold products to lower-order products for which the second lemma is accurate.
Lemma 2.1. Define probability measures P_{j,λ_j} and Q_{j,λ_j} on X_j by … , where (N_1, ..., N_k) is multinomially distributed on n trials with success probability vector (p_1, ..., p_k) and … .

Proof. Set P̄_n := ∫ P_λ^n dπ(λ) and consider this as the distribution of the vector (X_1, ..., X_n). Then, for p_λ and q_λ densities of P_λ and Q_λ relative to some dominating measure, the left side of the lemma can be written as … .
Because by assumption on each partitioning set X_j the measures Q_λ and P_λ depend on λ_j only, the expressions ∏_{i: X_i ∈ X_j} q_λ(X_i) and ∏_{i: X_i ∈ X_j} p_λ(X_i) depend on λ only through λ_j. In fact, within the quotient on the right side of the preceding display, they can be replaced by ∏_{i: X_i ∈ X_j} q_{j,λ_j}(X_i) and ∏_{i: X_i ∈ X_j} p_{j,λ_j}(X_i), for q_{j,λ_j} and p_{j,λ_j} densities of the measures Q_{j,λ_j} and P_{j,λ_j}. Because π is a product measure, we can next use Fubini's theorem and rewrite the resulting expression as … .
Here the two products over j can be pulled out of the square root and replaced by a single product preceding it. A product over an empty set (if there is no observation in the corresponding partitioning set) is interpreted as 1.

Define variables I_1, ..., I_n that indicate the partitioning sets that contain the observations: I_i = j if X_i ∈ X_j, for every i and j, and let … . The measure P̄_n arises as the distribution of (X_1, ..., X_n) if this vector is generated in two steps. First λ is chosen from π, and next given this λ the variables X_1, ..., X_n are generated independently from P_λ. Then given λ the vector (N_1, ..., N_k) is multinomially distributed on n trials and probability vector (P_λ(X_1), ..., P_λ(X_k)). Because the latter vector is independent of λ and equal to (p_1, ..., p_k) by assumption, the vector (N_1, ..., N_k) is stochastically independent of λ and hence also unconditionally, under P̄_n, multinomially distributed with parameters n and (p_1, ..., p_k). Similarly, given λ the variables I_1, ..., I_n are independent and the event I_i = j has probability P_λ(X_j), which is independent of λ by assumption. It follows that the random elements (I_1, ..., I_n) and λ are stochastically independent under P̄_n.
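The key assumption here, that the cell probabilities P_λ(X_j) do not depend on λ, can be verified exactly in a toy discrete perturbation scheme (illustrative only) in which each q_λ moves mass only within a partitioning cell:

```python
import itertools
import numpy as np

# Toy scheme: sample space {0,...,2k-1}, cell j is the pair {2j, 2j+1}; the
# perturbation q_lambda redistributes mass within each cell, leaving the cell
# probabilities unchanged.
k, h = 4, 0.3
p = np.full(2 * k, 1.0 / (2 * k))

def q_lam(lam):
    q = p.copy()
    for j, s in enumerate(lam):
        q[2 * j] += s * h / (2 * k)
        q[2 * j + 1] -= s * h / (2 * k)
    return q

cell_probs = lambda w: w.reshape(k, 2).sum(axis=1)

base = cell_probs(p)
for lam in itertools.product([-1, 1], repeat=k):
    assert np.allclose(cell_probs(q_lam(lam)), base)
# Hence the cell-count vector (N_1,...,N_k) is multinomial(n; p_1,...,p_k)
# under every P_lambda, and therefore independent of lambda under the mixture.
print(base)
```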
The conditional distribution of X_1, ..., X_n given λ and I_1, ..., I_n can be described as follows: for each partitioning set X_j generate N_j variables independently from P_λ restricted and renormalized to X_j, i.e. from the measure P_{j,λ_j}; do so independently across the partitioning sets; and attach the correct labels from {1, ..., n}, consistent with I_1, ..., I_n, to the n realizations obtained. The conditional distribution under P̄_n of X_1, ..., X_n given I_1, ..., I_n is the mixture of this distribution relative to the conditional distribution of λ given (I_1, ..., I_n), which was seen to be the unconditional distribution π. Thus we obtain a sample from the conditional distribution under P̄_n of (X_1, ..., X_n) given (I_1, ..., I_n) by generating for each partitioning set X_j a set of N_j variables from the measure ∫ P_{j,λ_j}^{N_j} dπ_j(λ_j), independently across the partitioning sets, and next attaching labels consistent with I_1, ..., I_n. Now rewrite the right side of the last display by conditioning on I_1, ..., I_n as … . The product over j can be pulled out of the conditional expectation by the conditional independence across the partitioning sets. The resulting expression can be seen to be of the form claimed in the lemma.
The second lemma does not use the partitioning structure, but is valid for mixtures of products of arbitrary measures on a measurable space. For λ in a measurable space Λ let P_λ and Q_λ be probability measures on a given sample space (X, A), with densities p_λ and q_λ relative to a given dominating measure ν, which are jointly measurable. For a given (arbitrary) density p define functions ℓ_λ = q_λ − p_λ and κ_λ = p_λ − p, and set … .

Lemma 2.2. For any probability measure π on Λ and every n ∈ N, … .

Proof. Consider the measure P̄_n = ∫ P_λ^n dπ(λ), which has density p̄_n(x^n) = ∫ ∏_{i=1}^n p_λ(x_i) dπ(λ) relative to ν^n, as the distribution of (X_1, ..., X_n). Using the inequality E√(1+Y) ≥ 1 − EY^2/8, valid for any random variable Y with 1 + Y ≥ 0 and EY = 0 (see for example [1]), we see that inequality (2.2) holds. It suffices to upper bound the expected value on the right side of (2.2).
To this end we expand the difference … , where the sum ranges over all nonempty subsets I ⊂ {1, ..., n}. We split this sum into two parts, consisting of the terms indexed by subsets of size 1 and the subsets that contain at least 2 elements, and separate the square of the sum of these two parts by the inequality … . If n = 1, then there are no subsets with at least two elements and the second part is empty. Otherwise the sum over subsets with at least two elements contributes two times … . To derive the first inequality we use the inequality (EU)^2/EV ≤ E(U^2/V), valid for any random variables U and V ≥ 0, which can be derived from the Cauchy-Schwarz or Jensen inequality. The last step follows by writing the square of the sum as a double sum and noting that all off-diagonal terms vanish, as they contain at least one factor ℓ_λ(x_i) and ∫ ℓ_λ dν = 0. The order of integration on the right side can be exchanged, and next the integral relative to ν^n can be factorized, where the integrals ∫ p_λ dν are equal to 1. This yields the contribution 2 Σ_{|I|≥2} b^{|I|} to the bound on the expectation in (2.2).
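The subset sum appearing in this contribution has a simple closed form, since there are C(n, m) subsets of size m: Σ over I ⊂ {1, ..., n} with |I| ≥ 2 of b^{|I|} equals (1+b)^n − 1 − nb, which is of the order (nb)^2 when nb ≤ 1. A brute-force check:

```python
import itertools

# Brute-force check of the closed form for the subset sum in the bound:
# sum over I subset of {1,...,n}, |I| >= 2, of b^|I| = (1+b)^n - 1 - n*b.
n, b = 8, 0.05
brute = sum(b ** len(I)
            for m in range(2, n + 1)
            for I in itertools.combinations(range(n), m))
closed = (1 + b) ** n - 1 - n * b
print(brute, closed)

# For n*b <= 1 this is of the order (n*b)^2, since e^x - 1 - x <= x^2 on [0, 1].
print(closed <= (n * b) ** 2)
```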
The sum over sets with exactly one element contributes two times … . Here we expand … , where the sum is over all nonempty subsets I ⊂ {1, ..., n} that do not contain j. Replacement of ∏_{i≠j} p_λ(x_i) by ∏_{i≠j} p(x_i) changes (2.3) into … . In the last step we use that 1/EV ≤ E(1/V) for any positive random variable V. The integral with respect to ν^n on the right side can be factorized, and the expression bounded by n^2 c^{n−1} d. This must be added to the bound on the expectation in (2.2). Finally, the remainder after substituting ∏_{i≠j} p(x_i) for ∏_{i≠j} p_λ(x_i) in (2.3) contributes … . We exchange the order of integration and factorize the integral with respect to ν^n to bound this by n^2 Σ_{|I|≥1, j∉I} a^{|I|} b.

Estimating the mean response in missing data models
Suppose that a typical observation is distributed as X = (Y A, A, Z), for Y and A taking values in the two-point set {0, 1} and conditionally independent given Z. We think of Y as a response variable, which is observed only if the indicator A takes the value 1, and we are interested in estimating the mean response EY. The covariate Z is chosen such that it contains all information on the dependence between response and missingness indicator ("missing at random"). We assume that Z takes its values in Z = [0, 1]^d. The model can be parameterized by the marginal density f of Z relative to the Lebesgue measure ν on Z, and the probabilities b(z) = P(Y = 1 | Z = z) and a(z)^{−1} = P(A = 1 | Z = z). Alternatively, the model can be parameterized by the function g = f/a, which is the conditional density of Z given A = 1 up to the norming factor P(A = 1). Under this latter parametrization, which we adopt henceforth, the density p of an observation X is described by the triple (a, b, g), and the functional of interest is expressed as χ(p) = ∫ abg dν.
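The identity χ(p) = ∫ abg dν = EY can be checked by simulation. The functions a, b and the uniform density f below are hypothetical choices for d = 1, and the weighted mean uses the inverse-probability identity E[A Y a(Z)] = EY, which follows from a(z)^{−1} = P(A = 1 | Z = z) and the conditional independence of Y and A given Z.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model for d = 1, for illustration only.
f = lambda z: np.ones_like(z)          # density of Z (uniform on [0,1])
b = lambda z: 0.2 + 0.6 * z            # P(Y = 1 | Z = z)
a = lambda z: 1.0 / (0.3 + 0.5 * z)    # 1/a(z) = P(A = 1 | Z = z)

N = 200_000
Z = rng.uniform(size=N)
Y = (rng.uniform(size=N) < b(Z)).astype(float)
A = (rng.uniform(size=N) < 1.0 / a(Z)).astype(float)   # Y, A independent given Z

# chi(p) = int a b g dnu with g = f/a reduces to int b f dnu = E[Y].
chi = np.mean(b(Z))                    # Monte Carlo value of int b f dnu
# Inverse-probability-weighted mean using only the observed responses:
ipw = np.mean(A * Y * a(Z))
print(chi, ipw)
```

For these choices ∫ b f dν = 0.5, and the weighted mean over the observed responses recovers the same value.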
In the case that α = β these results can be proved using the method of [1], but in general we need a construction as in Section 2, with P_λ based on a perturbation of the smoothest parameter of the pair (a, b), and Q_λ constructed by perturbing in addition the coarsest of the two parameters.

Proof. Let H: R^d → R be a C^∞ function supported on the cube [0, 1/2]^d with ∫ H dν = 0 and ∫ H^2 dν = 1. Let k be the integer closest to n^{2d/(2α+2β+d)}, and let Z_1, ..., Z_k be translates of the cube k^{−1/d}[0, 1/2]^d that are disjoint and contained in [0, 1]^d. For z_1, ..., z_k the bottom left corners of these cubes, define … . These functions can be seen to be contained in C^α[0, 1]^d and C^β[0, 1]^d with norms that are uniformly bounded in k. We choose a uniform prior π on λ, so that λ_1, ..., λ_k are i.i.d. Rademacher variables.
We parameterize the model by the triple (a, b, g). The likelihood can then be written as … . Because ∫ H dν = 0, the values of the functional ∫ abg dν at the parameter values (a_λ, 1/2, 1/2) and (2, b_λ, 1/2) are both equal to 1/2, whereas the value at (a_λ, b_λ, 1/2) is equal to … . Thus the minimax rate is not faster than (1/k)^{α/d+β/d} for k = k_n such that the convex mixtures of the products of the perturbations do not separate completely as n → ∞. We choose the mixtures differently in the cases α ≤ β and α ≥ β.

α ≤ β. We define p_λ by the parameter (a_λ, 1/2, 1/2) and q_λ by the parameter (a_λ, b_λ, 1/2). Because ∫ a_λ dπ(λ) = 2 and ∫ b_λ dπ(λ) = 1/2, we have … . Therefore the number d in Theorem 2.1 vanishes, while the numbers a and b are of the orders k^{−2α/d} and k^{−2β/d} times … , respectively. Theorem 2.1 shows that … . For k ∼ n^{2d/(2α+2β+d)} the right side is bounded away from 0. Substitution of this number in the magnitude of separation (1/k)^{α/d+β/d} leads to the rate claimed in the theorem.

α ≥ β. We define p_λ by the parameter (2, b_λ, 1/2) and q_λ by the parameter (a_λ, b_λ, 1/2). The computations are very similar to the ones in the case α ≤ β.
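The exponent algebra in this last substitution can be verified with exact rational arithmetic (the numerical values of α, β, d below are arbitrary): substituting k ∼ n^{2d/(2α+2β+d)} into the separation (1/k)^{α/d+β/d} gives n^{−2(α+β)/(2α+2β+d)}.

```python
from fractions import Fraction

# Exponent of the separation rate: with k = n^(2d/(2a+2b+d)), the separation
# (1/k)^(a/d + b/d) equals n^(-sep_exponent(a, b, d)).
def sep_exponent(alpha, beta, d):
    k_exp = Fraction(2 * d, 2 * alpha + 2 * beta + d)   # k = n^k_exp
    return k_exp * Fraction(alpha + beta, d)            # exponent of 1/n

# The result simplifies to 2(a+b)/(2a+2b+d) for any alpha, beta, d:
for alpha, beta, d in [(1, 2, 4), (2, 2, 8), (1, 3, 2)]:
    assert sep_exponent(alpha, beta, d) == \
        Fraction(2 * (alpha + beta), 2 * alpha + 2 * beta + d)
print(sep_exponent(1, 2, 4))
```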

Estimating an expected conditional covariance
Suppose that we observe n independent and identically distributed copies of X = (Y, A, Z), where, as in the previous section, Y and A are dichotomous, and Z takes its values in Z = [0, 1]^d with density f. Let b(z) = P(Y = 1 | Z = z) and a(z) = P(A = 1 | Z = z). We note that … , so that, by combining the last two equations above, we can write … , where ∆(Z) = P(Y = 1 | A = 1, Z) − P(Y = 1 | A = 0, Z). This allows us to parametrize the density p of an observation by (∆, a, b, f). The functional χ(p) is the expected conditional covariance … . We consider the models … . We are mainly interested in the case (α + β)/2 < d/4, when the rate of estimation of χ(p) becomes slower than 1/√n. The paper [3] constructs an estimator that attains the rate n^{−(2α+2β)/(2α+2β+d)} uniformly over B_2 if equation 3.1 of the previous section holds. We will show that this rate is optimal by showing that the minimax rate over the smaller model B_1 is not faster than n^{−(2α+2β)/(2α+2β+d)}.
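With dichotomous Y and A, a short conditioning argument gives cov(Y, A | Z) = a(Z)(1 − a(Z))∆(Z), so that the expected conditional covariance equals E[a(Z)(1 − a(Z))∆(Z)]. The simulation below checks this identity; the functions a, p_1 = P(Y = 1 | A = 1, Z = ·) and p_0 = P(Y = 1 | A = 0, Z = ·) are hypothetical choices for d = 1.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical smooth model on Z = [0,1] (d = 1), for illustration only.
a  = lambda z: 0.3 + 0.4 * z            # P(A = 1 | Z = z)
p1 = lambda z: 0.6 + 0.2 * z            # P(Y = 1 | A = 1, Z = z)
p0 = lambda z: 0.2 + 0.1 * z            # P(Y = 1 | A = 0, Z = z)
Delta = lambda z: p1(z) - p0(z)
b = lambda z: a(z) * p1(z) + (1 - a(z)) * p0(z)   # P(Y = 1 | Z = z)

N = 400_000
Z = rng.uniform(size=N)
A = (rng.uniform(size=N) < a(Z)).astype(float)
Y = (rng.uniform(size=N) < np.where(A == 1, p1(Z), p0(Z))).astype(float)

# chi(p) = E cov(Y, A | Z) = E[a(Z)(1 - a(Z)) Delta(Z)] for dichotomous Y, A.
chi_formula = np.mean(a(Z) * (1 - a(Z)) * Delta(Z))
chi_mc = np.mean(Y * A) - np.mean(b(Z) * a(Z))    # E[YA] - E[E(Y|Z) E(A|Z)]
print(chi_formula, chi_mc)
```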