Computational Implications of Reducing Data to Sufficient Statistics

Given a large dataset and an estimation task, it is common to pre-process the data by reducing them to a set of sufficient statistics. This step is often regarded as straightforward and advantageous (in that it simplifies statistical analysis). I show that, on the contrary, reducing data to sufficient statistics can change a computationally tractable estimation problem into an intractable one. I discuss connections with recent work in theoretical computer science, and implications for some techniques to estimate graphical models.


Introduction
The idea of sufficient statistics is a cornerstone of statistical theory and statistical practice. Given a dataset, evaluating a set of sufficient statistics yields a concise representation that can be subsequently used to design (for instance) optimal statistical estimation procedures. To quote a widely adopted textbook [LC98]: 'It often turns out that some part of the data carries no information about the unknown distribution and that X can therefore be replaced by some statistic T = T(X) without loss of information.' The main point of the present paper is the following. While optimal statistical estimation can be performed solely on the basis of sufficient statistics, reduction to sufficient statistics can lead to an explosion in computational complexity. This phenomenon is so dramatic that, after reduction to sufficient statistics, a tractable estimation task can become entirely intractable.
To be concrete, we shall consider the problem of estimating the parameters of an exponential family over x ∈ {0, 1}^p:

p_θ(x) = h(x) exp{⟨θ, x⟩} / Z(θ),   Z(θ) ≡ Σ_{x ∈ {0,1}^p} h(x) exp{⟨θ, x⟩}. (1)

Here h : {0, 1}^p → R_{>0} will be assumed strictly positive, θ ∈ R^p, and ⟨a, b⟩ = Σ_{i=1}^d a_i b_i is the standard scalar product in R^d. We assume to be given n i.i.d. samples X_1, ..., X_n ∼ p_θ and want to estimate θ. A vector of sufficient statistics is clearly given by the empirical average

T(X^(n)) ≡ (1/n) Σ_{ℓ=1}^n X_ℓ, (2)

where we introduced the notation X^(n) = (X_1, ..., X_n) for the dataset of n samples. Let us reconsider the standard argument used to prove that a reduction in complexity entails no loss of information [LC98], since this is already suggestive of a possible explosion in computational complexity. Given any estimator θ̂(X^(n)) of the parameters θ, the argument constructs a randomized estimator θ̂_new(T) that only depends on the sufficient statistics T, and has the same distribution as θ̂(X^(n)), in particular the same risk. The construction is fairly simple. Given T, we can sample X̃^(n) from the law p_θ( · |T), conditional on the observed sufficient statistics T(X̃^(n)) = T. By definition of sufficient statistics, the conditional law p_θ( · |T) does not depend on θ, and hence we can sample X̃^(n) without knowing θ. We then define θ̂_new(T) ≡ θ̂(X̃^(n)). This clearly has the same risk as θ̂(X^(n)). In words, we were able to generate new data, as informative as the original, using only the sufficient statistics. The problem with this argument is that sampling from the conditional distribution of p_θ given T(X̃^(n)) = T can be computationally hard. Indeed, simple examples of the weight h( · ) can be given (see Section 3) that make sampling from p_θ( · |T), even approximately, impossible unless P = NP (see below). In other words, this argument is based on a reduction that is not computationally efficient.
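For small p, the model, its partition function, and the sufficient statistic can be evaluated by brute-force enumeration. The following sketch is illustrative only (the function names are not from the paper), and is feasible only for tiny p, since the sums range over 2^p configurations:

```python
import itertools
import numpy as np

def partition_function(h, theta, p):
    """Z(theta) = sum over x in {0,1}^p of h(x) * exp(<theta, x>)."""
    Z = 0.0
    for x in itertools.product([0, 1], repeat=p):
        x = np.array(x)
        Z += h(x) * np.exp(theta @ x)
    return Z

def prob(h, theta, p, x):
    """p_theta(x) = h(x) exp(<theta, x>) / Z(theta)."""
    x = np.array(x)
    return h(x) * np.exp(theta @ x) / partition_function(h, theta, p)

def sufficient_statistic(samples):
    """T(X^(n)) = empirical mean of the n samples, a vector in [0,1]^p."""
    return np.mean(samples, axis=0)
```

With h ≡ 1 the coordinates are independent, Z(0) = 2^p, and every configuration has probability 2^{-p} at θ = 0, which gives an easy sanity check.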
The rest of the paper is organized as follows. In Section 2 we state our main results formally. Under technical assumptions, these show that, if there is a computationally efficient estimator θ̂(T) that uses only the sufficient statistics, then the partition function Z(0) can also be approximated efficiently. In Section 3 we construct a family of functions h( · ) for which approximating Z(0) in polynomial time is impossible unless P = NP. As a consequence of our main theorem, no efficient parameter estimator from sufficient statistics exists for these models (unless P = NP). Remarkably, we show that a simple consistent estimator can be developed using the data X^(n), for the same models.
The example in Section 3 is an undirected graphical model, and indeed intractability of computing approximations to Z(0) is quite generic in this context. In Section 4 we discuss relations to the literature on algorithms and graphical model estimation.

Notations
We will use [n] = {1, 2, ..., n} to denote the set of the first n integers. Whenever possible, we will follow the convention of denoting deterministic values by lowercase letters (e.g. x, y, z, ...) and random variables by uppercase letters (e.g. X, Y, Z, ...). We reserve boldface for vectors and matrices, e.g. x is a deterministic vector or matrix, and X a random vector or matrix. For a vector x ∈ R^m and a set A ⊆ [m], x_A denotes the subvector indexed by A. The components of x are denoted by (x_1, x_2, ..., x_m).
Given f : R^m → R, we denote by ∇f(x) ∈ R^m its gradient at x, and by ∇²f(x) ∈ R^{m×m} its Hessian. Whenever useful, we add as a subscript the variable with respect to which we are differentiating, as, for instance, in ∇_x f(x).

Computational hardness of consistent parameter estimation
As mentioned above, we consider the problem of consistent estimation of the parameters θ of the model (1), from the sufficient statistics (2). Throughout we will assume h(x) > 0 strictly for all x ∈ {0, 1}^p.
As n → ∞, T(X^(n)) converges, by the law of large numbers, to the population mean

τ_*(θ) ≡ E_θ{X}. (3)

Here and below E_θ denotes expectation with respect to the distribution p_θ. We use the * subscript to emphasize that this τ_* is a function, and to distinguish it from a specific value τ ∈ (0, 1)^p. A consistent estimator based on the sufficient statistics T must necessarily invert this function in the limit n → ∞. This motivates the following definition.
2. The algorithm terminates in time polynomial in 1/ξ, p, the maximum description length of any of the τ_i, and the description length of h ∈ H_p.
Two remarks are in order, motivating the terminology adopted here (in particular, the use of 'consistent' and 'polynomial'). First, in the terminology of complexity theory, such an algorithm is a 'fully polynomial time approximation scheme.' Second, as discussed below, there is indeed a unique continuous function θ_* : (0, 1)^p → R^p such that τ_*(θ_*(τ)) = τ. This implies that θ̂ is indeed a consistent estimator, in the sense that for a sequence ξ_n → 0 we have, almost surely and hence in probability, θ̂(T(X^(n)); ξ_n) → θ. Finally, in the following we shall occasionally drop the qualification 'from sufficient statistics' when it is clear that we are considering estimators that only use sufficient statistics. We next state our main theorem.
Theorem 1. Assume H_p, p ≥ 1, to be a family of weight functions such that the following three conditions hold:

C1. There exists δ ∈ (0, 1/2) such that, for any p ∈ N, any h ∈ H_p, and any i ∈ [p], τ_{*,i}(0) ∈ [δ, 1 − δ].

C2. There exists a polynomial L(p) such that, for any p ∈ N and any h ∈ H_p, we have, for all θ such that τ_*(θ) ∈ [δ, 1 − δ]^p,

Cov_θ(X) ⪰ (1/L(p)) I_p. (6)

C3. There exists a polynomial K(p) such that, for any p ∈ N,

If there exists a polynomial consistent estimator for the model H, then, for any ε > 4pδ(K(p) + log(4/δ)), there exists an algorithm, polynomial in the description length of h and in 1/ε, returning Ẑ such that

| log Ẑ − log Z(0) | ≤ ε. (8)

The rest of this section is devoted to the proof of this theorem.
The main implication of this theorem is negative. If the problem of approximating Z(0) is intractable, it follows from the above that there is no polynomial consistent estimator. We will show in Section 3 that we can use recent results in complexity theory to construct a fairly simple class H = {h} such that: • An approximation scheme for Z(0) does not exist, under the standard complexity-theoretic assumption P ≠ NP.
• Remarkably, a consistent and computationally efficient estimator θ̂ = θ̂(X^(n)) of the model parameters exists! However, it is not a function of the sufficient statistics alone.
As a direct consequence, we have the following.
Corollary 2.2. There are classes of models H for which no polynomial consistent estimator of the parameters from sufficient statistics exists, unless P = NP.
Before outlining the proof of Theorem 1, it is useful to recall a few well-known properties of exponential families, as specialized to the present setting. While these are consequences of fairly standard general statements (see for instance [Efr78, LC98]), we present self-contained proofs in Appendix A for the readers' convenience.
We will use standard statistics notation for the cumulant generating function (also known as the log-partition function), A : R^p → R, and its Legendre-Fenchel transform F : (0, 1)^p → R, which we shall call the free energy,

A(θ) ≡ log Z(θ),   F(τ) ≡ inf_{θ ∈ R^p} { A(θ) − ⟨θ, τ⟩ }.

Proposition 2.3. Given a strictly positive h : {0, 1}^p → R_{>0}, the following hold:

Fact 2. Recalling that Cov_θ(X) ∈ R^{p×p} denotes the covariance matrix of the law p_θ, we have ∇²A(θ) = Cov_θ(X).

Fact 3. Cov_θ(X) ⪰ c(θ) I_p for some continuous strictly positive c(θ) > 0.
Fact 5. The function F : (0, 1)^p → R is concave and C^∞((0, 1)^p), with ∇F(τ) = −θ_*(τ) and ∇²F(τ) = −[Cov_{θ_*(τ)}(X)]^{−1}.

Fact 6. Let P_p = P({0, 1}^p) be the set of probability distributions over {0, 1}^p, and denote by H(q) = −Σ_{x∈{0,1}^p} q(x) log q(x) the Shannon entropy of q ∈ P_p. Then

F(τ) = max{ E_q log h(X) + H(q) : q ∈ P_p, E_q(X) = τ }. (16)

We are now in position to prove Theorem 1: we present here a version of the proof that omits some technical steps, and complete the details in Appendix B.
Proof of Theorem 1. By Eq. (13) we have, letting D_p(δ) ≡ [δ, 1 − δ]^p,

log Z(0) = max_{τ ∈ [0,1]^p} F(τ) = max_{τ ∈ D_p(δ)} F(τ),

where the second equality follows from assumption C1.
The problem of computing Z(0) is then reduced to that of maximizing the concave function F(τ) over the convex set D_p(δ). Note that, by assumption C2 and Eq. (15), the gradient of F(τ) has Lipschitz modulus bounded by L(p) on D_p(δ). Namely, for all τ_1, τ_2 ∈ D_p(δ),

‖∇F(τ_1) − ∇F(τ_2)‖_2 ≤ L(p) ‖τ_1 − τ_2‖_2.

We will maximize F(τ) by a standard projected gradient algorithm. We work here under the assumption that we have access to an oracle that, given a point τ ∈ D_p(δ), returns θ_*(τ) = −∇_τ F(τ). While in reality we do not have access to such an oracle, Definition 2.1 (and the theorem's assumptions) imply that there exists an efficient approximation scheme for it. We will show in Appendix B that we can indeed replace the oracle by such an approximation scheme.
Given the oracle, the projected gradient algorithm is defined by the iteration (with the superscript t indicating the iteration number, and letting L = L(p))

τ^{t+1} = P_δ( τ^t + (1/L) ∇_τ F(τ^t) ). (21)

Here P_δ(x) is the orthogonal projection onto D_p(δ), which can be computed efficiently since, for each i ∈ {1, 2, ..., p} and u ∈ R^p,

P_δ(u)_i = min( max(u_i, δ), 1 − δ ). (22)

We run t_0 = t_0(p, ε) ≡ ⌈2p L(p)/ε⌉ iterations of projected gradient. By [BT09, Theorem 3.1] we have

F(τ_*(0)) − F(τ^{t_0}) ≤ L(p) ‖τ^0 − τ_*(0)‖_2² / (2 t_0) ≤ ε/4. (23)

Hence, F(τ^{t_0}) provides a good approximation of F(τ_*(0)) = log Z(0). We are left with the task of evaluating F(τ^{t_0}). The idea is to 'integrate' the derivative of F(τ) along a path that starts at a point τ^(0) where F can be easily approximated. We will use again ∇_τ F(τ) = −θ_*(τ) and assume that θ_* is exactly computed by an oracle. Again we will see in Appendix B that this oracle can be replaced by the estimator in Definition 2.1. Let τ^(0) ≡ (δ, ..., δ)^T. Next consider Eq. (16). For any q ∈ P_p such that E_q(X) = τ^(0), we have, letting s(x) ≡ Σ_{i=1}^p x_i,

where the second inequality in Eq. (24) follows since the joint entropy is not larger than the sum of the entropies of the marginals. The first inequality in Eq. (25) follows from assumption C3, and the last inequality in Eq. (25) is Markov's inequality. Using these bounds in Eq. (16), we get

where the last inequality follows from the assumption ε > 4pδ(K(p) + log(4/δ)).
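The projected gradient iteration above (a gradient step of size 1/L followed by coordinatewise clipping onto D_p(δ)) can be sketched as follows. This is a toy illustration: grad_F stands in for the oracle τ ↦ −θ_*(τ), and the quadratic test function in the usage note is an assumption, not the free energy of the model:

```python
import numpy as np

def project(tau, delta):
    """Coordinatewise orthogonal projection onto D_p(delta) = [delta, 1-delta]^p."""
    return np.clip(tau, delta, 1.0 - delta)

def projected_gradient_ascent(grad_F, p, L, delta, t0):
    """Maximize a concave F over [delta, 1-delta]^p using a gradient oracle.

    grad_F plays the role of the oracle tau -> grad F(tau) = -theta_*(tau);
    the step size is 1/L, where L bounds the Lipschitz modulus of the gradient.
    """
    tau = np.full(p, 0.5)  # start at the center of the cube
    for _ in range(t0):
        tau = project(tau + grad_F(tau) / L, delta)
    return tau
```

For example, with F(τ) = −Σ_i (τ_i − 0.9)², gradient −2(τ − 0.9), L = 2 and δ = 0.2, the constrained maximizer is the corner (0.8, ..., 0.8), and the iteration reaches it after clipping.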
We finally construct our approximation Ẑ^or by letting

(We introduced the superscript 'or' to emphasize that this approximation makes use of the oracle θ_*. In Appendix B we control the additional error induced by the use of θ̂.) Let us bound the approximation error:

The first term is bounded by Eq. (26) and the last by Eq. (23). As for the middle term, by the intermediate value theorem there exists, for each ℓ, a point τ̄^(ℓ) ∈ [τ^(ℓ−1), τ^(ℓ)] such that

Hence, choosing m = m_0(p, ε) ≡ ⌈4L(p)p/ε⌉, we can bound the sum in Eq. (28) by ε/4. Hence the approximation error is bounded by ε, which concludes our proof outline.
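The path-integration step, which approximates the change in F by a Riemann sum of ⟨∇F, increment⟩ along a discretized straight path as in Eqs. (27)-(28), can be sketched as follows (the function name and the toy gradient in the test are illustrative assumptions):

```python
import numpy as np

def integrate_gradient(grad_F, tau_start, tau_end, m):
    """Approximate F(tau_end) - F(tau_start) by summing <grad F, increment>
    over m equal steps of the straight path from tau_start to tau_end,
    evaluating the gradient at the left endpoint of each segment
    (a left Riemann sum of the line integral of grad F)."""
    increment = (tau_end - tau_start) / m
    total = 0.0
    tau = tau_start.astype(float).copy()
    for _ in range(m):
        total += grad_F(tau) @ increment
        tau += increment
    return total
```

The discretization error decays as the number of segments m grows, matching the role of m_0(p, ε) in the proof: for F(τ) = Σ_i τ_i² on a path from 0 to (1, 1), the exact change is 2 and the sum with m = 1000 is within 0.01 of it.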

An example
In this section we describe a simple class of functions H_p for which it is impossible to estimate Z(0) in polynomial time unless P = NP. Using Theorem 1, we will show that consistent parameter estimation from sufficient statistics is intractable for these models. We will then show that, for the same models, it is quite easy to estimate the parameters from i.i.d. samples X_1, ..., X_n ∼ p_θ.

Intractability of estimation from sufficient statistics
For β > 0 a fixed number, and

This is also known as the antiferromagnetic Ising model on the graph G. Fixing k ∈ N and β ∈ R_{>0}, we introduce the class of functions

(Recall that a regular graph is a graph with the same degree at all vertices. The set of k-regular graphs on p vertices is non-empty as soon as pk is even and p ≥ p_0(k).) Intractability of approximating Z(0) was characterized in [SS12, GSV12]. We restate the main result of [SS12], adapting it to the present setting1 .
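As a sanity check, Z(0) can be computed by brute force for tiny instances. The weight below is one common parameterization of an antiferromagnetic Ising model on {0,1}^p, in which each edge whose endpoints agree is penalized; it may differ from the exact form of Eq. (31) by a reparameterization, and the computation is exponential in p, so this is only a sketch for toy graphs:

```python
import itertools
import numpy as np

def ising_weight(x, edges, beta):
    """One common antiferromagnetic weight on {0,1}^p: each edge whose
    endpoints agree contributes a penalty factor exp(-2*beta).
    (Illustrative; the exact form of Eq. (31) may differ.)"""
    agree = sum(1 for (i, j) in edges if x[i] == x[j])
    return np.exp(-2.0 * beta * agree)

def log_Z0(p, edges, beta):
    """Brute-force log Z(0) = log sum_x h_G(x), feasible only for tiny p."""
    Z = sum(ising_weight(x, edges, beta)
            for x in itertools.product([0, 1], repeat=p))
    return np.log(Z)
```

On the 4-cycle (a 2-regular graph), β = 0 gives every configuration weight 1, so log Z(0) = log 16, and increasing β strictly decreases the partition function.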
We can now state (and prove) a more concrete form of Corollary 2.2. In Section 3.2 we will show that the model parameters can be consistently estimated in polynomial time from i.i.d. samples X_1, ..., X_n ∼ p_θ.
Proof. The proof consists in checking that assumptions C1, C2, C3 of Theorem 1 apply. It then follows from Theorem 2 that either P = NP or there is no polynomial consistent estimator from sufficient statistics. Throughout, we will write C or c for generic strictly positive constants that may depend on β and k.
Let us start with condition C1. We fix a graph G = (V = [p], E) on p vertices. For a vertex i ∈ [p], we let ∂i = {j ∈ [p] : (i, j) ∈ E} be the set of neighbors of i. Writing P_0 for P_{θ=0}, we have τ_{*,i}(0) = P_0(X_i = 1).
Note that p_θ is a Markov random field with graph G. We therefore have

min_{x_∂i} P_0(X_i = 1 | X_∂i = x_∂i) ≤ τ_{*,i}(0) ≤ max_{x_∂i} P_0(X_i = 1 | X_∂i = x_∂i).

A simple direct calculation with Eq. (31) yields

where n_0(x_∂i) and n_1(x_∂i) are the number of zeros and ones in the vector x_∂i.
The right-hand side of Eq. (35) is maximized when n_0(x_∂i) = k − n_1(x_∂i) = k, and minimized when n_0(x_∂i) = k − n_1(x_∂i) = 0. We then have

We conclude that there exists c = c(k, β) > 0 such that condition C1 is satisfied for all δ ∈ (0, c).
We will select the value of δ after checking condition C3.
At this point we choose δ = 1/(10pK(p)). Assuming, without loss of generality, K(p), p ≥ 10, this implies that we can take ε = 2 in Eq. (8), which yields the desired contradiction with Theorem 2, provided we can verify condition C2 with the stated value of δ.
To conclude our proof, consider condition C2. First we claim that τ_*(θ) ∈ [δ, 1 − δ]^p implies ‖θ‖_∞ ≤ C log p for some constant C = C(k, β). Indeed, let us fix i ∈ [p] and prove that θ_i ≤ C log p (the lower bound follows from an analogous argument). Using again the fact that p_θ is a Markov random field, we have (since, by definition, τ_{*,i}(θ) = P_θ(X_i = 1))

Proceeding as in the case of condition C1, we see that the last conditional probability is minimized when all the neighbors of i are in state 0 (i.e. x_∂i = 0). This yields

which is equivalent to θ_i ≤ 2βk + log((1 − δ)/δ). Substituting δ = 1/(10pK(p)) with K(p) a polynomial yields the desired bound (recall that β and k are constants).
Next, we claim that there exists a polynomial L_0(p) such that, for each i ∈ [p], all x_∂i, and all θ with |θ_i| ≤ C log p, we have

The calculation is essentially the same as the one already carried out above for P_θ(X_i = 1|X_∂i = x_∂i), and we therefore omit it. Finally, we want to prove Eq. (6) for some polynomial L(p). Equivalently, we need to prove that Var_θ(⟨v, X⟩) ≥ 1/L(p) for any vector v ∈ R^p with ‖v‖_2 = 1. Fix one such vector v. Let i(1) be the index of the component of v with the largest magnitude (i.e. |v_{i(1)}| ≥ |v_j| for all j ∈ [p] \ {i(1)}). Let i(2) be the index of the component of v with the largest magnitude excluding i(1) and ∂i(1) (i.e. |v_{i(2)}| ≥ |v_j| for all j ∈ [p] \ ({i(1)} ∪ ∂i(1))), and so on. Namely, for each ℓ, we let i(ℓ) be the index of the component of v with the largest magnitude excluding i(1), ..., i(ℓ − 1) and their neighbors in G. Denote by m ≥ p/(k + 1) the total number of indices selected in this manner, and let S = {i(1), ..., i(m)}.
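The greedy selection of the indices i(1), i(2), ... can be sketched as follows. Since each pick removes at most k + 1 vertices from consideration in a k-regular graph, at least p/(k + 1) indices survive, and by construction no two selected vertices are adjacent:

```python
def greedy_selection(v, neighbors):
    """Pick indices in decreasing order of |v_i|, skipping any index that was
    already picked or is a neighbor of a picked vertex.  The result S is an
    independent set; for a k-regular graph, len(S) >= len(v) / (k + 1)."""
    p = len(v)
    order = sorted(range(p), key=lambda i: -abs(v[i]))
    removed = set()
    S = []
    for i in order:
        if i in removed:
            continue
        S.append(i)
        removed.add(i)
        removed.update(neighbors[i])
    return S
```

On the 4-cycle with v = (4, 3, 2, 1), the procedure picks vertex 0, discards its neighbors 1 and 3, then picks vertex 2, giving the independent set {0, 2} of size 2 ≥ 4/3.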
Further, letting

Here the identity in Eq. (42) follows because the (X_i)_{i∈S} are conditionally independent given X_{S^c} (note that S is an independent set in G, i.e. there is no edge connecting two vertices in S), and because (X_i)_{i∈S^c} is constant given X_{S^c}. The expressions in Eq. (42) follow from Eqs. (39) and (40). This establishes condition C2, with L(p) = (k + 1)L_0(p), and finishes the proof.

Tractable estimation from samples
In this section we assume to be given n i.i.d. samples X_1, X_2, ..., X_n ∼ p_θ, and denote by X^(n) = (X_1, ..., X_n) the entire dataset. We will seek an estimator θ̂ = θ̂(X^(n); ξ) that can be computed efficiently, and is consistent in the sense that, for a sequence ξ_n → 0, θ̂(X^(n); ξ_n) converges to θ in p_θ-probability.
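The [AKN06]-style construction used next is based on empirical conditional frequencies. The following is a minimal illustrative sketch, not the construction verbatim: the all-zero conditioning pattern and the known correction term log_h_ratio (the log-ratio of h-weights with x_i flipped, computable since h is known) are assumptions made here for concreteness:

```python
import numpy as np

def estimate_theta_i(samples, i, neighbors_i, log_h_ratio):
    """Estimate theta_i from conditional counts.

    Restrict attention to samples whose neighborhood X_{partial i} equals the
    all-zero pattern; among those, the conditional log-odds of X_i = 1 equals
    theta_i plus a known correction coming from h (here log_h_ratio)."""
    mask = np.all(samples[:, neighbors_i] == 0, axis=1)
    xi = samples[mask, i]
    n1, n0 = np.sum(xi == 1), np.sum(xi == 0)
    return np.log(n1 / n0) - log_h_ratio
```

In the trivial case h ≡ 1 (no neighbors, zero correction) this reduces to the empirical log-odds of X_i, which converges to θ_i by the law of large numbers.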
It is indeed fairly easy to construct such an estimator: we will use an approach developed in [AKN06]. Fix i ∈ [p] and say we want to estimate θ_i. Let N_i(x_i; x_∂i) be the number of samples ℓ ∈ [n] such that X_{ℓ,i} = x_i and X_{ℓ,∂i} = x_∂i. Then, by the law of large numbers, we have, almost surely and in probability,

lim_{n→∞} N_i(x_i; x_∂i)/n = P_θ(X_i = x_i, X_∂i = x_∂i).

Discussion and related literature

From a mathematical point of view, the phenomenon highlighted by Theorem 1 is not new. It can be traced back to two well-understood facts, and one simple remark:

1. First well-understood fact. The log-partition function is the value of a convex optimization problem:

log Z(0) = max_{τ ∈ [0,1]^p} F(τ). (56)

2. Second well-understood fact. Recall that a weak evaluation oracle for F is an oracle that, on input τ, ε, with τ ∈ [0, 1]^p and ε > 0, returns F̂ such that |F(τ) − F̂| ≤ ε. Given such an oracle, the optimization problem (56) can be solved in polynomial time with accuracy C(p)ε, with C(p) a polynomial.
3. Simple remark. Since we assume access to a gradient oracle (i.e. an oracle for ∇F(τ) = −θ_*(τ)), we can construct an evaluation oracle. Hence, by the previous facts, we can approximate log Z(0).
In other words, we could have proven Theorem 1 by using the construction at point 3 and then referring to standard results in convex optimization [GLS81, Lov87]. We preferred a self-contained proof that uses additional structure of the problem (differentiability of F and the existence of an oracle for ∇F).
The specific example discussed in Section 3 can be regarded as a graphical model. While we used a specific class of models, namely the antiferromagnetic Ising models defined by Eq. (31), approximating the partition function of a graphical model is, in many cases, intractable [GJP03, Sly10, SS12, GSV12, CCGL12]. Hence, the conclusion is that, generally speaking, it is intractable to estimate the parameters of a graphical model from sufficient statistics (unless the model has special structure).
The literature on estimating the parameters and structure of a graphical model is quite vast, and we can only provide a few pointers here. Traditional methods [HS83, AHS85] attempt to maximize the likelihood function by gradient ascent. These approaches are necessarily based on sufficient statistics, and hence covered by our Theorem 1: in general, they cannot be implemented in polynomial time. The bottleneck is quite apparent in this case: evaluating the likelihood function or its gradient requires computing expectations with respect to p_θ, which, in general, is NP-hard. Notice that this difficulty is not circumvented by regularizing the likelihood function, as in the graphical LASSO [BEGd08].
A possible approach to overcoming this problem was proposed in [RWL+10]: the basic idea is to reconstruct the neighborhood of a node i by performing a logistic regression against the other nodes. This approach generalizes to discrete graphical models a method developed in [MB06] for Gaussian graphical models. As shown in [BM09], the logistic regression method fails unless the graphical model satisfies a 'weak dependence' condition. Under the same type of condition, the log-partition function can be computed efficiently.
Several papers [AKN06, CT06, BMS08, RSS12, ATHW12] develop algorithms to learn parameters and graph structures, with consistency guarantees under weak assumptions on the graphical model. As stressed in Section 3.2, these algorithms do not make use exclusively of the sufficient statistics: instead, they effectively estimate the joint distributions of subsets of k variables, with k depending on the maximum degree.
On the impossibility side, Bogdanov, Mossel and Vadhan [BMV08] showed that estimating graphical models with hidden nodes is intractable. Singh and Vishnoi [SV13] establish a result that is very similar to Theorem 1, under a slightly different oracle assumption. Their proof uses the ellipsoid algorithm instead of projected gradient, and their motivation is related to maximum-entropy rounding techniques in optimization algorithms. Roughgarden and Kearns [RK13] establish the equivalence of various computational tasks in graphical models. Again, they use convex duality and the ellipsoid algorithm.
Finally, I recently became aware of unpublished work by Bresler, Gamarnik and Shah [BGS14], which also proves intractability of learning graphical models from sufficient statistics. Their proof strategy is broadly similar to the one presented here, but differs in the solution of several technical obstacles.

A Proof of Proposition 2.3

and θ_i ∈ (−∞, +∞) for all i ∈ S_0 ≡ [p] \ (S_+ ∪ S_−), then we have

Σ_{x : x_{S_+} = 1, x_{S_−} = 0} x h(x) exp{⟨θ_{S_0}, x_{S_0}⟩}, (60)

We next prove that τ_* is injective. Indeed, assume by contradiction that τ_*(θ_1) = τ_*(θ_2) for θ_1 ≠ θ_2. Then, by the intermediate value theorem, there exists λ ∈ [0, 1] such that, letting

and hence, letting

which is impossible by Fact 3 above.
In order to prove that τ_* : R^p → [0, 1]^p is surjective, we will proceed by induction over the problem's dimension p. The claim is obvious for p = 1. We assume next that it holds for all dimensions up to (p − 1), and prove it for dimension p. The claim follows by continuity, after proving that the image of τ_* contains the boundary B_p ≡ [0, 1]^p \ (0, 1)^p. Namely, for each τ ∈ B_p, there exists θ ∈ R^p such that τ_*(θ) = τ.
Fact 5. This is a standard exercise with Legendre-Fenchel transforms, and we omit its proof.

Fact 6. Recall the Gibbs variational principle [MM09]:
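The recalled display is presumably the standard statement of the principle. In the present notation it would read as follows (a reconstruction, under the assumption A(θ) = log Z(θ), not a verbatim quote of the source):

```latex
\log Z(\theta)
  \;=\;
  \max_{q \in \mathcal{P}_p}
  \Big\{ \big\langle \theta, \mathbb{E}_q X \big\rangle
       \;+\; \mathbb{E}_q \log h(X) \;+\; H(q) \Big\},
```

with the maximum attained at q = p_θ; restricting the maximum to distributions q with E_q(X) = τ recovers the expression for F(τ) stated in Fact 6.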
The claim follows by comparing this expression with Eq. (13).

B Finishing the proof of Theorem 1
In this appendix we complete the proof of Theorem 1. In the simplified version presented in Section 2, we assumed access to an oracle returning θ_*(τ) when queried with a value τ. We will show that our claim remains correct if the oracle is replaced by a polynomial consistent estimator θ̂( · , · ). By Definition 2.1, for any ξ > 0 we have

The oracle θ_*( · ) is used at two points in the proof of Theorem 1:

1. In the implementation of the projected gradient algorithm, cf. Eq. (21). It is queried a total of t_0(p, ε) ≡ ⌈2p L(p)/ε⌉ times for this purpose.
We will replace these queries with queries to θ̂( · , ξ), with 1/ξ polynomial in p and 1/ε. Since the total number of calls is also polynomial in p and 1/ε, this yields an algorithm that is polynomial in p and 1/ε.