ZERO DISTRIBUTION OF RANDOM BERNOULLI POLYNOMIAL MAPPINGS

ABSTRACT. In this note, we study the asymptotic zero distribution of full systems of multivariable random polynomials with independent Bernoulli coefficients. We prove that, with overwhelming probability, their simultaneous zero sets are discrete and the associated normalized empirical measures of zeros are asymptotic to the Haar measure on the unit torus.


INTRODUCTION
A random Kac polynomial on the complex plane is of the form

(1.1) f_d(z) = Σ_{j=0}^{d} a_j z^j,

where the coefficients a_j are independent copies of the (real or complex) standard Gaussian. A classical result due to Kac, Hammersley and Shepp & Vanderbei [21, 17, 25] asserts that almost surely the normalized empirical measure of zeros

δ_{Z(f_d)} := (1/d) Σ_{f_d(ζ) = 0} δ_ζ

converges to the normalized arc length measure on S^1 := {|z| = 1} as d → ∞. The asymptotic zero distribution of Kac polynomials with independent identically distributed (i.i.d.) discrete random coefficients has also been studied extensively (see e.g. [24, 15]). More recently, Ibragimov and Zaporozhets [20] proved that the empirical measure of zeros δ_{Z(f_d)} almost surely converges to the normalized arc length measure if and only if the moment condition E[log(1 + |a_i|)] < ∞ holds. This property can be considered as a global universality property of the zeros of random polynomials (see also [29] for a local version).
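The clustering of zeros of ±1 Kac polynomials around S^1 can be observed numerically. The sketch below is our illustration, not a computation from this note; the degree and random seed are arbitrary choices.

```python
import numpy as np

# Numerical sketch (our illustration, not the paper's) of the Kac /
# Hammersley / Shepp-Vanderbei phenomenon: zeros of a random +-1
# polynomial of large degree cluster near the unit circle S^1.
def kac_bernoulli_roots(d, seed=0):
    """Roots of f_d(z) = sum_{j=0}^d a_j z^j with i.i.d. +-1 coefficients."""
    rng = np.random.default_rng(seed)
    coeffs = rng.choice([-1.0, 1.0], size=d + 1)  # ordered a_d, ..., a_0
    return np.roots(coeffs)

roots = kac_bernoulli_roots(200)
moduli = np.abs(roots)
# Cauchy's bound forces 1/2 < |z| < 2 for every root when all |a_j| = 1;
# the bulk of the moduli is in fact much closer to 1.
```

For d = 200 the typical deviation of |z| from 1 is well below 10^{-1}, in line with the convergence of δ_{Z(f_d)} to the arc length measure.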
Building upon the work of Shiffman and Zelditch [28], the equilibrium distribution of random systems of polynomials with Gaussian coefficients was obtained by Bloom & Shiffman [9] and Shiffman [26]. More recently, these results were generalized to i.i.d. random coefficients with bounded density [1, 2]. We refer the reader to the survey [4] and references therein for the state of the art. On the other hand, the asymptotic zero distribution of random polynomial mappings with discrete random coefficients remained open (cf. [3, 8, 5]). In this note, we study the asymptotic zero distribution of multivariable full systems of random polynomials with independent Bernoulli coefficients of the form

(1.2) f_{d,i}(x) = Σ_{|J| ≤ d} α_{i,J} x^J,

where x^J = x_1^{j_1} ··· x_n^{j_n} and the α_{i,J} are ±1 Bernoulli random variables for i = 1, …, n. Throughout this work, we consider systems (f_{d,1}, …, f_{d,n}) of random Bernoulli polynomials with independent coefficients. We write f_d = (f_{d,1}, …, f_{d,n}) for short. We denote the collection of all such systems of polynomials in n variables and of degree d by Poly_{n,d}, which is endowed with the product probability measure Prob_d.
Theorem 1.1. Let f_d = (f_{d,1}, …, f_{d,n}) be a system of random polynomials with independent ±1 valued Bernoulli coefficients. Then there exist a dimensional constant K = K(n) > 0 and an exceptional set E_{n,d} ⊂ Poly_{n,d} such that Prob_d(E_{n,d}) ≤ K/d and, for all f_d ∈ Poly_{n,d} \ E_{n,d}, the simultaneous zeros Z(f_d) of the system f_d are isolated with #Z(f_d) = d^n.
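The count #Z(f_d) = d^n can be checked symbolically in the smallest nontrivial case. The system below is hand-picked, not random, and is our illustration rather than an example from this note: a ±1 system with n = 2, d = 2 and full support on 2Σ_2 whose simultaneous zeros are d^n = 4 isolated points.

```python
from sympy import symbols
from sympy.solvers.polysys import solve_poly_system

# Hand-picked +-1 system for n = 2, d = 2 (our illustration, not an
# example from the paper): full support on 2*Sigma_2, all coefficients +-1.
x, y = symbols('x y')
f1 = 1 + x + y + x**2 + x*y + y**2   # all coefficients +1
f2 = 1 - x + y + x**2 - x*y + y**2   # sign flipped on odd powers of x

# The system is zero-dimensional; its common zeros are isolated points.
solutions = solve_poly_system([f1, f2], x, y)
```

One finds the d^n = 4 zeros (0, ω) and (0, ω̄) with ω a primitive cube root of unity, together with (±i, −1); none of them lies at infinity, in agreement with Bézout's bound.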
For a system f_d ∈ Poly_{n,d} whose simultaneous zeros Z(f_d) are isolated, we denote the corresponding normalized empirical measure by δ_{Z(f_d)}; that is, δ_{Z(f_d)} is a probability measure supported on the isolated zeros with equal weight on each zero. We also let ν_Haar denote the Haar measure on (S^1)^n of total mass 1. As an application of Theorem 1.1 together with a deterministic equidistribution result [14, Theorem 1.7], we obtain the asymptotic zero distribution of random Bernoulli polynomial mappings. Finally, we consider the measure valued random variables f_d ↦ δ_{Z(f_d)} and define the expected zero measure by

⟨E[δ_{Z(f_d)}], ϕ⟩ := ∫_{Poly_{n,d} \ E_{n,d}} ⟨δ_{Z(f_d)}, ϕ⟩ dProb_d(f_d),

where ϕ is a continuous function with compact support in C^n and E_{n,d} denotes the exceptional set given by Theorem 1.1.
Theorem 1.3. Let f_d = (f_{d,1}, …, f_{d,n}) be a system of random polynomials with independent ±1 valued Bernoulli coefficients. Then E[δ_{Z(f_d)}] → ν_Haar as d → ∞ in the weak topology.
The outline of this work is as follows. In §2, we review some basic properties of resultants. In particular, we recall the multipolynomial resultant and the sparse resultant for polynomial systems [16, 11] as well as the directional resultant [13]. In §3, we prove the main result, Theorem 1.1. Finally, in §4 we prove Theorem 1.3.

PRELIMINARIES
In this section, we review some basic results from algebraic geometry and discrepancy theory related to our results. More precisely, we discuss the multihomogeneous (classical) resultant and the sparse eliminant, as well as the relation between these two notions. For a detailed account of the subject and proofs we refer the reader to [16, 11]. We also discuss the sparse resultant introduced by D'Andrea and Sombra, and the corresponding directional sparse resultants [14, 13].
2.1. Lattice points, polytopes. For a nonempty subset P ⊂ R^n, we denote its convex hull in R^n by conv(P). For two nonempty convex sets Q_1, Q_2, their Minkowski sum is defined as Q_1 + Q_2 := {q_1 + q_2 : q_1 ∈ Q_1, q_2 ∈ Q_2}, and for λ ∈ R, the scaled polytope is of the form λQ := {λq : q ∈ Q}.
For a convex body Q ⊂ R^n and v ∈ R^n \ {0}, we consider the support function

(2.1) s_Q(v) := min{⟨v, q⟩ : q ∈ Q}.

In particular, the set {q ∈ R^n : ⟨v, q⟩ = s_Q(v)} defines a supporting hyperplane of Q, and v is called an inward pointing normal. The intersection of Q with the supporting hyperplane in the direction v ∈ R^n is denoted by Q^v := {q ∈ Q : ⟨v, q⟩ = s_Q(v)}.

2.2.1. Multipolynomial Resultant. Consider homogeneous polynomials

F_i(t) = Σ_{|J| = d_i} u_{i,J} t^J

for i = 0, …, n, where J is a multi-index (j_0, …, j_n) and t^J := t_0^{j_0} ··· t_n^{j_n} is the monomial of degree |J| = Σ_{i=0}^n j_i. The set of such polynomials forms an affine space by identifying Σ_{|J|=d_i} u_{i,J} t^J with the point (u_{i,J})_{|J|=d_i}. We also consider the canonical projection π onto the coefficients. Note that if d_0 = … = d_n = 1, then the evaluation of the multipolynomial resultant Res_{d_0,…,d_n} at the coefficients of F_0, …, F_n is the determinant of the coefficient matrix.

Theorem 2.2 ([16], [11]). Let F_0, …, F_n ∈ C[t_0, …, t_n] be homogeneous polynomials of positive total degrees d_0, …, d_n. Then the system F_0 = … = F_n = 0 has a solution in the complex projective space P^n if and only if Res_{d_0,…,d_n}(F_0, …, F_n) = 0.

Theorem 2.2 characterizes the existence of nontrivial solutions of a system of homogeneous polynomials in terms of the coefficients of the polynomials in the system. However, not all systems of equations are homogeneous, and in general not all monomial terms appear. Hence, we need a more general version of the multipolynomial resultant.
2.2.2. Sparse Eliminant. Following [16], we recall the definition of the sparse eliminant. Let A_0, …, A_n be a collection of non-empty finite subsets of Z^n, let u_i := {u_{i,J}}_{J ∈ A_i} be a group of #A_i variables for i = 0, …, n, and set u = {u_0, …, u_n}. For each i, the general Laurent polynomial f_i with support A_i := supp(f_i) is given by

f_i(x) = Σ_{J ∈ A_i} u_{i,J} x^J.

We let A = (A_0, …, A_n) and consider the incidence variety in this setting defined by

W_A := {(u, x) : f_0(x) = … = f_n(x) = 0, x ∈ (C^*)^n},

where N_i = #A_i is the number of coefficient variables in each group u_i. Next, we consider the canonical projection π_A onto the first coordinate and let π_A(W_A) denote the Zariski closure of the image of W_A under the projection π_A.

Definition 2.3. The sparse eliminant, denoted by Res_A, is defined as follows: if the variety π_A(W_A) has codimension 1, then the sparse eliminant is the unique (up to sign) irreducible polynomial in Z[u] which is the defining equation of π_A(W_A); otherwise, Res_A is defined to be the constant polynomial 1. The expression Res_A(f_0, …, f_n) is the evaluation of Res_A at the coefficients of f_0, …, f_n.
The classical resultant Res_{d_0,…,d_n} is a special case of the sparse eliminant Res_A. Indeed, letting Σ_n be the standard unit simplex and taking A_i to be the set of all integer points in the d_i-simplex, i.e. A_i = d_i Σ_n ∩ Z^n, one recovers Res_A = Res_{d_0,…,d_n} up to a sign. Following [11] and [16], for simplicity we let all the sparse polynomials f_0, …, f_n have the same support A_i = dΣ_n ∩ Z^n for some positive integer d and consider the system f_0(x) = … = f_n(x) = 0. We also let t_0, …, t_n be homogeneous coordinates which are related to x_1, …, x_n by x_i = t_i/t_0. Then we define the homogeneous polynomials

(2.5) F_i(t_0, …, t_n) := t_0^d f_i(t_1/t_0, …, t_n/t_0).

This gives n + 1 homogeneous polynomials of total degree d in the variables t_0, …, t_n, and this procedure is independent of the choice of homogeneous coordinates.

Proposition 2.5 ([11]). Let A_i := dΣ_n ∩ Z^n for each i = 0, …, n and consider the systems of polynomials F and f as above. Then Res_A(f_0, …, f_n) = ± Res_{d,…,d}(F_0, …, F_n).

Using the above proposition, we can give a version of Theorem 2.2 as follows.
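The homogenization step (2.5) can be carried out mechanically with a computer algebra system. The polynomial below is a hypothetical example of ours, not one from the text.

```python
from sympy import symbols, Poly

# Sketch of the homogenization (2.5): pass from f(x1, x2) of degree d to
# F = t0^d * f(x1/t0, x2/t0) by adding a homogenizing variable t0.
# The sample polynomial is our own choice.
t0, x1, x2 = symbols('t0 x1 x2')
f = Poly(1 - x1 + x1*x2 + x2**2, x1, x2)   # total degree d = 2
F = f.homogenize(t0)                        # homogeneous of total degree 2

# Setting t0 = 1 dehomogenizes back to f.
recovered = F.as_expr().subs(t0, 1)
```

Each monomial of f is multiplied by the power of t0 that raises its total degree to d, so F is homogeneous of degree d and F(1, x_1, x_2) = f(x_1, x_2), as in (2.5).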
Corollary 2.6. Let f = (f_0, …, f_n) be a system of polynomials with A_i = dΣ_n ∩ Z^n for i = 0, …, n. Assume that the system F = (F_0, …, F_n) consists of the homogenizations of the f_i according to the process in (2.5), and denote the set of simultaneous nonzero solutions of F by Z(F). Suppose that Z(F) ∩ H_∞(t_0) = ∅, where H_∞(t_0) is the hyperplane at infinity given by t_0 = 0. Then the system of polynomials f = 0 has no solution if and only if Res_A(f_0, …, f_n) ≠ 0.

Proof. If Res_A(f_0, …, f_n) ≠ 0, then by the definition of the sparse eliminant the system f_0(x) = … = f_n(x) = 0 has no solution. Conversely, let F_i be the homogenization of f_i as in (2.5) with the corresponding variables t = (t_0, …, t_n), i.e. F_i(t) = t_0^d f_i(x). If the system of polynomials f = 0 has no solution, then any nontrivial common zero of the F_i must satisfy t_0 = 0, which contradicts our assumption; hence F has no nontrivial common zero. Therefore, by Theorem 2.2 we have Res_{d,…,d}(F_0, …, F_n) ≠ 0, and by Proposition 2.5 also Res_A(f_0, …, f_n) ≠ 0.

2.2.3. Sparse Resultant.
Although it generalizes the multipolynomial resultant and covers a considerably larger class of polynomial systems, the sparse eliminant does not satisfy certain properties, such as the additivity property and the Poisson formula, which are essential in many applications. More recently, D'Andrea and Sombra [13] introduced the following version, which has the desired features:

Definition 2.7. The sparse resultant, denoted by Res_A, is defined as any primitive polynomial in Z[u] that is the defining equation of the direct image π_A∗(W_A) of W_A under the projection π_A, if this variety has codimension one; otherwise we set Res_A := 1. The expression Res_A(f_0, …, f_n) is the evaluation of Res_A at the coefficients of f_0, …, f_n.
According to this definition, the sparse resultant need not be irreducible: up to sign, it is the power of the irreducible sparse eliminant with exponent deg(π_A|_{W_A}), the degree of the projection π_A restricted to W_A. We also remark that the sparse resultant is identically 1 whenever the sparse eliminant is identically 1.

Example 2.8. Let
For a detailed account of the subject we refer the reader to the manuscripts [13] and [14].
2.2.4. Directional Resultant. For a finite subset A ⊂ Z^n and a non-zero vector v ∈ Z^n we denote A^v := {a ∈ A : ⟨v, a⟩ = s_Q(v)}, where Q = conv(A) and s_Q(v) is as in equation (2.1). For a Laurent polynomial f(x) = Σ_{J ∈ A} u_J x^J with support supp(f) = A, we also define the directed polynomial f^v(x) := Σ_{J ∈ A^v} u_J x^J.

Definition 2.9. Let A_1, …, A_n ⊂ Z^n be a family of n non-empty finite subsets, v ∈ Z^n \ {0}, and v^⊥ ⊂ R^n the orthogonal subspace. Then there exist b_{i,v} ∈ Z^n such that A_i^v − b_{i,v} ⊂ Z^n ∩ v^⊥. The resultant of A_1, …, A_n in the direction of v, denoted by Res_{A^v}, is defined as the sparse resultant of the family of finite subsets A_1^v − b_{1,v}, …, A_n^v − b_{n,v}. For Laurent polynomials f_1, …, f_n with supp(f_i) = A_i, setting g_{i,v}(x) := x^{−b_{i,v}} f_i^v(x), the directional resultant Res_{A^v}(f_1^v, …, f_n^v) is defined as the evaluation of the resultant Res_{A^v} at the coefficients of the g_{i,v}.
We remark that the definition of the directional resultant is independent of the choice of the vectors b_{i,v} (see [13, Proposition 3.3]). Moreover, the directional resultant Res_{A^v} can differ from the constant 1 only if the direction vector v is an inward pointing normal to a facet of the Minkowski sum Σ_{i=1}^n conv(A_i) (cf. [13, Proposition 3.8]). Therefore, for a family of subsets A_1, …, A_n ⊂ Z^n there are only finitely many directions v ∈ Z^n \ {0} for which the directional resultant can vanish.
Example 2.10. Let f(x) = a_0 + … + a_n x^n ∈ C[x] be a polynomial of degree n. Then the only nontrivial directional resultants correspond to the directions v = 1 and v = −1, for which the directed polynomials are f^1 = a_0 and f^{−1} = a_n x^n, respectively.

In the last part of this section we review Bernstein's theorem on the number of common solutions of Laurent polynomial systems and its relation to the directional resultant. The classical Bézout theorem states that for n polynomials g_1, …, g_n ∈ C[x_1, …, x_n] of positive degrees d_1, …, d_n, the system g_1 = … = g_n = 0 has either infinitely many solutions or the number of complex roots cannot exceed d_1 ··· d_n. Moreover, if the solutions in the hyperplane at infinity are counted with multiplicity, the exact number of solutions in the complex projective space P^n is d_1 ··· d_n (see e.g. [11]). A generalization of this result to the context of Laurent polynomials was obtained by Bernstein [6] (see also Kushnirenko [23]). More precisely, we have the following:

Theorem 2.11 ([6]). Let f = (f_1, …, f_n) be a system of Laurent polynomials with supports supp(f_i) = A_i ⊂ Z^n for i = 1, …, n. If for every nonzero vector v ∈ Z^n the directed system f^v = (f_1^v, …, f_n^v) has no common zeros in (C^*)^n, then the solutions of the system f = 0 in (C^*)^n are isolated and their exact number is #Z(f) = MV(conv(A_1), …, conv(A_n)), the mixed volume of the Newton polytopes.

In particular, for a system of Laurent polynomials f = (f_1, …, f_n), if the directional resultant Res_{A^v}(f_1^v, …, f_n^v) ≠ 0 for every direction v ∈ Z^n \ {0}, then the simultaneous solutions of the system f are isolated. This condition on the directional resultant holds for a generic choice of f (i.e. all f outside some algebraic subset) in the space of coefficients. In the next section, we prove a probabilistic version of this result for polynomial systems with Bernoulli coefficients.
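In the plane, the mixed volume appearing in Bernstein's theorem can be computed directly from areas via MV(Q_1, Q_2) = Area(Q_1 + Q_2) − Area(Q_1) − Area(Q_2). The sketch below (our illustration, under this normalization) verifies that for the dense supports Q_i = d_i Σ_2 one recovers the Bézout number d_1 d_2.

```python
# Sketch (ours): in the plane, MV(Q1, Q2) = Area(Q1+Q2) - Area(Q1) - Area(Q2),
# so for dense supports Q_i = d_i * Sigma_2 Bernstein's count equals Bezout's.

def cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; hull vertices in counterclockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def hull_area(points):
    """Shoelace area of the convex hull of a finite planar set."""
    h = convex_hull(points)
    s = sum(x0 * y1 - x1 * y0 for (x0, y0), (x1, y1) in zip(h, h[1:] + h[:1]))
    return abs(s) / 2.0

def minkowski_sum(P, Q):
    return [(p[0] + q[0], p[1] + q[1]) for p in P for q in Q]

def mixed_volume_2d(P, Q):
    return hull_area(minkowski_sum(P, Q)) - hull_area(P) - hull_area(Q)

def simplex_support(d):
    """Vertices of d * Sigma_2, the Newton polytope of a dense degree-d polynomial."""
    return [(0, 0), (d, 0), (0, d)]

mv = mixed_volume_2d(simplex_support(2), simplex_support(3))  # Bezout: 2 * 3
```

Here Area(d Σ_2) = d²/2 and (d_1 + d_2)Σ_2 = d_1Σ_2 + d_2Σ_2, so the formula gives ((d_1+d_2)² − d_1² − d_2²)/2 = d_1 d_2, matching Theorem 2.11 with Bézout's bound.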

EQUIDISTRIBUTION OF ZEROS
3.1. Random Polynomial Systems. First, we recall a theorem of Kozma and Zeitouni [22] asserting that overdetermined random Bernoulli polynomial systems have no common zeros with overwhelming probability:

Theorem 3.1. Let f_1, …, f_{n+1} ∈ Z[x_1, …, x_n] be n + 1 independent random Bernoulli polynomials of degree d, and denote by p_{n,d} the probability that the system f_1(x) = … = f_{n+1}(x) = 0 has a common solution. Then there exists a dimensional constant C = C(n) > 0 such that p_{n,d} ≤ C/d.

Next, we prove our main result:

Proof of Theorem 1.1. Let f_{d,i} be a random Bernoulli polynomial of the form

f_{d,i}(x) = Σ_{J ∈ dΣ_n ∩ Z^n} α_{i,J} x^J,

where {α_{i,J}} is a family of independent Bernoulli random variables for i = 1, …, n.
We investigate the directional resultants of the system f_d for all nonzero primitive direction vectors v ∈ Z^n. By [13, Proposition 3.8], it is enough to check the inward normals to the Minkowski sum of the supports, ndΣ_n, which has n + 1 facets with the n + 1 inward normals given by v_m := e_m for m = 1, …, n and v_{n+1} := −Σ_{m=1}^n e_m, where {e_m}_{m=1}^n is the standard basis of R^n.
For v_m = e_m, the intersection of the support with the supporting hyperplane in the direction e_m is of the form

(3.2) A^{v_m} = {J ∈ dΣ_n ∩ Z^n : j_m = 0}.

Hence, the polynomials f_i^{v_m} can be written as

f_i^{v_m}(x) = Σ_{J ∈ A^{v_m}} α_{i,J} x^J

for i = 1, …, n. Note that the polynomials f_i^{v_m} depend on n − 1 variables. As in Definition 2.9, we choose the vectors b_{i,v_m} = 0, so that A^{v_m} − b_{i,v_m} ⊂ Z^n ∩ v_m^⊥, and we may take g_{i,v_m} := f_i^{v_m} for each i = 1, …, n.
Recall that for two univariate polynomials h_1, h_2 ∈ C[x], their resultant Res(h_1, h_2) is zero if and only if h_1 and h_2 have a common root in C. Therefore, if n = 2, the necessary and sufficient condition for g_{1,v_m} and g_{2,v_m} to have zero resultant is that they have a common zero. Theorem 3.1 implies that there exists a constant K_m, independent of d, such that the aforementioned event has probability at most K_m/d.
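The univariate criterion just used can be made concrete via the Sylvester matrix, whose determinant is the resultant. The following sketch is the standard construction, not code from this note, and the sample polynomials are ours.

```python
import numpy as np

# The resultant of two univariate polynomials as the determinant of their
# Sylvester matrix (standard construction; the sample polynomials are ours).

def sylvester_matrix(p, q):
    """p, q: real coefficient lists, highest degree first, nonzero leading terms."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                  # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                  # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

def resultant(p, q):
    return np.linalg.det(sylvester_matrix(p, q))

r_common = resultant([1, 0, -1], [1, -1])  # x^2 - 1 and x - 1 share the root 1
r_coprime = resultant([1, 0, -1], [1, 2])  # x^2 - 1 and x + 2 are coprime
```

Here Res(x² − 1, x − 1) = 0, while Res(x² − 1, x + 2) = 3 ≠ 0, matching the vanishing criterion above.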
On the other hand, when n > 2, we apply the homogenization process to each (n − 1)-variable polynomial g_{i,v_m}, i = 1, …, n, as described in equation (2.5), and obtain n-variable homogeneous polynomials G_{i,v_m}. In order to compare the sparse resultant of the polynomials g_{i,v_m} with the multipolynomial resultant of the homogeneous polynomials G_{i,v_m}, we check the conditions of Corollary 2.6. Let Z(G) be the set of nontrivial solutions of the system G = (G_{1,v_m}, …, G_{n,v_m}) and suppose that G has a solution ξ = (t, ξ_2, …, ξ_n) in the hyperplane at infinity H_∞(t).
Evaluating these homogeneous polynomials at t = 0, we obtain the top degree homogeneous parts of the polynomials g_{i,v_m} for i = 1, …, n. Since ξ ∈ H_∞(t), it has a nonzero coordinate ξ_k for some k ∈ {2, …, n}. For simplicity, let us assume k = 2 and define the new variables z_i := ξ_{i+2}/ξ_2 for i = 1, …, n − 2. Applying this change of variables, we obtain polynomials whose exponents are given by ϕ : R^n → R^{n−2} with ϕ(j_1, …, j_n) = (j_3, …, j_n). This gives n random Bernoulli polynomials of degree d in n − 2 variables. Hence, by Theorem 3.1, there exists a positive constant C_i, depending only on the dimension n, such that the probability that the overdetermined system of Bernoulli polynomials G_{i,v_m}(z_1, …, z_{n−2}) has a common solution is less than C_i/d. We infer that the system of homogenized polynomials G_{i,v_m} has no common zero on the hyperplane at infinity H_∞(t) except on a set of probability at most C_i/d. Then, by Corollary 2.6, outside of a set of small probability, the system of polynomials g_{i,v_m} has a common solution if and only if the directional resultant Res_{A^{v_m}}(f_1^{v_m}, …, f_n^{v_m}) = 0.
Now, since the system of Bernoulli polynomials g_{i,v_m} contains n polynomials in n − 1 variables, by Theorem 3.1 there is a dimensional constant C_i such that the probability that this system has a common solution is at most C_i/d. Hence, outside of a set of probability at most C_i/d, the directional resultant Res_{A^{v_m}}(f_1^{v_m}, …, f_n^{v_m}) does not vanish.

Next, we consider the inward normal vector v_{n+1} = −Σ_{m=1}^n e_m, and we find the minimal weighted set in this direction to be A^{v_{n+1}} = {J ∈ dΣ_n ∩ Z^n : |J| = d}. Hence, the directed polynomials in this case are of the form

f_i^{v_{n+1}}(x) = Σ_{|J| = d} α_{i,J} x^J.

The set A^{v_{n+1}} is not contained in v_{n+1}^⊥, hence we need to translate it by subtracting a suitable vector b_{i,v_{n+1}}. For Laurent polynomial systems, the sparse resultant is invariant under translations of the supports (see [13, Proposition 3.3]). Since the polynomials f_{d,i} are not Laurent, we need to determine the effect of these translations. Consider the system of Bernoulli polynomials f_d and the set of its simultaneous zeros Z(f_d). Let x = (x_1, …, x_n) ∈ Z(f_d) and assume that x_1 = 0. In order to estimate the probability of this event, we evaluate the system f_d at x_1 = 0 and obtain a new system of n Bernoulli polynomials in n − 1 variables. By Theorem 3.1, there exists a constant C_1, independent of d, such that this system has a common solution with probability at most C_1/d. Therefore, the probability of the event that x_1 = 0 is less than C_1/d. Hence, there is no harm in translating the supports outside of a set of probability at most C/d, where C := Σ_{i=1}^n C_i. Now, choosing the vector b_{i,v_{n+1}} = (d, 0, …, 0), so that A^{v_{n+1}} − b_{i,v_{n+1}} ⊂ Z^n ∩ v_{n+1}^⊥, we obtain the polynomials

(3.7) g_{i,v_{n+1}}(x) = x_1^{−d} f_i^{v_{n+1}}(x) = Σ_{|J| = d} α_{i,J} x^{w(J)},

with w : R^n → R^n satisfying w(j_1, j_2, …, j_n) = (−d + j_1, j_2, …, j_n). We substitute the new variables y_i := x_{i+1}/x_1 into g_{i,v_{n+1}} for i = 1, …, n − 1 and obtain

g_{i,v_{n+1}}(y) = Σ_{|J| = d} α_{i,J} y^{σ(J)}

for y ∈ C^{n−1}, with σ : R^n → R^n satisfying σ(j_1, j_2, …, j_n) = (0, j_2, …, j_n).

The system of polynomials g_{i,v_{n+1}}(y), i = 1, …, n, contains n random Bernoulli polynomials in n − 1 variables, as in the cases v_m = e_m. Applying the same argument, we see that outside of a set of probability at most a dimensional constant times 1/d, the directional resultant Res_{A^{v_{n+1}}}(f_1^{v_{n+1}}, …, f_n^{v_{n+1}}) does not vanish.

Now, we define the exceptional set E_{n,d} as the subset of Poly_{n,d} consisting of those systems f_d that have a zero directional resultant for some nonzero primitive vector v, or that have a common solution x ∈ C^n with x_i = 0 for some i = 1, …, n. By the above estimates, there exists a positive constant K, independent of d, such that Prob_d(E_{n,d}) ≤ K/d.

Next, we recall a deterministic equidistribution result for the solutions of systems of polynomials with integer coefficients [14]. For a polynomial f ∈ C[x_1, …, x_n], the supremum norm of f on the unit torus is defined as ‖f‖_sup := sup{|f(x)| : x ∈ (S^1)^n}. Let ν_Haar be the Haar measure on C^n with support (S^1)^n and of total mass 1. Assume that f ∈ Poly_{n,d} is a polynomial mapping such that the set of simultaneous zeros Z(f) is discrete. We denote the discrete probability measure on C^n associated to Z(f) by δ_{Z(f)}. The following result gives the asymptotic distribution of the zeros of such a system f when the coefficients are integers:

Theorem 3.2 ([14]). Let f = (f_1, …, f_n) be a polynomial mapping with integer coefficients such that Z(f) is discrete, the directional resultants Res_{A^v}(f_1^v, …, f_n^v) do not vanish for all v ∈ Z^n \ {0}, and log ‖f_i‖_sup = o(d). Then lim_{d→∞} δ_{Z(f)} = ν_Haar in the weak topology.

EXPECTED ZERO DISTRIBUTION
In this section, we introduce radial and angle discrepancies for random Bernoulli polynomial mappings in order to study the asymptotics of expected zero measures. We adapt these concepts from [14] and refer the reader to the manuscript [14] and references therein for a detailed account of the preliminary results of this section.
Let Z be a 0-dimensional effective cycle in C^n; that is, there is a non-empty finite collection of points ξ = (ξ_1, …, ξ_n) ∈ C^n and multiplicities m_ξ ∈ N such that Z = Σ_ξ m_ξ [ξ]. The degree of Z is defined by deg(Z) = Σ_ξ m_ξ, which is a positive integer.

Definition 4.1 ([14]). Let Z be a 0-dimensional effective cycle in C^n. For each α = (α_1, …, α_n) and β = (β_1, …, β_n) ∈ R^n such that −π ≤ α_j < β_j ≤ π, j = 1, …, n, we consider the cycle Z_{α,β} obtained by restricting Z to the points ξ with α_j ≤ arg(ξ_j) < β_j for all j. The angle discrepancy of Z is defined as

∆_ang(Z) := sup { | deg(Z_{α,β})/deg(Z) − Π_{j=1}^n (β_j − α_j)/(2π) | : −π ≤ α_j < β_j ≤ π }.

For 0 < ε < 1 we consider the cycle Z_ε obtained by restricting Z to the points ξ whose coordinates satisfy 1 − ε ≤ |ξ_j| ≤ (1 − ε)^{−1}. The radius discrepancy of Z with respect to ε is defined as

∆_rad(Z, ε) := 1 − deg(Z_ε)/deg(Z).

Note that 0 < ∆_ang(Z) ≤ 1 and 0 ≤ ∆_rad(Z, ε) ≤ 1. Observe that the angle discrepancy and the radius discrepancy are generalizations of their one dimensional versions defined in [15, 18].
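In dimension one, the angular part of Definition 4.1 can be estimated numerically for a random ±1 polynomial. The sketch below is ours: it uses a fixed partition into equal arcs rather than the supremum over all arcs, and the degree and seed are arbitrary choices.

```python
import numpy as np

# One-variable angle statistics for the zero set of a random +-1 polynomial
# (our simplified sketch of Definition 4.1: fixed equal arcs instead of the
# supremum over all arcs; degree and seed are arbitrary choices).

def angle_deviation(zeros, bins=4):
    """Max deviation between the empirical angular mass on `bins` equal
    arcs of S^1 and the Haar mass 1/bins of each arc."""
    args = np.angle(zeros)                         # arguments in (-pi, pi]
    edges = np.linspace(-np.pi, np.pi, bins + 1)
    counts, _ = np.histogram(args, bins=edges)
    return np.max(np.abs(counts / len(zeros) - 1.0 / bins))

rng = np.random.default_rng(1)
d = 200
zeros = np.roots(rng.choice([-1.0, 1.0], size=d + 1))
deviation = angle_deviation(zeros)
```

The Erdős–Turán theorem bounds this deviation by a constant times sqrt(log d / d) for any ±1 polynomial of degree d; the observed value is typically far smaller.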
Let A_1, …, A_n ⊂ Z^n be a collection of finite sets and let Q_i = conv(A_i) for each i = 1, …, n. Throughout this section we assume that D := MV_n(Q_1, …, Q_n) ≥ 1. For a vector w ∈ S^{n−1} in the unit sphere of R^n, let w^⊥ be its orthogonal subspace and π_{w^⊥} : R^n → w^⊥ the corresponding orthogonal projection. We let MV_{w^⊥} denote the mixed volume of convex bodies in w^⊥ induced by the Euclidean measure on w^⊥. Let f = (f_1, …, f_n) be a mapping whose coordinates f_i are Laurent polynomials with supp(f_i) = A_i for i = 1, …, n. Following [14], one defines the Erdős–Turán size η(f) of f in terms of the sup-norms ‖f_i‖_sup, the directional resultants Res_{A^v}(f_1^v, …, f_n^v), and the mixed volumes MV_{w^⊥}; here ⟨·, ·⟩ is the standard inner product in R^n and the product in the denominator of the defining formula is taken over all non-zero primitive vectors v ∈ Z^n. We remark that in dimension one the Erdős–Turán size of a polynomial mapping f coincides with the bound in the Erdős–Turán theorem [15]. The following theorem bounds the angle discrepancy and the radius discrepancy of Z(f) in terms of the Erdős–Turán size of f; for the one dimensional version see for instance [15] and [18].

Theorem 4.3 ([14]). Let A_1, …, A_n be non-empty finite subsets of Z^n such that D = MV_n(Q_1, …, Q_n) ≥ 1. Then the angle discrepancy ∆_ang(Z(f)) and the radius discrepancy ∆_rad(Z(f), ε) of a mapping f as above are bounded above by explicit functions of the Erdős–Turán size η(f).

For a random Bernoulli polynomial mapping f_d we let Z(f_d) be the set of its simultaneous zeros. We define the angle discrepancy ∆_ang(Z(f_d)) and the radius discrepancy ∆_rad(Z(f_d), ε) as above whenever Z(f_d) is a discrete set of points; otherwise, we set ∆_rad(Z(f_d), ε) = ∆_ang(Z(f_d)) = 1. Note that, as our probability space (Poly_{n,d}, Prob_d) is discrete, measurability of these random variables is not an issue in this setting. Next, we estimate the asymptotic expected discrepancies. Let E_{n,d} be the exceptional set containing all the systems in Poly_{n,d} with zero directional resultant for some nonzero primitive vector v ∈ Z^n, as described in the proof of Theorem 1.1. Since 0 < ∆_ang(Z(f_d)) ≤ 1, there exists a constant K_1, independent of d, such that

∫_{E_{n,d}} ∆_ang(Z(f_d)) dProb_d(f_d) ≤ Prob_d(E_{n,d}) ≤ K_1 d^{−1},

which completes the proof.
Theorem 1.2. Let f_d = (f_{d,1}, …, f_{d,n}) be a system of random polynomials with independent ±1 valued Bernoulli coefficients and let E_{n,d} ⊂ Poly_{n,d} be as in Theorem 1.1. Then for each sequence f_d ∈ Poly_{n,d} \ E_{n,d} we have lim_{d→∞} δ_{Z(f_d)} = ν_Haar in the weak topology. In particular, δ_{Z(f_d)} → ν_Haar in probability Prob_d as d → ∞.
Hence, by Theorem 3.2, lim_{d→∞} δ_{Z(f_d)} = ν_Haar in the weak topology. In particular, δ_{Z(f_d)} → ν_Haar in probability, since Prob_d(E_{n,d}) → 0 as d → ∞.

We have

∫_{E_{n,d}} ∆_ang(Z(f_d)) dProb_d(f_d) ≤ Prob_d(E_{n,d}) ≤ K_1 d^{−1}.

Hence, ∫_{E_{n,d}} ∆_ang(Z(f_d)) dProb_d(f_d) → 0 as d → ∞. Let f_d ∈ Poly_{n,d} \ E_{n,d}; then by Proposition 4.2 the Erdős–Turán size η(f_d) is bounded with a constant K_2 which is independent of d. On the other hand, by Theorem 4.3, for f_d ∈ Poly_{n,d} \ E_{n,d} there exist constants K_3, K_4, K_5 and K_6 such that ∆_ang(Z(f_d)) ≤ K_3 η(f_d). Therefore,

(4.15) ∫_{Poly_{n,d} \ E_{n,d}} | deg(Z(f_d)_{α,β})/d^n − Π_{j=1}^n (β_j − α_j)/(2π) | dProb_d(f_d) ≤ ∫_{Poly_{n,d} \ E_{n,d}} ∆_ang(Z(f_d)) dProb_d(f_d) + Kn/d.

Note that the set U̅ \ U is a union of a finite number of subsets U_m of the form (4.12) such that U_m ∩ (S^1)^n = ∅ for all m; thus lim_{d→∞} ν_d(U_m) = 0 by the previous case, and hence lim_{d→∞} ν_d(U̅ \ U) = 0. Therefore, by Proposition 4.4 and (4.15),

lim_{d→∞} ν_d(U) = Π_{j=1}^n (β_j − α_j)/(2π) = ν_Haar(U).

Recall also that Vol_n(d_1 Q_1 + … + d_n Q_n) is a homogeneous polynomial of degree n in the variables d_1, …, d_n ∈ Z_+, where Vol_n denotes the normalized volume of subsets of R^n with respect to the Lebesgue measure. The coefficient of the monomial d_1 ··· d_n is called the mixed volume of Q_1, …, Q_n and is denoted by MV(Q_1, …, Q_n). One can use the polarization formula to compute the mixed volume of the convex sets Q_1, …, Q_n. Namely,

MV(Q_1, …, Q_n) = Σ_{∅ ≠ I ⊆ {1,…,n}} (−1)^{n−#I} Vol_n(Σ_{i ∈ I} Q_i).

2.2. Resultant of polynomial systems.
Consider the canonical projection π of the incidence variety W onto the first (coefficient) coordinate, where P(C^N) denotes the complex projective space [11]. By the Projective Extension Theorem (see e.g. [11]), the image π(W) forms a variety in the affine space C^N. The multipolynomial resultant Res_{d_0,…,d_n} is defined as the unique (up to sign) irreducible polynomial in Z[u_0, …, u_n] which is the defining equation of the variety π(W). The resultant of the homogeneous polynomials F_0, …, F_n is the evaluation of Res_{d_0,…,d_n} at the coefficients of F_0, …, F_n, and it is denoted by Res_{d_0,…,d_n}(F_0, …, F_n).
Proof of Theorem 1.2. Consider the system of Bernoulli polynomials f_d = (f_{d,1}, …, f_{d,n}). Since the coefficients are ±1 valued, ‖f_{d,i}‖_sup is at most the number of monomials #(dΣ_n ∩ Z^n), which is the dimension of the space of polynomials Poly_{n,d}. This in turn implies that log ‖f_{d,i}‖_sup = o(d). Moreover, by Theorem 1.1, for each f_d ∈ Poly_{n,d} \ E_{n,d} the zero set Z(f_d) is discrete and the directional resultants Res_{A^v}(f_{d,1}^v, …, f_{d,n}^v) do not vanish for all v ∈ Z^n \ {0}; hence the hypotheses of Theorem 3.2 are satisfied.