
Given $W \subset \mathbb C$, let $S_W$ be the set of polynomials in $\mathbb R[x]$ that vanish on $W$ and have only nonnegative coefficients.

Warm-up question: It's clear that if $W$ contains a positive real number, then $S_W = \{0\}$. Is the converse true?

I'm pretty sure the answer is "yes" but I don't know if I'd quite call what I have a proof.

More generally, how do the combinatorics of $W$ affect what you can say about $S_W$? In particular, I'd like a handle on the following problem.

Problem: Find (bounds on) the smallest number of nonzero terms in a polynomial belonging to $S_W$.

What conditions on $W$ would allow us to do this? As I alluded to above, I'm especially interested in combinatorial conditions, e.g., on where the numbers in $W$ lie on the complex plane.

Maybe the following special case is more tractable. Actually, it's the only one that really matters to me anyway.

Special case: What if $W$ is a set of complex $n$th roots of unity?

In this case, we'll want to assume $1 \not\in W$. Then the polynomial $x^{n-1}+x^{n-2}+\cdots+x+1$ gives an upper bound of $n$. If every number in $W$ lies in the left half-plane and $W$ is a self-conjugate set, then an upper bound of $|W|+1$ can be obtained by taking the product of the corresponding quadratic factors.
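For concreteness, here is a small numerical sanity check of the $|W|+1$ bound (a Python sketch; the choice $n=5$ and this particular $W$ are just an illustrative example):

```python
import cmath, math

def polyval(p, z):
    """Evaluate a polynomial given as a coefficient list, lowest degree first."""
    return sum(c * z**k for k, c in enumerate(p))

# W: the 5th roots of unity in the open left half-plane -- a self-conjugate
# pair at angles 144 and 216 degrees, so |W| = 2.
n = 5
W = [cmath.exp(2j * math.pi * k / n) for k in (2, 3)]

# The corresponding quadratic x^2 - 2*cos(theta)*x + 1 has nonnegative
# coefficients precisely because cos(theta) <= 0 in the left half-plane.
theta = 2 * math.pi * 2 / n
p = [1.0, -2 * math.cos(theta), 1.0]

assert all(c >= 0 for c in p)                      # nonnegative coefficients
assert all(abs(polyval(p, w)) < 1e-9 for w in W)   # vanishes on W
print(sum(1 for c in p if c != 0))                 # |W| + 1 = 3 nonzero terms
```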

Meanwhile, I'm not sure I know how to derive a lower bound better than $2$ under any conditions!

Any thoughts or theory out there that might help shed light on these questions?


3 Answers


$\def\ZZ{\mathbb{Z}}$ $\def\RR{\mathbb{R}}$ $\def\QQ{\mathbb{Q}}$ $\def\Re{\mathrm{Re}}$ $\def\Im{\mathrm{Im}}$Let $|W|=n$. I will show that $2n+1$ monomials are always sufficient, and are generically necessary. (Noam Elkies has already shown that $2n+1$ is generically sufficient.) If $W$ is closed under complex conjugation and has no real points, then this argument shows that $n+1$ is enough, since we could replace $W$ by $W' := \{ \omega \in W : \Im(\omega) > 0 \}$ and any real polynomial which vanishes on $W'$ will also vanish on $W$.

Let the elements of $W$ be $\omega_j = x_j + i y_j = r_j e^{i \theta_j}$. For any nonnegative integer $m$, let $v_m$ be the vector $$v_m := (\Re(\omega_1^m), \Im(\omega_1^m), \ldots, \Re(\omega_n^{m}), \Im(\omega_n^{m})).$$ So the $v_m$ are vectors in $\RR^{2n}$, and our goal is to find a positive linear relation among them.
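A small sketch of this translation in Python (the sample $W$, the primitive cube roots of unity, is my own illustrative choice, not assumed by the argument):

```python
import cmath

# Hypothetical example set W with no positive reals: primitive cube roots of unity.
W = [cmath.exp(2j * cmath.pi / 3), cmath.exp(-2j * cmath.pi / 3)]
n = len(W)

def v(m):
    """The vector v_m = (Re w1^m, Im w1^m, ..., Re wn^m, Im wn^m) in R^(2n)."""
    out = []
    for w in W:
        z = w ** m
        out.extend([z.real, z.imag])
    return out

# A positive relation sum a_i v_{m_i} = 0 is exactly a polynomial
# sum a_i x^{m_i} with nonnegative coefficients vanishing on all of W.
# Here x^2 + x + 1 vanishes at both primitive cube roots of unity:
relation = [(1.0, 0), (1.0, 1), (1.0, 2)]  # pairs (coefficient a_i, exponent m_i)
total = [0.0] * (2 * n)
for a, m in relation:
    total = [t + a * c for t, c in zip(total, v(m))]
assert all(abs(t) < 1e-9 for t in total)   # the relation holds
```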

Generic necessity: Suppose that we had a linear relation $\sum a_i v_{m_i}=0$ using only $2n$ terms. Then the vectors $v_{m_1}$, ..., $v_{m_{2n}}$ would be linearly dependent, so the $2n \times 2n$ matrix they form would have determinant zero. This is a nontrivial polynomial relation between the $x_j$ and $y_j$ with integer coefficients. If the $\omega_j$ are chosen generically then the $x_j$ and $y_j$ will be algebraically independent over $\QQ$, and no such relation will exist.

Sufficiency: This is essentially Noam's proof when the $\theta_j$'s are linearly independent over $\mathbb{Q}$, but with a lot more checking of degenerate cases. Suppose, for the sake of contradiction, that we cannot write $0$ as $\sum_{i=1}^{2n+1} a_i v_{m_i}$ with the $a_i \geq 0$ and not all $0$. Equivalently, assume that, for any $(2n+1)$-tuple of vectors of the form $v_m$, the origin is not in the convex hull of the tuple.

By the contrapositive of Carathéodory's theorem, we conclude that $0$ is not in the convex hull of the vectors $v_m$. Let $K$ be the closure of the convex hull of the $v$'s. We conclude that $0$ is not in the interior of $K$. By Farkas's lemma, there is a linear function $\lambda : \RR^{2n} \to \RR$ which is $\geq 0$ on $K$ and not identically $0$ on $K$. Equivalently, $\lambda(v_m)$ is nonnegative for all $m$ and is positive for some $m$.

We can write $\lambda$ in the form $$(f_1, g_1, f_2, g_2, \ldots, f_n, g_n) \mapsto \Re \left( \sum (a_j+i b_j) (f_j+i g_j) \right)$$ for some $a_j$ and $b_j$. Set $\zeta_j = a_j+i b_j$. Our hypothesis now is that $$\phi(m) : = \Re \left( \sum_j \zeta_j \omega_j^m \right)$$ is nonnegative for all $m$ and positive for some $m$; our goal is to deduce a contradiction.

Let $R$ be the set of distinct values of $|\omega_j|$, and let the elements of $R$ be $r_1 > r_2 > \cdots > r_p$. Let the elements of $W$ with absolute value $r_j$ be $r_j \exp(i \theta^j_1)$, $r_j \exp(i \theta^j_2)$, ..., $r_j \exp(i \theta^j_{k(j)})$. Reindex the $\zeta$'s accordingly as $\zeta^1_1$, $\zeta^1_2$, ..., $\zeta^1_{k(1)}$, ..., $\zeta^p_1$, ..., $\zeta^p_{k(p)}$. Put $$\phi_j(m) := \Re \left( \sum_{\ell=1}^{k(j)} \zeta^j_{\ell} \exp(i m \theta^j_{\ell}) \right)$$ so $$\phi(m) = \sum_j r_j^m \phi_j(m). \quad (\ast)$$ Since $\phi(m)$ is not identically zero, not all of the $\phi_j(m)$ are identically zero. Let $j_0$ be minimal such that $\phi_{j_0}(m)$ takes nonzero values.

Lemma There is $c>0$ so that $\phi_{j_0}(m)$ is infinitely often less than $-c$.

Proof By assumption, $\phi_{j_0}$ is not identically zero. Since none of the $\theta$'s are $0 \bmod 2 \pi \ZZ$, the Cesàro limit $\lim_{M \to \infty} \frac{1}{M} \sum_{1 \leq m \leq M} \phi_{j_0}(m)$ is $0$. (This is where we use that the $\omega$'s are not positive reals. Note that this is true even if some of the $\theta$'s are in $2 \pi \QQ$.) So $\phi_{j_0}$ is negative for some $m$, say $m_0$. Let $\phi_{j_0}(m_0)=-2c$.

Consider any $\delta>0$. By a basic pigeonhole argument, we can find infinitely many $N$ such that $| N \theta^{j_0}_{\ell} \bmod 2 \pi \ZZ| < \delta$ simultaneously for all $\ell = 1, \ldots, k(j_0)$. (Note that this is true even if the $\theta$'s are not linearly independent over $\QQ$.) Choosing $\delta$ small enough, for such $N$'s, we have $\phi_{j_0}(m_0+N) < -c$. $\square$
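The pigeonhole step is easy to see numerically; here is a brute-force sketch in Python with two made-up angles, deliberately chosen $\QQ$-dependent with $2\pi$:

```python
import math

# Hypothetical angles: 0.3 and 1/3 of a full turn (both rational multiples of 2*pi).
thetas = [2 * math.pi * 0.3, 2 * math.pi / 3]
delta = 0.1

def dist_to_zero(x):
    """Distance from x to the nearest multiple of 2*pi."""
    r = x % (2 * math.pi)
    return min(r, 2 * math.pi - r)

# N works when every N*theta is within delta of 0 mod 2*pi simultaneously.
good_N = [N for N in range(1, 2000)
          if all(dist_to_zero(N * t) < delta for t in thetas)]
print(good_N[:5])   # multiples of 30 work here, and there are infinitely many
```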

Therefore, there are infinitely many $m$ for which the leading term of $(\ast)$ is at least as negative as $-c r_{j_0}^m$, while all the other terms are exponentially smaller. Thus, we have found an $m$ for which $\phi(m)<0$. This is a contradiction, and the theorem follows.

Remark: Carathéodory + Farkas + (something clever) is a general proof technique. Chapter 2 of Barvinok's A Course in Convexity has many nontrivial exercises of this form.

  • Well, this is a great answer. The idea to apply Carathéodory's theorem in this context, in particular, was extremely helpful. Thanks! — Commented Jul 16, 2015 at 11:51

As G. Myerson already noted, we must assume $W$ is finite. Then a different approach for the "warm-up" is to observe that if $\omega \in W$ is not a positive real then some power $\omega^N$ ($N \geq 1$) has non-positive real part. Therefore $\omega$ is a root of the polynomial $(X^N-\omega^N)(X^N-\overline\omega^N) = X^{2N} - 2\,{\rm Re}(\omega^N)\, X^N + |\omega|^{2N}$, which has only nonnegative coefficients. The product over all $\omega\in W$ gives a polynomial of the desired kind. It also gives an upper bound of $3^{\left|W\right|}$ on the number of monomials, since each factor has at most $3$ of them.
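A sketch of this construction in Python (the helper `factor_for` and the sample $\omega$ are mine, just for illustration):

```python
import cmath, math

def factor_for(omega, tol=1e-12):
    """Find the smallest N >= 1 with Re(omega^N) <= 0 and return N together with
    the coefficients (lowest degree first) of
    (X^N - omega^N)(X^N - conj(omega)^N) = X^(2N) - 2 Re(omega^N) X^N + |omega|^(2N).
    Assumes omega is not a positive real, so such an N exists."""
    N = 1
    while (omega ** N).real > tol:
        N += 1
    wN = omega ** N
    coeffs = [0.0] * (2 * N + 1)
    coeffs[0] = abs(omega) ** (2 * N)
    coeffs[N] = -2 * wN.real        # nonnegative since Re(omega^N) <= 0
    coeffs[2 * N] = 1.0
    return N, coeffs

# Example: omega = e^(2*pi*i/5) lies in the right half-plane, but its square
# (at angle 144 degrees) does not, so N = 2 works.
omega = cmath.exp(2j * math.pi / 5)
N, coeffs = factor_for(omega)
assert N == 2
assert all(c >= 0 for c in coeffs)                                   # nonnegative
assert abs(sum(c * omega**k for k, c in enumerate(coeffs))) < 1e-9   # root
```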

If $\{{\rm Im}\log\omega : \omega \in W, \omega \neq 0\} \cup \{2\pi\}$ is ${\bf Q}$-linearly independent then a single $N$ will do for all $\omega\in W$, because as $N$ varies the $\left|W\right|$-tuples of angles formed by the $N$-th powers are dense in $({\bf R}/2\pi{\bf Z})^{\left|W\right|}$. In that case a polynomial of degree $2\left|W\right|$ in $X^N$ will do, and we get an upper bound $2\left|W\right|+1$.

For a lower bound, clearly $2$ is enough iff there is some $N$ such that all nonzero elements of $W$ have the same $N$-th power and that power is negative; else we need at least $3$ monomials. If $W$ contains all $n-1$ nontrivial $n$-th roots of unity then we need at least $n$ monomials, because a polynomial $\sum_k a(k) x^k$ is a multiple of $(x^n-1)\big/(x-1)$ iff the $n$ sums $\sum_j a(k_0+nj)$ ($k_0=0,1,2,\ldots,n-1$) are all equal, and at least one of them must be positive if the $a(k)$ are nonnegative and not all zero.
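The residue-class criterion above is easy to check computationally; a tiny Python illustration for $n = 3$, with made-up coefficient lists:

```python
# A polynomial sum_k a(k) x^k is a multiple of 1 + x + ... + x^(n-1)
# iff the n residue-class sums sum_j a(k0 + n*j) are all equal.
n = 3

def class_sums(coeffs, n):
    """Sum the coefficients in each residue class of the exponent mod n."""
    sums = [0] * n
    for k, a in enumerate(coeffs):
        sums[k % n] += a
    return sums

# (1 + x + x^2)(2 + x^3) = 2 + 2x + 2x^2 + x^3 + x^4 + x^5 is a multiple:
multiple = [2, 2, 2, 1, 1, 1]
assert len(set(class_sums(multiple, n))) == 1     # all class sums equal 3

# 1 + x + x^3 is not a multiple: its class sums differ.
not_multiple = [1, 1, 0, 1]
assert len(set(class_sums(not_multiple, n))) > 1
```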

[added later] Another sharp bound is obtained if $W$ consists entirely of negative numbers: then $\prod_{\omega\in W} (X-\omega)$ has $\left|W\right|+1$ monomials, and this is best possible by Descartes' rule of signs. Come to think of it, this also gives a sharp bound if $W$ consists entirely of pure imaginaries: we may assume $W = -W$, and then $P_0(X) = \prod_{\omega\in W} (X-\omega)$ is again nonnegative, and if $P$ is any multiple of $P_0$ we can apply Descartes' rule to the even and odd parts of $P$ separately.
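A quick check of the negative-real case in Python (the particular $W$ is a made-up example):

```python
def mul_linear(p, w):
    """Multiply the coefficient list p (lowest degree first) by (x - w)."""
    q = [0.0] * (len(p) + 1)
    for i, c in enumerate(p):
        q[i + 1] += c       # contribution of x * p
        q[i] -= w * c       # contribution of -w * p
    return q

# A hypothetical W of distinct negative reals:
W = [-1.0, -2.0, -0.5]

p = [1.0]
for w in W:
    p = mul_linear(p, w)    # build prod (x - w)

# All |W| + 1 coefficients are positive, matching the Descartes bound.
assert len(p) == len(W) + 1
assert all(c > 0 for c in p)
```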


For the warm-up question, if $W$ is an infinite set, then $S_W=\{0\}$ whether $W$ contains a positive real or not, so let's assume it was intended that $W$ be finite.

If $\alpha$, not a positive real, is in $W$, then $p(x)=(x-\alpha)(x-\overline\alpha)$ vanishes at $\alpha$ and not at any positive real. More generally, if $W=\{\alpha_1,\dots,\alpha_m\}$ then $p(x)=\prod_j(x-\alpha_j)(x-\overline\alpha_j)$ vanishes on $W$ and not at any positive real.

I believe that if $p(x)$ has no positive real root then for $n$ sufficiently large $(x+1)^n p(x)$ has no negative coefficients. This follows from a result cited in the accepted answer to "Application of polynomials with non-negative coefficients". So the answer to the warm-up question (if I have interpreted that earlier answer correctly) is: yes, the converse is true.
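For instance, with $p(x) = x^2 - x + 1$ (whose roots are primitive 6th roots of unity, so there is no positive real root), a single factor of $x+1$ already suffices; a short Python sketch:

```python
def polymul(p, q):
    """Multiply two coefficient lists (lowest degree first) by convolution."""
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# p(x) = x^2 - x + 1; multiply by (x + 1) until no coefficient is negative.
p = [1.0, -1.0, 1.0]            # lowest degree first
n = 0
while any(c < 0 for c in p):
    p = polymul(p, [1.0, 1.0])  # one more factor of (x + 1)
    n += 1
print(n, p)                     # n = 1: (x + 1)(x^2 - x + 1) = x^3 + 1
```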

