
In this MO post, I ran into the following family of polynomials: $$f_n(x)=\sum_{m=0}^{n}\prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}.$$ In the context of the post, $x$ was a prime number, and $f_n(x)$ counted the number of subspaces of an $n$-dimensional vector space over $GF(x)$ (which I was using to determine the number of subgroups of an elementary abelian group $E_{x^n}$).
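(For anyone who wants to play along at home, here is a minimal sketch of the definition in Python/SymPy rather than Mathematica; the helper name f is just for illustration.)

    # Minimal SymPy sketch of the definition above (illustrative helper, not the
    # original Mathematica code): build f_n(x) and sanity-check a small subspace count.
    from sympy import symbols, cancel, expand

    x = symbols('x')

    def f(n):
        """f_n(x) = sum_{m=0}^n prod_{k=0}^{m-1} (x^n - x^k)/(x^m - x^k), simplified."""
        total = 0
        for m in range(n + 1):
            term = 1
            for k in range(m):
                term *= (x**n - x**k) / (x**m - x**k)
            total += term
        return expand(cancel(total))

    # GF(p)^2 has {0}, the p+1 lines, and the whole plane: p + 3 subspaces in total.
    assert f(2).subs(x, 7) == 7 + 3
    print(f(4))  # -> x**4 + 3*x**3 + 4*x**2 + 3*x + 5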

Anyway, while I was investigating asymptotic behavior of $f_n(x)$ in Mathematica, I got sidetracked and (just for fun) looked at the set of complex roots when I set $f_n(x)=0$. For $n=24$, the plot looked like this: (The real and imaginary axes are from $-1$ to $1$.)

[Plot of the roots for $n=24$.]

Surprised by the unusual symmetry of the solutions, I made the same plot for a few more values of $n$. Note the clearly defined "tails" (on the left when $n$ is even, at the top and bottom when odd) and "cusps" (on both sides).

[Plots for $n=40$, $70$, and $100$.]

You can see that after approximately $n=60$, the "circle" of solutions starts to expand into a band of solutions with a defined outline. To fully absorb the weirdness of this, I animated the solutions from $n=2$ to $n=112$. The following is the result:

[Animation of the roots for $n=2$ through $n=112$.]

Pretty weird, right!? Anyhow, here are my questions:

  1. First, has anybody ever seen anything at all like this before?
  2. What's up with those "tails"? They seem to occur only for even $n$, and they are clearly distinguishable from the rest of the solutions.
  3. Look how the "enclosed" solutions rotate as $n$ increases. Why does this happen? [Explained in edits.]
  4. Anybody have any idea what happens to the solution set as $n\rightarrow \infty$? Thanks to @WillSawin, we now know that all the roots are contained in an annulus that converges to the unit circle, which is fantastic. So, the final step in understanding the limit of the solution sets is figuring out what happens on the unit circle. We can see from the animation that there are many gaps, particularly around certain roots of unity; however, they do appear to be closing.
    • The natural question is, which points on the unit circle "are roots in the limit"? In other words, what are the accumulation points of $\{z\left|z\right|^{-1}:z\in\mathbb{C}\text{ and }f_n(z)=0\}$?
    • Is the set of accumulation points dense? @NoahSnyder's heuristic of considering these as a random family of polynomials suggests it should be, at least almost surely.
  5. These are polynomials in $\mathbb{Z}[x]$. Can anybody think of a way to rewrite the formula (perhaps recursively?) for the simplified polynomial, with no denominator? If so, we could use the new formula to prove the series converges to a function on the unit disc, as well as cut computation time in half. [See edits for progress.]
  6. Does anybody know a numerical method specifically for finding roots of high-degree polynomials? Or any other way to efficiently compute solution sets for high $n$? (One possible approach is sketched just after this list.) [Thanks @Hooked!]
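(Regarding question 6, here is one possible approach, sketched under my own choice of tools and parameters: compute the exact integer coefficients symbolically, then hand them to a multiprecision root finder so that floating-point roundoff cannot manufacture features.)

    # Hedged sketch: exact coefficients via SymPy, multiprecision roots via mpmath.
    # f_poly is an illustrative helper; mp.dps and the mpmath options are arbitrary.
    from sympy import symbols, cancel, Poly
    from mpmath import mp, polyroots

    x = symbols('x')

    def f_poly(n):
        total = 0
        for m in range(n + 1):
            term = 1
            for k in range(m):
                term *= (x**n - x**k) / (x**m - x**k)
            total += term
        return Poly(cancel(total), x)            # exact coefficients in Z[x]

    n = 24
    coeffs = [int(c) for c in f_poly(n).all_coeffs()]   # highest degree first

    mp.dps = 60                                  # plenty of working digits
    roots = polyroots(coeffs, maxsteps=200, extraprec=200)
    print(min(abs(r) for r in roots), max(abs(r) for r in roots))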

Thanks everyone. This may not turn out to be particularly mathematically profound, but it sure is neat.


EDIT: Thanks to suggestions in the comments, I cranked up the working precision to maximum and recalculated the animation. As Hurkyl and mercio suspected, the rotation was indeed a software artifact, and in fact evidently so was the thickening of the solution set. The new animation looks like this:

[New animation, recomputed with working precision $10^{-50}$.]

So, that solves one mystery: the rotation and inflation were caused by tiny roundoff errors in the computation. With the image sharper, however, I can see the behavior of the cusps more clearly. Is there an explanation for the gradual accumulation of "cusps" around the roots of unity (especially around $1$)?


EDIT: Here is an animation of $\operatorname{Arg}(f_n)$ up to $n=30$. I think we can see from this that $f_n$ should converge to some function on the unit disk as $n\rightarrow \infty$. I'd love to include higher $n$, but this was already rather computationally exhausting.

[Animation of $\operatorname{Arg}(f_n)$. Avoid this if you've been taking any hallucinogenic drugs.]

Now, I've been tinkering and I may be onto something with respect to point $5$ (i.e. seeking a better formula for $f_n(x)$). The following claims aren't proven yet, but I've checked each up to $n=100$, and they seem inductively consistent. Here denote $\displaystyle f_n(x)=\sum_{m}a_{n,m}x^m$, so that $a_{n,m}\in \mathbb{Z}$ are the coefficients in the simplified expansion of $f_n(x)$.

  • First, I found $\text{deg}(f_n)=\text{deg}(f_{n-1})+\lfloor \frac{n}{2} \rfloor$. The solution to this recurrence relation is $$\text{deg}(f_n)=\frac{1}{2}\left({\left\lceil\frac{1-n}{2}\right\rceil}^2 -\left\lceil\frac{1-n}{2}\right\rceil+{\left\lfloor \frac{n}{2} \right\rfloor}^2 + \left\lfloor \frac{n}{2} \right\rfloor\right)=\left\lfloor\frac{n^2}{4}\right\rfloor.$$

  • If $f_n(x)$ has $r$ more coefficients than $f_{n-1}(x)$, the leading $r$ coefficients are the same as the leading $r$ coefficients of $f_{n-2}(x)$, pairwise.

  • When $n>m$, $a_{n,m}=a_{n-1,m}+\rho(m)$, where $\rho(m)$ is the number of integer partitions of $m$. (This comes from observation, but I bet an actual proof could follow from some of the formulas here.) For $n\leq m$ the $\rho(m)$ formula first fails at $n=m=6$, and not before for some reason. There is probably a simple correction term I'm not seeing - and whatever that term is, I bet it's what's causing those cusps.

Anyhow, with this, we can almost make a recursive relation for $a_{n,m}$, $$a_{n,m}= \left\{ \begin{array}{ll} a_{n-2,\,m-(n-1)} & : \text{deg}(f_{n-1}) < m \leq \text{deg}(f_n)\\ a_{n-1,m}+\rho(m) & : m \leq \text{deg}(f_{n-1}) \text{ and } n > m \\ ? & : m \leq \text{deg}(f_{n-1}) \text{ and } n \leq m \end{array} \right. $$ (in the first case the index is shifted by $\text{deg}(f_n)-\text{deg}(f_{n-2})=n-1$, so the leading coefficients of $f_n$ line up with those of $f_{n-2}$), but I can't figure out the last part yet.
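(A brute-force check of these observations for small $n$, in Python/SymPy; f_poly is an illustrative helper, and $\rho(m)$ is npartitions in SymPy.)

    # Check deg(f_n) = floor(n^2/4) and a_{n,m} = a_{n-1,m} + rho(m) for n > m, small n.
    from sympy import symbols, cancel, Poly, npartitions

    x = symbols('x')

    def f_poly(n):
        total = 0
        for m in range(n + 1):
            term = 1
            for k in range(m):
                term *= (x**n - x**k) / (x**m - x**k)
            total += term
        return Poly(cancel(total), x)

    polys = {n: f_poly(n) for n in range(1, 13)}
    for n in range(2, 13):
        assert polys[n].degree() == n * n // 4               # floor(n^2/4)
        for m in range(n):                                   # the n > m regime
            a_nm = polys[n].coeff_monomial(x**m)
            a_n1m = polys[n - 1].coeff_monomial(x**m)
            assert a_nm - a_n1m == npartitions(m)
    print("degree and rho(m) observations hold for n <= 12")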


EDIT: Someone pointed out to me that if we write $\lim_{n\rightarrow\infty}f_n(x)=\sum_{m=0}^\infty b_{m} x^m$, then it appears that $f_n(x)=\sum_{m=0}^n b_m x^m + O(x^{n+1})$. The $b_m$ there seem to me to be relatively well approximated by the $\rho(m)$ formula, considering the correction term only applies for a finite number of recursions.

So, if we have the coefficients up to an order of $O(x^{n+1})$, we can at least prove the polynomials converge on the open unit disk, which the $Arg$ animation suggests is true. (To be precise, it looks like $f_{2n}$ and $f_{2n+1}$ may have different limit functions, but I suspect the coefficients of both sequences will come from the same recursive formula.) With this in mind, I put a bounty up for the correction term, since from that all the behavior will probably be explained.


EDIT: The limit function proposed by Gottfried and Aleks has the formal expression $$\lim_{n\rightarrow \infty}f_n(x)=1+\prod_{m=1}^\infty \frac{1}{1-x^m}.$$ I made an $\operatorname{Arg}$ plot of $1+\prod_{m=1}^r \frac{1}{1-x^m}$ for up to $r=24$ to see if I could figure out what that ought to ultimately end up looking like, and came up with this:

[$\operatorname{Arg}$ plot of $1+\prod_{m=1}^r \frac{1}{1-x^m}$ for $r$ up to $24$.]

Purely based on the plots, it seems not entirely unlikely that $f_n(x)$ is going to the same place this is, at least inside the unit disc. Now the question is, how do we determine the solution set at the limit? I speculate that the unit circle may become a dense combination of zeroes and singularities, with fractal-like concentric "circles of singularity" around the roots of unity... :)
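(For reproducibility, here is a rough NumPy/Matplotlib recipe for this kind of $\operatorname{Arg}$ plot; the grid resolution and the truncation order r are arbitrary choices, not the settings used for the image above.)

    # Colour each grid point by the argument of the truncated product 1 + prod 1/(1-z^m).
    import numpy as np
    import matplotlib.pyplot as plt

    r = 24
    xs = np.linspace(-1.2, 1.2, 601)
    X, Y = np.meshgrid(xs, xs)
    Z = X + 1j * Y

    F = np.ones_like(Z)
    for m in range(1, r + 1):
        F = F / (1 - Z**m)        # grid points at roots of unity may blow up; harmless here
    F = F + 1

    plt.imshow(np.angle(F), extent=(-1.2, 1.2, -1.2, 1.2), origin='lower', cmap='hsv')
    plt.colorbar(label='Arg')
    plt.title('Arg of 1 + prod_{m<=r} 1/(1-z^m), r = 24')
    plt.show()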

Comments:

  • Not a proper answer, but possibly relevant: have you seen math.ucr.edu/home/baez/roots ? – Steven Stadnicki (Commented Oct 4, 2012 at 1:09)
  • Before I looked at this, 31 views and already 11 upvotes and 5 favorites. Hmm, I guess it is interesting :) I will make it 12 and 6. – GeoffDS (Commented Oct 4, 2012 at 2:00)
  • The rotating is absurd since you should have something symmetrical about the real axis, and I don't see the thickness of the shape when I plot the polynomials. Those should be software artifacts. But there does seem to be a quadratically increasing density of roots along the unit circle, with strange shapes at the roots of unity. – mercio (Commented Oct 4, 2012 at 10:43)
  • I don't know about an egg, but that round bulge on the left hand side makes the whole thing eerily eye-like. (You know, an anatomical eye in profile.) I would be really interested to know what gives rise to that feature. – rschwieb (Commented Oct 4, 2012 at 12:40)
  • I enjoy this question very much so far!! Keep it up! – Asaf Karagila (Commented Oct 9, 2012 at 0:19)

7 Answers


First, has anybody ever seen anything at all like this before?

Yes, and in fact the interesting patterns that arise here are more than just a mathematical curiosity; they can be interpreted in a physical context.

Statistical Mechanics

In a simple spin system, say the Ising model, a discrete set of points is arranged on a grid. In physics, we like to define the energy of the system by a Hamiltonian, which gives the energy of any particular microstate. In this system, if two adjacent spins are aligned they form a bond. This is favorable and the energy is negative. If they are misaligned, no bond forms and (in the convention used below) the energy is zero; this differs from the usual $\pm J$ convention only by a constant shift, which does not move the zeros of the partition function. Let's consider a simple system of two points, adjacent to each other. Furthermore, let each site point either up ($+1$) or down ($-1$). For an Ising-like system we would write the Hamiltonian as:

$$ H = - \sum_{ij} J \sigma_i \sigma_j $$

where $\sigma_i$ is the spin of the $i$th point and the summation runs over all pairs of adjacent sites. $J$ is the strength of the bond (which we can set to one for our example).

In our simple system we have only four possible states:

+1  +1     H = -J
+1  -1     H =  0
-1  +1     H =  0
-1  -1     H = -J

Now we can write the partition function $\mathcal{Z}$, a term which encompasses all information of the Hamiltonian from the perspective of statistical mechanics:

$$ \mathcal{Z} = \sum_s \exp (-H(s)/kT) $$

Here the summation runs over all possible (micro)states of the system. The partition function is really useful as it is related to the free energy $A = -kT \ln{\mathcal{Z} }$. When the partition function goes to zero, the free energy explodes and this signifies a phase change - a physically interesting event.

What about our simple system?

$$ \mathcal{Z} = 2 \exp({\beta J}) + 2 = 2x + 2 $$

You'll notice that I substituted $x=\exp({\beta J})$, with $\beta = 1/kT$, to make things a little neater. You may also notice that $\mathcal{Z}$ looks like a polynomial, which means that if we want to find the interesting events in the system, we look for the zeros of the partition function, $\mathcal{Z}=0$. Each such zero corresponds to a particular temperature $T$. In this case the only temperature we get is a complex one ...

Complex Temperatures?

Before you dismiss the idea of a temperature off the real number line as impossible (and $T<0$ is strange as well), let's see where this takes us. If we continue to add sites to our simple little system, our polynomial will get a bit more complicated and we will find more roots in the complex plane. In fact, as we add ever more sites, the roots appear to form a pattern, much like the pattern you've shown above.

For a finite spin system, you'll never find a zero at a real temperature, however...

Anybody have any idea what happens to the solution set as n→∞?

In the thermodynamic limit (which corresponds to an infinite number of sites) the zeros become dense, typically accumulating along curves in the complex plane. In this limit the zeros can touch the real axis, corresponding to a phase change in the system. For example, in the 2D Ising model the zeros do touch the real axis (and make a beautiful circle in the complex plane) exactly where the system undergoes its phase transition from ordered to disordered.

Prior work

The study of these zeros (from a physics perspective) is fascinating and started with the seminal papers by Yang and Lee:

Yang, C. N.; Lee, T. D. (1952), "Statistical Theory of Equations of State and Phase Transitions. I. Theory of Condensation", Physical Review 87: 404–409, doi:10.1103/PhysRev.87.404

Lee, T. D.; Yang, C. N. (1952), "Statistical Theory of Equations of State and Phase Transitions. II. Lattice Gas and Ising Model", Physical Review 87: 410–419, doi:10.1103/PhysRev.87.410

Both papers are surprisingly accessible. For a good time, search for images of Yang-Lee zeros, which live in the complex fugacity (magnetic field) plane. Zeros taken in the complex temperature plane, as above, are known as Fisher zeros and make even more intricate patterns!
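(To make the Yang-Lee picture concrete, here is a small brute-force sketch in Python; the lattice size, coupling and variable names are my own choices, not anything from the papers. It builds $\mathcal{Z}$ for a tiny ferromagnetic Ising grid as a polynomial in a fugacity-like variable $z=e^{2\beta h}$ and checks that its zeros sit on the unit circle, the same kind of clustering seen in the question's plots.)

    # Brute-force Lee-Yang zeros for a 3x3 periodic Ising grid (512 states).
    import itertools
    import numpy as np

    L = 3
    N = L * L
    betaJ = 0.8                      # ferromagnetic coupling, beta*J > 0
    a = np.exp(betaJ)

    def bonds(L):
        """Nearest-neighbour pairs on an LxL grid with periodic boundaries."""
        idx = lambda i, j: (i % L) * L + (j % L)
        out = []
        for i in range(L):
            for j in range(L):
                out.append((idx(i, j), idx(i + 1, j)))
                out.append((idx(i, j), idx(i, j + 1)))
        return out

    pairs = bonds(L)
    coeffs = np.zeros(N + 1)         # coefficient of z^k, where k = (sum(s) + N)/2
    for s in itertools.product((-1, 1), repeat=N):
        B = sum(s[i] * s[j] for i, j in pairs)     # sum of sigma_i sigma_j over bonds
        coeffs[(sum(s) + N) // 2] += a ** B

    zeros = np.roots(coeffs[::-1])   # np.roots wants the highest degree first
    print(np.abs(zeros))             # Lee-Yang circle theorem: all |z| = 1, up to roundoff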

Comments:

  • Ah, wow! I never would have thought there would be a connection to physics here - very unexpected. Do you think there is a connection with this solution set specifically, or just to large degree polynomials (i.e. to points 4 and 5)? – Alexander Gruber (Commented Oct 5, 2012 at 21:42)
  • @AlexanderGruber perhaps. Have you seen the Padé approximant? en.wikipedia.org/wiki/Pad%C3%A9_approximant It is like a Taylor series, but uses a rational polynomial instead. You can expand the partition function as a rational polynomial. Pure speculation, but I wouldn't be surprised if there was a connection between your specific case and a particular Hamiltonian - the properties of which would be very interesting! – Hooked (Commented Oct 6, 2012 at 3:59)
  • « For a good time, search for images of Yang-Lee zeros. » I'll take the mathematics department's nearby bathroom stall for 500, Alex. – anon (Commented Oct 6, 2012 at 5:11)

$\lim_{n\to\infty} f_n(x)/n$ exists for $x$ in the open unit disc, and is equal to the partition generating function $P(x)$. Convergence is uniform on compact subsets of the open unit disc.

To prove this, we use the following lemma:

Let $a_{n,m}$ be a double sequence of numbers such that $a_m= \lim_{n\to \infty}a_{n,m}$ exists and $|a_{n,m}| \leq |a_m|$. Let $R$ be the radius of convergence of $\sum_m a_m x^m$. Then for $|x|<R$, we have:

$$ \lim_{n\to\infty} \left(\sum_m a_{n,m} x^m \right) = \sum_m a_m x^m$$

and the convergence is uniform on compact subsets.

Proof of lemma: We just need to interchange the summation and limit. This follows from the dominated convergence theorem, because $a_{n,m}x^m$ is bounded by $|a_m x^m|$, which is summable as long as $|x|<R$. Because $ \sum_m |a_m x^m|$ is bounded uniformly on compact subsets of the open disc of radius $R$, the convergence is uniform there.

Now, to get the result, we want to show that the coefficient of $x^m$ in $f_n(x)/n$ converges to, and is bounded by, the number of partitions of $m$. To do that, we just need a combinatorial interpretation for that coefficient. One way to see the point-count formula for the Grassmannian is in terms of Schubert cells: an $m$-dimensional Schubert cell contains $q^m$ points over $\mathbb F_q$. Because the Grassmannian decomposes into Schubert cells, this gives a polynomial formula for the number of points, and the coefficient of $q^m$ is the number of $m$-dimensional Schubert cells. From the explicit description, it is easy to see that, in the Grassmannian of $k$-planes in $n$-dimensional space, this is equal to the number of partitions of $m$ with at most $k$ parts and with each part at most $n-k$.

Summing over $k$ from $0$ to $n$, we get the coefficient of $x^m$ in $f_n$, which is at most $n+1$ times the number of partitions of $m$. (In fact $n-1$ works unless $m=0$, and the error from $m=0$ is negligible.) For $n$ large with respect to $m$, and for most $k$ between $0$ and $n$, both $k$ and $n-k$ are larger than $m$, so the coefficient of $x^m$ in the $k$th term is equal to the number of partitions of $m$. Hence, averaging over those terms, the limit is the number of partitions of $m$. So the hypotheses of the lemma are satisfied with $a_m$ the number of partitions of $m$, and the conclusion follows.

This also explains why the roots seem to live in an annulus: because the partition generating function has no roots in the open unit disc and $f_n(x)/n$ converges to it there, the absolute value of the smallest root must converge to $1$.
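(A quick numerical illustration of this convergence, evaluated at the arbitrary sample point $x_0 = 0.5$; plain Python, nothing from the answer itself.)

    # Compare f_n(x0)/n, evaluated directly from the defining product-sum, with a
    # truncation of the partition generating function P(x0) = prod 1/(1 - x0^k).
    def f_val(n, x0):
        total = 0.0
        for m in range(n + 1):
            term = 1.0
            for k in range(m):
                term *= (x0**n - x0**k) / (x0**m - x0**k)
            total += term
        return total

    x0 = 0.5
    P = 1.0
    for k in range(1, 200):
        P /= (1 - x0**k)                # truncated Euler product for P(x0)

    for n in (10, 20, 40, 80):
        print(n, f_val(n, x0) / n, P)   # f_n(x0)/n creeps up toward P(0.5) ~ 3.4627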

But we are not done, because we still haven't explained why there are no roots with large norm. Unfortunately there is no limiting function for large $x$. However, this is easy to fix: for $x$ larger than $1$ in absolute value, we just divide the polynomial by its leading term $x^{\lfloor \frac{n^2}{4} \rfloor}$. Then I claim the new, normalized $f_n$ converges to one of two limits, depending on whether $n$ is even or odd.

We can use essentially the same strategy to compute these limits. After changing variables to $y=1/x$, we see that the coefficients of $y^{\deg(f_n)} f_n(y^{-1})$ converge to a limit depending on whether $n$ is even or odd, and are bounded by this limit.

To get this we just use the fact that the $k$th summand of $f_n$ is symmetric: reversing the order of its coefficients gives the same polynomial. The degree of the $k$th term is $k(n-k)$. We are interested in the coefficients of the terms close to the highest degree, and these come only from $k$ close to $n/2$. The contribution from the $k$th term is $y^{\lfloor \frac{n^2}{4}\rfloor -k (n-k)}$ times the polynomial counting the $k$-dimensional subspaces of an $n$-dimensional vector space. By the previous discussion, the coefficients of this polynomial converge to the partition numbers.

So the reversed coefficients of $f_n$ converge to the coefficients of:

$$\sum_{k \in \mathbb Z} y^{\lfloor \frac{n^2}{4}\rfloor -k (n-k)} P(y)=\left(\sum_{k \in \mathbb Z} y^{\lfloor \frac{n^2}{4}\rfloor -k (n-k)}\right) P(y) = $$

$$\left(\sum_{k\in\mathbb Z} y^{k^2} \right) P(y) $$

($n$ even)

$$\left(\sum_{k\in\mathbb Z} y^{k^2+k} \right) P(y) $$

($n$ odd)

In other words, it is a theta function times the partition function. This power series also has radius of convergence $1$, and, I believe, no roots, and its coefficients bound the coefficients of $f$, so by the same logic we get convergence.

Then the final conclusion is that all the roots are contained in an annulus which converges to a circle of radius $1$. Theoretically one could even get an effective bound for this annulus.

Comments:

  • Nice! Why this has only 3 (soon to be 4) upvotes I have no idea. – Stephen (Commented Sep 10, 2014 at 1:19)
  • @Stephen Look at the dates posted. The answer by myself and Gottfried Helms against the above post are about two years apart. There is often a correlation between delayed answers and score (though not always). – Hooked (Commented Mar 5, 2015 at 18:00)
  • @Stephen Well now it has 49 :) – user253055 (Commented Oct 22, 2015 at 3:05)
  • @Stephen - it has 73 now and a big extra... :-) (Commented Jan 26, 2019 at 7:30)

A nicer(?) recursive scheme (for the polynomials, not yet for the roots) is the following. We initialize $f_0(x)$ and $f_1(x)$ with series constants (notation: Pari/GP): $f_0=\operatorname{Ser}(1) $ and $f_1 = \operatorname{Ser}(2)$. Then we proceed recursively:

$$ \quad \begin{array} {rcll} f_0 & = & 1 \\ f_1 & = & 2 \\ \hline f_2 & = & 2 f_1 -f_0+f_0 x^1 \\ f_3 & = & 2 f_2 -f_1+f_1 x^2 \\ f_4 & = & 2 f_3 -f_2+f_2 x^3 \\ f_5 & = & 2 f_4 -f_3+f_3 x^4 \\ f_6 & = & 2 f_5 -f_4+f_4 x^5 \\ \vdots & & \vdots \\ f_k & = & 2 f_{k-1} - (1-x^{k-1}) f_{k-2} \end{array} $$

[update]
For the coefficients $a_{r,c}$ we get
$$a_{r,c} = 2 a_{r-1,c}- a_{r-2,c} + a_{r-2,c-(r-1)} $$
where we assume that a negative column index $c-(r-1)$ in the rightmost term simply contributes zero.

The degrees of the polynomials are $$ \begin{eqnarray*}\quad \deg(f_{2r})&=&r^2 \text{ and}\\ \quad \deg(f_{2r+1})&=&r^2+r.\\ \end{eqnarray*}$$
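(A quick cross-check of this recursion and of the degree formulas against the original definition, in Python/SymPy rather than Pari/GP; f_direct is an illustrative helper.)

    # Verify f_k = 2 f_{k-1} - (1 - x^(k-1)) f_{k-2} and the stated degrees for small k.
    from sympy import symbols, cancel, expand, Poly

    x = symbols('x')

    def f_direct(n):
        total = 0
        for m in range(n + 1):
            term = 1
            for k in range(m):
                term *= (x**n - x**k) / (x**m - x**k)
            total += term
        return expand(cancel(total))

    f = {0: 1, 1: 2}
    for k in range(2, 15):
        f[k] = expand(2 * f[k - 1] - (1 - x**(k - 1)) * f[k - 2])

    for n in range(2, 15):
        assert f[n] - f_direct(n) == 0
        r = n // 2
        assert Poly(f[n], x).degree() == (r * r if n % 2 == 0 else r * r + r)
    print("recursion and degree formulas agree for n <= 14")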


I just want to jump on the bandwagon to make an observation which I don't believe anyone has made so far, and which I think is interesting (if only for having been overlooked). The formula

$$f(x) = 1 + \prod_{m=1}^\infty (1-x^m)^{-1}$$

which Gottfried and Aleks have found for the limiting function, should leap out at anyone who has worked with modular forms. In fact, the function

$$\eta(x) = x^{1/24}\prod_{m=1}^\infty (1-x^m)$$

is known as the Dedekind eta function. It is (more or less) a modular form of weight $1/2$ and level one. Its twenty-fourth power is known as the modular discriminant, and it is an important function in the theory of elliptic curves.

Why the Dedekind eta function should appear in this context, I have not a clue.

Comments:

  • Wow - that is super interesting. Maybe there's some kind of relationship between the $\eta$ and $q$-analogs. I have a professor I can ask about this. – Alexander Gruber (Commented Mar 3, 2013 at 23:50)
  • Dear @AlexanderGruber; if you find out anything, please let me know! Regards, (Commented Mar 4, 2013 at 0:34)
  • This also appears to be related to the Euler function $\phi(x) = \prod_{k \geq 1}{(1 - x^{k})}$, which has the property that $1/\phi(x) = \sum_{k \geq 0}{p(k)x^{k}}$, where $p(k)$ is the number of integer partitions of $k$. (Commented Mar 22, 2015 at 20:54)

I wanted to make a quick point which is implicit in the comments (particularly Steven Stadnicki's link to John Baez). When considering a specific family of functions with an interesting form, it is tempting to conclude that the behavior you see is a pattern special to that particular family of examples. What the calculations at Baez's site suggest is that this may simply be the behavior of a random family of polynomials. That is, what you're seeing may not be a sign that this family is sufficiently structured, but rather a way in which this family is sufficiently random.

Comments:

  • Why the downvote? Did you read the linked article? Random polynomials tend to have zeros clustered on the circle but avoiding roots of unity. (Commented Oct 8, 2012 at 21:54)
  • I read the article, and I find this property of random polynomials quite intriguing. However I am not sure I agree that this family is "random" in this sense. The family is built in a structured, combinatorial way, so it stands to reason that the structure I see should be some result of that construction - they weren't built randomly. Perhaps it will turn out the reason we see these behaviors will be that it "just turns out that way," but I think searching for an intuitive explanation is worth a shot. – Alexander Gruber (Commented Oct 9, 2012 at 21:34)
  • Certainly I agree that it's worth looking for explanations from structure as well as explanations from randomness. Indeed it is often difficult to prove a result in a specific pseudo-random setting even if you know the result in the truly random setting. So even knowing a theorem along the lines of Baez's page wouldn't give a proof for this specific example. Nonetheless, people are able to prove results in deterministic pseudo-random settings by first understanding the random case. (Commented Oct 9, 2012 at 22:38)

So far I have a reformulation in terms of q-binomials, which might make it easier to understand the result. First I rewrote your sum-of-products as $$ \begin{align} f_n(x)&=\sum_{m=0}^n \prod_{k=1}^m {x^{n-(k-1)}-1\over x^k-1} \\ &=1 \\ &+{x^{n}-1\over x^1-1} \\ &+{x^{n}-1\over x^1-1} \cdot {x^{n-1}-1\over x^2-1} \\ &+{x^{n}-1\over x^1-1} \cdot {x^{n-1}-1\over x^2-1} \cdot {x^{n-2}-1\over x^3-1} \\ & \vdots \\ &+{x^{n}-1\over x^1-1} \cdot \ldots \cdot{x^{1}-1\over x^n-1} \end{align} $$ where the summands are exactly the "q-binomials" (here with base $q=x$), so that $$ f_n(x) = \sum_{k=0}^n {n\choose k}_{[x]} $$ $\qquad \qquad$(For instance, for the base $x=1$ we get $f_n(1)=2^n \ $ using $\lim_{x\to 1}$. )
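(A small Python/SymPy check, mine rather than part of this answer, that the summands really are the Gaussian binomials and that $f_n(1)=2^n$.)

    # Compare the m-th summand of f_n with the Gaussian binomial [n choose m]_x.
    from sympy import symbols, cancel

    x = symbols('x')

    def summand(n, m):
        term = 1
        for k in range(m):
            term *= (x**n - x**k) / (x**m - x**k)
        return cancel(term)

    def qbinom(n, m):
        term = 1
        for j in range(1, m + 1):
            term *= (x**(n - j + 1) - 1) / (x**j - 1)
        return cancel(term)

    for n in range(1, 9):
        assert all(summand(n, m) - qbinom(n, m) == 0 for m in range(n + 1))
        f_n = sum(qbinom(n, m) for m in range(n + 1))
        assert f_n.subs(x, 1) == 2**n       # base x = 1 recovers the ordinary binomial sum
    print("q-binomial reformulation checked for n <= 8")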

There is some interesting literature on q-binomials online (Wikipedia, MathWorld, ...); maybe you can find something there which makes it easier to draw conclusions about the polynomial roots...

The above sum $f_n(x)$ can also be factored: $$ f_n(x) = 1 + {x^n-1\over x^1-1} \left(1+ {x^{n-1}-1\over x^2-1} \left(1+ {x^{n-2}-1\over x^3-1} \left( \ldots \right) \right) \right)$$ This gives yet another way to get at the polynomial roots; however, I don't really see the benefit of this latter reformulation.


In the Wikipedia article there is also a limit for the q-binomial as $n \to \infty$: $$ \lim_{n \to \infty} {n\choose r}_x = {1 \over [r]_x! \cdot (1-x)^r }$$ That means that the sum $ \lim_{n \to \infty}f_n(x)$ has a (formal) expression like $$ \lim_{n \to \infty}f_n(x) = 1 + {1\over 1-x} + {1\over (1-x)(1-x^2)} + {1\over (1-x)(1-x^2)(1-x^3)} + \cdots $$ I don't yet see, however, how one could extract the polynomial roots from that limiting process...

Comments:

  • You say that: "The above sum $f_n(x)$ can be factored: $$f_n(x) = 1 + {x^n-1\over x^1-1} \left(1+ {x^{n-1}-1\over x^2-1} \left(1+ {x^{n-2}-1\over x^3-1} \left( \ldots \right) \right) \right)$$ This gives yet another access to the polynomial roots...". Could you please explain how you do this? Thanks. – Neves (Commented Oct 10, 2012 at 9:48)
  • @Neves: I scribbled on paper, using n=3 and n=4, and evolved the formula from the innermost parenthesis and the zeros of the innermost fraction. Then the +1 invalidates those zeros and one has to recalculate - my hope was that the new roots could somehow be related to the old ones. But although it gives another view, it evolves into the same formula as used anyway, obviously without any new benefit or insight. So I didn't follow this path further. (Commented Oct 10, 2012 at 11:06)

This is by no means a complete answer but I made an interesting observation. Let

$$g_{n,m}(x) = \prod_{k=0}^{m-1}\frac{x^n-x^k}{x^m-x^k}$$

so that $f_n(x) = \sum_{m=0}^n g_{n,m}(x)$. We want to examine the behavior of $g_{n,m}(x)$ as $n\to \infty$. I made a table of these for several $n$ and $m$ in Mathematica and noticed that they probably each converge to some function: $g_{n,m}(x) \to g_m(x)$ as $n \to \infty$. Assuming $|x|<1$, I took the limit as $n \to \infty$ of the ratio of two consecutive $g$'s and got

$$\lim_{n\to\infty} \frac{g_{n,m-1}(x)}{g_{n,m}(x)} = 1-x^m$$

(It requires proof still). Since $g_0(x) = 1$ we obtain the formula

$$\lim_{n\to\infty} g_{n,m}(x) = g_m(x) = \frac{1}{(1-x)(1-x^2)\dots(1-x^m)}$$

I think one could hope that, as $n\to\infty$,

$$f_n(x) \to 1 + \sum_{m=1}^{\infty} g_m(x) = 1 + \sum_{m=1}^{\infty} \frac{1}{(1-x)(1-x^2)\dots(1-x^m)}$$

but of course this probably won't happen, because at the very least neither $f_n$ nor this function on the RHS above converge - for example, the constant coefficient becomes unbounded. Here is something to think about though. We have the following identity by Euler

$$1+\sum_{n=1}^{\infty} \frac{s^n x^n}{(1-x)(1-x^2)\dots(1-x^n)} = \prod_{i=1}^{\infty} \frac{1}{1-sx^i}$$

which has to do with partitions. Maybe there is a way to modify $f_n(x)$ that will make it converge properly. For example, dividing $f_n(x)$ by $n+1$ keeps the constant coefficient at 1.

EDIT: Wow, this is a blast from the past and maybe it's just late at night or something but I decided to look at the series representation of $F_n(x)/n$ where

$$F_n(x) = 1+ \sum_{m=1}^{n} \frac{1}{(1-x)(1-x^2)\dots(1-x^m)}$$ for large $n$. For example for $n = 1000$ we have the series

$$\frac{F_{1000}(x)}{1000} = 1.001 + 1. x + 1.999 x^2 + 2.997x^3 + 4.993x^4 + 6.987x^5 + 10.976 x^6 + 14.961 x^7 + 21.936x^8 + 29.902 x^ 9 + 41.85 x^{10} + \cdots$$

These coefficients are getting suspiciously close to the numbers

$$ \{1, 1 , 2, 3 , 5, 7, 11, 15, 22, 30, 42, \ldots \} $$

which you may recognize as the number of partitions of $n$ for $n = 0, 1, 2 ,3,\ldots$. As such it's probably safe to conjecture that as $n \to \infty$

$$ \frac{F_n(x)}{n} \to P(x) = \sum_{n=0}^{\infty} p(n) x^n = \prod_{k=1}^\infty \frac {1}{1-x^k}$$

where $p(n)$ is the number of partitions of $n$ and $P(x)$ is its generating function.
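(The observation is easy to reproduce numerically with truncated power series; here is a short NumPy sketch of my own, with TRUNC and n as arbitrary choices.)

    # Coefficients of F_n(x)/n up to x^10, built by multiplying truncated series.
    import numpy as np

    TRUNC = 11
    n = 1000

    def mul_trunc(a, b):
        return np.convolve(a, b)[:TRUNC]

    F = np.zeros(TRUNC); F[0] = 1.0      # the leading "1 +"
    g = np.zeros(TRUNC); g[0] = 1.0      # running product 1/((1-x)...(1-x^m))
    for m in range(1, n + 1):
        geom = np.zeros(TRUNC)
        geom[::m] = 1.0                  # 1/(1 - x^m) truncated at x^10
        g = mul_trunc(g, geom)
        F += g

    print(np.round(F / n, 3))
    # -> [1.001 1.    1.999 2.997 4.993 6.987 ...], approaching 1, 1, 2, 3, 5, 7, 11, ...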

Comments:

  • I became aware that my answer with the q-binomials is exactly your approach - sorry, I didn't see that before. Concerning the limit: you found just the limit-at-infinity of the q-binomial as given in Wikipedia. Maybe there is also a proof there... (Commented Oct 8, 2012 at 11:24)
  • That's alright. Your approach is slicker. I later also found that the g's are q-binomial coefficients and it kind of freaked me out. There's Cauchy's binomial theorem saying $\sum_{m=0}^n y^m x^{m(m+1)/2} \begin{\pmatrix}n\\m\end{\pmatrix}_x = \prod_{k=1}^n (1+yx^k)$ I wonder if this will help. (Commented Oct 9, 2012 at 9:09)
  • Aleks, I fixed the pmatrix entries in your comment; the formula then reads $ \sum_{m=0}^n y^m x^{m(m+1)/2} \begin{pmatrix}n\\m\end{pmatrix}_x = \prod_{k=1}^n (1+yx^k) $ (Commented Oct 10, 2012 at 4:15)
  • Oh, right, it has no backslash. Thanks! (Commented Oct 10, 2012 at 5:17)
