A Silly Infinite Series


A year or two ago, a couple of us were bored and somehow got to thinking about the series

\displaystyle{\sum_{n=1}^{\infty}\frac{n^k}{2^n}}

It’s pretty easy to compute the series for k=0: it is just a geometric series summing to 1. For k=1, one simply has to reorder the sum:

\displaystyle{\begin{array}{ccccc} \frac{1}{2} & +\frac{1}{2^2} & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\& +\frac{1}{2^2} & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\&  & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\ & & &+\frac{1}{2^4} & +...\\& & & & \vdots\\ \end{array}}

In the original sum, one adds vertically and then horizontally. Adding horizontally and then vertically — and tacitly making use of Fubini’s Theorem — one notes that the \ell^{\textrm{th}} row sums to 2^{-(\ell-1)} and obtains

\displaystyle{\sum_{n=1}^{\infty}\frac{n}{2^n}=\sum_{\ell=1}^{\infty}\frac{1}{2^{\ell-1}}=2}
In general, one can iterate this idea to determine the k^{\textrm{th}} sum in terms of the previous sums. For k=2,

\displaystyle{\begin{array}{ccccc} \frac{1}{2} & +\frac{1}{2^2} & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\& +\frac{1}{2^2} & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\& +\frac{1}{2^2} & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\& +\frac{1}{2^2} & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\&  & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\&  & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\&  & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\&  & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\&  & +\frac{1}{2^3} & +\frac{1}{2^4} & +...\\& & & & \vdots\\ \end{array}}

The first row sums to one. The next three rows each sum to 1/2. The next five rows each sum to 1/4. So one obtains

\displaystyle{\sum_{n=1}^{\infty}\frac{n^2}{2^n}=\sum_{\ell=0}^{\infty}\frac{2\ell+1}{2^{\ell}}=2\sum_{\ell=0}^{\infty}\frac{\ell}{2^{\ell}}+\sum_{\ell=0}^{\infty}\frac{1}{2^{\ell}}=2\cdot 2+2=6}
Performing this same idea for general k, one ends up with a sequence of rows. The first row always sums to 1. Then a number of rows follow, each of which sums to 1/2. Then come some rows summing to 1/2^2, and so on. Counting the rows summing to 2^{-\ell}, one sees that there are \displaystyle{(\ell+1)^k-\ell^k} of them. Hence we can write

\displaystyle{\sum_{n=1}^{\infty}\frac{n^k}{2^n}=\sum_{\ell=0}^{\infty}\frac{(\ell+1)^k-\ell^k}{2^{\ell}}}
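
As a quick numerical sanity check of the reindexing, the two sums can be compared term by term in a few lines of Python (the function names and 200-term cutoff are just illustrative choices):

```python
# Compare sum_{n>=1} n^k / 2^n with the reindexed
# sum_{l>=0} ((l+1)^k - l^k) / 2^l for several k.

def direct_sum(k, terms=200):
    """Partial sum of the original series sum_{n>=1} n^k / 2^n."""
    return sum(n**k / 2**n for n in range(1, terms + 1))

def reindexed_sum(k, terms=200):
    """Partial sum of sum_{l>=0} ((l+1)^k - l^k) / 2^l."""
    return sum(((l + 1)**k - l**k) / 2**l for l in range(terms + 1))

for k in range(1, 7):
    # the two partial sums agree to floating-point accuracy
    assert abs(direct_sum(k) - reindexed_sum(k)) < 1e-9 * direct_sum(k)
```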
Note that the numerator on the right side of the equality is a polynomial in \ell of degree k-1, and so by breaking the sum apart into its constituent powers of \ell, we have an iterative method of computing the sum for general k. In particular, writing

\displaystyle{S_k=\sum_{n=1}^{\infty}\frac{n^k}{2^n}}

and expanding the numerator with the binomial theorem,

\displaystyle{S_k=\sum_{\ell=0}^{\infty}\frac{1}{2^{\ell}}\sum_{j=0}^{k-1}\binom{k}{j}\ell^{j}=2+\sum_{j=1}^{k-1}\binom{k}{j}S_j}

(the j=0 term contributes the 2, while \sum_{\ell=0}^{\infty}\ell^{j}/2^{\ell}=S_j for j\ge 1), and if one knows S_1,\dots,S_{k-1}, this determines S_k.
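Since every sum with denominator 2 turns out to be an integer, the recursion is easy to iterate exactly; here is a minimal Python sketch (the name `s` is just for illustration):

```python
from math import comb

def s(k):
    """S_k = sum_{n>=1} n^k / 2^n, computed by iterating the recursion
    S_m = 2 + sum_{j=1}^{m-1} C(m, j) * S_j, starting from S_0 = 1."""
    sums = [1]  # S_0 = 1 (plain geometric series)
    for m in range(1, k + 1):
        sums.append(2 + sum(comb(m, j) * sums[j] for j in range(1, m)))
    return sums[k]

print([s(k) for k in range(6)])  # -> [1, 2, 6, 26, 150, 1082]
```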
As a relevant aside, while teaching calculus I went through an example in class of how to use power series to compute sums; at the board I realized I had just computed a power series which could be used to compute the sum for any denominator. If

\displaystyle{f_k(x)=\sum_{n=1}^{\infty}n^kx^n}

then one can easily see that

\displaystyle{f_{k+1}(x)=xf_k'(x)}
All these series have radius of convergence 1, so in particular f_k(x) makes sense when x=1/2.
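One way to make the relation f_{k+1}(x)=xf_k'(x) concrete: writing f_k(x)=P_k(x)/(1-x)^{k+1} (so P_0(x)=x), the numerators satisfy P_{k+1}(x)=x\left((1-x)P_k'(x)+(k+1)P_k(x)\right), which lets one evaluate f_k exactly at any rational x. A Python sketch (the coefficient-list representation and helper names are my own choices):

```python
from fractions import Fraction

def polyadd(p, q):
    """Add two polynomials given as coefficient lists (lowest degree first)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def f_at(k, x):
    """Evaluate f_k(x) = sum_{n>=1} n^k x^n via f_{k+1} = x f_k'.
    Writing f_k(x) = P_k(x) / (1-x)^(k+1), the numerators satisfy
    P_{k+1}(x) = x * ((1-x) P_k'(x) + (k+1) P_k(x)), with P_0(x) = x."""
    p = [0, 1]  # P_0(x) = x
    for m in range(k):
        dp = [i * c for i, c in enumerate(p)][1:]   # P_m'(x)
        q = polyadd(dp, [-c for c in [0] + dp])     # (1 - x) * P_m'(x)
        q = polyadd(q, [(m + 1) * c for c in p])    # + (m + 1) * P_m(x)
        p = [0] + q                                 # multiply by x
    numerator = sum(c * x**i for i, c in enumerate(p))
    return numerator / (1 - x) ** (k + 1)

print(f_at(3, Fraction(1, 2)))  # -> 26
```

Evaluating with `Fraction` keeps everything exact, so the same routine works for any denominator, e.g. `f_at(2, Fraction(1, 3))`.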

Anyway, the original series seems to grow extremely quickly in k, much faster than k!. The approximate growth rate with respect to k is pretty obvious by considering the integral,

\displaystyle{\int_{u=1}^{\infty}u^kx^udu=\int_{u=1}^{\infty}u^ke^{-u\log \frac{1}{x}}du}

Since x must be smaller than 1, we changed \log x to -\log \frac{1}{x} to make it clearer that the integral actually converges. Making the change of variables v=u\log \frac{1}{x}, one gets something close to the \Gamma function:

\displaystyle{\int_{u=1}^{\infty}u^ke^{-u\log \frac{1}{x}}du=\frac{1}{(\log \frac{1}{x})^{k+1}}\int_{\log \frac{1}{x}}^{\infty}v^ke^{-v}dv}

For large enough k, depending on x, one can replace the lower limit by 0 without making too great an error. This is because the maximum of the integrand grows and moves further and further to the right as k gets bigger. In fact the approximation is very good as long as x isn’t too small, in particular if x isn’t smaller than 1/e (so that \log \frac{1}{x}\le 1). Anyway, the whole point of this is that the final integral is then almost exactly equal to k!. For k=5 and x=1/2, the integral is 119.99 rather than 5!=120.
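
For x=1/2, the claim about the maximum is easy to confirm directly: the summand n^k/2^n peaks at n\approx k/\log 2, which marches rightward with k. A quick Python check (the 200-term cutoff is arbitrary):

```python
from math import log

def peak(k, terms=200):
    """Index n maximizing the summand n^k / 2^n."""
    return max(range(1, terms + 1), key=lambda n: n**k / 2**n)

for k in [5, 10, 20, 40]:
    # the discrete maximum sits within one step of k / log 2
    assert abs(peak(k) - k / log(2)) <= 1
```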


\displaystyle{\sum_{n=1}^{\infty}\frac{n^k}{2^n}\approx \frac{k!}{(\log 2)^{k+1}}}

For k=5, this formula is good to 2 decimal places.
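
The quality of the approximation can be checked by comparing partial sums of the series against k!/(\log 2)^{k+1} (a short Python check; the tolerances are my own choices):

```python
from math import factorial, log

def series(k, terms=300):
    """Partial sum of sum_{n>=1} n^k / 2^n (the tail is negligible here)."""
    return sum(n**k / 2**n for n in range(1, terms + 1))

def approx(k):
    """The Gamma-function approximation k! / (log 2)^(k+1)."""
    return factorial(k) / log(2) ** (k + 1)

for k in [5, 10, 20]:
    rel = abs(series(k) - approx(k)) / series(k)
    assert rel < 1e-4  # the approximation improves rapidly with k
```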

We worked for a while to figure out if there was a way to get a closed-form formula (i.e. a formula for finding the kth sum without having to know all the sums smaller than k), but didn’t get too far. If anyone happens to know it, feel obliged to provide a reference!

12 Responses to “A Silly Infinite Series”

  1. D. Eppstein Says:


  2. some guy on the street Says:

    My first reaction was that “Well, *of course*, it’ll involve the Bernoulli numbers”, but thinking again, I may have to revise that thought … anywho, when I’ve a table to write at, I might scribble out some more.


  3. Qiaochu Yuan Says:

    I can’t find a reference, but the polynomials A(x) such that

    \sum_{n=1}^{\infty} n^k x^n = \frac{A(x)}{(1 - x)^{k+1}}

    are essentially the Euler polynomials (which means yes, there is probably a formula in terms of Bernoulli numbers); then you just need to plug in x = 1/2.

    Anyway, two questions:

    – In the words of the OEIS entry, what do necklaces of sets of labeled beads have to do with the cumulants of the probability distribution of the number of tails before the first head in a sequence of fair coin tosses?

    – Can you think of a heuristic argument that gives that estimate?

  4. oenamen Says:

    I believe you are studying the polylogarithm for negative k, \textrm{Li}_{-k}(x). There are many nice formulae for it—relating it to Bernoulli numbers, finite sums for it, etc.
    Have fun!

  5. oenamen Says:


  6. Qiaochu Yuan Says:

    Nope. The general term of the polylogarithm is (exponential in the index) / (polynomial in the index), not the other way around.

  7. Qiaochu Yuan Says:

    Whoops, scratch that.

  8. Peter Luthy Says:

    Thanks for the comments. You are probably right about the Bernoulli numbers. That is a partially satisfactory answer, I suppose, except that the Bernoulli numbers don’t seem to have a closed form either (although folks have computed quite a few of them). So, probably my desire for a truly closed form is impossible or at least as hard as finding a closed form for the Bernoulli numbers.

    As for the OEIS entry, I don’t really get what they mean by “sets of labeled beads” — it seems somewhat ambiguous. For n=1, the sum is 2, so there are two necklaces that can be made from a single bead. I guess the empty necklace counts? But then for n=2, I seem to get more than 6 if the empty necklace counts. Maybe I’m not understanding what they mean.

    As for a more heuristic argument, I’m not sure. One can certainly argue from a more elementary point of view. For instance, the summand hits its maximum at k/(\log 2). So, for n\le k/(\log 2), one can replace a single n in the numerator of the summand with k/(\log 2) so that \sum_{n=1}^{n=k/(\log 2)}n^k/2^n\le k/(\log 2) \sum_{n=1}^{n=k/(\log 2)}n^{k-1}/2^n. It’s not hard to see the same inequality holds for larger values of n by a more direct comparison. Then by iterating this estimate, one gets the correct upper bound. The opposite direction intuitively comes from the fact that the only terms in the sum which contribute are the ones near the maximum and that the summand is approximately symmetric around that maximum. Then one should be able to get a similar lower bound, possibly losing a constant factor (or a factor which tends to 1 as k gets large).

  9. oenamen Says:

    Hi Peter,

    As I stated previously, you are studying one of the special functions, the polylogarithm, about which much is known:
    \textrm{Li}_s(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^s}
    Note that you are studying the case z = 1/2 with s a negative integer.
    For k=1,2,\ldots,
    \textrm{Li}_{-k}(z) = \frac{1}{(1-z)^{k+1}} \sum_{i=0}^{k-1}\left\langle{k\atop i}\right\rangle z^{k-i}
    where
    \left\langle{n\atop m}\right\rangle = \sum_{j=0}^{m} (-1)^j \binom{n+1}{j} (m+1-j)^n
    are the so-called Eulerian numbers. Note that this is a finite sum! I believe this is the formula you are looking for. I suppose one could derive it from
    \textrm{Li}_{-k}(z) = \left(z\frac{d}{dz}\right)^k \frac{z}{1-z}
    with a little work.
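
    For instance, at z = 1/2 the finite formula reproduces the values of the original series; a quick Python check (the helper names here are just illustrative):

```python
from math import comb
from fractions import Fraction

def eulerian(n, m):
    """Eulerian number <n, m> via the alternating finite sum."""
    return sum((-1)**j * comb(n + 1, j) * (m + 1 - j)**n
               for j in range(m + 1))

def li_neg(k, z):
    """Li_{-k}(z) for integer k >= 1, via the Eulerian-number formula."""
    return sum(eulerian(k, i) * z**(k - i) for i in range(k)) / (1 - z)**(k + 1)

print(li_neg(5, Fraction(1, 2)))  # -> 1082
```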

    Take care, and keep up the good blogging,

  10. Peter Luthy Says:

    Hi Oenamen,

    Thanks for the clarification — I had looked at the Wikipedia page for the polylogarithm… they listed dozens of properties of it, and I looked through them but apparently missed that one somehow. This formulation probably provides an easier avenue for a combinatorial understanding of the series. The coefficients are still somewhat difficult to compute (by hand anyway) but certainly represent a drastic improvement over iteration (something like k^2 computations versus k!).

  11. Neal Says:

    Coincidentally I just wrote about this series last month. Except I’m a programmer, not a mathematician, so I didn’t solve anything. I just calculated a lot of numbers.


    Thanks for your clear analysis and explanations.

  12. Peter Luthy Says:

    Hi Neal,

    Thanks very much for your kind words, and I’m glad that you found the information interesting. I read your page and am happy to hear that you are teaching your son mathematics and experimental mathematics. The confidence that one can discover things on one’s own is a fantastic gift to give to a child.

