A year or two ago, a couple of us were bored and somehow got to thinking about the series
$\displaystyle \sum_{n=1}^{\infty} \frac{n^k}{2^n}.$

It’s pretty easy to compute the series for small $k$. For $k = 1$, one simply has to reorder the sum:
$\displaystyle \sum_{n=1}^{\infty} \frac{n}{2^n} = \sum_{n=1}^{\infty} \sum_{m=n}^{\infty} \frac{1}{2^m}.$

In the original sum, one adds vertically and then horizontally. Adding horizontally and then vertically — and tacitly making use of Fubini’s Theorem — one obtains
$\displaystyle \sum_{n=1}^{\infty} \frac{n}{2^n} = \sum_{n=1}^{\infty} \sum_{m=n}^{\infty} \frac{1}{2^m} = \sum_{n=1}^{\infty} \frac{1}{2^{n-1}} = 2.$

In general, one can iterate this idea to determine the $k$th sum in terms of the previous sums. For $k = 2$, place $n^2$ copies of $\frac{1}{2^n}$ in the $n$th column:
$\displaystyle \begin{array}{ccccc} \frac{1}{2} & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} & \cdots \\ & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} & \cdots \\ & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} & \cdots \\ & \frac{1}{4} & \frac{1}{8} & \frac{1}{16} & \cdots \\ & & \frac{1}{8} & \frac{1}{16} & \cdots \\ & & \vdots & \vdots & \end{array}$

The first row sums to one. The next three rows each sum to $\frac{1}{2}$. The next five rows sum to $\frac{1}{4}$. So in general, one obtains
$\displaystyle \sum_{n=1}^{\infty} \frac{n^2}{2^n} = 1 + 3 \cdot \frac{1}{2} + 5 \cdot \frac{1}{4} + 7 \cdot \frac{1}{8} + \cdots = \sum_{j=1}^{\infty} \frac{2j-1}{2^{j-1}} = 6.$

Performing this same idea for general $k$, one ends up with a sequence of rows. The first row always sums to 1. Then a number of rows follows, each of which sums to $\frac{1}{2}$. Then some rows summing to $\frac{1}{4}$, and so on. To determine the number of rows summing to $\frac{1}{2^{j-1}}$, one sees that there are $j^k - (j-1)^k$ of them. Hence we can write that
$\displaystyle \sum_{n=1}^{\infty} \frac{n^k}{2^n} = \sum_{j=1}^{\infty} \frac{j^k - (j-1)^k}{2^{j-1}}.$

Note that the numerator of the right side of the equality is a polynomial in $j$ of degree $k-1$, and so by breaking the sum apart into each constituent power of $j$, we have an iterative method of computing the sum for general $k$. In particular, write $S_k = \sum_{n=1}^{\infty} \frac{n^k}{2^n}$, so that $S_0 = 1$, and if

$S_0, S_1, \ldots, S_{k-1}$ have already been computed,

then, since $j^k - (j-1)^k = \sum_{i=0}^{k-1} (-1)^{k-1-i} \binom{k}{i} j^i$,

$\displaystyle S_k = 2 \sum_{j=1}^{\infty} \frac{j^k - (j-1)^k}{2^j} = 2 \sum_{i=0}^{k-1} (-1)^{k-1-i} \binom{k}{i}\, S_i.$
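This recurrence is easy to put to work. Here is a minimal sketch in Python (the function names and the choice of exact rational arithmetic are mine), checking the recurrence against a brute-force partial sum of $\sum_{n\ge1} n^k/2^n$:

```python
from fractions import Fraction
from math import comb

def sums(k_max):
    """S_0..S_{k_max}, where S_k = sum_{n>=1} n^k / 2^n, via the recurrence
    S_k = 2 * sum_{i=0}^{k-1} (-1)**(k-1-i) * C(k, i) * S_i, with S_0 = 1."""
    S = [Fraction(1)]
    for k in range(1, k_max + 1):
        S.append(2 * sum((-1) ** (k - 1 - i) * comb(k, i) * S[i]
                         for i in range(k)))
    return S

def brute(k, terms=300):
    """Partial sum of sum_{n>=1} n^k / 2^n in exact arithmetic."""
    return sum(Fraction(n ** k, 2 ** n) for n in range(1, terms + 1))

print([int(s) for s in sums(5)])  # → [1, 2, 6, 26, 150, 1082]
```

The values are always even integers for $k \ge 1$; the partial sums of the original series converge to them quickly.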
As a relevant aside, while teaching calculus I went through an example in class of how to use power series to compute sums; at the board I realized I had just computed a power series which could be used to compute the sum for any denominator. If

$\displaystyle f_k(x) = \sum_{n=1}^{\infty} n^k x^n,$

then one can easily see that

$\displaystyle f_0(x) = \frac{x}{1-x} \quad\text{and}\quad f_{k+1}(x) = x\, f_k'(x).$
All these series have radius of convergence 1, so in particular $f_k(x)$ makes sense when $x = \frac{1}{2}$.
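The derivative recursion can be checked symbolically; a quick sketch, assuming SymPy is available (the symbol names are mine):

```python
import sympy as sp

x = sp.symbols('x')
f = x / (1 - x)                         # f_0(x) = sum_{n>=1} x^n
for k in range(5):
    f = sp.simplify(x * sp.diff(f, x))  # f_{k+1}(x) = x * f_k'(x)

# f is now f_5; evaluate inside the radius of convergence at x = 1/2
print(f.subs(x, sp.Rational(1, 2)))     # → 1082
```

Substituting any other $x$ with $|x| < 1$ gives the analogous sum for that denominator.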

Anyway, the original series seems to grow extremely quickly in $k$, much faster than exponentially. The approximate growth rate with respect to $k$ is pretty obvious by considering the integral
$\displaystyle \int_{1}^{\infty} \frac{x^k}{2^x}\, dx = \int_{1}^{\infty} x^k e^{-x \ln 2}\, dx.$

Since $\frac{1}{2}$ is smaller than 1, $\ln 2$ is positive, and we changed $\left(\frac{1}{2}\right)^x$ to $e^{-x \ln 2}$ so that it’s more clear the integral actually converges. Making the change of variables $u = x \ln 2$, one gets to something close to the Gamma function:
$\displaystyle \frac{1}{(\ln 2)^{k+1}} \int_{\ln 2}^{\infty} u^k e^{-u}\, du.$

For large enough $k$, one can replace the lower limit $\ln 2$ by 0 without making too great an error. This is because the maximum of the integrand gets larger and moves further and further to the right as $k$ gets bigger (the integrand $u^k e^{-u}$ peaks at $u = k$). In fact the approximation is very good as long as $k$ isn’t too small, in particular as long as $\ln 2$ isn’t bigger than $k$. Anyway, the whole point of this is that the final integral is almost exactly equal to $\Gamma(k+1) = k!$. For $k = 5$, the integral is 119.99 rather than $5! = 120$.
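The "119.99" claim is easy to reproduce numerically; a sketch using Simpson's rule for $\int_{\ln 2}^{\infty} u^k e^{-u}\,du$ (the helper name and truncation point are my own choices):

```python
from math import exp, log, factorial

def tail_integral(k, a, upper=60.0, steps=200_000):
    """Simpson's-rule approximation of the integral of u**k * e**(-u)
    over [a, upper]; the tail beyond `upper` is negligible for small k."""
    h = (upper - a) / steps
    f = lambda u: u ** k * exp(-u)
    total = f(a) + f(upper)
    for i in range(1, steps):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

print(round(tail_integral(5, log(2)), 2), factorial(5))  # → 119.99 120
```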

Hence,
$\displaystyle \sum_{n=1}^{\infty} \frac{n^k}{2^n} \approx \frac{k!}{(\ln 2)^{k+1}}.$

For $k = 5$, this formula is good to 2 decimal places.
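Numerically the approximation holds up very well (a quick brute-force sketch; the names are mine):

```python
from math import log, factorial

def s(k, terms=2000):
    # brute-force partial sum of sum_{n>=1} n^k / 2^n
    return sum(n ** k / 2 ** n for n in range(1, terms + 1))

for k in (5, 10, 15):
    approx = factorial(k) / log(2) ** (k + 1)
    print(k, s(k), approx)
```

For $k = 5$ this prints the exact value 1082 next to roughly 1082.003, and the relative error shrinks rapidly as $k$ grows.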

We worked for a while to figure out if there was a way to get a closed-form formula (i.e. a formula for finding the $k$th sum without having to know all the sums for exponents smaller than $k$), but didn’t get too far. If anyone happens to know it, feel free to provide a reference!

April 5, 2009 at 4:57 pm

A000629?

April 6, 2009 at 11:33 am

My first reaction was that “Well, *of course*, it’ll involve the Bernoulli numbers”, but thinking again, I may have to revise that thought … anywho, when I’ve a table to write at, I might scribble out some more.

Cheers!

April 6, 2009 at 8:55 pm

I can’t find a reference, but the polynomials $A(x)$ such that

$\displaystyle \sum_{n=1}^{\infty} n^k x^n = \frac{A(x)}{(1-x)^{k+1}}$

are essentially the Eulerian polynomials (which means yes, there is probably a formula in terms of Bernoulli numbers); then you just need to plug in $x = 1/2$.

Anyway, two questions:

- In the words of the OEIS entry, what do necklaces of sets of labeled beads have to do with the cumulants of the probability distribution of the number of tails before the first head in a sequence of fair coin tosses?

- Can you think of a heuristic argument that gives that estimate?

April 8, 2009 at 1:26 am

I believe you are studying the polylogarithm of negative integer order, $\mathrm{Li}_{-k}(x)$. There are many nice formulae for it—relating it to Bernoulli numbers, finite sums for it, etc.

Have fun!

April 8, 2009 at 1:29 am

?

April 8, 2009 at 4:58 pm

Nope. The general term of the polylogarithm is (exponential in the index) / (polynomial in the index), not the other way around.

April 8, 2009 at 5:00 pm

Whoops, scratch that.

April 9, 2009 at 9:51 pm

Thanks for the comments. You are probably right about the Bernoulli numbers. That is a partially satisfactory answer, I suppose, except that the Bernoulli numbers don’t seem to have a closed form either (although folks have computed quite a few of them). So, probably my desire for a truly closed form is impossible or at least as hard as finding a closed form for the Bernoulli numbers.

As for the OEIS entry, I don’t really get what they mean by “sets of labeled beads” — it seems somewhat ambiguous. For n=1, the sum is 2, so there are two necklaces that can be made from a single bead. I guess the empty necklace counts? But then for n=2, I seem to get more than 6 if the empty necklace counts. Maybe I’m not understanding what they mean.

As for a more heuristic argument, I’m not sure. One can certainly argue from a more elementary point of view. For instance, the summand $\frac{n^k}{2^n}$ hits its maximum at $n \approx \frac{k}{\ln 2}$. So, for $n \le \frac{k}{\ln 2}$, one can replace a single $n$ in the numerator of the summand with $\frac{k}{\ln 2}$ so that $\frac{n^k}{2^n} \le \frac{k}{\ln 2} \cdot \frac{n^{k-1}}{2^n}$. It’s not hard to see the same inequality holds for larger values of $n$ by a more direct comparison. Then by iterating this estimate, one gets the correct upper bound. The opposite direction intuitively then comes from the fact that the only terms in the sum which contribute are ones near the maximum and that the summand is approximately symmetric around that maximum. Then one should be able to get a similar lower bound, possibly losing a constant factor (or a factor which tends to 1 as k gets large).
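For what it’s worth, the ratio of consecutive sums does track $k/\ln 2$ closely, which is consistent with this heuristic (a brute-force sketch; the names are mine):

```python
from math import log

def s(k, terms=500):
    # brute-force partial sum of sum_{n>=1} n^k / 2^n
    return sum(n ** k / 2 ** n for n in range(1, terms + 1))

for k in (2, 5, 10, 20):
    print(k, s(k) / s(k - 1), k / log(2))
```

Already at $k = 5$ the ratio agrees with $k/\ln 2$ to about four decimal places; at $k = 2$ it is still noticeably off (3 versus about 2.885).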

April 10, 2009 at 1:55 am

Hi Peter,

As I stated previously, you are studying one of the special functions, the polylogarithm, about which much is known:
$\displaystyle \mathrm{Li}_s(x) = \sum_{n=1}^{\infty} \frac{x^n}{n^s}.$

Note that you are studying the case $x = \frac{1}{2}$ with $s = -k$ a negative integer.

For $s = -k$ a negative integer,

$\displaystyle \mathrm{Li}_{-k}(x) = \frac{1}{(1-x)^{k+1}} \sum_{i=0}^{k-1} \left\langle {k \atop i} \right\rangle x^{k-i},$

where the

$\displaystyle \left\langle {k \atop i} \right\rangle$

are the so-called Eulerian numbers. Note that this is a finite sum! I believe this is the formula you are looking for. I suppose one could derive it from the iterative formula in the post with a little work.
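This finite sum is short enough to sketch in a few lines of Python, generating the Eulerian numbers by their standard recurrence $\left\langle {k \atop i} \right\rangle = (i+1)\left\langle {k-1 \atop i} \right\rangle + (k-i)\left\langle {k-1 \atop i-1} \right\rangle$ (all names here are mine):

```python
from fractions import Fraction

def eulerian_row(k):
    """The Eulerian numbers <k, 0>, ..., <k, k-1>."""
    row = [1]  # <1, 0> = 1
    for n in range(2, k + 1):
        row = [(i + 1) * (row[i] if i < len(row) else 0)
               + (n - i) * (row[i - 1] if i >= 1 else 0)
               for i in range(n)]
    return row

def li_neg(k, x):
    """Li_{-k}(x) = (1/(1-x)^(k+1)) * sum_{i=0}^{k-1} <k,i> x^(k-i)."""
    num = sum(a * x ** (k - i) for i, a in enumerate(eulerian_row(k)))
    return num / (1 - x) ** (k + 1)

print(eulerian_row(5))             # → [1, 26, 66, 26, 1]
print(li_neg(5, Fraction(1, 2)))   # → 1082
```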

Take care, and keep up the good blogging,

…

April 10, 2009 at 3:07 am

Hi Oenamen,

Thanks for the clarification — I had looked at the Wikipedia page for the polylogarithm… they listed dozens of properties of it, and I looked through them but apparently missed that one somehow. This formulation probably provides an easier avenue for a combinatorial understanding of the series. The coefficients are still somewhat difficult to compute (by hand anyway) but certainly represent a drastic improvement over iteration (something like $k^2$ computations versus $k!$).

April 16, 2009 at 9:57 pm

Coincidentally I just wrote about this series last month. Except I’m a programmer, not a mathematician, so I didn’t solve anything. I just calculated a lot of numbers.

http://nealabq.com/blog/2009/03/19/sums_sequences/

Thanks for your clear analysis and explanations.

April 18, 2009 at 3:59 am

Hi Neal,

Thanks very much for your kind words, and I’m glad that you found the information interesting. I read your page and am happy to hear that you are teaching your son mathematics and experimental mathematics. The confidence that one can discover things on one’s own is a fantastic gift to give to a child.

Peter