Matt’s entertaining posts on divergent series have inspired me to contribute my own two cents. In his posts, the central theme has been to replace a divergent series with a formal ‘function’ and to use algebraic manipulations and analytic continuation to assign a ‘best’ sum to a series without any obvious candidate. The two classes of formal functions used have been generating functions and zeta functions.

One of my favorite traits of these two beasts is the connection between them, in particular, an integral-based method for turning generating functions into zeta functions called the Mellin transform. Why should such a connection make me happy? Well, on one side, the generating function is useful for exploring the additive nature of the sequence of $a_n$’s, while the zeta function is useful for exploring the multiplicative nature of the sequence of $a_n$’s. It is a sad fact of mathematical life that understanding the additive nature of an object (in, say, a ring) is almost totally unhelpful for understanding the multiplicative nature of that object. For instance, the prime factorization of an integer $n$ tells us virtually nothing about the prime factorization of $n+1$ (for a more concrete realization of the hardness of this problem, see the Collatz conjecture). Therefore, the ability to turn a generating function into a zeta function can make impossibly obfuscated details clear!

**Multiplicative vs. Additive**

The first task is to make precise what I mean when I say that generating functions deal with additive structure, and zeta functions deal with multiplicative structure.

We start with a situation where we have a bunch of Objects, and each Object has a Size, which is a positive integer. Objects can be positive integers, points on an integral lattice, fixed points of a dynamical system or, in the case of relevant interest, trees. The notion of Size is situational, but usually obvious… for trees, it is the number of nodes. We then concern ourselves with $a_n$, which is the number of Objects of Size $n$. How fast do the $a_n$ grow? Is there any rhythm to their values? Can I assign some number to the value of their sum (i.e., the total number of Objects)?

One way we have been attacking these questions is to cook up a generating function $A(z) := \sum_n a_n z^n$. Some of the previous questions have straightforward analogs now; for instance, the growth rate of the $a_n$ is measured exactly by the radius of convergence of $A(z)$ around $0$. Still, the true strength of this formulation comes from the following simple fact:

$z^i \cdot z^j = z^{i+j}$

I can exploit this whenever I can uniquely combine an Object of Size $i$ (of Type A) and an Object of Size $j$ (of Type B) into an Object of Size $i+j$. This gives me a multiplicative statement of generating functions, $C(z) = A(z) \cdot B(z)$, where $C$ corresponds to all Objects, and $A$ and $B$ correspond to only their respective type.
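At the level of coefficients, the product $A(z) \cdot B(z)$ is just the convolution $c_n = \sum_{i+j=n} a_i b_j$, which is easy to check numerically. Here is a minimal sketch (my own code, not from the post):

```python
def gf_product(a, b):
    """Multiply two generating functions given as coefficient lists:
    c_n = sum over i+j = n of a_i * b_j (the Cauchy convolution)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Type A: one Object of each Size 0, 1, 2; Type B: one Object of each Size 0, 1.
# Pairs (A-Object, B-Object) of total Size n are counted by the convolution.
print(gf_product([1, 1, 1], [1, 1]))  # [1, 2, 2, 1]
```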

A silly example. My Objects will be subsets of a set of $n$ distinct elements, and the Size of an Object is the number of elements in the subset. The $a_k$ now are binomial coefficients $\binom{n}{k}$. The associated generating function is

$A(z) = \sum_{k=0}^{n} \binom{n}{k} z^k$.

By the Binomial Theorem,

$A(z) = (1+z)^n$.

$(1+z)$ is the generating function for the number of subsets of a one-element set. This is telling us that we can pick a subset of an $n$-element set by picking subsets of each constituent one-element set, and the Size of the total subset will be the sum of the Sizes of the smaller subsets.
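A quick check of the Binomial Theorem statement, comparing the coefficients of $(1+z)^n$ against the binomial coefficients directly (my own sketch):

```python
from math import comb

def binomial_gf(n):
    """Coefficient list of A(z) = sum_k C(n, k) z^k."""
    return [comb(n, k) for k in range(n + 1)]

def poly_power(base, n):
    """Coefficients of base(z)**n, by repeated convolution."""
    result = [1]
    for _ in range(n):
        new = [0] * (len(result) + len(base) - 1)
        for i, r in enumerate(result):
            for j, b in enumerate(base):
                new[i + j] += r * b
        result = new
    return result

# (1+z)^5 and the subset generating function agree coefficient by coefficient.
print(poly_power([1, 1], 5))  # [1, 5, 10, 10, 5, 1]
print(binomial_gf(5))         # [1, 5, 10, 10, 5, 1]
```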

I have a question here. Matt’s beloved identity is $T(1) = T(1)^7$ (where $T$ is the generating function for trees). If it were true that $T(z) = T(z)^7$ as functions (which it’s not), then this would imply that there could be a bijection between seven-tuples of trees and trees that preserved total number of vertices. Is there a weaker version of this statement that is equivalent to the equality holding only at $z = 1$?

Anywho, what about zeta functions? The analogous simple-but-important fact here is:

$n^{-s} \cdot m^{-s} = (nm)^{-s}$

Now, I need to combine Objects of Size $n$ and Size $m$ into an Object of Size $nm$.

The classical example here is the positive integers. This may seem dumb, since $a_n = 1$ for all $n$, but it is fun and useful to apply these techniques to even the most basic examples. I cook up the zeta function $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$ (which is, in fact, THE zeta function). However, the integers satisfy unique prime factorization! That means an integer is equivalent to a choice of a prime power $p^{e_p}$ for every prime $p$ (technically, all but a finite number must be $1$, but this point turns out not to matter). Therefore, I can cook up new zeta functions $\zeta_p(s)$ which count only the number of prime powers of a given Size. Since there is either one or none of them, I just sum over prime powers:

$\zeta_p(s) = \sum_{k=0}^{\infty} p^{-ks} = \frac{1}{1 - p^{-s}}$

Then, unique prime factorization gives us the equality:

$\zeta(s) = \prod_p \zeta_p(s) = \prod_p \frac{1}{1 - p^{-s}}$,

where the product is over all primes. One can then manipulate the two sides of this equality, called the Euler Product, to show a tremendous number of awesome things about the primes. For instance, there are quite a lot of them.
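The Euler Product can be watched converging numerically: truncate the Dirichlet series on one side and the product over primes on the other, and both approach $\zeta(2) = \pi^2/6$. A sketch (my own code; the cutoffs are arbitrary):

```python
def zeta_partial(s, terms=100000):
    """Truncated Dirichlet series for zeta(s)."""
    return sum(n ** -s for n in range(1, terms + 1))

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def euler_product(s, limit=100000):
    """Truncated Euler product over primes up to `limit`."""
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p ** -s)
    return prod

# Both truncations approach zeta(2) = pi^2/6 = 1.6449...
print(zeta_partial(2.0), euler_product(2.0))
```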

What was the point of this rather long-winded exposition? Generating functions are useful when you can break down Objects into constituent elements whose Size **adds** to give you the original Size, and zeta functions are useful when you can break down Objects into constituent elements whose Size **multiplies** to give you the original Size. Now let’s relate the two!

**The Mellin Transform**

The goal here is to create a nice analytic process that takes analytic functions in the variable $z$ to analytic functions in the variable $s$, such that monomials of the form $z^n$ get taken to $n^{-s}$. Such a process won’t really exist for all analytic functions, but we only care about one that can deal with the generating functions we like.

The first step is to replace the variable $z$ with $e^{-t}$. This turns the monomials we care about into the form $e^{-nt}$. Let’s play with this thing a bit.

If we multiply it by $t^s$, then the change of variables $t \to t/n$ does the following:

$e^{-nt} \, t^s \;\longrightarrow\; e^{-t} \left(\tfrac{t}{n}\right)^s = n^{-s} \, e^{-t} \, t^s$

Hey, there’s that $n^{-s}$ term we wanted! The trick now is just figuring out how to make the other parts of the equality ‘go away’.

We can make the change of variables disappear by integrating both sides over a measure and a contour which are fixed by the rescaling $t \to t/n$. For a measure, we pick $\frac{dt}{t}$, and for a contour we pick the positive real line (going toward infinity). My point is that for *any* integrand $g$,

$\int_0^\infty g(t) \, \frac{dt}{t} = \int_0^\infty g(t/n) \, \frac{dt}{t}$

In the case of the above integrand, this is

$\int_0^\infty e^{-nt} \, t^s \, \frac{dt}{t} = n^{-s} \int_0^\infty e^{-t} \, t^s \, \frac{dt}{t} = n^{-s} \int_0^\infty e^{-t} \, t^{s-1} \, dt$

Astute readers might notice that the last integral is the definition of the Gamma function $\Gamma(s)$. It has numerous nice properties we will ignore, since for this we only care that it is a function defined almost everywhere. If we want just an $n^{-s}$ on the right, we can divide both sides by the Gamma function.
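The identity $\int_0^\infty e^{-nt} \, t^{s-1} \, dt = n^{-s} \, \Gamma(s)$ is easy to test numerically with a crude midpoint rule (my own sketch; the step size and cutoff are arbitrary choices, and the tail beyond the cutoff is negligible for these parameters):

```python
from math import exp, gamma

def gamma_integral(n, s, cutoff=40.0, step=1e-3):
    """Midpoint-rule approximation of the integral of
    exp(-n*t) * t**(s-1) dt over (0, cutoff)."""
    total = 0.0
    t = step / 2.0
    while t < cutoff:
        total += exp(-n * t) * t ** (s - 1.0) * step
        t += step
    return total

n, s = 3, 2.0
print(gamma_integral(n, s))   # ≈ 1/9
print(n ** -s * gamma(s))     # = 1/9 exactly, since Gamma(2) = 1
```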

So, we got an $n^{-s}$ to appear. How do we turn this into a more generalized process? Start with a function $f(z)$. We define its Mellin transform to be:

$\mathcal{M}f(s) := \frac{1}{\Gamma(s)} \int_0^\infty f(e^{-t}) \, t^{s-1} \, dt$

I should mention that the usual Mellin transform is just multiplying by $t^{s-1}$ and integrating against $dt$, but the additional variable change $z = e^{-t}$ and factor of $\frac{1}{\Gamma(s)}$ are minor. As was shown before, this takes $z^n$ to $n^{-s}$ as required. Therefore, if I have a function

$f(z) = \sum_n a_n z^n$,

then its Mellin transform is

$\mathcal{M}f(s) = \sum_n a_n n^{-s}$.

Actually, this last statement only works if the integral in the definition can be commuted past the sum in $f$, but it usually can in the cases we like.

So what good is this? We now have an analytic process that turns generating functions into zeta functions. This has several nice consequences. First, if I have an analytic continuation of a generating function, then its Mellin transform is an analytic continuation of the corresponding zeta function. This means that I can’t get two different answers for the value of a divergent sum from the two different functions. Second, this means that if I have a class of Objects I can decompose both additively and multiplicatively, then both the generating function and the zeta function might have nice factorizations I can compare and exploit. I really ought to include an example of this, but it’s late and I’m tired. Maybe another post, if it won’t get too bogged down in Riemann Zeta function magic.
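To see the whole pipeline at once, here is a numerical sketch (my own code, with arbitrary quadrature parameters): take the generating function $f(z) = \frac{z}{1-z}$ of the sequence $a_n = 1$, and check that its Mellin transform at $s = 2$ reproduces $\zeta(2) = \pi^2/6$.

```python
from math import exp, gamma, pi

def mellin_of_gf(f, s, cutoff=60.0, step=1e-3):
    """Midpoint-rule approximation of
    (1/Gamma(s)) * integral of f(exp(-t)) * t**(s-1) dt over (0, cutoff)."""
    total = 0.0
    t = step / 2.0
    while t < cutoff:
        total += f(exp(-t)) * t ** (s - 1.0) * step
        t += step
    return total / gamma(s)

# f(z) = z/(1-z) = z + z^2 + z^3 + ... , the generating function of a_n = 1.
f = lambda z: z / (1.0 - z)

print(mellin_of_gf(f, 2.0))  # ≈ pi^2/6 ≈ 1.6449
```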

P.S. I can rewrite the fact that the Mellin transform of $z^n$ is $n^{-s}$ as

$n^{-s} = \frac{\int_0^\infty e^{-nt} \, t^{s-1} \, dt}{\int_0^\infty e^{-t} \, t^{s-1} \, dt}$.

I’ll be darned if this doesn’t look an awful lot like a state sum in statistical mechanics, normalized by a partition function. Is there a basic reason why this should be true? Is there a toy system where the state sum for a parameter $s$ is exactly this?

August 3, 2007 at 2:49 pm

Greg, could you spell this out a bit? You seem to be saying that, if f and F are analytic functions defined on (open neighborhoods of) [0,1] and [0, infinity) respectively such that f and F are Mellin transforms of each other for z and s sufficiently near 0 and infinity respectively, then f(1)=F(0). Of course, if the integral defining the Mellin transform converges, this is straightforward, but I don’t see how to prove it simply from analytic continuation. I imagine this is a standard result, but it isn’t obvious. Can you give a proof or reference?

August 5, 2007 at 12:23 am

I would need to look at things more carefully, my statistical mechanics knowledge isn’t so good, but my first thought when I saw that is that it might be related to knot polynomials. They tie knot theory and stat mech together rather intimately in some ways, and they tend to be Laurent polynomials. Again, this is all just my first instinct; I intend to look at this more carefully when I find time to learn stat mech.

August 8, 2007 at 6:19 am


Oh my dear, what a wonderful post! The comment by Charles brings to bear a whole plethora of thoughts regarding the connexion between the Riemann Zeta function as a fundamental entity in the distribution of the primes in number theory and the fundamental nature of the partition function in stat mech. There are two papers that come to mind:

1. Statistical Theory of Numbers, by B. L. Julia

2. And I just found this one: State Sum Invariants of Knotted Curves and Surfaces from Quandle Cohomology.

Really cool stuff. I realize maybe the focus of your post was not to get into the physics, but Charles’ comment led me down this path.

Great blog!

August 17, 2007 at 9:17 am

Great post, Greg… I’m definitely going to keep reading this blog regularly.

Could you submit this to the next carnival of mathematics, hosted on http://www.johnkemeny.com/blog/ in a week? We’re looking for more advanced things than what has appeared in most recent carnivals, and Mellin transforms fit the bill perfectly.

October 12, 2007 at 10:26 am

We use the basic idea of this transform in deriving the real equivalent of the functional equation for the zeta function by considering the gamma function (and using the substitution $x' = nx$ as used here). It is interesting to note that putting $z = e^{-t}$ and differentiating the power series for the sequence $(a_n)$ $k$ times with respect to $t$ and setting $t=0$ gives $(-1)^k \sum_{n} a_n n^{k}$.