One excellent reason to believe that these Cauchy-divergent sums can be assigned reasonable values comes from the fact that equations like

$1 + 1 + 2 + 5 + 14 + 42 + \cdots = \frac{1 - i\sqrt{3}}{2}$

and

$1 + 1 + 2 + 4 + 9 + 21 + \cdots = -i = 1 + 2 + 6 + 22 + 90 + \cdots$

have real, finite combinatorial consequences. These are the sums of the Catalan numbers, Motzkin numbers, and Schroder numbers, respectively. By taking these divergent sums seriously, we are led to new results. As a matter of fact, a new combinatorial theorem came out of the comments in the last post, thanks to Isabel (of God Plays Dice) noting that the sum of the Motzkin numbers should be $-i$: since $(-i)^5 = -i$, there is a bijective algorithm (explicitly constructed) which converts Motzkin trees to 5-tuples of Motzkin trees.
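A quick consistency check, sketched in stdlib Python: the values quoted for these sums are fixed points of the equations that the generating functions satisfy at $x = 1$ (the branch signs are a convention; $+i$ satisfies the same Motzkin equation).

```python
# Sanity check: the divergent sums of the Catalan and Motzkin numbers,
# taken at face value, satisfy the fixed-point equations that their
# generating functions obey at x = 1.
import cmath

C = (1 - 1j * cmath.sqrt(3)) / 2   # Catalan sum: C = 1 + C^2
M = -1j                            # Motzkin sum: M = 1 + M + M^2

assert abs(C - (1 + C ** 2)) < 1e-12
assert abs(M - (1 + M + M ** 2)) < 1e-12
assert abs(M ** 5 - M) < 1e-12     # M^5 = M: Motzkin trees <-> 5-tuples
```

The last assertion is exactly the identity behind the Motzkin-trees-to-5-tuples bijection.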

Today I would like to pose the question “How do we know that our divergent sums are meaningful in situations where we can’t immediately find a finite consequence?” And, of course, finally get to the promised puzzle.

By being particularly uncritical about what universe our calculations were living in, we showed last time that the formula $1 - 2 + 3 - 4 + \cdots = \frac{1}{4}$ quite indirectly implies that

$1 + 2 + 3 + 4 + \cdots = -\frac{1}{12}.$
But unlike the case of tree-counting, where the divergent sums lead us to very hands-on combinatorial truths, it is unclear how much faith we should have in this sum. I’ve heard that this sum comes up (and really is -1/12) when computing vacuum expectation values in quantum field theories, but I don’t really know enough to say anything reasonable about this. Hopefully some kind commenter will fill in the blanks. While a physical manifestation of this divergent sum is about the best thing we could hope for, we can also look for other abstract manifestations and see if the value -1/12 is consistent between them.

Remember that when we tried to compute the sum directly using analytic regularization, there was a problem:

$\sum_{n=1}^{\infty} n x^n = \frac{x}{(1-x)^2},$

which is singular at $x = 1$ and therefore fails to give a finite value to this sum. In order to compute the value we had to start from $1 - 1 + 1 - 1 + \cdots = \frac{1}{2}$ and $1 - 2 + 3 - 4 + \cdots = \frac{1}{4}$ (both proved by analytic regularization) and then carry out some rather questionable algebraic manipulations to get the final result. Why should we have faith in the answer we computed?

If we believe in a Platonic realm of divergent series, we could think of our value as an experimental prediction: if we find another way to compute divergent sums which assigns a value to $1 + 2 + 3 + \cdots$, that value will be $-\frac{1}{12}$. This is probably not going to actually be true, but if we had another regularization scheme which *did* give us $-\frac{1}{12}$, we might be more inclined to believe that the sum has some meaningful finite interpretation just like the combinatorial sums from before.

With that setup, you probably aren’t going to be surprised that we *do* have another useful regularization scheme for divergent sums. It goes by the name of zeta regularization, and works like this: the zeta-regularized sum $a_1 + a_2 + a_3 + \cdots$ is computed by taking the limit as $s \to 0$ of

$\sum_{n=1}^{\infty} \frac{a_n}{n^s},$

where the series is understood via analytic continuation in $s$.
Zeta regularization works well in many cases where analytic regularization does not, and vice versa. If there is a universal method for summing divergent series, it is almost as if zeta and analytic regularizations are two disjoint approximations of this method. In particular, we have no reason to expect zeta-regularized sums to have anything to do with analytically regularized sums. With our expectations sufficiently lowered, let us do some calculating with the sum $1 + 2 + 3 + \cdots$:

$\sum_{n=1}^{\infty} \frac{n}{n^s} = \sum_{n=1}^{\infty} \frac{1}{n^{s-1}} = \zeta(s-1),$
where $\zeta(s)$ is the Riemann zeta function. The sum we are trying to compute is therefore given by $\zeta(-1)$, which we can compute using the functional equation

$\zeta(s) = 2^s \pi^{s-1} \sin\left(\frac{\pi s}{2}\right) \Gamma(1-s)\, \zeta(1-s)$
and the fact (demonstrated nicely by Jim on this very blog) that $\zeta(2) = \frac{\pi^2}{6}$. What do we get?

$\zeta(-1) = 2^{-1} \pi^{-2} \sin\left(-\frac{\pi}{2}\right) \Gamma(2)\, \zeta(2) = \frac{1}{2\pi^2} \cdot (-1) \cdot 1 \cdot \frac{\pi^2}{6} = -\frac{1}{12}.$
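This functional-equation computation can be checked numerically in stdlib Python, with $\zeta(2)$ approximated by a partial sum (a sketch, not tied to any particular library):

```python
# zeta(-1) from the functional equation
#   zeta(s) = 2^s pi^(s-1) sin(pi s/2) Gamma(1-s) zeta(1-s)   at s = -1,
# using a partial sum for zeta(2) ~ pi^2/6.
import math

zeta2 = sum(1 / n ** 2 for n in range(1, 100000))
zeta_m1 = 2 ** -1 * math.pi ** -2 * math.sin(-math.pi / 2) * math.gamma(2) * zeta2

assert abs(zeta2 - math.pi ** 2 / 6) < 1e-4
assert abs(zeta_m1 - (-1 / 12)) < 1e-5
```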
Let us pause for a moment and think about just how bizarre this is: two entirely different methods of assigning a sum to the series, the first of which used calculations which are not even clearly well-defined, have given us the same result. Lest we think this is a coincidence, let us also compute $1 + 1 + 1 + \cdots$ with zeta regularization. Using our questionable algebra from last time, we found a sum of $-\frac{1}{2}$ for this series. With zeta regularization:

$\sum_{n=1}^{\infty} \frac{1}{n^s} = \zeta(s),$
so we need to compute $\zeta(0)$. The functional equation tells us that for an infinitesimal $\epsilon$,

$\zeta(\epsilon) = 2^{\epsilon}\pi^{\epsilon-1}\sin\left(\frac{\pi\epsilon}{2}\right)\Gamma(1-\epsilon)\,\zeta(1-\epsilon) = \frac{\epsilon}{2}\,\zeta(1-\epsilon)$

(where $=$ should be read as “is infinitesimally close to”). $\zeta(1-\epsilon)$ is formally the harmonic series, and $\zeta$ has a simple pole of residue 1 at $s = 1$, so $\zeta(1-\epsilon) = -\frac{1}{\epsilon}$. As a result,

$\zeta(0) = \frac{\epsilon}{2} \cdot \left(-\frac{1}{\epsilon}\right) = -\frac{1}{2}.$
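The infinitesimal argument can be mimicked with a small but finite $\epsilon$, substituting the Laurent data $\zeta(1-\epsilon) \approx -1/\epsilon + \gamma$ (the $\gamma$ term is the finite part; only the pole matters for the leading answer):

```python
# zeta(0) via the functional equation near s = 0, with the pole of zeta at 1
# represented by its Laurent expansion zeta(1 - eps) ~ -1/eps + gamma.
import math

gamma = 0.5772156649015329
eps = 1e-6
zeta_1_minus_eps = -1 / eps + gamma
zeta_0 = (2 ** eps * math.pi ** (eps - 1) * math.sin(math.pi * eps / 2)
          * math.gamma(1 - eps) * zeta_1_minus_eps)

assert abs(zeta_0 - (-0.5)) < 1e-3
```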
Do you believe that there is some rigorous notion of “divergent sum” hiding away in some Platonic corner of the universe yet?

Now here is my puzzle: the harmonic series obviously diverges in the Cauchy sense. It is also Cauchy-divergent for any $p$-adic metric on $\mathbb{Q}$. Contrast this with a nicer divergent series such as $1 + 2 + 4 + 8 + \cdots$, which at least converges 2-adically. The harmonic series does not have a nice zeta regularization, due to the pole of $\zeta(s)$ at $s = 1$. It does not have an analytic regularization either:

$\frac{d}{dx}\sum_{n=1}^{\infty}\frac{x^n}{n} = \sum_{n=1}^{\infty} x^{n-1} = \frac{1}{1-x},$

so that

$\sum_{n=1}^{\infty}\frac{x^n}{n} = \int_0^x \frac{dt}{1-t},$

which implies

$\sum_{n=1}^{\infty}\frac{x^n}{n} = -\ln(1-x).$

Sending $x$ to 1 is a disaster, so the harmonic series diverges under analytic regularization as well. Unlike all the other divergent series that we have seen so far, the harmonic series seems to be *really* divergent. This is my puzzle to you, the internet: can you sum the harmonic series?
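The blow-up is easy to watch numerically; a small stdlib-Python sketch comparing the partial Abel-style sums against $-\ln(1-x)$:

```python
# sum x^n / n tracks -ln(1 - x), which blows up as x -> 1:
# there is no finite limiting value to extract.
import math

def abel_harmonic(x, terms=200000):
    return sum(x ** n / n for n in range(1, terms + 1))

for x in (0.9, 0.99, 0.999):
    assert abs(abel_harmonic(x) - (-math.log(1 - x))) < 1e-6
```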

Just for reference, here are two other dirty tricks that I have tried. The first uses the fact that the alternating harmonic series converges. We have

$\sum_{n=1}^{\infty}\frac{x^n}{n} - \sum_{n=1}^{\infty}\frac{x^{2n}}{n} = \sum_{n=1}^{\infty}\frac{(-1)^{n+1}x^n}{n}.$

But as $x \to 1$, this becomes the unfortunate equation $0 = \ln 2$.
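The collapse can be watched numerically: the difference of the two divergent pieces tends to $\ln 2$ even while each piece alone blows up (a sketch; the cutoffs are ad hoc choices, large enough that the tails are negligible).

```python
# H(x) - H(x^2) -> ln 2 as x -> 1-, although H(x) and H(x^2) each diverge.
import math

def H(x, terms=200000):
    return sum(x ** n / n for n in range(1, terms + 1))

x = 0.9999
assert abs((H(x) - H(x * x)) - math.log(2)) < 1e-2
assert H(x) > 9   # each piece alone behaves like -ln(1 - x)
```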

The second trick is much dirtier, and I was very sad to see that it seems to be failing. The zeta function has a special relationship

$\frac{1}{\zeta(s)} = \sum_{n=1}^{\infty}\frac{\mu(n)}{n^s}$

with the Mobius function. The Mobius function is zero about a third of the time, and is equal to +1 as often as it is equal to -1. So it is not unreasonable to expect that

$\sum_{n=1}^{\infty}\frac{\mu(n)}{n}$

converges. But numerical tests that I have run, computing the sum out to 100 million terms, show that the value obtained for the harmonic series this way is roughly half the magnitude of the $n$th partial sum of the harmonic series. For reference, if we replace $\mu$ with a random variable that has the same distribution, the expected absolute value that we obtain is something like 1.8, while the value computed using the real $\mu$ function is about 8.9 and the partial sum of the harmonic series is about 19 after 100 million terms.
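A small-scale version of the first half of this experiment can be rerun with a Mobius sieve (a sketch; $N = 10^5$ here rather than $10^8$). The partial sums of $\mu(n)/n$ do come out small, consistent with convergence; in fact the full sum is known to be 0 (this is equivalent to the prime number theorem), which is exactly what dooms any attempt to read off $\zeta(1)$ from its reciprocal.

```python
# Sieve mu(n) up to N, then look at the partial sums of mu(n)/n.
N = 100000
mu = [1] * (N + 1)
marked = [False] * (N + 1)          # marked[k]: k has a prime factor found so far
for p in range(2, N + 1):
    if not marked[p]:               # p is prime
        for k in range(p, N + 1, p):
            marked[k] = True
            mu[k] *= -1
        for k in range(p * p, N + 1, p * p):
            mu[k] = 0               # not squarefree

partial = sum(mu[n] / n for n in range(1, N + 1))
assert abs(partial) < 0.05          # small, as convergence to 0 suggests
```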

Thus concludes my sad story about trying to sum the harmonic series; can we come up with a more clever idea?

August 2, 2007 at 3:23 pm |

There is an obvious difference between the harmonic series and the other divergent series you are looking at here, and that’s the limit of the terms of the series: the limit is zero only for the harmonic series. Do you think this plays some role, or is it only a coincidence?

August 2, 2007 at 4:35 pm |

Since $\zeta$ has only the single pole at $z = 1$, sums like $1 + \frac{1}{\sqrt{2}} + \frac{1}{\sqrt{3}} + \cdots = \zeta(1/2)$ are all zeta-regularizable (that particular sum is about -1.46), even though the terms are all going to zero. Since $\frac{1}{\sqrt{n}} \geq \frac{1}{n}$, that series also Cauchy-diverges. I can’t figure out how to compute an analytic regularization of the series, though. It would be interesting to see if one exists and agrees with the zeta regularization; maybe somebody out there can figure it out?

January 26, 2010 at 8:18 am |

Hi,

I’ll try to answer mnoonan’s last question. Let S(z) be the following power series:

S(z)=1+z/sqrt(2)+z^2/sqrt(3)+z^3/sqrt(4)+…

Then S(z)-sqrt(2)*z*S(z^2)=1-z/sqrt(2)+z^2/sqrt(3)-z^3/sqrt(4)+…=T(z),

where T(z) converges for z=1.

Then:

zeta(1/2)=S(1)=T(1)/(1-sqrt(2))

One can see that this series converges to approximately -1.46 by computing terms.
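This derivation checks out numerically: the alternating series $T(1)$ converges (slowly, but averaging two consecutive partial sums accelerates it), and dividing by $1 - \sqrt{2}$ lands on $\zeta(1/2) \approx -1.4604$.

```python
# zeta(1/2) = T(1) / (1 - sqrt(2)), with T(1) = 1 - 1/sqrt(2) + 1/sqrt(3) - ...
import math

N = 200000
s = prev = 0.0
for n in range(1, N + 1):
    prev = s
    s += (-1) ** (n + 1) / math.sqrt(n)
T1 = (s + prev) / 2                 # average of the last two partial sums
zeta_half = T1 / (1 - math.sqrt(2))

assert abs(zeta_half - (-1.4603545)) < 1e-3
```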

August 2, 2007 at 6:43 pm |

What a fun puzzle!! Maybe we should allow divergences in terms of ‘omega’ so long as we can show exactly what it equals in the surreals. Eg. the harmonic series is minus

2(1+1+1+1+….) + 3(1+1+1+1+1+……) + 4(1+1+1+1+1+….) +

= -1 + (1 + 2 + 3 + 4 + 5 + 6 + …) +

-1 + (1 + 2 + 3 + 4 + 5 + 6 + …) +

-1 + (1 + 2 + 3 + 4 + 5 + 6 + …) +

-1 + (1 + 2 + 3 + 4 + 5 + 6 + …) +

= 13/12 omega

August 2, 2007 at 8:07 pm |

I like that idea, especially since (as Josh pointed out in the first post) a geometric series with an infinite common ratio might reasonably sum to zero, which could tackle some of the divergences. On the other hand, you have to be extremely careful with re-bracketing in these sums: it is pretty easy to show that any finite rebracketing is OK for analytic regularization, but infinite ones are more tricky:

$1 - 1 + 1 - 1 + \cdots = \tfrac{1}{2} \neq 0 = (1-1) + (1-1) + (1-1) + \cdots$
That is why I was so surprised to find that the sum of the Schroder numbers equals the sum of the Motzkin numbers — even though they count the same thing in different ways, that “different way” involves an infinite rebracketing of the terms in the sum.

For zeta regularization, I can’t even clearly see that finite rebracketing is OK. With analytic regularization, the key lemma is that if $a_1 + a_2 + a_3 + \cdots = A$ then $a_2 + a_3 + a_4 + \cdots = A - a_1$. This lets us treat any finite leading portion of the sum as an honest-to-goodness sum of numbers which can be rearranged as we wish. Can we prove an analogous statement for zeta regularization? Maybe it is in Hardy somewhere…
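For what it is worth, the key lemma is easy to watch numerically for Grandi's series under Abel-style regularization (a sketch; $x$ is taken close to 1 and the cutoff is large enough that the tail is negligible):

```python
# Peeling off the leading term shifts the Abel value by exactly that term:
# 1 - 1 + 1 - ... "=" 1/(1+x) -> 1/2, and the tail -1 + 1 - ... "=" 1/2 - 1.
x = 0.999
full = sum((-1) ** n * x ** n for n in range(200000))
tail = sum((-1) ** n * x ** n for n in range(1, 200000))

assert abs(full - 1 / (1 + x)) < 1e-9
assert abs(tail - (full - 1)) < 1e-9
```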

August 2, 2007 at 9:02 pm |

As far as zeta regularization and analytic regularization giving the same answer when they both work, I believe this is a consequence of the Mellin transform. This is an operation that takes suitable functions on the positive real axis to holomorphic functions of a complex variable $s$ (modulo some definite integral existing), and which takes the function $e^{-nt}$ to $\Gamma(s)\,n^{-s}$. Therefore, if we pull an Euler and assume that the Mellin transform commutes with the infinite sums we care about, we take the holomorphic function

$\sum_{n=1}^{\infty} a_n e^{-nt}$

to

$\Gamma(s)\sum_{n=1}^{\infty}\frac{a_n}{n^s}.$

Thus, any analytic continuation of the latter corresponds to the Mellin transform of an analytic continuation of the former.
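The identity driving this argument, that the Mellin transform sends $e^{-nt}$ to $\Gamma(s)\,n^{-s}$, can be spot-checked by naive numerical integration (a sketch; $s = 2$, $n = 3$, and the integration range and step count are ad hoc choices):

```python
# Mellin transform of e^{-n t}: the integral of t^(s-1) e^{-n t} dt over (0, oo)
# equals Gamma(s) * n^(-s).
import math

def mellin(f, s, upper=60.0, steps=300000):
    h = upper / steps
    # the integrand vanishes at both endpoints here, so a plain Riemann sum
    # over the interior points is effectively the trapezoid rule
    return sum((i * h) ** (s - 1) * f(i * h) for i in range(1, steps)) * h

s, n = 2, 3
value = mellin(lambda t: math.exp(-n * t), s)
assert abs(value - math.gamma(s) * n ** -s) < 1e-6
```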

This is pretty hand-wavy, not only because of all the analytic bookkeeping I ignored, but because the techniques Matt used to sum series didn’t always correspond to forming one of the above two formal series.

I’m thinking about writing a post on the Mellin transform and its possible applications to this stuff. Hopefully I get it done before I leave tomorrow.

August 4, 2007 at 1:27 pm |

Could it be that there is some “strip” such that if the rate of growth of the series is in it, we can’t assign a well-defined value to the sum? What I mean is: if the sum grows slowly enough it converges; if it grows fast enough, one can assign a value to the sum like you have done in this post and previous posts; but if it’s somewhere in the middle, like the harmonic series, then it truly is divergent.

August 5, 2007 at 12:15 am |

Heh, check this out: a function J(x,y) that unites the Motzkin, Schroeder and Catalan numbers.

August 8, 2007 at 5:08 pm |

Another strange fact: it is known that

(1+2+3+…+n)^2 = 1^3+2^3+3^3+…+n^3

Taking the limit as n goes to infinity, we should find

zeta(-1)^2 = zeta(-3)

but zeta(-3) is 1/120, and not (-1/12)^2!

Can anyone shed some light on this ?

August 9, 2007 at 12:55 pm |

I think the problem is that “limit” is a metric (or at least topological) concept. Since these sums diverge for most or all good metrics on $\mathbb{Q}$, we shouldn’t expect them to behave nicely under limit operations like computing partial sums, etc.

October 7, 2007 at 12:47 pm |

The harmonic series can be made to sum to $\gamma$ in the Ramanujan sense. However, this is merely defined as the asymptotic difference between the partial sums and the integral (which in this case is divergent), and in the case of $H$ it simply defines $\gamma$.
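That asymptotic difference is easy to see numerically: $H_n - \ln n \to \gamma$.

```python
# Partial sums of the harmonic series minus log n converge to gamma.
import math

gamma = 0.5772156649015329    # Euler-Mascheroni constant
H = 0.0
for n in range(1, 1000001):
    H += 1 / n

assert abs(H - math.log(1000000) - gamma) < 1e-5
```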

October 7, 2007 at 1:35 pm |

Oh, and given the connection to the Mellin transform that Greg Muller pointed out, we see that neither Abelian nor zeta regularization techniques will work for the harmonic series.

I too, in a misadventure of sorts, tried a summation procedure (one futile from the outset, sadly) that only yielded, by rearrangement, a proof that the harmonic series diverges.

With regard to fanfan’s question, we have to consider asymptotic differences, much like the case with $\gamma$ (I refer to the Taylor series for the logarithm), when we consider the infinite case. It is ironic that I seem to be attaching rigour to a supposedly ‘dubious’ mathematical procedure, but zeta regularization is mathematically valid (as is the Abelian summation process, owing to analytic extension).

We need only absolve ourselves of the hardwired geometry to make sense of it all, in a Platonic ‘measure of infinities’ tied to the foundations of analysis.

It may be of interest to consider the Euler-Maclaurin sum formula and compare it with the formula I derive in my third post at the link below.

http://www.artofproblemsolving.com/Forum/viewtopic.php?t=164838&sid=83c0268c38c6f0deb8a42a16417c9271

October 18, 2008 at 9:24 am |

http://www.recercat.net/bitstream/2072/920/1/776.pdf

January 23, 2009 at 1:04 pm |

Using the Euler-Maclaurin sum formula, this zeta regularization can also be extended to integrals

Int(0,oo) x^m dx

even though they are divergent; see http://www.wbabin.net/science/papers/moreta23.pdf

which relates this strange integral to divergent sums of the form

1 + 2^m + 3^m + 4^m + …

January 23, 2009 at 1:42 pm |

Please use http://www.wbabin.net/science/moreta23.pdf instead of the above link (if possible, could someone remove my earlier comment, thanks 🙂).

Here you can see how one can calculate divergent integrals

Int(0,oo) x^m dx

even though they are divergent, by relating them to values Z(-m) of the zeta function at negative arguments.


February 27, 2009 at 9:27 am |

See my blog at

http://mathrants.blogspot.com

March 16, 2009 at 5:26 pm |

A thought. One can define a q-analogue of the harmonic series by computing

sum q^n/(1 – q^n) = sum sigma(n) q^n

where sigma(n) is the number of divisors of n. The “sum” of the harmonic series should be the residue at q = 1; perhaps the Mellin transform would be relevant in relating this sum to sum sigma(n) / n^s = zeta(s)^2?
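The Lambert-series identity in this comment (with sigma(n) meaning the number of divisors, as defined there) can be verified coefficient-by-coefficient:

```python
# sum_{n>=1} q^n / (1 - q^n) = sum_{m>=1} d(m) q^m, where d(m) counts divisors.
N = 30
lhs = [0] * (N + 1)
for n in range(1, N + 1):
    for m in range(n, N + 1, n):   # q^n / (1 - q^n) = q^n + q^{2n} + q^{3n} + ...
        lhs[m] += 1

d = [sum(1 for k in range(1, m + 1) if m % k == 0) for m in range(N + 1)]

assert lhs[1:] == d[1:]
```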

March 18, 2009 at 8:20 am |

S = …1/a² + 1/a + 1 + a + a²…

aS = …1/a + 1 + a + a² + a³… = S

aS = S

Therefore, S=0

January 26, 2010 at 8:40 am |

Hi,

I have discovered that Ramanujan summation C(a) gives the usually accepted value of both convergent and divergent summations if we take ‘a’ so that the primitive of f(x), evaluated at ‘a’, is 0:

http://en.wikipedia.org/wiki/Ramanujan_summation

That is: a = +infinity for summation of convergent series and, for divergent series, a = 0 for the Riemann zeta series, a = -infinity for geometric series, and a = 1 for the harmonic series. If we allow ‘a’ to take a non-real value, but ignore the evaluation of the integral of f(x) at ‘a’, the formula is also valid for alternating divergent series.

Taking that into account, the sum of the harmonic series is obtained by taking f(x) = 1/x and a = 1 in the formula around x = 1, and the value obtained is gamma (the Euler-Mascheroni constant).

February 1, 2010 at 8:54 am |

Hi,

I have made up a “demonstration” that H=gamma, that is, the sum of the harmonic series is equal to the Euler-Mascheroni constant, by using a power series.

Let H_n = Sum_k=1^n (1/k)

Then H = 1+Sum_n=1^inf ( 1/(n+1) + ln(n/(n+1)) ) – lim(x->1-,n->inf) Sum_k=1^n ( ln(k/(k+1)) ) x^k

The part before the limit is gamma, so we must show that the other series (which I will call S_n) has limit 0.

Then S_n = Sum_k=1^n ( ln(k/(k+1)) ) x^k

We compute the limit when x->1-, which I will call L1.

L1 S_n = L1 ( x(ln 1 - ln 2) + x^2(ln 2 - ln 3) +…+ x^n(ln n - ln(n+1)) ) = L1 ( 1·ln 1 – x^n ln(n+1) ) – L1 ( (1-x)(x ln 2 + x^2 ln 3 +…+ x^(n-1) ln n) ) = -L1 ( x^n ln(n+1) )

Now I call L2 = lim(n->inf), with 0 < x < 1. Then:

L1 L2 S_n = -L1 L2 ( x^n ln(n+1) ) = (-L1 (0*inf)) = ( using L'Hopital rule making d/dn in numerator and denominator, taking into account that 1/x^n = e^(-n ln x) ) = -L1 L2 ( 1/(n+1) / (-x^n ln x) ) = L1 L2 ( x^n / ((n+1) ln x) ) = L1 0 = 0, as we wish to demonstrate, since:

L2 (x^n) = 0, L2 (n+1) = inf, L2 (ln x) = ln x, which is finite.

See the similarity with the geometric series:

Sum_k=1^n (x^k) = (1-x^n)/(1-x)

S_n/(1-x) + Sum_k=1^(n-1) ( x^k ln(k+1) ) = ( 1 ln 1 – x^n ln(n+1) )/(1-x)
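The convergent ingredient of this argument checks out numerically: $1 + \sum_{n\ge1}(1/(n+1) + \ln(n/(n+1)))$ really is $\gamma$, since the partial sums telescope to $H_{N+1} - \ln(N+1)$.

```python
# 1 + sum (1/(n+1) + ln(n/(n+1))) telescopes to H_{N+1} - ln(N+1) -> gamma.
import math

gamma = 0.5772156649015329
s = 1.0
for n in range(1, 200000):
    s += 1 / (n + 1) + math.log(n / (n + 1))

assert abs(s - gamma) < 1e-5
```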

February 3, 2010 at 10:12 am |

Hi again,

here is the “demonstration” in a clearer way. First of all, we write the power series:

H(x) = -ln(1-x) = lim(n->inf) ( x + x^2/2 + x^3/3 +…+ x^n/n )

Thus we can write:

lim(x->1-) ( (x-1) ln(1-x) – x ln(1-x) ) = lim(x->1-, n->inf) ( x + x^2/2 +…+ x^n/n – x^n ln n ) + lim(x->1-, n->inf) ( x^n ln n )

But

lim(x->1-) ( (x-1) ln (1-x) ) = (L’Hopital) =lim(x->1-) ( (-1/(1-x)) /(-1/(x-1)^2) ) = lim(x->1-) ( 1-x ) = 0

On the right side of the equation, the second term tends to 0 if we first take the limit n->inf with 0 < x < 1 and then take x->1-, as shown in my last reply. The first term on the right tends to gamma if we first take the limit x->1- with n finite and then let n->inf, so the equation gives:

H=gamma

February 3, 2010 at 10:14 am |

I repeat my last sentence, since there is a mistake:

The first member of the right term of the equation tends to gamma if we make first the limit x->1- with ninf, so the equation results:

H=gamma

February 3, 2010 at 10:16 am |

There is a problem with the editor, so I write my last sentence with words:

The first member of the right term of the equation tends to gamma if we make first the limit x->1- with n finite and then make the limit n tends to infinite, so the equation results:

H=gamma

May 9, 2013 at 1:43 am |

The harmonic series has a reasonable value of -6; proof on request. (Hint: multiply the sum by (1+2+3+…) = -1/12, then sum over infinitely long diagonals.)



August 3, 2017 at 6:56 am |

This is ten years late, but in case it is still of interest, here is a response specifically to your “challenge”. I’ve invented a matrix-summation method using the Eulerian numbers, which seems to work nicely for divergent sums like alternating geometric series, and even for Euler’s alternating hypergeometric series. However, like Cesaro, Euler, and Borel summation, it does not sum the non-alternating variants. Still, I found it interesting what it in fact *does* as a matrix transformation, and applying this transformation to the zeta(1)-, zeta(0)-, zeta(-1)-series, and so on, I found a pattern which is interesting on its own. For the harmonic series it arrives at something which is perfectly equivalent to your *non*-equality 0 = log(2), so I thought this might interest you too. I have two (amateurish) essays on the matrix-summation method with Eulerian numbers (calling it for simplicity “Eulerian summation”): the first one introduces it, and the second looks specifically at the transformations of two of the classic divergent series (see http://go.helms-net.de/math/binomial_new/EulerianSumsV2.pdf if this is of interest at all).