We’ve had some fun on the blog recently talking about divergent series and assigning bizarre but appropriate sums to them. However, I have a bit of a pet peeve about people misusing infinite series to make real-world predictions.
My favorite example comes from a heated argument I had with a friend over a question from his probability class. He had the following homework problem.
A casino opens a game that plays as follows. A person pays 1,000 dollars to play, and then they get to flip a coin until it comes up tails. The pay-out is $2^{n+1}$ dollars, where n is the number of heads that came up in a row. Is this game worth playing?
The mathematical approach is to compute the expected value, which is the sum of every possible payoff times the probability of that payoff. The possibilities correspond to the number of heads; the probability of getting exactly n heads is $\frac{1}{2^{n+1}}$, while the payoff is $2^{n+1}$ dollars. Therefore, the expected value is $\sum_{n=0}^{\infty} \frac{1}{2^{n+1}} \cdot 2^{n+1} = 1 + 1 + 1 + \cdots$ forever, so it’s infinite! Hooray! Then the game is worth playing, and it would be no matter how much it costs to play.
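If you don’t trust the algebra, you can watch the trouble develop in a simulation. Here is a minimal Python sketch of the game as I’ve written it above (payout $2^{n+1}$ for n heads, which is my reading of the problem); the running average never settles down, because rare enormous payouts keep dragging it upward.

```python
import random

def play_once():
    # Flip until tails; the payout is 2^(n+1) dollars for n heads in a row.
    heads = 0
    while random.random() < 0.5:
        heads += 1
    return 2 ** (heads + 1)

# The sample mean of the payouts grows (slowly) with the number of plays
# instead of converging, which is the divergence showing up empirically.
trials = 1_000_000
total = sum(play_once() for _ in range(trials))
print("average payout over", trials, "plays:", total / trials)
```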
Of course, a few days later, all the formerly giddy mathematicians involved are broke and confused. What went wrong?
There are actually several things wrong with the above logic, but the one that bothers me most is the sheer naivety of assuming that there are really an infinite number of terms in the computation of the expected value.
Every math word problem like the one above comes with implicit assurances that real-world factors don’t enter in (for instance, almost every logic puzzle in the world involves only participants with infallible logical faculties). However, consider the fantastically large payoffs involved in the game. Is the casino really going to pay you $2^{1000}$ dollars? Of course not; such a number is absurdly larger than the number of atoms in the visible universe. And yet, such payouts play a crucial role in the computation of the expected value. If the casino were allowed to file for bankruptcy and not pay out rewards of $2^{1000}$ dollars or greater (but still somehow paid all smaller astronomical totals), then the expected value of the game would be only 999 dollars, and so the game wouldn’t be worth it.
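If you want to see where the 999 comes from: only the payouts below $2^{1000}$ survive the cap (that’s $n = 0$ through $998$), and each surviving term contributes $\frac{1}{2^{n+1}} \cdot 2^{n+1} = 1$ dollar to the expectation. A few lines of Python, using the payout convention from above, confirm it:

```python
from fractions import Fraction

# Keep only the payouts the casino will actually honor: 2^(n+1) < 2^1000,
# i.e. n = 0, ..., 998.  Each term is (probability) * (payout) = 1 dollar.
capped_ev = sum(Fraction(1, 2 ** (n + 1)) * 2 ** (n + 1) for n in range(999))
print(capped_ev)  # 999 -- less than the 1,000 dollar entry fee
```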
How can anyone, even in the mathematical abstract, claim a solution that doesn’t stand up to the flimsiest application of reality? This is just a problem in a textbook, but I have repeatedly heard people discuss a similar betting strategy as if it were possibly legitimate. Start with a fair game with a half chance of doubling your bet and a half chance of losing your bet. Bet an initial amount of $k$ dollars, and if you ever win, stop betting. If you lose, instead of quitting, double your previous bet and play again. The narrow-minded mathematical claim is that you are guaranteed to make $k$ dollars this way.
The proof of this spurious claim is simple, and very similar to the previous computation. If you lose $n$ times in a row and then win, your net winnings are $2^n k - (k + 2k + \cdots + 2^{n-1}k) = k$. Also, you must win at some point, since the chance of losing $n$ times in a row is $\frac{1}{2^n}$, which approaches zero very fast. Therefore, you always win $k$ dollars.
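Just to make the bookkeeping concrete, here is a tiny Python check of that identity (the starting bet of 100 dollars is only a placeholder):

```python
def net_gain(k, n):
    # Lose n bets of k, 2k, 4k, ..., 2^(n-1) k, then win the bet of 2^n k.
    losses = sum(k * 2 ** i for i in range(n))  # total lost before the win
    return k * 2 ** n - losses                  # the winning bet minus the losses

print([net_gain(100, n) for n in range(6)])  # [100, 100, 100, 100, 100, 100]
```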
Before another round of enterprising mathematicians lose their nest egg, let’s debunk this. The fallacy here is the same as before, except that this time it is the player, not the casino, who only has a finite amount of money. For this strategy to work, the casino must allow you to make these geometrically increasing bets without bound. At some point, the casino has to cut you off, and then take everything of value from you. What ends up happening is that most of the time you make $k$ dollars, and a very small fraction of the time you lose everything you have; and it happens in such a way that the net probabilistic gain is zero (it’s a fair game, after all). So, in reality, instead of beating the casino, a large number of possible versions of yourself are ganging up and mugging one unlucky possible version of yourself. (Probability is always more fun when you imagine self-on-self violence across possibility space.)
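Here is what that looks like in a quick simulation, a sketch in Python with a made-up bankroll of 100,000 dollars and a starting bet of 100: almost every run ends 100 dollars ahead, a rare run ends in ruin, and the average drifts around zero.

```python
import random

def martingale(bankroll, k):
    # Double the bet after each loss; stop after the first win, or when the
    # bankroll can no longer cover the next bet.  Returns the net change.
    start, bet = bankroll, k
    while bet <= bankroll:
        if random.random() < 0.5:   # win: walk away with the profit
            return bankroll + bet - start
        bankroll -= bet             # lose: double the bet and go again
        bet *= 2
    return bankroll - start         # busted before placing the next bet

trials = 1_000_000
average = sum(martingale(100_000, 100) for _ in range(trials)) / trials
print(average)  # hovers near zero: many small wins, a few catastrophic losses
```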
While I’m at it, I want to mention another reality-based fallacy that I think crops up in idealized math problems about money, as well as in people’s actual risk-reward estimates. The trick is that ‘a dollar’ is not equally valuable to all people at all times. Even setting lifestyle and personality aside, people value the gain of a dollar less the more money they already have. That means that the value of a sum of money is a non-linear function of the total amount of money.
The point is that, factoring this in, even ‘fair games’ might not be fair. Given a half chance of losing a dollar and a half chance of gaining a dollar, the expected value to you could easily be less than the value of what you started with. This is easier to see with a very large bet on a fair game. Say you have 5,000 dollars in the whole world: is it worth betting all 5,000, even on a fair game? Half the time you end up completely destitute, and the other half you end up with 10,000 dollars, which isn’t so much better than 5,000 dollars. On a much, much smaller scale, even fair one-dollar bets can fail to be worthwhile (though only by a microscopic amount). This model punishes pretty much all fair risk, and so it breaks most problems of this type.
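As a back-of-the-envelope check, here is a Python sketch using a logarithmic value-of-money function; the logarithm is just a standard stand-in for ‘concave’, not something the argument depends on. Betting nearly all of a 5,000 dollar bankroll on a fair coin lowers the expected value to you dramatically, while a one-dollar bet lowers it only microscopically.

```python
import math

def expected_log_value(wealth, bet):
    # A fair coin flip: half chance of gaining the bet, half chance of losing it.
    # The dollar expectation is unchanged, but the expected *value to you* drops
    # whenever the value-of-money function is concave.
    return 0.5 * math.log(wealth + bet) + 0.5 * math.log(wealth - bet)

wealth = 5_000
print(math.log(wealth))                   # value of not betting at all
print(expected_log_value(wealth, 4_999))  # betting nearly everything: much worse
print(expected_log_value(wealth, 1))      # betting one dollar: worse, but barely
```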