Author Archive

Card Shuffling II – The Riffle Shuffle

April 21, 2009

In my previous post on card shuffling, I established the basic framework in which we will work. We are given a probability distribution P on S_n, and we wish to determine when ||P_k-U|| first begins to decay exponentially, where P_k is the k-fold convolution of P. One key feature of card-shuffling theory, and of the theory of finite Markov chains in general, is that the available tools are often very particular to a small class of problems; there just aren’t very many big hammers around. Even though the theorem described in the previous post was quite general, it was non-quantitative, and so not especially useful in practice.
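For concreteness, the norm here is presumably the total variation distance, which is the standard choice in card-shuffling theory (the precise definition lives in the previous post; this is a reminder under that assumption):

\displaystyle{\|P_k-U\|=\max_{A\subseteq S_n}\left|P_k(A)-U(A)\right|=\frac{1}{2}\sum_{\sigma\in S_n}\left|P_k(\sigma)-U(\sigma)\right|.}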

The standard shuffling technique is called the “riffle shuffle.” In this shuffle, the deck is cut in half, and the two halves are zippered together. We need to come up with a mathematical way of describing the riffle shuffle, and I’ll list three different methods (I’m assuming the deck has 52 cards here, but any n will do):

First Way. The first thing to think about is how we cut the deck. Mathematically speaking, we will assume the number of cards in the top half after the cut is binomially distributed. All this means is that to determine the size of the top half, we flip 52 fair coins and count the number of heads. It may seem strange that there is a positive probability of having all 52 cards in the deck sitting in the “top half,” but that probability is extremely small and so doesn’t matter much. Besides, for most shufflers the sizes of the two “halves” are often quite different. Anyway, suppose that our cut puts k cards in the top half. From here, we think of having 52 boxes lined up and put the cards in them. We pick k of the boxes, with each choice of k boxes equally likely, and put the top half of the deck in those boxes, keeping the cards in the same order. Put the remaining 52-k cards in the remaining boxes, keeping them in the same order relative to each other. Stack the cards back up. Note that there are \displaystyle{\binom{52}{k}=\frac{52!}{k!(52-k)!}} ways to choose k of the 52 boxes, so any particular choice of boxes occurs with probability 1/ \binom{52}{k}.
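As a quick illustration (my own sketch, not part of the original post), here is this description in Python; the function name and details are just choices made for this example:

import random

def riffle_shuffle(deck):
    """One riffle shuffle, following the "boxes" description above."""
    n = len(deck)
    # Cut: flip n fair coins; the number of heads is the size of the top half.
    k = sum(random.random() < 0.5 for _ in range(n))
    # Choose k of the n boxes uniformly at random to receive the top half.
    top_boxes = set(random.sample(range(n), k))
    top, bottom = iter(deck[:k]), iter(deck[k:])
    # Fill the boxes left to right, keeping each half in its original order.
    return [next(top) if i in top_boxes else next(bottom) for i in range(n)]

deck = riffle_shuffle(list(range(52)))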


Card Shuffling I

April 19, 2009

Just about anyone interested in mathematics has studied a little probability and probably done some easy analysis of basic card and dice games. Completely off topic for a second: am I the only one who has noticed that basic probability homework exercises are the only situation, aside from funerals, in which anyone will ever use the word “urn”? For whatever reason, probabilists love that word. Anyway, in any real card game the computations tend to get complicated rather quickly, and most people get turned off from the discussion. With some ingenuity, however, one can answer some pretty cool (but initially difficult-seeming) questions without having to go through a lot of tedious computations.

Take card shuffling as an example. In the face of expert card-counters, the natural question for the dealer is how many times he or she has to shuffle the deck before it’s well-mixed. In the case when the dealer is also playing the game (and is a card-counter at the level of a member of the MIT blackjack team, say), he or she could drastically improve the odds by using a shuffling method which seems to mix the deck well but actually does so very poorly. Anyway, at this point the question is slightly ill-posed, as we have no obvious way to interpret the word “mixed,” let alone “well.” In fact, coming up with a mathematical model of what shuffling means is already fairly difficult. What I’m hoping to do is give a framework which makes the problem more tractable.


A Silly Infinite Series

April 5, 2009

A year or two ago, a couple of us were bored and somehow got to thinking about the series

\displaystyle{\sum_{n=1}^{\infty}\frac{n^k}{2^n}.}
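(As a small worked example, not from the post itself: for k=1, differentiate the geometric series \sum_{n\geq 0}x^n=\frac{1}{1-x}, multiply by x, and set x=1/2.)

\displaystyle{\sum_{n=1}^{\infty}\frac{n}{2^n}=\left.\frac{x}{(1-x)^2}\right|_{x=1/2}=2.}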


Fun With Sums

February 22, 2009

It’s been a while since there has been any math on the blog, so I figured I’d share a recent (trivial) mathematical fact I came upon while passing the time. A less noble goal: I hope some of you will find it interesting enough to think about for a while. In other words, I’m too lazy to keep working on it, but I hope some others will fall into my trap and let me know the answer.

Two Cute Proofs of the Isoperimetric Inequality

May 16, 2008

The blog has been pretty quiet the last few weeks with the usual end-of-term business, research, and A-exams (mine is coming up quite soon). I was looking through some of my notes recently and came upon two very short Fourier-analytic proofs of the isoperimetric inequality. Both proofs are among my all-time favorites: the result is of general interest (though it is subsumed in more general and useful facts), and the proofs are quick and elegant. The two proofs are similar, but the second generates a Poincaré inequality, which is one of the fundamental tools of analysis. Basically, the inequality says that for a function with a derivative, the L^2 norm of the function minus its average value is controlled by the L^2 norm of its derivative.
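To make that concrete, here is the statement on the circle in one common normalization (a sketch of the standard Fourier-side argument; the post’s own proof may be phrased differently). Writing f(x)=\sum_{n\neq 0}a_ne^{inx} for a function f with average value zero, Parseval’s identity gives

\displaystyle{\|f\|_{L^2}^2=\sum_{n\neq 0}|a_n|^2\leq\sum_{n\neq 0}n^2|a_n|^2=\|f'\|_{L^2}^2.}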


Something Certain About Uncertainty

February 26, 2008

A comment on Jim Pivarski’s recent post motivated me to say something about the Heisenberg Uncertainty Principle. Someone asked,

If uncertainty in quantum mechanics comes from (or is inseparable from) quantization, then where does it come from in its mathematical formulation, i.e., in terms of a space and its Fourier transform?

The Heisenberg Uncertainty Principle is a curious fact: it requires no physical intuition whatsoever and yet has profound physical ramifications. It also belongs to the small group of facts that are of both physical and mathematical interest. It is a commonly known fact, important in harmonic analysis, that a function and its Fourier transform cannot both be compactly supported. There are stronger statements of the following flavor: if a function is a narrow spike near a point, then its Fourier transform must be more spread out. The Heisenberg Uncertainty Principle is a quantitative statement of this kind of fact.
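In one common normalization, with \hat{f}(\xi)=\int_{\mathbb{R}}f(x)e^{-2\pi ix\xi}\,dx (the constant below changes with the convention), the quantitative statement reads

\displaystyle{\left(\int_{\mathbb{R}}x^2|f(x)|^2\,dx\right)\left(\int_{\mathbb{R}}\xi^2|\hat{f}(\xi)|^2\,d\xi\right)\geq\frac{\|f\|_{L^2}^4}{16\pi^2}.}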


Odd Sums of Consecutive Odds

February 15, 2008

Oscar Wilde’s character Algernon said in The Importance of Being Earnest, “One must be serious about something, if one is to have any amusement in life.” Of course in Wilde’s typical ironic fashion, Algernon was only referring to his own dedication to frivolous diversions. In that spirit, allow me a few moments to tell a story about one of the odder sums of odd integers I discovered as a kid.

I remember that sometimes when I was bored — most especially during long, bi-weekly car trips with my parents — I would play various games with integers. I have no idea why, but at one point I memorized some huge list of powers of 2 (I can still remember the list from 1 to 65,536). I also computed the squares, cubes, and so forth of most of the smaller integers. As a result, I discovered on my own quite a number of interesting patterns in the integers. I don’t remember most of them, but there is one in particular that has stuck with me through the years.


Singular Integral Operators and Convergence of Fourier Series

February 12, 2008

I’m Peter “Viking” Luthy, a journeyman graduate student at Cornell. I’m an analyst, and my current research goals are in harmonic analysis with applications to and from ergodic theory. Greg asked me to post on occasion and spread my analytic gospel, lest the blog be accused of hypocrisy: this isn’t the Everything-but-Analysis Seminar, after all.

My goal in this post is to go through the initial setup of a deep theorem of Carleson dealing with the convergence of Fourier series on L^p. The theorem is almost universally interesting in and of itself. Additionally, it gives ample reason why people (myself included) care about objects called singular integral operators. It will also provide some impetus for future posts, particularly one which will outline a famous construction of Fefferman and give some reasons why harmonic analysis in higher dimensions is distinctly harder than in dimension 1.
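For reference, the theorem in question: Carleson proved that the Fourier series of any f\in L^2 of the circle converges pointwise almost everywhere, and Hunt later extended this to L^p for 1<p<\infty. In symbols, with S_Nf the N-th partial sum,

\displaystyle{S_Nf(x)=\sum_{|n|\leq N}\hat{f}(n)e^{inx}\longrightarrow f(x)\quad\text{for almost every }x.}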
