Archive for the ‘Guests’ Category

Card Shuffling I

April 19, 2009

Just about anyone interested in mathematics has studied a little probability and probably done some easy analysis of basic card and dice games. Completely off topic for a second: am I the only one who has noticed that basic probability homework exercises are the only situation, aside from funerals, in which anyone will ever use the word “urn”? For whatever reason, probabilists love that word. Anyway, in any real card game the computations tend to get complicated rather quickly, and most people get turned off by the discussion. With some ingenuity, however, one can answer some pretty cool (but initially difficult-seeming) questions without having to go through a lot of tedious computations.

Take card shuffling as an example. In the face of expert card-counters, the natural question for the dealer is how many times he or she has to shuffle the deck before it’s well-mixed. When the dealer is also playing the game — and is a card-counter at the level of a member of the MIT blackjack team, say — the dealer could drastically improve his or her odds by using a shuffling method which seems to mix the deck well but actually does a very poor job of it. At this point the question is slightly ill-posed, as we have no obvious way to interpret the word “mixed,” let alone “well.” In fact, coming up with a mathematical model of what shuffling even means is already fairly difficult. What I’m hoping to do is give a framework which makes the problem more tractable.
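To make the question concrete before any theory, here is a minimal simulation sketch (my own illustration, not the framework developed in the post): it performs Gilbert–Shannon–Reeds riffle shuffles — a standard probabilistic model of how people actually shuffle — and tracks where the original top card lands, one crude probe of how mixed the deck is.

```python
import random
from collections import Counter

def riffle(deck):
    """One Gilbert-Shannon-Reeds riffle: cut the deck at a Binomial(n, 1/2)
    point, then interleave the two piles, dropping a card from a pile with
    probability proportional to that pile's current size."""
    n = len(deck)
    cut = sum(random.random() < 0.5 for _ in range(n))  # Binomial(n, 1/2)
    left, right = deck[:cut], deck[cut:]
    out = []
    while left or right:
        if random.random() < len(left) / (len(left) + len(right)):
            out.append(left.pop(0))
        else:
            out.append(right.pop(0))
    return out

def top_card_distribution(shuffles, trials=20000, n=52):
    """Empirical distribution of where the original top card (card 0)
    ends up after the given number of riffles. For a well-mixed deck
    this should look close to uniform over the n positions."""
    counts = Counter()
    for _ in range(trials):
        deck = list(range(n))
        for _ in range(shuffles):
            deck = riffle(deck)
        counts[deck.index(0)] += 1
    return counts
```

After one riffle the top card is still stuck near the top; only after several riffles does its distribution flatten out — which already hints that a single plausible-looking shuffle mixes very little.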



A Silly Infinite Series

April 5, 2009

A year or two ago, a couple of us were bored and somehow got to thinking about the series



Fun With Sums

February 22, 2009

It’s been a while since there has been any math on the blog, so I figured I’d share a recent (trivial) mathematical fact I came upon while passing the time. A less noble goal is that I hope some of you will find it interesting enough to think about for a while. In other words, I’m too lazy to keep working on it, but I hope some others will fall into my trap and let me know the answer.

Two Cute Proofs of the Isoperimetric Inequality

May 16, 2008

The blog has been pretty quiet the last few weeks with the usual end-of-term business, research, and A-exams (mine is coming up quite soon). I was looking through some of my notes recently and came upon two very short Fourier analysis proofs of the isoperimetric inequality. Both proofs are among my all-time favorites; the result is of general interest (though it is subsumed in more general and useful facts), and the proofs are quick and elegant. The proofs are similar, but the second generates a Poincaré inequality, one of the fundamental tools of analysis — basically, the inequality says that for a suitably differentiable function, the L^2 norm of the function minus its average value is controlled by the L^2 norm of its derivative.
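For periodic functions this Poincaré (Wirtinger) inequality has a one-line Parseval proof, sketched here for concreteness:

```latex
% Poincaré (Wirtinger) inequality on the circle.
% Write f(\theta) = \sum_{n \in \mathbb{Z}} \hat{f}(n) e^{in\theta};
% subtracting the mean \bar{f} = \hat{f}(0) kills the n = 0 term, so
\left\| f - \bar{f} \right\|_{L^2}^2
  = \sum_{n \neq 0} |\hat{f}(n)|^2
  \leq \sum_{n \neq 0} n^2 |\hat{f}(n)|^2
  = \left\| f' \right\|_{L^2}^2,
% using (f')^{\wedge}(n) = i n \, \hat{f}(n). Equality forces
% \hat{f}(n) = 0 for |n| \geq 2.
```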


Something Certain About Uncertainty

February 26, 2008

A comment on Jim Pivarski’s recent post motivated me to say something about the Heisenberg Uncertainty Principle. Someone asked,

If uncertainty in quantum mechanics comes from (or is inseparable from) quantization, then where does it come from in its mathematical formulation, i.e. in terms of a space and its Fourier transform?

The Heisenberg Uncertainty Principle is a curious fact: it requires no physical intuition whatsoever and yet has profound physical ramifications, placing it in a small group of facts that are interesting both physically and mathematically. It is a well-known fact, important in harmonic analysis, that a function and its Fourier transform cannot both be compactly supported. There are stronger statements of the following flavor: if a function is a narrow spike near a point, then its Fourier transform must be spread out. The Heisenberg Uncertainty Principle is a quantitative statement of this kind.
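In its standard quantitative form (with the Fourier transform normalized as \hat{f}(\xi) = \int f(x) e^{-2\pi i x \xi}\,dx; the constant changes with the normalization), the principle reads:

```latex
% Heisenberg uncertainty: for f \in L^2(\mathbb{R}) with \|f\|_2 = 1
% and any centers x_0, \xi_0 \in \mathbb{R},
\left( \int_{\mathbb{R}} (x - x_0)^2 \, |f(x)|^2 \, dx \right)
\left( \int_{\mathbb{R}} (\xi - \xi_0)^2 \, |\hat{f}(\xi)|^2 \, d\xi \right)
\;\geq\; \frac{1}{16 \pi^2},
% with equality exactly for translated, modulated Gaussians.
```

The first factor measures how concentrated f is near x_0; the second, how concentrated its transform is near \xi_0; the inequality says they cannot both be small.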


Quantum Bears

February 17, 2008

Hello all! It has been a very long time since I last wrote; I have been going to and from CERN as we’re preparing for the LHC. I’m in Geneva right now, and I just came back from watching En Pleine Nature (Into the Wild). Without giving away too much plot, this movie contains a bear that did not eat anyone. It made me think of Grizzly Man from a few years ago, in which another bear did. What impressed me most about Grizzly Man is that the probability of being eaten by a bear, should we find ourselves face-to-face, is not a simple 20%. It depends a great deal on who the bear is, what he thinks about humans, how hungry he is, what I smell like, the weather, his mood, etc. There’s a whole space of parameters, and some regions of this space carry probabilities of nearly 100%, others of nearly 0%.

Let’s say we’re doing a scientific study of bears eating people. In our first experiment, we put 100 people in the woods and just count how many get eaten. Then we’d like to get more grant money, so we do a more in-depth analysis by controlling for several variables: some of our volunteers are smeared in honey barbecue sauce, others aren’t. Our sequence of studies slowly sharpens the focus on the bear-eating parameter space, identifying the high-probability regions and the low-probability regions. Where does this process end? If we could do infinitely many studies, would we find that each point is either 100% or 0% (deterministic bears)? Or not (uncertain bears)?

Framing a discussion of quantum mechanics in this way illustrates a feature that is often missed: the connection between quantization and uncertainty. Usually these two topics just fall out of the postulates with little indication that they are related. As it turns out, quantization makes fundamental uncertainty possible.


Odd Sums of Consecutive Odds

February 15, 2008

Oscar Wilde’s character Algernon said in The Importance of Being Earnest, “One must be serious about something, if one is to have any amusement in life.” Of course in Wilde’s typical ironic fashion, Algernon was only referring to his own dedication to frivolous diversions. In that spirit, allow me a few moments to tell a story about one of the odder sums of odd integers I discovered as a kid.

I remember that sometimes when I was bored — most especially during long, bi-weekly car trips with my parents — I would play various games with integers. I have no idea why, but at one point I memorized some huge list of powers of 2 (I can still remember the list from 1 to 65,536). I also computed the squares, cubes, and so forth of most of the smaller integers. As a result, I discovered on my own quite a number of interesting patterns in the integers. I don’t remember most of them, but there is one in particular that has stuck with me through the years.


Singular Integral Operators and Convergence of Fourier Series

February 12, 2008

I’m Peter “Viking” Luthy, a journeyman graduate student at Cornell. I’m an analyst, and my current research goals are in harmonic analysis with applications to and from ergodic theory. To avoid being called a hypocrite, Greg asked me to post on occasion and spread my analytic gospel — this isn’t the Everything-but-Analysis Seminar, after all.

My goal in this post is to go through the initial setup of a deep theorem of Carleson dealing with the convergence of Fourier series on L^p. The theorem is almost universally interesting in and of itself. Additionally, it will give ample reason as to why people — myself included — care about objects called singular integral operators. It will also provide some impetus for future posts, particularly one which will outline a famous construction of Fefferman and give some reasons why harmonic analysis in higher dimensions is distinctly harder than in dimension one.
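For orientation, the statement in question (Carleson’s theorem for p = 2, extended by Hunt to all 1 < p < ∞) can be phrased as follows:

```latex
% Carleson-Hunt: for f \in L^p(\mathbb{T}), 1 < p < \infty,
% the partial sums of the Fourier series
S_N f(\theta) = \sum_{|n| \leq N} \hat{f}(n) \, e^{in\theta}
% converge to f(\theta) for almost every \theta. The proof runs
% through L^p bounds for the Carleson maximal operator
\mathcal{C} f(\theta) = \sup_{N \geq 0} \left| S_N f(\theta) \right|,
% and controlling this operator is where singular integral
% operators enter the picture.
```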


Dunkl Operators

December 15, 2007

This post is basically a write-up of notes for a talk I gave for the Olivetti club, the weekly Cornell grad student talk (of course, the post has pretty much everything I wanted to say, while my talk unsurprisingly did not). In 1989 Dunkl introduced a commuting family of operators which is a deformation of the family of directional derivative operators. More specifically, given a finite reflection group W acting on a vector space V, the choice of one complex parameter for each conjugacy class in W determines a family of commuting linear operators on \mathbb{C}[V], and if all the parameters are chosen to be 0 then this family is just the family of directional derivative operators. For almost all parameter choices, this family is surprisingly well-behaved, and many constructions involving directional derivatives can be extended, including the Fourier transform, the heat equation, and the wave equation. As far as I can tell, these operators are pretty mysterious. From looking at the formula, it’s not even obvious they should commute, and further properties are even more surprising. In this post I’ll tell you some of the surprising things about them, but I unfortunately won’t be able to say much about why they exist.
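For those who want to see the formula being alluded to, here is the definition as I recall it (conventions for the parameters vary by author):

```latex
% The Dunkl operator attached to a direction \xi \in V:
% for f \in \mathbb{C}[V],
T_{\xi} f(x) = \partial_{\xi} f(x)
  + \sum_{\alpha \in R_{+}} k_{\alpha} \, \langle \alpha, \xi \rangle \,
    \frac{f(x) - f(\sigma_{\alpha} x)}{\langle \alpha, x \rangle},
% where R_{+} is a set of positive roots for the reflection group W,
% \sigma_{\alpha} is the reflection through the hyperplane
% \alpha^{\perp}, and the parameter k_{\alpha} depends only on the
% conjugacy class of \sigma_{\alpha} in W. Setting every
% k_{\alpha} = 0 recovers the directional derivative \partial_{\xi}.
```

The difference quotients (f(x) − f(σ_α x))/⟨α, x⟩ look like they should obstruct commutativity, which is exactly why the fact that the T_ξ commute is surprising.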

G-equivariant embeddings of manifolds

October 24, 2007

This is my first post, and I plan on sporadically writing some in the future. I’m Peter, a third-year grad student at Cornell, and I talk to Greg pretty often, so I thought I’d write down some of the things I say. This first post won’t be long or deep, but it’s kind of cute, and the trick behind it is useful in many other situations, so I decided to share it.

Let’s say we have a finite group G acting on a compact manifold M. The Whitney embedding theorem says that we can embed M into \mathbb{R}^k for sufficiently large k, and what I want to show in this post is that you can do this in a G-equivariant way, i.e. there is an embedding \phi:M \to \mathbb{R}^k and an injective homomorphism f:G \to O(\mathbb{R}^k) such that \phi(g\cdot x) = f(g)\cdot \phi(x). I guess the moral of the story is that a compact manifold is really just a “nice” subset of Euclidean space, and a compact manifold with a finite group action is really nothing but a “nice” subset of Euclidean space that is preserved by the action of a finite group of permutation matrices. If one were to summarize the moral of the trick used, one might say “averaging over the elements of a group makes things equivariant,” and this idea comes up almost uncountably many times in many areas of mathematics (of course, averaging takes on different meanings in different situations).
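One standard way this trick plays out for embeddings (a sketch of the construction, not necessarily the post’s exact argument) is to stack the translates of an embedding rather than average them:

```latex
% Start from any Whitney embedding \phi_0 : M \to \mathbb{R}^k and define
\Phi : M \to (\mathbb{R}^k)^{|G|}, \qquad
\Phi(x) = \big( \phi_0(g^{-1} \cdot x) \big)_{g \in G},
% where G acts on (\mathbb{R}^k)^{|G|} by permuting the blocks:
(h \cdot v)_{g} = v_{h^{-1} g}.
% Then \Phi(h \cdot x)_{g} = \phi_0(g^{-1} h \cdot x)
%                          = (h \cdot \Phi(x))_{g},
% so \Phi is equivariant; it is an embedding because its g = e block
% is \phi_0, which already is one. The block permutations act on
% \mathbb{R}^{k|G|} by permutation matrices, which are orthogonal.
```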