I am now 8 days into teaching a six-week, every-morning-at-8:30 summer intro calculus class. In order to (1) make all the material fit into such a short time, (2) leverage the students' (theoretically) good grasp of algebraic manipulation and (3) because I visualize infinitesimals more often than limits, I decided to teach as much of the course as possible using a somewhat ill-defined version of the real numbers that includes infinitesimals. In particular, the students have been working over the ring $\mathbb{R}[dx]$ where $dx^2 = 0$. We additionally extend the ordering on $\mathbb{R}$ to $\mathbb{R}[dx]$ by defining $0 < dx < r$ for any positive real $r$ (this is vital if we want to extend piecewise-defined real functions to $\mathbb{R}[dx]$, since we need the order predicates to extend).

These nilsquare infinitesimals lead to some really nice calculations once you get used to them. For example, here is one way to find the derivative of $\sqrt{x}$ directly: $\sqrt{x + dx} = \sqrt{x} + c\,dx$ for some finite $c$. Squaring both sides gives $x + dx = x + 2c\sqrt{x}\,dx$, so by equating infinitesimals we find that $c = \frac{1}{2\sqrt{x}}$.
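This arithmetic is easy to mechanize if you want to play with it: $\mathbb{R}[dx]$ is just the ring of dual numbers, and a few lines of Python reproduce the $\sqrt{x}$ computation above. (The `Dual` class below is a throwaway sketch of my own, not any standard library.)

```python
import math

class Dual:
    """An element a + b*dx of R[dx], where dx**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1 dx)(a2 + b2 dx) = a1*a2 + (a1*b2 + a2*b1) dx, since dx^2 = 0
        return Dual(self.a * other.a, self.a * other.b + other.a * self.b)
    __rmul__ = __mul__

    def sqrt(self):
        # sqrt(a + b dx) = sqrt(a) + (b / (2 sqrt(a))) dx — the calculation above
        r = math.sqrt(self.a)
        return Dual(r, self.b / (2 * r))

x = Dual(4.0, 1.0)    # the point 4 + dx
print(x.sqrt().b)     # coefficient of dx, i.e. the derivative of sqrt at 4: 0.25
```

The coefficient of $dx$ in the output is exactly $\frac{1}{2\sqrt{4}} = 0.25$, with no limits in sight.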

But what I want to talk about is the obvious but puzzling connection to Greg's notion of complexes as $k[d]/(d^2)$-modules. Now, I thought I understood calculus. I thought I understood complexes. But for the life of me, I can't figure out how to think about complexes as "vector spaces with infinitesimals", which is to say $k[d]/(d^2)$-modules. What the heck is going on here, morally speaking?

A few more points relating all this to Greg's posts on complexes as $k[d]/(d^2)$-modules: suppose we adjoin two nilsquare infinitesimals $dx$ and $dy$ to $\mathbb{R}$ and specify no relations between them. Then the sum $dx + dy$ is not nilsquare: its square is $dx\,dy + dy\,dx = 2\,dx\,dy$. So if we want to adjoin two or more nilsquare infinitesimals and keep out any higher order (nilcube, etc.) infinitesimals, we also need the anticommutation relations $dy\,dx = -dx\,dy$. Note the relationship between this and Greg's definition of bicomplexes. I reiterate: what the heck is going on here?
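The anticommutation claim can also be checked mechanically. Here is a sketch of the four-dimensional algebra with basis $1, dx, dy, dx\,dy$ and relations $dx^2 = dy^2 = 0$, $dy\,dx = -dx\,dy$ (the dict-of-coefficients representation is an ad hoc choice of mine):

```python
# Elements of the algebra generated by dx, dy with dx^2 = dy^2 = 0 and
# dy*dx = -dx*dy, written as dicts {basis word: coefficient} over the
# basis 1, dx, dy, dxdy.

def mul(u, v):
    """Multiply two elements of the algebra."""
    table = {
        ('1', '1'): ('1', 1),    ('1', 'dx'): ('dx', 1),
        ('1', 'dy'): ('dy', 1),  ('1', 'dxdy'): ('dxdy', 1),
        ('dx', '1'): ('dx', 1),  ('dx', 'dy'): ('dxdy', 1),
        ('dy', '1'): ('dy', 1),  ('dy', 'dx'): ('dxdy', -1),  # dy*dx = -dx*dy
        ('dxdy', '1'): ('dxdy', 1),
        # every product not listed (dx*dx, dy*dy, anything hitting a
        # repeated generator) is zero
    }
    out = {}
    for a, ca in u.items():
        for b, cb in v.items():
            if (a, b) in table:
                w, s = table[(a, b)]
                out[w] = out.get(w, 0) + s * ca * cb
    return {w: c for w, c in out.items() if c != 0}

s = {'dx': 1, 'dy': 1}    # the sum dx + dy
print(mul(s, s))          # {} — with anticommutation, the square vanishes
```

Drop the minus sign in the `('dy', 'dx')` entry and the square comes out as $2\,dx\,dy$ instead, which is exactly the problem described above.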


This entry was posted on July 4, 2007 at 1:16 am and is filed under Basic Grad Student, High School, Matt, Undergraduate.

July 4, 2007 at 6:33 pm

I’ve thought about this a bit, and I don’t have a concrete answer, but I suspect that the answer is terribly interesting. So, I think a useful thing to think about is a chain complex with just two non-zero terms, $V_1 \xrightarrow{d} V_0$. One can think of this as the vector space $V_0$, together with an infinitely small vector space around every point. Then, multiplication by $d$ takes $V_1$ and shrinks it down to infinitesimal scale and tries to stick it into $V_0$. Not all dimensions are necessarily present, so it throws some of them out, and sticks the rest in $V_0$.
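Concretely, a two-term complex becomes a $k[d]/(d^2)$-module by letting $d$ act on $V_1 \oplus V_0$ as a block matrix with the differential in the corner. A quick sketch (the particular map $D$ below is made up, chosen so one direction of $V_1$ gets thrown out):

```python
import numpy as np

# A two-term complex D : V_1 -> V_0 viewed as a k[d]/(d^2)-module:
# the module is V_1 (+) V_0, and d acts by the block matrix [[0, 0], [D, 0]].

D = np.array([[1., 0., 0.],
              [0., 1., 0.]])          # V_1 = k^3, V_0 = k^2, rank 2

dim1, dim0 = D.shape[1], D.shape[0]
d = np.block([[np.zeros((dim1, dim1)), np.zeros((dim1, dim0))],
              [D,                      np.zeros((dim0, dim0))]])

print(np.allclose(d @ d, 0))          # True: d^2 = 0, so this is a k[d]/(d^2)-module
print(np.linalg.matrix_rank(D))       # 2: one of the three directions of V_1 is killed
```

The nilpotence $d^2 = 0$ is automatic from the block shape, which is the algebraic shadow of "you can only shrink to infinitesimal scale once."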

Ok, so why is this a thing to look at? I think this comes up if you want to talk about the ‘equivariant tangent space’ to a manifold with a Lie group acting on it. This is a rather mysterious idea that I have been fruitlessly pursuing the last several days, but I have some idea how it should behave.

So, think about a situation like a Lie group $G$ acting on itself. The quotient space is well-behaved; it is a point. Therefore, we would like to say that $G$ has an equivariant tangent space of dimension 0. What seems to want to happen in general is that we take the manifold, count the dimension of its tangent space, and then subtract the number of those directions that stay in the same orbit. Effectively, the dimension of the tangent space is (# of total directions) − (# of internal directions).

But if we want, we can try to finagle it so that there are more internal directions than total directions, by having a large Lie group act trivially. Take, for instance, a one-dimensional Lie group acting trivially on a point. The tangent space is zero-dimensional, but it still has one ‘internal’ direction to be subtracted off, meaning it has a −1-dimensional tangent space!

A lot of this stuff is still rather mysterious to me, but I think if you take this concept of a vector space with a second infinitesimal vector space around every point, you can make tangent spaces to quotient spaces look like them. You take a copy of the regular tangent space in degree 0, and a copy of the Lie algebra in degree 1, with the boundary map connecting the two. Then the Euler characteristic becomes the ‘dimension’ of the tangent space.

But why should ‘internal directions’ be the same as ‘infinitesimal directions’? There is a lot here I would like to get to the bottom of.

Perhaps the most high-concept problem here is trying to make this work for longer complexes. There’s a philosophy of Kontsevich, Kapranov and maybe Drinfeld relevant here. If one considers the stack of all vector bundles on a given variety (it is likely best to imagine here that a stack is nothing but a variety with a group variety acting on it, i.e. an orbi-variety), the dimension of the tangent space is the difference of the first and second Betti numbers of some homology theory I forget right now. In the case of curves, all higher Betti numbers vanish, so this quantity is the Euler characteristic, and constant throughout the stack. In the case of higher-dimensional varieties, the dimension jumps around and so the stack is very much not smooth. The smart people I listed above say that this points to the fact that the ‘correct’ tangent space is measured by the whole complex of the homology theory, which would make the dimension of the tangent space constant throughout. This is the starting point of the notion of Derived Algebraic Geometry, which I desperately want to learn.

July 8, 2007 at 2:26 pm

Working over the ring R[dx] seems like a nice trick. I’m a probabilist, and I recently found myself wondering: say a baseball team has probability p of winning a given game, what’s the chance of them winning a best-of-seven series? That led to this post in my blog; since teams are likely to be evenly matched, I let p = 1/2 + ε where ε is small; ε plays the role of your dx.
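The ε trick carries the whole computation: working in $\mathbb{Q}[\varepsilon]/(\varepsilon^2)$, the series-win probability comes out as $\frac{1}{2} + \frac{35}{16}\varepsilon$ to first order. A sketch with exact rational arithmetic (the pair representation $(a, b)$ for $a + b\varepsilon$ is my own ad hoc encoding):

```python
from fractions import Fraction
from math import comb

# Work in Q[eps]/(eps^2): a pair (a, b) stands for a + b*eps, eps^2 = 0.

def mul(u, v):
    return (u[0] * v[0], u[0] * v[1] + u[1] * v[0])

def power(u, n):
    out = (Fraction(1), Fraction(0))
    for _ in range(n):
        out = mul(out, u)
    return out

def series_win_prob(p):
    # Win a best-of-seven: the last game won is the team's 4th,
    # preceded by j = 0..3 opponent wins.
    q = (1 - p[0], -p[1])                       # 1 - p in Q[eps]/(eps^2)
    total = (Fraction(0), Fraction(0))
    for j in range(4):
        term = mul(power(p, 4), power(q, j))
        c = comb(3 + j, j)
        total = (total[0] + c * term[0], total[1] + c * term[1])
    return total

a, b = series_win_prob((Fraction(1, 2), Fraction(1)))   # p = 1/2 + eps
print(a, b)   # 1/2 35/16, i.e. P(series) = 1/2 + (35/16) eps to first order
```

So a team that is ε better per game is about $\frac{35}{16}\varepsilon \approx 2.2\varepsilon$ better over a best-of-seven, which is the first-order answer the ε substitution is after.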

I’m not so confident that it would be possible to teach calculus the way you’re doing, though; as you point out, it assumes a facility with algebraic manipulation that often seems to be lacking.