It's a bit embarrassing for me, but I am only now really getting around to learning the theory of D-modules. Given that my research area is Noncommutative Geometry, Homological Algebra, or maybe Representation Theory depending on when you ask me, this just shouldn't be. I made slow progress through Bernstein's notes last week, and then I found Schneiders' notes a few days ago. They make a good complement to Bernstein's notes; where Bernstein is good about mentioning the underlying ways you should think about various tools, Schneiders is good about saying things clearly and writing out proofs. I tend to prefer the former kind of paper, until the moment I get lost and the lack of nitty-gritty to pore over trips me up.

Anywho, I am fond of explaining things as a method of solidifying thoughts in my head and turning vague concepts into concrete statements (one motivation for having the blog). It feels a little silly talking about something that I have *already* linked to better resources for, but I have never let that stop me in the past. Besides, Matt requested it, and I think it would be good to foster a bit of discussion, since one key factor in understanding why D-modules are cool is understanding how they connect to far-flung reaches of math. For more D-module stuff, especially a lengthy list of why mathematicians care about them, see the Secret Blogging Seminar’s pair of posts on the subject, especially the comments on the first post.

Today, I will define D-modules, and show how certain D-modules can describe solutions to various homogeneous and inhomogeneous partial differential equations.

Let $D$ be the **ring of regular differential operators** on $\mathbb{C}^n$. This is the ring of operators generated by

(1) multiplication by a polynomial function on $\mathbb{C}^n$, and

(2) partial differentiation with respect to one of the coordinate variables.

In this specific case, this ring is sometimes called the **n-th Weyl algebra**.
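As a quick sanity check (my addition, using sympy; the names `X` and `Dx` are ad hoc), the defining relation of the first Weyl algebra, $[\partial, x] = 1$, can be verified by applying both sides of the commutator to a polynomial:

```python
import sympy as sp

x = sp.symbols('x')

# The two generators of the first Weyl algebra A_1, acting on C[x]:
# X is multiplication by x, Dx is differentiation d/dx.
def X(f):
    return x * f

def Dx(f):
    return sp.diff(f, x)

# The relation [Dx, X] = 1: the commutator acts as the identity operator.
f = x**3 + 2*x + 5
commutator = sp.expand(Dx(X(f)) - X(Dx(f)))
print(commutator)  # x**3 + 2*x + 5, i.e. f itself
```

Repeatedly applying this relation is also how one puts an arbitrary element of the Weyl algebra into the normal form $\sum c_{\alpha\beta}\, x^\alpha \partial^\beta$.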

Notice that I have not specified which space of functions I am thinking of $D$ as acting on. A space with a prescribed action of $D$ is called a (left) **D-module**. These could be polynomials, sections of a holomorphic vector bundle with flat connection, compactly supported smooth functions, or whatever.

Suppose that I fix such a space of functions $M$, and further suppose that there is a homogeneous partial differential equation of interest in this space. A PDE in $M$ looks like a series of equations of the form

$\sum_{j=1}^{s} P_{ij} f_j = 0, \qquad i = 1, \dots, r,$

for fixed $P_{ij} \in D$ and indeterminate $f_j \in M$. The goal is then to find out which tuples $(f_1, \dots, f_s)$ satisfy these equations. Denote the space of such solutions as $Sol(M)$.

What do we know about a solution $(f_1, \dots, f_s)$? Well, I can talk about $\sum_j Q_j f_j$, a linear combination of differential operators acting on the functions in my solution. However, in general I know nothing about this linear combination, *unless* $(Q_1, \dots, Q_s)$ happens to be one of the defining equations, in which case it must vanish. Categorically, I have described a morphism of D-modules,

$D^s \longrightarrow M$,

which sends $(Q_1, \dots, Q_s)$ to $\sum_j Q_j f_j$. Since it was determined by a solution to the PDE, I know that the kernel contains any multiples of the defining equations. This means that the map

$D^r \longrightarrow D^s$ given by premultiplication by the matrix $(P_{ij})$ is killed by the map to $M$.

Ah, but this characterization of solutions is sharp, because any map from $D^s$ to $M$ which kills the image of $D^r$ determines a solution of the PDE. To see this, notice that if I have such a map $\phi$, then $(\phi(e_1), \dots, \phi(e_s))$ satisfies the PDE, where $e_j$ is the $j$-th standard basis element of $D^s$. Since two distinct solutions will give different maps, the solution space is exactly the set of maps $D^s \to M$ which kill the map from $D^r$.

This has a cleaner statement. The category of D-modules is an abelian category (it is the category of modules over a ring), and so the cokernel of any map exists. Let $\mathcal{M}$ denote the cokernel of the map $D^r \to D^s$. By the definition of the cokernel, the maps $D^s \to M$ which kill the image of $D^r$ are exactly those which factor through $\mathcal{M}$, so $Sol(M) = \mathrm{Hom}_D(\mathcal{M}, M)$.
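As a tiny worked example (my addition, using the one-variable Weyl algebra): take the single equation $\partial f = 0$, so $r = s = 1$ and the matrix of operators is just $P = \partial$.

```latex
% The map D \to D is right multiplication by \partial, so
\mathcal{M} \;=\; \operatorname{coker}\bigl(D \xrightarrow{\;\cdot\,\partial\;} D\bigr) \;=\; D/D\partial .
% A D-module map \phi : D/D\partial \to M is determined by f = \phi(1),
% subject only to the relation \partial f = 0, so
Sol(M) \;=\; \mathrm{Hom}_D(D/D\partial,\, M) \;=\; \{\, f \in M : \partial f = 0 \,\}.
```

For $M = C^\infty(\mathbb{R})$ or $M = \mathbb{C}[x]$, this recovers the constant functions, as expected.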

However, the construction of $\mathcal{M}$ had nothing to do with our specific choice of $M$; it only depended on the differential operators $P_{ij}$. Therefore, if we can find $\mathcal{M}$, we can solve our PDE in all D-modules at the same time, by just looking at maps from $\mathcal{M}$ into any given D-module. We say that $\mathcal{M}$ **represents** the functor that sends a D-module $M$ to its set of solutions $Sol(M)$.

As Schneiders points out, this is a strong hint that $\mathcal{M}$ is better to look at than the matrix of operators $(P_{ij})$. This is because two different matrices $(P_{ij})$ and $(P'_{ij})$ can determine the same set of solutions in every space, but then their corresponding D-modules $\mathcal{M}$ and $\mathcal{M}'$ must be isomorphic (by the Yoneda lemma).

The next question I'd like to answer is about solving inhomogeneous PDEs. An inhomogeneous PDE in $M$ is given by differential operators $P_{ij}$ and fixed functions $g_i \in M$, with the goal being to find $f_j$ such that

$\sum_{j} P_{ij} f_j = g_i, \qquad i = 1, \dots, r$.

Notice that if I have two solutions $(f_j)$ and $(f'_j)$ to the PDE, their difference solves the corresponding homogeneous PDE,

$\sum_j P_{ij}(f_j - f'_j) = 0$,

and so if one solution exists, I get all the rest by solving the homogeneous PDE (one says that the solution space is a torsor over $Sol(M)$). Thus, the question of importance is not what the space of solutions looks like, but whether or not there are any solutions at all.

First, suppose I can find differential operators $Q_1, \dots, Q_r$ such that $\sum_i Q_i P_{ij} = 0$ for every $j$. Such a collection of $Q_i$ determines an element in the kernel of the map from $D^r$ to $D^s$ (the most excellent map from before). Call such an element an **algebraic compatibility condition**.

Applying each $Q_i$ to the corresponding equation and summing, I get

$\sum_i Q_i g_i = \sum_{i,j} Q_i P_{ij} f_j = 0.$

Hence, for the PDE to have any hope of having solutions, the $g_i$ must satisfy all of the algebraic compatibility conditions; I will call such a choice of $g_i$ **algebraically eligible**.
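The classic instance of this (a standard example I'm adding, not from the post) is the gradient system $\partial_x f = g_1$, $\partial_y f = g_2$ on $\mathbb{R}^2$: the pair $(Q_1, Q_2) = (\partial_y, -\partial_x)$ kills the column of operators $(\partial_x, \partial_y)$, since $\partial_y \partial_x - \partial_x \partial_y = 0$, and the resulting compatibility condition is $\partial_y g_1 - \partial_x g_2 = 0$. A quick eligibility check in sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Inhomogeneous system: df/dx = g1, df/dy = g2.
# The compatibility condition from (Q1, Q2) = (d/dy, -d/dx) says
# that d(g1)/dy - d(g2)/dx must vanish.
def algebraically_eligible(g1, g2):
    return sp.simplify(sp.diff(g1, y) - sp.diff(g2, x)) == 0

# An eligible right-hand side: the gradient of f = x**2 * y.
print(algebraically_eligible(2*x*y, x**2))   # True

# An ineligible one: no f can satisfy df/dx = y, df/dy = -x.
print(algebraically_eligible(y, -x))         # False
```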

However, these are just homogeneous PDEs, which we just learned how to 'solve' (rather, how to characterize solutions to). Using similar logic from before, we see that $\mathrm{Hom}_D(\mathcal{I}, M)$ is the space of algebraically eligible choices of $g_i$, where $\mathcal{I}$ is the cokernel of the inclusion map from the compatibility conditions into $D^r$. Of course, this means that $\mathcal{I}$ is the image of the map $D^r \to D^s$. As such, it fits into an exact sequence of D-modules

$0 \to \mathcal{I} \to D^s \to \mathcal{M} \to 0$.

Applying the functor $\mathrm{Hom}_D(-, M)$ to this sequence yields the following long exact sequence of derived functors:

$0 \to \mathrm{Hom}_D(\mathcal{M}, M) \to \mathrm{Hom}_D(D^s, M) \to \mathrm{Hom}_D(\mathcal{I}, M) \to \mathrm{Ext}^1_D(\mathcal{M}, M) \to 0$

(the sequence stops here because $D^s$ is free, so $\mathrm{Ext}^1_D(D^s, M) = 0$).

The first (non-trivial) arrow takes homogeneous solutions to themselves inside $\mathrm{Hom}_D(D^s, M) \cong M^s$. The second arrow takes a collection of functions $(f_1, \dots, f_s)$ to $\bigl(\sum_j P_{ij} f_j\bigr)_i$, thought of as defining a choice of $g_i$ which is algebraically eligible (this is how we think of elements in $\mathrm{Hom}_D(\mathcal{I}, M)$). Therefore, $\mathrm{Ext}^1_D(\mathcal{M}, M)$ is the group of algebraically eligible choices of $g_i$ modulo those which actually have solutions.

Thus, if I have an inhomogeneous PDE, I first check whether the choice of $g_i$ is algebraically eligible. Then, I find its image in $\mathrm{Ext}^1_D(\mathcal{M}, M)$ via the above map, and if it vanishes, then I know my inhomogeneous PDE has a solution.
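To see the obstruction group doing real work (again my addition): take the gradient system on the punctured plane, with $M = C^\infty(\mathbb{R}^2 \setminus \{0\})$. The right-hand side $g = \bigl(-y/(x^2+y^2),\; x/(x^2+y^2)\bigr)$ is algebraically eligible, yet it is a standard fact that no single-valued smooth $f$ has this gradient: locally $f$ is the angle function $\arctan(y/x)$, which jumps by $2\pi$ around the origin. Its class in $\mathrm{Ext}^1$ is therefore nonzero; eligibility is necessary but not sufficient. sympy can at least verify the eligibility half:

```python
import sympy as sp

x, y = sp.symbols('x y')

# The "angle form" right-hand side for df/dx = g1, df/dy = g2 on the
# punctured plane: algebraically eligible, but with no global solution f.
g1 = -y / (x**2 + y**2)
g2 = x / (x**2 + y**2)

# Algebraic compatibility: d(g1)/dy - d(g2)/dx must vanish.
obstruction_check = sp.simplify(sp.diff(g1, y) - sp.diff(g2, x))
print(obstruction_check)  # 0, so g is algebraically eligible
```

The failure of this eligible $g$ to have a solution is exactly a nonzero class in $\mathrm{Ext}^1_D(\mathcal{M}, M)$, which here recovers the first de Rham cohomology of the punctured plane.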

This is further evidence that $\mathcal{M}$ is the right guy to look at. Not only can he parameterize solutions to the homogeneous PDE, but he can also detect whether or not a corresponding inhomogeneous PDE has a solution. Hopefully, this was a first step in convincing you that D-modules are a good framework in which to think about problems of this form.

I still have plenty of fun stuff on the to-do list. First, I should sheafify this whole construction so that everything above works for PDEs on arbitrary complex manifolds. I should also show how many fun things in differential geometry can be stated in this language, like connections on a bundle. I also will indulge my algebraic geometry roots and show how you can define differential operators on any scheme. Oh, and of course I should mention important theorems, lest ye think that D-modules are nothing more than a pretty framework in which to state problems.

Tags: math.AC

September 6, 2007 at 9:14 pm |

[…] Over at The Everything Seminar, Greg Muller has a great introductory post about D-modules, which is what representation theory and category theory have to do with partial differential […]

September 7, 2007 at 3:42 pm |

Go for it! I keep meaning to learn more about D-modules and then explain them to the world. You beat me to it.

Will you eventually explain the derived category version of the Riemann-Hilbert correspondence? That’s on my wish list. But never mind – I’ll take what I can get.

September 7, 2007 at 10:10 pm |

You might also find Dragan Milicic’s old notes useful:

http://www.math.utah.edu/~milicic/

Scroll down a bit, and you’ll find them.

September 8, 2007 at 2:32 am |

I really have no plan for how far I am going to go… I would hope to get to the non-derived Riemann-Hilbert correspondence before I peter out; otherwise, this will have been a hollow exercise in finding pretty ways of saying hard problems. I would love to learn and explain the derived version, but that will depend on the ambient interest level and my free time.

September 8, 2007 at 7:17 pm |

Sorry if I’m jumping the gun, but how do you define differential operators on a non-smooth scheme?

September 9, 2007 at 2:44 am |

I know what you are getting at, Scott, and I mention it in the second part of this series. For non-smooth affine schemes, the usual abstract notion of differential operator still works. However, the technique everyone seems to use for localizing differential operators is to do it for derivations and then appeal to smoothness.

In short, I don’t how to make it work, if it can be made to work. However, the idea I had to try (which is too late to pursue at the moment), is the following.

A differential operator $P$ of order $n$ on a commutative ring $A$ defines a linear map from $A^{\otimes n}$ to $A$ by iterated commutators,

$(f_1, \dots, f_n) \mapsto [\cdots[[P, f_1], f_2], \cdots, f_n],$

where each $f_i$ acts by multiplication (each commutator drops the order by one, so the result is an order-zero operator, that is, an element of $A$).

This map (call it $\Phi_P$) has lots of nice properties, including that it is a derivation in each argument. This implies that, if $f$ is some element of $A$ on whose inverse I would like to operate, I have identities like

$\Phi_P(f^{-1}, f_2, \dots, f_n) = -f^{-2}\, \Phi_P(f, f_2, \dots, f_n).$

The question is then, can we use the above data alone to define an action of $P$ on the localization $A_f$?
