## Dunkl Operators

This post is basically a write-up of notes for a talk I gave for the Olivetti club, the weekly Cornell grad student talk (of course, the post has pretty much everything I wanted to say, while my talk unsurprisingly did not). In 1989 Dunkl introduced a commuting family of operators which is a deformation of the family of directional derivative operators. More specifically, given a finite reflection group W acting on a vector space V, the choice of one complex parameter for each conjugacy class in W determines a family of commuting linear operators on $\mathbb{C}[V]$, and if all the parameters are chosen to be 0 then this family is just the family of directional derivative operators. For almost all parameter choices, this family is surprisingly well-behaved, and many constructions involving directional derivatives can be extended, including the Fourier transform, the heat equation, and the wave equation. As far as I can tell, these operators are pretty mysterious. From looking at the formula, it’s not even obvious they should commute, and further properties are even more surprising. In this post I’ll tell you some of the surprising things about them, but I unfortunately won’t be able to say much about why they exist.

## Definitions

I’ll need some notation before I can give the formula for a Dunkl operator. For the whole post we’ll be working in the vector space $V = \mathbb{R}^n$ with some fixed inner product $(\cdot,\cdot)$. Given a vector $\alpha \in V$, we denote the reflection across the hyperplane perpendicular to it by $\sigma_\alpha(x) = x - 2\frac{(\alpha,x)}{\left\vert \alpha\right\vert^2}\alpha$. Now for our purposes, a finite set $R \subset V$ is called a root system if the following two conditions are satisfied for every $\alpha \in R$:

1. $R \cap \mathbb{R}\alpha = \{\pm\alpha\}$
2. $\sigma_\alpha(R) = R$

Now if we pick a generic hyperplane through the origin, it won’t contain any element of $R$, so we can divide $R$ into a disjoint union $R = R^+ \sqcup (-R^+)$ of the roots on either side. Also, it’s a classical fact that the subgroup $G \leq O(V)$ generated by the reflections $\sigma_\alpha$ for $\alpha \in R$ is finite. Now we can define the Dunkl operators. Given a root system $R$, a $G$-invariant function (called the weight function) $k:R \to \mathbb{C}$, and a vector $\xi \in V$, we define $T_{\xi}:\mathbb{C}[V] \to \mathbb{C}[V]$ by

$T_\xi(f)(x) = \partial_\xi (f)(x) + \sum_{\alpha \in R^+} k(\alpha) (\alpha , \xi) \frac{f(x) - f(\sigma_\alpha x)}{(\alpha , x)}$.

Notice that if we take the Taylor expansion of $f - f \circ \sigma_\alpha$ in the $\alpha$ direction, there is no constant term, so division by $(\alpha , \cdot)$ is well-defined and makes the operators homogeneous of degree $-1$. This formula also defines linear endomorphisms of the Schwartz space $\mathcal{L} := \{f \in C^\infty(V) \mid \left\| x^\alpha D^\beta f\right\|_{\infty, V} < \infty \text{ for all multi-indices } \alpha, \beta\}$ of rapidly decreasing functions. If the weight function is 0 on all the roots, then this operator is the standard directional derivative. This is one of the strange things, because we are deforming local operators (directional derivatives) by adding nonlocal terms (the divided difference operators). One of the things I would love to know but haven’t seen in any papers is some kind of intuitive reason why this formula should work.
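To get a concrete feel for the formula, here’s a quick symbolic sketch (my own, not from any of the references) of the rank-one case: $V = \mathbb{R}$ with root system $\{\pm\alpha\}$, where the definition collapses to $Tf(x) = f'(x) + k\frac{f(x)-f(-x)}{x}$ independent of the normalization of $\alpha$. I’m using sympy, and the sample polynomials are arbitrary:

```python
import sympy as sp

x, k = sp.symbols('x k')

def T(f):
    """Rank-one Dunkl operator for the root system {+-alpha} on R:
    T f = f' + k * (f(x) - f(-x)) / x."""
    return sp.expand(sp.diff(f, x) + k * sp.cancel((f - f.subs(x, -x)) / x))

print(T(x**4))             # equals 4*x**3: even degrees see only the derivative
print(T(x**5))             # equals (5 + 2*k)*x**4: odd degrees pick up 2k
print(T(x**5).subs(k, 0))  # equals 5*x**4: k = 0 recovers d/dx
```

The odd-degree monomials pick up the $2k$ correction, the even-degree ones don’t, and either way the degree drops by one, as claimed.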

One of the first things proved about these operators is that for a fixed choice of $k:R \to \mathbb{C}$, the Dunkl operators commute, i.e. $T_\xi \circ T_\eta = T_\eta \circ T_\xi$. This is trickier than it might seem, and the original proof is in [D]. The commutativity allows us to use the Dunkl operators to deform the de Rham complex. Given an orthonormal basis $\{e_i\}_{i=1}^N$ of $V$, let $K^l = \mathbb{C}[V] \otimes_{\mathbb{C}}(\bigwedge^l V^*)$ and define $d:K^l \to K^{l+1}$ by $d(p \otimes \omega) = \sum_{j=1}^N(T_{e_j} p)\otimes (e_j^*\wedge\omega)$. The commutativity immediately implies that $d^2 = 0$, so $d$ is a deformed differential for the de Rham complex, and as usual if $k = 0$ then this is the standard differential. One might wonder if the (co)homology of the complex is the same, and the answer is almost always yes, as the next theorem demonstrates.
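Commutativity can at least be spot checked by computer. Below is a small sympy experiment (my own setup, and a check on one sample polynomial rather than a proof) with the $B_2$ root system, which has two conjugacy classes of roots and hence two independent weights:

```python
import sympy as sp

x, y, k1, k2 = sp.symbols('x y k1 k2')
X = sp.Matrix([x, y])

# Positive roots of B2, grouped by conjugacy class: the short roots
# e1, e2 get weight k1, the long roots e1 +- e2 get weight k2.
Rplus = [(sp.Matrix([1, 0]), k1), (sp.Matrix([0, 1]), k1),
         (sp.Matrix([1, 1]), k2), (sp.Matrix([1, -1]), k2)]

def reflect(f, a):
    """Substitute sigma_a(v) = v - 2 (a,v)/(a,a) a into f."""
    v = X - 2*(a.dot(X))/(a.dot(a))*a
    return f.subs({x: v[0], y: v[1]}, simultaneous=True)

def T(xi, f):
    """Dunkl operator T_xi for B2 with weights (k1, k2)."""
    out = xi[0]*sp.diff(f, x) + xi[1]*sp.diff(f, y)
    for a, ka in Rplus:
        # the divided difference divides exactly, so cancel() clears it
        out += ka * a.dot(xi) * sp.cancel((f - reflect(f, a)) / a.dot(X))
    return sp.expand(out)

e1, e2 = sp.Matrix([1, 0]), sp.Matrix([0, 1])
f = x**3*y + x*y**2 + y**4   # an arbitrary test polynomial
comm = sp.expand(T(e1, T(e2, f)) - T(e2, T(e1, f)))
print(comm)   # 0 -- the two operators commute on this sample
```

Note the weights really do need to be constant on conjugacy classes; that $G$-invariance is an ingredient in Dunkl’s proof, not a cosmetic convention.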

## Intertwining Operator

A given weight function $k:R \to \mathbb{C}$ is called regular if the Dunkl operators are all simultaneously 0 only on the constant functions, i.e. $\cap_{\xi \in V} \ker_{\mathbb{C}[V]}(T_\xi) = \mathbb{C}$. A weight function that isn’t regular is called singular. Then the following are equivalent ([DJO]):

1. $k$ is regular
2. There is a unique linear operator $V_k:\mathbb{C}[V] \to \mathbb{C}[V]$ which is an isomorphism, preserves degrees of homogeneity, fixes the constant functions, and satisfies the intertwining relation $T_\xi\circ V_k = V_k \circ \partial_\xi$ for every $\xi \in V$.
3. $H^i(K^\bullet,d) = 0$ if $i > 0$, where $d$ is the deformed differential of the de Rham complex
4. $H^0(K^\bullet,d) \cong \mathbb{C}$

In particular, the deformed de Rham differential gives the same cohomology as the standard one if and only if the parameter choice is regular. (Another theorem in [DJO] says that if the weight function is singular then it takes negative rational values, so the singular set is small.) The proof of this theorem is reasonably readable and mainly involves linear algebra (except for a page of computation).

At this point there is a choice to be made – one can study algebraic properties of these deformations, which would lead to Cherednik algebras, among other things, or one can study analytic properties. For the rest of this post I’ll follow the latter path (which may be dangerous since I don’t know much analysis), but there are many interesting results in both directions. In either case, the intertwining operator plays a crucial role. However, it is also one of the difficulties in working with concrete examples, because it is defined inductively and quickly becomes difficult to compute with.

So far we’ve only defined the intertwining operator for polynomials, but to do analysis we really want to study the completion of the polynomials under some norm. We define $\left\|p\right\|_r = \sum_{n=0}^\infty \left\|p_n\right\|_{\infty,B_r}$ where $p_n$ is the homogeneous component of degree $n$ and $B_r$ is the ball of radius $r$. Also, let $A_r$ be the completion of the polynomials under this norm. Then another theorem of Dunkl states that $\left\| V_k p\right\|_r \leq \left\| p \right\|_r$, so the intertwining operator extends uniquely to a bounded linear operator on $A_r$ via the formula $V_kf := \sum_n V_kf_n$. Now we can define the Dunkl kernel, which is a deformation of the exponential function. Let $E_k(x,y) = V_k(e^{(\cdot,y)})(x)$, which is well-defined since $x\mapsto e^{(x,y)} \in A_r$. Then $f = E_k(\cdot,y)$ is a solution to the equation $T_\xi f = (\xi,y) f$, which generalizes the one dimensional equation $f' = cf$ with solution $f(x) = e^{cx}$. This hints that we might be able to use the Dunkl kernel to generalize the Fourier transform.
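In rank one the intertwining operator can actually be computed: $T$ lowers degree by one and sends even monomials to odd ones and vice versa, so $V_k$ acts diagonally on monomials, $V_k x^n = c_n x^n$, and the intertwining relation forces a simple recursion on the $c_n$. Here’s a sympy sketch (the recursion is my own derivation, so treat this as an illustration rather than a quotation from the literature) verifying $T \circ V_k = V_k \circ \partial$ on a sample polynomial:

```python
import sympy as sp

x, k = sp.symbols('x k')

def T(f):
    # rank-one Dunkl operator, as before
    return sp.expand(sp.diff(f, x) + k*sp.cancel((f - f.subs(x, -x))/x))

# T x^n = n x^{n-1} (n even) or (n + 2k) x^{n-1} (n odd), so matching
# T(c_n x^n) = c_{n-1} * d/dx(x^n) gives: c_n = c_{n-1} for even n,
# c_n = c_{n-1} * n/(n + 2k) for odd n, with c_0 = 1.
def c(n):
    out = sp.Integer(1)
    for m in range(1, n + 1):
        if m % 2:
            out *= sp.Integer(m)/(m + 2*k)
    return out

def V(p):
    """Apply V_k monomial by monomial."""
    return sum(coeff * c(int(n)) * x**int(n)
               for (n,), coeff in sp.Poly(sp.expand(p), x).terms())

p = 1 + x + 3*x**2 + x**5            # an arbitrary test polynomial
lhs = T(V(p))                        # T o V_k
rhs = V(sp.diff(p, x))               # V_k o d/dx
print(sp.simplify(lhs - rhs))        # 0 -- the intertwining relation holds
```

Even in this tiny example you can see why $V_k$ is painful in general: the coefficients are rational functions of $k$, and in higher rank there is no diagonal formula to fall back on.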

## One Dimensional Wave Equation

Before I talk about that, I’m going to give a brief overview of the solution to the one dimensional wave equation which uses the Fourier transform. Then I’m going to talk about the “Dunkl-fied” wave equation, which very surprisingly still satisfies the (modified) weak and strong Huygens’ principles. (The basic idea for the 1-D wave equation is to increase dimensions to make it first order in time, Fourier transform the space dimensions to make it an ODE in time, solve using matrix exponentials, then Fourier transform back. If you’re comfortable with this much of an explanation then you can skip down about a page, to the discussion of Huygens’ principle.)

First, the Fourier transform, which intuitively takes a function of space and turns it into a function of frequencies, is defined (in one dimension) as $F(f)(\omega) = \frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}f(x)e^{-i\omega x}dx$. We’re going to use three properties of the Fourier transform, none of which are difficult to verify (modulo convergence issues, which I will mainly ignore because I want to get to the point, which is that everything generalizes, without being too technical).

1. Derivatives transform to products: $F(\partial_x f) = i\omega F(f)$
2. The transform is invertible: $F^{-1}(f) = F(f \circ (x \mapsto -x))$
3. Convolutions map to products (up to a normalization constant, which I’ll ignore):
$F(f \ast g) := F(x \mapsto \int_\mathbb{R}f(t)g(x-t)dt) = F(f)F(g)$
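As a sanity check of the first property (and its sign), here’s a sympy computation with a Gaussian, which is comfortably in the Schwartz space; the helper `F` just encodes the normalization above:

```python
import sympy as sp

x, w = sp.symbols('x omega', real=True)
f = sp.exp(-x**2/2)   # a Gaussian

def F(g):
    """Fourier transform with the 1/sqrt(2 pi) normalization."""
    return sp.simplify(sp.integrate(g*sp.exp(-sp.I*w*x), (x, -sp.oo, sp.oo))
                       / sp.sqrt(2*sp.pi))

lhs = F(sp.diff(f, x))        # transform of the derivative
rhs = sp.I*w*F(f)             # i*omega times the transform
print(sp.simplify(lhs - rhs)) # 0
```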

Now let’s solve the wave equation (which is a model of physical systems where the force is a linear function of displacement) in one dimension. The equation is

1. $\partial_x^2 u(x,t) = \partial_t^2 u(x,t)$ with initial conditions
2. $u(x,0) = f(x)$
3. $\partial_t u(x,0) = g(x)$

This comes directly from Newton’s law that force is proportional to acceleration. Throughout this entire discussion it is probably best (necessary?) to assume that $f,g$ decrease rapidly enough to be in the Schwartz space so that the Fourier transform behaves well. Since there are quite a few equations in the solution, I’m just going to write them all down and then explain some afterwards.
$\begin{array}{lcl} U(x,t) & := & \left(\begin{array}{c} u \\ \partial_t u \end{array}\right) \\ \partial_t U & = & \left(\begin{array}{cc} 0 & 1 \\ \partial_x^2 & 0\end{array}\right) U \\ \partial_t F(U) & = & \left(\begin{array}{cc} 0 & 1 \\ -\omega^2 & 0\end{array}\right) F(U) \\ A & := & \left( \begin{array}{cc} 0 & 1 \\ -\omega^2 & 0\end{array}\right) \\ F(U) & = & e^{tA}F(U(\cdot,0)) \\ U &=& F^{-1}(e^{tA}) \ast \left( \begin{array}{c}f \\ g\end{array}\right) \\ u(x,t) &=& \left( F^{-1}(\cos(t\omega))\ast f\right) (x) + \left( F^{-1}\left(\frac{\sin(t\omega)}{\omega}\right) \ast g\right) (x) \\ u(x,t) &=& \frac{f(x-t) + f(x+t)}{2} + \frac{1}{2}\int_{x-t}^{x+t} g(s)\, ds \end{array}$

The first step is to turn the equation into a first order equation in time by defining $U(x,t)$ to increase the number of dimensions. Then using the first property of the Fourier transform listed above (derivatives transform to multiplication), we can transform with respect to the space variable to turn the equation into a first order ODE in the time variable with no space derivatives involved (it’s an ODE and not a PDE because we can treat the space variables as constant). We can solve this with matrix exponentials and get a solution which is a product of functions of the frequency variable $\omega$. Using the third property above of Fourier analysis, we can invert a product of functions to get a convolution, and evaluating the Fourier inverse we can obtain an exact solution. A subtle point here is that the inverse Fourier transform doesn’t produce a function when you apply it to the matrix exponential, it actually produces a distribution. That is why the evaluation of the convolution in the last step gives the initial condition evaluated at discrete points.
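The whole derivation can be tested numerically: evolve the FFT coefficients exactly as above, then compare against the closed-form (d’Alembert) solution in the last line. This is my own numerical sketch, with a periodic grid standing in for $\mathbb{R}$, so the grid has to be wide enough that nothing wraps around by time $t$:

```python
import numpy as np

# Periodic grid wide enough that the wave does not wrap around by time t
N, L, t = 2048, 40.0, 3.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
w = 2*np.pi*np.fft.fftfreq(N, d=L/N)   # the frequency variable omega

f = np.exp(-x**2)                      # initial displacement u(x, 0)
g = x*np.exp(-x**2)                    # initial velocity  u_t(x, 0)

# Fourier side: u_hat(w, t) = cos(t w) f_hat + (sin(t w)/w) g_hat
sinc = t*np.sinc(t*w/np.pi)            # sin(t w)/w, equal to t at w = 0
u_fft = np.real(np.fft.ifft(np.cos(t*w)*np.fft.fft(f) + sinc*np.fft.fft(g)))

# d'Alembert: u = (f(x-t)+f(x+t))/2 + (1/2) * integral_{x-t}^{x+t} g(s) ds
h = L/N
G = (np.cumsum(g) - g/2)*h             # trapezoid-rule antiderivative of g
fm, fp = np.interp(x - t, x, f), np.interp(x + t, x, f)
Gm, Gp = np.interp(x - t, x, G), np.interp(x + t, x, G)
u_dal = (fm + fp)/2 + (Gp - Gm)/2

print(np.max(np.abs(u_fft - u_dal)))   # small: only discretization error
```

The agreement is limited only by the quadrature and interpolation on the grid, which is a decent check that the signs and the factors of $\frac{1}{2}$ above are right.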

## Huygens’ Principle and Generalized Wave Equation

The wave equation can be generalized to multiple space dimensions fairly easily as follows: $\Delta u(x,t) = \partial_t^2 u(x,t)$ where $x \in V$ and we are now using the Laplacian $\Delta = \sum_{i=1}^N \partial_{e_i}^2$. The solutions to this equation satisfy two very interesting properties, sometimes called the weak and strong Huygens’ principles. The weak Huygens’ principle basically says that waves travel at a finite velocity. More formally, the solution $u(x,t)$ at a particular point only depends on the values of the initial conditions $f(x-y), g(x-y)$ for $\left\|y\right\| \leq \left\vert t \right\vert$. The strong Huygens’ principle says that the waves stay in sharp packets (the inequality becomes an equality), which is a much more restrictive condition. Formally, the solution $u(x,t)$ at a particular point only depends on the values (and derivatives) of the initial conditions $f(x-y),g(x-y)$ for $\left\|y\right\| = \left\vert t \right\vert$. The weak Huygens’ principle is reasonably common (as far as I know) and the wave equation always satisfies it, but the strong Huygens’ principle is quite uncommon. Even the wave equation only satisfies the strong version when the number of space dimensions is odd and at least 3. This is an easily observable physical phenomenon: if you drop a pebble in a pond, the place where you drop the pebble will have waves come back several seconds after you initially drop it (even without reflecting off the boundary of the container). However, if you clap your hands, you hear exactly one impulse (unless the sound echoes off other objects).

What were the (relevant) steps we used to solve this equation?

1. Write down the equation.
2. Apply the Fourier transform.
3. Invert a product.

Now let’s look at the steps we took and try to generalize them. I’ll just say what their generalizations are without motivating why these particular generalizations are the ones we want (this is partly to avoid writing too much detail and partly to avoid learning too much detail). If you want more details, they are contained in [SO].

1. Generalizing the first step isn’t hard, we can just write $\Delta_k := \sum_{i=1}^N T_{e_i}^2$, with the rest of the equation unmodified.
2. Generalizing the Fourier transform is a little more difficult, because the integral is performed with respect to a different measure. Define $\omega_k(x) = \prod_{\alpha \in R^+} \left\vert (\alpha,x)\right\vert^{2k(\alpha)}$ (which is 1 if the reflection group is trivial). Then the deformed Fourier transform (or Dunkl transform) is
$F_k(f)(\xi) = c_k\int_{V}f(x)E_k(-ix,\xi)\omega_k(x) dx$.
Here $c_k$ is a normalization constant. The Dunkl transform retains several properties of the Fourier transform: it is a homeomorphism of the Schwartz space, if a function and its transform are both in $L^1(V,\omega_k(x)dx)$ then $(F^{-1}_k \circ F_k) (f) = f$ almost everywhere, and $F_k$ extends to an isometric isomorphism of $L^2(V,\omega_k(x)dx)$.
3. Now to generalize the third step, we’ll need to figure out how to transform a product of functions. To do this we’ll deform the convolution product. Define $f(x \bullet_k y) = V^x_kV^y_k((V_k^{-1}f)(x-y))$. (Here the superscript refers to the variable the intertwining operator acts on.) Notice that if $k = 0$ then $f(x \bullet_0 y) = f(x-y)$, since in this case $V_k$ is the identity operator. Then define $(f \ast_k g)(x) = \int_V f(y) g(x \bullet_k y) \omega_k(y) dy$. With care one can show that this convolution product behaves similarly to the standard one (in particular, it is commutative). Also, these are the correct definitions, in the sense that $F_k(f \ast_k g) = F_k(f)F_k(g)$.
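Back in rank one, the Dunkl kernel appearing in the transform can be written down explicitly as a series, $E_k(x,y) = V_k(e^{(\cdot)y})(x) = \sum_n c_n (xy)^n/n!$, with $c_n$ the diagonal coefficients of $V_k$, and the eigenfunction property $T_x E_k = y E_k$ can be checked degree by degree. A sympy sketch (my own; the series is truncated at degree $N$, so the identity holds up to a single truncation term):

```python
import sympy as sp

x, y, k = sp.symbols('x y k')

def T(f):
    # rank-one Dunkl operator acting in the x variable
    return sp.expand(sp.diff(f, x) + k*sp.cancel((f - f.subs(x, -x))/x))

# Diagonal coefficients of the rank-one intertwining operator V_k:
# c_0 = 1, c_n = c_{n-1} * n/(n + 2k) for odd n, c_n = c_{n-1} for even n.
def c(n):
    out = sp.Integer(1)
    for m in range(1, n + 1):
        if m % 2:
            out *= sp.Integer(m)/(m + 2*k)
    return out

# Truncated Dunkl kernel E_k(x,y) = sum_{n <= N} c_n (x y)^n / n!
N = 8
E = sum(c(n)*(x*y)**n/sp.factorial(n) for n in range(N + 1))

# T_x E = y E should hold except for the degree-N truncation tail
tail = sp.simplify(sp.expand(T(E) - y*E) + c(N)*x**N*y**(N + 1)/sp.factorial(N))
print(tail)   # 0 -- the eigenfunction equation holds below the cutoff
```

At $k = 0$ all the $c_n$ are 1 and the series collapses back to $e^{xy}$, which matches the claim that $E_0(x,y) = e^{(x,y)}$.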

Now we can state the solution for the deformed wave equation, which can be obtained in a way similar to our derivation of the one dimensional wave equation. Let $P^{11}_{k,t} = F_k^{-1}(\cos(t\left\|\cdot\right\|))$ and $P^{12}_{k,t} = F_k^{-1}(\frac{\sin(t\left\|\cdot\right\|)}{\left\|\cdot\right\|})$. (As before these are distributions and not functions.) Then the solution to the deformed wave equation is
$u_k(x,t) = (P^{11}_{k,t} \ast_k f)(x) + (P^{12}_{k,t}\ast_k g)(x)$.

As mentioned above, the deformed wave equation always satisfies a deformed weak Huygens’ principle and depending on the dimension (and the deformation) sometimes satisfies a deformed strong Huygens’ principle. No matter what the dimension, $u_k(x,t)$ only depends on values of $f(x \bullet_k y),g(x \bullet_k y)$ for $\left\|y\right\| \leq \left\vert t \right\vert$. To change the weak Huygens’ principle to the strong one we again change the inequality to an equality. The strong principle is satisfied if $(N-3)/2 + \sum_{\alpha \in R^+}k(\alpha) \in \mathbb{N}$. Also, it is known that the strong principle fails if $(N+1)/2 + \sum_{\alpha \in R^+} k(\alpha) \not\in \mathbb{Z}$. This leaves a few cases unknown, but not too many.
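As a quick sanity check on the criterion (my own two-line script, nothing deep): when $k = 0$ the condition $(N-3)/2 + \sum_{\alpha} k(\alpha) \in \mathbb{N}$ should recover the classical fact that the undeformed wave equation satisfies the strong principle exactly in odd dimensions $\geq 3$:

```python
from fractions import Fraction

def strong_huygens_k0(N):
    """(N-3)/2 + sum k(alpha) in NN with k = 0, taking NN to contain 0."""
    q = Fraction(N - 3, 2)
    return q >= 0 and q.denominator == 1

print([N for N in range(1, 10) if strong_huygens_k0(N)])  # [3, 5, 7, 9]
```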

As far as I know, satisfying the strong Huygens’ principle is pretty rare, and there aren’t too many other examples known. However, this definitely isn’t an area I know very much about, so corrections are appreciated. I do know the strong Huygens’ principle has been studied extensively classically for hyperbolic operators, and many of these references are contained in [BE]. I suppose I should mention that when I gave this talk, one of the questions at the end was “This is a lot of complicated machinery, why is it interesting?” (My fellow Cornell grad students can almost certainly guess who asked this.) I realized at the time that I didn’t have a very good answer to that question, and my answer now depends on your philosophy in math. One way to find interesting new things in math is to generalize existing knowledge and explore what works, what doesn’t, and why. Another goal is to find solutions to unsolved problems. In my opinion the Dunkl operators definitely satisfy the first goal and provoke curiosity because many things generalize unexpectedly. However, I don’t know much about whether they satisfy the second goal. I suspect they do, in some ways at least, but I’m not very knowledgeable about it. In any case, my tastes tend more towards the first goal and less towards the second (to guess arbitrarily, about a 2-1 ratio). At some later point I might write more about the second goal, or describe some of the interesting algebraic properties of these operators.

[BE] Berest, “The problem of lacunas and analysis on root systems,” 2000.
[D] Dunkl, “Differential-Difference Operators Associated to Finite Reflection Groups,” 1989.
[DJO] Dunkl, de Jeu, Opdam, “Singular Polynomials for Finite Reflection Groups,” 1994.
[SO] Said, Orsted, “The wave equations for Dunkl operators,” 2005.


### 9 Responses to “Dunkl Operators”

1. Anonymous Says:

The guy’s name is not Hyugen but Huygens. So please replace all occurrences of “Hyugen’s principle” by “Huygens’ principle”.

2. Peter Says:

Done. I knew it looked funny when spelled correctly, but I guess I picked the wrong funny-looking.

3. Anonymous Says:

I’d like to apologize for my grumpy comment. I realize now how much work must have gone into this post! An enjoyable read by the way!

4. Ben Says:

No need to apologize for a perfectly valid comment. Peter doesn’t seem to take offense, so why should anybody else?

5. Anonymous Says:

Well, I just thought I could have been more supportive as well as offering corrections, particularly since I choose to post anonymously. It’s safer this way because then, say, if Peter was offended, if he was, say, a raging cauldron of emotions, then there’d be no chance of him challenging me to a duel, for example.

6. Peter Says:

I wasn’t offended – I’ve never been a particularly good speller, and I wouldn’t want misspellings to distract from the content of the post.

7. Greg Muller Says:

Hey, while we are at it, Olivetti is one L, two Ts. It’s supposed to be the ‘little Oliver’ club.

I also edited the tags a bit. I took off ‘Basic Grad Student’, since I’m trying to commit that to things I would feel comfortable talking to a first or second year about. I think things like Schwartz space and the de Rham complex are just a notch more complex than that (though the de Rham complex should be more widely known). I also added some arxiv subject tags, though trying to put some of this stuff under a specific subject heading is quite difficult.

8. Stephen Griffeth Says:

Dear Peter,

I just stumbled on your post after googling Dunkl operators. You may not care much anymore, or already know a better answer, but I thought I’d give you an answer to “This is a lot of complicated machinery. Why is it interesting?”

Two “hard” problems that have been solved by considering Dunkl operators (and their cousins for the double affine Hecke algebra) are Haiman’s conjecture on the diagonal coinvariant ring of a Coxeter group, and Macdonald’s conjectures on the norms and evaluations of Macdonald polynomials. For the first, see Gordon’s paper “On the quotient ring by diagonal invariants” (or something similar), and for the second, see Macdonald’s book “Affine Hecke algebras and orthogonal polynomials”, or Cherednik’s book on double affine Hecke algebras.

These problems are both algebro-combinatorial; I have to admit that I don’t know much about applications in analysis. But the two problems above are certainly the most well-known applications of Dunkl operator techniques to hard problems from other areas. There are a host of others- you might also see Etingof and Ginzburg’s paper “Symplectic reflection algebras” to get an idea of what’s possible.

Best,
Stephen

9. John Jiang Says:

Extremely good post! I understood Huygens’ principle for the first time.