Bezout’s theorem in algebraic geometry is one of those simple facts that manages to capture the heart and style of its field. It states that any two irreducible curves $C$ and $D$ in $\mathbb{C}^2$ usually intersect in $(\deg C)(\deg D)$ points (where the degree of a curve is the degree of the polynomial that defines it).

Now, very few mathematicians will stand for a ‘usually’ in their theorems, and the most basic form of Bezout’s theorem is typically stated differently – so as to be a real theorem. However, my favorite aspect of the theorem is that figuring out how to fix the ‘usually’ has repeatedly foreshadowed the development of algebraic geometry as a whole.

In its simplest cases, Bezout’s theorem is familiar even to most math undergrads. For, if $D$ is a degree 1 curve, then it is a line. Restricting the defining equation for $C$ to this line gives a polynomial in one variable of (usually) the same degree as $C$, and so by the Fundamental Theorem of Algebra, it (usually) has $\deg C$ roots, each an intersection point. I have glossed over the fact that restricting to the line, which amounts to eliminating a variable, can sometimes cause highest degree terms to vanish; and also that sometimes two roots of a polynomial will coincide. Still, the underlying intuition is the Fundamental Theorem of Algebra, which everyone should find reassuring and familiar.
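This degree-1 case is easy to check by machine. Here is a small sketch using sympy (the circle and line are an example of my own choosing, not from the original post):

```python
import sympy as sp

x, y = sp.symbols('x y')

# C: the unit circle (degree 2); D: the line y = 0 (degree 1).
C = x**2 + y**2 - 1

# Restricting C to the line eliminates y, leaving a one-variable polynomial
# whose degree is (usually) deg C.
restricted = C.subs(y, 0)          # x**2 - 1
roots = sp.solve(restricted, x)    # two roots, so deg C * deg D = 2 points
```

By the Fundamental Theorem of Algebra, the restricted polynomial has exactly $\deg C$ roots (counted with multiplicity), which is what Bezout predicts for a line.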

For higher degrees, the proof is less immediate, but no less intuitive. The two curves always intersect at exactly the prescribed number of points – unless some numbers happen to cancel/coincide.

Geometrically, what is going wrong? There are two kinds of behavior that cause the theorem to break down (as I have stated it).

The first is most easily seen in the case of two parallel lines. They both have degree 1, and so Bezout says they should intersect, but Euclid and reality disagree. Higher degree curves can exhibit similar behavior, such as when parallel asymptotes cause an intersection point to be lost.

The second problem is when the two curves intersect too well at a given point of intersection. As a simple example, take $C$ to be the curve defined by $y = 0$ (also known as the x-axis), and let $D$ be defined by $y = x^2$ (also known as the graph of $x^2$). These curves do intersect at $(0,0)$, but this is the only place they intersect. This intersection point has **multiplicity** $2$; that is, it wants to correspond to more than one point ($2$ in this case), but there is no room in the geometric viewpoint to count the same point more than once.
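The multiplicity is visible in coordinates. A quick sympy sketch of the parabola-meets-axis example:

```python
import sympy as sp

x, y = sp.symbols('x y')

# C: y = 0 (the x-axis); D: y = x**2 (the parabola).
# Restrict D's equation to the x-axis:
restricted = (y - x**2).subs(y, 0)   # -x**2

# The root x = 0 appears with multiplicity 2: one geometric point
# that "wants" to count as two.
mults = sp.roots(restricted, x)      # {0: 2}
```

The single geometric point carries the full count of $2 = \deg C \cdot \deg D$ only if we remember the multiplicity of the root.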

**First Fix: Projective Geometry**

The first problem is pretty easy to fix. As you might have guessed, the answer is to think of two parallel lines as ‘intersecting at infinity’. $\mathbb{C}^2$ can be compactified by adjoining a sphere at infinity. A path leaving $\mathbb{C}^2$ has a limit if it has an asymptote. This new space we have constructed is called $\mathbb{CP}^2$, and it is the second member of the family of complex projective spaces (the first member, $\mathbb{CP}^1$, is the Riemann sphere).

If we think of our curves $C$ and $D$ as sitting inside of $\mathbb{CP}^2$, then there is a canonical way to extend them to the sphere at infinity. Adding these points at infinity will magically add points of intersection that make Bezout’s theorem (closer to being) true! For example, our two parallel lines now have exactly one intersection point, as we hoped would be true.
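Here is a coordinate sketch of the parallel-line example (the homogenization is standard; the specific lines are my own choice). In $\mathbb{CP}^2$ with homogeneous coordinates $[X : Y : Z]$, the affine lines $y = 0$ and $y = 1$ become $Y = 0$ and $Y - Z = 0$:

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z')

# Homogenize the parallel lines y = 0 and y = 1 (the affine chart is Z = 1):
# they become Y = 0 and Y - Z = 0 in CP^2.
sols = sp.solve([Y, Y - Z], [Y, Z], dict=True)

# The system forces Y = Z = 0 with X free: the single projective point
# [1 : 0 : 0], where the two parallel lines meet "at infinity".
```

One point of intersection for two degree-1 curves, exactly as Bezout demands.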

This is a basic example of ‘Projective Geometry’, which is the study of varieties not in $\mathbb{C}^n$, but of their compactifications in $\mathbb{CP}^n$. Theorems tend to be cleaner here, and in general this seems to be the more natural home for varieties.

**Second Fix: Intersection Theory**

I claim that projective geometry fixed the problem of points at infinity, so the only remaining problem is figuring out how to count points with the right multiplicity.

The most basic solution is to notice that any time two curves intersect with multiplicity $n$, you can slide one of the curves a bit in some direction to split the bad point into $n$ points. Thus, Bezout’s theorem is always true, up to an infinitesimal slide. This has the advantage of being conceptually simple, but it’s not very useful for computations.
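The sliding heuristic is easy to see in coordinates (a sympy sketch with an example of my own choosing): nudging the parabola $y = x^2$ down to $y = x^2 - \epsilon$ splits the double point on the x-axis into two simple intersections:

```python
import sympy as sp

x = sp.symbols('x')
eps = sp.symbols('epsilon', positive=True)

# Before sliding: y = x**2 meets y = 0 in one point of multiplicity 2.
before = sp.roots(x**2, x)        # {0: 2}

# After sliding down by epsilon: two distinct simple intersection points.
after = sp.solve(x**2 - eps, x)   # +/- sqrt(epsilon)
```

As $\epsilon \to 0$ the two simple points collide back into the one double point, which is why the multiplicity deserves to be counted as $2$.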

We can also develop a rigorous theory for counting intersection multiplicities, appropriately called **intersection theory**. The details are a bit more technical than I am aiming for with this post, but the idea is that one declares two curves ‘the same’ if you can find an analytic function on the complement of the first curve that vanishes only on the second curve. This rigid relationship is called ‘linear equivalence’, and it’s a more algebraic version of the above sliding intuition. The number of intersection points of two curves depends only on the linear-equivalence class… except for a small number of exceptions. Thus, we define the **intersection number** of two curves to be the number of intersections of any two ‘generic’ curves linearly-equivalent to the original pair.

This is mostly just a fancy way of declaring that it’s okay to slide a curve a small amount in order to resolve a point of multiplicity. However, this is a more effective computational tool, and intersection theory has some nice benefits. For example, intersection theory can be robustly generalized to higher-dimensional varieties. Instead of a $\mathbb{Z}$-valued inner product like we just constructed, we get a ring called the Chow ring.

**Second Fix, Take Two: Schemes**

There is a slightly different take on this that isn’t strictly necessary, but I like it since it points us in the direction of another important evolution in algebraic geometry. The above fix was secretly just a weakening of Bezout’s theorem, from talking about curves intersecting to talking about equivalence classes of curves intersecting.

Let us instead declare that the deficiency was not with the theorem, but with our notion of geometry instead. Perhaps our definition of ‘point’ was too primitive to distinguish between a simple intersection, and one of higher multiplicity. These are bold statements, and they require a bold theory to pull off; but the theory of schemes is just bold enough.

Very crudely, a scheme in this context behaves like a variety, with a distinguished sheaf on it that dictates what the ‘ring of rational functions’ looks like over any (good) open set. With the extra data provided by this sheaf, we can distinguish between two schemes whose underlying variety is the same.

Take, for example, a point… a sheaf on a point is the same as its ring of global sections. This ring can’t be anything; it must be a commutative algebra such that modding out by the Jacobson radical gives $\mathbb{C}$. This doesn’t leave room for too much, but we still get multiple different schemes that look like points to the naked eye (i.e., as a variety).

Now, there is also a way to intersect schemes, so we can see what happens if we intersect $y = 0$ and $y = x^2$. We get a point whose global sections look like $\mathbb{C}[x]/x^2$ (which readers of previous posts might recall is one of my favoritest rings). Ah ha! We get a point of a totally different flavor than what we would have gotten if we’d looked at a simple intersection (its global sections would have been $\mathbb{C}$). By assigning to each distinct flavor of point a ‘multiplicity’ equal to its dimension as a vector space, we can again make Bezout’s Theorem work.
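This vector-space dimension can be computed mechanically. A sketch using sympy’s Gröbner bases (the monomial-counting helper is my own): the quotient $\mathbb{C}[x,y]/(y,\, y - x^2)$ has basis $\{1, x\}$, so the intersection point has multiplicity $2 = \deg(y) \cdot \deg(y - x^2)$.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Groebner basis of the intersection ideal (y, y - x**2) in lex order: [x**2, y].
G = sp.groebner([y, y - x**2], x, y, order='lex')

# Exponent vectors of the leading monomials of the basis elements.
leads = [sp.Poly(g, x, y).monoms()[0] for g in G.exprs]

def is_standard(exp):
    # A monomial survives in the quotient iff no leading monomial divides it.
    return not any(all(e >= l for e, l in zip(exp, lead)) for lead in leads)

# The standard monomials form a C-basis of C[x,y]/(y, y - x**2).
standard = [(a, b) for a in range(3) for b in range(3) if is_standard((a, b))]
multiplicity = len(standard)   # dimension as a C-vector space
```

Here the standard monomials are $1$ and $x$, recovering the ring $\mathbb{C}[x]/x^2$ of global sections described above.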

**Third Fix…? : Derived Schemes**

But wait, I said there were only two things wrong with Bezout’s theorem – what else is there to fix? Well, I have stealthily concealed a third minor error in the statement, to see if readers would ignore it unthinkingly. The trick is that I never forbade the two curves from coinciding. It’s instinct to dismiss such cases out of hand, since the ‘number’ of intersection points doesn’t make sense.

Perhaps we shouldn’t be so hasty; after all, the other two fixes involved building techniques that turned out to be useful for wholly unrelated reasons. According to Jacob Lurie’s engrossing GRASP lecture, this error can also be fixed, by passing to an even richer version of geometry, known as **derived algebraic geometry**. I should qualify the following by saying that I know almost nothing about the subject, and so I am parroting cool ideas I have heard others express.

A curve $C$ in $\mathbb{C}^2$ is the zero-set of some polynomial $f(x,y)$. Given two polynomials on $\mathbb{C}^2$, their restrictions to $C$ are the same only if they differ by some multiple of $f$. Thus, the ring of polynomial functions on $C$ looks like $\mathbb{C}[x,y]/f$. If I had another curve $D$, it would be defined by a polynomial $g$, and the ring corresponding to the scheme of intersections of $C$ and $D$ is $\mathbb{C}[x,y]/(f,g)$. This breaks down if $C = D$, since quotienting by $f$ twice is the same as quotienting by it once.

To fix this, let us first replace rings with topological rings. In practice, topological rings aren’t the right idea to work with, but they will suffice for conveying intuition. A ring then becomes a set of discrete points, with the appropriate ring axioms, etc. We can now think of the act of quotienting by $f$ as connecting two points by a line every time a multiple of $f$ takes one to the other. We should also add triangles and higher simplices between appropriate compositions. If $f$ is nice enough (cancelable), we get a set of contractible pieces, each of which corresponds to an element of the quotient. Since we are trying to think of topological rings only up to homotopy, this gives us the old, boring notion of the quotient.

However, if $f$ wasn’t nice, then we added some non-trivial loops. The connected components might still be isomorphic to the boring quotient, but suddenly non-trivial first homology has emerged. We can define an Euler characteristic as usual: the alternating sum of the dimensions of the homology. Now, if I have two curves that coincide, I can say that their intersection number is the Euler characteristic of the corresponding topological ring, and I can ask if this is equal to the product of the degrees. I believe this is true, though a quick shuffling through my references hasn’t yielded a confirmation.
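For what it’s worth, a back-of-the-envelope check in the projective setting supports this (a sketch, assuming the intersection number of coincident curves is the Euler characteristic of the derived tensor product, as in the Serre intersection formula). For a degree-$d$ curve $C \subset \mathbb{CP}^2$ cut out by $f$, the Koszul resolution $0 \to \mathcal{O}(-d) \xrightarrow{f} \mathcal{O} \to \mathcal{O}_C \to 0$ gives $\mathrm{Tor}_0(\mathcal{O}_C, \mathcal{O}_C) = \mathcal{O}_C$ and $\mathrm{Tor}_1(\mathcal{O}_C, \mathcal{O}_C) = \mathcal{O}_C(-d)$, and then, using $\chi(\mathcal{O}_{\mathbb{P}^2}(m)) = \tfrac{(m+1)(m+2)}{2}$:

```latex
\begin{align*}
\chi\bigl(\mathcal{O}_C \otimes^{\mathbf{L}} \mathcal{O}_C\bigr)
  &= \chi(\mathcal{O}_C) - \chi(\mathcal{O}_C(-d)) \\
  &= \Bigl[1 - \tfrac{(d-1)(d-2)}{2}\Bigr]
     - \Bigl[\tfrac{(d-1)(d-2)}{2} - \tfrac{(2d-1)(2d-2)}{2}\Bigr] \\
  &= 1 - (d-1)(d-2) + (2d-1)(d-1) \\
  &= d^2 .
\end{align*}
```

That is, the self-intersection of a degree-$d$ curve, counted in this derived sense, is $d \cdot d$, exactly the product of the degrees.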

Derived algebraic geometry is an exciting field that I would like to learn more about. As near as I can tell, it is an attempt to bring the idea of homotopy equivalence into the core of scheme theory, with the goal of explaining such phenomena as stacks that really *should* have a tangent space that isn’t a vector space, but a complex of vector spaces (up to homotopy). If you are interested in learning more, I would recommend the above GRASP lecture, the (rather long) papers at Lurie’s homepage, and some of Toen’s lecture notes online.

Tags: math.AG

July 17, 2007 at 3:00 pm |

A very nice piece on the Bezout Theorem. I just have one nit-picking remark: it’s the Jacobson radical, not the Jacobsen radical, you want to mod out by!

July 17, 2007 at 6:37 pm |

Mwah! THAT’s what derived algebraic geometry is about? *droooooooool*

I just got my “Have to learn this”-list expanded. By a couple of pages.

July 17, 2007 at 11:23 pm |

Fundamental Theorem of Calculus → Fundamental Theorem of Algebra

July 18, 2007 at 12:32 am |

Whoops… thanks for the corrections. I’ve fixed them above.

July 18, 2007 at 12:48 pm |

“an attempt to bring the idea of homotopy equivalence into the core of scheme theory”

You can turn this around, too, and say that the point of derived algebraic geometry is that all the usual algebraic geometry ideas — sheaves, functors of points, stacks, and topoi — seem to work just fine if, instead of commutative rings, we use other objects that behave like commutative rings. Simplicial commutative rings, E_infinity ring spectra, commutative DGAs, etc. So derived algebraic geometry is also about importing ideas from scheme theory into homotopy theory.

July 21, 2007 at 3:53 pm |

I think you need to check the definition of “proscribed”.

P.S. Your blog is good.

September 14, 2007 at 1:48 pm |

[...] curves will always be . This theorem is nifty in other ways, and to see one of them check out this post at the Everything [...]

January 10, 2008 at 8:19 am |

[...] Now, if we specify that we’re living in the projective plane, then we take be distinct curves of degrees and , that is, irreducible one dimensional projective varieties who intersect in a finite collection of points , we have that . This is because points have Hilbert polynomial 1. (If we don’t require that they only intersect in a finite collection of points, things get trickier. However, a subject called Derived Algebraic Geometry appears to help understand this case, and you can read a bit about that here.) [...]

December 2, 2008 at 5:26 pm |

I’m no mathematician, but here is a simple question: suppose you had 2 quadratics in x and y. Bezout’s theorem predicts 4 solutions, fine. But suppose you get the Grobner basis for this set of 2 polynomials (in the plex form). So b1 is a 4th-order polynomial in x and b2 is 1st order in y and 3rd order in x. If you apply Bezout’s theorem to this set of 2 polynomials, the number of intersections seems to have risen to 4×3=12. What’s happening? Do b1 and b2 have the other 8 solutions at infinity, or are there multiple roots? I don’t get it!

December 2, 2008 at 7:01 pm |

Carey,

That can’t be all there is to the Groebner basis, because it won’t generate the same ideal. So there must be more polynomials, which would cut down the number of solutions further. Chosen at random, they’d cut it down to zero, but they aren’t random at all, so it works out.

Also, you’d expect 16, because both of the two polynomials are total degree 4.

