The Resultant, Episode 1
Time to discuss the resultant; we’ll need it for Kendig’s proof of Bézout’s theorem, but it has other uses too. The story will take several episodes, plus extras. Like a miniseries!
Recall the idea: we have polynomials E(x) and F(x) with coefficients in a ring R. Do they have a common root, perhaps in some extension of R? We’d like to eliminate x, getting an equation resx(E,F)=0 that tells us if they do. The resultant, resx(E,F), should involve the coefficients but not the variable x.
When E and F are polynomials in k[x1,…,xN], we let R = k[x2,…,xN], treating E and F as polynomials in x1 with coefficients in R (i.e., E, F ∈ R[x1]). That’s the most important case for us; or, since we’re looking at curves, E, F ∈ k[x,y] = k[y][x] with resx(E,F) ∈ k[y]. Of course, we could eliminate y instead of x: resy(E,F) ∈ k[x]. But we’ll also have to deal with the ring of power series, as we’ve seen. (Reminder: “ring” for us always means “commutative ring with unity”.)
Let’s get our feet wet with a couple of special cases. For the moment, let R be any old ring. You can’t get any simpler than E(x) = x–c, F(x) = x–d. Eliminating x from x–c = 0, x–d = 0 yields c=d as the condition for a common root. That is, resx(E,F) = c–d. A step up in complexity: E(x) = ax–c, F(x) = bx–d, with a and b nonzero. If R is a field, then we have x = c/a, x = d/b, so the condition is c/a – d/b = (bc–ad)/ab = 0.
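A quick sketch of the linear case in Python (the function name and the sample coefficients are mine): for E(x) = ax–c and F(x) = bx–d, the quantity bc–ad vanishes exactly when the roots c/a and d/b coincide.

```python
def linear_resultant(a, c, b, d):
    """Resultant of E(x) = a*x - c and F(x) = b*x - d: the numerator bc - ad."""
    return b * c - a * d

# E(x) = 2x - 6 and F(x) = 3x - 9 share the root x = 3:
assert linear_resultant(2, 6, 3, 9) == 0

# E(x) = 2x - 6 and F(x) = 3x - 12 have roots 3 and 4, so no common root:
assert linear_resultant(2, 6, 3, 12) != 0
```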
I’m sure you’ve noticed that the numerator bc–ad is a determinant, namely

| a  –c |
| b  –d |  =  bc – ad.

That’s just the matrix of coefficients! If R isn’t a field, but just an integral domain, this determinant still tells us if the equations have a common root in the fraction field of R. It so happens that resx(E,F) is this determinant here.
The theory of resultants gets a bit messy if R has zero divisors—the denominator ab could be 0 even with nonzero a and b. From now on, assume R is an integral domain, with fraction field K. When E and F belong to k[x,y] and we’re eliminating x, R will be k[y] and K will be k(y), rational functions in the variable y. In K[x], we can divide through by leading coefficients, should we have a craving for monic polynomials.
OK, enough throat clearing. E(x) and F(x) belong to R[x], R is an integral domain with fraction field K, and we want to know if E and F have a common root in K, or more generally, in an extension field L of K. Say E has degree m and F has degree n. If E and F have all their roots in L (i.e., factor completely), then one expression cries out for attention: the product of all the differences of pairs of roots. That is, if u1, …, um are the roots of E in L, and v1, …, vn are the roots of F in L, the product

∏i=1…m ∏j=1…n (ui – vj)    (1)
It’s clear that (1) is zero if and only if some ui equals some vj—that is, if and only if E and F have a root in common. But now comes the first plot twist. Notice that the product remains unaltered if we permute the ui’s among themselves, and likewise if we permute the vj’s (or even both at once). If you’re familiar with the theorem on elementary symmetric polynomials, or for that matter the basic tricks of Galois theory, you’ll suspect that the product (1) must belong to K. True fact! (Although the proofs I’ve seen take a different tack.)
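As a sanity check on the product (1), here’s a small Python sketch (the function name and sample roots are mine) that computes the product of all root differences directly from two lists of roots:

```python
from math import prod

def root_difference_product(us, vs):
    """Product of (u_i - v_j) over all pairs of roots; zero iff a shared root."""
    return prod(u - v for u in us for v in vs)

# E = (x - 1)(x - 2) and F = (x - 2)(x - 5) share the root 2:
assert root_difference_product([1, 2], [2, 5]) == 0

# E = (x - 1)(x - 2) and F = (x - 3)(x - 5) share no root:
# (1-3)(1-5)(2-3)(2-5) = (-2)(-4)(-1)(-3) = 24
assert root_difference_product([1, 2], [3, 5]) == 24
```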
We thus have an element of K that tells us if E and F share a root in any extension field of K. Neat! The special case we did above (E and F both linear) suggests that if we multiply by the correct factor (ab in the special case), we can even get an element of R to do the job. It is so! Here’s the first definition of the resultant, with notation as above, and letting a and b be the leading coefficients of E and F respectively:

resx(E,F) = a^n b^m ∏i=1…m ∏j=1…n (ui – vj)

(Note the crossed exponents: a, the leading coefficient of the degree-m polynomial E, gets the exponent n, and vice versa. With m = n = 1, the factor a^n b^m is just the ab of our special case.)
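In code (a sketch, with names of my own choosing): take the leading coefficients a and b, the two root lists, and multiply the product of differences by a^n b^m. In the linear case this reproduces bc–ad:

```python
from math import prod

def resultant_from_roots(a, us, b, vs):
    """res_x(E, F) = a**n * b**m * product of (u_i - v_j), m = len(us), n = len(vs)."""
    m, n = len(us), len(vs)
    return a**n * b**m * prod(u - v for u in us for v in vs)

# Linear case: E(x) = 2x - 6 (a = 2, root 3), F(x) = 3x - 12 (b = 3, root 4).
# Here a*b*(u1 - v1) = 2*3*(3 - 4) = -6, which agrees with bc - ad = 18 - 24.
assert resultant_from_roots(2, [3], 3, [4]) == 3*6 - 2*12
```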
We’re nearly at the end of the first episode. One observation before the closing theme music: since F(x) factors in L as b(x–v1)···(x–vn), the innermost product in (1) equals F(ui)/b, and we have another form for the resultant:

resx(E,F) = a^n F(u1)···F(um)

(The m copies of b are absorbed into the values F(ui).)
Of course we can switch the roles of E and F, at the cost of a sign:

resx(E,F) = (–1)^mn b^m E(v1)···E(vn)

(The sign appears because each of the mn factors ui – vj flips to vj – ui.)
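Both product forms are easy to check numerically: with a and b the leading coefficients and ui, vj the roots, a^n·F(u1)···F(um) and (–1)^mn·b^m·E(v1)···E(vn) should both equal a^n b^m times the product of all differences ui – vj. A Python sketch (the example polynomials are mine):

```python
from math import prod

a, us = 2, [1, 4]            # E(x) = 2(x - 1)(x - 4), degree m = 2
b, vs = 3, [2, 5]            # F(x) = 3(x - 2)(x - 5), degree n = 2
m, n = len(us), len(vs)

def E(x):
    return a * prod(x - u for u in us)

def F(x):
    return b * prod(x - v for v in vs)

res_def   = a**n * b**m * prod(u - v for u in us for v in vs)  # the definition
res_via_F = a**n * prod(F(u) for u in us)                      # a^n * F(u1)...F(um)
res_via_E = (-1)**(m*n) * b**m * prod(E(v) for v in vs)        # sign from swapping roles

assert res_def == res_via_F == res_via_E == -288
```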
End of Episode 1.