The Resultant, Episode 3
Last time the linear operator

$$\Phi\colon K_n[x]\oplus K_m[x]\to K_{m+n}[x]$$

$$\Phi(p,q)=pE+qF$$

made its grand entrance, clothed in the Sylvester matrix. (Recall that $K_n[x]$ is the vector space of all polynomials of degree $<n$ with coefficients in $K$, likewise for $K_m[x]$ and $K_{m+n}[x]$.)
When $m=2$, $n=3$, $E(x)=a_2x^2+a_1x+a_0$, and $F(x)=b_3x^3+b_2x^2+b_1x+b_0$, the Sylvester matrix looks like this:

$$\begin{pmatrix}a_2&a_1&a_0&0&0\\0&a_2&a_1&a_0&0\\0&0&a_2&a_1&a_0\\b_3&b_2&b_1&b_0&0\\0&b_3&b_2&b_1&b_0\end{pmatrix}$$
The general pattern is clear. Let’s call the matrix S. (As you probably noticed, this looks a bit different from last time. I’ve switched to the traditional convention, transposing the matrix and rearranging the rows and columns. The determinant is ±1 times the previous determinant.)
We learned that $E$ and $F$ have a common (nonconstant) factor if and only if $\Phi$ is singular, and thus precisely when $\det(S)=0$ (i.e., identically zero). $\Phi$ being singular means that $pE+qF=0$ has a solution with $p\in K_n[x]$, $q\in K_m[x]$, $p,q\neq 0$. In fact, there's a solution to $PE+QF=0$ with $P\in R_n[x]$, $Q\in R_m[x]$, $P,Q\neq 0$: just clear denominators.
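As a sanity check (a sympy sketch; the helper `sylvester` and the two sample polynomials are my own, not from the text), we can build the matrix in the pattern above and watch the determinant vanish when $E$ and $F$ share a factor:

```python
import sympy as sp

x = sp.symbols('x')

def sylvester(E, F, x):
    """Sylvester matrix of E (degree m) and F (degree n) with respect to x:
    n shifted rows of E's coefficients, then m shifted rows of F's."""
    Ec = sp.Poly(E, x).all_coeffs()   # [a_m, ..., a_0]
    Fc = sp.Poly(F, x).all_coeffs()   # [b_n, ..., b_0]
    m, n = len(Ec) - 1, len(Fc) - 1
    rows = [[0]*i + Ec + [0]*(n - 1 - i) for i in range(n)]
    rows += [[0]*i + Fc + [0]*(m - 1 - i) for i in range(m)]
    return sp.Matrix(rows)

# E and F share the factor (x - 1), so det(S) should vanish
E = (x - 1)*(x - 2)
F = (x - 1)*(x - 3)*(x - 4)
S = sylvester(E, F, x)
print(S.shape)                 # (5, 5)
print(S.det())                 # 0
print(sp.resultant(E, F, x))   # 0, agreeing with sympy's built-in resultant
```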
OK, what if $\Phi$ is non-singular? What then? Well, the dimensions of the domain and range spaces are both $m+n$, so $\Phi$ must be onto. In other words, we have a solution to
$$pE+qF=1,\qquad p\in K_n[x],\ q\in K_m[x]$$
(Since $E$ and $F$ are both assumed to be nonconstant, we obviously have $p,q\neq 0$.)
If you unwind this a bit, it may seem puzzling. Say $K=k(y)$. The equation $pE+qF=1$ at first blush seems to rule out any common roots for $E$ and $F$, let alone common components. The resolution: if $E(x,y)=F(x,y)=0$ for some $x$ and $y$, then $p$ and/or $q$ can blow up at that point: their coefficients belong to $k(y)$, so they can have a polynomial in $y$ in the denominators. It looks like $E$ and $F$ can intersect only at points where at least one of those denominators evaluates to $0$. Which brings us back to the resultant.
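To make the blow-up concrete (a sympy sketch; the circle and line are my own toy example, and the Bezout pair below was found by hand): with $E=x^2+y^2-1$ and $F=x-y$ over $K=k(y)$, the denominators of $p$ and $q$ vanish exactly at the two intersection points.

```python
import sympy as sp

x, y = sp.symbols('x y')
E = x**2 + y**2 - 1          # a circle
F = x - y                    # a line
# Over K = k(y), a pair p, q with p*E + q*F = 1
# (worked out by hand for this small example):
p = sp.Integer(1) / (2*y**2 - 1)
q = -(x + y) / (2*y**2 - 1)
assert sp.simplify(p*E + q*F - 1) == 0
# The denominator 2*y**2 - 1 vanishes exactly at y = ±1/sqrt(2),
# the y-coordinates of the two points where the circle meets the line:
print(sp.solve([E, F], [x, y]))
```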
$\Phi$ is an isomorphism from $K_n[x]\oplus K_m[x]$ to $K_{m+n}[x]$, so $\Phi^{-1}$ exists. $\Phi^{-1}(1)$ equals the pair $\langle p,q\rangle$ solving $pE+qF=1$. Now, the matrix of $\Phi^{-1}$ is $S^{-1}$. In principle, you could compute it using Cramer's rule. We don't care about most of the details, except for one salient fact: $S^{-1}$ has the form (matrix with entries in $R$)$/\det(S)$. Therefore, we can clear denominators in $pE+qF=1$ if we multiply through by $\det(S)$, the resultant. Conclusion: we have a solution to
$$PE+QF=\det(S),\qquad P\in R_n[x],\ Q\in R_m[x]$$
Notice that $\det(S)$ belongs to $R$, and does not have any $x$'s in it. That's what it means to eliminate $x$!
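Here is the whole mechanism on a tiny example (a sympy sketch; the curve, the line, and the pair $P$, $Q$ are my own, worked out by hand): the Sylvester matrix of $E=x^2+y^2-1$ and $F=x-y$ is $3\times 3$, its inverse is the adjugate over $\det(S)$, and clearing denominators yields $PE+QF=\det(S)$ with polynomial $P$ and $Q$.

```python
import sympy as sp

x, y = sp.symbols('x y')
E = x**2 + y**2 - 1          # degree m = 2 in x
F = x - y                    # degree n = 1 in x
# Sylvester matrix, written out directly for this small case:
S = sp.Matrix([[1, 0, y**2 - 1],    # coefficients of E
               [1, -y, 0],          # coefficients of F, shifted
               [0, 1, -y]])
d = S.det()                  # 2*y**2 - 1, the resultant
# S^{-1} = adjugate(S) / det(S): the adjugate's entries lie in R = k[y]
assert (d * S.inv() - S.adjugate()).applyfunc(sp.cancel) == sp.zeros(3, 3)
# Clearing denominators in p*E + q*F = 1 gives P*E + Q*F = det(S),
# here with P = 1 and Q = -(x + y) (read off by hand):
P, Q = sp.Integer(1), -(x + y)
assert sp.expand(P*E + Q*F - d) == 0
```

Note that $P$ and $Q$ have no denominators at all, at the price of the right-hand side being $\det(S)$ instead of $1$.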
For the case where $E$ and $F$ belong to $k[x,y]$ (our algebraic curves), so that $R=k[y]$, we have a corollary: $\det(S)$, a polynomial in $y$, is $0$ at all the intersections.
How about the converse? If $\det(S)(c)=0$ for some value $y=c$, is there always an intersection of $E$ and $F$ on the line $y=c$?
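As a quick experiment (a sympy sketch with my own toy curves, a circle and a line), each root $y=c$ of $\det(S)$ does yield an intersection in this particular example:

```python
import sympy as sp

x, y = sp.symbols('x y')
E = x**2 + y**2 - 1          # circle
F = x - y                    # line
r = sp.resultant(E, F, x)    # det(S), a polynomial in y
for c in sp.solve(r, y):     # the roots y = c of det(S)
    # on the line y = c the curves meet at x = c (F forces x = y)
    assert F.subs({x: c, y: c}) == 0
    assert sp.simplify(E.subs({x: c, y: c})) == 0
```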
We claimed that $\det(S)$ is the resultant, as we defined it before:

$$\det(S)=a_m^n\,b_n^m\prod_{i=1}^{m}\prod_{j=1}^{n}(u_i-v_j)\qquad(*)$$

where the $u_i$'s and $v_j$'s are all the roots of $E$ and $F$ (respectively), in the extension field $L$. (Also, I'm now writing $a_m$ and $b_n$ for the leading coefficients, instead of just $a$ and $b$.) So if $\det(S)(c)=0$ at some $c$, then either one of the leading coefficients vanishes at $c$, or $E$ and $F$ have a common root in some extension field of $K$. (It turns out that both of the leading coefficients have to be $0$ at $c$ to avoid an intersection on the line $y=c$, i.e., a common root.)
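A small numerical sanity check of $(*)$ (with my own toy polynomials, chosen monic so that the leading-coefficient factors $a_m^n b_n^m$ are just $1$):

```python
from math import prod
import sympy as sp

x = sp.symbols('x')
# Monic E with roots u = 1, 2 and monic F with roots v = 3, 4
E = (x - 1)*(x - 2)
F = (x - 3)*(x - 4)
# Right-hand side of (*): product of all differences u_i - v_j
rhs = prod(u - v for u in (1, 2) for v in (3, 4))
# (1-3)(1-4)(2-3)(2-4) = 12
assert sp.resultant(E, F, x) == rhs
```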
More about this in “Inside the Episode”. Episode 4 will justify claim (*).