The Resultant, Episode 3

Last time the linear operator

Φ: *K*_{n}[*x*] ⊕ *K*_{m}[*x*] → *K*_{m+n}[*x*],  Φ(*p*, *q*) = *pE* + *qF*

made its grand entrance, clothed in the *Sylvester matrix*. (Recall that *K*_{n}[*x*] is the vector space of all polynomials of degree < *n* with coefficients in *K*, likewise for *K*_{m}[*x*] and *K*_{m+n}[*x*].)

When *m*=2, *n*=3, *E*(*x*)=*a*_{2}*x*^{2}+*a*_{1}*x+a*_{0}, and *F*(*x*)=*b*_{3}*x*^{3}+*b*_{2}*x*^{2}+*b*_{1}*x*+*b*_{0}, the Sylvester matrix looks like this:
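(The original picture of the matrix didn't survive; here is a reconstruction in one traditional convention — *m* shifted copies of *F*'s coefficients stacked over *n* shifted copies of *E*'s. The post's arrangement may permute rows and columns, which changes the determinant only by a sign.)

```latex
S \;=\;
\begin{pmatrix}
b_3 & b_2 & b_1 & b_0 & 0 \\
0   & b_3 & b_2 & b_1 & b_0 \\
a_2 & a_1 & a_0 & 0   & 0 \\
0   & a_2 & a_1 & a_0 & 0 \\
0   & 0   & a_2 & a_1 & a_0
\end{pmatrix}
```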

The general pattern is clear. Let’s call the matrix *S*. (As you probably noticed, this looks a bit different from last time. I’ve switched to the traditional convention, transposing the matrix and rearranging the rows and columns. The determinant is ±1 times the previous determinant.)
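The general pattern can be sketched in a few lines of stand-alone Python (my own illustration, not code from the post; it works exactly over the rationals, and the row ordering only affects the sign of the determinant):

```python
from fractions import Fraction

def sylvester(E, F):
    """Sylvester-style matrix of E (degree m) and F (degree n):
    n shifted copies of E's coefficients over m shifted copies of F's.
    E, F are coefficient lists, highest degree first."""
    m, n = len(E) - 1, len(F) - 1
    size = m + n
    rows = [[0] * i + E + [0] * (size - m - 1 - i) for i in range(n)]
    rows += [[0] * i + F + [0] * (size - n - 1 - i) for i in range(m)]
    return rows

def det(M):
    """Exact determinant via Gaussian elimination over Fraction."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, d = len(M), 1, Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)      # no pivot in this column: singular
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return sign * d

# E = (x-1)(x-2) and F = (x-1)(x+1)(x+3) share a factor, so det(S) = 0:
print(det(sylvester([1, -3, 2], [1, 3, -1, -3])))   # → 0
# E = x^2 + 1 and F = x^3 - 2 are coprime, so det(S) != 0:
print(det(sylvester([1, 0, 1], [1, 0, 0, -2])))     # → 5
```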

We learned that *E* and *F* have a common (nonconstant) factor when and only when Φ is singular, and thus precisely when det(*S*) = 0. (I.e., is *identically* zero.) Φ being singular means that *pE* + *qF* = 0 has a solution with *p* ∈ *K*_{n}[*x*], *q* ∈ *K*_{m}[*x*], *p*, *q* ≠ 0. In fact, there’s a solution to *PE* + *QF* = 0 with *P* ∈ *R*_{n}[*x*], *Q* ∈ *R*_{m}[*x*], *P*, *Q* ≠ 0—just clear denominators.

OK, what if Φ is non-singular? What then? Well, the dimensions of the domain and range spaces are both *m* + *n*, so Φ must be onto. In other words, we have a solution to

*pE* + *qF* = 1,  *p* ∈ *K*_{n}[*x*], *q* ∈ *K*_{m}[*x*]

(Since *E* and *F* are both assumed to be nonconstant, we obviously have *p*,*q*≠0.)

If you unwind this a bit, it may seem puzzling. Say *K* = *k*(*y*). The equation *pE* + *qF* = 1 at first blush seems to rule out any common *roots* for *E* and *F*, let alone common *components*. The resolution: if *E*(*x*,*y*) = *F*(*x*,*y*) = 0 for some *x* and *y*, then *p* and/or *q* can blow up at the point—their coefficients belong to *k*(*y*), so they can have a polynomial in *y* in the denominators. It looks like *E* and *F* can intersect only at points where at least one of those denominators evaluates to 0. Which brings us back to the resultant.
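A concrete toy instance (mine, not from the post): take *E* = *x*² + *y*² − 1 and *F* = *x* − *y* over *K* = *k*(*y*). Matching coefficients in *pE* + *qF* = 1 gives

```latex
p = \frac{1}{2y^2 - 1}, \qquad q = \frac{-(x+y)}{2y^2 - 1},
```

and indeed *pE* + *qF* = (*x*² + *y*² − 1 − (*x*² − *y*²))/(2*y*² − 1) = 1. The denominators blow up exactly at *y* = ±1/√2, the two points where the line meets the circle.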

Φ is an isomorphism from *K*_{n}[*x*] ⊕ *K*_{m}[*x*] to *K*_{m+n}[*x*], so Φ^{–1} exists. Φ^{–1}(1) equals the pair ⟨*p*, *q*⟩ solving *pE* + *qF* = 1. Now, the matrix of Φ^{–1} is *S*^{–1}. In principle, you could compute it using Cramer’s rule. We don’t care about most of the details, except for one salient fact: *S*^{–1} has the form (matrix with entries in *R*)/det(*S*). Therefore, we can clear denominators in *pE* + *qF* = 1 if we multiply through by det(*S*), the resultant. Conclusion: we have a solution to

*PE* + *QF* = det(*S*),  *P* ∈ *R*_{n}[*x*], *Q* ∈ *R*_{m}[*x*]

Notice that det(*S*) belongs to *R*, and does *not* have any *x*’s in it. That’s what it means to eliminate *x*!
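With a toy pair (my example, not the post’s): for *E* = *x*² + *y*² − 1 and *F* = *x* − *y*, taking *P* = 1 and *Q* = −(*x* + *y*) gives *PE* + *QF* = 2*y*² − 1, which is det(*S*) for this pair and contains no *x*. A quick numeric sanity check:

```python
import random

def E(x, y): return x * x + y * y - 1   # a circle
def F(x, y): return x - y               # a line

# P = 1, Q = -(x + y) clears denominators: P*E + Q*F = 2y^2 - 1,
# a polynomial in y alone, so x has been eliminated.
for _ in range(100):
    x, y = random.randint(-50, 50), random.randint(-50, 50)
    assert E(x, y) - (x + y) * F(x, y) == 2 * y * y - 1
print("P*E + Q*F = det(S) checked at 100 random points")
```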

For the case where *E* and *F* belong to *R*=*k*[*x*,*y*] (our algebraic curves), we have a corollary: det(*S*), a polynomial in *y*, is 0 at all the intersections.
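For instance (again my toy example, not the post’s): for the circle *E* = *x*² + *y*² − 1 and the line *F* = *x* − *y*, det(*S*) works out to 2*y*² − 1, which vanishes precisely at the two *y*-coordinates where the curves meet. A sketch that evaluates the determinant at chosen rational values of *y*:

```python
from fractions import Fraction

def resultant_in_y(y):
    """det of the Sylvester matrix (in x) of E = x^2 + (y^2 - 1) and
    F = x - y at a fixed rational y; symbolically it is 2y^2 - 1."""
    y = Fraction(y)
    M = [[1, 0, y * y - 1],   # coefficients of E in x  (1 copy: deg F = 1)
         [1, -y, 0],          # coefficients of F, shifted (2 copies: deg E = 2)
         [0, 1, -y]]
    a, b, c = M
    # 3x3 determinant by cofactor expansion along the first row
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

print(resultant_in_y(0))               # → -1   (= 2*0 - 1)
print(resultant_in_y(1))               # → 1    (= 2*1 - 1)
print(resultant_in_y(Fraction(1, 2)))  # → -1/2 (no intersection at y = 1/2)
```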

How about the converse? If det(*S*)(*c*) = 0 for some value *y* = *c*, is there always an intersection of *E* and *F* on the line *y* = *c*?

We claimed that det(*S*) is the resultant, as we defined it before:

(*)  det(*S*) = *a*_{m}^{n} *b*_{n}^{m} ∏_{i,j} (*u*_{i} − *v*_{j})

where the *u*_{i}’s and *v*_{j}’s are all the roots of *E* and *F* (respectively), in the extension field *L*. (Also I’m now writing *a*_{m} and *b*_{n} for the leading coefficients, instead of just *a* and *b*.) So if det(*S*)(*c*) = 0 at some *c*, then either one of the leading coefficients is 0, or *E* and *F* have a common root in some extension field of *K*. (It turns out that *both* of the leading coefficients have to be 0, to avoid an intersection on the line *y* = *c*, i.e., a common root.)
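As a sanity check on the product formula (my toy numbers, not the post’s): take *E* = *x*² − 1 (roots ±1, leading coefficient 1) and *F* = *x*² − 4 (roots ±2, leading coefficient 1).

```python
from itertools import product

# E = x^2 - 1: leading coeff a_m = 1, roots u_i = ±1  (m = 2)
# F = x^2 - 4: leading coeff b_n = 1, roots v_j = ±2  (n = 2)
a_m, roots_E = 1, [1, -1]
b_n, roots_F = 1, [2, -2]
m, n = len(roots_E), len(roots_F)

res = a_m**n * b_n**m
for u, v in product(roots_E, roots_F):
    res *= u - v                      # (1-2)(1+2)(-1-2)(-1+2)
print(res)                            # → 9
```

The 4×4 Sylvester determinant of the coefficient lists [1, 0, −1] and [1, 0, −4] gives the same value, 9; and if *F* shared a root with *E*, some factor *u*_{i} − *v*_{j} would vanish and the resultant would be 0.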

More about this in “Inside the Episode”. Episode 4 will justify claim (*).