Algebraic Geometry Jottings 17


At last we come to Kendig’s proof of Bézout’s Theorem. Although the proof is not long, it will take me a few posts to appreciate it in full.

Kendig starts by choosing favorable coordinates; among other desiderata, he wants to avoid intersections at infinity. From the standpoint of logical efficiency, this is the right call. But I’d rather go through these difficulties than around them, as much as possible, hoping to glean more insight.

Another complication: fractional power series (Puiseux series). Kendig introduces these in §3.3. I’ve been avoiding them, and I prefer to postpone the reckoning a bit longer. But we will have to confront them soon.

The argument, thus modified, has four steps.

  1. Show Facts 4 and 5 from post 15: the order of the resultant is the sum of the multiplicities on the x-axis (with some provisos), and the degree of the resultant is the sum of all multiplicities in the affine plane (same provisos).
  2. Adapt (1) to handle fractional power series.
  3. Homogenize E and F and show how the homogenized resultant now counts the intersections at infinity as well.
  4. Show that the homogenized resultant has degree mn.

As usual, I’ll chew over the details.

Recall Kendig’s first definition of multiplicity, for a branch of E at an intersection with the curve F at the origin O. (For another intersection point P, move P to O.) First assume we’ve parametrized the branch via power series in a variable t, (x_E(t), y_E(t)). Plug into the polynomial F(x,y), getting a power series F(t) = F(x_E(t), y_E(t)). The order of F(t)—the degree of its lowest order term—is the multiplicity of the branch-curve intersection. Post 3 explored the intuition behind this: when we “perturb” E or F or both a little, an intersection of multiplicity r typically splits into r distinct intersections.
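Here’s a minimal sketch of this computation in sympy. The branch and the second curve are my own toy choices (a parabola meeting a cubic at O), not an example from Kendig:

```python
# Order of F(x_E(t), y_E(t)) as the branch-curve intersection multiplicity.
from sympy import symbols, expand, Poly

t, x, y = symbols('t x y')

x_E, y_E = t, t**2      # hypothetical branch of E through O: the parabola y = x^2
F = y - x**3            # hypothetical second curve: the cubic y = x^3

F_t = expand(F.subs({x: x_E, y: y_E}))            # F(t) = t^2 - t^3
mult = min(e for (e,) in Poly(F_t, t).monoms())   # order of the lowest term
print(F_t, mult)        # -t**3 + t**2  2
```

Perturbing either curve slightly should split this multiplicity-2 intersection into two simple ones, per the intuition from post 3.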

Remember also the local-global feature of Bézout’s Theorem: we need to total up all the branch-branch multiplicities at an intersection, and then sum this over all intersections. Kendig’s first definition already sums over the branches of the curve F. The second definition sums over horizontal lines when using res_x(y), or vertical lines when using res_y(x). For example, the roses:

res_x(y) = 16y^14(16y^2 - 5)^2

The green horizontal lines (in the figure, omitted here) pass through the intersections; the green numbers at right are total multiplicities. As we’ve seen before, res_x(y) = 16y^14(16y^2 - 5)^2 has order 14, for the single intersection of multiplicity 14 on the x-axis. The top horizontal line has y-coordinate √5/4; we shift it down to become the new x-axis by substituting y + √5/4 for y in the resultant. It would be painful to expand the result by hand, but we care only about the lowest degree terms of 16(y + √5/4)^14, clearly a nonzero constant, and (16(y + √5/4)^2 - 5)^2, easily computed to be a constant times y^2. So we get total multiplicity 2 for this line. Likewise for the bottom horizontal line.
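If you’d rather not expand anything by hand, the same shift takes a few lines in sympy (the resultant is the one quoted above; only its lowest exponent matters):

```python
from sympy import symbols, sqrt, expand, Poly

y = symbols('y')
res = 16*y**14 * (16*y**2 - 5)**2              # res_x(y) for the roses

shifted = expand(res.subs(y, y + sqrt(5)/4))   # move the top line to the x-axis
order = min(e for (e,) in Poly(shifted, y).monoms())
print(order)                                   # 2: total multiplicity on that line
```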

Recall the formulas for the resultant:

\det(S)=\text{res}_x(E,F) = a_m(y)^n b_n(y)^m\prod_{i=1}^m\prod_{j=1}^n (u_i-v_j)    (1)
= a_m(y)^n \prod_{i=1}^m F(u_i) = (-1)^{mn}b_n(y)^m \prod_{j=1}^n E(v_j)    (2)

E(x,y) = a_m(y)x^m + ··· + a_0(y) = a_m(y)(x - u_1)···(x - u_m)    (3a)
F(x,y) = b_n(y)x^n + ··· + b_0(y) = b_n(y)(x - v_1)···(x - v_n)    (3b)

(I’ve made it explicit that the coefficients are polynomials in y.) Kendig proves Fact 4 (or rather a special case) using (1), but I prefer to use (2). The branches of E crossing the x-axis are essentially just the roots of the equation E(x,y) = 0; let’s assume these can be expressed as power series in y for y small enough. (This won’t work if a branch isn’t locally the graph of a function, but I’m postponing that issue.) Thus the i-th branch has the parametrization (u_i(y), y), where u_i(y) is a power series in y. In other words, we have a parametrization (x, y) = (u_i(t), t).

By Kendig’s first definition, the multiplicity of the intersection between the i-th branch of E and the curve F is just the order of F(t) = F(u_i(t), t) in t. Since y = t, this is just the order of F(u_i(y), y) as a power series in y. When you multiply power series, the orders add. Therefore, by eq.(2), the order of res_x(y) in y is the sum of the multiplicities over all branches of E crossing the x-axis, plus n times the order of a_m(y). If a_m(y) is a constant, we get Fact 4 as stated.
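Here’s a hedged check of Fact 4 on a toy pair of my own choosing (not the roses): E = x^2 - y^2 has the two branches x = ±y through the origin, and F is the parabola y = x^2.

```python
from sympy import symbols, resultant, factor, Poly

x, y = symbols('x y')
E = x**2 - y**2
F = y - x**2

res = resultant(E, F, x)                  # eliminate x
print(factor(res))                        # y**2*(y - 1)**2

# Order of res_x(y) at y = 0:
print(min(e for (e,) in Poly(res, y).monoms()))   # 2

# Branch multiplicities on the x-axis, via the first definition:
#   branch x =  y:  F(y, y)  = y - y**2 has order 1 at y = 0
#   branch x = -y:  F(-y, y) = y - y**2 has order 1 at y = 0
# Sum = 2, matching the order of the resultant (a_m = 1 here).
```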

How about the other horizontal lines? To get their y-coordinates, we factor the resultant, say

res_x(y) = k(y - c_1)^{r_1} ··· (y - c_l)^{r_l}

If, say, c_1 = 0, then r_1 would be the order of res_x(y), measuring the contribution from the x-axis. If c_1 ≠ 0, then we shift things up or down to make the line y = c_1 the new x-axis, by substituting y + c_1 for y. This works for any c_i. We conclude that r_i is the total multiplicity of the intersections on the line y = c_i (when a_m(y) is constant). Since r_1 + ··· + r_l is the degree of res_x(y), we’re done!
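Continuing the toy example from the last sketch: factoring the resultant and shifting each root to the x-axis recovers each r_i as an order, just as the argument says.

```python
from sympy import symbols, Poly, roots

y = symbols('y')
res = y**4 - 2*y**3 + y**2            # res_x(y) from the sketch above

for c, r in roots(Poly(res, y)).items():
    shifted = Poly(res.subs(y, y + c), y)
    order = min(e for (e,) in shifted.monoms())
    print(c, r, order)                # the order after shifting equals r
```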


Algebraic Geometry Jottings 16


The Resultant, Episode 5: Inside the Episode

The double-product form for the resultant:

\text{res}_x(E,F) = a_m^n(y) b_n^m(y) \prod_{i=1}^m\prod_{j=1}^n (u_i-v_j)  (1)

implies Fact 3:

  res_x(y_0) = 0 ⇔ [the line y = y_0 passes through an intersection of E and F, or a_m(y_0) = b_n(y_0) = 0].

provided it makes sense to evaluate the u_i’s and v_j’s at y_0. If you’ll grant me that, then here goes the argument. In one direction: if res_x(y_0) = 0 and a_m(y_0) ≠ 0 and b_n(y_0) ≠ 0, then some factor u_i(y_0) - v_j(y_0) must equal 0. (What if a_m(y_0) ≠ 0 but b_n(y_0) = 0? See below.) Let x_0 = u_i(y_0) = v_j(y_0). Plugging into the factored forms for E(x,y) and F(x,y):

E(x,y) = a_m(y)(x - u_1)···(x - u_m)    (3a)
F(x,y) = b_n(y)(x - v_1)···(x - v_n)    (3b)

gives E(x_0, y_0) = F(x_0, y_0) = 0, so (x_0, y_0) is an intersection of E and F. This argument is easily reversed to prove the other direction.

This near-proof or plausibility proof has some wrinkles worth investigating. First off, it seems from eq.(1) that a_m(y_0) = 0 or b_n(y_0) = 0 should force res_x(y_0) = 0—why do we need them both to be zero? The simplest example exhibiting the flaw in this ointment: x = y, xy = 1, i.e., E(x,y) = x - y, F(x,y) = yx - 1. When y_0 = 0, the leading coefficient of F is 0. The double product is (y - 1/y) = (y^2 - 1)/y. The resultant is y times that, that is, y^2 - 1; plugging in y_0 = 0 thus gives res_x(y_0) = -1. If we hadn’t canceled y first, we’d have gotten 0·∞.
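sympy agrees with the hand computation; this is just a verification of the example, nothing more:

```python
from sympy import symbols, resultant, factor

x, y = symbols('x y')
E = x - y
F = y*x - 1

res = resultant(E, F, x)
print(factor(res))         # y**2 - 1
print(res.subs(y, 0))      # -1: nonzero, even though b_n(0) = 0
```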

A more elaborate example: E(x) = ax^2 + bx + c, F(x) = dx + e. Using the approximation \sqrt{b^2-4ac}\approx b-2ac/b, valid for |ac| ≪ b^2, the roots of E(x) = 0 are approximately

x\approx\begin{cases}-\frac{b}{a}+\frac{c}{b}\\-\frac{c}{b}\end{cases}

Keeping b and c fixed (with b ≠ 0) while a → 0, one of the roots of E(x) = 0 tends to -c/b while the other wanders off to infinity. F(x) = 0 has the fixed root -e/d. When a = 0, E morphs into a linear equation, sharing a root with F precisely when c/b = e/d, i.e., when be - cd = 0. We’ve seen this before: it’s the Sylvester determinant:

\begin{vmatrix}b & c\\d & e\end{vmatrix}=be-cd

But if we plug a=0 into the Sylvester determinant for the original pair E and F, we have:

\begin{vmatrix}{} & b & c\\d & e\\{} & d & e \end{vmatrix}=-d(be-cd)
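Both determinants are easy to check in sympy with generic coefficients (a verification of the two displays above, nothing more):

```python
from sympy import symbols, resultant, factor

x, a, b, c, d, e = symbols('x a b c d e')
E = a*x**2 + b*x + c
F = d*x + e

full = resultant(E, F, x)           # the 3x3 Sylvester determinant
print(factor(full.subs(a, 0)))      # d*(c*d - b*e), i.e. -d*(b*e - c*d)
print(resultant(b*x + c, F, x))     # b*e - c*d, the 2x2 determinant
```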

Let’s look at the general case, but let’s get rid of those pesky leading coefficients by dividing them out:

E′(x) = x^m + (a_{m-1}/a_m) x^{m-1} + ··· + (a_0/a_m)    (3a′)
F′(x) = x^n + (b_{n-1}/b_n) x^{n-1} + ··· + (b_0/b_n)    (3b′)

Say the original coefficients are polynomials in y. We’ve introduced poles at any of the roots of a_m(y) = 0 or b_n(y) = 0. If, say, b_n(y_0) = 0, then the actual degree of F(x, y_0) in x is less than n, but (you might say) its formal degree is still n. (Some people do say this.) Whereas F(x, y_0) = 0 has fewer than n roots, F′(x, y_0) = 0 is blessed (or cursed) with additional roots at infinity.

Now, if a_m(y_0) is not 0, then all the roots of E′(x, y_0) = 0 are finite. If we’re asking after common roots, we can ignore the infinite roots of F′(x, y_0).

How does the resultant view this? The roots at infinity bedevil the double product in eq.(1), but the leading coefficient factors smooth the troubled waters. If we use the actual degrees, we’re rewarded with a smaller Sylvester determinant. As long as a_m(y_0) ≠ 0 and b_n(y_0) = 0 (or vice versa), the “formal” and “actual” determinants give the same verdict for the existence of common roots: one equation has only finite roots, the other just some extra roots at infinity. If both a_m(y_0) and b_n(y_0) are 0, then we have common roots at infinity, and det(S) is 0.

This is easy to see from the determinant itself, using expansion by minors down the first column. Say a_m(y_0) ≠ 0 and b_n(y_0) = 0. Letting m = 2, n = 3 as usual, to ease formatting, we have:

\begin{vmatrix}  a_2 & a_1 & a_0\\  {} & a_2 & a_1 & a_0\\  {} & {} & a_2 & a_1 & a_0\\  0 & b_2 & b_1 & b_0\\  {} & 0 & b_2 & b_1 & b_0  \end{vmatrix} =a_2\cdot\begin{vmatrix}  a_2 & a_1 & a_0\\{} & a_2 & a_1 & a_0\\  b_2 & b_1 & b_0\\  {} & b_2 & b_1 & b_0  \end{vmatrix}

with the “formal” determinant on the left, and the “actual” one on the right. (At least if b_2(y_0) ≠ 0. If not, rinse, repeat.) So one determinant is 0 if and only if the other is.
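Here is the same expansion done concretely in sympy, with b_3 already set to 0 in the “formal” matrix:

```python
from sympy import symbols, Matrix, expand

a2, a1, a0, b2, b1, b0 = symbols('a2 a1 a0 b2 b1 b0')

formal = Matrix([
    [a2, a1, a0,  0,  0],
    [ 0, a2, a1, a0,  0],
    [ 0,  0, a2, a1, a0],
    [ 0, b2, b1, b0,  0],
    [ 0,  0, b2, b1, b0]])

actual = Matrix([
    [a2, a1, a0,  0],
    [ 0, a2, a1, a0],
    [b2, b1, b0,  0],
    [ 0, b2, b1, b0]])

print(expand(formal.det() - a2*actual.det()))   # 0
```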

OK, how about the other dodgy aspect of the plausibility proof, evaluating the u_i’s and v_j’s at a value y_0 in k? The u_i’s and v_j’s belong to an algebraic extension L of k(y). We sort of have an evaluation map from k(y) to k, at least when we steer clear of poles. It seems like one should be able to extend this to a suitable map from L to k, since k is algebraically closed. When k = ℂ, the theory of Riemann surfaces handles the matter. For algebraically closed k in general, I gather Puiseux series step into the breach. Kendig discusses this in §§3.3–3.4.

While we’re on the topic of alternate proofs, Lang’s Algebra concludes the justification of eq.(1) (det(S)=double product) this way, adjusting his notation to agree with ours:

From (2) we see that [the double product] is homogeneous and of degree n in [the a_i’s], and homogeneous and of degree m in [the b_j’s]. Since [det(S)] has exactly the same homogeneity properties, and is divisible by [the double product], it follows that [det(S) = c·(double product)] for some integer c. Since both [det(S) and the double product] have a monomial [a_m^n b_0^m] occurring in them with coefficient 1, it follows that c = 1, and our proposition is proved.

To my mind, the logic suffers from two gaps, although I think I see how to fill them. Recall the corresponding spot in the proof from Episode 4. We were looking at the special case with a_m = b_n = 1. We’d set R = k[u_1,…,u_m, v_1,…,v_n]. We’d shown

\det(S)=h\cdot \prod_{i=1}^m\prod_{j=1}^n (u_i-v_j), with h ∈ R.

Let’s write R_ab for the subring k[a_{m-1},…,a_0, b_{n-1},…,b_0] of R. Note that det(S) belongs to R_ab. If we knew that the double product and h also belonged to R_ab, we’d be home free.

Galois theory shows that the double product belongs to R_ab. The equivalent expression \prod_{i=1}^m F(u_i) belongs to R_ab[u_1,…,u_m], and is symmetric in the u_i’s; that should do the trick. We know that h results from dividing one element of R_ab by another, without remainder in R. This ought to force h ∈ R_ab. Ideals, Varieties, and Algorithms (Cox, Little, and O’Shea) provides a multivariable polynomial division algorithm; I haven’t checked the details, but I think this offers what we need.

So we can validate Lang’s argument, but it takes a bit of work. The approach in Episode 4 avoids these potholes, and as a bonus, introduces the “packing with t‘s” technique we’ll need later.
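As a concrete spot-check of eq.(1) in Episode 4’s special case a_m = b_n = 1, here with m = n = 2 and symbolic roots (a sketch, not a proof):

```python
from sympy import symbols, expand, resultant

x, u1, u2, v1, v2 = symbols('x u1 u2 v1 v2')
E = expand((x - u1)*(x - u2))       # coefficients: symmetric functions of the u's
F = expand((x - v1)*(x - v2))

dp = expand((u1 - v1)*(u1 - v2)*(u2 - v1)*(u2 - v2))
print(expand(resultant(E, F, x) - dp))   # 0: det(S) equals the double product
```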

One last matter. We’ve already made use of the formula PE + QF = det(S), for some P ∈ R_n[x], Q ∈ R_m[x]. Proving this took some effort. It’s much easier to derive this result:

PE + QF = D
for some P ∈ R[x], Q ∈ R[x], and D ∈ R

Here’s the proof. Let K be the fraction field of R, as usual. Then K[x] is a PID. So if E and F are coprime, then for some p, q ∈ K[x],

pE+qF=1

Since the coefficients of p and q all belong to K, we can clear denominators and get PE+QF=D. Note that D≠0. When R=k[y], D is a polynomial in y alone, and the y-coordinates of the intersections of E and F are roots of D.
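Here’s a sketch of this in sympy, for the pair E = x - y, F = yx - 1 from earlier; I’m assuming sympy’s Poly.gcdex over the fraction field K = Q(y):

```python
from sympy import symbols, Poly, QQ

x, y = symbols('x y')
K = QQ.frac_field(y)                  # K = Q(y), fraction field of R = Q[y]
E = Poly(x - y, x, domain=K)
F = Poly(y*x - 1, x, domain=K)

s, t, h = E.gcdex(F)                  # s*E + t*F = h, and h = 1: E, F coprime
print(s, t, h)
# s = -y/(y**2 - 1), t = 1/(y**2 - 1); clearing denominators gives
#   (-y)*E + 1*F = y**2 - 1 = D, matching res_x(E, F) from before.
```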

Now let’s think about the ideal (E,F) = {PE + QF : P, Q ∈ R[x]}. Intersecting this with R gives an ideal of R. When R = k[y], R is a PID, so (E,F) ∩ k[y] is principal. It’s nonzero because D belongs to it. In other words, we can eliminate x from the equations E(x,y) = 0 and F(x,y) = 0 and get something non-trivial.

We might as well write D for a generator of (E,F) ∩ k[y]. I used to believe that res_x(y) was such a generator. Not true. We’ve seen an example already in Inside Episode 2 (with x and y switched): the two ellipses x^2 + y^2 = 1 and 2x^2 + y^2 = 1. Easy congruence calculations show that x^2 ≡ 0 mod (E,F), and as we saw way back in post 6, this amounts to showing that x^2 belongs to (E,F). In fact (E,F) ∩ k[x] is generated by x^2. But res_y(x), as we saw, is x^4.
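sympy confirms the two computations quoted here: x^2 lies in the ideal, and res_y(x) = x^4. (That the ideal intersection is exactly (x^2) takes more argument.)

```python
from sympy import symbols, resultant, factor

x, y = symbols('x y')
E = x**2 + y**2 - 1          # the first ellipse
F = 2*x**2 + y**2 - 1        # the second

print(factor(F - E))                 # x**2: so x**2 lies in the ideal (E, F)
print(factor(resultant(E, F, y)))    # x**4: res_y(x) overshoots the generator
```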


Algebraic Geometry Jottings 15


The Resultant, Episode 5 (The Finale)

Recap: The setting is an integral domain R, with fraction field K, and an extension field L of K in which E(x) and F(x) split completely. E(x) and F(x) have coefficients in R. E(x) has degree m, F(x) degree n; we assume m, n > 0. The main special case for us: R = k[y], K = k(y), so R[x] = k[x,y], and E and F are polynomials in x and y. As always, we assume k is algebraically closed.

The formulas:

\det(S)=\text{res}_x(E,F) = a_m^n b_n^m\prod_{i=1}^m\prod_{j=1}^n (u_i-v_j)    (1)
= a_m^n \prod_{i=1}^m F(u_i) = (-1)^{mn}b_n^m \prod_{j=1}^n E(v_j)    (2)

E(x) = a_m x^m + ··· + a_0 = a_m(x - u_1)···(x - u_m)    (3a)
F(x) = b_n x^n + ··· + b_0 = b_n(x - v_1)···(x - v_n)    (3b)

PE + QF = det(S)    (4)
for some P ∈ R_n[x], Q ∈ R_m[x],
i.e., deg(P) < n, deg(Q) < m, coefficients in R

Φ: R_n[x] ⊕ R_m[x] → R_{m+n}[x]    (5)
Φ(P,Q) = PE + QF
ditto with K in place of R

Matrix of Φ is the Sylvester matrix, e.g.

\begin{bmatrix}  a_2 & a_1 & a_0\\  {} & a_2 & a_1 & a_0\\  {} & {} & a_2 & a_1 & a_0\\  b_3 & b_2 & b_1 & b_0\\  {} & b_3 & b_2 & b_1 & b_0  \end{bmatrix} ;
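For what it’s worth, the matrix above is easy to reconstruct and test against sympy’s built-in resultant (generic coefficients, m = 2, n = 3):

```python
from sympy import symbols, Matrix, resultant, expand

x = symbols('x')
a2, a1, a0 = symbols('a2 a1 a0')
b3, b2, b1, b0 = symbols('b3 b2 b1 b0')

E = a2*x**2 + a1*x + a0
F = b3*x**3 + b2*x**2 + b1*x + b0

S = Matrix([
    [a2, a1, a0,  0,  0],
    [ 0, a2, a1, a0,  0],
    [ 0,  0, a2, a1, a0],
    [b3, b2, b1, b0,  0],
    [ 0, b3, b2, b1, b0]])

print(expand(S.det() - resultant(E, F, x)))   # 0: det(S) = res_x(E, F)
```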

In our main special case, eq.(3a) now reads, in part: E(x,y) = a_m(y)x^m + ··· + a_0(y), likewise for (3b). So we have polynomials E(x,y) and F(x,y) defining curves E and F. The key facts:

  1. The resultant res_x(E,F) is a polynomial in y alone, call it res_x(y).
  2. res_x(y) is identically 0 ⇔ Φ is singular ⇔ E and F have a common component ⇔ E and F have a common nonconstant factor.
  3. res_x(y_0) = 0 ⇔ [the line y = y_0 passes through an intersection of E and F, or a_m(y_0) = b_n(y_0) = 0].
  4. The order of res_x(y) is the sum of the multiplicities of the intersections on the x-axis, provided a_m(y) is a constant.
  5. The degree of res_x(y) is the sum of the multiplicities of the intersections in the affine plane, provided a_m(y) is a constant.

Also, x and y can trade places in all of these.

Now to tie up some loose ends, namely proving some of this. Fact (1) holds because det(S) belongs to R=k[y].  Episodes 1 and 2 spent most of their running time justifying Fact (2).

One direction of Fact 3 falls out immediately from eq.(4): if the line y = y_0 passes through an intersection of E and F, then there exists an x_0 such that E(x_0, y_0) = F(x_0, y_0) = 0, whence

res_x(y_0) = P(x_0, y_0)E(x_0, y_0) + Q(x_0, y_0)F(x_0, y_0) = 0.

On the other hand, if a_m(y_0) = b_n(y_0) = 0, then plugging this value of y into the Sylvester determinant gives a first column that is entirely 0. So the resultant is 0 at y = y_0.

In the other direction, we stated Fact 2 for the polynomial res_x(y), implicitly assuming R = k[y]. But Fact 2 holds just as well for the more boring case of R = k. It says that if res_x (an element of k) equals 0, then E(x, y_0) and F(x, y_0) have a common nonconstant factor in an extension field. Since k is algebraically closed, the polynomials factor completely over k and so must have a common factor (x - x_0). Thus E(x_0, y_0) = F(x_0, y_0) = 0, and E and F have an intersection on the line y = y_0.

“Hmm”, you’re probably wondering, “where did you use the assumption that a_m(y_0) ≠ 0 or b_n(y_0) ≠ 0?” If you trace through the proof of Fact 2 in Episode 2, you’ll find this step: pE = -qF, so the degree m polynomial E divides qF. But deg(q) < m, so E and F share a nonconstant factor. Well, E(x,y) as a polynomial in x has degree m, and E(x, y_0) will still have degree m when a_m(y_0) ≠ 0. We can use the same reasoning if b_n(y_0) ≠ 0. But if a_m(y_0) = b_n(y_0) = 0, then the argument breaks down.

A somewhat subtle point: we’ve tacitly assumed that we obtain the resultant of E(x, y_0) and F(x, y_0) by plugging y = y_0 into the resultant of E(x,y) and F(x,y). If you want to get all fancy about it, we’re applying an evaluation homomorphism from k[y] to k, converting the two-variable polynomials E(x,y) and F(x,y) into the one-variable E(x, y_0) and F(x, y_0), and likewise converting det(S), with entries from k[y], into a determinant with entries from k. I’ve belabored this because it’s at the heart of the other direction for Fact 3.
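Here’s a small illustration of the point on a toy pair of my own (leading coefficients constant, so nothing drops degree): specializing y first and then taking the resultant agrees with specializing the resultant.

```python
from sympy import symbols, resultant, Rational

x, y = symbols('x y')
E = x**2 - y**2              # leading coefficient in x is 1: never drops degree
F = y - x**2

res = resultant(E, F, x)     # a polynomial in y
for y0 in [Rational(1, 2), 2, -3]:
    specialized = resultant(E.subs(y, y0), F.subs(y, y0), x)
    print(specialized == res.subs(y, y0))    # True, True, True
```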

If you ponder the double-product form for the resultant, on the right hand side of eq.(1), you might see an alternate proof of Fact 3. I’ll talk about this in “Inside the Episode”.

As for the proofs of Facts 4 and 5, I will feature these as the initial steps of Kendig’s proof of Bézout’s Theorem.

[Closing theme music, credits—

“These episodes owe much to Anthony Knapp’s treatment in his Advanced Algebra.”

promos for other miniseries.]


Algebraic Geometry Jottings 14


The Resultant, Episode 4

This episode has a single purpose: to show that the two formulas for the resultant are equivalent. The next episode, the finale, will tie up some loose ends.


Weierstrass’s Smackdown of Dirichlet’s Principle

In 1856 Dirichlet made the following claim in a lecture:


The Monoenergetic Heresy (Part 1)

[Image omitted: The Emperor Heraclius. Credit: Classical Numismatic Group, Inc., via Wikimedia Commons.]

And now for something completely different.


Escher’s Toroidal Print Gallery

If Art+Math brings one person to mind, it’s Escher. His tessellations present the best-known instance, but he did a lot more than that.

In April 2003, the mathematicians Bart de Smit and Hendrik Lenstra wrote a delightful article, Escher and the Droste effect, about Escher’s lithograph Prentententoonstelling. They pointed out that

We shall see that the lithograph can be viewed as drawn on a certain elliptic curve over the field of complex numbers…
