Category Archives: Math

Set Theory Jottings 19. GCH implies AC.

Prev TOC Next

Sierpiński’s Theorem: GCH implies AC

I had a look at the version of the proof in Cohen (§IV.12). Sierpiński was a clever fellow, and he came up with a few tricks that would be hard to motivate.

Here I will try to imagine how Sierpiński could have devised his proof. Cohen does offer one bit of intuition:

The GCH is a rather strong assertion about the existence of various maps since if we are ever given that A ≤ B ≤ P(A) then there must be a 1–1 map either from B onto A or from B onto P(A). Essentially this means that there are so many maps available that we can well-order every set.

Let A be the set we wish to well-order. Let’s write A≤B to mean there is an injection from A into B. GCH tells us that for any U,

A≤U≤𝒫(A) implies U≡A or U≡𝒫(A)

If U is well-ordered, then U≡A and U≡𝒫(A) both imply that A can be well-ordered, the latter because A is naturally imbedded in 𝒫(A). But this is too simple an approach: A≤U already makes A well-ordered for a well-ordered U, so if we could show the antecedent we’d be done—we wouldn’t need GCH to finish the job.

Let’s not assume U is well-ordered, but instead suppose it contains a well-ordered set. Say we could show that

A≤W+A≤𝒫(A)

for a well-ordered set W, where ‘+’ stands for disjoint union. (That is, W×{0}∪A×{1}, or some similar trick to insure disjointness.) Then we’d have

W+A≡𝒫(A) or W+A≡A

Now, if W+A≡𝒫(A), then we ought to have W≡𝒫(A), just because A is “smaller” than 𝒫(A) (in some sense)—W should just absorb A, if W is “big enough”. Also, if W is “big enough” then that should exclude the other arm of the choice, where W+A≡A. And if W≡𝒫(A), then 𝒫(A) and so also A can be well-ordered, as we have seen.

At this point Hartogs’ theorem shows up at the door. This gives us a well-ordered set W with

W≤𝒫⁴(A) and W≰A

So we have

A≤W+A ≤𝒫⁴(A)+A ?≡? 𝒫⁴(A)

(where ?≡? means that the equivalence needs to be proven). W≰A excludes W+A≡A, good. Let’s postpone the issue of the ‘?’. Deal first with the problem that the bounds are not tight enough for GCH to apply. We fix that by looking at:

𝒫³(A)≤W+𝒫³(A)≤𝒫⁴(A)+𝒫³(A) ?≡? 𝒫⁴(A)

So if W+𝒫³(A)≡𝒫⁴(A), we ought to have W≡𝒫⁴(A) and hence a well-ordering of A. What about the other case, W+𝒫³(A)≡𝒫³(A)? Ah, then we have

𝒫²(A)≤W+𝒫²(A)≤W+𝒫³(A)≡𝒫³(A)

and so we can repeat the argument: either W+𝒫²(A)≡𝒫³(A), which ought to make 𝒫³(A) well-ordered and hence also A well-ordered; or W+𝒫²(A)≡𝒫²(A), in which case we repeat the argument yet again. Eventually we work our way down to

A≤W+A≤W+𝒫(A)≡𝒫(A)

and W+A≡A is excluded since W≰A, and we are done.

All this relies on the intuition that if W+M≡𝒫(M), then we should have W≡𝒫(M): we used this with M=𝒫ⁿ(A) for n=0,…,3. Well, we can prove something a little weaker.

Lemma: If W+M≡𝒫(M)×𝒫(M), then W≥𝒫(M).

Proof: Suppose h:W+M→𝒫(M)×𝒫(M) is a bijection. Restrict h to M and compose with the projection to the second factor: π₂∘(h↾M):M→𝒫(M). Cantor’s diagonal argument shows that this map cannot be onto. (The fact that π₂∘(h↾M) might not be 1–1 doesn’t affect the argument.) So for some s₀∈𝒫(M), we know that h(x) never takes the form (−,s₀) for x∈M. In other words, the image of h↾W must include all of 𝒫(M)×{s₀}. Therefore 𝒫(M)×{s₀} can be mapped 1–1 to a subset of W. qed.
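The diagonal step is easy to see concretely. Here is a minimal Python sketch of Cantor's argument for a finite toy set (the function names are my own, and of course the lemma itself concerns arbitrary, not finite, sets): for every map g from M into 𝒫(M), the diagonal set {x∈M : x∉g(x)} is missed.

```python
from itertools import combinations, product

def diagonal_miss(M, g):
    """For g: M -> P(M) (a dict sending each x to a frozenset),
    return D = {x in M : x not in g(x)}.  Cantor's diagonal
    argument says D is never in the range of g."""
    return frozenset(x for x in M if x not in g[x])

M = [0, 1, 2]
power_set = [frozenset(s) for r in range(len(M) + 1) for s in combinations(M, r)]

# Exhaustively check every map g: M -> P(M).
for images in product(power_set, repeat=len(M)):
    g = dict(zip(M, images))
    assert diagonal_miss(M, g) not in g.values()

print("every g: M -> P(M) misses its diagonal set, so none is onto")
```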

The missing pieces of the proof now all take the form of absorption equations. We know that 𝒫(M)×𝒫(M)≡𝒫(M+M)—as an equation for cardinals, 2^𝔪·2^𝔪=2^(2𝔪). If we had 2𝔪=𝔪, that would take care of that problem. The ?≡? above also takes the form 2^𝔪+𝔪 ?=? 2^𝔪, for 𝔪 the cardinality of 𝒫³(A).

The general absorption laws for addition depend on AC. But we do have these suggestive equations even without AC:

𝔞+ω+1 = 𝔞+ω, 2^(𝔪+1) = 2·2^𝔪

and so if 𝔞+ω=𝔪 and 2^𝔪=𝔟, then 2𝔟=𝔟. So let’s say we set B=𝒫(A+ω). Then we have 2B≡𝒫(A+ω+1)≡B (where 2B, of course, is the disjoint union of B with itself). Also B≤B+1≤2B≡B, so B≡B+1 and so 𝒫(B)≡2𝒫(B). So if we replace A with B, then all gaps in the argument are filled and we conclude that B can be well-ordered. But obviously A can be imbedded in B, so A also can be well-ordered. QED.

Ernst Specker proved a “local” version of Sierpiński’s Theorem: if 𝔪 and 2^𝔪 both satisfy CH, then 2^𝔪=ℵ(𝔪).

Prev TOC Next


Filed under Set Theory

Set Theory Jottings 18. The Axiom of Determinacy

Prev TOC Next

Just denying the axiom of choice doesn’t buy you much. If you’re going to throw away AC, you should add some powerful incompatible axiom in its place. The Axiom of Determinacy (AD) has been studied in this light.

Here’s one formulation. Let S be ℕ^ℕ, i.e., the set of all infinite strings of natural numbers. Let G⊆S. Alice and Bob play a game where at step 2n, Alice chooses a number s2n, and at step 2n+1, Bob chooses a number s2n+1. If s0s1s2…∈G, Alice wins, otherwise Bob wins. We say elements of G are assigned to Alice, and elements not in G are assigned to Bob. We’ll call the infinite strings results (of the game). Rather than think of G as a set of results, think of it as a function G:S→{Alice,Bob}.

A strategy for Alice tells her how to play each move. Formally, it’s a function from the set of all number strings of finite even length to ℕ. Likewise, a strategy for Bob maps number strings of finite odd length to numbers. A game is determined if Alice or Bob has a winning strategy, i.e., a strategy that yields a win no matter how the opponent plays. The Axiom of Determinacy says that each game is determined.
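For concreteness, here is a minimal Python sketch (entirely my own notation, nothing canonical) of how a pair of strategies determines a result, move by move; whether Alice wins still depends on the whole infinite string, which is why nothing here can decide a winner.

```python
def play(alice, bob, n_moves):
    """Generate the first n_moves numbers of the result when Alice
    follows strategy `alice` and Bob follows strategy `bob`.
    A strategy is a function from a finite prefix (a tuple) to a natural
    number; Alice moves on even-length prefixes, Bob on odd-length ones."""
    s = []
    for step in range(n_moves):
        prefix = tuple(s)
        s.append(alice(prefix) if step % 2 == 0 else bob(prefix))
    return s

# Toy strategies: Alice always plays 0; Bob echoes Alice's last move plus 1.
alice = lambda prefix: 0
bob = lambda prefix: prefix[-1] + 1

print(play(alice, bob, 8))   # [0, 1, 0, 1, 0, 1, 0, 1]
```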

Interesting thing about the proof that AC → ¬AD: it’s much easier using the well-ordering theorem instead of Zorn’s lemma.

First note that there are c=ℵ0^ℵ0 strategies (lumping together both Alice and Bob strategies), likewise c results. Assuming AC, well-order the strategies {Sα:α<ωc}. Here ωc is the least ordinal with cardinality c, so the set {α:α<κ} has cardinality less than c for each κ<ωc.

We construct a game G by inducting transfinitely through all the strategies, at step κ considering Sκ. Our goal is to assign some result to Alice or Bob that prevents Sκ from being a winning strategy. Say Sκ is an Alice strategy. Since we assign only one result at each step, fewer than c results have been assigned before step κ. However, there are c possible results if Alice follows Sκ, since Bob can play his numbers however he wants. So there exists a result where Alice follows Sκ but this result has not yet been assigned to either player. Assign it to Bob; this thwarts Sκ. If Sκ is a Bob strategy, just switch everything around. QED

The cardinality argument at the heart of this proof is harder to pull off with Zorn’s lemma (though possible, of course). (The exact same argument works with bit strings instead of strings of natural numbers, but for some reason AD is generally stated using ℕ^ℕ instead of 𝒫(ℕ).)

Prev TOC Next


Filed under Set Theory

Set Theory Jottings 17. Ordinals Revisited

Prev TOC Next

In post 13 I sketched proof by transfinite induction, and definition by transfinite recursion. Let’s take a closer look at that. By definition, < means ∈ for ordinals—treat that also as vernacular. Sometimes one symbol or the other makes the meaning clearer. We must keep in mind though that we have not shown that < is a well-ordering on Ω, or even a simple ordering.

Let Ω(x) be the formula for “x is an ordinal”, thus:

(∀u,v)(u∈v∈x → u∈x)
∧(∀u,v∈x)(u<v ∨ v<u ∨ u=v)
∧(∀u∈x)(¬u<u)
∧(∀u,v∈x)(¬(u<v ∧ v<u))
∧(∀u,v,w∈x)(u<v<w → u<w)
∧(∀y⊆x)(y≠∅ → ∃u(u∈y ∧ ∀v(¬(v<u ∧ v∈y))))

The first line says x is transitive, the next four lines say that < simply-orders x, and the last line says that < is a well-ordering on x. (Quite a bit of vernacular, but by now you should know how to deal with “u∈v∈x’’, for example.)

The clauses are not as economical as possible: for example, Foundation implies the third and last line, and a transitive irreflexive relation is automatically asymmetric, so even without Foundation we don’t need the fourth line. For the moment, we proceed without using Foundation.

Within an ordinal, ∈ well-orders; we want to bootstrap this to all of Ω. First an easy observation. Given a formula φ(x), if any β∈α satisfy φ, then there is a least such β. We need only invoke Separation to provide us with the set of all β∈α satisfying φ, and then appeal to the last clause in the definition of an ordinal.

How about smallest elements over all of Ω? That is,

∃αφ(α)→∃α(φ(α) ∧ ∀β(¬(β<α∧φ(β))))

(By vernacular convention, α and β are ordinals. Formally one begins “∃x(Ω(x)∧φ(x))…’’.)

Suppose ∃γφ(γ). If none of γ’s predecessors satisfy φ, then we’re done. Otherwise suppose φ(α) with α∈γ. Because γ is well-ordered, there is an element of γ—let’s still call it α—such that α is the smallest element of the set {β∈γ:φ(β)}. We need to know that there are no ordinals, period, that satisfy φ and precede α. But if β were such an ordinal, we’d have β∈α∈γ. Since γ is transitive, we’d have β∈γ, and we’ve already ruled that out.

The contrapositive is transfinite induction. Letting ψ be ¬φ,

∀α((∀β<α)ψ(β)→ψ(α)) → ∀αψ(α)

With transfinite induction, we can show that Ω is simply-ordered. Only trichotomy presents any wrinkles. We induct on the property, “α is comparable with every ordinal.” Suppose this is true for all β<α. Let γ be an arbitrary ordinal. If γ<β or γ=β for some β∈α, then γ<α, since α is transitive (and < means ∈). Suppose then that β∈γ for all β∈α. That is, α⊆γ. If α=γ, we’re done. Suppose α⊂γ. Let δ be the smallest element of γ∖α. We will now show that α=δ, proving α∈γ, i.e., α<γ.

The key point: both α and δ are subsets of γ, and within γ, ∈ is a simple ordering. Extensionality kicks in: we have to prove x∈α↔x∈δ, or turned around, x∉δ↔x∉α. Now, α is an initial segment of γ because α is transitive. Therefore δ>x for all x∈α because δ∉α. So x∉δ implies x≥δ implies x∉α. On the other hand, x∉α implies x>y for all y∈α, again because α is an initial segment. Since δ is the least element of γ∖α, it follows that x≥δ, i.e., ¬x<δ, i.e., x∉δ. We’re done.

So < is a well-ordering on Ω, and we can trust our intuition about it (mostly).

Next consider transfinite recursion. The discussion in post 13 applies, with a few remarks. As before, we have a “rule” ρ(α,f,s) that assigns a value s to an ordinal α given as input a function f:α→···. Our rule is a formula, and may even include parameters. Thus: ρ(α,f,s,t̄).

Suppose that for the parameter values t̄=c̄ this formula is “function-like”; then we’re in business. One proves in ZF that the following condition

∀α,f ∃s ρ(α,f,s,t̄)
∧ ∀α,f,s,t (ρ(α,f,s,t̄)∧ρ(α,f,t,t̄)→s=t)

implies, for all α, the existence of a unique function gα:α+1→··· “obeying the recursion”. The proof uses transfinite induction, of course; also the set-existence axioms, especially Replacement. I won’t comb out all the hair, but essentially: if we have an “obedient” gβ for all β<α, Replacement plus Union plus the uniqueness allow you to combine all these gβ’s into one fα with domain α. Then you use ρ to assign a value to α, extending fα to gα.

The conclusion, even leavened with vernacular, is more than I’m prepared to write. But you get a theorem—technically, a different theorem for each formula ρ. (So it’s a theorem schema.) It has the form ∀t̄(A→B), where A is the above condition and B asserts the existence of g.

We don’t need to prove that the condition holds; rather, the theorem says that whenever it holds (perhaps only for some values of the parameters), then the conclusion holds. The proof does not require Choice: the ordinals are already well-ordered. As before, we also have a formula G(α,t̄,s); if the condition holds for some c̄, then G(α,c̄,s) assigns a unique s to every ordinal—a “function-like” thingy with “domain” Ω.

As a bonus, we can rehabilitate Cantor’s “proof” of the Well-Ordering Theorem. Let M be a set, and let c be a choice function for M. Our rule ρ says that the s assigned to α is c(M∖{f(β):β<α}), provided that the difference set is not empty. Otherwise we assign M to α. (We choose M simply because it is not an element of M.) We define G by transfinite recursion; informally, we can write Ĝ(α) for the value assigned to α. (I have left the parameters M and c implicit.) If we never have Ĝ(α)=M, that means that G defines an injection of Ω into M, and inverting this gives us a map from M onto Ω. Replacement then says that Ω is a set. The Largest Ordinal paradox however proves this is not so. (The formal statement: ¬∃S∀x(Ω(x)↔x∈S).) Therefore for some least α we have Ĝ(α)=M; then β↦Ĝ(β) maps α 1–1 onto M, and we’ve well-ordered M.

I’ve made a full meal of this argument. Typically, one would condense this discussion into a couple of sentences: “Given a choice function c, use it to define (by transfinite recursion) an injective function from an initial segment of Ω into M. As this function cannot be defined for all ordinals, it must map the ordinals less than some α onto M, and so M is well-orderable.”
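As a toy model of that condensed recipe, here is a Python sketch (my own names; the “ordinals” are just 0, 1, 2, … and M is finite, so the recursion halts on its own rather than by the Largest Ordinal paradox):

```python
def well_order(M, choose):
    """Enumerate the finite set M by the recursion
    G(a) = choose(M minus {G(b) : b < a}), stopping when M is exhausted.
    `choose` plays the role of the choice function c."""
    remaining = set(M)
    order = []                       # order[a] is the element assigned to ordinal a
    while remaining:
        x = choose(remaining)
        order.append(x)
        remaining.discard(x)
    return order

M = {"pear", "apple", "quince", "fig"}
print(well_order(M, min))            # ['apple', 'fig', 'pear', 'quince']
```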

While this proof resembles Cantor’s demonstration, it borrows essential features from Zermelo’s. Gone are the successive choices with their psychological aspect, replaced with a choice function. And the “union of partial well-orderings” in Zermelo’s 1904 proof lies buried inside the transfinite recursion.

Prev TOC Next


Filed under History, Set Theory

Set Theory Jottings 16. Axioms of ZFC

Prev TOC Next

The Axioms of ZFC

The language of ZF (ℒ(ZF)) consists of basic first-order syntax, with a single binary predicate symbol ∈. Here is a list of the axioms, with a tag-line (an imprecise description) for each.

Extensionality:
Every set is determined by its elements.
Foundation:
All sets are built up in levels by starting with the empty set.
Pairs:
There is a set whose elements are any two given sets. (Or any one set.)
Union:
For every set, there is a set consisting of the union of its elements.
Power Set:
For every set, there is a set consisting of all its subsets.
Infinity:
There is an infinite set.
Replacement:
Given a set and a rule for replacing its elements, there is a set consisting of all these replacements.
Choice:
Given a set of pairwise disjoint nonempty sets, there is a set containing exactly one element from each of them.

Two more axioms:

Null Set:
There is a set with no elements.
Separation:
Given a set and a property, there is a set consisting of all the elements satisfying the property.

These are redundant. The version of Replacement we adopt implies Separation; Null Set follows from Pairs plus Separation. But they are historically important, and help understand ZFC.

Now for the formal, or at least more formal, statements. (I discussed “vernacular” in post 15. Recall that t̄ stands for a list (t1,…,tn).)

Extensionality:

∀z(z∈x ↔ z∈y) → x=y

Using the defined symbol ⊆ (x⊆y is defined as ∀z(z∈x → z∈y)), we could also write this as “x⊆y ∧ y⊆x → x=y’’.

Foundation:

∀x(x≠∅ → (∃y∈x)(y∩x=∅))

Of course, ∩ and ∅ are defined terms, and (∃y∈x) is vernacular. Here is Foundation without using them:

∀x(∃v(v∈x) → ∃y(y∈x ∧ ∀u(¬(u∈x ∧ u∈y))))

People sometimes call this Regularity. Foundation says every nonempty x has an ∈-minimal element, that is, a y∈x none of whose elements belong to x. If that were false, we’d have an infinite descending chain x∋y1∋y2∋…. (Possibly it could end in a loop, i.e., ym=yn for some m<n.) But such an x is not “built up from ∅’’.

Foundation has an equivalent statement: There are no infinite descending ∈-chains. To prove this equivalence, you need Choice. The version above (due to von Neumann) avoids this issue, and is simpler to express.

Pairs:

∀x∀y∃z(z={x,y})

or without the defined term {x,y}:

∀x∀y∃z∀u(u∈z ↔ (u=x ∨ u=y))

Since x=y is not excluded, this also covers singletons.

Null Set:

∃x∀y¬(y∈x)

Union:

∀x∃u(u=⋃y∈x y)

or without the defined term ⋃

∀x∃u∀z(z∈u ↔ ∃y(z∈y ∧ y∈x))

In other words, the elements of the union are the elements of the elements of the original set.

Power Set:

∀x∃p∀s(s∈p ↔ s⊆x)

Or in other words, p={s : s⊆x}, just the kind of set-builder definition that was uncritically accepted before the paradoxes. The power set of x is denoted 𝒫(x). Without the defined term s⊆x:

∀x∃p∀s(s∈p ↔ ∀y(y∈s → y∈x))

Infinity:

∃w(∅∈w ∧ ∀x(x∈w → x∪{x}∈w))

We mentioned earlier the von Neumann representation for natural numbers: 0=∅, 1={0}, 2={0,1}, etc. This axiom insures the existence of a set containing all the von Neumann natural numbers. I leave the vernacular-free expression of Infinity as an exercise for the fastidious.
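A quick Python sketch of the von Neumann representation and the successor operation x ↦ x∪{x} that Infinity refers to (hereditarily finite sets coded as frozensets; the helper names are mine):

```python
def succ(x):
    """von Neumann successor: x U {x}."""
    return x | frozenset([x])

zero = frozenset()                 # 0 is the empty set
nats = [zero]
for _ in range(4):
    nats.append(succ(nats[-1]))    # 1 = {0}, 2 = {0,1}, 3 = {0,1,2}, ...

# Each n is literally the set of all smaller naturals:
for n, x in enumerate(nats):
    assert x == frozenset(nats[:n])
print([len(x) for x in nats])      # [0, 1, 2, 3, 4]
```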

Separation (sometimes called Subset) isn’t a single axiom, but an axiom schema. We have an instance of the schema for each formula φ(y,t̄). For any set x and any choice of values c̄ for the parameters t̄, the instance says there is a set of all elements y of x satisfying φ(y,c̄).

Separation:

∀x∃s∀y(y∈s ↔ y∈x ∧ φ(y,t̄))

Separation justifies the use of the set-builder notation s={y∈x : φ(y,t̄)}.

Replacement (sometimes called Substitution) is also an axiom schema, with an instance for each formula φ(y,z,t̄). Suppose for some particular choice t̄=c̄, the formula φ(y,z,c̄) defines z as a partial function φ° of y: Given y, there is at most one z making φ(y,z,c̄) true. Then the range of φ° on any set x is a set.
Replacement:

∀x(∀y ∃≤1z φ(y,z,t̄) → ∃w(w={φ°(y):y∈x}))

But the set-builder term is not justifiable by Separation. Eliminating it gives:

∀x[∀y ∃≤1z φ(y,z,t̄)
→∃w∀z(z∈w ↔ (∃y∈x)φ(y,z,t̄))]

And we can eliminate ∃≤1z and ∃y∈x. We replace ∃≤1z with this:

∀u∀v(φ(y,u,t̄)∧φ(y,v,t̄)→u=v)

and (∃y∈x)φ(y,z,t̄) with

∃y(y∈x ∧ φ(y,z,t̄))

This version of Replacement demands only that φ° be a partial function, not necessarily total. You can derive Separation from it. If φ(y,t̄) is the separating property, then let ψ(y,z,t̄) (for a particular choice t̄=c̄) define the partial function ψ°(y)=y when φ(y,c̄) is true, and undefined when φ(y,c̄) is false. Then applying ψ° to x gives the subset we want. Some authors use the “total” version of Replacement, and include Separation in their list of axioms.
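The derivation just given has an exact analogue in everyday programming. A hedged Python sketch (the helper names are my own): Replacement with a partial function is "map over a set where defined", Separation is "filter a set", and filtering is mapping the partial identity.

```python
def replacement(x, partial_fn):
    """Replacement with a *partial* function: collect partial_fn(y) for those
    y in x where it is defined (None stands in for 'undefined' in this sketch)."""
    return {partial_fn(y) for y in x if partial_fn(y) is not None}

def separation(x, phi):
    """Separation derived from Replacement: apply the partial identity
    function that is defined exactly on the y satisfying phi."""
    psi = lambda y: y if phi(y) else None
    return replacement(x, psi)

x = set(range(10))
print(separation(x, lambda y: y % 2 == 0))   # {0, 2, 4, 6, 8}
```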

Choice (AC) was the scene for major philosophical combat early in the 20th century, as we’ve seen. It’s easier to express than Replacement.

Choice:

∀x[(∀y∈x)(y≠∅) ∧ (∀y∈x)(∀z∈x)(y≠z → y∩z=∅)
→∃c(∀y∈x)(#(c∩y)=1)]

Eliminating some vernacular should be routine by now: y≠∅, y∩z=∅. As for #(c∩y)=1, this becomes

∃u(u∈c∩y ∧ ∀v(v∈c∩y → v=u))

where of course u∈c∩y is formally u∈c ∧ u∈y, likewise for v.

Another version of Choice does not require disjointness. It’s easy to express formally once you have the machinery of ordered pairs, and thus functions as sets of ordered pairs. It says: for every set x whose elements are all nonempty sets, there is a so-called choice function c such that c(y)∈y for all y∈x.

Unlike every other “set existence” axiom of ZFC, we can’t define c with set-builder notation, or indeed give any other explicit description of the choice function. We’ve seen how this led to much distrust of AC. In 1938, Gödel showed if ZF is consistent, then ZFC is too. That calmed things down somewhat. I’ve cited Moore’s book before on the history of the Axiom of Choice.

Prev TOC Next


Filed under History, Set Theory

Set Theory Jottings 15. From Zermelo to ZFC: Formal Logic

Prev TOC Next

The Role of Formal Logic

In Zermelo’s original system, the Separation Axiom refers to a “definite property”. The Replacement Axiom refers to a “rule”. In 1922, Skolem proposed interpreting “definite” as “first-order definable”. So properties and rules are just formulas in the language of ZF, ℒ(ZF). With this clarification, ZFC assumes its modern form as a first-order theory.

ZF boasts a spartan vocabulary: just ∈ plus the basic symbols of first-order logic. Writing things out formally, with no abbreviations or short-cuts, rapidly snows us under with unreadable expressions. We handle this (like everyone else) with semi-formal expressions; I like to call these “vernacular”. Copious hand-waving suggests how a masochist could write these out in the formal language ℒ(ZF).

Example: here’s how we say p=〈x,y〉={{x},{x,y}}, partially expanded:

a,b(p={a,b}∧a={x}∧b={x,y})

“∃a,b’’ is vernacular for “∃a∃b’’. Next we expand p={a,b} into

∀u(u∈p ↔ (u=a ∨ u=b))

and likewise for a={x} and b={x,y}.

A relation r is a set of ordered pairs, so more formally

(∀p∈r)∃a,b(p=(a,b))

which still has some vernacular. (∀p∈r)… more formally is ∀p(p∈r→…). Likewise, (∃x∈z)… in formal dress is ∃x(x∈z∧…).

To say f is a function with domain D, we start with the vernacular

f is a relation
∧ (∀(a,b)∈f) a∈D
∧ (∀a∈D) ∃!b((a,b)∈f)

“∀(a,b)∈f’’ expands to “(∀p∈f)∀a,b(p=(a,b)→…)’’. ∃!bφ(b), “there exists a unique b satisfying φ’’, expands to ∃bφ(b)∧∀b,c(φ(b)∧φ(c)→b=c).

The vernacular f(x)=y becomes 〈x,y〉∈f. These should be enough to give you the flavor.

Often we have lists of variables, like x1,…,xn. We write x̄ to reduce clutter; ∀x̄ and ∃x̄ have the obvious meanings.

When we get to the formal versions of Separation and Replacement, we’ll see how “property” and “rule” are made precise.

Coda

We’ve seen the informal use of “class” in ZF. This proved so convenient that NBG, a theory developed in succession by von Neumann, Bernays, and Gödel, gave a home to it. It turns out that NBG is a so-called conservative extension of ZFC: any formula of NBG that “talks only about sets” is provable in NBG iff it is provable in ZFC.

In NBG, we still have only the symbol ∈ plus the basic logical symbols. However, certain members of the “universe” have the left-hand side of ∈ barred to them. If x∈y, then we say x is a set; anything that’s not a set is a proper class. So proper classes can have sets as elements, but cannot themselves be elements. The term class encompasses both sets and proper classes; in NBG, the variables range over classes.

Here’s how NBG skirts around Russell’s paradox. We can still write the formal expression R={x:x∉x}. This defines the class R, which is the class of all sets that are not elements of themselves. Is R∈R? No, because if it were, it would have to be a set that was not an element of itself. OK, if R∉R, doesn’t that mean that R satisfies the condition to be an element of R? No, not if R is a proper class—R contains only sets, no proper classes allowed!

Zermelo’s original system did not include Replacement and Foundation, although it did include Choice. Somewhat ahistorically, people use Z to refer to ZF minus Replacement, but including Separation. ZC is Z plus Choice.

ZF is a theory of “pure sets”. ZFA is “ZF with atoms”. An atom is an object that is not the null set but has no elements. The axioms of ZF can be modified to allow for this. Before the invention of forcing, Mostowski used ZFA to investigate theories without Choice.

Prev TOC Next


Filed under History, Set Theory

Set Theory Jottings 14. From Zermelo to ZFC: Replacement and Foundation

Prev TOC Next

Replacement and Foundation

Thirteen years after Zermelo published his axioms, Fraenkel pointed out that they weren’t quite strong enough. For example, you couldn’t prove the existence of the set ⋃n∈ω 𝒫ⁿ(0).

Fraenkel suggested the Axiom of Replacement (aka Substitution): If we have a way of “associating” with every element m of a set M a set Xm, then {Xm:m∈M} is a set. In other words, if you go through a set, replacing each element with another set, you get a set. The intuition: {Xm:m∈M} is “no bigger” than M, and so ought to be a set also.

Combined with the Axiom of Unions, this is the gateway to big sets. Replacement gives us {𝒫ⁿ(0):n∈ω}, and then Union gives us Fraenkel’s set.

Zermelo agreed with Fraenkel on this addition to the axioms. Skolem also pointed out the need for it.

Using transfinite recursion, we now define the Cumulative Hierarchy:

V0 = ∅
Vα+1 = Vα∪𝒫(Vα)
Vλ = ⋃α<λ Vα

Fraenkel’s set is Vω.

(Two variations: it turns out that “Vα∪’’ isn’t needed in the second line; you get the same sets with “Vα+1=𝒫(Vα)’’. Some authors modify the definition so that their Vλ is the same as our Vλ+1.)

Power Set plus Replacement plus Union shows that all the Vα are sets. The rank of a set is where it first appears in the cumulative hierarchy, or more precisely: rk(X) is the least α with X∈Vα+1. The “+1’’ is so that α has rank α.
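For the finite levels all of this can be computed outright. A Python sketch (frozensets again, names my own), with rank computed by the sup formula rk(X) = sup{rk(y)+1 : y∈X} that reappears later in this post:

```python
from itertools import combinations

def powerset(s):
    elems = list(s)
    return frozenset(frozenset(c) for r in range(len(elems) + 1)
                     for c in combinations(elems, r))

def V(n):
    """V_0 = empty set, V_{n+1} = P(V_n).  (The 'V_n union' part of the
    definition above is redundant, as the post notes.)"""
    level = frozenset()
    for _ in range(n):
        level = powerset(level)
    return level

def rank(x):
    """rk(x) = sup of rk(y)+1 over the elements y of x (0 for the empty set)."""
    return max((rank(y) + 1 for y in x), default=0)

print([len(V(n)) for n in range(5)])   # [0, 1, 2, 4, 16]
print(rank(V(3)))                      # 3 -- V_n has rank n
```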

How do we know that every set appears somewhere in the cumulative hierarchy? That brings us to our second additional axiom: Foundation.

Skolem noticed the need to exclude “pathological” sets, such as A={A} or infinite descending chains A1∋A2∋…. Such sets will never appear in any of the Vα’s. The Foundation Axiom does the trick.

von Neumann gave the most elegant version of Foundation: Every nonempty set contains an element disjoint from it. That is, for any nonempty set A, there is an a∈A with a∩A=∅. This a is minimal in the ∈ ordering of the elements of A. That’s another way to phrase it: Every nonempty set A has a minimal element in its ∈-ordering.

Foundation is equivalent to every set having a rank. Writing this the classy way, V=⋃α∈ΩVα. The proof of this takes a few steps; I’ll save it for the end of this post.

For so-called “ordinary mathematics”, you usually don’t need to go any higher than rank ω·2. Example: the complex Hilbert space L²(ℝ³), beloved by analysts and physicists alike. We build up to this by constructing all the natural numbers, then the integers, rational numbers, and reals, ordered pairs and ordered triples of reals, then functions from the ordered triples to ordered pairs, and finally equivalence classes of these functions. (Recall that two L² functions are identified if they agree except for a set of measure zero; hence, the actual elements of L²(ℝ³) are equivalence classes.) We have all the natural numbers by the time we get to Vω. If x,y∈Vα, then the ordered pair 〈x,y〉∈Vα+2. So if A⊆Vα, then any subset of A×A is in Vα+3. Integers are often defined as sets of ordered pairs of natural numbers, rational numbers as sets of ordered pairs of integers, and real numbers as sets of rational numbers (Dedekind cuts). If I counted right, that puts all the elements of ℝ in Vω+7 and any subset of ℝ in Vω+8. The Hilbert space in question should belong to Vω+15, again if I haven’t miscounted.

Although sets of high rank aren’t “needed” by most mathematicians, it would be quite strange to impose a “rank ceiling” limitation on the power set axiom. Just maybe if Vω+1 were adequate—but it isn’t.

Without Choice, we cannot use von Neumann’s definition of card(M) as the least ordinal equivalent to M. An alternative is Scott’s trick: card(M) is the set of all sets of least rank equivalent to M. In other words, S∈card(M) iff S≡M and for all T≡M, rk(T)≥rk(S). This lacks the simplicity of von Neumann’s definition, but it’s the best we’ve got. Scott’s trick can be used for other purposes, e.g., to define the order type of a simply-ordered set.

Okay, let’s look at the equivalence of Foundation with V=⋃α∈ΩVα. We’ll need some machinery: transitive closures, ∈-induction, and the supremum of a set of ordinals.

Recall that a set A is transitive if b∈a∈A implies b∈A, i.e., a∈A implies a⊆A. Every set A is contained in a transitive set called its transitive closure, tc(A). First we set f(A)=⋃a∈A a. Then set tc(A)=⋃n∈ω fⁿ(A), where f⁰(A)=A. The transitive closure is the smallest transitive set containing A: A⊆tc(A), and if A⊆B with B transitive, then tc(A)⊆B.
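This definition transcribes directly into code for hereditarily finite sets. A Python sketch (my own names, sets coded as nested frozensets):

```python
def union_of_elements(A):
    """f(A): the union of the elements of A."""
    out = set()
    for a in A:
        out |= a
    return frozenset(out)

def transitive_closure(A):
    """tc(A) = f^0(A) U f^1(A) U f^2(A) U ... , where f^0(A) = A.
    For hereditarily finite sets the layers stop contributing after
    finitely many steps."""
    tc = frozenset(A)
    layer = frozenset(A)
    while True:
        layer = union_of_elements(layer)
        if layer <= tc:        # nothing new, so no later layer adds anything
            return tc
        tc |= layer

zero = frozenset()
one = frozenset([zero])
two = frozenset([zero, one])                        # von Neumann 2 = {0, 1}
print(len(transitive_closure(frozenset([two]))))    # 3: tc({2}) = {0, 1, 2}
```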

Suppose φ is a property where the following implication holds: If every element of A satisfies φ, then so does A. Then ∈-induction says that φ holds for all sets. Foundation justifies this. Suppose on the contrary that A is a non-φ set. Then some element of A is non-φ. Consider the subset S of tc(A) consisting of all non-φ elements. This is nonempty by hypothesis, so let sS be ∈-minimal in S. All the elements of s belong to tc(A) because tc(A) is transitive, so therefore all of s’s elements must satisfy φ (otherwise s wouldn’t be ∈-minimal). But then s must also satisfy φ, contradiction.

Lemma: If S is a set of ordinals, then ⋃α∈S α is an ordinal. This is a set by the Union Axiom. That ⋃S is simply-ordered under ∈ is easy as pie. Foundation makes the proof of the well-ordering property trivial, though it’s not hard to prove without it. For ordinals, α⊆β is equivalent to α≤β (we’ll prove this in a later post), so ⋃S is an upper bound for all the elements of S. As usual, we call the least upper bound of S its supremum, denoted sup S.

Now suppose X is a set, and we associate an ordinal αx with each x∈X. Replacement says that {αx:x∈X} is a set; we denote its supremum by supx∈X αx.

Time to prove that every set has a rank. We use ∈-induction. Suppose all the elements of A have ranks. Set α=supa∈A(rk(a)+1). As each a∈A belongs to Vrk(a)+1 and Vβ⊆Vα whenever β≤α, all the elements of A belong to Vα. Therefore A belongs to Vα+1.

The reverse implication is straightforward, given this fact about rank: a∈b implies rk(a)<rk(b). (Proof: transfinite induction.) It follows that an element of A of lowest rank is ∈-minimal in A.

Prev TOC Next


Filed under History, Set Theory

Set Theory Jottings 13. From Zermelo to ZFC: Everything’s a Set!

Prev TOC Next

Zermelo’s 1908 system differs from ZFC in three respects. (1) ZFC is a “pure” set theory, without atoms. (2) ZFC includes two additional axioms, Replacement and Foundation. (3) ZFC is a first-order theory. Zermelo’s axioms are not formalized in this way. In this and the next two posts I will go into details.

Everything’s a Set!

In Zermelo’s original conception, we begin with a foundation of so-called atoms (aka urelements), things like the natural numbers. We then build up the universe by forming sets of atoms, sets of sets, etc. Later people realized that with various “coding tricks”, you don’t need any atoms. We’ve already encountered the von Neumann finite ordinals: 0=∅, 1={0}, 2={0,1}, etc. So these can serve for the natural numbers.

Zermelo had no ordered pairs. Kuratowski came up with this trick: define the ordered pair 〈x,y〉 to be the set {{x},{x,y}}. We can extract x and y from p=〈x,y〉={a,b} by noting that x is the only element of a∩b, and then y is the other element of a∪b, if a∪b has two elements. If a∪b has only one element, then of course y is that element. With Kuratowski’s ordered pairs on hand, we can deal with relations and functions without Zermelo’s awkward workarounds.
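The extraction recipe is mechanical enough to run. A Python sketch with frozensets (function names my own):

```python
def kpair(x, y):
    """Kuratowski pair <x, y> = {{x}, {x, y}}."""
    return frozenset([frozenset([x]), frozenset([x, y])])

def unpair(p):
    """Recover (x, y) from a Kuratowski pair, following the recipe above."""
    a, b = (list(p) + list(p))[:2]       # if p is a singleton, take a = b
    inter, union = a & b, a | b
    (x,) = inter                         # x is the only element of the intersection
    y = next(iter(union - inter), x)     # the "other" element of the union, else x
    return x, y

assert unpair(kpair(3, 7)) == (3, 7)
assert unpair(kpair(5, 5)) == (5, 5)
```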

von Neumann defined ordinals by two properties. First:

A set X is transitive if b∈a∈X implies b∈X. Equivalently, if a∈X implies a⊆X.

Then:

An ordinal is a transitive set that is well-ordered under ∈.

We need transitivity, otherwise (for example) {5} would compete with {0} for the title of “1”.

I won’t go through the development of the ordinals in detail, but here are the first few steps. The predecessors of an ordinal are its elements: β<α iff β∈α. The successor of an ordinal α (denoted α+ in this post) is α∪{α}. If α is neither 0 nor a successor, it is a limit ordinal. By standard convention, λ always denotes a limit ordinal.

A finite ordinal (aka natural number) is defined as a non-limit ordinal (i.e., either a successor or 0), all of whose elements are also non-limit ordinals. If an ordinal is finite, so is its successor. The Axiom of Infinity says there is a set, say w, closed under α↦α+ and containing 0. Separation now says there is set, denoted ω, of all the finite ordinals in w. One proves that ω is the set of all finite ordinals.

Ω denotes the class of all ordinals. (Some authors use “On”.) Classes are not a formal part of ZFC, but we permit them as a “manner of speaking”. If φ(x) is a property of sets, we might talk about the class φ of all sets x satisfying φ(x), and write x∈φ to mean φ(x). For example, α∈Ω is just another way to say that α satisfies von Neumann’s two properties. We write V for the class of all sets, corresponding to the property x=x. Any set x is a class, corresponding to the property y∈x. A non-set class is called a proper class. After a fashion, proper classes are the descendants of Cantor’s “absolutely infinite multitudes”.

It is almost trivial to prove transfinite induction on Ω. Say φ(α) is a property of ordinals, and say we have this implication: If φ holds for all β<α, then it holds for α. Then φ holds for all ordinals. (Proof: If it doesn’t hold for α, then it doesn’t hold for some β<α. Since α is well-ordered, there is a least β<α for which it doesn’t hold. But then it holds for all γ<β, and hence for β.)

While Ω isn’t a well-ordered set, it shares enough properties to call it (informally) a well-ordered class. Namely: the ordinals are simply ordered (we will prove this in a later post), and any property φ of ordinals that holds for at least one ordinal has a least ordinal satisfying it.

Definitions by transfinite recursion take a bit more fussing. I’ll leave the details to the textbooks, but basically: we have a “rule” determining a value sα whenever we are given all the values sβ with β<α. How are we given all these values? By a function fα with domain α, so fα(β)=sβ for all β<α. The “rule” is a predicate with two free variables, say ρ(x,y); we want ρ(fα,sα) to hold. The rule ρ should be “function-like”, i.e., for every x there is a unique y such that ρ(x,y) holds. You drop the function fα into the input hopper of ρ, and out pops a value sα. (If you’re wondering how s0 gets assigned, for α=0 the function is the empty function f0=∅.)

Using transfinite induction, one can show that for every α∈Ω there is a unique function gα with domain α+=α∪{α} that “obeys the recursion”. That is, for every β≤α, if fβ is the restriction of gα to β, then ρ(fβ,gα(β)). We say that we’ve defined gα by transfinite recursion up to and including the ordinal α.

Thus we have a function gα, with domain α+, for every ordinal α. For any β<α, gβ is the restriction of gα to β+. We can’t formally define a function with domain Ω, but we do have a “function-like” predicate γ(x,y) with “domain” Ω: γ(α,sα) holds if and only if gα(α)=sα.

Now that we have pairs, functions, and ordinals, we can populate our universe with all the usual flora and fauna. ℕ is the same as ω. The standard rigamarole then constructs integers, rationals, reals, Hilbert spaces,…

What about cardinalities? Frege and Russell both proposed “The cardinality of M is the set of all sets equivalent to M.” But a more elaborate version of Russell’s paradox kills this approach. Informally, we can talk about the class of all sets equivalent to M. We need to pick a representative from each such class.

von Neumann proposed: the first ordinal in the class. ℵ0 is just ω. ℵ1 is the first uncountable ordinal. 2^ℵ0 is whichever ordinal is the first one with cardinality c. The cardinality of M is thus the unique smallest ordinal μ such that M is equivalent to μ.

All this assumes that for any M, there is an ordinal equivalent to it. That follows from the well-ordering theorem, which uses the Axiom of Choice. If AC isn’t around, then you need a different idea. We’ll see one later.

Prev TOC Next


Filed under History, Set Theory

Set Theory Jottings 12. Zermelo on “definiteness”

Prev TOC Next

In the last post, I mentioned Zermelo’s 1929 paper “On the concept of definiteness in axiomatics”. By this time, people had suggested replacing “definite” with “definable in first-order logic”. Zermelo did not agree with this.

Three points, I think, explain Zermelo’s views.

First, Zermelo never really “got” formal logic. He didn’t grasp the distinction between the meta-theory and the object theory, nor that between syntax and semantics. His correspondence with Gödel shows this. Another example: in the 1929 paper, he complains that the inductive definition of a first-order formula “depends on the concept of finite number whose clarification, after all, is supposed to be one of set theory’s principal tasks.” The inductions of course belong to the meta-theory.

Second, Zermelo approached axiomatization in the spirit of Euclid rather than with the philosophy of formalism. The axioms assert mathematical truths. They are not the arbitrary rules of a game. Hilbert’s famous “tables, chairs, and beer mugs” remark expresses the need to rid the development of any reliance on visual intuition. In much the same way, Zermelo stressed the purely objective character of the Axiom of Choice.

Third, Zermelo found Skolem’s countable model of ZF unacceptable. Recall the resolution of Skolem’s paradox: the power set of ω is not the “true” power set. In a 1930 paper, Zermelo gave his final version of the axioms of set theory. In a footnote to the Separation Axiom, he writes:

Like the replacement function in [the Replacement Axiom], the propositional function 𝔣(x) can be completely arbitrary here, and all consequences of restricting it to a particular class of functions cease to apply from the present point of view. I shall consider elsewhere more thoroughly “the question of definiteness” in connection with my last contribution to this journal and with the critical “remarks” by Mr. Th. Skolem.

The implicit criticism: if you hobble the Separation Axiom by allowing only first-order definable properties, no wonder you get a countable power set!

The 1929 paper offered the following definition of “definite property”. Zermelo writes Dp to say that p is a definite property. Then (changing notation and rewording somewhat, except for the quoted parts):

  I. “First, all fundamental relations are definite.”
  II. “Definiteness is passed on to composite assertions as follows”:
    1. If Dp, then D(¬p).
    2. If Dp and Dq, then D(p∧q) and D(p∨q).
    3. If Df(x,y,z,…) “for all (permissible) combinations of values”, then D((∀x,y,z,…)f(x,y,z,…)) and D((∃x,y,z,…)f(x,y,z,…)).
    4. If DF(f) “for all definite functors f’’, then D(∀f F(f)) and D(∃f F(f)): “Definiteness is passed on to the quantifications.”
  III. If P is the system of all definite properties, then “it has no proper subsystem P1’’ that contains all the fundamental relations and is closed under the compositions listed above.

In clause (II.4), the f ranges over properties (or “propositional functions”), so we have a second-order quantification. Furthermore, we have an implicit circularity: the scope of ∀f is restricted to definite properties, just what we’re in the midst of defining. But clause (III) is perhaps even worse: without a robust set theory, how are we to interpret the quantification over all subsystems of P? Skolem made both these points in his reply.

Zermelo’s 1930 paper “On boundary numbers and domains of sets: New investigations in the foundations of set theory”, gave (as I mentioned above) his final version of the axioms. This includes both Replacement and Foundation, but curiously not Choice—as an axiom. Zermelo writes:

We have not explicitly formulated the “axiom of choice” here because it differs in character from the other axioms […] However, we use it as a general logical principle upon which our entire investigation is based; in particular, it is on the basis of this principle that we shall assume in the following that every set is capable of being well-ordered.

Prev TOC Next


Filed under History, Logic, Set Theory

First-Order Categorical Logic 14

Prev TOC Next

JB: So, let’s think about how we can prove this generalization of Gödel’s completeness theorem. First, remember that a hyperdoctrine B is consistent iff B(0) has at least two elements, or in other words, ⊤ ≠ ⊥ in this boolean algebra. Second, let’s say a hyperdoctrine C is set-based if every C(n) is the power set of Vⁿ for some fixed set V. We call V the universe. Third, let’s say a morphism of hyperdoctrines, say F: B→C, is a natural transformation whose components F(n): B(n) → C(n) are Boolean algebra homomorphisms obeying the Beck–Chevalley condition and maybe the Frobenius condition. (We’re a bit fuzzy about this and we’ll probably have to sharpen it up.)

Then let’s try to prove this: a hyperdoctrine is consistent iff it has a morphism to some set-based hyperdoctrine.

Here’s my rough idea of one way to try to prove it: use Stone’s representation theorem for boolean algebras! This says any boolean algebra is isomorphic to the boolean algebra of clopen subsets of some Stone space. A Stone space is a topological space that’s profinite, i.e. a limit of finite spaces with their discrete topology. A subset of a topological space is clopen if it’s closed and open.

The clopen subsets of a topological space always form a boolean algebra under the usual intersection, union and complement. So, the cool part of Stone’s representation theorem is that every boolean algebra shows up this way—up to isomorphism, that is—from a very special class of topological spaces: the ones that are almost finite sets with their discrete topology. A good example of such a space is the Cantor set.

I hope you see how this might help us. We’re trying to show every consistent hyperdoctrine has a morphism to some set-based hyperdoctrine. (I think the converse is easy.) But Stone is telling us that every boolean algebra is a boolean algebra of some subsets of a set!

MW: Okay, that jibes with some of my own vague ideas. Though I hadn’t got as far as Stone’s Theorem. (I’m familiar with it, of course.)

Two ideas make Henkin’s proof work. One is to find an epimorphism of the boolean algebra B(0) to the two-element boolean algebra {⊤,⊥} (or equivalently, an ultrafilter on B(0)). In other words, we’re deciding which sentences we want to be true; this is then used to define the model. The points of the Stone space are these epimorphisms, so we’re further along on the same track.
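For a finite toy boolean algebra this correspondence can be checked by brute force. A hedged Python sketch (the algebra is the power set of a three-element set, and every name here is my own): the homomorphisms onto {⊤,⊥} turn out to be exactly the maps "is the point x a member?", one per point.

```python
from itertools import combinations, product

X = {0, 1, 2}
algebra = [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_hom(h):
    """Is h: algebra -> {True, False} a boolean algebra homomorphism?"""
    if not h[frozenset(X)] or h[frozenset()]:
        return False
    return all(h[a & b] == (h[a] and h[b]) and
               h[a | b] == (h[a] or h[b]) and
               h[frozenset(X) - a] == (not h[a])
               for a in algebra for b in algebra)

# Enumerate all candidate maps and keep the homomorphisms.
homs = [h for h in (dict(zip(algebra, bits))
                    for bits in product([False, True], repeat=len(algebra)))
        if is_hom(h)]

# Each homomorphism is "membership of one fixed point", and there is one per point.
assert len(homs) == len(X)
for h in homs:
    (x,) = [pt for pt in X if h[frozenset([pt])]]
    assert all(h[a] == (x in a) for a in algebra)
print("the Stone space of P({0,1,2}) has exactly", len(homs), "points")
```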

The other idea—specific to Henkin—insures that quantifiers are respected. Syntax now comes to the fore: We add a bunch of constants, called witnesses, and witnessing axioms, like so (in a simple case):

x φ(x)→φ(cφ)

where cφ witnesses the truth of the existential sentence ∃x φ(x).

Stone’s Theorem gives us a Stone space Sn for each B(n). That’s nice. But we don’t immediately have Sn=Vⁿ, with the same V for all n. (I’m not even sure that we want exactly that.)

The Henkin witnessing trick “ties together” (somehow!) B(n) and B(n+1). And the tie relates to the left adjoint of a morphism B(n)→B(n+1).

So the stew is thickening nicely, but it looks like we’ll have to adjust the seasonings quite a lot!

JB: Right, this proof will take real thought. I don’t know ahead of time how it will go. But I’m very optimistic.

Stone’s representation theorem says the category of boolean algebras and boolean algebra homomorphisms is contravariantly equivalent to the category of Stone spaces and continuous maps. Let’s call this equivalence

S: BoolAlg^op → Stone

S is a nice letter because the Stone space associated to a boolean algebra A should really be thought of as the ‘spectrum’ of that algebra: it’s the space of boolean algebra homomorphisms

f:A → {⊤,⊥}

Our hyperdoctrine B gives a Boolean algebra B(n) for each n, and this gives us a Stone space S(B(n)). Maybe we can abbreviate this as Sn, as you were implicitly doing, and hope no symmetric groups become important in this game. But for now, at least, I want to keep our eyes on that functor S.

Why? Because our hyperdoctrine doesn’t just give us a bunch of Stone spaces: it gives us a bunch of maps between them! For each function

f: mn

our hyperdoctrine gives us a boolean algebra homomorphism

B(f): B(m) → B(n).

Hitting this with the functor S, we get a continuous map going back the other way:

S(B(f)): S(B(n)) → S(B(m)).

And all this is functorial. That has to be very important somehow.

Next: somehow we want to use the Henkin trick to fatten up the Stone spaces S(B(n)) to sets of the form Vⁿ for a fixed universe V. There are a lot of tantalizing clues for how. First, we can actually think about the Henkin trick and its use of “witnesses”. Second, we can note that the Henkin trick is related to existential quantifiers—which in hyperdoctrines are related to left adjoints (in the category of posets) of the boolean algebra homomorphisms

B(f): B(m) → B(n).

This raises a natural and interesting question. If you have a mere poset map between boolean algebras, what if anything does it do to their spectra? I should at least figure out the answer for the left adjoints we’re confronting.

So, there’s a lot to chew on, but it’s very tasty.

MW: Model theorists call these Stone spaces the spaces of complete n-types. We ran into them several times in our conversation on non-standard models of arithmetic (in posts 1, 2, 9, 18, and 25).

Let me noodle around with what you just said for a nice easy example, the theory LO of linear orderings. Actually, let’s start with something even easier: DLO, dense linear orderings without endpoints. This enjoys quantifier elimination: any formula is equivalent to one without quantifiers.

So, B(0) is just {⊤,⊥}. That makes sense: since DLO is complete, any sentence is provably true or provably false.

How about B(1)? Also {⊤,⊥}! What can we say about x? Provably true things like x=x, and provably false things like x≠x. That’s it!

Look at it this way: all countable DLO’s are isomorphic, and there’s an automorphism taking any element of a countable DLO to any other element. So all countable DLOs look alike, and for the elements of one, “none of these things is not like the other…”

For B(2) we finally have a bit of variety: B(2)={[x<y],[x=y],[y<x]}. Plus disjunctions like [x<y ∨ y<x]. And ⊥. What about the map B(f):B(1)→B(2) with our old friend f:{x}→{x,y}? Obviously ⊤ goes to ⊤ and ⊥ goes to ⊥.

Finally with B(3) things start to pick up a little. It contains the six “permutation situations” [x<y<z], [x<z<y], …, [z<y<x], plus seven more where some things are equal (like [x=y<z] or [x=y=z]), plus all the disjunctions, plus ⊥. If f now is the injection {x,y}→{x,y,z}, then B(f) sends an assertion about x and y to the disjunction of all the xyz situations consistent with it. For example

[x<y] ↦ [z<x<y ∨ z=x<y ∨ x<z<y ∨ … ∨ x<y<z]

Okay, complete n-types. (We’re concerned here with complete pure n-types, that is, no names added to the language.) These give you the full scoop on the n elements (x1,…,xn). So ignore the disjunctions. For DLO, we have only principal n-types: a single formula tells the whole story. I’ll write 〈φ〉 for the n-type generated by the boolean element [φ]. So for DLO,

S(B(0)) = {〈⊤〉}
S(B(1)) = {〈⊤〉}
S(B(2)) = {〈x<y〉, 〈x=y〉, 〈y<x〉}
S(B(3)) = {〈x<y<z〉, 〈y<x<z〉,…, 〈x=y=z〉}

In other words, we can forget about the disjunctions, and an n-type must be consistent. Take a hike, ⊥!

In general, if h:X→Y is a boolean algebra homomorphism, then S(h):S(Y)→S(X) is just inverse images: S(h)(p)=h⁻¹(p). Here p, being an n-type, is a subset of Y. For example, let f:{x,y}→{x,y,z}. Above we saw that B(f):[x<y] ↦ [z<x<y ∨ … ∨ x<y<z]. All these disjuncts are sent to 〈x<y〉. (Or to be persnickety, the principal filters generated by the disjuncts are sent there.)

That makes sense. The 3-type p is “full info” on the triple (x,y,z), and S(B(f))(p) extracts all the info we can glean about the pair (x,y). That’s a 2-type!
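(A quick computational check of this bookkeeping; the encoding of an "order situation" as a tuple of levels is my own ad hoc choice, nothing canonical:)

```python
from itertools import product

def normalize(levels):
    """Canonical form of an order situation: relabel the 'heights' as
    0, 1, 2, ... preserving their order."""
    ranks = {v: i for i, v in enumerate(sorted(set(levels)))}
    return tuple(ranks[v] for v in levels)

def situations(n):
    """All order situations (weak orderings) on n variables."""
    return sorted(set(normalize(t) for t in product(range(n), repeat=n)))

def restrict(sit3):
    """S(B(f)) on principal types: forget the third variable z."""
    return normalize(sit3[:2])

def pretty(sit, names="xyz"):
    by_level = {}
    for name, lvl in zip(names, sit):
        by_level.setdefault(lvl, []).append(name)
    return "<".join("=".join(by_level[l]) for l in sorted(by_level))

print(len(situations(2)), len(situations(3)))   # 3 and 13: the 6 + 7 counted above

fiber = [s for s in situations(3) if restrict(s) == (0, 1)]
print([pretty(s) for s in fiber])
# ['x=z<y', 'x<y=z', 'x<y<z', 'x<z<y', 'z<x<y'] -- the disjuncts of B(f)([x<y])
```

The count 13 matches the six permutation situations plus the seven with equalities, and the fiber over 〈x<y〉 lists exactly the disjuncts in the displayed formula for B(f)([x<y]).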

Hmm, couple of ways I could go from here. On the one hand, I don’t yet discern any existential quantifiers lurking in the background. Also, what do we want for our “universe” or “domain” V? None of the B(n)’s, n=0,1,2,3, seem to be co-operating!

Or I could repeat the exercise for a less simple theory, namely LO. Now B(1), and even B(0), have some real meat on their bones! Quantifiers stick around.

I don’t have such a clear understanding of the n-types of LO. Maybe if I finish reading Joe Rosenstein’s marvelous book…

Oh, one more thought. The contravariant S functor gives us S(f) going in the other direction from f. But we need two morphisms going the other way: the left and the right adjoints.

JB: Yes, I’ve been talking about the left adjoint since that relates to existential quantifiers, and those seem to be the key to “Henkinization”. My vague idea is this—I’ll just sketch a special case. If our hyperdoctrine B has some unary predicate P∈B(1) that gives “true” in B(0) when we existentially quantify it, Henkin wants to expand the Stone space S(B(1)) by throwing in an element x such that P(x) = ⊤. (Remember, elements of B(n) are precisely continuous boolean-valued functions on the Stone space S(B(n)).)

And here, when I said “existentially quantify it”, I meant take an element of B(1) and hit it with the left adjoint of the boolean algebra homomorphism

B(f): B(0) → B(1)

coming from the unique function

f: 0 → 1

Does this make sense?

If so, of course we need to do this sort of thing in a way that involves not only B(0) and B(1), but also the boolean algebras B(n) for higher n.

To make progress on this idea, it’s very good that you found a nice example of a hyperdoctrine, namely the theory of dense linear orders, and started working out B(n) for various low n. I think we should focus on this example for a while, and think about trying to “Henkinize” it, throwing new elements into the Stone spaces S(B(n)), until we get a set-based hyperdoctrine C, i.e. one for which every C(n) is the power set of Vⁿ for some fixed set V. This will be a countably infinite set, right? It’s probably the underlying set of some Stone space. And it should be obtained in some systematic way from our original hyperdoctrine B.

You’ve nicely shown why B itself is not set-based.

MW: Sounds like a plan!

Prev TOC Next


Filed under Categories, Conversations, Logic

Set Theory Jottings 11. Zermelo to the Rescue! (Part 2)

Prev TOC Next

In 1908 Zermelo published his paper “Investigations in the foundations of set theory”. This contained the axiom system that eventually led to ZFC. Zermelo opens the paper with this rationale:

Set theory is that branch of mathematics whose task is to investigate mathematically the fundamental notions “number”, “order”, and “function” … At present, however, the very existence of this discipline seems to be threatened by certain contradictions, or “antinomies” [such as the Russell paradox]. …it no longer seems admissible today to assign to an arbitrary logically definable notion a “set”, or “class”, as its “extension”. Cantor’s original definition of a “set” as “a collection, gathered into a whole, of certain well-distinguished objects of our perception or our thought” therefore certainly requires some restriction … Under these circumstances there is at this point nothing left for us to do but to proceed in the opposite direction and, starting from “set theory” as it is historically given, to seek out the principles required for establishing the foundations of this mathematical discipline. In solving the problem we must, on the one hand, restrict these principles sufficiently to exclude all contradictions and, on the other, take them sufficiently wide to retain all that is valuable in this theory.1

Zermelo’s motivation is pragmatic, unlike the philosophical approach of Russell and Whitehead’s Principia Mathematica.

Axiomatization was “in the air” at this time, with people throwing out various suggestions. Moore (p.151) offers some examples. Julius König proposed two axioms: (1) There are mental processes satisfying the formal laws of logic. (2) The continuum ℝ, treated as the totality of all sequences of natural numbers, does not lead to a contradiction. Schoenflies opted for the Trichotomy of Cardinals, and wanted to hold on to the Principle of Comprehension. Cantor sent a letter to Hilbert with some principles that look a lot like Zermelo’s axioms, but this letter didn’t come to light until decades later.

Zermelo’s article stands out as the first published proposal with a full set of axioms, demonstrating that it could save some of Cantor’s Paradise, and recognizing that the Principle of Comprehension was kaput.

Zermelo eschews philosophy:

The further, more philosophical, question about the origin of these principles and the extent to which they are valid will not be discussed here. I have not yet even been able to prove rigorously that my axioms are “consistent”, though this is certainly very essential…

The paper, he says, develops the theory of equivalence in a manner “that avoids the formal use of cardinal numbers.” He promises a second part, dealing with well-ordering, but this never appeared.

After the introduction, Zermelo begins:

  1. Set theory is concerned with a “domain” 𝔅 of individuals, which we shall call simply “objects” and among which are the “sets”. …
  2. Certain “fundamental relations” of the form aεb obtain between the objects of the domain 𝔅. …An object b may be called a set if and—with a single exception (Axiom II)—only if it contains another object, a, as an element. [I will use the modern ∈ in place of Zermelo’s ε from now on.]
  3. [Definition of subset and disjoint]
  4. A question or assertion 𝔈 is said to be “definite” if the fundamental relations of the domain, by means of the axioms and the universally valid laws of logic, determine without arbitrariness whether it holds or not. Likewise a “propositional function” 𝔈(x), in which the variable term x ranges over all individuals of a class 𝔎, is said to be “definite” if it is definite for each single individual x of the class 𝔎. Thus the question whether a∈b or not is always definite, as is the question whether M⊆N or not.

Note that item (1) allows for so-called urelements or atoms—things like integers. ZFC is a so-called “pure” set theory, without atoms.

Next come seven axioms, interlarded with extensive discussion.

Extensionality:
“Every set is determined by its elements.” In other words, if M⊆N and N⊆M, then M=N.
Elementary Sets:
The null set exists. Given any elements a and b of the domain, the sets {a} and {a,b} exist.
Separation:
To quote Zermelo: “Whenever the propositional function 𝔈(x) is definite for all elements of a set M, M possesses a subset M𝔈 containing as elements precisely those elements x of M for which 𝔈(x) is true.”

Zermelo notes that “sets may never be independently defined by means of this axiom but must always be separated as subsets from sets already given”, and that this prevents the Russell paradox and the like. Indeed, the Russell paradox is turned into a theorem: for any set M there is a subset M0 such that M0∉M. He also notes that “definiteness” precludes some semantic paradoxes, e.g., Richard’s paradox (see post 5).

Zermelo shows that Separation implies the existence of set differences M∖M1 (denoted M−M1) and intersections M∩N (denoted [M,N]), and even ⋂X∈T X for a set of sets (which he denotes 𝔇T, for “Durchschnitt”).

Power Set:
For any set T, there is a set whose elements are precisely all of T’s subsets. He denotes the power set of T by 𝔘T (for “Untermengen”).
Union:
For any set T, there is a set whose elements are precisely the elements of the elements of T. In modern notation, ⋃X∈T X. Denoted 𝔖T, for “Summe”. He writes M+N for our M∪N.
Choice:
Given any set T of mutually disjoint nonempty sets, the union ⋃X∈T X contains a subset S such that S∩X is a singleton for each X∈T.

Zermelo adds, “We can also express this axiom by saying that it is always possible to choose a single element from each element M,N,R,… of T and to combine all the chosen elements, m,n,r,…, into a set” S.

The set of all S’s satisfying this condition (card(S∩X)=1 for all X∈T) Zermelo calls the product of the elements of T, denoted 𝔓T, or just MN for a pair of disjoint sets M and N.

Infinity:
There is a set Z containing the null set 0, and for each of its elements a, it also contains {a}.

This leads to the so-called “Zermelo finite ordinals”, 0, {0}, {{0}}, {{{0}}}, etc. Z contains all these, and using Separation, we can assume Z contains exactly these. The Zermelo finite ordinals have two drawbacks: (1) They don’t extend naturally into the infinite ordinals; (2) Each of them, except 0, contains exactly one element. The von Neumann ordinals removed both of these blemishes.
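A small Python sketch contrasting the two encodings (frozensets, my own names) makes the second drawback visible: every Zermelo numeral after 0 is a singleton, while the von Neumann numeral n has n elements.

```python
zero = frozenset()

def zermelo_succ(a):
    return frozenset([a])          # successor of a is {a}

def von_neumann_succ(a):
    return a | frozenset([a])      # successor of a is a U {a}

z, v = [zero], [zero]
for _ in range(5):
    z.append(zermelo_succ(z[-1]))
    v.append(von_neumann_succ(v[-1]))

print([len(a) for a in z])    # [0, 1, 1, 1, 1, 1] -- singletons all the way
print([len(a) for a in v])    # [0, 1, 2, 3, 4, 5] -- n has exactly n elements
```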

The rest of the paper develops the theory of equivalence from the axioms. I noted that Zermelo allows atoms. On the other hand, he does not have ordered pairs, and thus neither relations nor functions. This lack calls for some gymnastics. When M and N are disjoint, the set of all unordered pairs MN={{m,n}:m∈M, n∈N} substitutes for our M×N.

To define equivalence between sets M and N, he assumes first that M and N are disjoint. Using MN instead of M×N, he can define “bijection between M and N’’. If one exists, then M and N are “immediately equivalent”. Dropping the disjointness condition, he says M and N are “mediately equivalent” if there exists a third set that is disjoint from both and “immediately equivalent” to both. It takes a couple of pages to show that this definition makes sense.
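For finite disjoint sets, Zermelo's criterion is easy to transcribe: a subset of MN defines a bijection exactly when each element of M lies in one and only one of the chosen pairs, and likewise for N. A Python sketch (my own helper names; it assumes M and N are disjoint, as Zermelo does):

```python
def unordered_product(M, N):
    """Zermelo's MN: all unordered pairs {m, n} with m in M, n in N (M, N disjoint)."""
    return {frozenset([m, n]) for m in M for n in N}

def defines_bijection(pairs, M, N):
    """Does this subset of MN pair each element of M with exactly one element
    of N, and vice versa?  (Zermelo's 'immediately equivalent'.)"""
    hits_M = [sum(1 for p in pairs if m in p) for m in M]
    hits_N = [sum(1 for p in pairs if n in p) for n in N]
    return all(h == 1 for h in hits_M + hits_N)

M, N = {1, 2, 3}, {"a", "b", "c"}
good = {frozenset([1, "a"]), frozenset([2, "b"]), frozenset([3, "c"])}
bad  = {frozenset([1, "a"]), frozenset([2, "a"]), frozenset([3, "c"])}
assert good <= unordered_product(M, N)
print(defines_bijection(good, M, N), defines_bijection(bad, M, N))   # True False
```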

Zermelo proves the Equivalence Theorem, that is the “(Cantor-Dedekind-Schröder)-Bernstein Theorem”. (A couple of decades later, he discovered that Dedekind had basically the same proof.) He gives detailed proofs of the basic facts about equivalence. He defines “M has lower cardinality than N’’ in the usual fashion (M injects into N but not vice versa) but avoids defining “cardinal number”, as he promised in the introduction. The paper crescendoes in a proof of J. König’s inequality, a generalization of Cantor’s 𝔪<2^𝔪. Expressed using cardinal numbers, this says that if 𝔪k<𝔫k for all k in some index set K, then ∑k 𝔪k<∏k 𝔫k.

Zermelo spills a fair amount of ink on the question of “definiteness”. He initially claims that a∈b and M⊆N are definite questions, as we’ve seen. When defining 𝔇T, he notes that for any object a, the set Ta={X∈T: a∈X} exists by Separation (because a∈X is definite). But the question whether Ta=T is also definite. So using Separation again, 𝔇T={a∈A: Ta=T}, where A is any element of T. A similar discussion accompanies his definition of “immediately equivalent”, showing that it is definite whether a given subset of MN defines a bijection.

Nonetheless, a certain nimbus of indefiniteness surrounds Zermelo’s “definite”. Twenty-one years later, Zermelo published a paper, “On the concept of definiteness in axiomatics”. By this time, people had suggested replacing “definite” with “definable in first-order logic”. Zermelo did not accept this, and his proposal had no influence on ZFC.

[1] Despite this clear statement from Zermelo, Moore argues that “his axiomatization was primarily motivated by a desire to secure his demonstration of the Well-Ordering Theorem and, in particular, to save his Axiom of Choice” (Moore (p.159)). He notes that Zermelo composed the two 1908 papers—the axiomatization, and the second well-ordering proof—together, and that “there are numerous internal links connecting the two papers” (Moore). Zermelo’s biographer takes an intermediate view: “Above all, however, one has to take into consideration how deeply Zermelo’s axiomatic work was entwined with Hilbert’s programme of axiomatization and the influence of the programme’s ‘philosophical turn’ which was triggered by the publication of the paradoxes in 1903” (Ebbinghaus (p.81)).

Prev TOC Next


Filed under History, Set Theory