
First-Order Categorical Logic 14

Prev TOC Next

JB: So, let’s think about how we can prove this generalization of Gödel’s completeness theorem. First, remember that a hyperdoctrine B is consistent iff B(0) has at least two elements, or in other words, ⊤ ≠ ⊥ in this boolean algebra. Second, let’s say a hyperdoctrine C is set-based if every C(n) is the power set of V^n for some fixed set V. We call V the universe. Third, let’s say a morphism of hyperdoctrines, say F: B → C, is a natural transformation whose components F(n): B(n) → C(n) are Boolean algebra homomorphisms obeying the Beck–Chevalley condition and maybe the Frobenius condition. (We’re a bit fuzzy about this and we’ll probably have to sharpen it up.)

Then let’s try to prove this: a hyperdoctrine is consistent iff it has a morphism to some set-based hyperdoctrine.

Here’s my rough idea of one way to try to prove it: use Stone’s representation theorem for boolean algebras! This says any boolean algebra is isomorphic to the boolean algebra of clopen subsets of some Stone space. A Stone space is a topological space that’s profinite, i.e. a limit of finite spaces with their discrete topology. A subset of a topological space is clopen if it’s closed and open.

The clopen subsets of a topological space always form a boolean algebra under the usual intersection, union and complement. So, the cool part of Stone’s representation theorem is that every boolean algebra shows up this way—up to isomorphism, that is—from a very special class of topological spaces: the ones that are almost finite sets with their discrete topology. A good example of such a space is the Cantor set.
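The finite case of Stone’s theorem is easy to check concretely: the Stone space of a finite boolean algebra is finite and discrete, with one point per atom. Here is a small Python sketch (all names are mine) that finds the points of the spectrum of P({1,2,3}) by brute-force enumeration of homomorphisms to {⊤,⊥}:

```python
from itertools import combinations, product

def powerset(base):
    s = list(base)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

base = {1, 2, 3}
alg = powerset(base)          # the boolean algebra P({1,2,3}), 8 elements

def homs_to_2(alg, base):
    """Brute-force all boolean algebra homomorphisms h: P(base) -> {0,1}."""
    homs = []
    for values in product([0, 1], repeat=len(alg)):
        h = dict(zip(alg, values))
        # must preserve bottom and top...
        if h[frozenset()] != 0 or h[frozenset(base)] != 1:
            continue
        # ...and meets and joins (complements then come for free)
        ok = all(h[a & b] == h[a] & h[b] and h[a | b] == h[a] | h[b]
                 for a in alg for b in alg)
        if ok:
            homs.append(h)
    return homs

homs = homs_to_2(alg, base)
print(len(homs))   # 3 points: one ultrafilter per atom {1}, {2}, {3}
```

Each homomorphism is the indicator function of an ultrafilter, and in a finite boolean algebra every ultrafilter is principal, generated by an atom.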

I hope you see how this might help us. We’re trying to show every consistent hyperdoctrine has a morphism to some set-based hyperdoctrine. (I think the converse is easy.) But Stone is telling us that every boolean algebra is a boolean algebra of some subsets of a set!

MW: Okay, that jibes with some of my own vague ideas. Though I hadn’t got as far as Stone’s Theorem. (I’m familiar with it, of course.)

Two ideas make Henkin’s proof work. One is to find an epimorphism from the boolean algebra B(0) onto the two-element boolean algebra {⊤,⊥} (or equivalently, an ultrafilter on B(0)). In other words, we’re deciding which sentences we want to be true; this is then used to define the model. The points of the Stone space are these epimorphisms, so we’re further along on the same track.

The other idea—specific to Henkin—ensures that quantifiers are respected. Syntax now comes to the fore: We add a bunch of constants, called witnesses, and witnessing axioms, like so (in a simple case):

∃x φ(x) → φ(c_φ)

where c_φ witnesses the truth of the existential sentence ∃x φ(x).

Stone’s Theorem gives us a Stone space S_n for each B(n). That’s nice. But we don’t immediately have S_n = V^n, with the same V for all n. (I’m not even sure that we want exactly that.)

The Henkin witnessing trick “ties together” (somehow!) B(n) and B(n+1). And the tie relates to the left adjoint of a morphism B(n)→B(n+1).

So the stew is thickening nicely, but it looks like we’ll have to adjust the seasonings quite a lot!

JB: Right, this proof will take real thought. I don’t know ahead of time how it will go. But I’m very optimistic.

Stone’s representation theorem says the category of boolean algebras and boolean algebra homomorphisms is contravariantly equivalent to the category of Stone spaces and continuous maps. Let’s call this equivalence

S: BoolAlg^op → Stone

S is a nice letter because the Stone space associated to a boolean algebra A should really be thought of as the ‘spectrum’ of that algebra: it’s the space of boolean algebra homomorphisms

f:A → {⊤,⊥}

Our hyperdoctrine B gives a Boolean algebra B(n) for each n, and this gives us a Stone space S(B(n)). Maybe we can abbreviate this as S_n, as you were implicitly doing, and hope no symmetric groups become important in this game. But for now, at least, I want to keep our eyes on that functor S.

Why? Because our hyperdoctrine doesn’t just give us a bunch of Stone spaces: it gives us a bunch of maps between them! For each function

f: m → n

our hyperdoctrine gives us a boolean algebra homomorphism

B(f): B(m) → B(n).

Hitting this with the functor S, we get a continuous map going back the other way:

S(B(f)): S(B(n)) → S(B(m)).

And all this is functorial. That has to be very important somehow.

Next: somehow we want to use the Henkin trick to fatten up the Stone spaces S(B(n)) to sets of the form V^n for a fixed universe V. There are a lot of tantalizing clues for how. First, we can actually think about the Henkin trick and its use of “witnesses”. Second, we can note that the Henkin trick is related to existential quantifiers—which in hyperdoctrines are related to left adjoints in the category of posets to the boolean algebra homomorphisms

B(f): B(m) → B(n).

This raises a natural and interesting question. If you have a mere poset map between boolean algebras, what if anything does it do to their spectra? I should at least figure out the answer for the left adjoints we’re confronting.
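In the set-based case the left adjoints we’re confronting are concrete: the left adjoint (in Poset) of the preimage map f⁻¹: P(Y) → P(X) is the direct image, i.e. “existential quantification along f”. A finite Python check of the adjunction ∃_f(S) ⊆ T ⟺ S ⊆ f⁻¹(T) (a sketch; the sets and names are mine):

```python
from itertools import combinations

X, Y = {0, 1, 2}, {'a', 'b'}
f = {0: 'a', 1: 'a', 2: 'b'}      # a function f: X -> Y

def subsets(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def preimage(T):                   # the "B(f)" direction: P(Y) -> P(X)
    return frozenset(x for x in X if f[x] in T)

def image(S):                      # its left adjoint: "∃ along f"
    return frozenset(f[x] for x in S)

# The adjunction: image(S) ⊆ T  iff  S ⊆ preimage(T), for all S, T.
ok = all((image(S) <= T) == (S <= preimage(T))
         for S in subsets(X) for T in subsets(Y))
print(ok)   # True, checked over all 8 × 4 pairs (S, T)
```

This is exactly the order-theoretic content of ∃: the smallest T whose preimage contains S is the image of S.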

So, there’s a lot to chew on, but it’s very tasty.

MW: Model theorists call these Stone spaces the spaces of complete n-types. We ran into them several times in our conversation on non-standard models of arithmetic (in posts 1, 2, 9, 18, and 25).

Let me noodle around with what you just said for a nice easy example, the theory LO of linear orderings. Actually, let’s start with something even easier: DLO, dense linear orderings without endpoints. This enjoys quantifier elimination: any formula is equivalent to one without quantifiers.

So, B(0) is just {⊤,⊥}. That makes sense: since DLO is complete, any sentence is provably true or provably false.

How about B(1)? Also {⊤,⊥}! What can we say about x? Provably true things like x=x, and provably false things like x≠x. That’s it!

Look at it this way: all countable DLO’s are isomorphic, and there’s an automorphism taking any element of a countable DLO to any other element. So all countable DLOs look alike, and for the elements of one, “none of these things is not like the other…”

For B(2) we finally have a bit of variety: B(2)={[x<y],[x=y],[y<x]}. Plus disjunctions like [x<y ∨ y<x]. And ⊥. What about the map B(f):B(1)→B(2) with our old friend f:{x}→{x,y}? Obviously ⊤ goes to ⊤ and ⊥ goes to ⊥.

Finally with B(3) things start to pick up a little. It contains the six “permutation situations” [x<y<z], [x<z<y], …, [z<y<x], plus seven more where some things are equal (like [x=y<z] or [x=y=z]), plus all the disjunctions, plus ⊥. If f now is the injection {x,y}→{x,y,z}, then B(f) sends an assertion about x and y to the disjunction of all the xyz situations consistent with it. For example

[x<y] ↦ [z<x<y ∨ z=x<y ∨ x<z<y ∨ … ∨ x<y<z]

Okay, complete n-types. (We’re concerned here with complete pure n-types, that is, no names added to the language.) These give you the full scoop on the n elements (x_1,…,x_n). So ignore the disjunctions. For DLO, we have only principal n-types: a single formula tells the whole story. I’ll write 〈φ〉 for the n-type generated by the boolean element [φ]. So for DLO,

S(B(0)) = {〈⊤〉}
S(B(1)) = {〈⊤〉}
S(B(2)) = {〈x<y〉, 〈x=y〉, 〈y<x〉}
S(B(3)) = {〈x<y<z〉, 〈y<x<z〉,…, 〈x=y=z〉}

In other words, we can forget about the disjunctions, and an n-type must be consistent. Take a hike, ⊥!

In general, if h: X → Y is a boolean algebra homomorphism, then S(h): S(Y) → S(X) is just inverse images: S(h)(p) = h⁻¹(p). Here p, being an n-type, is a subset of Y. For example, let f:{x,y}→{x,y,z}. Above we saw that B(f): [x<y] ↦ [z<x<y ∨ … ∨ x<y<z]. All these disjuncts are sent to 〈x<y〉. (Or to be persnickety, the principal ideals generated by the disjuncts are sent there.)

That makes sense. The 3-type p is “full info” on the triple (x,y,z), and S(B(f))(p) extracts all the info we can glean about the pair (x,y). That’s a 2-type!
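These complete DLO n-types can even be enumerated mechanically: a type of (x_1,…,x_n) is just a “weak order” pattern, recording which variables coincide and how the resulting groups are ordered. A Python sketch (the rank-vector encoding is mine):

```python
from itertools import product

def complete_types(names):
    """Complete DLO n-types as rank vectors: each variable gets a level,
    and the levels used must form an initial segment 0..k (a weak order)."""
    n = len(names)
    types = set()
    for ranks in product(range(n), repeat=n):
        used = sorted(set(ranks))
        if used == list(range(len(used))):   # canonical form: no gaps
            types.add(tuple(ranks))
    return types

t3 = complete_types(('x', 'y', 'z'))
print(len(t3))   # 13: the six strict orders plus seven with equalities

def restrict_to_xy(t):
    """S(B(f)) for the injection {x,y} -> {x,y,z}: forget z."""
    rx, ry = t[0], t[1]
    return '<' if rx < ry else ('=' if rx == ry else '>')

print({restrict_to_xy(t) for t in t3})   # the three complete 2-types
```

The counts match the lists above: |S(B(2))| = 3 and |S(B(3))| = 13, and restricting a 3-type yields exactly one of the three 2-types.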

Hmm, couple of ways I could go from here. On the one hand, I don’t yet discern any existential quantifiers lurking in the background. Also, what do we want for our “universe” or “domain” V? None of the B(n)’s, n=0,1,2,3, seem to be co-operating!

Or I could repeat the exercise for a less simple theory, namely LO. Now B(1), and even B(0), have some real meat on their bones! Quantifiers stick around.

I don’t have such a clear understanding of the n-types of LO. Maybe if I finish reading Joe Rosenstein’s marvelous book…

Oh, one more thought. The contravariant S functor gives us S(f) going in the other direction from f. But we need two morphisms going the other way: the left and the right adjoints.

JB: Yes, I’ve been talking about the left adjoint since that relates to existential quantifiers, and those seem to be the key to “Henkinization”. My vague idea is this—I’ll just sketch a special case. If our hyperdoctrine B has some unary predicate P ∈ B(1) that gives “true” in B(0) when we existentially quantify it, Henkin wants to expand the Stone space S(B(1)) by throwing in an element x such that P(x) = ⊤. (Remember, elements of B(n) are precisely continuous boolean-valued functions on the Stone space S(B(n)).)

And here, when I said “existentially quantify it”, I meant take an element of B(1) and hit it with the left adjoint of the boolean algebra homomorphism

B(f): B(0) → B(1)

coming from the unique function

f: 0 → 1

Does this make sense?

If so, of course we need to do this sort of thing in a way that involves not only B(0) and B(1), but also the boolean algebras B(n) for higher n.

To make progress on this idea, it’s very good that you found a nice example of a hyperdoctrine, namely the theory of dense linear orders, and started working out B(n) for various low n. I think we should focus on this example for a while, and think about trying to “Henkinize” it, throwing new elements into the Stone spaces S(B(n)), until we get a set-based hyperdoctrine C, i.e. one for which every C(n) is the power set of V^n for some fixed set V. This will be a countably infinite set, right? It’s probably the underlying set of some Stone space. And it should be obtained in some systematic way from our original hyperdoctrine B.

You’ve nicely shown why B itself is not set-based.

MW: Sounds like a plan!

Prev TOC Next


Filed under Categories, Conversations, Logic

Set Theory Jottings 11. Zermelo to the Rescue! (Part 2)

Prev TOC Next

In 1908 Zermelo published his paper “Investigations in the foundations of set theory”. This contained the axiom system that eventually led to ZFC. Zermelo opens the paper with this rationale:

Set theory is that branch of mathematics whose task is to investigate mathematically the fundamental notions “number”, “order”, and “function” … At present, however, the very existence of this discipline seems to be threatened by certain contradictions, or “antinomies” [such as the Russell paradox]. …it no longer seems admissible today to assign to an arbitrary logically definable notion a “set”, or “class”, as its “extension”. Cantor’s original definition of a “set” as “a collection, gathered into a whole, of certain well-distinguished objects of our perception or our thought” therefore certainly requires some restriction … Under these circumstances there is at this point nothing left for us to do but to proceed in the opposite direction and, starting from “set theory” as it is historically given, to seek out the principles required for establishing the foundations of this mathematical discipline. In solving the problem we must, on the one hand, restrict these principles sufficiently to exclude all contradictions and, on the other, take them sufficiently wide to retain all that is valuable in this theory.1

Zermelo’s motivation is pragmatic, unlike the philosophical approach of Russell and Whitehead’s Principia Mathematica.

Axiomatization was “in the air” at this time, with people throwing out various suggestions. Moore (p.151) offers some examples. Julius König proposed two axioms: (1) There are mental processes satisfying the formal laws of logic. (2) The continuum ℝ, treated as the totality of all sequences of natural numbers, does not lead to a contradiction. Schoenflies opted for the Trichotomy of Cardinals, and wanted to hold on to the Principle of Comprehension. Cantor sent a letter to Hilbert with some principles that look a lot like Zermelo’s axioms, but this letter didn’t come to light until decades later.

Zermelo’s article stands out as the first published proposal with a full set of axioms, demonstrating that it could save some of Cantor’s Paradise, and recognizing that the Principle of Comprehension was kaput.

Zermelo eschews philosophy:

The further, more philosophical, question about the origin of these principles and the extent to which they are valid will not be discussed here. I have not yet even been able to prove rigorously that my axioms are “consistent”, though this is certainly very essential…

The paper, he says, develops the theory of equivalence in a manner “that avoids the formal use of cardinal numbers.” He promises a second part, dealing with well-ordering, but this never appeared.

After the introduction, Zermelo begins:

  1. Set theory is concerned with a “domain” 𝔅 of individuals, which we shall call simply “objects” and among which are the “sets”. …
  2. Certain “fundamental relations” of the form aεb obtain between the objects of the domain 𝔅. …An object b may be called a set if and—with a single exception (Axiom II)—only if it contains another object, a, as an element. [I will use the modern ∈ in place of Zermelo’s ε from now on.]
  3. [Definition of subset and disjoint]
  4. A question or assertion 𝔈 is said to be “definite” if the fundamental relations of the domain, by means of the axioms and the universally valid laws of logic, determine without arbitrariness whether it holds or not. Likewise a “propositional function” 𝔈(x), in which the variable term x ranges over all individuals of a class 𝔎, is said to be “definite” if it is definite for each single individual x of the class 𝔎. Thus the question whether a∈b or not is always definite, as is the question whether M⊆N or not.

Note that item (1) allows for so-called urelements or atoms—things like integers. ZFC is a so-called “pure” set theory, without atoms.

Next come seven axioms, interlarded with extensive discussion.

Extensionality:
“Every set is determined by its elements.” In other words, if M⊆N and N⊆M, then M=N.
Elementary Sets:
The null set exists. Given any elements a and b of the domain, the sets {a} and {a,b} exist.
Separation:
To quote Zermelo: “Whenever the propositional function 𝔈(x) is definite for all elements of a set M, M possesses a subset M𝔈 containing as elements precisely those elements x of M for which 𝔈(x) is true.”

Zermelo notes that “sets may never be independently defined by means of this axiom but must always be separated as subsets from sets already given”, and that this prevents the Russell paradox and the like. Indeed, the Russell paradox is turned into a theorem: for any set M there is a subset M_0 such that M_0 ∉ M. He also notes that “definiteness” precludes some semantic paradoxes, e.g., Richard’s paradox (see post 5).

Zermelo shows that Separation implies the existence of set differences M∖M_1 (denoted M−M_1) and intersections M∩N (denoted [M,N]), and even ⋂_{X∈T} X for a set of sets (which he denotes 𝔇T, for “Durchschnitt”).

Power Set:
For any set T, there is a set whose elements are precisely all of T’s subsets. He denotes the power set of T by 𝔘T (for “Untermengen”).
Union:
For any set T, there is a set whose elements are precisely the elements of the elements of T. In modern notation, ⋃_{X∈T} X. Denoted 𝔖T, for “Summe”. He writes M+N for our M∪N.
Choice:
Given any set T of mutually disjoint nonempty sets, the union ⋃_{X∈T} X contains a subset S such that S∩X is a singleton for each X∈T.

Zermelo adds, “We can also express this axiom by saying that it is always possible to choose a single element from each element M,N,R,… of T and to combine all the chosen elements, m,n,r,…, into a set” S.

The set of all S’s satisfying this condition (card(S∩X)=1 for all X∈T) Zermelo calls the product of the elements of T, denoted 𝔓T, or just MN for a pair of disjoint sets M and N.
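For a finite family the content of this axiom is easy to illustrate (and needs no axiom at all: any explicit rule for picking elements will do; Choice is only needed when T is infinite and no uniform rule exists). A Python sketch, with an example family of my own:

```python
# Zermelo's Choice, finitely: from a set T of mutually disjoint nonempty
# sets, build a transversal S ⊆ ⋃T meeting each member exactly once.
T = [{1, 4}, {2, 7, 9}, {3}]
union = set().union(*T)

# Any choice rule works here; take the least element of each member.
S = {min(X) for X in T}

print(S)                                  # {1, 2, 3}
assert S <= union                         # S is a subset of the union...
assert all(len(S & X) == 1 for X in T)    # ...with card(S ∩ X) = 1 for each X ∈ T
```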

Infinity:
There is a set Z containing the null set 0, and for each of its elements a, it also contains {a}.

This leads to the so-called “Zermelo finite ordinals”, 0, {0}, {{0}}, {{{0}}}, etc. Z contains all these, and using Separation, we can assume Z contains exactly these. The Zermelo finite ordinals have two drawbacks: (1) They don’t extend naturally into the infinite ordinals; (2) Each of them, except 0, contains exactly one element. The von Neumann ordinals removed both of these blemishes.
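The two successor operations are easy to compare concretely. A Python sketch using frozensets as a stand-in for hereditarily finite sets (the encoding is mine):

```python
# Zermelo's finite ordinals 0, {0}, {{0}}, ... versus von Neumann's
# 0, {0}, {0,{0}}, ..., modeled with frozensets.
def zermelo(n):
    t = frozenset()
    for _ in range(n):
        t = frozenset({t})           # successor: a ↦ {a}
    return t

def von_neumann(n):
    t = frozenset()
    for _ in range(n):
        t = frozenset(t | {t})       # successor: a ↦ a ∪ {a}
    return t

# Every Zermelo ordinal after 0 has exactly one element...
print([len(zermelo(n)) for n in range(5)])       # [0, 1, 1, 1, 1]
# ...while the von Neumann ordinal n has exactly n elements, so it
# doubles as the cardinal n and extends naturally into the transfinite.
print([len(von_neumann(n)) for n in range(5)])   # [0, 1, 2, 3, 4]
```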

The rest of the paper develops the theory of equivalence from the axioms. I noted that Zermelo allows atoms. On the other hand, he does not have ordered pairs, and thus neither relations nor functions. This lack calls for some gymnastics. When M and N are disjoint, the set of all unordered pairs MN={{m,n} : m∈M, n∈N} substitutes for our M×N.

To define equivalence between sets M and N, he assumes first that M and N are disjoint. Using MN instead of M×N, he can define “bijection between M and N’’. If one exists, then M and N are “immediately equivalent”. Dropping the disjointness condition, he says M and N are “mediately equivalent” if there exists a third set that is disjoint from both and “immediately equivalent” to both. It takes a couple of pages to show that this definition makes sense.

Zermelo proves the Equivalence Theorem, that is, the “(Cantor-Dedekind-Schröder)-Bernstein Theorem”. (A couple of decades later, he discovered that Dedekind had basically the same proof.) He gives detailed proofs of the basic facts about equivalence. He defines “M has lower cardinality than N’’ in the usual fashion (M injects into N but not vice versa) but avoids defining “cardinal number”, as he promised in the introduction. The paper crescendoes in a proof of J. König’s inequality, a generalization of Cantor’s 𝔪 < 2^𝔪. Expressed using cardinal numbers, this says that if 𝔪_k < 𝔫_k for all k in some index set K, then ∑_k 𝔪_k < ∏_k 𝔫_k. Zermelo, of course, phrases this without mentioning cardinal numbers.

Zermelo spills a fair amount of ink on the question of “definiteness”. He initially claims that a∈b and M⊆N are definite questions, as we’ve seen. When defining 𝔇T, he notes that for any object a, the set T_a = {X∈T : a∈X} exists by Separation (because a∈X is definite). But the question whether T_a = T is also definite. So using Separation again, 𝔇T = {a∈A : T_a = T}, where A is any element of T. A similar discussion accompanies his definition of “immediately equivalent” showing that it is definite whether a given subset of MN defines a bijection.
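Zermelo’s two-step use of Separation to get 𝔇T can be mimicked directly with comprehensions, which play the role of Separation here (the example family is mine):

```python
# Zermelo's intersection via Separation: fix any A ∈ T; then
# 𝔇T = {a ∈ A : T_a = T}, where T_a = {X ∈ T : a ∈ X}.
T = [{1, 2, 3, 4}, {2, 3, 5}, {0, 2, 3}]
A = T[0]

def T_a(a):
    # Exists by Separation, since "a ∈ X" is a definite question.
    return [X for X in T if a in X]

# "T_a = T" is again definite, so Separation applies once more:
DT = {a for a in A if T_a(a) == T}
print(DT)   # {2, 3} — the ordinary intersection of the family
```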

Nonetheless, a certain nimbus of indefiniteness surrounds Zermelo’s “definite”. Twenty-one years later, Zermelo published a paper, “On the concept of definiteness in axiomatics”. By this time, people had suggested replacing “definite” with “definable in first-order logic”. Zermelo did not accept this, and his proposal had no influence on ZFC.

[1] Despite this clear statement from Zermelo, Moore argues that “his axiomatization was primarily motivated by a desire to secure his demonstration of the Well-Ordering Theorem and, in particular, to save his Axiom of Choice” (Moore (p.159)). He notes that Zermelo composed the two 1908 papers—the axiomatization, and the second well-ordering proof—together, and that “there are numerous internal links connecting the two papers” (Moore). Zermelo’s biographer takes an intermediate view: “Above all, however, one has to take into consideration how deeply Zermelo’s axiomatic work was entwined with Hilbert’s programme of axiomatization and the influence of the programme’s ‘philosophical turn’ which was triggered by the publication of the paradoxes in 1903” (Ebbinghaus (p.81)).

Prev TOC Next


Filed under History, Set Theory

First-Order Categorical Logic 13

Prev TOC Next

MW: It’s been a minute! Well, almost 60,000 minutes.

We left off with a question: does a natural transformation from a syntactic hyperdoctrine to a semantic hyperdoctrine automatically “respect quantifiers”? We saw that this amounts to a Beck–Chevalley condition. We wondered if we had to add that condition to our definition of a model, or if it came for free.

Maybe you’ve had the experience of putting aside a crossword, half-completed, and coming back to it in an hour. Hey, of course the answer to 60 Across is “Turing Test”!

The same thing happened to me here. And the answer is: no free lunch.

Let me recap, just to get my brain back up to speed, and fix notation. B is a syntactic hyperdoctrine, C is a semantic hyperdoctrine, and F:BC is a natural transformation.

For example: say B is the theory LO of linear orders, and C is an actual linear order. B(2) contains predicates like [x<y], C(2) contains binary relations on the domain of the linear order, and F2([x<y]) is the less-than relation on this domain.

Here’s the key diagram:

Ignore the dashed arrows for just a second. Say f is an injection in FinSet, like f:{x}→{x,y}. Then B(f) and C(f) are the “throw in extra variables” morphisms we’ve talked about ad nauseam. This diagram is a commuting square in the category BoolAlg.

The dashed arrows are morphisms in a larger category we dubbed PoolAlg. This is a full subcategory of Poset, and BoolAlg is a so-called wide subcategory of it. Its objects are all boolean algebras, and its morphisms are all order-preserving maps, i.e., all the morphisms those objects have as denizens of Poset.

Within PoolAlg, the dashed arrows are left adjoints of the right arrows. Now does the diagram commute? In PoolAlg, that is. Beck–Chevalley says yes. We saw last time that we want this to be true. For example, the top left arrow takes [y<x] to [∃y(y<x)]. We want this predicate to map to the corresponding relation on an actual linear order. That’s what we mean by “respecting quantifiers”.

JB: This sounds right. Thanks for getting us started again!

As you might expect, I have a couple of nitpicks. While I feel sure there’s no free lunch here, I don’t think you have proved it. Maybe for some reason the Beck–Chevalley condition always holds in this situation! I feel sure it doesn’t, but I think that can only be shown with a counterexample: a defective model that doesn’t obey Beck–Chevalley.

It’s probably easy enough to find one. However, I don’t feel motivated to do it. We have bigger fish to fry. I’m happy to assume our models obey this Beck–Chevalley condition.

(Do we also need to assume they obey some sort of Frobenius condition?)

And here’s another even smaller nitpick: I don’t see the need for this category PoolAlg. I believe whenever you’re tempted to use it we can use our friend Poset. For example, the dashed arrows are left adjoints in Poset, and the square containing those dashed arrows commutes in Poset. Saying PoolAlg—restricting the set of objects in that way—isn’t giving us any leverage. It doesn’t seriously hurt, but I prefer to think about as few things as necessary.

Maybe it’s time to finally try to state and prove a version of Gödel’s completeness theorem. Do you remember our best attempt at stating it so far? I think I can, just barely… though it’s somewhat shrouded by the mists of time.

MW: That’s right, to prove the “no free lunch” result we need a counter-example. That’s what came to me when I started thinking about this stuff again. A way to construct a whole slew of counter-examples.

And I think it’s worth going through, because it relates to Henkin’s proof of Gödel’s completeness theorem. The cat-logic proof will have to surmount the same obstacles. So here goes.

Say B is a theory, aka syntactic hyperdoctrine. Say C is a semantic hyperdoctrine, and F:B→C is one of those natural transformations that does respect quantifiers. Suppose the domain of C is V: C(n) is a set of n-ary relations on V. Now let W be a subset of V. Let D(n) be all the relations you get by restricting the relations in C(n) to W. And for any predicate φ∈B(n), let Gn(φ) be the restriction of Fn(φ) to W, so Gn maps B(n) into D(n). I claim that D is a hyperdoctrine, and G is a natural transformation from B to D. And very often, G will be disrespectful of quantifiers.

Using my LO example, let C have V=ℤ as its domain, and let W=ℕ. The predicate [y<x] gets sent to a less-than relation on ℤ by F2. This restricts to a less-than relation on ℕ. So G2([y<x]) is that relation.

Now let’s apply the left adjoints of the injection {x}→{x,y}. Up in B, we get the predicate [∃y(y<x)]. F1 sends this to the always-true relation in ℤ, which of course restricts to the always-true relation in ℕ. What about the left adjoints in C and D? The relation sup_y(y<x) is always true in ℤ, but is x≠0 in ℕ. So “go left, then down via G1’’ gets you to always-true in D, while “go down via G2, then left” gets you to x≠0.
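Here is a finite toy version of that ℤ-versus-ℕ computation (a sketch only: a short finite chain stands in for ℤ, so it is not a DLO, but the order-of-operations failure is exactly the same):

```python
# Restricting the domain does not commute with existential quantification.
V = [-2, -1, 0, 1, 2]   # stands in for ℤ
W = [0, 1, 2]           # the sub-domain, standing in for ℕ

# Path 1: apply the left adjoint (∃y) to "y < x" over V, then restrict to W:
exists_then_restrict = {x for x in W if any(y < x for y in V)}

# Path 2: restrict "y < x" to W first, then apply ∃y inside W:
restrict_then_exists = {x for x in W if any(y < x for y in W)}

print(exists_then_restrict)   # {0, 1, 2}: every x in W has a smaller y in V
print(restrict_then_exists)   # {1, 2}:   0 has no smaller element in W
```

The two paths around the square disagree at x = 0, which is the finite shadow of “always-true versus x≠0”.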

Another example: let V=ℚ, W=ℤ, and for our predicate, use [x<z<y]. Applying the left adjoint for the injection {x,y}→{x,y,z} gives [∃z(x<z<y)]. Taking one path brings us to the relation x<y in ℤ, while the other path leads to “follows, but not immediately”.

It’s clear now that one path leads to a “model” where ∃x means, “does there exist an x in a larger domain?”. The Henkin proof features a whole sequence of such enlargements. Thinking about that suggested this class of counter-examples to me.

I believe that restricting a semantic hyperdoctrine this way results in a hyperdoctrine. I haven’t checked all the details. But it’s a piece of cake when C(n) equals P(V^n) for all n. In that case, D(n) is just P(W^n)! This is what you wanted for semantic hyperdoctrines in the first place, as I recall.

The other thing we need to check: that G is a natural transformation. The key diagram:

Here, ↾_m and ↾_n are the restriction maps. It’s pretty obvious that the bottom square commutes. Since the top square does too, an easy diagram chase shows that the “outer frame” square does.

Anyway, you wanted a refresher on the statement of the Gödel completeness theorem in our framework. Here goes. A syntactic hyperdoctrine B is consistent if B(0) is not the one-element boolean algebra {⊤=⊥}. The theorem states that every consistent syntactic hyperdoctrine has a model, which is a natural transformation to a semantic hyperdoctrine that respects quantifiers.

(And maybe also equality? Haven’t thought about that yet.)

We want to adapt the Henkin proof to this framework, is that right? I think I see, in a vague way, how to do that. But I don’t see how we’d be leveraging the framework—how the proof would be anything but a straightforward translation, not really using any of the neat category ideas.

JB: Okay, great. Thanks for all that.

Let’s try to clean things up a wee bit before we dive in. When you say “syntactic hyperdoctrine” I’d prefer to just say “hyperdoctrine”.

First, it seems plausible that any hyperdoctrine is consistent iff it has a model. Second, part of the whole goal of working categorically is to break down the walls between syntax and semantics, to treat them both as entities of the same kind (in this case hyperdoctrines).

So, we should aim for a version of Gödel’s completeness theorem saying that a hyperdoctrine B is consistent iff it has a morphism F to a hyperdoctrine C of some particular “semantic” sort. You said B should be “syntactic”. That’s great as far as it goes, but here’s a place where we can generalize and do something new—so let’s boldly assume B is an arbitrary hyperdoctrine.

MW: Wait a minute! What do you mean by “morphism” here?

JB: A morphism of hyperdoctrines. Do you object? (That was a pun.)

MW: Who could object to a three layer cake?

In the bottom layer are boolean algebras, regarded as poset categories. The middle filling consists of categories whose objects are boolean algebras; a hyperdoctrine is a functor from FinSet to one of them. And now you propose to top it off with a category whose objects are hyperdoctrines. Copacetic!

Anyway, we never defined a morphism of hyperdoctrines. Do you mean a natural transformation respecting quantifiers?

JB: Well, perhaps not exactly that. We certainly want F to be a natural transformation respecting quantifiers, but we probably want it to be more than that, and we haven’t yet figured out exactly what. I expect that maybe F should be a natural transformation whose components obey the Beck–Chevalley and Frobenius conditions. That should make it respect quantifiers and equality. But we really need to think about this harder before we can be sure! So I said “morphism”, to leave the issue open.

We will probably resolve this issue as part of proving Gödel’s completeness theorem. It’s a time-honored trick in math: you make up the definitions while you’re proving a theorem, so your definitions make the theorem true.

Okay, where was I? I wanted to think about Gödel’s completeness theorem this way: it says a hyperdoctrine B is consistent iff it has some morphism F to a hyperdoctrine C of some “semantic” sort. And by “semantic” let’s mean the most traditional thing: we’ll demand that C be a hyperdoctrine where we pick a set V and let C(n) be the power set of V^n. That is semantic par excellence.

So how are we going to do this? Whatever trick people ordinarily use to prove Gödel’s completeness theorem—and you already mentioned the key word: “Henkin”—let’s try to adapt it to a general hyperdoctrine B. Maybe we can sort of pretend that B is defined “syntactically”, but try to use only its structure as a hyperdoctrine.

MW: Okay, this is the sort of thing I was hoping for!

I’m intrigued. The Henkin proof leans heavily on syntax. It begins by adding a bunch of new constants and axioms. How can we do this, while forswearing syntax?

JB: A question for the next installment….

Prev TOC Next


Filed under Categories, Conversations, Logic

Set Theory Jottings 10. Axiomatic Set Theory

Prev TOC Next

“An Axiom, you know, is a thing that you accept without contradiction. For instance, if I were to say ‘Here we are!’ that would be accepted without any contradiction, and it’s a nice sort of remark to begin a conversation with. So it would be an Axiom. Or again, supposing I were to say, ‘Here we are not!’, that would be—”

“—a fib!” cried Bruno.

“that would be accepted, if people were civil”, continued the Professor; “so it would be another Axiom.”

“It might be an Axledum”, Bruno said: “but it wouldn’t be true!”

—Lewis Carroll, Sylvie and Bruno Concluded

To get to the “good stuff” in math, you almost always need some set theory. Zermelo-Fraenkel set theory (ZF), plus the axiom of choice (AC; ZF+AC=ZFC) has become the standard first-order axiom system for set theory.

Before diving into the details, some generalities on axiom systems. Nowadays we’re pretty chill about them; you can take any collection you like (hopefully consistent) for a theory, and then you can start writing your thesis. Not, perhaps, an interesting thesis, but at any rate Bruno won’t complain that your axioms aren’t true!

For the Greeks, the axioms and postulates were true, in some sense. Idealized, sure, but descriptive of reality. This tie began to fray with the discovery of non-Euclidean geometries. Algebraic axiom systems, like those for groups and for fields, appeared by the end of the 19th century.

For roughly two thousand years after Euclid, most math developed without axioms. Take calculus as an example. You have the rules of calculus, but you don’t see anything like the Euclidean treatment of geometry. This remained true even as people subjected its foundations to stricter and stricter scrutiny. Mathematical intuition reigned supreme.

Hilbert’s Grundlagen der Geometrie (Foundations of Geometry, 1899) pushed towards a more formalist attitude. A celebrated quote of his, from years earlier, sums it up nicely:

One must be able to say at all times, instead of points, lines, and planes: tables, chairs, and beer mugs.

At times Cantor seemed to endorse this perspective:

Mathematics is entirely free in its development, and its concepts are only bound by the necessity of being consistent, and being related to the concepts introduced previously by means of precise definitions.

Grundlagen einer allgemeinen Mannigfaltigkeitslehre (Foundations of a general theory of sets)

But he held strong opinions on what’s true in mathematics:

I entertain no doubts as to the truths of the transfinites, which I recognized with God’s help and which, in their diversity, I have studied for more than twenty years; every year, and almost every day brings me further in this science.

—Letter from Cantor to Jeiler, quoted in Dauben (p.147).

On the other hand, he referred to the “Cholera-Bacillus of infinitesimals”, and called them “nothing but paper numbers!” (Dauben, p.131). The Continuum Hypothesis was for him a question of fact.

Two other themes run through this period: mathematics as a mental activity, and as logic.

Recall that Boole titled his famous treatise An Investigation of the Laws of Thought: on Which are Founded the Mathematical Theories of Logic and Probabilities. Cantor’s definition of “set” in his last major work reads

By a set we are to understand any collection into a whole M of definite and separate objects m of our intuition or our thought.

Here is the first sentence of Dedekind’s Was sind und sollen die Zahlen?: “In what follows I understand by thing every object of our thought.” His proof of the existence of an infinite set relies on this ontology:

Theorem: There exist infinite systems.

Proof: My own realm of thoughts, i.e., the totality S of all things which can be objects of my thought, is infinite. For if s signifies an element of S, then the thought s′, that s can be an element of my thought, is itself an element of S.

[Dedekind then appeals to his definition of infinite as having a bijection with a proper subset.]

Frege severely criticized this injection of psychology into mathematics. Cantor’s “proof” of the Well-Ordering Theorem suffers from it, as it consists of successively choosing elements of the set to be well-ordered. If we take this literally, then the choices must take place at an increasing sequence of times t_1 < t_2 < …. This limits us to ordinals that are “realizable in ℝ”, and thus to countable ordinals (see post 4). Yet Cantor claimed that every set can be well-ordered, in particular ℝ.

This is why Zermelo was at pains to say in his second proof of the Well-Ordering Theorem, “…the ‘general principle of choice’ can be reduced to the following axiom, whose purely objective character is immediately evident.” (My emphasis.)

Both Frege and Russell held that the truths of mathematics are logical facts. Thus we find debates on whether Zermelo’s axiom of choice is logically valid. Not surprising, historically. Aristotle’s logic dealt with propositions. From “proposition” we obtain “propositional function”, that is, a proposition with a free variable, like “x is mortal”. It becomes a proposition if we assign a value to the variable (“Socrates is mortal”), or quantify over it (“All men are mortal”). The class of all things satisfying a propositional function went by the name, “extension of a concept”.

Zooming out from these specifics, logic and mathematics both lay claim to necessary truth. This is elaborated in Kantian philosophy. Kant classified mathematical facts as synthetic a priori: necessary truths that go beyond analytic truths, which are true by definition. Poincaré classified the Axiom of Choice as a synthetic a priori judgment, just like the principle of induction.

The rise of formal logic and axiomatic set theory resulted in a sharply drawn boundary between logic and set theory. We have the axioms and rules of inference of first-order logic; then we have the axioms of ZFC or similar systems, which are particular first-order theories. Things weren’t so clear at the dawn of the 20th century.

Prev TOC Next

Leave a comment

Filed under History, Set Theory

Set Theory Jottings 9. Cantor Normal Form

Prev TOC Next

Suppose β>1, and let ζ>0 be arbitrary. Then ζ has a unique representation in so-called Cantor normal form:

ζ = β^{α_1}χ_1 + ··· + β^{α_k}χ_k
α_1 > ··· > α_k,   1 ≤ χ_i < β for all i
(1)

Stillwell gives an illustration, close to a proof, in §2.6 (pp.46–47)[1], for the most important special case: β=ω and ζ<ε_0. (The Cantor normal form for ε_0 is just ω^{ε_0}, not helpful.)

Here’s a more formal proof of the full theorem. First generalize division to all ordinals. If β>1, then any ζ has a unique representation in the form

ζ= βχ+ρ,   ρ<β

Proof: since β>1, β(ζ+1)≥ζ+1>ζ, and so there is a least χ′ such that βχ′>ζ. Furthermore χ′ cannot be a limit ordinal: if χ′ were a limit λ with βξ≤ζ for all ξ<λ, then βλ≤ζ by continuity of multiplication. Set χ=χ′−1, so

βχ≤ζ<β(χ+1)

Now write (by equation (7) of post 8)

ζ=βχ+(−βχ+ζ)=βχ+ρ

setting ρ=−βχ+ζ. We cannot have ρ≥β, otherwise

ζ=βχ+ρ≥βχ+β=β(χ+1)>ζ

Uniqueness needs to be done just right, since ordinal addition has cancellation on the left but not on the right. Suppose

βχ_1 + ρ_1 = βχ_2 + ρ_2,    ρ_1, ρ_2 < β

If χ_1 < χ_2 then χ_2 = χ_1 + γ for some γ > 0. So

βχ_1 + ρ_1 = β(χ_1 + γ) + ρ_2 = βχ_1 + βγ + ρ_2

Cancelling on the left,

ρ_1 = βγ + ρ_2 ≥ βγ ≥ β

contradicting the definition of ρ_1. So χ_1 = χ_2 = χ (say), and we can cancel βχ on the left to get ρ_1 = ρ_2.

The proof of Cantor normal form starts out in a similar fashion. First prove by induction that β^α ≥ α for all α. Since β^{ζ+1} > ζ, it follows that there is a least α with β^α > ζ. Furthermore α cannot be a limit ordinal, by continuity of exponentiation (in the exponent). Set α_1 = α−1. So

β^{α_1} ≤ ζ < β^{α_1+1}

Now we divide ζ by β^{α_1}:

ζ = β^{α_1}χ_1 + ρ_1,    ρ_1 < β^{α_1}

We cannot have χ_1 ≥ β, otherwise

ζ = β^{α_1}χ_1 + ρ_1 ≥ β^{α_1}β = β^{α_1+1}

Repeat with ρ_1:

ρ_1 = β^{α_2}χ_2 + ρ_2,   χ_2 < β,   ρ_2 < β^{α_2}

We must have α_2 < α_1 because β^{α_2} ≤ ρ_1 < β^{α_1}. Continuing we get a descending sequence of α_i’s which must terminate in finitely many steps, because ordinals. This establishes (1).
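For finite β and finite ζ this construction is just base-β positional notation; the following sketch (our own illustration, not from the sources above) runs the greedy loop for natural numbers:

```python
def cantor_normal_form(zeta, beta):
    """Greedy construction of (1) for finite ordinals: peel off the
    largest power of beta that fits, then repeat on the remainder.
    Returns a list of (exponent, coefficient) pairs with strictly
    decreasing exponents and 1 <= coefficient < beta."""
    assert beta > 1 and zeta > 0
    terms = []
    while zeta > 0:
        alpha = 0
        while beta ** (alpha + 1) <= zeta:       # largest alpha with beta^alpha <= zeta
            alpha += 1
        chi, zeta = divmod(zeta, beta ** alpha)  # zeta = beta^alpha * chi + rho
        terms.append((alpha, chi))
    return terms
```

With β = 10 the terms just read off the nonzero decimal digits: `cantor_normal_form(999, 10)` gives `[(2, 9), (1, 9), (0, 9)]`. Run with genuine ordinal arithmetic in place of integer arithmetic, the same loop produces (1).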

The proof of uniqueness provides a bonus: a criterion for which of two normal forms is larger. It’s what you’d expect from decimal notation. Suppose

ζ = β^{α_1}χ_1 + ··· + β^{α_k}χ_k
ζ′ = β^{α_1′}χ_1′ + ··· + β^{α_m′}χ_m′

are two unequal normal forms. Because cancellation on the left holds for addition, we can remove any equal terms on the left, and assume either α_1 < α_1′, or α_1 = α_1′ and χ_1 < χ_1′ (for the reverse inequalities, just flip things around). Then ζ < ζ′. I will call this the first difference criterion for the ordering of Cantor normal forms.

The proof resembles a decimal computation. To show that 999 < 1000 (for example), we add 1 and do the carries. Here, we begin by combining the last two terms of ζ, using the facts that χ_k < β and α_{k−1} > α_k:

β^{α_{k−1}}χ_{k−1} + β^{α_k}χ_k < β^{α_{k−1}}χ_{k−1} + β^{α_k+1}
≤ β^{α_{k−1}}χ_{k−1} + β^{α_{k−1}}
= β^{α_{k−1}}(χ_{k−1}+1)

Next we combine with the previous term, using the facts that χ_{k−1}+1 ≤ β and α_{k−2} > α_{k−1}:

β^{α_{k−2}}χ_{k−2} + β^{α_{k−1}}(χ_{k−1}+1) ≤ β^{α_{k−2}}χ_{k−2} + β^{α_{k−1}+1}
≤ β^{α_{k−2}}(χ_{k−2}+1)

We keep going, until eventually we have

ζ < β^{α_1}(χ_1+1)

If α_1 = α_1′ and χ_1 < χ_1′, then we have β^{α_1}(χ_1+1) ≤ β^{α_1}χ_1′. Thus ζ is less than the first term of ζ′. If α_1 < α_1′ then we use the fact that χ_1+1 ≤ β, concluding that β^{α_1}(χ_1+1) ≤ β^{α_1+1} ≤ β^{α_1′} ≤ β^{α_1′}χ_1′, and hence ζ is again less than the first term of ζ′. A fortiori, ζ < ζ′.

Uniqueness follows. These ordering criteria are needed for the Goodstein and Hydra theorems.
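In the finite-exponent case (ordinals below ω^ω, each normal form stored as its list of (exponent, coefficient) pairs) the first difference criterion is exactly lexicographic comparison; a sketch, with the representation and names being our own choices:

```python
def cnf_less(terms1, terms2):
    """First difference criterion: scan two Cantor normal forms in
    parallel; the first position where they differ decides the order.
    Each argument is a list of (exponent, coefficient) pairs with
    strictly decreasing exponents, as for ordinals below omega^omega
    (so exponents are natural numbers)."""
    for t1, t2 in zip(terms1, terms2):
        if t1 != t2:
            return t1 < t2            # compare exponent first, then coefficient
    return len(terms1) < len(terms2)  # a proper initial segment is smaller
```

For example ω²·1 + ω·5 < ω²·2, since the forms first differ at the leading coefficient: `cnf_less([(2, 1), (1, 5)], [(2, 2)])` returns `True`.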

[1] However, Stillwell messes up at the end of the example. In the last bullet he says, “This means that α is a term in the following sequence with limit ω^{ω²·7+ω·4+11}+ω = ω^{ω²·7+ω·4+12}.” This equation is incorrect; the last “+ω” should be “·ω”. The sequence on the next line should have “·1”, “·2”, etc., instead of “+1”, “+2”, etc. So he has many more steps to go.

Another point. When he says that α “falls between” two terms in a sequence, he’s not entitled to assume it falls strictly between. He should say instead that there are two consecutive terms with α greater than or equal to the first and less than the second.

Prev TOC Next

Leave a comment

Filed under Set Theory

Set Theory Jottings 8. Ordinal Arithmetic

Prev TOC Next

Usually one defines the ordinal operations via transfinite induction:

Continue reading

2 Comments

Filed under Set Theory

Set Theory Jottings 7. The (Cantor-Dedekind-Schröder)-Bernstein Theorem

Prev TOC Next

The trichotomy of cardinals says that for any 𝔪 and 𝔫, exactly one of these holds: 𝔪<𝔫, 𝔪=𝔫, or 𝔪>𝔫. It’s equivalent to the conjunction of these two propositions, for any two cardinals 𝔪 and 𝔫:

Continue reading

Leave a comment

Filed under History, Set Theory

Set Theory Jottings 6. Zorn’s Lemma

Prev TOC Next

Zermelo’s 1904 proof of the well-ordering theorem got a lot of blowback, as we’ve seen. On the other hand, the very next year Hamel used it to prove the existence of a so-called Hamel basis. In 1910, Steinitz made numerous applications in the theory of fields. He wrote:

Continue reading

Leave a comment

Filed under History, Set Theory

Nonstandard Models of Arithmetic 32

Prev TOC Next
Previous Paris-Harrington post

[Ed. note: This post was essentially ready two years ago, but I got distracted with other matters. If you’re seeing this for the first time, or want to refresh your memory, posts 8 and 9 introduced the Paris-Harrington theorem. Posts 21 through 24 continued the discussion, in a dialog with Bruce Smith. MW]

Continue reading

Leave a comment

Filed under Conversations, Peano Arithmetic

First-Order Categorical Logic 12

Prev TOC Next

MW: Last time we looked at the categorical rendition of “C is a model of B”:

  • Functors B:FinSet→BoolAlg and C:FinSet→BoolAlg
  • A natural transformation F:BC

where B and C are hyperdoctrines, and

  • B is syntactic: the elements of each B(n) are equivalence classes of formulas (which we agreed to call predicates);
  • C is semantic: the elements of each C(n) are relations on a domain V.

(We’ve been saying that C(n) is the set of all n-ary relations on V, but I see no need to assume that.)
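As a toy instance of the semantic side (our own illustration, with a domain picked arbitrarily): C(2) is the Boolean algebra of binary relations on V, with meet, join and complement given by the usual set operations.

```python
from itertools import product

V = {0, 1, 2}                       # a small domain, chosen arbitrarily
top = set(product(V, repeat=2))     # top element of C(2): the full binary relation on V

less = {(a, b) for (a, b) in top if a < b}   # one element of C(2)
geq = top - less                             # its Boolean complement

# Meet and join are just intersection and union, as in any algebra of subsets:
assert less & geq == set()
assert less | geq == top
```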

Continue reading

Leave a comment

Filed under Categories, Conversations, Logic