# Monthly Archives: June 2019

## Non-Standard Models of Arithmetic 10

JB: So, last time you sketched the proof of the Paris–Harrington theorem. Your description is packed with interesting ideas, which will take me a long time to absorb. Someday I should ask some questions about them. But for now I’d like to revert to an earlier theme: how questions about the universe of sets cast their shadows down on the world of Peano arithmetic.

It seems a certain class of logical principles let us construct fast-growing functions $f \colon \mathbb{N} \to \mathbb{N}$: functions that grow faster than any function we could construct without assuming these principles. And these logical principles are also connected to large countable ordinals. For example, there’s a simple theory of arithmetic called PRA, for ‘primitive recursive arithmetic’, that’s only powerful enough to define primitive recursive functions. There are various tricks for coding up countable ordinals as natural numbers and defining the operations of ordinal arithmetic using these code numbers, but PRA is only strong enough to do this up to $\omega^\omega.$ So, we say it has proof-theoretic ordinal $\omega^\omega.$ Corresponding to this limitation (in some way I don’t fully understand, but can intuit), there are functions that grow too fast to be described by PRA, like the Ackermann function.
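For concreteness, here is a sketch of the two-argument Ackermann(–Péter) function in Python. It is a perfectly good computable, total function; the point is just that it outgrows everything PRA can handle:

```python
import sys

sys.setrecursionlimit(100_000)  # the naive recursion gets deep quickly

def ackermann(m: int, n: int) -> int:
    """Two-argument Ackermann-Peter function: computable and total,
    but growing faster than every primitive recursive function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

For example, ackermann(2, 3) = 9 and ackermann(3, 3) = 61, while ackermann(4, 2) already has 19,729 decimal digits (and is hopeless to reach by this naive recursion).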

There’s a whole hierarchy of more powerful theories of arithmetic with larger proof-theoretic ordinals, nicely listed here:

• Wikipedia, Ordinal analysis.

Presumably theories with larger proof-theoretic ordinals can define faster-growing functions. What’s the precise connection? I know that you can use countable ordinals to index fast-growing functions, via the fast-growing hierarchy, so there’s an obvious conjecture to be made here.
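The bottom of the fast-growing hierarchy is easy to compute with directly. A sketch of the finite levels, plus the first diagonalization at $\omega$ (using the standard fundamental sequence $\omega[n] = n$):

```python
def fgh(level: int, n: int) -> int:
    """Fast-growing hierarchy at finite levels:
    f_0(n) = n + 1, and f_{k+1}(n) = f_k iterated n times starting at n."""
    if level == 0:
        return n + 1
    result = n
    for _ in range(n):
        result = fgh(level - 1, result)
    return result

def fgh_omega(n: int) -> int:
    """f_omega diagonalizes: f_omega(n) = f_n(n), already Ackermann-like."""
    return fgh(n, n)
```

So f_1(n) = 2n, f_2(n) = n·2^n, and f_ω already grows like the Ackermann function, past the reach of PRA.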

Anyway, by the time we reach PA, the proof-theoretic ordinal is $\epsilon_0.$ There are still functions that grow too fast for PA, like Goodstein’s function:

• Andrés Caicedo, Goodstein’s function.
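Goodstein’s function comes from hereditary base notation: write n in base b with the exponents themselves written in base b, and so on; then bump every b to b+1 and subtract one. A minimal sketch (function names are mine):

```python
def hereditary_bump(n, b):
    """Write n in hereditary base-b notation (exponents recursively
    in base b as well), then replace every occurrence of b by b + 1."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:
        digit = n % b
        if digit:
            result += digit * (b + 1) ** hereditary_bump(power, b)
        n //= b
        power += 1
    return result

def goodstein(n, steps):
    """The first `steps` terms of the Goodstein sequence starting at n
    (stopping early if it hits 0)."""
    seq, base = [n], 2
    for _ in range(steps - 1):
        if seq[-1] == 0:
            break
        seq.append(hereditary_bump(seq[-1], base) - 1)
        base += 1
    return seq
```

Starting at 3 the sequence dies quickly: 3, 3, 3, 2, 1, 0. Starting at 4 it begins 4, 26, 41, … and famously takes an astronomical number of steps to reach zero, even though (by Goodstein’s theorem) it always does.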

What does ‘too fast for PA’ mean? I mean it’s a partial recursive function that PA can’t prove is total.

I know Goodstein’s function is somehow related to Goodstein’s theorem, and you seemed to say that’s in turn related to the Paris–Harrington theorem. So I’m wondering: is the Paris–Harrington principle provable in some theory of arithmetic that has proof-theoretic ordinal a bit bigger than $\epsilon_0$?

If so, what is this ordinal?

Here’s another way to express my puzzlement. The Paris-Harrington principle is connected to a fast-growing function. Wikipedia says:

The smallest number N that satisfies the strengthened finite Ramsey theorem is a computable function […], but grows extremely fast. In particular it is not primitive recursive, but it is also far larger than standard examples of non-primitive recursive functions such as the Ackermann function. Its growth is so large that Peano arithmetic cannot prove it is defined everywhere, although Peano arithmetic easily proves that the Ackermann function is well defined.
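The “strengthened” part of the finite Ramsey theorem is the requirement that the homogeneous set H be relatively large: |H| ≥ min(H). For tiny parameters this can be checked by brute force; a sketch (the function and parameter names are mine):

```python
from itertools import combinations

def is_relatively_large(H):
    """A finite set of positive integers is 'relatively large' if its
    cardinality is at least its smallest element."""
    return len(H) >= min(H)

def has_large_homogeneous(N, r, colour, size):
    """Does {1,...,N} contain a relatively large H with |H| >= size
    whose r-element subsets all receive the same colour?"""
    for k in range(max(size, r), N + 1):
        for H in combinations(range(1, N + 1), k):
            if not is_relatively_large(H):
                continue
            if len({colour(s) for s in combinations(H, r)}) == 1:
                return True
    return False
```

For instance, colouring pairs by the parity of their sum, {1, 3, 5} is a relatively large homogeneous set inside {1,…,6}, while {1, 2, 3} alone contains none. The Paris–Harrington principle says such an H always exists once N is big enough; the catch is how fast that threshold grows.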

What ordinal indexes the growth rate of this function, via the fast-growing hierarchy?

MW: Let me first ask you about your phrasing: “Presumably theories with larger proof-theoretic ordinals can define faster-growing functions.” By “define”, do you mean, “has a formula for, and can prove that the formula defines a total function”? I’d use “define” to mean just the first clause. I’d like to keep separate what we can prove from what we can express.

JB: I was being vague, mainly because I don’t really know what I’m talking about. But okay, let’s say I meant “has a formula for, and can prove that the formula defines a total function”. That sounds like what I should have said.

MW: I think it’s also time to mention the Church–Kleene ordinal $\omega_1^{\text{CK}}$, which looks down from a lofty height on all the proof-theoretic ordinals. Past $\omega_1^{\text{CK}}$, it seems we don’t even have a decent way to talk about the ordinals (formally in L(PA)), let alone prove things about them. (Or so I’ve read.)

I don’t know proof theory nearly as well as model theory. That’s something I hope to remedy as this series continues. Let me say a few things I’m fairly confident about, plus some guesses.

Things can get finicky, especially low down among the ordinals—many of the different choices seem to “wash out” by the time we get to $\epsilon_0.$ You mention PRA, the theory of primitive recursive functions. This theory has no quantifiers! Instead, it has a function symbol for every primitive recursive function. I’d rather think about $I\Sigma_1$, which is like Peano arithmetic, except we have the induction axioms only for $\Sigma_1$ formulas. (Recall that a formula is $\Sigma_1$ if it consists of an existential quantifier in front of a formula with only bounded quantifiers.) A modeler of PA would find this an attractive theory: $\Sigma_1$ formulas are upwards absolute between end extensions.

Now, $I\Sigma_1$ and PRA both have the same proof-theoretic ordinal: $\omega^\omega.$ Does this mean they’re “basically the same”? My guess is yes, at least in the sense of “mutual interpretability” (like PA and ZF¬∞). Maybe they’re even synonymous! (Or what Ketland called “definitionally equivalent”.)

But what, exactly, is the meaning of “proof-theoretic ordinal”? (Let’s stick with theories in the language of PA.) This could mean: the smallest recursive ordinal $\alpha$ for which the theory cannot prove the validity of transfinite induction up to $\alpha$ (and hence can prove it up to any $\beta<\alpha$). (That’s the definition in the Wikipedia article on Ordinal analysis.) Or it could mean: the smallest $\alpha$ for which transfinite induction up to $\alpha$ (plus some very basic stuff) proves the consistency of the theory. (That’s the definition in Rathjen’s Art of Ordinal Analysis, a paper you’ve cited in the past. For “very basic stuff”, Rathjen chooses ERA = Elementary Recursive Arithmetic, a subsystem of PRA.) I imagine you prove the equivalence of these two definitions—whenever they are equivalent—using arguments patterned after Gentzen’s proofs for $\epsilon_0.$

Then we have the various function scales: the Hardy, Wainer (aka fast-growing), and slow-growing hierarchies. You might want to say that the proof-theoretic ordinal is the smallest $\alpha$ for which the theory cannot prove that Hardy’s $h_\alpha$ is total (and hence it can prove that $h_\beta$ is total for any $\beta<\alpha$). I’ve seen that definition in various places; here’s a Stack Exchange question about it.
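Below $\omega^2$ the Hardy hierarchy is small enough to compute outright. A sketch, coding the ordinal $\omega\cdot a + b$ as the pair (a, b) and using the standard fundamental sequence $(\omega\cdot a)[n] = \omega\cdot(a-1)+n$ (the pair encoding is my choice for the demo):

```python
def hardy(alpha, n):
    """Hardy hierarchy below omega^2, with alpha = (a, b) coding w*a + b:
    H_0(n) = n,  H_{alpha+1}(n) = H_alpha(n + 1),
    and at a limit w*a we pass to the fundamental sequence w*(a-1) + n."""
    a, b = alpha
    if a == 0 and b == 0:
        return n
    if b > 0:
        return hardy((a, b - 1), n + 1)   # successor step
    return hardy((a - 1, n), n)           # limit step: (w*a)[n] = w*(a-1)+n
```

So $H_k(n) = n + k$ for finite k, $H_\omega(n) = 2n$, $H_{\omega\cdot 2}(n) = 4n$: each limit stage doubles, and by $H_{\omega^\omega}$ we are up to Ackermann-type growth, matching PRA’s proof-theoretic ordinal.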

Here’s a puzzle for you (for me too): glance at the definition of the Ackermann function. Say, the two-place version. As the Wikipedia article notes, the value of A(m, n) depends on values of A(m*, n*) with (m*, n*) preceding (m, n) in the lexicographic ordering—looks like a job for transfinite induction up to $\omega^2$! Yet the Ackermann function outpaces every primitive recursive function, and as we’ve read, PRA handles induction up to any ordinal less than $\omega^\omega$!
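One can literally watch that descent: instrument the recursion and assert that every recursive call strictly decreases the argument pair in the lexicographic order (which Python’s tuple comparison happens to implement). A sketch:

```python
def ackermann_checked(m, n, caller=None):
    """Ackermann function, asserting that each recursive call's
    argument pair strictly decreases in the lexicographic order."""
    if caller is not None:
        assert (m, n) < caller  # tuple comparison is exactly lex order
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann_checked(m - 1, 1, caller=(m, n))
    inner = ackermann_checked(m, n - 1, caller=(m, n))
    return ackermann_checked(m - 1, inner, caller=(m, n))
```

Every call site passes its own (m, n) down, and no assertion ever fires: the recursion really is transfinite descent below $\omega^2$.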

Things are more clear-cut for $\epsilon_0.$ First off, Con(PA), induction up to $\epsilon_0$, the Paris–Harrington principle, the generalized Goodstein theorem, a version of the Hydra theorem, and the Kanamori–McAloon principle (don’t ask) are all equivalent over PA. The Hardy and Wainer (aka fast-growing) hierarchies “catch up” to each other, so it doesn’t matter which we use to define sub-$\epsilon_0$ growth rate. The Buchholz–Wainer theorem says that a function is provably recursive in PA if and only if it appears in the Wainer (or Hardy) hierarchy somewhere before level $\epsilon_0.$

So to answer your question about growth rates: the Paris–Harrington function (like the Goodstein function) belongs to level $\epsilon_0$ of the fast-growing hierarchy (also the Hardy hierarchy). OK, what about the proof-theoretic ordinal of the theory you get by adding any of these to PA? I’m not so sure. It should be the “next” proof-theoretic ordinal after $\epsilon_0$, but I’m pretty sure that’s not $\epsilon_0+1$: once you’ve got transfinite induction up to $\epsilon_0$, it’s no sweat to push it a bit further. My first guess would be $\omega^{\epsilon_0}$, except that’s equal to $\epsilon_0$! Maybe $\omega^{\epsilon_0+1}$? Maybe $\epsilon_1$?

It might be fun/useful to think through these matters out loud sometime.

JB: In my readings I got the impression that the two definitions of proof-theoretic ordinal are not equivalent. I thought that the “smallest α for which transfinite induction up to α (together with some basic stuff) proves the consistency of the theory” definition was a naive one inspired by Gentzen’s proof of the consistency of PA using induction up to $\epsilon_0.$ I thought the “smallest α for which the theory cannot prove the validity of transfinite induction up to α” definition was the one that people actually use these days.

For example, people say that ZFC has a proof-theoretic ordinal, some countable ordinal below the Church-Kleene ordinal that’s so large nobody has figured out how to talk about it. But I’ve never seen anyone claim that you could prove the consistency of ZFC using transfinite induction up to some whopping big countable ordinal (together with some other basic stuff). If this is true I’d be very interested!

I’ll ask on mathoverflow.

MW: My impressions track yours pretty closely. Although I don’t know that any one definition has driven all others off the field. As long as there are thesis topics to assign and papers to write, people will investigate all conceivable definitions, and try to suss out how they relate.

I mentioned my favorite definition above, which coincidentally is the one Noah Schweber mentions in his answer to your mathoverflow question:

The computational proof-theoretic ordinal $|T|_{\text{comp}}$ of $T$ is the supremum of the computable ordinals α such that there is some notation n for α which T proves is well-founded.

As he points out, the “consistency” definition has some rather serious problems. Pohlers’s Proof Theory says:

…it is tempting to regard the order type of the shortest primitive recursive well-ordering which is needed in a consistency proof for a theory T as characteristic for T. That this idea is malicious was later detected by Georg Kreisel…

who cooked up a way to make the “consistency” ordinal ω for any “reasonable” theory. (Noah Schweber says “pathological” and “appropriate” instead of Pohlers’s “malicious” and “reasonable”. I’m guessing his “pathological” example is the same one that Pohlers describes, but I haven’t checked.)

By the way, you seem skeptical that one could prove the consistency of ZF using induction up to some countable ordinal (on top of some basic stuff). I can’t say I have an intuition on this, one way or the other. Asserting the consistency of any recursively axiomatizable theory is, after all, a purely combinatorial (i.e., syntactic) claim. Con(ZF) is but a small pale shadow, cast down into the finite realm by the magic mountain of V.

I’m mostly curious about how the computational proof-theoretic ordinals relate to the growth-rate hierarchies. Maybe we’ll revisit this when I’ve spent more time boning up on proof theory.

JB: Yes, let’s move on to Enayat! I suspect we’ll loop back to these themes at various points.

Filed under Conversations, Peano arithmetic

## First-Order Categorical Logic 2

MW: So let’s see. Last time we talked about the functor B from the category FinSet to the category BoolAlg of boolean algebras. Liberal infusions of coffee convinced you that B is covariant; I accidentally suggested it was contravariant. I think I’ve come round to your position, but I still have a couple of things I want to say on the matter. If it won’t be too confusing for our readers.

JB: Okay.  If we’re planning to talk more about the variance, it’s probably good to start out by getting the reader a bit confused.  I used to always be confused about it myself.  Then I finally felt I had it all straightened out.  Then you shocked me by arguing that it worked the opposite way.  Your argument was very sneaky.

MW: I didn’t mean to be sneaky, I just kind of stumbled into it! Here’s where I think I went astray. I said: an n-ary predicate is a function from $V^n$ to {true, false}. A point in $V^n$ is an n-tuple $(x_1,\ldots,x_n)$, which we can think of as a function from {1,…,n} to V, thus: $i\mapsto x_i$. So if we have a function f:{1,…,m}→{1,…,n}, we form the composition

$\{1,\ldots,m\}\xrightarrow{f}\{1,\ldots,n\}\xrightarrow{x}V^n\xrightarrow{P}\{\text{true},\text{false}\}$

with P being the n-ary predicate. But that composition isn’t right. If we start with i, we’d have

$i\mapsto f(i)\mapsto x_{f(i)}\mapsto\text{ERROR!}$

because P takes as input an n-tuple, not a single component $x_{f(i)}$.

We can still use the pullback trick to beget the m-tuple $(x_{f(1)},\ldots,x_{f(m)})$ from the n-tuple $(x_1,\ldots,x_n)$. Hmm, P better take m-tuples as input, if we don’t want another ERROR! Ok, say P is an element of B(m), and f:{1,…,m}→{1,…,n}, and x is an n-tuple. Then $x\mapsto x\circ f=f^* x$, say. And the composition $x\mapsto f^* x\mapsto P(f^* x)$ works. In other words, the pullback $P\circ f^*$ belongs to B(n), and the map $P\mapsto P\circ f^*$ is Bf:B(m)→B(n). Covariant, just like you and the coffee wanted.

Hmm… I was thinking about the case where f is a surjection. But it also works for injections. Say m=2 and n=3, and f is the function from {1,2} to {1,2,3} with f(1)=1 and f(2)=3. We start with a 2-ary predicate P, say $x_1 = x_2$. The first pullback “thins out” the 3-tuple $(x_1,x_2,x_3)$ to $(x_1,x_3)$, and the second pullback gives us the “3-ary” predicate $x_1 = x_3$—which is 3-ary only in the sense that the emperor really was wearing new clothes.

In fact, the two pullbacks work just fine for any function f whatsoever.
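This is all easy to make concrete over a finite universe. A sketch in Python, where an m-ary predicate is a function on m-tuples, the pullback is precomposition, and Bf is “predicate after pullback” (the universe V and the encoding of f as a tuple of 1-based indices are my choices for the demo):

```python
from itertools import product

V = [0, 1, 2]  # a small finite universe, just for the demo

def pullback(f, n_tuple):
    """f*: V^n -> V^m, sending x to x o f, i.e. (x_f(1), ..., x_f(m)).
    Here f is encoded as the tuple (f(1), ..., f(m)) of 1-based indices."""
    return tuple(n_tuple[i - 1] for i in f)

def B(f):
    """B(f): B(m) -> B(n) sends an m-ary predicate P to P o f*,
    an n-ary predicate.  Covariant: same direction as f."""
    return lambda P: (lambda x: P(pullback(f, x)))

# The example from the text: f: {1,2} -> {1,2,3}, f(1)=1, f(2)=3,
# applied to the 2-ary predicate "x1 = x2".
f = (1, 3)
P = lambda pair: pair[0] == pair[1]
Q = B(f)(P)  # the "3-ary" predicate x1 = x3

# Q really only looks at the 1st and 3rd components:
for x in product(V, repeat=3):
    assert Q(x) == (x[0] == x[2])
```

The final loop checks the emperor’s-new-clothes point: the “3-ary” Q never consults its middle component.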

We also have our other example of a Boolean algebra, namely equivalence classes of formulas. I’ve thought about this for a bit. But I think I’ve said enough for the moment.

JB: Okay. I agree with everything you just said. It’s good that we got those pullbacks straightened out, and I hope all our readers dwell on them for long enough to really understand, because getting this issue straight is a prerequisite for all the fancier stuff we’ll be doing.

By the way: don’t be disheartened! Category theory reduces all of mathematics to the study of arrows. The only mistake you can make with an arrow is to get confused about which way it’s pointing. And so, in category theory we spend endless hours confused about which way the arrows are pointing.

I think we now have a choice before us. One choice, the high road, is to figure out how quantifiers get into the game. So far we have a humble functor

$B \colon \textrm{FinSet} \to \textrm{BoolAlg}$

which assigns a Boolean algebra of predicates to any set of variables. These Boolean algebras let us do “and”, “or”, “not” and the like, and the functoriality lets us do substitution of variables. But we really want to get quantifiers—“for all” and “there exists”—into the game. We could do it by brute force, just putting them in and demanding that they obey the laws we want, and we’d get something very much like the polyadic Boolean algebras invented by Halmos.

But this brute-force approach is rather ugly. It’s much more slick, and enlightening, to get quantifiers into the game using adjoint functors. That’s how Lawvere did it! He called the resulting gadgets “hyperdoctrines”—a fearsome term for something that deserves a more descriptive name, like “first-order theories”.

• Bill Lawvere, Adjointness in foundations, Dialectica 23 (1969), 281–296. (Reprinted in Theory and Applications of Categories, 16 (2006), 1–16.)
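To preview the high road over a finite universe: substitution along f has a left adjoint that is exactly “there exists”, and the adjunction can be checked exhaustively. A sketch (the two-element universe and the toy f are my choices, not anything canonical):

```python
from itertools import product

V = [0, 1]  # a tiny universe so we can check everything exhaustively

def pullback(f, x):
    """f*: sends an n-tuple x to the m-tuple x o f; f is encoded as
    the tuple (f(1), ..., f(m)) of 1-based indices."""
    return tuple(x[i - 1] for i in f)

def subst(f):
    """Substitution B(f): an m-ary predicate P becomes the n-ary
    predicate P o f*."""
    return lambda P: (lambda x: P(pullback(f, x)))

def exists_along(f, n):
    """Left adjoint to subst: (E_f Q)(y) holds iff some n-tuple x
    with x o f = y makes Q true."""
    return lambda Q: (lambda y: any(
        Q(x) for x in product(V, repeat=n) if pullback(f, x) == y))

def implies(P, Q, arity):
    """The Boolean-algebra order: P <= Q pointwise on V^arity."""
    return all(Q(x) for x in product(V, repeat=arity) if P(x))

# f: {1} -> {1,2} with f(1) = 1, so E_f existentially quantifies x2.
f, m, n = (1,), 1, 2
# Exhaustively verify the adjunction:  E_f Q <= P  iff  Q <= B(f) P.
for Pbits in product([False, True], repeat=len(V) ** m):
    for Qbits in product([False, True], repeat=len(V) ** n):
        P = dict(zip(product(V, repeat=m), Pbits)).get
        Q = dict(zip(product(V, repeat=n), Qbits)).get
        assert implies(exists_along(f, n)(Q), P, m) == implies(Q, subst(f)(P), n)
```

The loop runs over every pair of predicates and never finds a counterexample: “∃ on the left, substitution on the right” really is an adjunction, which is Lawvere’s starting point.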

And it turns out these adjoints will give us “equality” for free: that is, predicates that say things like x = y. As you know better than I, there’s “first-order logic” and its slightly more refined brother, “first-order logic with equality”. The categorical approach very naturally gives first-order logic with equality. Lawvere pointed that out too.

So, instead of getting Halmos’ polyadic Boolean algebras, we’ll actually get something closer to Tarski’s “cylindric algebras”, which include equality.

All this is delightful.

But there’s a slight catch! The trick with adjoints will automatically give us some of the laws of first-order logic with equality. But it won’t give us all those laws. For that we need to impose two more conditions: Frobenius reciprocity and the Beck–Chevalley condition.

The good news is that these extra conditions aren’t made up from thin air: they also show up in group representation theory and the theory of presheaves. I want to explain them, and it should be quite a nice story.

The bad news is that it’ll take a while, and I’m dying to apply this new approach to first-order logic. Just yesterday, while sitting bored in a conference, I tried translating Gödel’s completeness theorem into this language and it turned out to be shockingly cute! It even gives me hope of finding a somewhat new proof!

So the high road is to talk about how adjoints bring quantifiers into the game, and discuss Frobenius reciprocity and the Beck–Chevalley condition. But the low road is to just assume that there’s some way to get quantifiers into the game, by brute force or elegant theory, and talk about applications of this way of thinking.

Either way, we should eventually get around to doing both. What do you prefer?

MW: I’m opting for the high road, at least for a few more posts. Logicians live and breathe quantifiers! Also, I remember Frobenius reciprocity from group rep theory—that was neat! And I’ve been fond of presheaves ever since learning about Riemann surfaces. I’ll enjoy running into these old friends in a new city.

JB: Okay, the high road it is! Put on your boots and get ready for some exercise!