This is the first in a series of posts, recording an e-conversation between John Baez (JB) and me (MW).

**JB:** I’ve lately been trying to learn about nonstandard models of Peano arithmetic. Do you know what a “recursively saturated” model is? They’re supposed to be important but I don’t get the idea yet.

**MW:** What books and/or papers are you reading? I used to know this stuff, indeed my thesis (1980) was on existentially complete models of arithmetic. When I looked at it a couple of years ago, I was amazed at how much I’d forgotten. Talk about depressing.

Anyway, I’ll toss out a few vague ideas, to see if they help. Maybe this will be the push I need to get back to Kaye’s book, or even Kossak & Schmerl. I picked them up a few months ago, hoping to revisit my youth, but I didn’t make it past the prefaces.

As Hodges puts it, model theory is “algebraic geometry minus fields”. If you have an algebraic number *r* in an extension field *K/F*, it’s natural to look at all the polynomials in *F*[*x*] which have *r* as a root. It turns out that this is a principal ideal, generated by the minimal polynomial.
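To see the “principal ideal” fact in action, here’s a small sketch in Python using exact rational arithmetic (the helper `polydiv` is my own name, not a library function): a polynomial over ℚ vanishes at √2 exactly when division by the minimal polynomial *x*² − 2 leaves zero remainder, since a nonzero rational remainder of degree ≤ 1 can’t have the irrational √2 as a root.

```python
from fractions import Fraction

def polydiv(num, den):
    """Divide polynomials over Q (coefficient lists, constant term first).
    Returns (quotient, remainder)."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    quot = [Fraction(0)] * max(len(num) - len(den) + 1, 1)
    rem = num[:]
    while len(rem) >= len(den) and any(rem):
        shift = len(rem) - len(den)
        factor = rem[-1] / den[-1]          # match leading coefficients
        quot[shift] = factor
        for i, c in enumerate(den):
            rem[shift + i] -= factor * c
        rem.pop()                            # leading coefficient is now zero
    return quot, rem

# Minimal polynomial of sqrt(2) over Q: x^2 - 2, coded as [-2, 0, 1].
min_poly = [-2, 0, 1]

# x^4 - 4 = (x^2 - 2)(x^2 + 2) vanishes at sqrt(2): zero remainder.
q, r = polydiv([-4, 0, 0, 0, 1], min_poly)
print(q, r)

# x^3 - 1 does not vanish at sqrt(2): nonzero remainder 2x - 1.
q, r = polydiv([-1, 0, 0, 1], min_poly)
print(q, r)
```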

Sort-of generalize to an arbitrary model *M* of a theory *T*. Let (*a*_{1}, …, *a*_{n}) be an *n*-tuple of elements of *M*. Look at the set of all formulas φ(*x*_{1}, …, *x*_{n}) such that *M* satisfies φ(*a*_{1}, …, *a*_{n}). This is the complete *n*-type of (*a*_{1}, …, *a*_{n}) with respect to *M*.

Unlike the case in field theory, complete *n*-types are not usually implied (“generated”) by a single formula, but when they are, they are called *principal*.

Next step is to free the notion of *n*-type from dependence on the model *M*. If we have a set of formulas {φ_{i}(*x*_{1}, …, *x*_{n})} that is consistent with *T*, then it’s an *n*-type. (Sort of like a polynomial in *F*[*x*], looking for a root.) The type is realized in a model *M* if there is an *n*-tuple (*a*_{1}, …, *a*_{n}) such that *M* satisfies all the φ_{i}(*a*_{1}, …, *a*_{n}). It’s omitted if there is no such *n*-tuple in *M*.

The omitting types theorem says that any countable collection of non-principal *n*-types in a countable language can all be omitted by some model of *T*.

You can imagine that *n*-types tell us a lot about possible isomorphisms and automorphisms of models. The poster child: the theory DLO of dense linear orderings without endpoints. This is because there are essentially only very simple types. Pretty much the only thing you can say about an *n*-tuple (*r*_{1}, …, *r*_{n}) is what permutation of the subscripts puts them in monotonically increasing order, and for which subscripts *i* and *j* we have *r*_{i} = *r*_{j}. Anything else about the *n*-tuple is implied by this. Especially significant is that no quantifiers need apply.

And of course, any two countable DLOs are isomorphic, and there are lots of automorphisms of a countable DLO. If you look at the back-and-forth argument, you’ll see it relies on the limited repertoire of types.
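The back-and-forth argument is really an algorithm, so it can be sketched in code. Here’s a minimal Python illustration (the helper name `extend` is mine): given a finite order-preserving matching between two copies of ℚ, density and the lack of endpoints guarantee we can always extend it by one more element, which is exactly the step the back-and-forth construction iterates.

```python
from fractions import Fraction

def extend(pairs, a):
    """Given a finite order-isomorphism `pairs` = [(a_i, b_i), ...] between
    two copies of Q, pick an image for a new left-side element `a`.
    Density and absence of endpoints guarantee a suitable image exists."""
    below = [b for (x, b) in pairs if x < a]
    above = [b for (x, b) in pairs if x > a]
    same = [b for (x, b) in pairs if x == a]
    if same:
        return same[0]
    if below and above:
        return (max(below) + min(above)) / 2   # density: a midpoint works
    if below:
        return max(below) + 1                  # no right endpoint
    if above:
        return min(above) - 1                  # no left endpoint
    return Fraction(0)                         # empty matching: anything works

# Grow a partial isomorphism one element at a time (the "forth" half;
# the "back" half is symmetric, swapping the roles of the two copies).
pairs = []
for a in [Fraction(1), Fraction(3), Fraction(2), Fraction(1, 2)]:
    pairs.append((a, extend(pairs, a)))

# Order is preserved at every stage:
assert all((x < y) == (u < v) for (x, u) in pairs for (y, v) in pairs)
print(pairs)
```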

Also, to hammer the analogy with field theory, the polynomials *r* satisfies tell us all about the isomorphisms and automorphisms.

OK, now specialize to PA. Key thing here is the overspill lemma. This allows you to code info about subsets of the standard natural numbers into single elements of a non-standard *M*. For example, the set of standard prime divisors of a non-standard *r* in *M*.

If the set you want to code is totally arbitrary, well, you can’t even express it in the language of PA, so we want to look at sets that are definable by a formula. Then we can apply the overspill lemma. Say φ(*x*) is the formula for the set. Consider the formula ψ(*n*): (∃*z*)(∀*x* < *n*)(φ(*x*) ↔ *p*_{x} | *z*). Here *p*_{x} is the *x*-th prime. Since ψ(*n*) holds for arbitrarily large finite *n*‘s (indeed all finite *n*‘s), overspill says that it also holds for some non-standard *n*. So there is a *z* such that φ(*x*) is true iff *p*_{x} | *z*, for all *x* < *n*. In particular it holds for all finite *x*, and so *z* codes the set via its prime divisors.
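In the model-theoretic argument, *z* is a single non-standard element coding an infinite standard set all at once; here is a finite-world Python sketch of the same coding trick (the helper names `code` and `decode` are mine), packing a set into a product of primes and recovering it by divisibility.

```python
from itertools import islice

def primes():
    """Generate primes by trial division."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def nth_prime(x):
    """p_x, the x-th prime (0-indexed, so p_0 = 2)."""
    return next(islice(primes(), x, None))

def code(S, bound):
    """Code S (restricted below `bound`) as the product of the primes p_x
    for x in S -- the role played by z in the overspill argument."""
    z = 1
    for x in S:
        if x < bound:
            z *= nth_prime(x)
    return z

def decode(z, bound):
    """Recover the set from z via divisibility by p_x."""
    return {x for x in range(bound) if z % nth_prime(x) == 0}

z = code({0, 2, 3, 7}, 10)
print(z, decode(z, 10))   # 1330 {0, 2, 3, 7}
```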

More generally, it would be nice to look at sets of *n*-tuples defined by an *n*-type. But we need some way to describe the *n*-type in the language of PA—we can’t describe an arbitrary set of formulas in the language of PA. So we look at recursive *n*-types, i.e., sets of formulas whose Gödel numbers form a recursive set.

If we’re lucky, we’ll be able to import a lot of classical model theory, developed in ZF (or even naive set theory) into PA, because we can code it all in PA. And this approach will help us understand things like extensions of one model by another, automorphisms, etc.

A couple more notes. First, the prenex normal form hierarchy shows up all the time in logic. A variant especially adapted to PA (the *arithmetic hierarchy*) is to let the Σ_{0} (= Π_{0}) formulas allow arbitrary bounded quantifiers, like (∀*x* < *t*) and (∃*x* < *t*). It turns out that a relation is recursively enumerable iff it is Σ_{1}. So that’s a nice bridge between the model theory of PA and computability theory.
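The “r.e. iff Σ_{1}” bridge has a concrete computational face: semi-deciding a Σ_{1} predicate ∃*y* φ(*n*, *y*) (with φ decidable) just means searching *y* = 0, 1, 2, … for a witness, halting iff one exists. A minimal sketch (the name `sigma1_search` is mine):

```python
def sigma1_search(n, check, max_steps=None):
    """Semi-decide a Sigma_1 predicate  ∃y φ(n, y)  with φ decidable:
    search y = 0, 1, 2, ... for a witness.  Halts iff a witness exists,
    which is exactly the behaviour of a machine enumerating an r.e. set."""
    y = 0
    while max_steps is None or y < max_steps:
        if check(n, y):
            return y
        y += 1
    return None  # gave up; for a genuine Sigma_1 set, "no" is never certain

# Example: n is a perfect square iff ∃y (y * y = n).
print(sigma1_search(49, lambda n, y: y * y == n))                  # 7
print(sigma1_search(50, lambda n, y: y * y == n, max_steps=100))   # None
```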

It also turns out that, because of the MRDP theorem, you don’t even need to allow bounded quantifiers at the ground level. Well, that’s a bit sloppy. What I mean is that Σ_{1} is the same as ∃_{1}: the formulas you get from putting a string of existential quantifiers in front of a quantifier-free formula.
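A classical hand-computable instance of an ∃_{1} (Diophantine) definition: *n* is composite iff ∃*x* ∃*y* (*n* = (*x* + 2)(*y* + 2)), a purely existential formula with a quantifier-free matrix. A hedged Python sketch (the function name is mine; the search bound is ad hoc but suffices here since any solution has *x*, *y* ≤ *n*):

```python
from itertools import product

def is_composite_diophantine(n, bound):
    """n is composite iff ∃x ∃y  n = (x + 2)(y + 2): an existential
    definition with a quantifier-free matrix.  Search x, y up to `bound`;
    returns a witness pair, or None if none is found."""
    for x, y in product(range(bound), repeat=2):
        if n == (x + 2) * (y + 2):
            return (x, y)
    return None

print(is_composite_diophantine(15, 20))   # (1, 3): 15 = 3 * 5
print(is_composite_diophantine(13, 20))   # None: 13 is prime
```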

I suggest that any discussion about non-standard models of arithmetic should highlight upfront the fact that all such models are putative set-theoretic models of ordinal arithmetic, and not models of PA.

Reason: A model of PA is well-defined if, and only if, PA is finitarily consistent.

Now, a finitary proof of consistency for PA is given in Theorem 6.8 (p.41) of the paper which appeared in the December 2016 issue of “Cognitive Systems Research”:

“The Truth Assignments That Differentiate Human Reasoning From Mechanistic Reasoning: The Evidence-Based Argument for Lucas’ Goedelian Thesis”

https://www.sciencedirect.com/science/article/pii/S1389041716300250?via%3Dihub

The paper addressed the philosophical challenge which arises when an intelligence—whether human or mechanistic—accepts arithmetical propositions as true under an interpretation—either axiomatically or on the basis of subjective self-evidence—without any specified methodology for evidencing such acceptance.

The paper showed that Tarski’s classic definitions permit both human and mechanistic intelligences to admit finitary, evidence-based, definitions of the satisfaction and truth of the atomic formulas of PA over the domain N of the natural numbers in two, hitherto unsuspected and essentially different, ways:

(i) In terms of classical algorithmic verifiability (Definition 1, p.37); and

(ii) In terms of finitary algorithmic computability (Definition 2, p.37).

Further, the two definitions correspond to two distinctly different assignments of satisfaction and truth to the compound formulas of PA over N—say I(PA, SV) and I(PA, SC) respectively—such that the PA axioms are true over N, and the PA rules of inference preserve truth over N, under both the interpretations (Theorem 5.6, p.40 and Theorem 6.7, p.41).

Now, whilst the interpretation I(PA, SV) corresponds to the weak, non-finitary, standard interpretation of PA, the interpretation I(PA, SC) is the strong, finitary, interpretation of PA sought by Hilbert in the second of the twenty-three problems he posed in 1900.

Moreover, the finitary proof of consistency for PA entails that PA can have no non-standard model (Theorem 7.1, p.41); and that PA is categorical (Corollary 7.2).

(See also Section 13, “The case against non-standard models of PA”, on p.87 of the following preprint:

The Significance of Goedel’s omega-consistency and Brouwer’s Intuitionism for Hilbert’s Program: Quantification from an Evidence-based Perspective

https://www.dropbox.com/s/ktrsitcj2x5qo0a/16_Anand_Dogmas_Update_IfCoLoG.pdf?dl=0)