Category Archives: Logic

Set Theory Jottings 12. Zermelo on “definiteness”

Prev TOC Next

In the last post, I mentioned Zermelo’s 1929 paper “On the concept of definiteness in axiomatics”. By this time, people had suggested replacing “definite” with “definable in first-order logic”. Zermelo did not agree with this.

Three points, I think, explain Zermelo’s views.

First, Zermelo never really “got” formal logic. He didn’t grasp the distinction between the meta-theory and the object theory, nor that between syntax and semantics. His correspondence with Gödel shows this. Another example: in the 1929 paper, he complains that the inductive definition of a first-order formula “depends on the concept of finite number whose clarification, after all, is supposed to be one of set theory’s principal tasks.” The inductions of course belong to the meta-theory.

Second, Zermelo approached axiomatization in the spirit of Euclid rather than with the philosophy of formalism. The axioms assert mathematical truths. They are not the arbitrary rules of a game. Hilbert’s famous “tables, chairs, and beer mugs” remark expresses the need to rid the development of any reliance on visual intuition. In much the same way, Zermelo stressed the purely objective character of the Axiom of Choice.

Third, Zermelo found Skolem’s countable model of ZF unacceptable. Recall the resolution of Skolem’s paradox: the power set of ω is not the “true” power set. In a 1930 paper, Zermelo gave his final version of the axioms of set theory. In a footnote to the Separation Axiom, he writes:

Like the replacement function in [the Replacement Axiom], the propositional function 𝔣(x) can be completely arbitrary here, and all consequences of restricting it to a particular class of functions cease to apply from the present point of view. I shall consider elsewhere more thoroughly “the question of definiteness” in connection with my last contribution to this journal and with the critical “remarks” by Mr. Th. Skolem.

The implicit criticism: if you hobble the Separation Axiom by allowing only first-order definable properties, no wonder you get a countable power set!

The 1929 paper offered the following definition of “definite property”. Zermelo writes Dp to say that p is a definite property. Then (changing notation and rewording somewhat, except for the quoted parts):

  1. First, “all fundamental relations are definite.”
  2. Definiteness is passed on to composite assertions as follows:
    1. If Dp, then D(¬p).
    2. If Dp and Dq, then D(p∧q) and D(p∨q).
    3. If Df(x,y,z,…) “for all (permissible) combinations of values”, then D((∀x,y,z,…)f(x,y,z,…)) and D((∃x,y,z,…)f(x,y,z,…)).
    4. If DF(f) “for all definite functors f”, then D(∀f F(f)) and D(∃f F(f)): “definiteness is passed on to the quantifications.”
  3. If P is the system of all definite properties, then “it has no proper subsystem P1” that contains all the fundamental relations and is closed under the compositions listed above.

In clause (II.4), the f ranges over properties (or “propositional functions”), so we have a second-order quantification. Furthermore, we have an implicit circularity: the scope of ∀f is restricted to definite properties, just what we’re in the midst of defining. But clause (III) is perhaps even worse: without a robust set theory, how are we to interpret the quantification over all subsystems of P? Skolem made both these points in his reply.
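[Ed. note: Clause (III) makes the definite properties the least system containing the fundamental relations and closed under the compositions: a least-fixed-point condition. A toy Python sketch of that idea, modeling only the propositional clauses (II.1)–(II.2) and treating “properties” as formula strings; the atoms and helper names here are invented for illustration, not anything of Zermelo's.]

```python
# Clause (III) as a least fixed point: iterate the closure operations
# starting from the fundamental relations; the least closed system is
# the union over all stages.  Only the propositional clauses (II.1)-(II.2)
# are modeled, and formulas are just strings -- purely illustrative.

def close_once(formulas):
    """Apply clauses (II.1)-(II.2) once: negations, conjunctions, disjunctions."""
    new = set(formulas)
    for p in formulas:
        new.add(f"(not {p})")
        for q in formulas:
            new.add(f"({p} and {q})")
            new.add(f"({p} or {q})")
    return new

def stage(fundamental, depth):
    """The depth-th stage of the closure; every system containing the
    fundamental relations and closed under the compositions must
    contain every stage."""
    system = set(fundamental)
    for _ in range(depth):
        system = close_once(system)
    return system

atoms = {"x in y", "x = y"}       # stand-ins for the fundamental relations
assert "(not x = y)" in stage(atoms, 1)
assert stage(atoms, 1) <= stage(atoms, 2)   # stages grow toward the least closed system
```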

Zermelo’s 1930 paper “On boundary numbers and domains of sets: New investigations in the foundations of set theory” gave (as I mentioned above) his final version of the axioms. This includes both Replacement and Foundation, but curiously not Choice—as an axiom. Zermelo writes:

We have not explicitly formulated the “axiom of choice” here because it differs in character from the other axioms […] However, we use it as a general logical principle upon which our entire investigation is based; in particular, it is on the basis of this principle that we shall assume in the following that every set is capable of being well-ordered.

Prev TOC Next


Filed under History, Logic, Set Theory

First-Order Categorical Logic 14

Prev TOC Next

JB: So, let’s think about how we can prove this generalization of Gödel’s completeness theorem. First, remember that a hyperdoctrine B is consistent iff B(0) has at least two elements, or in other words, ⊤ ≠ ⊥ in this boolean algebra. Second, let’s say a hyperdoctrine C is set-based if every C(n) is the power set of Vn for some fixed set V. We call V the universe. Third, let’s say a morphism of hyperdoctrines, say F: B → C, is a natural transformation whose components F(n): B(n) → C(n) are boolean algebra homomorphisms obeying the Beck–Chevalley condition and maybe the Frobenius condition. (We’re a bit fuzzy about this and we’ll probably have to sharpen it up.)

Then let’s try to prove this: a hyperdoctrine is consistent iff it has a morphism to some set-based hyperdoctrine.

Here’s my rough idea of one way to try to prove it: use Stone’s representation theorem for boolean algebras! This says any boolean algebra is isomorphic to the boolean algebra of clopen subsets of some Stone space. A Stone space is a topological space that’s profinite, i.e. a limit of finite spaces with their discrete topology. A subset of a topological space is clopen if it’s closed and open.

The clopen subsets of a topological space always form a boolean algebra under the usual intersection, union and complement. So, the cool part of Stone’s representation theorem is that every boolean algebra shows up this way—up to isomorphism, that is—from a very special class of topological spaces: the ones that are almost finite sets with their discrete topology. A good example of such a space is the Cantor set.

I hope you see how this might help us. We’re trying to show every consistent hyperdoctrine has a morphism to a set-based one. (I think the converse is easy.) But Stone is telling us that every boolean algebra is a boolean algebra of some subsets of a set!
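[Ed. note: For a finite boolean algebra the “points” of the spectrum can be found by brute force. A Python sketch (the encoding is ours, for illustration) computing all boolean algebra homomorphisms from the power set of a three-element set onto {⊤,⊥}:]

```python
# For the finite boolean algebra P(X), X = {0,1,2}, enumerate every
# map to {False, True} and keep those satisfying the homomorphism
# laws.  Stone's correspondence predicts exactly one point per
# element of X, namely h_x(a) = (x in a).

from itertools import product

X = frozenset({0, 1, 2})
elements = [frozenset(s) for s in
            [set(), {0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2}]]

def is_hom(h):
    """Check bottom |-> False, top |-> True, complements flip, and
    meets are preserved (joins then come for free, by De Morgan)."""
    if h[frozenset()] or not h[X]:
        return False
    return all(h[X - a] != h[a] and
               all(h[a & b] == (h[a] and h[b]) for b in elements)
               for a in elements)

spectrum = [dict(zip(elements, vals))
            for vals in product([False, True], repeat=len(elements))]
spectrum = [h for h in spectrum if is_hom(h)]

print(len(spectrum))   # 3: one homomorphism per element of X
```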

MW: Okay, that jibes with some of my own vague ideas. Though I hadn’t got as far as Stone’s Theorem. (I’m familiar with it, of course.)

Two ideas make Henkin’s proof work. One is to find an epimorphism from the boolean algebra B(0) onto the two-element boolean algebra {⊤,⊥} (or equivalently, an ultrafilter on B(0)). In other words, we’re deciding which sentences we want to be true; this is then used to define the model. The points of the Stone space are these epimorphisms, so we’re further along on the same track.

The other idea—specific to Henkin—ensures that quantifiers are respected. Syntax now comes to the fore: we add a bunch of constants, called witnesses, and witnessing axioms, like so (in a simple case):

∃x φ(x) → φ(cφ)

where cφ witnesses the truth of the existential sentence ∃x φ(x).

Stone’s Theorem gives us a Stone space Sn for each B(n). That’s nice. But we don’t immediately have Sn=Vn, with the same V for all n. (I’m not even sure that we want exactly that.)

The Henkin witnessing trick “ties together” (somehow!) B(n) and B(n+1). And the tie relates to the left adjoint of a morphism B(n)→B(n+1).

So the stew is thickening nicely, but it looks like we’ll have to adjust the seasonings quite a lot!

JB: Right, this proof will take real thought. I don’t know ahead of time how it will go. But I’m very optimistic.

Stone’s representation theorem says the category of boolean algebras and boolean algebra homomorphisms is contravariantly equivalent to the category of Stone spaces and continuous maps. Let’s call this equivalence

S: BoolAlg^op → Stone

S is a nice letter because the Stone space associated to a boolean algebra A should really be thought of as the ‘spectrum’ of that algebra: it’s the space of boolean algebra homomorphisms

f:A → {⊤,⊥}

Our hyperdoctrine B gives a Boolean algebra B(n) for each n, and this gives us a Stone space S(B(n)). Maybe we can abbreviate this as Sn, as you were implicitly doing, and hope no symmetric groups become important in this game. But for now, at least, I want to keep our eyes on that functor S.

Why? Because our hyperdoctrine doesn’t just give us a bunch of Stone spaces: it gives us a bunch of maps between them! For each function

f: m → n

our hyperdoctrine gives us a boolean algebra homomorphism

B(f): B(m) → B(n).

Hitting this with the functor S, we get a continuous map going back the other way:

S(B(f)): S(B(n)) → S(B(m)).

And all this is functorial. That has to be very important somehow.

Next: somehow we want to use the Henkin trick to fatten up the Stone spaces S(B(n)) to sets of the form Vn for a fixed universe V. There are a lot of tantalizing clues for how. First, we can actually think about the Henkin trick and its use of “witnesses”. Second, we can note that the Henkin trick is related to existential quantifiers—which in hyperdoctrines are related to left adjoints in the category of posets to the boolean algebra homomorphisms

B(f): B(m) → B(n).

This raises a natural and interesting question. If you have a mere poset map between boolean algebras, what if anything does it do to their spectra? I should at least figure out the answer for the left adjoints we’re confronting.

So, there’s a lot to chew on, but it’s very tasty.

MW: Model theorists call these Stone spaces the spaces of complete n-types. We ran into them several times in our conversation on non-standard models of arithmetic (in posts 1, 2, 9, 18, and 25).

Let me noodle around with what you just said for a nice easy example, the theory LO of linear orderings. Actually, let’s start with something even easier: DLO, dense linear orderings without endpoints. This enjoys quantifier elimination: any formula is equivalent to one without quantifiers.

So, B(0) is just {⊤,⊥}. That makes sense: since DLO is complete, any sentence is provably true or provably false.

How about B(1)? Also {⊤,⊥}! What can we say about x? Provably true things like x=x, and provably false things like x≠x. That’s it!

Look at it this way: all countable DLOs are isomorphic, and there’s an automorphism taking any element of a countable DLO to any other element. So all countable DLOs look alike, and for the elements of one, “none of these things is not like the other…”

For B(2) we finally have a bit of variety: B(2) contains [x<y], [x=y], and [y<x]. Plus disjunctions like [x<y ∨ y<x]. And ⊥. What about the map B(f):B(1)→B(2) with our old friend f:{x}→{x,y}? Obviously ⊤ goes to ⊤ and ⊥ goes to ⊥.

Finally with B(3) things start to pick up a little. It contains the six “permutation situations” [x<y<z], [x<z<y], …, [z<y<x], plus seven more where some things are equal (like [x=y<z] or [x=y=z]), plus all the disjunctions, plus ⊥. If f now is the injection {x,y}→{x,y,z}, then B(f) sends an assertion about x and y to the disjunction of all the xyz situations consistent with it. For example

[x<y] ↦ [z<x<y ∨ z=x<y ∨ x<z<y ∨ … ∨ x<y<z]

Okay, complete n-types. (We’re concerned here with complete pure n-types, that is, no names added to the language.) These give you the full scoop on the n elements (x1,…,xn). So ignore the disjunctions. For DLO, we have only principal n-types: a single formula tells the whole story. I’ll write 〈φ〉 for the n-type generated by the boolean element [φ]. So for DLO,

S(B(0)) = {〈⊤〉}
S(B(1)) = {〈⊤〉}
S(B(2)) = {〈x<y〉, 〈x=y〉, 〈y<x〉}
S(B(3)) = {〈x<y<z〉, 〈y<x<z〉,…, 〈x=y=z〉}

In other words, we can forget about the disjunctions, and an n-type must be consistent. Take a hike, ⊥!
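[Ed. note: these lists can be checked mechanically. A complete n-type of DLO amounts to a weak ordering of the variables (an ordered partition into blocks of equal elements), and a short Python sketch reproduces the counts: 1, 1, 3, and 13 (the six permutation situations plus the seven with equalities). The encoding is ours, for illustration.]

```python
# Count the complete n-types of DLO by enumerating weak orderings:
# tuples of disjoint nonempty blocks, blocks listed in increasing
# order, variables within a block equal.  For n = 0, 1, 2, 3 this
# should give 1, 1, 3, 13 -- matching S(B(0)) through S(B(3)).

from itertools import combinations

def weak_orderings(vs):
    """Yield all ordered set partitions of the tuple vs."""
    vs = tuple(vs)
    if not vs:
        yield ()
        return
    for k in range(1, len(vs) + 1):
        for block in combinations(vs, k):      # choose the least block
            rest = tuple(v for v in vs if v not in block)
            for tail in weak_orderings(rest):
                yield (frozenset(block),) + tail

counts = [sum(1 for _ in weak_orderings(range(n))) for n in range(4)]
print(counts)   # [1, 1, 3, 13]
```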

In general, if h:X→Y is a boolean algebra homomorphism, then S(h):S(Y)→S(X) is just inverse images: S(h)(p)=h⁻¹(p). Here p, being an n-type, is a subset of Y. For example, let f:{x,y}→{x,y,z}. Above we saw that B(f):[x<y] ↦ [z<x<y ∨ … ∨ x<y<z]. All these disjuncts are sent to 〈x<y〉. (Or to be persnickety, the principal ultrafilters generated by the disjuncts are sent there.)

That makes sense. The 3-type p is “full info” on the triple (x,y,z), and S(B(f))(p) extracts all the info we can glean about the pair (x,y). That’s a 2-type!
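[Ed. note: concretely, S(B(f)) here just forgets a variable. A minimal Python sketch, encoding a type as a tuple of blocks of equal variables listed in increasing order (the encoding is ours):]

```python
# Pulling a 3-type back along the inclusion f: {x,y} -> {x,y,z}
# forgets z: drop z from each block and discard any emptied blocks.
# A type is encoded as a tuple of blocks, listed in increasing order.

def restrict(type_, keep):
    """Forget the variables outside `keep`."""
    blocks = [frozenset(b) & frozenset(keep) for b in type_]
    return tuple(b for b in blocks if b)

z_x_y = (frozenset({'z'}), frozenset({'x'}), frozenset({'y'}))   # the 3-type z < x < y
assert restrict(z_x_y, {'x', 'y'}) == (frozenset({'x'}), frozenset({'y'}))   # the 2-type x < y
```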

Hmm, couple of ways I could go from here. On the one hand, I don’t yet discern any existential quantifiers lurking in the background. Also, what do we want for our “universe” or “domain” V? None of the B(n)’s, n=0,1,2,3, seem to be co-operating!

Or I could repeat the exercise for a less simple theory, namely LO. Now B(1), and even B(0), have some real meat on their bones! Quantifiers stick around.

I don’t have such a clear understanding of the n-types of LO. Maybe if I finish reading Joe Rosenstein’s marvelous book…

Oh, one more thought. The contravariant S functor gives us S(f) going in the other direction from f. But we need two morphisms going the other way: the left and the right adjoints.

JB: Yes, I’ve been talking about the left adjoint since that relates to existential quantifiers, and those seem to be the key to “Henkinization”. My vague idea is this—I’ll just sketch a special case. If our hyperdoctrine B has some unary predicate P ∈ B(1) that gives “true” in B(0) when we existentially quantify it, Henkin wants to expand the Stone space S(B(1)) by throwing in an element x such that P(x) = ⊤. (Remember, elements of B(n) are precisely continuous boolean-valued functions on the Stone space S(B(n)).)

And here, when I said “existentially quantify it”, I meant take an element of B(1) and hit it with the left adjoint of the boolean algebra homomorphism

B(f): B(0) → B(1)

coming from the unique function

f: 0 → 1

Does this make sense?

If so, of course we need to do this sort of thing in a way that involves not only B(0) and B(1), but also the boolean algebras B(n) for higher n.

To make progress on this idea, it’s very good that you found a nice example of a hyperdoctrine, namely the theory of dense linear orders, and started working out B(n) for various low n. I think we should focus on this example for a while, and think about trying to “Henkinize” it, throwing new elements into the Stone spaces S(B(n)), until we get a set-based hyperdoctrine C, i.e. one for which every C(n) is the power set of Vn for some fixed set V. This will be a countably infinite set, right? It’s probably the underlying set of some Stone space. And it should be obtained in some systematic way from our original hyperdoctrine B.

You’ve nicely shown why B itself is not set-based.

MW: Sounds like a plan!

Prev TOC Next


Filed under Categories, Conversations, Logic

First-Order Categorical Logic 13

Prev TOC Next

MW: It’s been a minute! Well, almost 60,000 minutes.

We left off with a question: does a natural transformation from a syntactic hyperdoctrine to a semantic hyperdoctrine automatically “respect quantifiers”? We saw that this amounts to a Beck–Chevalley condition. We wondered if we had to add that condition to our definition of a model, or if it came for free.

Maybe you’ve had the experience of putting aside a crossword, half-completed, and coming back to it in an hour. Hey, of course the answer to 60 Across is “Turing Test”!

The same thing happened to me here. And the answer is: no free lunch.

Let me recap, just to get my brain back up to speed, and fix notation. B is a syntactic hyperdoctrine, C is a semantic hyperdoctrine, and F:BC is a natural transformation.

For example: say B is the theory LO of linear orders, and C is an actual linear order. B(2) contains predicates like [x<y], C(2) contains binary relations on the domain of the linear order, and F2([x<y]) is the less-than relation on this domain.

Here’s the key diagram:

Ignore the dashed arrows for just a second. Say f is an injection in FinSet, like f:{x}→{x,y}. Then B(f) and C(f) are the “throw in extra variables” morphisms we’ve talked about ad nauseam. This diagram is a commuting square in the category BoolAlg.

The dashed arrows are morphisms in a larger category we dubbed PoolAlg. This is a full subcategory of Poset, and BoolAlg is a so-called wide subcategory of it. Its objects are all boolean algebras, and its morphisms are all order-preserving maps, i.e., all the morphisms those objects have as denizens of Poset.

Within PoolAlg, the dashed arrows are left adjoints of the right arrows. Now does the diagram commute? In PoolAlg, that is. Beck–Chevalley says yes. We saw last time that we want this to be true. For example, the top left arrow takes [y<x] to [∃y(y<x)]. We want this predicate to map to the corresponding relation on an actual linear order. That’s what we mean by “respecting quantifiers”.

JB: This sounds right. Thanks for getting us started again!

As you might expect, I have a couple of nitpicks. While I feel sure there’s no free lunch here, I don’t think you have proved it. Maybe for some reason the Beck–Chevalley condition always holds in this situation! I feel sure it doesn’t, but I think that can only be shown with a counterexample: a defective model that doesn’t obey Beck–Chevalley.

It’s probably easy enough to find one. However, I don’t feel motivated to do it. We have bigger fish to fry. I’m happy to assume our models obey this Beck–Chevalley condition.

(Do we also need to assume they obey some sort of Frobenius condition?)

And here’s another even smaller nitpick: I don’t see the need for this category PoolAlg. I believe whenever you’re tempted to use it we can use our friend Poset. For example, the dashed arrows are left adjoints in Poset, and the square containing those dashed arrows commutes in Poset. Saying PoolAlg—restricting the set of objects in that way—isn’t giving us any leverage. It doesn’t seriously hurt, but I prefer to think about as few things as necessary.

Maybe it’s time to finally try to state and prove a version of Gödel’s completeness theorem. Do you remember our best attempt at stating it so far? I think I can, just barely… though it’s somewhat shrouded by the mists of time.

MW: That’s right, to prove the “no free lunch” result we need a counter-example. That’s what came to me when I started thinking about this stuff again. A way to construct a whole slew of counter-examples.

And I think it’s worth going through, because it relates to Henkin’s proof of Gödel’s completeness theorem. The cat-logic proof will have to surmount the same obstacles. So here goes.

Say B is a theory, aka syntactic hyperdoctrine. Say C is a semantic hyperdoctrine, and F:B→C is one of those natural transformations that does respect quantifiers. Suppose the domain of C is V: C(n) is a set of n-ary relations on V. Now let W be a subset of V. Let D(n) be all the relations you get by restricting the relations in C(n) to W. And for any predicate φ∈B(n), let Gn(φ) be the restriction of Fn(φ) to W, so that Gn maps B(n) into D(n). I claim that D is a hyperdoctrine, and G is a natural transformation from B to D. And very often, G will be disrespectful of quantifiers.

Using my LO example, let C have V=ℤ as its domain, and let W=ℕ. The predicate [y<x] gets sent to a less-than relation on ℤ by F2. This restricts to a less-than relation on ℕ. So G2([y<x]) is that relation.

Now let’s apply the left adjoints of the injection {x}→{x,y}. Up in B, we get the predicate [∃y(y<x)]. F1 sends this to the always-true relation in ℤ, which of course restricts to the always-true relation in ℕ. What about the left adjoints in C and D? The relation ∃y(y<x) is always true in ℤ, but is x≠0 in ℕ. So “go left, then down via G1” gets you to always-true in D, while “go down via G2, then left” gets you to x≠0.

Another example: let V=ℚ, W=ℤ, and for our predicate, use [x<z<y]. The left adjoint from {x,y}→{x,y,z} is [∃z(x<z<y)]. Taking one path brings us to the relation x<y in ℤ, while the other path leads to “follows, but not immediately”.

It’s clear now that one path leads to a “model” where ∃x means, “does there exist an x in a larger domain?”. The Henkin proof features a whole sequence of such enlargements. Thinking about that suggested this class of counter-examples to me.
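[Ed. note: the counterexample can be checked by machine on finite stand-ins for the domains. In the Python sketch below (the encoding is ours), relations are sets of tuples, the left adjoint along the projection is the image of a relation, and the two paths around the square really do disagree.]

```python
# Finite stand-ins for MW's first example: V plays the role of Z,
# W the role of N.  The left adjoint of "throw in the variable y"
# along the projection (x, y) |-> x is the image: "there exists a y".
# Quantify-then-restrict differs from restrict-then-quantify.

V = range(-3, 4)                    # finite stand-in for the integers
W = {x for x in V if x >= 0}        # finite stand-in for the naturals

less = {(x, y) for x in V for y in V if y < x}    # the relation y < x

def exists_y(rel):
    """Left adjoint along the projection (x, y) |-> x: the image."""
    return {x for (x, y) in rel}

def restrict1(rel):    # restrict a unary relation to W
    return {x for x in rel if x in W}

def restrict2(rel):    # restrict a binary relation to W
    return {(x, y) for (x, y) in rel if x in W and y in W}

path1 = restrict1(exists_y(less))    # quantify in V, then restrict
path2 = exists_y(restrict2(less))    # restrict, then quantify in W

print(sorted(path1))   # [0, 1, 2, 3]: always-true on W
print(sorted(path2))   # [1, 2, 3]: "some y < x in W", i.e. x != 0
```

(The finite cut-off means path1 excludes the minimum of V itself, but after restricting to W it is still all of W, matching the always-true relation on ℕ in the text.)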

I believe that restricting a semantic hyperdoctrine this way results in a hyperdoctrine. I haven’t checked all the details. But it’s a piece of cake when C(n) equals P(Vn) for all n. In that case, D(n) is just P(Wn)! This is what you wanted for semantic hyperdoctrines in the first place, as I recall.

The other thing we need to check: that G is a natural transformation. The key
diagram:

Here, ↾m and ↾n are the restriction maps. It’s pretty obvious that the bottom square commutes. Since the top square does too, an easy diagram chase shows that the “outer frame” square does.

Anyway, you wanted a refresher on the statement of the Gödel completeness theorem in our framework. Here goes. A syntactic hyperdoctrine B is consistent if B(0) is not the one-element boolean algebra {⊤=⊥}. The theorem states that every consistent syntactic hyperdoctrine has a model, which is a natural transformation to a semantic hyperdoctrine that respects quantifiers.

(And maybe also equality? Haven’t thought about that yet.)

We want to adapt the Henkin proof to this framework, is that right? I think I see, in a vague way, how to do that. But I don’t see how we’d be leveraging the framework—how the proof would be anything but a straightforward translation, not really using any of the neat category ideas.

JB: Okay, great. Thanks for all that.

Let’s try to clean things up a wee bit before we dive in. When you say “syntactic hyperdoctrine” I’d prefer to just say “hyperdoctrine”.

First, it seems plausible that any hyperdoctrine is consistent iff it has a model. Second, part of the whole goal of working categorically is to break down the walls between syntax and semantics, to treat them both as entities of the same kind (in this case hyperdoctrines).

So, we should aim for a version of Gödel’s completeness theorem saying that a hyperdoctrine B is consistent iff it has a morphism F to a hyperdoctrine C of some particular “semantic” sort. You said B should be “syntactic”. That’s great as far as it goes, but here’s a place where we can generalize and do something new—so let’s boldly assume B is an arbitrary hyperdoctrine.

MW: Wait a minute! What do you mean by “morphism” here?

JB: A morphism of hyperdoctrines. Do you object? (That was a pun.)

MW: Who could object to a three layer cake?

In the bottom layer are boolean algebras, regarded as poset categories. The middle filling consists of categories whose objects are boolean algebras; a hyperdoctrine is a functor from FinSet to one of them. And now you propose to top it off with a category whose objects are hyperdoctrines. Copacetic!

Anyway, we never defined a morphism of hyperdoctrines. Do you mean a natural transformation respecting quantifiers?

JB: Well, perhaps not exactly that. We certainly want F to be a natural transformation respecting quantifiers, but we probably want it to be more than that, and we haven’t yet figured out exactly what. I expect that maybe F should be a natural transformation whose components obey the Beck–Chevalley and Frobenius conditions. That should make it respect quantifiers and equality. But we really need to think about this harder before we can be sure! So I said “morphism”, to leave the issue open.

We will probably resolve this issue as part of proving Gödel’s completeness theorem. It’s a time-honored trick in math: you make up the definitions while you’re proving a theorem, so your definitions make the theorem true.

Okay, where was I? I wanted to think about Gödel’s completeness theorem this way: it says a hyperdoctrine B is consistent iff it has some morphism F to a hyperdoctrine C of some “semantic” sort. And by “semantic” let’s mean the most traditional thing: we’ll demand that C be a hyperdoctrine where we pick a set V and let C(n) be the power set of Vn. That is semantic par excellence.

So how are we going to do this? Whatever trick people ordinarily use to prove Gödel’s completeness theorem—and you already mentioned the key word: “Henkin”—let’s try to adapt it to a general hyperdoctrine B. Maybe we can sort of pretend that B is defined “syntactically”, but try to use only its structure as a hyperdoctrine.

MW: Okay, this is the sort of thing I was hoping for!

I’m intrigued. The Henkin proof leans heavily on syntax. It begins by adding a bunch of new constants and axioms. How can we do this, while forswearing syntax?

JB: A question for the next installment….

Prev TOC Next


Filed under Categories, Conversations, Logic

Nonstandard Models of Arithmetic 32

Prev TOC Next
Previous Paris-Harrington post

[Ed. note: This post was essentially ready two years ago, but I got distracted with other matters. If you’re seeing this for the first time, or want to refresh your memory, posts 8 and 9 introduced the Paris-Harrington theorem. Posts 21 through 24 continued the discussion, in a dialog with Bruce Smith. MW]

Continue reading


Filed under Conversations, Peano Arithmetic

First-Order Categorical Logic 12

Prev TOC Next

MW: Last time we looked at the categorical rendition of “C is a model of B”:

  • Functors B:FinSet→BoolAlg and C:FinSet→BoolAlg
  • A natural transformation F:BC

where B and C are hyperdoctrines, and

  • B is syntactic: the elements of each B(n) are equivalence classes of formulas (which we agreed to call predicates);
  • C is semantic: the elements of each C(n) are relations on a domain V.

(We’ve been saying that C(n) is the set of all n-ary relations on V, but I see no need to assume that.)

Continue reading


Filed under Categories, Conversations, Logic

First-Order Categorical Logic 11

Prev TOC Next

MW: Last time we justified some equations and inequalities for our adjoints: they preserve some boolean operations, and “half-preserve” some others. And we incidentally made good use of the color palette!

Continue reading


Filed under Categories, Conversations, Logic

First-Order Categorical Logic 10

Prev TOC Next

JB: Last time we saw how to get some laws of logic from two facts:

right adjoint functors between boolean algebras preserve products (‘and’),

and

left adjoint functors between boolean algebras preserve coproducts (‘or’).

Continue reading


Filed under Categories, Conversations, Logic

First-Order Categorical Logic 9

Prev TOC Next

MW: Last time we reviewed the four adjoints:

Continue reading


Filed under Categories, Conversations, Logic

First-Order Categorical Logic 8

Prev TOC Next

MW: We’re reviewing hyperdoctrines, which are specially nice functors B: FinSet → BoolAlg. When we have such a functor, any map f of finite sets gives a homomorphism of boolean algebras, B(f). But we’ve seen this is both a morphism and a functor. (“It’s a floor wax and a dessert topping!”) What do you think about the term “adjoint morphism”? It might help keep the two levels straight.

Continue reading


Filed under Categories, Conversations, Logic

First-Order Categorical Logic 7

Prev TOC Next

MW: John, it’s been eons since we last discussed First-Order Categorical Logic: not since September 2019! (I read a lot of Russian novels during the break.) But New Year’s seems like a good time to resume the tale.

JB: Yes indeed! It’s been a long time, and it’s mostly my fault. Let’s see if we can get back up to speed.

Continue reading


Filed under Categories, Conversations, Logic