## Mathematicist Genesis

• 2.8k
I just thought up a fun philosophical forum game for those well-versed in either mathematics or the natural sciences to play, and those interested in learning more about them to watch. This game will make one core philosophical assumption: mathematicism, the view that the physical universe is identical with some mathematical structure. This assumption should only come into play at one step in the game though, and if you really hate mathematicism, you can interpret everything after that step to be talking about a simulation of the universe instead of the actual universe, and it shouldn't make any difference.

The game is to start with the most elementary of mathematical structures, and build from there up to the structure that is (on our best current understanding of physics) identical to (or if you really prefer, the perfect model of) our physical universe, and then build up from the most elementary physical elements of our universe to, let's say, a human being, or human civilization, or something thereabouts.

More or less one step of the process per turn, more or less one turn in a row per player (unless the thread is dying and you want to revive it), try to make your explanations understandable to people who don't already know the thing you're explaining, and for extra style points, try to make it sound dramatic, vaguely like a creation myth, as you do it.

Also feel free to correct anyone who plays a step wrong, or ask for clarification if you can't follow a step.

I'll start us off:

----------------

To begin with, there is the empty set, and the empty set is without contents and void.

And the joint negation or complement operation is the sole sufficient operation, which returns the set of everything that is not any of the arguments fed into it, i.e. the complement operation on a set is equivalent to the negation of all the things in that set: the complement of {a, b, c, ...} is all that is not-a and not-b and not-c and ...

And the complement of the empty set, all that is not nothing, is everything. The end.

...No wait, let's go back and take things a little slower this time around. Or a lot slower, as the case may be.

The negation or complement of the joint negation or complement operation is the disjunction or union operation, which returns all that is at least one of the arguments fed into it: the complement of the complement of {a, b, c, ...}, which is to say not(not-a and not-b and not-c and ...), is all that is a or b or c or ...

Call the union of something and the set containing that thing the successor of that thing. The successor of the empty set is thus the union of it and a set containing itself, which union is just a set containing the empty set, since the empty set contained nothing to begin with.

Thus the negation of the joint negation of the empty set and the set containing the empty set is the first successor.

And the empty set we call zero, and the successor to that set we call one. And thus are the first numbers.

And the successor to one we call two, and the successor to two we call three, and so on ad infinitum.

And thus the empty set and its successors are the natural numbers. And they are good.

----------------

(Suggested next steps: construction of addition from set operations, then multiplication; then subtraction, necessitating the construction of integers... and so on until we get up to SU(3)xSU(2)xU(1) or such, where the physicists can take over from the mathematicians).
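(For the programmers in the audience, the successor construction above can be sketched in a few lines. This is just a minimal illustration using Python frozensets; the function name is mine.)

```python
# Von Neumann naturals built from the empty set and union, as in the OP:
# zero is the empty set, and succ(n) = n ∪ {n}.

def succ(n: frozenset) -> frozenset:
    """The successor of n is the union of n and the set containing n."""
    return n | frozenset({n})

zero = frozenset()    # the empty set, "without contents and void"
one = succ(zero)      # {∅}
two = succ(one)       # {∅, {∅}}
three = succ(two)     # {∅, {∅}, {∅, {∅}}}

# Each number, viewed as a set, has exactly that many elements:
assert len(zero) == 0 and len(one) == 1 and len(two) == 2 and len(three) == 3
# ...and each number contains all of its predecessors:
assert zero in one and zero in two and one in two and two in three
```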
• 1.3k
What's the end game - something like Quantum Field Theory? Quantum gravity? Anyway, even if we accept the premise of the "mathematical universe," there are still a couple of issues here:

1. If you are aiming at some fundamental physics, such as QFT, that still leaves out every non-fundamental theory that is not reducible to fundamental physics - which means pretty much everything, from solid state physics to derivative trading. Now, it is plausible, though still controversial, that, to the extent that they are accurate, those "special sciences" supervene on fundamental physics. However, it looks ever more certain that they cannot be built from the ground up, the way you suggest - there will be a discontinuity where you have to jump from one level to another, with no analytical bridge between the two.

2. You realize that there isn't a unique construction, even for something as basic as the integers, right? Even with set theory there's more than one way to do it.
• 4k

Interested in filling out parts of the story that I know.

1. If you are aiming at some fundamental physics, such as QFT, that still leaves out every non-fundamental theory that is not reducible to fundamental physics - which means pretty much everything, from solid state physics to derivative trading. Now, it is plausible, though still controversial, that, to the extent that they are accurate, those "special sciences" supervene on fundamental physics.

I was thinking about this issue. If we want to arrive at a general description of... well, everything maths oriented as it pertains to modelling physical systems with deterministic laws, we'd probably need to have extremely abstract foundations. The end result of that would look more like the nlab than an accessible thread on a philosophy forum.

So I propose that whoever wants to fill out parts of the story, we limit the scope and split it into a bunch of subtopics.

Set theory -> Definition Algebraic Structure (DAS) of the Naturals -> DAS Rationals -> DAS reals -> Analytic structure of the reals -> calculus of real functions -> differential equations -> specific examples of physical laws.

Linear algebra (if required) would fit somewhere after the DAS of the reals (the field construction and the vector space over a field construction).

That's a simplified version of the story that misses out a lot; like action formulations of physical laws, complex analysis and Hilbert spaces, which would limit the quantum parts of the story we can tell. We're also missing a more general structure for the calculus and analytic structure of the reals (need more general topological/differential geometric notions) that allow tensor equations to be written.
• 1.3k
Max Tegmark in one of his mathematical universe papers sketches a hierarchical roadmap of this type (leading to fundamental physics, natch).
• 2.8k
What's the end game - something like Quantum Field Theory?

If you read the OP you’d see that that’s just the midpoint. We build up to something like QFT, and then build from there to something like a human. I’m not specifying the exact route we must take or how far we can go but that’s the rough idea.

If you are aiming at some fundamental physics, such as QFT, that still leaves out every non-fundamental theory that is not reducible to fundamental physics

I always forget that even among physicalists the reducibility of everything to fundamental physics is contentious. So I suppose that’s another presumption of this thread, and the thread itself can serve as the debate on that, as players put forward constructions of higher levels from lower ones and others challenge the accuracy of those.

there isn't a unique construction

That’s fine. Any construction will do.

An approachable simplified story is what I’m aiming for. Something like the version I tell of it in my essay On Logic and Mathematics, but hopefully better. (I’m hoping to learn from this thread myself). The construction of numbers, spaces, groups, manifolds, Lie groups, special unitary groups, and then quantum fields from those, particles from those, atoms from those, molecules from those, cells from those, organisms from those, etc.
• 4k
If you read the OP you’d see that that’s just the midpoint. We build up to something like QFT, and then build from there to something like a human. I’m not specifying the exact route we must take or how far we can go but that’s the rough idea.

I don't think you realise how big a project just getting to differential equations of real variables would be. It's a lot of work.
• 1.3k
IIRC the mammoth Principia didn't get quite that far, although @Pfhorrest probably wouldn't want to start as far back and proceed as rigorously as W & R.
• 4k

We should put the logic in the background then, except where it's completely necessary to go into it. I'm guessing that we'll need to set it up so that binary relations make sense; for equivalence relations in setting up algebraic operations on the rationals and reals and for order relations when we need the ordered field construction and the convergence/continuity/differentiability stuff that relies upon the completeness of the real line.
• 726
Good luck with this project. :roll:
• 1.6k
And the joint negation or complement operation is the sole sufficient operation, which returns the set of everything that is not any of the arguments fed into i

Buzz. Wrong. There's no universal set. Russell's paradox. You can't take the complement of the empty set except with respect to some given enclosing set.
• 2.8k
I think you meant to quote the part where I wrote "And the complement of the empty set, all that is not nothing, is everything. The end." That part was a joke, and I immediately backtracked to get back to the actual point of that first post.

The part you quoted isn't talking about taking the complement of the empty set specifically, it's just talking about how joint negation (NOR) is a sole sufficient operator: you can build all the other logical operators out of it. Specifically, in that post, I trivially build OR out of it, which as I understand it is the equivalent of union, which I use to build the successor function. The thought that we can start out with empty sets and joint negation and with that build everything out of "negations of nothing" (operations built out of negation on operands built out of empty sets) was the inspiration for this little game.

I could use a little clarification though, because as I understand it the logical operations are all equivalent to set operations, and as I understand it complement is the set-theoretic equivalent of negation, but negation isn't quite the same as joint negation though it can trivially be made to function as such (just feed an operand into both arguments of NOR instead of the single argument of NOT), so I'm not entirely sure if complement can be treated as the set-theoretic equivalent of joint negation too, or just plain unary negation, and in the latter case, what actually is the set-theoretic equivalent of joint negation?
• 1.3k
The game is to start with the most elementary of mathematical structures, and build from there up to the structure that is (on our best current understanding of physics) identical to (or if you really prefer, the perfect model of) our physical universe

So, we are going to build the Theory of Everything from scratch, starting from an empty virtual inference machine (VM).

Of course, we'd better load the propositional-logic language-extension pack, along with first-order extensions (universal quantifiers). Language must of course be "pluggable". Still, if we load the propositional-logic language extension pack, we must also load its axiom pack with the 14 basic rules of propositional logic. Language packs and axiomatic packs are not entirely independent.

(According to Wolfram, we also need to load a few first-order logic axioms too. Not sure, though.)

So, our two extension types are: language packs and axiom packs, i.e. first-principle packs.

Not sure, though, that the elusive ToE is a first-order logic theory. We assume already quite a bit by saying a thing like that!

Ok. Now our VM can reason in first-order logic.

To begin with, there is the empty set, and the empty set is without contents and void.

So, you have already decided that we are supposed to load something like the ZFC axiomatic extension pack?

In my intuition, I would rather load the PA number theory pack. It can certainly "see" everything that ZFC considers to be provable, but it does not trust quite a bit of it. I do not consider ZFC to be necessarily more powerful than PA. ZFC could just be more gullible!

as I understand it the logical operations are all equivalent to set operations

Operator symbols must be mentioned in the language pack, but some of their invariants will have to be mentioned in the axiomatic pack. The other invariants can be derived. So, yes, do we load the logic operator rules from the logic pack or do we use a translation layer running on top of the ZFC pack?

Aren't we trying to rebuild Isabelle or Coq from scratch now?
It increasingly starts looking like an exercise in reinventing the wheel ... ;-)

Microsoft Research recently did exactly that. Apparently dissatisfied with Coq and Isabelle, they ended up concocting their own lean theorem prover. They probably consider Coq and Isabelle to be "bloated".

Still, myself, I have unlearned to give in to the temptation to rewrite from scratch. It is always a lot of work while the final result will probably end up being as bloated as the thing it was going to replace, once you add all the intricate bug fixes that may at first glance appear to be superfluous. If you don't know why all that shit is there, then unceremoniously stripping it away, is rarely the solution ...
• 1.3k
I always forget that even among physicalists the reducibility of everything to fundamental physics in contentious. So I suppose that’s another presumption of this thread, and the thread itself can serve as the debate on that, as players put forward constructions of higher levels from lower ones and others challenge the accuracy of those.

To be clear, the contention isn't necessarily metaphysical. The prospects for Nagelian inter-theory reduction - the derivability of "phenomenological" theories from more "fundamental" ones - has been robustly challenged on mathematical grounds. See for instance Robert Batterman's Emergence, Singularities, and Symmetry Breaking (2010)
• 4k
I think there's a big difference between expecting to show how the entire universe works mathematically (what some people seem to be reading the thread as) vs exhibiting one way to go from set theory to the mathematics of physical laws without loads of rigour. The "derive cells from sets" goal is very silly, the "show what math goes into some simple physical laws from the ground up" goal is not.

I'd be willing to contribute to a more modest, though still quite ambitious, project, where we go from, say, sets to simple harmonic motion, when we can handwave away overly arduous technical requirements.
• 1.6k
↪fishfry I think you meant to quote the part where I wrote "And the complement of the empty set, all that is not nothing, is everything. The end." That part was a joke, and I immediately backtracked to get back to the actual point of that first post.

I think I realized that a little later but didn't go back and update my post.

I could use a little clarification though, because as I understand it the logical operations are all equivalent to set operations, and as I understand it complement is the set-theoretic equivalent of negation, but negation isn't quite the same as joint negation though it can trivially be made to function as such (just feed an operand into both arguments of NOR instead of the single argument of NOT), so I'm not entirely sure if complement can be treated as the set-theoretic equivalent of joint negation too, or just plain unary negation, and in the latter case, what actually is the set-theoretic equivalent of joint negation?

Ok so there is no complement of the empty set, but the negation of T is F and vice versa, is that what you mean? I think you want negation in a Boolean algebra, not in set theory at large, if you're trying to formalize negation. I'm not sure why NOR is important to you, it's somewhat obscure in logic although I believe it's more important in computer chip design. Or actually I'm probably thinking of NAND gates.

LOL I just checked it out on Wiki and it says, "The NOR operator is also known as Peirce's arrow ..."

That Peirce really gets around!

https://en.wikipedia.org/wiki/Logical_NOR
• 2.8k
Of course, we'd better load the propositional-logic language-extension pack, along with first-order extensions (universal quantifiers). Language must of course be "pluggable". Still, if we load the propositional-logic language extension pack, we must also load its axiom pack with the 14 basic rules of propositional logic. Language packs and axiomatic packs are not entirely independent.

Someone else please check this because I don't trust alcontali's word on this for various reasons, and my understanding is that from set theory you can build predicate logic, which serves as an extension of propositional logic (anything you can say in propositional logic you can say in predicate logic), so if you start with sets you don't need to "manually" include propositional logic, it's one of the things you can build along the way.

expecting to show how the entire universe works mathematically (what some people seem to be reading the thread as)
Yeah no this thread is not supposed to be demonstrating mathematicism, it's just assuming it, and as I said in the OP you can instead assume we're building a simulation of the universe instead, if that makes you more comfortable.

The "derive cells from sets" goal is very silly, the "show what math goes into some simple physical laws from the ground up" goal is not

Well the former is what I'm aiming for, but if you only want to play along up to the latter end that's fine with me. As I said in the OP, I expect mathematicians to contribute more to the first half of the project and scientists to contribute more to the latter half. (Or rather, people with extensive math and science knowledge, not necessarily professionals in such fields).

Also as I said in the OP, as I understand it QFT takes SU(3)xSU(2)xU(1) to be the mathematical model of quantum fields, which in turn are where we can stop with the math and pick up with sciences instead. So for the math segment of the game, we're aiming to build up to special unitary groups, for which we need Lie groups, for which we need both groups generally, and differentiable manifolds, for which we need topological spaces, for which we need points, and so on. I'm probably skipping a lot of steps along the way, but that's generally the course I'm expecting this to take to get to the point where physics can take over. Then from quantum fields we can build fundamental particles, build atoms out of those, molecules out of those, cells out of those, and so on. I thought I already sketched an expected trajectory like this in the OP.

I'm not sure why NOR is important to you

Because it's a sole sufficient operator. You can start with just that one operator and build all the others from there, so it's a great starting place. (NAND would work too, but I like the symmetry of starting with the empty set and joint negation). You can trivially build NOT(x) with NOR(x,x), then OR(x,y) with NOT(NOR(x,y)), AND(x,y) with NOR(NOT(x),NOT(y)), NAND(x,y) with NOT(AND(x,y)) obviously, IF(x,y) with OR(x,NOT(y)), ONLY-IF(x,y) with OR(y,NOT(x)), IFF(x,y) with AND(IF(x,y),ONLY-IF(x,y)), XOR(x,y) with NOR(NOR(x,y),AND(x,y)), and so on...
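(If anyone wants to check these derivations mechanically, here's a quick sketch in Python; function names just mirror the notation above, and the loop verifies each derived connective against the full truth table.)

```python
# Deriving the standard connectives from NOR alone.

def NOR(x, y):
    """Joint negation: true only when neither argument is true."""
    return not (x or y)

def NOT(x):      return NOR(x, x)
def OR(x, y):    return NOT(NOR(x, y))
def AND(x, y):   return NOR(NOT(x), NOT(y))
def NAND(x, y):  return NOT(AND(x, y))
def IF(x, y):    return OR(x, NOT(y))         # "x if y", i.e. y implies x
def IFF(x, y):   return AND(IF(x, y), IF(y, x))
def XOR(x, y):   return NOR(NOR(x, y), AND(x, y))

# Check every derived connective against Python's own operators,
# over all four combinations of truth values:
for x in (True, False):
    for y in (True, False):
        assert NOT(x) == (not x)
        assert OR(x, y) == (x or y)
        assert AND(x, y) == (x and y)
        assert NAND(x, y) == (not (x and y))
        assert XOR(x, y) == (x != y)
        assert IFF(x, y) == (x == y)
```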
• 1.3k
anything you can say in propositional logic you can say in predicate logic

Yes, predicate logic gets loaded automatically by first-order logic. I don't think it comes with a separate axiom pack. It is just additional language grammar. In my impression, the entire axiom pack of logic is already attached to propositional logic.

anything you can say in propositional logic you can say in predicate logic

Yes, and anything you can say in predicate logic you can say in first-order logic, which is what you need for PA or ZFC. Predicate logic is not sufficient. You need the universal quantifiers too (again, supposedly language-only).

so if you start with sets you don't need to "manually" include propositional logic, it's one of the things you can build along the way.

It would be fantastic if it were possible to derive the 14 axioms of propositional logic as theorems in ZFC, but I have not run into any documentation that confirms this or clarifies how to do that. I am absolutely not sure that it can be built along the way on top of ZFC. All documentation that I have seen up till now suggests that it is simply another, additional axiom pack.
• 4k
Someone else please check this because I don't trust alcontali's word on this for various reasons, and my understanding is that from set theory you can build predicate logic, which serves as an extension of propositional logic (anything you can say in propositional logic you can say in predicate logic), so if you start with sets you don't need to "manually" include propositional logic, it's one of the things you can build along the way.

You need logical symbols to state set axioms. You also need them for Peano arithmetic (and the Von Neumann version you presented in the OP). You don't have any tools to reason about sets without putting the reasoning system in the background - roughly, positing inference rules to output theorem statements from axiom statements.

With more rigour you need to specify a whole formal language for the logical connection of statements, typically a Hilbert system with first or second order quantification. AFAIK you only need first order to specify everything in the plan I posted up until you get to the analytic properties of the reals, at which point in the standard treatment you need second order quantification to axiomatise the completeness property of an ordering (quantifying over upper and lower bounds of a set is a second order logical statement).

Regardless, if you actually try to reason about objects in derived theories (like naturals from sets) using only the set and logical axioms, you'll find that the proofs for every statement get intolerably long, and at that point you switch to natural language and appeal to intuition (which is translatable to the inference rules if you do it right).

It would be fantastic if it were possible to derive the 14 axioms of propositional logic as theorems in ZFC, but I have not run into any documentation that confirms this or clarifies how to do that. I am absolutely not sure that it can be built along the way on top of ZFC. All documentation that I have seen up till now suggests that it is simply another, additional axiom pack.

Considering you need to axiomatise the behaviour of quantifiers to state ZF axioms (eg extensionality) or Peano axioms (eg the successor function being injective)... It's kinda redundant to start from sets then derive quantifiers (like for some being arbitrary disjunction over set elements, or for all being infinite conjunction over set elements).
• 1.3k
It's kinda redundant to start from sets then derive quantifiers (like for some being arbitrary disjunction over set elements, or for all being infinite conjunction over set elements).

The LEAN documentation (Microsoft Research) says that they fully support set theory but that their core axiomatization is actually dependent type theory:

The syntax of type theory is more complicated than that of set theory. In set theory, there is only one kind of object; officially, everything is a set. In contrast, in type theory, every well-formed expression in Lean has a type, and there is a rich vocabulary of defining types.

In fact, Lean is based on a version of an axiomatic framework known as the Calculus of Inductive Constructions, which provides all of the following ...

Given this last fact, why not just use set theory instead of type theory for interactive theorem proving? Some interactive theorem provers do just that. But type theory has some advantages ...

It is certainly not the first time that I see them switching to type theory in the context of proof assistants and theorem provers (Coq does that too), even though type theory is so much less straightforward (at a basic level) than set theory ...
• 2.8k
anything you can say in predicate logic you can say in first-order logic, which is what you need for PA or ZFC. Predicate logic is not sufficient. You need the universal quantifiers too

This statement seems confused. First-order logic is a form of predicate logic: the first form of it, before you get to higher-order predicate logics. I think you may be confusing propositional logic with predicate logic.

In any case, you've both made the case that you at least need a quantification operation in addition to the truth-functional joint negation operation I started with, though since existential and universal quantification are DeMorgan duals of each other, you can start with just one (or its negation) and build all the others from that. So I guess to start with we need NOR (or NAND), and a quantifier (pick one; I would start with the negation of the existential, for aesthetic reasons, but logically it doesn't matter), and then we can say what a set is, and from the empty set proceed as previously described.
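(The DeMorgan duality of the quantifiers is easy to see over a finite domain; here's a small Python sketch, with the domain and predicates chosen arbitrarily for illustration.)

```python
# Quantifier duality over a finite domain: the universal quantifier
# can be built from negation and the existential, and vice versa.

DOMAIN = range(10)

def exists(pred):
    """There is some x in the domain such that pred(x)."""
    return any(pred(x) for x in DOMAIN)

def forall(pred):
    """For all x in the domain, pred(x)."""
    return all(pred(x) for x in DOMAIN)

def forall_via_exists(pred):
    """'for all x, F(x)' rendered as 'there is no x such that not-F(x)'."""
    return not exists(lambda x: not pred(x))

# The direct and the dual formulations agree on every predicate we try:
for pred in (lambda x: x >= 0, lambda x: x % 2 == 0, lambda x: x < 5):
    assert forall(pred) == forall_via_exists(pred)
```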
• 2.1k
How is your game going, PfH? I thought it was supposed to build a play-dough universe of solid, unerring, unassailable, elementary numbers. Such as 1, or 2, or... or 3, for instance. Not any of those newfangled numbers like, i, e, or pi.

But the thread evolved into arguments of conceptual pontifications, it seems.

Why can't people just play along? People lost their child-like sense of play-dough. What I blame is an insidious ugly thing, called "maturity".
• 1.3k
I think you may be confusing propositional logic with predicate logic.

Well, since first-order logic is a predicate logic (extended by quantifiers), I did not mention predicate logic separately. All the axioms of logic, however, are (supposedly) already included at the level of propositional logic.

Wolfram insists, however, that first-order logic adds two new axiom schemata.

The set of axiom schemata of first-order predicate calculus is comprised of the axiom schemata of propositional calculus together with the two following axiom schemata:

∀ x F(x) ⇒ F(r)
F(r) ⇒ ∃ x F(x)

Do you know why this is needed? In my impression, these schemata are not mentioned in Wikipedia's page on first-order logic. I could not find other relevant documentation for this problem ...

By the way, Wolfram also describes first-order (predicate) logic as an extension to propositional logic. They do not see zeroth-order predicate logic as something that needs to be mentioned separately ...
• 2.8k
I thought it was supposed to build a play-dough universe of solid, unerring, unassailable, elementary numbers. Such as 1, or 2, or... or 3, for instance. Not any of those newfangled numbers like, i, e, or pi.

No, we'll get to those eventually... if anybody actually plays the game instead of arguing about the premise of it. I mean, I can build you the integers, rationals, reals, and complex numbers in my next post if you like, though I'm not up to building arithmetic out of set operations alongside it, so I was hoping someone else would do that.

Do you know why this is needed?

I don't, and if I'm reading them correctly, they together contradict my understanding of predicate logic, wherein "for all x, F(x)" does not entail "there exists some x such that F(x)", but only "there does not exist any x such that not-F(x)". Which makes me think I'm not reading that passage correctly.

They do not see zeroth-order predicate logic as something that needs to be mentioned separately

According to Wikipedia, zeroth-order predicate logic is sometimes used as a synonym for propositional logic.
• 1.3k
I don't, and if I'm reading them correctly, they together contradict my understanding of predicate logic, wherein "for all x, F(x)" does not entail "there exists some x such that F(x)", but only "there does not exist any x such that not-F(x)". Which makes me think I'm not reading that passage correctly.

Well, I guess that we'll figure it out later. In the meanwhile, there's a bigger snag. The documentation (LEAN and Coq) suggests that we will regret using set theory as a foundation, because they went with type theory instead, even though all of type theory can be phrased as set-theoretical theorems. Any idea why? Is this just practical or so?
• 4k

Don't know enough about type theory to comment.

No, we'll get to those eventually... if anybody actually plays the game instead of arguing about the premise of it. I mean, I can build you the integers, rationals, reals, and complex numbers in my next post if you like, though I'm not up to building arithmetic out of set operations alongside it, so I was hoping someone else would do that.

Might as well start then. I'm largely going to assume that we can have a classical logic in the background with whatever rules we need (so it's non-constructive as in the usual way people do math), and proceed in the less formal manner of mathematical proof. If you want to try to translate the arguments into formally valid ones, give it a go, but you will almost certainly give up when you realise how insane it is to try and prove everything like in a logic textbook.

Beginnings - An Overview Of Some Of The Crap In The Background to Even Get Set Theory Going

Informally, sets are collections of objects.

When you start building up a mathematical theory axiomatically, unfortunately the word "informally" means "a miracle occurs" or "lots of abstractions are at work in the background to make what I'm saying true".

If you want to remove the miracles, the only recourse is formalisation; that is, to specify how to make a structure of elements you can reason about (like propositions), and specify the ways you can reason your way about in the structure (like rules of inference).

Underneath this is a question of the language that you're using - when you no longer take reason's intervention in language for granted, why should you be able to take the language itself for granted either? So you also need to be able to specify exactly the kind of symbols that you're using and how they relate, and what it means to be a statement.

This specification consists of two components:

(1) A formal language (a syntax).
(2) A way of providing interpretations of that formal language (a semantics).

A formal language is a formal system. It consists of a set of symbols that you define a grammar for. Like the symbols in logical arguments: $A \implies B$. You stipulate a bunch of symbols and ways of combining them. The bunch of symbols are building blocks of the language. The ways of combining them are how you build things with them. The bunch of symbols in a language are called primitives, and the ways of combining and replacing them are called production rules.

When you have a formal language, you have a way of combining symbols with other symbols to produce valid formulas in the language. When you apply stipulated production rules to stipulated starting symbols, you end up with a well formed formula. If you mess up, you end up with something that is no longer part of the formal language. Drawing special emphasis - a formal language consists solely of well formed formulas, things which can be built from applying production rules to primitives.

If we have a language consisting solely of:
Symbols: A, B
Production Rules: (R1) Whenever you have a symbol, you can replace it with two copies of the symbol.
(R2) Whenever you have A, you can join B to its right.
(R3) Whenever you have B, you can join A to its right.

We refer to the collection of all these things together as TOY.

We can make sequences of formulae like $A \rightarrow AA \rightarrow AAB$, which starts with A, applies R1, then applies R2 to the second A. But we can never produce BAA. Why? Because the only way we can produce formulas requires that the first B is always to the right of the first A. The only way we can introduce a B is by joining it to the right of an A, and we can only start with A. BAA is then not a well formed formula of the language, whereas AAB is.

If we adjoined another rule (R4), that we can reverse any formula and obtain a new formula, we could now obtain BAA. What this highlights is that what counts as a well formed formula of a formal language depends entirely on how you set up the rules for the formal language; you can only get out consequences of what you put in.
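The TOY language above is concrete enough to compute with. Here's a minimal sketch (names and encoding are my own, assuming strings over "A"/"B" and the starting symbol A as stated above) that generates every well formed formula of TOY up to a given length by exhaustively applying R1-R3:

```python
def step(s):
    """Apply every production rule at every applicable position in s."""
    out = set()
    for i, c in enumerate(s):
        out.add(s[:i] + c + c + s[i+1:])       # R1: duplicate any symbol
        if c == "A":
            out.add(s[:i+1] + "B" + s[i+1:])   # R2: join B to the right of an A
        if c == "B":
            out.add(s[:i+1] + "A" + s[i+1:])   # R3: join A to the right of a B
    return out

def formulas(max_len=3):
    """All well formed formulae of TOY up to max_len, starting from A."""
    seen, frontier = {"A"}, {"A"}
    while frontier:
        frontier = {t for s in frontier
                      for t in step(s) if len(t) <= max_len} - seen
        seen |= frontier
    return seen
```

Running `formulas(3)` produces AAB but never BAA, matching the argument above: no rule ever places a B before the first A.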

For any sufficiently interesting formal system, there will be a gap between what counts as an element of the language and what is a derivable consequence. This is a distinction, roughly, between what is sayable and what is demonstrable. As may be obvious, to furnish this distinction we need a distinction between a well formed formula and a derivable consequence. This ultimately turns on there being a difference between how well formed formulae are produced and how consequences are derived; and this is fleshed out in a distinction between the formal language (like TOY, or the language of propositional logic) and a system of inference on its elements (to be introduced).

So we're going to talk about propositional logic now. Propositional logic has such a distinction; between its well formed formulae and its derivable consequences.

This sets out what it means for a formula to be well formed in propositional logic:

We need an alphabet of propositional symbols.
We need an alphabet of logical connectives on those propositional symbols - this is like negation, implication, and, or, xor and so on.
(1) Every element of the alphabet of propositional symbols is a well formed formula.
(2) If $x$ is a well formed formula, then $\lnot x$ is a well formed formula.
(3) If $x,y$ are well formed formulae, then $f(x,y)$ is a well formed formula, where $f(x,y)$ is any binary logical connective (like "or" or "and").

Analogously to TOY, (1) gives us the set of starting points (any individual propositional symbol) and (2),(3) are the production rules.

Per @Pfhorrest's comment, if you want to be very conservative you can have a single binary logical connective in your domain - nor - and generate all the logical connectives. But from the perspective of building up the larger structures desired in this thread, such parsimony is not required or desirable (IMO anyway).
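To see why nor suffices, here's a hedged sketch (function names are mine) defining negation, disjunction and conjunction from nor alone:

```python
def nor(a, b):
    """Joint denial: true exactly when neither argument is true."""
    return not (a or b)

def NOT(a):
    return nor(a, a)            # not a  ==  nor(a, a)

def OR(a, b):
    return NOT(nor(a, b))       # a or b  ==  not nor(a, b)

def AND(a, b):
    return nor(NOT(a), NOT(b))  # a and b  ==  nor(not a, not b)
```

Checking all four truth-value combinations confirms these agree with the usual connectives, which is the sense in which nor alone generates them.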

This is just like TOY, but the rules are different. Let's have an example: denote logical conjunction by $\cap$. A sequence like we built in TOY goes:

$\displaystyle x \rightarrow \lnot x \rightarrow \lnot x \cap x$

This starts with $x$, assumed to be a propositional symbol of the language and thereby a well formed formula by rule (1); we then apply rule (2), then rule (3) with conjunction and $x$. This well formed formula is "x and not x". Perhaps it becomes obvious at this point why it is silly to require every step in a demonstration to be done explicitly with stated rules; it is easy to construct things which are obviously well formed formulae:

$\displaystyle x \cap (y \implies x) \cap \lnot (x \implies (y \cup \lnot z))$

where $\cup$ is logical disjunction, but demonstrating that this is so based off the rules is tedious and uninformative.
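Rules (1)-(3) translate directly into a recursive check. A minimal sketch, with formulae encoded as nested tuples and the alphabets chosen by me for illustration:

```python
SYMBOLS = {"x", "y", "z"}                # alphabet of propositional symbols
CONNECTIVES = {"and", "or", "implies"}   # some binary logical connectives

def is_wff(f):
    """Is f a well formed formula per rules (1)-(3)?"""
    if isinstance(f, str):                        # rule (1): a propositional symbol
        return f in SYMBOLS
    if len(f) == 2 and f[0] == "not":             # rule (2): negation
        return is_wff(f[1])
    if len(f) == 3 and f[0] in CONNECTIVES:       # rule (3): binary connective
        return is_wff(f[1]) and is_wff(f[2])
    return False
```

The tedious formula above, $x \cap (y \implies x) \cap \lnot (x \implies (y \cup \lnot z))$, encodes as a nested tuple and checks out mechanically, which is exactly the tedium we're happy to delegate.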

If someone wants to start talking about the inferential theory of propositional logic from here that would be fine, if not I'll make the post later.
• 1.3k
so if you start with sets you don't need to "manually" include propositional logic, it's one of the things you can build along the way.

I have accidentally run into a comment on this issue:

Difference from set theory. There are many different set theories and many different systems of type theory, so what follows are generalizations. Set theory is built on top of logic. It requires a separate system like predicate logic underneath it. In type theory, concepts like "and" and "or" can be encoded as types in the type theory itself.

It seems to suggest that your remark would be true for type theory but not for set theory.
• 4k
Now that we've set up how to specify a formal system to produce strings given a set of production rules and symbol primitives; a formal language; we need to speak about what it means to make a system of inference out of a formal language.

People more familiar with the material will probably be trolled extremely hard by my cavalier attitude to equality and the formal status of arguments, but I hope that it provides enough of a flavour of logic to proceed from. I'm happy to receive comments/criticisms/corrections and rewrite things to be more formally valid, but only if the corrections don't sacrifice (or even improve!) the post being intuitive and introductory.

For a given formal language, we can construct the collection of all possible well formed formulae that can be produced from it. This collection of well formed formulae is the vocabulary with which we will express a system of inference on such a formal language.

For the propositional calculus, the set of well formed formulae is obtainable through the following rules:

An alphabet of propositional symbols.
An alphabet of logical connectives on those propositional symbols - this is like negation, implication, and, or, xor and so on.
(1) Every element of the alphabet of propositional symbols is a well formed formula.
(2) If $x$ is a well formed formula, then $\lnot x$ is a well formed formula.
(3) If $x,y$ are well formed formulae, then $f(x,y)$ is a well formed formula, where $f(x,y)$ is any binary logical connective (like or or and).

$\lnot$ is negation.

We want to augment this language with more rules of a different flavour; a more restrictive collection of ways of linking well formed formulae that embeds intuitions regarding demonstrating propositions given assumptions - deduction proper - rather than constructing new grammatically sound propositions based on old ones. We need to know more than the lexicology of a language to be able to reason using it.

To reason formally using a language, the first step is to formalise and stipulate the rules by which well formed formulas can be reasonably linked to others. These formalised stipulations embed intuitions about what a system of inference should be able to do.

What this entails is letting some variables range over the set of well formed formulas defined above, then stipulating linking statements between the variables using logical connectives. Stated abstractly, it isn't particularly intuitive, so I'll proceed with (some of one way of doing) the specifics for propositional calculus.

All of this is still spelling out the syntax of the language, so we're still discussing (1) in the previously linked post.

In an introductory symbolic logic class, you'll be introduced to a natural deduction system and asked to prove things in it. This is going to be a very abbreviated form of that.

We're going to introduce a few more rules that allow us to produce well formed formulas given other well formed formulas; except these encode inference rather than grammaticality. This is not an exhaustive list. Given that $x$ and $y$ are well formed formulae of the language of propositional calculus, we introduce some axioms to relate them inferentially.

(some) Inference Rules:

Conjunction (how $\cap$ "and/conjunction" works):
(CE1) $x \cap y$ lets us produce $x$
(CE2) $x \cap y$ lets us produce $y$
(CI) If we have $x$ and $y$, this lets us produce $x \cap y$.

These just say that if two things can be assumed together, we can conclude each of them separately, and if we can assume each separately we can conclude them together. "There are 2 apples and 3 pears on this shopping list" therefore "there are 2 apples on this shopping list" and "there are 3 pears on this shopping list". These are called conjunction elimination (CE1 and CE2) and conjunction introduction (CI).

Modus ponens (how $\implies$, "implication" works):
(MP) If we have $x$ and we have $x \implies y$, then we can produce $y$.

This tells you that implication lets you transmit assumptions into conclusions whenever those assumptions imply those conclusions.

Now we have to introduce two symbols, $\top$ and $\perp$, which are logical constants (0th order logical connectives)... They will suspiciously resemble true and false respectively for good reason.

Disjunction (how $\cup$, "or/disjunction" works):
(DI) If we have $x$, we can produce $x \cup y$
(DS) If we have $x \cup y$ and $x=\perp$, then we can produce $y$

Informally - since we've not introduced truth or falsity as notions yet - the first says that if we know something is true, we know that either that thing is true or something else is true (which might be quite silly depending on the context, but it's fine here: "1+1=2 therefore 1+1=2 or Santa exists" is rather artificial from the perspective of reasoning in natural language). The second says that if we know at least one of two things is true, and one of those things is false, then we know the other is true.

A common feature of all these rules is that they have an "if, then" form, given a bunch of inputs, we can produce (a bunch of) outputs. They are production rules that work on well formed formulae that capture a notion of inference. If we have a bunch of inputs $\Gamma$, and they let us produce the conclusion $\phi$, we write $\Gamma \vdash \phi$. The distinct elements of the bunch of inputs will be separated with commas (and canny readers will probably notice that comma separation before $\vdash$ does exactly the same thing as making a conjunction of the separate elements).

As a sample of how to use this notation, we can write out some rules involving negation:

(DNE) $\lnot \lnot x \vdash x$
(PBC) $x \implies \perp \vdash \lnot x$

These say "not not x lets us produce x" and "if x implies the falsehood symbol, then we can produce not x". Rephrasing them (informally at this point) in terms of truth and falsity. If we know that not X is false, then X is true; this is double negation elimination (DNE). The second one says that if x implies a falsehood, we can conclude that x is false, this is proof by contradiction.

Some curious features of the formalism are that we don't need to stipulate anything at all to derive (anything which is equal to) $\top$, i.e. $\{\} \vdash \top$ (this writes that there are no premises to the argument, and that the system derives the conclusion with no stipulations). And that if we start with (anything which is equal to) $\perp$ we can derive anything - $\perp \vdash x$. Any statement which is provable from no assumptions is called a tautology (it is equivalent to $\top$ in some sense). The latter is called the principle of explosion.

Two examples of proofs (using the stated axioms):

We want to show that $\perp \vdash x$.
(1) Assume $\perp$.
(2) $\perp \cup x$ (DI)
(3) $x$ (DS).

We want to show that $A \cap B, A \implies C, B \implies D \vdash C \cap D$:
(1) $A$ (CE1)
(2) $B$ (CE2)
(3) $C$ (using $A \implies C$ and MP)
(4) $D$ (using $B \implies D$ and MP)
(5) $C \cap D$ (3,4, CI).
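These two proofs can be replayed mechanically. A hedged sketch, with each inference rule as a Python function on tuple-encoded formulae (the encoding, the "bot" token for $\perp$, and all names are mine, not any standard notation):

```python
def CE1(f):                      # conjunction elimination, left
    assert f[0] == "and"
    return f[1]

def CE2(f):                      # conjunction elimination, right
    assert f[0] == "and"
    return f[2]

def CI(x, y):                    # conjunction introduction
    return ("and", x, y)

def MP(x, imp):                  # modus ponens: from x and x ⟹ y, produce y
    assert imp[0] == "implies" and imp[1] == x
    return imp[2]

def DI(x, y):                    # disjunction introduction
    return ("or", x, y)

def DS(disj):                    # disjunctive syllogism when the left disjunct is ⊥
    assert disj[0] == "or" and disj[1] == "bot"
    return disj[2]

# First proof: ⊥ ⊢ x
line2 = DI("bot", "x")           # (2) ⊥ ∪ x   (DI)
line3 = DS(line2)                # (3) x       (DS)

# Second proof: A ∩ B, A ⟹ C, B ⟹ D ⊢ C ∩ D
p1, p2, p3 = ("and", "A", "B"), ("implies", "A", "C"), ("implies", "B", "D")
a = CE1(p1)                      # (1) A
b = CE2(p1)                      # (2) B
c = MP(a, p2)                    # (3) C
d = MP(b, p3)                    # (4) D
conclusion = CI(c, d)            # (5) C ∩ D
```

Each function refuses (via `assert`) to fire unless its inputs have the right shape, which is the mechanical analogue of only being allowed to apply a rule when its premises are in hand.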

A canny and sufficiently pedantic reader will have noticed that in each of these proofs I've made assumptions, then used those assumptions to derive conclusions. This is called a conditional proof. But if we're being extremely pedantic about the foundations, even that can't be taken for granted (it either has to be some assumption of the system itself or provable from/about the system).

Formally, this looks like:

(CP) If we have that $\Gamma, x \vdash y$, then this means $\Gamma \vdash x \implies y$.
or
$x \vdash y$ implies $\{ \} \vdash x \implies y$.

This is called the deduction theorem; it's what lets you link the ability to produce a well formed formula using the inference rules to the overall ability of the system to produce that well formed formula as a conclusion.

At this point, notice that there are four things kicking about in the discussion that look a lot like entailment:

(1) The rules that produce well formed formulae from other well formed formulae.
(2) The implication symbol internal to the formal language $\implies$
(3) The turnstile $\vdash$, which states that the inference rules produce a conclusion from a set of assumptions
(4) Some further sense of entailment that lets us say $x \vdash y$ implies $\{ \} \vdash x \implies y$, the middle "implies"; this is some 'meta theoretical' entailment, as it's entailment about the formal language under consideration.

All of these are formally distinct entities! They aren't the same thing and don't mean the same thing. But when reasoning about mathematical structures, so long as the context is sufficiently clear, we don't need to keep track of all the distinctions; when reasoning about objects that are sufficiently far removed from what spelling out reasoning even means, we don't have to pay so much attention to all the internal distinctions between entailment types.

But for now we do.
• 4k
The previous post gave a sketch of how a formal language; a grammar of well formed formulae; can be augmented with more structure (inference rules) to produce a system of inference. Together these elements specify the syntax of a formal language. Now we can begin to discuss a semantics.

So far we've defined a collection of well formed formulas for propositional logic; then we've defined a bunch of ways of linking well formed formulas together. Now we're going to develop an idea of what it means for a well formed formula to be true or false. Truth and falsity are semantic properties; specifying how they behave over the well formed formulae of a language specifies the semantics of that language.

It's probably strange to consider all the things we've done so far as 'meaningless', insofar as they don't have any formal semantic properties, but nevertheless seem to behave quite a lot like how we reason logically. The trick here is that the semantic properties of a formal language, and of a system of inference rules on top of it, form a different structure again: we need some way of interpreting what we've done so far.

I don't think it's a coincidence that the systems we use and study do indeed have tight relations between the syntax and the semantics, as does natural language (if you're not Chomskybot). We set things up in a formal language so that the syntax and semantics relate well.

An interpretation is just a mapping of well formed formulae to true or false. How a well formed formula of propositional logic is mapped to true or false completely determines the interpretation of that well formed formula. Interpretations in propositional logic are done truth functionally. If we have a well formed formula of logical connectives over propositional symbols, how that formula evaluates to True or False is completely determined by what happens when its propositional symbols are True (T) or False (F).

These are typically spelled out with truth tables for the primitive logical connectives. A truth table specifies the argument propositions on the left and the formula containing them on the right; the truth values on the left lead to the truth value of the well formed formula on the right. For example, here are the tables for $\cup$ and $\implies$.
$\begin{array}{c|c|c}A & B & A \cup B\\T &T&T\\T & F & T\\F& T & T\\F& F & F\end{array}$

$\begin{array}{c|c|c}A & B & A \implies B\\T &T&T\\T & F & F\\F& T & T\\F& F & T\end{array}$

The truth value of a complex formula is defined as the truth value obtained by iteratively evaluating all its logical connectives from the propositional symbols up. E.g. for $(A \cup \lnot B) \implies (B \implies A)$, we set out all the combinations for $A,B$, evaluate $\lnot B$ as a function of $B$, then use that to evaluate $A \cup \lnot B$. We then evaluate $B \implies A$ as a function of $B$ and $A$ and then finally the whole thing. If you do this as an exercise, you should see that it's always true.
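This bottom-up evaluation is easy to mechanise. A hedged sketch (tuple encoding and names are mine), which evaluates a formula under an assignment and checks the example above against every assignment:

```python
from itertools import product

def ev(f, v):
    """Evaluate formula f under the assignment v (symbol -> bool),
    from the propositional symbols up."""
    if isinstance(f, str):
        return v[f]
    op = f[0]
    if op == "not":
        return not ev(f[1], v)
    if op == "or":
        return ev(f[1], v) or ev(f[2], v)
    if op == "and":
        return ev(f[1], v) and ev(f[2], v)
    if op == "implies":
        return (not ev(f[1], v)) or ev(f[2], v)

def is_tautology(f, symbols):
    """True when f evaluates as true under every truth value assignment."""
    return all(ev(f, dict(zip(symbols, vals)))
               for vals in product([True, False], repeat=len(symbols)))

# (A ∪ ¬B) ⟹ (B ⟹ A), the example above
f = ("implies", ("or", "A", ("not", "B")), ("implies", "B", "A"))
```

Running `is_tautology(f, ["A", "B"])` does the four-row truth table for you, while something like "x and not x" evaluates as false on every row.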

Two well formed formulae are semantically equivalent when and only when they evaluate to the same truth value on every single assignment of truth values of their constituent propositions. A well formed formula which always evaluates as true is equivalent to $\top$, a well formed formula which always evaluates to false is equivalent to $\perp$.

A well formed formula which always evaluates as true is called a tautology, and if a collection of propositions linked by logical connectives - a well formed formula $x$ of the propositional logic - always evaluates as true under every truth value assignment, we write $\{\} \vDash x$. This is read "the system (propositional logic) semantically derives x". In contrast, $\vdash$ from the previous posts is read as "syntactically derives". This is a distinction between "if this is true then that is true" (semantic entailment) and "if this is produced then that is produced" (syntactic entailment).

I believe this is quite perverse, but I think it serves the point of hammering home the wedge driven between syntax and semantics which we're beginning to bridge; we can write:

$\begin{array}{c|c}\perp & F\end{array}$

$\begin{array}{c|c}\top & T\end{array}$

Since $\perp$ and $\top$ were syntactic elements, we still need to interpret them to true or false. Even though we've set them up to behave just like true and false, and to represent tautologies and necessary falsehoods when appropriate in the syntax, we still need to map them to their semantic analogues to complete the picture; that is, to provide a means of interpretation for all our well formed formulae.

It's easy to lose track of why the truth tables were introduced, so I'll spend some more words on it. When we're setting up a logic, we want to ensure that if what we assume is true, what we can produce from it is also true. "What we assume" is a well formed formula, "is true" is stipulated by assuming it, "what we can produce from it" are the inference rules applied to the well formed formula. It would be nice if all these notions coincided, but it is not guaranteed to hold.

So it is an open question, given the interpretation of symbols, whether we can produce all and only true well formed formulae given the use of the inference rules. The constructions are quite different, no? We have a bunch of rules that produce well formed formulae, and we have a way of assigning truth values to well formed formulae - do the inference rules play well with the truth value assignment? What are the relations between syntactic and semantic derivation in a logic?

These are important questions to answer for a logic. The possible gap between what is a true statement of the system - the well formed formulae which evaluate as true - and what well formed formulae the system can produce through the inference rules is the site of the famous incompleteness theorems. This gap is closed by a few properties - soundness, completeness, consistency, compactness and decidability - which are necessary for a logic to behave in all the ways we pre-theoretically expect, but famously completeness and consistency cannot both hold for systems expressive enough to encode arithmetic (with a classical logic).

To discuss these issues in more depth, we need to introduce the idea of a model and relate it to the truth value assignments. The idea of a model of a system of axioms and inference rules will turn out to be how we can turn a constructed object in the language into an object that behaves how we specify with the specification of axioms (using the inference rules).
• 726
Looks like an error in your truth table for implication. F => F is T.

But I haven't been following the discussion and you may be into something else.
• 2.8k
Wow that’s a lot more detail than I was expecting! I’m away for the weekend now so I won’t have time to read up until a few days from now.
• 4k

Thanks, fixed.

The presentation is very non-standard. If and when you've read it I'd appreciate you pointing out flaws and errors as I'm mostly winging it but checking sources to make sure I'm not doing something completely insane.

The one bit I'm already skeptical about is putting in syntactic copies of contradiction and tautology as logical connectives (of 0th order, constants), then feeding them into the semantics to get false and true. I don't think it does anything to the overall logic, despite being redundant. The only place I've used the syntactic copy of the falsehood symbol is in presenting the disjunctive syllogism as an axiom (I think I've seen this somewhere before, Gentzen?).

I'm trying to stress in the presentation that there are a lot of ways to end up in the same place, so I've not defined a system with primitive logical connectives, just let them be whatever.

I'm a lot less competent on the details for quantifiers, so I expect to write many more batshit insanities if I end up writing that bit.