## Musings On Infinity

• 2.1k
Can’t a real number go on and on for infinity? Or is that an irrational number...?
• 2.1k
"
I can't speak to non-pointed set-theoretic probability theory. I know about ETCS (elementary theory of the category of sets) so I understand that sets don't require points. But as to probability, I can't say. However if you take finite line segments as sets, you seem to lose intersections. Are these closed or open segments? You have a reference for this interpretation of probability theory?
— fishfry

Let's look at Kolmogorov axioms here: (http://mathworld.wolfram.com/KolmogorovsAxioms.html)

Everything that is needed is a set $W$, some $Q_i$ that can be "anything", a function from the $Q_i$ to real numbers, and a "complement" function on the $Q_i$.

Let's consider as our probability space the segment [0, 1].

I can take for $Q_i$ the closed sets included in [0, 1] made of a countable number of non-overlapping segments with non-zero length, and for $W$ the set of all these sets. The complement of a $Q_i$ will be the closure of the remaining part of [0, 1] when I remove the $Q_i$. There are no $Q_i$ of zero measure (and this is very reasonable for a probability theory: every event that can happen must have a non-zero probability of happening).

The complement of a $Q_i$ overlaps with $Q_i$ only on the end points, and that is compatible with the axioms: the sum of the measures adds up to 1.

The elements of $W$ are simply ordered pairs of real numbers instead of single real numbers, but everything works in the same way: from the point of view of set theory, two segments are equal if and only if their extremes are equal; there is no mention of overlapping segments at all.

The definition of overlapping segments is the usual one: the higher number of the first pair is bigger than the lower number of the second pair.

There is no need to consider infinite sets of points, and for probability theory there is no need to speak about points at all: probability theory does not need zero-measure events, and no physical possible event has zero probability.

P.S. This is only a very simple example to show that it's not contradictory to define probability without points. Pointless topology is much more general than this and makes use of the concept of "locales" (https://en.wikipedia.org/wiki/Pointless_topology)
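The segment-based construction above can be sketched concretely. A hedged illustration (the function names are mine, not from the thread): an event is a list of disjoint closed segments of [0, 1], its measure is the sum of their lengths, and its complement is the closure of the rest of [0, 1], which meets the event only at endpoints:

```python
# Hedged sketch of the pointless events above: an event is a list of
# disjoint closed segments (a, b) in [0, 1], with non-zero length.
def measure(event):
    """Total length of the event's segments."""
    return sum(b - a for a, b in event)

def complement(event, lo=0.0, hi=1.0):
    """Closure of [lo, hi] minus the event; overlaps the event only at endpoints."""
    result, cursor = [], lo
    for a, b in sorted(event):
        if a > cursor:
            result.append((cursor, a))
        cursor = b
    if cursor < hi:
        result.append((cursor, hi))
    return result

event = [(0.25, 0.75)]
print(complement(event))  # [(0.0, 0.25), (0.75, 1.0)]
# Measures add up to 1, as required: shared endpoints have zero length.
assert measure(event) + measure(complement(event)) == 1.0
```

Note that the complement shares its endpoints with the original event, exactly as in the construction: the two closed sets overlap, but the measures still sum to 1.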

This to me seems to be what @Dfpolis was saying that mathematics must be instantiated in nature first otherwise it is pointless to talk about numbers existing in a Platonic ideal realm. That’s what I got from what you both were saying. Pardon the intrusion.
• 189
Well, I wrote about physical events because probability is a concept that belongs both to physics and to mathematics, but from the point of view of mathematics alone (the Kolmogorov axioms), if the axioms do not require zero to be a valid probability measure, you can avoid considering events with zero probability.
But of course in this case the axioms are chosen with the intent to be the minimal constraints that have to be satisfied by any definition of probability that makes sense in physics.
This is very common: the mathematical definition is derived from the physical intuition, but is used as an axiomatic theory completely disconnected from physics. That's why it often happens that after several years somebody finds a "model" of the axiomatic theory that does not correspond to the physical intuition at all but verifies all the axioms, so from the point of view of mathematics it is a valid model.
The point is that axiomatizations are needed if you want to use formal logic (and you want to use formal logic because physical intuition is not enough to avoid paradoxes), but the axiomatization very often does not capture exactly the physical concept that you have in mind. And that happens even for the most elementary physical concepts, such as the concept of natural numbers.
• 834
↪fishfry Can’t a real number go on and on for infinity? Or is that an irrational number...?

You mean the decimal representation of a real number? Yes, they're infinitely long. For example 1/3 = .333... and sqrt(2) = 1.4142... and so forth. Even terminating decimals like 1/2 = .5 = .4999... have infinitely long representations.

But the representation isn't the real number. The real number is the abstract thing pointed to by the representation, just like 2 and 1 + 1 are two different representations of the same abstract number.
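The distinction between a number and its representation can be made concrete in code. A small sketch (names are mine): the digits of a decimal representation are produced one at a time by long division and may never terminate, while the abstract number itself is a single finite object:

```python
from fractions import Fraction

def decimal_digits(numerator, denominator, count):
    """First `count` digits after the decimal point, by long division."""
    digits = []
    remainder = numerator % denominator
    for _ in range(count):
        remainder *= 10
        digits.append(remainder // denominator)
        remainder %= denominator
    return digits

# 1/3 = 0.333... : the representation never terminates,
# but the abstract number is just the finite object Fraction(1, 3).
print(decimal_digits(1, 3, 5))  # [3, 3, 3, 3, 3]

# Two different representations, one and the same number.
assert Fraction(1, 2) == Fraction(2, 4)
```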
• 834
I guess I got confused because you guys were talking about numbers as points, where I was thinking about zero-dimension points on a two-dimensional line. This stuff to me right now is very esoteric, as I don’t remember the terminology for the different kinds of numbers. I was always better at calculations like an engineer than I was so much interested in or ever had any exposure to theory.

A line is only one dimensional. A plane is two dimensional. Points are zero-dimensional. How a bunch of zero dimensional points make a line is a bit of an ancient mystery. Newton thought of a point as tracing out a curve as the point moves through space. But that only pushes the mystery to space itself.

There's really only one standard real number line that most people care about, the one from high school math used by physicists and engineers. The other stuff is esoteric. Constructivism is the idea that every mathematical object must be able to be explicitly constructed. That leads to a different kind of notion of the real line. But I really can't speak for constructivist philosophy, since I don't know much about it. You can see the difficulties @Mephist is having in explaining it to me!
• 2.1k

Thank you both for your patience with me. This is interesting stuff.
• 189
Yes, I find it interesting too, and probably my problem is that I learned about logic only from a practical point of view: proving the correctness of digital circuits and algorithms. Then I discovered that even category theory and topology are much more "practical" than they seem to be, and the best thing about computer-based proof assistants such as Coq is that proofs are real computations: you don't have to ask an expert whether something you think is right or wrong: just write a program to "build" the proof, and check if it works. What seem to be abstract mathematical concepts become very concrete and start to make sense!

But probably this is the opposite of how mathematics and logic are usually taught, and I have to confess that when I studied analysis at university (I studied engineering), I learned the rules to do the calculations and remembered how to prove the theorems, but I never studied anything about the "foundations" of mathematics, such as Zermelo-Fraenkel set theory.

So, I don't really feel qualified as a "teacher" to explain to @fishfry or to you about philosophical questions related to infinite sets, but I have very clear ideas of how formal logic systems work, and even my own theory of how mathematics "works".
• 413
But our use of real numbers (at least for the most part) is in integrals and derivatives, right?
So the "dx" infinitesimals in integrals and derivatives should be the morally correct model? This is how real numbers are intuitively used since the XVII century.

First of all, does our imprecise use of real numbers warrant a precise account?

What does it even mean to say that a real number denotes a precise quantity?

For in what sense can a formal system speak of universal quantification over all of the natural numbers?

Recall that in a computer program, an infinite loop is merely intended to be an imprecise specification of the number of actual iterations the program will later perform; it is only the a priori specified number of iterations that isn't upper-bounded in the logic of a program, that is to say before the program is executed.

Therefore if the syntax and semantics of a formal system were to mimic a programmer's use of infinity, it wouldn't conflate the 'set' of natural numbers with an a priori precise set, but would instead refer to this 'set' as an a priori and imprecise specification of an a posteriori finite construct whose future construction is generally unpredictable and isn't guaranteed to exist for a given number of iterations.

A universal quantifier over N would no longer be interpreted as representing a literally infinite loop over the elements of N, but as representing a terminating loop over an arbitrary and unspecified finite number of elements, where the loop's eventual termination is guaranteed, even if only through external intervention outside of the constructive definition of the program or formal system.

By interpreting 'infinite' quantification as referring to an arbitrary finite number of elements, the law of double negation is then naturally rejected; for an arbitrarily terminated loop cannot conclusively prove anything in general.
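This reading of quantification can be sketched in code (a hedged illustration of the idea, not anyone's actual proposal): the "universal" check is an ordinary terminating loop whose finite bound is supplied from outside:

```python
# Hedged sketch: "for all n" read as a terminating loop over an
# arbitrary finite number of elements, with the bound chosen a posteriori.
def forall_bounded(predicate, bound):
    """Check predicate(n) for n = 0 .. bound-1; the bound is external."""
    return all(predicate(n) for n in range(bound))

# Example: commutativity of adding 1, checked for an arbitrary finite prefix.
assert forall_bounded(lambda n: n + 1 == 1 + n, 1000)
```

The loop can only ever confirm the predicate up to the bound it was given, which is precisely why, on this interpretation, an arbitrarily terminated loop proves nothing conclusive about "all" elements.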

This interpretation also has the philosophical advantage that semi-decidable functions are no longer informally misinterpreted as talking about their 'inability to halt', and likewise for formal systems supposedly 'talking about' their inability to prove certain statements.

So in short, an imprecise and pragmatic model of the real numbers is what is important, corresponding to how numerical programmers implement them in practice, with particular attention paid to numerical underflow.
• 189
First of all, does our imprecise use of real numbers warrant a precise account?

What does it even mean to say that a real number denotes a precise quantity?
— sime

I am not sure what you mean by "imprecise use of real numbers", however the rules of calculus give precise results:

* The volume of a sphere inscribed in a cylinder is exactly 2/3 of the volume of the cylinder (https://en.wikipedia.org/wiki/On_the_Sphere_and_Cylinder)

* $e^{\pi i}$ is exactly equal to -1

For in what sense can a formal system speak of universal quantification over all of the natural numbers?
— sime

The standard way is to assume the principle of induction: if a proposition is true for n=0, and from the fact that it is true for n you can prove that it is true for n+1, then you assume that it is true for all natural numbers. (https://www.encyclopediaofmath.org/index.php/Induction_axiom)
It cannot be verified in a finite number of steps, so it's assumed as an axiom.
But assuming it to be false is not contradictory: you can assume the existence of numbers that can never be reached by adding +1 an indefinite number of times. (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic)
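The principle in action can be sketched in Lean 4 (a minimal illustration; structural recursion on n is exactly induction): from the base case and the successor step we conclude that every natural number satisfies 0 + n = n.

```lean
-- Induction as a proof rule: P(0) and P(n) → P(n+1) give ∀ n, P n,
-- here for P n := (0 + n = n). Lean's addition recurses on the second
-- argument, so the base case is definitional and the step is congruence.
theorem zero_add' : ∀ n : Nat, 0 + n = n
  | 0 => rfl
  | n + 1 => congrArg Nat.succ (zero_add' n)
```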

So in short, an imprecise and pragmatic model of the real numbers is what is important, corresponding to how numerical programmers implement them in practice, with particular attention paid to numerical underflow.
— sime

I don't understand what you mean by "a pragmatic model". Do you mean that there should be some sort of "approximate" logic, like some sort of fuzzy logic? (https://en.wikipedia.org/wiki/Fuzzy_logic). The standard "epsilon-delta" definition of limits is not "pragmatic"?
• 413
For in what sense can a formal system speak of universal quantification over all of the natural numbers?
— sime

The standard way is assuming the principle of induction: if a proposition is true for n=0 and from the fact that is true for n you can prove that is true even for n+1, then you assume that is true for all natural numbers. (https://www.encyclopediaofmath.org/index.php/Induction_axiom)
It cannot be verified in a finite number of steps, so it's assumed as an axiom.
But assuming it to be false is not contradictory: you can assume the existence of numbers that can never be reached by adding +1 an indefinite number of times. (https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic)

So then the question is: in what sense does the principle of induction speak of universal quantification over all of the natural numbers?

Take any predicate P with a single argument, then take as axioms

i. P(0)
ii. (x) ( P(x) => P(x+1)) ,

where ii. denotes universal quantification over x, where x is a bound variable.

Then we say, "By informal convention, this predicate is now said to be 'true' for 'all' x."

But what exactly has our demonstration achieved in this extremely small number of steps?

Do we really want to insist that our use of these axioms is equivalent to a literal step-by-step substitution of every permissible number into P?

Do we even want to say that we are assuming the existence of this inexhaustible substitution?

All that i and ii define in a literal sense is a rule that permits the substitution of any number for x.

Nowhere does this definition imply that induction is a test for the truth of every substitution.

Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject its interpretation as a soothsayer of theoremhood.
• 413
Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject its interpretation as a soothsayer of theoremhood.
— sime

Sorry, that's confusing and misleading; by theoremhood I was referring to truth in the respective model of the axioms.

What I mean is that mathematical induction 'proves' a universally quantified formula in a completely vacuous sense, in that the 'conclusion' of the induction, namely the universally quantified formula (x) P(x), is merely an abbreviation of the very axioms i and ii.

On the other hand, for universally quantified formulas over infinite domains that do not possess a proof by mathematical induction, there is no reason to support their informal interpretation as enumerating the truth or construction of the predicates they quantify over, by the very fact that they do not correspond to a rule of substitution.
• 189
Do we really want to insist that our use of these axioms is equivalent to a literal step-by-step substitution of every permissible number into P?

Do we even want to say that we are assuming the existence of this inexhaustible substitution?
— sime

I would say that "forall x in S", where S is an arbitrary set, is not interpreted as a step-by-step substitution, because in general it is not possible to define a total order on every set: so not only is it impossible to visit all the elements during an iteration, it's not even possible to define what step n+1 should be for an arbitrary step n.
In general the interpretation is "choose whatever element x in S", or "choose a random element x of S".
The meaning is given only by the rules of logic, and the rules say that if you have "forall x exists y", then you have a function from x to y, and a function is assumed to be a primitive concept. So I guess there is no real definition of what "forall" means.

All that i and ii define in a literal sense is a rule that permits the substitution of any number for x.

Nowhere does this definition imply that induction is a test for the truth of every substitution.

Therefore it is coherent to accept mathematical induction as a principle of construction, and yet reject its interpretation as a soothsayer of theoremhood.
— sime

Well, the meaning of the axiom is exactly this: there are no natural numbers other than the ones you can reach by iterating the successor function starting from zero. So, there are no "unreachable" natural numbers.
Or, put another way: if a property holds at zero and is passed on from each number n to n+1, then whatever natural number you choose will have that property.

P.S. If you don't assume the induction principle to be true, there could be, for example, several infinitely long "chains of numbers" starting from different points, not only from zero. Or you could assume that there is an integer so far away from zero (infinitely far) that it will never be reached by counting, even though each number is "attached" to a following one, as in nonstandard arithmetic
(https://en.wikipedia.org/wiki/Non-standard_model_of_arithmetic).
• 189
On the other hand, for universally quantified formulas over infinite domains that do not possess a proof by mathematical induction, there is no reason to support their informal interpretation as enumerating the truth or construction of the predicates they quantify over, by the very fact that they do not correspond to a rule of substitution.
— sime

Yes, exactly. Not every domain of quantification (set in set theory or type in type theory) is supposed to be enumerable, or iterable.
• 189
I do not believe that an oracle in computer science is in any way analogous to the axiom of choice. If you know of a connection perhaps you could explain it. An oracle allows you to compute something that was formerly noncomputable. The axiom of choice doesn't let you compute anything. In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word.

Hi @fishfry.

Reading again what you wrote, I think that maybe I am able to explain what I meant here.

Here's the axiom of choice, taken from wikipedia: (https://en.wikipedia.org/wiki/Axiom_of_choice)

$\forall X [ \emptyset \notin X \Rightarrow \exists f:X \to \bigcup X, \forall A \in X ( f(A) \in A ) ]$

In a constructivist logic interpretation, it says that if you have built a set of nonempty sets X, using this axiom you can build a function f.
In other words, this axiom is like a black box function (or an oracle), where you put as input the set of nonempty sets X and you get as an output the choice function f.
Then, you can use the choice function f as you do in ZFC. The resulting proof is the same, only that in ZFC you don't treat the axiom as if it were a function: you simply apply it.
In this sense the axiom is an oracle: it allows you to compute f without knowing how the function that produces f from X is implemented, and f does not have to be computable or incomputable: you simply don't care about its computability.

You write "In fact the elements of the sets given by the axiom of choice have no properties at all and could never be computed by any stretch of the word". But in constructivist logic it is the same thing: if X is the class of all subsets of real numbers determined by the equivalence relation "two numbers are equivalent if their quotient is rational", you can define A as the class determined by $\pi$, and then write f(A) to get a real number. "f" is treated in a purely symbolic way, never computed.

I think that the main source of confusion is that usually a logic is called "constructivist" when it doesn't have any axioms: in that case there will be no "oracle" functions, and "forall x ..., exists y ..." will correspond to really computable functions from x to y. But even in this case, a real number will typically be defined as a Cauchy sequence of rational numbers (a limit, in the usual way), not as a function that computes the digits of the decimal representation of the number, and the fact that Cauchy sequences converge will be part of the axiomatic definition of the real numbers: it will not be a "logical axiom", but it will still be treated in the same way as an "oracle", because it's part of the definition of what real numbers are.
It's the same as when you define natural numbers: you don't have to prove the principle of induction: it's simply part of the (axiomatic) "definition" of what natural numbers are.

If you add the missing logical axioms to constructivist logic (excluded middle and choice), treating them as "oracles" (excluded middle can be seen as a function that returns "P1 or P2"), you obtain classical logic back, only with a slightly different "interpretation" of the rules.

I don't know if this explanation makes sense: maybe the best way to explain would be to show some examples of proofs..
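One way to make the "oracle" reading concrete: for a *finite* family of nonempty sets a choice function is trivially computable, and the axiom only adds strength for arbitrary families, where no algorithm is supplied. A hedged Python sketch (names mine):

```python
# Hedged illustration: for a finite family of nonempty finite sets,
# a choice function is computable. AC asserts such an f exists for
# *arbitrary* families -- a black box ("oracle") with no implementation given.
def choice(family):
    """Map each nonempty set in the family to one of its elements."""
    return {frozenset(s): min(s) for s in family}

family = [{3, 1, 4}, {1, 5}, {9}]
f = choice(family)
# f(A) is an element of A for every A in the family, as AC demands.
assert all(f[frozenset(s)] in s for s in family)
```

In a proof one uses only the *interface* (f(A) ∈ A), never the implementation, which is exactly how the oracle reading treats the axiom.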
• 834
In this sense the axiom is an oracle: it allows you to compute f

As I think of it, perhaps a new axiom is an oracle. I understand that point of view.

So I'm willing to be agreeable on this point, but still no less confused on the issues I've raised earlier.

Also ... no form of the AC could possibly be construed as a computation. That, I just do not see at all. It's a pure statement of existence. A particular set exists with no means to possibly compute or construct or identify any of its elements. Whether this is worse or the same as the powerset axiom I can't say. Perhaps it's no worse. But a lot of people historically regarded it as worse. I can't put any of that in perspective.

I've re-read the SEP article on constructive math with a much deeper level of comprehension than I've had in the past. Your efforts have not been in vain.
• 189
I've re-read the SEP article on constructive math with a much deeper level of comprehension than I've had in the past. Your efforts have not been in vain.

I don't remember which one is the SEP article. Could you send me a link?

Also ... no form of the AC could possibly be construed as a computation. That, I just do not see at all. It's a pure statement of existence. A particular set exists with no means to possibly compute or construct or identify any of its elements.

Yes, the AC can't be construed as a computation, and it's not part of constructivist logic. What I am saying is that AC can be added to a constructivist logic framework such as Coq if you want to use standard logic: standard logic can be obtained as an extension of constructivist logic (in a similar way as metric geometry can be seen as an extension of projective geometry): if you want to use classical logic in Coq (and I think that's how Coq is used most of the time) you just have to import the Classical_Prop library: https://coq.inria.fr/library/Coq.Logic.Classical_Prop.html#

The possible forms of the axiom of choice (without assuming EM) and the relations between them are here: https://coq.inria.fr/library/Coq.Logic.ChoiceFacts.html
• 834

I don't remember which one is the SEP article. Could you send me a link?

https://plato.stanford.edu/entries/mathematics-constructive/

They echoed many of the things you've been saying, such as that there are several (four in fact) variations of constructivism, and that there's a constructivist logic ... well basic stuff, but stuff I've been reading without comprehension before and can now understand. Also I've learned that a lot of this goes back to the nineteenth century. Poincaré for example was troubled by nonconstructive thinking.

That's why you should never be frustrated about not being able to get through to me. You are getting through by osmosis. I've never bothered to try to understand constructivism at a technical level before and this is an education. The article talked about putting explicit bounds on the rates of convergence of Cauchy sequences, and that related back to the Italian paper, so it's starting to feel familiar. One learns by repetition.

I will say one thing though. I still just don't get constructivism as a philosophy. I DO understand that certain formulations of constructive math lend themselves beautifully to computerized proof systems. Nobody would deny the digital deluge that's flooding the world right now, why should math be spared?

But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it." But the world is full of things that have no algorithm. The world itself is not an algorithm.

And here I think is the source of my discomfort with constructivism. Many these days believe that the world IS an algorithm. I disagree with that proposition. To the extent that some -- not all, ok, but some -- aspects of constructivism are in support of the belief that everything that exists must be an algorithm; I oppose that aspect. I instinctively believe that the constructivists are missing a lot: namely, everything that is important but that can't be explicitly constructed.

Yes, the AC can't be construed as a computation, and it's not part of constructivist logic. What I am saying is that AC can be added to a constructivist logic framework such as Coq if you want to use standard logic: standard logic can be obtained as an extension of constructivist logic (in a similar way as metric geometry can be seen as an extension of projective geometry): if you want to use classical logic in Coq (and that's how it's used most of the time) you just have to import the Classical_Prop library:

You've said this to me probably several times and I finally understand it. If I've got this right, you can add various axioms to constructive math and recapture standard math. So we get the benefit of computerization and we can still do the things standard math cares about. Which is pretty interesting, if true.

Ah, "Classical_Prop." Coq has a library you can import that makes it act like standard math. You've connected adding axioms in a formal system, with with importing a library in a program. Wow. Thanks. Great insight. I got it.
• 189
I will say one thing though. I still just don't get constructivism as a philosophy. I DO understand that certain formulations of constructive math lend themselves beautifully to computerized proof systems. Nobody would deny the digital deluge that's flooding the world right now, why should math be spared?

But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it." But the world is full of things that have no algorithm. The world itself is not an algorithm.

I completely agree. I don't believe that the world is an algorithm either. And mathematical objects (and of course physical objects too) are not algorithms. But proofs of existence are algorithms, in any formal logic.
In my opinion, the reason for treating the axiom of excluded middle as less "fundamental" than the others is that it is related to the more philosophical and abstract notion of truth: a proposition is either true or false.

In classical logic, when you derive a proposition P you interpret it as a proof that "P is true". When you derive a proposition "not P" you interpret it as a proof that "P is not true (i.e., false)".

Now, it turns out that if you replace "P is true" with "P is provable" and "P is false" with "P is absurd" (meaning P implies a logical contradiction), all the rules and axioms of classical logic will still be intuitively justified, except one: the axiom of excluded middle. A proposition can be neither provable nor absurd.

So, the axioms of logic without excluded middle still make sense, but with a different interpretation of the meaning of propositions. This is in some way similar to what often happens in mathematics: a simplified model appears to be in some way more "fundamental" if it is consistent and still compatible with the full model. For example, in projective geometry there is no concept of length, but the concepts of points and intersecting lines still make sense.
Well, in the same way you can see intuitionistic logic as the part of classical logic that doesn't depend on the notion of truth. And even negation does not have the same meaning: "not P" does not mean "P is false", but "P implies a contradiction". In fact, the "not" operator is not even needed in intuitionistic logic: just replace every occurrence of $\neg P$ with $P \Rightarrow \bot$.
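In Lean 4, for instance, this is literally how negation is defined, so the equivalence is provable by reflexivity (a minimal illustration):

```lean
-- ¬P is definitionally P → False, so the two sides of the
-- equivalence are the very same proposition.
example (P : Prop) : ¬P ↔ (P → False) := Iff.rfl
```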

Following the analogy with projective geometry, it is true that not all theorems can be proved without using excluded middle, but it still makes sense to distinguish the ones that can be proved without it from the ones that can't, because the former are valid for a larger number of models (or, put another way, are true for more "fundamental" reasons).

The same thing happens in all of mathematics: why would you want to isolate group theory from Galois theory about the solution of polynomial equations? Because group theory is more general (since its theorems depend only on a smaller set of assumptions), and can therefore be seen as more fundamental.

[ I have to go now: I'll explain in the next post what all this has to do with algorithms ]
• 189
Now, what does intuitionistic logic have to do with algorithms?

Here's the quick answer: in intuitionistic logic a proposition P can be interpreted as the "description" of the algorithm that has been executed (using the rules of logic) to build the proof of P. And this of course has to be a deterministic algorithm that terminates in a finite number of steps, because every proof must be made of a finite number of steps.

Now, the problem is that the long answer is much, much longer. So, I'll describe the consequences of the short answer before.

The first thing to clarify is that intuitionistic logic (more concretely, these rules: https://en.wikipedia.org/wiki/Natural_deduction#/media/File:First_order_natural_deduction.png) is only a first-order logic, and not a theory of sets.
ZFC, instead, is classical first-order logic (intuitionistic + EM), plus the axioms of set theory.
Now, the axioms of ZFC "work well" with classical logic, but not with intuitionistic logic. In particular, the correspondence between algorithms and proofs works only in the very particular formulation of "natural deduction", with no axioms at all. The addition of the ZFC axioms would introduce the $\in$ predicate, which does not respect the "propositions as algorithms" interpretation. Some of the reasons why ZFC doesn't "fit well" with intuitionistic logic can be found here: (https://en.wikipedia.org/wiki/Constructive_set_theory)
"""
In 1973, John Myhill proposed a system of set theory based on intuitionistic logic taking the most common foundation, ZFC, and throwing away the axiom of choice (AC) and the law of the excluded middle (LEM), leaving everything else as is. However, different forms of some of the ZFC axioms which are equivalent in the classical setting are inequivalent in the constructive setting, and some forms imply LEM.
"""
Basically, the problem is that ZFC axioms imply EM. So even if you exclude EM from logic axioms, you can still derive it from ZFC axioms.

However, there is a "natural" extension of the rules of intuitionistic logic that respects the correspondence between algorithms and proofs and is interpreted in a similar way as a theory of some sort of "sets": intuitionistic type theory. (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)

In intuitionistic type theory sets are of course replaced by types (which are not exactly the same thing), but there is "practically" a one-to-one correspondence between propositions expressed in the two languages. From a formal point of view, however, there is a big difference: ZFC is implemented as axioms on top of an independent first-order logic. Intuitionistic type theory instead is an extension of the rules of intuitionistic first-order logic to a higher-order logic (basically by adding quantification over functions), still making no use of axioms at all.

As I wrote in the previous post, intuitionistic logic does not speak about truth but about provability and proofs. And proofs are, as in every formal logic, algorithms.
The fundamental difference is that proofs in classical logic are not an essential part of the theory: the important thing is that a proof of a proposition P exists. If a proof exists, the proposition P is true, and that is all that matters about P. The algorithm that has been used to prove P is not considered part of the language.
In intuitionistic logic, instead, the language speaks about proofs, and the terms of the language are meant to represent proofs (that are algorithms), and then the language speaks about algorithms.

For example, in type theory I can write the expression "x : P", meaning "x is the proof of the proposition P". "x" in this case represents an algorithm.
Then I can have the expression "f : P -> Q", meaning that "f" is the proof that "P implies Q". This means that "f" is an algorithm that takes as input a proof of P (another algorithm) and returns as output a proof of Q (a third algorithm).
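The proofs-as-functions reading can be shown in Lean 4 (a minimal sketch): implication elimination is function application, and a proof of a conjunction is a pair of proofs.

```lean
-- A term x : P is a proof of P; a proof f : P → Q is a function
-- from proofs of P to proofs of Q, so modus ponens is application.
example (P Q : Prop) (f : P → Q) (x : P) : Q := f x

-- A proof of P ∧ Q is literally a pair of proofs.
example (P Q : Prop) (hp : P) (hq : Q) : P ∧ Q := ⟨hp, hq⟩
```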
The "side effect" of this interpretation is that I never have to verify how a proposition has been proven, because proofs are always "recorded" together with the propositions. The propositions are interpreted as types, and the terms are interpreted as proofs (always written in the form "x : P") and checking the proof
of P means checking if the term x has type P (this is done with an algorithm that is part of the meta-theory).
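The propositions-as-types idea above can be sketched in a few lines of TypeScript (illustrative names only, not any proof assistant's actual API): a proof of an implication is literally a function, and the type checker plays the role of the proof checker.

```typescript
// Propositions as types, proofs as programs: a minimal sketch.
// A proof of "P and Q" is a pair; a proof of "P implies Q" is a function.
type Conj<P, Q> = [P, Q];

// Conjunction introduction: from proofs of P and Q, build a proof of P /\ Q.
function conjIntro<P, Q>(p: P, q: Q): Conj<P, Q> {
  return [p, q];
}

// Implication elimination (modus ponens): apply a proof of P -> Q to a proof of P.
function modusPonens<P, Q>(f: (p: P) => Q, p: P): Q {
  return f(p);
}

// "Checking the proof of P means checking whether the term x has type P":
// here the TypeScript type checker plays that role at compile time.
```

If `modusPonens` type-checks when applied to some `f` and `p`, that is exactly the check that the inference step is legal.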

A full explanation of how all logical operators are interpreted would probably take me several pages, and I can't write a book here... :smile: (but there are a lot of good books on type theory)

However, back to the point of existential propositions: if I build a proof of "exists x:A, P(x)", it means that I build a proof of A called x, together with a proof of the proposition P(x) (where P(x) is what is called a "dependent type").
If A is the type of real numbers, a Cauchy sequence is a "proof" of A. Meaning: a real number is treated formally as if it were a proof that a real number exists, and corresponds to the algorithm that has been executed (applying the rules of logic) to build that real number.
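A minimal TypeScript sketch of this view of a real number (the names `Real` and `sqrt2` are made up for illustration): a constructive real is an algorithm producing arbitrarily good approximations, and that algorithm *is* the existence proof.

```typescript
// A constructive real as a Cauchy sequence: asking for precision n yields
// an approximation (floating-point numbers stand in for rationals here).
type Real = (n: number) => number;

// The constant 1 as a constructive real: its "proof" is trivial.
const one: Real = (_n) => 1.0;

// sqrt(2) by Newton iteration: the algorithm that refines the
// approximation is the "proof" that the number exists.
const sqrt2: Real = (n) => {
  let x = 1.0;
  for (let k = 0; k < n + 4; k++) x = (x + 2 / x) / 2;
  return x;
};
```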

P.S. Reading what I just wrote here, I doubt that it's in some way understandable by anybody who doesn't already know about type theory. But I am not really able to explain it better in the space of a post...
• 834
I completely agree.

I should quit now!

I just wanted to say that in the cold light of day I no longer believe what I wrote last night. If you start with classical math and remove LEM you get some form of constructivism. If you add LEM back in, you get back classical math. So there's much less than meets the eye when you say you can add in axioms to recover classical math.

Secondly, you said you can import a library into Coq to recover standard math. But if that were true, standard math would be computerizable, and then that would remove the biggest benefit of constructivism.

I confess to being just as confused as ever, but at a somewhat higher level.

It's too hot to type here today so I'll save the rest for later.

ps:

But proofs of existence are algorithms, in any formal logic.

Ok let me just try to get some clarification. If I use the axiom of choice to whip up the Vitali set, the set of representatives of the equivalence classes of $\mathbb R / \mathbb Q$, that set is not "constructive" by any conceivable definition of the word.

But if you code up the axioms of formal ZFC into a computer, the proof of the existence of the Vitali set can be cranked out by a program. Even by an undergrad! So in that sense, proofs are constructions even if they are constructing things that are in no way constructive.

Have I got that right? What I mean is, am I confused about something sensible?
• 189
But if you code up the axioms of formal ZFC into a computer, the proof of the existence of the Vitali set can be cranked out by a program. Even by an undergrad! So in that sense, proofs are constructions even if they are constructing things that are in no way constructive.

Have I got that right?

Yes! Exactly!! Proofs are constructions in ANY formal logic, because that's how formal logic is defined!
There are computer-based proof verification systems for a lot of different logics: https://en.wikipedia.org/wiki/Proof_assistant
You can look here for a short history of the development of these systems: https://en.wikipedia.org/wiki/Automated_theorem_proving

These systems are called "proof assistants" because they generally try to let the user skip some steps of a proof and infer the missing steps, so that the user is not forced to write every single step, which can be very tedious (depending on what logic you use); but the main functionality that all of them provide is proof verification.
However, the truth is that this is all "theoretical" stuff: in practice, at least before Voevodsky introduced HoTT, mathematicians have always ignored these systems (as they have ignored formalized logic), because they are way too tedious and time-consuming to use in practice. The problem is that they force you to concentrate on absolutely unessential steps of the proof, and to use a language that does not describe at all what the important steps of the proof really "mean". But this problem is not due to the use of computers (formal systems in a sense were "made" for computers, even if they were invented when computers didn't exist yet); it is a problem with the formal languages themselves, and that's the main problem that HoTT is trying to solve.

However, you can for example look at the HOL proof assistant (https://en.wikipedia.org/wiki/HOL_(proof_assistant)). This is a very clear implementation of proofs in classical higher-order logic as programs. In HOL you don't even enter a special environment where you write proofs: you directly write a program in the ML programming language (a perfectly "normal" programming language) to literally "build" the propositions that you are proving. There is a data type in ML called "proposition", and the rules of classical logic correspond exactly to the type constructors of that data type. And the logic (higher-order logic) is probably the closest formal language to the "informal" classical logic used in practice by mathematicians in real (non-formalized) proofs.
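The design described here (an abstract theorem type whose only constructors are the inference rules, usually called the "LCF style") can be sketched in TypeScript. This is a hypothetical two-connective mini-logic, not HOL's actual data types: because `Thm`'s constructor is private, every value of type `Thm` is by construction a proved proposition.

```typescript
// A tiny proposition language: atoms and implication.
type Prop =
  | { tag: "atom"; name: string }
  | { tag: "imp"; from: Prop; to: Prop };

const atom = (name: string): Prop => ({ tag: "atom", name });
const imp = (from: Prop, to: Prop): Prop => ({ tag: "imp", from, to });
// Structural equality of propositions (key order is fixed by the builders above).
const propEq = (a: Prop, b: Prop): boolean => JSON.stringify(a) === JSON.stringify(b);

class Thm {
  // Private: the inference rules below are the ONLY way to make a Thm.
  private constructor(readonly concl: Prop) {}

  // A |- A
  static assume(p: Prop): Thm { return new Thm(p); }

  // From |- B derive |- A -> B (a weakening-style introduction rule).
  static intro(a: Prop, b: Thm): Thm { return new Thm(imp(a, b.concl)); }

  // Modus ponens: from |- A -> B and |- A, derive |- B.
  static mp(ab: Thm, a: Thm): Thm {
    const c = ab.concl;
    if (c.tag === "imp" && propEq(c.from, a.concl)) return new Thm(c.to);
    throw new Error("modus ponens: rule does not apply");
  }
}
```

Running a program that produces a `Thm` value is then the same thing as carrying out a proof.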
• 189
Secondly, you said you can import a library into Coq to recover standard math. But if that were true, standard math would be computerizable, and then that would remove the biggest benefit of constructivism.

The point is that "computerizable" does not mean "computable": terms built in classical logic in general don't correspond to computable functions, because some of the rules, or some of the axioms, allow you to build mathematical functions in an indirect way; and that is what corresponds to the use of "oracle" (or non-computable, or axiomatically defined) functions in constructive logic.

What constructive logic does is move all the non-computable parts out of the logic (leaving only the rules that build terms in a "direct" way); these parts can however be added back to the logic as axioms (usually by importing them as libraries) to recover classical logic.

What remains after removing the non-computable parts of standard logic is a language that speaks only about proofs (not about truth), and proofs are purely syntactical entities that can be reasoned about using only computable functions.
• 189
But I don't get the metaphysics. "A thing only exists when we have an algorithm to instantiate it."

Now, just before leaving for vacation, the metaphysical part, which I surely won't be able to defend in a philosophy forum :razz:

The algorithms that represent "all the things that exist" are not only finite algorithms, but even infinite ones: they are ALL the functions (even non-recursive or non-total ones). So, the rules of logic are based on the model of computable algorithms, but are supposed to work for any function (which can be seen as some kind of generalization of an algorithm).

If you think about it, that's the same kind of "generalization" that is done in set theory: the rules of logic are based on the model of finite sets (I am speaking about the rules of first-order logic - the ones that define the "meaning" of logical operators - not about the axioms of ZFC), but are supposed to work even for the infinite sets of set theory.

So, in constructivist type theory you replace the concept of infinite sets with the concept of infinite constructions. The rules of logic allow you to add "pieces" to these constructions and to compose them with each other, but of course you can't "decompose" the infinite ones: you have to treat them as a "black box", or an "oracle", or a function from an external library that "runs" on some kind of "machine" that can compute even uncomputable functions.
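The "black box" idea can be sketched in TypeScript (names are illustrative): a non-computable function such as a halting oracle is hidden behind an interface, so constructions can call and compose it but never inspect or decompose it.

```typescript
// An "oracle": axiomatically assumed, not actually computable.
// Code written against this interface composes the oracle without
// ever looking inside it.
interface Oracle {
  halts(program: () => void): boolean;
}

// A construction parameterized over the oracle: we may compose it freely...
function haltsBoth(o: Oracle, f: () => void, g: () => void): boolean {
  return o.halts(f) && o.halts(g);
}
```

To run anything we must plug in some implementation (here, trivially, a stand-in), which is exactly the sense in which the oracle lives "in an external library".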

So, as in set theory you can think of everything as made of sets, in constructivist type theory (meaning the Martin-Löf kind of type theories) you can think of everything as made of functions.
And that's basically the reason for the close correspondence between type theory and category theory: category theory is an axiomatic theory of functions (based on set theory). It axiomatizes the properties of functions to generalize them to morphisms, which are "anything" that "works" like a function.

As the physical intuition for sets is that every object is made of parts, the physical intuition for functions is that all of our experience can be seen as an interaction, or an experiment.
The only assumption is that functions are something that always gives the same result if you prepare it in the same way.
But in the end, neither of these intuitions is really true at a fundamental level of physics:
- it is not true that everything is made of parts, because "particle number" is not conserved;
- it is not true that there are experiments that can be prepared always in the same way to give always the same result.

But if you don't assume the idea that everything is made of parts, you don't have to assume the unintuitive :wink: idea that a line is made of points: you can think of lines and points as two different types, and an intersection of two lines as a function from pairs of lines to points.
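The lines-and-points remark above can be made concrete in a short TypeScript sketch (illustrative types, lines written as a*x + b*y = c): `Point` and `Line` are two unrelated types, and intersection is just a function from pairs of lines to points, with no assumption that a line "contains" points.

```typescript
type Point = { x: number; y: number };
// A line in the implicit form a*x + b*y = c.
type Line = { a: number; b: number; c: number };

// Intersection of two lines by Cramer's rule; null when they are
// parallel (or coincident), i.e. when the determinant vanishes.
function intersect(l1: Line, l2: Line): Point | null {
  const det = l1.a * l2.b - l2.a * l1.b;
  if (Math.abs(det) < 1e-12) return null;
  return {
    x: (l1.c * l2.b - l2.c * l1.b) / det,
    y: (l1.a * l2.c - l2.a * l1.c) / det,
  };
}
```

For example, the lines x = 1 and y = 2 intersect at the point (1, 2), and two parallel lines yield no point at all.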
• 834
But if you don't assume the idea that everything is made of parts, you don't have to assume the unintuitive :wink: idea that a line is made of points: you can think of lines and points as two different types, and an intersection of two lines as a function from pairs of lines to points.

The revenge of Russell. Type theory triumphant.

I'm all typed out, no pun intended, nothing else to say at the moment.
• 413
In my opinion, neither type theory nor category theory is satisfactory in addressing the ambiguity of standard logic regarding infinitary notions.

Take as an example "My local postbox contains some letters". This proposition isn't a full specification of a set, hence it is not representable as a specific algorithm and cannot be constructed; therefore it must be stated axiomatically, as a particular set which exists but has no internal details. Some people might already interpret set theory, type theory or category theory in this way, but this isn't thanks to those theories themselves.

- The Axiom of Choice is wrongheaded in two respects:

i) It isn't a logical rule permitting the universal existence of non-constructible sets. Rather, it is used as a non-logical proposition to refer to a specific set in a domain of inquiry whose number of elements is potentially infinite, that is to say, isn't specified in the logical description.

ii) The axiom has nothing to do with choice. On the contrary, it is usually used as a reference to sets for which a choice function isn't specified.

Whereas classical logicians often mistake the Axiom of Choice for a logical rule, constructive logicians often mistake it for a tautology; for they conflate the existence of a choice function with the existence of a set. Yes, it is true that "constructed set", "computable set" and "choice function" are synonyms - but this misses the point as to how the axiom of choice is used, namely to represent "externally provided sets" that provide elements on demand, but whose mechanism and quantity of contents are unspecified.

To nevertheless insist that every physical set must be constituted by a computationally describable mechanism should be regarded as a physical thesis rather than a logical theorem. For logic is a purely descriptive language without physical implications. Physicists might use set theory as a means to document their epistemological certainty and uncertainty regarding the internal operation of physically important sets, but this epistemological interpretation of set theory isn't part of set theory per se.

The correct semantics of logic and mathematics is game semantics, describing the interaction between the entities created and controlled by the logician on the one hand, and the entities of his external environment that he is aware of but has only partial control over, or partial specification of, on the other. This semantics serves to reinforce the notion that logic and mathematics are languages of empirical description that document the interactions of the logician with his external world.