## Musings On Infinity

• 189
Well, not exactly...

∞ - ∞ = 0 is not true, because ∞ - ∞ is not a term of the language.

Let me explain better: let's take the function "int", defined as "the integer part of a decimal number":
- for example, int( 5.65 ) = 5
I want to invert this function, and call it "unint", such that unint( 5 ) = 5.65
I cannot do it, because int( 5.7 ) = 5, and int( 5.8 ) = 5, etc.
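The example can be checked directly in Python; a small sketch (the "unint" here is only the hypothetical inverse from the text, it cannot actually be written):

```python
# int() truncates a decimal number to its integer part.
# Many different inputs collapse to the same output, so no
# function "unint" could recover the original value.
assert int(5.65) == 5
assert int(5.7) == 5
assert int(5.8) == 5

# A hypothetical unint() would have to send 5 back to 5.65,
# 5.7 AND 5.8 at the same time -- impossible, so int() has
# no inverse.
preimages_of_5 = {5.65, 5.7, 5.8}
assert {int(x) for x in preimages_of_5} == {5}
```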

This is the same situation:
I define ∞ + 0 = ∞, ∞ + 1 = ∞, ∞ + 2 = ∞, etc.
So, every finite number is a neutral element for ∞, just as 0 is THE neutral element for natural numbers.

Now, if for transfinite numbers the neutral element for addition is not unique, you can't define subtraction as the inverse of addition, simply because addition is not invertible!
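IEEE-754 floating point happens to model exactly this behavior: every finite addend is absorbed by infinity, and inf - inf is deliberately left as "not a number". A quick Python sketch (this is float arithmetic, not transfinite arithmetic proper):

```python
import math

inf = float('inf')

# Every finite number behaves as a neutral element for infinity:
assert inf + 0 == inf
assert inf + 1 == inf
assert inf + 2 == inf

# Addition is therefore not invertible, and IEEE 754 gives
# inf - inf no numeric value at all: it is NaN, "not a number".
assert math.isnan(inf - inf)
```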

If you believe there is only one infinity (like I do)

Sorry to disappoint you, but (as I just wrote in the previous post), I believe that at least infinitesimal objects do exist in nature in some sense, but you cannot decide if they exist or not using mathematics.

From the point of view of mathematics, I don't think there is a reason why transfinite numbers should not be allowed. Yes, you can say "I don't like set theory because it's a complicated way to define something as intuitively simple as sets", but a hundred years ago this was the only theory that allowed one to reason in an unambiguous way about real numbers (which, as the name suggests, appear to be "real") and to say such an obvious thing as the fact that the square root of two (i.e. the measure of the diagonal of a square whose side has measure one) is a number.

And without real numbers how do you build limits and calculus? And calculus has to be "real", since it works so well with the laws of nature.

So, my point is that you can use logic to speak about infinity even if you cannot decide from logic itself if infinity really exists in nature.
• 2.1k
Now, if for transfinite numbers the neutral element for addition is nor unique, you can't define subtraction as the inverse of addition, simply because addition is not invertible!

Because addition with transfinite 'numbers' does not make sense, nor does subtraction. This is indicative of the fact that the transfinites are not numbers.

Sorry to disappoint you, but (as I just wrote in the previous post), I believe that at least infinitesimal objects do exist in nature in some sense, but you cannot decide if they exist or not using mathematics.

I do not believe infinitesimal objects exist in maths or nature:

1. We have no examples from nature of actual infinity
2. Creating anything infinitely large is impossible; not enough time / the construction would never finish
3. Creating anything infinitely small is impossible; no matter how small it is made, it could still be smaller
4. Basic arithmetic says infinity is not a number: if it were a number, it would be a number X greater than all other numbers. But X+1>X
5. Numbers are magnitudes or sizes. Infinity is not measurable: for infinity in the large, we would never finish measuring it. For infinity in the small, a finely graduated enough ruler is not available. So infinity has no size; so it is not a number.
6. Numbers have a fixed value; that is their defining characteristic. Numbers are not variables. Infinity has no fixed value so is not a number.
• 189
I understand your point of view: I think this was the common point of view of mathematicians from the ancient Greeks until the beginning of the 17th century: you cannot allow the concept of actual infinity (or infinitesimals) in mathematics, because from it you can derive paradoxical conclusions (Zeno's paradox, for example), and if paradoxes are allowed you cannot trust mathematical proofs any more.

But then people started to discover that you can use infinitesimals to derive lots of correct results about the laws of nature (law of gravitation, for example). So, there had to be some way to allow the calculations that gave correct results using infinitesimals!
The solution to the problem was to define real numbers as "limits" (Cauchy) or "cuts" (Dedekind). But, in both cases, these are made of actually infinite objects (infinite sequences or infinite sets). So, at least as far as I know, if you want calculus to be part of mathematics you have to allow the existence of actually infinite objects!

Formal logic and set theory were mainly created as a way to reason about the counter-intuitive concepts of "infinite set of things". If you do not allow infinite sets, set theory becomes trivial, and fundamentally useless: you don't need an axiomatic set theory to reason about finite sets.

With formal logic, you don't need a "concrete model" of mathematical things: a "set" is whatever "thing" on which you can operate following a (carefully chosen) given set of rules. So you can reason about infinite things avoiding contradictions.

But in this way, you cannot say that "this thing should not be allowed because it does not make physical sense". Everything that is not contradictory should be allowed, since we are not speaking about physical objects any more.

As far as I know, there is no way to make sense of calculus or differential equations (and other useful concepts in mathematics) except by using formal logic or "concrete" infinitesimal objects.
• 2.1k
So, at least as far as I know, if you want calculus to be part of mathematics you have to allow the existence of actually infinite objects!

I don't see why calculus cannot be defined purely in terms of potential infinity. A limit should approach but never reach actual infinity:

lim 1/n
n->0

It's not possible to make this expression equal actual infinity; n is either non-zero (leading to a finite number) or zero (leading to UNDEFINED). There is no place for actual infinity between the large numbers and UNDEFINED; actual infinity should be UNDEFINED IMO.
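Numerically, the same dichotomy shows up directly in Python: any non-zero n gives a finite value of 1/n, and n = 0 gives an error rather than "actual infinity". A quick sketch:

```python
# 1/n for smaller and smaller non-zero n: each value is finite,
# just larger than the one before (potential infinity).
for n in [0.1, 0.01, 0.001, 1e-9]:
    print(1 / n)

# n = 0 itself does not yield "actual infinity" but an error:
try:
    print(1 / 0)
except ZeroDivisionError:
    print("UNDEFINED")
```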

I think that calculus worked just fine before Cauchy and Dedekind; it's just actual infinity has been used in the logical justification of calculus when it should not have been.
• 189
I don't see why calculus cannot be defined purely in terms of potential infinity. A limit should approach but never reach actual infinity:

All irrational numbers are defined as infinite objects (usually infinite sums or products):

pi = lim 4 * ( sum for k=1..n ((-1)^(k+1) / (2*k - 1)) )
     n -> infinity

If infinite sums do not exist, pi is not a number, since there is no way to write it with a finite expression using only + and *
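The partial sums can be computed directly; each one is a plain finite rational, and only the limit is π. A Python sketch (note the factor 4: the bare Leibniz sum converges to π/4):

```python
import math

def leibniz_pi(n):
    """4 * sum of the first n terms of the Leibniz series.
    Each partial sum is a finite, rational approximation of pi."""
    return 4 * sum((-1) ** (k + 1) / (2 * k - 1) for k in range(1, n + 1))

# Every partial sum is an ordinary finite number; only the limit is pi.
print(leibniz_pi(10))       # a rough approximation
print(leibniz_pi(100000))   # much closer, still not pi
assert abs(leibniz_pi(100000) - math.pi) < 1e-4
```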

How do you define the function sin(x) if you can't use infinite expressions?

Integrals are defined as sums of infinite terms and most of them do not converge to rational numbers.

If a limit does not converge to a rational number you have to treat the limit itself (its expression) as a number, because there is no way to write it avoiding limits. Then, you have to say that real numbers are the usual rationals plus some "objects" that are defined as infinite sums of rationals.
That's similar to how transfinite numbers are defined: take the natural numbers and add to them other "objects", such that operations on the preexisting natural numbers give the same results as before.
• 415

Calculus can actually be summarized with the Greek method of exhaustion. Look it up on Google or Bing. Calculus always was and always will be an approximation principle, not an exact science. People have written calculus-solving software using the Greek method of exhaustion, or at the very least have produced the pseudocode for it. I've come up with an algorithm for it.

See my profile or click on my name if you like.
• 415

If you use a 2-dimensional array and produce enough entries in that 2-dimensional array, the Greek method of exhaustion is just a few steps away. If anyone needs the algorithm I can provide it.

Click on my name or see my profile. No wrong answer.
• 2.1k
How do you define the function sin(x) if you can't use infinite expressions?

lim S_n(x)
n -> infinity

where S_n(x) is the n-th partial sum of the infinite series that defines sin(x).

It already works; just read the above as 'n tends to but never reaches actual infinity'. That's a better definition.

For example, with π, it's irrational so it can never be fully defined - it's impossible to know all the digits, so saying an expression tends to π rather than is equal to π is actually more accurate. We can never know π exactly.

IMO, whenever an infinite sum is evaluated, it is more correct for maths to use ~= (approx equals) rather than = (equals).
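Reading the series for sin(x) the same way, each partial sum is a finite expression that only ever approximates sin(x), matching the '~=' suggestion. A Python sketch (`sin_partial` is an illustrative name):

```python
import math

def sin_partial(x, n):
    """Sum of the first n terms of the Taylor series for sin(x).
    A finite expression that approximates sin(x) ever more closely."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n))

x = 1.0
for n in (1, 2, 5, 10):
    print(n, sin_partial(x, n))  # successive finite approximations

# Each partial sum is only "approximately equal" to sin(x):
assert abs(sin_partial(1.0, 10) - math.sin(1.0)) < 1e-12
```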
• 189
OK, I have no convincing argument against this point of view, except that if irrational numbers do not exist a great part of mathematics becomes more difficult to reason about. For example, there are no more really continuous functions, and you have to speak all the time about sequences of points that approach other points.
Since the results are the same, and logic says that using infinities is no more contradictory than avoiding them, it seems to me more "convenient" to use them, even if they do not exist in the real universe.
In the end, even real perfect circles do not exist (everything is made of atoms), and Euclidean space does not exist (space-time is not flat), but it's much simpler and more convenient, when reasoning about real objects, to "pretend" that they exist.
• 715
As usual, a citation from the old good wikipedia can come to the rescue :-)
"Set theory (which is expressed in a countable language), if it is consistent, has a countable model; this is known as Skolem's paradox,

Right. A countable model. Not a finite one.

This of course doesn't mean that uncountable sets "do not exist", but only that you cannot use a finitary first-order logic theory to prove that they exist.

I don't believe this is true. ZF is a first-order theory with a countably infinite language that easily proves uncountable sets exist. See Cantor's theorem, as simple a proof as one can imagine for a fact so profound.

So, you could even say that "every mathematical object is a string of characters",

No I don't believe so. This is confusing syntax with semantics. A formal theory consists of finite-length strings of symbols. But models of those theories are not strings of symbols. They're sets, often quite large ones.
• 189
I don't believe this is true. ZF is a first-order theory with a countably infinite language that easily proves uncountable sets exist. See Cantor's theorem, as simple a proof as one can imagine for a fact so profound.

OK, probably this is not the conventional point of view in mathematics, but I'll try to explain my point of view:

In ZF Set theory you have the "axiom of infinity" that says ( in a simplified form ):
There exists a set A such that:
-- ∅∈A,
-- for every x∈A, the set (x∪{x}) ∈ A.

This, from my point of view, is a complicated definition for a recursively defined structure that is equivalent to Peano's natural numbers:
You can build ∅ (equivalent 0), ∅∪{∅} (equivalent to S0), ∅∪{∅}∪{∅∪{∅}} (equivalent to SS0), etc...
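The recursive construction above can be carried out literally with nested frozensets in Python; a minimal sketch (the function name `von_neumann` is illustrative):

```python
def von_neumann(n):
    """Build the first n+1 von Neumann naturals as nested frozensets:
    0 = {} and successor(x) = x U {x}, exactly the structure the
    axiom of infinity's inductive set must contain."""
    x = frozenset()           # 0 is the empty set
    naturals = [x]
    for _ in range(n):
        x = x | {x}           # successor: x U {x}
        naturals.append(x)
    return naturals

nats = von_neumann(3)
assert len(nats[0]) == 0      # 0 has no elements
assert len(nats[2]) == 2      # the set "2" = {0, 1} has two elements
assert nats[1] in nats[2]     # 1 ∈ 2: membership is the ordering
```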

Then, first order logic postulates the existence of functions.

Basically, Cantor's theorem proves that, for every set A, the function A -> (A -> Bool) is always bigger than A (in the usual sense of no one-to-one correspondence). Then, since (A -> Bool) is itself a set, you can build an infinite chain of sets of the form (((A -> Bool) -> Bool) -> Bool) -> ...
So, from the infinite set you can build an infinite hierarchy of infinities.

The crucial point here to decide the cardinality of "A -> (A -> Bool)", is what you take as a model for functions.

If functions are only recursive functions (what I can evaluate by applying a decidable set of rules to the symbols of the language), I can never build anything that has uncountable cardinality.
In this case, it is true that for every set you can find a bigger set, but their cardinality will be countable, in the same way as it is true that for every integer you can always find a bigger integer.

If instead you define functions in the usual way, as "sets of pairs", then there are 2^|A| possible functions of type A -> Bool, and there exists an uncountable set of sets. But, in any case, inside your language you are able to describe only a countable set of them.
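Cantor's diagonal construction behind this can be checked mechanically: given any attempted surjection F from A onto functions A -> Bool, the function g(a) = not F(a)(a) differs from every F(a). A small Python sketch over a finite A (names are illustrative; the construction is identical for infinite A):

```python
def diagonal_counterexample(A, F):
    """Given F mapping each a in A to a function A -> bool
    (encoded as a dict), return g: A -> bool that differs from
    every F(a) at the point a itself -- Cantor's diagonal."""
    return {a: not F(a)[a] for a in A}

A = [0, 1, 2]
# An attempted "surjection" from A onto functions A -> bool:
F = {0: {0: True,  1: False, 2: False},
     1: {0: False, 1: True,  2: True},
     2: {0: True,  1: True,  2: False}}

g = diagonal_counterexample(A, lambda a: F[a])
assert all(g[a] != F[a][a] for a in A)  # differs on the diagonal
assert all(g != F[a] for a in A)        # so g is missed by F
```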

(https://en.wikipedia.org/wiki/Skolem%27s_paradox):
"From an interpretation of the model into our conventional notions of these sets, this means that although u maps to an uncountable set, there are many elements in our intuitive notion of u that don't have a corresponding element in the model. The model, however, is consistent, because the absence of these elements cannot be observed through first-order logic"

Since both models for functions are consistent, what's the reason to take the standard interpretation as the "true" one?

No I don't believe so. This is confusing syntax with semantics. A formal theory consists of finite-length strings of symbols. But models of those theories are not strings of symbols. They're sets, often quite large ones.

See Herbrand interpretation: https://en.wikipedia.org/wiki/Herbrand_interpretation.

The proof of Gödel's incompleteness theorem uses a similar technique to map natural numbers onto the syntactic structures of the language, and properties of arithmetic operations onto the syntactical rules of logic.
• 41
For example, with π, it's irrational so it can never be fully defined - it's impossible to know all the digits, so saying an expression tends to π rather than is equal to π is actually more accurate. We can never know π exactly.

Pi is computable and therefore has a finite description. https://en.wikipedia.org/wiki/Computable_number

'Most' irrational numbers are not computable, since the computable numbers (being countable) have measure zero. There are indeed some rich philosophical issues here. Personally I'm an anti-foundationalist when it comes to math. I think we learn to use it just as we learn to use English. A few people concern themselves with the formalities of set theory. Most just use it as an economical language.

The danger of worrying too much about the metaphysics of infinity is in repeating the 'Wittgenstein' mistake (not his so much as those in the grip of his charisma)--the 'cure' is assimilated by the disease. The finitist who rails against infinity is charging at his own shadow, the infinitist. Both are minority figures with an obsession in common.
• 896
First Ant: But…but…what if there is more to “what is” than what we ants can see, explore and speculate about?

Second Ant: If you want to fool around with nonsense and “what if’s”…go do it. Just do it somewhere else. We intelligent ants are discussing REALITY. We are discussing why we can figure out what is logical and what is not...what can exist and what cannot.
• 715
OK, probably this is not the conventional point of view in mathematics, but I'll try to explain my point of view:

Thanks for this interesting post.

If I am reading you correctly you are espousing a computational or constructivist point of view. Perfectly valid, and even trendy lately as computer science begins to influence math. Homotopy type theory, etc. I like to think of it as the revenge of Brouwer!

In ZF Set theory you have the "axiom of infinity" that says ( in a simplified form ):
There exists a set A such that:
-- ∅∈A,
-- for every x∈A, the set (x∪{x}) ∈ A.

This, from my point of view, is a complicated definition for a recursively defined structure that is equivalent to Peano's natural numbers:
You can build ∅ (equivalent 0), ∅∪{∅} (equivalent to S0), ∅∪{∅}∪{∅∪{∅}} (equivalent to SS0), etc...

Yes and no. The axiom of infinity does provide a model of PA inside ZF.

However there is a crucial distinction. PA says that if n is a number then S(n), the successor of n, is a number. That gives us (after the standard naming conventions) a handy collection 0, 1, 2, 3, 4, ... which we can use to do a fair amount of number theory.

What the axiom of infinity says is that {0, 1, 2, 3, ...} is a set, not just a collection. That's a much stronger statement. Without the axiom of infinity we still have PA and everything that we can do with it. But we don't have the powerset of the natural numbers nor do we have an easy way to get the theory of the real numbers off the ground.

The axiom of infinity is a strong statement that says that we not only have successors; but that the collection of all successors forms a set. In fact this gives us an easy way to visualize proper classes. In ZF minus infinity, the collection of all natural numbers is a proper class because it's too big to be a set.

Then, first order logic postulates the existence of functions.

I asked on math.stackexchange once what functions are in PA, since we can't be sure we have enough sets. I don't recall getting an answer that satisfied me. But if we naively regard functions as mappings, we can probably get away with it. I'm sure logicians have a good answer for this point.

Basically, Cantor's theorem proves that, for every set A, the function A -> (A -> Bool) is always bigger than A (in the usual sense of no one-to-one correspondence).

Let me be picky here (as if I'm ever any other way!) First, it's not the function that's bigger, it's the powerset. But Cantor doesn't prove that P(A) is bigger than A. He proves that there is no surjection from A to P(A); then we DEFINE "bigger" to mean there's no surjection. This is a common point of confusion among those who complain that it's absurd to call one infinite set bigger than another. They are quite right! Rather, "bigger" and "smaller" in this context refers to the existence or nonexistence of injections and surjections. I see that you basically said that by mentioning 1-1 correspondence, but I wanted to emphasize this point. There is no surjection from the positive integers to the reals. Whether the reals are "bigger" in any meaningful sense, I have no idea.

Then, since (A -> Bool) is itself a set, you can build an infinite chain of sets of the form (((A -> Bool) -> Bool) -> Bool) -> ...
So, from the infinite set you can build an infinite hierarchy of infinites.

This blew my mind when I first saw it. Still does. Cantor really put a zap to the world of math. This was quite a revolutionary discovery. Or fraudulent sophistry, depending on one's point of view.

The crucial point here to decide the cardinality of "A -> (A -> Bool)", is what you take as a model for functions.

Well again, yes and no. We do NOT need to know the cardinality of P(N). We only know that there is an injection N -> P(N) and no surjection. So we say P(A) has a larger cardinality. In fact nobody has the foggiest idea what the cardinality of P(N) is. That's the Continuum Hypothesis.

If functions are only recursive functions (what I can evaluate by applying a decidable set of rules to the symbols of the language), I can never build anything that has uncountable cardinality.

Perfectly well agreed. I hope you don't think I'd be shocked or would object. The computable powerset of N is countable of course, since there are only countably many Turing machines. And now we're into constructivist philosophy. And since we live in the age of computation, there's growing support for this point of view. Can't fight the tide.

In this case, it is true that for every set you can find a bigger set, but their cardinality will be countable, in the same way as it is true that for every integer you can always find a bigger integer.

Yes. But then you can't get the theory of the real numbers off the ground, and the intermediate value theorem is false. The constructivists counter: the IVT becomes true again if we only consider computable functions. And so the argument goes. I'm sure we both understand each other's point of view here. There's no right or wrong to the matter. Just history and philosophy.

If instead you define functions in the usual way, as "sets of pairs", then there are 2^|A| possible functions of type A -> Bool, and there exists an uncountable set of sets. But, in any case, inside your language you are able to describe only a countable set of them.

Yes agreed. We agree on everything. I just think the world of the full powerset of N is richer and more interesting than the computable powerset. That's an aesthetic judgment. And it's more useful. That's a pragmatic judgment.

(https://en.wikipedia.org/wiki/Skolem%27s_paradox):
"From an interpretation of the model into our conventional notions of these sets, this means that although u maps to an uncountable set, there are many elements in our intuitive notion of u that don't have a corresponding element in the model. The model, however, is consistent, because the absence of these elements cannot be observed through first-order logic"

Right. The model thinks it's uncountable but from "outside" we can see it's countable.

As I understand it, Skolem himself thought that his paradox showed that our notion of set is murky and not entirely coherent. This is a point of view I can't disagree with. Nobody knows what a set is. Sets as conceived in set theory are much more strange than the "collections of objects" that we teach in high school.

Since both models for functions are consistent, what's the reason to take the standard interpretation as the "true" one?

Aha! Well, I don't think Cantorian set theory is true any more than I think chess is true. Does that knight "really" move that way? It's a silly question. Chess is a formal game. And when pressed into my Cantorian corner, I put on my formalist hat and say that we can adopt the axiom of infinity or reject it. Accept the full powerset of N or only the computable powerset. Accept the law of the excluded middle or reject it. Accept the axiom of choice or reject it.

Now the question is: Why adopt the full powerset? The answer is pragmatic. In the 20th century, full Cantorian set theory has proven supremely useful. It lets us develop a logically rigorous theory of the real numbers. The physicists use the real numbers to build their theories. Throw out Cantorian math and you lose a lot. Nobody has yet succeeded in developing a computable or constructive theory of physics. People are working on it. Perhaps we'll need another few decades to get more insight into this question.

No I don't believe so. This is confusing syntax with semantics. A formal theory consists of finite-length strings of symbols. But models of those theories are not strings of symbols. They're sets, often quite large ones.
— fishfry

See Herbrand interpretation: https://en.wikipedia.org/wiki/Herbrand_interpretation.

I read that article and didn't fully understand it or the point you're making. But whether math refers to anything outside itself is another one of those philosophical questions. Chess doesn't. Why should formal math?

The proof of Gödel's incompleteness theorem uses a similar technique to map natural numbers onto the syntactic structures of the language, and properties of arithmetic operations onto the syntactical rules of logic.

Math is more than Gödel numbering. But now I'm making a Platonist argument when a couple of paragraphs ago I was making a formalist one. I tend to use whichever argument is handy for my point. But Gödel's incompleteness theorems are about syntax and not semantics. Gödel himself was a Platonist. I found that surprising when I first learned it. He believed there is mathematical truth "out there" that's beyond the limitations of formal systems.
• 7
Hello.

Infinity is not a number. Infinity just means unboundedness. The idea of infinity is referred to using a noun, and I think this is why most people think infinity is a "thing".

Of course, this is not even remotely rigorous, but it is the best way to explain to ordinary people why infinity is not a number and why they thought it might be.
• 189
What the axiom of infinity says is that {0, 1, 2, 3, ...} is a set, not just a collection. That's a much stronger statement. Without the axiom of infinity we still have PA and everything that we can do with it. But we don't have the powerset of the natural numbers nor do we have an easy way to get the theory of the real numbers off the ground.

You are right. I think I should be more careful with statements of this kind. I wrote this without really checking the recursive structure generated by the axiom of infinity. I didn't have much time and I wanted to give a quick answer :-) ( mathematics is only my hobby, not my work... )

But Cantor doesn't prove that P(A) is bigger than A. He proves that there is no surjection from A to P(A); then we DEFINE "bigger" to mean there's no surjection.

Well, if you expand the definition "there is no surjection from A to B", it becomes:
forall F: A->B, exists y:B, forall x:A, F(x) <> y [ I prefer to use the syntax of Coq ( https://coq.inria.fr/ ) because I didn't take the time to learn how to write the logical symbols :-) ]
This translates to the English: "however you choose to match elements of A with elements of B, there will always be an element of B that is not matched by any element of A".
I think this is a quite intuitive definition of the fact that B has at least one element more than A.

Yes. But then you can't get the theory of the real numbers off the ground, and the intermediate value theorem is false. The constructivists counter: the IVT becomes true again if we only consider computable functions.

Sorry, I don't understand what you mean here: constructivist (intuitionist) logic is just a "more general" logic than classical logic, since it has fewer axioms. As a result, it is valid for a larger class of models:

If you substitute "true" with "provable" and "false" with "contradictory", there are three possibilities:
-- A proposition is provable ( then it's true in all models )
-- A proposition is contradictory ( then it's false in all models ) ( equivalent to "it is provable that it's not provable": if you give me a proof, I can derive a contradiction from your proof )
-- A proposition is not provable, but it's not contradictory ( then, it's true only in some models ).

-- A is not derivable from not( not A ), because this means ("A is not provable" is not provable), and that is not true for all models (it's not true if functions are partial recursive functions)
For example, you can interpret functions as partial recursive functions (Turing machines) and propositions as assertions on the behavior of Turing machines:
-- forall is a loop that stops, returning false, if it finds a false value; its value as a proposition is false if it stops and true if it loops forever
-- exists is a loop that stops, returning true, if it finds a true value; its value as a proposition is true if it stops and false if it loops forever

Since the halting problem is undecidable, there are propositions that are neither provable nor refutable.
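The loop reading of the quantifiers can be sketched in Python. Since a genuine unbounded search need not terminate, an explicit step bound below stands in for "loops forever" (a hedged sketch; names are illustrative):

```python
def exists_bounded(pred, bound):
    """'exists n. pred(n)' as a search loop: stop (True) at the
    first witness; reaching the bound stands in for looping forever."""
    for n in range(bound):
        if pred(n):
            return True
    return None  # unresolved: the real loop would still be running

def forall_bounded(pred, bound):
    """'forall n. pred(n)' as a search loop: stop (False) at the
    first counterexample; the bound stands in for looping forever."""
    for n in range(bound):
        if not pred(n):
            return False
    return None  # unresolved

# A witness is found, so 'exists' stops and returns True:
assert exists_bounded(lambda n: n * n > 50, 1000) is True
# A counterexample is found, so 'forall' stops and returns False:
assert forall_bounded(lambda n: n < 500, 1000) is False
# No counterexample below the bound: the question stays open --
# exactly the halting-problem obstacle.
assert forall_bounded(lambda n: n >= 0, 1000) is None
```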

The model of Turing machines (functions are partial recursive functions) is countable, but you can interpret the types and functions of type theory in uncountable models too ( including real numbers ):

The axiom of excluded middle is independent from the others, so if you want to speak only about models in which all functions are total (not necessarily recursive), such as standard analysis, you can assume not( not A ) => A as an axiom. In this way, you recover full classical logic and all propositions can only be true or false (even if some of them - many fewer than before - can still be non-provable).

Did I miss something? What is the objection to this argument?

Yes agreed. We agree on everything.

That's reassuring :-)

I just think the world of the full powerset of N is richer and more interesting than the computable powerset. That's an aesthetic judgment. And it's more useful. That's a pragmatic judgment.

Yes, that's exactly the point that I wanted to discuss when I started the other topic https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented
( look for example at my post number 38 )

In fact, I started writing on thephilosophyforum.com a couple of weeks ago because I was looking if somebody had an answer to this question: https://thephilosophyforum.com/discussion/5789/is-it-possible-to-define-a-measure-how-interesting-is-a-theorem
but it seems that nobody knows, or considers the question important :-(

Nobody has yet succeeded in developing a computable or constructive theory of physics. People are working on it. Perhaps we'll need another few decades to get more insight into this question.

Voevodsky showed that Homotopy Type Theory can be used as a foundation for mathematics in a very natural way. What do you mean by "constructive theory of physics"?

I read that article and didn't fully understand it or the point you're making

I wrote "you could even say that "every mathematical object is a string of characters", and you replied that "This is confusing syntax with semantics".
Herbrand interpretation says that you can build a model of your theory (semantics) made from the syntactic structures of the language itself (strings of characters).

Math is more than Gödel numbering

I agree. And I think Gödel himself would agree: he built this model to refute Hilbert's idea that there is no truth except formal logic.
Gödel himself was a Platonist. I found that surprising when I first learned it. He believed there is mathematical truth "out there" that's beyond the limitations of formal systems.
I think Gödel's idea was something similar to what I tried to express in (https://thephilosophyforum.com/discussion/5792/is-mathematics-discovered-or-invented): there exists a meaning of truth, based on a physical world, that is more "fundamental" than the physical world that we observe.
• 715
Yes. But then you can't get the theory of the real numbers off the ground, and the intermediate value theorem is false. The constructivists counter: the IVT becomes true again if we only consider computable functions.
— fishfry

Sorry, I don't understand what you mean here: The constructivist (intuitionist) logic is only a "more general" logic than classical logic, since it has less axioms. As a result, it is valid for a bigger set of models:

That's an interesting point. Here's why IVT is false in constructive math. IVT, you will recall, says that if a function from the reals to the reals is continuous, and if it takes a negative value at one point x and a positive value at another point y, then it must be zero at some point between x and y.

That is if we have f(x) < 0 and f(y) > 0 then there exists p with x < p < y and f(p) = 0. This is intuitively correct about a continuous function f.

If you take the standard real line and omit all the noncomputable points, you get the computable real line. The computable real line is full of holes where the noncomputable reals used to be. You can have a continuous function whose graph passes through the x axis at one of the holes. To a constructive mathematician that zero does not exist. You can drive a continuous function through a hole in the computable real line.

For example for a noncomputable p, there's a straight line that passes through the x-axis at p. But the point (p,0) does not exist for constructivists. We have a straight line that crosses the x-axis without ever taking on the value zero.

I personally find such a model of the continuum unsatisfactory in the extreme. It violates every intuition about what a continuum should be. I think that's both a mathematical and a philosophical problem for the constructivists.
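For what it's worth, what does survive constructively is an approximate IVT: for every ε > 0 one can compute a point p with |f(p)| < ε by bisection, even when the exact zero is a noncomputable point. A minimal sketch (the function name is illustrative):

```python
def approx_ivt(f, x, y, eps):
    """Approximate intermediate value theorem via bisection:
    given f(x) < 0 < f(y), find p with |f(p)| < eps using only
    finitely many evaluations of f -- constructively acceptable."""
    lo, hi = x, y
    while True:
        p = (lo + hi) / 2
        v = f(p)
        if abs(v) < eps:
            return p
        if v < 0:
            lo = p
        else:
            hi = p

# sqrt(2) as the (approximate) zero of f(t) = t*t - 2 on [1, 2]:
p = approx_ivt(lambda t: t * t - 2, 1.0, 2.0, 1e-9)
print(p)  # an approximation of sqrt(2)
```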

You mentioned a lot of other interesting things that I'll try to get to later. One was about LEM and HoTT. I was not under the impression that HoTT is a constructivist theory. I'm sure Voevodsky believed in uncountable sets. They are a simple consequence of the first-order axioms of set theory. A machine could verify Cantor's theorem. So I am not sure that HoTT and constructive math are the same. On the contrary I'd assumed they're different. But I'm mostly ignorant regarding HoTT.

One thing I do know is that the axiom of choice implies LEM. To reject the axiom of choice involves throwing out quite a lot of modern math.

https://en.wikipedia.org/wiki/Diaconescu%27s_theorem
• 715
-- forall is a loop that stops, returning false, if it finds a false value; its value as a proposition is false if it stops and true if it loops forever

I'm not sure what that means. You can't iterate through a set unless it's well-ordered. And there are sets that are not well-ordered, unless you accept the axiom of choice, which implies LEM. How would you iterate through the real numbers, for example? Here's how I'd do it. First, well-order the reals. Second, use transfinite recursion. Those are two powerful principles of math not available to constructivists. I don't know how constructivists feel about transfinite recursion, but it applies to very large sets that constructivists can't possibly believe in.

But the "for all" operator can be applied to the reals without the need for such powerful machinery as the axiom of choice and transfinite recursion. You just say "for all reals" and you're good. That's much more powerful than iteration IMO.

I do not believe that the universal quantifier is the same thing as iteration, for this reason. There are sets that you can apply "for all" to but that you can't iterate through in any obvious manner.
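The "forall as a loop" reading can be made concrete in a few lines. This is a minimal sketch of my own (the function name and the search budget are invented for illustration): over an infinite countable domain, a loop can only ever *refute* a universal claim by finding a counterexample; giving up the search proves nothing.

```python
import itertools

def forall_refuted(prop, domain, budget=10_000):
    """Search the first `budget` elements of a countable domain for a
    counterexample to prop. Returns one if found; returning None only
    means the truncated search found nothing, not that the claim holds."""
    for i, x in enumerate(domain):
        if i >= budget:
            return None  # gave up: the universal claim stays undecided
        if not prop(x):
            return x     # refuted: a concrete witness
    return None

# "every natural number is even" is refuted at once:
print(forall_refuted(lambda n: n % 2 == 0, itertools.count()))  # -> 1
# "n < n + 1" survives the budget, but that is not a proof:
print(forall_refuted(lambda n: n < n + 1, itertools.count()))   # -> None
```

This also shows why the universal quantifier over the reals cannot be read this way: there is no enumeration of the reals to feed into such a loop.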

This translates to the English: "however you choose to match elements of A with elements of B, there will always be an element of B that is not matched by any element of A"
I think this is a quite intuitive definition of the fact that B has at least one element more than A.

I can't agree that intuition is useful here. Intuitively, the even numbers adjoined with a single odd number are "larger" than the even numbers; and in fact this is confirmed by the proper subset relationship. But these two sets have the same cardinality. This is specifically a point of confusion for beginners. I reject naive intuition here and insist that "bigger" means injection but no surjection, end of story and no intuition allowed! Intuition is exactly what gets students into trouble when they first learn the math of infinity.
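The evens example can be made explicit. This is my own illustrative bijection (not from the thread): send 1 to 0 and shift every even number up by 2, matching the evens-plus-one-odd set one-to-one with the plain evens, even though the former properly contains the latter.

```python
def f(x):
    """Bijection from (evens ∪ {1}) onto the evens: 1 -> 0, 2k -> 2k + 2."""
    return 0 if x == 1 else x + 2

domain = [1, 0, 2, 4, 6, 8]            # 1 plus the first few evens
image = [f(x) for x in domain]
print(image)                            # -> [0, 2, 4, 6, 8, 10]
assert len(set(image)) == len(domain)   # injective on this prefix
# Every even 2k is hit (by 2k - 2, or by 1 when 2k = 0), so f is onto the
# evens: the "larger" set has exactly the same cardinality.
```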

Moreover if you reject the axiom of choice (necessary if you reject LEM) there are "amorphous" sets that have no sensible cardinality at all. Such sets defy anyone's intuition.

https://en.wikipedia.org/wiki/Amorphous_set

I should add that constructivists have considered these issues regarding the axiom of choice.

https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics)#Axiom_of_choice
• 189
First of all, let's look for a definition of "constructive mathematics", because I have the impression that we are not speaking about the same thing.

https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics)
"In the philosophy of mathematics, constructivism asserts that it is necessary to find (or "construct") a mathematical object to prove that it exists. In classical mathematics, one can prove the existence of a mathematical object without "finding" that object explicitly, by assuming its non-existence and then deriving a contradiction from that assumption. This proof by contradiction is not constructively valid. The constructive viewpoint involves a verificational interpretation of the existential quantifier, which is at odds with its classical interpretation.

There are many forms of constructivism. These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as CZF and the study of topos theory."

Well, the constructivist theory that I know quite well is the "Calculus of Inductive Constructions", or CIC ("https://en.wikipedia.org/wiki/Calculus_of_constructions"), implemented in the Coq proof assistant (https://coq.inria.fr/).

This is a particular version of "Martin-Löf Type Theory" (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)

Here's a summary of the main properties of Coq's logic: https://github.com/coq/coq/wiki/The-Logic-of-Coq

Basically, Coq is a programming language (let's call this the "internal language" of the logic) based on dependently typed lambda calculus (https://en.wikipedia.org/wiki/Dependent_type) that only allows building total functions: using this language it's impossible to define a function that does not terminate (so it's less powerful than the full untyped lambda calculus, or a Turing machine).

However, using the internal language you can operate on inductive (and co-inductive) data structures and on functions (the models about which you can speak and on which you can operate using the internal language) that ARE NOT REQUIRED TO BE COMPUTABLE.

For example, you can say "let R be a field and let F be a function such that forall x:R, F(x)^2 = x; let x be F(2)". To make things simpler, you can represent real numbers as functions from natural numbers to the finite set {0..9, plus the decimal point} (the infinite decimal expansion of the number), and then you can prove that the algorithm that produces the digits of the square root of two IS a real number that solves your equation. Of course "x" is not a recursive function (it does not terminate if you execute it on all natural numbers!), but to build it you only need a finitely defined algorithm.

In a similar way, you can say "Axiom excluded_middle: forall A:Prop, A \/ ~A.", meaning: excluded_middle is a function that takes as input a proposition A and produces as output a proof of the proposition A \/ ~A. This function is not implementable in the internal language of Coq, but you can treat it as a pre-defined "external" function that you can call, using the result without knowing how it works (something like a function defined in an external library of a programming language).

The logic doesn't ensure that the function exists (it doesn't even guarantee that it's consistent: the proof of consistency belongs to the meta-theory of the internal language), but only says that if you have this function, you can use it. And the resulting logic is equivalent to classical higher-order logic (not set theory).

The correspondence between set theory and the various versions of dependent type theory is rather complex:
See for example (https://mathoverflow.net/questions/69229/proof-strength-of-calculus-of-inductive-constructions):
"IIRC, the calculus of inductive constructions is equi-interpretable with ZFC plus countably many "inaccessible cardinals" -- see Benjamin Werner's "Sets in Types, Types in Sets". (This is because of the presence of a universe hierarchy in the CIC."

And this is a very good reference for intuitionistic type theory: https://plato.stanford.edu/entries/type-theory-intuitionistic/

========= end of the "definitions" :-) =========

Here's why IVT is false in constructive math. IVT, you will recall, says that if a function from the reals to the reals is continuous, and if it takes a negative value at one point x and a positive value at another point y, then it must be zero at some point between x and y.

That is if we have f(x) < 0 and f(y) > 0 then there exists p with x < p < y and f(p) = 0. This is intuitively correct about a continuous function f.

If you take the standard real line and omit all the noncomputable points, you get the computable real line. The computable real line is full of holes where the noncomputable reals used to be

I read that you can't define Cauchy or Dedekind real numbers in intuitionistic logic (without additional axioms), but you can use other definitions of "continuity":

http://www.alternatievewiskunde.nl/QED/brouwer.htm
"Any function which is defined everywhere at an interval of real numbers is also continuous at the same interval. With other words: For real valued functions, being defined is very much the same as being continuous."

and MY FAVOURITE ONE:
https://plato.stanford.edu/entries/continuity/#9 "Smooth Infinitesimal Analysis"

What is your objection to this axiomatization of real numbers?

One thing I do know is that the axiom of choice implies LEM. To reject the axiom of choice involves throwing out quite a lot of modern math.

Intuitionistic type theory implies a somewhat "weaker" version of the axiom of choice (the "good" part of it :-) )

[ ..going to continue next time! ]
• 189
You mentioned a lot of other interesting things that I'll try to get to later. One was about LEM and HOTT. I was not under the impression that HOTT is a constructivist theory.

HOTT is Martin-Löf Type Theory with higher inductive types, plus Voevodsky's "univalence axiom". So, it's not so simple:
First of all, univalence implies neither excluded middle nor the axiom of choice. So, in this sense, it could be called intuitionistic. But if by intuitionistic you mean that all the internal functions of the language are computable, then the presence of an axiom makes the theory non-computable (if the axiom were computable it wouldn't be needed as an axiom).
However, both the axiom of choice and excluded middle are consistent with Martin-Löf Type Theory. So there is no problem with including them in the theory (if you don't require the theory to be intuitionistic).
But the main distinctive point of HOTT is the fact that it is a type theory, not first-order logic with axioms from set theory. This makes it much easier to use in practice for real mathematics (I am pretty sure that nobody uses formal set theory for real mathematics: take a look at the post on this site on how to prove 1 + 1 = 2 :-) ). But on the other hand, there were a couple of things that were not completely "compatible" with standard mathematics, and both of them are magically "fixed" by the univalence axiom. One of them (the simpler one) is that functions in type theory are not directly interpretable in the usual way, as input-output correspondences, but rather as lambda terms: two functions can be different because they are implemented in different ways (different algorithms) while being equal as input-output correspondences (the principle that identifies such functions is called "function extensionality"). The other, more important point is that there are two kinds of "equality" in type theory, and neither of them matches exactly the equality of set theory. I don't think I can explain this in a few words, and this post is becoming too long... :-)

I'm sure Voevodsky believed in uncountable sets.
Yes of course, but this in my opinion is not really related to intuitionism.

A machine could verify Cantor's theorem. So I am not sure that HOTT and constructive math are the same.

A machine could verify any theory based on a formal system: this, I believe, can be taken as a definition of "formal system".

I'm not sure what that means. You can't iterate through a set unless it's well-ordered. And there are sets that are not well-ordered, unless you accept the axiom of choice, which implies LEM. How would you iterate through the real numbers, for example? Here's how I'd do it. First, well-order the reals. Second, use transfinite recursion. Those are two powerful principles of math not available to constructivists. I don't know how constructivists feel about transfinite recursion, but it applies to very large sets that constructivists can't possibly believe in.

But the "for all" operator can be applied to the reals without the need for such powerful machinery as the axiom of choice and transfinite recursion. You just say "for all reals" and you're good. That's much more powerful than iteration IMO.

I wanted to take a very simple countable model for an intuitionistic theory. You can use this model to "understand" the reason why the rules of logic are those particular ones. This is useful because if a theory has a model it is surely not contradictory. And once you are sure the rules are not contradictory, you can reinterpret them on another model that you don't understand (for example one that has an infinite set of elements) and assume that they have to make sense for this model too. You can't iterate through real numbers with a procedure made of finite steps. But you can give it some physical meaning (covering a 1-dimensional segment with a 2-dimensional figure, for example) that works following the same rules.
• 189
I can't agree that intuition is useful here. Intuitively, the even numbers adjoined with a single odd number are "larger" than the even numbers; and in fact this is confirmed by the proper subset relationship. But these two sets have the same cardinality. This is specifically a point of confusion for beginners. I reject naive intuition here and insist that "bigger" means injection but no surjection, end of story and no intuition allowed! Intuition is exactly what gets students into trouble when they first learn the math of infinity.

OK, you are right. Let's stick to a formal logic that uses only the forall-exists language. But we have to keep in mind that the meaning of "forall" and "exists" is determined only by the rules of the language. And that we can trust these rules because they are verified in a "physically verifiable" (intuitionistic?) model (such as the "iterate and check on countable sets" model that I described before), but they ARE NOT REQUIRED TO HAVE THAT PHYSICAL MEANING, in the same way that the word "larger" doesn't have to have the intuitive meaning related to our physical world.

Moreover if you reject the axiom of choice (necessary if you reject LEM) there are "amorphous" sets that have no sensible cardinality at all. Such sets defy anyone's intuition.

I didn't know what amorphous sets are. I searched on Wikipedia:
"After Cohen's initial work on forcing in 1963, proofs of the consistency of amorphous sets with Zermelo–Fraenkel were obtained.".

So, I looked for the proof of consistency with Zermelo-Fraenkel to see for which model amorphous sets make sense... but I don't have access to it :-(

However, I would guess that the crucial point here is that you define "infinite" as "the existence of a one-to-one correspondence with a proper subset", but the definition of subset is given by a proposition of the language, and everything depends on what propositions you can express in the language.
The axiom of choice expressed in set theory expands the set of propositions that you can express, and this has the effect of limiting the sets that you can define.
But the set-theoretical axiom of choice is not verifiable in any "concrete" (intuitionistic) model, in the same way that amorphous sets aren't.
• 715
First of all, let's look for a definition of "constructive mathematics", because I have the impression that we are not speaking about the same thing.

Wow you wrote a lot and I'm a little overwhelmed. I'll try to respond where I have some value to add. FWIW I have heard about HOTT. I do in fact know what a homotopy is from topology. I don't know the details of how homotopy is used to do logic or foundations. I've heard of the univalence axiom and my understanding is that it says "equivalence is equality" or some such. I'm sure I'm missing 100% of the philosophical and mathematical depth of the statement. I know enough category theory to vaguely imagine what a topos might be. I know who Grothendieck is but not quite exactly what a sheaf is. I've heard of constructive math and I know about Brouwer. As I've said, I regard HOTT and neo-intuitionism in general as Brouwer's revenge. So I know enough to make a little joke but not enough to discuss any of the technical aspects. I know of Bishop's book and I even know a book where a physicist tried to do constructive physics. That's a project in its infancy, but you constructivists will have to face the problem someday. Physics is founded on classical math, i.e. continuity and the real numbers as defined in set theory.

This is all by way of indicating some of the boundaries of my knowledge so that you'll get a better understanding of the deep pool your posts just threw me into!

Yes I've perused that article.

There are many forms of constructivism. These include the program of intuitionism founded by Brouwer, the finitism of Hilbert and Bernays, the constructive recursive mathematics of Shanin and Markov, and Bishop's program of constructive analysis. Constructivism also includes the study of constructive set theories such as CZF and the study of topos theory."

I know a little about Brouwer and once attempted to try to grok the idea of a free choice sequence. That's as far as I ever got with classical intuitionism. I find it murky in the extreme.

I wouldn't call Hilbert any kind of finitist, but you might know more about that than I do. Bernays I only know from Gödel-Bernays set theory but beyond that I know nothing.

Shanin and Markov I've never heard of, I'll Google. Bishop's book I've heard of. Constructive set theory I know nothing about. As I said I know enough category theory to have a vague notion of what a topos is. I'd love to get to that point someday.

Well, the constructivist theory that I know quite well is the "Calculus of Inductive Constructions", or CIC ("https://en.wikipedia.org/wiki/Calculus_of_constructions"), implemented in the Coq proof assistant (https://coq.inria.fr/).

This is a particular version of "Martin-Lof Type Theory" (https://en.wikipedia.org/wiki/Intuitionistic_type_theory)

I'll check out the Wiki link. Martin-Löf type theory I've heard of about a million times and have no idea what it is. I confess I give priority to trying to understand more of the classical math I wish I'd studied harder in my youth, and not so much with all the neo-intuitionist stuff. I like Steve Awodey's category theory book and Youtube videos and he's a big name in HOTT so I'm sure I'm missing a lot of good stuff. I'm just buried under an avalanche of stuff that I know a little "about" but that I'll never know. It's tragic, really. All this is part of that avalanche.

Here's a summary of the main properties of Coq's logic: https://github.com/coq/coq/wiki/The-Logic-of-Coq

I know a little about Coq as a proof assistant but no details. I am grateful for your summary and I will try to work through it at some point. I have to confess I did not dive into this exposition but I'm glad you wrote it.

Basically, Coq is a programming language (let's call this the "internal language" of the logic) based on dependently typed lambda calculus (https://en.wikipedia.org/wiki/Dependent_type) that only allows building total functions: using this language it's impossible to define a function that does not terminate (so it's less powerful than the full untyped lambda calculus, or a Turing machine).

However, using the internal language you can operate on inductive (and co-inductive) data structures and on functions (the models about which you can speak and on which you can operate using the internal language) that ARE NOT REQUIRED TO BE COMPUTABLE.

For example, you can say "let R be a field and let F be a function such that forall x:R, F(x)^2 = x; let x be F(2)". To make things simpler, you can represent real numbers as functions from natural numbers to the finite set {0..9, plus the decimal point} (the infinite decimal expansion of the number), and then you can prove that the algorithm that produces the digits of the square root of two IS a real number that solves your equation. Of course "x" is not a recursive function (it does not terminate if you execute it on all natural numbers!), but to build it you only need a finitely defined algorithm.

In a similar way, you can say "Axiom excluded_middle: forall A:Prop, A \/ ~A.", meaning: excluded_middle is a function that takes as input a proposition A and produces as output a proof of the proposition A \/ ~A. This function is not implementable in the internal language of Coq, but you can treat it as a pre-defined "external" function that you can call, using the result without knowing how it works (something like a function defined in an external library of a programming language).

The logic doesn't ensure that the function exists (it doesn't even guarantee that it's consistent: the proof of consistency belongs to the meta-theory of the internal language), but only says that if you have this function, you can use it. And the resulting logic is equivalent to classical higher-order logic (not set theory).

Note to self, read and understand all of the above someday.

But if I'm skimming reasonably, you are saying that someone's figured out how to do a satisfactory theory of the real numbers using constructive math? Well ok if you say so, but where are all the noncomputable numbers? Maybe you can explain that more simply. Also what about the various constructions that require the axiom of choice? The nonmeasurable set; the Hahn-Banach theorem, used in functional analysis, which is used in physics; Zermelo's well-ordering theorem; the fact that every vector space has a basis, and that every surjection has a right inverse; that every unital commutative ring has a maximal ideal; that every field has an algebraic closure; that every infinite set is Dedekind-infinite. Mathematicians are not going to give all those things up, and they all depend on the axiom of choice, hence LEM.

The correspondence between set theory and the various versions of dependent type theory is rather complex:
See for example (https://mathoverflow.net/questions/69229/proof-strength-of-calculus-of-inductive-constructions):
"IIRC, the calculus of inductive constructions is equi-interpretable with ZFC plus countably many "inaccessible cardinals" -- see Benjamin Werner's "Sets in Types, Types in Sets". (This is because of the presence of a universe hierarchy in the CIC."

Ok I'll add this to my reading list. You know I could spend the summer just working through your post! But ... are you telling me you can believe in countably many inaccessible cardinals but not a noncomputable real number? I didn't even think constructivists believed in uncountable sets.

And this is a very good reference for intuitionistic type theory: https://plato.stanford.edu/entries/type-theory-intuitionistic/

More for my reading list.

I read that you can't define Cauchy or Dedekind real numbers in intuitionistic logic (without additional axioms), but you can use other definitions of "continuity":

Ok right. This is perhaps a sticking point for me because I am so steeped in the traditional set-theoretic real numbers that I have a very hard time taking any other approach seriously. If you remove the noncomputable reals then there are Cauchy sequences that don't converge. Your "continuum" has more holes than points. I personally find that a serious problem. However I'm certain that all the smart people working on HOTT are perfectly aware of the problem and are ok with it somehow.

http://www.alternatievewiskunde.nl/QED/brouwer.htm
"Any function which is defined everywhere at an interval of real numbers is also continuous at the same interval. With other words: For real valued functions, being defined is very much the same as being continuous."

Yes, I've heard that. If you restrict to continuous functions you can make all this work. Will add the link to my reading list. Seriously, this is an awesome research program you've outlined for an aspiring neo-intuitionist.

and MY FAVOURITE ONE:
https://plato.stanford.edu/entries/continuity/#9 "Smooth Infinitesimal Analysis"

Oh yes another one I've heard of. Well differential geometers are closet physicists so they believe in infinitesimals and I gather you can take a category-theoretic approach that makes all this work. More stuff I know a little about without knowing any of the details. It's like my brain is filled with a skeleton but not enough is filled in. But yes I'll perfectly well agree that smarter people than me can make infinitesimals work in a categorical framework. I'm not saying they can't, LOL!!

What is your objection to this axiomatization of real numbers?

I'm sure I have no objection. Lawvere I know from his elementary theory of the category of sets, ie you can do set theory without ever talking about elements. I'm sure in a hundred years nobody will remember set theory anymore, it's clearly doomed. I have no objection. I can only plead ignorance of even more stuff I wish I knew.

One thing I do know is that the axiom of choice implies LEM. To reject the axiom of choice involves throwing out quite a lot of modern math.
— fishfry

Intuitionistic type theory implies a somewhat "weaker" version of the axiom of choice (the "good" part of it :-) )

I'm gratified that we are in agreement on the areas where our knowledge overlaps. Weak forms of choice are often sufficient. However the facts that every vector space has a basis and every surjection has a right inverse (ie section) are both equivalent to full choice. I suppose the constructivists have a fix.

[ ..going to continue next time! ]

I'm out of breath just typing. Let me tackle the other posts later.
• 189
I am glad that you replied to my post, so I can explain in more detail my point of view :smile:

I know of Bishop's book and I even know a book where a physicist tried to do constructive physics. That's a project in its infancy but you constructivists will have to face the problem someday. Physics is founded on classical math, ie continuity and the real numbers as defined in set theory.

I don't believe that physics is founded on ZF Set theory, if that's what you mean by "classical math".
Physics is based on calculus (for the most part), which was created starting from the 17th century (Newton's "Principia" was published in 1687), at a time when logic wasn't even considered part of mathematics.

In fact, I believe that most physicists don't even know the exact formulation of the axioms of ZF set theory, and that's because these axioms are never used directly in physics. The thing that really matters is that you can use logic to prove that your results follow from the algebraic rules of calculus, differential geometry, group theory, etc...

If this were true, the distinction between the various axiomatizations of real numbers would be irrelevant for physics. ZF set theory only happens to be the first axiomatization to have been discovered.

But this is in fact not completely true. There are some physical results that are not the same in all axiomatizations. For example, the Banach-Tarski theorem. If you believe that physical space can't be split into pieces and then reassembled with double the volume, you have to conclude that ZFC's axiomatization of the real numbers is not the right model for physical space.

A more "practical" example are Feynman's path integrals: they can be used to calculate physical quantities with extreme precision, if you don't take for real the result that derives from ZFC's axioms. In fact, from ZFC axioms you can derive that the result is "infinite". But if you take only some terms of your infinite (divergent) series, the result is experimentally correct with a very high accuracy!

I believe that the most popular explanation of this fact among physicists today is that physical space is in fact discrete. But there is no experimental evidence of this until now.
( The same thing happens with energy: if energy is continuous you get infinite results for black-body radiation. With Planck's quantization you get the correct results )
But in physics you can assume nothing until you find experimental evidence, because nature has surprised us so many times! However, if I had to guess, discrete space (like the pixels of a video game) is too "ugly" to be true: nature has shown a preference for "beautiful" mathematical structures.
I think an interesting question is: is there an axiomatization of a continuous space (where you can take measures represented as real numbers) where Feynman integrals make sense and are convergent? I don't know, but I don't see a reason why this shouldn't be possible.

I am grateful for your summary and I will try to work through it at some point. I have to confess I did not dive into this exposition but I'm glad you wrote it

I'll try to give you a simpler explanation: In logic you have propositions, terms and relations. Martin-Löf type theory (and HOTT too) reduces all three of these concepts to just one: functions. That's basically the reason why it has a simple categorical model.
The functions that you consider in the theory are, exactly like morphisms in category theory, not limited in any way. Now the question is: how is it possible to use and reason about functions that are not computable? Obviously, you can't "execute" the function on a given argument to check the result. Well, the "trick" is lambda calculus! With lambda calculus you can "execute" a function by what's called "beta reduction". That means you can transform and compose functions from one form to another (equivalent to the previous one) using very simple rules that are supposed to be respected by everything that can be called a "function".
The whole point of constructivism is that you can assume to exist for sure only the functions that you can effectively compute. All the others can enter the theory only as premises of your deductions (no axioms!). And that is enough to prove all the mathematical theorems that can be proved with classical logic, if you add the appropriate axioms corresponding to the "non-computable" parts of classical logic. In this way you make clear the distinction between the computable and non-computable parts of logic!
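A toy picture of "executing a function by beta reduction" (my own illustration, using Python lambdas as a stand-in for lambda terms): Church numerals represent numbers purely as functions, and applying them carries out the reduction.

```python
# Church numerals: the number n is "apply f n times".
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Read back a Church numeral by applying it to integer successor."""
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # -> 5
```

Nothing here ever inspects a number; everything is done by composing and applying functions, which is exactly the reduction discipline the post describes.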

[ that's enough for today.. :-) ]
• 715
I am glad that you replied to my post, so I can explain in more detail my point of view

Oh no now I am falling even further behind in my replies!

I don't believe that physics is founded on ZF Set theory, if that's what you mean by "classical math".
Physics is based on calculus (for the most part), which was created starting from the 17th century (Newton's "Principia" was published in 1687), at a time when logic wasn't even considered part of mathematics.

Yes, but by the 19th century it became clear that the logical gaps in calculus were becoming a problem and needed to be addressed. We needed to know exactly what a real number and a continuous function were in order to resolve the delicate convergence issues that were arising.

I agree of course that the physicists don't care and I didn't mean to imply that they do. But physics is based on the real numbers and mathematical analysis in general; and those things were discovered to need a foundation; and that foundation is currently set theory.

In fact, I believe that most physicists don't even know the exact formulation of the axioms of ZF set theory, and that's because these axioms are never used directly in physics.

Of course. In fact most working mathematicians couldn't state the axioms and don't use set theory directly! That's a fact. My point still stands. Once you get picky about which things converge and which don't, a problem that emerged from the infinite trigonometric series of 19th-century physics, you need a foundation, and set theory is it. Currently, of course.

The thing that really matters is that you can use logic to prove that your results follow from the algebraic rules of calculus, differential geometry, group theory, etc...

Sure, not disagreeing at all.

If this were true, the distinction between the various axiomatizations of real numbers would be irrelevant for physics. ZF set theory only happens to be the first axiomatization to have been discovered.

Perfectly well agreed. But we are not talking about whether Dedekind cuts or equivalence classes of Cauchy sequences are a better way to represent the real numbers. You are advocating constructive foundations. It's in that context that my remarks make sense. We can NOT currently (as far as I understand it) get modern physics off the ground using constructive math. The 't' in the Schrödinger equation is taken to be a variable that ranges over the standard mathematical real numbers, noncomputables and all.

I am aware of only one book that attempts to get physics working on nonconstructive math. That's a tiny drop of water in an ocean that needs to be filled.

I will agree that synthetic differential geometry is some kind of counterexample to my claim, but I don't know enough about it. But SDG is not the same as nonconstructive approaches, is it? I thought SDG is just a categorical approach to infinitesimals, so it's more like a complementary idea to nonstandard analysis. Not something directly related to homotopy type theory or neo-intuitionism or whatever. But I could be wrong about all this; I have little to no actual knowledge.

But this is in fact not completely true. There are some physical results that are not the same in all axiomatizations. For example, the Banach-Tarski theorem.

I would never call B-T a "physical" result. On the contrary it's only a technical result regarding the isometry group of Euclidean 3-space, which happens to contain a copy of the free group on two letters, which has a paradoxical decomposition. It has nothing to do with the real world. As far as we know, of course.

But you said earlier that physics doesn't depend on set theory; surely then we can't apply the axiom of choice to regions of physical space!

If you believe that physical space can't be split into pieces and then reassembled with double the volume, you have to conclude that ZFC's axiomatization of the real numbers is not the right model for physical space.

I've often raised this point myself. If anyone seriously believed it was, then we'd see physics postdocs applying for grants to count the number of points in the unit interval and thereby discover the truth of the continuum hypothesis. The fact that the idea is absurd shows how far ZF, let alone C, is from the real world.

I'm not claiming the world's based on set theory. I apologize if my earlier wording made it sound like I did.

I claim that our best physical theories are based on the real numbers and the theory of limiting processes that go under the name of mathematical analysis; and that our current formulation of analysis is in terms of set theory. I don't state this as a metaphysical point, only as a historically accurate one. I don't care if you want to replace one definition of the real numbers with another. I do care that there is as yet no fully worked out constructive theory of the real numbers that will support modern physics. That is my belief. Certainly I could be wrong.

A more "practical" example is Feynman's path integrals: they can be used to calculate physical quantities with extreme precision, provided you don't take as real the result that follows from ZFC's axioms. In fact, from the ZFC axioms you can derive that the result is "infinite". But if you take only some terms of your infinite (divergent) series, the result is experimentally correct to a very high accuracy!
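This is not a path integral, but the same phenomenon appears in a classic toy case: Euler's divergent series for the integral of e^(-t)/(1+xt) from 0 to infinity. The Python sketch below (hypothetical function names, numerical-integration parameters chosen arbitrarily) shows that truncating the divergent expansion near its smallest term gives an excellent approximation, while keeping more terms ruins it:

```python
import math

def euler_integral(x, t_max=80.0, n=200_000):
    """Numerically integrate e^{-t}/(1+x*t) over [0, t_max] (trapezoid rule).
    The tail beyond t_max is negligible because of the e^{-t} factor."""
    h = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        f = math.exp(-t) / (1.0 + x * t)
        total += f if 0 < i < n else f / 2.0
    return total * h

def partial_sum(x, order):
    """Truncation of the divergent expansion sum_k (-1)^k k! x^k."""
    return sum((-1) ** k * math.factorial(k) * x ** k
               for k in range(order + 1))

x = 0.1
exact = euler_integral(x)
best = partial_sum(x, 10)    # truncate near the smallest term (k ~ 1/x)
worse = partial_sum(x, 30)   # keeping more terms makes the answer absurd
```

Truncating at order 10 matches the integral to a few parts in ten thousand; by order 30 the factorials have taken over and the "sum" is off by orders of magnitude, exactly the pattern the quoted paragraph describes.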

I've heard about renormalization and I am not aware of whether it's been mathematically formalized or not. I found a stackexchange thread that says that "some" versions of QFT are mathematically sound and others are not! This is all inside baseball in the physics biz.

https://physics.stackexchange.com/questions/86597/qft-as-a-rigorous-mathematical-theory

Regardless, I take renormalization as a modern parallel to Newton's great use of calculus to work out gravity, two hundred years before we had a logically rigorous foundation for calculus. Physics often leads math that way. That doesn't mean the math of renormalization won't be fully patched up someday in terms of ZFC or some future foundation. Maybe tomorrow morning someone will derive renormalization from homotopy type theory. If that happens I'll have to shut up about constructivism.

I believe that the most popular explanation of this fact among physicists today is that physical space is in fact discrete.

Yes we are now back to the ancient question of atomism. Zeno. All that stuff. We are not going to solve it today. The Planck scale doesn't say the world's a discrete grid. It only says that our contemporary theories don't allow us to speculate below that scale. Maybe there are little thingies in there.

Personally I do not think there are Euclidean points. I don't regard the real numbers as a good model of continuity. I'm not making any metaphysical claims for math at all. But I am making historical claims that our best continuum theories DO need an underlying Euclidean model.

[By Euclidean I don't mean the geometry, I mean the Cauchy completeness of n-space.]

But there is no experimental evidence of this until now.
( The same thing happens with energy: if energy is continuous you get infinite results for black-body radiation. With Planck's quantization you get the correct results. )

I hope my Planck point was clear. I didn't deny the Planck scale. It's that my own personal understanding is that the Planck scale is that scale of space and time below which we can't sensibly measure or calculate or speculate about. I'm agnostic on whether there's anything actually there.

But with physics you can assume nothing until you find experimental evidence, because nature has surprised us so many times! However, if I had to guess, discrete space (like pixels of a video game) is too "ugly" to be true: nature has shown to prefer "beautiful" mathematical structures.

I think we're meeting in the middle. I don't think mathematical continuous space is quite the right model for the world, and you admit that neither is a discrete grid. Most likely it's something even stranger, yet to be discovered by a genius not yet born.

I think an interesting question is: is there an axiomatization of a continuous space (where you can take measures represented as real numbers) where Feynman integrals make sense and are convergent? I don't know, but I don't see a reason why this shouldn't be possible.

Don't know enough about the subject, more stuff to read. But quantum physics is probably not something I'll be tackling in this lifetime. I do know a little functional analysis, enough to know what Hilbert space is. So if you tell me that an observable is a linear operator on some complex Hilbert space, I know what that means mathematically. Just not physically. And I'll probably never know.

I'll try to give you a simpler explanation: In logic you have propositions, terms and relations. Martin-Löf type theory ( and HOTT too ) reduces all these three concepts to only one: functions. That's basically the reason why it has a simple categorical model.

The functions that you consider in the theory are, exactly as for morphisms in category theory, not limited in any way. Now the question is: how is it possible to use and reason about functions that are not computable? Obviously, you can't "execute" the function on a given argument to check what's the result. Well, the "trick" is: lambda calculus! With lambda calculus you can "execute" a function by what's called "beta reduction". That means that you can transform and compose functions from one form to another (equivalent to the previous one) using very simple rules that are supposed to be respected by everything that can be called "function".
The whole point of constructivism is that you can assume to exist for sure only the functions that you can effectively compute. All the others can enter the theory only as premises of your deductions (no axioms!). And that is enough to prove all mathematical theorems that can be proved with classical logic, if you add the appropriate axioms corresponding to the "non computable" parts of classical logic. In this way you make clear the distinction between the computable and non-computable parts of logic!
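The "classical principles as premises" idea can be sketched in a few lines of Python (a toy Curry-Howard reading, not the actual Calculus of Inductive Constructions; all names are hypothetical). Double-negation elimination is not provable constructively, but it becomes an ordinary function the moment a proof of LEM is passed in as an extra argument:

```python
class Absurd(Exception):
    """The impossible proposition _|_ : no honest proof ever produces it."""

def dne_from_lem(lem_A, not_not_a):
    """Double-negation elimination for A, derived from LEM taken as a PREMISE.
    lem_A() must return ('left', a) with a a proof of A, or
    ('right', not_a) with not_a a refutation A -> Absurd.
    not_not_a is a proof of ~~A, i.e. a function (A -> Absurd) -> Absurd."""
    tag, evidence = lem_A()
    if tag == 'left':
        return evidence            # LEM handed us a proof of A directly
    # LEM handed us ~A; together with the premise ~~A this branch is
    # contradictory, so it can never return normally.
    raise not_not_a(evidence)

# Usage with a decidable toy proposition ("some even number exists"),
# where LEM really is realizable by exhibiting the witness 42:
lem = lambda: ('left', 42)
not_not_a = lambda not_a: not_a(42)   # refute any refutation by showing 42
assert dne_from_lem(lem, not_not_a) == 42
```

The point of the sketch: `dne_from_lem` is itself a perfectly computable (total) function; the non-computable content lives entirely in the `lem_A` argument, which in general cannot be built but only assumed.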

I understand the words but not the ideas. I suspect it's the kind of thing where mere Wiki reading wouldn't help. I've heard of lambda calculus but don't know anything about it except that everyone thinks it's terribly important. I gave the Wiki a quick scan and they pointed out in their section on beta reduction that "The lambda calculus may be seen as an idealised version of a functional programming language ..."

Ok. But a functional programming language can't do anything a Turing machine can't, and in fact Turing proved that TMs are equivalent to the lambda calculus, and Turing machines can't crank out the digits of noncomputable numbers. So once again I'm up to my original brick wall of understanding. Or misunderstanding. I'm sure that if I grokked this subject I'd suddenly realize that I've been ignorant all these years, and that you can in fact do everything you need to do with mere computability.

I just don't see it yet.

[ that's enough for today.. :-) ]

Thank you much!

ps --

And that is enough to prove all mathematical theorems that can be proved with classical logic, if you add the appropriate axioms corresponding to the "non computable" parts of classical logic.

If I could only understand this. You are saying that if I can prove a theorem at all, I can prove it using constructive methods, by adding certain axioms. This could be within my grasp. What are the axioms that make this work?

pps --

Ah, this is that Curry-Howard business I bet. Programs are proofs. Fine, I believe that. A proof has to be computable, but the thing it's talking about need not be. But doesn't that mean something? Proofs aren't sufficient to know mathematical reality; Gödel showed that. Same with physics. Restricting our epistemology to what we can prove with axiomatic systems or computers is not sufficient to understand reality. That perhaps would be my working thesis.
• 189
You are advocating nonconstructive foundations. It's in that context that my remarks make sense. We can NOT currently (as far as I understand it) get modern physics off the ground using nonconstructive math

Wait a moment: I am advocating Martin-Löf type theory, and one of its particular versions called the "Calculus of Inductive Constructions", which is, as the name says, CONSTRUCTIVE mathematics.

From https://en.wikipedia.org/wiki/Constructivism_(philosophy_of_mathematics):
"Constructive mathematics
Much constructive mathematics uses intuitionistic logic, which is essentially classical logic without the law of the excluded middle."

If I could only understand this. You are saying that if I can prove a theorem at all, I can prove it using constructive methods, by adding certain axioms. This could be within my grasp. What are the axioms that make this work?

OK, maybe this is the best point to start with.

Here's MY PERSONAL (not sure if common) interpretation of the "meaning" of classical and intuitionistic logic.

First of all, definitions:
- a model is something on which you can perform "experiments" and verify the results.
- a proposition is the description of an experiment performed on a model.

The result of every experiment (if you can perform it) is always uniquely determined: True or False.
This is the same thing as a physical experiment: if it's doable and you prepared the experiment to obtain a true or false result, you'll always get an answer: it can't happen that the world stops working because it doesn't know how to answer your experiment!

But there is a difference from physics, and it's the difference between mathematical functions and physical experiments: in a mathematical model, for the same experiment you'll always get the same result: a function is supposed to be some fixed, well-identified mechanism that always produces the same output if you put in the same input.
In a physical model instead, as we know from quantum mechanics, this is not always true.

Let's begin from what is probably the most controversial point: the law of excluded middle (LEM).

So, LEM says that "all experiments are performable" (giving as a result True or False).
Not assuming LEM means that in some cases A is a performable experiment, but (A \/ ~A) is not.

The obvious example is the termination of a Turing machine:
A means: "the program A terminates".
You run the program and you verify that it's true (we assume that A always does the same thing, so it's enough to run it one time)

NOT A means: the program A never terminates:
You run the program and wait... forever! Your experiment will never end, so you'll never be able to perform the experiment that corresponds to the proposition "A is false".

Nevertheless, we know that an answer "exists": either the program terminates or it doesn't. No other possibilities! It's just that the answer is not implementable as an experiment on the model.
(of course in most cases you can prove that A is false with logic, but here we are not speaking about "proving" propositions but "verifying" on the model if the proposition is true or false)
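The asymmetry described above can be sketched concretely (hypothetical names, plain Python): a step-bounded runner can confirm "A terminates" by observation, but for "A never terminates" the best it can ever report is "no verdict yet":

```python
def observe_termination(step, state, fuel):
    """Run a small-step machine for at most `fuel` steps.
    `step` maps a state to the next state, or to None when the machine halts.
    Returns True if it halted (the experiment "A terminates" succeeded),
    or None meaning "no answer yet" -- never a definite False."""
    for _ in range(fuel):
        state = step(state)
        if state is None:
            return True
    return None  # out of fuel: cannot distinguish "slow" from "forever"

# A machine that counts down and halts:
halting = lambda n: None if n == 0 else n - 1
# A machine that loops forever:
looping = lambda n: n + 1

assert observe_termination(halting, 10, 1000) is True
assert observe_termination(looping, 0, 1000) is None  # still no verdict
```

No choice of `fuel` turns the second `None` into a `False`; that is exactly the sense in which "NOT A" is not a performable experiment here.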

So, in classical logic we assume that EVERY proposition of the language describes an "experiment" that is executable and, when executed, returns an answer True or False.

In intuitionistic logic instead, we assume that there are propositions A for which "NOT A" is not a feasible experiment. So, you shouldn't be allowed to write "NOT A" as an assumption for an arbitrary A, because for some propositions A, "NOT A" has no meaning in our model.

The "trick" is that in intuitionistic logic "NOT" is not defined as the boolean function {True -> False; False -> True}, but as "A implies Absurd" ( A -> _|_ ), where Absurd is the "impossible experiment": the experiment that does not return any result, because it's associated with a contradictory proposition. Note that IF A is a possible experiment, then "A -> _|_" is always a possible experiment in intuitionistic logic, because it's implemented as a logical demonstration made of a finite number of steps starting from A (that's the reason why the internal language of logic should be allowed to implement only total functions), and the provability of a theorem is a syntactic operation made of a finite number of steps, independent of the model.
In this way, the rules of logic guarantee that you can never build an "impossible" experiment IF you start from a possible one.

Now, in classical logic "A -> _|_" is equivalent to "NOT A", because there are no "impossible" experiments ( except _|_ that is not derivable in any logic ).
But in intuitionistic logic you can assume as an axiom the proposition "A \/ (A->_|_)" (usually written A \/ ~A, but with the intuitionistic definition of the not function).
"A \/ (A->_|_)" in intuitionistic logic is some kind of "oracle" function that takes as an input a proposition A and ALWAYS returns one of the two propositions: A or (A->_|_).
If this function exists, then you can use it to decide the result of every experiment, and (A->_|_) becomes equivalent to the "NOT A" of classical logic (no more impossible experiments). If this function does not exist, there are impossible experiments, and "A \/ (A->_|_)" cannot be built in any way starting from intuitionistic rules.
So, this means that:
- "classical logic" = "intuitionistic logic" + LEM.
- "models of classical logic" = "models of intuitionistic logic" - "non realizable experiments"
You can use intuitionistic logic to separate the finitary deductions (not using any axiom) from the non finitary ones (using LEM, or some other axiom).
(of course, this is a huge simplification, since there are many kinds of "classical" and "intuitionistic" logic, even without considering the axioms of set theory)

There are even other models in which not all propositions of classical logic make sense: for example, if you interpret the propositions as experiments of quantum mechanics, the proposition A /\ B does not make sense if A and B are incompatible observables: because both A and B separately are measurable quantities, but the couple (A, B) is not.
So, when you use classical logic to reason about experiments in quantum mechanics you have to be careful not to use the expression A /\ B when A and B correspond to incompatible observables.
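A standard concrete illustration of incompatibility: incompatible observables correspond to non-commuting operators. With the Pauli matrices Z and X (spin along z and along x), the two orders of composition come out different, so there is no joint sharp measurement:

```python
def matmul(A, B):
    """2x2 matrix product, plain lists of lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Pauli observables: Z measures spin along z, X measures spin along x.
Z = [[1, 0], [0, -1]]
X = [[0, 1], [1, 0]]

ZX = matmul(Z, X)
XZ = matmul(X, Z)
assert ZX != XZ   # nonzero commutator: the pair (Z, X) is not measurable
```

This is only the operator-side picture of the point made above; the logical side is that the "experiment" A /\ B has no implementation when the operators fail to commute.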
• 189
We can NOT currently (as far as I understand it) get modern physics off the ground using nonconstructive math. The 't' in the Schrödinger equation is taken to be a variable that ranges over the standard mathematical real numbers, noncomputables at all.

I am aware of only one book that attempts to get physics working on nonconstructive math. That's a tiny drop of water in an ocean that needs to be filled.

OK, I am not an expert either :-)
What I can say for sure is that there is a full formalization of real numbers in the Coq standard library (https://coq.inria.fr/distrib/current/stdlib/).

I have used the library for a long time (at the beginning at university, and afterwards because I like to play with math), but to be honest I didn't even know what axiomatization it was based on :-)

I searched for some documentation (easier than digging into the library searching for axioms):

(https://hal.inria.fr/hal-00860648v1/document at page 6)
"Note that the standard library sometimes makes use of the excluded-middle axiom in addition to the previous ones. The CoqTail project, however, has shown that it was unneeded. So, for the purpose of this work, we consider that this axiom is not part of the ones defining real numbers in Coq's standard library"

From the point of view of calculus (and physics) you can use limits or derivatives perfectly well without knowing how they are defined in the base libraries (you only apply theorems).

I will agree that synthetic differential geometry is some kind of counterexample to my claim but I don't know enough about it. But SDG is not the same as nonconstructive approaches, is it? I thought SDG is just a categorical approach to infinitesimals so it's more like a complementary idea to nonstandard analysis. Not something directly related to homotopy type theory or neo-intuitionism or whatever.

SDG represents space as a topos. And a topos is the model (or at least the "standard" model) of Martin-Löf type theory (lots of details omitted...)

(From https://ncatlab.org/nlab/show/topos)
"toposes are contexts with a powerful internal logic. The internal logic of toposes is intuitionistic higher order logic. This means that, while the law of excluded middle and the axiom of choice may fail, apart from that, every logical statement not depending on these does hold internal to every topos."
P.S. In reality, the "standard" model for Martin-Löf type theory with extensional equality is "believed" to be an (∞,1)-topos, but I don't know if anybody has found a proof of this.

Simple explanation:

In ZF Set theory, everything is made of sets (or, if you prefer, everything is a set)
Of course you can represent (or build) whatever you want with sets, by composing them with inclusion and equality. But the fundamental "objects" described by the theory are sets.

In Martin-Löf type theory, everything is made of types (objects) and functions (morphisms).
Essentially, the rules of logic describe what you can "build" starting from an arbitrary finite collection of objects and morphisms.
The "thing" that results from building ALL THE POSSIBLE CONSTRUCTIONS starting from this arbitrary collection turns out to have the algebraic structure of a topos (the free topos built from the given initial objects and morphisms).

In a similar way, for example, the axioms of classical propositional logic correspond to a distributive lattice, and the axioms of simply typed lambda calculus correspond to a cartesian closed category.

Well, SDG is an attempt to add to a topos enough structure (corresponding to axioms of the internal logic) to be able to define on it differential operators that respect the same rules of calculus that we use in physics.
The main difference is that this space is not made of an infinite set of "points" of zero size, but of an infinite set of lines of infinitesimal size. From the point of view of logic, the model is not contradictory. From the point of view of "intuition", I am not sure if it's more "crazy" to believe that you can "attach" one after the other (continuity) an infinite set of objects of length zero to obtain a finite segment, or that you can "attach" one after the other an infinite set of objects of infinitesimal length to obtain a finite segment.
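One algebraic shadow of SDG's nilpotent infinitesimals can be played with directly: dual numbers a + b·d with d·d = 0. This small sketch (a hypothetical `Dual` class, not actual SDG) recovers exact derivatives of polynomials with no limits at all, which is the Kock-Lawvere idea in miniature:

```python
class Dual:
    """Numbers a + b*d with d*d = 0 (a nilpotent infinitesimal)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a + b d)(c + e d) = ac + (ae + bc) d, because d^2 = 0
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

def f(x):
    return x * x * x + 2 * x + 1   # f'(x) = 3x^2 + 2

# Evaluating on x + d gives f(x) + f'(x)*d exactly, no limit process:
y = f(Dual(2.0, 1.0))
assert y.a == 13.0   # f(2)  = 8 + 4 + 1
assert y.b == 14.0   # f'(2) = 12 + 2
```

The design choice mirrors the text: the "infinitesimal segment" d is not zero, yet it has no square, and the chain and product rules fall out of plain algebra.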

I would never call B-T a "physical" result. On the contrary it's only a technical result regarding the isometry group of Euclidean 3-space, which happens to contain a copy of the free group on two letters, which has a paradoxical decomposition

I don't know what is an isometry group :worry:

I've heard of lambda calculus but don't know anything about it except that everyone thinks it's terribly important. I gave the Wiki a quick scan and they pointed out in their section on beta reduction that "The lambda calculus may be seen as an idealised version of a functional programming language ..."

:-) Lambda calculus (let's say simply typed lambda calculus, to fix the ideas) is a little recursive set of rules that says how you can build some objects starting from other objects that have just been built: exactly like Peano natural numbers, recursive trees, and all other recursive constructions that you can think of.
The objects that you can build in this way have exactly the same structure as the arrows and the objects of a cartesian closed category. Every arrow can start from a type and end in another type, and there is the possibility to build types from all arrows that start from the same source object and end in the same target object (the exponential objects of the category).
So, category theory and lambda calculus represent the same structure, seen from two different points of view:
- category theory describes the collection of all possible objects and arrows seen from a GLOBAL point of view
- lambda calculus describes how you can build the whole structure putting together one piece at a time, from a LOCAL point of view.

About beta reduction: if you are operating on a universe of functions that you don't know, you obviously cannot execute them, because you don't know how they are "implemented". And, in general, they don't have to be "computable" functions at all. You can only execute the functions that you build within the internal language of logic.
Well, beta reduction is a rule that says: if you apply the function that takes the argument a into the formula ".... a ...." ( lambda a -> ..... a .... ) to the argument "whatever formula you want", you obtain as a result ( ..... "whatever formula you want" .... ). That is: you substitute the expression of the argument for the symbol that represents the input inside the formula. It's the most obvious and stupid thing that you can say a function "must" do. This rule does not care how the function is implemented; it only operates formally, composing functions together.

Well, the "strange" thing is that this rule can be used to calculate the value of the function IF the function is computable. It means that you can "execute" the function and obtain a result not by really executing it, but by transforming it recursively from one form to the next, until you reach a "standard form". This standard form "represents" the result of the execution of the function (the output). In other words, this is a very clever way to mix computable functions that you built yourself with other "external" functions that you assume to exist but have no idea what they are doing. It's the same as in logic: ( A /\ B ) \/ ( not C ) can be reduced to a normal form without having any idea of what the propositions A, B and C say.
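The substitution rule itself fits in a few lines. Below is a minimal, capture-naive normal-order reducer in Python (a hypothetical sketch; the naive substitution is fine for closed examples like the one at the end, but a real implementation would rename bound variables to avoid capture):

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)

def subst(term, name, value):
    """Capture-naive substitution [value/name]term."""
    kind = term[0]
    if kind == 'var':
        return value if term[1] == name else term
    if kind == 'lam':
        if term[1] == name:          # the binder shadows `name`
            return term
        return ('lam', term[1], subst(term[2], name, value))
    return ('app', subst(term[1], name, value), subst(term[2], name, value))

def step(term):
    """One leftmost-outermost reduction step; returns (term, changed)."""
    if term[0] == 'app':
        f, a = term[1], term[2]
        if f[0] == 'lam':                    # the beta rule itself
            return subst(f[2], f[1], a), True
        f2, ch = step(f)
        if ch:
            return ('app', f2, a), True
        a2, ch = step(a)
        return ('app', f, a2), ch
    if term[0] == 'lam':
        b, ch = step(term[2])
        return ('lam', term[1], b), ch
    return term, False

def beta_normalize(term, fuel=1000):
    """Apply beta steps until no redex remains (bounded by fuel)."""
    for _ in range(fuel):
        term, changed = step(term)
        if not changed:
            return term
    raise RuntimeError("no normal form within fuel")

# (\x. \y. x) applied to a and b reduces to a:
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
t = ('app', ('app', K, ('var', 'a')), ('var', 'b'))
assert beta_normalize(t) == ('var', 'a')
```

Note that the free variables `a` and `b` are exactly the "external functions you know nothing about": the reducer happily rewrites around them without ever executing them.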

Ok. But a functional programming language can't do anything a Turing machine can't, and in fact Turing proved that TMs are equivalent to the lambda calculus, and Turing machines can't crank out the digits of noncomputable numbers. So once again I'm up to my original brick wall of understanding. Or misunderstanding.

Exactly. All that you said is correct. You can operate with logic only using a Turing machine. But the model on which you operate is a category, without ANY limitation of size. This is exactly the same thing that you do in set theory: you assume that there exists a universe made of sets of whatever size you want. And you say that if there exist two sets A and B, then there must exist even "A intersect B", and "A union B", etc., etc. The things that you can build with logic operators (conclusions) starting from a finite number of terms of the language (hypotheses) are always a countable set in whatever formal language you use.
And by describing the finite structures that you can build from arbitrary finite sets of other objects, you define a structure on the underlying universe on which you operate.
But you can't impose a limit on the "infiniteness" of the size of the underlying universe only by imposing rules on finite constructions that are part of it. And that is true for intuitionistic logic AND for classical logic. You cannot "build" an irrational number in set theory either. You can only assume that it exists because it's compatible with the rules that you use to build sets starting from other sets!
• 715
I've been meaning to reply to this. I can't possibly respond technically on all you've written. You've outlined a research program that I may return to on an ongoing basis, but can't comment on at the moment. I'm grateful for all your references and comments. I know a lot more about HOTT today than I did before your posts.

It's interesting that you used Coq in school. You are a HOTT native. I of course learned my set-theoretic foundations long ago. As Max Planck said, science progresses one funeral at a time.

I don't know what is an isometry group

The isometry group of 3-space is just the collection of all rigid motions of space. These are all the combinations of rotations, translations, and reflections. Simple idea. They form a group in the sense of group theory, hence the name.

I only mentioned it because Banach-Tarski came up. BT is everyone's go-to example that proves that abstract math is completely divorced from common sense.

It turns out that the proof of Banach-Tarski is quite simple, at least in its outline. There are a number of steps but no step is particularly difficult. The core idea is encapsulated in the phrase, "The isometry group of 3-space contains a copy of the free group on two letters." When you unpack the meaning (which would take us far afield here) it is very simple and natural. No set-theoretic magic at all.

The Wiki article on the subject has a nice outline in case anyone's interested in demystifying this theorem for themselves. Point being that BT is much less esoteric than people think.
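For anyone who wants to see the core fact with their own eyes, the paradoxical decomposition of the free group can be checked by brute force on words of bounded length. Below is a small hypothetical Python sketch; S(x) denotes the set of reduced words starting with the letter x, and the punchline is that S(a) together with a·S(a⁻¹) already covers the whole group:

```python
LETTERS = 'aAbB'                      # A stands for a^-1, B for b^-1
INV = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b'}

def reduced_words(max_len):
    """All freely reduced words over {a, a^-1, b, b^-1} up to max_len."""
    words = ['']
    frontier = ['']
    for _ in range(max_len):
        frontier = [w + c for w in frontier for c in LETTERS
                    if not (w and INV[w[-1]] == c)]
        words += frontier
    return words

def mul(x, w):
    """Left-multiply the reduced word w by the letter x, then reduce."""
    return w[1:] if w and w[0] == INV[x] else x + w

n = 6
F = set(reduced_words(n))
S_a = {w for w in F if w.startswith('a')}   # words beginning with a
S_A = {w for w in F if w.startswith('A')}   # words beginning with a^-1

# Paradox: S(a) together with a * S(a^-1) covers the whole group.
shifted = {mul('a', w) for w in S_A}
covered = S_a | shifted
short = {w for w in F if len(w) <= n - 1}   # avoid length-cutoff effects
assert short <= covered
```

Two of the five pieces of F2 reassemble into all of F2; the axiom of choice is only needed later, to transport this free-group fact to orbits of points in the ball.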

You cannot "build" an irrational number in set theory either. You can only assume that it exists because it's compatible with the rules that you use to build sets starting from other sets!

I wanted to put in a word for the standard real numbers.

* Set theory's not needed for the standard reals. The idea of Cauchy completion arose historically long before Cantor's work on set theory. Cauchy completeness is a very natural idea, and the formality of defining the real numbers as the (equivalence classes of) Cauchy sequences of rationals would occur even to the most diehard constructivists. They might reject the real numbers from their ontology; but they could not deny its importance as an abstract idea.
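A miniature of Cauchy's idea, as a sketch: Newton's iteration produces a Cauchy sequence of exact rationals whose squares approach 2, even though no term of the sequence (indeed no rational at all) is its limit. That gap is precisely what completion fills:

```python
from fractions import Fraction

def sqrt2_cauchy(n):
    """First n terms of the Newton iteration x <- (x + 2/x)/2,
    a Cauchy sequence of exact rationals converging to sqrt(2)."""
    x = Fraction(2)
    terms = []
    for _ in range(n):
        terms.append(x)
        x = (x + 2 / x) / 2
    return terms

xs = sqrt2_cauchy(6)
errors = [abs(x * x - 2) for x in xs]
# Every term is rational with x^2 != 2, yet the error collapses fast:
assert all(e > 0 for e in errors)
assert errors[-1] < Fraction(1, 10**20)
assert all(errors[i + 1] < errors[i] for i in range(len(errors) - 1))
```

Using `Fraction` keeps everything exact, so the sequence really is a sequence of rationals in the strict sense, not floating-point approximations.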

The standard reals are the only model of the reals that is Cauchy complete.

The constructive reals are not complete. The sequence whose n-th element is the real number represented by the first n digits of the binary representation of Chaitin's Omega is a Cauchy sequence of computable numbers that fails to converge to a computable number.

On the other hand the other famous alternative model of the reals, the hyperreals of nonstandard analysis, are also not complete. Any non-Archimedean field (one that contains infinitesimals) is necessarily incomplete.

The constructive reals fail to be complete because there are too few of them. The hyperreals fail to be complete because there are too many of them. The standard reals are the Goldilocks model of the real numbers. Not too small and not too large. Just the right size to be complete.

* Cauchy completeness is a second order property. It's equivalent to the least upper bound property, which says that every nonempty set of reals that's bounded above has a least upper bound. It's second-order since it quantifies over subsets of reals and not just over the reals.

The second order theory of the reals is categorical. That means that every model of the reals that includes the least upper bound axiom is isomorphic to the standard reals. Up to isomorphism there is only one model of the standard reals; and it is the only model that is Cauchy complete. Even a committed constructivist would have to acknowledge these facts, and hold the standard real numbers as important at least as an abstraction.
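For reference, the least upper bound axiom written out, with the second-order character visible in the very first quantifier, which ranges over subsets of the reals rather than over reals:

```latex
% Completeness (least upper bound) axiom: the leading quantifier ranges
% over SUBSETS S of R, which is what makes the statement second order.
\forall S \subseteq \mathbb{R}\;
  \Bigl( S \neq \varnothing \;\wedge\; \exists b\,\forall x \in S\; x \le b \Bigr)
  \;\rightarrow\;
  \exists s\,\Bigl( \forall x \in S\; x \le s
    \;\wedge\; \forall b\,\bigl( (\forall x \in S\; x \le b) \rightarrow s \le b \bigr) \Bigr)
```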

* As a final point, as far as I can tell not every HOTT-er is a diehard constructivist. In some versions of HOTT there are axioms equivalent to Cauchy completeness and even the axiom of choice.

If I'm understanding correctly, one need not commit to constructivism in order to enjoy the benefits of HOTT and computer-assisted proof.
• 114

Hmmm... When we say divide by 2, we mean make two of that, or cut once (two pieces, one cut). But had our convention been that dividing by 2 meant cutting two times, then to divide by 0 would mean to cut 0 times, i.e. leave it as it is, which is 1 piece. So then to get 0 pieces you would have to cut: i) a negative number of times, which is impossible; ii) an infinitesimal number of times, which is impossible not just because of the unreal nature of infinitesimal numbers, but also because you cannot cut a fractional number of times; iii) an infinite number of times, so that every piece is so small as to be the same as nothing, which is impossible not just because of the unreal nature of infinite numbers, but also because you'd never get to the end of it even cutting at infinite speed forever.
• 189
The Wiki article on the subject has a nice outline in case anyone's interested in demystifying this theorem for themselves. Point being that BT is much less esoteric than people think.

OK, I read the article and finally understood the point about isometry group! :smile:

Actually, I tried to look for a formal proof of Banach-Tarski in ZFC Set theory, but the only one I found is using Coq... :-) (https://hal.archives-ouvertes.fr/hal-01673378/document)

- By the way, there is a formal logic computer system using ZFC Set theory with a very extensive library (http://mizar.org/), but there is no Banach-Tarski theorem there (at least at the moment)

I am looking for formalized proofs because usually proofs in ZFC related to real numbers are too long to be completely written by hand, and tend not to be clear on the steps that make concrete use of ZFC axioms.

Anyway, the reasons for my doubts about ZFC in geometry are more "fundamental", and not related to the existence of the isometry group.
The thing that I don't like is that in BT (as in all topology based on ZFC), a segment (or a surface, or a solid) is DEFINED to be a set of points.
(well since in ZFC everything is defined as a set, you don't really have much choice with this definition...)

The problem is that you can't use the property of a segment's being a set in any way, because bijections between sets (one-to-one correspondences) do not preserve any of the essential properties of the segment:
- the size of the set is not preserved, because additivity doesn't work for uncountable sets (and I believe there is no way to define a sum of an uncountable set of terms that gives a finite result, because the elements of an uncountable set are not "identifiable" with a computable function, and then you can't "use" them to compute a sum).
- the topology of the set is not preserved, since every set can be reordered to have a least element (https://en.wikipedia.org/wiki/Well-ordering_theorem).
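The parenthetical worry about uncountable sums is in fact backed by a standard lemma, which can be sketched as follows:

```latex
% If I is uncountable and x_i > 0 for every i in I, the (unordered) sum
% is defined as the supremum over finite subsets, and it is infinite:
\sum_{i \in I} x_i \;:=\; \sup_{\substack{F \subseteq I \\ F \text{ finite}}} \sum_{i \in F} x_i \;=\; \infty
% Sketch: write I as the union over n of I_n = { i : x_i > 1/n }.
% If every I_n were finite, I would be countable; so some I_n is infinite,
% and picking k of its elements gives a finite partial sum exceeding k/n,
% which is unbounded as k grows.
```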

For this reason, the use of bijective functions with the "meaning" of moving a 3-dimensional object by breaking it into pieces and then reassembling it in another place is wrong.
I mean: it's not contradictory, but it doesn't reflect the intuitive meaning that you give to the transformation, since neither topology nor measure are invariant under this transformation.

The "trick" of breaking the transformation in two parts, so that the second part is only the composition of four pieces doesn't change the fact that the first part of the transformation does not preserve measure (the measure of each of the four intermediate pieces is undefined).

[TO BE CONTINUED]
• 715
OK, I read the article and finally understood the point about isometry group! :smile:

Ok! And I'm so glad you responded. I come to this forum to discuss math, and when there's no interesting math content going on I go over and get in trouble on the political forums. Discussing math is much more productive. As I saw on a mathematician's office door once: Good sense about trivialities is better than nonsense about things that matter!

Actually, I tried to look for a formal proof of Banach-Tarski in ZFC Set theory, but the only one I found is using Coq... :-) (https://hal.archives-ouvertes.fr/hal-01673378/document)

Arghhh. In my opinion you are totally missing the point of math. I don't mean that to be such a strong statement, but in this instance ... yes.

The point isn't to have a pristine logical proof. The point is the beautiful lifting, via the axiom of choice, from the very commonplace paradoxical composition of the free group on two letters, up to a paradoxical composition of three-space itself. It's the idea that's important, not the formal proof. That is the actual point of view of working mathematicians.
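That paradoxical decomposition of the free group on two letters is concrete enough to play with directly. Here is a minimal sketch (the naming and truncation are mine, not from any BT formalization): represent group elements as reduced words over a, b and their inverses, let S(x) be the words starting with x, and check on words up to a fixed length that S(a) together with a·S(a⁻¹) already rebuilds the whole group from two "pieces".

```python
from itertools import product

# Generators of the free group F2: a, b and their inverses A = a^-1, B = b^-1.
GENS = "aAbB"
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduce_word(word):
    """Cancel adjacent inverse pairs (e.g. 'aA', 'Bb') until the word is reduced."""
    out = []
    for g in word:
        if out and out[-1] == INV[g]:
            out.pop()
        else:
            out.append(g)
    return "".join(out)

def reduced_words(n):
    """All reduced words of length <= n; '' is the identity element."""
    ws = {""}
    for k in range(1, n + 1):
        ws |= {reduce_word("".join(p)) for p in product(GENS, repeat=k)}
    return ws

def starting_with(x, words):
    """S(x): the words whose reduced form begins with generator x."""
    return {w for w in words if w.startswith(x)}

words = reduced_words(4)
Sa = starting_with("a", words)   # one "piece": words starting with a
SA = starting_with("A", words)   # another: words starting with a^-1

# Left-multiplying S(a^-1) by a cancels the leading A, leaving every word
# that does NOT start with a (including the identity). So in the full group
# F2 = S(a) UNION a*S(a^-1): two pieces that reassemble into everything.
a_times_SA = {reduce_word("a" + w) for w in SA}
covered = Sa | a_times_SA
```

Because the word list is truncated at length 4, the check `reduced_words(3) <= covered` verifies the covering on all words of length at most 3; the same argument runs through unchanged in the infinite group.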

I recently ran across an article, link later if I can find it. Professional number theorists were asked by professional logicians: wouldn't you be interested in seeing a computerized formal version of Wiles's proof of Fermat's Last Theorem? And the logicians were stunned to discover that the mathematicians had no interest in such a thing!

What's important about Wiles's proof is the ideas; not every last bit and byte of formal correctness.

This is something a lot of people don't get about math. It's the overarching ideas that people are researching. Sure, you can work out a formal proof if you like, but that's more like grunt work. The researchers are not interested.

Likewise, you are interested in a formal computer proof of BT, and that is not the point at all. The point is that the paradoxical decomposition of the free group on two letters induces a paradoxical decomposition of three space. That's the point. It's beautiful and strange. Formal proof in Coq? Ok, whatever floats your boat. But that is not the meaning of the theorem. The meaning is in the idea.

I hope I'm explaining the modern viewpoint here. The ideas are important, the proofs are not. I know, that's the opposite of what they tell people in the philosophy of math classes. But it's how mathematicians think.

I found the article.

Why the Proof of Fermat’s Last Theorem Doesn’t Need to Be Enhanced

Decades after the landmark proof of Fermat’s Last Theorem, ideas abound for how to make it even more reliable. But such efforts reflect a deep misunderstanding of what makes the proof so important.

It's a great read, it makes my point much better than I can.

https://www.quantamagazine.org/why-the-proof-of-fermats-last-theorem-doesnt-need-to-be-enhanced-20190603/

- By the way, there is a formal logic computer system using ZFC Set theory with a very extensive library (http://mizar.org/), but there is no Banach-Tarski theorem there (at least at the moment)

That may well be true, but it is in no way relevant to the interestingness of the theorem!

I am looking for formalized proofs because proofs in ZFC related to real numbers are usually too long to be written out completely by hand, and tend to be unclear about the steps that make concrete use of the ZFC axioms.

That's perfectly cool if you are interested in detailed step-by-step proofs derived from first principles. But that's a different interest than the math itself.

Anyway, the reasons for my doubts about ZFC in geometry are more "fundamental", and not related to the existence of the isometry group.

Two points here. One, I didn't realize you had "doubts" about ZFC, but if you do I might try to alleviate them or even perhaps agree with them. "ZFC in geometry" I didn't follow; do you mean modern geometry, i.e. algebraic geometry (very category theoretic), geometric algebra, etc.? Not catching your reference.

But secondly, you spoke of the "existence of the isometry group" as something strange or questionable. Perhaps you mean something else. If you take Euclidean 3-space straight out of multivariable calculus class, it's clear that if you have an object or set of points, you can transform them in a distance-preserving manner; and that the collection of all such transformations forms a group. Surely nothing is radical or counterintuitive about that.

If you don't like the idea of "space as a set of points" I do understand that philosophical objection. But then you are objecting to a huge amount of modern math and physical science too. I'm not sure where you're coming from. If you don't like point sets, then you wouldn't like set theory. I can certainly see that.

The thing that I don't like is that in BT (as in all topology based on ZFC), a segment (or a surface, or a solid) is DEFINED to be a set of points.

Yes that is certainly the case. And I get that you are objecting to that perspective. Which is fine. If you are engaged in the century-long project of replacing set theory, which won the 20th century, with HoTT (or category theory or Smooth Infinitesimal Analysis, etc.), which will probably win the 21st, I can't argue the point; we are all just passing through history.

But surely your objection then is not to BT, but to virtually all of modern mathematics and physics. Is that your viewpoint? How far does your rejection of using point sets to model mathematical ideas go?

(well since in ZFC everything is defined as a set, you don't really have much choice with this definition...)

Even if set theory were dethroned as the foundation of mathematics -- and of course it already has been dethroned in much of modern math -- it would still be of interest. Just as group theory is the study of all the mathematical structures that satisfy the axioms for groups; set theory is becoming the study of all the mathematical objects that satisfy the axioms for sets.

In other words now that set theory is "relieved of its ontological burden," as I saw it expressed once, we will now see a great flourishing of set theory, not its demise. Set theory is fascinating whether you regard it as foundational or not!

So if you deny set theory as a foundation, I will not disagree with you! Rather, I will point out that we can now study set theory for its own sake. Do you object?

The problem is that you can't use the property of a segment being a set in any way, because equivalence relations on sets (one-to-one functions) do not preserve any of the essential properties of the segment:
- the size of the set is not preserved,

Correct, the intervals [0,1] and [0,2] have the same cardinality but different measures. But so what? A quart of water and a quart of oil have the same volume but different weights. You can have two systems of measurement that are independent of each other. So what? When we care about cardinality we care about cardinality. When we care about measure we care about measure.
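The standard witness to that cardinality claim is the bijection x → 2x, which is easy to exhibit; a trivial sketch (spot-checked on a few sample points):

```python
# f(x) = 2x maps [0,1] onto [0,2] and has an inverse, so the two intervals
# are in one-to-one correspondence: same cardinality.
f = lambda x: 2.0 * x
f_inv = lambda y: y / 2.0

samples = [0.0, 0.25, 0.5, 0.75, 1.0]
assert all(f_inv(f(x)) == x for x in samples)   # invertible on the samples

# ...yet their lengths (Lebesgue measures) differ: the bijection is not
# measure-preserving, and nothing requires it to be.
length = lambda a, b: b - a
m1, m2 = length(0.0, 1.0), length(0.0, 2.0)
```

Cardinality and measure are simply two independent ways of sizing a set; a bijection preserves the first and is free to ignore the second.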

because additivity doesn't work for uncountable sets

Sure, that's the age-old problem of the continuum. How can a big pile of dimensionless points give rise to dimension? Uncountable additivity would lead to a contradiction because the measure of the union of singleton points would be 0 but the measure of the unit interval is 1. So we can't have uncountable additivity and nobody knows how points make up a line. You might as well complain to Euclid.
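Written out, the contradiction is just this short computation (a standard observation, not specific to BT):

```latex
% Decompose [0,1] into its uncountably many singleton points:
[0,1] \;=\; \bigcup_{x \in [0,1]} \{x\}
% If Lebesgue measure \mu were additive over this uncountable disjoint
% union, then, since each singleton has measure zero,
1 \;=\; \mu([0,1]) \;=\; \sum_{x \in [0,1]} \mu(\{x\}) \;=\; \sum_{x \in [0,1]} 0 \;=\; 0
% a contradiction. So \mu can only be countably additive.
```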

(and I believe there is no way to define a sum of an uncountable set of terms that gives a finite result,

It's a theorem that if you have an uncountable sum that converges to a finite number, all but countably many of the terms must be zero. Not that much of a surprise.
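A sketch of that proof (stated here for nonnegative terms $a_x \ge 0$ indexed by a set $I$; the general case follows by splitting positive and negative parts):

```latex
% Suppose \sum_{x \in I} a_x = M < \infty. For each n, the set
A_n \;=\; \{\, x \in I : a_x > \tfrac{1}{n} \,\}
% must be finite: any k of its elements already contribute more than
% k/n to the sum, so |A_n| \le nM. But then
\{\, x \in I : a_x \neq 0 \,\} \;=\; \bigcup_{n=1}^{\infty} A_n
% is a countable union of finite sets, hence countable.
```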

because the elements of an uncountable set are not "identifiable" with a computable function, and so you can't "use" them to compute a sum).

I don't think that has anything to do with it. It's not about the inability to compute a sum. It's a simple proof but it doesn't involve computability.

- the topology of the set is not preserved, since every set can be reordered to have a least element (https://en.wikipedia.org/wiki/Well-ordering_theorem).

The w.o. theorem does not come into play here at all. It's an interesting theorem though, worth a separate thread but a little off topic here. It's logically equivalent to the axiom of choice anyway so complaints about one are complaints about the other.

By the way you don't need the power of the well-ordering theorem to change the order type of a set via bijection. Just take the natural numbers 0, 1, 2, 3, ... and reorder them to 1, 2, 3, ..., 0. So it's the usual order but n '<' 0 for every nonzero n, where '<' is the new "funny" order relation. The natural numbers in their usual order have no greatest element; but in the funny order they do. We've changed the order type with a simple bijection. You don't argue that this simple example should be banished from decent mathematics, do you? The funny order is the ordinal number $\omega + 1$, or omega plus one. Turing knew all about ordinals; he wrote his Ph.D. thesis on ordinal logics. Even constructivists believe in ordinals.
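For what it's worth, the funny order is easy to play with in code; a small sketch (names mine) using a comparator that makes 0 the greatest element:

```python
from functools import cmp_to_key

def funny_cmp(m, n):
    """Compare naturals in the order 1 < 2 < 3 < ... < 0: the usual order,
    except that 0 is greater than every nonzero number. The order type of
    the naturals under this relation is omega + 1."""
    if m == n:
        return 0
    if m == 0:          # 0 is the new greatest element
        return 1
    if n == 0:
        return -1
    return -1 if m < n else 1

# The same six numbers, rearranged by the funny order: 0 now sorts last.
print(sorted(range(6), key=cmp_to_key(funny_cmp)))  # [1, 2, 3, 4, 5, 0]
```

No well-ordering theorem, no choice: just a relabeled comparison, and the order type changes.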

I would like to call something out here. In objecting to bijections, you are seemingly objecting to the operation of changing the order of an ordered collection. I wonder if you are going too far. Of course we can rearrange things. With infinite collections (don't call them sets if you like) we can change their order properties by rearranging them. Are you objecting to this as a concept or idea? What would be left in mathematics if we threw out rearranging things?

For this reason, the use of bijective functions with the "meaning" of moving a 3-dimensional object by breaking it into pieces and then reassembling it in another place is wrong.

It can't be right or wrong any more than the game of chess can be right or wrong. BT is undeniably a theorem of ZFC. That's true even if you utterly reject ZFC. The novel Moby Dick is fiction, yet it is "true" within the novel that Ahab is the captain of the Pequod and not the cabin boy. Banach-Tarski is a valid derivation in ZFC regardless of whether you like ZFC as your math foundation.

Suppose I stipulate that ZFC is banned as the foundation of math.

Ok. Then Banach-Tarski is still a valid theorem of ZFC. So what is your objection to that? B-T is still fascinating and strange and its proof is surprisingly simple. One could enjoy it on its own terms without "believing" in it, whatever that means. I don't think the game of chess is "true," only that it's interesting and fun. Likewise ZFC.

I mean: it's not contradictory, but it doesn't reflect the intuitive meaning that you give to the operation, since neither topology nor measure are invariant under this transformation.

AHA! Yes you are right. But these are RIGID MOTIONS. That is the point. We are not applying topological or continuous transformations in general, which can of course distort an object.

We are applying RIGID MOTIONS. Isometries:

* Rotation around a point;

* Translation in a direction;

* Reflection through a plane.

These are rigid motions. They preserve measure. That is where you get the counterintuitive power of the theorem.

Let me say this again. We are NOT applying general bijections or topological transformations. We are applying only rigid motions in three-space, which are perfectly intuitive and reasonable. If you move an object from one location to another, or rotate it around a point, or reflect it in a plane, you preserve its measure if it has a measure. If not, then there is nothing to preserve.

The "trick" of breaking the transformation in two parts, so that the second part is only the composition of four pieces doesn't change the fact that the first part of the transformation does not preserve measure (the measure of each of the four pieces is undefined).

Well yes. Distance-preserving transformations (isometries) have the following properties:

* They preserve the measure of measurable sets; and
* They map nonmeasurable sets to nonmeasurable sets.

So clearly the concept of nonmeasurable set is in play. And these are a consequence of the axiom of choice. HOWEVER! If you reject AC and thereby banish nonmeasurable sets, you lose all of modern probability theory, which takes with it a big chunk of modern physics, not to mention the social sciences. People didn't get in a room and say, "Let's foist nonmeasurable sets on the world because we're evil." They did it to get formal probability theory off the ground. You'll have to take up the foundations of modern probability theory with Andrey Kolmogorov.

[TO BE CONTINUED]

I hope you won't get so far ahead of me that I despair of keeping up!

To focus the discussion, are you objecting to point-sets and ZFC in general? Then you object to a lot more than just B-T. But if you are interested in B-T for its own sake as a theorem of ZFC, we can talk more about that.