Simple proof there is no infinity

• 2.1k
can my monitor represent information about every possible object or can it not.

In one single screen, or in a series of screenshots, one for each possible object?

The more you examine it, the more your question seems watered down; it can't in good faith be answered with a simple yes or no without further clarification.
• 2.1k
That is not the answer, just a refusal to accept the premise of the question, and it is beside the point, since the bottom resolution can be fixed to an arbitrary size and precision. Say, human faces. My monitor can show every possible human face, at least down to the scale and precision of an electron microscope. Therefore, there is only a finite number of unique human faces. Yes?

Douglas Alan raised a valid point: you are talking about the visual, he is talking about the real. You asked whether anything can be depicted; yes, but not everything can be represented. Your initial premise was that infinity is thus denied. But it is not denied; only the depiction of infinity is denied.

And that has already been established. How can you make a picture of something infinite? You can't. The picture is by definition a limited, finite area.

Thus, your claim that you can't take a picture of infinity, or can't view it on the screen, is true, but it does not deny the fact that things can be infinite.

It would be analogous to a thought. You can't see a thought, you can't depict it, but you can't deny its existence.
• 161
That is not the answer, just refusal to accept the premise of the question

You made an argument with a false premise. Consequently, you have not proven your conclusion. This is Logic 101.

and is beside the point since the bottom resolution can be fixed to arbitrary size and precision. Say, human faces. My monitor can show every possible human face at least down to a scale and precision of an electron microscope. Therefore, there is only a finite number of unique human faces. Yes?

No. Faces can differ in details that are smaller than the resolution that can be captured with an electron microscope. Also, different faces, even if they look the same in a particular pair of photographs, can move very differently from each other, which can completely alter our perceptions of what those faces look like.

|>ouglas
• 599

Can any information be digitally encoded? The answer is yes, and to arbitrary precision. My monitor can indeed represent any and every possible piece of information.
• 599
Faces can differ in details that are smaller than the resolution that can be captured with an electron microscope. Also, different faces, even if they look the same in a particular pair of photographs, can move very differently from each other, which can completely alter our perceptions of what those faces look like.

And I am not arguing the point you are refuting. I want to set up a basic, unambiguous premise we can all agree on, and thus have a starting point.

So, for some arbitrary given resolution and some arbitrary given size of an object, such that it maximally occupies the whole screen (say, 800x600 resolution and passport-style photographs of human faces), there exists a finite number of possible human faces at that particular size and resolution. Yes?
• 2.1k
My monitor can indeed represent any and every possible information.

No. The monitor has 16 million colours (2^24) in 1280 times 720 pixels. Since each pixel picks its colour independently, that gives (2^24)^(1280*720) possible images, a number with roughly 6.7 million decimal digits. Enormous, but not infinity.

Your computer screen can represent any and every possible piece of information only up to that finite count of distinct images. That is the maximum number of uniquely different representations that a computer screen can provide.
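A quick sanity check of the combinatorics in Python: since each pixel chooses its colour independently, the exact count is colours raised to the power of the pixel count, which is finite but has millions of digits.

```python
import math

# Distinct images on a 1280 x 720 screen with 24-bit colour:
# each of the 921,600 pixels independently takes one of 2**24
# colours, so the total is (2**24) ** (1280 * 720).
pixels = 1280 * 720
bits_per_pixel = 24
total_bits = pixels * bits_per_pixel   # 22,118,400 bits per image

# The number itself is 2**22118400; printing it in full is
# pointless, so report how many decimal digits it has instead.
digits = math.floor(total_bits * math.log10(2)) + 1
print(f"distinct images = 2**{total_bits}, about {digits:,} decimal digits")
```

A number with millions of digits: unimaginably large, yet still finite, which is the only property the argument needs.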

And it proves nothing, actually, about whether infinitely many combinations are possible.

Please tell me why infinity is disproven if your computer screen can represent only a finite number of combinations. There has to be a logical link between the two; otherwise the proof is spoof.

Right now you have failed to provide that logical link.
• 2.1k
Can any information be digitally encoded? The answer is yes, and to arbitrary given precision.

Well, as soon as you reduce the precision to below 100%, you lose information. You retain and pass on SOME information, but not ALL information. That is the limitation of your computer screen.

It has only a finite number of distinct pieces of information it can display.

But a kg (about two lbs) of ANY gas has over 10^23 atoms in it. Each atom is moving in a different direction, at a different velocity, with a different spin, and each of those quantities is continuous.

How can you even imagine passing down precise information when every one of those atoms carries more detail than any finite set of screen states can record?

It's like taking one millionth of a millionth of a human body's weight and declaring that you have passed on the information about that human body perfectly.

But that just proves that your computer screen gives nothing like a true, or even approximate, representation of any complex object.

However, you STILL have the task at hand: to show us, your captive readers, how this by now infamously poor representation proves that infinity is impossible.
• 2.1k
Can any information be digitally encoded? The answer is yes, and to arbitrary given precision.

Actually, the answer is no. There are analogue quantities that can't be digitized exactly. 1/3, for instance, has no finite binary expansion, so a binary computer will lose some information if it tries to store it digit by digit.

In ternary computers 1/3 could be digitized, but 1/2 could not. You can't escape this problem with any fixed-base digital system.

So your observation and stance that any information can be digitized is totally wrong.

When do you stop being wrong, @Zelebg? Now, there is an irrefutable instance of infinity for you.
• 161
So, for some arbitrary given resolution and some arbitrary given size of an object, such that it maximally occupies the whole screen, say 800x600 resolution and passport style photographs of human faces - there exist a finite number of possible human faces for that particular specified size and resolution. Yes?

Sure, but so what? Nothing interesting results from this.

If you want to get to the interesting question, let's take Max Tegmark's argument that in our Hubble Sphere, there are only a finite number of possible states. (Our Hubble Sphere is the region of space that is causally connected to us. I.e., its radius is defined by the farthest distance from which light from the Big Bang has reached us.)

If the universe is flat, then it contains an infinite number of Hubble Spheres, and consequently, if you were able to travel faster than the speed of light and went far enough, you would eventually come to a Hubble Sphere that is in the same state as ours. This, then, is a way in which there might be parallel "universes" that are identical or very similar to ours.

This argument rests on the premise that all the physical features of the world are quantized, however. And this may or may not be the case. If it is the case, then Tegmark would seem to be correct. If it is not the case, then his argument fails because it is based on a false premise.

|>ouglas
• 161
In triary computers, yes, 1/3 could be digitized, but 1/2 could not. You can't escape this problem with any digital system.

I certainly don't agree with Zelebg, but this assertion of yours is wrong. Computers can and do, at times, represent rational numbers with perfect accuracy. This is done by representing them as a pair of integers, rather than in a "floating point" format.

|>ouglas
• 599
Sure, but so what? Nothing interesting results from this.

If you want to get to the interesting question, let's take Max Tegmark's argument that in our Hubble Sphere, there are only a finite number of possible states.

Now this is funny. Don't you see that is exactly what I'm saying? All I have to do is set my arbitrary resolution to the Planck scale and define the arbitrary given size as that of the universe to match Tegmark.
• 2.1k
I certainly don't agree with Zelebg, but this assertion of yours is wrong. Computers can and do represent rational numbers at times with perfect accuracy. This is done by representing them as a pair of integers, rather than in a "floating point" format.

I hear what you are saying. But the emphasis is on what you described as AT TIMES. That is, not always.

Once you enter the value 1/7 into a variable and use that value in calculations, you will immediately lose the perfect accuracy, because the calculations and storage fall back on a binary representation.
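Both halves of this dispute can be seen in a few lines of Python: binary floating point loses information the moment a value like 1/10 or 1/7 is stored, while the standard fractions module keeps rationals exact. A minimal sketch:

```python
from fractions import Fraction

# Binary floating point stores 0.1 and 0.2 only approximately,
# so the familiar identity fails:
print(0.1 + 0.2 == 0.3)                                      # False

# Exact rational arithmetic keeps full accuracy:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# 1/7 evaluated as a float has already lost information before
# any calculation happens; the true rational differs from it:
print(Fraction(1, 7) == Fraction(1/7))                       # False
```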
• 161
Now this is funny. Don't you see that is exactly what I'm saying? All I have to do is set my arbitrary resolution to planck scale and define the arbitrary given size as that of the universe to match Tegmark.

Yes, I have agreed as much. The problem is that Tegmark is making a contentious premise in his argument, and therefore, we cannot be sure of his conclusion. All we can say is that if his premises are right, then his conclusion seems to be right, but if his premises are wrong, then we can have no confidence in his conclusion.

|>ouglas
• 161
Once you enter into a variable the value of 1/7, and you use that variable's value in calculations, you will immediately lose the perfect accuracy, as the calculations storage go on binary code representation.

This is not true. A programming language that supports doing mathematical calculations with rational numbers will typically not force you to ever convert the rational number to a floating point number. The program can run from beginning to end using only rational numbers, and can consequently produce results with perfect precision and accuracy. (Assuming that the numbers being represented are accurately represented as rationals.)

|>ouglas

P.S. Here is an example of a library for Python that lets you do just this:

https://www.tutorialspoint.com/python-rational-numbers-fractions
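For instance, with the standard-library fractions module that the linked page describes, a calculation can run from start to finish in exact rational arithmetic. A small sketch:

```python
from fractions import Fraction

# A computation carried out entirely in rational arithmetic:
# no step ever converts to floating point, so no precision is lost.
x = Fraction(1, 7)
total = x + x + x + x + x + x + x   # seven sevenths
print(total)                        # 1

# A longer chain stays exact too: 1 + 1/2 + 1/3 + 1/4 + 1/5
s = sum(Fraction(1, n) for n in range(1, 6))
print(s)                            # 137/60
```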
• 2.1k
This is not true. A programming language that supports doing mathematical calculations with rational numbers will typically not force you to ever convert the rational number to a floating point number. The program can run from beginning to end using only rational numbers, and can consequently produce results with perfect precision and accuracy. (Assuming that the numbers being represented are accurately represented as rationals.)

Perfectly true. But the numbers will be represented that way only while you run a program written in such a language. If you run a different program, written in a more conventional language that does not have that feature, then you lose accuracy on rationals with infinitely repeating expansions.

This argument does not invalidate mine, where I pointed out your reservation, "AT TIMES".

Maybe in the future all programs will run that way. But not at present.
• 2.1k
Douglas, Where did ZelebG go? You see what you've done? We quibbled, and ZG took the opportunity of the moment that we weren't watching, and he ran away.
• 161
Perfectly true. But the numbers will be thus represented as long as a program is run written in that particular programming language. If you run a different program, written in a more conventional programming language, that does not have that feature programmed into its structure, then you lose accuracy of rationals with infinite repetitions.

I don't understand your argument. We should not be making any metaphysical conclusions based on how computers are typically used today.

As for the "conventionality" of programming languages, all of the most popular programming languages in use these days have libraries for doing math with rational numbers (and never having to convert them to floating point). These languages include Python, Java, C++, etc.

|>ouglas
• 161
Douglas, Where did ZelebG go? You see what you've done? We quibbled, and ZG took the opportunity of the moment that we weren't watching, and he ran away.

A fringe benefit for sure!

|>ouglas
• 2.1k
You opened my eyes, |>, to how new programming languages work. How does a program add 1/3 and 3/7 together?

Bring them to the same denominator? Like humans?

I've been out of programming for 30 years now. You are talking to a real, live dinosaur, |>. It's exciting, innit? Until I devour you in two bites.
• 2.1k
So, |>, do they have a table in C++, in Java, and in all the other languages, for ALL imaginable non-reducible fractions of integers? If you say "yes", then ZelebG got the better of you. (Because it would mean a finite table containing an infinite number of entries.) If you say "no", then what is the language to do? If one such integer fraction crops up, what do the programs do? I see no alternative but for the program to digitize the result before proceeding.
• 161
Bring them to the same denominator? Like humans?

Precisely so!
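A concrete check in Python that the machine does what a human would, rescaling 1/3 and 3/7 to the common denominator 21 before adding:

```python
from fractions import Fraction

a = Fraction(1, 3)   # equals 7/21 over the common denominator
b = Fraction(3, 7)   # equals 9/21 over the common denominator
print(a + b)         # 16/21, already reduced to lowest terms
```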

|>ouglas
• 161
So, |>, do they have a table in C++ , in Java, and in all other languages, for ALL imaginable non-reducible fractions of integers?

No, there are algorithms to determine the greatest common divisor and least common multiple of two Natural numbers.
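Those algorithms are short. Euclid's method finds the greatest common divisor by repeated remainders, and the least common multiple follows from it; this sketch spells them out (Python's own math.gcd does the same job):

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def lcm(a: int, b: int) -> int:
    """Least common multiple via the identity a*b == gcd(a,b) * lcm(a,b)."""
    return a // gcd(a, b) * b

# Reducing a fraction needs only the gcd, never a lookup table:
num, den = 42, 56
g = gcd(num, den)
print(f"{num}/{den} reduces to {num // g}/{den // g}")
```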

|>ouglas
• 599
Yes, I have agreed as much. The problem is that Tegmark is making a contentious premise in his argument, and therefore, we cannot be sure of his conclusion.

Right. So if we don't make any assumptions and instead choose an arbitrary resolution and size, we can draw conclusions relative to that specific resolution and size, such as: there is only a finite number of planets that look unique as seen from the altitude at which they maximally occupy the given screen area.

Then we go smaller: lakes and mountains, then plants and animals, and everything else. And while you can argue there can always be some difference the further you zoom in below the decimal point, once we pass the size of an atom those differences are insignificant compared to the more general point.
• 161

Ah, I see now! You are smarter than even the greatest minds of our generation. Forgive me for ever having doubted you.

|>ouglas
• 34
Well, no, the number of distinct digital photos of a given resolution is finite.

Okay, if there is a fixed computer we're using.

But if the photo is, say, K x L pixels, and each pixel contains N bits of information, then by increasing N (to represent hypothetically better and better computers) the number of conceivable K x L photos grows without bound; taken over all N, it is infinite.
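The growth is easy to tabulate; this sketch uses a deliberately tiny 2 x 2 "photo" so the numbers stay readable. For every fixed N the count (2**N)**(K*L) is finite, but it increases without bound as N grows:

```python
# Number of distinct K x L photos with N bits per pixel:
# each of the K*L pixels takes one of 2**N values independently.
K, L = 2, 2   # a tiny 2 x 2 "photo"

for N in range(1, 5):
    count = (2 ** N) ** (K * L)
    print(f"N={N}: {count} distinct photos")
```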
• 4k
"Simple proof there is no infinity." Meaning please of "is."
• 850
I think he means numbers don't exist and all objects up to and including the universe are forever finite
• 1.4k
I think he means numbers don't exist and all objects up to and including the universe are forever finite

I haven't followed the discussion, but all questions as to the existence of mathematical objects come down to what you regard as mathematical existence. Most people accept that there are "abstract objects," a phrase with its own SEP page, and that abstractions live somewhere other than in the physical world. Of course you then have to explain where exactly they live. It turns out to be the same as how you judge the existence of Captain Ahab. In fact, if you just regard math as pure fiction, no different from a character in a novel, you save yourself a lot of trouble, philosophically. This idea is called mathematical fictionalism.

So whether the number 3 exists and whether $\aleph_{47}$ exists are the same question. Either both do or neither does, because their existence is demonstrable in standard set theory and agreed to by the world's mathematicians. Everything after that is somebody's value judgment.

I have recently come to a definition of mathematical existence. A thing has mathematical existence when a preponderance of working professional mathematicians say it does. In other words the meaning of the word existence is in the way we use it.

There was once a lot of opposition to $\aleph_{47}$ but people got over it and today we teach it to the undergrads and explain it on Wiki pages. The number went from mathematical nonexistence to existence by virtue of people getting used to Cantor's brilliant revolutionary ideas. A revolution that brought a new class of things into mathematical existence: the rigorous theory of the transfinite ordinals and cardinals. Their eventual mainstream acceptance brought them into existence.

Of course @Metaphysician Undercover would point out (and already has) that I have said nothing at all, since whatever mathematical existence is, it cannot possibly be historically contingent. But it is. What we call numbers today, like negative numbers and complex numbers, were once regarded with horror and opposition by the mathematical establishments of their day.

So: If mathematicians say something has mathematical existence, then it does. There is surely no objective standard. It's a mistake to believe that there is.
• 850

I would add that Descartes, when he did math, wanted to see the whole series of proofs within one "vision of intuition". Deductive logic will always say there can be eternal contradictions, but if you find your vision of intuition and make it fluid, your mind will be like water too, and your Zen masters will be proud of you. This is what Hegel did. He said to find your infinity primarily in the infinity of the world. You've never lived in Plato's cave.
• 34
A thing has mathematical existence when a preponderance of working professional mathematicians say it does.

So are you saying that when Georg Cantor first defined infinite sets ca. 1871 and there was great resistance among the world's mathematicians, infinity didn't exist yet?