This is the title of a discussion about self-reference

• 7.2k
Liar's paradoxes show us that certain assumptions we make lead to illogical conclusions. That's incredibly important, because what if you are making those assumptions in arguments that are not liar's paradoxes?

That's just it. The liar's paradox only shows up when we are talking about sentences that we would never use in normal speech. They are grammatically and semantically correct, but they don't make any sense. Or can you think of a counter-example?

So if the sentence is false, it's true, and if it's true, it's false. We definitely have a contradiction.

Agreed. It's the significance of the contradiction that we are questioning. That I am questioning.

We realize we've said nonsense by being too implicit. That's the lesson we can glean. Just because we can say or posit an idea in language, doesn't mean it makes sense. You've previously posted the question, "What is metaphysics?" Many times people use metaphysics to disguise liar's paradoxes. Terms that are ambiguous are great ways to hide nonsense terms and conclusions within them. If you can pick them out, you can ask for clarification.

I don't find this a very convincing argument. As you note, there are plenty of ways to do bad philosophy and logic without needing this paradox to show us another. The liar's paradox seems trivial and I don't see how it's connected with any substantive logical issue. Do you have examples of when "...people use metaphysics to disguise liar's paradoxes"?

Solving the liar's paradox can give us a tool to solve other nonsense points while keeping within the spirit of the discussion.

I guess my solution is realizing there isn't anything to solve. Yes, I know that's not what you meant. I don't see any solution but to ignore the paradox as an interesting and fun, but ultimately meaningless, pastime.

Liar's paradoxes are a great teaching tool about the ambiguity of language, but also about seeing through the intentionality of a person's argument.

I think this discussion, and all the other ones about this and similar subjects, are evidence that the subject obscures rather than clarifies language, mathematics, and logic.
• 39
I don't think this analogy applies. Seems like with the Russell paradox, we start with what appear to be consistent rules and get contradictory results.

The analogy is contrived, I agree. I've lost the circularity aspect for one. We start with consistent premises and get contradictory predictions (I feel like those are still not the right words, but it's all I've got at the moment). But they are still predictions. Someone has to go out and build the bridge. It ties into this:

what if you are making those assumptions in arguments that are not liar's paradoxes

where I agree with your response:

The liar's paradox only shows up when we are talking about sentences that we would never use in normal speech

Or to put it another way: there is no way to "accidentally" draw well-founded conclusions from a paradox, otherwise there would be a way to resolve it, meaning that it is not a paradox.

Is this the issue, that mathematicians and logicians don't believe math was invented by humans? That they think it is intrinsic to the world?

Yes, I think this could be the case, especially historically. They love the runes so much (talking about the "beauty" of an equation, for example), and why not? It seems like it could easily lead to the emotional conclusion: "maths is discovered". It's too beautiful to be our own work. And we laypeople are partly to blame. Imagine being told over and over: "Oh, you study maths? That's like magic to me." I think here of Tolkien and other fantasy settings where uttering a phrase in some ancient language unlocks an otherwise unattainable power. How fitting, that Spock had ears like an elf...

I'm losing track. Back on topic:

I don't get it.

You are right: there is only a danger if this paradox within set theory has an effect within the practical mathematics (which I suggested would necessarily always be detectable, but maybe not trivially apparent). I don't have an example to hand, although they might be found in e.g. differential geometry (foundation for General Relativity) or, where this all came to light, in computability theory (foundation for, well, computers).
• 39
There are certainly people who believe that the Russell paradox says something profound about math and logic.

I wonder if they have the same reaction to division by zero. After all it is just as "dangerous" (undefined vs contradictory, both impossible to execute), just more boring. If they don't then I can finally say I completely agree with your sentiment, that recursive paradoxes are basically useless, and are artificially raised above other mathematical impossibilities.
• 7.2k
You are right: there is only a danger if this paradox within set theory has an effect within the practical mathematics (which I suggested would necessarily always be detectable, but maybe not trivially apparent). I don't have an example to hand, although they might be found in e.g. differential geometry (foundation for General Relativity) or, where this all came to light, in computability theory (foundation for, well, computers).

I think you and I are mostly in agreement except for this paragraph. It seems pretty clear to me that the math paradoxes we're talking about are trivial. This is not my area of expertise, to put it mildly. I'd be willing to change my mind if there were people who disagree and provide an argument which is more than just arm-waving.
• 39

I see, you are looking for examples of subtle vicious circles. I might have one for you, although I'm not sure how "dangerous" it is in practice.

Define a vector. What is it?

It has magnitude and direction? Cool, so what's a direction?
• 13.7k
Some hidden self-referential puzzles:

1. There are no truths. If true then it is false. Ergo, there are truths! I wish this could be used as a starting point to tackle radical skepticism.

2. Nothing is certain. This can't be certain - sawing off the branch you're sitting on aka self-refuting statement. Still in skeptical territory.

3. Everything is relative. Is that itself relative? If yes, whatever the problem is with relative positions is also a problem for relativism.

4. Cotard's delusion (walking corpse syndrome). "I'm dead" says the patient but he has to be alive to say that!

5. This sentance has 3 erors. Two errors within the sentence and one error is the sentence itself (a counting error).

6. I'm a Cretan and all Cretans are liars.
• 7.2k
It has magnitude and direction? Cool, so what's a direction?

There's no contradiction there. You only need a good definition.

Also, you've brought up circularity several times and I haven't responded. As far as I can see, circularity is not the same thing as self-reference, although I can see they have things in common.
• 15.1k
So, my impression is that most self-reference is useless.
Have you revised this view?
• 7.2k
Have you revised this view?

No, but there really hasn't been much in the way of arguments supporting self-reference. Those that there have been have been lukewarm.
• 15.1k

As to the usefulness of self-reference, it was pointed out that it is pivotal to iteration. Any iterative procedure by definition calls itself. Now that's indispensable in coding, but it also leads to many a curiosity. So for example, this beast:

...is calculated using iterative procedures.

Douglas Hofstadter made use of iteration in his discussion of consciousness, a notion that has not dissipated over the years. Chaos theory in general relies on iteration.
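The feedback loop that chaos theory relies on can be sketched with a standard toy example, the logistic map, where each step feeds its own output back in as input. The parameter values below are my own illustrative choices, not anything from this thread:

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n): each iteration takes the
# previous output as its input, which is the feedback chaos theory relies on.
def logistic_orbit(r, x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# At r = 3.9 the orbit is chaotic: two nearby starting points soon
# produce wildly different trajectories.
a = logistic_orbit(3.9, 0.200, 50)
b = logistic_orbit(3.9, 0.201, 50)
```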

Also self-reference is not pivotal to semantic paradoxes. There is at least one paradox that does not make use of self-reference.
• 3.3k
So, my impression is that most self-reference is useless.

Self-referentiality points to our tendency to conflate the thing with our thoughts about said thing.
Also, more generally, it points to the possibility of saying one thing and meaning two things.
(Of course, this works because we take into consideration other statements that contextualize the one under scrutiny, but we do not verbalize those others.)
• 7.2k
As to the usefulness of self-reference, it was pointed out that it is pivotal to iteration. Any iterative procedure by definition calls itself. Now that's indispensable in coding, but it also leads to many a curiosity. So for example, this beast:

I thought about fractals. I've read that many features of the world involve fractal geometry. I don't know what to do with that.

Douglas Hofstadter made use of iteration in his discussion of consciousness, a notion that has not dissipated over the years. Chaos theory in general relies on iteration.

As for iteration, I thought about that too. One of the first things I thought of was a do loop in a computer algorithm. I don't think iteration and self-reference are the same thing. I'm not sure of that.
• 7.2k
Self-referentiality points to our tendency to conflate the thing with our thoughts about said thing.

Confusing "the moon" with the moon doesn't strike me as a self-reference issue.

Also, more generally, it points to the possibility of saying one thing and meaning two things.

I don't understand what you mean.
• 15.1k
I don't think iteration and self-reference are the same thing. I'm not sure of that.

No, they don't seem to be. Languages such as LISP depend on iteration using self-reference. I'm not sure if a do loop avoids, or just hides, that self-reference.
• 3.3k
Confusing "the moon" with the moon doesn't strike me as a self-reference issue.

It can, depending on one's epistemic theory. The problem is also known as "confusing the map for the territory".

Also, more generally, it points to the possibility of saying one thing and meaning two things.
— baker

I don't understand what you mean.

Saying "There's a draft" when you're in a room with another person and there is a draft, can mean 'There's a draft' and 'Close the window'.
• 3.5k
I saw a nice self-referencing puzzle the other day.
Question: If you pick an answer at random, what are the chances that the percentage written in the pick is equal to the chance of picking that percentage?
There were four answers from which you could pick at random: one said 50%, one said 25%, one said 60%, and another said 25%.
• 3.1k
BTW perturbative quantum field theory was recently put on pretty firm mathematical footing (see Perturbative Algebraic Quantum Field Theory by Kasia Rejzner). This uses Green's functions which are calculated recursively (i.e. G = f[G]).
• 7.2k
BTW perturbative quantum field theory was recently put on pretty firm mathematical footing (see Perturbative Algebraic Quantum Field Theory by Kasia Rejzner). This uses Green's functions which are calculated recursively (i.e. G = f[G]).

I looked up perturbative quantum field theory. I'll spend some more time with it.

Your comment made me think - Are all iterative processes self-referential? Maybe someone else brought this up previously. Is that the same kind of self-reference we're talking about?

Thanks.
• 7.2k
I saw a nice self-referencing puzzle the other day.
Question: If you pick an answer at random, what are the chances that the percentage written in the pick is equal to the chance of picking that percentage?
There were four answers from which you could pick at random: one said 50%, one said 25%, one said 60%, and another said 25%.

Percentage = 0. Right?
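The answer can be checked by brute force. A minimal sketch (the variable names here are my own):

```python
# The four printed percentages from the puzzle.
answers = [50, 25, 60, 25]

def chance_of_picking(value):
    # Picking uniformly at random among the four answers, the chance (in %)
    # of landing on an answer that states the given value.
    return 100 * answers.count(value) / len(answers)

# An answer is "correct" if the percentage it states equals the chance of
# picking an answer stating that percentage. 25 is picked with chance 50%,
# 50 with 25%, 60 with 25%: none match, so the probability is 0.
consistent = [a for a in answers if chance_of_picking(a) == a]
```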
• 7.2k
The problem is also known as "confusing the map for the territory".

For some reason, that made me think of a yo mama joke:

Yo mama is so fat, her reflection weighs 5 pounds.
• 3.1k
Are all iterative processes self-referential? Maybe someone else brought this up previously. Is that the same kind of self-reference we're talking about?

All recursive processes are, and calculation of the Green's function is recursive. But no, not all iterative ones.
• 7.2k
All recursive processes are, and calculation of the Green's function is recursive. But no, not all iterative ones.

I'm not sure I know the difference between "recursive" and "iterative."
• 3.1k

So something like G = g + g S G is recursive, because you can take the whole RHS and substitute it into the G on the right:

G = g + g S G
= g + g S ( g + g S G )
= ...

Whereas something like

du(t)/dt = u(t)

has to be solved iteratively, but isn't expandable recursively as above. Something like that may have exact solutions, whereas G has to be solved as a power series and terminated arbitrarily.
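A scalar toy version makes the expansion concrete. The numbers below are my own illustrative choices, with |gS| < 1 so the series converges; the real Green's function calculation is of course far richer:

```python
# Scalar toy of G = g + g*S*G: the unknown G appears on both sides, so we
# can expand recursively, G = g + gS(g + gS(g + ...)), a geometric series.
g, S = 0.5, 0.8  # toy values chosen so that |g*S| < 1

def G_series(terms):
    # Truncated recursive expansion: g * sum over n of (g*S)**n,
    # analogous to cutting off a perturbation series at some order.
    return sum(g * (g * S) ** n for n in range(terms))

# For this scalar toy there happens to be a closed form to compare against.
G_exact = g / (1 - g * S)
```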
• 7.2k

Thanks.
• 3.5k
Percentage = 0. Right?

Right.
• 15.1k
It seems iteration is any form of loop, but recursion involves a loop that calls itself.

If that is correct, self-reference occurs in recursion.
• 39
Good to see all the smart people have clarified iteration vs recursion. I probably did muddle them a bit in my earlier posts, sorry for that.

I think the interesting question that remains for me here, is if we can find non-trivial self-referential paradoxes, such that they could arise from seemingly well-founded frameworks. I'm no longer sure that it is even possible, and I think @T Clark was right to distrust my intuition about that.
• 7.2k
can find non-trivial self-referential paradoxes, such that they could arise from seemingly well-founded frameworks. I'm no longer sure that it is even possible,

Although I found the discussion helpful and interesting, it didn't resolve, for me at least, the answer to your question.
• 1
At the risk of my missing the point here, self-reference in a programming context is definitely handy. On the subjective side, several computing problems (e.g. path-finding, tree-traversal, searching) are more concisely and/or clearly written in a recursive fashion*. More concretely though, self-reference is essential for making radiation-hardened quines.

Regular quines are fixed points of a programming language: programs which, when executed, print their own source code without reflection (i.e. without needing to read their source code from the hard disk).

Radiation hardened quines are similar, but are also robust to the removal of one character. This is a useful property in environments where bits can be flipped/damaged on a regular basis (e.g. code on satellites - which are not shielded by the atmosphere); the program can repair itself.
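For concreteness, here is one minimal (non-hardened) Python quine; the string trick below is just one standard construction, not the only one:

```python
# A two-line quine (this comment is not part of it): running the two lines
# below prints exactly those two lines, with no file reading involved.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The `%r` inserts the string's own repr back into itself, which is what closes the self-referential loop.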

* Here's an example comparing an iterative vs. recursive implementation of the factorial function:
```python
# Iterative
def iterative_factorial(x):
    x_factorial = 1
    while x > 1:
        x_factorial *= x
        x -= 1
    return x_factorial
```

```python
# Recursive
def recursive_factorial(x):
    return 1 if x == 0 else x * recursive_factorial(x - 1)
```

Full disclosure, the iterative function could be written more compactly than it is above depending on the language - but using just regular language features, the recursive solution is more easily made concise.

As another example, writing a method to navigate a maze is naturally suited to recursion.
You could write this algorithm in an iterative form, but the recursive way below seems more intuitive to me.

```python
# This function prints the path to the exit, if there is one.
# (Pseudocode: get_position, AT_EXIT, CAN_GO_* etc. are left abstract.)
def navigate(maze, path_taken):

    current_position = get_position(maze, path_taken)
    # Positions visited before the current one, to avoid walking in circles.
    past_positions = [get_position(maze, path_taken[:n]) for n in range(len(path_taken))]
    if current_position not in past_positions:

        if AT_EXIT:
            print(path_taken)
            exit_program()

        else:
            if CAN_GO_STRAIGHT:
                navigate(maze, path_taken + [STRAIGHT])
            if CAN_GO_LEFT:
                navigate(maze, path_taken + [LEFT])
            if CAN_GO_RIGHT:
                navigate(maze, path_taken + [RIGHT])
```
