## Sleeping Beauty Problem

• 2.3k
This is a fallacy:

If Tails, P(Monday-Tails) = P(Tuesday-Tails)
Therefore, P(Monday-Heads) = P(Monday-Tails) = P(Tuesday-Tails)

The conclusion doesn't follow, because the equalities in the premises hold only under their conditions.

You can see this by observing that

P(Monday-Tails) = 1/4
P(Tuesday-Tails) = 1/4

also satisfies the conditional statements without satisfying the conclusion.
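This counterexample can be checked mechanically. A minimal sketch (the variable names are mine, not from the thread):

```python
# The double-halfer assignment: P(Mon-H) = 1/2, P(Mon-T) = P(Tue-T) = 1/4.
from fractions import Fraction as F

P = {"Mon-H": F(1, 2), "Mon-T": F(1, 4), "Tue-T": F(1, 4)}
P_tails = P["Mon-T"] + P["Tue-T"]  # P(Tails) = 1/2

# The conditional equality holds: given Tails, both awakenings are equally likely.
assert P["Mon-T"] / P_tails == P["Tue-T"] / P_tails  # both 1/2

# ...but the unconditional conclusion fails:
assert P["Mon-H"] != P["Mon-T"]
```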

The reason the Double-halfer splits the probability P(Tails) = 1/2 between P(Monday-Tails) and P(Tuesday-Tails) is that they treat them as exclusive outcomes, as if a second coin were tossed to determine whether Sleeping Beauty would be awakened on Monday or on Tuesday, but not on both.

Elsewhere, I made my argument more explicit. Let me rehearse it here by transposing it to the zoo variation:

I must first justify the inferences from:

1. P(Tiger|First) = P(Hippo|First)
2. P(Tiger|T-path) = P(Toucan|T-path)

to

1b. P(Tiger) = P(Hippo)
2b. P(Tiger) = P(Toucan)

The first inference is justified by the fact that placing more enclosures, with different animals in them, along either path doesn't alter the relative probabilities of the Tiger and Hippo outcomes, since I will be seeing those new animals (such as toucans) in addition to the tigers and hippos, not to the exclusion of them.

The second inference is justified by the fact that generating alternative timelines where I don't see either tigers or toucans doesn't alter their relative probabilities (but rather lowers both in equal proportion). The second inference is actually common ground between Thirders, Halfers and Double-halfers, which is presumably why Michael only challenged the first.

• 14.5k
Your claim that "The next enclosure is the toucan enclosure iff I first turned right at the fork (P = 1/2) and then passed the tiger enclosure," is an assumption that can't be put forward without begging the question against the Thirder. You need to substantiate, rather than presuppose, that when you're nearing an enclosure, there's a 1/2 chance the path you're on is a T-path.

My wording may have been imprecise.

Both of these are true (note the tense):

1. To reach the toucan enclosure I must first turn right at the fork and then pass the tiger enclosure
2. The probability that I will turn right at the fork is $1\over2$

When I wake and consider my credence that the next enclosure is the toucan enclosure, I consider what must have happened (or not happened) for the next enclosure to be the toucan enclosure. I know that I must have first turned right at the fork (A) and then passed the tiger enclosure (B).

P(A, B) = P(A) × P(B|A)

My claim is that the probability of having turned right at the fork is equal to the probability of turning right at the fork, i.e. $1\over2$.

Your claim is that the probability of having turned right at the fork is equal to the fraction of all encountered enclosures which are right-side enclosures, i.e. $2\over3$.

I don't think your claim makes any sense. The probability of the first event having happened isn't determined by what could happen after that first event happens. The probability of the first event having happened is determined only by the probability of that first event happening.1

Your conclusion only applies if I'm dropped into an enclosure at random, perhaps via parachute. This is your so-called "episodic perspective" (where it’s not the case that I turned right at the fork; it’s only the case that I’m on the right-side path). But given that this isn't what happens to me, I shouldn't reason this way.

1 Where no new relevant information is learned.
• 130
It's not. You say:

"If the coin landed on Heads, then an N-sided die is rolled, where N>=2. She is woken on day D1 - that is, D1 days after day 0 - where D1 is the result of this roll, and asked her credence. Then she is put back to sleep."
It's irrelevant (it refers to occurrences after she has answered). But I did intend to take that one out.

I did adjust the second one to match what you said, so your next point is not only just as irrelevant, it is incorrect when you say I "put her back to sleep" after the second interview:
In my example she isn't put back to sleep. The experiment just ends. The same with her second tails interview. So we have no idea how many days the experiment will last. It could be anywhere between 1 and N days.

But it is also incorrect where you claim you identified when she is sent home (you didn't), as is the implication that the experiment's length makes any difference whatsoever. But since these are the only differences you could find, and they make no difference, I can easily correct them to match the version you now say you want, which, by your standards, is different from what you said before:
1. She is put to sleep on day 0.
2. A coin is flipped.
3. If the coin landed on Heads, then an N-sided die is rolled, where N>=2. She is woken on day D1 - that is, D1 days after day 0 - where D1 is the result of this roll, and asked her credence. Then she is sent home.
4. If the coin landed Tails, then two N-sided dice are rolled. If they land on the same number, repeat the roll until they are different. She is woken on day D1 and day D2, and asked her credence. Then, on the first of these days, she is put back to sleep with amnesia. On the second, she is sent home.

This is the exact procedure you asked for, except: (A) It lasts between 1 and N>=2 days, not between 1 and 14 days. And (B) the selection of the two random "TAILS" days in that period is uniform, instead of weighted toward the earlier days.

On each day in the range 1 to N, the prior probability that she will be woken on that day AND the coin landed on Heads is 1/N. And the prior probability that she will be woken on that day AND the coin landed on Tails is 2/N.

We can proceed two different ways from here. The first is easier, but relies on the statement "the probabilities are the same regardless of which day it is, so we can treat the day D as a known constant."

• Pr(Heads&Day=D) = (1/2)*(1/N) = 1/(2N).
• This is the prior probability from above.
• All events of the form (Heads&Day=d), where d is not equal to D, are "ruled out" because it is day D.
• Pr(Tails&Day=D) = (1/2)*(2/N) = 1/N.
• All events of the form (Tails&Day=d), where d is not equal to D, are "ruled out" because it is day D.
• But I will reiterate that "ruled out" is not a term that is ever used in probability theory; it is one you made up. What is done is what follows next.
• The conditional probability of event A, given that it is day D, is found by this definition:
• Pr(A|C) = Pr(A&C)/Pr(C)
• Pr(Heads|Day=D) = (1/(2N))/(1/(2N)+1/N) = 1/3.
• In other words, what is done is to only use the prior probabilities of events that are consistent with the condition. In this case, with a specific day.

The more formal method is to use that calculation separately for each day d in the range [1,N], add them all up, and divide the result by N. It still gets 1/3.
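The arithmetic can be verified exactly for arbitrary N. A quick sketch, assuming the setup as stated (Heads picks one of N days uniformly; Tails picks two distinct days uniformly, so any given day is among them with probability 2/N):

```python
# Exact check of Pr(Heads | Day = d) for several values of N.
from fractions import Fraction as F

for N in [2, 5, 14]:
    # Prior that she is woken on a given day d AND the coin landed Heads:
    p_heads_d = F(1, 2) * F(1, N)
    # Prior that she is woken on day d AND the coin landed Tails:
    p_tails_d = F(1, 2) * F(2, N)
    p_heads_given_d = p_heads_d / (p_heads_d + p_tails_d)
    assert p_heads_given_d == F(1, 3)  # 1/3 regardless of N or d
```

Note that N = 2 is the smallest case, matching the two-coin version mentioned below.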

There is one unorthodox part in all this. The random variable D in the prior probabilities is different from the random variable d used by SB when she is awake. This is because D can take on two values in the overall experiment, but only one for SB when she is awake. So (D=1) and (D=2) are not mutually exclusive events in the prior, while (d=1) and (d=2) are mutually exclusive for SB when she is awake.

Finally, I'll point out that if N=2, this is the two-coin version you have ignored.
• 14.5k
On each day in the range 1 to N, the prior probability that she will be woken on that day AND the coin landed on Heads is 1/N.

Pr(Heads & Day = D) = 1/2 * 1/N.

That aside, using your above reasoning, in the normal problem the prior probability that she will be woken on Tuesday and the coin landed on Heads is 0, and the prior probability that she will be woken on Monday and the coin landed on Heads is 1/2.

So when she "rules out" Pr(Heads & Day = Tuesday) she's "ruling out" some Pr = 0, not some Pr = 1/4. "Ruling out" some Pr = 0 after waking does nothing because it was already "ruled out".
• 130
Pr(Heads & Day = D) = 1/2 * 1/N.
Fixed; I was in a hurry, and that didn't affect the answer. All the probabilities I gave were off by that factor of 1/2.

That aside, using your above reasoning, in the normal problem the prior probability that she will be woken on Tuesday and the coin landed on Heads is 0
No, the prior probability that she will be woken on Tuesday, and the coin landed Heads, is 1/4. The prior probability that she is awake and the coin landed Heads is 0. "Will be woken" and "is awake" are not the same events.

So when she "rules out" Pr(Heads & Day = Tuesday)
And for about the tenth time, "rules out" is not a valid expression. I only use it since you can't stop using it, and then only when I really mean a valid one. The conditional probability of event A, given event C, is defined to be:

Pr(A|C) = Pr(A&C)/Pr(C).

I gave the correct realization of this definition in my answer. The fact that it differs from yours, in "ruling out" certain probabilities, can only prove that one is wrong, not which. The fact that mine is valid, and yours is not, proves which.

It is valid, because it looks at what could have happened, not what could not. The prior probability of reaching an interview on any given day, after Tails is flipped, is twice that of Heads. Always. On any day in your experiment.
• 14.5k
No, the prior probability that she will be woken on Tuesday, and the coin landed Heads, is 1/4.

The rules of the experiment say that she won’t be woken on Tuesday if the coin lands heads. That means that the prior probability that she will be woken on Tuesday and the coin lands heads is 0.

Just look at how you calculated the probability of waking up in my experiment. It’s the same reasoning.
• 14.5k
If a die rolls a 6 then Sleeping Beauty is woken six times otherwise she is woken once. When woken what is her credence that the die rolled a 6?

Halfers have to say $1\over6$ and thirders have to say $6\over11$.

Before she is first put to sleep she is to bet on whether or not the die will roll a 6 – paid out at the end of the experiment – and each time she is woken she is allowed to change her bet.

If she bets according to her credence then both halfers and thirders have to say that before she is first put to sleep she will bet that the die will not roll a 6.

Thirders then have to say that when woken she will change her bet and bet that the die did roll a 6.

Are thirders willing to commit to their position and change their bet?
• 2.3k
Both of these are true (note the tense):

1. To reach the toucan enclosure I must first turn right at the fork and then pass the tiger enclosure
2. The probability that I will turn right at the fork is 1/2

When I wake and consider my credence that the next enclosure is the toucan enclosure, I consider what must have happened (or not happened) for the next enclosure to be the toucan enclosure. I know that I must have first turned right at the fork (A) and then passed the tiger enclosure (B).

P(A, B) = P(A) × P(B|A)

My claim is that the probability of having turned right at the fork is equal to the probability of turning right at the fork, i.e. 1/2.

Your claim is that the probability of having turned right at the fork is equal to the fraction of all encountered enclosures which are right-side enclosures, i.e. 2/3.

I don't think your claim makes any sense. The probability of the first event having happened isn't determined by what could happen after that first event happens. The probability of the first event having happened is determined only by the probability of that first event happening.

It actually often makes sense that the probability of an event having happened is determined by what has been found to happen (as a consequence of it) after that event happened. Take the simple example of a coin toss: the initial probability that the coin would land heads was 1/2. But suppose we have tossed the coin, and we now see that it landed heads. Our updated probability that it landed heads is now 1. In this case, our current situation (our observing the fact of the coin having landed heads) has completely determined the probability of the previous event, the coin landing heads.

This may seem trivial, but a similar principle is at play in our journey through the zoo, and it is also key to the Sleeping Beauty problem. The probability of finding oneself in a specific situation is determined not only by the initial probabilities of the different outcomes (or paths taken) but also by the subsequent encounters or observations that are stipulated to occur as a result of those outcomes. Importantly, it is precisely when the proportion of these subsequent observations (or encounters) depends on the earlier outcome that those observations warrant a Bayesian updating of our credence.

Let's look back at our zoo journey. Right now, as we approach an enclosure, what would have had to happen for us to be approaching a hippo, tiger, or toucan enclosure? For a hippo enclosure, we must have taken a new fork after passing either a hippo or a toucan enclosure. For a toucan enclosure, we must have walked past a tiger enclosure. Every fork is equally likely to lead to a hippo or a tiger enclosure directly, so we can outline the possible scenarios as follows:

Hippo -> Hippo (1/2) or Tiger (1/2)
Toucan -> Hippo (1/2) or Tiger (1/2)
Tiger -> Toucan (1)

Now, let's consider (for the sake of argument) that we are equally likely to have just passed any of the three types of enclosures. This leads to the following scenarios with their probabilities:

1/3 of the time we passed a Hippo -> we're now approaching a Hippo (1/6) or Tiger (1/6)
1/3 of the time we passed a Toucan -> we're now approaching a Hippo (1/6) or Tiger (1/6)
1/3 of the time we passed a Tiger -> we're now approaching a Toucan (1/3)

This shows that, even if we start with equal chances of having just passed any kind of enclosure, and even if every new fork is equally likely to lead directly to a H-path or T-path, the equal probabilities of approaching each kind of enclosure at each new step are maintained. This refutes your argument that we should have a 1/2 chance of approaching a hippo enclosure based on the equal chances of taking a H-path or T-path at the previous fork. It is precisely because every new fork was equally likely to lead directly to a H-path or T-path that, whenever we are approaching an enclosure, the probability that it is a hippo enclosure is 1/3.
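The table above can be read as a Markov chain over enclosure types, and the claim that the equal split is maintained amounts to the uniform distribution being stationary. A small check (enclosure names as in the post):

```python
# Transition table: P(next enclosure type | enclosure type just passed).
from fractions import Fraction as F

T = {
    "Hippo":  {"Hippo": F(1, 2), "Tiger": F(1, 2)},
    "Toucan": {"Hippo": F(1, 2), "Tiger": F(1, 2)},
    "Tiger":  {"Toucan": F(1, 1)},
}

dist = {a: F(1, 3) for a in T}  # equal chance of having just passed each kind
nxt = {a: F(0) for a in T}
for prev, p_prev in dist.items():
    for cur, p in T[prev].items():
        nxt[cur] += p_prev * p

assert nxt == dist  # the 1/3-1/3-1/3 split is maintained at every step
```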

Let us now consider a closer analogy to the Sleeping Beauty problem where there only is one fork (and only one coin toss).

It's Christmas morning and young Leonard Shelby is seated beside a gleaming Christmas tree, eyes wide with excitement. Two of his favorite aunts, Jane and Sue, have come bearing gifts. Each year, it's a tradition that one aunt gifts a singular, unique and expensive present, while the other aunt bestows upon Leonard two less expensive, yet equally fun presents. Who brings the unique gift is decided by a flip of a fair coin - heads for Jane and tails for Sue.

This year, all three gifts are packaged identically, and their order of presentation to Leonard is randomized. Due to the condition Leonard has had since his childhood (unlike what happens in the Memento movie), he forgets about the gifts as soon as he unwraps them. This makes every unwrapping experience as exciting as the first one. A small note accompanies each gift, indicating which aunt has brought it. Given the symmetries of this setup, before unwrapping any gift, Leonard's initial, or prior, probabilities are quite straightforward: there's a 1/2 chance the gift is from Jane, a 1/2 chance the coin landed heads, and a 1/3 chance that the gift is the unique one.

Now, let's consider a scenario where Leonard reads the attached card and learns that the gift is from Jane. What does this tell him about the coin toss? Well, if the coin landed heads, Jane would be the one to give the unique gift. But if it was tails, Jane would have two gifts to give. Knowing this, Leonard updates his belief about the coin toss. Now that he knows the gift is from Jane, the probability of the coin having landed heads, P(H|Jane), is reduced to 1/3.

This key conclusion is supported by Bayes' theorem, allowing Leonard to update his beliefs in light of the new evidence. Bayes' theorem here shows that the probability of heads, given that the gift is from Jane, is equal to the initial probability of Jane being the one to give the gift, given heads (1/3), times the prior probability of heads (1/2), divided by the prior probability of the gift being from Jane (1/2). This gives us a revised probability of 1/3 for heads, given that the gift is from Jane.

In short: P(H|Jane) = P(Jane|H)P(H)/P(Jane) = (1/3)(1/2)/(1/2) = 1/3.
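The same computation can be written out exactly, assuming the priors stated above:

```python
# Bayes' theorem for Leonard's update on learning the gift is from Jane.
from fractions import Fraction as F

p_H = F(1, 2)               # prior: fair coin
p_jane_given_H = F(1, 3)    # if Heads, Jane brought the one unique gift of three
p_jane_given_T = F(2, 3)    # if Tails, Jane brought two of the three gifts
p_jane = p_H * p_jane_given_H + (1 - p_H) * p_jane_given_T  # = 1/2

p_H_given_jane = p_jane_given_H * p_H / p_jane
assert p_H_given_jane == F(1, 3)
```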

Okay, so now imagine a similar scenario, but instead of gift-giving aunts and Christmas, it involves a scientific experiment Leonard is participating in. In this experiment, if a coin lands heads, Leonard is interviewed once in a room in the West Wing (let's call it Jane's Wing) and twice in a room in the East Wing (Sue's Wing). If the coin lands tails, the number of interviews in each wing is reversed. As in the Christmas scenario, Leonard is interviewed exactly three times, and his priors before any interview are: P(West Wing) = 1/2, P(H) = 1/2, and P(Unique Interview) = 1/3. (In more detail, his priors are: P(West-HU) = P(East-H1) = P(East-H2) = P(East-TU) = P(West-T1) = P(West-T2) = 1/6, where interviews ("awakenings") rather than gifts are labeled as unique ("U"), first ("1") or second ("2").)

But now, let's say that Leonard finds out he's being interviewed in the West Wing. This new information allows Leonard to update his belief about the coin toss, similar to what happened in the Christmas scenario. Using Bayes' theorem again, Leonard finds that P(H|West Wing) = 1/3. In other words, given that he's in the West Wing (where the structure of the experiment is identical to the original Sleeping Beauty problem), Leonard's credence in the coin having landed heads is 1/3.
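The same 1/3 falls out of enumerating the six equally likely interview types. A sketch (the wing/coin/slot tuples are my own shorthand for the setup described above, with heads giving one West interview and two East interviews, and tails the reverse):

```python
# Enumerate the six interview types and condition on "West Wing".
from fractions import Fraction as F

atoms = {  # (wing, coin, slot): prior probability of this interview type
    ("West", "H", "U"): F(1, 6),
    ("East", "H", "1"): F(1, 6),
    ("East", "H", "2"): F(1, 6),
    ("East", "T", "U"): F(1, 6),
    ("West", "T", "1"): F(1, 6),
    ("West", "T", "2"): F(1, 6),
}

p_west = sum(p for (wing, _, _), p in atoms.items() if wing == "West")
p_west_and_H = sum(p for (wing, coin, _), p in atoms.items()
                   if wing == "West" and coin == "H")

assert p_west_and_H / p_west == F(1, 3)  # P(H | West Wing) = 1/3
```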

And there you have it! We've demonstrated that when considering the structure of the experiment and the new information that Leonard possesses, the probabilities he assigns to the coin having landed heads or tails should be updated. The key insight here is that the information Leonard gains in the revised scenario—namely that he's being interviewed in the West Wing—doesn't actually provide him with any new facts that he wasn't already aware of in the original Sleeping Beauty problem. Instead, learning he's in the West Wing simply makes explicit to Leonard the fact that he is in a situation which perfectly mirrors the original Sleeping Beauty setup. This underlines the fact that in the original scenario, Sleeping Beauty already has all the information necessary to update her beliefs in line with the Thirder position.
• 14.5k
I appreciate the reply but I’ve run out of motivation and am going to end my involvement with this. I wouldn’t change my bet, so I’m a committed halfer. I don’t think any number of analogies are going to convince me to change my bet in that specific case.
• 2.3k
If a die rolls a 6 then Sleeping Beauty is woken six times otherwise she is woken once. When woken what is her credence that the die rolled a 6?

Halfers have to say 1/6

and thirders have to say 6/11.

Before she is first put to sleep she is to bet on whether or not the die will roll a 6 – paid out at the end of the experiment – and each time she is woken she is allowed to change her bet.

If she bets according to her credence then both halfers and thirders have to say that before she is first put to sleep she will bet that the die will not roll a 6.

Thirders then have to say that when woken she will change her bet and bet that the die did roll a 6.

Are thirders willing to commit to their position and change their bet?

Thirders wouldn't change their bet in this scenario. It's true that in this setup a bettor aware that P("6") = 6/11 (i.e., the proportion of "6-awakenings" to all awakenings equals 6/11) might be tempted to bet on the outcome "6". But they're also aware that a successful bet will be rewarded only once at the end of the experimental run, no matter how many times they've made that assessment.

Here's where the nuance comes in: over the course of six experimental runs, they would, on average, place a winning bet 6 times out of 11, if we imagine each individual bet as standing alone. However, due to the rules of this experiment, the six winning bets will only result in a single even-money payout. The lack of profitability in this scenario doesn't discredit the credence in the outcome. Instead, it reveals a peculiar aspect of this setup, where multiple winning bets are not individually rewarded. Therefore, the "bet" one ought to make doesn't straightforwardly track one's credence in the outcome of the die roll; rather, it must take into account the payout rules of this specific experimental setup.
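A quick simulation illustrates the gap between per-awakening frequency and per-run payout (the even-money stake is my assumption, for concreteness):

```python
# Monte Carlo: frequency of "die rolled 6" per awakening vs. payout per run.
import random

random.seed(0)
runs = 200_000
awakenings = six_awakenings = 0
payout_bet6 = payout_bet_not6 = 0

for _ in range(runs):
    rolled_6 = (random.randint(1, 6) == 6)
    n_wakes = 6 if rolled_6 else 1
    awakenings += n_wakes
    six_awakenings += n_wakes if rolled_6 else 0
    # One even-money settlement per run, regardless of awakening count:
    payout_bet6 += 1 if rolled_6 else -1
    payout_bet_not6 += -1 if rolled_6 else 1

print(six_awakenings / awakenings)  # close to 6/11: the thirder credence
print(payout_bet6 / runs)           # negative: betting "6" loses on average
print(payout_bet_not6 / runs)       # positive: betting "not 6" wins on average
```

So the 6/11 figure describes the awakenings, while the single settlement per run makes "not 6" the profitable bet, which is exactly the decoupling described above.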
• 2.1k
I had an idea for a rather dark version of this:

Beauty gets cloned with all her memories on a given flip, such that each Monday and Tuesday has a 50% chance of resulting in a new clone being created.

On Wednesday, all the Beauties wake up in an identical room with identical memories. The real Beauty is given breakfast and allowed to leave her room and enjoy the magical castle until Sunday, when the experiment will be rerun.

The clone Beauties are killed by the first person who comes to the door.

The real Beauty retains her memories of her free weekends.

Now, let's say the experiment has been running for a long time, three years. A lot of other Beauties have likely been cloned and killed by this point. But, if you're the real Beauty, and you consistently think you are the real Beauty when you wake up, then you have indeed been right about that fact for three years straight. So, when you wake up next time, how worried should you be that you'll be killed?

Based on (an admittedly simple) Bayesian take, Beauty should be increasingly confident that she is the real Beauty with each passing week. The whole idea is that repeated trials should move the dial in our probability estimates. And yet, this doesn't seem right, no?
• 2.3k
Beauty gets cloned with all her memories on a given flip, such that each Monday and Tuesday has a 50% chance of resulting in a new clone being created.

Are you suggesting that one new clone is always created, but the coin flip determines on which day? Furthermore, are Sleeping Beauty and her clones only awakened on Wednesdays? Regardless, I believe that your experiment could completely ignore the element of randomness introduced by the coin flip, and Sleeping Beauty's uncertainty on Wednesday would still exist, solely because she doesn't know if she's the original Beauty or a clone.

Based on (an admittedly simple) Bayesian take, Beauty should be increasingly confident that she is the real Beauty with each passing week. The whole idea is that repeated trials should move the dial in our probability estimates. And yet, this doesn't seem right, no?

It doesn't seem to me that Bayesian principles would allow Sleeping Beauty to grow increasingly confident, with each passing week, that she isn't a clone. That's because the design of the experiment ensures that she survives if she is the original Beauty. She can recall (or, if she is a clone, has inherited as a "fake" memory) that every previous week, upon awakening, she had from her viewpoint a 1/2 chance of being a clone. However, her past survivals weren't probabilistically independent events, given that the survival of the original is always ensured by the fact that she is the original. It remains true, however, that until she leaves the room and survives yet again, her belief that she is currently the original is reduced to 1/2.

On edit: After asking GPT-4 (always the sycophant) to check my English, I discussed another issue with it.

PN: Rethinking about it, I've imagined another form of faulty reasoning that might influence Sleeping Beauty in thinking that her continued survival is an increasingly unlikely event. Can you imagine what it is?

GPT-4: It might be related to the well-known but fallacious gambler's fallacy, which involves the mistaken belief that, if something happens more frequently than normal during a certain period, it will happen less frequently in the future, or that if something happens less frequently than normal during some period, it will happen more frequently in the future. [snip]

Is that the kind of reasoning you were considering?

PN: I was rather imagining that Sleeping Beauty wasn't thinking clearly about her own personal identity. She may have thought (or Count Timothy von Icarus on her behalf) that on each previous week there was a 1/2 chance that "she" (Sleeping Beauty) would survive, hence the idea of multiplying independent odds. But whenever she awakens on Wednesday, Sleeping Beauty's uncertainty can't coherently be expressed as "what are the odds that I, Sleeping Beauty, am the original" but rather as "what are the odds that 'I', whoever 'I' am, am the original Sleeping Beauty or a clone."

GPT-4: I see, that makes a lot of sense! In this case, Sleeping Beauty's potential misunderstanding arises from a confusion about personal identity and what it means for "her" to survive. You're right that the probability she should be considering is not "what are the odds that 'Sleeping Beauty' survives" but "what are the odds that 'I', this conscious observer that is experiencing this moment, am the original Sleeping Beauty versus a clone."

In other words, each time she awakens, there is always a 'new' Sleeping Beauty (from the perspective of that conscious observer) asking this question, and for each of them, the odds are always 1/2 of being the original. There is no compounding effect from week to week because each instance of questioning is, in a sense, a unique and independent event from the perspective of the one experiencing it. Each conscious observer is asking about their own identity in that moment, not the abstract identity of 'Sleeping Beauty' across all time.

This is a nuanced but important distinction, and getting it right is crucial for correctly understanding the Sleeping Beauty problem and its variations.

PN: "This is a nuanced but important distinction, and getting it right is crucial for correctly understanding the Sleeping Beauty problem and its variations."

I don't think it's crucial for understanding the original problem, where issues of personal identity (in the guise of questions about reidentification of the self over time) don't arise, but it may be relevant to some of the extensions regarding simulations, the multiverse, the anthropic principle, etc.

GPT-4: You're absolutely right. In the original Sleeping Beauty problem, questions of personal identity indeed don't play a significant role, as the problem focuses on updating beliefs based on self-locating uncertainty.

However, as you pointed out, these considerations become quite important when dealing with variations of the problem that involve cloning, simulations, or multiverse scenarios, where the 'identity' of the observer might be more difficult to determine or may split into multiple instances.