• Michael
    15.8k
    The answer to problem B is clearly 1/3 and I think we both will agree here. Problem A asks the same question that is asked of SB - on a given wake up event, she is asked in the moment about the probability of the coin showing heads. So the answer in problem A is also 1/3.
    PhilosophyRunner

    It’s not the same because she isn’t given a randomly selected waking after 52 weeks. She’s given either one waking or two, determined by a coin toss.

    The manner in which the experiment is conducted matters.
  • PhilosophyRunner
    302
    She is given one waking or two, determined by a coin toss, and this is repeated 52 times. When she wakes up she has no idea which of the wake up events this is - so from her point of view it is a randomly selected wake up event.
  • Srap Tasmaner
    5k
    I'll have more time to look at your response tonight. A couple quick notes:

    Can you explain why the payoff tables you've come up with are unsatisfactory to you?
    Pierre-Normand

    The fundamental problem is that your stake changes depending on which outcome you bet on. I know when I first looked at this five years ago, I ran into problems determining the true odds: you'd get an event that's 1:2 paying off like it was 1:3. Sleeping Beauty doesn't even out when you bias the coin.

    But I'll look at it again.

    The coin toss result determines the Tuesday awakening, while the Monday awakening is independent of it.
    Pierre-Normand

    I think the Halfer position is roughly that there are only two outcomes: a single interview conducted in one sitting, and a double interview spread out over two sittings. Those outcomes are equivalent to the two possible outcomes of the coin toss. (If you have an even-numbered population to work with, you can just do away with the coin altogether.)

    What is the Thirder equivalent? If there are three outcomes, they cannot be equivalent to the two outcomes of the coin toss.

    To get back to two, you have to add in [heads & Tuesday], and then split by sequence, like Halfers -- only now it's heads = awake-asleep, tails = awake-awake -- or by day, as you do here, heads = asleep, tails = awake, for Tuesday only.

    That sounds plausible, but it's not what we want. Heads is not equivalent to asleep because you're awakened on Monday. More importantly, awake is not equivalent to tails.

    We don't even have to get into issues about days and indexicals to have problems. (I like "first interview" and "second interview", but it doesn't matter here.)
  • PhilosophyRunner
    302
    It’s not the same because she isn’t given a randomly selected waking after 52 weeks. She’s given either one waking or two, determined by a coin toss.

    The manner in which the experiment is conducted matters.
    Michael

    Also, take this further evolution of Problem B that I outlined earlier. The SB experiment is done every week for a year. Each week she is woken once if the coin lands heads, twice if it lands tails.

    But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post.
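
    Here is a quick Monte Carlo sketch of that counting (the Python and the variable names are only illustrative; it assumes the yearly setup exactly as described above):

    import random

    trials = 100_000
    heads_at_selected_waking = 0

    for _ in range(trials):
        # One year of weekly runs: heads gives one waking that week, tails gives two.
        wakings = []
        for _ in range(52):
            coin = random.choice("HT")
            wakings += [coin] * (1 if coin == "H" else 2)
        # A single waking is chosen at random to receive the question.
        if random.choice(wakings) == "H":
            heads_at_selected_waking += 1

    print(heads_at_selected_waking / trials)  # tends toward roughly 1/3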
  • Pierre-Normand
    2.4k
    I think the Halfer position is roughly that there are only two outcomes: a single interview conducted in one sitting, and a double interview spread out over two sittings. Those outcomes are equivalent to the two possible outcomes of the coin toss. (If you have an even-numbered population to work with, you can just do away with the coin altogether.)

    What is the Thirder equivalent? If there are three outcomes, they cannot be equivalent to the two outcomes of the coin toss.
    Srap Tasmaner

    If I understand correctly, you seem to be asking how the Thirders might be able to infer the probabilities of the three fine-grained types of awakening outcomes from the (prior) probabilities of the two coin toss outcomes?

    Indeed, we can split an even-numbered population into two equal sub-populations Pop-1 (Beauties who awaken once) and Pop-2 (Beauties who awaken twice). This allows us to focus solely on Sleeping Beauty's personal credences upon awakening, concerning whether she's part of Pop-1 or Pop-2.

    In my view, the coin in the original problem provides a convenient source of stochasticity. Without it, @sime would have been justified in worrying about the explanation for Sleeping Beauty's priors. Consider this: Suppose I present you with a die that could be loaded to always land on 'six'. If it's not loaded, then it's fair. You throw it once and it lands on 'six'. What is your credence that the die is loaded? Without an objective grounding for your priors, the answer is undefined. However, if I tell you that there are two identical dice - one loaded and the other fair - and a fair coin toss determines which one you'll use, you can now update your credence that the die is loaded from 1/2 to 6/7, given that over time, six out of seven 'sixes' will be from a loaded die.
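
    As a sanity check on those numbers, here is a minimal simulation sketch (illustrative Python only; it assumes the two-dice setup exactly as just described):

    import random

    trials = 200_000
    sixes = 0
    loaded_and_six = 0

    for _ in range(trials):
        loaded = random.random() < 0.5                    # a fair coin toss picks the loaded or the fair die
        rolled_six = True if loaded else random.randint(1, 6) == 6
        if rolled_six:
            sixes += 1
            loaded_and_six += loaded

    print(loaded_and_six / sixes)  # tends toward 6/7, about 0.857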

    Let us therefore assume, as you suggested, that Sleeping Beauty's priors are P(Pop-1) = P(Pop-2) = 1/2, without needing to delve into the specific stochastic process that placed her in either Pop-1 or Pop-2.

    The key disagreement between Halfers and Thirders is whether Sleeping Beauty can update her credence upon awakening that she's part of Pop-1 from 1/2 to 1/3. Halfers argue that since Sleeping Beauty knows she'll be awakened at least once, she can't distinguish whether her current awakening is the only one (Pop-1) or one of two (Pop-2). Therefore, these two possibilities should be equally probable from her perspective.

    This argument seems to misuse the Principle of Indifference. Consider the die example: When the die lands on 'six', you can't distinguish whether this outcome is from the fair die or the loaded one. However, you can still update your credence P('loaded') from 1/2 to 6/7. The die landing on 'six' does convey information in this context.

    Halfers, therefore, need a stronger argument to support their 'no new information' claim. Alternatively, they could challenge Thirders to explain what new information Sleeping Beauty receives that allows her to rationally update her credence P(Pop-1) from 1/2 to 1/3.

    I believe this can be explained step by step to make it more intuitive:

    --First step--

    Imagine that upon being divided into populations Pop-1 and Pop-2, the participants in each population are awakened only once the following day in their respective waking rooms. In half of the Pop-1 rooms, a single red tulip is placed on the nightstand, hidden by a cardboard cylinder. In the other half, a white tulip is used instead. In all Pop-2 rooms, a red tulip is utilized. As a participant in this experiment, Sleeping Beauty is informed of these specific details. Upon waking, she is asked about her credence in being part of Pop-1, and what her credence is that the tulip next to her is white. In this context, her credences should be P(Pop-1) = 1/2 and P(white) = 1/4.

    The cardboard cylinder is then removed, revealing a red tulip. What should Sleeping Beauty's credences be updated to now? They should be P(white) = 0 and P(Pop-1) = 1/3, right? This example appears to use Bayesian reasoning in a straightforward manner: Over time, 1/3 of participants who wake up in a room with a red tulip are part of Pop-1.

    (As for the strict proof: P(Pop-1|red) = P(red|Pop-1)*P(Pop-1)/P(red) = (1/2)*(1/2)/(3/4)=1/3)
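
    For readers who prefer to see the frequencies, here is a small illustrative simulation of this first step (assuming the tulip assignment described above; the code is only a sketch of the counting):

    import random

    trials = 200_000
    reds = 0
    pop1_and_red = 0

    for _ in range(trials):
        pop1 = random.random() < 0.5                        # half the participants are in Pop-1
        red = True if not pop1 else random.random() < 0.5   # all Pop-2 rooms red; half of Pop-1 rooms red
        if red:
            reds += 1
            pop1_and_red += pop1

    print(pop1_and_red / reds)  # tends toward 1/3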

    --Second step--

    Let's change the previous scenario so that all participants experience two awakenings, one on Monday and another on Tuesday. Participants in Pop-1 awaken once with a white tulip and once with a red tulip, while participants in Pop-2 awaken twice with a red tulip. We also introduce an amnesia-inducing drug to ensure that the participants don't remember the outcome of the Monday awakening when they are awakened again on Tuesday.

    In this new context, whenever Sleeping Beauty awakens, what should her credences P(Pop-1) and P(white) be? Arguably, most people, whether they're Halfers, Thirders or double-Halfers, would agree that these should be P(Pop-1) = 1/2 and P(white) = 1/4.

    The cardboard cylinder is then removed and, as it happens, a red tulip is revealed. What should Sleeping Beauty's credences be updated to now? They should again be P(white) = 0 and P(Pop-1) = 1/3, right?

    Perhaps the complexity of applying Bayesian reasoning in this context stems from the fact that participants in Pop-1 and Pop-2 who awaken on Monday aren't a distinct group from those who awaken on Tuesday. Indeed, the same individuals are awakened twice. To accommodate this factor, we can adjust Sleeping Beauty's Bayesian reasoning in the following manner:

    Every time a participant wakes up, the probability that they are in a room with a white tulip is 1/4. If I am part of Pop-1, the probability that my current awakening takes place in a room with a white tulip is 1/2, and it is zero if I am part of Pop-2. As such, my prior probabilities are P(white) = 1/4 and P(Pop-1) = 1/2, while P(red|Pop-1) = 1/2.

    Consequently, once the tulip's color is revealed to be red, I can make the same inference as before: P(Pop-1|red) = P(red|Pop-1)P(Pop-1)/P(red) = (1/2)(1/2)/(3/4)=1/3.

    In an intuitive sense, this means that, since the majority of awakened participants who find themselves next to red tulips are there because they belong to Pop-2, witnessing a red tulip upon awakening boosts their credence in being part of Pop-2. Although seeing a red tulip doesn't enable them to distinguish cases where the current awakening is the only one where they'll see such a tulip (as in Pop-1) from cases where it is one of two such instances (as in Pop-2), it still provides information and counts as evidence that they are part of Pop-2. The reasoning behind this is analogous to why a die landing on 'six' constitutes evidence that the die is biased even though a fair die can also land on 'six'.
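
    The same counting can be sketched per awakening rather than per participant (illustrative Python only, assuming the two-awakening setup of this second step):

    import random

    trials = 100_000
    red_awakenings = 0
    pop1_red_awakenings = 0

    for _ in range(trials):
        pop1 = random.random() < 0.5
        # Pop-1: one white and one red awakening; Pop-2: two red awakenings.
        tulips = ["white", "red"] if pop1 else ["red", "red"]
        for tulip in tulips:
            if tulip == "red":
                red_awakenings += 1
                pop1_red_awakenings += pop1

    print(pop1_red_awakenings / red_awakenings)  # tends toward 1/3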

    --Third step--

    In this new variation, Sleeping Beauties themselves play the role of tulips. The populations Pop-1 and Pop-2 are made up of participants, let's call them Sleeping Uglies*, who each share a room with a Sleeping Beauty. The Sleeping Uglies will be administered the same amnesia-inducing drugs on Sunday and Monday night, but they will always be awakened both on Monday and Tuesday, ten minutes prior to the Sleeping Beauty's potential awakenings.

    Whenever I, as a Sleeping Ugly, awaken, the probability that I am in a room with a 'sleeping' (i.e., not scheduled to be awakened) Sleeping Beauty is 1/4. The probability that I now have been awakened in a room with a 'sleeping' Sleeping Beauty is 1/2 if I am part of Pop-1 and zero if I am part of Pop-2. Therefore, my priors are P('sleeping') = 1/4 and P(Pop-1) = 1/2, while P('awake'|Pop-1) = 1/2.

    Therefore, after Sleeping Beauty is awakened in front of me, I can infer, as before, that P(Pop-1|'awake') = P('awake'|Pop-1)*P(Pop-1)/P('awake') = (1/2 * 1/2)/(3/4) = 1/3, meaning the probability that I am part of Pop-1 after Sleeping Beauty is awakened is 1/3.

    *My use of the Sleeping Uglies as participants in the experiment, and of Sleeping Beauties' awakening episodes as evidence for the Uglies, is inspired by, but reverses, the example proposed by Robert Stalnaker in his paper Another Attempt to Put Sleeping Beauty to Rest.

    --Fourth and last step--

    We can now dispense with the Sleeping Uglies altogether, since their epistemic situations, and the information that they make use of (namely, that the Sleeping Beauty in their room awakens), are identical to those of the Sleeping Beauties themselves. The only difference is that there is a ten-minute interval between the moment when the Sleeping Uglies awaken and can make use of their evidence to update their credences, while the Sleeping Beauties can update their credences immediately upon awakening. Even this small difference can be wiped out by introducing a ten-minute delay between the moment when the Sleeping Beauties are awakened (in all cases) and the moment when the interviewer shows up, with the proviso that when no interview is scheduled, the Beauties are put back to sleep rather than being interviewed, in which case their credence P(Pop-1) momentarily drops to zero.
  • Pierre-Normand
    2.4k
    But she is only asked a question once in the whole year. One of the wakings is randomly selected to be the one where she is asked the question. On this randomly selected waking, she is asked the question "what is the probability that this randomly selected waking shows a heads." The answer is 1/3, as per Problem A in my previous post.
    PhilosophyRunner

    A Halfer might argue that Sleeping Beauty being posed such a question, along with the provided context of the question's delivery (i.e., through a random selection among all awakenings), indeed provides the grounds for Sleeping Beauty to update her initial credence P(H) from 1/2 to 1/3. However, they might also assert that this type of questioning doesn't exist in the original setup. Therefore, they might insist that, in the absence of such randomly assigned questioning, Sleeping Beauty should maintain her credence of 1/2.

    A Thirder might counter-argue by saying: The crucial element that turns the questioning into information, enabling Sleeping Beauty to update her credence, is the fact that it results from randomly selecting an awakening from all possible awakenings. Given that there are twice as many awakenings under the 'tails' condition as under 'heads,' a random selection is twice as likely to yield a 'tails' awakening. We must recognize that Sleeping Beauty doesn't necessarily require external assistance to isolate her current awakening in a manner that is both random and statistically independent of the coin toss result.

    Imagine an alternative method where an external agent, let's call her Sue, randomly selects awakenings from the complete set. Sue could examine a list of all scheduled awakenings, roll a die for each, and mark the awakening as evidence-worthy if the die lands on 'six'. The selected participants would then be equipped to update their credence P(H) to 1/3 after being presented with the evidence of their selection by Sue.

    Now, it doesn't matter who performs the die-rolling selection; what's important is that any awakening marked as evidence-worthy is selected randomly by a method independent of the coin toss outcome. The participants themselves, not Sue, could roll the die and, if it lands on 'six,' consider their current awakening to have been randomly selected (as it would indeed have been!) from the entire set of awakenings. This random selection allows Sleeping Beauty to single out the fact of her current awakening as evidence for updating her credence P(H) to 1/3.

    If the die doesn't land on 'six,' has Sleeping Beauty squandered an opportunity to identify her current awakening as a valuable piece of evidence? Actually, if the convention had been reversed to select awakenings by a die not landing on 'six', the chosen sample would still statistically represent all scheduled awakenings (with 1/3 of those being 'tails' awakenings). The Halfer's error is assuming that the mere occurrence of an awakening doesn't provide sufficient evidence for Sleeping Beauty. The participants' selection method, which involves identifying awakenings with the indexical expression "I am currently experiencing this awakening," is actually the most representative of all methods as it encompasses the entire population of awakenings!
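
    A small simulation sketch of the die-rolling selection (illustrative Python; it assumes one awakening for heads, two for tails, and a 'six' marking the awakening as evidence-worthy) shows the selected sample behaving as claimed:

    import random

    trials = 300_000
    selected = 0
    heads_among_selected = 0

    for _ in range(trials):
        coin = random.choice("HT")
        n_awakenings = 1 if coin == "H" else 2
        for _ in range(n_awakenings):
            if random.randint(1, 6) == 6:        # this awakening is marked as evidence-worthy
                selected += 1
                heads_among_selected += (coin == "H")

    print(heads_among_selected / selected)  # tends toward 1/3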
  • JeffJo
    130

    I'm not going to wade through 14 pages. The answer is 1/3, and it is easy to prove. What is hard is getting those who don't want that to be the answer to accept it.

    First, what most think is the problem statement was actually the method proposed by Adam Elga (in his 2000 paper "Self-locating belief and the Sleeping Beauty problem") to implement the experiment. The correct problem statement was:
    Some researchers are going to put you to sleep. During the [experiment], they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you [are awake], to what degree ought you believe that the outcome of the coin toss is Heads?

    The two changes I made do not affect anything in the problem, but they do show that Elga was already thinking of how he would implement his thirder solution. The first change is where he brought up "two days," and confused the continuation of the experiment with continuation of sleep. The second suggested that your (or SB's) information might change while you are (she is) awake, which is how Elga solved the problem piecewise.

    But what Elga added made the circumstances of the two (if there are to be two) wakings different. And it is this difference that is the root of the controversy that has occurred ever since.

    Patient: Doctor, Doctor, it hurts if I do this.
    Doctor: Then don't do that.

    But the difference Elga introduced was unnecessary. So don't do it; do this instead:
    1. Tell SB all the details listed here.
    2. Put SB to sleep.
    3. Flip two coins. Call them C1 and C2.
    4. Procedure start:
    5. If both coins are showing Heads, skip to Procedure End.
    6. Wake SB.
    7. Ask SB "to what degree do you believe that coin C1 is currently showing Heads?"
    8. After she answers, put her back to sleep with amnesia.
    9. Procedure End.
    10. Turn coin C2 over, to show its opposite side.
    11. Repeat the procedure.
    12. Wake SB to end the experiment.

    When SB is awake, she knows that she is in the middle of the procedure listed in steps 4 thru 9. Regardless of which pass thru these steps it is, she knows that in step 5 of this pass, there were four equally-likely combinations for what (C1,C2) were showing: {(H,H),(H,T),(T,H),(T,T)}. This is the "prior" sample space.

    She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3.
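
    For anyone who wants to check the arithmetic, here is a minimal simulation sketch of the two-coin procedure above (illustrative Python only):

    import random

    trials = 200_000
    awakenings = 0
    c1_heads_awakenings = 0

    for _ in range(trials):
        c1 = random.choice("HT")
        c2 = random.choice("HT")
        for _ in range(2):                        # the procedure (steps 4 thru 9) is run twice
            if not (c1 == "H" and c2 == "H"):     # SB is awakened unless both coins show Heads
                awakenings += 1
                c1_heads_awakenings += (c1 == "H")
            c2 = "T" if c2 == "H" else "H"        # step 10: turn coin C2 over before the repeat

    print(c1_heads_awakenings / awakenings)  # tends toward 1/3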

    The reason for the controversy, is that the difference Elga introduced between the first and (potential) second wakings obfuscates the prior sample space. This implementation has no such problem.

    But I'm positive that halfers will try to invent one. I've seen it happen too many times to think otherwise.
  • jgill
    3.9k
    They ask her one question after each time she awakens, however: What is the probability that the coin shows heads?

    This statement isolates SB from the coin toss. "What is the probability that the coin shows heads?" 1/2.
  • Srap Tasmaner
    5k
    Consider the die example: When the die lands on 'six', you can't distinguish whether this outcome is from the fair die or the loaded one.
    Pierre-Normand

    But there are two sources of randomness in this example, the die and the coin.

    Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one.

    The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.

    I'm undecided.
  • Pierre-Normand
    2.4k
    But there are two sources of randomness in this example, the die and the coin.

    Similarly for all analyses that treat SB's situation as describable with two coin flips. We only have one.
    Srap Tasmaner

    Indeed, in my examples (labelled "First step" through "Fourth step"), there's only a single source of randomness, which consists in the random assignment of individuals to either population Pop-1 or Pop-2 (awakened once or twice with white or red tulips).

    Halfers contend that Sleeping Beauty's awakening cannot serve as evidence indicating she is more likely to be part of Pop-2, as there's nothing that allows her to distinguish an awakening in Pop-1 from one in Pop-2. Yet the same reasoning would apply to the inability to distinguish a 'six' roll from a loaded die versus a 'six' roll from a fair die, and the occurrence of a 'six' nevertheless increases the likelihood that the die is loaded.

    You're correct in stating that there's only one source of randomness in Sleeping Beauty's case, unlike the dice scenario. However, the two situations share a strong resemblance. The reason a 'six' outcome increases the probability that a die is loaded is because loaded dice generate more instances of players confronting a 'six'. Similarly, being part of Pop-2 in Sleeping Beauty's setup leads to more instances of self-aware awakenings. This is simply an analogy - for a more compelling argument, refer back to my cases 1 through 4 in the post you quoted.

    The halfer position comes back to individuation, as you suggested some time ago. Roughly, the claim is that "this interview" (or "this tails interview" etc) is not a proper result of the coin toss, and has no probability. What SB ought to be asking herself is "Is this my only interview or one of two?" The chances for each of those are by definition 1 in 2.

    Indeed, the choice between a Halfer (P(Pop-1) = 1/2) and a Thirder (P(Pop-1) = 1/3) updated credence is a matter of individuation. While I focused on the individuation of events, you had seemed to suggest that different (more or less extended) conceptions of self might lead people towards one stance or another. This struck me as insightful, although personal psychological inclinations don't provide valid justifications. Currently, I don't identify as a Thirder or a Halfer. Rather, I believe that Thirders and Halfers are talking past each other because they each focus solely on one of two possible types of outcome distributions that could be considered in Sleeping Beauty's credence update. My previous "pragmatic" examples were aimed at highlighting this duality (not a dichotomy!). When Sleeping Beauty wakes and considers her situation, is she weighing the opportunities to either evade or confirm her current situation (facing lions or crocodiles)? In this case, she should reason as a Thirder. Or is she weighing the opportunity to end, or validate, the nature of her ongoing predicament (and be rescued by Aunt Betsy) at the end of her current series of awakenings? If so, she should reason as a Halfer. The root question of what her credence should be upon awakening is inherently ambiguous, and the thought experiment is tailored to create this ambiguity.
  • Pierre-Normand
    2.4k
    She also knows that the fact that she is awake eliminates (H,H) as a possibility. This is a classic example of "new information" that allows her to update the probabilities. With three (still equally likely) possibilities left, each has a posterior probability of 1/3. Since in only one is coin C1 currently showing Heads, the answer is 1/3.
    JeffJo

    Your proposed scenario certainly provides an interesting variation, but it doesn't quite correspond to the structure of the situation typically discussed in literature, the one that seems to give rise to a paradox.

    In your scenario, there are four potential outcomes from the experiment, each of which is equally probable:

    HH (end) --> Never awakened
    HT HH --> Awakened once
    TH TT --> Awakened twice
    TT TH --> Awakened twice

    When Sleeping Beauty awakens, her credences corresponding to these four outcomes shift from {1/4, 1/4, 1/4, 1/4} to {0, 1/3, 1/3, 1/3}.

    However, in the scenario most frequently discussed, entire experimental runs in which Sleeping Beauty is awakened once are just as common as those where she is awakened twice. Furthermore, since there isn't any experimental run where Sleeping Beauty is not awakened at all, it's debatable whether her experiencing an awakening provides new information that would cause her to adjust her initial probabilities (as Halfers are inclined to argue).
  • Michael
    15.8k
    I'll throw in one last consideration. I posted a variation of the experiment here.

    There are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

    If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

    If Michael is woken then what is his credence that the coin landed heads?

    Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.

    And given that if woken he is 1 iff the coin landed heads, he ought to have a credence of P(Heads) = 1/3.

    Do we accept this?

    If so then the question is whether or not Sleeping Beauty's credence in the original experiment should be greater than Michael's credence in this experiment. I think it should.
  • Pierre-Normand
    2.4k
    I'll throw in one last consideration. I posted a variation of the experiment here.

    There are three beauties; Michael, Jane, and Jill. They are put to sleep and assigned a random number from {1, 2, 3}.

    If the coin lands heads then 1 is woken on Monday. If the coin lands tails then 2 is woken on Monday and 3 is woken on Tuesday.

    If Michael is woken then what is his credence that the coin landed heads?

    Michael's credence before the experiment is P(1) = 1/3, so if woken he ought to continue to have a credence of P(1) = 1/3 since he gains no new relevant evidence if he wakes up during the experiment.
    Michael

    In this variation, it seems to me that being awakened does provide Michael with relevant evidence. Given that the coin landing on heads results in one person being awakened, and the coin landing on tails results in two persons being awakened, on average, 1.5 out of three persons are awakened. Therefore, the prior probability that Michael will be awakened is P(MA) = 1/2. The conditional probabilities are P(MA|H) = 1/3 and P(MA|T) = 2/3 (and these are the same for Jane and Jill).

    Hence, when Michael awakens, it's more probable that the coin landed tails.

    P(T|MA) = P(MA|T)*P(T) / P(MA) = (2/3)*(1/2)/(1/2) = 2/3.
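
    Here is a minimal simulation sketch of this three-Beauty variation (illustrative Python, assuming the numbering and waking rules quoted above):

    import random

    trials = 200_000
    michael_awakened = 0
    heads_when_awakened = 0

    for _ in range(trials):
        numbers = [1, 2, 3]
        random.shuffle(numbers)
        michael = numbers[0]                   # Michael's randomly assigned number
        heads = random.random() < 0.5
        awakened = {1} if heads else {2, 3}    # heads: only 1 is woken; tails: 2 and 3 are woken
        if michael in awakened:
            michael_awakened += 1
            heads_when_awakened += heads

    print(heads_when_awakened / michael_awakened)  # tends toward 1/3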

    And given that if woken he is 1 iff the coin landed heads, he ought to have a credence of P(Heads) = 1/3.

    Do we accept this?

    Yes, we do.

    If so then the question is whether or not Sleeping Beauty's credence in the original experiment should be greater than Michael's credence in this experiment. I think it should.

    I'd be curious to understand why you think so.

    Recently, I contemplated a similar variation wherein candidates are recruited as part of a team of two: Jane and Jill, for example. On Monday, either Jill or Jane is awakened (selected at random). On Tuesday, if a coin lands on tails, the person who wasn't awakened on Monday is now awakened. If the coin lands on heads, the experiment ends. (In this variation, as in yours, there's no need for an amnesia-inducing drug. It's only necessary that the subjects aren't aware of the day of their awakenings.)

    Just like in your variation, tails generates two awakenings (for two different subjects), while heads generates only one. On average, 1.5 out of two persons are awakened. Jane's prior is P(JA) = 3/4, and the conditional probabilities are P(JA|H) = 1/2 and P(JA|T) = 1.

    As before, Jane's awakening provides her with evidence that the coin landed tails.

    P(T|JA) = P(JA|T)*P(T) / P(JA) = (1)*(1/2)/(3/4) = 2/3.
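
    And a matching sketch for the two-person team variation (again only illustrative Python, assuming the Monday/Tuesday rules just described):

    import random

    trials = 200_000
    jane_awakened = 0
    tails_when_awakened = 0

    for _ in range(trials):
        monday_person = random.choice(["Jane", "Jill"])            # one of the two is woken on Monday
        tails = random.random() < 0.5
        awakened = {"Jane", "Jill"} if tails else {monday_person}  # on tails, the other is woken on Tuesday
        if "Jane" in awakened:
            jane_awakened += 1
            tails_when_awakened += tails

    print(tails_when_awakened / jane_awakened)  # tends toward 2/3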

    I would argue that this case is structurally identical to the one discussed in the original post (as well as in Lewis and Elga), with the sole exception that the relevant epistemic subjects are members of a team of two, rather than a single identifiable individual potentially being placed twice in the "same" (epistemically indistinguishable) situation. You could also consider a scenario where Jill falls ill, and her team member Jane volunteers to take her place in the experiment. In this instance, the amnesia-inducing drug would be required to maintain the epistemic separation of the two potential awakenings in the event that the coin lands tails.
  • JeffJo
    130

    Your proposed scenario certainly provides an interesting variation, but it doesn't quite correspond to the structure of the situation typically discussed in literature, the one that seems to give rise to a paradox.
    Pierre-Normand

    You seem to be confused about chickens and eggs, so let me summarize the history:

    • The original problem was formulated by Arnold Zuboff, and was shared amongst a small group. Its scenario lasted for a huge number of days (I've seen a thousand and a trillion). Based on the same coin flip we know of today, the subject would be wakened either every day, or once on a randomly selected day in this period.
    • The second version of the problem came out when Adam Elga put it in the public domain in 2000. In his statement of the problem (seen above), he reduced Zuboff's number of days to two. But he did not specify the day for the "Heads" waking. So it was implied that the order was still random.
    • But he did not address that problem directly. He changed it into a third version, where he did fix the day for the "Heads" waking on Monday.
    • He apparently did this because he could isolate the commonality between {Mon,Tails} and {Mon,Heads} by telling SB that it was Monday. And then the commonality between {Mon,Tails} and {Tue,Tails} by telling her that the coin landed on Tails. That is how he got his solution, by working backwards from these two special cases to his third version of the problem.
    • You could say that Elga created three "levels" of the probability space. The "laboratory" space, where the coin flip is clearly a 1/2:1/2 chance, the "awakened" space where we seek an answer, and the "informed" spaces where SB knows that {Mon,Tails}, and whichever other is still possible, are equally likely.
    • The "situation typically discussed in literature" is about how the informed spaces should relate to the awakened space. Not the problem itself.
    All the "situation typically discussed in literature" accomplishes is how well, or poorly, Elga was able to relate these "informed" spaces to the "awakened" space in the third version of the problem. And the controversy has more to do with that, than the actual answer to the second version.

    All I did was create an alternative third version, one that correctly implements the second version. If you want to debate anything here, the issue is more whether Elga's third version correctly implements his second. If it does, then a correct answer (that is, the probability, not the way to arrive at the number) to mine is the correct answer to Elga's. It can tell you who is right about the solution to Elga's third version.

    That answer is 1/3. And Elga's third version is a correct version of his second, since he could have fixed the Heads waking on Tuesday and arrived at the same answer.

    In your scenario, there are four potential outcomes from the experiment, each of which is equally probable.
    Pierre-Normand

    And in the "scenario most frequently discussed," there is a fourth potential outcome that halfers want to say is not a potential outcome: SB can be left asleep on Tuesday. This is an outcome in the "laboratory" space whether or not SB can observe it. It needs to be accounted for in the probability calculations, but in the "frequent discussions" in "typical literature," the halfers remove it entirely, rather than assigning it the probability it deserves and treating the knowledge that it isn't happening as "new information."

    And the reason they prefer to remove it, rather than deal with it, is that there is no orthodox way to deal with it. That's why Elga created the two "informed" spaces; they are "awakened" sub-spaces that do not need to consider it.

    That's all I did as well, but without making two cases out of it. I placed that missing outcome, the fourth one you described, in a place where you can deal with it but can't ignore it.

    And there are other ways to do this. In the "scenario most frequently discussed" just change {Tue,Heads} to a waking day. But instead of interviewing her, take her on a shopping trip at Saks. Now she does have "new information" when she is interviewed, and her answer is 1/3. My question to you is, why should it matter what happens on {Tue, Heads} as long as it is something other than an interview?

    Or, use four volunteers. Three will be wakened each day. Each is assigned a different combination of {DAY, COIN} for when she would be left asleep. The one assigned {Tue, Heads} is the SB in the "scenario most frequently discussed."

    Each day, bring the three together and ask each what the probability is that this is her only waking. None can logically give a different answer than the others, and only one of them fits that description. The answer is 1/3.

    And my point is that, while these "frequent discussions" might be interesting to some, there is a way to say which gets the right answer, rather than arguing about why one should be called the right solution. The answer is 1/3.
  • Michael
    15.8k
    And in the "scenario most frequently discussed," there is a fourth potential outcome that halfers want to say is not a potential outcome. SB can be left asleep on Tuesday. This is an outcome in the "laboratory" space whether or not SB can observe it. It needs to be accounted for in the probability calculations, but in the "frequent discussions" in "typical literature," the halfers remove it entirely. Rather than assign it the probability it deserves and treating the knowledge that it isn't happening as "new information."JeffJo

    What if the experiment ends after the Monday interview if heads, with the lab shut down and Sleeping Beauty sent home? Heads and Tuesday is as irrelevant as Heads and Friday.

    I think this is equivalent to the case where we don't consider days, and just say that if heads then woken once and if tails then woken twice. It doesn't make sense to consider heads and second waking as part of the probability space. It certainly doesn't have a prior probability of 1/4.
  • JeffJo
    130
    What if the experiment ends after the Monday interview if heads, with the lab shut down and Sleeping Beauty sent home? Heads and Tuesday is as irrelevant as Heads and Friday.
    Michael

    Then, in the "scenario most frequently discussed," SB is misinformed about the details of the experiment. In mine, the answer is 1/3.
  • Michael
    15.8k
    Then, in the "scenario most frequently discussed," SB is misinformed about the details of the experiment. In mine, the answer is 1/3.JeffJo

    So we have two different versions of the experiment:

    First
    1. she’s put to sleep, woken up, and asked her credence in the coin toss
    2. the coin is tossed
    3. if heads she’s sent home
    4. if tails she’s put to sleep, woken up, and asked her credence in the coin toss

    Second
    1. she’s put to sleep, woken on Monday, asked her credence in the coin toss, and put to sleep
    2. the coin is tossed
    3. if heads she’s kept asleep on Tuesday
    4. if tails she’s woken on Tuesday, asked her credence in the coin toss, and put to sleep

    I think the answer should be the same in both cases.

    It may still be that the answer to both is 1/3, but the reasoning for the second cannot use a prior probability of Heads and Tuesday = 1/4 because the reasoning for the first cannot use a prior probability of Heads and Second Waking = 1/4.

    But if the answer to the first is 1/2 then the answer to the second is 1/2.
  • Michael
    15.8k
    Regarding Bayes' theorem:

    P(Heads|Monday) = P(Monday|Heads) * P(Heads) / P(Monday)

    Both thirders and double-halfers will accept that P(Heads | Monday) = 1/2, but how do we understand something like P(Monday)? Does it mean "what is the probability that a Monday interview will happen" or does it mean "what is the probability that this interview is a Monday interview"?

    If the former then P(Monday) = P(Monday | Heads) = 1.

    If the latter then there are two different solutions:

    1. P(Monday) = P(Monday | Heads) = 2/3
    2. P(Monday) = P(Monday | Heads) = 1/2

    I think we can definitely rule out the first given that there doesn't appear to be any rational reason to believe that P(Monday | Heads) = 2/3.

    But if we rule out the first then we rule out P(Monday) = 2/3 even though two-thirds of interviews are Monday interviews. This shows the weakness in the argument that uses the fact that two-thirds of interviews are Tails interviews to reach the thirder conclusion.

    There is, however, an apparent inconsistency in the second solution. If we understand P(Monday) to mean "what is the probability that this interview is a Monday interview" then to be consistent we must understand P(Monday | Heads) to mean "what is the probability that this interview is a Monday interview given that the coin landed heads". But understood this way P(Monday | Heads) = 1, which, if assuming P(Monday) = 1/2, would give us the wrong conclusion P(Heads | Monday) = 1.

    So it would seem to be that the only rational, consistent application of Bayes' theorem is where P(Monday) means "what is the probability that a Monday interview will happen", and so P(Monday) = P(Monday | Heads) = 1.

    We then come to this:

    P(Heads|Awake) = P(Awake|Heads) * P(Heads) / P(Awake)

    Given that P(Awake | Monday) = 1, if P(Monday | Heads) = 1 then P(Awake | Heads) = 1 and if P(Monday) = 1 then P(Awake) = 1. Therefore P(Heads | Awake) = 1/2.
  • Pierre-Normand
    2.4k
    It may still be that the answer to both is 1/3, but the reasoning for the second cannot use a prior probability of Heads and Tuesday = 1/4, because the reasoning for the first cannot use a prior probability of Heads and Second Waking = 1/4.

    But if the answer to the first is 1/2 then the answer to the second is 1/2.
    Michael

    For the first case, we can use priors of P(H) = 1/2 and P(W) = 3/4, given that there are three awakenings in the four possible scenarios (H&Mon, H&Tue, T&Mon, T&Tue) where Sleeping Beauty can be. P(W|H) = 1/2, as she is only awakened on Monday when the coin lands heads.

    Consequently, P(H|W) = P(W|H)P(H)/P(W) = (1/2)(1/2)/(3/4) = 1/3.

    In the second case, we can set up a similar calculation: P(Unique|W) = P(W|Unique)*P(Unique)/P(W)

    P(Unique) is the prior probability that an awakening will be unique rather than part of two. P(Unique) = 1/3, as one-third of the experiment's awakenings are unique. P(W) is now 1, as Sleeping Beauty is awakened in all potential scenarios.

    We then find that P(Unique|W) = P(W|Unique)P(Unique)/P(W) = (1)(1/3)/(1) = 1/3.

    This second case calculation is straightforward, but the similarity between the two cases is illuminating. Bayes' theorem works for updating a belief in an outcome given new evidence, P(O|E), by increasing the prior probability of the outcome, P(O), in proportion to the ratio P(E|O)/P(E). This ratio quantifies how much more likely the outcome becomes when the evidence is known to be present.

    In both cases, Sleeping Beauty's evidence is that she is currently awake. In the first case, the relevant ratio is (1/2)/(3/4), which reflects how much more likely the coin is to land heads when she is awake. In the second case, the relevant ratio is (1)/(1), indicating how much more likely a unique awakening situation (due to the coin landing heads) is when she is awake. Both cases yield the same result (1/3), aligning with the ratio of possible H-awakenings ('unique awakenings') to total possible awakenings produced by the experimental designs.

    Another interpretation of P(H) is the proportion of entire experimental runs in which Sleeping Beauty ends up in an H-run ('unique awakening run'). According to this interpretation, the Halfer solution P(H) = 1/2 is correct. The choice between the Thirders' and the Halfers' interpretations of P(H) should depend on the intended use: during individual awakenings (Thirders) or throughout the experiment (Halfers).
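
    The two interpretations can be displayed side by side in a small counting sketch (illustrative Python only; one waking for heads, two for tails):

    import random

    trials = 200_000
    awakenings = 0
    h_awakenings = 0
    h_runs = 0

    for _ in range(trials):
        heads = random.random() < 0.5
        n = 1 if heads else 2          # heads: one waking per run; tails: two
        awakenings += n
        if heads:
            h_awakenings += n
            h_runs += 1

    print(h_awakenings / awakenings)   # fraction of awakenings that are H-awakenings: about 1/3
    print(h_runs / trials)             # fraction of runs that are H-runs: about 1/2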
  • Michael
    15.8k
    P(Unique) = 1/3, as one-third of the experiment's awakenings are unique.
    Pierre-Normand

    This is a non sequitur. See here where I discuss the suggestion that P(Monday) = 2/3.

    What we can say is this:

    P(Unique|Heads) = P(Heads|Unique) * P(Unique) / P(Heads)

    We know that P(Unique | Heads) = 1, P(Heads | Unique) = 1, and P(Heads) = 1/2. Therefore P(Unique) = 1/2.

    Therefore P(Unique|W) = 1/2.

    And if this experiment is the same as the traditional experiment then P(H|W) = 1/2.
  • JeffJo
    130
    So we have two different versions of the experiment:
    Michael

    I'm not quite sure why you quoted me in this, as the two versions you described do not relate to anything I've said.

    To reiterate what I have said, let me start with a different experiment:

      You volunteer for an experiment. It starts with you seated at a table in a closed room, where these details are explained to you:
    1. Two coins will be arranged randomly out of your sight. By this I mean that the faces showing on (C1,C2) are equally likely to be any of these four combinations: HH, HT, TH, and TT.
    2. Once the combination is set, a light will be turned on.
    3. At the same time, a computer will examine the coins to determine if both are showing Heads. If so, it releases a sleep gas into the room that will render you unconscious within 10 seconds, wiping your memory of the past hour. Your sleeping body will be moved to a recovery room where you will be wakened and given further details as explained below.
    4. But if either coin is showing tails, a lab assistant will come into the room and ask you a probability question. After answering it, the same gas will be released, your sleeping body will be moved the same way, and you will be given the same "further details."
    So you sit in the room for a minute, the light comes on, and you wait ten seconds. A lab assistant comes in (so you weren't gassed, yet) and asks you "What is the probability that coin C1 is showing Heads?"

    The answer to this question is unambiguously 1/3. Even though you never saw the coins, you have unambiguous knowledge of the possibilities for what the combination could be.

    Now, let the "further details" be that, if this is the first pass thru experiment, the exact same procedure will be repeated. Otherwise, the experiment is ended. Whether or not you were asked the question once before is irrelevant, since you have no memory of it. The arrangement of the two coins can be correlated to the arrangement in the first pass, or not, for the same reason.

    The point of my "alternate version" that I presented above is that it creates what is, to your knowledge, the same probability space on each pass. Just like this one.It exactly implements what Elga described as the SB problem. He changed it, by creating a difference between the first and second passes. The first pass ignores the coin, so only the second depends on it.

    What you describe, where the (single) coin isn't established until the second pass, is manipulating Elga's change to emphasize that the coin is ignored in the first pass. It has nothing to do with what I've tried to convey. The only question it raises, is if Elga's version correctly implements his problem.
  • Pierre-Normand
    2.4k
    This is a non sequitur.
    Michael

    My argument follows a consistent line of reasoning. Given Sleeping Beauty's understanding of the experimental setup, she anticipates the proportion of indistinguishable awakening episodes she will find herself in, on average (either in one or in many experimental runs), and calculates how many of those will be H-awakenings given the evidence that she will presently be awakened.

    What we can say is this:

    P(Unique|Heads)=P(Heads|Unique)∗P(Unique)/P(Heads)

    We know that P(Unique | Heads) = 1, P(Heads | Unique) = 1, and P(Heads) = 1/2. Therefore P(Unique) = 1/2.

    Therefore P(Unique|W) = 1/2.

    And if this experiment is the same as the traditional experiment then P(Heads|W) = 1/2.

    Yes, I fully concur with this calculation. It interprets Sleeping Beauty's credence P(Unique|W) = P(H|W) upon awakening as the proportion of complete experimental runs in which Sleeping Beauty expects to find herself in an H-run ('unique awakening run'). However, this doesn't negate the Thirder interpretation, which becomes relevant when Sleeping Beauty is focusing on the individual awakening events she is expected to experience, rather than on the individual experimental runs. This interpretation distinction is highlighted in various practical examples I've provided: for example, preparing to face lions or crocodiles while escaping relies on the Thirder interpretation, whereas being picked up by Aunt Betsy at the East Wing at the end of the experiment follows the Halfer interpretation, and so on.
  • Michael
    15.8k
    I think Bayes' theorem shows such thirder reasoning to be wrong.

    P(Unique|Heads) = P(Heads|Unique) * P(Unique) / P(Heads)

    If P(Unique) = 1/3 then what do you put for the rest?

    Similarly:

    P(Heads|Monday) = P(Monday|Heads) * P(Heads) / P(Monday)

    If P(Monday) = 2/3 then what do you put for the rest?
  • Pierre-Normand
    2.4k
    I think Bayes’ theorem shows such thirder reasoning to be wrong.

    P(Unique|Heads)=P(Heads|Unique)∗P(Unique)/P(Heads)

    If P(Unique) = 1/3 then what do you put for the rest?
    Michael

    P(Heads|Unique) = 1 and P(Heads) = 1/3 (since 1/3 of expected awakenings are H-awakenings)

    P(Unique|Heads) is therefore 1, as expected.

    Similarly:

    P(Heads|Monday)=P(Monday|Heads)∗P(Heads)/P(Monday)

    If P(Monday) = 2/3 then what do you put for the rest?

    P(Monday|Heads) = 1 and P(Heads) = 1/3.

    P(Heads|Monday) = 1/2, as expected.
  • Michael
    15.8k


    Previously you've been saying that P(Heads) = 1/2.
  • Pierre-Normand
    2.4k
    Previously you've been saying that P(Heads) = 1/2.
    Michael

    In earlier messages? Yes. I shouldn't have used this prior in the context of the Thirder interpretation of P(H). I was unclear about how the individuation of events relates to the two possible interpretations of P(H) for Sleeping Beauty. So, some of my earlier uses of Bayes' theorem may have been confused or inconsistent. It is, I now think, the very fact that P(H) appears intuitively (but misleadingly) to reflect Sleeping Beauty's epistemic relation to the coin, irrespective of the manner in which she tacitly individuates the relevant events, that generates the apparent paradox.
  • Michael
    15.8k


    Would you not agree that this is a heads interview if and only if this is a heads experiment? If so then shouldn't one's credence that this is a heads interview equal one's credence that this is a heads experiment?

    If so then the question is whether it is more rational for one's credence that this is a heads experiment to be 1/3 or for one's credence that this is a heads interview to be 1/2.
  • Pierre-Normand
    2.4k
    Would you not agree that this is a heads interview if and only if this is a heads experiment? If so then shouldn't one's credence that this is a heads interview equal one's credence that this is a heads experiment?
    Michael

    Indeed, I have long insisted (taking a hint from @fdrake and Laureano Luna) that the following statements are biconditional: "The coin landed (or will land) heads", "I am currently experiencing a H-awakening", and "I am currently involved in a H-run".

    However, it's important to note that while these biconditionals are true, they do not guarantee a one-to-one correspondence between these differently individuated events. When these mappings aren't one-to-one, their probabilities need not match. Specifically, in the Sleeping Beauty problem, there is a many-to-one mapping from T-awakenings to T-runs. This is why the ratios of |{H-awakenings}| to |{awakenings}| and |{H-runs}| to |{runs}| don't match.

    If so then the question is whether it is more rational for one's credence that this is a heads experiment to be 1/3 or for one's credence that this is a heads interview to be 1/2.

    Rationality in credences depends on their application. It would be irrational to use the credence P(H) =def |{H-awakenings}| / |{awakenings}| in a context where the ratio |{H-runs}| / |{runs}| is more relevant to the goal at hand (for instance, when trying to be picked up at the right exit door by Aunt Betsy) or vice versa (when trying to survive potential encounters with lions/crocodiles).
  • Michael
    15.8k
    Rationality in credences depends on their application. It would be irrational to use the credence P(H) =def |{H-awakenings}| / |{awakenings}| in a context where the ratio |{H-runs}| / |{runs}| is more relevant to the goal at hand (for instance, when trying to survive encounters with lions/crocodiles or when trying to be picked up at the right exit door by Aunt Betsy) and vice versa.
    Pierre-Normand

    I think you're confusing two different things here. If the expected return of a lottery ticket is greater than its cost it can be rational to buy it, but it's still irrational to believe that it is more likely to win. And so it can be rational to assume that the coin landed tails but still be irrational to believe that tails is more likely.

    However, it's important to note that while these biconditionals are true, they do not guarantee a one-to-one correspondence between these differently individuated events. When these mappings aren't one-to-one, their probabilities need not match. Specifically, in the Sleeping Beauty problem, there is a many-to-one mapping from T-awakenings to T-runs. This is why the ratios of |{H-awakenings}| to |{awakenings}| and |{H-runs}| to |{runs}| don't match.
    Pierre-Normand

    I'm not sure what this has to do with credences. I think all of these are true:

    1. There are twice as many T-awakenings as H-awakenings
    2. There are an equal number of T-runs as H-runs
    3. Sleeping Beauty's credence that this is a T-awakening is equal to her credence that this is a T-run
    4. Sleeping Beauty's credence that this is a T-run is 1/2.

    You seem to be disagreeing with 3 and/or 4. Is the truth of 1 relevant to the truth of 3 and/or 4? It's certainly relevant to any betting strategy, but that's a separate matter (much like with the lottery with a greater expected return).
  • Pierre-Normand
    2.4k
    I think you're confusing two different things here. If the expected return of a lottery ticket is greater than its cost it can be rational to buy it, but it's still irrational to believe that it is more likely to win. And so it can be rational to assume that the coin landed tails but still be irrational to believe that tails is more likely.
    Michael

    The rationality of Sleeping Beauty betting on T upon awakening isn't because this bet has a positive expected value. In fact, it's the other way around. The bet's positive expected value arises because she is twice as likely to win as she is to lose. This is due to the experimental setup, which on average creates twice as many T-awakenings as H-awakenings. It's because her appropriately interpreted credence, P(T) =def P(T-awakening), is 2/3 that her bet on T yields a positive expected value, not the reverse. If she only had one opportunity to bet per experimental run (and was properly informed), regardless of the number of awakenings in that run, then her bet would break even. This would also be because P(T) =def P(T-run) = 1/2.

    The same logic applies in my 'escape scenario', where Sleeping Beauty's room is surrounded by crocodiles (and she awakens once) if the die doesn't land on 'six', and is surrounded by lions (and she awakens six times) if the die does land on 'six'. Given a rare chance to escape (assuming opportunities are proportional to the number of awakenings), Sleeping Beauty should prepare to face lions, not because of the relative utilities of encounters with lions versus crocodiles, but because she is indeed more likely (with probability 6/11) to encounter lions. Here also, this is because the experimental setup generates more encounters with lions than it does with crocodiles.
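
    A final counting sketch for the escape scenario (illustrative Python, assuming one crocodile awakening when the die doesn't land on 'six' and six lion awakenings when it does):

    import random

    trials = 200_000
    awakenings = 0
    lion_awakenings = 0

    for _ in range(trials):
        six = random.randint(1, 6) == 6
        n = 6 if six else 1            # 'six': six awakenings among lions; otherwise one among crocodiles
        awakenings += n
        if six:
            lion_awakenings += n

    print(lion_awakenings / awakenings)  # tends toward 6/11, about 0.545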