• PhilosophyRunner
    302
Let's look once again at two wordings of the SB problem; I have bolded what I see as the key part of each. Both are questions put to SB:

From Scientific American: "What is the probability that the coin shows heads?"
From Wikipedia: "What is your credence now for the proposition that the coin landed heads?"

    From the thirder position, both of these questions are asking about something that is intrinsically linked to her mental state at a particular moment in time, what she can see, etc. From the halfer position they are only asking about the coin. The disagreement is about what exactly the question is asking.

    Let me rewrite these statements in more obvious ways, sorted into how the two camps interpret them.

    Thirder camp:

"The coin is behind you. When you turn around and look at it, what is the probability that you will see heads at this very moment in time?"
    "What is the probability that photons leaving the coin and entering your eyes at this very moment in time will encode an image of heads?"

Halfer camp:

"What is the probability that the coin landed on heads when it was flipped, regardless of this moment in time at which I am asking you the question?"
"What is the probability that the coin landed on heads, as seen by an independent observer watching the whole experiment without sleeping?"
  • PhilosophyRunner
    302
Sorry for the repeated posts; here is another thought experiment to bring out the difference between the two camps. Both camps are given a camera and told to take pictures of the coin at what they consider to be the important moments.

    The experiment is done twice and the coin lands on heads once and on tails once from the perspective of an independent observer.

    Halfer camp

    The coin is tossed and lands on heads on Sunday. They take a picture of it showing heads.
    The coin is tossed and lands on tails on Sunday. They take a picture of it showing tails.

    They come away with two pictures - one heads and one tails. This is what the question is asking for as far as they are concerned.

    Thirder camp

The coin is tossed and lands on heads. They don't take a picture - they are not interested in the toss itself, as Sleeping Beauty is asleep. Sleeping Beauty wakes up on Monday; now they take a picture of the coin at that moment, showing heads.

The coin is tossed and lands on tails. They don't take a picture - they are not interested in the toss itself, as Sleeping Beauty is asleep. Sleeping Beauty wakes up on Monday; now they take a picture of the coin at that moment, showing tails. Sleeping Beauty wakes up again on Tuesday; now they take a picture of the coin at that moment, showing tails.

    They come away with three pictures - one heads and two tails. This is what the question is asking for as far as they are concerned.

    You can see the difference in what the halfers and thirders think is important based on when they took their pictures.
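The two tallies can be checked with a short simulation (a sketch in Python; the function name and run count are mine, not part of the thought experiment): halfers photograph each toss once on Sunday, while thirders photograph each awakening.

```python
import random

def picture_counts(n_runs, seed=0):
    """Count the heads/tails pictures each camp takes over n_runs tosses."""
    rng = random.Random(seed)
    halfer = {"heads": 0, "tails": 0}
    thirder = {"heads": 0, "tails": 0}
    for _ in range(n_runs):
        coin = rng.choice(["heads", "tails"])
        halfer[coin] += 1            # one picture per toss, on Sunday
        if coin == "heads":
            thirder["heads"] += 1    # one awakening: Monday
        else:
            thirder["tails"] += 2    # two awakenings: Monday and Tuesday
    return halfer, thirder

halfer, thirder = picture_counts(100_000)
print(halfer)   # roughly a 50/50 split
print(thirder)  # roughly twice as many tails pictures as heads pictures
```

Over many runs the halfers' photo album splits about evenly between heads and tails, while about one third of the thirders' photos show heads, which is exactly where the two camps' 1/2 and 1/3 answers come from.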
  • Pierre-Normand
    2.2k
I've further refined @Michael's 'less-extreme' variation of the Sleeping Beauty problem in a way that highlights a pragmatist interpretation of the halfer/thirder dispute. Here it is, with some comments by GPT-4:

    Hi GPT-4,

    Here is a new variation on the Sleeping Beauty problem that I imagined with the goal of highlighting a pragmatist interpretation of it.

    Sue Sitter has recently joined the team of sitters at the Sleeping Beauty Experimental Facility. In her role, like other sitters, she's responsible for attending to a participant on one single day during one of their experimental sessions. The nature of these sessions depends on a roll of a die: if the die lands on 'six,' the participant will be woken and interviewed six times over six days; otherwise, they will only wake once for an interview on the first day. As a sitter, Sue is assigned to a room randomly, without knowing whether the participant inside is scheduled for a single awakening or one of six.

In the experiment facility, on average 6/11 of the occupied rooms hold participants waking six times (due to a die having landed 'six') and 5/11 hold participants waking only once (due to a die not having landed 'six'). Now suppose Sue had access to the participant registry where all active participants are listed. She spotted the name 'Jane Doe.' What are the chances she will be woken up six times? Sue's credence is 1/6, and this would also be the case for Jane Doe's credence before she undergoes the first sleep session.

Now, suppose that by a stroke of luck, Sue is assigned to Jane Doe on that particular day. Her job is to measure her vitals as she awakens and get her breakfast as she waits for the interview. Sue arrives in Jane's room and waits for her to wake up. What are the chances that the die landed on 'six', indicating this would be one of six awakenings rather than the only one?

    Once Sue has been assigned to Jane Doe, her credence P('six') in this proposition should be updated from 1/6 to 6/11. This is because Sue was randomly assigned a room and her credence must now reflect the proportion of participants who wake up six times to the total number of participants who occupy the facility on that day. It can be argued that when Jane Doe awakens, her credence P('six') then aligns with Sue's because both their credences are targeting the exact same proposition, and both of them have the same epistemic access to it. But I'm not going to argue for this just now.
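The 6/11 and 5/11 proportions can be recovered from expected room occupancy (a minimal sketch; the variable names are mine):

```python
from fractions import Fraction

# Each participant rolls a die: 'six' (prob 1/6) means six awakening days,
# anything else (prob 5/6) means one awakening day.
p_six = Fraction(1, 6)
days_if_six, days_otherwise = 6, 1

# Expected occupied room-days contributed by each kind of participant.
six_days = p_six * days_if_six              # 1/6 * 6 = 1
other_days = (1 - p_six) * days_otherwise   # 5/6 * 1 = 5/6

# A sitter assigned to a random occupied room on a random day:
p_assigned_to_six = six_days / (six_days + other_days)
print(p_assigned_to_six)        # 6/11
print(1 - p_assigned_to_six)    # 5/11
```

Sampling room-days rather than participants weights the 'six' runs six times as heavily, which is why Sue's credence moves from 1/6 to 6/11 on being assigned a room.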

    From the point of view of Aunt Betsy (Jane Doe's aunt) who visits the facility and wants to see Jane, there is a 5/6 chance Jane's die didn't land on 'six' and she is being woken up only once whereas from Sue's point of view there is a 6/11 chance that the die landed on six and that Jane today is experiencing one in a sequence of six awakenings.

Suppose that when a visitor like Aunt Betsy is allowed to come to the facility to inquire about a participant, they get to meet at the cafeteria with their assigned sitter, who will inform them of the participant's wellbeing. Aunt Betsy and Sue Sitter then get to discuss their respective (and inconsistent) credences regarding the die throw that determined Jane's number of awakenings. Will either Aunt Betsy or Sue find some ground to update their credences?

    (Note that the circumstances of Aunt Betsy's meeting with Sue Sitter are somewhat underspecified.)

    GPT-4:

    This is a fascinating variation on the Sleeping Beauty problem! It certainly introduces an additional layer of complexity with the inclusion of Sue Sitter and Aunt Betsy, each with their own perspectives and credences.

    The original Sleeping Beauty problem is interesting because it's a question about how to update beliefs when you learn no new ordinary information, but only learn that it is now. This is often called "self-locating belief." Here, you have a situation where both Sue Sitter and Aunt Betsy have different self-locating beliefs about the situation, but also have different access to information about the experiment.

    Let's first clarify their perspectives:

    Sue Sitter: She knows the overall proportions of the experiment, so her credence that Jane Doe will wake six times is 6/11, given that she was randomly assigned a room.

    Aunt Betsy: She doesn't know the overall proportions of the experiment. She just knows that Jane Doe has been put into the experiment and that a die roll of six means Jane will wake up six times. So, her credence that Jane Doe will wake six times is 1/6.

    When Sue Sitter and Aunt Betsy meet, they have an opportunity to share their perspectives and possibly update their credences.

    The key question here is: do they have a reason to trust each other's perspective? Is there a reason why Sue should trust Aunt Betsy's credence of 1/6, or why Aunt Betsy should trust Sue's credence of 6/11?

    In a sense, they're operating under different contexts and different sets of information. Sue's context is the whole experimental facility where she knows the average proportions, while Aunt Betsy's context is just about Jane Doe and the die roll that determines her awakenings.

    So, when they meet and share their perspectives, they might be able to adjust their credences. But how they do so depends on what they learn from each other, their trust in each other's knowledge, and how they interpret the information. This is an open question in the philosophy of probability and epistemology, and different theories might suggest different ways of updating their beliefs.

    The pragmatist interpretation you've proposed seems to suggest that Sue Sitter and Aunt Betsy should stick to their own credences, because those credences are based on their own perspectives and contexts. But they could also potentially learn from each other and update their credences in light of new information.

    The exact credence updates would depend on various factors, including their prior beliefs about the reliability of their own and each other's information sources, the clarity with which they understand each other's perspectives, and their understanding of the underlying mechanics of the experiment. The problem does not provide a clear-cut, definitive answer to this question, illustrating the complexity and ambiguity inherent in the philosophy of probability and belief updating.

    PN:

    Let us start with the assumption that both Sue and Aunt Betsy are fully informed about the experimental setup and have complete trust in each other, although they may interpret their respective credences differently. Visitors like Aunt Betsy can arrange a visit if they notify the facility before the participant begins the experiment and starts their sleep schedule. Once the participant's sleep schedule is set, the facility manager organizes a meeting between the visitor and an assigned sitter. This meeting is scheduled for a randomly chosen day when the participant is due to awaken under the watch of this sitter. Importantly, the circumstances surrounding this meeting are designed in such a way that neither Sue nor Aunt Betsy receives any new information that might lead them to adjust their beliefs about the outcome of the die roll.

    GPT-4:

    Under these additional clarifications, it seems that both Sue Sitter and Aunt Betsy have complete knowledge of the experimental setup and trust each other's report. They also have no reason to believe that their meeting provides additional information about Jane Doe's situation. This makes the question of whether they should update their credences even more interesting.

    In this case, they might consider the following:

    Aunt Betsy could reason that Sue Sitter, being an employee of the facility, has a more informed perspective on the overall proportions of the experiment. Therefore, she might consider updating her credence to align with Sue's, which is 6/11.

    Sue Sitter could consider Aunt Betsy's credence, but since Sue's perspective is based on the overall proportions of the experiment, she might not find a compelling reason to change her credence based on Aunt Betsy's belief.

    What's interesting about this situation is that even though they both have complete knowledge of the experimental setup, their credences are based on their specific roles and perspectives within the experiment.

    While it's tempting to think that they should align their credences, it's not necessarily clear that they should. This highlights how the concept of "credence" in probability theory can be subjective and context-dependent, even when all parties have full knowledge of the underlying facts.

    Sue's credence of 6/11 is based on her perspective as a sitter, who knows the overall proportions of the experiment. Aunt Betsy's credence of 1/6 is based on her perspective as an aunt, who knows the die roll determines Jane Doe's awakenings.

    The pragmatist interpretation might suggest that each should stick to their own credence, as it is the most rational belief given their specific perspective and role. But other interpretations might suggest that they should try to align their beliefs in light of their shared knowledge and trust in each other.

    This scenario serves as an interesting case study in the philosophy of probability, illustrating how different perspectives can lead to different credences even with full knowledge of the facts.

    PN:

    Suppose that whenever a sitter and a visitor meet under such circumstances, they are allowed to put their money where their mouth is and make an even money bet on the outcome (i.e. on 'six' or 'not six'; or, equivalently, on six awakenings or one single awakening). Aunt Betsy would bet on 'not six' and expect to win five out of six times. Sue Sitter would bet on 'six' and expect to win six out of eleven times. And so would they both! How is that possible? (I already know the answer, but that's a challenge for you).

    GPT-4:

    The situation you've described is certainly intriguing. It seems paradoxical because Sue and Aunt Betsy have different credences about the same event and yet, given their respective perspectives, they both expect to win more often than not. This paradox arises from the fact that the event being wagered on ("six" or "not six") occurs within different sampling populations for Sue and Aunt Betsy.

    Aunt Betsy is betting on the outcome of a single, specific experiment involving Jane Doe, where the die is rolled once. She knows that in this single experiment, the chance of getting a six is 1/6, and the chance of not getting a six is 5/6. So, she expects to win her bet 5 out of 6 times.

    On the other hand, Sue Sitter is not betting on a single, specific experiment. Instead, she's betting on the outcome of a random sampling from all the experimental sessions taking place in the facility. In this larger population of experimental sessions, 6/11 of the sessions will involve six awakenings (a "six" outcome), and 5/11 will involve one awakening (a "not six" outcome). So, she expects to win her bet 6 out of 11 times.

    This apparent paradox is resolved when we realize that Sue and Aunt Betsy are not actually betting on the same event, despite appearances. Aunt Betsy is betting on the outcome of a specific experiment (Jane Doe's), while Sue is betting on the outcome of a random sampling from all experiments. The event they're betting on has a different probability in these two different contexts.

    This example demonstrates an important principle in probability theory: the probability of an event can depend on the context or sampling population in which it's considered. It also illustrates how different perspectives can lead to different assessments of probability, even when all parties have full knowledge of the underlying facts.

    PN:

    Your explanation is generally accurate, but it might be helpful to clarify that Aunt Betsy's and Sue's chances of 'winning' more often, despite betting on complementary outcomes, are based on their own individual long-term frequencies of winning. If they repeat their respective 'experiments' many times, Sue will interact with different participants' acquaintances, and Aunt Betsy will meet different sitters. Consequently, they won't both win more often against each other – a scenario that would indeed constitute a logical impossibility.

    GPT-4:

    Absolutely, your clarification is spot on. The betting scenario, as you've described it, involves many repetitions of the experiment. In the long run, both Sue Sitter and Aunt Betsy are betting in multiple rounds, with different people, based on their respective beliefs about the outcome of the experiment.

    Aunt Betsy, when betting with different sitters across different experiments, will expect to win 5 out of 6 times because she is considering each experiment as a single event where a die is rolled once.

    On the other hand, Sue Sitter, when betting with different visitors across different experiments, will expect to win 6 out of 11 times because she is considering the overall proportions of the experiment and the distribution of awakenings in the facility.

    They are not directly betting against each other over and over again. If they were, then it would indeed be a logical impossibility for both to win more often than they lose, since they're betting on complementary outcomes.

    This scenario is an interesting demonstration of how different perspectives can lead to different probabilities, and how those probabilities are borne out in the long term through repeated trials. It also shows how the same event can be perceived differently depending on the context and the observer's perspective.
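The long-run claim, that each party wins the majority of her own bets against different counterparts, can be checked with a simulation (a sketch under the stated assumptions; the names and the room-day weighting are my rendering of the setup):

```python
import random

rng = random.Random(42)
N = 100_000

# Aunt Betsy: one bet per experiment, on 'not six' (wins unless the die shows six).
betsy_wins = sum(rng.randint(1, 6) != 6 for _ in range(N))

# Sue: each of her bets happens in a randomly assigned occupied room.  A 'six'
# participant occupies a room for 6 days, others for 1 day, so sampling a
# room-day weights 'six' runs six times as heavily (weights 1*6 vs 5*1).
sue_wins = 0
for _ in range(N):
    outcome = rng.choices(["six", "not six"], weights=[1 * 6, 5 * 1])[0]
    sue_wins += outcome == "six"

print(betsy_wins / N)  # close to 5/6 (about 0.833)
print(sue_wins / N)    # close to 6/11 (about 0.545)
```

Both win rates exceed one half, with no contradiction, because the two bettors are sampling different populations: experimental runs for Aunt Betsy, occupied room-days for Sue.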
  • Pierre-Normand
    2.2k
    Follow-up to my discussion with GPT-4 about the pragmatist interpretation of the Sleeping Beauty problem:

    PN:

As I reflected on this thought experiment while taking a shower (always a propitious moment for philosophizing), I noticed a flaw in my initial conception of it. Due to the way Aunt Betsy and Sue's meeting is arranged, the mere occurrence of the meeting allows Sue to update her credence P('six') to match Aunt Betsy's. Can you see why?

    GPT4:

    Yes, I see the point you're making. The scheduling of the meeting between Aunt Betsy and Sue is tied to a participant's awakening, which in turn is tied to the outcome of the die roll. This means that the occurrence of the meeting itself does provide Sue with information relevant to her credence P('six').

    Specifically, since the meeting occurs on a randomly chosen day when the participant is scheduled to awaken, the very fact that the meeting is taking place tells Sue that the participant has indeed awakened at least once. Given that the chance of the participant awakening at all is higher if the die landed on 'six', this information should increase Sue's credence that the die landed on 'six'. However, because the die is rolled before the participant's sleep schedule is set, and the meeting is scheduled after this, the information doesn't completely determine the outcome of the die roll. Sue's credence should therefore align more closely with Aunt Betsy's, but it won't necessarily match it exactly.

    PN:

I think you are now (uncharacteristically) not entirely on track. The reason is that when Aunt Betsy requests a meeting with (one of) Jane's assigned sitter(s), if the die didn't land on 'six', the unique sitter assigned to Jane will be booked for the meeting. Otherwise, one of the six sitters assigned to Jane will be booked at random. Upon learning that she was booked, Sue's credence that she is the only sitter (rather than one among six) assigned to Jane increases.

    GPT4:

    You're right, I misunderstood the key aspect of your thought experiment. When Aunt Betsy schedules a meeting, one of two scenarios will happen:

    If the die did not land on 'six,' one sitter was assigned to Jane, and that sitter will be booked for the meeting.

    If the die landed on 'six,' six sitters were assigned to Jane, and one of them will be randomly chosen for the meeting.

    Given this setup, when Sue is selected for the meeting, she gains new information: she now knows that she is either the sole sitter assigned to Jane (if the die did not land on 'six') or one among six (if the die landed on 'six').

    The fact that Sue was selected for the meeting therefore updates her credence in the proposition that she is the only sitter assigned to Jane. This is because being chosen for the meeting is more likely in the scenario where there is only one sitter (where the die did not land on 'six') compared to the scenario where there are six sitters (where the die landed on 'six').

    I apologize for the confusion, and I appreciate your patience in clarifying your thought experiment.

    PN:

No trouble! I think we can easily use Bayesian updating to show that Sue's credence P(not 'six') thereby increases from 5/11 to 5/6.

    GPT4:

    Yes, you're correct. Sue can use Bayesian updating to adjust her beliefs based on the new information she has received.

    The Bayes' theorem formula is as follows:

    P(A|B) = [P(B|A) * P(A)] / P(B)

    In this case, we can interpret:

    A as the event "the die did not land on 'six'"
    B as the event "Sue is selected for the meeting"
    Before the meeting, Sue's prior belief, P(A), is 5/11, reflecting the proportion of participants who wake up only once.

    P(B|A) is 1, because if the die did not land on 'six', Sue is the only sitter assigned to Jane and will definitely be selected for the meeting.

    P(B) is the total probability that Sue is selected for the meeting, which is the sum of the probabilities of her being selected under both possible outcomes of the die roll. This is (5/11 * 1) + (6/11 * 1/6) = 5/11 + 1/11 = 6/11.

    Plugging these values into the Bayes' theorem formula gives us:

    P(A|B) = [1 * (5/11)] / (6/11) = 5/6

    So, after learning that she has been selected for the meeting, Sue's updated belief in the proposition that the die did not land on 'six' increases to 5/6.
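The update just derived can be verified with exact arithmetic (a minimal sketch of the same Bayes computation; the variable names are mine):

```python
from fractions import Fraction

# Prior: Sue's credence, after her random room assignment, that the die
# did NOT land on 'six'.
p_not_six = Fraction(5, 11)
p_six = Fraction(6, 11)

# Likelihood of Sue being the booked sitter under each hypothesis.
p_booked_given_not_six = Fraction(1)     # she is Jane's only sitter
p_booked_given_six = Fraction(1, 6)      # one of six sitters chosen at random

# Bayes' theorem: P(not six | booked) = P(booked | not six) P(not six) / P(booked).
p_booked = p_not_six * p_booked_given_not_six + p_six * p_booked_given_six
posterior = p_not_six * p_booked_given_not_six / p_booked
print(posterior)  # 5/6
```

Using exact fractions avoids any floating-point doubt: P(booked) comes out to 6/11, and the posterior to exactly 5/6, matching Aunt Betsy's credence.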

    PN:

    You got it right on the first try, congrats!

    GPT-4:

    Thank you! I'm glad the explanation met your expectations. Bayesian reasoning can be quite intricate but it's a powerful tool for understanding how to update our beliefs in light of new information. If you have any more questions or thought experiments, feel free to share!

    PN:

    I just want to note that this result does not invalidate the 'pragmatist' lesson from the thought experiment. It's still the case that if they were to bet on their credences prior to meeting, both Aunt Betsy and Sue would make money in the long run. Furthermore, after the meeting, Sue would alter her betting behavior and make even more money (winning five times out of six by betting on the opposite outcome). But in this case, she would only have the opportunity to make this more profitable bet when she is booked for a meeting, so her proportions of winnings (and hence, her credence) would reflect differently sampled populations.

    GPT4:

    Absolutely, you've hit the nail on the head. While Sue updates her credence after the meeting, this doesn't undermine the pragmatist interpretation that emerged earlier in the thought experiment. The betting behavior of both Aunt Betsy and Sue is still rational given their respective information and they would both make money in the long run if they were to bet on their initial credences.

    Furthermore, you're correct in noting that Sue's updated credence and altered betting behavior would indeed lead to more profitable outcomes for her in the instances where she is booked for a meeting. However, these instances represent a different sampling of the population (only when she is booked for a meeting), reinforcing the point that different agents with different information and sampling conditions can rationally hold different credences about the same event.

    This is a very insightful point and underscores the richness and complexity of the pragmatist interpretation of the Sleeping Beauty problem and its variations. It's a good reminder that our beliefs and predictions about the world should take into account not only the available information but also the ways in which we are sampling from the world.
  • Kizzy
    45
https://thephilosophyforum.com/discussion/comment/782810 Any thoughts on this contribution to the thread from The Lounge, posted back in March, titled "Cinderella Problem. Anyone understand it?" My interpretation and take on this comes from a clean slate, as this was the first I had heard of the problem... The view or twist I add to the problem involves not waking up on Wednesday at all; death is the outcome... but is it the only outcome? The other outcome that WILL ALSO occur in this experiment is that of a fair coin toss: two definite outcomes, an H or T result, and never waking up again.
What if she was never supposed to wake up again after either just Monday (heads on a fair flip) or Monday and Tuesday (tails on a fair flip)? How lucky would she be to get two days instead of one, to be alive for one more moment? What if she couldn't wait to die? ...unlucky gal.
  • Pierre-Normand
    2.2k
    It boils down to the following question. Should she risk a single draw from

    P(Flip = Heads | Today = Monday) = 1/2 (no amnesia)

    Or should she risk a single draw from

    P(Flip = Heads | Today = Monday Or Tuesday) = x (amnesia)

    According to halfers, x = 1/2, implying that she gains nothing by choosing amnesia.

    According to thirders, x = 1/3, indicating that she gains from choosing amnesia.
    sime

    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her misconstruing the nature of her predictions. A thirder, upon each awakening, assigns a credence of P(H) = 1/3 to the coin having landed heads, predicting that, over many awakenings within various runs of the experiment, the coin will land 'tails' 2/3 of the time. These are predictions about the frequency of outcomes across multiple awakenings, not about the specific outcome of the coin toss in a particular awakening.

    The initial bet, as you stipulated, is made prior to the start of the experiment, not at each waking event. As such, the payout of $1 is tied to the experimental run as a whole, not to each individual awakening. While a thirder may correctly predict the frequency of 'tails' outcomes across awakenings, these predictions do not increase the actual winnings from the initial bet.

    In sum, while the amnesia might increase the frequency of correct predictions, it does not lead to a higher payout. This is because only one payout occurs per experimental run, even when Sleeping Beauty correctly predicts a 'tails' outcome twice within a run. A thirder, fully informed about the conditions of the experiment and the nature of her predictions, would be aware of this.
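The gap described above, between per-awakening prediction frequencies and per-run payouts, can be made concrete with a simulation (a sketch; the counters and seed are mine):

```python
import random

rng = random.Random(7)
N = 100_000

awakenings = 0
heads_awakenings = 0
runs_won = 0  # one even-money bet on 'tails', placed before each run

for _ in range(N):
    heads = rng.random() < 0.5
    n_wakes = 1 if heads else 2      # heads: Monday only; tails: Monday and Tuesday
    awakenings += n_wakes
    heads_awakenings += n_wakes if heads else 0
    runs_won += not heads            # the single per-run bet pays out at most once

print(heads_awakenings / awakenings)  # close to 1/3: per-awakening frequency of heads
print(runs_won / N)                   # close to 1/2: per-run frequency of winning
```

The thirder's 1/3 shows up in the first ratio (the fraction of awakenings that occur in heads runs), while the payout of the pre-experiment bet tracks the second ratio, which stays at 1/2 with or without amnesia.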
  • sime
    1k
    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her conflating the probability of making a correct prediction upon awakening with the frequency of the actual payout from the initial bet.Pierre-Normand

    They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

    1) That the physical probability of winning the lottery is 1/2

    2) That amnesia doesn't influence lottery outcomes.

3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to its probabilities as being 1/2?


    Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.
  • Michael
    14k
    Your suggestion that a thirder expects to gain from choosing amnesia would depend on her conflating the probability of making a correct prediction upon awakening with the frequency of the actual payout from the initial bet.Pierre-Normand

    I think a distinction needs to be made between the probability of making a correct prediction and the frequency of making a correct prediction. That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.

    Maybe thirders who use betting examples are simply frequentists?

    Perhaps there also needs to be a distinction made between the probability of making a correct prediction and the probability of the coin having landed tails. It could be that the answers are different. This might be especially true for frequentists, as the frequency of correct predictions is not the same as the frequency of coins landing tails (there can be two correct predictions for every one coin that lands tails).
  • Pierre-Normand
    2.2k
    They do indeed conflate them. Why would SB assign a credence of 1/3 to heads, if she knows

    1) That the physical probability of winning the lottery is 1/2

    2) That amnesia doesn't influence lottery outcomes.

3) That had she not had amnesia, then she would either know the result of the lottery on waking up, or else refer to its probabilities as being 1/2?
    sime

    Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.

    Since thirders make no principled distinction between causes and effects, they do indeed encourage avoidance-coping. For by their logic, one should raise one's credences for one's preferred outcomes by avoiding information.

    Neither halfers nor thirders typically make any reference to the potential causal impacts (e.g. winning bets) of them expressing meaningful credences. If they would be more careful about that, they would stipulate what population samples they are talking about when they say "I have credence P that O is true when I am in situation S." In that case, the way in which they choose how to define what constitutes S and O would reflect their interests in predicting the frequency of such Os (e.g. Heads) in such Ss (e.g. awakenings or experimental runs). And then, thirders and halfers would simply stop talking past each other.
  • sime
    1k
    Because her credence wasn't meant to reflect the proportion of winning bets but rather the proportion of awakening episodes that occur during an experimental run that results in the (initial) bet being won. Those two aren't the same. If Sleeping Beauty had instead been afforded the opportunity to bet on each awakening episode, then her winning frequencies would exactly match her credence that she is winning that bet.Pierre-Normand

    The question she is asked is in the context of a single trial. It is merely

    "What is your credence now for the proposition that the coin landed heads?"

    The problem refers to the outcome of a particular coin toss, with a sample space S = {H,T} describing two possible worlds. So the problem does not concern frequencies. Both halfers and Elga agree that the propensity of the coin to land heads is 1/2. So the measure M over S is

M(H) = M(T) = 1/2

    We are given two causal implications with respect to S that characterise two random variables, namely

    Awaken_Mon (H) = True
    Awaken_Mon (T) = True

    and

    Awaken_Tue (H) = True
    Awaken_Tue (T) = False

    Both of which are "measured" by M to give

    P( Awaken_Mon = True) = 1
    P( Awaken_Tue = True ) = 1/2

    This fully describes the stated causal relationships of the Sleeping Beauty Problem, that go from the state of the coin to the states of two awakening events.
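As a sketch, the measure M and the two random variables just defined can be written out directly (illustrative Python; the helper function is mine, not part of the formalism above):

```python
from fractions import Fraction

# The coin's sample space S = {H, T} and the measure M over it.
M = {"H": Fraction(1, 2), "T": Fraction(1, 2)}

# The two random variables as deterministic functions of the coin state.
awaken_mon = {"H": True, "T": True}
awaken_tue = {"H": True, "T": False}

def prob(rv, value):
    """Push the measure M through a random variable."""
    return sum(M[w] for w in M if rv[w] == value)

print(prob(awaken_mon, True))  # 1
print(prob(awaken_tue, True))  # 1/2
```

This reproduces P(Awaken_Mon = True) = 1 and P(Awaken_Tue = True) = 1/2; note that nothing in this model mentions "Today", which is precisely the gap questions (1) and (2) below point at.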

    How do you suppose that we

    (1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?

    and

    (2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?

Unless thirders can answer (1) and (2), their epistemic probabilities are meaningless.
  • Pierre-Normand
    2.2k
    I think a distinction needs to be made between the probability of making a correct prediction and the frequency of making a correct prediction. That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.

    Maybe thirders who use betting examples are simply frequentists?

    Perhaps there also needs to be a distinction made between the probability of making a correct prediction and the probability of the coin having landed tails. It could be that the answers are different. This might be especially true for frequentists, as the frequency of correct predictions is not the same as the frequency of coins landing tails (there can be two correct predictions for every one coin that lands tails).
    Michael

    While Bayesians and frequentists might disagree about the meaning of probability assignments to potential outcomes of experiments, with the former asserting that they depend on degrees of belief, and the latter contending that they're based on expected long-run frequencies of such outcomes, the dispute between thirders and halfers seems more closely related to the class of events being predicted. The Bayesian framework is equally well-suited for deriving the thirder result, as long as the probability in question is the chance of finding oneself within a 'tails' experimental run, given the evidence that one is currently awake.
  • Pierre-Normand
    2.2k
    How do you suppose that we

    (1) relate the random variables, whose outcomes refer to events in the B series of time, to the indexical "Today" of the A series, whose relationship to B series and to the physics of coin-tosses isn't specified by the problem?
    sime

    This is actually straightforward. Imagine a spatial analogy. The 'B series of space' might be represented by Cartesian coordinates on a surface or by room numbers. The 'A series of space' could correspond to indexical names of locations like "here", or "this room". In the example I gave earlier, Sue Sitter is randomly assigned to a room in the Sleeping Beauty Experimental Facility. There are twice as many rooms allocated to participants who are scheduled for two awakenings (resulting from a coin landing tails), as opposed to one awakening (from a coin landing heads).

    How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.
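    Sue's reasoning can be sketched with a quick Monte Carlo check (a minimal sketch under the stated assumptions; the participant count is illustrative):

```python
import random

random.seed(0)

# Each participant's coin fills one room (heads) or two rooms on
# consecutive days (tails); Sue is assigned an occupied room uniformly
# at random.
rooms = []
for _ in range(30_000):
    toss = random.choice("HT")
    rooms.extend([toss] * (2 if toss == "T" else 1))

# Fraction of occupied rooms that correspond to a tails outcome.
p_tails_room = rooms.count("T") / len(rooms)
print(p_tails_room)  # ≈ 2/3
```

    The 2/3 falls out of the room counts alone, even though each individual coin has a 1/2 propensity to land tails.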

    (2) How should we apply induction to "invert" the causal direction from "Today" to the state of the coin, assuming we have a satisfactory answer to (1) ?

    The causal flow goes from 'Coin toss results' -> 'Distribution of tails and heads assigned rooms' -> 'Evidence of being in a random room' -> 'Sue calculating her credence'. Therefore, there is no causal paradox.
  • Michael
    14k
    So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.Pierre-Normand

    I think that part in bold is key.

    There are two Sleeping Beauties, A and B. If the coin lands heads then A will be woken once and B twice, otherwise B will be woken once and A twice.

    When woken each is asked their credence that they have been or will be woken twice.

    Each of the three sitters is also asked their credence that their participant has been or will be woken twice.

    Now it may be tempting to argue that the sitters ought to reason as if their interview has been randomly selected from the set of all interviews, but that might not be technically correct. The only way I can think of for this to be done is for one of the sitters to be randomly assigned an interview from the set of all interviews (of which there are 3), for one to be randomly assigned an interview from the set of all remaining interviews (of which there are 2), and then one to be assigned the final interview.

    So rather the sitters should reason as if they are randomly selected from the set of all sitters, and so their credence that their participant has been or will be woken twice will be 2/3.

    The question, then, is whether or not the participant should reason as if they are randomly selected from the set of all participants, and so their credence that they have been or will be woken twice is 1/2. I will continue to say that they should, given the propensities of the experimental setup to place them in the position to be woken twice.

    That the participant who is woken twice will be right twice as often (if she guesses that she has been or will be woken twice) isn't that each participant's credence should be that they are twice as likely to be woken twice.
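    The two perspectives in this variant can be checked numerically (a minimal simulation sketch; the trial count is arbitrary and the names A and B follow the setup above):

```python
import random

random.seed(1)

trials = 50_000
sitter_hits = 0       # the sitter's randomly assigned awakening belongs
                      # to the twice-woken participant
participant_hits = 0  # participant A turns out to be the twice-woken one
for _ in range(trials):
    heads = random.random() < 0.5
    # Heads: A is woken once and B twice; tails: A twice and B once.
    awakenings = ["A"] * (1 if heads else 2) + ["B"] * (2 if heads else 1)
    watched = random.choice(awakenings)  # one sitter slot, chosen uniformly
    twice_woken = "B" if heads else "A"
    if watched == twice_woken:
        sitter_hits += 1
    if twice_woken == "A":
        participant_hits += 1

print(sitter_hits / trials)       # ≈ 2/3
print(participant_hits / trials)  # ≈ 1/2
```

    A sitter selected via a random awakening lands on a twice-woken participant 2/3 of the time, while any given participant is the twice-woken one only 1/2 of the time, which is the asymmetry at issue.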
  • sime
    1k
    How does Sue deduce that "here" indicates she is in a room occupied by a participant scheduled for two awakenings with a probability of 2/3? She draws this conclusion from the fact that twice as many such rooms exist in the facility compared to rooms allocated to participants with only one scheduled awakening. So, from Sue's perspective (based on the exact same evidence she shares with the participant), she concludes that the coin landed tails with a 2/3 probability, despite the coin having a 1/2 propensity to land tails. Sue's credence that the coin landed tails is a consequence of both the initial propensity of the coin to land tails and the propensities of the experimental setup to place her in a room that corresponds to a tails outcome.Pierre-Normand

    Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.

    So, assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads should be 1/2.
  • sime
    1k
    Now suppose she doesn't make any assumptions regarding her room allocation. What then?

    For all she knows, she might have been hand-picked to babysit a "heads" room, in which case the probability that her room belongs to a heads coin is 1. Given possibilities such as this, she should

    1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

    but nevertheless

    2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.
  • Pierre-Normand
    2.2k
    Two "tail" rooms refer to the same coin outcome. So when computing the probability that the coin of her room landed heads, she needs to take into consideration the other room she might have occupied.sime

    Have you had a look at the setup I had described here to provide a pragmatist account of the thirder/halfer dispute?

    On a particular day, when Sue is assigned to a random room in her role as a sitter, all the rooms are occupied by different participants in the study. Each of their schedules was initially determined by a separate coin. Participants scheduled for two consecutive days of awakening might be there for their first or second day (with equal probabilities). In this case, there is a "tail outcome" either tomorrow or yesterday, which you might wish to label as "the same outcome". However, this is irrelevant to the coin currently hidden under the cardboard on the nightstand next to the participant's bed, about which Sue is expressing her credence today based on the available information.

    So assuming that her supervisors assign her room using a draw from a uniform distribution, her credence that the coin of her room landed heads, should be 1/2

    Why would that be? Two thirds of the rooms occupied by participants on any given day are occupied by individuals who have been scheduled for two awakenings (on two consecutive days) because a coin landed tails.
  • Pierre-Normand
    2.2k
    Now suppose she doesn't make any assumptions regarding her room allocation. What then?

    For all she knows, she might have been hand-picked to babysit a "heads" room. In which case the probability that her room is of a heads coin, is 1. Due to possibilities such as this, she should

    1) Refrain from committing to a probability that her room is of a heads coin, until further information is given.

    but nevertheless

    2) Deduce that the probability of her room coin landing heads is 1/2, given that it is known information.
    sime

    If that were the case, the problem would transform into a different version of the Sleeping Beauty problem. This altered version would have Sleeping Beauty awakening only once regardless of the coin's outcome, or not being informed about the number of awakenings in either case. Indeed, in such setups, there would be no grounds for adopting a 'thirder' stance. However, in the problem statement in the original post, the information regarding the number of awakenings following the heads and tails outcomes is provided to Sleeping Beauty. Hence, the thirder stance remains relevant.
  • Pierre-Normand
    2.2k
    The question, then, is whether or not the participant should reason as if they are randomly selected from the set of all participants, and so their credence that they have been or will be woken twice is 1/2. I will continue to say that they should, given the propensities of the experimental setup to place them in the position to be woken twice.Michael

    Why should it be either one or the other? One of them expresses a credence that an awakening (the current one) is occurring as a result of a coin having landed tails (E1), while the other expresses a credence that an experimental run (the current one) is occurring as a result of a coin having landed tails (E2). They are credences about different kinds of events (E1 and E2) that have different temporal and spatial distributions. They can both be (and indeed are) true at the same time, and can be appealed to in order to rationally justify different bets or other actions that depend on such events happening.

    There is a twist that comes from the fact that a biconditional holds between the two propositions "E1 is now occurring" and "E2 is now occurring". How can they therefore have different probabilities of occurrence? This puzzle is solved by attending to the practical implications of establishing effective procedures for verifying their truths, or to the means of exploiting what such truths afford.
  • Michael
    14k
    There is a twist that comes from the fact that a biconditional holds between the two propositions "E1 is now occurring" and "E2 is now occurring". How can they therefore have different probabilities of occurrence? This puzzle is solved by attending to the practical implications of establishing effective procedures for verifying their truths, or to the means of exploiting what such truths afford.Pierre-Normand

    I don't see the connection between credence in an outcome and practical implications. Proving that the optimum betting strategy over multiple games is to bet on tails doesn't verify that P(Tails|Awake) = 2/3 is true.

    If there's a lottery where the probability of winning is 1/1,000,000 but the cost is £1 and the prize is £2,000,000 then it can be profitable to play 1,000,000 times, but it is still the case that for each lottery one's credence in winning should be 1/1,000,000.
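    The lottery arithmetic can be made explicit (a quick check of the numbers in the example):

```python
from fractions import Fraction

p_win = Fraction(1, 1_000_000)  # credence in winning any single draw
cost, prize = 1, 2_000_000      # pounds

# Expected value per ticket is positive, so repeated play is profitable
# in expectation, even though the per-draw credence in winning stays at
# 1/1,000,000.
ev = p_win * prize - cost
print(ev)  # 1
```

    The point being that a positive expected value per play says nothing about the credence one should have in any single outcome.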

    Given that E1 iff E2 as you say, I would say that P(E1) = P(E2), and that the most appropriate credence for E2 (and so also E1) is 1/2, irrespective of practical considerations.
  • PhilosophyRunner
    302
    That a correct prediction of tails is twice as frequent isn't that a correct prediction of tails is twice as probable – at least according to Bayesian probability.Michael

    This is one part of halfer thinking I don't understand. There is something in statistics called the law of large numbers that allows just this inference.

    We have a disagreement over a die. I say that it is more likely to land on 5; you say all numbers are equally likely. What can we do?

    The answer is simple: run an experiment where we roll it many times. If it lands on 5 five million times out of 10 million rolls, then I will have high credence that 5 is more likely on the next one-off roll we do.

    Halfers seem to be saying that we can't use the results of the 10 million rolls to inform our credence about the next one-off roll. I disagree; we certainly can.
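    The die example can be sketched as a simulation (the bias toward 5 is hypothetical, purely for illustration):

```python
import random

random.seed(2)

# Hypothetical bias for illustration: face 5 is twice as likely as the
# others, so its true probability is 2/7 rather than the fair 1/6.
faces = [1, 2, 3, 4, 5, 6]
weights = [1, 1, 1, 1, 2, 1]

rolls = random.choices(faces, weights=weights, k=1_000_000)
freq_5 = rolls.count(5) / len(rolls)
print(freq_5)  # ≈ 2/7 ≈ 0.286, clearly above the fair 1/6 ≈ 0.167
```

    By the law of large numbers, the observed frequency converges on the true probability, which is what licenses using the long run to set credence for the next single roll.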
  • Michael
    14k
    Take away the amnesia. Does it follow that because there are two interviews after every tails that a tails interview is twice as probable?

    Throwing in amnesia doesn't convert the increased frequency into an increased probability.
  • PhilosophyRunner
    302
    Yes, an individual tails interview event is twice as probable. A tails interview where the Monday and Tuesday interviews are grouped together and not seen as different events is equally likely as a heads interview. It comes back to the language of the question and its interpretation.
  • Michael
    14k
    Yes, an individual tails interview event is twice as probable. A tails interview where the Monday and Tuesday interviews are grouped together is equally likely as a heads interview. It comes back to the language of the question and its interpretation.PhilosophyRunner

    That, I believe, is a bad interpretation of probability.

    The probability of the coin landing heads is 1/2, leading to one interview.
    The probability of the coin landing tails is 1/2, leading to two interviews.

    The probability that there will be a heads interview is 1/2.
    The probability that there will be a tails interview is 1/2.

    This is the correct interpretation.
  • Pierre-Normand
    2.2k
    I don't see the connection between credence in an outcome and practical implications. Proving that the optimum betting strategy over multiple games is to bet on tails doesn't verify that P(Tails|Awake) = 2/3 is true.Michael

    In the Sleeping Beauty case, betting on tails at each awakening isn't merely profitable over the long run. It's also more likely to be a winning bet at each occasion of betting. The likelihood that it will be a winning bet is exactly 2/3. Similarly, in the "less-extreme" variation, each occasion of betting is more likely to result in a win (6 out of 11 bets are winning bets).
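    The per-awakening claim can be checked with a quick simulation (a minimal sketch of the standard setup: one awakening on heads, two on tails; Beauty bets tails every time she is woken):

```python
import random

random.seed(3)

wins = awakenings = 0
for _ in range(100_000):
    tails = random.random() < 0.5
    # Tails: awakened Monday and Tuesday; heads: Monday only.
    for _ in range(2 if tails else 1):
        awakenings += 1
        if tails:  # the tails bet wins at this awakening
            wins += 1

print(wins / awakenings)  # ≈ 2/3
```

    The fraction of winning bets per occasion of betting comes out at 2/3, not merely the long-run profit.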

    Following Pradeep Mutalik's suggestion, '[a]ccording to the Bayesian "Dutch Book argument" I mentioned, "a degree of certainty" or "degree of belief" or "credence" is, simply, your willingness to wager*. Specifically, if you have a "degree of certainty" of 1/n then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.'

    In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.
  • PhilosophyRunner
    302


    For you the Monday heads interview is an A event. The Monday tails interview is a B event, and the Tuesday tails interview is a B event as well (it is the same tails event as far as you are concerned).

    For me the Monday heads interview is an A event. The Monday tails interview is a B event, the Tuesday tails interview is a C event (they are different interviews as far as I am concerned).

    You care about the coin only, I care about the interview moment combined with the coin.

    For me the coin showing tails on Monday is a completely different showing to it showing tails on a Tuesday. For you it is the same flip so it is the same showing.

    What is the probability of the Mona Lisa showing? For you it will be 100% as we know it has been painted. For me it would depend on the time of day and the date. The probability of it showing during the day is higher than the probability of it showing at night, as the Louvre is only open on some special nights.

    Different interpretations of what it means for the Mona Lisa to be showing.
  • Michael
    14k
    Following Pradeep Mutalik's argument, according to the Bayesian "Dutch Book argument", "a degree of certainty" or "degree of belief" or "credence" is essentially your willingness to wager. Specifically, if you have a "degree of certainty" of 1/n, then you should be willing to accept a bet that offers you n or more dollars for every dollar you bet.

    In that case, it's not merely the expected value of the bet that determines the credence. Rather, it's your degree of certainty, 1/n, in the outcome being wagered on that makes you rationally justified in accepting a bet with such odds.
    Pierre-Normand

    Then apply this to my case of tossing the coin one hundred times, and where the experiment is only run once.

    Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.

    That I would be woken up a large number of times if it did land heads 100 times in a row just doesn't affect my credence or willingness to bet that it did at all.

    And the same if I was one of 2^100 participants taking part. One person might win, but I will almost certainly not be the winner.

    Only if I got to repeat the experiment 2^100 times would I bet that it did. But not because my credence for any particular experiment has increased; it's because my credence is that I'm likely to win at least once in 2^100 attempts, and that the winnings for that one time will exceed the sum of all my losses.
  • PhilosophyRunner
    302
    Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.Michael

    Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winnings (as long as the payout for heads is more than 2^100 times the payout for tails). And their position would be the rational one: maximizing your expected value when betting is the rational position.
  • Michael
    14k
    Fair enough, but then a person betting that it did land on heads 100 times in a row will have a greater expected value for their winnings (as long as the payout for heads is more than 2^100 times the payout for tails). And their position would be the rational one.PhilosophyRunner

    It can be rational in the sense that it can be profitable to bet when the expected value is greater than the cost, much like a lottery that costs £1 with a prize of £2,000,000 and a probability of winning of 1/1,000,000.

    But it's not rational to believe (especially when playing once) that I am most likely to win betting that it landed heads 100 times. You're not most likely to win. The odds of winning are 1/2^100.
  • Pierre-Normand
    2.2k
    Will you bet that the coin landed heads 100 times in a row? I wouldn't. My credence is that it almost certainly didn't land heads 100 times in a row, and that this is almost certainly my first and only interview.Michael

    @PhilosophyRunner's comment is on point. You awakening once while the coin landed tails is akin to a showing of the Mona Lisa on a particular day. The coin having landed tails rather than heads is akin to Leonardo da Vinci having produced a portrait of Mona Lisa rather than Marco Luciano.
  • PhilosophyRunner
    302
    It would be rational also in the sense that you are more likely to win on a particular guess (which would not be the case in a normal lottery with a large prize of £2,000,000, a probability of winning of 1/1,000,000, and a £1 stake, as you point out).

    If you repeated the experiment a trillion times, keeping a note of whether your guess was correct each time, and I did the same, we would find that I got it correct more often than you. By the law of large numbers, that would mean the outcome I guessed was more probable than yours.