• Michael
    14.2k
    For instance, suppose you offer me the opportunity to purchase a $100 lottery ticket that carries a one in a septillion chance of winning me $200 septillion. Despite the expected value being positive, it may not be reasonable for me to purchase the ticket. However, it would be a logical fallacy to extrapolate from this example and conclude that it would also be unreasonable for me to buy a $100 lottery ticket with a one in ten chance of winning me $2000. Given that I'm not in immediate need of this $100, it might actually be quite unreasonable for me to pass up such an opportunity, even though I stand to lose $100 nine times out of ten.Pierre-Normand

    It would be unreasonable of you to believe that you are most likely to win, even if it’s financially reasonable to play.
  • Pierre-Normand
    2.3k
    It would be unreasonable of you to believe that you are most likely to win.Michael

    Consider this scenario in the context of the Sleeping Beauty problem where a ten-sided die is thrown. In this situation, it would be rational for me to believe that I am more likely to win an even-money bet placed on the outcome being a 10 during the current awakening. Despite the fact that in 9 out of 10 experimental runs I would lose my only bet, in the remaining run I would win ten times my bet by the time the experiment concludes. Thus, on average, I'd win 10 out of every 19 bets. And this is because my credence that the outcome is 10, P(10) = 10/19, would exactly match, in the long run, the proportion of times the outcome indeed was 10 when I awoke.
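    For anyone who wants to check that frequency, here is a minimal simulation sketch (in Python; the even-money bet at each awakening is as described above, and the trial count is arbitrary). It verifies the per-awakening frequency, not which credence is the right one to report:

    import random

    wins = bets = 0
    for _ in range(100_000):
        outcome = random.randint(1, 10)            # the ten-sided die
        awakenings = 10 if outcome == 10 else 1    # ten awakenings on a 10, else one
        for _ in range(awakenings):                # one even-money bet on "10" per awakening
            bets += 1
            if outcome == 10:
                wins += 1

    print(wins / bets)  # tends to 10/19 ≈ 0.526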
  • Michael
    14.2k
    Yes, it is rational to believe that if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win.
  • Pierre-Normand
    2.3k
    Yes, if you repeat the game enough times then you will win more than you lose, but it is still irrational to believe that if you play the game once then you are most likely to win.Michael

    Indeed, it's true that if you play the game once, it's not most likely that you'll win during that particular experimental run. However, when I suggest it's more likely than not that the outcome is 10 during the current awakening, I'm not claiming that it's most likely that the current experimental run will result in a win for me. These are two distinct statements, despite their truth values being perfectly correlated. This seems to be the point that was made in the conclusion of the paper that @fdrake mentioned. Furthermore, these claims hold true irrespective of whether the experiment is run once or multiple times.

    It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated. But that's just because the one gerrymanders the other.
  • fdrake
    5.9k
    and that both outcomes are perfectly correlatedPierre-Normand

    Perhaps it's a misnomer to call them correlated, because there's no meaningful notion of a joint event of both occurrences within the same sample space. As an analogy, it's like having one coin flip labelled with outcomes Heads and Tails, and another labelled Flibberty and Gibbet, then asking what's the probability that the Heads/Tails coin comes out Flibberty or Gibbet. You can analogise the mechanisms by blurring the eyes, but that commits a category error. If that paper is to be believed.
  • Michael
    14.2k
    It seems quite counterintuitive that if my credence concerns the outcome of the experimental run I'm in, it is P(10) = 1/10, and if it's the outcome of the present awakening, it's P(10) = 10/19, and that both outcomes are perfectly correlated.Pierre-Normand

    So this goes back to what I said before. Either we reason as if we’re randomly selected from the set of all participants, and so P(10) = 1/10, or we reason as if our interview is randomly selected from the set of all interviews, and so P(10) = 10/19.

    Given that the experiment doesn’t work by randomly selecting an interview from the set of all interviews, I don’t think it rational to reason as if it is. The experiment works by rolling a die, and so it is only rational to reason as if we’re randomly selected from the set of all participants.

    How we choose to bet just has no bearing on one’s credence that one is likely to win. With your lottery example we play even if we know that we’re most likely to lose (and in fact I play the lottery even though the expected value of winning is less than the cost). And with this example I might bet on 10 even if my credence is that it is less likely, simply because I know that I will win in the long run (or, if playing one game, I’m just willing to take the risk because of the greater expected value).
  • Pierre-Normand
    2.3k
    Perhaps it's a misnomer to call them correlated, because there's no meaningful notion of a joint event of both occurrences within the same sample space.fdrake

    What I mean is that whenever the coin landed heads during a particular awakening, then it also landed heads during the particular experimental run this awakening is a part of, and vice versa. But Sleeping Beauty's credences in those two different outcomes (centered = 1/3 and uncentered = 1/2) are different, they are both right, and they both perfectly track the frequencies of the corresponding outcomes when the experiment is repeated.
  • fdrake
    5.9k
    What I mean is that whenever the coin landed heads during a particular awakening, then it also landed heads during the particular experimental run this awakening is a part of, and vice versa.Pierre-Normand

    Aye, I see what you mean. Thank you for your clarification. I really like the paper's use of the idea that the claim "X iff Y" works completely differently from "event X coincides with event Y", because event X and event Y belong to different sample spaces. And we usually leave that difference unattended to.
  • Pierre-Normand
    2.3k
    Given that the experiment doesn’t work by randomly selecting an interview from the set of all interviews, I don’t think it rational to reason as if it is. The experiment works by rolling a die, and so it is only rational to reason as if we’re randomly selected from the set of all participants.Michael

    Your assertion that I'm reasoning "as if" someone selected an interview from all possible interviews isn't quite accurate. Sleeping Beauty is fully aware that the current experimental run comprises either one or multiple awakenings, depending on the outcome of a coin toss. She doesn't assume that someone arbitrarily selected an awakening of hers. Rather, she knows she's currently experiencing an awakening and isn't certain whether it's the only one in the present run, or one among many.

    She's asked about her credence regarding the outcome of the coin toss. Given the known setup of the experiment, she can choose to calculate her credence in either the centered or non-centered heads outcome. She could even calculate both. If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3. If she decides to track her experimental runs (non-centered), her credence in heads is 1/2. These are credences in distinct outcomes, and both can serve different purposes.

    For example, in my hostage scenario, one approach maximizes one's chance of survival during an escape, while the other maximizes one's chances of being rescued by the police when communicating one's probable location. (I believe I may have made a mistake in my earlier analysis of the police rescue situation.)
  • Michael
    14.2k
    If she opts to track her awakenings (centered possible worlds), her credence in heads is 1/3.Pierre-Normand

    How does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews?
  • Pierre-Normand
    2.3k
    And how does one do this if not by reasoning as if one's interview is randomly selected from the set of possible interviews?Michael

    If by "reasoning as if one's interview is randomly selected from the set of possible interviews" you're referring to Sleeping Beauty's avowal of her inability to single out any one of those possible awakenings as more or less likely than another, then yes, she is reasoning in that way. However, this form of reasoning is guided by her need to assess her credence in a heads-awakening outcome (her centered possible world), not by the setup of the experiment.

    The design of the experiment allows participants to assess both their centered and non-centered credences. What they decide to track can be influenced by various factors. For instance, they might be trying to optimize their expected value or their chances of winning; they could be trying to maximize their chances of escaping alive versus maximizing their chances of being rescued. Their choice could also be guided by their interests, such as whether they want to track the frequency of heads-awakenings or the frequency of heads-experimental-runs. In other words, the choice is driven by the participants' individual needs and interests, not dictated by the structure of the experiment itself.
  • Michael
    14.2k
    Sleeping Beauty's inability to single out any one of those possible awakenings as more or less likely than anotherPierre-Normand

    Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, and so it is a non sequitur to argue that because Tails awakenings are twice as frequent in the long run they are twice as likely.

    So how does the thirder argue that they are equally likely if not by first reasoning as if an interview is randomly selected from the set of possible interviews?
  • Pierre-Normand
    2.3k
    Well, that’s the very thing being debated. A halfer might say that a Monday & Heads awakening is twice as likely as a Monday & Tails awakening, [...]Michael

    Would not a halfer say that they are equally as likely?
  • Michael
    14.2k
    Would not a halfer say that they are equally as likely?Pierre-Normand

    Equally likely to happen, such that P(Monday & Heads) = P(Monday & Tails) = P(Tuesday & Tails) = 1/2, as per that earlier Venn diagram, but not equally likely that today is that interview, because if P(Heads) = 1/2 then P(Monday & Heads) = 1/2, and so P(Monday & Tails) + P(Tuesday & Tails) = 1/2, therefore if P(Monday & Tails) = P(Tuesday & Tails) then P(Monday & Tails) = P(Tuesday & Tails) = 1/4.

    This was Lewis' reasoning in his paper.
  • Michael
    14.2k
    This scenario doesn't accurately reflect the Sleeping Beauty experiment. Instead, imagine that one bag is chosen at random. You are then given one ball from that bag, but you're not allowed to see it just yet. You then drink a shot of tequila that causes you to forget what just happened. Finally, you are given another ball from the same bag, unless the bag is now empty, in which case you're dismissed. The balls are wrapped in aluminum foil, so you can't see their color. Each time you're given a ball, you're invited to express your credence regarding its color (or to place a bet, if you wish) before unwrapping it.Pierre-Normand

    I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

    If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.

    If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.

    If I don't know anything then I should reason as if my ball was randomly selected from one of the two bags, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4 (or just P(R) = P(B) = 1/2).

    This contrasts with your reasoning as if my ball is randomly selected from a pile such that P(R) = P(B1) = P(B2) = 1/3.

    At the very least this shows how halfers can be double halfers to avoid Lewis' P(Heads | Monday) = 2/3.
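    As a neutral check on the numbers in play, here is a minimal simulation sketch (in Python; the trial count is arbitrary). Among repeated runs, the red bag is chosen about half the time, while among the balls actually handed over, R, B1, and B2 each show up about a third of the time. Which of these frequencies a credence ought to track is, of course, the very point in dispute:

    import random
    from collections import Counter

    per_ball = Counter()   # frequency among balls actually handed out
    per_run = Counter()    # frequency among bag selections (runs)
    trials = 100_000
    for _ in range(trials):
        balls = ["R"] if random.random() < 0.5 else ["B1", "B2"]
        per_run[balls[0][0]] += 1          # "R" or "B" for the chosen bag
        for b in balls:
            per_ball[b] += 1

    total_balls = sum(per_ball.values())
    print({k: round(v / total_balls, 3) for k, v in per_ball.items()})  # ≈ 1/3 each
    print({k: round(v / trials, 3) for k, v in per_run.items()})        # ≈ 1/2 each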
  • Pierre-Normand
    2.3k
    I didn't properly address this but I actually think it illustrates the point quite clearly. I'll amend it slightly such that the blue balls are numbered and will be pulled out in order, to better represent Monday and Tuesday.

    If I'm told that this is my first ball (it's Monday) then P(R) = P(B1) = 1/2.

    If I'm told that this is a blue ball (it's Tails) then P(B1) = P(B2) = 1/2.

    If I don't know whether this is my first ball or a blue ball (knowing neither that it's Monday nor that it's tails) then I should assume I randomly select a ball from a bag, and so P(R) = 1/2 and P(B1) = P(B2) = 1/4.

    This contrasts with your reasoning where we randomly select a ball from a pile such that P(R) = P(B1) = P(B2) = 1/3.
    Michael

    Your revised scenario seems to neglect the existence of a state where the player is being dismissed. This isn't quite equivalent to Milano's 'blind state' in the Sleeping Beauty context – where the 'blind state' refers to the case in which Sleeping Beauty is not awakened on Tuesday and thus never experiences it – but it functions in a similar way for the purpose of this analysis. In this state, the time comes to pull a second ball from the bag, but there's no such ball because the randomly selected bag only contained one. Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.
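    Spelled out as a computation (a minimal sketch in Python, using the four equiprobable narrow states just stipulated):

    # Narrow states: R and Dismissed for the red bag, B1 and B2 for the blue bag.
    priors = {"R": 0.25, "Dismissed": 0.25, "B1": 0.25, "B2": 0.25}

    # Seeing the game master pull out a ball rules out the Dismissed state.
    opp = {s: p for s, p in priors.items() if s != "Dismissed"}
    p_opp = sum(opp.values())     # P(Opp) = 3/4

    print(opp["R"] / p_opp)       # P(Red|Opp) = 1/3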

    I now concur with you that there are scenarios – contingent upon specific purposes or interests – where Sleeping Beauty would want her credence P(H|W) to track the possibility of her awakening during a Heads-experimental run. In such scenarios, since the 'blind state' Tuesday&asleep isn't an independent possibility but is considered merely a feature of the relevant centered state (i.e., the state where she is in a Heads experimental run), it can be assigned the same prior as the probability that she will be awakened (either once or twice) during a Heads-experimental-run. This prior is 1/2. In this case, P(H|W) is calculated to be 1/2, but the interpretation of states H and W differs. The space of possibilities is parameterized differently, corresponding to 'wider' centered possible worlds (i.e., centered states that span a whole experimental run).

    I've previously suggested cases where the participant's goals make them more interested in tracking narrower centered states. As revealed in the analysis by Laureano Luna (following Groisman), it can be misleading to assume that outcomes perfectly correlated with centered 'H & awake' states – regardless of how broadly or narrowly they are being defined – must also share the same probabilities. That doesn't mean the frequencies and probabilities don't track each other, a point Milano also makes. The (Bayesian) updated credences still track the frequencies of the relevantly defined centered states, which is why they can underpin rational betting behaviors, even though maximizing EV is just one among many goals an agent can have when making use of their credences.
  • Pierre-Normand
    2.3k
    Here is an excerpt of a discussion I've had with GPT-4. I don't think its last suggestion, about the error Lewis might be making regarding Beauty's updating her credence in H upon learning that today is Monday, is plausible. Despite that, and other small mistakes, I'm nevertheless quite impressed with its ability to grasp formal arguments.

    Reveal
    PN:

    Thank you! I've now made use of the stipulations above, assuming a prior P(W) = 3/4 that Sleeping Beauty is awakened on any given day, and that W and H represent narrow possible worlds centered on Beauty's potential awakening events (or on the event of her remaining asleep on Tuesday). I've calculated P(H|W) using Bayes' theorem and arrived at the expected answer. Would you kindly do the same, step by step, explaining your reasoning as you go along, to see if our intuitions match?

    GPT-4:

    Sure! We can use Bayes' Theorem to calculate P(H|W) step by step. Let me first explain the process.

    Bayes' Theorem states that P(H|W) = [P(W|H) * P(H)] / P(W), where:

    • P(H|W) is the probability of hypothesis H (that the coin landed heads) given evidence W (that Sleeping Beauty is awake),
    • P(W|H) is the probability of evidence W given hypothesis H,
    • P(H) is the prior probability of hypothesis H, and
    • P(W) is the prior probability of evidence W.
    Now, let's start by establishing our priors:

    • P(H) = 1/2 (because the coin is fair),
    • P(W) = 3/4 (as given in the problem; Sleeping Beauty is awakened on any given day with this probability).

    Next, we need to figure out P(W|H):

    • If H is true (the coin landed heads), Sleeping Beauty is awakened only once (on Monday), so P(W|H) = 1.

    We can now plug these values into Bayes' theorem to get P(H|W):

    P(H|W) = [P(W|H) * P(H)] / P(W)
    = [1 * 1/2] / 3/4
    = 1/2 / 3/4
    = 2/3.

    So, P(H|W) = 2/3, which means that given that Sleeping Beauty is awake, there is a 2/3 probability that the coin landed heads. This answer contrasts with the common answer of 1/3 often provided in the literature, but our version of the problem has altered the prior probabilities, which has in turn influenced the outcome.

    PN:

    I myself arrived at 1/3, and the prior seems consistent with what thirders also assume. I think your interpretation of P(W|H) may be questionable. P(W|H) is the probability that Beauty is in a narrow centered world where she is awakened, conditioned on the outcome H. When the outcome is H, she is awakened in the world Monday&H, and not awakened in the world Tuesday&H. P(W|H) is therefore 1/2, and P(H|W) is calculated to be 1/3, as expected.

    GPT-4:

    You're right, and I apologize for the confusion. My interpretation of P(W|H) was incorrect. Let's correct that.

    P(W|H) should indeed be the probability of W given H, which is the probability that Sleeping Beauty is awake on a given day given that the coin landed heads. In the case where the coin lands heads, Sleeping Beauty is only awakened on Monday, not Tuesday. Therefore, P(W|H) is indeed 1/2.

    Given that, let's recalculate:

    P(H|W) = [P(W|H) * P(H)] / P(W)
    = [1/2 * 1/2] / 3/4
    = 1/4 / 3/4
    = 1/3.

    You're correct. Given that Sleeping Beauty is awake, there is a 1/3 probability that the coin landed heads, which is the commonly accepted solution among "thirders". Thank you for your patience and for catching that mistake.

    PN:

    No worry! Unlike you, I always have the opportunity to reread my drafts and scour for mistakes!

    GPT-4:

    Thank you for your understanding! It's true that as a machine learning model, I provide responses in real time without the ability to review or edit my responses after they are generated. I rely on the accuracy of the information I was trained on and the rules of inference I use to generate responses. I always strive for precision, so I appreciate your patience and understanding when occasional errors do occur. Let me know if you have any more questions or problems to solve - I'm here to help!

    PN:

    Interestingly, my approach can also warrant the halfer solution when suitable priors are chosen consistently with the focus being displaced onto wider centered worlds that span complete experiment runs. (This is the approach Michael favors, although he is inexplicit about his choice of centered worlds.) Lewis's approach, though, has the untoward consequence that upon learning that today is Monday, Beauty must update her credence in H to 2/3. Intuitively, it should remain 1/2. In the light of our previous discussion, can you surmise what mistake Lewis might be making?

    GPT-4:

    Lewis's approach is a prominent variant of the halfer position in the Sleeping Beauty problem. According to his view, Sleeping Beauty should assign a credence of 1/2 to the coin landing heads because the initial probability, before she went to sleep, was 1/2 and she doesn't get any new information upon waking up that would lead her to update this credence.

    However, as you've pointed out, upon learning that it is Monday, Lewis suggests that Sleeping Beauty should update her credence in the coin having landed heads to 2/3. This has been criticized as counterintuitive. From Sleeping Beauty's perspective, the fact that it's Monday doesn't seem to provide her with any information that would make the coin landing heads more probable.

    One potential error in Lewis's reasoning might be related to the point we've discussed earlier, namely the definition and interpretation of the events and the kind of possible world one chooses to center upon. When Sleeping Beauty wakes up, her situation is not symmetrical with respect to the possible worlds "Heads & Monday" and "Tails & Monday". While it's true that these two worlds are equally likely given that it's Monday, Sleeping Beauty's evidence upon waking up isn't just that it's Monday, but rather something like "it's an awakening day and I don't know which day it is". If one interprets the evidence in this way, it seems more reasonable to say that her credence in heads should remain at 1/2 upon learning that it's Monday.

    Additionally, an error in Lewis' argument could also arise from a misunderstanding of conditional probabilities, specifically the probability of it being Monday given that the coin landed heads, P(Monday|H). This would naturally be 1, because if the coin landed heads, she is only awakened on Monday. The halfer could potentially misinterpret this as saying the coin is more likely to have landed heads given that it's Monday, which isn't necessarily the case.

    As always in these sorts of debates, much depends on how exactly we interpret the notions of evidence, updating, and conditional credence. Different interpretations can lead to different solutions to the problem.
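    For reference, the two computations from the exchange above condense to a few lines (a sketch in Python, using the stipulated priors P(H) = 1/2 and P(W) = 3/4 over narrow centered states):

    p_h = 1 / 2   # fair coin
    p_w = 3 / 4   # stipulated prior that Beauty is awake on a given day

    # GPT-4's first (mistaken) reading, P(W|H) = 1:
    print((1.0 * p_h) / p_w)   # 2/3

    # Corrected reading: under H she is awake in Monday&H but not Tuesday&H,
    # so P(W|H) = 1/2:
    print((0.5 * p_h) / p_w)   # 1/3, the thirder answer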
  • Michael
    14.2k
    Your revised scenario seems to neglect the existence of a state where the player is being dismissed.Pierre-Normand

    It doesn't. You're dismissed after red or the second blue.

    It is still the case that if I don't know whether this is Monday or Tails then I reason as if my ball is randomly selected from one of the two bags, such that P(R) = 1/2 and P(B1) = P(B2) = 1/4 (or just P(R) = P(B) = 1/2). It better reflects how the experiment is actually conducted.

    I don't reason as if my ball is randomly selected from a pile such that P(R) = P(B1) = P(B2) = 1/3.
  • Michael
    14.2k
    Even though the player is dismissed (as opposed to Sleeping Beauty, who is left asleep), a prior probability of P(Dismissed) = 1/4 can still be assigned to this state where he loses an opportunity to bet/guess. Upon observing the game master pulling out a ball, the player updates his prior for that state to zero, thus impacting the calculation of the posterior P(Red|Opp). If we assign P(Dismissed) = 1/4, it follows that P(Red|Opp) = 1/3.Pierre-Normand

    If you're going to reason this way then you also need to account for the same with blue. You reach in after the second blue and pull out nothing. So really P(Dismissed) > 1/4.

    I just don't think it makes sense to reason this way.

    If it helps, consider no tequila after the final ball. After being given a ball and asked your credence you're then dismissed if it's either red or second blue.
  • Pierre-Normand
    2.3k
    It doesn't. You're dismissed after red or the second blue.Michael

    That's cute, but it rather misses the point. Sleeping Beauty is also dismissed on the third day no matter what. But if we ignore the narrow centered case where the day is Tuesday and the coin landed Heads, we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.
  • Michael
    14.2k
    we are changing the structure of the problem and making it unintelligible that we should set the prior P(W) to 3/4.Pierre-Normand

    That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

    That P(Heads & Tuesday (or second waking)) consideration is a distraction that leads you to the wrong conclusion.
  • Pierre-Normand
    2.3k
    If you're going to reason this way then you also need to account for the same with blue. You reach in after the second blue and pull out nothing. So really P(Dismissed) > 1/4.Michael

    Under the narrow centered worlds parameterization of the problem space, the possibility (indeed, certainty) of Sleeping Beauty being dismissed on the third day is irrelevant. The third day isn't a day of possible interviews. And likewise in the ball picking analogy, if we wish to make it mirror the structure of the Sleeping Beauty problem.
  • Pierre-Normand
    2.3k
    That's precisely the point, and why I suggested ignoring days and just saying that if heads then woken once and if tails then woken twice. P(W) = 1.

    That P(Heads & Tuesday (or second waking)) consideration is a distraction that leads you to the wrong conclusion.
    Michael

    There are two possible conclusions depending on the way you choose to define the temporal span of your centered possible worlds (the states in the problem space). The conclusions are different because the questions are different. Of course, if you are unclear about the manner in which you define your states, then you might get confused and answer the question about the narrowly defined states (possible awakenings) with an answer that is suitable for the widely defined states (possible experiment runs), or vice versa. But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.
  • Michael
    14.2k
    But what determines the right question to ask isn't the statement of the Sleeping Beauty problem as such but rather your interest or goal in asking the question. I gave examples where either one is relevant.Pierre-Normand

    Are you referring to the safehouse and escape? That's a different scenario entirely.

    I flip a coin. If heads then I flip again. If heads you win a car, otherwise you win nothing. If the first flip is tails then I flip again. If heads you win a motorbike, otherwise I flip again. If heads you win a motorbike, otherwise you win nothing.

    I do this in secret and then tell you that you've won a prize. Given that you're more likely to win a prize if it's tails, it's reasonable to believe that it was most likely tails.

    Now consider a similar scenario, but if heads then you win a car and if tails then you win two motorbikes. I do this in secret and tell you that you've won at least one prize. It is not reasonable to believe that it was most likely tails.
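    To make the two games concrete, here is a minimal simulation sketch (in Python; the trial count is arbitrary). In the first game, winning a prize shifts the odds to about 3/5 in favour of tails; in the second, "at least one prize" is guaranteed either way, so nothing shifts:

    import random

    def game1():
        first = random.choice(["H", "T"])
        if first == "H":
            prize = random.random() < 0.5   # second flip: car or nothing
        else:
            # flip for a motorbike; on tails, one more flip for a motorbike
            prize = random.random() < 0.5 or random.random() < 0.5
        return first, prize

    results = [game1() for _ in range(100_000)]
    prize_winners = [first for first, prize in results if prize]
    print(prize_winners.count("T") / len(prize_winners))  # ≈ 3/5: evidence for tails

    # Game 2: heads -> a car, tails -> two motorbikes. "At least one prize"
    # is true on every run, so P(tails | at least one prize) stays at 1/2.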

    Your safehouse and escape example is analogous to that first case. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing analogous to an escape opportunity with which to reassess the probability of the coin toss.
  • Pierre-Normand
    2.3k
    Your safehouse and escape example is analogous to that first case, not the second. Your introduction of opportunities to escape changes the answer, and so it isn't comparable to the Sleeping Beauty case where there is nothing comparable to it (and so if anything the second case above is closer to the Sleeping Beauty problem).Michael

    My safehouse and escape example – before I introduce the idea of escape possibilities – is designed in such a way that the prisoner's epistemic situation exactly mirrors Sleeping Beauty's epistemic situation. Upon awakening on any given day, the prisoner's credence that they are in safehouse #1 is the same as Sleeping Beauty's credence that the coin landed heads. In both cases, 'H' and 'being-in-safehouse-#1' are states that admit of both narrow and wide centered-possible-world interpretations. If the agents' expressions of their credences are meant to target the narrow states, then they are trying to track their frequencies as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track their frequencies as distributed over experimental runs (or kidnapping events). Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice of the narrow versus wide interpretations of the states that they are about.

    In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 5/6 position).
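    Both figures can be read off the corresponding long-run frequencies (a minimal simulation sketch in Python; the trial count is arbitrary):

    import random

    days_in_1 = days_total = events_in_1 = 0
    events = 100_000
    for _ in range(events):
        if random.randint(1, 6) <= 5:     # die lands 1-5: safehouse #1, one day
            events_in_1 += 1
            days_in_1 += 1
            days_total += 1
        else:                             # die lands 6: safehouse #2, six days
            days_total += 6

    print(days_in_1 / days_total)   # ≈ 5/11: frequency over awakening days
    print(events_in_1 / events)     # ≈ 5/6: frequency over kidnapping events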
  • Michael
    14.2k
    Introducing the concept of escape possibilities was intended to illustrate that what is at stake in maximizing the accuracy of the expressed credences can dictate the choice of the narrow versus wide interpretations of the states that they are about.

    In the safehouse and escape example: if the prisoner's goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' on any given awakening day, they should adopt the 'thirder' position (or a 5/11 position). If their goal is to maximize their chances of correctly predicting 'being-in-safehouse-#1' for any given kidnapping event (regardless of its duration), they should adopt the 'halfer' position (or a 5/6 position).
    Pierre-Normand

    Introducing the concept of escape possibilities simply changes the answer. You're more likely to have an opportunity to escape in safehouse #2, and so if given an opportunity to escape then you are more likely in safehouse #2. P(Safehouse #2|escape opportunity) > P(Safehouse #1|escape opportunity)

    This answer isn't relevant to a scenario where there are no opportunities to escape, where P(Safehouse #1|no escape opportunity) > P(Safehouse #2|no escape opportunity).

    If the agents' expressions of their credences are meant to target the narrow states, then they are trying to track their frequencies as distributed over awakening episodes. If they are meant to target the wide states, then they are trying to track their frequencies as distributed over experimental runs (or kidnapping events).Pierre-Normand

    I don't understand the connection between Sleeping Beauty's credence that the coin landed heads and the tracked frequency of heads-awakenings. It's a non sequitur to claim that because tails-awakenings are twice as frequent over repeated experiments then a coin toss having landed tails is twice as likely in any given experiment.

    Sleeping Beauty is being asked "in this current, one-off experiment, what is the probability that the coin I tossed on Sunday evening landed heads?".

    She's not being asked to guess if it's heads or tails and then being rewarded for each successful guess.

    Her choice of guess in the latter has nothing to do with what her answer would be to the former.

    If I were Sleeping Beauty I would answer "1/2" and guess tails.
  • Pierre-Normand
    2.3k
    It doesn't make sense to me to say that the answer in this scenario is relevant to a scenario where there are no opportunities to escape.Michael

    The opportunity to escape just enables the prisoner to put their credence to good use, and to choose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation.

    I discussed this a little further with GPT-4:
    Reveal

    PN:

    Just to give you the idea (since your context window is narrower than Michael's): in this variation, a hostage is taken to one of two safehouses based on the throw of a die. If the die doesn't land on six, the hostage is taken to safehouse #1. If the die lands on six, the hostage is taken to safehouse #2 for six consecutive days. He is also administered an amnesia-inducing drug each night. The hostage is informed of this protocol. The question posed is: "On any given day, what is your credence in the proposition 'I am at safehouse #1'?" The credence here refers to the hostage's subjective probability, or belief, in being at safehouse #1. The answer provided in the discussion suggests that the hostage's credence should be 5/11 rather than 5/6, despite the fact that there is a 5/6 chance of the die not landing on six. This reasoning is based on the fact that the hostage gets to guess six times regarding the same die throw when the result is six.

    GPT4:

    Thank you for providing the context, it's indeed a crucial piece of information for this discussion. I now see why the 'thirder' position, or the 5/11 in this case, comes into play. It's a fascinating way to illustrate the Sleeping Beauty problem. And it makes perfect sense to apply the reasoning here in a similar way, given the similar structure of the problems.

    When the hostage awakes, they have to consider not only the initial 5/6 chance of being in safehouse #1 but also that, should they be in safehouse #2, they could be guessing their location six different times on the basis of the same die roll. Hence, despite the initial die roll favoring safehouse #1, the multiplicity of questioning events when in safehouse #2 adjusts the credence to 5/11.

    Your explanation above about how the hostage's goals can dictate whether they adopt a 'thirder' or 'halfer' perspective is clear and logical, in light of this information.

    PN:

    In a further refinement I suggested that safehouse #1 is surrounded by a crocodile-infested moat, while safehouse #2 is surrounded by a forest filled with lions. While escaping, the hostage can carry either a large wooden plank (to bridge the moat) or a torch (to ward off the lions), but not both.

    I suggested that the prisoner should maximise their survival chances by carrying the torch, in line with their 6/11 credence of being in safehouse #2. But this is true only if their chances of finding such an escape opportunity are proportional to the number of days they are held captive, such that this finding doesn't yield an updating of their credence.

    Likewise, if they wish to direct the police to rescue them (by communicating the address of one safehouse), and know that they can only be afforded one opportunity to do so per kidnapping event, they should direct them to safehouse #1. One twist here is that under those conditions, finding the unique opportunity to communicate with the police would lead the prisoner to update their credence to 5/6 that they are in safehouse #1. But if they can ensure that their message will be received by sending it every day, and that the police will not arrive before the end of the "kidnapping event" (but before they are moved overseas, say), then their credence will not be updated and it will be rational for them to direct the police to safehouse #1 despite their credence being that they are in safehouse #2 with 6/11 probability. Can you see why?

    GPT4:

    This scenario adds another layer of complexity to the problem, but your explanations are clear and make sense.

    When it comes to choosing between the plank and the torch, if the hostage's chances of finding an escape opportunity are proportional to the number of days they are held captive, and finding such an opportunity doesn't lead to an updating of their credence about which safehouse they're in, it's rational for them to choose the torch, aligning with their 6/11 credence of being in safehouse #2. This is because they will have more opportunities to use it.

    On the other hand, if they can only communicate with the police once per kidnapping event, it would be more rational to direct the police to safehouse #1, even though their credence is that they are in safehouse #2 with a 6/11 probability. If finding this unique opportunity to communicate leads the hostage to update their credence to 5/6 that they are in safehouse #1, then it makes more sense to direct the police to safehouse #1. This is because, after the update, the chances that they are in safehouse #1 are higher.

    However, if the hostage is able to send a message every day, and they know that the police will not arrive until the end of the kidnapping event, it would still be rational to direct the police to safehouse #1, despite the 6/11 credence of being in safehouse #2. The reason is that, in this scenario, the credence will not be updated every day because the opportunity to send a message doesn't provide new information about their location. In this case, the best strategy would be to guide the police to the location they're most likely to be in over the course of the whole kidnapping event, which would be safehouse #1 due to the higher probability of being taken there initially, despite having fewer chances to send a message from there.

    PN:

    Yes, indeed. It seems counterintuitive that the hostage thinks that they are more likely than not to presently be located in safehouse #2 but that they are being rational in directing the police to safehouse #1. But that's because even though they are thereby more likely to be sending a message that will direct the police to the wrong safehouse, those wrong messages will arrive in batches of six and will only misdirect the police once!

    GPT4:

    Yes, exactly! This counterintuitive scenario arises from the peculiar structure of the problem, which combines the probabilities of being in a particular safehouse on a given day with the probabilities of being there across the entirety of the kidnapping event. The messages, when sent in batches, will indeed only misdirect the police once, even though there are six of them. And even though on any given day the hostage thinks they're more likely in safehouse #2, across the entirety of the kidnapping event, they're more likely to be in safehouse #1. Therefore, it makes sense to direct the police to safehouse #1, which seems counterintuitive when considering daily probabilities but becomes logical when taking into account the entire event's probabilities.
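    The arithmetic behind that last point can be made concrete (a minimal sketch in Python; the daily-message protocol is as stipulated above, with the police acting on the batch only once per event):

    import random

    rescued = wrong_msgs = right_msgs = 0
    events = 100_000
    for _ in range(events):
        in_house_1 = random.randint(1, 6) <= 5    # 5/6: safehouse #1 for one day
        days = 1 if in_house_1 else 6             # 1/6: safehouse #2 for six days
        if in_house_1:
            right_msgs += days    # every daily message says "safehouse #1"
            rescued += 1          # police go to #1 and find the hostage
        else:
            wrong_msgs += days    # six wrong messages, but only one misdirection

    print(rescued / events)                          # ≈ 5/6 of events end in rescue
    print(wrong_msgs / (wrong_msgs + right_msgs))    # ≈ 6/11 of messages are wrong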
  • Michael
    14.2k
    The opportunity to escape just enables the prisoner to put their credence to good use, and to choose how to most appropriately define the states that those credences are about. It doesn't change their epistemic situation.Pierre-Normand

    It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape.

    You're more likely to win a prize if it's tails, therefore upon being offered a prize you reason that tails is more likely.

    You're more likely to win the opportunity to escape if in safehouse #2, therefore upon being offered the opportunity to escape you reason that safehouse #2 is more likely.
  • Michael
    14.2k
    To set out the scenario:

    Dice roll 1-5 (safehouse #1): day 1, 50% opportunity to escape
    Dice roll 6 (safehouse #2): day 1, 50% opportunity to escape; day 2, 50% opportunity to escape; day 3, 50% opportunity to escape; day 4, 50% opportunity to escape; day 5, 50% opportunity to escape; day 6, 50% opportunity to escape

    It's quite straightforward that P(Dice roll 6|opportunity to escape) > P(Dice roll 1-5|opportunity to escape), but P(Dice roll 1-5|no opportunity to escape) > P(Dice roll 6|no opportunity to escape).

    And so when there's a 0% opportunity to escape on each day, as with the traditional problem, P(Dice roll 1-5) > P(Dice roll 6).
  • Pierre-Normand
    2.3k
    It does change the epistemic situation. It's exactly like the scenario with the coin tosses and the prizes, where in this case the prize is the opportunity to escape.Michael

    Which is why I had included the proviso that the (rare) opportunities be proportional to the number of days the hostage is held captive. Under those conditions, they carry no information to the hostage.

    Likewise, enabling Sleeping Beauty to bet on H on each awakening provides no information to her, provided the payouts are delivered only after the experiment is over.
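    As a frequency check on that betting setup (a minimal sketch in Python; the $1 even-money stake is an illustrative assumption, with all payouts settled after the run):

    import random

    bankroll = bets = wins = 0
    for _ in range(100_000):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2       # one awakening on heads, two on tails
        for _ in range(awakenings):          # $1 even-money bet on heads each time
            bets += 1
            if heads:
                wins += 1
                bankroll += 1
            else:
                bankroll -= 1

    print(wins / bets)       # ≈ 1/3: per-awakening frequency of heads
    print(bankroll / bets)   # ≈ -1/3 per bet: even money on heads loses per awakening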