• fdrake
    6.7k
    When looking at a scenario with randomness in it, it's quite interesting to look at what sources of randomness there are and how they interact.

    (1) In Michael's analysis, the only thing random is the value in the other envelope. The sample space of the random variable is {X/2,2X}

(2) In every analysis which deals with many possible values for what is contained in an envelope, the amount in one envelope is random, and then it is random whether that envelope is X or 2X. The sample space for X is (0,infinity).

(3) In Jeremiah's analysis, the random thing is which envelope contains which amount. The sample space is {X,2X}.

(4) In my analysis, which pair of envelopes consistent with an amount U the person received was random, then whether that envelope was U=X or U=2X was random. The sample space for the first distribution was { {X,2X}, {X,X/2} } and the sample spaces for the conditional distributions were {X,2X} and {X/2,X}.

    These analyses are related through some conditioning operations. In principle, any distribution on the value of X as in (2) can be conditioned upon. This yields case (1) if what Michael is doing with the sample spaces is vindicated or case (3) or (4) if it is not.

    Analysing whether to switch or not comes down to questions on what distribution to consider. This distribution can be different from the one considered in cases (1)->(4).

In case (1), Michael believes P(X|U=10) distributes evenly over his sample space. In case (2), as far as this thread is concerned, a diffuse uniform distribution is placed on (0,M) where M is large, or the flat prior (y=1) is placed on (0,infinity); this translates to (something like) an equiprobability assumption on the possible values of X - this can be combined with an equiprobability assumption on which envelope the person receives to (wrongly) achieve (1), or to achieve (4) by conditioning. In case (3) an equiprobability assumption is used on which envelope the person receives but not on which set of envelopes the person receives. The conditional distributions for scenario (4) given {X,X/2} or {X,2X} are equivalent to those considered by Jeremiah in case (3).

The choice to switch in case (2) makes the choice based on an intuition of P(received amount is the smaller value) where the amounts are uniformly distributed in [0,M]; for sufficiently large M this suggests a switch. The choices in (1),(3),(4) instead base the decision on the probability of envelope reception. (1), (3) and (4) are fully consistent with (2) so long as the appropriate P(received amount is the smaller value | specific envelope amount) (envelope reception distribution) is considered rather than P(received amount is the smaller value) (amount specification distribution).

There aren't just disagreements on how to model the scenario, there are disagreements on where the relevant sources of randomness or uncertainty are. The fundamental questions distinguishing approaches detailed in this thread are (A) where does the randomness come from in this scenario? and (B) what model is appropriate for that randomness?
  • Pierre-Normand
    2.4k
    If there's £10 in my envelope then the expected value for switching is £12.50, and the expected value for switching back is £10.Michael

    That's only true if £10 isn't at the top of the distribution. When the bounded and uniform distribution for single envelope contents is for instance (£10, £5, £2.5, £1.25 ...) then the expected value for switching from £10 (which is also a unique outcome) is minus £5 and, when it occurs, it tends to wipe out all of the gains that you made when you switched from smaller amounts. Even if you play the game only once, this mere possibility also nullifies your expected gain. Assuming your goal merely is to maximise your expected value, you have no reason to favor switching over sticking.
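    A quick enumeration bears this out. The Python sketch below is an illustration only: the post's ladder is open-ended below, so it is truncated here to three pairs, and the pairs are (by assumption) equally likely, with each envelope of a pair received with probability 1/2.

    ```python
    from fractions import Fraction

    # Hypothetical truncation of the ladder: single-envelope amounts
    # 1.25, 2.5, 5, 10, giving three (assumed equally likely) pairs.
    pairs = [(Fraction(5, 4), Fraction(5, 2)),
             (Fraction(5, 2), Fraction(5)),
             (Fraction(5), Fraction(10))]

    # Each (received amount v, other envelope w); each outcome has
    # probability 1/3 (pair) * 1/2 (envelope) = 1/6.
    outcomes = [(v, w) for (lo, hi) in pairs for (v, w) in ((lo, hi), (hi, lo))]
    p = Fraction(1, len(outcomes))

    def cond_gain(v):
        """Expected gain from switching, given the observed amount v."""
        rel = [(x, w) for (x, w) in outcomes if x == v]
        return sum(w - x for (x, w) in rel) / len(rel)

    print(cond_gain(Fraction(10)))              # -5: £10 is the top, you can only lose
    print(sum(p * (w - v) for (v, w) in outcomes))  # 0: always-switching gains nothing overall
    ```

    The certain loss of £5 at the top exactly cancels the expected gains from every lower rung, so the overall gain from always switching is zero.
    
    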
  • Michael
    15.8k
    That's only true if £10 isn't at the top of the distribution. When the distribution for single envelope contents is for instance (£10, £5, £2.5, £1.25 ...) then the expected value for switching from £10 (which is also a unique outcome) is minus £5 and, when it occurs, it tends to wipe out all of the gains that you made when you switched from smaller amounts. Even if you play the game only once, this mere possibility also nullifies your expected gain.Pierre-Normand

    There's also the possibility that £10 is the bottom of the distribution, in which case the expected value for switching is £20.

    Assuming your goal merely is to maximise your expected value, you have no reason to favor switching over sticking.Pierre-Normand

    Which, as I said before, is equivalent to treating it as equally likely that the other envelope contains the smaller amount as the larger amount, and so it is rational to switch.
  • Michael
    15.8k
    If there is no reason to favour sticking then ipso facto there is reason to favour switching.
  • Pierre-Normand
    2.4k
    There's also the possibility that £10 is the bottom of the distribution, in which case the expected value for switching is £20.Michael

    Sure. We can consider a distribution that is bounded on both sides, such as (£10,£20,£40,£80), with envelope pairs equally distributed between ((£10,£20),(£20,£40),(£40,£80)).

    The issue is this: are you prepared to apply the principle of indifference, and hence to rely on an expected value of 1.25X, for switching for any value in the (£10,£20,£40,£80) range? In the case where you switch from £10, you will have underestimated your conditional expected gain by half. In the cases where you switch from either £20 or £40, your expectations will match reality. In the case where you switch from £80 then your expected loss will be twice as large as the gain that you expected. The average of the expected gains for all the possible cases still will be zero.

    Assuming your goal merely is to maximise your expected value, you have no reason to favor switching over sticking.
    — Pierre-Normand

    Which, as I said before, is equivalent to treating it as equally likely that the other envelope contains the smaller amount as the larger amount, and so it is rational to switch.

    It is not rational since you are ignoring the errors that you are making when the envelope contents are situated at both ends of the distribution, and the expected loss at the top wipes out all of the expected gains from the bottom and the middle of the distribution. When this is accounted for, the principle of indifference only tells you that while it is most likely that your expected gain is 1.25X, and it might occasionally be 2X, in the cases where it is 0.5X, the loss is so large that, on average, your expected gain from switching still remains exactly X.
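    The (£10,£20,£40,£80) ladder can be enumerated exactly. A minimal Python sketch, assuming (as the post does) the three pairs are equally likely and each envelope of a pair is received with probability 1/2:

    ```python
    from fractions import Fraction

    # The bounded ladder from the post: three equally likely pairs.
    pairs = [(10, 20), (20, 40), (40, 80)]
    outcomes = [(v, w) for lo, hi in pairs for v, w in ((lo, hi), (hi, lo))]
    p = Fraction(1, len(outcomes))  # 1/6 per (received, other) outcome

    def cond_gain(v):
        """Expected gain from switching, given the observed amount v."""
        rel = [(x, w) for x, w in outcomes if x == v]
        return Fraction(sum(w - x for x, w in rel), len(rel))

    for v in (10, 20, 40, 80):
        print(v, cond_gain(v))  # 10: +10, 20: +5, 40: +10, 80: -40

    print(sum(p * (w - v) for v, w in outcomes))  # 0 overall
    ```

    Switching from £10 is a sure doubling, switching from £20 or £40 gains 0.25v as the indifference argument predicts, and switching from £80 is a sure loss of £40; the probability-weighted average of all the conditional gains is exactly zero.
    
    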
  • Michael
    15.8k
    The issue is this: are you prepared to apply the principle of indifference, and hence to rely on an expected value of 1.25X, for switching for any value in the (£10,£20,£40,£80) range?Pierre-Normand

    If I don't know the distribution, then yes. For all I know, the distribution could be (£10,£20,£40,£80,£160).

    It is not rational since you are ignoring the errors that you are making when the envelope contents are situated at both ends of the distribution, and the negative error at the top wipes out all of the expected gains from the bottom and the middle of the distribution. When this is accounted for, the principle of indifference only tells you that while it is most likely that your expected gain is 1.25X, and it might occasionally be 2X, in the cases where it is 0.5X, the loss is so large that, on average, your expected gain from switching still remains exactly X.Pierre-Normand

    But we're just talking about the rational decision for a single game. And my point remains that if I have no reason to believe that it's more likely that the other envelope contains the smaller amount than the larger amount (or vice versa) then I am effectively treating both cases as being equally likely, and if I am treating both cases as being equally likely then it is rational to switch.
  • Pierre-Normand
    2.4k
    then I am effectively treating both cases as being equally likely, and if I am treating both cases as being equally likely then it is rational to switch.Michael

    Yes, you are justified in treating the "cases" (namely, the cases of being dealt the largest or smallest envelope within the pair) as equally likely but you aren't justified in inferring that, just because the conditional expected gain most often is 1.25X, and occasionally 2X, when X is at the bottom of the range, therefore it is rational to switch rather than stick. That would only be rationally justified if there weren't unlikely cases (namely, being dealt the value at the top of the distribution) where the conditional loss from switching (0.5X) is so large as to nullify the cumulative expected gains from all the other cases.
  • Michael
    15.8k
    That would only be rationally justified if there weren't unlikely cases (namely, being dealt the value at the top of the distribution) where the conditional loss from switching (0.5X) is so large as to nullify the cumulative expected gains from all the other cases.Pierre-Normand

    But we're just talking about a single game, so whether or not there is a cumulative expected gain for this strategy is irrelevant.

    If it's more likely that the expected gain for my single game is > X than < X then it is rational to switch.

    Or if I have no reason to believe that it's more likely that the other envelope contains the smaller amount then it is rational to switch, as I am effectively treating a gain (of X) as at least as likely as a loss (of 0.5X).
  • Jeremiah
    1.5k


    You created a double standard. You can try to bury that in text, but that is what happened.
  • Efram
    46


    If there is no reason to favour sticking then ipso facto there is reason to favour switching.Michael

    I've noticed you express this sentiment a couple of times in the thread; could you elaborate further? I'm not sure how you're arriving at "A isn't favourable so B must be" - bypassing "neither A nor B is favourable."

    If I were to borrow your logic, I could say that if given the choice between being shot in the head and having my head cut off, if I conclude that there's no advantage to having my head cut off, I must surely want to be shot in the head? :p

    (Edited to actually add the quote)
  • Srap Tasmaner
    5k
    Here's the OP:

    Problem A
    1. You are given a choice between two envelopes, one worth twice the other.
    2. Having chosen and opened your envelope, you are offered the opportunity to switch.
    3. You get whichever envelope you chose last.

    Here's a slight variation on Problem A:

    Problem B
    1. You are given a choice between two envelopes, one worth twice the other.
    1.5 Having chosen but before opening your envelope, you are offered the opportunity to switch.
    2. Having chosen and opened your envelope, you are offered the opportunity to switch.
    3. You get whichever envelope you chose last.

    Here's a slight variation on Problem B:

    Problem C
    1. You are given a choice between two envelopes, one worth twice the other.
    1.25 Having chosen but before the envelope being designated "yours" for the next step, you are offered the opportunity to switch.
    1.5 Having chosen but before opening your envelope, you are offered the opportunity to switch.
    2. Having chosen and opened your envelope, you are offered the opportunity to switch.
    3. You get whichever envelope you chose last.

    And we can keep going. Wherever you have been offered the opportunity to switch, we can add a step in which you are offered the opportunity to switch back before moving on to the next step. The number of steps between being offered the first choice and getting the contents of (or receiving a score based on) your last choice can be multiplied without end. At some points, there may be a change in labeling the envelopes (now we call this one "your envelope" and that one "the other envelope"); and at one point, you are allowed to learn the contents of an envelope.

    Suppose you wanted to make a decision tree for the OP, Problem A. You'd have to label things somehow to get started.
    envelope_tree_a.png
    How to label the nodes after the first choice? We could switch to "YOURS" and "OTHER"; later, once a value has been observed, we could switch to, say, "Y ∊ {a}" and "U ∊ {a/2, 2a}" for labels.

    But of course all we're doing is relabeling. This is in fact only a "tree" in a charitable sense. There is one decision, labeled here as a choice between "LEFT" and "RIGHT", and there are never any new decisions after that -- no envelopes added or removed, for instance -- and each step, however described, is just the same choice between "LEFT" and "RIGHT" repeated with new labels. You can add as many steps and as many new labels between the first choice and the last as you like, but there is really only one branching point and two possibilities; it is the same choice between the same two envelopes, at the end as it was at the beginning.
  • Michael
    15.8k
    I've noticed you express this sentiment a couple of times in the thread; could you elaborate further? I'm not sure how you're arriving at "A isn't favourable so B must be" - bypassing "neither A nor B is favourable."

    If I were to borrow your logic, I could say that if given the choice between being shot in the head and having my head cut off, if I conclude that there's no advantage to having my head cut off, I must surely want to be shot in the head?
    Efram

    Say you have £10 in your envelope. If the other envelope contains the larger amount then it contains £20 and if it contains the smaller amount then it contains £5. If it is equally likely to contain the larger amount as the smaller amount then the expected value for switching is £12.50, and so it is rational to switch.

    If you have no reason to believe that it is more likely to contain the smaller amount and no reason to believe that it is more likely to contain the larger amount, then you are effectively applying the principle of indifference, and treating it as equally likely to contain the larger amount as the smaller amount (which is why some are saying that it doesn't matter if you stick or switch). But given the above, if you are treating each outcome as equally likely then you are treating the expected value of the other envelope to be £12.50, and so it is rational to switch.

    For it to be the case that the rational decision is indifference you need to calculate a 2/3 chance that the other envelope contains the smaller amount and a 1/3 chance that the other envelope contains the larger amount. But you can't do that if the only information you have is that your envelope contains £10 and that one envelope contains twice as much as the other.
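    The 2/3 figure can be checked directly. The Python sketch below verifies that 2/3 : 1/3 odds make the expected value of the other envelope exactly £10, and exhibits one prior that would yield those odds; the prior (weight proportional to 1/X on the smaller amount X of each pair) is a hypothetical illustration, not something given in the OP.

    ```python
    from fractions import Fraction

    # With £10 in hand: indifference requires P(other = £5) = 2/3,
    # P(other = £20) = 1/3.
    ev = Fraction(2, 3) * 5 + Fraction(1, 3) * 20
    print(ev)  # 10 -- exactly the amount held, so switching is a wash

    # One hypothetical prior producing those odds: weight the smaller
    # amount X of each pair proportionally to 1/X.  Seeing £10 is
    # consistent with pair (5,10) or pair (10,20), each via a fair coin.
    w_small = {5: Fraction(1, 5), 10: Fraction(1, 10)}          # unnormalised 1/X weights
    lik = {x: w * Fraction(1, 2) for x, w in w_small.items()}   # x = smaller amount of the pair
    total = sum(lik.values())
    print(lik[5] / total, lik[10] / total)  # 2/3 1/3
    ```

    So indifference isn't impossible; it just requires a specific non-uniform prior, which, as the post says, cannot be inferred from the £10 observation alone.
    
    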
  • JeffJo
    130
    I'm not really sure that this addresses my main question. ... There's a 50% chance of picking the lower-value envelope, and so after having picked an envelope it's in an "unknown state" that has a 50% chance of being either the lower- or the higher-value envelope?Michael
    When all you consider is the relative chances of "low" compared to "high," this is true. When you also consider a value v, you need to use the relative chances of "v is the lower value" compared to "v is the higher value." This requires you to know the distribution of possible values in the envelopes. Since the OP doesn't provide this information, you can't use your solution. And no matter how strongly you feel that there must be a way to get around this, you can't.

    +++++
    Let's step back, and try to establish this in a simpler way. Compare these two solutions:

    1. Say your envelope contains v.
      • There is a 50% chance that the other envelope contains v/2.
      • There is a 50% chance that the other envelope contains 2v.
      • The expected value of the other envelope is (v/2)/2 + (2v)/2 = 5v/4.
      • So switching has an expected gain of 5v/4 - v = v/4.
    2. Say the total amount of money in the two envelopes is t.
      • There is a 50% chance that your envelope contains t/3, and the other contains 2t/3.
      • There is a 50% chance that your envelope contains 2t/3, and the other contains t/3.
      • The expected value of your envelope is (t/3)/2 + (2t/3)/2 = t/2.
      • The expected value of the other envelope is (2t/3)/2 + (t/3)/2 = t/2.
      • So switching has an expected gain of t/2 - t/2 = 0.

    The only difference in the theory behind these two solutions is that #1 uses an approach that implies different sets of values in the two envelopes, whereas #2 uses an approach that implies the same set of values.

    The property of probability that you keep trying to define, is that this difference shouldn't matter. But the paradox established in those two solutions proves that it does.
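    One way to see the conflict concretely: pick any specific bounded prior and compute the expected value of the other envelope for each observable v. The Python sketch below uses a hypothetical prior of two equally likely pairs, (5, 10) and (10, 20); solution #1's 5v/4 formula holds at v = 10 but fails at both edges, while the overall expected gain is zero, as solution #2 says.

    ```python
    from fractions import Fraction

    # Hypothetical bounded prior: two equally likely pairs, each
    # envelope of a pair handed over with probability 1/2.
    pairs = [(5, 10), (10, 20)]
    outcomes = [(v, w) for lo, hi in pairs for v, w in ((lo, hi), (hi, lo))]

    def e_other(v):
        """Expected value of the other envelope, given observed v."""
        rel = [w for x, w in outcomes if x == v]
        return Fraction(sum(rel), len(rel))

    for v in (5, 10, 20):
        print(v, e_other(v), Fraction(5 * v, 4))
    # v=10 matches 5v/4, but v=5 gives 2v and v=20 gives v/2:
    # the formula cannot hold for every v under a bounded prior.
    print(sum(Fraction(w - v, len(outcomes)) for v, w in outcomes))  # 0
    ```

    The same enumeration, regrouped by the total t, reproduces solution #2's answer exactly, which is the point: the two methods agree only when #1's 50/50 assumption is actually justified by the prior.
    
    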
  • JeffJo
    130
    Statistics is a data science and uses repeated random events to make inference about an unknown distribution. We don't have repeated random events, we have one event. Seems like a clear divide to me. You can't learn much of anything about an unknown distribution with just one event.Jeremiah
    Statistics uses repeated observations of outcomes from a defined sample space, to make inference about the probability space associated with that sample space.

    In the OP, we don't have an observation. Not even "one event," if that term were correct to use in your context. Which it isn't.

    With one exception, this is why every use you have made of words like "statistics" or "statistical" in this thread (which is what I mean by "advocating" it; I know you never advocated it as a solution method, but you did advocate associating the word with distributions in the discussion) has been incorrect.

    The exception is that you can use statistics to verify that your simulation correctly models a proven result. The repeated observations are the results of a single run of the simulation.
    Why you think you should do that is beyond me, but apparently you did.

    The OP deals with a conceptual probability problem. There is no observational data possible. "Statistics" does not belong in any discussion about it. Nor do "Bayesian," "Frequentist," "objective," "subjective," "inference," and many others.
  • Jeremiah
    1.5k
    Statistics uses repeated observations of outcomes from a defined sample space, to make inference about the probability space associated with that sample space.JeffJo

    I just said that. That is exactly what I said.

    Not even "one event," if that term were correct to use in your context.JeffJo

    I already posted the definition of an event from one of my books, which I will refer to over you. I will always go with my training over you.

    The OP deals with a conceptual probability problem. There is no no observational data possible. "Statistics" does not belong in any discussion about it.JeffJo

    I also said that.


    One thing I was taught in my first stats class was that the lexicon was not standardized. They told me to expect different people to use different terminology. I think perhaps you should take a page from that book. In the meantime, in my personal usage, I am sticking with the terminology in my books, no matter how much you protest.
  • Jeremiah
    1.5k


    I noted towards the start of the thread that this had more to do with defining the sample space. Calculating expected returns is a futile effort if we cannot agree on the underlying assumptions. The natural device for such a situation would be the Law of Parsimony, but I can't really say my approach makes fewer assumptions than Michael's. I do think, however, Occam's razor does cut additional assumptions about the distribution.
  • Michael
    15.8k
    The only difference in the theory behind these two solutions is that #1 uses an approach that implies different sets of values in the two envelopes, whereas #2 uses an approach that implies the same set of values.JeffJo

    The other way to phrase the difference is that my solution uses the same value for the chosen envelope (10) and your solution uses different values for the chosen envelope (sometimes 10 and sometimes 20 (or 5)).

    But I’m asking for the rational choice given that there’s 10 in the chosen envelope. It doesn’t make sense to consider those situations where the chosen envelope doesn’t contain 10.

    We should treat what we know as a constant and what we don’t know as a variable. In this case, what we know is that there’s 10 in our envelope and what we don’t know is the value of the envelope that doesn’t contain 10. We consider all the possible worlds where the chosen envelope contains 10, assign them a probability, and then calculate the expected value. And I think that in lieu of evidence to the contrary it is rational to treat them as equally probable.
  • JeffJo
    130
    Statistics uses repeated observations of outcomes from a defined sample space, to make inference about the probability space associated with that sample space. — JeffJo

    I just said that. That is exactly what I said.
    Jeremiah

    What you said was: "... uses repeated random events to make inference about an unknown distribution." Since an event is a set of possible outcomes, and you used the word to mean an experimental result, what you said had no meaning. The point is that it refers to a probability space itself, and not the actions that produce a result described by such a space.

    Since this difference is critical to identifying the misinformation you have been spouting, I corrected it to what you thought you meant. Using correct terminology in place of your incorrect terminology. Which would not have been necessary if what you said meant exactly what I wrote.

    What you refuse to understand, is that it is this misuse of terminology that I've been criticizing. Not your solution, which I have repeatedly said was correct.

    I already posted the definition of an event from one of my books, which I will refer to you over. I will always go with my training over you.
    You may well have. If you did, I accepted it as correct and have forgotten it. If you want to debate what it means, and why that isn't what you said above, "refer it to me over" again. Whatever that means.

    But first, I suggest you read it, and try to understand why it was the wrong word to use in your sentence above.

    One thing I was taught in my first stats class was that the lexicon was not standardized.
    Maybe true in some cases. But "event" is not one of them. Look it up again, and compare it to what I said.

    The first ones I found in a google search (plus one I trust) were:

    • Wikipedia: "an event is a set of outcomes of an experiment (a subset of the sample space) to which a probability is assigned."
    • mathisfun.com: When we say "Event" we mean one (or more) outcomes.
    • icoashmath.com: "An Event is a one or more outcome of an experiment."
    • ucdavis.edu: An event is a specific collection of sample points.
    • intmath.com: Event: This is a subset of the sample space of an experiment.
    • statisticshowto.com: An event is just a set of outcomes of an experiment
    • faculty.math.illinois.edu: any subset E of the sample space S is called an event.
    • mathworld.wolfram.com: An event is a certain subset of a probability space. Events are therefore collections of outcomes on which probabilities have been assigned.
    None of these refer to an event as an actual instance of the process. It is always a way to describe the potential results, if you were to run it. As a potential result, there is no way to apply "repeat" to it.
  • JeffJo
    130
    The other way to phrase the difference is that my solution uses the same value for the chosen envelopeMichael

    But your expectation uses the value in the other envelope, so this is an incomplete phrasing. That's why it is wrong.

    It doesn’t make sense to consider those situations where the chosen envelope doesn’t contain 10.
    And you are ignoring my comparison of two different ways we can know something about the values.

    Again: if it is valid to use a method that considers one value only, while ignoring how that one value restricts the pair of values, then it is valid to use that method on either the value v in your envelope, or the value t of the total in both. That means that they both should get the same answer.

    They don't. So it isn't valid. If you want to debate this further, find what is wrong with the solution using "t".

    We should treat what we know as a constant and what we don’t know as a variable.
    True. But when that variable is a random variable, we must consider the probability that the variable has the value we are using, at each point where we use one.
  • Jeremiah
    1.5k
    I don't really think you know what I said. If you need clarification just ask [remainder of post removed by mod].
  • Pierre-Normand
    2.4k
    The other way to phrase the difference is that my solution uses the same value for the chosen envelope (10) and your solution uses different values for the chosen envelope (sometimes 10 and sometimes 20 (or 5)).Michael

    There is another difference between the two methods @JeffJo presented, as both of them would be applied to the determinate value that is being observed in your envelope. The first one is valid only if we dismiss a known fact: when the distribution is merely known to be bounded above (even though the upper bound is unknown), the loss incurred from switching when the value v happens to be at the top of the distribution cancels out the expected gains from switching when v isn't at the top. The second method doesn't make this false assumption and hence is valid for all bounded distributions.
  • Pierre-Normand
    2.4k
    But we're just talking about a single game, so whether or not there is a cumulative expected gain for this strategy is irrelevant.Michael

    Unless the player's preference is accurately modeled by some non-linear utility curve, as @andrewk earlier discussed, the task simply is to choose between either sticking or switching as a means to maximizing one's expected value. It's irrelevant whether the game is being played once, twice, ten times, or infinitely many times. If a lottery ticket costs more than the sum total of the possible winnings weighed by their probabilities, then it's not worth buying such a ticket even once.

    If it's more likely that the expected gain for my single game is > X than < X then it is rational to switch.

    By that argument, it would be irrational to purchase a $1 lottery ticket that gives you a 1/3 chance to win one million dollars since it is more likely that you will lose $1 (two chances in three) than it is that you will gain $999999. Or, maybe, you believe that it is only irrational to buy such a ticket in the case where you can only play this game once?

    Or if I have no reason to believe that it's more likely that the other envelope contains the smaller amount then it is rational to switch, as I am effectively treating a gain (of X) as at least as likely as a loss (of 0.5X).

    Sure, and in some cases, when your value v=X happens to be at the top of the distribution, you are making an improbable mistake that will lead you to incur a big loss. You are arguing that you can "effectively" disregard the size of this improbable potential loss on the ground that you only are playing the game once; just as, in the lottery case, presumably, you should entirely disregard the size of the improbable jackpot if your argument were sound.
  • Michael
    15.8k
    By that argument, it would be irrational to purchase for $1 a lottery ticket that gives you a 1/3 chance to win one million dollars since it is more likely that you will lose $1 (two chances in three) than it is that you will gain $999999.Pierre-Normand

    No, because the expected gain is $333,334, which is more than my $1 bet (333,334X), and so it is rational to play.
  • Pierre-Normand
    2.4k
    No, because the expected gain is $333,334, which is more than my 333,334X.Michael

    But then, if you agree not to disregard the amount of the improbable jackpot while calculating the expected value of the lottery ticket purchase, then, likewise, you can't disregard the improbable loss incurred in the case where v is at the top of the bounded distribution while calculating the expected value of the switching decision.
  • Srap Tasmaner
    5k
    Here's a reasonable way to fill out the rest of the decision tree.
    envelope_tree_b2_1.png

    Either you observed value a, and then you stand to gain a by switching, or you observed b, and you stand to lose b/2 by switching.

    If you chose the X envelope, and observed its value, you stand to gain X by switching; if you chose the 2X envelope, and observed its value, you stand to lose X by switching.
  • Michael
    15.8k
    But then, if you agree not to disregard the amount of the improbable jackpot while calculating the expected value of the lottery ticket purchase, then, likewise, you can't disregard the improbable loss incurred in the case where v is at the top of the bounded distribution while calculating the expected value of the switching decision.Pierre-Normand

    But I don't know if my envelope contains the upper bound. Why would I play as if it is, if I have no reason to believe so?
  • Pierre-Normand
    2.4k
    But I don't know if my envelope contains the upper bound. Why would I play as if it is, if I have no reason to believe so?Michael

    Which is why you have no reason to switch, or not to switch. It may be that your value v is at the top of the distribution, or that it isn't. The only thing that you can deduce for certain, provided only that the distribution is bounded, if you are entirely ignorant of the probability that v might be at the top of the distribution, is that, whatever this probability might be, it always is such that the average expected value of switching, conditional on having been dealt some random envelope from the distribution, is the same as the average expected value of sticking. Sure, most of the time, the conditional expected value will be 1.25v. But some of the time it will be 0.5v.
  • Michael
    15.8k
    Sure, most of the time, the conditional expected value will be 1.25v. But some of the time it will be 0.5v.Pierre-Normand

    And some of the time it will be 2v, because it could also be the lower bound. So given that v = 10, the expected value is one of 20, 12.5, or 5. We can be indifferent about this too, in which case we have 1/3 * 20 + 1/3 * 12.5 + 1/3 * 5 = 12.5.
  • Pierre-Normand
    2.4k
    And some of the time it will be 2v, because it could also be the lower bound. So given that v = 10, the expected value is one of 20, 12.5, or 5. We can be indifferent about this too, in which case we have 1/3 * 20 + 1/3 * 12.5 + 1/3 * 5 = 12.5.Michael

    No. The conditional expected values of switching conditional on v = 10, and conditional on 10 being either at the top, bottom or middle of the distribution aren't merely such that we are indifferent between the three of them. They have known dependency relations between them. Although we don't know what those three possible conditional expected values are, for some given v (such as v = 10), we nevertheless know that, on average, for all the possible values of v in the bounded distribution, the weighted sum of the three of them is v rather than 1.25v.
  • Michael
    15.8k
    we nevertheless know that, on average, for all the possible values of v in the bounded distribution, the weighted sum of the three of them is v rather than 1.25v.Pierre-Normand

    This still looks like you're considering what would happen if we always stick or always switch over a number of repeated games. I'm just talking about playing one game. There's £10 in my envelope. If it's the lower bound then I'm guaranteed to gain £10 by switching. If it's the upper bound then I'm guaranteed to lose £5 by switching. If it's in the middle then there's an expected gain of £2.50 for switching. I don't know the distribution and so I treat each case as equally likely, as per the principle of indifference. There's an expected gain of £2.50 for switching, and so it is rational to switch.

    If we then play repeated games then I can use the information from each game to switch conditionally, as per this strategy (or in R), to realize the .25 gain over the never-switcher.
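    The linked strategy and R code aren't reproduced here, but a hypothetical stand-in shows the flavour of conditional switching: assume repeated play has already revealed the support {10, 20, 40, 80}, and switch unless the observed amount is the known maximum. Exact enumeration in Python:

    ```python
    from fractions import Fraction

    # Assumed ladder, learned from repeated play: three equally likely pairs.
    pairs = [(10, 20), (20, 40), (40, 80)]
    outcomes = [(v, w) for lo, hi in pairs for v, w in ((lo, hi), (hi, lo))]
    p = Fraction(1, len(outcomes))  # 1/6 per (received, other) outcome

    # Never-switcher keeps v; conditional switcher sticks only on the known max.
    never = sum(p * v for v, w in outcomes)
    conditional = sum(p * (v if v == 80 else w) for v, w in outcomes)
    print(never, conditional)  # 35 vs 125/3
    ```

    The conditional switcher averages 125/3 (about 41.7) against 35 for the never-switcher, and also 35 for the blind always-switcher: once the distribution is learned, conditioning on the observed value is what creates the edge.
    
    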