• JeffJo
    130

    From Wikipedia:
    Bayesian inference is a method of statistical inference in which Bayes' theorem is used to update the probability for a hypothesis as more evidence or information becomes available.
    ...

    H stands for any hypothesis whose probability may be affected by data (called evidence below). Often there are competing hypotheses, and the task is to determine which is the most probable.

    Pr(H), the prior probability, is the estimate of the probability of the hypothesis H.

    Since there is no provision for "data/information/evidence" in the OP, only a one-time thought problem, Bayesian inference does not, and cannot, apply.

    But if you try, you need an hypothesis, and an estimate - which I called a "guess" - of what the probability of that hypothesis is. And an informed prior is just an educated guess.

    Yes, I have read this thread; and no, it contains no hypothesis from the OP that can be tested this way.
  • fdrake
    5.8k
    I agree. I think it is most convincing to present multiple angles. But it isn't (just) the sample space that is the issue. It is the probability space which includes: ... — JeffJo

    Yeah. In my analysis this shows up when you ask what the raw probability of receiving X is, not the conditional probability of receiving X given that you just looked in the envelope. Michael correctly intuited that this is unknown. Some unknowns are benign; this one makes the probability space wrong.
  • Jeremiah
    1.5k
    You are misinterpreting what I said and what your link says, and your source is Wikipedia.
  • Srap Tasmaner
    4.6k
    If the initial set up calls for randomly assigning values for the two envelopes in the finite range ((1,2),(2,4),(4,8)) for instance, then, in that case, assuming the player knows this to be the initial set up (and hence uses it as his prior) then the posterior probability conditionally on observing any value of M that isn't either 1 or 8 (that is, conditionally on values 2 or 4 being observed) p will indeed be 1/2. — Pierre-Normand

    In one sense, yes, because we can say E(N | M=a) = a*(3*E(p) + 1)/2, where p = P(S=a | M=a).
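
    (That is just E(N | M=a) = 2a*p + (a/2)*(1 - p), which rearranges to a*(3p + 1)/2, taking the expectation over our uncertainty about p.)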

    But how do we calculate E(p)? I think the player in your example can, but can a player with a lot less information than yours?
  • JeffJo
    130
    You are misinterpreting what I said and what your link says, and your source is Wikipedia. — Jeremiah

    You have not interpreted a single thing I have said correctly; in fact, you've replied to very few of the things I have said. Most significantly this time, you've ignored how Bayesian Inference is inapplicable to the OP. And you can find similar information in any source about Bayesian Inference - it isn't wrong just because it is in Wikipedia.
  • Srap Tasmaner
    4.6k
    The point is that, in the correct version for your calculation E=($V/2)*P1 + ($2V)*P2, the probability P1 is not the probability of picking the larger value. It is the probability of picking the larger value, given that the larger value is $10. In my example, that is 90%. In the OP, you do not have information that will allow you to say what it is. — JeffJo

    This is absolutely right. I think the confusion comes when you switch from

    E(other) = (larger)P(picked smaller) + (smaller)P(picked larger)

    where the probabilities of picking smaller and larger are equal, to

    E(other | picked = a) = (2a)P(picked smaller | picked = a) + (a/2)P(picked larger | picked = a)

    because it's tempting to think these conditional probabilities are equal, just like the unconditional probabilities above, but this we do not know.

    (Philosophical aside: I think this is close to my concern that there is a difference between "There's a 1/2 chance of my picking the smaller envelope" and "There's a 1/2 chance that the value of the envelope I picked is the smaller.")

    What is true is that

    P(picked smaller | smaller = c) = P(picked larger | smaller = c) = 1/2

    but that's completely different.
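
    To make the difference concrete, here is a minimal simulation sketch (Python, with an arbitrary illustrative prior on the smaller amount; both the values and the weights are assumptions of the sketch, not anything given in the OP):

        import random

        # Illustrative prior on the smaller amount S; these values and weights
        # are assumptions of this sketch, not anything given in the OP.
        smaller_values = [5, 10, 20]
        weights = [4, 2, 1]

        trials = 200_000
        stats = {}  # observed value a -> [times it was the smaller, times it was observed]

        for _ in range(trials):
            s = random.choices(smaller_values, weights=weights)[0]  # smaller amount
            picked = random.choice([s, 2 * s])                      # pick an envelope at random
            counts = stats.setdefault(picked, [0, 0])
            counts[0] += (picked == s)
            counts[1] += 1

        for a in sorted(stats):
            got_smaller, total = stats[a]
            print(f"P(picked smaller | picked = {a}) ~ {got_smaller / total:.2f}")

        # Unconditionally P(picked smaller) = 1/2, but the conditional probabilities
        # printed here are 1 at the smallest observable value, 0 at the largest,
        # and about 1/3 in between under this particular prior.

    Different priors give different conditional numbers; that dependence is exactly the unknown.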

    But averaged over all possible values of V, there will be no expected gain. — JeffJo

    I would still like to know more about how this works, though it may be over my head.
  • Srap Tasmaner
    4.6k
    It still feels to me like we're circling around the difference between

    P(picking larger)

    and

    P(I picked larger | I picked)

    All of us agree the first is just 1/2.** But the second is troublesome. Once you've picked, you definitely have the larger or the smaller, but you don't know which. It might be safe to continue to treat this the same as just the chance of picking larger, so long as you don't use the observed value of what you picked. But if you want to use the observed value, you have to be very careful to avoid saying things that amount to there being a 1/2 chance that 10 > 20.


    ** Although maybe it needn't be. Suppose you actually had data on individuals picking, and one individual is consistently "lucky". We don't need to know why or how, but we could still say this individual's chance of picking the larger is better than average.
  • Jeremiah
    1.5k
    What you think I have issue with is not what I have issue with. I literally mean you are misinterpreting me. Furthermore what you are harping on is something I already commented on towards the start of the thread.
  • JeffJo
    130

    I have agreed that what you said near the beginning of this thread was right. Did you read that?

    You have been very reluctant to point out what it is you think I have not read, or have misinterpreted. Or you point one out, then say later that it wasn't important. Yet every time you did (and even then it was ambiguous), I have pointed out why you are wrong, or why it is irrelevant. Did you read that?

    There are really only two conclusions that can be drawn about the OP:

    • If you don't look in the envelope, it is proven that there is no expected gain by switching.
    • If you do look, THERE IS PROBABLY AN EXPECTED GAIN OR LOSS, but you have no information that would let you calculate it. This is different from knowing it is 0.

    I keep "harping on" this because you keep implying there are other conclusions that may be possible, and there are not. But you refuse to reply to these facts. I have said this many times. Did you read them?
  • JeffJo
    130
    I think the confusion comes when you switch from

    E(other) = (larger)P(picked smaller) + (smaller)P(picked larger)

    where the probabilities of picking smaller and larger are equal, to

    E(other | picked = a) = (2a)P(picked smaller | picked = a) + (a/2)P(picked larger | picked = a)

    because it's tempting to think these conditional probabilities are equal, just like the unconditional probabilities above, but this we do not know.
    Srap Tasmaner

    But we do know that it can't be true. That's the point.

    Either it can't be true for all values of this "a", or the set of possible a's must contain impossible values, making selection itself impossible (a small worked example follows the list):
    • If there is a minimum value for a, then
      P(picked smaller | picked = amin) = 1 and P(picked larger | picked = amin) = 0
    • If there is not, then a can be arbitrarily small, which is impossible (well, impractical).
    • If there is a maximum value for a, then
      P(picked smaller | picked = amax) = 0 and P(picked larger | picked = amax) = 1
    • If there is not, then a can be arbitrarily large. Which is impossible.
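
    Here is the same point as an exact toy calculation (a sketch in Python; the bounded prior is an assumption standing in for whatever process actually filled the envelopes):

        from fractions import Fraction

        # Toy bounded prior on the smaller amount S, so amin = 5 and amax = 40 exist.
        prior = {5: Fraction(1, 3), 10: Fraction(1, 3), 20: Fraction(1, 3)}

        def p_picked_smaller_given(a):
            """P(picked smaller | picked = a) under the toy prior."""
            num = prior.get(a, Fraction(0)) * Fraction(1, 2)                     # S = a and I picked the smaller
            den = num + prior.get(Fraction(a, 2), Fraction(0)) * Fraction(1, 2)  # ... or S = a/2 and I picked the larger
            return num / den

        for a in (5, 10, 20, 40):
            print(a, p_picked_smaller_given(a))
        # 5  -> 1    (the minimum: it must be the smaller)
        # 10 -> 1/2
        # 20 -> 1/2
        # 40 -> 0    (the maximum: it must be the larger)

    With a minimum and a maximum in play, the 50:50 assumption already fails at the ends; without them, the support is unbounded.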
  • Jeremiah
    1.5k
    Did you read that? — JeffJo

    I stopped there.
  • Jeremiah
    1.5k
    If you do look, THERE IS PROBABLY AN EXPECTED GAIN OR LOSS, but you have no information that would let you calculate it. This is different from knowing it is 0. — JeffJo

    So since you don't know which case you are in after seeing Y and they are not equal you can't really calculate the expected value. Now if you never opened A and never saw Y, that is a different story. — Jeremiah

    You did not read the thread.
  • Jeremiah
    1.5k
    @JeffJo this is why I am stonewalling you, as you keep trying to argue points with me that I agree with and have already commented on. It is a clear indication to me that either you did not read the thread or you skimmed through it. Whatever the case, you have no clue what my actual position is, even though I have posted extensively on it.
  • andrewk
    2.1k
    No, it gives you a strategy that works on your assumed prior, not necessarily on reality. — JeffJo
    Where I differ from that perspective is that I reject the notion that there is such a thing as a 'real' probability (aka 'true', 'raw', 'correct', 'absolute' or 'observer independent' probability).
  • Jeremiah
    1.5k


    Some probabilistic models are more reliable and accurate than others.
  • Jeremiah
    1.5k
    The more unnecessary assumptions you add to your model the more it will inflate your uncertainty; however, this uncertainty will not show up in your calculations. This is the price you pay for those assumptions.
  • andrewk
    2.1k

    Isn't this the same as: ... — Michael


    Provided that all outcomes in the probability space have either M=S or M=2S*, yes it is.

    In that case both are applications of the law of total probability, which says that:

    ##P(A) = P(A|B)P(B) + P(A|\sim B)P(\sim B)##

    In this instance event A is the set of all outcomes where M=a and, for srap's version, event B is the set of outcomes where M=S. So event ##\sim B## is the set of outcomes where ##M=2S##. For your version B is the set of outcomes where 2S=a.

    * I haven't been following closely enough to know whether that is the case with this notation. I haven't seen letters N, S or M used before. Last time I was reading closely it was X, Y and U
  • JeffJo
    130
    If you do look, THERE IS PROBABLY AN EXPECTED GAIN OR LOSS, but you have no information that would let you calculate it. This is different from knowing it is 0. — JeffJo

    So since you don't know which case you are in after seeing Y and they are not equal you can't really calculate the expected value. Now if you never opened A and never saw Y, that is a different story. — Jeremiah

    You did not read the thread.
    Jeremiah
    I did read the thread. You did not read my replies. Like this one, where I said "you have no information that would let you calculate [the expected gain or loss]" and you replied with "you can't really calculate the expected value" as if I hadn't just said the same thing.

    In one of those replies you ignored, I explained to you how your own half-normal simulation will show that there is an expected gain if the value you find is less than $13.60, and an expected loss if it is greater. I'm not claiming that you should know such things in general - in fact, I explicitly said you don't - but THERE MUST BE SOME VALUES WHERE THE EXPECTED GAIN IS NOT ZERO.
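
    For what it's worth, here is a sketch of the kind of check I mean (Python; the half-normal scale is an assumption of the sketch, not taken from your simulation, so the break-even value will come out different):

        import random

        SCALE = 10.0    # assumed half-normal scale; change it and the break-even point moves
        TRIALS = 500_000

        gains = {}      # rounded observed value -> list of gains from switching

        for _ in range(TRIALS):
            smaller = abs(random.gauss(0.0, SCALE))         # half-normal smaller amount
            picked = random.choice((smaller, 2 * smaller))  # pick an envelope at random
            other = 2 * smaller if picked == smaller else smaller
            gains.setdefault(round(picked), []).append(other - picked)

        for value in sorted(gains):
            bucket = gains[value]
            if len(bucket) > 1000:                          # report only well-populated buckets
                print(value, round(sum(bucket) / len(bucket), 2))

        # Small observed values show a positive average gain from switching, large
        # ones a negative one; averaged over all values the gain is zero.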

    Now, you can repeat your unfounded assertions as often as you want. I have proven them to be incorrect.
  • JeffJo
    130
    Where I differ from that perspective is that I reject the notion that there is such a thing as a 'real' probability (aka 'true', 'raw', 'correct', 'absolute' or 'observer independent' probability). — andrewk

    Why? I think you are confusing the fact that probabilities are intangible with their being unreal.

    Probability is, essentially, a measure of our uncertainty about a result. If I know enough about a coin's weight distribution, the forces applied to it, the aerodynamics in the room, and the landing surface, then I can calculate whether it will land on Heads or Tails. As you repeat the flips, some of these details change, and that variation ends up producing a 50:50 frequency distribution. If we don't know these details, it is a 50:50 probability distribution each time. These are just different, and very real, properties of the same process.

    Similarly, it is because we don't know anything except ($D, $2D) about the circumstances behind the amounts in the envelopes that we must treat it as a probability distribution. Could your envelope contain $10? There is no reason to believe that it can't, and it seems to be a reasonable value. Is it certain that it contains $10? Absolutely not. These two facts make it describable by a probability. Don't confuse not being able to determine the probability with probability being inapplicable.

    And what you seem to be avoiding with that attitude is that the expectation formula (v/2)/2 + (2v)/2 is already assuming:

    • That there are at least three values of v that are possible, namely v/2, v, and 2v.
    • That neither of the possible pairs is certain. So there is a probability distribution.
    • That distribution says Pr(v/2,v) = Pr(v,2v), and
    • By transitivity, every value of the form v*2^N, where N is any integer (-inf < N < inf), is possible.
    • And all the possible pairs have the same probability, which must be 0.
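
    (The last bullet is just normalization: if infinitely many disjoint pairs (v*2^N, v*2^(N+1)) all had the same positive probability, the total probability would be infinite rather than 1; so the common value can only be 0, and then no pair could ever actually be selected.)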
  • Jeremiah
    1.5k
    You did not read my replies. Like this one, where I said "you have no information that would let you calculate [the expected gain or loss]" and you replied with "you can't really calculate the expected value" as if I hadn't just said the same thing. — JeffJo

    That was me quoting myself from earlier in the thread. I posted that before you even joined the conversation.
  • Srap Tasmaner
    4.6k
    And if I see £10 then I stand to gain £10 and I stand to lose £5. — Michael

    If you see £10 then either you stand to gain £10 or you stand to lose £5, but not both.

    I have two pairs of envelopes A = {5, 10} and B = {10, 20}. I'm going to offer you a choice from one pair or the other. What is your chance of getting 10?

    P(10 | A) = P(10 | B) = 1/2

    if we assume you're an average chooser without "10 seeking" talent. Clear enough.

    But what is P(10)?

    P(10) = P(10 | A)P(A) + P(10 | B)P(B) = P(10 | A)P(A) + P(10 | B)(1 - P(A)) = P(A)/2 + 1/2 - P(A)/2 = 1/2.

    So your chance of picking 10 is 1/2, no matter what P(A) is. P(A) drops out.

    What's your chance of picking 5?

    P(5) = P(5 | A)P(A) = P(A)/2

    What's your chance of picking 20?

    P(20) = P(20 | B)(1 - P(A)) = (1 - P(A))/2

    No idea in either case, because P(A) does not drop out.

    If you got a 10, what's your expected value for the other envelope U? You can go two ways here. You could say

    E(U | 10) = 5 * P(A) + 20 * (1 - P(A))

    and that would be true, but it somewhat misses the point. I choose before you choose. "The other envelope" is not well-defined until I have chosen A or B, at which point you can say P(A) = 1 or P(A) = 0. You never get to pick from all four envelopes; you only get to pick from the pair I have chosen. We ignored this when calculating P(10) because my choice didn't matter. Now it does.

    E(U | A, 10) = 5 and E(U | B, 10) = 20.

    You'll still want to do this

    E(U | 10) = E(U | A, 10)P(A) + E(U | B, 10)(1 - P(A))

    and then say that since you know nothing about P(A), you can only apply the principle of indifference and assume P(A) = 1/2. You might be wrong; I may be unaccountably inclined toward A to the tune of 1000:1 but you have no way of knowing that and the rational thing to do is go with indifference.
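
    Just to check the algebra, here is a quick numeric sketch (arbitrary trial values of P(A); the values themselves mean nothing, the point is what moves and what doesn't):

        # Plug in several trial values of P(A) and watch what depends on it.
        for p_a in (0.1, 0.5, 0.9):
            p_b = 1 - p_a
            p10 = 0.5 * p_a + 0.5 * p_b              # P(10): always 1/2, P(A) drops out
            p5 = 0.5 * p_a                           # P(5): depends on P(A)
            p20 = 0.5 * p_b                          # P(20): depends on P(A)
            e_other_given_10 = 5 * p_a + 20 * p_b    # E(U | 10): depends on P(A)
            print(p_a, p10, p5, p20, e_other_given_10)
        # P(10) is 0.5 in every row; P(5), P(20) and E(U | 10) all move with P(A).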

    But this only makes sense because I've told you that I was choosing from two sets of envelopes in the first place. What if I didn't tell you that? What if there only ever was one pair? What if there were thousands? Maybe some of those have duplicate amounts, maybe not. Maybe there's only a single pair with 10 in it. (This is @JeffJo's thing, and it's worth thinking about. You can't really even count on using 1 - P(A), much less assume P(A) = 1/2.)


    Here's a real life version. Suppose I have some cash and two envelopes, and I'm going to split my cash in such a way that one envelope has twice as much as the other. Suppose I have one $10, one $5 and 6 $1's. What are my options?

      A: {1, 2}
      B: {2, 4}
      C: {3, 6}
      D: {5, 10}
      E: {6, 12}
      F: {7, 14}

    There are some combinations I can't make because I don't have enough of the right denominations.
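
    (If you want to check that the table is exhaustive, a brute-force sketch in Python, assuming exactly these bills, turns up the same six pairs and nothing else:)

        from itertools import combinations

        bills = [10, 5, 1, 1, 1, 1, 1, 1]   # one $10, one $5, six $1's

        def splittable(small, large):
            """Can the bills be split into one pile worth `small` and another worth `large`?"""
            indices = range(len(bills))
            for r in range(1, len(bills)):
                for pile in combinations(indices, r):
                    if sum(bills[i] for i in pile) != small:
                        continue
                    rest = [bills[i] for i in indices if i not in pile]
                    for r2 in range(1, len(rest) + 1):
                        if any(sum(c) == large for c in combinations(rest, r2)):
                            return True
            return False

        pairs = [(n, 2 * n) for n in range(1, sum(bills) // 3 + 1) if splittable(n, 2 * n)]
        print(pairs)   # [(1, 2), (2, 4), (3, 6), (5, 10), (6, 12), (7, 14)]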

    We could talk about this table as we talked about the collection {{5, 10}, {10,20}}. If you knew how much money I had and in what denominations, there are several cases in which, upon opening an envelope, you'd already know whether you have the larger or the smaller.

    But let's suppose you don't know any of that. You could also figure that if you got an odd number it must be the smaller (because I'm only using bills, no coins) so I'll cleverly not choose one of those; I'll choose only from

      B: {2, 4}
      E: {6, 12}

    If I choose B, and you draw the 2, you can reason that I would have excluded {1, 2}, so the other must be 4. Similarly, if I choose E, and you draw the 6, then you can reason that I would have excluded {3, 6} and so the other must be 12. Ah well. I'd have to have more money to make the game perfect.

    But what if I choose B and you draw the 4? 8 is mathematically possible, but there's no {4, 8} here. Similarly, if I choose E and you draw the 12; 24 is mathematically possible, but there's no {12, 24} here.

    So what is your expectation before getting an envelope? Unknown. Something less than half the total cash I have on me, which you don't know, but there are other constraints based on the denominations and some gamesmanship.

    Again, there's no game until I choose. Say I choose B. You don't know it, but the average value of B is 3. If you draw the 2, trading gains you 2; if you choose the 4, trading costs you 2. Say I choose E. You don't know it, but the average value of E is 9. If you choose 6, trading gains you 6; if you choose 12, trading costs you 6.

    Once I have chosen, what you stand to gain or lose by switching is always a fixed amount, without regard to how you choose. Even taking the god's-eye-view of all the possibilities, as we did above with {{5, 10}, {10, 20}}, there is no case in which you stand both to gain and to lose.

    You may still think it's rational to assume there is. That is, on drawing 4, to assume the envelopes might very well be {4, 8} rather than {2, 4}, and even to assume the chances of {2, 4} and {4, 8} are equal.

    That's a lot of assuming. (And it will convince you to trade your 4, which is a mistake.) You could instead recognize that all of your choices are conditional on my choice: my choice determines how much is to be gained or lost; your choice determines whether you gain or lose. There are some cases where you can guess whether you have the lower value or the higher, but that's just a guess. (If you draw a 6, do you know for a fact that the envelopes aren't {3, 6}? Of course not. I may have chosen {3, 6} just on the chance that you wouldn't expect me to include any odd numbers.)

    So what is the rational expectation for the other envelope, given that I have chosen and given that you have chosen? There is no chance left once we've both chosen, though there is knowledge on my side and ignorance on yours. Does the other envelope contain either half or twice the amount in yours? Yes, of course. Are there non-zero chances of both? No. Should you assume there are anyway? No. You should recognize that I have fixed the amount you will gain or lose by switching; you cannot know whether you chose the larger or the smaller, so you cannot know whether you will gain or lose that fixed amount by switching, so there is no reason either to switch or to stick.

    (Note also that we get here without assigning P(A) or P(B) or P(any letter) a value. I choose, then you choose. That's it.)

    EDIT: Table typos.
  • Michael
    14k
    You should recognize that I have fixed the amount you will gain or lose by switching; you cannot know whether you chose the larger or the smaller, so you cannot know whether you will gain or lose that fixed amount by switching, so there is no reason either to switch or to stick. — Srap Tasmaner

    Sure there is. If there's £10 in my envelope and I know that the other envelope contains either £5 or £20 because I know that one envelope contains twice as much as the other then I have a reason to switch; I want an extra £10 and am willing to risk losing £5 to get it.
  • Srap Tasmaner
    4.6k

    You may entertain yourself by switching and call that a reason, but there is no expected gain from switching.
  • Srap Tasmaner
    4.6k
    If there's £10 in my envelope and I know that the other envelope contains either £5 or £20 because I know that one envelope contains twice as much as the other then I have a reason to switch; I want an extra £10 and am willing to risk losing £5 to get it. — Michael

    I think you're making two assumptions you shouldn't:

    1. there is non-zero chance C that the other envelope contains twice the value of your envelope;
    2. the odds Not-C:C are no greater than 2:1, so that you can expect at least to break even.

    There is no basis whatsoever for either of these assumptions.
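
    (The 2:1 figure is just the break-even point: holding £10, the expected value of switching is 20*C + 5*(1 - C), which is at least 10 exactly when C >= 1/3, i.e. when the odds Not-C:C are at most 2:1.)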

    In my example, if I choose B, you stand to gain 2 or to lose 2, depending on which envelope you chose, and whether you switch. The amount you gain or lose was determined by me in choosing B. If I choose E, you stand to gain 6 or to lose 6, depending on which envelope you chose, and whether you switch. The amount you gain or lose was determined by me in choosing E.
  • Michael
    14k
    You may entertain yourself by switching and call that a reason, but there is no expected gain from switching. — Srap Tasmaner

    We've already established that the expected gain is

    £10 * P(other = £20) - £5 * P(other = £5)

    The objective probabilities of the other envelope containing £20 and of it containing £5 depend on how the host selects the value of the smaller amount.

    If he selects it at random from a distribution that includes £5 and £10 then the objective probability of £20 is 1/2 and the objective probability of £5 is 1/2. So there would be an objective expected gain.

    If he selects it at random from a distribution that includes £10 but not £5 then the objective probability of £20 is 1 and the objective probability of £5 is 0. So there would be an objective expected gain.

    If he selects it at random from a distribution that includes £5 but not £10 then the objective probability of £20 is 0 and the objective probability of £5 is 1. So there would be an objective expected loss (and it's because of these cases that if we always switch then we earn as much as the never-switcher, and it's these cases that my conditional switching strategy (and those of others) accounts for).

    The epistemic probabilities of £20 and £5, however, only depend on our knowledge, and given the principle of indifference, we will calculate the epistemic probability of £20 as being 1/2 and the epistemic probability of £5 as being 1/2, and so an epistemic expected gain. This is my motivation for switching (at least for single-play games, and only if I'm willing to lose).

    Edit: "objective" might not have been the best term to use, as by "epistemic probability" I am referring to objective Bayesian probability.
  • Michael
    14k
    There is no basis whatsoever for either of these assumptions. — Srap Tasmaner

    It's the principle of indifference.
  • Srap Tasmaner
    4.6k

    What I don't understand is what your argument is against the alternative analysis. Which of these do you not accept?

    1. The envelopes are valued at X and 2X, for some unknown X.
    2. You have a 1/2 chance of picking the 2X envelope.
    3. If you trade the 2X envelope, you lose X.
    4. You have a 1/2 chance of picking the X envelope.
    5. If you trade the X envelope, you gain X.
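
    (Putting those five steps together, the expected gain from trading is (1/2)*(+X) + (1/2)*(-X) = 0, whatever X is.)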
  • andrewk
    2.1k
    Probability is, essentially, a measure of our uncertainty about a result. — JeffJo
    In broad terms I do not disagree with that characterisation. But there is often more than one way to represent uncertainty, and these lead to different probability spaces. I have referred previously to the observation that in finance many different, mutually incompatible probability spaces can be used to assign a value to a portfolio of derivatives. To try to mount an argument that a particular probability space is the sole correct probability space for analysing a problem, one would have to make a bunch of assumptions to start and, as we see from the length of this interesting thread, those assumptions are rarely uncontroversial.
    And what you seem to be avoiding with that attitude is that the expectation formula (v/2)/2 + (2v)/2 is already assuming: — JeffJo
    I am not an advocate for that expectation formula, so I don't see why you'd think I am avoiding those objections to it.
  • Michael
    14k
    What I don't understand is what your argument is against the alternative analysis. Which of these do you not accept?

    1. The envelopes are valued at X and 2X, for some unknown X.
    2. You have a 1/2 chance of picking the 2X envelope.
    3. If you trade the 2X envelope, you lose X.
    4. You have a 1/2 chance of picking the X envelope.
    5. If you trade the X envelope, you gain X.
    Srap Tasmaner

    I accept all of them. I reject the implicit conclusion that the gain and loss are symmetric. If my envelope contains £10 then 3. and 5. are:

    3. If I trade the 2X = 10 envelope, I lose X = 5.
    5. If I trade the X = 10 envelope, I gain X = 10.
  • Srap Tasmaner
    4.6k
    I accept all of them. I reject the implicit conclusion that the gain and loss are symmetric. If my envelope contains £10 then 3. and 5. are:

    3. If I trade the 2X = 10 envelope, I lose X = 5.
    5. If I trade the X = 10 envelope, I gain X = 10.
    Michael

    Then you reject 1, because those are two different values of X.
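
    To spell that out, here is a tiny sketch with the two candidate scenarios written explicitly (the £10 observation is Michael's example):

        # The two scenarios consistent with seeing 10; X is fixed within each one.
        scenarios = [
            {"X": 5, "envelopes": (5, 10)},     # mine is the 2X envelope; trading loses X = 5
            {"X": 10, "envelopes": (10, 20)},   # mine is the X envelope; trading gains X = 10
        ]

        for s in scenarios:
            other = [v for v in s["envelopes"] if v != 10][0]
            print(f"X = {s['X']}: trading the 10 changes my total by {other - 10}")

        # Within either scenario the amount at stake is a single X; using X = 5 in
        # step 3 and X = 10 in step 5 at the same time is using two different X's.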