• JeffJo
    130

    I've looked at them all, and seen no actual data. By which I mean examples where actual envelopes have been filled with actual money, and presented to an actual person. That's the only way statistics ("data science") matter.
  • JeffJo
    130

    If, at that point, you postulate a value in an envelope, you need to postulate a probability distribution that covers all possible ways that value could be in an envelope. Even if it is unknown. Did you miss the part where I said that, if you don't know the value, you technically have to integrate over all possible sets? But that you can prove both envelopes have the same expectation with any legitimate distribution?

    And if, at that point, you postulate a value for any of these functionally equivalent quantities:

    • The lower of the two envelopes.
    • The higher of the two envelopes.
    • The absolute value of the difference.
    • The sum.
    ... then your solution only uses a probability for one, and it divides out. But if you look in your envelope, you need two.

    Honestly, my conclusions are much the same as yours. I just explain them correctly. Why are you arguing? Do you even know what you are arguing about?
  • Srap Tasmaner
    4.6k
    If, at that point, you postulate a value in an envelope, you need to postulate a probability distribution that covers all possible ways that value could be in an envelope. Even if it is unknown. -- JeffJo

    I'm just trying to understand the "need to" in that sentence.

    Did you miss the part where I said -- JeffJo

    Don't be that guy. I'm reading your posts and asking about what I don't understand.

    (Btw, MathJax is available here, so it's possible to make your equations more readable.)
  • Srap Tasmaner
    4.6k

    Let me put it a different way.

    If you can show that we can, using the perhaps unknown value of our selected envelope, get the same answer we get by ignoring that value and doing some simple algebra, then that would count as a solution to the two envelopes problem. So you're offering a solution and I'm really not.

    We haven't actually been talking about solutions much in this thread because some participants did not accept that the simple algebraic solution is actually right.
  • Jeremiah
    1.5k
    You didn't look at them all. If you had, you would have noticed that I raised the same objections. Then you would have noticed that I simulated a data set based on the conditions of the problem. Also, statistics definitely does empirical investigation via simulation.
  • JeffJo
    130
    Sorry, I don't know MathJax. Having broken my foot on Friday (did you notice a gap in my responses?), it is taking all the effort I can spare to reply here. If you can point out an online tutorial, I'll look at it in my copious "free time."


    I'm just trying to understand the "need to" in that sentence.
    In short, the "need to" include the probability distribution is because the formulation of the "simple algebraic solution" is incorrect if you exclude it.

    I'm assuming that you understand the difference between conditional and unconditional probabilities. The expected value of the other envelope, given a certain value in your envelope, requires that you use probabilities conditioned on your envelope having that value. So even if X is unknown:

    1. The unconditional probability that the other envelope contains $X/2 is not 1/2; it is Pr(L=$X/2)/2, where L is the lower of the two values.
    2. The unconditional probability that the other envelope contains $2X is not 1/2; it is Pr(L=$X)/2.
    3. If you have $X, those are the only possible cases.
    4. To get conditional probabilities, you need to divide the probability of each possible case by the sum of them all.

    So the expected value of the other envelope, given that you have $X, is:

      [($X/2) * Pr(L=$X/2) + (2*$X) * Pr(L=$X)] / [Pr(L=$X/2) + Pr(L=$X)]

    Note how this can be different, for different values of X, if Pr(*) varies. I'm not trying to "be that guy," but apparently nobody read my first post either. I pointed out examples of how that can happen: https://thephilosophyforum.com/discussion/comment/196299
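
    Here's a quick numeric sketch of that point (the prior over the lower value L is hypothetical, purely for illustration):

      # Sketch: conditional expectation of the other envelope given that you hold $X,
      # for a hypothetical prior over the lower value L (not a distribution from this thread).
      prior = {5: 0.4, 10: 0.4, 20: 0.2}   # Pr(L = value), made-up numbers

      def expected_other(x, prior):
          """E[other envelope | yours holds x], per the formula above."""
          p_low  = prior.get(x / 2, 0.0)   # other holds x/2 (you drew the higher one)
          p_high = prior.get(x, 0.0)       # other holds 2x  (you drew the lower one)
          total = p_low + p_high
          if total == 0:
              return None                  # x is impossible under this prior
          return ((x / 2) * p_low + (2 * x) * p_high) / total

      for x in (5, 10, 20, 40):
          print(x, expected_other(x, prior))
      # The ratio E[other | x] / x is not a constant 5/4; it depends on Pr(L=x/2) versus Pr(L=x).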

    The "simple algebraic solution" assumes that the probability of any two possible values of L is the same, which makes this reduce to:

      [($X/2) + (2*$X)] / 2

    Note that this assumption means that the probability of getting $10 is the same as that of getting $5, which is the same as getting $2.50, or $1.25, or $0.625, or $0.3125, etc. And it also implies that any two possible higher values are equally likely, which means that getting $10 is just as likely as getting $10,240, or $10,485,760, or $10,737,418,240, etc. THERE IS NO UPPER BOUND TO THE VALUES IMPLIED BY THE "SIMPLE ALGEBRAIC SOLUTION."

    This is another reason (besides failing the requirement that the cases be equivalent except in name) that the Principle of Indifference can't be applied.
  • JeffJo
    130

    I did look. Maybe you didn't read mine: "I mean examples where actual envelopes have been filled with actual money, and presented to an actual person. That's the only way statistics matter."

    If you simulate the problem, then all you are testing is your interpretation of the problem, warts and all. And applying statistics to the results may show how statistics work, but nothing about whether your interpretation is right or about the original problem. Micheal also simulated the problem.

    Whether or not you want to address it, the distribution of the possible values in the envelopes is required for the problem. Ignoring it is making a tacit assumption that includes an impossible distribution.
  • Jeremiah
    1.5k
    Feel free to look at my simulations on page 26 and point out specifics.
  • JeffJo
    130

    Right. That was after my first post. I read it before the rest of the thread, so I didn't think that was what you were referring to. I didn't want to go into this amount of detail about it (hoping that my explanations would suffice), but you insist.

    Let X be a random variable taking positive dollar values, and let x0 be the lowest possible value.

    Let F(x) be its probability distribution function, defined for every possible value x, and for x0/2 as well. But F(x0/2)=0, since x0/2 is not a possible value.

    Let Y be chosen uniformly from {1,2}.

    Define the random variable A = X*Y. So X=A/Y, where the denominator can be 1 or 2. This makes the probability distribution for A equal to F(A/1)/2 + F(A/2)/2 = [F(A)+F(A/2)]/2. Call this F1(A).

    Define the random variable B = X*(3-Y). So X=B/(3-Y), where the denominator can be (3-1) or (3-2). This makes the probability distribution for B equal to F(B/(3-1))/2 + F(B/(3-2))/2, and simple rearrangement shows that this is F1(B).

    The point is that we can prove that the distributions for A and B are the same. And all your simulation shows is that if you can correctly derive the result of a process, then a correct simulation of that process statistically approximates the result you derived. It says nothing about the OP; certainly not that the distribution is unimportant.
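
    As a rough check of that derivation (my own sketch, with an arbitrary choice of F, not Jeremiah's code), the marginal distributions of A and B come out the same while the two variables are clearly not independent:

      import random
      from collections import Counter

      # Sketch: X drawn from a hypothetical distribution F, Y uniform on {1, 2};
      # A = X*Y and B = X*(3-Y), exactly as defined above.
      random.seed(1)
      values, weights = [5, 10, 20], [0.5, 0.3, 0.2]   # made-up F
      n = 200_000

      a_counts, b_counts, joint = Counter(), Counter(), Counter()
      for _ in range(n):
          x = random.choices(values, weights)[0]
          y = random.choice([1, 2])
          a, b = x * y, x * (3 - y)
          a_counts[a] += 1
          b_counts[b] += 1
          joint[(a, b)] += 1

      print({k: round(v / n, 3) for k, v in sorted(a_counts.items())})   # marginal of A
      print({k: round(v / n, 3) for k, v in sorted(b_counts.items())})   # marginal of B (matches)
      # But they are not independent: Pr(A=5, B=5) is exactly 0, while Pr(A=5)*Pr(B=5) is not.
      print(joint[(5, 5)] / n, (a_counts[5] / n) * (b_counts[5] / n))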

    And btw, the distribution you used in the version you posted is a discretization of the Half-normal distribution. And it is possible that it could put $0 in both envelopes.

    +++++

    And your simulation does nothing to explain why the expectation formula E = (X/2)/2 + (2X)/2 = 5X/4 is wrong. It is because the correct formulation, if someone picks your envelope A and it contains X, is:

      E = [(X/2) * F(X/2) + (2*X) * F(X)] / [F(X/2) + F(X)]

    Your simulation does nothing to show this. But you could, in various ways: choose any of your distributions, after fixing that "zero" problem.

    One way to test it is to choose a range of values, like between $10 and $11. If you have an amount in that range (ignore values outside it), your average gain should be about $2.50 if you switch. And this will be true whether you start with envelope A or B. I can't say the exact amounts, because it depends on how F(10) differs from F(20). The exact amounts are the kind of thing you can find with a statistical simulation like yours.

    Or, just calculate the average percentage gain if you switch. You will find that it is about 25%, whether you switch from A to B, or from B to A.
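
    A sketch of that percentage-gain check (with an arbitrary bounded distribution standing in for whichever one is used; the exact numbers don't matter for the 25% point):

      import random

      # Sketch: average percentage gain from switching, A -> B and B -> A,
      # for an arbitrary bounded prior on the lower value (hypothetical numbers).
      random.seed(2)
      lows = [random.uniform(1, 100) for _ in range(100_000)]   # lower envelope values

      gain_a_to_b, gain_b_to_a = [], []
      for low in lows:
          y = random.choice([1, 2])
          a, b = low * y, low * (3 - y)
          gain_a_to_b.append((b - a) / a)
          gain_b_to_a.append((a - b) / b)

      print(sum(gain_a_to_b) / len(gain_a_to_b))   # ~0.25
      print(sum(gain_b_to_a) / len(gain_b_to_a))   # ~0.25
      # Both directions show the same ~25% average percentage gain, even though the
      # average dollar change is zero either way: the percentage baseline shifts with the switch.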
  • Jeremiah
    1.5k
    I am well aware 0 was a possible outcome, the code just runs better without the loops, and it was not significant enough to care.

    Your simulation does nothing to show this. -- JeffJo

    Sorry my simulation proves you wrong.
  • JeffJo
    130

    I am well aware 0 was a possible outcome, the code just runs better without the loops, and it was not significant enough to care.

    Use a "ceiling" function instead of a "round" function. Or just add 0.0005 before you round.

    Sorry my simulation proves you wrong.

    Any statistical analysis addresses a question. Yours addresses "Is the distribution of A the same as the distribution of B?" And it can only show what the answer most likely is, not prove it.

    But, the answer to that question is pretty easy to prove. As I did. So there wasn't much point in the statistical analysis, was there?

    What you didn't address, but could, is whether the two random variables are independent. Which they are not. Since having two identical distributions is meaningless if they are not independent, your simulation did not address anything of significance to the OP.

    Or what the expectation of the other envelope is relative to yours, and why the naive apparent gain can't be realized. Answer: With any bounded distribution, there is a smallest possible value where you will always gain a small amount by switching, and a largest value where you will always lose a large amount by switching. In between, you typically gain about 25% by switching once, and lose that same amount (about 20% of the new baseline, since the baseline is now the switched-to value and the dependency is reversed) by switching back. But over the entire distribution, the large potential loss if you had the largest value makes the overall expectation 0.
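
    A toy bounded example of that (three equally likely pairs; my own numbers, not from any simulation in the thread):

      # Hypothetical bounded prior: the lower value is 5, 10, or 20, each with probability 1/3.
      # Enumerate (your envelope, other envelope) outcomes with the 50/50 high/low pick.
      outcomes = []
      for low in (5, 10, 20):
          for picked_low in (True, False):
              yours, other = (low, 2 * low) if picked_low else (2 * low, low)
              outcomes.append((yours, other, 1 / 6))    # each joint outcome has probability 1/6

      print(sum(p * (o - y) for y, o, p in outcomes))   # 0.0: no expected dollar gain overall

      for value in (5, 40):                             # bottom and top of the possible holdings
          cases = [(y, o, p) for y, o, p in outcomes if y == value]
          total = sum(p for _, _, p in cases)
          print(value, sum(p * (o - y) for y, o, p in cases) / total)   # +5 at the bottom, -20 at the top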

    Those are the issues that are important to the OP, and your simulation doesn't provide any useful information.
  • Jeremiah
    1.5k


    Or I could just not worry about an outcome of zero since it has zero impact.
  • Jeremiah
    1.5k
    @JeffJo I have already stated I am done debating this problem. So I am sorry if you have an issue with my correct solution.
  • JeffJo
    130

    If you want to address the original problem, it matters that by your methods, ignoring their other faults, the two envelopes can contain the same amount. I understand that you are more interested in appearing to be right, than in actually addressing that problem. But I prefer to address it.


    Exactly what do you think "[your] correct solution" solves? I told you what it addresses, but you decided you were done before hearing it.

    I've tried to explain to you why the issue you addressed is not the OP, but you have chosen not to accept that. If you don't want to "debate in circles," then I suggest you accept the possibility that the contributions of others may have more applicability than yours.
  • Jeremiah
    1.5k



    I am happy with my correct results.
  • Jeremiah
    1.5k
    Going a little chop happy on my posts there.
  • Jeremiah
    1.5k
    Why would you even edit out the fact that I think my solution is correct? Am I not allowed to have that view?
  • Pierre-Normand
    2.3k
    I just came upon this thread and didn't read through all of it. I did read the first few and the last few pages. It seems to me that @andrewk and @JeffJo have a correct understanding of the problem, broadly consistent with mine.

    The paradox seems to me to stem from a vacillation between two assumptions: first, that the player possesses (and can thereafter make use of) some knowledge of the bounded probability distribution of the possible contents of the two envelopes, which can be represented by a joint prior probability distribution; and second (inconsistent with the first), that after opening one envelope the posterior probability distribution of the content of the unopened envelope necessarily remains equal to 0.5 for the two remaining possible values of its content. The latter can only occur if the player disregards his prior knowledge (or happens by sheer luck upon a value such that the posterior probabilities for the two remaining possible values of the content of the unopened envelope are 0.5).
  • Srap Tasmaner
    4.6k
    some knowledge of the bounded probability distribution of the possible contents of the two envelopes -- Pierre-Normand

    I'm having trouble imagining what the source of this knowledge might be.
  • Pierre-Normand
    2.3k
    I'm having trouble imagining what the source of this knowledge might be. -- Srap Tasmaner

    Since it's incomplete knowledge, or probabilistic knowledge, that is at issue, all that is needed is the lack of total ignorance. Total ignorance might (per impossibile) be represented by a constant probability density over all finite values, and hence a zero probability for every finite interval of values. The prior probability that the content of the first envelope (which represents your knowledge before opening it) is smaller than ten billion times the whole UK GDP would be zero, for instance. Any other (reasonable) expectation that you might have in a real-world instantiation of this game would yield some probabilistic knowledge, and would therefore lend itself to a Bayesian analysis whereby the paradox doesn't arise.
  • JeffJo
    130
    The point is that there must be a prior distribution for how the envelopes were filled, but the participant in the game has no knowledge of it. I express it as the probability of a pair, like Pr($5,$10), which means there is $15 split between the two. There is also a trivial prior distribution for whether you picked high or low; it is 50% each.

    The common error is only recognizing the latter.

    If you try to calculate the expected value of the other envelope, based on an unknown value X in yours, then you need to know two probabilities from the unknown distribution. The probability that the other envelope contains X/2 is not 1/2; it is Pr(X/2,X)/[Pr(X/2,X)+Pr(X,2X)]. The 50% values from the second distribution are used to get this formula, but they divide out.

    The problem with the OP is that we do not know these values, and can't make any sort of reasonable guess for them. But it turns out that the "you should switch" argument can be true:
      In Jeremiah's half-normal simulation (the arithmetic is reproduced in the sketch after this list):
    • The probability that X=$5 is 0.704%, and that X=$10 is 0.484%.
    • The probability that A=$10 is (0.704%+0.484%)/2=0.594%.
    • Given that A has $10, the probability that B has $5 is (.704%/2)/0.594% = 59.3%
    • Given that A has $10, the probability that B has $20 is (.484%/2)/0.594% = 40.7%
    • Given that A has $10, the expected value of B is (59.3%)*$5 + (40.7%)*$20 = $11.11.
    • So it seems that if you look in your envelope, and see $10, you should switch.
    • In fact, you should switch if you know that you have less than $13.60. And there is a 66.7% chance of that. (Suggestion to Jeremiah: run your simulation three times: Always switch from A to B, always keep A, and switch only if A<$13.60. The first two will average - this is a guess - about $12, and the third will average about $13.60.)
    • It is the expected gain over all values of X that is $0, not an instance where X is given.
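
    (The arithmetic in that list, spelled out; the 0.704% and 0.484% figures are the ones quoted from the half-normal run, and the rest is just Bayes' rule:)

      # The two prior probabilities quoted above, from the half-normal simulation:
      p_x_5  = 0.00704    # Pr(lower value = $5)
      p_x_10 = 0.00484    # Pr(lower value = $10)

      # Unconditional probability that envelope A holds $10 (half from each case):
      p_a_10 = (p_x_5 + p_x_10) / 2
      print(round(p_a_10, 5))                          # 0.00594

      # Conditional probabilities for envelope B, given that A holds $10:
      p_b_5  = (p_x_5 / 2)  / p_a_10
      p_b_20 = (p_x_10 / 2) / p_a_10
      print(round(p_b_5, 3), round(p_b_20, 3))         # 0.593 and 0.407

      # Expected value of B given A = $10: about $11.11, so switching looks favorable here.
      print(round(p_b_5 * 5 + p_b_20 * 20, 2))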

    The naive part of Jeremiah's analysis is that knowing that A and B have the same distribution is not enough to use those distributions in the OP. He implicitly assumes they are independent, which is not true.

    So the OP is not solvable by this method. You can, however, solve it by another. You can calculate the expected gain by switching, based on an unknown difference D. Then you only need to use one probability from the unknown distribution, and it divides out.
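
    A minimal sketch of that D-based calculation (conditioning on the difference instead of on the amount in your envelope), showing how the one unknown probability divides out:

      # If the difference between the envelopes is D, the pair is (D, 2D): you either
      # hold the low one and gain +D by switching, or the high one and lose D, 50/50.
      def expected_gain_given_difference(d, p_d):
          numerator = p_d * 0.5 * (+d) + p_d * 0.5 * (-d)   # the one unknown prior probability
          return numerator / p_d                            # ... and it divides right back out

      print(expected_gain_given_difference(10, 0.123))      # 0.0, whatever the 0.123 really is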

    Conclusions:
    1. If you don't look in your envelope, there can be no benefit from switching.
      • This is not because the distributions for the two envelopes are the same, ...
      • ... even though it is trivial to prove that they are the same, without simulation.
      • The two distributions are not independent, so their equivalence is irrelevant.
      • It is because the expected gain by switching from the low value to the high, is the same as the expected loss from switching from high to low.
    2. If you do look in your envelope, you need to know the prior distribution of the values to determine the benefit from switching.
      • With such knowledge, it is quite possible (even likely) that switching will gain something. If Jeremiah could be bothered to use his simulation, he could prove this to himself.
      • But it is also possible that you could lose, and in the cases where you do, the amounts are greater. The average over all cases will always be no change.
  • Jeremiah
    1.5k
    I specifically said that my simulation is not about finding expected results, as that entire argument is flawed, which I pointed out in a different post that you likely didn't read. Also, I have said a number of times why the knowledge of Y is not relevant, but I am assuming you have not read that either.
  • Jeremiah
    1.5k
    I don't think @JeffJo even understands the other argument.
  • Jeremiah
    1.5k
    Notice how I have yet to post another thread. I have another problem lined up, but with the mods hacking out relevant content and the general performance of some people in this thread, I just am not sure anymore that these forums are the right place. The solution to this problem is so simple and straightforward, and the fact that so many have missed it is discouraging.
  • JeffJo
    130

    I am happy with my correct results.
    And again, you won't say what results you mean.

    Your solution from page 1, that ...
    If you have X and you switch then you get 2X but lose X so you gain X; so you get a +1 X. However, if you have 2X and switch then you gain X and lose 2X; so you get a -1 X.
    ... is a correct solution to the original problem when you don't look in the envelope. The problem with it is that it doesn't explain why his program doesn't model the OP. That is something you never did correctly, and you refuse to accept that I did.

    Your conclusion from page 26, that...
    the possible distribution of A is the same as the possible distribution of B
    ... is also correct, although it is easier to prove it directly. But it is still irrelevant unless you determine that the two distributions are independent. AND THEY ARE NOT.

    It is this conclusion from page 26:
    the distribution in which X was selected from is not significant when assessing the possible outcome of envelope A and B concerning X or 2X.
    ... that is incorrect, as I just showed in my last post. The probability that A has the smaller value depends on the relative values of two probabilities in that distribution, so it is significant to the question you address here.

    Averaged over the entire distribution, there is no expected gain. Which you can deduce from your page-1 conclusion. For specific values, there can be expected gains or losses, and that depends on the distribution.
  • Jeremiah
    1.5k


    I have already commented on all of this; read the thread. Don't just lie about reading it, actually read it.
  • Jeremiah
    1.5k
    There was an entire section of the thread in which we debated the role of Y. Clearly @JeffJo you didn't read it.
  • Jeremiah
    1.5k
    It is painfully obvious @JeffJo that you have not read the thread and as a result don't understand where these arguments are coming from.
  • JeffJo
    130

    The point of this demonstration is to show that the possible distribution of A is the same as the possible distribution of B. ... So we see with a D test statistics of 0.0077 and a 0.92 p-value we don't have strong enough evidence to support the alternative hypothesis that the two distributions are reasonably different.
    It is provable without simulation that the two distributions are the same, so this is pointless. We can accept that the distributions are the same. And it is obvious you didn't read my posts describing why the simulation is pointless. In short, the "data" you apply "data science" to pertains only to how well your simulation addresses the provable fact.

    It is also provable that the distributions are not independent. Since you technically need to use a joint probability distribution for any analysis using two random variables, and you can only separate a joint distribution into individual distributions when they are independent, this conclusion can have no bearing on the original problem. It is also obvious that you did not read my posts that explain this.

    You conclude that I did not read your posts, because I didn't comment on them. By not reading mine, you missed the fact that I don't need to comment. Conclusions drawn from evidence that has already been discredited do not need to be addressed.