## Mathematical Conundrum or Not? Number Six

• 1.1k
This is why I became disillusioned with philosophy. I wanted real answers, not made-up ones.
• 1.1k
Indeed, and that's where utility curves come in. If a parent has a child who will die unless she can get medicine costing M, and the parent can only access amount F, the parent should switch if the observed amount is less than M-F and not switch otherwise.
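A quick sketch of that step-utility rule (in Python; the names M, F, and the observed amount v come from the scenario above, and the threshold logic is my reading of it, not the poster's code):

```python
def should_switch(v, M, F):
    """Step utility: 1 if the medicine (cost M) is affordable, else 0.

    The parent holds F and observes v in the opened envelope.
    Switch exactly when v < M - F, i.e. when F + v falls short of M.
    """
    if F + v >= M:
        return False  # sticking already guarantees the medicine; switching risks ending with v/2
    return True       # sticking fails for sure, so the chance of 2v in the other envelope is the only hope

print(should_switch(60, M=100, F=50))  # False: 50 + 60 covers the cost
print(should_switch(30, M=100, F=50))  # True: 80 < 100, but 50 + 2*30 = 110 would cover it
```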

Agreed. Alternatively, some player's goal might merely be to maximise the expected value of her monetary reward. In that case, her choice to stick with the initial choice, or to switch, will depend on the updated probabilities of the two possible contents of the second envelope conditional on both the observed content of the first envelope and on some reasonable guess regarding the prior probability distribution of the possible contents of the first envelope (as assessed prior to opening it). My main argument rests on the conjecture (possibly easily proven, if correct) that the only way to characterize the problem such that the (posterior) equiprobability of the two possibilities (e.g. ($10,$20) and ($10,$5)) is guaranteed regardless of the value being observed in the first envelope ($10 only in this case) is to assume something like a (prior) uniform probability distribution for an infinite set of possible envelope contents such as (... ,$0.5, $1,$2, ...).
• 1.1k
It absolutely makes sense to ask if it is correct, and that should be the first question you ask yourself whenever you model something.

You may call one interpretation the correct one in the sense that it provides a rational guide to behavior given a sufficient set of initial assumptions. But in this case, as with most mathematical or logical paradoxes, the initial set of assumptions is incomplete, inconsistent, or some assumptions (and/or goals) are ambiguously stated.
• 1.1k
By this do you just mean that if we know the value of X is to be chosen from a distribution over 1–100, then if we open our envelope to find 150 we know not to switch?

That's one particular case of a prior probability distribution (bounded, in this case) such that the posterior probability distribution (after one envelope was opened) doesn't satisfy the (posterior) equiprobability condition on the basis of which you had derived the positive expected value of the switching strategy. But I would conjecture that any non-uniform or bounded (prior) probability distribution whatsoever would likewise yield the violation of this equiprobability condition.
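That boundary effect can be computed exactly — a sketch assuming the bounded example above (X, the smaller amount, uniform on 1–100; envelopes hold X and 2X):

```python
from fractions import Fraction

# Bounded uniform prior on the smaller amount X.
N = 100
prior = {x: Fraction(1, N) for x in range(1, N + 1)}

def p_smaller(v):
    """Posterior probability that an observed value v is the smaller of the pair."""
    half = Fraction(1, 2)                       # either envelope equally likely to be opened
    j_small = prior.get(v, Fraction(0)) * half  # v = X, the other envelope holds 2v
    j_large = Fraction(0)
    if v % 2 == 0:                              # v can be the larger amount only if v/2 is whole
        j_large = prior.get(v // 2, Fraction(0)) * half
    total = j_small + j_large
    return j_small / total if total else None   # None: v impossible under this prior

print(p_smaller(10))   # 1/2 -- interior even value: the equiprobability holds
print(p_smaller(3))    # 1   -- odd value: must be the smaller amount
print(p_smaller(150))  # 0   -- above the bound: must be the larger amount
```

Under this prior, only even values from 2 to 100 give the 50/50 posterior; every other observable value breaks it, which is the violation conjectured above.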
• 1.1k

Like making a bunch of assumptions based on Y.
• 1.9k
Unfortunately, we don't (and can't) know the probabilities that remain. For some values of v, it may be that you gain by switching; but then for some others, you must lose. The average over all possible values of v is no gain or loss.

What you did was assume Pr(X=v/2) = Pr(X=v) for every value of v. That can never be true.
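One sketch of why (my gloss, not from the original post): suppose Pr(X=v/2) = Pr(X=v) did hold for every v along a doubling chain a, 2a, 4a, ..., so that all of those values share one probability c. Then

\begin{align} \sum_{k=0}^{\infty}\Pr(X=2^{k}a)=\sum_{k=0}^{\infty}c=\begin{cases}0 & \text{if } c=0\\ \infty & \text{if } c>0\end{cases} \end{align}

and neither case is compatible with a distribution whose total probability is 1, so the assumed equality must fail for some v.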

My first post in this thread three weeks ago:

You're right that seeing $2 tells you the possibilities are {1,2} and {2,4}. But on what basis would you conclude that about half the time a participant sees $2 they are in {1,2}, and half the time they are in {2,4}? That is the step that needs to be justified.

But I still need help with this.

Yesterday I posted this and then took it down:

\small \begin{align} O(X=a:X=\frac{a}{2}\mid Y=a)&=O(X=a:X=\frac{a}{2})\frac{P(Y=a\mid X=a)}{P(Y=a\mid X=\frac{a}{2})} \\&=O(X=a:X=\frac{a}{2}) \end{align}

Bayes's rule in odds form, which shows that knowing the value of the envelope selected (Y) provides no information at all that could tell you whether you're in an [a/2, a] situation or an [a, 2a] one.

I took it down because the Always Switcher is fine with this, but then proceeds to treat all the possible values as part of one big sample space, and then to apply the principle of indifference. This is the step that you claim is illegitimate, yes? Not the enlarging of the sample space.

Essentially, we include in the range the impossible values that may come up in calculations, and then make them impossible in the probability distribution.

Is this the approach that makes it all work?

I kept thinking that Michael's mistake was assuming the sample space includes values it doesn't. (That is, upon seeing Y=a, you know that a or a/2 is in the sample space for X, but you don't know that they both are.) But I could never quite figure out how to justify this -- and that's because it's a mistaken approach?

Unfortunately, we don't (and can't) know the probabilities that remain. For some values of v, it may be that you gain by switching; but then for some others, you must lose. The average over all possible values of v is no gain or loss.

Right, and that's what I saw above -- the odds X=a:X=a/2 are still whatever they are, and still unknown.
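That step can be checked with any concrete prior — a sketch in Python, with a made-up prior over X; since P(Y=a | X=a) = P(Y=a | X=a/2) = 1/2, the posterior odds just reproduce the prior odds:

```python
# Hypothetical prior over X (the smaller amount); the weights are arbitrary.
prior = {1: 0.5, 2: 0.3, 4: 0.2}

def posterior_odds(a):
    """Posterior odds of X=a against X=a/2, having observed Y=a."""
    likelihood = 0.5  # the opened envelope is the smaller (or larger) one with prob 1/2
    joint_a = prior.get(a, 0.0) * likelihood         # pair (a, 2a): we picked the smaller
    joint_half = prior.get(a / 2, 0.0) * likelihood  # pair (a/2, a): we picked the larger
    return joint_a / joint_half

print(posterior_odds(2))  # 0.6 -- exactly the prior odds 0.3 : 0.5
print(posterior_odds(4))  # 0.2 / 0.3 -- the prior odds again
```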

Your last step, averaging across all values of V, I'm just trusting you on. Stuff I don't know yet. Can you sketch in how you handle the probability distribution for V?
• 6.8k
So then the puzzle is what to do about the Always Switch argument, which appears to show that given any value for an envelope you can expect the other envelope to be worth 1/4 more, so over a large number of trials you should realize a gain by always switching. This is patently false, so the puzzle is to figure out what's wrong with the argument.

In half the games where you have £10 you switch to the envelope with an expected value of £12.50 and walk away with £20. In half the games where you have £20 you switch to the envelope with an expected value of £25 and walk away with £10. The gain from the first game is lost in the second game.

It's because of this that you don't gain in the long run: given that some X is the highest X, every game where you have more than this is a guaranteed loss. It's with this in mind that I formulated my strategy of only switching if the amount in my envelope is less than or equal to half the maximum amount seen, which I've shown does provide the expected .25 gain (no coincidence; the math I've been using shows why the gain is .25). And if we're not allowed to remember previous games, then the formula here shows the expected gain from some arbitrarily chosen limit on when to switch, peaking at .25.

So as a simple example, you switch when it's £10 into an envelope with an expected value of £12.50 and walk away with £20 and stick when it's £20 because you don't want to risk losing your winnings from the first game. In total you have £40. The person who sticks both times has £30. You have .25 more than him.

Also, that article you provided refers to this paper which shows with some simulations that the expected gain from following their second switching strategy averages at .25 (thick black lines at top):

• 1.1k
No amount of "math" will ever change the actual contents of the envelopes. Some of you are just pushing numbers around on the page, but math is not magic; it can't change the actual dollar values.

And I know I am just speaking in the wind at this point, but we could physically set this game up, using various dollar amounts and you'd never get the expected gain some here are predicting. You can't just think about the numbers, you have to think about the actual outcome.
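For what it's worth, the physical setup described here takes only a few lines to run — a sketch assuming one fixed pair of amounts (£10, £20) and a fair choice of which envelope you're handed:

```python
import random

def play(n_trials, pair=(10, 20), always_switch=True):
    """Average winnings over n_trials games with one fixed envelope pair."""
    total = 0
    for _ in range(n_trials):
        mine, other = random.sample(pair, 2)  # you are handed one envelope at random
        total += other if always_switch else mine
    return total / n_trials

random.seed(0)  # for reproducibility only
# Both strategies average out near (10 + 20) / 2 = 15; always switching gains nothing.
print(play(100_000, always_switch=True))
print(play(100_000, always_switch=False))
```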
• 6.8k
And I know I am just speaking in the wind at this point

Well, you did say here that "[you are] already convinced [your] approach is correct... [you] have no doubt about it, and [you] no longer care about arguing or proving that point".

So I don't know why you keep posting or why you'd expect anyone to address you further.
• 1.1k
Because I like reminding you that you are wrong.
• 1.1k
If you are not willing to justify your model empirically, well then that says it all. You should be willing to empirically justify your theory.