• Jeremiah
    1.5k
    It needs a simulation with a distribution but gives no new information about the distribution. Perhaps by scrambling the possible outcomes, then assigning random arbitrary labels. If the distribution is not known then you can't approximate it and get accurate results. You'd still have the problem of bounds, though. Maybe scramble those as well.
  • Srap Tasmaner
    4.6k
    The formula just explains why the gain from the strategy is 0.25; the expected value of the other envelope is 5Y/4.Michael

    Not all Sometimes Switch strategies produce an expected gain of Y/4.

    None of them calculate their expected gain using your formula.
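    For reference, the calculation being quoted appears to be the usual naive conditional-expectation step, with Y the amount observed in the chosen envelope (a sketch of that derivation, not an endorsement of it):

    ```latex
    E[\text{other} \mid Y] \;=\; \tfrac{1}{2}\,(2Y) + \tfrac{1}{2}\,\tfrac{Y}{2} \;=\; \tfrac{5Y}{4},
    \qquad
    E[\text{gain} \mid Y] \;=\; \tfrac{5Y}{4} - Y \;=\; \tfrac{Y}{4} \;=\; 0.25\,Y .
    ```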
  • Jeremiah
    1.5k
    The best way to solve this is still the simple algebra; it introduces no new assumptions. However, if you are going to simulate this, it should be done in a way where the distribution and bounds remain unknowns, instead of being set beforehand. If that could be done, then it could become a useful tool for mapping unknown distributions.
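    A minimal sketch of what such a simulation might look like, assuming one reading of "keep the distribution and bounds unknown": the filler is re-randomized every trial and hidden from the player, so no strategy can condition on it. The distribution family, bounds, and trial count below are arbitrary placeholder choices.

    ```python
    import random

    def run(trials=100_000):
        """Two-envelope trials with a hidden, re-randomized filling distribution."""
        always_switch_gain = 0.0   # cumulative gain if the player always switches
        never_switch_gain = 0.0    # cumulative gain if the player never switches
        for _ in range(trials):
            scale = random.uniform(1, 10_000)   # hidden "bounds", new each trial
            x = random.uniform(0, scale)        # hidden distribution of the smaller amount
            envelopes = [x, 2 * x]
            random.shuffle(envelopes)
            chosen, other = envelopes
            always_switch_gain += other - chosen
            never_switch_gain += 0.0
        return always_switch_gain / trials, never_switch_gain / trials

    if __name__ == "__main__":
        # Both averages hover around zero: with nothing known, switching buys nothing.
        print(run())
    ```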
  • Jeremiah
    1.5k
    A "switching strategy" is essentially machine learning. It is also out of scope of the OP, there is no guarantee such methods are applicable.
  • Jeremiah
    1.5k
    It is an interesting idea in terms of exploring unknown distributions; however, no one has actually justified this approach with respect to the OP. People are making far too many assumptions that they have no way of confirming.
  • Jeremiah
    1.5k
    We went from being totally uninformed at the start of the thread to the knowledge gained after thousands of iterations with predefined conditions never included in the OP. Did you really map the OP, or did you map your assumptions?

    This is one of the reasons we need to constantly question, "Is this correct?"
  • fdrake
    5.8k


    ITT Jeremiah forgets that frequentist expectation calculations are asymptotic.
  • Jeremiah
    1.5k
    I really do not care about this Classical vs Bayesian nonsense you all have going on in these forums. That mindset didn't even exist for me before you all started in about it. Which makes me constantly question what other bad habits I could pick up here.
  • Jeremiah
    1.5k
    As I said many, many times, algebra is the correct tool to use here. Not Bayesian or Classical statistics, but simple algebra. Simply because I talk about the stats doesn't mean I have changed my mind on that.
  • fdrake
    5.8k


    You could pick up a bad habit like charitable reading or obtain some appreciation towards the other half of statistics, or worse even, philosophy in general - which you regularly express contempt for.

    As soon as you apply an expectation operator to the random variable in the envelope, you're already required to apply the context of long term behaviour and the independent repeatability of the trial. Regardless of the interpretation of the sample space and sampling mechanism, the concordance of the expected gain calculation with the long run sample gain echoes that assumption. It's literally set up that way. It's in the algebra. If you terminate the simulation while you still have a small sample, you can analytically calculate the finite sample properties - which don't have to coincide with the expectation, as you well know, and rarely exactly do.

    Michael's calculations are actually correct if his assumptions are; his algebra is right, it's the modelling of the situation that differs. It's just algebra.
  • Jeremiah
    1.5k
    philosophy in general - which you regularly express contempt for.fdrake

    I picked up my contempt for philosophy from these forums many years ago. I was a member of the old forums.

    As soon as you apply an expectation operator to the random variable in the envelope, you're already required to apply the context of long term behaviour and the independent repeatability of the trial.fdrake

    Which is why you need to stop and ask yourself, "Is this correct?" You need to make sure what you are doing is applicable. It is math, not magic, you need to make sure your approach fits.

    if his assumptions arefdrake

    Talk about "charitable reading" . . . I think I have fully expressed that my main problem is with the assumptions.
  • Srap Tasmaner
    4.6k
    One interesting point about the Arbitrary Cutoff strategy is that Never Switch and Always Switch can be seen as the degenerate cases: Never sets the cutoff to 0; Always to, well, "infinity".

    (Btw, good find on the article, @Jeremiah. Addressed my bafflement over how assigning variables leads to trouble.)
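    A minimal sketch of that cutoff family (my own illustration; the filler distribution and trial count are placeholders): switch exactly when the observed amount falls below the cutoff, so a cutoff of 0 reproduces Never Switch and an infinite cutoff reproduces Always Switch.

    ```python
    import random

    def cutoff_strategy_gain(cutoff, trials=100_000):
        """Average gain from switching only when the observed amount is below `cutoff`."""
        total = 0.0
        for _ in range(trials):
            x = random.uniform(1, 100)     # hypothetical filler for the smaller amount
            envelopes = [x, 2 * x]
            random.shuffle(envelopes)
            chosen, other = envelopes
            if chosen < cutoff:            # the Arbitrary Cutoff rule
                total += other - chosen
        return total / trials

    if __name__ == "__main__":
        # cutoff 0 = Never Switch, cutoff inf = Always Switch; both average ~0,
        # while a cutoff strictly inside the range of amounts does better.
        for c in (0, 50, 150, float("inf")):
            print(c, round(cutoff_strategy_gain(c), 2))
    ```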
  • fdrake
    5.8k
    Which is why you need to stop and ask yourself, "Is this correct?" You need to make sure what you are doing is applicable. It is math, not magic, you need to make sure your approach fits. God forbid I question the legitimacy of this approach.Jeremiah

    This is exactly why andrewk and Pierre's comments are on point. They found ambiguities in the problem (the equiprobability assumption, which as stated is not part of the problem!), then showed another way of filling in the blanks in andrewk's case, and gestured towards the general difference in approach in Pierre's case.
  • Jeremiah
    1.5k


    The whole distribution thing is faulty. I agree that in the strictest sense of the definition there is an unknown distribution, but as far as we know it was whatever was in his pocket when he filled the envelopes. We can't select a distribution to use, as we have no way to check it. We don't know what the limits are and have no way to check that. So it is not that I disagree with making assumptions; it is making unnecessary assumptions that I object to, aka Occam's razor.
  • Jeremiah
    1.5k
    If you introduce a known distribution then you are making assumptions about the distribution. Classical or Bayesian, it does not matter; that is true for both. If you insist on defining it in these terms, then why are we not adhering to the Law of Parsimony?
  • Jeremiah
    1.5k
    The simplest explanation is that our gain/loss is either x or -x.
  • fdrake
    5.8k


    Why is equiprobability simple but other priors aren't?

    There's no simple reason to assume equiprobability. Doing so requires appealing to more sophisticated mathematical arguments with assumptions that probably don't hold anyway. Things like limiting distributions of independent draws or entropy maximisation.

    How I would actually approach the problem:

    If I deem the amount of money in my envelope tiny (say <100) I'd switch to be able to afford some nice things.

    If I deem the amount of money in my envelope around some threshold that would give me some more financial security, I'd walk away with the money in my envelope. I think 10,000 is about right. That's a down payment on a house, over a year's rent etc.

    If it's a ridiculously large amount of money, I'd switch again, out of greed and because the loss wouldn't be felt so much, since I'd use it for financial security/savings. I'd still be able to do what I wanted with the middle amount or the smaller amount.
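    As a rough sketch of that three-band rule in code (the 100 and 10,000 figures are the ones above; the upper cutoff for "ridiculously large" is a placeholder I have had to invent):

    ```python
    def decide(amount, small=100, huge=1_000_000):
        """Three-band rule: switch on tiny amounts, keep around the ~10,000
        financial-security band, switch again on absurdly large amounts.
        `huge` is an invented placeholder for "ridiculously large"."""
        if amount < small:
            return "switch"   # tiny: switch to maybe afford some nice things
        if amount <= huge:
            return "keep"     # enough for real security: walk away with it
        return "switch"       # enormous: switch again; a loss wouldn't hurt much
    ```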
  • Jeremiah
    1.5k
    Why is equiprobability simple but other priors aren't?fdrake

    Equiprobability is unbiased.
  • fdrake
    5.8k


    That's a principle of indifference then. Equally possible things are equally probable. A rule for assigning subjective probabilities.
  • Jeremiah
    1.5k


    If you don't use a random sampling method your event could become skewed by observational bias. I am sure there are cases for using weighted selection methods, but you need to justify their use, not just throw them out there just because. The goal is to mitigate observational bias; however, it can never be fully scrubbed. Still, we need to try to let the distribution reveal itself instead of injecting it full of our opinions. We live in a house of subjectivity, and I just want to get as close to the objective as I can, even if that is an impossible task. Maybe you didn't pick up on it, but I used to have a significant interest in philosophy.
  • Jeremiah
    1.5k
    I should note, to the credit of the philosophers, that I have found, after crossing over to science, that the science types do seem to have problems with the whys of their doings.
  • fdrake
    5.8k


    If you assume equiprobability as a prior then you're only not influencing the results if your prior is a point mass on 0.5 and the actual value is 0.5. This is a terrible prior, as it fixes the posterior to have p = 0.5 despite anything and everything. In a situation of uncertainty and ambiguity, assuming infinite certainty (no prior variance) is completely nuts.
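    A minimal sketch of that point (mine, not anything from the thread; the data and grid are arbitrary): a point-mass prior on p = 0.5 cannot be moved by any data, whereas a prior with genuine variance gets pulled towards whatever is observed.

    ```python
    heads, tails = 90, 10                        # hypothetical, very lopsided data
    grid = [i / 100 for i in range(1, 100)]      # candidate values of p

    def posterior(prior):
        """Bayes' rule on the grid: posterior is proportional to prior times likelihood."""
        unnorm = [pr * (p ** heads) * ((1 - p) ** tails) for p, pr in zip(grid, prior)]
        z = sum(unnorm)
        return [u / z for u in unnorm]

    def mean(dist):
        return sum(p * w for p, w in zip(grid, dist))

    flat = [1 / len(grid)] * len(grid)                            # prior with real variance
    point = [1.0 if abs(p - 0.5) < 1e-9 else 0.0 for p in grid]   # all mass on p = 0.5

    print("flat prior  -> posterior mean ~", round(mean(posterior(flat)), 3))   # ~0.89
    print("point mass  -> posterior mean  ", round(mean(posterior(point)), 3))  # exactly 0.5
    ```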
  • Jeremiah
    1.5k


    Over the long run, equiprobability as a prior has the least amount of drag. Unless you can justify using a weighted selection method, it is the best approach.
  • Jeremiah
    1.5k


    I'll tell you what: next time I do a class project I won't use a random sampling method, and we'll see if I get an F or an A. Good thing I am learning how to do statistics on these forums and not in the classroom.
  • fdrake
    5.8k


    Random sampling's good. p's also random since it's uncertain. This is the point.
  • Jeremiah
    1.5k
    [Consider] the philosophy in Bayesian statistics of using an uninformative prior.
  • fdrake
    5.8k


    'Uninformative' priors usually aren't uninformative, and sometimes they aren't even probability functions. For example, the equiprobability 'prior' you use on the probabilities is actually infinitely informative for the probability! What you gain in 'uninformativeness' on the level of outcomes you spend in infinite informativeness on the level of the (in the problem unspecified) probability parameter. There's uncertainty you're not modelling with equiprobability.

    I suppose I'll stop antagonising you now.
  • Jeremiah
    1.5k


    Ask yourself why they generally use equiprobability as a prior when they are uninformed. There is a reason why, [and] why random samples use equiprobability.
  • Jeremiah
    1.5k
    [ I suggest that readers and contributors] check the definition of a random sample. It has a very interesting definition in this context, which I actually already posted in this thread.
  • Andrew M
    1.6k
    It seems to me that there are some claims about the Two Envelopes problem that everyone might agree on.

    1. If the player does not know the amount in the chosen envelope then the expected gain from switching is zero.

    2. If the player knows the amount in the chosen envelope and also knows everything about the distribution then the expected gain can be calculated and used to switch strategically (i.e., switch if the expected gain is positive, else stick).

    For example, if the player knows that there are only two equally likely envelope pairs of { $5, $10 } and { $10, $20 } respectively and sees $10, then they would know that there is a $2.50 expected gain from switching. Whereas if they see $20, they would know that there is a $10 expected (and actual) loss from switching. (A short simulation of this example appears after the list below.)

    3. If the player knows the amount in the chosen envelope but knows nothing else about the distribution then there is no general strategy that would increase their expected gain above zero.

    4. If the player knows the amount in the chosen envelope and knows or can estimate information about the distribution then there are strategic switching strategies (e.g., Cover's strategy) that can increase their expected gain above zero.
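    A minimal sketch (mine, not Andrew M's) illustrating points 2 and 4 above on the two-pair example: an informed strategy that switches only when the known-distribution expected gain is positive, and a Cover-style strategy that switches when the observed amount falls below a freshly drawn random threshold. The exponential threshold distribution and trial count are arbitrary choices.

    ```python
    import random

    def simulate(strategy, trials=200_000):
        """Average gain of a switching strategy over the two equally likely pairs."""
        total = 0.0
        for _ in range(trials):
            pair = random.choice([(5, 10), (10, 20)])   # the two equally likely pairs
            chosen, other = random.sample(pair, 2)      # pick one envelope at random
            if strategy(chosen):
                total += other - chosen
        return total / trials

    def informed(amount):
        # Point 2: with the distribution known, switching has positive expected
        # gain at $5 and $10 and a guaranteed $10 loss at $20.
        return amount < 20

    def cover_style(amount):
        # Point 4: switch iff the amount is below a random threshold drawn fresh
        # each time (exponential with mean 10 here, an arbitrary choice).
        return amount < random.expovariate(1 / 10)

    def always_switch(amount):
        return True

    if __name__ == "__main__":
        for name, strategy in [("informed", informed),
                               ("cover-style", cover_style),
                               ("always switch", always_switch)]:
            print(name, round(simulate(strategy), 2))
    ```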