• Michael
    15.5k
    Note, yet again, that all the values that could be in the cases are known from the start. There is no speculation about possible outcomes.
    Srap Tasmaner

    What difference does that make? Replace the contestant with someone who doesn’t know anything about the opened boxes. What’s the expected value of his box? Should he decline the offer? Remove the unopened box that isn’t his and just tell him that either the Banker’s offer of £10,000 is X and his box is 2X or the Banker’s offer of £10,000 is 2X and his box is X. What’s the expected value of his box? Should he decline the offer?
  • Srap Tasmaner
    4.9k
    What’s the expected value of his box?
    Michael

    Unknown.

    In DOND, after each case is opened I can tell you the total value and the average value of all the remaining cases. To the penny. With no guessing and no variables.
  • Srap Tasmaner
    4.9k

    When I calculate the total value of our envelopes to be U + 10, and the average to be U/2 + 5, I'm right. Whatever U turns out to be, these calculations will turn out to be correct.

    How do you calculate the total and average value of all the envelopes, including your 10? What are the numbers?
  • Dawnstorm
    242
    Okay, we have envelopes that contain a certain value. This thread has used X for the values in the envelope and [X, 2X] for the sample space of an envelope. This thread has also used Y for the value of an envelope. Here's the thing:

    We can define the values in the envelopes in relation to each other, and we get A [X, 2X], B [X, 2X], where X is the smaller of the two values. (Should we decide to make X the bigger of the two values, we get A[X/2, X], B[X/2, X].)

    But we can also define the envelopes in relation to each other. We get:

    A [Y], B[Y/2, 2Y]

    Note that this defines the relationship of the envelopes, in a way the other notation doesn't:

    A[X, 2X], B[X, 2X] allows:

    A = X, B = X
    A = 2X, B = 2X.

    We need an additional restriction (such as A =/= B) to rule these out. We need no additional restrictions if we're looking at the contents of the envelopes directly, rather than looking at the values first and then wondering what is in which envelope.

    A [Y], B[Y/2, 2Y]

    is a shorter and more complete way to express "One envelope contains twice as much money as the other" than A[X, 2X], B[X, 2X].

    We're not making additional assumptions, we're just using different variables as our basis.

    A[X, 2X], B[X, 2X] -- The values in the envelope defined in relation to each other.

    A[Y], B[Y/2, 2Y] -- The envelopes defined in relation to each other, according to their relative value.

    If we know that one of the values is 10, but not in which envelope it is, we get:

    A[X[5,10],2X[10,20]], B[X[5,10],2X[10,20]]

    or

    A[10], B[5, 20]

    It's exactly the same thing, looked at from two different perspectives. There are no new assumptions. In both cases, we don't know whether 10=X or 10=2X. In the former notation we have to enter 10 in both envelopes and wonder which one we picked. In the latter we just enter 10 in the letter we picked (obviously, since it's the one we've seen), and wonder what's in the other. And both notations have three values: 5, 10, 20. They're just organised into different either/or structures, because the notations define the letters differently (interchangeable; defining one in terms of the other).
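
    Here's a minimal sketch (Python) of the equivalence being claimed, with the observed value hard-coded to 10 as in the example above. The names are purely illustrative; it just shows that the two notations generate the same candidate envelope contents.

        # Notation 1: X is the smaller amount; the 10 we see is either X or 2X.
        seen = 10
        pairs_notation_1 = set()
        for x in (seen, seen / 2):            # "10 = X" or "10 = 2X"
            pairs_notation_1.add((x, 2 * x))  # the two envelopes hold (X, 2X)

        # Notation 2: envelope A holds the seen value; B holds either Y/2 or 2Y.
        pairs_notation_2 = {tuple(sorted((seen, other))) for other in (seen / 2, 2 * seen)}

        print(pairs_notation_1 == pairs_notation_2)  # True: both boil down to 5, 10, 20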
  • Srap Tasmaner
    4.9k

    You have an average total value of 22.5 and an average envelope value of 11.25. Both of those values always turn out to be wrong.
  • Srap Tasmaner
    4.9k
    average total value
    Srap Tasmaner

    That we're driven to use such a phrase is apparently the whole problem.

    Can this sort of thing be done rigorously? What would we have if we did?
  • Dawnstorm
    242
    So how's this:

    A, B = two envelopes; X = the smaller of two values, 2X = the greater of two values; Y = the known value of one envelope

    P (A=X and B=2X) + P (A=2X and B=X) = 1

    Corollary: P (A=X and B=X) + P (A=2X and B=2X) = 0

    This merely describes the set-up.

    P (Y=X) + P (Y=2X) = 1

    This describes the fact that if we know one value, we cannot know whether it's the smaller or the bigger value (but it has to be one).

    From this we get:

    P (A=Y and B=2Y) + P (A=2Y and B=Y) + P (A=Y/2 and B=Y) + P (A=Y and B=Y/2) = 1

    Corollary: P (A=Y and B=Y) + P (A=2Y and B=2Y) + P (A=Y/2 and B=2Y) + P (A=2Y and B=Y/2) = 0 [At least one value is by definition Y, and because of the set-up, both can't be Y.]

    Now we look into envelope A and discover Y. This renders all the probabilities 0 where A=/=Y, so we get:

    P (A=Y and B=2Y) + P (A=Y and B=Y/2) = 1

    Corollary: P (A=Y and B=Y) + P (A=Y/2 and B=2Y) + P (A=2Y and B=Y/2) + P (A=2Y and B=Y) + P (A=Y/2 and B=Y) = 0

    Did I make a mistake anywhere here? To me, this proves that saying "both envelopes have to contain either X or 2X" and saying "if one envelope contains Y, the other has to contain either Y/2 or 2Y" are the same thing from a different perspective.
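
    As a sanity check, here's a rough simulation sketch (Python) of this bookkeeping. The only added assumption, purely for illustration, is that the smaller amount X is drawn from a few arbitrary values; nothing hinges on which.

        import random

        counts = {"B = 2Y": 0, "B = Y/2": 0, "anything else": 0}
        for _ in range(100_000):
            x = random.choice([5, 10, 40])        # the smaller of the two amounts
            a, b = random.sample([x, 2 * x], 2)   # shuffle X and 2X into envelopes A and B
            y = a                                 # we open A and discover Y
            if b == 2 * y:
                counts["B = 2Y"] += 1
            elif b == y / 2:
                counts["B = Y/2"] += 1
            else:
                counts["anything else"] += 1      # would catch B = Y, B = 2Y and Y/2, etc.

        print(counts)  # "anything else" stays at 0; the other two split roughly 50/50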
  • andrewk
    2.1k
    there's a 50% chance that the other envelope contains £20 and a 50% chance that the other envelope contains £5.
    Michael
    The resolution of the apparent paradox is that the probabilities are not 50:50 for most values of Y.

    Either the player has not adopted a Bayesian prior distribution for X, in which case she has no basis for assigning any probabilities to the options of U=X and U=2X, or she uses the prior distribution to calculate the probabilities. (U is the value in the unopened envelope)

    It is fairly straightforward to show (and I did so in my note) that when she does that, regardless of the prior distribution adopted, the probability of U=2X depends on the observed value Y and will be more than 50% up to a certain calculable critical point after which it will be less than 50%.

    The case where the prior distribution is X=5 or 10 with equal probability demonstrates this. If Y=5 then it is certain that U=2X=10. If Y=20 it is certain that U=X=10. If Y=10 then the odds are 50:50 that U=2X, ie that U is 20 rather than 5.

    There are many different ways the calculations for this can be approached, and we've seen several of them in this thread. But whatever approach one is using, one should subject it to a hard critical eye when a 50:50 assumption is made because in many cases, and possibly in all cases when it's about what's in the unopened envelope, that assumption will not be justified.
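
    A quick numerical check (Python) of the worked case above, assuming exactly the set-up described: X is 5 or 10 with equal probability, the two amounts are shuffled into the envelopes, and one envelope is opened at random.

        import random
        from collections import defaultdict

        double, total = defaultdict(int), defaultdict(int)
        for _ in range(200_000):
            x = random.choice([5, 10])
            y, u = random.sample([x, 2 * x], 2)   # y = opened envelope, u = unopened
            total[y] += 1
            if u == 2 * x:
                double[y] += 1

        for y in sorted(total):
            print(y, round(double[y] / total[y], 2))
        # prints roughly: 5 1.0 / 10 0.5 / 20 0.0 -- i.e. P(U = 2X) depends on Y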
  • Jeremiah
    1.5k
    The more I read the responses to this thread, the more appreciation I grow for a good quality book.
  • Jeremiah
    1.5k
    What is the point if you are just modeling yourself? Math and science should help us diverge from the self and step closer to the truth.
  • Efram
    46
    25 pages of reading later...

    I'm going to try to come at this from a different angle to try to break the stalemate. I'm in a curious position because I can somewhat see where both sides are coming from.

    I think it would help to clarify one thing first: The rule that you could potentially win more than you risk losing holds true regardless of the amount in the envelope - so opening the chosen envelope is irrelevant because a) it doesn't physically change anything and b) you don't learn anything objectively significant that you don't already know.

    So we can temporarily take opening the envelope and learning any amounts out of the equation. I include this provision because I think it helps dispel the illusions / the mental trickery / the human intuition, etc. that comes from thinking about fixed amounts of money (I'll elaborate on this later).

    So having chosen envelope A, you could say that if envelope B contains twice the amount, you potentially gain more than you'd potentially lose by switching - but importantly, you can also say that if A contains twice the amount as B, you potentially gain more than you'd lose by staying. So logically/mathematically/statistically, there's no advantage to either strategy.

    So I think the mistake here was only applying this logic to switching, without realising it applies to staying. Also, I think the promise of 2X of a known amount creates the illusion and the desire to chase money that potentially doesn't even exist.

    (You may also say that any statistical method that searches for an objectively superior strategy and depends on opening the envelope, must be inherently flawed - because as explained above, it's an insignificant step revealing objectively useless information - so if you're somehow making it significant, you're doing something wrong.)

    Now to revisit the idea of knowing the amount in the envelope: I think using amounts like £5/10/20 is misleading because £5 intuitively feels like a throwaway amount that anyone would be happy to lose. Instead, what if your chosen envelope contained a cheque for £10 million? Would you throw away £5m chasing an additional £10m that may not even exist?

    And here it gets interesting for me because... given a £10 envelope, I really would switch because a £5 loss is nothing. Given a £10m envelope, I'd stay. So I think there is an argument to be made that, on an individual basis, depending entirely on the circumstances surrounding the person (their financial situation, their priorities and such) and the amount of money on offer, in some cases they may choose to gamble away their known amount chasing a higher amount, accepting that this is a purely subjective decision and that it doesn't increase their chances of maximising their profit, it's not an inherently superior strategy, etc. It's purely, "In this instance, I'd be happy to lose £x in the pursuit of potentially winning £y."

    ... So I think another flaw here was this assumption/assertion that a gamble with a 2:1 payout and 50% chance of winning is always worth taking. Again, I would not bet £5m on the 50% chance of getting £10m back. You could in fact draw up many scenarios in which the gamble would be stupid (e.g. where the amount you're gambling away would be life-changing or where losing that money would be life-threatening, whereas the higher amount you could potentially win would have diminishing returns (again, I could have some fun with £5m, a lot of fun with £10m, but wouldn't even know what to do with £20m))

    In summary:

    I disagree that you can make the absolute claim that switching is always the better strategy, in the sense that it's either always in the person's best interests (which is subjective and may be wrong, such as in the personal example I gave) or on the basis that it is somehow statistically/logically/strategically superior (which isn't true at all). But I do agree that an individual in a real world situation may choose to gamble and it may be the "right" choice for them specifically.

    (I may yet change my mind on all of this after I've wrapped my head around it a bit more)
  • Benkei
    7.7k
    Funny, I was just thinking how inserting an amount just confuses things. 10 GBP is as much a meaningless placeholder as Y or A to denote the value of the envelope in that respect. Let's stick to what we know, we don't know the value of either envelope but we do know one is twice as much as the other. So one is X and the other is 2X for a total of 3X for both envelopes.

    Let's name the envelopes Y and Z (note, they do not denote amounts). The expression "if Y = X then Z is 2X or X/2" only adds up to 3X in one instance, the rest results in false conclusions as it contradicts the premise that the total should always be 3X. Knowing that Y is either X or 2X, we get four possibilities:

    If Y = X then Z = 2X for a total of 3X is true.
    If Y = X then Z = X/2 for a total of 1.5X is false.
    If Y = 2X then Z = 2X for a total of 4X is false.
    If Y = 2X then Z = X/2 for a total of 2.5X is false.

    This suggests that replacing the variable of one envelope with a fixed amount or a fixed placeholder messes up things. I'm not sure why. Maybe @andrewk can tell me.
  • Michael
    15.5k
    "if Y = X then Z is 2X or X/2" only adds up to 3X in one instance, the rest results in false conclusions as it contradicts the premise that the total should always be 3X.
    Benkei

    You're conflating two different uses of X. When you say "if Y = X then Z is 2X or X/2" you're defining X as the value of Y, but when you say that the total should be 3X you're defining X as the smaller amount, which might not be Y.

    What you should say is:

    If Y = X, where X is the smaller amount, then Z = 2X, and if Y = 2X, where X is the smaller amount, then Z = X.

    Now when we introduce Y = 10 we have two conditionals:

    1. If Y = 10 and Y = X, where X is the smaller amount, then Z = 2X and Z = 20
    2. If Y = 10 and Y = 2X, where X is the smaller amount, then Z = X and Z = 5

    One of these antecedents is true and one of these is false. I assign a probability of 50% to each, either because I know that the value of X was chosen at random from a distribution that includes 5 and 10 or because I have no reason to prefer one over the other (the principle of indifference). So there's a 50% chance that Z = 20 (if X = 10 was selected) and a 50% chance that Z = 5 (if X = 5 was selected).
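
    A small sketch (Python) of those two conditionals, under the assumption named above that the value of X was chosen at random (here, uniformly) from a distribution containing 5 and 10:

        import random

        other = []
        for _ in range(200_000):
            x = random.choice([5, 10])
            y, z = random.sample([x, 2 * x], 2)   # y = my envelope, z = the other one
            if y == 10:                           # condition on having seen £10
                other.append(z)

        print(round(other.count(20) / len(other), 2))   # about 0.5 (antecedent 1 true)
        print(round(other.count(5) / len(other), 2))    # about 0.5 (antecedent 2 true)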
  • Benkei
    7.7k
    That's how you proposed it originally.

    In the above you have an inherent contradiction in your conditionals as X and 2X are both 10. As a result, and as you correctly state, one of them is false by necessity. For probability, both outcomes should at least be possible. Otherwise, one outcome carries a probability of 0% and the other 100%. That you don't know which one is true or false does not result in an equal probability.
  • Michael
    15.5k
    In the above you have an inherent contradiction in your conditionals as X and 2X are both 10.
    Benkei

    Obviously. That's how different conditionals work. Let's say I toss a coin and record the result R:

    1. If R = heads then ...
    2. If R = tails then ...

    1 and 2 have contradictory antecedents. But I'm not saying that both the antecedent of 1 and the antecedent of 2 are true. One is true and one is false, with a 50% chance of each being true. And the same with my example with the envelopes:

    1. If X = 10 then ...
    2. If 2X = 10 then ...

    Or:

    1. If X = 10 then ...
    2. If X = 5 then ...

    Or:

    1. If the £10 envelope is the smaller envelope then ...
    2. If the £10 envelope is the larger envelope then ...
  • Benkei
    7.7k
    1. If R = heads then ...
    2. If R = tails then ...

    1 and 2 have contradictory antecedents. But I'm not saying that both the antecedent of 1 and the antecedent of 2 are true. One is true and one is false, with a 50% chance of each being true. And the same with my example with the envelopes:

    1. If X = 10 then ...
    2. If 2X = 10 then ...
    Michael

    Well, you're turning it around here. To illustrate with coins it would be something weird like:

    If R = heads then
    If 2R = heads then

    The correct one is:

    1. If X = 10 then ...
    2. If X = 5 then ...
    Michael

    If X = 10 then the other is 20
    If X = 5 then the other is 10

    In both cases you have now assumed you're opening the smaller envelope and the other HAS to be the bigger envelope. Both outcomes are in principle possible as the total is indeed 3X.
  • Michael
    15.5k
    If X = 10 then the other is 20
    If X = 5 then the other is 10

    In both cases you have now assumed you're opening the smaller envelope and the other HAS to be the bigger envelope.
    Benkei

    You're missing the "and Y = 10" part:

    1. If Y = 10 and X = 10 then ...
    2. If Y = 10 and X = 5 then ...

    Or to put it in simple terms:

    1. If my £10 envelope is the smaller envelope then ...
    2. If my £10 envelope is the larger envelope then ...

    These are the perfectly reasonable conditionals I'm considering, and I'm assigning a probability of 50% to each antecedent.
  • Jeremiah
    1.5k


    In relation to the problem in the OP:

    Why do you need to include Y, Michael? There is no justified reason to do that.

    It does not provide updated information for the original uncertainty of X or 2X, which is reason enough to leave it out, and it will absolutely give untrue information. One of the values in your domain is not true; one of your subjective possible outcomes is not an objective possible outcome. So why are you modeling the subjective when you can model the objective? Why is your subjective model superior to an objective model?
  • Jeremiah
    1.5k
    There has been no justified reason at all as to why we should engage in fantasy over reality.
  • Michael
    15.5k
    Why do you need to include Y, Michael?
    Jeremiah

    It's in your OP:

    "Initially you are allowed to pick one of the envelopes, to open it, and see that it contains $Y."

    It does not provide updated information for the original uncertainty of X or 2X

    I know. But as you say, our £10 could be X or our £10 could be 2X. If our £10 is X then X = 10 and the other envelope contains 2X = £20. If our £10 is 2X then X = 5 and the other envelope contains X = £5.

    So although it doesn't help us determine if we have the smaller envelope (X) or the larger envelope (2X), it does help us determine how much could possibly be (or not be) in the other envelope. It could be £5. It could be £20. It can't be £40.

    With that new information we can make a decision, and I think @Efram is right when he says this:

    Now to revisit the idea of knowing the amount in the envelope: I think using amounts like £5/10/20 is misleading because £5 intuitively feels like a throwaway amount that anyone would be happy to lose. Instead, what if your chosen envelope contained a cheque for £10 million? Would you throw away £5m chasing an additional £10m that may not even exist?

    And here it gets interesting for me because... given a £10 envelope, I really would switch because a £5 loss is nothing. Given a £10m envelope, I'd stay.

    If there's £10 in my envelope then I would take the risk and switch, hoping for £20. If there's £10,000,000 in my envelope then I would say that sticking is better than switching because the potential loss of £5,000,000 is too significant.

    I think this notion that the answer to the question is determined by whether or not we'd win or lose in the long run over repeated games (with various starting envelopes and possible values of X), and that if we'd break even then it doesn't matter what we do, is short-sighted.
  • RainyDay
    8
    Congratulations (almost) everyone for keeping a cool head through 25 pages. I wonder if acceptance/rejection of the principle of indifference (https://en.wikipedia.org/wiki/Principle_of_indifference) is what divides many people.

    I'd also like to see the original problem rephrased to eliminate personal valuations of the outcome. There's little value in the question if we can all argue along lines of "oh well, I always switch 'cause I'm curious" or "after I see the first amount, I already feel like a winner and that's enough for me".
  • Jeremiah
    1.5k
    The simple fact that it is there is not a reason to include it. Determining what information to include is part of the process; you need actual justification, especially considering that your approach leads to an impossible outcome. And I am only concerned with formal justifications.
  • andrewk
    2.1k
    Let's name the envelopes Y and Z (note, they do not denote amounts). The expression "if Y = X then Z is 2X or X/2" only adds up to 3X in one instance, the rest results in false conclusions as it contradicts the premise that the total should always be 3X. Knowing that Y is either X or 2X, we get four possibilities:

    If Y = X then Z = 2X for a total of 3X is true.
    If Y = X then Z = X/2 for a total of 1.5X is false.
    If Y = 2X then Z = 2X for a total of 4X is false.
    If Y = 2X then Z = X/2 for a total of 2.5X is false.

    This suggests that replacing the variable of one envelope with a fixed amount or a fixed placeholder messes up things. I'm not sure why. Maybe andrewk can tell me.
    Benkei
    Hi Benkei. Nice to see you join this discussion.

    Since X denotes the smaller of the two amounts, the first statement is true and the other three are false. But the player cannot use the statements because she only knows Y. She doesn't know what X is, and unless she adopts a Bayesian prior distribution for X she doesn't know the probability that Y=X either, so she can't use conditionals.

    I understand that people feel discomfort with the use of Bayesian priors and feel that their expectation of gain is then based on their own guessed distribution, which they know to be wrong, but that's how Bayesian methods work. For all their limitations, they are all we have (except for a complicated exception that I'll mention further down, and which I doubt anti-Bayesians will feel any more comfortable with).

    If we refuse to use a Bayesian prior then what can we say? The value X is a definite value, known to the game host, not a random variable. We know the value Y, having seen it in the opened envelope.

    If we insist on modelling the situation from a full-knowledge perspective rather than our limited-knowledge perspective, then the gain from switching is X with certainty if Y=X and -X with certainty if Y=2X. But we can't know which is the case, so the calculation is useless. We can't introduce probabilities of one or the other case being actual because we are modelling from a position of full knowledge and only one of them is true.

    Some of the calculations adopt a half-way position where they make X a fixed, non-random amount, and the coin flip result B a Bernoulli(0.5) random variable, such that Y=X if B=0, otherwise Y=2X. Such a calculation reflects the state of knowledge of the game host, if we assume the host knows X but doesn't know which envelope has the larger amount in it.

    Under such an approach the expected gain from switching is zero.

    But we ask, why is it reasonable to model the knowledge limitation of the host by randomness, but not to do the same for the player? If that approach is reasonable for the host, it is reasonable for the player, and it is more appropriate to use a Bayesian approach, since it is the player's expectation that we have been asked about.

    On the other hand, if modelling knowledge limitation by randomness is not considered reasonable then we are forced back to where everything is modelled as known and the expected gain is either X or -X with certainty, but we don't know which applies.

    Now for that exception. If one doesn't like that Bayesian approaches model gains by assuming the prior distribution is correct, then we could introduce a distribution for errors in the distribution.

    Say our prior is a lognormal distribution. That has two parameters mu and sigma, for which we assume values to do our calcs. We could reflect our lack of certainty about those parameters by making them random variables themselves. Then the calculation will reflect our uncertainty about the prior.

    But guess what happens! It turns out that this approach is identical to assuming a prior that is the convolution of the original prior with the distributions of the two parameters. So all we've done by modelling the fact that our prior is a guess is change to a different prior. We can repeat that process as often as we like - modelling the uncertainty in the uncertainty in the uncertainty in .... - and we'll still end up with a Bayesian prior. It'll just be more dispersed than the one we started with.

    We might reject the parametric approach as too constrained and instead model uncertainty directly on the CDF of the prior. That gets messy but it will still end up with the same general outcome - a single but different Bayesian prior.

    Summary

    We have three options:

    1. Treat X and Y as both known. Then there is no randomness and the switch gain is either X with certainty or -X with certainty, so the expected gain is equal to that certain gain, but the player doesn't know the amount, so this approach is useless to her.

    2. Treat Y as known and model X using a Bayesian prior. This leads to a rule under which the player can calculate a value c such that her expected switch gain is positive if Y<c and negative if Y>c.

    3. Treat X as known and Y as unknown. Then the switch gain has a distribution of X or -X with even odds, so the expected switch gain is zero. This is the approach defended by srap. The approach is coherent but it begs the question of why it is valid to model lack of knowledge about Y/X by randomness, but not lack of knowledge about X.

    Note that approaches 2 and 3 both predict zero as the expected gain from blind switching. The difference between them is that 2 gives a strategy for switching based on the observed value which, when followed, gives a positive expected gain from switching.
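
    Here's a sketch (Python) of what option 2 looks like in practice, using the lognormal prior mentioned earlier. The parameter values mu = 3 and sigma = 0.8 are arbitrary choices, there only to make the crossover point c visible.

        import math

        MU, SIGMA = 3.0, 0.8

        def prior_pdf(x):
            """Lognormal prior density for the smaller amount X."""
            z = (math.log(x) - MU) / SIGMA
            return math.exp(-0.5 * z * z) / (x * SIGMA * math.sqrt(2 * math.pi))

        def expected_switch_gain(y):
            """Posterior expected gain from switching after observing Y = y.
            Y = y is consistent with X = y (other envelope holds 2y, gain +y)
            or with X = y/2 (other envelope holds y/2, gain -y/2); the factor 2
            on the first weight comes from the change of variables for Y = 2X."""
            w_small = 2 * prior_pdf(y)
            w_large = prior_pdf(y / 2)
            return (w_small * y - w_large * (y / 2)) / (w_small + w_large)

        prev = 0.5
        for i in range(2, 400):
            y = i * 0.5
            if expected_switch_gain(prev) > 0 >= expected_switch_gain(y):
                print("expected switch gain turns negative near Y =", y)   # this is c
                break
            prev = y
        # For this prior, c works out to sqrt(2) * exp(MU + SIGMA**2), about 53.9.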
  • JeffJo
    130
    You are playing a game for money. There are two envelopes on a table. You know that one contains $X and the other $2X, [but you do not know which envelope is which or what the number X is]. Initially you are allowed to pick one of the envelopes, to open it, and see that it contains $Y. You then have a choice: walk away with the $Y or return the envelope to the table and walk away with whatever is in the other envelope. What should you do?
    Imagine three variations of this game:
    • Two pairs of envelopes are prepared. One pair contains ($5,$10), and the other pair contains ($10,$20). You pick a pair at random, and then pick an envelope from that pair. You open it, and find $10. Should you switch? Definitely. The expected value of the other envelope is ($5+$20)/2=$12.50.
    • Ten pairs of envelopes are prepared. Nine pairs contain ($5,$10), and the tenth pair contains ($10,$20). You pick a pair at random, and then pick an envelope from that pair. You open it, and find $10. Should you switch? Definitely not. The expected value of the other envelope is (9*$5+$20)/10=$6.50.
    • You are presented with ten pairs of envelopes. You are told that some pairs contain ($5,$10), and the others contain ($10,$20). You pick a pair at random, and then pick an envelope from that pair. You open it, and find $10. Should you switch? You can't tell.

    (And note that in any of them, you know you will get $10 if you switch from either $5 or $20.)

    The error in most analyses of the Two Envelope Problem is that they try to use only one random variable (you had a 50% chance to pick an envelope with $X or $2X) when the problem requires two (what are the possible values of X, and what are the probabilities of each?).

    The Principle of Indifference places a restriction on the possibilities that it applies to: they have to be indistinguishable except for their names. You can't just enumerate a set of cases and claim each is equally likely. If you could, there would be a 50% chance of winning, or losing, the lottery. In the Two Envelope Problem, you need to know the distribution of the possible values of $X to answer the question.
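
    For what it's worth, a quick check (Python) of the expected values in the first two variations above:

        from fractions import Fraction

        def other_envelope_ev(pair_counts):
            """EV of the unopened envelope, given the opened one shows $10.
            pair_counts maps an envelope pair to how many such pairs were prepared;
            a pair is chosen uniformly at random, then an envelope from that pair."""
            w_total, w_sum = Fraction(0), Fraction(0)
            for (a, b), n in pair_counts.items():
                for opened, other in ((a, b), (b, a)):     # either envelope equally likely
                    if opened == 10:
                        w_total += n
                        w_sum += n * other
            return w_sum / w_total

        print(other_envelope_ev({(5, 10): 1, (10, 20): 1}))    # 25/2, i.e. $12.50
        print(other_envelope_ev({(5, 10): 9, (10, 20): 1}))    # 13/2, i.e. $6.50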
  • Jeremiah
    1.5k
    You can't just enumerate a set of cases and claim each is equally likely. If you could, there would be a 50% chance of winning, or losing, the lottery.
    JeffJo

    That is a very bad understanding of what a sample space and an event are. You are not applying your Principle of Indifference there, which, from your link, states: "The principle of indifference states that if the n possibilities are indistinguishable except for their names, then each possibility should be assigned a probability equal to 1/n." n in this case would be the total number of possible combinations of the lottery numbers.
  • Jeremiah
    1.5k
    Furthermore, it makes no sense to use a probability density curve on this problem, considering X would only be selected ONCE, which means X<2X ALWAYS (given that X is positive and not 0). That means no matter what X is, the expected value will always be X/2 + X, in every single case.

    If you try to fit X to a statistical distribution you are just piling assumptions on top of assumptions. You are making assumptions about the sampling distribution and the variance. Assumptions which you do not have the data to justify. You are also making assumptions about how X was even selected. Assumption on top of assumption on top of assumption...

    Ya, great math there.
  • Jeremiah
    1.5k
    There is a reason we need to consider X as an unknown and approach it as such, with algebra. To do otherwise means making a bunch of baseless assumptions.

    I know algebra is not the most glamorous math; it is, however, very robust, which makes it the more appropriate tool when dealing with these unknowns.
  • Andrew M
    1.6k
    2. Treat Y as known and model X using a Bayesian prior. This leads to a rule under which the player can calculate a value c such that her expected switch gain is positive if Y<c and negative if Y>c.
    andrewk

    I think the issue is that even if you know Y from opening the initial envelope, the expected gain from switching is still zero if you don't also know c.

    You could potentially calculate c if you observed many runs of the experiment and c remained constant, in which case you could condition on amounts less than or equal to c and in those cases calculate a positive expected gain from switching.

    However, for a single run, c just is the lower amount in the two envelopes and is, per the problem definition, unknown.
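
    A rough sketch (Python) of the repeated-runs idea. The assumption, purely for illustration, is that X really is drawn each run from some fixed distribution the player never sees (here a lognormal with mu = 3, sigma = 0.8); the player just records what switching would have paid at each observed amount.

        import math
        import random
        from collections import defaultdict

        random.seed(1)
        gains = defaultdict(list)

        for _ in range(500_000):
            x = random.lognormvariate(3.0, 0.8)        # the smaller amount this run
            y, other = random.sample([x, 2 * x], 2)    # y = observed, other = unopened
            band = int(math.log2(y))                   # group observed amounts into bands
            gains[band].append(other - y)              # realised gain from switching

        for band in sorted(gains):
            avg = sum(gains[band]) / len(gains[band])
            print(f"saw roughly {2 ** band} to {2 ** (band + 1)}: mean switch gain {avg:+.2f}")
        # The mean gain tends to be positive for the smaller observed amounts and
        # negative for the larger ones; where it flips depends entirely on the
        # distribution that is generating X.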
  • andrewk
    2.1k
    I think the issue is that even if you know Y from opening the initial envelope, the expected gain from switching is still zero if you don't also know c.
    Andrew M
    c is not an observer-independent item that can be known or not. It is a feature of the Bayesian prior distribution the player adopts to model her uncertainty.

    From the God's-eye (ie omniscient) point of view, which is perspective 1 from the quoted post, there is no c, because there is no non-trivial probability distribution of X. X is a fixed quantity, known only to God and to the game show host.

    So we cannot talk meaningfully about 'the real value of c'.
  • Andrew M
    1.6k
    So we cannot talk meaningfully about 'the real value of c'.
    andrewk

    Can you give a concrete example where such a value would be used?