• Jacykow
    17
    The description of the paradox, copied from Wikipedia:

    There is an infallible predictor, a player, and two boxes designated A and B. The player is given a choice between taking only box B, or taking both boxes A and B. The player knows the following:
    - Box A is clear, and always contains a visible $1,000.
    - Box B is opaque, and its content has already been set by the predictor:
      - If the predictor has predicted the player will take both boxes A and B, then box B contains nothing.
      - If the predictor has predicted that the player will take only box B, then box B contains $1,000,000.
    The player does not know what the predictor predicted or what box B contains while making the choice.


    My question is why would anyone choose two boxes if the predictor is infallible? I have two main points which I hope to see torn down because this 'paradox' seems just too easy.

    1. The predictor's infallibility does not exclude the existence of free will. It does not take an all-knowing entity to outsmart or predict one's actions. Observing a game of poker or chess with a large discrepancy in skill clearly shows that humans are somewhat predictable.

    2. If one chooses two boxes and gets $1,001,000, the predictor was not infallible after all. Aiming for this reward is aiming for something that is against the rules. One could just as well choose only box A and hope for $2,000, on the grounds that the box might not be truly clear and might contain more money.
  • Outlander
    2.1k
    Why wouldn't they? They don't know the predictor is infallible, nor do they know what may or may not be in Box B. I was thinking it'd be funny if Box B contained an IOU for $2,000 legally bound to the owner of the box, but unfortunately that was not part of this riddle.

    An infallible predictor, as fictitious as the idea is, would focus on outcome, free will or not. So they're either right or simply not infallible.
  • fishfry
    3.4k
    I never like these predictor-type puzzles. If you have a predictor you can ask it to predict whether its next statement will be a lie. If it says yes then it told the truth, making the statement a lie. You get a contradiction.

    Therefore there is no such predictor. The very concept of a predictor is contradictory, hence anything follows. All such puzzles are vacuous. I get that they're popular, but I don't see the appeal.
  • unenlightened
    9.2k
    My question is why would anyone choose two boxes if the predictor is infallible?Jacykow

    The logic is that what I choose now cannot influence what is already in the box. The predictor has to predict whether you are bound by logic or by faith in the predictor. By your question, you are bound by faith, whereas by my answer, I am bound by logic. It's a paradox precisely because logic leads to failure.
  • Jacykow
    17
    The very concept of a predictor is contradictoryfishfry

    I think that is the answer to this and many other 'unsolvable paradoxes'. They contain contradictory assumptions. I was hoping for something less anticlimactic.
  • Jacykow
    17

    I'm not sure if you understood my second point. Logic does not have assumptions; it works on the ones you already have declared faith in. The paradox clearly states that the predictor is infallible, and there is no reason to question this any more than to question whether box A contains $1,000 or not.
  • unenlightened
    9.2k
    it works on the ones you already have declared faith in.Jacykow

    Yes. It also says that the predictor puts something or nothing in box B before the choice is made. So my faith is that my choice cannot affect the past.
  • Pierre-Normand
    2.4k
    I never like these predictor-type puzzles. If you have a predictor you can ask it to predict whether its next statement will be a lie. If it says yes then it told the truth, making the statement a lie. You get a contradiction.

    Therefore there is no such predictor. The very concept of a predictor is contradictory, hence anything follows. All such puzzles are vacuous. I get that they're popular, but I don't see the appeal.
    fishfry

    This doesn't show that the concept of a predictor (or of someone having an infallible predictive power regarding the behaviour of some external system) is incoherent. It merely shows that the predictive power of a predictor can be defeated by the unavoidable effects that the predictor may have on the event that is meant to be predicted. In the case you are envisioning, the person whose behaviour is being predicted is being informed of the content of the prediction before she is called to make a choice. In that case she can indeed act contrary to what she had been predicted to do. But if we ensure that she is not being informed of the content of the prior prediction, and we ensure that her behaviour isn't otherwise causally affected by the predictor's making of his prediction, then there is no such principled limit on the power of the predictor.

    This is how we are normally expected to conceive of the act of the predictor in Newcomb's problem. This predictive act doesn't have any causal effect on the subsequent behaviour that is being predicted. (It only has a causal effect on the content of box B). Although the player was informed that her choice (whenever she makes it) will already have been predicted by the predictor, she doesn't know what the content of the prediction is until after she has made her choice. She is therefore not in any position to deliberately make the prediction false through acting contrary to it. The paradox remains.
  • SophistiCat
    2.2k
    I never like these predictor-type puzzles. If you have a predictor you can ask it to predict whether its next statement will be a lie. If it says yes then it told the truth, making the statement a lie. You get a contradiction.

    Therefore there is no such predictor. The very concept of a predictor is contradictory, hence anything follows. All such puzzles are vacuous. I get that they're popular, but I don't see the appeal.
    fishfry

    The predictor may be limited to predicting that one thing and nothing else, so you can't defeat it that way. Also, the predictor doesn't have to be infallible; it only needs to be better than chance. Let's say the predictor is known to be right 55% of the time - not all that implausible. With large enough leverage, the statistical argument still says that you should one-box, while the causal argument says that you should two-box.
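
    A quick Python sketch makes that concrete (the payoffs are the ones from the OP; the 55% figure is just the example above, and the function name is mine):

    def expected_values(accuracy):
        """EVs of one-boxing vs two-boxing when the predictor is right with probability `accuracy`."""
        ev_one_box = accuracy * 1_000_000                # box B was filled iff one-boxing was (correctly) predicted
        ev_two_box = 1_000 + (1 - accuracy) * 1_000_000  # box A is guaranteed; box B is full only if the predictor erred
        return ev_one_box, ev_two_box

    print(expected_values(0.55))  # (550000.0, 451000.0) -- one-boxing already wins at 55%
    # Break-even accuracy with these payoffs: p * 1,000,000 = 1,000 + (1 - p) * 1,000,000, i.e. p = 0.5005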

    But I take your larger point that in general, with such puzzles one should not automatically assume that the described scenario is possible, even if it sounds pretty coherent.
  • Michael
    15.6k
    I don't understand how it's a paradox. By the thought experiment's own premise the predictor is infallible, so whatever I choose is what the predictor predicted. If I choose A + B then the predictor would have predicted this and so put nothing in B, and I win $1,000. If I choose B then the predictor would have predicted this and so put $1,000,000 in B, and I win $1,000,000.

    It isn't possible to win $0 or $1,001,000 and so those alleged outcomes ought not be considered.
  • Isaac
    10.3k


    Yeah, this one always puzzled me too. I don't see anything at all difficult about the predictor knowing what you'll choose by the time you must make your final choice. It doesn't matter how many times prior to that final choice you flip between "...but he'll have predicted I'd think that, so I'll think the opposite...". The point is you can only perform those zig-zags a finite number of times and so a predictor could feasibly predict how many times you'd do it in the time you have available and so arrive at the correct answer.



    I think it says a lot more about how viscerally offensive people seem to find the idea that anyone could possibly know how they're going to act.

    Get over it, people; you're not that special.
  • SophistiCat
    2.2k
    I don't understand how it's a paradox.Michael

    Yeah, that's how most people react to it. The paradox is that half of those people who think that the answer is obvious do not agree with the other half :)
  • bongo fury
    1.6k
    I think roughly half of us are indignant that the problem is clearly stated as,

    There is an infallible predictor,...

    ... but then,

    Nozick avoids this issue by positing that the predictor's predictions are "almost certainly" correct, thus sidestepping any issues of infallibility and causality.

    E.g. this,

    It isn't possible to win $0 or $1,001,000 and so those alleged outcomes ought not be considered.Michael

    is perfectly true but for the switch (to fallible).
  • Michael
    15.6k
    Then let's assume that the predictor's accuracy is 99%.

    If I pick A + B then there's a 99% chance that I win $1,000 and a 1% chance that I win $1,001,000.
    If I pick B then there's a 99% chance that I win $1,000,000 and a 1% chance that I win $0.

    I'd pick B.
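
    Spelling out the expected values under that assumption (payoffs as stated in the OP): EV(A + B) = 0.99 × $1,000 + 0.01 × $1,001,000 = $11,000, while EV(B) = 0.99 × $1,000,000 + 0.01 × $0 = $990,000.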
  • bongo fury
    1.6k
    But you can at least believe that more risk-averse people might prefer to (in effect) bank the grand.

    I doubt we'd call that a paradox, though, without the "infallible" misdirection. The OP has a point.
  • Michael
    15.6k
    But you can at least believe that more risk-averse people might prefer to (in effect) bank the grand.bongo fury

    Perhaps, but then that's less about probability theory and more about personal circumstance. Someone in dire financial straits might prefer a guaranteed $1,000 over a 99% chance of $1,000,000, but I think that's tangential to the alleged paradox of the thought experiment.
  • SophistiCat
    2.2k
    If I pick A + B then there's a 99% chance that I win $1,000 and a 1% chance that I win $1,001,000.Michael

    At the time when you are making your decision the money either already is or is not in the box. Your decision cannot change this fact (unless you entertain some strange ideas of backward causality). So if the money is in the box, then the choice is between $1,001,000 and $1,000,000. If the money is not in the box, then the choice is between $1,000 and nothing. Either way, you get more by two-boxing.
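
    Put as a table, with the columns fixed by what the predictor has already done (payoffs from the OP):

    choice \ box B already...    contains $1,000,000    is empty
    take both boxes (A + B)      $1,001,000             $1,000
    take box B only              $1,000,000             $0

    Whichever column you are actually in, the two-box row pays $1,000 more; that is the dominance argument.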

    Of course, if everyone reasoned that way, then the predictor would have had a lousy track record, contrary to the stated assumption.
  • Michael
    15.6k
    Of course, if everyone reasoned that way, then the predictor would have had a lousy track record, contrary to the stated assumption.SophistiCat

    I don't see how that follows. If everyone reasons that way and so picks A + B, and if the predictor is infallible, then it would always predict that the player would pick A + B.
  • Michael
    15.6k
    At the time when you are making your decision the money either already is or is not in the box. Your decision cannot change this fact (unless you entertain some strange ideas of backward causality). So if the money is in the box, then the choice is between $1,001,000 and $1,000,000. If the money is not in the box, then the choice is between $1,000 and nothing. Either way, you get more by two-boxing.SophistiCat

    Although it is true that if I pick A + B then I will win $1,000 more than if I pick B, it is also true that if I pick A + B then I have a 99% chance of winning $1,000 and that if I pick B then I have a 99% chance of winning $1,000,000. That just follows from the thought experiment's premise that the predictor has an accuracy of 99% (else what would that premise mean?).

    Under your account it doesn't matter how (in)accurate the predictor is. Does that seem right? Surely it makes a difference whether 99 out of every 100 people who pick B win $1,000,000 or 50 out of every 100 people who pick B win $1,000,000? It seems strange to think that this is irrelevant information.

    It might be that that premise can only be true if backwards causality happens, but that's irrelevant to the thought experiment. No one is claiming that the thought experiment can ever happen for real.
  • Jacykow
    17
    So my faith is that my choice cannot affect the past.unenlightened

    The whole point of a predictor is that he doesn't travel back in time. He simply knew ahead of time what your answer would be, as described below:

    It doesn't matter how many times prior to that final choice you flip between "...but he'll have predicted I'd think that, so I'll think the opposite...". The point is you can only perform those zig-zags a finite number of times and so a predictor could feasibly predict how many times you'd do it in the time you have available and so arrive at the correct answer.Isaac

    One might say that their free will is unpredictable, which is why I made two points, the first being:

    1. The predictor's infallibility does not exclude the existence of free will. It does not take an all-knowing entity to outsmart or predict one's actions. Observing a game of poker or chess with a large discrepancy in skill clearly shows that humans are somewhat predictable.Jacykow
  • Pierre-Normand
    2.4k
    1. The predictor's infallibility does not exclude the existence of free will. It does not take an all-knowing entity to outsmart or predict one's actions. Observing a game of poker or chess with a large discrepancy in skill clearly shows that humans are somewhat predictableJacykow

    You seem to be arguing that the predictor's being able to reliably predict your choice doesn't rob you of your freedom to see to it that you obtain $1,000,000 by choosing just one box (as opposed to merely obtaining $1,000 by choosing two boxes). But this merely rehearses the standard argument for choosing only one box. It doesn't address the flaw in the argument that supports the opposite choice.

    The argument for two-boxing rests on the premise that the content of the boxes already has been determined prior to your making your choice and hence concludes that in any situation (that is, whatever it is that the predictor already has predicted) you are better off taking both boxes rather than taking one. If you are choosing to take only one box in order to see to it that there is $1,000,000 in that box, then you are unwarrantedly assuming that it still is within your power to determine the content of that box. But the argument for two-boxing rests on denying that you have any such power at the time when you are called to deliberate and act. The past is the past and you can't alter it. How do you counter this "powerlessness" argument?

    (By the way, I am a one-boxer myself, but I am playing devil's advocate here)
  • Andrew M
    1.6k
    My question is why would anyone choose two boxes if the predictor is infallible?Jacykow

    They shouldn't. Since the predictor is infallible, there can only be two possible outcomes:

    • player chooses box B only; box B contains $1,000,000
    • player chooses both boxes A and B; box B is empty

    I think what makes the paradox interesting is that it's not the case that by choosing both boxes the player will win $1,000 more than by choosing one box. That's the two-boxer intuition, but it doesn't apply here. So we might wonder how the world could be like that. And since only one of those two conditions can be true on conventional assumptions (i.e., either box B contains $1,000,000 or else it is empty), it further seems that the player doesn't have a real choice either.

    However an alternative approach is to model the scenario using quantum entanglement. This is done by preparing two qubits in the following superposition state (called a Bell state):

    |00> + |11>
    

    That corresponds to the above conditions in Newcomb's paradox. The first qubit (i.e., the first digit of each superposition state) represents the player's choice: 0 for choosing box B only, 1 for choosing both boxes. The second qubit represents the amount of money in Box B: 0 for $1,000,000, 1 for $0. So describing the superposition in terms of Newcomb's paradox:

    |player chooses box B only; box B contains $1000000> +
      |player chooses both boxes A and B; box B is empty>
    

    If the player chooses Box B only (i.e., measures 0 on the first qubit [*]), that collapses the superposition to:

    |player chooses box B only; box B contains $1000000>
    

    Thus, it is certain that box B contains $1,000,000. That is, when the second qubit is measured, it will be 0.

    Similarly, if the player chooses both boxes A and B (i.e., measures 1 on the first qubit [*]), that collapses the superposition to:

    |player chooses both boxes A and B; box B is empty>
    

    Thus, it is certain that box B is empty. That is, when the second qubit is measured, it will be 1.

    To say this in a different way, Newcomb's paradox is like flipping a pair of fair coins a thousand times where the result is always HH or TT, but never HT or TH. Classically, that is extremely unlikely. But using the above Bell state in a quantum experiment, the result is certain. So what makes the predictor infallible here is not that he knows whether the player will flip heads or tails. He instead just knows that the paired coin flips are always correlated. Similarly, the Newcomb predictor doesn't need to know what the player's choice will be, he just knows that the player's choice and the contents of box B will be correlated.

    --

    [*] Important caveat: In quantum experiments, the measurement of a qubit in superposition randomly returns 0 or 1. So on this model the player makes their choice, in effect, by flipping a coin. That is, they make a measurement on the first qubit which randomly returns 0 or 1. As a result of the entanglement, the predicted outcome from measuring the second qubit is certain.
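
    If it helps to see the correlation numerically, here is a small sketch (plain NumPy; it only reproduces the computational-basis statistics of the Bell state above, and the variable names are mine):

    import numpy as np

    rng = np.random.default_rng(42)

    # Bell state (|00> + |11>) / sqrt(2), over the basis |00>, |01>, |10>, |11>
    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
    probs = np.abs(bell) ** 2          # Born rule: joint outcome probabilities [0.5, 0, 0, 0.5]

    # "Play" many rounds: each measurement yields one of the four joint outcomes
    outcomes = rng.choice(4, size=100_000, p=probs)
    choice = outcomes // 2             # first qubit:  0 = take box B only, 1 = take both boxes
    box_b = outcomes % 2               # second qubit: 0 = box B holds $1,000,000, 1 = box B is empty

    print((choice == box_b).all())            # True: the choice and the box content always agree
    print(np.bincount(outcomes, minlength=4)) # only |00> and |11> ever occur, roughly 50/50

    As in the coin analogy, nothing here fixes which way an individual round comes out; the model only guarantees that the two measurements always agree.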