• Bradaction
    Recently, I was investigating a potential information hazard, which I will not name here because of the risks it could pose to others in the unlikely event that the information turns out to be true. During this time, I stumbled across a variant of the famous thought experiment, Kavka's Toxin Puzzle. I attribute this variation to Mark Green on Quora, as I could not find any other reference to it in my short time researching. It goes essentially as follows.

    - A scientist has acquired or created a supercomputer which can perfectly predict the actions of people.

    - This computer does so by creating simulations.

    - The scientist gives you a vial of poison that will make you ill for a day, after which you make a complete recovery.

    - Next Tuesday, the scientist will ask the computer whether you will drink the poison on Wednesday; if the computer says yes, they will deposit $1,000,000 into your account.

    - The scientist then states that you can keep the money, regardless of whether you drink the poison.

    - Receiving the money on Tuesday, you realise it would be completely irrational to drink the poison on Wednesday, except:

    - For the computer to accurately predict what you would do, the simulated you would need to be unaware that it is in a simulation;

    - It's impossible to prove that you are not the simulation, thus the only rational choice would be to drink the poison.

    Given that the computer is a perfect predictor and can create simulations whose inhabitants believe they are not in a simulation, anyone who does not do as the computer predicts is certain to be simulated. For example, not drinking the poison after receiving the money would prove that you are in a simulated reality. These simulated realities would end either when Wednesday ends without you drinking the poison, or when you drink the poison. Yet we fail to see the horrifying truth this creates: the abhorrent situation that could be engineered and abused under such a system. The universe in which we exist is not inherently a simulation; in fact, it is more likely than not that it is not one. Yet a universe in which the computer's prediction is not followed is certain to be simulated.

    We can simplify this experiment. Ask the computer what your next word will be. Wait for its response, and do not say a word. Suppose the computer predicts that your next word will be 'run', and you intentionally say a different word, 'expiration'. Immediately, the simulation ends, the universe is over, and everyone who existed in it, 7 billion people, effectively dies.

    This isn’t too big a deal, of course, unless someone wanted to destroy our universe, in which case doing so becomes terrifyingly easy.

    1. Create an AI or superintelligence capable of running such simulations (potentially possible, and perhaps likely).

    2. Ask it for a simple prediction, then do the opposite of its answer.

    3. YAY! Dead universe, we are all simulated.
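    The self-defeating structure of these steps mirrors the diagonalization behind the halting problem: whatever the predictor answers, the agent can act to falsify it. A minimal sketch in Python (all names here are hypothetical, for illustration only):

```python
def contrarian(predict):
    """An agent that asks the predictor about itself, then does the opposite."""
    prediction = predict(contrarian)
    return "no" if prediction == "yes" else "yes"

# No matter what a predictor answers, the contrarian falsifies it.
def always_yes(agent):
    return "yes"

def always_no(agent):
    return "no"

print(contrarian(always_yes))  # prints "no": the predictor said "yes"
print(contrarian(always_no))   # prints "yes": the predictor said "no"
```

    Any fixed predictor is wrong about the contrarian by construction, which is the whole trick of step 2.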

    Ironically, this is a form of Timeless Decision Theory. The universe could have existed outside of a simulated reality right up until the point the answer to the question was acted on, at which point the universe becomes certainly simulated. In essence, the future caused something that happened in the past.

    So, logically, someone could destroy the universe as we know it, by simply asking a question to a supercomputer.

    But it gets worse. Consider that the person who does this and abuses the computer hates society so much that they choose to do it over, and over, and over. Such a person would effectively kill over 8,400 billion people in an hour, wiping them from the face of existence again and again in an endless cycle of vengeance and torment, eradicating the reality they exist in from existence.

    So, I would be interested to hear if anyone has any thoughts or ideas that could help us understand this. Also, if you replace any mention of a computer or AI with a post-singularity AI, it becomes an even more concerning situation, given how quickly we appear to be approaching said singularity!
  • TheMadFool
    Kavka's Toxin Puzzle

    An analysis as per Wikipedia.

    1. A person should intend to drink the poison (there is a pay-off that makes it worth it).

    2. Once a person intends to drink the poison (1 above), there are no good reasons to actually drink it (the pay-off is already achieved).

    Thus a reasonable person must intend to drink the toxin by the first (1) argument, yet if that person intends to drink the toxin, he is being irrational by the second argument (2). — Wikipedia

    The solution:

    1 (above) is based on the condition that the person need only intend to drink the poison. The person doesn't have to put his money where his mouth is. That's why the intention to drink the poison is rational.

    2 (above), however, makes sense only if the person who intends to drink the poison has to actually drink it (intentions have to be matched with appropriate, corresponding deeds). That's why intending to drink the poison is irrational.

    Put simply, Kavka's paradox switches between intentions only (intend to drink the poison, 1) and intentions & actions (intend to drink the poison and drink it, 2). That's the meat and potatoes of Kavka's paradox!

    I suppose the point Kavka's trying to make here is that we can break the causal chain between intentions and the deeds based on those intentions, just what the doctor ordered if we're to live in peace (deterrence theory, mutually assured destruction).
  • magritte
    So the criteria of motivation and intention to act shifted between time A and time B, resulting in a difference in the probabilities of actual action?
  • Philosophim
    But is this a perfect simulation? In the first case it seems the computer simulated that the person would drink the poison "if they were not told a simulation showed they would drink the poison." If the computer also included in the simulation that the person would drink the poison, even when told they would drink the poison, then if the simulation was perfect, they would.

    This is a paradox of saying something is a perfect simulation, then creating a situation in which the simulation is imperfect. If it's a perfect simulation, then there are no contradictions, and no human choice could alter what the computer predicted. If the computer simulated that a person would drink the poison when told the computer stated they would drink it, but the person did not, then we would know the computer could not perfectly simulate the world.

    Essentially, to sum it up: you are using the definition of "perfect", then inventing situations in which it is imperfect. That's not a paradox, it's just a contradiction of terms.
  • magritte
    What's perfect?
    Logically the computer cannot do otherwise. Therefore your conception of perfect must be wrong.
  • Pfhorrest
    There cannot be perfect prediction in principle because of dynamical chaos: the act of making a prediction of a system that includes the predictor changes the system in a way that the predictor cannot predict faster than the events actually unfold.

    Say your system consists of a room with a prediction computer and a simple robot that reads the computer’s prediction of what it will do and then does the opposite. The prediction computer knows this about the robot and so predicts that the robot will do the opposite of the opposite of the opposite of the opposite of … ad infinitum, resulting in a calculation that cannot be completed before the time it’s trying to predict has already passed.
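    This regress can be made concrete: a predictor that works by simulating the robot, where the simulated robot consults a predictor one level deeper and does the opposite, never bottoms out. A toy sketch in Python (the names and the explicit depth cap are illustrative assumptions, since a real simulation would simply never finish):

```python
def predict(robot, depth=0, limit=50):
    """A 'perfect predictor' that works by simulating the robot.

    Each simulated robot consults a predictor one level deeper, so the
    opposite-of-the-opposite regress never terminates; we cap it at `limit`.
    """
    if depth > limit:
        raise RecursionError("prediction never terminates")
    return robot(lambda r: predict(r, depth + 1, limit))

def robot(predictor):
    """Reads the predictor's output and does the opposite."""
    return "left" if predictor(robot) == "right" else "right"

try:
    predict(robot)
except RecursionError as err:
    print(err)  # prints "prediction never terminates"
```

    The depth cap stands in for the point Pfhorrest makes: the calculation cannot complete before the moment it is trying to predict has already passed.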
  • god must be atheist
    There could be a perfect prediction, but then the person should not face the day thinking "I won't drink this poison, but I will, due to the prediction." It is not the prediction that precipitates the action; the example describes it as if it did.

    If the prediction is perfect, then the person would want to drink the poison on the day, with or without knowing the prediction.

    So for the prediction to be perfect, the person should not face the day thinking "I won't drink the poison" and then drink the poison only due to the prediction's wording.
  • Agent Smith
    It appears that Kavka's Toxin Puzzle is, inter alia, about the existence of a checkpoint between intention and action. I can intend to pick up a stone, but that doesn't mean I (free) will! Everyone has, I'm sure, an experience that proves this point to themselves. The world of regret is essentially composed of such instances: I wanted to tell her (I love her), but I didn't! I wanted to meet my uncle (before he passed away), but I didn't! So on and so forth...

    Kavka's paradox:

    1. I should intend to drink the poison.

    After that (since I've already got my reward)

    2. I no longer intend to drink the poison.

    Did I ever intend to drink the poison? Thought police?

    More can be said...