Oldcomb’s Paradox

This post made me think a bit about Newcomb’s paradox, which is basically a prisoner’s dilemma with a sci-fi twist. The neat version:

You’re in a room with two boxes. Box A contains $1,000; Box B contains either $1,000,000 or nothing. The person who put the money in the boxes is a near-infallible predictor. You may choose either Box B alone (thus possibly getting $1,000,000) or both Box A and Box B (thus getting either $1,001,000 or $1,000). The twist is that the predictor placed the $1,000,000 in Box B only if he thought you would choose that box alone.

Therefore there are four possible outcomes:

Predicted choice   Actual choice   Payout
A and B            A and B         $1,000
A and B            B only          $0
B only             A and B         $1,001,000
B only             B only          $1,000,000

This is a paradox in that each choice can be made to look irrational. If the predictor is usually right, then being the kind of person who chooses only one box pays off, so it is rational to one-box. On the other hand, the money is already in the boxes, so there is no sense in leaving the extra $1,000 behind.
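To see the tension in numbers, here is a minimal sketch (the predictor’s accuracy p is my own illustrative parameter; the problem statement only says he is near infallible):

```python
# Expected dollar payout for each strategy in the original Newcomb problem,
# assuming the predictor is correct with probability p (an illustrative
# parameter, not part of the original statement).

def expected_payout(one_box: bool, p: float) -> float:
    if one_box:
        # Predictor right (prob. p): Box B holds $1,000,000.
        # Predictor wrong (prob. 1 - p): Box B is empty.
        return p * 1_000_000 + (1 - p) * 0
    else:
        # Predictor right (prob. p): Box B is empty, you keep Box A's $1,000.
        # Predictor wrong (prob. 1 - p): you collect $1,001,000 from both boxes.
        return p * 1_000 + (1 - p) * 1_001_000

if __name__ == "__main__":
    for p in (0.5, 0.9, 0.99):
        print(f"p={p}: one-box ${expected_payout(True, p):,.0f}, "
              f"two-box ${expected_payout(False, p):,.0f}")
```

One-boxing wins in expectation whenever p is above roughly 0.5005, yet the dominance argument still says two-boxing is better whatever is already in the boxes; that clash is the paradox.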

The paradox ends up raising problems of rational choice, free will, and reverse causation. But for me there is a bigger problem: all of the possible outcomes are positive. I have therefore decided to come up with a much worse paradox. I call it Oldcomb’s Paradox.

Everything in Oldcomb’s paradox is the same except for what the boxes contain. In this version, the predictor is not a rich person who enjoys messing with people’s greed: he’s a psychotic murderer who enjoys toying with people’s emotions. The boxes are airtight containers from which there is no escape unless you open them. The predictor has sealed an orphan in Box A and your beautiful fiancée in Box B. If he predicted you’d two-box, then he has already killed your fiancée, and Box B contains only her corpse.

What do you choose?

Predicted choice   Actual choice   Payout
A and B            A and B         You save the life of an orphan
A and B            B only          Everyone dies
B only             A and B         You save the life of an orphan and your fiancée
B only             B only          You save your fiancée’s life

Should you choose both boxes, possibly saving everyone, even though you fully believe the predictor has never been wrong and so the chance of everyone surviving is slim? Or should you one-box, hoping to save your fiancée’s life while sentencing a poor orphan to certain death?

My response to both this situation and the original paradox has always been the same: I no-box. I’m not much of a gambling man.


3 Comments

  1. Interesting case.

    David Lewis has a nice little article called “Prisoners’ Dilemma is a Newcomb Problem” that makes the same point you have here.

    If someone’s gotta die, I’d probably do what I could to make sure it wasn’t my fiancée, beautiful or not. So my decision to one- or two-box would hinge on which box she’s in. As you described it, I would one-box. Anyway, the whole “loved one” aspect might bring in a new element (for some people), since the intention might be to save the loved one rather than to minimize deaths.

  2. To be more constructive, perhaps you could mend the analogy by having 50 or 100 random people in the “million” box and just one random person in the “thousand” box. And then you could stipulate that the agent wants to minimize deaths.

    Of course, I do think you’re onto something by having something personal at stake in the Newcomb case. I suspect that people may be a bit too flippant about their hypothetical choice (ha), since the worst-case scenario in standard Newcomb cases is that they break even. Putting one’s fiancée or life’s savings on the line seems like a promising route to more responsible thought-experimenting. Hmm… the ethics of thought experimenting…

  3. Ha ha, yeah. Your point is probably a better analogy, but I like mine better because I think it’s harder to decide when the decision will directly affect the decider. I like making all of the possible worlds in my thought experiments the worst possible worlds imaginable.

    In this possible world, for instance, there is a triple-O God, but the last O stands for Omni-malevolent. 😈