Are you a one-boxer or a two-boxer?
The following scenario represents Newcomb’s paradox (non-relevant details may have been changed): I want you to imagine that there exists a person called The Predictor. He predicts human decisions, and has always gotten it right. Due to his legendary status, some say he’s a man, others a machine, others an angel. One thing everyone is sure of, however, is that if the predictor predicts that a person will make a particular decision, then you would be smart to bet the house on that decision being made, such is his amazing strike rate.
Now imagine that one day as you’re walking along the street, a black van pulls up alongside you, a bag is pulled over your head, and you’re bundled inside. The van speeds away as you lose consciousness. When you wake up, you’re in a brightly lit room, sitting in a chair, unrestrained, at a table – a bit like one you’d expect to see on the pavement outside a coffee shop. On the table in front of you are two black cube-shaped boxes, each about one litre in volume. You can’t see inside them because they have closed lids. They are each labelled with a large white letter. One is A, the other is B. Sitting across the small table from you is a man who you’ve never seen before. “Hello,” says the grey haired man in an old, wise sounding voice. “Allow me to introduce myself. I’m the predictor. And as for these boxes you see here, you can keep either both of them, or, if you prefer, just box B. The reality is, of course, that I already know which option you’ll pick. I predicted it, you see, and I’ve never been wrong before. Remember, I’m the predictor!”
“You’ve never been wrong before? Wow… can you see the future?” You ask. “No, not as such,” he replies. “What I offer are predictions about things that I haven’t seen. They are forecasts. My prediction about which box or boxes you will choose isn’t based on me having been told by God, or through having a magic window on the future. I’m just a very, very, very good predictor.”
You’re convinced, but still a bit stunned by the whole experience. You ask “What’s in the boxes?” He replies, “Why, money of course. I chose how much money to put into them based on my prediction about which option you would choose. Box A contains one hundred dollars, you can be quite certain of that. As for box B, listen very closely: If I predicted that you would choose boxes A and B, then I didn’t put anything in box B. It’s empty. If, however, I predicted that you would choose only box B, then it contains one million dollars.
A million dollars sure sounds nice. So, which option should you choose – Box B, or both boxes? And why would you make that choice? I’ll wait for a few people to answer before I say any more.
Glenn Peoples
James Rea
It’s obvious, The Predictor is, in fact, Derren Brown, and he’s using all those tricky NLP techniques to subliminally persuade me to pick box B.
Regardless, the most sensible choice is to select both boxes; whatever the prediction was, the worst I could receive would be $100, but it might be $1,000,100.
From a purist perspective, is there such a thing as ‘free will’ if all our choices are known in advance?
Kay
Based on betting on a certainty I will take just box A.
MichelleM
Both boxes. I’m greedy so I’ll take the sure thing hoping for the slight chance that The Predictor is having an off day.
Kenny
You should pick up both boxes. The million dollars is either in box B or it’s not, regardless of what decision you make. If it is there, then you get $1000100 instead of a mere $1000000. If it’s not there, then you get $100 instead of $0. Either way, you’re better off picking up both boxes. Or, look at it this way: suppose that you did pick up only the one box. Then, after the fact, you could rightfully think to yourself “If only I had picked up both boxes. I would have been $100 richer!” Of course, if you knew in advance of the predictor’s prediction that you were going to play the game, you should try your darndest to make yourself into a one boxer. But, once the prediction is done and over with, there’s nothing you can do to change that. So you should go with the choice that leads to the better result regardless of what prediction was made.
MichelleM
Heh: “…instead of a mere $1000000…”
Glenn
Kenny: “The million dollars is either in box B or its not, regardless of what decision you make.”
Well sure, either Q or not Q, that’s a truism. But here’s the thing: Do you doubt the predictor? I mean if we totally eliminate the predictor from the equation then I would pick both boxes too, since the only thing to consider is the safer bet, all other things being equal. But if the predictor is incredibly reliable (in fact he appears to be 100% reliable), then every world in which you pick box B is likely to be a world in which the predictor predicted that you’d pick box B, meaning that it’s the safer bet to get a cool million, right?
In other words – if you picked Box B and the predictor is reliable, then he predicted that you’d pick box B, and the million would be in box B. That’s a heck of a lot better than 50/50 odds right?
Are all the other commenters sure about their choice?
Kenny
Glenn, the thing is that, when I make the choice, the prediction is already over and done with. It’s not that I doubt the predictor; it’s that what he predicted no longer matters. What the predictor predicted doesn’t causally or counterfactually depend on the outcome of my choice. (If it did, then I would agree that I should one box, but that’s not what makes the original paradox interesting.) So, if he predicted that I would one box, then if I two box I get $1000100 as opposed to the mere $1000000 I would have gotten by one boxing. If he predicted that I’ll two box, then if I two box at least I’ll have $100 instead of $0. Either way, I’m better off two boxing. Or, to put it in the language of decision theory, two boxing dominates one boxing.
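Kenny’s dominance argument can be laid out as a payoff table and checked mechanically. A minimal sketch in Python (the dollar amounts are taken from the setup above; the table layout is just one way to represent it):

```python
# Payoff table for the game (amounts in dollars).
# Each key is (what the predictor predicted, what you actually choose).
payoffs = {
    ("one-box", "one-box"): 1_000_000,  # B is full and you take B only
    ("one-box", "two-box"): 1_000_100,  # B is full and you take both
    ("two-box", "one-box"): 0,          # B is empty and you take B only
    ("two-box", "two-box"): 100,        # B is empty and you take both
}

# Dominance check: holding the prediction fixed, two-boxing pays
# at least as much as one-boxing in every case.
dominates = all(
    payoffs[(prediction, "two-box")] >= payoffs[(prediction, "one-box")]
    for prediction in ("one-box", "two-box")
)
print(dominates)  # True
```

This is just Kenny’s point in table form: read across either row (either prediction, held fixed) and the two-box column never pays less.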
MichelleM
Yeah, I’m sure. I know I would choose both boxes every time. The only way I could lose this is if I went against what I know is my nature to only choose box B and the predictor wrongly predicted I would not do that. Although…even then I’ve won something because I’ve brought down the invincible predictor…
Kenny
Yeah Michelle,
And you can think about it this way too. If you pick up the one box and get the million, then the predictor has played you for a sucker (he manipulated you into passing up an extra $100). If you pick up the one box and get nothing, then that sucks. If you pick up two boxes and get only $100, then, well, at least you didn’t get nothing as you would have had you picked up only the one box. If you pick up both boxes and get $1000100, then not only did you get a lot of money, you showed up the predictor!
bethyada
B only
CPE Gaebler
Y’know, I’d prolly say “Screw it, $100 isn’t that much of a risk. I’ll take Box B.”
Of course, I haven’t seen anyone question whether or not the predictor is telling the truth… 😛
Glenn
I think, Kenny, that you’re treating the predictor as not really being part of the equation. I am making the assumption that the predictor is correct – that’s more or less stipulated to be almost certainly true by the thought experiment.
Therefore:
1) If I choose A and B, then the predictor predicted that I would choose A and B.
2) If the Predictor predicted that I would choose A and B, then B is empty.
3) Therefore if I choose A and B, then B is empty.
Notice that the “if… then” relationship expressed here is not a causal one. I see no flaw in the above argument. Likewise:
4) If I choose box B only, then the predictor predicted that I would choose box B only.
5) If the predictor predicted that I would choose box B only, then box B contains $1,000,000.
6) Therefore if I choose box B only, then box B contains $1,000,000.
It’s logic, stupid!
Kenny
You’re using indicative conditionals. It’s the subjunctives that matter.
Glenn
Kenny, so do you have an objection to 1-3 or 4-6? Which premise looks false?
Kenny
I agree that the argument is valid and (given the stipulations of the case) sound. I disagree that the indicative conditionals you are using are the ones that you should consult in your practical reasoning.
Glenn
Well, if the arguments are sound, and they tell you how to get a mill, who cares?
Elizie
I’d totally pick box B. Whooop! A cool million in my pocket.
Sigh, if only …
Kenny
The indicative conditionals that show up in the arguments don’t tell you how to get the mill. For that you need causal conditionals. Run the arguments with causal conditionals and they aren’t sound anymore.
Glenn
Actually they DO tell me how to get the mill. They tell me that if I choose box B only, then box B contains $1,000,000. There ya go – how to get a mill!
Kenny
No. Suppose Jack’s habits are such that he eats pizza in the morning if and only if he got wasted the night before (furthermore, Jack knows this about himself). Jack wakes up one morning after a wild party, with a hankering for pizza, really regretting having gotten wasted the night before. The following conditional is true: If Jack doesn’t eat pizza this morning, he didn’t get wasted last night. Does that conditional tell Jack how to avoid having gotten wasted last night? Does the truth of that conditional entail that Jack can avoid having gotten wasted last night by refraining from eating pizza this morning? Of course not. The reason why is that there is no causal or counterfactual dependence relationship (at least not in the right direction) between Jack’s eating pizza this morning and his not having gotten wasted last night. Indicative conditionals that are not backed by corresponding causal or subjunctive conditionals are useless for practical deliberation. And in the Newcomb’s paradox scenario, the relevant indicative conditionals are not so backed.
Glenn
“Does that conditional tell Jack how to avoid having gotten wasted last night?”
No, but it can put him in a position of knowing that he did get wasted last night, and in the case of Newcomb’s paradox, that’s what we care about.
Kenny
How is that? Suppose Jack forgot what he did last night (it was a really wild party) but still knows the relevant conditionals. Suppose also that he did get wasted last night. Then is it currently in his power to do something (namely refrain from eating pizza) such that were he to do it he would know that he didn’t get wasted last night? The answer is “No”. If he were to refrain from eating the pizza, then the relevant indicative conditionals would be false but it would still be true that he got wasted last night. Assuming it is within his power to refrain from eating pizza, it’s in his power to make it so that he does not know the relevant conditionals (because it’s in his power to do something such that, were he to do it, those conditionals would have been false).
The same holds in the case of Newcomb’s paradox. Say the predictor predicted I’ll two box. Then when I’m standing there it is not within my power to do something such that were I to do it I’d know I’m getting a million dollars. If I were to one box, then I would have made it so the predictor’s prediction is false, not that I know that I’m getting a million dollars. Again, it’s the subjunctive conditionals that matter here, not the indicatives.
CPE Gaebler
But, with the problem as stated, his habits are such that he’s going to eat the pizza, right?
If your analogy is going to be remotely as strong as you’re trying to make it, those habits are going to have to be strong enough that he has a 100% chance of eating the pizza if he got wasted last night…
Kenny
Right, he’s going to eat the pizza. I’m assuming that it can be within one’s power to refrain from doing things that it is true that one will not refrain from doing (anyone who believes in the compatibility of human free will and divine foreknowledge of all human actions should have no problem with that assumption). A couple of points here:
First, the example is not meant primarily as an analogy but as an illustration of the fact that indicative conditionals that aren’t backed by subjunctive or causal conditionals are useless for practical deliberation. I think it serves that purpose. Second, I think that the arguments that I made concerning it are sound even if we do stipulate that the probability that he’ll eat pizza given that he got wasted is 100% (and that he knows this). It’s still the case that were he not to eat the pizza, he would have nevertheless gotten wasted last night.
CPE Gaebler
Yeah, the whole question does seem like “cheating” somehow.
I would guess the problem goes like this, though.
For all of those people who reasoned that they would choose both boxes: The predictor would know that you would reason that way, and thus would put nothing under box B.
For all those who reasoned that they would choose box B: The predictor would likewise have put a million under box B.
The whole issue seems somewhat… completely backwards from common experience, which makes it hard to think about rationally.
As an aside, it reminds me somewhat of a problem I saw on Facebook. It goes like this:
An angel appears to a gathering of the world’s finest philosophers, and promises to return the next day and answer any one question, truthfully. It promptly disappears.
The philosophers spend the rest of the day arguing over what to ask the angel. Having only one shot, they want to make sure they ask the best question possible. One of them hits on a solution, so the next day they pose their question to the angel:
“What is the ordered pair whose first member is the best question to ask you, and whose second member is the answer to that question?”
The angel responds, “It is the ordered pair whose first member is the question you asked, and whose second member is the answer I am now giving.” And promptly disappears.
Glenn
Correct, because you can’t know something that’s false. By the way, he’s going to eat it. 🙂
Geoff
I know plenty of things that are false..
Glenn
LOL Geoff – you mean you know THAT they are false. But if something is false, then you can’t know it to be true. That’s what I meant. You can THINK that you know it, but you don’t.
Geoff
I dunno, some people think politicians tell the truth, that global warming is a fact, and that creationism is the only possible way the world could come to be.
All false, but held to be true by people who believe them.
But then, I know what you’re saying.
What you are saying is that you can not know a fact as “not a fact”, because a fact is a fact, and by nature can not be “not a fact”. Right?
In the above example the only “knowables” are 1. He will eat the pizza, or 2. He won’t eat the pizza. The knower can not know that 1 is true, and yet 2 is, because if 1 is not true, 2 is.
HAH. Now I am confused.. 😛
Kenny
Right Glenn. So if he were to refrain from eating the pizza (even though he won’t refrain, I’m assuming it is within his power to do so), he wouldn’t know the relevant conditional because it would have been false. If he does eat the pizza then (assuming he knows the relevant conditional) he is in a position to know that he did get wasted last night. But, either way, it is currently not in his power to do anything such that, were he to do it, he would not have gotten wasted last night. So, even if he wants it to be the case that he did not get wasted last night, he should not take the indicative conditionals that he knows to be true as reasons not to eat the pizza.
Likewise, when I’m standing there deciding whether to one box or two box, at best, all that it is within my power to do (assuming that it is within my power to do either) is either place myself in a position to know that I’m getting the million dollars (if he predicted I’ll one box and that’s what I do) or falsify his prediction (and thereby do something such that, had I done it, the relevant conditionals would have been false and therefore never known by me). Either way, I can’t do anything to affect whether the million dollars is in the box. So even if I want it to be the case that I will get the million dollars I can’t, when I’m standing there, do anything to bring about that outcome. Whether I’m getting it or not doesn’t depend on what I do. Whether I get the extra $100 does depend on what I do, however. If I two box, then, no matter what, I’ll be $100 richer than I would have been. So (if my goal is to maximize the amount of money I get from playing the game) I should two box.
It’s dominance, stupid! 🙂
CPE Gaebler
And yet I, as a one-boxer, will be ending up with $1M more than you 😛
Kenny
True. Rationality doesn’t always pay off! 😉
Kristian Joensen
“or falsify his prediction (and thereby do something such that, had I done it, the relevant conditionals would have been false and therefore never known by me).”
But isn’t it the case that, according to the parameters of the paradox, you can’t falsify his prediction, since he is always correct in his predictions? With the way that Glenn worded the original post he seems to be saying that you are indeed correct, and that he merely has a 100% success rate so far, not that he is omniscient.
However, to me it seems that interpretation trivializes the paradox, since under that interpretation it is just a matter of whether you trust the predictor. You could just say: “Bah, humbug. This whole predictor thing is just bullshit”.
I am also confused by your mention of getting $1000100, since if you pick both boxes the million will not be in box B:
“If I predicted that you would choose boxes A and B, then I didn’t put anything in box B. It’s empty. If, however, I predicted that you would choose only box B, then it contains one million dollars.”
In other words:
1. If you choose boxes A and B the predictor would have predicted this.
2. If he predicted it, he did not put anything in Box B.
3. Therefore, if you choose boxes A and B, he did not put anything in box B.
With the additional premise: 4. Under any condition there will be $100 in box A.
It follows that: 5. If you choose boxes A and B you will get $100.
{Tim}
Well, it all comes down to whether you trust him, doesn’t it?
Given his known record, I’d probably take a punt on his guessing I’d pick box B, and pick box B.
OTOH, if the Predictor has been reading this thread then chances are Kenny is only ever going to get max $100, regardless of what he actually does pick. ;P
Here’s an alternative for those who don’t like rules, though: I think I’m just going to pick box A by itself, and see what happens.
Kenny
If you pick box A by itself, the Predictor promptly shoots you in the head with a Beretta. Then he says “Signing off, this is Glenn Peoples”.
P.S. To any Predictors out there. I’m just kidding. I’m really a one boxer. Really. I swear!
Kenny
Kristian,
The fact that the predictor is always correct doesn’t entail that when I’m standing there deciding what to do that I can’t do something that falsifies his prediction. All that follows is that I won’t in fact do what would falsify his prediction.
It is very important, in my view (and the view of many philosophers) to distinguish between indicative conditionals and subjunctive(/counterfactual) conditionals when it comes to thinking about this paradox. To get a handle on the distinction, consider the following pair:
(1) If Oswald didn’t shoot Kennedy, someone else did.
(2) If Oswald hadn’t shot Kennedy, someone else would have.
(1) is obviously true. You’d have to be some sort of conspiracy theorist, however, to believe (2). (1) is an indicative conditional. (2) is a subjunctive (or counterfactual) conditional.
If we go along with the stipulation that the Predictor is always correct, then it is true that if I one box, I will get a million dollars. However, that doesn’t entail that if I were to one box I would get a million dollars. It’s the latter conditional, I’m maintaining, that I should care about in my practical reasoning, not the former.
I do agree that if the Predictor has the ability to foresee the future, in such a way that his predictions actually depend on what I choose (either causally or counterfactually), then I should one box. In that case, the relevant subjunctive conditionals that I say are false in the original case would be true. But, if the paradox were set up that way, it wouldn’t be nearly as interesting from the standpoint of decision theory.
What makes the original paradox interesting from the standpoint of decision theory is that it seems to show that two staples of practical rationality can come apart. On the one hand, it seems that practical rationality counsels that we seek to maximize expected utility. And in Newcomb’s paradox, it seems that the rule of expected utility counsels one boxing. On the other hand, it seems that we should follow the rule of dominance. That is, we should perform those actions (when they are available) that would leave us better off than the alternatives in some possible outcomes and no worse off than the alternatives in any of the possible outcomes. And the rule of dominance seems to favor two boxing. You don’t get this clash of principles, however, if you make the Predictor’s predictions depend on what you choose (if you do that, the dominance considerations drop off).
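The clash Kenny describes can be made concrete with a rough expected-value sketch. The 99% accuracy figure below is an illustrative assumption, not part of the original setup (the story only says the predictor has never been wrong):

```python
# Expected dollar value of each choice if the predictor is right with
# probability p -- treating the prediction as correlated with the
# choice, as the expected-utility line of reasoning does.
def expected_value(choice: str, p: float) -> float:
    if choice == "one-box":
        # With probability p the predictor foresaw one-boxing: B holds $1,000,000.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing: B is empty.
    return p * 100 + (1 - p) * 1_000_100

p = 0.99  # illustrative accuracy; the story suggests something near 1
print(expected_value("one-box", p))  # roughly 990,000
print(expected_value("two-box", p))  # roughly 10,100
```

Expected utility favours one-boxing by a wide margin, while the dominance table favours two-boxing whichever prediction is held fixed; that tension is the clash of principles.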
Dan
I believe that I would pick one of the boxes at random to distract the Predictor just before using my chair to smash him to a bloody pulp. I dislike being mugged with a passion. If I were appropriate movie hero material, I’d probably throw out a one liner along the lines of “Predict this!!!” Then I would take box B, since the EV of Box B is $500,000 and the possible loss of $100 — even in more valuable NZ dollars — just isn’t that big a deal to me.
Following that, I would sue the Predictor for $1 million for battery, false imprisonment, intentional infliction of emotional distress, and fraud. I’ll further posit that my answer reveals something about why law professors make bad philosophers.
Ilíon
The correct response is to jump over the table and strangle the bastard for kidnapping you.
Then open both boxes and take any money that may be there.
Kenny
Dan and Ilion, it won’t work. The predictor will be expecting it!
Glenn
No, Kenny, he only predicts what people will do with box decisions. I should have mentioned…
Kenny
Oh, then screw him! Dan and Ilion have the right idea!
CPE Gaebler
Well, even if he didn’t predict he’d get beaten, he’d have predicted that both boxes would get taken. So, clearly, the best thing to do would be to beat him up, but only take box B afterwards. 😀
… But seriously, my approach to this paradox weirds me out a lot. Because, as I see it, the way to get the most money is to completely forget the entirely sensible, logically sound line of reasoning that says that what’s in the boxes is there or not, so taking both boxes is the most rational way to go. Because if you manage to convince yourself that you’ll get a million dollars if you give up that one hundred, you will get a million dollars. Because he will have predicted that you would act in such an irrational fashion. It’s one of the rare cases (which are generally entirely hypothetical, of course) where believing something to be true, “makes” it true in a sense.
Kenny
But the way the paradox is set up, believing it to be true doesn’t make it true. There’s no causal or counterfactual connection between picking up the one box and there being a million dollars in it. If you pick up the one box and get the mill, then if you had picked up both boxes you would have got the mill and an extra $100. Now, before the prediction is issued, matters are different. Presumably the predictor is very good at reading off what your dispositions are. So if you are aware that you will be playing the game and aware of the setup, you should do your very best to make yourself into a determined one boxer (because, prior to the prediction, it does matter what you are determined to do). But, when you’re standing there, after the prediction has been made, and you find your resolve to one box has worn off, the rational thing to do is pick up both boxes. Of course, the predictor may well have anticipated your resolve would wear off. But, oh well, too late to do anything about that now. Either way, if you don’t pick up both boxes, you’ll be passing up an extra $100.
CPE Gaebler
But if you convince yourself not to pick up the extra $100, the predictor will have predicted that you do so, and you will get a million. Believing doesn’t literally make it true, but the statement “If you believe one-boxing will get you a million dollars, then you will get a million dollars” is a true statement. Thus, it’s to your financial advantage to force yourself to forget your rational line of argument. Yes, the prediction has been made… but if you two-box, the prediction will have been that you two-boxed.
Actually, somehow this seems unusual in exactly the same way as entanglement in quantum physics. I’ll have to think about that.
Kenny
“‘If you believe one-boxing will get you a million dollars, then you will get a million dollars’ is a true statement.”
It’s a true statement only if you read it as an indicative conditional. If you read it as a subjunctive or causal conditional it’s false. And, as I have argued, it is the subjunctive or causal conditionals that matter for practical deliberation, not the indicatives (at least not the indicatives that aren’t backed by subjunctive or causal conditionals). There’s nothing like quantum entanglement or backwards causation going on here. If there were, then I would agree that you should one box. But that’s not how the original paradox is set up (and that’s not the setup that makes it interesting).
Archena
“I believe that I would pick one of the boxes at random to distract the Predictor just before using my chair to smash him to a bloody pulp. I dislike being mugged with a passion. If I were appropriate movie hero material, I’d probably throw out a one liner along the lines of “Predict this!!!” Then I would take box B, since the EV of Box B is $500,000 and the possible loss of $100 — even in more valuable NZ dollars — just isn’t that big a deal to me.
Following that, I would sue the Predictor for $1 million for battery, false imprisonment, intentional infliction of emotional distress, and fraud. I’ll further posit that my answer reveals something about why law professors make bad philosophers.”
My hero!
Since I already know his prediction is made and finished, there’s no benefit to picking box B only. If he predicted I’d choose B then the million is already in there. If he predicted I’d choose both then there’s nothing in the box. May as well take my $100 and the off chance the creep actually put a million dollars in B. My choosing B won’t put a million dollars in it and my choosing both won’t take the million out.
Then I use the $100 as part of my retainer because I have every intention of suing him until all he has left are his undies and his socks. Evidently the predictor isn’t so great at predicting his own, lawsuit riddled, future…
CPE Gaebler
And yet, I’m sitting pretty on my million dollars because I two-boxed, and you’re stuck with just a hundred. Because he predicted I’d two-box and you’d one-box.
(smug smirk)
Ilíon
Also, if The Predictor knows that his prediction is correct, he should have no problem with publicly stating it beforehand, right?
Ilíon
See, the way this hypothetical and the others of its class are set up, the “prediction” can’t really be tested.
CPE Gaebler
Actually, testing it would be the easy part. Just write it down beforehand. No need to tell everyone, just set up a situation where your physical evidence of the prediction can’t be tampered with between being written and being observed.
Max
You beat the crap out of him for kidnapping you, then open both boxes. If the million is not in there then you force the information about where it is out of the criminal bastard… he has to have it stashed somewhere.
So much for logic.
Ilíon
CPE, I said “the way this hypothetical and the others of its class are set up.”
Damian
The predictor is lying; you can only fit about $100,000 in a one-litre volume based on the assumption that they are NZ, US or AUS $100 notes. 😉
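Damian’s figure is easy to sanity-check, assuming roughly the dimensions of a US $100 note (an approximation; NZ polymer notes differ slightly):

```python
# Rough check of how many $100 notes fit in one litre.
# Note dimensions are approximate US-bill figures (an assumption):
# about 156 mm x 66 mm x 0.11 mm.
note_mm3 = 156 * 66 * 0.11          # ~1,133 mm^3 per note
litre_mm3 = 1_000_000               # 1 litre = 1,000,000 mm^3
notes = int(litre_mm3 // note_mm3)  # ignores packing losses
print(notes * 100)                  # on the order of $90,000
```

So a one-litre box tops out in the same ballpark as Damian’s $100,000, well short of a million in cash.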
Dan
“The predictor is lying; you can only fit about $100,000 in a one-litre volume based on the assumption that they are NZ, US or AUS $100 notes. ;)”
It only takes a post-it to write down the number of a numbered Swiss Bank Account. Or you can load up a debit card with a million. Cash is so 1990s.
Damian
Dan, my original comment wasn’t a serious attempt at nit-picking (that’s why I put the wink). It’s a classic riddle regardless of the volume of the container.
For me it comes down to whether I’m being asked to act according to the premise of the paradox or act in the real world. If we keep to the realm of the mind game then I would go for only one box. If this was applied to the real world I would open both.
I suspect that the paradox arises from the mixing of the mind game (in which we have a person with extraordinary powers) and the real world (where most of us would not believe such a person to exist). If that makes any sense.
Of course, if we don’t really have free will and there was someone who was able to determine what our course of action really *would* be (perhaps by imaging our brains) then I would have to go for only one box in the real world. But then, if that was the case, why ask the question because you’re going to choose what you’re going to choose eh? 🙂
If I could condense this paradox it would simply be “do you believe in determinism?”. Which is a fascinating question in itself.
Dan
Damian – “Of course, if we don’t really have free will and there was someone who was able to determine what our course of action really *would* be (perhaps by imaging our brains) then I would have to go for only one box in the real world. ”
I caught the wink, but was just being lawyerly. And the idea of some day having sufficient funds to need a numbered swiss bank account is occasionally intriguing.
Interestingly, I have less problem with the problem as a real-world conundrum because the radiology department at the university where I teach actually has an imager that can almost identify specific thought patterns. They have one demonstration where a subject of unknown (to the evaluator) gender is put into the imager, asked questions, and the imager can actually show the thought activity as the subject works out a response. The evaluator can tell the subject’s gender based upon their thought patterns. It’s not a far step from that to being able to image the brain in real time as it makes a decision to choose one box or both boxes and predict with extraordinary accuracy which choice the subject will make.
Damian
Dan, there’s some pretty cool stuff happening with fMRI and other imaging techniques but I’d say we’re still a million miles off being able to predict what someone will do in a case like this. There was an interesting study a few years back where people were asked to click a button and note when they decided to act, and the results indicated that there was brain activity prior to when they thought they’d made the decision. I’ve had a look through a couple of my books on consciousness and can’t locate a reference but I recall having a couple of misgivings about it.
Either way, it seems to me entirely possible that if we live in a deterministic universe (but who knows the answer to that one!) we should be able to predict what people will choose to do given enough data. But then we come back to my earlier point which is that the question about what you would ‘choose’ to do loses its meaning if you have already measured what you *will* do. Also, to further complicate the matter, if you had made all the appropriate observations to find out what they were going to do and told them *afterwards* about the predictor you would likely have changed what the person was going to do.
A fun paradox all round.
Damian
Here is the link to the study I mentioned: http://www.ncbi.nlm.nih.gov/pubmed/18408715
And here is a discussion of the results of the study: http://www.wired.com/science/discoveries/news/2008/04/mind_decision
Like I say, I have my reservations but it is an interesting contribution on the topic of free will (or at least our conscious access to the choices we make).
Ken G
I would take only one box. I would get a million dollars, then I would victoriously count it while being lectured by “two boxers” about how that’s not really the way I should have chosen.
Martin Woodhouse
Um . . . Erm . . .
The question as posited doesn’t say that X is a perfect predictor, but that he is a ‘very, very, very good one’.
Do I think that the chances are greater, or less, than 1,000,000 : 100 that he has got this particular prediction right?
GHW*
(* Go Home Woodhouse)
Martin
Glenn
Martin, I think we are meant to assume that the predictor’s prediction is correct.
Garren
Odd, I thought ‘just take B’ was so obvious I must be missing something. It seems to be a simple matter of taking the premise seriously.
Upthorn
Okay, so here’s how it works.
The predictor has already made his prediction and placed the money, so if he predicted we’d take only B, even if we now choose to take them both, we get the $1000000.
Of course, the predictor knows this, and so is always going to predict that we take both boxes, and therefore there will always be only the $100 under A.
Jason
Meh, just give me box B.
What’s $100 between friends.