Let’s Deal the Cards!
by Mike Schilling
The Monty Hall Problem is a great piece of recreational mathematics, because it’s very simple to state, doesn’t require any advanced mathematical tools to analyze, and has a solution that’s both definite and completely counter-intuitive. It goes like this:
Suppose you’re on the game show Let’s Make a Deal. At the climax of the show, you get to take part in the Big Deal of the Day, where you choose one of three rooms, each with a door. Behind one door is all kinds of great stuff: a new car, a house full of furniture, tickets for a round-the-world cruise. Behind each of the other two doors is a goat chewing on a bale of hay. After you make your choice, Monty Hall opens up one of the other doors, which reveals a goat. He now offers you the choice of switching to the other unrevealed door. Should you take it?
That’s how the problem is usually stated. The correct answer is “I have no clue.” With any problem in probability, it’s vital to pin down exactly what the conditions are. For instance, suppose Monty only offers you the ability to switch when you were right in the first place. Obviously, you should say “no”. So let’s make the tacit assumptions explicit:
- The prizes aren’t moved from room to room after the game starts.
- Monty always offers you the chance to switch.
- He always begins by opening one of the doors you didn’t pick and shows you a goat before asking for your decision.
- If there are two be-goated doors available, he picks one at random.
Now the answer is clear: you should switch. The room you switch to is twice as likely to be the winner as the one you started with.
At first glance, this makes no sense. You don’t seem to know any more than you did to begin with. Whether you were right or wrong, Monty could show you a goat. What makes the room you didn’t pick any better than the room you did pick? What if you switch, and he offers you the chance to switch back? If that’s a good decision, should you switch back and forth forever? And if switching back is a bad decision, how is it different from switching the first time?
Let’s look at it two ways, both of which lead to the same conclusion. One: you had a one-third chance of being right the first time. Since Monty will always show you a goat, your first guess still has a one-third chance of being right. We know the prize isn’t behind the door where we saw the goat. Therefore, the only other possibility is the door you can switch to, whose probability must be the other two-thirds.
If that logic seems too slippery (even though it’s quite correct), let’s work the problem from first principles. One of the best ways to solve a problem in probability is to run things repeatedly and see how often the different possible results appear. So, we play the game 300 times: the prizes will be behind door number 1 a hundred times, behind door number 2 a hundred times, and behind door number 3 a hundred times. Say that we always pick door number 1. (It really doesn’t matter if we pick different doors; that just complicates the counting. The point is that we’re right one third of the time.) So, let’s compare the two strategies.
If we don’t switch, then we’re right the 100 times the prizes are behind door number 1. That’s one third of the time.
If we switch, things are more interesting:
- The hundred times that the prizes are behind door number 1, Monty shows us that either 2 or 3 has a goat, and we switch to 3 or 2 respectively. Silly us. We lose.
- The 100 times that the prize is behind door number 2, Monty shows us door number 3. He has no choice in the matter. He can’t show us our door (number 1), and he can’t show us number 2, because it doesn’t have a goat. So he shows us 3 and we switch to 2 and win.
- By exactly the same logic, the 100 times that the prize is behind 3, Monty shows us 2 and we switch to 3. Again, we win.
So, we draw the same conclusion: by switching we win two thirds of the time.
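If you’d rather not take the counting on faith, the same experiment can be run by computer. Here’s a sketch in Python (the function name and door numbering are mine; the simulation encodes exactly the four assumptions listed above, with doors numbered 0–2 for convenience):

```python
import random

def play(switch, trials=300_000):
    """Simulate the Monty Hall game under the stated assumptions:
    the prize never moves, Monty always opens a goat door we didn't
    pick, and he chooses at random when two goat doors are available."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        choice = random.randrange(3)
        # Monty opens a door that is neither our choice nor the prize.
        opened = random.choice([d for d in range(3) if d != choice and d != prize])
        if switch:
            # Switch to the one remaining unopened door.
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == prize)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")   # ≈ 1/3
print(f"switch: {play(switch=True):.3f}")    # ≈ 2/3
```

Note that the line marked “Monty opens a door” is where all four assumptions do their work: if Monty behaved differently (say, only offering a switch when you were right), the list he chooses from would be different, and so would the answer.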
Now let’s switch gears to a different game entirely: Contract Bridge. A brief introduction for those who aren’t familiar with it: it’s played by two teams of two players each, who sit at a four-sided table, partners opposite each other. The players bid against each other for the right to declare the hand. The winner becomes the declarer. His partner is the dummy, and puts his hand face up on the table. The declarer plays both his own hand and the dummy’s. In bridge problems, the players are called North, South, East, and West. The declarer is almost always South, which makes North the dummy. West leads to the first trick; thereafter, the winner of a trick leads to the next one. Each player must follow suit if possible. Ignoring trumps for now, the highest card of the suit led wins the trick.
Suppose you are South, and you hold the following spades, in your hand and in the dummy:
North: A 10 8 4 2
South: K 7 6 5
You lead the 2 from the dummy. East plays the Jack, you go up with the King, and West plays the 3. You have won the trick in your hand and now lead the 5. West plays the 9. What should you do?
The only spade you haven’t seen is the Queen. If West has it, you should play the 10, which will win (East has none left), and then lead the Ace, which will drop West’s Queen. If East has it, you should play the Ace, which will drop the Queen right away. Either way, the right play in spades gives you five tricks and the wrong play only four. Once again, it may seem like a pure guess. And once again, it isn’t; playing the 10 on the second trick is twice as likely to be correct.
Let’s use the same technique as before and play the hand 1600 times, to allow each possible distribution of the outstanding cards to be played 100 times. (What follows isn’t precisely accurate, since the distributions become more likely when the cards are split more evenly, but we’ll only be off by a few percentage points.)
- 200 times one hand will be out of spades on the first trick. We know before we have to make a decision that that’s not the case.
- 400 times one hand will have a small singleton, so both hands can play a small card on the first round. Again, not the case.
- 400 times each hand will have one honor card and one small card and can play a small card on the first round. Again, not the case.
- 300 times West will have either a singleton honor or the doubleton Queen-Jack. Still not the case.
(Now, only the cases we care about remain.)
- 100 times East will have the doubleton Queen-Jack. Half the time he will play the Queen, and half the time the Jack. (It’s in his interest to disguise what’s in his hand, so he’ll play unpredictably.)
- 100 times East will have the singleton Jack and have no choice but to play it.
- Likewise, 100 times East will have the singleton Queen, and have no choice but to play it.
That is, regardless of whether East plays the Jack or the Queen, it will be a singleton two-thirds of the time, so playing the 10 wins two-thirds of the time, while playing the Ace wins only one-third. This is called the Principle of Restricted Choice, and it applies in many situations in bridge, of which the above is probably the simplest. The idea is that if a play could have been made in one of two situations that a priori seem equally likely, but in one situation the play is forced while in the other it’s optional, assume the former is the case. That is, assume the choice was restricted.
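This count, too, can be checked by simulation. The sketch below (again in Python, with names of my own choosing) deals the four outstanding spades at random between East and West, which matches the equal-weight model used in the tally above rather than true bridge odds, and assumes East plays as described: an honor at random from Queen-Jack doubleton, a forced honor from a singleton, and his lowest card otherwise.

```python
import random

CARDS = ["Q", "J", "9", "3"]

def east_play(east):
    """East's card on the first trick, under the assumptions in the
    text: from Queen-Jack doubleton he plays either honor at random;
    from a singleton he has no choice; otherwise he plays low."""
    if set(east) == {"Q", "J"}:
        return random.choice(["Q", "J"])
    if len(east) == 1:
        return east[0]
    # Otherwise East holds at least one small card; play the lowest.
    return "3" if "3" in east else "9"

def simulate(trials=400_000):
    jack_seen = finesse_wins = 0
    for _ in range(trials):
        east = [c for c in CARDS if random.random() < 0.5]
        if not east:                # East void: ruled out at trick one
            continue
        if east_play(east) != "J":  # condition on seeing the Jack
            continue
        jack_seen += 1
        finesse_wins += "Q" not in east   # West holds the Queen
    return finesse_wins / jack_seen

print(f"P(finesse wins | East played the Jack) ≈ {simulate():.3f}")  # ≈ 2/3
```

The conditioning step is the whole story: among the deals where the Jack appears, the singleton Jack (where East was forced) outnumbers the Queen-Jack doubleton (where he had a choice and played the Jack only half the time) two to one.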
This is, of course, the same principle that applies in the Monty Hall Problem. If you pick door number 1, and Monty shows you that door number 2 has a goat, it’s more likely that Monty had to show you door 2 (because door 3 has the prizes) than that he had the choice of showing you 2 or 3 (because you were right in the first place). You should assume his choice was restricted.
The Monty Hall Problem originated in a letter to The American Statistician in 1975, and became famous and controversial when Marilyn vos Savant published it in her column in 1990, yet the underlying theory was known to expert bridge players by the early 1960s. There’s probably a lesson of some kind there.