# Let’s Deal the Cards!

by Mike Schilling

The Monty Hall Problem is a great piece of recreational mathematics, because it’s very simple to state, doesn’t require any advanced mathematical tools to analyze, and has a solution that’s both definite and completely counter-intuitive. It goes like this:

Suppose you’re on the game show Let’s Make a Deal. At the climax of the show, you get to take part in the Big Deal of the Day, where you choose one of three rooms, each with a door. Behind one door is all kinds of great stuff: a new car, a house full of furniture, tickets for a round-the-world cruise. Behind each of the other two doors is a goat chewing on a bale of hay. After you make your choice, Monty Hall opens up one of the other doors, which reveals a goat. He now offers you the choice of switching to the other unrevealed door. Should you take it?

That’s how the problem is usually stated. The correct answer is “I have no clue.” With any problem in probability, it’s vital to pin down exactly what the conditions are. For instance, suppose Monty only offers you the ability to switch when you were right in the first place. Obviously, you should say “no”. So let’s make the tacit assumptions explicit:

- The prizes aren’t moved from room to room after the game starts.
- Monty always offers you the chance to switch.
- He always begins by opening one of the doors you didn’t pick and shows you a goat before asking for your decision.
- If there are two be-goated doors available, he picks one at random.

Now the answer is clear: you should switch. The room you switch to is twice as likely to be the winner as the one you started with.

At first glance, this makes no sense. You don’t seem to know any more than you did to begin with. Whether you were right or wrong, Monty could show you a goat. What makes the room you didn’t pick any better than the room you did pick? What if you switch, and he offers you the chance to switch back? If that’s a good decision, should you switch back and forth forever? And if switching back is a bad decision, how is it different from switching the first time?

Let’s look at it two ways (which will both lead to the same conclusion). One: you had a one-third chance of being right the first time. Since Monty will always show you a goat, your first guess still has a one-third chance of being right. We know the prize isn’t behind the door where we saw the goat. Therefore, the only other possibility is the door you can switch to, whose probability must be the other two-thirds.

If that logic seems too slippery (even though it’s quite correct), let’s work the problem from first principles. One of the best ways to solve a problem in probability is to run things repeatedly and see how often the different possible results appear. So, we play the game 300 times: the prizes will be behind door number 1 a hundred times, door number 2 a hundred times, and door number 3 a hundred times. Say that we always pick door number 1 (it really doesn’t matter if we pick different doors; it just complicates the counting. The point is that we’re right one third of the time). So, let’s compare the two strategies.

If we don’t switch, then we’re right the 100 times the prizes are behind door number 1. That’s one third of the time.

If we switch, things are more interesting:

- The hundred times that the prizes are behind door number 1, Monty shows us that either 2 or 3 has a goat, and we switch to 3 or 2 respectively. Silly us. We lose.
- The 100 times that the prize is behind door number 2, Monty shows us door number 3. He has no choice in the matter. He can’t show us our door (number 1), and he can’t show us number 2, because it doesn’t have a goat. So he shows us 3 and we switch to 2 and win.
- By exactly the same logic, the 100 times that the prize is behind 3, Monty shows us 2 and we switch to 3. Again, we win.

So, we draw the same conclusion: by switching we win two thirds of the time.
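The 300-game tally above is easy to check by brute force. Here’s a minimal Python sketch (mine, not part of the original post); it follows the article’s conventions, including always picking door number 1:

```python
import random

def play(switch, rng):
    """Play one round under the article's assumptions; return True on a win."""
    doors = [1, 2, 3]
    prize = rng.choice(doors)
    pick = 1  # as in the article, we always pick door number 1
    # Monty opens a goat door we didn't pick, choosing at random
    # when both remaining doors hide goats
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

rng = random.Random(0)
trials = 100_000
stay_rate = sum(play(False, rng) for _ in range(trials)) / trials
switch_rate = sum(play(True, rng) for _ in range(trials)) / trials
# stay_rate lands near 1/3 and switch_rate near 2/3
```

Run it and the counting argument above falls out of the numbers: staying wins about a third of the time, switching about two-thirds.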

Now let’s switch gears to a different game entirely, Contract Bridge. A brief introduction for those who aren’t familiar with it: it’s played by two teams of two players each, who sit at a four-sided table, partners opposite each other. The players bid against each other for the right to declare the hand. The winner becomes the declarer. His partner is the dummy, and puts his hand face up on the table. The declarer plays both his own hand and the dummy’s. In bridge problems, the players are called North, South, East, and West. The declarer is almost always South, which makes North the dummy. West leads to the first trick; thereafter, the winner of a trick leads to the next one. Each player must follow suit if possible. Ignoring trumps for now, the highest card of the suit led wins the trick.

Suppose you are South, and you hold the following spades, in your hand and in the dummy:

North: A 10 8 4 2

South: K 7 6 5

You lead the 2 from the dummy. East plays the Jack, you go up with the King, and West plays the 3. You have won the trick in your hand and now lead the 5. West plays the 9. What should you do?

The only spade you haven’t seen is the Queen. If West has it, you should play the 10, which will win (East has none left), and then lead the Ace, which will drop West’s Queen. If East has it, you should play the Ace, which will drop the Queen right away. Either way, the right play in spades gives you five tricks and the wrong play only four. Once again, it may seem like a pure guess. And once again, it isn’t; playing the 10 on the second trick is twice as likely to be correct.

Let’s use the same technique as before and play the hand 1600 times, to allow each possible distribution of the outstanding cards to be played 100 times. (What follows isn’t precisely accurate, since the distributions become more likely when the cards are split more evenly, but we’ll only be off by a few percentage points.)

- 200 times one hand will be out of spades on the first trick. We know before we have to make a decision that that’s not the case.
- 400 times one hand will have a small singleton, so both hands can play a small card on the first round. Again, not the case.
- 400 times each hand will have one honor card and one small card and can play a small card on the first round. Again, not the case.
- 300 times West will have either a singleton honor or the doubleton Queen-Jack. Still not the case.

(Now, only the cases we care about remain.)

- 100 times East will have the doubleton Queen-Jack. Half the time he will play the Queen, and half the time the Jack. (It’s in his interest to disguise what’s in his hand, so he’ll play unpredictably.)
- 100 times East will have the singleton Jack and have no choice but to play it.
- Likewise, 100 times East will have the singleton Queen, and have no choice but to play it.

That is, regardless of whether East plays the Jack or Queen, it will be a singleton two-thirds of the time, so you win two-thirds of the time by playing your Ten but only one third by playing your Ace. This is called the Principle of Restricted Choice, and applies in many situations in Bridge, of which the above is probably the simplest. The idea is that if a play could be made in one of two situations that a priori seem equally likely, but in one situation the play is forced while in the other it’s optional, assume the former is the case. That is, assume the choice was restricted.
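The Restricted Choice count can be checked by simulation too. This is a rough Monte Carlo sketch of my own, using the same simplification as the count above: each of the four missing spades is dealt independently to East or West, and East plays the Jack from Queen-Jack half the time:

```python
import random

def finesse_success_rate(trials, seed=0):
    """Estimate how often playing the 10 wins, given the observed plays."""
    rng = random.Random(seed)
    matching = finesse_wins = 0
    for _ in range(trials):
        # Deal each outstanding spade independently to East or West
        east = {c for c in ("Q", "J", "9", "3") if rng.random() < 0.5}
        west = {"Q", "J", "9", "3"} - east
        # Keep only deals consistent with the observed play:
        # West followed with the 3 and then the 9 ...
        if not {"9", "3"} <= west:
            continue
        # ... and East played the Jack on the first round
        if "J" not in east:
            continue
        if "Q" in east and rng.random() < 0.5:
            continue  # from Queen-Jack, East plays the Queen half the time
        matching += 1
        if "Q" in west:
            finesse_wins += 1  # playing the 10 would have won
    return finesse_wins / matching

rate = finesse_success_rate(100_000)
# rate lands near 2/3, matching the Principle of Restricted Choice
```

Only a few thousand of the simulated deals survive the conditioning, but among those, West holds the Queen about twice as often as East does.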

This is, of course, the same principle that applies in the Monty Hall Problem. If you pick door number 1, and Monty shows you that door number 2 has a goat, it’s more likely that Monty had to show you door 2 (because 3 has the prizes) than that he had the choice of showing you 2 or 3 (because you were right in the first place). You should assume his choice was restricted.

The Monty Hall Problem originated in a letter to The American Statistician in 1975, and became famous and controversial when Marilyn vos Savant published it in her column in 1990, yet the underlying theory was known by expert bridge players by the early 60s. There’s probably a lesson of some kind there.

*which makes North the dummy.*

Good lord, I hope our dear Canadian/Minnesotan friend misses this post. That was totally uncalled for.

The funny thing about the Monty Hall problem is that I’ve seen those clear and obvious solutions a dozen times, and I get them, but the answer still escapes my intuition. That and I have a tendency to confuse them with Monte Carlo simulation. (Not to mention the Full Monty.)

In the end I just had to grind out the possibility tree, and when you do that the answer stares you in the face.

Thank you. Your second method of explanation is much easier to grasp than the first. I’m in Hanley’s boat where it just seems totally mystifying to me, but reading that helped.

Are you looking for an argument?

I told you once.

Whatever you guys are referencing, it went right over my head. I guess I’m North.

Yet another Monty.

Same here, but I, also, just can’t get it through my thick skull.

My reasoning is thus:

1. Monte knows what’s behind each door,
2. He will *only* show you a goat before asking you to choose,
3. Therefore, the chooser gets net zero additional information,
4. Therefore, your actual odds don’t change at all.

Does this violate your presuppositions, Mike?

Those are all correct. But in the case where Monty’s choice is restricted (that is, where you chose wrong), he is giving you additional information by showing you the door with the goat; he’s telling you that the remaining door has the prize.

Right; there is information revealed, just most people don’t see it.

A great post on an important topic. Wikipedia’s “other host behaviors” table is quite interesting.

To get a really thorough handle on the probability from a Bayesian standpoint, see here, then return to the Bayesian account of the Monty Hall problem.

If you prefer to do without the math, just consider intuitively that the initial offer gave you a one in three chance of picking the winning door, while the second offer gave you a one in *two* chance of picking the winning door… *only* you haven’t taken the second offer yet, and you still need to do so.

The explanation for the Monty Hall problem that finally broke through my stupid, stupid brain was this one:

Imagine that there are 100 doors. Behind 99 of them, there is a goat. You pick your door… and Monty Hall shows you behind 98 other doors. Goats galore… then he asks you if you want to trade for the last one.

*my stupid, stupid brain*

I thought you were an egghead.

The other good one is “let me just get an answer for this question and then I’ll be out of your hair!”

That one “cracked” me up.

eggsasperating

Yeah, multiplying the doors always seems to help people see the way the probabilities work. I used to tell students to imagine a million doors. That always seemed to make it clear.

In grad school, I took a modeling course with 10 students: 2 cognitive psychology students (one of them me), 7 robotics guys (they were all really into this), and 1 applied math guy. One of our first assignments (maybe our first assignment, I can’t remember) was to do a Monte Carlo simulation of the Monty Hall problem (this will make James’ head explode). The applied math guy did the simulation, got what we all got (you should switch), and still refused to believe it. He ended up arguing with the professor for most of the subsequent class about the mathematics of the problem. It was really amusing for the two innumerate cog psych guys (one of them me, in case you’d forgotten).

It’s pretty amazing how often math people get that one wrong.

I think it goes to show you that mathematicians love applied computational math (diff eq and orbitals and stuff like that) a bunch more than theoretical math (number theory and abstract algebra and stuff like that) and most of them glide through statistics and forget it immediately.

Stats is a *boring* subject to a surprising number of mathematicians. The stats guys are usually baseball nuts.

The hundred or million doors version of the problem helped, but that still wasn’t enough to intuitively get the difference between the conventional problem and the “Monty Acts Randomly” problem, where switching doesn’t matter.

What finally made it make sense to me is that, if Monty is acting randomly, if he opens 98 doors (or 999,998 doors, or even one, the evidence is not so strong in that case but the principle is the same) and doesn’t find a car, then you have very strong evidence that he couldn’t find a car, that the car was behind the door you initially picked. And so sticking with your initial choice is significantly more likely to be correct. But if he knows which door the car is behind, and specifically doesn’t choose it (as in the conventional problem), there’s no such evidence. Intuitively, it’s obvious in the random case that it doesn’t matter which door you pick. Once I grasped that the random case gives you significant information that the conventional case doesn’t, it made sense that switching becomes a better choice in the conventional case.
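The “Monty Acts Randomly” variant described above can be simulated as well. In this Python sketch of mine, Monty opens one of the two unpicked doors blindly, and we throw away the rounds where he accidentally reveals the car, since the observed situation is that he showed a goat:

```python
import random

def random_monty(trials, seed=0):
    """Estimate win rates for staying and switching when Monty picks blindly."""
    rng = random.Random(seed)
    shown = stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # Monty picks blindly; he may expose the car by accident
        opened = rng.choice([d for d in range(3) if d != pick])
        if opened == prize:
            continue  # condition on the goat we actually saw
        shown += 1
        other = next(d for d in range(3) if d not in (pick, opened))
        stay_wins += pick == prize
        switch_wins += other == prize
    return stay_wins / shown, switch_wins / shown

stay, switch = random_monty(100_000)
# both rates land near 1/2: with a blind Monty, switching gains nothing
```

This is the contrast the comment draws: with a blind Monty, seeing a goat raises the odds that your original pick was right, and staying and switching come out even.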

I’m not understanding it any better with more doors. Can someone break it down for me? Thanks.

Monty Hall is giving you information when he shows you what’s behind the doors.

I’ll use the 101 doors example because, hey, that’s how I roll.

You’ve your hundred-and-one doors in front of you. Pick one. Any one. Got it? Now I’m going to show you 99 doors that were *NOT* the car leaving one, and only one, standing in front of you.

Out of the 101 doors, what are the odds that you just happened to pick the car?

1 out of 101, right?

So what Monty Hall is, essentially, asking you is “would you rather stick with the *ONE* door you just picked… or the 100??? Here, I’ll get rid of 99 goats for you.”

IF YOU HAVIN GOAT PROBLEMS I FEEL BAD FOR YOU SON

I GOT 99 PROBLEMS BUT A GOAT AIN’T ONE

Dude! I get it! Thanks!

So, imagine the 101 doors, labeled, conveniently, 1 through 101. I choose #1. Monty shows me 2-100. All goats. I can stick with #1 or I can choose #101.

I think the confusion stems from thinking, “The odds of it being behind #101 are 1/101, the same as being behind #1.” Which, technically, is true, at least at the start. But I’m really having the choice of door #1 OR doors #2-101. Do I have that right?

Yes.

Paring it down to having the choice between door #1 and doors #2 and #3 and, for the record, #3 sucks so the choice is really between door #1 and #2… pares down the odds enough so that it “feels” like a coin flip when, really, it’s a 2 out of 3 chance.

OK Kazzy, let’s play a game. Here are one MILLION doors (imagine I said that in my best Dr Evil accent). Behind one of these doors is a fabulous beach getaway! Behind each of the other 999,999 doors is a pile of poop. I want you to pick one of the doors. Go ahead, pick, I’ll wait…

… … … … … OK, pick already!

Great, thank you! You’ve selected door #371,619. What is the probability that the door you selected has the beach getaway behind it? One in a million, right? Now, I know which of the doors has the getaway behind it, so what I’m going to do is open all but one of the remaining 999,999 doors and show you a pile of poop behind each of them.

OK, it’s two and a half weeks later. All of the doors are open, the entire room smells like poop, and you’re probably thinking that no beach getaway is worth all of this, but bear with me. There are two doors remaining: the door you selected, which had a 1 in 1,000,000 chance of being the getaway door, and the one of the remaining 999,999 doors that I didn’t open. Behind one of these two doors is a vacation getaway, and behind the other, a pile of poop. You can either stick with the door you initially chose, or you can switch to the one door you didn’t choose that remains unopened. What do you want to do?

Is it clear now what you should do? If not, think about it like this. When you chose your door, there was a 1 in a million chance that you chose right. Just because I opened the other doors does not change the probability that you chose correctly. There is still a 1 in a million chance that the door you chose has the beach getaway behind it. There was a 999,999/1,000,000 chance (a probability of 0.999999) that the vacation getaway was behind one of the remaining doors. I opened all but one of those doors. So the probability that the beach getaway is behind the one remaining door is 0.999999 (999,999/1,000,000). The probability associated with all of the remaining doors attaches itself to the one door I didn’t open, because the probability that the door you chose has the getaway doesn’t change.

If this doesn’t help, think of it like this: imagine we do this several times. If, as most people initially think, the probability that the door you selected has the getaway is .5 (50/50 chance) after I’ve opened all but one of the remaining doors, then that means that you should get it right about half of the time if we run it multiple times. So even though the probability that you’ve chosen the right door is 1 in a million, initially, just by opening 999,998 of the remaining 999,999 doors, I’ve made you right 50% of the time. That’s obviously wrong, right?
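The many-doors intuition is easy to confirm numerically. A throwaway Python sketch (my own, using an assumed door count of 1,000 rather than a million just to keep it fast):

```python
import random

def play_n_doors(n_doors, switch, rng):
    """Play one round of the many-doors game; return True on a win."""
    prize = rng.randrange(n_doors)
    pick = rng.randrange(n_doors)
    # The host, who knows where the prize is, opens every door except
    # the player's pick and one other: the prize door if the pick was
    # wrong, otherwise a random goat door
    if prize != pick:
        other = prize
    else:
        other = rng.choice([d for d in range(n_doors) if d != pick])
    return (other if switch else pick) == prize

rng = random.Random(0)
trials = 20_000
switch_rate = sum(play_n_doors(1000, True, rng) for _ in range(trials)) / trials
stay_rate = sum(play_n_doors(1000, False, rng) for _ in range(trials)) / trials
# switching wins unless the original pick happened to be right:
# about 999 times in 1000, versus about 1 in 1000 for staying
```

Exactly as the comment argues: staying wins at the original 1-in-N rate no matter how many doors are opened, and everything else flows to the one door the host left closed.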

Interesting that pigeons figure it out.

If it’s any consolation, it looks like it takes the pigeons a few thousand trials to figure it out (here’s the paper), whereas someone just has to tell us how to do it. Yay humans!

But if nobody tells us how to do it, we apparently can’t learn from empirical experience as well as pigeons. That’s a bit depressing, but perhaps the immunity to empirical learning explains why so many students never learn that regular study leads to better grades.

I wonder if pigeons can do the Wason Selection Task.

Time for a J-Stor search. Maybe after class.

Heh… I’m trying to think about how you’d set it up for pigeons. First, they’d have to learn the rule, which, even for a simple rule (something like, “If a card has a square on one side, it is blue on the other side”), isn’t trivial; then you’d give them trials to see if they can learn the minimum number of cards to turn over in order to prove the rule. Something tells me no one’s tried this with pigeons, but maybe with primates.

Couldn’t find a cite for pigeons and the Wason test.

Somewhere I have a copy of an article about Bluejays and the prisoner’s dilemma, but I can’t find it now.

I would be amazed if a pigeon could do that, since it is purely ‘if…then’ thinking.

Honestly, I am actually surprised that Wikipedia says most people have trouble on it. Anyone with even a little formal logic training should be able to suss out the correct answer quickly. Monty Hall is a different kettle of fish; formal training seems to make people worse at it, most of the time.

The fact that pigeons perform better than humans on Monty Hall just shows that intelligence isn’t always what it’s cracked up to be. Sometimes it pays to be dumb.

Yeah, I meant it as a joke.

If I remember correctly, professional philosophers do better at the Wason Selection Task than most people, which does suggest that a little training in logic helps. But in essence, it just shows that while *modus ponens* seems to come naturally to us, *modus tollens* has to be beaten into us, at least as an abstract principle (people do just fine with the Wason Selection Task when you make it relevant to them).

Guess my sarcasm filter is on the fritz.

It saddens me that professional philosophers don’t perform essentially perfectly.

I bet if you parsed out philosophers by specialty, you would see a difference though.

I suppose that the human response to the situation is more likely to be “Food comes out of this gizmo; therefore there must be food inside the gizmo; how do I take the gizmo apart?” Intelligence comes in different flavors…

I always liked the story about the chimpanzee. Researchers hung a bunch of bananas higher than the chimpanzee could reach or jump. After a while, they provided a pole long enough to reach the bananas; the question they were thinking was, “How long does it take a chimp to figure out it can use the pole to knock down the bananas?” After a while, the chimp stood the pole on end directly beneath the bananas, used its four “hands” to scamper up the pole while keeping it balanced, grabbed the bananas and dropped to the ground.

Allow me to present an alternative hypothesis. The students are there in order to get course credit in a psychology course, and they presumably get it regardless of how well they do in solving the MHP. They’re doing 200 trials of a boring “game”, and the only reward for success is an electronic congratulations message from the computer. What they’re optimizing for is probably not obtaining a maximum score.

Connecting that to “why so many students never learn that regular study leads to better grades” is left as an exercise for the reader.

Looking at your contract bridge version, it seems to rely on the fact that, if a player has both Jack and Queen, they pick which one to play randomly. If, for some reason, a player always plays the Jack first out of the Jack-Queen doubleton (and you know that), then you have no additional information. Of course, if you know that, and they play the Queen, then you know with certainty that they don’t have the Jack, which is presumably why sophisticated players pick randomly.

I suppose the analogous Monty Hall-type problem is if Monty always opens the lowest numbered door that does not contain a prize. In that case, when the player picks door number 1, if Monty opens door number 2, switching or not makes no difference, whereas if he opens door number 3 switching is always correct.

That all sounds right to me.
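For what it’s worth, the “lowest numbered goat door” variant checks out in simulation. A quick Python sketch of my own, with the player fixed on door 1 as in the comment:

```python
import random

def lowest_door_variant(trials, seed=0):
    """Tally outcomes when Monty always opens the lowest available goat door."""
    rng = random.Random(seed)
    # opened door -> [times seen, stay wins, switch wins]
    tally = {2: [0, 0, 0], 3: [0, 0, 0]}
    for _ in range(trials):
        prize = rng.choice([1, 2, 3])
        pick = 1
        # Monty opens the lowest-numbered door that is neither
        # the player's pick nor the prize
        opened = min(d for d in (1, 2, 3) if d != pick and d != prize)
        other = next(d for d in (1, 2, 3) if d not in (pick, opened))
        row = tally[opened]
        row[0] += 1
        row[1] += pick == prize
        row[2] += other == prize
    return tally

tally = lowest_door_variant(60_000)
# When door 2 is opened, staying and switching each win about half the
# time; when door 3 is opened, switching wins every time.
```

This matches the comment’s analysis: opening door 3 is itself a tell that the prize is behind door 2, while opening door 2 leaves a genuine coin flip.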

This makes me wonder if there’s an optimal strategy for Deal or No Deal.

I think it’s “Always trade suitcases” but I’ve no idea what to do before that.

I never watched much, but I always wondered if the person up in the booth knew where the money was. As they came down to the end, sometimes the offers were pretty close to the expected value of choosing one of the remaining suitcases at random (which would be reasonable given no knowledge of where the big prize actually was) and sometimes they were *way* below the expected value (which would be reasonable if they knew the big prize wasn’t in the contestant’s suitcase, and wanted the player to hold on to the one they had).

One of those tough situations for the contestant: they get to play the game exactly once. The person in the booth gets to play many times.

Michael’s right; the optimal strategy depends on whether or not the guy in the booth knows anything, and the way they make the offers indicates that maybe they know and maybe they don’t.

Which leads me to believe that they *do* know, but they have a good statistician who sets the value of the offer based upon a randomizing algorithm that pretends that they don’t always know. Just sometimes.

Otherwise, a smart guy would have a major advantage in the game, the offer would always be a tell.

That show fascinated me when it first came on (and not for the 30 obvious reasons, well at least not JUST for those reasons). I actually worked up a spreadsheet to figure the expected value of the unopened cases. As near as I could tell, the offer was always lower than that number. But I don’t know what if anything that means other than they wanted the contestants to “Go On!” because it made for better television.

BTW, did you know that game show winnings are typically paid from an insurance policy taken out by the producers? So someone’s figured out all the stats.

What’ll *really* bake your noodle is the Two Envelopes Problem.

I have two envelopes. One contains *n* dollars, the other contains *2n*. You pick an envelope, but before you open it I give you the opportunity to open the other one instead. Is there a logical proof that you should (or should not) switch, as there is with the Monty Hall Problem?

*****

Rather easier is The Missing Dollar. Three men stay at a hotel; the manager charges them thirty dollars, and they each pay ten dollars. He later changes his mind and refunds them five dollars. The busboy, taking the money to the room, thinks “it’s hard to split five dollars three ways, so I’ll just keep two, and pretend that the manager gave them each a dollar back.”

So each man has paid nine dollars, plus two that the busboy kept; that adds up to *twenty-nine* dollars. Where did the other dollar go?

Oh shut up. The busboy only refunded three dollars to the men. 30 – 5 + 3 + 2 is still 30 dollars.

And yet programmers specifically make the contra mistake: multiplying 9 × 3 = 27, adding 2 to get 29, and then they’re stuck. Now ask yourself why.

A programmer wouldn’t make that mistake. A programmer would store the money in a numeric total. Nowhere in the spec was the programmer asked to multiply anything.

They get hung up on 9 x 3 + 2 because you told them it was important. It’s not a mathematical puzzle, it’s a test of their ability to stop dealing with the stated problem in good faith and say “you’re full of shit and your question is nonsensical.”

Obviously, there’s no reason either to switch envelopes or not to: you have no information to distinguish one envelope from the other. Is the argument about expectation, like

*I currently have N. There’s a fifty percent chance that the other envelope has N/2 and fifty that it has 2N, so my expectation after I switch is 5N/4. I should switch. OK, now I’m good. Except that that same logic still applies! I should switch back! Please, can I … Oh, thank you. Good, back to the first envelope. Man, good thing that same logic doesn’t still … arrrrrrrrrrrrrgh!*

If so, it’s silly because it completely overvalues the concept of “expectation” when important information is missing.

It’s the way it’s stated that makes the missing dollar problem perplexing. It suggests the wrong answer and it’s hard to un-suggest it in your head. But really the math is 3 x 9 = 27 = 25 (in the till) + 2 (in the busboy’s pocket). This is more of a brain bug than a logic problem.