On What Doesn’t Really Matter
Philip Kitcher expresses a frustration I’ve also had with Derek Parfit:
Consider the case that Parfit refers to as “Bridge,” a variant on a much-discussed scenario. In the canonical version, five people are bound to a track and threatened by the approach of a train. On the rail of the bridge over the track sits a fat man, whose heft would be sufficient to stop the train. Would it be right to push him from his perch onto the track below, thus using him as a buffer to protect the five? Of course, if you imagine yourself on the bridge faced with this choice, all sorts of awkward and practical questions arise. Would you be able to dislodge the fat man? (For the puzzle case to work, you have to be of lesser girth—otherwise you would have the option of sacrificing yourself.) If you pushed him, would he fall in a way that would halt the train? Is there some other way to prevent the deaths—a signal that can be given or a switch that can be thrown? Could you persuade the fat man to jump? Could you say, “Fat man, let us leap together”?
To avoid some of these questions, Parfit’s variant of the story stipulates a remote-control device that you can use to launch someone from the bridge on to the track. In this way he seeks to dodge or escape certain questions—but his modification introduces many others. How can you tell what will happen if you use whatever device you have? Could you stop the train simply by opening the trap, without anything falling through? Can you signal to the potential victim and arrange for some appropriate substitute object to fall through the trap? Are there other devices you should seek that would allow you to communicate directly with the driver, or to stop the train in less messy ways? Your response to any actual situation would depend on how you would answer or address questions such as these—on how you would cast around in attempting to avoid any death or injury (just as, in the original story, you would seek alternatives to the stark choice assigned to you). Parfit’s emendation of the canonical scenario is guided by no standard of objectivity for evoking reliable responses, and thus it generates further versions of the disease it is intended to cure.
You cannot respond to the imagined predicament without thinking hard, but hard thinking leads through a cloud of questions to a state of confusion. A few conditions are simply declared: the outcomes are known and the options limited. But since that sort of certainty and limitation is exceedingly remote from the circumstances in which we make our practical decisions, our judgmental capacities cannot be put to work in their normal ways. Readers are pitched into a fantasy world, remote from reality, in which our natural reactions are sharply curtailed by authorial fiat. When we are called on to render a verdict, the dominant feeling is a disruption of whatever skills we possess, and a corresponding distrust of anything we might say—often publicly visible when lecturers ask their audiences to respond to some puzzle case: only partisans of some particular theory answer confidently, while the rest sit in uncomfortable silence. The reader may even be left with a deep sense of unease that matters of life and death are to be judged on the basis of such cursory and rigged information.
I’d add that as a matter of empirical fact, people are horrifically bad at predicting how they and others would behave in the sort of extreme situations on which Parfit’s work so often rests. We don’t usually or easily imagine that we will be like Christopher Browning’s Ordinary Men — but, given the command to murder a fat guy (strike that: Jews) for the sake of five others (strike that: the Fatherland), a surprising number of us will grudgingly comply. And some will comply enthusiastically, without even asking whether the five others are really in danger, or whether killing the fat guy will really save them.
I would suggest that a more suitable ethics for an only dimly self-aware species like our own would focus less on these hypothetical constructions and more on arranging the world so that they and their kind come to pass as rarely as possible.
Agreed.
But then politicians wouldn’t have anything to do, so they won’t allow that.Report
Over several years, I’ve run a handful of experiments inspired by some other studies that used (roughly) the original Trolley and Footbridge problems. One of the things we realized quickly was that both of those raised all sorts of problems, not the least of which is that they don’t really work together (they’re not alignable, in our language). One of the things we saw early on was that people thought it would be more moral to jump in front of the train than to throw someone. This possibility isn’t discussed much in the philosophy literature, oddly enough (sometimes it’s avoided by suggesting that the dude is really obese, but does that mean obese people should jump themselves?). So we came up with an entirely different set of dilemmas, designed to target the sorts of intuitions the Trolley family of problems are supposed to target. But it was a pain in the ass, and I mean a serious pain, to come up with a set that works. Even then, they only work in the context of our study. I suspect that ethical dilemmas that are stimulating enough to make a point outside of a narrow experimental context, but designed rigorously enough to make that point well, are going to be pretty much impossible.Report
I’d be very interested to see what you came up with, although my difficulty with this type of problem is not only the standard difficulty that many philosophers seem to have — namely, that the problems are artificial and/or contrived.
That’s a problem, granted. What I also observe and am troubled by is that people are lousy at predicting their own behavior or the behavior of their counterparts in extreme, life-or-death situations.
So the students who say, “It would be more moral for me to jump” are not likely, in my opinion, actually to jump if they were presented with the situation in real life. Nor is anyone else, necessarily.
We may or may not learn a good deal about ethics by examining contrived situations. The contrivance, though, is only part of the problem. The bigger problem is that talk is cheap.Report
Jason, I agree. Our original problem set had to do with various taboo tradeoffs, in the context of a hospital (people don’t like to trade lives for money, e.g.). Later versions were even more mundane. None of them were entirely satisfactory, but they tested our hypothesis fairly well, I think. As well as we could expect at least. I’ll try to post a couple that make sense together later.Report
This is kind of the killer problem with the veil of ignorance.Report
As soon as I saw this post, I thought of connecting it to Murali’s post on Stillwater’s challenge for this exact reason.
It does do a good job of informing you of the stuff that you assume. You go behind the veil of ignorance, you make your call as to what is just, and then you come back and you think really hard about what that says about assumptions that you carried with you behind the veil without realizing you were doing so.Report
Yes, exactly. My difficulty with the veil of ignorance is that our personal attributes are not susceptible of being stripped away. They are what we are. Get rid of individuality, and you’ve gotten rid of personhood. The discussants behind the veil of ignorance are not persons at all. They are aliens, and I have no reliable idea of what they might come up with.Report
This is a pretty excellent point.Report
Re-reading Hegel led me to believe that Hegel would look at Rawls as someone who had a hole in his head.
“You want me to do what, now?”Report
This is really just Milgram in a new suit of clothes.Report
No one has brought this up, but the scenario seems a little fat-phobic.
I wonder if our sense of valuing others more or less based on characteristics might come into play. For example, are people more willing to sacrifice one fat man to save 5 non-fat ones than they are to sacrifice one skinny person to save 5 fat people?Report
I have to say that people who speculate about whether the trapdoor will work as advertised are missing the point of the problem. I suppose you could strip it down further (some of the Saw setups might work), but if I’m reading Chris above correctly, you still get quibbles.
Now that I think of it, I’ve been around a couple of times when someone has asked “Would you kill one person to save five?”, and several times the first reaction has been “How could that ever happen?” – meaning they want a fat-man-on-the-bridge scenario. Perhaps there is some avoidance going on?Report
The problem isn’t the “fat guy” but the “massive guy.” F=MA, and the important variable here is the mass.
A few open questions remain about this little paradox. Can I say I hate little paradoxes like this? The Massive Guy’s mass won’t be able to stop the train; it’s going to jump the rails. Sum of vectors. And how many people are on the train? And even if the Massive Guy weren’t pushed, the masses of the other people would make the train jump the tracks as surely as the Massive Guy would. Everyone’s screwed in this little parable.Report
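For what it’s worth, a quick back-of-envelope momentum check bears out the physics quibble. The sketch below is mine, not anything from the thread or from Parfit, and the train mass, train speed, and the man’s mass are purely illustrative assumptions:

```python
# Rough sketch: how much would a perfectly inelastic collision with one very
# heavy person slow a moving train? All figures are illustrative assumptions.

train_mass_kg = 200_000.0   # assumed mass of a small passenger train
train_speed_ms = 20.0       # assumed speed (~72 km/h)
person_mass_kg = 150.0      # assumed mass of the "massive guy"

# Conservation of momentum for a perfectly inelastic collision:
#   m_train * v_before = (m_train + m_person) * v_after
v_after = (train_mass_kg * train_speed_ms) / (train_mass_kg + person_mass_kg)
reduction_pct = 100.0 * (1.0 - v_after / train_speed_ms)

print(f"speed before: {train_speed_ms:.2f} m/s")
print(f"speed after:  {v_after:.2f} m/s")
print(f"reduction:    {reduction_pct:.3f}%")  # roughly 0.07% under these assumptions
```

On those assumed numbers the train loses well under a tenth of a percent of its speed, which is the commenter’s point: no body on the tracks is going to stop it.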
These situations always remind me of the Theodicy problem.
There are two assumptions that we begin with:
And the fun part of the argument comes:
Assuming your goodness, tell us what the good thing to do would be.Report
An alternative version I read once was that you see a train thundering down the track and you are standing at the switch. As currently set, the train will careen into five people who have no hope of avoiding it (indeed, they’re unaware of its approach). You could, if you chose, throw the switch, sending the train careening down an alternative track into one individual (similarly oblivious and incapable of evading the train) instead, or foul the switch and derail the train, killing dozens. It is established that you are too distant to warn any of the people on the tracks, nor can you halt the train. Your options are to do nothing, allowing the train to kill five people; throw the switch, causing the train to kill one person; or foul the switch, causing the train to kill dozens.Report
And thus God spanked Zarathustra.Report
That’s actually the original problem, the “Trolley Problem.” The problem in Jason’s quote is a variant of the “Footbridge Problem,” which was meant to contrast with the “Trolley Problem.”Report
To be honest, in either Parfit’s or North’s example (I’ve actually seen North’s more), I’d likely do absolutely nothing – not as a moral choice but simply due to panicked indecision.
My initial choice in Parfit’s hypothetical was “Throw myself in front of the train” because I’d like to think I’m that kind of person, but I don’t actually know if I am. To avoid that sort of answer, the hypothetical just keeps getting messier and messier as a result of trying to target the situation Parfit is going for (as Chris notes above).
But I think the fact that you have to create these new sorts of feedback loops – “I’d do [x]” – “You can’t do [x] because of [y]” – “Then I’ll do [z]” – shows how the hypothetical approach is the wrong approach entirely. We don’t and can’t do the sort of clean weighing of different things to measure morality that Parfit tries to give us, so it doesn’t illuminate much of anything. So many of our choices are based on incomplete information (does fair trade actually help? what will the effect of this action actually be? does charity [x] do good work? how similar are animal pain and consciousness to humans’?). And even beyond that, not all our actions are targeted to some straightforward moral calculus; sure, a hospital in devastated Haiti or remote Rwanda where 200,000 people have no doctor is more important than wildlife re-introduction or giving someone money for college, but I’ve given some cash to all of them instead of giving all the money to the Rwanda charity.
So anyway my new response to the hypothetical is “Fire a gadget that stops the train before it reaches the people whose lives it threatens”, because as long as we’re doing a fantasy-world hypothetical, I’m Batman.Report
I think the most likely thing I would do is freeze up and do nothing. Perhaps I might turn my head so as not to watch.
Interestingly, if you look at the original Milgram experiments in more detail, the one thing they definitely didn’t find is that people will inflict pain and suffering if ordered to do so. In fact, when Milgram’s subjects were actually ordered to administer the electric shocks, the rate of compliance fell to zero. Instead, it seems to show that people can be persuaded to inflict pain and suffering if it’s for what they believe to be a good cause. This doesn’t make the results any more pleasant, but it does reflect on what would be needed to actually get the average person to push the fat guy off the bridge.Report
It always helps to wear the white lab smock, too. Authority derives from its uniform.Report
This is true. And the uselessness of Parfit’s thought experiments can be seen more clearly if you put the experimenter in the white lab coat into the scene, issuing instructions.Report
Andy Taylor: Listen here, Ernest T. Bass! This is Sheriff Taylor! Go on home and leave these people alone! You’re keepin’ ’em awake!
Ernest T. Bass: Tell ’em to go back to bed! Charlene’s the one I want to talk to!
Barney Fife: Listen here, Ernest T. Bass! This is Deputy Fife! I’m armed, and if you don’t go home, I might just take a shot at you!
[another rock comes flying through the window]
Barney Fife: Stop that!
[Another rock hits the window]
Briscoe Darling: Sheriff, tell your deputy to be quiet before he gets us all stoned to death!Report
Yes, I’m told having a set of band uniforms might be helpful if the world ever goes tits up. Because it doesn’t matter what uniform…Report
The exercise here is to push every parameter until everyone makes it out alive and unharmed. HO scale model trains kill very few people.Report
Every what? Every person? Or every parameter? Because I gotta tell ya, things aren’t looking too good for the fat guy…Report