One Ideology to Rule them All

Vikram Bath

Vikram Bath is the pseudonym of a former business school professor living in the United States with his wife, daughter, and dog. (Dog pictured.) His current interests include amateur philosophy of science, business, and economics. Tweet at him at @vikrambath1.

205 Responses

  1. LeeEsq says:

    I’m more than a little confused about the map. Why is Israel, the Jewish State, shown as having a Sunni Islamist ideology? Why is the United States, with our current parties, shown as social democratic, and Sweden, the ur-social democratic state, as liberal?Report

  2. Burt Likko says:

    Someone’s got to do it, so I’ll get the ball rolling: what about horrific consequences for an individual that lead to beneficial consequences for a great many people? Do we disclaim the slow, torturous murder of the individual that liberates and enriches hundreds of thousands of other individuals on the basis of consequence?

    Or is it okay to throw in a little categorical imperative in there, maybe admit that intent matters at least a little bit regardless of outcome?Report

    • zic in reply to Burt Likko says:

      Embedded in the utilitarian principle, it seems to me, is an understanding that, at best, it’s a balancing act, and it needs to be balanced across a number of conflicting concerns. So we put people’s basic civil rights and essential needs above having the lightest tax burden. We balance the need to educate the most reluctant student with the needs of the most gifted. The needs of security with the needs of freedom.

      And that balance is a wave form over time, not a single data point.Report

      • Will H. in reply to zic says:

        I think that’s where utilitarianism breaks down. That, and the degree to which expediency is a legitimate concern.

        I get where you’re coming from. It’s like “sea level” being a much different thing standing at the shoreline– there’s not any one sea “level,” but many different levels of water there. Surfers rejoice in this.
        Yet the giggity-giggity of those rejoicing surfers should not drown out the fact that, “Where’s the water at around here?” is an inquiry of discrete and verifiable answer. (“Dude, there was vapor in your breath even as you said that.”)

        So, it still breaks down to hierarchy.

        And even if it’s an equilateral allocation, sameness fits well within the hierarchical structure.

        (So, you have a plate sitting in front of you, and on the plate is a pile of green beans. Which green bean do you eat first? Do the green beans offer up their leader– i.e., “Eat this guy first– he’s the chief of all green beans.” — ? Are they refusing to offer up their leader? Should they be sorted by length before eating? What gives?)Report

    • Mike Schilling in reply to Burt Likko says:

      I’m buying a chaise lounge so I can lie down in comfort right here in Omelas.Report

      • Glyph in reply to Mike Schilling says:

        I thought of Watchmen, naq ubj Zbber vf zlfgvsvrq gung crbcyr pna pbagvahr gb ivrj cflpubcnguvp zheqrebhf ubzbcubovp enpvfg Ebefunpu nf fbzr xvaq bs ureb.

        Ohg vg’f boivbhf gb hf gur ernqref gung ertneqyrff bs gur hgvyvgnevna “evtugarff” bs Irvqg’f cyna (nsgre nyy, ur fnlf vg vf gur bayl jnl sbe gur uhzna enpr gb nibvq pbzcyrgr frys-vzzbyngvba, naq tbqyvxr travhf Qe. Znaunggna nterrf jvgu uvz – naq Qervoret tbrf nybat, orpnhfr jung ryfr pbhyq ur qb ntnvafg Irvqg naq Znaunggna naljnl) gurer vf fgvyy fbzrguvat nqzvenoyr nobhg Ebefpunpu’f qbttrq nqurerapr gb cevapvcyr nobir pbafrdhrapr.

        Gurl (jryy, Irvqg naq Znaunggna) ner fznegre guna ur vf. Gurl (jryy, Qervoret naq nethnoyl Znaunggna) ner “orggre” guna ur vf.

        Ohg Ebefpunpu fgvyy znl or “evtug”, naq gurl znl or jebat.

        Gubhtu gur jbeyq ohea.

        (And before anyone gets in ahead, yes, I know it’s only a story. But I think it illustrates a similar point).Report

    • Kim in reply to Burt Likko says:

      Burt,
      Pardon me, but I’m more concerned, at this very moment, with the slow torturous murders that DON’T actually help hundreds of thousands of individuals much.Report

    • Vikram Bath in reply to Burt Likko says:

      what about horrific consequences for an individual that lead to beneficial consequences for a great many people?

      Assuming the beneficial consequences aggregated across those great many people more than negate the consequences for the unfortunate individual, let the individual burn.

      There is a bunch of cognitive research about how bad we are at multiplying. If you ask three separate groups how much they would be willing to spend to save 8, 80, and 800 kids from dying of dysentery, they will say $76, $92, and $87 respectively. Denying benefits to a large number of people *is* horrific. It’s just that our brains have trouble visualizing it as horrific because we can only simulate one other person’s feelings at a time in our heads.
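
      To make the scope-insensitivity arithmetic concrete, here is a minimal sketch in Python (the dollar figures are just the ones quoted above, treated as illustrative) of what those stated amounts imply per child:

```python
# Stated willingness to pay (the figures quoted above, treated as illustrative),
# keyed by the number of children saved. The totals barely move, so the implied
# value per child collapses instead of scaling.
stated_wtp = {8: 76, 80: 92, 800: 87}

for kids, dollars in stated_wtp.items():
    print(f"{kids:>4} kids saved: ${dollars:>3} total -> ${dollars / kids:>6.2f} per child")
```

      A valuation that actually scaled with the number of children would multiply; these answers flatten instead, which is the failure of multiplication described above.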

      Incidentally, I will note that we have made this trade-off before. In a prior post somewhere I mentioned Typhoid Mary. She was imprisoned for her lifetime despite not having knowingly done anything wrong. I’ve never heard anyone make the argument that this was bad. Because when the consequences become serious enough, you have to stop grandstanding and be a utilitarian.Report

      • Glyph in reply to Vikram Bath says:

        Typhoid Mary. She was imprisoned for her lifetime despite not having knowingly done anything wrong.

        Depends how you parse this, I guess. She could have remained free if she had stopped seeking employment as a cook and followed hygienic practices (terms she agreed to). It was only after they’d caught up with her again (after she changed her name and started working as a cook again) that they really put her away for good.

        I agree she seems to have disbelieved she was infected and doing anything wrong. But she did break the terms under which they initially freed her.Report

      • Kim in reply to Vikram Bath says:

        Vikram,
        When the consequences become serious enough…
        You’re willing to condone outright murdering an innocent child?

        sorry to pull this out of ivory tower intellectualism, but I don’t
        think most folks can profess to being a utilitarian without
        actually running the theory through some real world paces.Report

      • Kim in reply to Vikram Bath says:

        “When the consequences become serious enough”
        … you’re willing to condone sexual slavery of minors?Report

      • BlaiseP in reply to Vikram Bath says:

        And then the whining school-boy, with his satchel
        And shining morning face, creeping like snail
        Unwillingly to school.

        Try telling your third grader of the benefits of an education, sending him out into the freezing wind, to the street corner on a dark January morning, to wait for the bus. You just do that, Kim. Ask the kid about Horrific Consequences.Report

      • Glyph,
        Yeah, I’m not that sure what Typhoid Mary’s options were.

        I’m a bit uncertain as to why she would try to become a cook again if she had other options.Report

      • Glyph in reply to Vikram Bath says:

        According to wikipedia:

        Release, name-change and second quarantine (1915–1938)

        Upon her release, Mallon was given a job as a laundress, which paid less than cooking. She soon changed her name to Mary Brown, and returned to her old occupation. For the next five years, she worked in a number of kitchens; wherever she worked, there were outbreaks of typhoid.Report

      • Fnord in reply to Vikram Bath says:

        Unless you go along with every “think of the children” “if it saves even one child, isn’t it worth it” argument, you’re already making that trade-off.Report

      • Kim in reply to Vikram Bath says:

        fnord,
        I don’t give a duck which side Vikram’s really on.
        I just want him to own up to it.

        Because either side is really, really unpleasant.

        Life ain’t easy, and the bigger decisions you make,
        the more they weigh on your soul.Report

    • kenB in reply to Burt Likko says:

      This is not an argument against utilitarianism per se — it’s solved just by adopting a more sophisticated evaluation mechanism, with the appropriate weights/multipliers.

      However, it does point up that utilitarianism depends on prior agreement on values — what results are good, what results are bad, and most of all, how do we adjudicate between different bundles of goods and bads. It’s likely that there’s a substantial connection between one’s political ideology and one’s decisions on how these various trade-offs should be resolved even in the absence of a political ideology.Report

      • Burt Likko in reply to kenB says:

        If that’s the case, @kenb , then intent doesn’t matter at all, does it? Vikram appears to be arguing for pretty close to a purely consequentialist moral calculus. Indeed, he affirms this by saying “let the individual burn” and referring without disapproval (perhaps even with approval) to Typhoid Mary’s imprisonment.

        If intent is completely irrelevant then it doesn’t matter at all why I do a particular thing, only that the well-informed, objective observer of my action would determine that it was more probable than not that net good will result from it. All manner of atrocities might be justified in this fashion.Report

      • Kim in reply to kenB says:

        Burt,
        intent only matters to the extent that others learn about it, and have happy/sads about it. Utilitarianism does allow for net happy/sads.

        The psychopath who tortures people to make them stronger (and succeeds, and convinces them and others that he was doing it for their own good), might be a good utilitarian.Report

      • Burt Likko in reply to kenB says:

        Right. But isn’t the psychopath’s behavior nevertheless intolerable?Report

      • roger in reply to kenB says:

        Burt,

        I am not following you here…

        “If intent is completely irrelevant then it doesn’t matter at all why I do a particular thing, only that the well-informed, objective observer of my action would determine that it was more probable than not that net good will result from it. All manner of atrocities might be justified in this fashion.”

        Are you suggesting intent matters more than consequences? Are you suggesting that good intentions can’t lead to all manner of atrocities?Report

      • Kim in reply to kenB says:

        Burt,
        I wouldn’t say so. I would feel a lot more upset if this was actual brainwashing, mind.Report

      • Burt Likko in reply to kenB says:

        @roger I’m more modestly suggesting that intent must be in the mix. At least a smidge. I’m certainly not suggesting that good intentions are the sure-fire way to avoid atrocity.

        Let me more ambitiously posit that we must simultaneously be both utilitarians and deontologists. An action that has an unacceptable result is unacceptable on utilitarian grounds. Unacceptable intent is unacceptable on deontological grounds.

        If you like virtue ethics and distinguish them from either utility-based or intention-based moral schemes, then go ahead and add that as a third layer.

        When there is no reasonable choice other than between the beneficial effect with fell intent on the one hand, and the awful result with good intentions on the other hand, then we have what is called a “dilemma.” A hard choice in which there is only the ability to select the least bad option. In such a situation, Vikram’s suggestion that we ultimately come down to practical result as opposed to more abstract metrics may well be valid. Or not.

        But the OP doesn’t focus on hard choices, but rather on a broad scheme applicable to most situations. And in most situations, when there are several acceptable choices available, intent matters.Report

      • Vikram Bath in reply to kenB says:

        I’d note that even if a world ruled by utilitarianism does allow for atrocities, there would be fewer than under any other ideology. If you do not like atrocities, you should try utilitarianism.

        To me, this attack on utilitarianism is a bit farcical. Utilitarians seem like awful people because they acknowledge that it is difficult to make decisions that are awesome for everyone. Other ideologues claim that their philosophies are awesome for everyone, and they thus exempt themselves from criticism.Report

      • Roger in reply to kenB says:

        Thanks Burt. Great answerReport

      • Jaybird in reply to kenB says:

        I’m more modestly suggesting that intent must be in the mix. At least a smidge. I’m certainly not suggesting that good intentions are the sure-fire way to avoid atrocity.

        The problem is that we don’t know what intentions are. We can work with stated intentions, of course… but I am reasonably certain that those don’t map 1:1.Report

      • Burt Likko in reply to kenB says:

        Oh, pshaw, @jaybird . I infer intent from actions daily in my law practice, which is no different than what lots of people do when evaluating others in daily life.

        For philosophical discussions, moreover, intentions can be givens.

        And minimally contemplative and self-aware individuals may know their own intentions and judge themselves accordingly.

        That it is sometimes a hard and inexact process to discern the true intentions of another person does not make those intentions irrelevant to our moral calculus.Report

      • Burt Likko in reply to kenB says:

        @vikram-bath it’s unknowable whether pure utilitarianism would result in more or less atrocity than pure deontology or pure virtue ethics or pure futurism or what have you.

        In one sense, a pure utilitarian utopia would necessarily result in zero atrocity: by definition, each decision made would be calculated to maximize good and minimize harm resulting from the decision (writ large, so as to avoid adverse results from accumulations of small decisions bearing unintended cumulative consequences). Whatever results were achieved would therefore be, by definition, the least harmful and most beneficial alternative, and anything but pure utilitarianism would necessarily create atrocity.

        @mike-schilling referenced Ursula K. Le Guin’s parable of Omelas and that’s as fine a hypothetical as any we might consider. Some of those who stayed in Omelas rationalized to themselves that the suffering of the child was the least atrocious of all possible scenarios.

        The child’s suffering wasn’t desirable, of course, but it was the least bad solution they could see. Any other situation would involve more suffering, and therefore was more atrocious than the solution they had. So they stayed. And I can see the logic to their argument for why staying was justified.

        But I can’t shake the abiding conviction that those who left Omelas made the morally superior choice. The child’s intense suffering is intolerable — because the child was innocent; he did not deserve to suffer and had not even volunteered to suffer for the sake of others. And by choosing to remain in Omelas, its citizens ratified the suffering of an innocent. Do you disagree, @vikram-bath ?Report

      • Jaybird in reply to kenB says:

        For philosophical discussions, moreover, intentions can be givens.

        It seems that they’re only interesting in situations where the intentions don’t match up with the “intended” outcome.

        If an actor intends bad things and bad things happen… so what? We expected that.
        If an actor intends good things and good things happen… well, we need more of that.
        If an actor intends bad things but good things happen… well, what judgment is required at all? If the actor knows what’s good for her, she’ll smile and nod and accept the civic medal… this is interesting, I guess, but I imagine that it’s interesting mostly because it’s rare for bad intentions to fail.
        It’s when the actor intends good things and then bad things happen that we are interested. I presume it’s because good intent will reduce bad action recidivism.

        But I’ve no idea how much weight to give intent… even in theory where, you’d think, it’d be easiest to do so.Report

      • Stillwater in reply to kenB says:

        Let me more ambitiously posit that we must simultaneously be both utilitarians and deontologists. An action that has an unacceptable result is unacceptable on utilitarian grounds. Unacceptable intent is unacceptable on deontological grounds.

        +1+.

        But this is a pitch right in RTod’s strike zone, no? That ideology is the enemy?Report

      • greginak in reply to kenB says:

        Without knowing intent how can we know what to measure to determine success or failure?
        It seems to me like a clear statement of intent is the start of a measurement process and also needed to try to look at all possible solutions.Report

      • Stillwater in reply to kenB says:

        It’s when the actor intends good things and then bad things happen that we are interested.

        Why this particularly? I mean, if we’re only talking about intentions.

        It seems you equate “good intention” with “bad governance” pretty selectively. And question-beggingly. Libertarians aren’t the exclusive holders of the “good intentions = bad outcomes” principle. I wonder why you think it differentiates your views from others in any way other than a degree of focus.Report

      • Stillwater in reply to kenB says:

        Or… what greg said.Report

      • Jaybird in reply to kenB says:

        So we’re just talking about intention completely divorced from outcomes?

        Well, doesn’t that make it easy to the point of being tautological? Bad intentions are bad. Good intentions are good.Report

      • Burt Likko in reply to kenB says:

        Maybe I agree with @tod-kelly , @stillwater . Did you ever think of that?Report

      • Burt Likko in reply to kenB says:

        Of course bad intentions are bad and good intentions are good, @jaybird . But as others have noted, it gets more interesting when you have good intentions but bad consequences. Is that good? I say no (pace @vikram-bath , results matter). Or, what happens when you have bad intentions but good consequences nevertheless (perhaps inadvertently) result? That, I say, is also bad, although it seems that @vikram-bath would disagree, having staked out a relatively extreme consequentialist position. (That last sentence invites comment from @vikram-bath and other strong consequentialists, of course.)Report

      • Stillwater in reply to kenB says:

        So we’re just talking about intention completely divorced from outcomes?

        Well, the two things are completely divorceable, no? Why not look at them that way?Report

      • Jaybird in reply to kenB says:

        Thinking about it some more, intentions actually provide us something discrete. Here it is: separable, distinct, and, in theory, measurable.

        An outcome? Well, there are first order outcomes, second order outcomes, third order outcomes, and, if we’re assuming something like the real world, we’ve got ripples coming in from the fifth and sixth order outcomes of events that happened yesterday getting in and messing everything up.

        How do you measure an outcome when nothing ever ends?

        Unfortunately, that’s what happens, in practice, to stated intentions. “You can’t just look at my first order outcomes! What I was shooting for was really, really awesome third order effects!”Report

      • Stillwater in reply to kenB says:

        How do you measure an outcome when nothing ever ends?

        Well, your preferred theory includes intentions as a basic principle, if I’m not mistaken: revealed preferences. Are the outcomes of those revelations “never ending”? What then?Report

      • Jaybird in reply to kenB says:

        Well, my preferred theory is a weird form of deontology.Report

      • Stillwater in reply to kenB says:

        Weird in what way?Report

      • Jaybird in reply to kenB says:

        Erm, wait. Maybe it’s a weird rule-utilitarianism. I dunno. It’s that silly vector essay from a million years ago.

        Maybe it’s a pure utilitarianism. I forget.

        In any case, I don’t see what I, personally, believe as half as interesting as a calculus whereby I can best figure out how to measure stuff like intent.

        Heck, I don’t even need one that makes me say “I WILL ADOPT THAT HENCEFORTH!”, I’d settle for one that makes intuitive sense and that can have two or more people take it to the same series of events and come out with similar judgments at the end of the day.Report

      • Stillwater in reply to kenB says:

        Just saw this.

        Maybe I agree with (RTod). Did you ever think of that?

        Of course Burt. I never took you for an ideologue. I was just giving a shout out to Our Tod in the context of the OP. That’s all.Report

      • zic in reply to kenB says:

        Thinking about it some more, intentions actually provide us something discrete. Here it is: separable, distinct, and, in theory, measurable.

        Really? Why are intentions measurable while outcomes not measurable?

        Intentions are not always clear and distinct. A few weeks ago, there was a post about a guy who didn’t think girls should go to college because they’d be exposed to bad things. So what’s the real intent there? I’d say it’s a hell of a lot easier to measure the outcomes than it is to pinpoint the intent.Report

      • Stillwater in reply to kenB says:

        JB, can I say this without it sounding antagonistic?

        Probably not.

        But what you wrote here is exactly why I get frustrated with your argument style. In part A, you criticized people for embracing a moral philosophy that you disagree with. But when asked to provide your presumably better moral philosophy, you politely refrain and instead repeat your criticism of other people’s moral philosophy.

        If getting moral philosophy right is so easy, why not just say what the solution to all these problems is? Or, you know, refrain from criticizing so much?Report

      • Jaybird in reply to kenB says:

        Second order, third order, whatever order outcomes. What if the first order outcome is really good but it goes on to change the culture for the worse?

        There’s a John Adams quotation: I must study politics and war that my sons may have liberty to study mathematics and philosophy. My sons ought to study mathematics and philosophy, geography, natural history, naval architecture, navigation, commerce, and agriculture, in order to give their children a right to study painting, poetry, music, architecture, statuary, tapestry, and porcelain.

        The joke I’ve seen made from this is to go on through the generations until you end up with professional wrestling, monster trucks, the Jersey Shore, and dubstep.

        While that’s a little over the top, it’s easier to ask whether the First Amendment protection of pornography isn’t doing damage to any number of people, and whether the argument that the First Amendment allows for condomless sex on camera wouldn’t result in more people getting STIs.

        But that’s not even the great granddaddy of them all when it comes to the difference between intent and outcome: Prohibition. When you hear that the intent is to get drunkards to stop beating their wives and children, or innocents like Tobias Ragg from drinking themselves into a gin-besotted sleep, or villains stealing to get money for whiskey… well, yeah. It makes perfect sense to keep booze out of the hands of these addicts.

        Except the stuff that happened went on to happen.

        There was an argument that went on for a time that explained that we just weren’t trying hard enough. We weren’t putting our back into it. If only we’d double down, we’d finally see the benefits that were originally intended. And, for a time, these arguments made a lot of sense to people.

        Why? Because intentions were discrete. Outcomes? We just haven’t gotten to the promised land just yet. Just a little bit further.Report

      • Jaybird in reply to kenB says:

        Stillwater, what’s the complaint? That I haven’t provided my own system? I provided it a while back.

        https://ordinary-times.com/blog/2009/07/07/the-vector-a-post-theist-moral-framework

        There.

        Can I question about how in the heck we’re supposed to measure “intent” now? Or do I have to answer all questions about my own theory?

        If someone asks me a question about my own theory, can I instead ask them to post their theory? If not, why not?

        Having established all that crap again, can I ask how in the heck we’re supposed to measure intent *IN THEORY* in such a way that can get separate folks to come to similar conclusions? Or do I have to spend some more time on my theory before I have standing to do that?

        If getting moral philosophy right is so easy, why not just say what the solution to all these problems is? Or, you know, refrain from criticizing so much?

        How’s this? If I say “I have the following problem with how such-and-such can’t be calculated under this theory and that’s why I don’t like it… well, maybe you could explain how such-and-such would be calculated because, from here, it doesn’t seem like it can be”, perhaps a better answer would be “here’s how we’d measure such-and-such” than “have you written an essay?”

        But, to answer your question, I have written an essay. The link is up there.

        Now what? Do I have to answer more questions about my essay before we figure out how we measure intent? How many more?Report

      • zic in reply to kenB says:

        Ok, @jaybird you get prohibition.

        I get polio vaccines.

        I think I’d like Title IX, too, because the intent to treat girls equally in education when it comes to sports has proven to be a good thing, though the outcome of women’s professional teams that are as profitable as men’s remains elusive.

        The point is that some things will have bad outcomes; obviously bad outcomes. And bad as prohibition was, there was also an awakening of alcoholism as an illness, not just bad behavior.

        But sometimes, there are good outcomes. Unimaginably good outcomes that we couldn’t have predicted beforehand. Like our ability to communicate here. Or the way a computer made it possible for me, a dyslexic, to become a writer. And don’t give me any crap about private initiative; policy — the desire for better ways to calculate trajectories of bombs — launched the computer revolution. Just because a bad thing might happen doesn’t mean you should freeze like a rabbit worrying about a fox.

        You might choke on the chicken bone. Slip in the bath tub. Fall down the stairs. Slip off the toilet. But don’t stop eating, bathing, going in and out of your house, or taking a crap once in a while.Report

      • Chris in reply to kenB says:

        How do you measure an outcome when nothing ever ends?

        That’s probably one of the best objections to consequentialist theories of practical reason. It can go even further, too: how do we know what the consequences of an act are, given that, particularly on the level of policy or the actions of states, the number of factors is too complex for us to comprehend? The problem of induction, and issues of counterfactual reasoning, rear their ugly heads here.

        If I run into traffic on my way to work, and take a shortcut, which results in an accident that kills a person, does consequentialism condemn my taking the shortcut?

        Obviously, a theory of practical reason that takes into account intention gets further here than any but the most sophisticated consequentialisms, right?

        There are ways that consequentialists try to work around these problems, but I’ve never been convinced by them.Report

      • Jaybird in reply to kenB says:

        I don’t understand the polio vaccine example. What was the intent behind polio vaccines if it wasn’t significantly different from eradicating polio?

        For Title IX, I’d look at examples where colleges cut men’s programs to meet the standards and then we can weigh the number of programs cut to the number of programs created and do, more or less, an apples to apples comparison without even having to get “intent” involved.

        I’m not arguing that we can foresee outcomes perfectly (though, I’d like to think that people who have been informed by experience and/or history tend to be better at guesswork than those who haven’t). I’m more trying to figure out to what extent intent is something that we need to take into account. The best reason that I can think of is that prevention of recidivism is baked into the cake for people who try to work according to the best of intentions and take “what happened last time” into account.Report

      • Jaybird in reply to kenB says:

        If I run into traffic on my way to work, and take a shortcut, which results in an accident that kills a person, does consequentialism condemn my taking the shortcut?

        The concept of moral luck is one that I wrestle with from time to time.

        I take a shortcut every day and nothing of note happens and, by default, I’m not bad. Someone else takes a shortcut and hits somebody. They’re a bastard.

        Had I left 4 minutes later, or earlier, or listened to a different radio station that made me drive faster or slower… I’d be the bastard that killed a guy.Report

      • Stillwater in reply to kenB says:

        Jaybird.

        Can I question about how in the heck we’re supposed to measure “intent” now? Or do I have to answer all questions about my own theory?

        Well, no. You can’t ask that question. Intent can’t be measured. It’s the background assumption by which measurement makes sense. Without it, measurement is meaningless.

        If someone asks me a question about my own theory, can I instead ask them to post their theory? If not, why not?

        But people have posited their own theories, dude. If you think simply asserting that good intentions can lead to bad outcomes is some sort of killer argument, I think you’re seriously confused. Even your theory – insofar as it’s more than just descriptive – invokes good intentions.

        What settles the matter? Surely not cherry picking results, I would think.Report

      • Stillwater in reply to kenB says:

        how do we know what the consequences of an act are, given that, particularly on the level of policy or the actions of states, the number of factors is too complex for us to comprehend?

        How is this unique to consequentialism relative to any other moral theory?Report

      • Jaybird in reply to kenB says:

        So demonstrating that I had written my essay made it possible for you to give that answer?

        That’s wacky.

        In any case, I don’t understand the whole “It’s the background assumption by which measurement makes sense. Without it, measurement is meaningless.” thing.

        If you think simply asserting that good intentions can lead to bad outcomes is some sort of killer argument, I think you’re seriously confused. Even your theory – insofar as it’s more than just descriptive – invokes good intentions.

        No, my assertion is that intentions are only interesting when they lead to different outcomes than intended… and, for some reason, we rarely (if ever) find reason to discuss bad intentions leading to good outcomes. So we’re stuck talking about how someone who did something horrid didn’t mean to.

        Which strikes me as less interesting than others seem to find it.Report

      • Stillwater in reply to kenB says:

        Alsotoo, Chris, consequentialism is basically a theory about moral judgment given outcomes rather than moral principles given epistemically justified expectations.Report

      • Stillwater in reply to kenB says:

        So we’re stuck talking about how someone who did something horrid didn’t mean to.

        Man, once again, I wish I knew how this relates to what I said. If your argument is simply that people with power can do horrible things, you won’t get any disagreement from me. Apart from that, I no longer know what we’re talking about.Report

      • Chris in reply to kenB says:

        Still, right. Well, it doesn’t have to be about moral judgments specifically. It can be about normative judgments more generally. And it can be about decision making (should be about decision making, or the problem of complex causal relations becomes intractable).Report

      • Stillwater in reply to kenB says:

        or the problem of complex causal relations becomes intractable

        It’s intractable in any event, if we take a sufficiently wide scope. Butterfly wings and all that.

        Where do we circumscribe the boundary? Somewhere less than hypotheticals? By generalizing from a single event?Report

      • roger in reply to kenB says:

        I have been critical of the utilitarian arguments too, but the idea that we should criticize an operational principle because it is not omniscient across all time and space is like totally bogus, dudes.

        We can never be sure of our actions. Everything we do is a hypothesis, an attempt at solving whatever problems we hope to address. Despite its flaws, I think a strength of utilitarianism is actually that it doesn’t sweep this assumption under the rug. It acknowledges reality.Report

      • Chris in reply to kenB says:

        Roger, you’re familiar with various methods of failure analysis?Report

      • Pierre Corneille in reply to kenB says:

        @kenb

        I happen to agree with you on this:

        However, it does point up that utilitarianism depends on prior agreement on values — what results are good, what results are bad, and most of all, how do we adjudicate between different bundles of goods and bads. It’s likely that there’s a substantial connection between one’s political ideology and one’s decisions on how these various trade-offs should be resolved even in the absence of a political ideology.

        Unfortunately, I don’t see many people, except for @burt-likko , addressing your point in this sub-thread. To me it’s a good rejoinder to utilitarianism, or at least its more blanket forms (I realize there are different kinds of utilitarianism, and I don’t know enough about them).

        To me, the principled pragmatism that @tod-kelly and others believe in is a great ism. But the principles come from somewhere, and I think if we push far enough on how one justifies what one thinks, then we approach territory where there are basic assumptions about right and wrong that are reminiscent of what a lot of us think of when it comes to ideology.

        I think the arguments advanced by @vikram-bath and @tod-kelly work much better against what Orwell called “nationalism,” and what I would call “tribalism,” than they do against the notion of “ideology” itself.Report

      • Chris in reply to kenB says:

        However, it does point up that utilitarianism depends on prior agreement on values — what results are good, what results are bad, and most of all, how do we adjudicate between different bundles of goods and bads.

        But this is true of any theory of moral judgment or practical reason more generally, isn’t it? A virtue ethic still has to come up with virtues and their practical instantiations, and a Kantian deontology has to arrive at universal rules via reason, right? It is possible to judge decisions and actions only by virtue of their consequences, or their utility specifically, based on values arrived at in a variety of ways — reason, discourse, practical experience, whatever. All a consequentialist theory says is that once you have a set of values, you evaluate based only on the consequences and how they accord with those values.

        The trolley problem, for all its faults, illustrates this quite well: you can have the same value, namely that human life is something to be protected, and arrive at different conclusions based on whether you act on a principle or rule, which says that you should not act in such a way that you directly harm another human life, or only on the consequences, which leads you to choose to act in such a way that you save the most human lives possible given your options.Report

      • kenB in reply to kenB says:

        @pierre-corneille @chris

        Alas, real life is not letting me participate very deeply in this, but I should say that my statement was not meant as an argument against utilitarianism/consequentialism in general but rather against the thought in the OP (or at least, in the penumbras and emanations of the OP — I’m not sure now that it was really there) that if we just get rid of our ideological commitments and focus on determining consequences, we can resolve most of our disagreements with careful open-minded empiricism.

        I do agree that “ideology” in and of itself isn’t the real target — rather it’s the tendency to treat one’s own ideological package as Truth rather than as a convenient bundle of assumptions and values that help us to make sense of the world. I don’t know that “tribalism” is exactly the right word for this, but I can’t think of a more appropriate one at the moment.

        Reihan Salam once labelled himself, in a jokey post at TAS, as a “realservative” — by which he meant more or less that he had a conservative outlook but was always mindful of the fact that he had no proof that his outlook was superior to others. You can see this shining through his posts, which is why he’s one of my favorite bloggers. I think we should all strive to be “real” in that way — libereals, librealtarians, etc. Easier to say than to do (and easier to see others’ failure at that than our own).

        Having not read others’ comments carefully, I apologize in advance if I’ve simply repeated what’s already been said.Report

    • LeeEsq in reply to Burt Likko says:

      Not only does intent matter but the ends do not necessarily justify the means. Each of the ideological states of the 20th century truly believed that they were going to create paradise on earth, but that certain drastic measures were necessary along the way. I doubt anybody on this site would justify this on utilitarian grounds even if Mao had managed to pull communism off.

      How you achieve things can be just as important as what you’re trying to achieve.Report

      • @leeesq

        Thanks for saying “necessarily.” I get irked when someone just trots out “the ends don’t justify the means” when it is manifestly clear that the person saying it believes the ends justify at least some undesirable means.Report

  3. zic says:

    Nice, Vikram.

    And I think I totally agree; perhaps I too am a Utilitarian. I think there’s a parallel ideologue, the Pragmatist, and perhaps a combined one, the pragmatic utilitarian.Report

    • Vikram Bath in reply to zic says:

      I’ve wondered what the relationship between “pragmatism” and utilitarianism/consequentialism should be, but I always end up discovering that I don’t understand the words well enough to make a big deal about the differences.

      My gut notion though is that the pragmatist is more likely to just tweak existing systems rather than design them from the ground up the way they should have been done in the first place.

      When it comes to politics, I think you really have to follow that notion of pragmatism if you want to have any success. Because if people hear what you really think, they’ll call you crazy.Report

      • zic in reply to Vikram Bath says:

        +1.

        If you can’t build from the ground up, pragmatic decisions are forced upon you.

        And when you do build from the ground up, pragmatic decisions will be forced upon you once the unintended consequences begin to show the flaws.Report

      • Chris in reply to Vikram Bath says:

        Generally, pragmatic ethics is a sort of meta-metaethic, compatible with pretty much any metaethical approach.Report

    • Scott Fields in reply to zic says:

      The “ideology” I consider my own is principled pragmatism, a framing I picked up from Tod Kelly here, though I don’t know that he coined it.

      As Mark Thompson notes below, a prerequisite for empirical investigation of potential consequences is success criteria. “Policies that have good expected consequences are good, and policies with poor expected consequences are bad” doesn’t mean much until you’ve established what “good” and “bad” are.Report

      • zic in reply to Scott Fields says:

        I’ve been arguing for ages here that laws for new programs should include a method of assessment that’s flexible and funding for doing that assessment. (I’ve repeatedly done so on this blog.) I know, that conversation reeks of the education arguments and standardized testing. But it also hints at good planning; including the planning to use this round of stuff to gain knowledge for better decision making next year or in 10 years or whatever.

        It’s a way of thinking about how to conduct the public’s business. Right now, we think of short-term savings instead of long-term stability and growth. And we’re prone to acting like the House, bringing the same repeal up over and over, expecting a different result, and then throwing a hissy fit when things don’t change. That’s one definition of insanity, or so a therapist once told me.

        We can do better; and admitting that we will get things wrong but that we can at least try to learn from those mistakes would be a huge step to acting like grownups governing our country.Report

      • Scott Fields in reply to Scott Fields says:

        @zic – I agree.

        In my little corner of the business world, we work to establish a culture of continuous improvement, mindful that making things better is an iterative process where the lessons learned from the previous change are systematically applied to the next change. And no improvement efforts are undertaken without a clear definition of the problem to be resolved, meaningful metrics by which to assess success and some monitoring to determine sustained control over time.

        But this approach is technocratic and too easily demonized, it seems.Report

  4. NewDealer says:

    I am not a utilitarian for reasons that Burt Likko mentions. Also, utilitarians are human and, as humans, are bound to make mistakes, have biases, and find convenient ways for their preferred outcome to (wait for it) coincidentally be the utilitarian solution. Some of the most arrogant and conceited people I’ve met have been self-professed utilitarians, never in the wrong, always in the right, incapable of expressing any doubt.*

    What is the metric for determining whether the needs of the many outweigh the needs of the few? What if a policy will be hugely beneficial for 50.2 percent of the population and horrific for 49.8 percent?Report

    • NewDealer in reply to NewDealer says:

      Forgot my asterisk:

      I consider the ability to openly express doubt about your ideas, beliefs, and policy preferences to be of extreme importance. I seem to be alone in this view.Report

      • Kim in reply to NewDealer says:

        “I consider the ability to openly express doubt about your ideas, beliefs, and policy preferences to be of extreme importance. I seem to be alone in this view”

        Nope, merely hypocritical. There are millions like you, who will say “everything’s fine to doubt” until someone dares to touch the third rail.

        When you stop using reason and start in on the insults, you have lost the rhetorical argument.Report

      • LeeEsq in reply to NewDealer says:

        Kim, your point being? All you’re doing is pointing out that people are people. Nearly everybody has a third rail.Report

      • Kim in reply to NewDealer says:

        Lee,
        When rails collide, sparks happen!Report

      • LeeEsq in reply to NewDealer says:

        Are you even capable of not speaking in metaphor?Report

      • Kim in reply to NewDealer says:

        Lee,
        Yes, I am. But I do believe you are missing half the humor in what I wrote above (specifically: if I say switching to insults loses the argument, and I am calling someone a hypocrite… well, then haven’t I lost the argument?)Report

      • @newdealer

        “I consider the ability to openly express doubt about your ideas, beliefs, and policy preferences to be of extreme importance. I seem to be alone in this view.”

        You’re not alone. The catch, at least when it comes to me, is that there are probably ideas I have that are so ingrained I might not even be aware of them enough to doubt them.Report

    • Mike Schilling in reply to NewDealer says:

      Good policies are ones that are good for the people who matter.Report

    • Vikram Bath in reply to NewDealer says:

      Also, utilitarians are human and, as humans, are bound to make mistakes, have biases, and find convenient ways for their preferred outcome to (wait for it) coincidentally be the utilitarian solution.

      If you find a way to free us of human decision-makers, I will probably endorse it.

      Some of the most arrogant and conceited people I’ve met have been self-professed utilitarians

      I don’t think I’ve ever actually met anyone IRL who claimed the label.

      I would note though that just because arrogant and conceited people adopt an ideology doesn’t necessarily make it wrong.

      incapable of expressing any doubt

      I think again, that you might just be describing humans–particularly the humans who tend to voice opinions. My Facebook feed has plenty of Republicans and Democrats posting, but I have yet to find a doubtful one.

      What is the metric for determining whether the needs of the many outweigh the needs of the few?

      I am not offering a metric, and indeed I haven’t really needed one in practice. There is enough uncertainty around that calculating a precise utility is usually not necessary. The point is to recognize that the purpose of decision-making is to decide in a way that makes things better rather than in a way that maps to some preset ideology impervious to evidence.

      What if a policy will be hugely beneficial for 50.2 percent of the population and horrific for 49.8 percent?

      Horrific consequences are to be avoided. Indeed, that’s the whole point. It’s the other ideologies that insist that something must be done even if it has horrific consequences.Report

      • LeeEsq in reply to Vikram Bath says:

        I think computer scientists are working on a way to free us from human decision makers. The only problem with that is if Hollywood is to be believed, it won’t turn out so well for humans.

        Theocracy is another way to free us from human decision makers, or at least binding human decision makers to only interpreting from a particular sacred text and doctrine, but that hasn’t had such a good track record.Report

      • Kim in reply to Vikram Bath says:

        Vikram,
        Okay, pal, please, go try to apply this in real life.
        If you think it’s not a horrific consequence to kill an innocent 5 year old, please, for the love of god try again.

        Fun Utilitarian Games!
        It’s perfectly okay to kill one innocent kid to stop Global Warming… but only if everyone doesn’t find out about it (as that would cause undue suffering to the populace at large).Report

      • If you think it’s not a horrific consequence to kill an innocent 5 year old, please, for the love of god try again.

        Kim, I think you need to take a step back here.Report

      • BlaiseP in reply to Vikram Bath says:

        @leeesq : Computer scientists would still have to construct the comparators. What’s a good decision? One which maximises for [utility]. It’s in square brackets because utility might have measurable components but the underlying principles are as vague as the theocracy’s goals are concrete.

        You can make people go to divine service. You sum up the number of people in church, subtract it from the total population and get the number of reprobates. Might even send out patrols to hunt them down, make sure they all get inside the mosque.

        But even the most elegantly-constructed software model would need some guidance on where force ought to be applied to make changes for the better. There’s that word again, “better”. No evicting it from the concept of utility.Report

      • James K in reply to Vikram Bath says:

        @vikram-bath

        If you find a way to free us of human decision-makers, I will probably endorse it.

        Are you familiar with futarchy? What do you think of it?Report

      • LeeEsq in reply to Vikram Bath says:

        BlaiseP, I was making a Terminator joke.Report

      • BlaiseP in reply to Vikram Bath says:

        Oh, okay. I avoid Ahnold movies on principle.Report

      • Fnord in reply to Vikram Bath says:

        We may not be free of human decision makers, but that doesn’t mean that some decision-making systems don’t work better than others on the lump of meat and cognitive biases we call a brain.

        As you point out, people are bad at multiplication; they’re also pretty good at self-justification. It’s at least plausible that those traits make catastrophic moral failures more likely under utilitarianism than other moral systems.

        As a theoretical utilitarian, I say that there are good utilitarian reasons why utilitarian reasoning shouldn’t be used too broadly by humans.Report

      • @james-k , Thanks for the link. There is some elegance to that idea. One objection that comes to mind is the potential for market manipulation. (There was an article in The Atlantic recently about someone who bet a couple million on Romney for unguessable reasons.)

        In general, I am a fan of trying things that haven’t been tried, though. I would love to see a single state try this for some sort of decisions so that the rest of us can see what happens.Report

      • James K in reply to Vikram Bath says:

        @vikram-bath

        Apparently prediction markets are basically impervious to manipulation. No matter how much money you pour into them, speculators can make money by pushing the price back again, and they will. In fact, since manipulation increases the liquidity in the market, it actually improves the market’s function.Report

      • BlaiseP in reply to Vikram Bath says:

        Prediction markets can be manipulated with planted rumours and outright lies. Sure, they will rebound, once the truth is out, but the interval is enough to cause enough of a bulge for crooks to take advantage of it.Report
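
        As an illustration of the dynamic the last two comments describe (not something either commenter wrote), here is a minimal sketch of a prediction market run by a logarithmic market scoring rule, the automated market maker Robin Hanson designed for prediction markets and futarchy. A hypothetical manipulator pushes the YES price up; a trader who believes the true probability is 0.5 pushes it back and, if her belief is right, profits in expectation. All names and numbers are made up for the example:

```python
import math

B = 100.0  # LMSR liquidity parameter

def cost(q_yes, q_no):
    """LMSR cost function; trades are priced by differences in this value."""
    return B * math.log(math.exp(q_yes / B) + math.exp(q_no / B))

def price_yes(q_yes, q_no):
    """Current YES price, i.e. the market's implied probability."""
    e_yes, e_no = math.exp(q_yes / B), math.exp(q_no / B)
    return e_yes / (e_yes + e_no)

q_yes = q_no = 0.0
print("starting price:", round(price_yes(q_yes, q_no), 3))       # 0.5

# A manipulator buys 150 YES shares to create the "bulge".
manip_cost = cost(q_yes + 150, q_no) - cost(q_yes, q_no)
q_yes += 150
print("after manipulation:", round(price_yes(q_yes, q_no), 3))   # ~0.82

# An informed trader believes the true probability is 0.5. For LMSR the price
# is exactly 0.5 when q_yes == q_no, so she buys NO until that holds.
shares_no = q_yes - q_no
trader_cost = cost(q_yes, q_no + shares_no) - cost(q_yes, q_no)
q_no += shares_no
print("after correction:", round(price_yes(q_yes, q_no), 3))     # back to 0.5

# If her belief is right, each NO share pays out 1 with probability 0.5.
print("manipulator paid", round(manip_cost, 2), "for an expected payout of", 0.5 * 150)
print("trader paid", round(trader_cost, 2), "for an expected payout of", 0.5 * shares_no)
```

        Under these toy numbers the manipulator spends about $101 on shares worth $75 in expectation, while the informed trader spends about $49 on shares worth $75 in expectation: the manipulation is, in effect, a subsidy to whoever corrects the price, which is the point about speculators pushing the price back. It says nothing about the timing objection, since someone does have to show up and correct it.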

  5. Here’s the problem with pure utilitarianism – it doesn’t exist absent some form of categorical imperative. How does a utilitarian even define a “good” outcome without reference to some sort of underlying philosophy or ideology? I don’t think this is possible.

    I would instead argue that the problem is not a lack of concern with utility so much as it is the elevation of policy preferences to the same status as normative principles. To repeat something I’ve grown fond of saying: “Have too many principles, and you soon have none.” This is because the more principles you have, the more frequently those principles are going to come into conflict, and thus the more exceptions need to be made to those principles. To preserve unity amongst your ideological grouping, these exceptions must then get elevated to the level of principles themselves, which eventually leads to exceptions to the exceptions, all of which need to be rigidly adhered to for the sake of ideological unity. In essence, the only true “principle” of the ideology eventually becomes the preservation of the ideology – the ideology would thus be most properly defined as “conservatism/Islamism/leftism/libertarianism/etc. is an ideology premised on the preservation of conservatism/Islamism/leftism/libertarianism/etc.” At that point, it’s not an ideology at all – it’s nihilism.

    This, to me, is the great problem with modern ideologies – far from being too consistent, they are actually incredibly inconsistent with no clear definition of what is Good. A proper normative ideology sets goals to be achieved (or, if already achieved, then maintained and preserved) that are goods in and of themselves, but says nothing about how those goals can or should be achieved.Report

    • Glyph in reply to Mark Thompson says:

      “Have too many principles, and you soon have none.”

      “So many vows…they make you swear and swear. Defend the king. Obey the king. Keep his secrets. Do his bidding. Your life for his. But obey your father. Love your sister. Protect the innocent. Defend the weak. Respect the gods. Obey the laws. It’s too much. No matter what you do, you’re forsaking one vow or the other.”

      George R.R. Martin, A Clash of KingsReport

    • How does a utilitarian even define a “good” outcome without reference to some sort of underlying philosophy or ideology? I don’t think this is possible.

      I do acknowledge that there are certain philosophically-relevant problems with utilitarianism. There are different kinds of goods and bads that might be difficult to compare. Even within an individual, it’s hard to assign utilities well. E.g., I would be willing to die a year earlier than I might otherwise for compensation, but I don’t have much of an idea of what that compensation should be.

      And it gets much, much worse when you open the problem to including other people and animals.

      Still, even if these problems are unsolvable, one is better off striving to determine what actions lead to generally good outcomes even if “good” is poorly defined. And most practical problems don’t really involve harming children to help a bunch of adults or other such oddities whose main reason for existence is to test the boundaries of ideas rather than to be representative of the sort of problems people actually have to solve.

      I’d also note that you have to choose *some* way of making decisions. Most people are currently following heuristics like “the minimum wage regardless of its level is always too low” or “Criminals should get more jail time, regardless of what they get now.”

      So, dismissing utilitarianism requires something stronger to take its place. And if someone suggests something better here, I actually will switch to that. Because I would be a lousy utilitarian if I wasn’t willing to give it up for something better. 🙂Report

      • Michael Drew in reply to Vikram Bath says:

        It’s not so much that it has philosophically relevant problems as an ideology; it’s that it simply falls short of being an ideology. Utilitarianism, at least as you have described it, is nothing more than an analytical framework for calculating preferred actions based on a substantive understanding of “the good” that has to be filled in by the user. What the user fills in for their theory of the good – that’s the ideology. Utilitarianism doesn’t offer a substantive theory of the good – because it can’t, because it, from what I can tell, actually is only meant to be a calculative framework for going from a particular understanding of the good to a plan for what’s best to do. But it leaves the whole question of what’s good open for people to hash out using ideology, weighing of values, discussion thereof, and much more.Report

      • @michael-drew This. A thousand times this.Report

      • [utilitarianism] is only meant to be a calculative framework for going from a particular understanding of the good to a plan for what’s best to do. But it leaves the whole question of what’s good open for people to hash out using ideology, weighing of values, discussion thereof, and much more.

        I agree with that, but I think we disagree on the relative importance of calculative frameworks and the weighing of values. Most modern-day ideologues don’t really weigh values. Rather, their ideologies give them ready answers to questions without the need for calculation.

        If people acknowledged that calculation was required at all, that would be a big, dramatic change in the status quo. Yes, the weighing of incomparable values would still be with us. There is no ready utilitarian position on abortion, for example, but many other questions could be answered without the need for everyone to agree on a common weighting of values.Report

      • BlaiseP in reply to Vikram Bath says:

        Let's suppose a Utilitarian and a Deontologist were having a tussle over some “Think of the Children” issue. Doesn't matter which party resorted to the argumentum ad populum; one of them did.

        The Utilitarian says “I am thinking of the children. Every day, children are beaten and abused by uncaring parents. I will therefore get a law passed, putting all children into dormitories, where they will be properly supervised and won’t be abused. Maximum good for all the children.”

        The Deontologist, horrified, says “Parents have a duty to raise their children. Even if a few parents are abusive, we might have some mechanism for removing the children from such a situation — but all the children?”

        “Oh, I thought we were talking about ALL the children, not just the ones being abused.”

        “Of course I was only talking about some of the children; I cannot accept your Dormitory Solution. Where parents have failed in their duty to raise children, society has a duty to intervene.”

        “Ah, but my Dormitory Solution would stave off all such problems before children get hurt. Yours doesn’t.”Report

      • To add to Michael Drew’s and Mark’s points, but also to tweak them a bit, I’ll say that ideology–or something we might call “ideology” if we haven’t settled on a precise definition of what that is–can inform another element of a utilitarian calculus. What I’m talking about is choice of first resort and choice of last (or later) resort.

        Here's what I mean. When presented with a problem, a stereotypical liberal (à l'américaine) will incline toward a government-centered solution, while a stereotypical libertarian will incline toward a market solution. After contemplating outcomes and other realities, the less stereotypical liberal might endorse market solutions and a less stereotypical libertarian might endorse a government solution. One is not necessarily less or more “utilitarian” than the other, although each might be less utilitarian than a “pure-ish” utilitarian, but their starting points are different.Report

      • roger in reply to Vikram Bath says:

        Riffing on Pierre….

        Eric Bonabeau, the author of Swarm Intelligence, makes a crucial point…

        “Human beings suffer from a “centralized mindset”; they would like to assign the coordination of activities to a central command.”

        “With self-organization, the behavior of the group is often unpredictable, emerging from the collective interactions of all of the individuals. The simple rules by which individuals interact can generate complex group behavior. Indeed, the emergence of such collective behavior out of simple rules is one of the great lessons of swarm intelligence.
        Solutions to problems are emergent rather than predefined and preprogrammed. The problem is that you don't always know ahead of time what emergent solution will come out because emergent behavior is unpredictable. If applied well, self-organization endows your swarm with the ability to adapt to situations that you didn't think of.”

        We have a central-command bias. We tend to think in terms of top-down, rational design. This comes naturally to us. The cognitive blind spot is decentralized order and bottom-up design. Science and markets discover and create order and knowledge within their domains in an extremely counterintuitive way. This partially explains why both developed so late as formal institutions.

        I heartily agree that some problems are best solved top down, specifically those problems which cannot be solved bottom up. The problem is that bottom-up problem solving is viewed as magical thinking by most educated adults. This is not just true of liberals. Conservatives share a similar cognitive bias; it is just that they replace the government master planner with an even more powerful deity.

        Long way of saying that 90% of any discussion with those of the top-down mindset is spent getting them to even consider that something other than top-down planning is possible. And yes, anarchists probably are the exceptions on the other side of the scales.Report
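
        (A minimal illustration of the point about simple rules and emergent order; this toy example is illustrative and not from Bonabeau's book. Each agent follows a single local rule, with no central coordinator, and the group still ends up coordinated.)

        import random

        def step(values):
            # One local rule: move toward the average of yourself and your two neighbors.
            n = len(values)
            return [(values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3.0
                    for i in range(n)]

        opinions = [random.uniform(0, 100) for _ in range(10)]  # arbitrary starting points
        for _ in range(200):
            opinions = step(opinions)

        print(max(opinions) - min(opinions))  # near zero: a consensus emerged bottom-up

        (No agent was ever told the final answer; the coordination is a byproduct of repeated local interactions, which is the bottom-up order being described above.)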

      • Michael Drew in reply to Vikram Bath says:

        Vikram,

        Sorry for the lateness here, but it's worth noting a point of agreement. I'm completely with you here: “If people acknowledge that calculation was required at all, that would be a big, dramatic change in the status quo.” I do consider myself some form of rough utilitarian in that way and others, but as I suggest, ultimately that doesn't really end up answering much about what I really believe. I'm still in a stage in life where I'm primarily weighing values, observing results from various utilitarian applications of value weightings. So it doesn't really take anyone very far toward understanding where I stand ideologically to say that I'm a utilitarian, or will be when I settle on some set of values and then begin to apply them using utilitarianism. And actually, in the case of people with better worked-out sets of value weightings, just to say they're utilitarians wouldn't say much about them either – you have to either describe their value sets or observe their utilitarian applications to know much about how their particular utilitarianisms are programmed, or in any case cashed out, ideologically.Report

  6. BlaiseP says:

    Bentham-ic utilitarianism is awfully tough on the concept of Society, which Bentham said was a fiction. Groups form up around shared fictions in pursuit of shared gains for themselves.

    Here’s the thing about truth: everyone carries his own version about. No two are quite the same.

    But power, power doesn't care about truth. Power can be translated into work. Work moves things. Effort isn't work until it accomplishes something, moves a kilo of lead one centimeter. Nobody thus affected against their wishes will consider such power anything but tyranny. Force majeure. All the things the Libertarians hate. Power everyone understands: money translates into power, mandate translates into power. Potential energy becomes kinetic energy.

    The utilitarian, confronted with the question of Raising Taxes would immediately laugh and respond with a rabbinical question of his own: “Which taxes and upon whom?” He would snarl a bit, still laughing, put on his accountant’s hat and demand to look at the chart of accounts, especially Accounts Payable, wanting to know who’s been getting paid and how. Looking up, he’d say “You don’t need to raise taxes. All you need to do is stop borrowing and spending and your problem is solved at once.”

    Utilitarianism and its child Consequentialism are fine things — right up to the point where they start justifying themselves. They just can't avoid the words “Should” and “Ought” and “Better” and “Worse”. Consequentialism is a willful horse and few can ride it without being galloped down roads to towns they might not wish to visit, shouting “Whoa!” all the while. Who gets to say “Should” and “Ought”? Those with power. Anyone can say “Better” or “Worse” for all the good it will do them. The powerful do as they wish within the mandate they are given and the rest of us may cry about it at our leisure.

    Consequentialism must be tempered with certain grim realities about the nature of power and mandate. A nation is more than its financial ledgers. If tax revenues are viewed as a continuing investment in a nation's continued viability — and not merely Big Gummint extorting so much of our Hard-Earned Money — we might arrive at some common sense about taxation and expenditures. I will not hold my breath while the various -isms from every corner of the compass rose come to terms on that subject.Report

    • Vikram Bath in reply to BlaiseP says:

      What are the alternatives though? If people are not consequentialist, those with power will still be defining “should” and “ought”.Report

      • BlaiseP in reply to Vikram Bath says:

        Power is constrained by mandate. Political power, financial power, ideological power — the only consequential aspects to the arbitrary use of such power are the self-interest of the power-wielder and the potential fallout arising from what he does with such power. I am told money can't buy you love and other such jingle-jangle folk wisdom, but I have yet to meet the commodity money won't buy. Maybe time. Money can't buy you more time. It can buy another few months of my time, though, if the price is right.

        I’m still in the camp of Quine, very much of the school of consequentialism. But I know enough of the world to understand people are stupid and self-centred. They don’t think about the consequences of their actions, especially not consequences to other people. Look at the AGW denialists. Try to tell them we’re inducing chaos into the system, they’ll tell you, without batting an eye, that such warnings are useless fearmongering because disaster hasn’t yet befallen us. Absolute logical nonsense. Falling from atop the Empire State Building at terminal velocity, waving at the horrified onlookers “Hey, I’m doing just fine. Wheee! Better’n Six Flags, this ride!”

        People with power might do wise things or stupid things, relative to their own self-interest. Hobbes’ monarch might view his own success or failure based on the wellbeing of his own kingdom — but he continues, saying you won’t find such thinking in a crowd. The more people butting into that conversation, the less-likely you’ll get any consistent view of what’s good for everyone.

        So what are the consequences of people being stupid and self-centred? They carry around their own version of the truth. They can't contemplate anyone else's truth statements because they won't evaluate their own. See, consequentialism says, with St. Paul, “The wages of sin is death.” We know if we go on screwing up the world, pulling all the fish out of the sea, burning up all the petroleum, cutting down the jungles — we know there's no other planet nearby. Eat, drink and be merry, for tomorrow we die — hell, people can't even get that far. Today, the US government is shut down because one group of people can't deal with other people's conception of the public good enacted into law.Report

  7. Creon Critic says:

    I think values rule over utilitarianism. The values supply the definitions for what good consequences are. So should NYC regulate sugary drinks? Fatty foods? Smoking in public places? Well there are competing values at stake like the public health and autonomy. Utilitarianism is particularly weak when faced with the challenge of incommensurable values.

    The truth seeker initially views all sides of an issue with equanimity and allows the evidence to make the decision.

    This sounds very good, but has a lot less content than at first glance. Say we're deciding whether to have a public broadcaster (and, relatedly, how we're financing it). For the sake of simplicity I'll present three options. Option A, no public broadcaster, let the market decide. Option B, yes a public broadcaster, but only through voluntary subscription (roughly the US position). Option C, yes a public broadcaster, financed through taxation (roughly the UK position). A, B, and C are all viable choices. What evidence helps us make the decision depends on the values we bring to the choice. Further, I don't see how the evidence leads to a definitive conclusion. Suppose it is factually true that public broadcasters serve as incubators for new voices. Option A is still viable. It is difficult to see what evidence one could supply that could conclusively undermine any of the three options. (And that's leaving out the more controversial Option D, no private broadcasters, only public broadcasters.)

    These value-problems multiply exponentially for utilitarianism when confronted with any of the culture war issues. How is the evidence to make a decision regarding abortion policy? And beyond the culture wars, utilitarianism, again, only comes in handy once the value priors have been set. For instance whether prisons are to serve more punitive or more rehabilitative functions.

    Overall, the dispute remains deciding among the consequences we prefer. Since each ideological school defines good consequences differently, elevating utilitarianism doesn't get us particularly far. And furthermore, utilitarianism is particularly unhelpful in resolving these fundamental value disagreements.Report

  8. roger says:

    Another great conversation starter, Vikram. You are becoming my new favorite blogger/writer.

    I am not a utilitarian either, though I respect and admire anyone who chooses to be one. My experience is that one useful way to sort our values is into SELFISH, ALTRUISTIC, or UTILITARIAN. I also think we are each a little of all of them.

    As a rule of thumb, I try to look for solutions which satisfy all three value sets…the demands of the altruist, the self-focused, and the utilitarian. This points to the class of actions which are mutually voluntary.

    This of course rules out many involuntary actions or interactions which harm one party to benefit another. The problem I have with this class (win/lose) is that it is extremely hard to compare, contrast and measure utilities. Furthermore, win/lose activities spiral into arms races of destructive and wasteful defense and offense. Indeed, you can even battle over which measuring stick of utility we use and who gets to wield it.

    There is though one way to get win/lose redistribution in a way which satisfies all three value foundations. Namely, to voluntarily agree to the process or rules before the “game” starts. In other words, you can choose to participate at a constitutional level behind a veil, à la Rawls and Buchanan. I prefer to choose rules which are expected to be best for me, for those I love and for everyone else. Perhaps I can persuade others to choose similarly.Report

    • Kim in reply to roger says:

      “Perhaps I can persuade others to choose similarly.”
      … try starting with charities.Report

    • kenB in reply to roger says:

      Ditto on the first line — I’ve really enjoyed your contributions, VB.Report

    • Mad Rocket Scientist in reply to roger says:

      I gotta agree with Roger, you’ve been keeping me thinking.

      Good work!Report

    • North in reply to roger says:

      Ditto. Excellent posts.Report

    • BlaiseP in reply to roger says:

      Nobody thinks he's being selfish when he's being selfish. We have to tell little kids “Don't be selfish. Share your toys, you're not playing with that one anyway.”

      Newsflash: utility can't be measured. It always ends up with what, in your own words, is “best for me, for those I love and for everyone else.” What does “best” mean in any given context? It's entirely dependent on outcomes. We'd have to make a choice, not knowing what the outcome will be. And we don't know. We have to act. Even inaction is a choice. Tick-tock.

      Utilitarianism is ultimately a failure of recursion. When I write recursion into code, I keep a recursion pointer, so I don’t blow up the machine. With Utility, it really is turtles all the way down. But they’re good turtles.Report
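
      (BlaiseP's "recursion pointer" presumably means an explicit depth counter. A minimal sketch of that idea in Python, with made-up names; the regress of "good because..." gets cut off by the counter instead of running forever.)

      def justify(claim, depth=0, max_depth=10):
          # The explicit depth counter plays the role of the "recursion pointer":
          # it stops the regress instead of letting it recurse without bound.
          if depth >= max_depth:
              return claim + " ... (turtles all the way down; stopping here)"
          return justify("what makes [" + claim + "] good", depth + 1, max_depth)

      print(justify("this outcome is good"))
      # Without the guard, the chain would eventually hit Python's own backstop
      # (sys.getrecursionlimit(), roughly 1000 frames by default) and raise RecursionError.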

  9. Chris says:

    if the only reason your ideology has value is that it has good consequences, then it has no advantages (and probably has some disadvantages) over pure utilitarianism.

    This is false, in that it is possible that there are good consequences that can't be reduced to “utility.” At least, this is the case unless “utility” is such a vague and abstract term that “utilitarianism” not only subsumes all consequentialism (of which, until this point, utilitarianism was supposed to be a special case!), but all possible metaethics including deontology. This is because it is possible for a metaethic, including deontology, to define what constitutes good consequences, and then use consequences in determining the value of an act or a decision. This is in fact what most metaethics do, ultimately, which is why consequentialism is specifically an ethic in which normative properties are determined by consequences alone, and utilitarianism is an ethic in which normative properties are determined entirely by utility. Without some variant of “alone,” or “entirely,” or “only,” pretty much any ethic becomes consequentialist or utilitarian (which is what you've done here). However, if normative properties are at all determined by something other than consequences, or utility, then even if an ethic takes into account consequences, or utility specifically, it is not utilitarian or consequentialist.

    This is important, because it is possible to determine normative properties via an ideology and consider consequences, and therefore take into account empirical evidence (in fact, even take a scientific approach), and still not be a consequentialist, much less a utilitarian. In fact, I doubt there are any ideologues who don’t consider the consequences of their actions at all, and don’t adjust their behavior based on consequences. They just aren’t considering the consequences you want them to consider. So when you deal with ideologues, once you recognize that you can’t change their behavior by talking about the consequences you want to talk about, you have to start talking about the consequences they want to talk about, or stop talking to them altogether.Report

    • roger in reply to Chris says:

      “They just aren’t considering the consequences you want them to consider.”

      If someone believes most people will burn for eternity in hell, then they can legitimately support exterminating humanity — for consequentialist/utilitarian reasons!Report

      • Chris in reply to roger says:

        And they have.Report

      • Chris in reply to roger says:

        Or put differently, people aren't going around deliberating on their moral judgments and decisions most of the time. Instead, their judgment of whether an action, theirs or someone else's, is moral is largely an intuitive one based on an affective reaction to the consequences, actual, perceived, or predicted, or to the act itself, depending on what we're talking about.

        A perfect example might be the reaction to the HPV vaccine. As you may recall, Governor Perry signed off on an HPV “mandate” (the scare quotes are meant to indicate that it was really easy for parents to get around the mandate) which put the HPV vaccine on the standard vaccine schedule for 12-year-old girls. There was a lot of backlash, some of which had to do with government telling parents what to do (even if they weren't, really), but much of it was a reaction to the association of 12-year-old girls and sex, since HPV is sexually transmitted. People reacted to the predicted consequence — 12-year-old girls having sex — and that largely or entirely determined their normative judgment of the vaccine and the mandate. There's nothing particularly rational about this, and people weren't thinking long and hard before they came up with the predicted consequence and subsequent affect-determined (which is to say, emotion-determined) judgment, but they were definitely taking into account consequences, and those consequences were at odds with their belief (intuitive as it was) that 12-year-old girls should not be having sex.Report

      • Chris in reply to roger says:

        I should stop and think about whether I have more to say before I hit “Post Comment”:

        The HPV example gets to my point about reasoning with people, as well. You can tell someone a million different ways that the HPV vaccine will save thousands of women's lives, but once someone's convinced it's about 12-year-olds having sex, that's probably not going to get you very far. You have to either undermine that association, or move on.Report

      • roger in reply to roger says:

        In Kahneman's book Thinking, Fast and Slow, he documents how people frequently replace or substitute a difficult, complex problem with a simple one. Then they just answer the simple one.

        Substitution bias.Report

  10. North says:

    Yeah, I'm gonna chime in with everyone else here. The basic failure of utilitarianism is that not all things can be objectively reduced to a utility value. If you cannot measure, then you cannot analyze or compare, and if you cannot do that, then the ship of utilitarianism founders on the merciless shoals of the real world.Report

    • roger in reply to North says:

      But what of rule utilitarians who suggest that you can side-step this issue by allowing people to choose which sets of rules they operate under? In other words, that there is no need to compare or measure utility between people if you set the system up so that each person chooses which game to play and who to play it with?Report

      • Chris in reply to roger says:

        I’ve never heard of a rule utilitarianism or consequentialism more broadly that says everyone gets to come up with their own rules. That would be pretty damn chaotic.Report

      • North in reply to roger says:

        Dunno, don't we just call them libertarians, Roger?
        Since utilitarianism is predicated on an objective measuring of utility and then striving to maximize it, I'd think that subjective utility measurement systems (like free market systems) would be considered imperfect from a utilitarian viewpoint.Report

      • roger in reply to roger says:

        Chris,

        You took a serious comment and rewrote it into a cartoon. Should I assume you are no longer interested in discussing the topic (Public Choice Theory)?

        The idea is that universal agreement at a constitutional level, à la Rawls-meets-Buchanan, allows one to overcome the problems of conflicting notions of utility. In effect it moves the level of choice up one or more levels of abstraction. The Public Choice theorists have various suggestions on ways to approach this ideal. One of them is not “everyone gets to come up with their own rules.”Report

      • roger in reply to roger says:

        North,

        Again I am not a utilitarian, but I think rule utilitarianism is extremely common among libertarians (specifically the consequentialist branch).

        The argument, roughly, is that property rights and contract law, mixed in with maximal freedom where others are not directly harmed, lead to the type of institutional arrangement which a utilitarian would logically choose. I would argue that such an arrangement can also be logical for a self-focused person and an altruist too.Report

      • Stillwater in reply to roger says:

        Roger, you wrote

        Again I am not a utilitarian.

        I find that sorta surprising. I've engaged in some pretty in-depth conversations with you about these issues, in which you say you identify as a pragmatist about institutional frameworks, describe choice and free markets as providing the greatest good for the greatest number, and hold that rights are conventionally and not a priori determined.

        I've also noticed a strain in your arguments which treats non-coercion (as you define it) ambiguously, as both a good in and of itself (a priori) and as sufficient for maximizing utility (a posteriori). And that free markets are necessary and sufficient (given *your* conception of markets!) for maximizing total utility.

        If people are confused about what you believe, it seems to me it’s because you argue selectively for a mish-mash of criteria to justify your views – rights, a priori first principles, categorical imperatives, consequentialism, utilitarianism – invoked primarily to disagree with people for what you perceive as the ideological presuppositions motivating their views rather than anything they’ve actually said one way or the other in defense of those views. Let’s face it: you hate liberals.

        I hope my saying this doesn’t come as a shock to you, since I’ve been making these types of objections for as long as you and I have been discussing these matters.Report

      • Stillwater in reply to roger says:

        Again I am not a utilitarian.

        Here's just one counterexample: you argued that continuing to support government subsidies for the poor (a taking from others!) is justified if the employers who take advantage of those subsidies would otherwise have to raise prices to pay a living wage.

        That’s a form of utilitarianism, one which defines utility as low prices.Report

      • Chris in reply to roger says:

        that each person chooses which game to play and who to play it with?

        I’m not sure how to turn that into a cartoon.

        If this is your view, no, I’m not particularly interested in discussing it, because it’s not serious. I highly, highly doubt it’s your view. I’m sure I’d be interested in discussing what you actually believe.Report

      • Stillwater in reply to roger says:

        In other words, that there is no need to compare or measure utility between people if you set the system up so that each person chooses which game to play and who to play it with?

        The Public Choice theorists have various suggestions on ways to approach this ideal. One of them is not “everyone gets to come up with their own rules.”

        Those are prima facie contradictory, Roger. I hope you see that. One of those claims doesn’t go with the other.

        If you mean something more subtle – if you add in a bunch of idealized conditions about human rationality and institutional structures and the dynamics of sensitively dependent systems – then maybe you have a compelling point here.Report

      • roger in reply to roger says:

        Stillwater,

        I will skip over the final gratuitous insult. Have you considered that saying something like that makes this forum an uncomfortable environment? I always assumed you liked me (like I do you) and we just had hot discussions, but this has me questioning that assumption.

        I am a pragmatist and a consequentialist. You and I often agree on things in these ways. And yes, I constantly present arguments that doing x will result in greater good.

        You are also right that I praise non coercion as being good in and of itself (because most people don’t want to be coerced — if they liked it I would be fine and dandy with it) and that I believe mutually voluntary, positive sum interactions maximize utility.

        Just to clarify, if I provide directions on how to get to Carnegie Hall, it does not imply I am arguing you should go to Carnegie Hall. Stated another way, when I argue that free markets or positive sum interactions maximize utility, it is not the same thing as saying YOU SHOULD MAXIMIZE UTILITY. I know of no a priori reason why I or Vikram or you SHOULD be a utilitarian.

        I am fine if you are a utilitarian. I am fine if you are an egoist. I am fine if you are an altruist. I have repeatedly stressed in these pages that I believe all people are actually a complex mix of all three. I know I am.

        However I have no arguments on why anyone SHOULD be one or the other. And if I did have an argument on what you should be I can’t imagine any reason why you should listen to me. Indeed my only moral advice is to not follow my moral advice. The good thing about this advice of course is that it is logically impossible to disobey.

        Let me repeat what I said earlier. I look for solutions which would persuade someone regardless of whether they are egoist, altruist or utilitarian. The intersection point is to create or choose a set of rules which from behind a veil will result in maximal expected utility according to your values. I assume this is obvious enough without an example, but I can provide some illustrations if it would help.Report

      • Stillwater in reply to roger says:

        Roger, you said I was an agent of evil due to my own ignorance. More than anyone on this site – I think, anyway, maybe LWA is in the mix too – I've engaged with your arguments and responded to your comments with thoughtful critiques and counterarguments and evidence and dialogue. I think I've given liberals' views on all these things the best effort I could have given – and yet you call me an ignorant agent of evil.

        I don’t know how to get past that part.

        As a person I know I'd like you and would love to hang around the beach with you drinking … well, I like whiskey on the rocks, which isn't very beachy. So the disagreement isn't personal. Please PLEASE don't think that it is. You have your views about political economy, and I disagree with them. Due to my own ignorance as a Liberal.Report

      • roger in reply to roger says:

        Chris and Stillwater,

        Just to add the details. The idea is that people can get together and agree to institutional rules voluntarily. Others can choose to join those institutions which meet their values. If no institution meets their values they can attempt to persuade those in an institution to adapt, or they can create a new one and persuade others to join them.

        In general people will tend to join institutions which have universal rules (aka no PRIVILEGE, with the word used in my sense not yours). Everyone would like to have privilege of course, it is just that those without privilege refuse to play their reindeer games. So they settle on universal standards like property rights, rule of law, common safety nets, enforcement mechanisms, and so on. They may even agree to ways to change their constitution or to enable certain individuals to make certain types of collective decisions.

        I understand that this vision is an ideal. In reality, we are born into institutions that are already fully developed and perverted by past actions, not all of which were benevolent in intent. As an ideal, my recommendation is to slowly and carefully take baby steps in the direction of more institutional choice.

        I have listed these in past discussions. They include such ideals as simple, non-activist rules rather than complex, interventionist rules. They include allowing mutually voluntary actions where practical. Subsidiarity where practical. Opt-outs where practical. Choice built into institutions where practical. Exit rights where practical. Sunset provisions and supermajorities where practical.

        I also know that in some cases none of these are practical. Fine. Baby steps. Let’s start where it is practical.

        I agree with you that modern mixed economies are the bee's knees. No generation has been anywhere near as well off as the current one. I would like to see incremental improvement making it even better. After all, if we can just continue the current per capita growth trends, in 400 years our descendants will make the inflation-adjusted equivalent of a million dollars a day. Should be more than enough to fix global warming and pay for universal health care to boot.Report
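
        (Roger's 400-year figure is easy to sanity-check. A back-of-the-envelope calculation in Python, assuming a starting per-capita income of about $50,000 a year; that starting figure is an assumption, not Roger's number.)

        start = 50_000               # assumed per-capita income today, dollars/year
        target = 1_000_000 * 365     # "a million dollars a day", annualized
        years = 400

        required = (target / start) ** (1 / years) - 1
        print(f"required real per-capita growth: {required:.2%} per year")

        (This prints roughly 2.25% per year, which is in the neighborhood of long-run real per-capita growth in rich countries, so the claim is at least arithmetically in range.)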

      • roger in reply to roger says:

        I will gladly meet you for that drink. I bet we would have a blast.

        Peace.Report

      • BlaiseP in reply to roger says:

        O that would look so great on a marquee. “Tonight only: Agents of Evil.”

        If I ever get a band back together (sigh) I’m gonna name it Agents of Evil.Report

      • Stillwater in reply to roger says:

        If you get the band back together and record an album Blaise, could you throw me a bone and name it “Vandalism or Death”.

        Just think about it.Report

      • BlaiseP in reply to roger says:

        That I will. It'll start out with a terrifying Mexican trumpet solo, the Degüello. At the Alamo, Santa Anna had 14 bands playing it.

        Vandalismo – o muerte!Report

      • Chris in reply to roger says:

        In general people will tend to join institutions which have universal rules (aka no PRIVILEGE, with the word used in my sense not yours).

        This is not true. In fact, it’s hard to imagine anyone would think it’s true given all of human history.Report

      • roger in reply to roger says:

        If given a choice or alternatives. My recommendations are based upon extending choice, options and exit rights. History is full of privileged folks devising great ways to prohibit exit and choice and thus use their privilege to exploit the masses.

        Foragers were usually able to avoid domination due to an egalitarian ethos combined with the exit freedom of nomads with relatives in neighboring bands. With agriculture that exit freedom was lost, and the stationary bandits eventually took over.Report

      • Chris in reply to roger says:

        Women might have something to say about your examples. Or men who think about the way women were treated in the groups you mention.

        People don’t gravitate towards universality. They gravitate towards rules that favor them or their in-group. Women and out-groups have pretty much always gotten the shaft. For example, how did women exit the hunter-gatherer groups you mention? If and when they wanted to, or by being married off to another group (or kept as war bounty by another group, or whatever)?Report

      • roger in reply to roger says:

        In other words you are claiming that they were prevented from exercising exit rights by their husbands. I am arguing for exit rights and increasingly favorable options. So how are you disagreeing with me?

        “People don’t gravitate towards universality. They gravitate towards rules that favor them or their in-group.”

        Again this was my initial comment. People prefer privileged positions. The trouble is that they cannot get others to go along with the sucker's game except via coercion and preventing exit rights. So that is what we see. Hence my arguments against coercion and for expanding the freedom to choose.

        Are you trying to argue that people freely choose exploitation? Why? Or are you just agreeing with me that people will establish privilege when able to get away with it?

        By the way, would you like to expand on your above point on “failure analysis?”Report

      • Chris in reply to roger says:

        Your initial comment was:

        In general people will tend to join institutions which have universal rules (aka no PRIVILEGE, with the word used in my sense not yours).

        Which is the opposite of:

        People don’t gravitate towards universality. They gravitate towards rules that favor them or their in-group.

        Which was my reply.Report

      • roger in reply to roger says:

        And my next sentence was…

        “Everyone would like to have privilege of course, it is just that those without privilege refuse to play their reindeer games.”

        My next paragraph then went on to explain that this is an ideal and that in the real world those that are able to get away with it will seek to exploit the situation in non benevolent ways.Report

      • Chris in reply to roger says:

        Then except for that sentence, we are on the same page. That sentence is just false. Very few people care about universality.

        And freedom to exit is great, but only really feasible on small scales, eh? And even then, it’s only possible assuming diversity between groups.Report

      • roger in reply to roger says:

        “Very few people care about universality.”

        Since most people are self-focused, they rationally prefer PRIVILEGE.

        Absent coercion, those UNDERPRIVILEGED can rationally be expected to rebel, exit or choose other relationships.

        This destroys privilege and thus is actively prevented with force, coercion, ideology and the elimination of competing alternatives. You are 100% right that most history reflects this.

        My recommended solution is to create exit rights and competitive alternatives. Since everybody wants privilege, and nobody wants to grant it to others over them, the solution to the prisoner's dilemma is to settle for universality, aka equality under the rules. Standard prisoner's dilemma solution. (A small worked example follows this comment.)

        As an example: when I played tennis in high school there were lots of people to play with. Some cheated and thus tried to establish a privileged position. Fortunately I had exit rights. I simply chose to play with friends who did not cheat. This also discouraged me from cheating, as nobody would want to play with me. Dilemma solved. The logical solution is to abandon privilege and settle for universality.

        “And freedom to exit is great, but only really feasible on small scales, eh? And even then, it’s only possible assuming diversity between groups.”

        Exit does not have to be physical. It can also be a choice or option. For example I have a choice on SS retirement age and payout. It does indeed require variation between alternatives though, or at least the option of creating a new variation.Report
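
        (Roger's tennis story is the standard exit-rights answer to the prisoner's dilemma. A toy sketch in Python with made-up, purely illustrative payoffs: once honest players can refuse to re-match with a known cheater, a single exploitative win stops paying.)

        ROUNDS = 20
        CHEAT_VS_FAIR = 5   # the cheater exploits an honest opponent once
        FAIR_VS_FAIR = 3    # both play fair
        NO_GAME = 0         # frozen out: nobody is willing to play

        def total_payoff(cheater):
            total, has_partners = 0, True
            for _ in range(ROUNDS):
                if not has_partners:
                    total += NO_GAME          # exit rights in action
                elif cheater:
                    total += CHEAT_VS_FAIR    # one big score...
                    has_partners = False      # ...then word gets around
                else:
                    total += FAIR_VS_FAIR
            return total

        print("cheater:", total_payoff(True))       # 5
        print("fair player:", total_payoff(False))  # 60

        (Without the exit option the cheater would collect the high payoff every round; the ability of others to walk away is what makes fair play the better long-run strategy in this toy.)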

      • Chris in reply to roger says:

        Absent coercion, those UNDERPRIVILEGED can rationally be expected to rebel, exit or choose other relationships.

        This destroys privilege and thus is actively prevented with force, coercion, ideology and the elimination of competing alternatives

        This is probably true, but as it’s never been the case, it’s hard to know.Report

      • roger in reply to roger says:

        “This is probably true, but as it’s never been the case, it’s hard to know.”

        You are substituting perfection for reality. Exit and choice are not absolutes. The choice gained in an English village when the guilds lost their monopoly privileges was an example of incremental improvement. The freedom serfs got by escaping to the free towns was an incremental improvement. The additional choices that those behind the Iron Curtain or the Chinese Wall got when markets were allowed in were an incremental improvement. The freedom slaves got from the 13th Amendment….

        Every human action and interaction can be considered an opportunity for choice. The advance of modernity has been directly correlated with increasing freedom, opportunity and choice within positive sum dimensions. The fact that there is so much more that can be done to promote exit rights and freedom to choose reveals how much further we can still progress.Report

      • Kim in reply to roger says:

        Lol. listen to roger cheering the black plague.
        (not that I don’t agree with what he’s saying. it’s still funny tho)Report

      • roger in reply to roger says:

        @stillwater

        You repeated your claim that my views are an incoherent mish-mash. I presented my defense.

        Could you do me a favor by responding, please? Otherwise I fear you will repeat it again, and I will defend it again, like some eternal episode of Groundhog Day. I may be wrong, but if so I would like to know why.Report

      • roger in reply to roger says:

        Perhaps you are using a different decoder ring than the rest of us, Kim.Report

      • Stillwater in reply to roger says:

        Roger, I've given you all sorts of arguments over the last coupla years which you've consistently rejected. I think we just look at this stuff completely differently. And I'm OK with that. Really, I am. (Why wouldn't I be?) My biggest objections aren't about the positive views you argue for (tho I have lots of objections to those, which I've articulated to you along the way) but instead are reserved for your negative critiques of liberals' views and arguments. Those strike me – consistently – as just flat out wrong and confused. But here's the kicker: your response to my objection that you're getting liberals' views wrong is to tell me that I'm the one who's confused (or ignorant, or whatever). I really don't know how discourse can proceed when your answer to another person's objection is to attribute ignorance to them. I mean, you did it right above this comment, when your response to Chris was that he's confused about what he's saying.

        How does the dialogue go at that point? He insists that he isn’t confused and you insist that he actually is? That seems like an impasse to me.Report

      • Chris in reply to roger says:

        Roger, what proportion of the serfs attempted to escape to free towns?

        I’m not saying the ones that didn’t attempt to do so wouldn’t have preferred the free towns, but I’m saying that you’re drawing conclusions based on small percentages of the total population.

        In addition, sometimes people prefer to be around in-group members, even if freedom is lower there.

        I’m not letting the perfect be the enemy of the good. I’m just being realistic. You’re being idealistic.Report

      • roger in reply to roger says:

        Dude,

        You just accused me of being confused, then in the next sentence admonished me for telling others they are confused … because it shuts off conversation?

        Can we please get on to discussions? Feel free to track the nature of the above thread. I made a point about constitutional choice to solve the utilitarian dilemma.* My position was legitimately challenged. I defended it, and things advanced in a fairly civil manner from there. Feel free to continue… Or not.

        *Vikram never weighed in even though I basically offered up a modified way to salvage utilitarianism. Perhaps it involved compromises he was unwilling to make.Report

      • Stillwater in reply to roger says:

        You just accused me of being confused, then in the next sentence admonished me for telling others they are confused …

        Roger, I didn't say you were confused about your own views. I said you're confused about liberals' views and that you attempt to refudiate those views by attributing a confusion to them.

        Does that distinction make sense?Report

      • roger in reply to roger says:

        Sure, just as long as the allegation cuts both ways. I am fine btw with you pointing out my potential confusion. This pushes me to either communicate better or think things through better. I come to this site to float ideas and see how they survive.Report

      • Stillwater in reply to roger says:

        just as long as the allegation cuts both ways.

        Uh. Of course it does. It makes me think you don't understand the point I'm making … but be that as it may. I totes agree.Report

      • roger in reply to roger says:

        I was giving examples of incremental steps. I could provide countless others: some gave freedom and choice to one person in one situation, others to billions of people with immeasurable degrees of freedom. I have no idea how that is idealistic.

        If I added that this was part of an inevitable sweep of nature and labeled it Whig History then the charge might be appropriate. I’ve already admitted though that the pendulum swings both ways. I am well aware that there are lots of forces pushing for less choice, freedom and exit rights. They may even win. Heaven forbid!Report

      • roger in reply to roger says:

        Sorry, last comment to Chris.Report

      • roger in reply to roger says:

        Stillwater,

        So is my response on why I am not a utilitarian halfway acceptable? Or another mish mash? You can be honest.Report

      • Stillwater in reply to roger says:

        So is my response on why I am not a utilitarian halfway acceptable?

        Sure. Because it seems to me that the other half is pure utilitarianism. Mix and choose as needed.Report

      • Stillwater in reply to roger says:

        I'm done with this discussion, tho. We disagree about this stuff, and we both think the other is being unfair at the meta-level analysis of why that's happening. I don't see anything fruitful arising out of pursuing this any further.Report

      • Chris in reply to roger says:

        Roger, freedom is one value among many. Perhaps all things being equal, people will choose the more free society, where free is fairly loosely defined, but there are weightier considerations, and all things are never equal.

        Mostly, people choose power or not to choose at all.Report

      • roger in reply to roger says:

        But to clarify, I am establishing how freedom or choice can be used to voluntarily select institutional arrangements which satisfy any or all of their values. I am laying out the rule utilitarian argument in defense of Vikram's position. I certainly agree many people will choose poorly. They will experience feedback, though, and are able to learn. And of course failure to choose is in the end a choice as well.Report

    • Mad Rocket Scientist in reply to North says:

      But does that mean one shouldn't try to measure the good/bad?

      Perhaps the value in Utilitarianism is not in the purest application of the philosophy, but rather in trying to meet some of its more laudable goals?

      Of course, the same can be said for a great many philosophies. It’s one of the reasons I get irked when people conflate a libertarian political philosophy with a desire for a libertarian utopia. No honest person thinks the utopia is going to happen, but rather we find value in the goals & strive to get a little closer to them.Report

      • North in reply to Mad Rocket Scientist says:

        I'm amused that you jumped right to libertarianism, because personally I find it unrealistic as an ideology but stellar as a razor to apply to other ideologies. I'd say utilitarianism, IMHO, is best viewed the same way: inadequate and unworkable as a standalone ideology but near indispensable as a tool to use with other ideologies.Report

      • BlaiseP in reply to Mad Rocket Scientist says:

        Engels says the Ideal never makes it off the page into the real world: Letter to Mehring.

        Ideology is a process accomplished by the so-called thinker consciously, indeed, but with a false consciousness. The real motives impelling him remain unknown to him, otherwise it would not be an ideological process at all. Hence he imagines false or apparent motives. Because it is a process of thought he derives both its form and its content from pure thought, either his own or that of his predecessors. He works with mere thought material which he accepts without examination as the product of thought, he does not investigate further for a more remote process independent of thought; indeed its origin seems obvious to him, because as all action is produced through the medium of thought it also appears to him to be ultimately based upon thought.

        Every Ideology is crushed to atoms on contact with the real world and its processes, which remain blissfully unencumbered by any dependence upon thought. Unless we count fear, greed, rage and all those other mental processes as thought — some might.Report

      • @north

        The razor metaphor is well put, and pretty much expresses my thinking on the matter. The most frustrating thing about libertarianism is that I find I often cannot answer its critiques of my policy preferences.Report

      • Mad Rocket Scientist in reply to Mad Rocket Scientist says:

        Ideology is strategy – it never survives contact with the enemy.

        Politics is tactics – it’s what you do to try & achieve some of your strategic goals even though the original plan is shot to hell.Report

  11. Caleb says:

    I’m not going to pile on here with the standard core critiques of utilitarianism (which the other commenters have covered very well). Instead, I have a question for @vikram bath, which I have yet to have satisfactorily answered by anyone who self-describes as a utilitarian:

    How does your version of utilitarianism assign value to agents (those having and experiencing the “utils”) in a temporal setting?

    I don’t think I can phrase what I mean directly without significant misunderstanding, so I will use a thought-experiment:

    Suppose there are two societies on a planet: A and B. There are no inter-personal relations between the two; A and B are completely isolated. No one in either A or B would feel any emotional or spiritual angst from the loss of the other. For the sake of simplicity, assume they are of equal population. Now, B occupies land which, if A possessed it, would unquestionably and measurably increase the utils experienced by every member of A. Suppose that A knows without a doubt that B would never acquiesce to their exploitation of B's land. Suppose further that A possesses a device which could instantly, painlessly, and otherwise harmlessly vaporize every member of B. Now suppose A uses the device. Was their decision moral?

    I ask because, it seems to me, the “consequential” moral effects of the use of the device accrue only after its use. As such, there is no person currently in existence who was harmed by the device. In fact, it all comes up aces. Sure, everyone in B was “harmed” in that they no longer exist, but they no longer exist to experience this harm.

    So, under your conception of utilitarianism, was the use of the device immoral? If so, why?Report

    • zic in reply to Caleb says:

      To be honest, this is the rudest thing.

      We live in a world where we've had about 10,000 years of recorded history showing a march toward some basic human standards; they include things like lying, stealing, murdering. They suggest taking care of the needy is of some value. So because someone says there's this organizing principle that's not dependent on specific moral stances (instead, it understands there's a need to weigh them in making a decision) doesn't mean a lack of morals or abandoning progress, compassion, empathy, etc. It means that those things are part of the weighing; not outside it.

      (And really @caleb, my reply is not only to you, but to all those you commend for taking Vikram to task.)

      This stinks of the same idiocy I frequently hear when I admit to being an atheist: How can you be moral?

      If you are that bound up in ideology that you think it is morality, I feel a great deal of pity for you. But more to the point, your abstracting @vikram-bath Vikram’s thesis down a level, and failing to abstract up. Making relative, evidence based judgments would presume competing moral/ideological/theological structures and some way to balance out those demands. It does not demand the best outcome every time, but more frequent better outcomes (however you want to define that, though I’d push against non-best outcome definitions that decay into anarchy, chaos, and bleakness).Report

      • Caleb in reply to zic says:

        Huh. Interesting response.

        I’m not sure I understand the point you’re trying to make. But I’ll point out where I agree and disagree, and hopefully I’ll develop understanding.

        We live in a world where we've had about 10,000 years of recorded history showing a march toward some basic human standards; they include things like lying, stealing, murdering.

        I think I know what you mean here. But, it can be taken multiple ways. For example, your 'march of progress' toward “human standards” can easily be taken as an assertion that humans progressively have come to recognize the legitimacy of one or more categorical imperatives. I get that by “lying, stealing, murdering” you mean 'behaviors which tend to reduce the utilitarian total.' (At least, I think that's what you are saying.) But by using those terms, you are appropriating the language of moral absolutism.

        They suggest taking care of the needy is of some value.

        Of value to whom, and at what point in time? If utilitarianism is to be our moral calculus, these are necessary terms.

        So because someone says there's this organizing principle that's not dependent on specific moral stances (instead, it understands there's a need to weigh them in making a decision) doesn't mean a lack of morals or abandoning progress, compassion, empathy, etc.

        No, of course not. No one who actually understands utilitarianism says so, even if they are critical of it. The point is that the organizing principle is independent of, and superior to, what you call “moral stances.” It acknowledges the point, and does not ignore it. The point is that the superiority of utilitarian calculus undermines the very definition of content-based “morality” as an absolute concept. Agents of utilitarianism are free to input their own definitions as to the good. But the output of that calculus is dependent on each agent's input, not on an absolute standard. So yes, the output of a utilitarian calculus may indeed be what we currently designate as “progress, compassion, empathy, etc.” Or, it may not be. Or, it may be called “progress, compassion, empathy, etc.” but have entirely different content. So: does the content matter, or does the measurement?

        It means that those things are part of the weighing; not outside it.

        Exactly. So the weighing becomes the standard, and the inputs the variable. Is it possible to have a utilitarian society that upholds the values we currently call “justice, fairness, equity, charity, etc.”? Absolutely. Is it also possible for a utilitarian society to uphold values exactly opposite to those I named and call them equally good? Logically, yes. If you disagree, you must identify the mechanism that determines the content-based outcome.

        This stinks of the same idiocy I frequently hear when I admit to being an atheist: How can you be moral?

        I do not wish to engage in that particular debate at this time. But consider this parallelism:

        According to religious persons, there is a path of decisions you may take in order to be “moral.” Depending on the religion, this path may be singular or multi-variate. That is, there are choices you may make which adhere to the moral dictates of a religion, while you do not acknowledge (or even know about) that particular religion’s dictates. That is, you may very well determine a moral choice-path which falls within the scope of the religions’ moral boundaries.

        In this, your objection is valid. One need not adhere to any particular creed while fulfilling its dictates. Even more so, one need not adhere to any creed when said creed is only one set of instructions for reaching a particular goal to which there are many approaches.

        It is here that the analogy breaks down. The assertion of utilitarianism is not that of an alternate route to achieve the good, but is an undermining of the very concept. One does not set out to achieve a specific moral end by advancing a neutral method of calculation as the ultimate good. Ends and means are separate, this you must at least acknowledge.

        If you are that bound up in ideology that you think it is morality, I feel a great deal of pity for you.

        Pity you may feel, but that does not excuse you from explaining how ideology and morality are distinct. Ideology implies morality, and vice versa. To separate them entirely would be an impressive feat.

        But more to the point, your [sic] abstracting@Vikram BathVikram’s thesis down a level, and failing to abstract up.

        I fail to understand this statement. Care to clarify?

        Making relative, evidence based judgments would presume competing moral/ideological/theological structures and some way to balance out those demands.

        Yes, but said competition does not equal identical validity. Is it or is it not the case that the vast majority (or everyone) on this planet can hold an idea, and yet it still be wrong? If yes, then why does this principle not translate to the moral realm?Report

      • zic in reply to zic says:

        Ideology implies morality, and vice versa. To separate them entirely would be an impressive feat.

        Is murder an ideology? Beating a child? Theft?

        Large numbers of people can and in fact have held wrong ideas without it being either moral or immoral; the earth is the center of the universe, flat, disease is caused by ill humors and curses from the gods. This has nothing to do with morality, it has to do with knowledge and understanding, and knowledge changes over time as we gain more and more of it.

        Morality is what we do with that knowledge, not if the knowledge is correct or incorrect or any degree of the two combined. Was it immoral to treat someone with herbs we now know to be toxic that, 1,000 years ago, appeared to relieve horrible suffering?

        One does not set out to achieve a specific moral end by advancing a neutral method of calculation as the ultimate good. Ends and means are separate, this you must at least acknowledge.

        Why? I realize I probably sound ignorant in the face of sound and settled philosophy. Because I am. But this makes absolutely no sense to me; it's as if you can dice human actions up into discrete pieces and measure them independent of other actions. You can't. But you can try to measure them in the context of other actions, like following one wave on the surface of the water as it travels through the other wave forms it meets along the way. Again, it's the tendency to abstract down (a single wave, alone) instead of up (a single wave in the context of complex winds above and currents below).

        The means to a desired outcome can have effects reaching far beyond, which is just about paralyzing dear Jaybird elsewhere on this thread. Far better, I think, to view means/outcomes as a process that's ongoing and to search for methods of evaluating processes in as many ways as possible.Report

      • Caleb in reply to zic says:

        @ zic

        Is murder an ideology? Beating a child? Theft?

        No. In our society, they are labels for certain actions. Insofar as those labels refer to particular actions, however, carrying out those actions implies a certain ideology. (That is, one where those actions are morally acceptable, at least at that time and for that actor.)

        Applying those labels to said action also implies an ideology. In fact, I would go so far as to say it requires an ideology if the application of those labels implies (or requires) a normative moral judgment. If one is truly objective, "murder, child abuse, and theft" are merely a set of deterministic actions that the sacks of animated carbon on the third rock from the sun carry out from time to time according to their mental state and environmental stimuli. There is no ultimate categorical distinction from water flowing downhill. Applying labels of moral judgment requires ideological input.

        Large numbers of people can and in fact have held wrong ideas without it being either moral or immoral: that the earth is the center of the universe, that it is flat, that disease is caused by ill humors and curses from the gods. This has nothing to do with morality…

        No? Then what is the source of the ‘10,000 year moral progress’ that you referenced in your first response?

        Morality is what we do with that knowledge, not if the knowledge is correct or incorrect or any degree of the two combined.

        Insofar as this statement excludes knowledge of morality (which I think it does, given a utilitarian premise) I agree. However, how does this statement not violate the fundamental concept of utilitarianism? After all, under utilitarianism, correct knowledge of the good (as defined by the optimal utility experienced by all relevant actors) is the only means to moral action. Lack of this knowledge cuts the actor adrift into a sea of absolute moral uncertainty. I may misunderstand utilitarianism, but how does the potential for moral action not increase with more knowledge?

        Was it immoral to treat someone with herbs we now know to be toxic that, 1,000 years ago, appeared to relieve horrible suffering?

        No, of course not. But I'm confused. You seem to be arguing against consequentialism, at the very least. In my ideology, if the intent was honest and good, then the action was moral. You seem to be arguing the same point. Under consequentialism, if the herb was harmful, then the outcome was bad, and therefore the act was immoral regardless of intent. In utilitarianism, there is a balancing of the utility calculus, but the outcome is likely the same: utility was decreased by application of the herb, therefore the act was wrong. Any other result requires scrutinizing the intent. Are you sure you're a utilitarian?

        Why? I realize I probably sound ignorant in the face of sound and settled philosophy. Because I am. But this makes absolutely no sense to me; it's as if you can dice human actions up into discrete pieces and measure them independent of other actions. You can't.

        If ends and means are not distinct, the idea of purposefulness goes out the window. There are no "ends" or "goals"; only acts. If there are only acts, then we need not look at what is done by humans under a moral lens. We treat them like animals, or incredibly complex machines: 'Are they acting the way we (or I, or whoever the decision-maker is) want? If not, how do we "fix" them?' There is no content to moral analysis, only procedure. Procedure that can fit any outcome. Human will merely becomes a means to an open end, not an end in itself.

        But you can try to measure them in the context of other actions, like following one wave on the surface of the water as it travels through the other wave forms it meets along the way.

        An apt analogy. A wave has no purpose, and no agency. It is merely the product of external forces, and may be modified or destroyed at any time. The idea that humans likewise lack agency destroys the logical basis for any applied morality. In this view, all the actor has as motive power are internalized standards that result from external factors, all of which are free for modification without any basis of critique.

        Again, it’s the tendency to abstract down (a single wave, alone) instead of up (a single wave in the context of complex winds above and currents below).

        I think I get what you mean. And it makes my point exactly. If we are all waves, then what we call "morality" is merely a product of the winds and currents. But the product itself is content-less: if the winds and the currents change, then what we call "morality" changes. If a sufficiently powerful actor gained control of the winds and the currents, they could control "morality." Creation, manipulation, and destruction of certain waves would be per se valid. No other standard of outcome or purposefulness exists. That's my point.

        Far better, I think, to view means/outcomes as process that’s ongoing and to search for methods of evaluating processes in as many ways as possible.

        Ongoing in relation to what? You keep wanting to anchor your view of utilitarian morality in some form of objective progress, without telling us what that standard is.Report

    • Vikram Bath in reply to Caleb says:

      I wouldn’t call this question “rude”. I do find it…unrealistic though. This is a boundary-seeking philosophy problem. I do acknowledge the usefulness of such questions in showing whether a philosophy is complete and consistent in a philosophical sense, but not being able to answer the question doesn’t mean that your philosophy can’t be profitably employed.

      Now, to answer the question as a consequentialist, A’s actions were immoral.

      everyone in B was “harmed” in that they no longer exist, but they no longer exist to experience this harm

      Save it for the judge. Murder is bad even if the murderees aren't around to complain. As far as moral conundrums go, this does not seem even slightly difficult.Report

      • zic in reply to Vikram Bath says:

        You are a better person than I, @vikram-bath, for I found it offensive in much the same way that the presumption that I have no morals because I don't believe in God is offensive. Such questions never seem to include the potential that I have more morals (I don't know how one measures that, but that's another matter) or the possibility that without God, I think right and wrong rest squarely on my shoulders, and I'm aware of the power of self-justification.

        The lacking is probed, not the actual subject. That’s rude in its presumption.

        But I’ve never taken a philosophy class, I’m just a hedge witch.Report

      • Caleb in reply to Vikram Bath says:

        I wouldn’t call this question “rude”.

        I thank you for understanding. Your writing is intelligent and provocative. A lesser piece of writing would not stimulate such attention. You have done a very good job of making your case, and defending it.

        I do find it…unrealistic though.

        A charge I accept pending the consequences of that assertion. The "real" and the "unreal" are merely labels we attach to the event of certain occurrences. That something has happened has marginal effect on said labels. The most plausible of occurrences may be asserted, but may yet not have happened. At the same time, I may assert an incredible account of a sequence of events, and yet be shown true by an unimpeachable account.

        The happenstance of any said occurrence has little or no bearing on the morality of the whole. That no one has tried to rob the bank in Bethel, Alaska (I'm not sure if this is true or false, but assume the implausibility) still informs me that doing so would be immoral. Implausibility and immorality are totally distinct categories.

        I do acknowledge the usefulness of such questions in showing whether a philosophy is complete and consistent in a philosophical sense, but not being able to answer the question doesn’t mean that your philosophy can’t be profitably employed.

        Agreed. Any set of ideas can be helpfully employed given proper boundary conditions. However, the boundary conditions you have given your philosophy are (as I understand) nearly unlimited. That is: you accept the limitations of thought placed by Tod Kelly’s argument on ideology, but wish to make an exception for the utilitarian calculus as a means of universal human valuation. Very well. But said exception must meet the boundary conditions for its own existence. That is: it must be able to address any and all questions which fall into its ambit as a matter of course. These questions address the potential, as well as the actual.

        Within this sphere, the concept of "profitable employment" refers to a particular set of outcomes as defined by certain actors. This is not a breach of confidence or trust necessarily. But the concept does involve particular ends given the means available.

        Murder is bad even if the murderees aren’t around to complain.

        This sounds an awful lot like a categorical imperative.

        Remember, under utilitarianism, the moral “evil” does not correlate with any particular action, but only with its effect. Any effect may be negative or positive. The negative (or positive) effect of any given action is its own impetus. This impetus contains the moral qualia of any given action.

        Your assertion of moral qualia attaching to a given action (depriving someone of life, for example) clashes with the valuation of that calculation. There can be no absolute moral valuation of human life given external value judgments.

        as far as moral conundrums go, this does not seem even slightly difficult.

        Okay, so where in temporal space do you draw the line? You do not address my fundamental question.Report

      • @zic

        I found it offensive in much the same way the presumption that I have no morals because I don’t believe in God is offensive.

        I’m not going to speak specifically to Caleb’s example (because I’m not sure I understand it), but my own qualms about Vikram’s utilitarianism are not based so much on the suggestion that it has no morals, or that utilitarians can never be moral, but more on the claim that utilitarians are moral and that they adopt moral principles, but that they don’t always acknowledge that fact.

        Something is sought by a utilitarian calculus, and what that something is, and why it is desirable, is a “moral” question (or at least a value question) in the sense that it addresses right and wrong (or preferences). Maybe it’s wrong for me to call the determination about what is desirable “ideology.” But I do think something non-utilitarian informs the determination.

        To use the example in the OP, a utilitarian might weigh the consequences of the minimum wage on employment and economic prosperity. Such weighing assumes that it’s generally better to have fuller employment than high unemployment. There is a starting preference for how things ought to look.Report

      • zic in reply to Vikram Bath says:

        @pierre-corneille, thank you for answering.

        There is obviously something I’m not grasping here. Take the example of employment; sorting out minimum wage, unemployment, and economic prosperity. I’d think the proper utilitarian response would be to consider what we know already; looking at the lessons of history. Do cultures thrive most when most workers have very low wages, when there are not enough jobs for most workers, or when there are higher wages? I would presume that asking the question and looking for answers on what conditions produce what results would be the preferred path to making a wise choice.

        Maybe high unemployment and low wages is good, maybe job competition fosters innovation from artists and entrepreneurs. Maybe wage competition is good because it increases disposable income. Maybe a flux between those over a span of years works best.

        Here, best is qualified as what produces the greatest prosperity and stability without infringing on basic civil rights for the greatest number of citizens. So if there's a moral calculus involved, it rests there. But I don't think it's controversial to say that there are agreed upon moral underpinnings in our culture.

        I realize I don’t have the proper terms to get at what I’m trying to convey here; but I suspect much of it comes from the underlying thought process — perhaps my best analogy would be particle/wave. It’s easier to think of policy and outcomes as particles of data; but those particles are waves, they rarely have defined start/stop points, they exist in a state of flux, and to comprehend the outcomes, we need to put them in the context of flux.

        So my objection here is that the values-to-agents question creates a foundation of fixed points when, in fact, values themselves change, and the context of measuring needs to incorporate that change. To me, the important point seems to be to comprehend and expect changes in the values of agents as the agents change, as their experience changes, and as they reevaluate situations.Report

      • @zic

        Thanks for your answer, and sorry for taking so long to read it. I think I agree with most of it, especially with this statement:

        Here, best is qualified as what produces the greatest prosperity and stability without infringing on basic civil rights for the greatest number of citizens. So if there's a moral calculus involved, it rests there. But I don't think it's controversial to say that there are agreed upon moral underpinnings in our culture.

        I think I also agree with your last paragraph, although I’m not entirely sure what exactly you’re getting at.

        I suppose my principal motivation for saying what I’ve said in this thread is not to take down utilitarian or pragmatic approaches, but to suggest that something outside those approaches informs how we define the problem that needs to be solved. I think if a “principled pragmatist” doesn’t recognize that, then he/she might assume certain things to be certain that under some guise might be debatable.

        All that said, there’s probably a lot I’m not understanding about utilitarianism. I’m still trying to digest the point Chris made in his response to KenB above (and which I’ve just got around to reading).Report

      • zic in reply to Vikram Bath says:

        @pierre-corneille you’re not alone in not comprehending, I often feel ignorant and stupid reading things here, but nothing has made me struggle and feel completely without hope of understanding as this post has.

        I understand and agree with this:

        I suppose my principal motivation for saying what I’ve said in this thread is not to take down utilitarian or pragmatic approaches, but to suggest that something outside those approaches informs how we define the problem that needs to be solved. I think if a “principled pragmatist” doesn’t recognize that, then he/she might assume certain things to be certain that under some guise might be debatable.


        But misunderstood assumptions are a problem everywhere, always. You can only adjust as the misassumptions (a collective noun for conflicting assumptions held by different parties in any debate) are revealed.Report

  12. Tod Kelly says:

    “Tod Kelly is setting out to murder ideology.”

    I prefer to think of it as sending ideology to live on a farm upstate, where it has lots of room to run outdoors and will be much happier.Report

  13. Jaybird says:

    Down here.

    If your argument is simply that people with power can do horrible things, you won’t get any disagreement from me. Apart from that, I no longer know what we’re talking about.

    That’s not my argument. My argument, with regards to the disconnect between intentions and outcomes, comes down to what’s interesting when we’ve got the various possible combinations.

    Good Intention, Good Outcome. Do you find this one particularly interesting? I don’t.
    Bad Intention, Bad Outcome. I don’t find this one particularly interesting either.
    Bad Intention, Good Outcome. I suppose that this one is interesting insofar as it’s pretty rare but I go through my databases and think about all of the moral/ethical arguments I’ve had and this one doesn’t show up that often. Maybe it should… but I can’t think of that many times that it has. If it were interesting, you’d think it’d show up more often.
    Good Intention, Bad Outcome. *THIS* is the example that people use when they talk about the importance of intentions. Pretty much every single g-darn time.

    So when we’re talking about intentions being important, it seems to me that we’re clearing space for Good Intention, Bad Outcome.

    Is this not the case?

    If it is the case, why is this example so important?Report

    • Stillwater in reply to Jaybird says:

      I don’t think it is the case. It’s one of four cases. Eg.,

      Good Intention, Good Outcome. Do you find this one particularly interesting? I don’t.

      If we’re talking about intention, I find this interesting. What’s a good intention? Why was this a good intention? How was it realized as a good outcome?

      Bad Intention, Bad Outcome. I don’t find this one particularly interesting either.

      What’s a ban intention? Etc.

      Bad Intention, Good Outcome. I suppose that this one is interesting insofar as it’s pretty rare but I go through my databases and think about all of the moral/ethical arguments I’ve had and this one doesn’t show up that often. Maybe it should… but I can’t think of that many times that it has. If it were interesting, you’d think it’d show up more often.

      Again, tho, if we’re concerned about good outcomes, then maybe there’s something to be learned from a bad intention that led to a good outcome. How’d that happen?

      Good Intention, Bad Outcome. *THIS* is the example that people use when they talk about the importance of intentions. Pretty much every single g-darn time.

      Well, except for all the times that good intentions actually lead to good outcomes. I mean, if you disregard *those* good intentions, then yes, we’re left with talking about good intentions gone bad.

      How does a good intention go bad? Well, in a benign case it might be that I try to help the old lady across the road but stumble on the curb and fall into her breaking her leg. That’s a good intention gone awry.

      What I think you have in mind is a different kind of good intention, tho. The Government Administered good intention. And how can that go awry? Well, not necessarily because of anything to do with the intention or even a realistic path to realizing the intention, but rather because government is comprised of various levers and pulleys which can corrupt the Good Intention – even if there's a clear path to realizing it! – into something really really bad.

      So the complaint strikes me as being more about complex systems than about intentions per se. If that's right, I don't disagree with it. I said as much upthread. I just don't have the same allergy to government that you apparently do.Report

      • Jaybird in reply to Stillwater says:

        What’s a good intention? Why was this a good intention? How was it realized as a good outcome?

        I imagine that, in utilitarianism, a good intention is an intention to maximize happiness. I imagine that, in deontology, a good intention is an intention to follow the rules. As for “why?”, I don’t know. How was it realized? I suppose I’d need an example and we can follow the chain.

        Same for bad intentions.

        (And, of course, the messy issue of how this isn’t a 1 vs. 0 issue but an analog where it’s fully possible to do something mostly good or kinda bad.)

        Again, tho, if we’re concerned about good outcomes, then maybe there’s something to be learned from a bad intention that led to a good outcome. How’d that happen?

        Again, I’d need examples. In my arguments about the importance of intentions, this particular breakdown doesn’t appear that often, if at all.

        How does a good intention go bad? Well, in a benign case it might be that I try to help the old lady across the road but stumble on the curb and fall into her breaking her leg. That’s a good intention gone awry.

        The examples I gave above involve questions of First Amendment Protection of condomless pornography or Prohibition… but if we want to discuss private action, I suppose we could discuss such things as homeschooling that teaches false things, faith healing, or anti-vax sentiment.

        What I think you have in mind is a different kind of good intention, tho. The Government Administered good intention. And how can that go awry? Well, not necessarily because of anything to do with the intention or even a realistic path to realizing the intention, but rather because government is comprised of various levers and pulleys which can corrupt the Good Intention – even if there's a clear path to realizing it! – into something really really bad.

        Well, it does seem to me that, in practice, the disconnect between intent and outcome shows up all the time when the intent is supposedly good and the outcome is, as you say, really really bad… and, yeah, it certainly seems to help when you have government levels of momentum behind a policy when it comes to keeping the policy despite evidence that might change the mind of an individual actor… but this seems to lead me to the conclusion that intent really isn’t that interesting at the end of the day when we’re talking about working with complex systems.

        I’ve got people telling me that, no, it is interesting… I’m just not seeing how. If anything, intent seems to be an excuse for not changing what one is doing. While that is, kinda, interesting, I’m not seeing why it’s something that, apparently, we need to make some sort of moral allowance for.

        Even allowing that, I’ve no idea how much weight to give it.Report

  14. Patrick says:

    Some rambling thoughts:

    I think the most interesting objection offered to utilitarianism is that it can lead you to decide that a horrible thing is necessary.

    I think most people find this objection troublesome because they think at least one of two false things, possibly both simultaneously: one, that horrible things are never necessary; two, that their own philosophical framework can’t lead to horrible things being necessary.

    When you boil down into the weeds long enough, you find out that the definitions of “horrible”, “decide”, and “necessary” are pretty fluid terms.

    The selection of a moral philosophy is often an attempt to have a formula that keeps your soul clean.

    I’m perhaps too much of a moral pessimist, but I’m pretty sure you can’t get through life with a clean soul. And, as yet, I don’t have any particular reason to believe that any particular moral philosophy does a generally better job of keeping most of the dirt away than any other. Lord knows I’ve tried ’em on for size at one time or another.

    The reason for this is, I think, mystery, which is something the theists have going for them; they're used to dealing with ineffable stuff. Too bad Bob isn't around; he'd like this conversation.

    One method of investigation is usually insufficient to describe a physical phenomenon. One method of moral philosophy is usually insufficient to describe a psychic one.

    I’ll stick to trying to figure stuff out by looking at it wearing different pairs of glasses.

    All that said, at the end of The Cabin in the Woods, I’m shooting the guy to prevent Cthulhu from awakening. Maybe that makes me a utilitarian. I don’t, however, feel that a utilitarian calculus renders my soul clean for making that decision. So maybe it doesn’t.Report

    • Burt Likko in reply to Patrick says:

      Clearly you feel at the end of Cabin in the Woods that the guy doesn’t have it coming and there’s a moral stain on you for killing him.

      So the immense utility of the action alone isn’t enough to justify it for you. Nor should it. It’s a dilemma, a hard question (made harder by the fact that you have to decide right now before the ancient gods awake to destroy all of humanity).

      Your intuition that you’re morally stained from the killing despite the immense consequences at stake is the other part of your conscience, the one that says intent matters.

      Maybe I’d shoot the guy, too. Maybe I’d tell myself it was a matter of survival, that he’s dead one way or the other anyway, but will the rest of us die along with him? That’s a pretty good logical argument. But I don’t think that as persuasive as that argument is, I’d ever stop feeling guilty about it afterwards.Report

  15. Burt Likko says:

    A call-out from above. @jaybird asks in several places how we can know someone else’s intent. He didn’t seem satisfied with my pointing out that in a philosophical discussion, intent is typically a given fact.

    Let’s please dispense with the well-nigh tautology that we can never truly know the mind of another person. Stipulated that we can’t know such a thing with absolute certainty. We don’t need to. We need to know these things to a substantial certainty. In civil cases, we need to know them to a probability (that is to say, more likely than not). In criminal cases, we need to know them beyond a reasonable doubt (and not beyond a “shadow of a doubt”).

    Further, we can take notice of a causal relationship between action and consequence. That leads us to inferring that an actor is aware of the probable consequences of his actions. When an actor is aware of the probable consequences of his actions, in turn, it is fair to assume that the actor intends those consequences to come about. In law, we infer intent this way all the time. Here’s a case that has language quoted from it to juries every day in California:

    As a general rule, California law recognizes that every person is presumed to intend the natural and probable consequences of his acts. Thus, a person who acts willfully may be said to intend those consequences which (a) represent the very purpose for which an act is done (regardless of the likelihood of occurrence), or (b) are known to be substantially certain to result (regardless of desire). (Gomez v. Acquistapace (1996) 50 Cal.App.4th 740, 746; internal citations and most punctuation omitted.)

    I do an action (A). A has a set of consequences (B) which, given the circumstances in which I do A, are the natural and probable results of A. My presumed intent, therefore, is that I want B. Let’s apply this rule in the context of a tort. Say, fraud. Not for nothing did I pick fraud as the example, because in a fraud situation, the person whose intent we are trying to discern has typically taken some pains to obscure his true intentions. In a fraud situation, we must infer what the defrauder knows and expects to happen and compare that to what the victim knows and expects to happen. It’s inherently context-driven.

    Upthread, we’ve already discussed well that intended results may not be actual results. Good intentions can sometimes yield bad results and bad intentions can sometimes yield good results. And I’m sure we can also dig down and find that definitions of “good” and “bad” are similarly reliant on consequence. This troubles me not even as I insist that consequence alone is not the sole yardstick upon which the moral standing of an action is measured. I’ve never argued, and never would argue, that consequence is unimportant.

    That’s because even a consequence-driven definition of intent still describes something different than consequences — because action A is rarely certain to produce consequence B. Sometimes, A produces ~B, or C, instead. That’s why good intent-bad outcome cases are interesting to this discussion. A desire to create outcome B is qualitatively different from outcome B.

    That’s why I suggest that intent is one lens through which we weigh the moral value of an action, and consequence is the other. Call it the “binocular method.” Gotta pass both tests. Good intent, bad outcome is condemned as a bad outcome. Bad intent, good outcome is condemned as a bad intent.Report

    • Jaybird in reply to Burt Likko says:

      Jaybird asks in several places how we can know someone else’s intent.

      Actually, if I may quibble, I ask how we’re supposed to measure/weigh it given my observation that it only seems to show up as relevant when it differs from outcome and, as far as I can tell, only when the intent is good and the outcome is bad. (In other cases, intent never seems to come up at all.)

      Again: I’ve no idea how much weight to give intent… even in theory where, you’d think, it’d be easiest to do so.

      Sadly, I still don’t.Report

    • BlaiseP in reply to Burt Likko says:

      That’s as good an exposition as I’ve ever seen of the issue.

      But as to Jaybird’s questions about knowing someone’s intent, I’d dissect out the begged question as follows:

      In Fatalismville, nobody has a choice about what they do. Its wretched citizens lurch about, guided by fate alone. It’s hilarious, watching them in the grocery store: picking up things at random, tossing them in their carts. Traffic court excuses all the tickets, obviously nobody intended to do anything they did. First Baptist Church of Fatalismville preaches very odd sermons, not about Sin and Error, Selfishness and Charity. Instead, it’s mostly about Accepting God’s Will.

      The real world is a suburb of Fatalismville. Bad things do happen to good people. Good people do terrible things with the best of intentions. Accidents happen.

      How could we know someone's intent? Mens rea. Malice aforethought.

      Here’s how anyone could reasonably sort out the angle between intent and outcome. A man’s driving home from work. A kid runs out in the street after his ball, the man can’t stop, runs over the kid. Stopping distance varies with speed. Photos are taken, the length of the skid mark is computed, we know how fast the guy was going — if he was over the speed limit, he’s negligent.

      Computing the angle of mens rea means locating the line between the probable outcome of a given act and the actual outcome of that act. Speeding measurably increases stopping distance. So don't speed down suburban streets. You have a choice. Speed or don't speed. I fear driving down these tree-lined little streets. Almost ran a kid over one time. I don't care what the speed limit lets me drive, I drive as if that kid is just out of sight, just waiting for me to come by.Report

  16. KatherineMW says:

    The problem with utilitarianism is that it assumes that there is a consensus definition of what constitutes “good”.

    Say we could prove that the current US security procedures – extensive government spying on the population with little restriction, and no ability of the population to know whether they are being spied upon – saved 100 lives per year by preventing terrorist attacks. Would utilitarianism say that proves these policies are good? Many people would likely agree with that. What if it was 50 lives per year? Ten? Five? One? Utilitarianism can't tell us where that dividing line between "good" and "bad" lies because it's a matter of contesting values.

    If we could prove that a ban on extra-large sodas does improve people's health, would that make that policy inherently good? How much would their health have to be improved for utilitarianism to conclude that the benefits outweighed the cost? Would a 0.001% decrease in obesity that could be attributed to that policy be sufficient? There's no "objective" way of weighing where the line between "it's good" and "it's bad" lies, because it depends on how much weight you put on people's right to do things that are harmful to themselves.

    What’s the trade-off between one person’s money and another person’s life? How much money is a life worth? If a policy had a trade-off where it decreased economic growth but saved lives, what’s the ratio of GDP lost to lives saved that would make that policy good? Utilitarianism doesn’t tell you that; the question is one of values.

    There’s an excellent argument for pursuing the goals of one’s ideology according to which policies are most effective in achieving those goals; most people in politics call that “pragmatism” these days. But a person’s ideology is what determines what those goals are. A utilitarian ethos can’t replace that.Report

    • zic in reply to KatherineMW says:

      The problem with utilitarianism is that it assumes that there is a consensus definition of what constitutes “good”.

      How so? Wouldn’t it, instead, keep assessing what’s the consensus of what’s good? Why does ‘good’ have to be a fixed definition? Why isn’t it possible to understand that this is something that changes as we have new information and new understanding?

      I really feel like I’m missing something here, and I’m trying very hard to understand it.Report

      • KatherineMW in reply to zic says:

        People disagree on what constitutes “good” – that’s the point of my post. Do we prioritize total economic growth or overall well-being? How do we balance safety and liberty? These aren’t matters of empirical fact; they can’t be resolved by increasing the amount of information we have. They’re matters of values, and our society contains people with a wide range of different values.Report

      • zic in reply to zic says:

        Of course.

        But they can also be measured in very many ways. I wouldn't expect people to agree; I would expect us to measure things, try different things.

        An evolving conversation about what's 'good' seems the point. Otherwise, we make scientists drink hemlock tea when they say the Earth isn't the center of everything.Report

      • BlaiseP in reply to zic says:

        Perhaps this might help. In Katherine’s example, we have two competing Goods. The first is our Fourth Amendment right to privacy. The other is our right to be protected, as Americans, from another 9/11. We really do have enemies, as Americans. We’re big and soft and vulnerable in many places.

        How can we reconcile these two goods? It’s not a common sense proposition. We have a way to reconcile such contrary goods: an independent judiciary. We also have a free press to uncover secrets and shield laws to protect them. Notice that lots of people are questioning the FISA court about its role in rubber-stamping this massive intrusion into our privacy. Notice also that nobody’s tried to arrest Glenn Greenwald for reporting on the Snowden leaks.

        Contrary goods. Oversight. Warrants. Due process. We can cope with these contrary goods. But to do so, we get into a concept called Duty, arising from Deontology. The concept of Rights, arising from Law. The judge has the duty to uphold the law. The intelligence agencies have limits. The defendant has rights. The prosecutor has limits. The journalist and attorneys have privileged communication.

        But those concepts aren’t some abstract good. They’re codified: this far and no farther. John Stuart Mill, famous Utilitarian said

        It is no objection against this doctrine [of Utilitarianism] to say, that when we feel our sentiment of justice outraged, we are not thinking of society at large, or of any collective interest, but only of the individual case.
        [big snip]
        To have a right, then, is, I conceive, to have something which society ought to defend me in the possession of.

        The utilitarian argument gets very brittle when it goes beyond one person, to the general case. I admire JS Mill, have taught him. Fine thinker. But he knows where it all ends, in a welter of Conflicting Goods. The Kantian Imperative arises, the opposite of Utilitarianism. The two can be squared up, somewhat, in Consequentialism, which does seek that ever-evolving definition of what’s good.Report

      • zic in reply to zic says:

        BlaiseP, thank you for that link.Report