Huemer II: On Ordinary Political Intuitions

Jason Kuznicki

Jason Kuznicki is a research fellow at the Cato Institute and a contributor to Cato Unbound. He's on Twitter as JasonKuznicki. His interests include political theory and history.


18 Responses

  1. Murali says:

    Even a libertarian order cannot function without some kind of political authority. If we want people to stably converge on the right kind of rules, we cannot expect that to happen if each person simply pursues all of his or her commitments in the best way he or she knows how. The fact of pluralism means that even people who are committed to respecting others’ autonomy will understand autonomy differently and have different beliefs about which rules best respect the rights that people properly have. Any functioning society is going to have to roughly agree on some dispute resolution mechanism*. This dispute resolution mechanism is political authority.

    *They don’t have to think that it is the best; they just have to coordinate on that mechanism, i.e. switching to another mechanism has higher personal costs than staying with the current one.

    • Jason Kuznicki in reply to Murali says:

      I think you’re understanding political authority in a different sense than is meant in political philosophy.

      There may well be a strongest concentration of power. And it may be the case that society benefits if there is an agreed-upon dispute resolution mechanism, even if that mechanism is just made of ordinary people, and even if it screws up fairly often. The alternative may indeed be worse.

      But… here “political authority” means more like “the agents of the state, on issuing a command, enjoy greater moral consideration, ceteris paribus, than other agents issuing the same command.”

      So on the margin, I should be more motivated by state commands than by others. Perhaps even to the point where I am morally compelled to obey all state commands, although just being marginally more morally compelled is sufficient.

      That’s a different thing.

      • zic in reply to Jason Kuznicki says:

        So on the margin, I should be more motivated by state commands than by others. Perhaps even to the point where I am morally compelled to obey all state commands, although just being marginally more morally compelled is sufficient.

        This sounds like confusing the state with religion: authority rooted in command, and belief despite evidence. When I look at the way people try to bend the state’s rules (paying their cleaning lady under the table, speeding 5 to 10 over, etc.), I don’t find much evidence that the state’s commands are imbued with any perception of morality, that people feel compelled to obey; rather I find evidence of people ready and willing to challenge. Each and every day, somewhere out there, someone protests, collects petition signatures, or organizes at the grassroots to change a law they don’t like or to create a new law they want.

        In a democracy where people are able to agitate for change, I don’t think the perception of the state’s power-to-compel roots in the state’s authority; it roots in our common agreement about the state’s authority (though I admit changing that authority can be challenging). We mostly all agree that murder is bad, even when done by the state; I remind you that the death penalty is still controversial. We imbue the state with power because enough of us agree on some things. And when enough of us disagree, things change. Women can vote and serve in combat roles now; soon, we won’t discriminate against same-sex couples. The death penalty is no longer an option for punishing a murderer in many states. There are no debtors’ prisons, no town farms. As our collective perceptions of right and wrong change, the power we vest in the state changes.

        Change is the constant.

      • Murali in reply to Jason Kuznicki says:

        But… here “political authority” means more like “the agents of the state, on issuing a command, enjoy greater moral consideration, ceteris paribus, than other agents issuing the same command.”

        But this part of political authority follows from the previous account. Part of what it means for something to be a political authority, and thus the agreed-upon dispute resolution mechanism, is that the pronouncements issuing from the relevant officials must be taken to resolve the dispute. If the agents of the political authority did not enjoy that deference, then there would be no practical sense in which they resolved the dispute. The prima facie moral requirement to obey political authority follows from that authority’s dispute resolution capability. Political authority is therefore also undermined when it conducts dispute resolution in such a way as to leave matters no better, or even somewhat worse, than if the dispute had not been resolved at all.

  2. NewDealer says:

    I don’t think being suspicious or doubtful of Anarchy makes me a Panglossian if I am reading you correctly.

    Yes, there may need to be lots of changes, perhaps even to the basic structures of government, but this does not make anarchy a viable form of government. I think anarchy is probably going to lead to things like warlords and local gangs taking over.

    There seems to be a tendency among many people to make a reasonable observation and then ride it to an unreasonable conclusion. I know a lot of people on my side of the political fence are suspicious of Big Pharma. This is fair enough. Big Pharma has put dangerous products on the market and hidden or downplayed the side effects to get approval. There have been bad consequences for many people because of this. However, where I differ from others on the left is that I have seen people make leaps of logic to cast doubt on the entirety of western medicine and science because of this knee-jerk anti-corporatism. This is the anti-vaccine crowd, or the people who post stuff on how honey and cinnamon can fight most diseases.

    Agreeing that a lot of reform, or better, is possible does not require a belief that anarchy is a good. I find the idea Pollyannaish.

    • Jason Kuznicki in reply to NewDealer says:

      What would make you Panglossian is to say “ordinary people’s intuitions about politics are valuable, so when we aggregate them, as in a democracy, we get something that no individual or smaller group can ever argue with.”

      I will have much more to say about the possibility of anarchy at some point or another. To make an exceptionally long story short, I am not convinced by it.

      • NewDealer in reply to Jason Kuznicki says:

        Ah never mind. I misread.

        My defense of representative democracy can be seen as Panglossian by those who really believe in Anarchy. And I’ve known some on the far left who think that an anarchist utopia is possible.

  3. Shazbot5 says:

    Does Huemer have no explanation for why others (erroneously, from his POV) believe in legitimate political authority? Is it just a big accident? Or does he have a causal story about what causes us to believe that there is legitimate political authority? (And by “us” I mean pre-theoretical, regular folks who haven’t read or thought about political philosophy much.)

    Also, would Huemer go so far as to say that obedience is never a moral virtue? It seems to me he wants to say that obedience to the state specifically is never morally justified, despite a lot of people’s strong belief that some level of obedience (and there will be disagreements about how much, and when) is justifiably a good thing.

  4. Bayesian Philosopher says:

    Your point about Bayesianism is a bit simplistic, in a manner that is to the detriment of the argument. It is not true that Huemer’s claim is as ‘unBayesian as one can get’. There is a very lively debate going on in Epistemology (on ‘Peer Disagreement’) about what the correct response is to learning that your peers disagree. Aumann’s notion that one cannot rationally ‘agree to disagree’ doesn’t necessarily mean that one should lower one’s credence in every proposition just because somebody else disagrees with you.

    Splitting the epistemic difference with your peer only makes sense if your ‘peer’ really is a peer, if you start out with common priors, and if you’ve received the same evidence. None of these requirements are obviously met in the case Huemer is talking about. I’d go one step further and argue that not only are these requirements never met in the political philosophy context but that, at least as far as normative claims go, they can never be met in principle either, because there is no evidence to conditionalize on in the first place.

    But this debate is a bit much for comments to a blog post. The point is that the question of how to respond to peer disagreement is independent of the question(s) of ‘Bayesianism’. There are many, many forms of Bayesianism, of course, but I don’t see how any (interesting) version of them will require you to modify your belief in political authority given others’ views. Bayesianism does not equal attributing equal weight to political ‘intuitions’. The two discussions are distinct. The view you’re ascribing to Bayesianism requires a prior that all of us are equally ‘likely’ (on what notion of probability? and why equally?) to have ‘correct intuitions’ (how can intuitions here be correct? how much role do ‘intuitions’ play in moral theorizing? Don’t let bad x-phi rhetoric about intuitions throw you off) about moral/political philosophy, and to assume that our peers have correctly responded to new ‘evidence’ (what evidence?) that bears on such claims (and assuming that we even know how to compare the contrasting credences between ourselves and our interlocutors).

    A Bayesian argument WOULD go through if we based our political philosophy on the notion of agreement. In other words, if you believed that the fact that people agree on proposition X is evidence for X’s truth, then of course the relative disagreement about X (or similarly natured propositions) should undercut your belief in X. But I think Bayesians can safely reject both. There is no reason to think that people’s assent to moral/political claims is evidence for those claims’ truth. There’s no reason to assume any reliable/causal link at all between the two.

    Lastly, I’m sure you know Huemer better than I do, but I don’t get the sense that Huemer is writing in ignorance of any of this or that he’s proudly proclaiming himself unBayesian. Huemer is explicitly addressing your claim and stating that he disagrees in this case with the Peer Disagreement view. He’s saying that there is something funky in giving equal weight to your beliefs and to those of others. It is incoherent to believe something without, at the same time, believing it more likely true than not (or, even more plausibly: it is incoherent to ‘believe’ a proposition while ascribing to it a credence of less than 0.5, i.e. believing it more likely false than true). One could take this view together with an Equal Weight View in Peer Disagreement and go one of two ways: 1) Reject one’s own beliefs; 2) Reject the Equal Weight View. I think that Huemer thinks that (1) is incoherent. I’m inclined to agree with him. Equal Weight works in special cases only.
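
    To make the distinction concrete, here is a minimal Python sketch; the function names, credences, and reliability figures are made-up for illustration, not anything Huemer or anyone in this thread commits to. It contrasts the Equal Weight move of splitting the difference with a straightforward Bayesian conditionalization on the peer’s report:

    def equal_weight(my_credence, peer_credence):
        # Equal Weight View: split the epistemic difference with the peer.
        return (my_credence + peer_credence) / 2

    def conditionalize(prior, p_report_if_true, p_report_if_false):
        # Bayesian update on the evidence "my peer reports low credence in P",
        # given a model of how likely that report is if P is true vs. false.
        return (p_report_if_true * prior) / (
            p_report_if_true * prior + p_report_if_false * (1 - prior)
        )

    my_credence, peer_credence = 0.9, 0.2

    print(equal_weight(my_credence, peer_credence))  # 0.55, whatever I think of the peer

    # Conditionalization depends entirely on how well the peer's verdict tracks P:
    print(conditionalize(my_credence, 0.45, 0.55))   # ~0.88 if it barely tracks the truth
    print(conditionalize(my_credence, 0.10, 0.90))   # 0.50 if it tracks the truth well

    The point of the sketch is only that the two rules come apart: the Bayesian update does nothing at all unless the peer’s report is treated as evidentially relevant to P, and that is a commitment separate from Bayesianism itself.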

    • Jason Kuznicki in reply to Bayesian Philosopher says:

      I was indeed simplifying… a bit. But one certainly isn’t permitted to begin with the prior as it were that one’s own conclusions are more likely to be true than those of others. That appears to be the implication of the passage in question, and that’s simply not allowed.

      You and I seem to be in agreement, though, in concluding that we ought not to update merely because others disagree. We would have to have reason to believe that everyone else was updating their conclusions properly before we began relying on them, and as I pointed out, it’s obvious that most people aren’t.

      • Stillwater in reply to Jason Kuznicki says:

        But one certainly isn’t permitted to begin with the prior as it were that one’s own conclusions are more likely to be true than those of others.

        Hmm. I don’t know. Maybe not according to a particular conception of theory, but isn’t that true in practice? E.g., if I believe my conclusions to be true, haven’t I already factored in the probability (intuitively!) that others who disagree with me are more likely to be wrong than I am?

        At the end of the day, I think the issue is a conflict between how to weight beliefs given emotional commitments to them and facts that justify them. And of course, what constitutes a fact. So the normal disputes about certainty and what constitutes a peer-who-disagrees map onto this analysis 1:1.

        • Bayesian Philosopher in reply to Stillwater says:

          I think Stillwater is on to something with the question ‘haven’t I already factored in the probability?’. The question is whether accounting for your own fallibility is best done on the first order level (as part of what determines your credence) or on the second order level, once your credence function is in place. If the former, then it’s double counting to adjust it once you notice that your peer disagrees with you. If the latter, there is a worry about an infinite regress once you start worrying about, for instance, how to factor in peers who disagree with you about how to factor in peer disagreement.

          But addressing Jason’s points that:
          “But one certainly isn’t permitted to begin with the prior as it were that one’s own conclusions are more likely to be true than those of others. That appears to be the implication of the passage in question, and that’s simply not allowed”

          1. I’m not sure that the prior is stating that one’s own conclusions are more likely to be true than others’. It’s a question of whether you factor others’ judgments into your first order or second order beliefs. Whether you can judge your prior to be more likely to be true or not will depend on what you take the nature of priors to be in the first place. Arguably, a prior just is whatever it is you believe at first, before you start experimentation. I don’t think that view commits you to the notion that your prior is more likely to be true. Another possibility is that a prior just is your initial suspicion. Here too, there isn’t necessarily a commitment that your prior is more likely to be true. But I think that having a prior on your prior (a second order prior) is not that simple to make sense of. My point, however, was that in order to argue that you should give equal weight to the priors or posteriors of others, you need to assume that there really is good reason to think that they are equally likely to arrive at the correct conclusion as you are. I’m not sure that this assumption is met in cases like this.
          2. I too would be reluctant to argue that I’m just probably MORE likely than you to arrive at truth with a prior. But the question still stands whether the fact that you (or I) believe P, counts as evidence at all that P is true. I’m not sure that it does, except in special cases. If it doesn’t, then I don’t think that treating your opinion/belief that P as a data point that I conditionalize on makes much sense. So even if I don’t think I’m more likely to be correct than you are, it’s not clear that I should treat disagreement as evidentially relevant.
          3. You say that it’s not permitted/allowed to begin with priors that… I’ve argued above that this is not the best characterization of the view you’re opposed to, but my main point originally was simply that this view of Peer Disagreement is not the same thing as (and is independent of) Bayesianism. I think your point here illustrates this best. On the Subjectivist Bayesian approach (by far the dominant approach among Bayesians) what you’ve written here is a non-starter, because, other than conforming to the Kolmogorov Axioms of Probability, there are no restrictions on the prior at all! So you can begin with any prior you like as long as you keep yourself probabilistically consistent (even Conditionalization as an updating rule is up for grabs!). The view that you seem to be sympathetic to combines Bayesian updating with a commitment to a particular prior about how reliable an epistemic agent you are, as opposed to your peers (notice that even taking someone to be a peer involves a prior about her). This is much more than just a Bayesian view (and it can be stated in non-Bayesian terms). That’s a respectable position, of course, but the controversy surrounding its plausibility doesn’t seem to hinge on accepting Bayesianism.
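
          As a small illustration of point 3 (a Python sketch with made-up numbers and names, not a claim about anyone’s actual credences): two agents can share a likelihood model, obey the probability axioms, and update by conditionalization, yet still end up apart simply because they started from different priors. Nothing in Subjectivist Bayesianism itself rules that out:

          def posterior(prior, likelihood_if_true, likelihood_if_false):
              # Bayes' rule for one binary hypothesis and one piece of evidence.
              return (likelihood_if_true * prior) / (
                  likelihood_if_true * prior + likelihood_if_false * (1 - prior)
              )

          # Shared model of how probable the evidence is if the hypothesis is true/false.
          likelihood_if_true, likelihood_if_false = 0.8, 0.3

          print(posterior(0.9, likelihood_if_true, likelihood_if_false))  # ~0.96 from a high prior
          print(posterior(0.1, likelihood_if_true, likelihood_if_false))  # ~0.23 from a low prior

          Both updates are perfectly coherent; the disagreement persists because the priors differ, which is why any commitment about how reliable you are relative to your peers has to be added to Bayesianism rather than read off of it.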

    • Stillwater in reply to Bayesian Philosopher says:

      Wow. What a comment! Awesome.

      But this debate is a bit much for comments to a blog post.

      Errr. No it’s not. More please.

    • Murali in reply to Bayesian Philosopher says:

      but I don’t see how any (interesting) version of them will require you to modify your belief in political authority given others’ views

      Sure it does. Here is how it goes:

      Suppose a proposition P: Thus to say P is to say that P is true
      Further suppose that the statement that Jack believes P is Jp.

      Suppose it is the case that p(Jp|P) > p(Jp|~P)
      It necessarily follows that p(P|Jp) > p(~P|Jp)

      So, if you fail to adjust your credence when you discover that other people believe P, that can only be warranted if you have no reason to think that their belief that P has any relation to whether or not it is the case that P.

      If vast numbers of people are just as likely to agree with you as disagree and you have no first order methodological advantage over them, then you should basically withhold judgment.

      • Bayesian Philosopher in reply to Murali says:

        Murali,
        I’m not sure I get your point, so if I’ve missed it, please forgive me. But I want to resist what you claim on three levels.
        1. I want to resist your first premise. Why should I take it as more likely that Jack will believe P, if P is true, than if P is false? I don’t see why this premise should be granted, especially in the moral/political case.
        2. I don’t think that the second inequality follows from the first. Unless I’m misunderstanding your point, it seems to me that the second inequality only follows if P(P)>P(~P).
        3. Regardless, my point wasn’t that one cannot articulate an Equal Weight View as a Bayesian, it was merely to state that Bayesianism doesn’t commit you to the view that you should treat others as just as likely to be correct as you are even if they are just as likely to agree or disagree with you. As you say yourself, you only need to adjust your credence of P when you discover that others believe/don’t believe P if you believe that their belief bears on P. But whether it does or doesn’t has little (perhaps nothing) to do with Bayesianism.

        • Murali in reply to Bayesian Philosopher says:

          Sorry, I was not being entirely clear either. Let me try to clarify. Ordinarily, when we think that someone’s beliefs have some truth-tracking property, their belief that P is evidence of P. Now, of course, this may or may not be the case when it comes to moral beliefs. In fact, when it comes to moral intuitions, I think there are severe problems with thinking that our moral intuitions track the truth. Because of that, we must reduce the credence in our own moral intuitions to 0.5. But if we have confidence in our own intuitions and none in others’, we must have some story to tell about why we treat our pre-theoretical intuitions differently from others’. I doubt that there is any plausible story to tell. Also, given the number of people who think that they are in an epistemically superior position but who are not, the probability of your being in an epistemically superior position, given that you think you are, is very low. There are things that might be said about your epistemic practices which would warrant moving that probability upwards, but I doubt intuitionists like Huemer have that warrant.

          On matters of the inequalities, I made a mistake. I can only blame sleepiness and sloppiness. It should have been that:

          Suppose it is the case that p(Jp|P) > p(Jp|~P)
          It necessarily follows that p(P|Jp) > p(P) > p(P|~Jp)*

          But that means that if we find out that Jp then we should move our conditional confidence in P from p(P) to p (P|Jp).

          So, as long as we think that other people’s beliefs track truth even slightly better than chance, our discovery of their beliefs should give us reason to shift our credences accordingly. How much is a different question.

          *Here is a derivation of that:

          suppose p (A|B) > p (A|~B), i.e. A and B are positively dependent
          then
          p (A|B) = p (A ∩ B)/p (B)

          But p (A ∩ B) > p (A) x p (B), since A and B are positively dependent.

          therefore

          p (A ∩ B)/p (B) > [ p (A) x p (B) ] / p (B)
          p (A|B) > p (A)

          Also, by the law of total probability, p (A) = p (A|B) x p (B) + p (A|~B) x p (~B), so p (A) is a weighted average of p (A|B) and p (A|~B). It follows that p (A|B) > p (A) if and only if p (A) > p (A|~B).

          Next,

          p (B|A) = p (B ∩ A)/p (A)
          = [ p (A|B) x p (B) ] / p (A)
          > [ p (A) x p (B) ] / p (A)
          = p (B)

          By the same reasoning with A and B exchanged, since p (B|A) > p (B), it follows that p (B) > p (B|~A).

          therefore

          p (B|A) > p (B) > p (B|~A)
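
          To put rough numbers on “how much is a different question”, here is a small Python sketch (the function name and reliability figures are illustrative assumptions, not anything established above) that checks the ordering p (P|Jp) > p (P) > p (P|~Jp) and shows how the size of the shift depends on how well Jack’s belief tracks the truth:

          def update_on_jack(prior_P, p_Jp_given_P, p_Jp_given_notP):
              # Returns (p(P|Jp), p(P), p(P|~Jp)) for the given prior and likelihoods.
              p_Jp = p_Jp_given_P * prior_P + p_Jp_given_notP * (1 - prior_P)
              p_P_given_Jp = p_Jp_given_P * prior_P / p_Jp
              p_P_given_notJp = (1 - p_Jp_given_P) * prior_P / (1 - p_Jp)
              return p_P_given_Jp, prior_P, p_P_given_notJp

          # Jack tracks the truth only slightly better than chance: a small shift.
          print(update_on_jack(0.5, 0.55, 0.45))  # (~0.55, 0.5, ~0.45)

          # Jack tracks the truth quite well: a much larger shift.
          print(update_on_jack(0.5, 0.9, 0.2))    # (~0.82, 0.5, ~0.11)

          In both cases the ordering holds; how far the credence moves is entirely a function of how much better than chance the other person’s belief is taken to track the truth.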

          • Jason Kuznicki in reply to Murali says:

            So, as long as we think that other people’s beliefs track truth even slightly better than chance, our discovery of their beliefs should give us reason to shift our credences accordingly. How much is a different question.

            Quite so. Which is why we need to establish that (a) I really am personally more likely to track truth than random chance would dictate, and (b) all others are less likely to track truth than random chance. I cannot derive this from the mere fact that I hold a set of beliefs.