Huemer II: On Ordinary Political Intuitions
A long reply to Michael Huemer, below the fold.
In chapter six of Michael Huemer’s The Problem of Political Authority, we find:
[Some] advocates of political authority suggest that anarchism should be rejected because it is simply too far out of the mainstream of political opinion. The belief in political obligations, writes George Klosko, “is a basic feature of our political consciousness.” He believes that we should accept common opinions as prima facie evidence in normative matters, particularly when philosophical opinion is divided. David Hume goes farther: “The general opinion of mankind has some authority in all cases; but in this of morals ’tis perfectly infallible.” If there is no political authority, it is natural to ask, then how have so many people come to have such a firm belief in it? Is it not more likely that I and the handful of other anarchists have made a mistake, than that almost everyone else in the world has?
Ultimately, I disagree with that argument. All things considered, I think it more likely that others are mistaken than that I am. (Obviously, I would not hold a belief that I myself did not consider more likely true than false.)
Now, I would not personally suggest that belief in political authority is a “basic” feature of consciousness; this to my mind does some sneaky normative work that it ought not to do. (Are people who lack this belief defective human beings, then?) But at any rate, the belief that a command bears some additional moral force by virtue of its having come from the state does seem to be pretty darn common. And even intuitive.
The problem for me lies in Huemer’s final two sentences, which are about as un-Bayesian as one can get. They even come close to affirming the truth value of a claim… simply because I am the one affirming the truth value of the claim!
This would be very convenient, if logic permitted.
But if we more reasonably assume that each of us is endowed with some positive, imperfect, yet non-negligible capacity as a truth detector, and if each of us is more or less alike in that regard, and if the overwhelming majority of people’s intuitions still settle on political authoritarianism, well then, we ought to consider, barring a very convincing demonstration of error coming from a provably superior truth-detector, that the common intuition is correct. As my colleague Julian Sanchez has noted, this bodes ill for certain brands of doctrinaire libertarianism:
I’ve become more of a Bayesian about politics: I cannot help but notice that lots of folks who are as smart or smarter than I have rather radically different views about what sort of polity is best, and I cannot quite bring myself to conclude that they’re simply watching shadows dance on the cave walls, while I have glimpsed the Forms. And so I don’t, these days, much find myself thinking about the specific contours of libertopia. Instead, I tend to find myself thinking in terms like: “Well, let’s push in this direction and see how it works.” You have to be careful there too, of course, since depending on the details, a government-market hybrid (say) will just give you the disadvantages of both. (See: Healthcare System, United States.) But I think this is the direction you end up pushed in if you take Hayek’s warnings about “constructivist rationalism” sufficiently seriously. On this model, libertarianism isn’t so much a final picture of a just society as a specific sort of toolkit…
This is a view I somewhat share, although I would caution those who adopt it against the error of thinking that popular intuitions about politics become significantly more likely to be correct when they are aggregated and compared to any eccentric view.
This sort of thinking leads quickly to Panglossianism of the original and very worst sort — political Panglossianism, the idea that we already live right now in the best of all possible political worlds. (“If it’s really so awful,” asked the nineteenth century, “why haven’t we already abolished slavery? It can’t be that bad!”)
The key divergence between politics as it is usually conducted and the model of political life that posits that we are a collection of Bayesian updaters is simply that most people are not Bayesian updaters.
On the contrary, most people are demonstrably, even proudly, un-Bayesian. In this they are just like Huemer, and just like I often am, too, if I’m being honest. There is not really any very strong reason arising from the epistemological account of probability to think that our politics is converging on, or has converged on, the truth.
This means that democratic commands aren’t worthy of additional moral weight for probabilistic reasons, and it also means that we need not give up on eccentric political ideas merely because the majority doesn’t share them. If most people aren’t Bayesian updaters, then true Bayesian updaters ought not to listen to them.
But it is also to say that we all have political intuitions, and they all (probably) stink.
The analogy to medicine is again instructive; all the medical intuition in the world, shared and passed along from one generation to the next, did not suffice to appreciably lower the mortality rate from ancient times until the advent of the germ theory of disease. Which, by the way, was vastly counterintuitive to the very smart people who were not raised to find it intuitive. My three-year-old knows more about germs than the very learned doctors of the eighteenth century. Should we really suppose that politics, a subject easily as complex as medicine, would be so much more intuitive?
And yet, if we’re still committed to intuitionism (after all that!), shouldn’t we at last use political intuitions for political reasoning, and non-political intuitions for non-political things?
Later in chapter six, Huemer addresses what he sees as the particular fallibility of political intuition: Humans are demonstrably and horrifyingly deferent to authority. He walks readers through the Milgram experiment and suggests that politics might be quite similar: An authority figure asks us to do something objectively horrible, and we for the most part comply.
A lot of politics really is like this. As a libertarian, I’m committed to saying that nearly all of it is, and I agree entirely with him when he writes:
The widespread acceptance of political authority has been cited as evidence of the existence of (legitimate) political authority. The psychological and historical evidence undermines this appeal. The Nazis, the American soldiers at My Lai, and Milgram’s subjects were clearly under no obligation of obedience–quite the contrary–and the orders they were given were clearly illegitimate. From outside these situations, we can see that. Yet, when actually confronted by the demands of the authority figures, the individuals in these situations felt the need to obey. This tendency is very widespread among human beings. Now suppose, hypothetically, that all governments were illegitimate, and that no one were obligated to obey their commands (except where the commands line up with pre-existing moral requirements). The psychological and historical evidence cannot show whether this radical ethical hypothesis is true. But what the evidence does suggest is that if that hypothesis were true, it is quite likely that we would still by and large feel bound to obey our governments.
Huemer claims to detect a bias in others, and I think he’s almost certainly correct about it. But we can’t infer from the fact that others are demonstrably biased in favor of a conclusion that the conclusion itself is wrong. That would be a bad reasons fallacy, wouldn’t it? There might be an irrational bias toward political authority in most or nearly all people — and I think there is one — but that doesn’t mean that we must reject (or accept) political authority.
Even a libertarian order cannot function without some kind of political authority. If we want people to stably converge on the right kind of rules, we cannot expect that to happen if everyone simply pursues all of their commitments in the best way they know how. The fact of pluralism means that even people who are committed to respecting others’ autonomy will understand autonomy differently and have different beliefs about which rules best respect the rights that people properly have. Any functioning society is going to have to roughly agree on some dispute resolution mechanism*. This dispute resolution mechanism is political authority.
*They don’t have to think that it is the best; they just have to coordinate on that mechanism, i.e., switching to another mechanism has higher personal costs than staying with the current one.
I think you’re understanding political authority in a different sense than is meant in political philosophy.
There may well be a strongest concentration of power. And it may be the case that society benefits if there is an agreed-upon dispute resolution mechanism, even if that mechanism is just made of ordinary people, and even if it screws up fairly often. The alternative may indeed be worse.
But… here “political authority” means more like “the agents of the state, on issuing a command, enjoy greater moral consideration, ceteris paribus, than other agents issuing the same command.”
So on the margin, I should be more motivated by state commands than by others. Perhaps even to the point where I am morally compelled to obey all state commands, although just being marginally more morally compelled is sufficient.
That’s a different thing.
So on the margin, I should be more motivated by state commands than by others. Perhaps even to the point where I am morally compelled to obey all state commands, although just being marginally more morally compelled is sufficient.
This sounds like confusing the state with religion: authority rooted in command and belief despite evidence. When I look at the way people try to bend the state’s rules (paying their cleaning lady under the table, speeding 5 to 10 over, etc.), I don’t find much evidence that the state’s commands are imbued with any perception of morality, or that people feel compelled to obey; rather, I find evidence of people ready and willing to challenge them. Each and every day, somewhere out there, someone protests, collects petition signatures, or organizes at the grassroots to change a law they don’t like or to create a new law they want.
In a democracy where people are able to agitate for change, I don’t think the perception of the state’s power-to-compel is rooted in the state’s authority; it is rooted in our common agreement on the state’s authority (though I admit changing that authority can be challenging). We mostly all agree that murder is bad, even when done by the state — I remind you that the death penalty is still controversial. We imbue the state with power because enough of us agree on some things. And when enough of us disagree, things change. Women can vote and serve in combat roles now; soon we won’t discriminate against same-sex couples. The death penalty is no longer an option for punishing a murderer in many states. There are no debtors’ prisons, no town farms. As our collective perceptions of right and wrong change, the power with which we imbue the state changes.
Change is the constant.
But… here “political authority” means more like “the agents of the state, on issuing a command, enjoy greater moral consideration, ceteris paribus, than other agents issuing the same command.”
But this part of political authority follows from the previous account. Part of what it means for something to be a political authority, and thus the agreed-upon dispute resolution mechanism, is that pronouncements issuing from the relevant officials must be taken to resolve the dispute. If agents of the political authority did not enjoy that deference, then there would be no practical sense in which they resolved the dispute. The prima facie moral requirement to obey political authority follows from that authority’s dispute resolution capability. Political authority is therefore also undermined when it conducts dispute resolution in such a way that the result is no better, or even somewhat worse, than leaving the dispute unresolved.
I don’t think being suspicious or doubtful of anarchy makes me a Panglossian, if I am reading you correctly.
Yes, there need to be lots of changes, perhaps including to the basic structures of government, but this does not make anarchy a viable form of government. I think anarchy is probably going to lead to things like warlords and local gangs taking over.
There seems to be a tendency of thought by many people to make a reasonable observation and have it lead to an unreasonable conclusion. I know a lot of people on my side of the political fence are suspicious of Big Pharma. This is fair enough. Big Pharma has put dangerous products on the market and hidden or downplayed the side effects to get approval. There have been bad consequences to many people because of this. However, where I differ from others on the left is that I have seen people make leaps of logic to cast doubt on the entirety of western medicine and science because of this knee-jerk anti-corporatism. This is the anti-vaccine crowd. Or people who post stuff on how honey and cinnamon can fight most diseases.
Agreeing that a lot of reform or better is possible does not require a belief that anarchy is a good. I find the idea Pollyannaish.
What would make you Panglossian is to say “ordinary people’s intuitions about politics are valuable, so when we aggregate them, like in a democracy, we get something that no individual or smaller group can ever argue with.”
I will have much more to say about the possibility of anarchy at some point or another. To make an exceptionally long story short, I am not convinced by it.
Ah never mind. I misread.
My defense of representative democracy can be seen as Panglossian by those who really believe in anarchy. And I’ve known some on the far left who think that anarchy-utopia is possible.
Does Huemer have no explanation for why others (erroneously, from his POV) believe that they should believe in legitimate political authority? Just a big accident? Or does he have a causal story about what causes us to believe that there is legitimate political authority? (And by “us” I mean pre-theoretical, regular folks who haven’t read or thought about political philosophy much.)
Also, would Huemer go so far as to say that obedience is never a moral virtue? It seems to me he wants to say that obedience to the state specifically is never morally justified, despite a lot of people’s strong belief that some level (and there will be disagreements about how much and when we should be obedient) is justifiably a good thing.
I mean “some level of obedience.”
Your point about Bayesianism is a bit simplistic, in a manner that is to the detriment of the argument. It is not true that Huemer’s claim is as ‘unBayesian as one can get’. There is a very lively debate going on in Epistemology (on ‘Peer Disagreement’) about what the correct response is to learning that your peers disagree. Aumann’s notion that one cannot rationally ‘agree to disagree’ doesn’t necessarily mean that one lowers one’s credence in every proposition just because somebody else disagrees with you.
Splitting the epistemic difference with your peer only makes sense if your ‘peer’ really is a peer, if you start out with common priors, and if you’ve received the same evidence. None of these requirements are obviously met in the case Huemer is talking about. I’d go one step further and argue that not only are these requirements never met in the political philosophy context, but that, at least as far as normative claims go, they can never be met in principle either, because there is no evidence to conditionalize on in the first place.
But this debate is a bit much for comments to a blog post. The point is that the question of how to respond to peer disagreement is independent of the question(s) of ‘Bayesianism’. There are many, many forms of Bayesianism, of course, but I don’t see how any (interesting) version of them will require you to modify your belief in political authority given others’ views. Bayesianism does not equal attributing equal weight to political ‘intuitions’. The two discussions are distinct. The view you’re ascribing to Bayesianism requires a prior that all of us are equally ‘likely’ (on what notion of probability? and why equally?) to have ‘correct intuitions’ (how can intuitions here be correct? how much role do ‘intuitions’ play in moral theorizing? Don’t let bad x-phi rhetoric about intuitions throw you off) about moral/political philosophy, and to assume that our peers have correctly responded to new ‘evidence’ (what evidence?) that bears on such claims (and assuming that we even know how to compare the contrasting credences between ourselves and our interlocutors).
A Bayesian argument WOULD go through if we based our political philosophy on the notion of agreement. In other words, if you believed that the fact that people agree on proposition X is evidence for X’s truth, then of course, the relative disagreement about X (or similarly natured propositions) should undercut your belief in X. But I think Bayesians can safely reject both. There is no reason to think that people’s assent to moral/political claims is evidence for those claims’ truth. There’s no reason to assume any reliable/causal link at all between the two.
Lastly, I’m sure you know Huemer better than I do, but I don’t get the sense that Huemer is writing in ignorance of any of this or that he’s proudly proclaiming himself unBayesian. Huemer is explicitly addressing your claim and stating that he disagrees in this case with the Peer Disagreement view. He’s saying that there is something funky in giving equal weight to your beliefs and to those of others. It is incoherent to believe something without, at the same time, believing it more likely true than not (or, even more plausibly: it is incoherent to ‘believe’ a proposition while ascribing to it a credence of less than 0.5, i.e., believing it more likely false than true). One could take this view together with an Equal Weight View in Peer Disagreement and go one of two ways: 1) Reject one’s own beliefs; 2) Reject the Equal Weight View. I think that Huemer thinks that (1) is incoherent. I’m inclined to agree with him. Equal Weight works in special cases only.
I was indeed simplifying… a bit. But one certainly isn’t permitted to begin with the prior as it were that one’s own conclusions are more likely to be true than those of others. That appears to be the implication of the passage in question, and that’s simply not allowed.
You and I seem to be in agreement, though, in concluding that we ought not to update merely because others disagree. We would have to have reason to believe that everyone else was updating their conclusions properly before we began relying on them, and as I pointed out, it’s obvious that most people aren’t.
But one certainly isn’t permitted to begin with the prior as it were that one’s own conclusions are more likely to be true than those of others.
Hmm. I don’t know. Maybe not according to a particular conception of theory, but isn’t that true in practice? E.g., if I believe my conclusions to be true, haven’t I already factored in the probability (intuitively!) that others who disagree with me are more likely to be wrong than I am?
At the end of the day, I think the issue is a conflict between how to weight beliefs given emotional commitments to them and facts that justify them. And of course, what constitutes a fact. So the normal disputes about certainty and about what constitutes a peer-who-disagrees map onto this analysis 1:1.
I think Stillwater is on to something with the statement ‘haven’t I already factored in the probability?’. The question is whether accounting for your own fallibility is best done on the first order level (as part of what determines your credence) or on the second order level, once your credence function is in place. If the former, then it’s double counting to adjust it once you notice that your peer disagrees with you. If the latter, there is a worry about an infinite regress when you start worrying about second order questions (for instance, how to factor in peers who disagree with you about how to factor in peer disagreement).
But addressing Jason’s points that:
“But one certainly isn’t permitted to begin with the prior as it were that one’s own conclusions are more likely to be true than those of others. That appears to be the implication of the passage in question, and that’s simply not allowed”
1. I’m not sure that the prior is stating that one’s own conclusions are more likely to be true than others’. It’s a question of whether you factor others’ judgments into your first order or second order beliefs. Whether you can judge your prior to be more likely to be true or not will depend on what you take the nature of priors to be in the first place. Arguably, a prior just is whatever it is you believe at first, before you start experimentation. I don’t think that view commits you to the notion that your prior is more likely to be true. Another possibility is that a prior just is your initial suspicion. Here too, there isn’t necessarily a commitment that your prior is more likely to be true. But I think that having a prior on your prior (a second order prior) isn’t that simple to make sense of. My point, however, was that in order to argue that you should give equal weight to the priors or posteriors of others, you need to assume that there really is good reason to assume that they are equally likely to arrive at the correct conclusion as you are. I’m not sure that this assumption is met in cases like this.
2. I too would be reluctant to argue that I’m just probably MORE likely than you to arrive at truth with a prior. But the question still stands whether the fact that you (or I) believe P, counts as evidence at all that P is true. I’m not sure that it does, except in special cases. If it doesn’t, then I don’t think that treating your opinion/belief that P as a data point that I conditionalize on makes much sense. So even if I don’t think I’m more likely to be correct than you are, it’s not clear that I should treat disagreement as evidentially relevant.
3. You say that it’s not permitted/allowed to begin with priors that… I’ve argued above that this is not the best characterization of the view you’re opposed to, but my main point originally was simply that this view of Peer Disagreement is not the same thing as (and is independent of) Bayesianism. I think your point here illustrates this best. On the Subjectivist Bayesian approach (by far the dominant approach among Bayesians) what you’ve written here is a non-starter, because, other than conforming to the Kolmogorov Axioms of Probability, there are no restrictions on the prior at all! So you can begin with any prior you like as long as you keep yourself probabilistically consistent (even Conditionalization as an updating rule is up for grabs!). The view that you seem to be sympathetic to combines Bayesian updating with a commitment to a particular prior about how reliable an epistemic agent you are, as opposed to your peers (notice that even taking someone to be a peer involves a prior about her). This is much more than just a Bayesian view (and it can be stated in non-Bayesian terms). That’s a respectable position, of course, but the controversy surrounding its plausibility doesn’t seem to hinge on accepting Bayesianism.
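To make the Subjectivist point concrete, here is a minimal sketch in Python (the proposition, priors, and likelihoods are invented purely for illustration): two agents start from very different priors, conditionalize on exactly the same evidence with exactly the same likelihoods, remain perfectly coherent, and still end up far apart. Nothing in the updating rule itself pushes either of them toward the other’s credence.

# Hypothetical illustration: two coherent Bayesian agents, same evidence and
# likelihoods, different priors. Both obey the probability axioms; neither is
# forced toward the other's credence by conditionalization alone.

def update(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' rule: posterior credence in H after observing one piece of evidence E.
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H: "state commands carry extra moral weight" (a toy proposition)
# E: evidence both agents agree is twice as likely if H is true.
p_e_given_h, p_e_given_not_h = 0.8, 0.4

agent_a = 0.9  # starts strongly convinced of H
agent_b = 0.1  # starts strongly doubtful of H

for _ in range(3):  # both see the same three pieces of evidence
    agent_a = update(agent_a, p_e_given_h, p_e_given_not_h)
    agent_b = update(agent_b, p_e_given_h, p_e_given_not_h)

print(round(agent_a, 3), round(agent_b, 3))  # roughly 0.986 vs 0.471: still far apart

Both agents conditionalize flawlessly; the disagreement that remains is entirely a disagreement about priors, which is exactly the thing Subjective Bayesianism leaves unconstrained.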
Wow. What a comment! Awesome.
But this debate is a bit much for comments to a blog post.
Errr. No it’s not. More please.
but I don’t see how any (interesting) version of them will require you to modify your belief in political authority given others’ views
Sure it does. Here is how it goes:
Suppose a proposition P: Thus to say P is to say that P is true
Further suppose that the statement that Jack believes P is Jp.
Suppose it is the case that p(Jp|P) > p(Jp|~P)
It necessarily follows that p(P|Jp) > p(~P|Jp)
So, if you fail to adjust your credence when you discover that other people believe P, that must only be the case if you have no reason to think that their belief that P has any relation to whether or not it is the case that P.
If vast numbers of people are just as likely to agree with you as disagree and you have no first order methodological advantage over them, then you should basically withhold judgment.
Murali,
I’m not sure I get your point, so if I’ve missed it, please forgive me. But I want to resist what you claim on three levels.
1. I want to resist your first premise. Why should I take it as more likely that Jack will believe P, if P is true, than if P is false? I don’t see why this premise should be granted, especially in the moral/political case.
2. I don’t think that the second inequality follows from the first. Unless I’m misunderstanding your point, it seems to me that the second inequality only follows if p(P) > p(~P).
3. Regardless, my point wasn’t that one cannot articulate an Equal Weight View as a Bayesian; it was merely to state that Bayesianism doesn’t commit you to the view that you should treat others as just as likely to be correct as you are even if they are just as likely to agree or disagree with you. As you say yourself, you only need to adjust your credence of P when you discover that others believe/don’t believe P if you believe that their belief bears on P. But whether it does or doesn’t has little (perhaps nothing) to do with Bayesianism.
Sorry, I was not being entirely clear either. Let me try to clarify. Ordinarily, when we think that someone’s beliefs have some truth tracking property, then their belief that P is evidence of P. Now, of course, this may or may not be the case when it comes to moral beliefs. In fact, when it comes to moral intuitions, I think there are severe problems with thinking that our moral intuitions track the truth. Because of that, we must reduce the credence in our own moral intuitions to 0.5. But if we have confidence in our own intuitions, but none in others’, we must have some story to tell about why we treat our pre-theoretical intuitions differently from others’. I doubt that there is any plausible story to tell. Also, given the numbers of people who think that they are in an epistemically superior position, but who are not, the probability of you being in an epistemically superior position given that you think you are is very low. There are things that might be said about your epistemic practices which would warrant moving that probability upwards, but I doubt intuitionists like Huemer have that warrant.
On matters of the inequalities, I made a mistake. I can only blame sleepiness and sloppiness. It should have been that:
Suppose it is the case that p(Jp|P) > p(Jp|~P)
It necessarily follows that p(P|Jp) > p(P) > p(P|~Jp)*
But that means that if we find out that Jp then we should move our conditional confidence in P from p(P) to p (P|Jp).
So, as long as we think that other people’s beliefs track truth even slightly better than chance, our discovery of their beliefs should give us reason to shift our credences accordingly. How much is a different question.
*Here is a derivation of that:
Suppose p(A|B) > p(A|~B)
By the law of total probability, p(A) = p(A|B) x p(B) + p(A|~B) x p(~B), so p(A) is a weighted average of p(A|B) and p(A|~B). It follows that
p(A|B) > p(A) > p(A|~B)
From p(A|B) > p(A) we get
p(A ∩ B) = p(A|B) x p(B) > p(A) x p(B)
therefore
p(B|A) = p(A ∩ B)/p(A) > [ p(A) x p(B) ] / p(A) = p(B)
Applying the same weighted-average point to p(B) = p(B|A) x p(A) + p(B|~A) x p(~A): since p(B|A) > p(B), it must be that p(B) > p(B|~A)
Hence p(B|A) > p(B) > p(B|~A)
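A quick numeric spot-check of the derivation above (my own sketch; the numbers are arbitrary, chosen only so that the premise p(A|B) > p(A|~B) holds):

# Arbitrary illustrative numbers: any distribution in which B raises the
# probability of A should satisfy p(B|A) > p(B) > p(B|~A).

p_B = 0.3
p_A_given_B = 0.7     # A is more likely when B holds...
p_A_given_notB = 0.2  # ...than when B does not

p_A = p_A_given_B * p_B + p_A_given_notB * (1 - p_B)   # law of total probability
p_B_given_A = p_A_given_B * p_B / p_A                  # Bayes' rule
p_B_given_notA = (1 - p_A_given_B) * p_B / (1 - p_A)   # Bayes' rule on ~A

print(p_B_given_A, p_B, p_B_given_notA)                # 0.6, 0.3, roughly 0.138
assert p_B_given_A > p_B > p_B_given_notA              # the derived inequality

Reading A as "Jack believes P" and B as P itself recovers the claim in the comment: p(P|Jp) > p(P) > p(P|~Jp).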
So, as long as we think that other people’s beliefs track truth even slightly better than chance, our discovery of their beliefs should give us reason to shift our credences accordingly. How much is a different question.
Quite so. Which is why we need to establish that (a) I really am personally more likely to track truth than random chance would dictate, and (b) all others are less likely to track truth than random chance. I cannot derive this from the mere fact that I hold a set of beliefs.
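One way to see why both (a) and (b) matter is a toy simulation of my own, with made-up accuracies, in the spirit of Condorcet’s jury theorem: if each person’s intuition tracks the truth only at chance, a large majority tells you nothing, while even a slight individual edge over chance makes the majority verdict very reliable. Which of those two worlds we are in cannot be read off from the bare fact that most people believe something.

# Toy simulation with made-up accuracies: how often is the majority of n
# independent "truth detectors" right about a yes/no question?
import random

def majority_correct_rate(n_people, accuracy, trials=10_000):
    hits = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < accuracy for _ in range(n_people))
        hits += correct_votes > n_people / 2
    return hits / trials

random.seed(0)
print(majority_correct_rate(1001, 0.50))  # pure chance: majority right about 50% of the time
print(majority_correct_rate(1001, 0.52))  # slight individual edge: majority right about 90% of the time

The catch is the independence assumption: if most people’s intuitions are correlated by the shared deference bias Huemer describes, aggregating them buys far less than the simulation suggests.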