Libertarians Are Not Like Beryllium
When I was in my final year of grad school, it was not uncommon for someone to ask me, “What has all that education taught you about yourself?” And you know, that was a pretty good question, since at that point I’d spent years studying the human mind, and I am (or at least some would have me believe that I am) human. Hopefully, then, I’d learned something about myself other than exactly how many hours I can stay awake while working on a dissertation and still write a coherent sentence about something more complicated than my love of Honey Nut Cheerios, right? I always answered the question in the same way, with a story (as we southerners are wont to do).
The first project I worked on when I got to grad school was a series of experiments that showed, in essence, that what I can consciously retrieve about a concept, and what I have stored somewhere in my brain about that concept, are not coextensive. Instead, our representations of concepts tend to get packed away when we’re not using them, and when it comes time to unpack one, the brain, ever the economizer, only unpacks the parts of the concept that it feels it needs at the moment. The rest remains unavailable to me (and not just consciously; even unconscious processes don’t access it), even though it might be important at the time and I just don’t know it. As I thought about this, and as we tested hypotheses about how this might work, something dawned on me: my conscious mind had very little, if anything, to do with all of this. All of the work in determining what I thought of whatever concept I was unpacking — from the retrieval of the concept, to the determination of what might be relevant, to the retrieval of that select information — was going on below the level of my awareness. And with this (unpleasant, quite frankly) realization always with me, I learned more and more about the mind, and just how little of what it does is consciously available. We are, as one psychologist put it, “strangers to ourselves.” So that’s my answer: what I learned about myself is that I have little if any control over the vast majority of what’s going on in my head. I don’t really know me at all.
Now I told you all of that to tell you this: many of the more heated and less productive discussions around these parts of late have begun with, or at least derailed as the result of, someone making blanket statements about what the members of some ideological out-group believe. You know the sorts of conversations of which I speak: the ones that begin with statements like, “Libertarians all think [insert something really heartless here],” “conservatives all believe [insert something only an insane person would believe here],” or “progressives all fail to understand [insert something market-related here].” One obvious reason that such statements breed generally unproductive discussions is that those people for whom your out-group happens to be their in-group are going to tend to react to such broad and unflattering statements with defensiveness, denial, and a feeling of fish you, you mother fisher. The last of these is often reflected in the tone of these discussions. But there’s another reason that they’re unproductive as well: if you make such a statement, you’re probably wrong about the members of that group with whom you happen to be talking at the moment (you’ve overgeneralized, you’ve selectively chosen what you want to consider about the group, the reasoning that led you to this belief was anything but unbiased, you’re taking the worst of a group and treating everyone as if they were as bad, etc.), but because of the way your mind works without you really having any say in the matter, you’re going to have a difficult time seeing that. This has to do with the way you represent social categories and the people you place in them.
The Importance of Names and Psychological Essentialism
Long ago, when I was a big time moderately well-known slightly better than obscure science blogger, I wrote a post with (part of) that title (reproduced here). In that post, I talked about some studies, the details of which I won’t get into here (but you should feel free to read them there… I’ll wait… damn you’re a slow reader… OK, finally!), which show something that most of us already knew intuitively: the names we give to individual things are important. They’re important because they signal that the individual thing so named belongs to a specific category, and categories serve one main purpose: to license inferences about individuals based on our knowledge of the categories to which they belong. So, for example, if I know that this long, slender, slithering thing in front of me is a cobra, I can infer that it’s venomous (and then run away!).
Most of the time, this labeling effect is a good thing, because it means much less work for our brains, and our brains are busy and use a lot of resources, so anytime we can save them some work, we’re doing them a big favor (which is nice of us, isn’t it?). However, our reliance on categories to make inferences, and our association of labels with categories, gives labels a lot of power, and anything that has power is sometimes going to abuse it. And one of the reasons this power we give to labels is ripe for abuse is something cognitive psychologists like to call psychological essentialism. Psychological essentialism says, in essence, that our mental theory about the way the world works includes the belief that most categories (human-made artifacts are, to some extent, an exception, though an odd one that I’d be happy to talk about if you’re interested) have some underlying essence that makes them what they are, an essence that any individual placed in that category must have in order to be a true member. And this essentialism gets attached to the labels, so that merely applying the label automatically results in the inference that the individual so labeled has the essence that defines the category to which the label refers.
Consider an example from the literature (which you already know about, because you read that old post I linked to, right? I said right?): if you tell kids or adults (according to a study published well after I wrote that post)* that someone eats a lot of carrots, and then call that person a carrot-eater, they’re much more likely to believe that eating carrots is a stable, essential trait of the person than if you just say that they eat a lot of carrots without giving them the label. So, if you ask them whether the carrot-eater is likely to be eating carrots at some date far in the future, they’re going to say yes, whereas they’ll be uncertain about the person’s future carrot-eating if you don’t call them a carrot-eater.
I’m sure you can see how this can be problematic. Carrot-eating is a harmless sort of inference, but what if I call you a criminal, rather than merely saying that you committed a crime? The same sort of essentialist thinking kicks in, and I’m much more likely to think that the traits I associate with criminals apply to you, not only now, but stably over time, because I’m going to think that they’re part of your essence, the underlying nature of you. If my representation of “poor people” includes features like “lazy,” “irrational,” and “morally inferior” as part of the essence of the category, then if someone calls you a poor person, suddenly those features get attached to you in my mind.
Social Categories, Stereotypes, and that Psychological Essentialism Thing Again
You’re all smart folks, so you can probably see where I’m going with this. The philosophically-inclined among you might be thinking that essentialist beliefs about natural kinds, like tigers or cobras or hydrogen molecules, are perfectly fine, because regardless of our metaphysics, there’s a sense in which natural kinds have essences, or at least behave as though they do. But all sorts of problems arise when we treat social categories like natural kinds, because social categories simply aren’t like natural kinds, and they don’t “behave” like natural kinds. A great deal of research over the last 15 years or so has looked at the role of psychological essentialism in social categories, and unfortunately, that research has consistently found evidence that we treat religion, race (also), ethnicity, gender, mental illness, and other social categories like natural kinds, complete with essences.
As with carrot-eaters, essentialist thinking about religions, or races, or, in our little corner of the interwebs, political ideologies, means that the traits we associate with a particular religion or political faction are going to be automatically assigned to anyone we see as a member of those groups, and they’re going to be seen as immutable. Once you’re a carrot-eater, you’re a carrot-eater for life (or at least until something major happens that can alter your essence). But it gets worse: research has shown that essentialist thinking causes us to look for similarities between group members while ignoring differences, resulting in what amounts to stereotypical thinking about group members. It goes even further: when we encounter information that conflicts with our essentialist beliefs about a group, our minds freak out a bit, and we start thinking about the individual or group even more selectively (it triggers “motivated” reasoning, a concept with which some of you may be familiar). Think of this as a sort of essentialist confirmation bias with serious consequences for how we treat people who are members of other groups. Oh, that reminds me, we don’t tend to think of our own groups with the same degree of essentialism: that’s reserved for out-groups. And one more thing: the essentialist thinking is so strong, we can make inferences about the group from one individual, and since we tend to remember the bad rather than the good behavior of members of out-groups, one individual member of an out-group behaving badly can cause us to believe that such bad behavior is a part of what it means to be a member of that group!
OK, I know all of that is a lot to swallow, so I’ll try to wrap it all up now and relate it back to us, because let’s face it, we’re what’s important, amIright? We all have in our minds representations of the political factions to which we do not belong, and because we’re human (most of us… I’m not so sure about that Jaybird character), we’re going to tend to think of those factions as natural kinds with essences that all of the people who belong to those factions share. But those are out-groups, so we’re not going to arrive at our representations of those essences without some motivated thinking, some essentialist confirmation bias. If someone you don’t like is a member of that group, or some member of that group holds a belief you find abhorrent, you’re going to associate those negative things with any new person you put into the group. And if these new people we’ve placed into our category in our heads behave in a way that is inconsistent with our representation of the category’s essence, we’re going to tend to ignore that, or at least discount it in a way that makes the information already contained in the essence of the category count more, even when we’re thinking about the person who’s just shown us that their beliefs or their behavior doesn’t square with the essence.
Now, these processes are going to be in play no matter what. I started this post with the story about me learning that we’re strangers to ourselves for a reason: we don’t have control over this stuff. It’s not our fault, it’s just the way our brains are built, and it’s all going on below the level of our conscious awareness, where we have no say in the matter. We all do it, even the best of us, but now that we know we do it, we can at least take some steps to mitigate the effects. We may not be to blame, but now that we know, it’s our fault if we don’t even try, right? And the first step is remembering what we know about labels: they’re like essence magnets. As soon as a label comes into play, the essences stick to it, and guide our thinking. And while the label of my political faction primes your essentialist thinking about that faction, because it’s an out-group for you, it does no such thing for me, so I get offended when you start telling me what I must think because it’s what people of my sort think, essentially. So when I tell you that I don’t think that way, you’re less likely to hear me, or you’re going to interpret my explanation of what I do think in the light of what the essence of my type tells you I must think, which, it will turn out, is precisely what I just said I don’t think. The next thing you know, we’re going to be at each other’s throats, and no good will come of it.
My advice, then, is quite simple, and it applies on this blog and pretty much anywhere else you encounter other humans: when dealing with individuals, if possible, avoid labels. Sometimes, even here, labels are indispensable, and we’re going to have to use them because the benefits outweigh the risks, but most of the time, when you’re talking to somebody, and you’re trying to discuss ideas, placing those people into a group by applying a label to them, whether it’s “libertarian” or “liberal” or “carrot-eating-conservative-segway-ridin’-numismatic,” is just going to get in the way.
*The new study, which you can read here, suggests that very young children aren’t overly influenced by labels, but as we grow up, labels become “category markers” that point to our representations of the categories’ essences.