Constructing the original position 3: The veil of ignorance
Previously, we established that some kind of initial contract or choice situation accurately modelled the moral reasons in favour of particular systems of social rules over competing systems. There is an amazing variety of possible choice situations. There could be one consisting of different people with different antecedent preference orderings negotiating with each other and coming to an agreement. There could be one in which all parties were antecedently perfectly morally motivated and completely informed; all parties would then agree, and the result would be equivalent to one person choosing. And there could be one in which a self-interested person chose from behind a veil of ignorance. Of course, each of these is still a broad description, and there could be considerable variation in choice situations within each broad type. This week, we will narrow the choice situation down to the last kind: that of a self-interested person behind the veil of ignorance. This post is already very long, and space constraints prevent me from pursuing a full discussion of all the features of the veil of ignorance. The discussion will be spread over a number of weeks.
Rawls’s veil of ignorance consists of multiple parts, and it may not necessarily be the case that one single set of reasons can account for all the parts. The central part of the veil of ignorance is the fact that people’s personal identities are kept hidden from the parties. The veil of ignorance obscures, among other things: people’s talents, preferences, proclivities, dispositions, race, gender, wealth, religious beliefs and other personal attributes.
In order to see why this is the case, recall that the fundamental social rules ought to be to the benefit of the members of society. Recall also that we can use an initial contract situation to transform a prima facie reason in favour of a system of social rules into a prima facie reason for the party to prefer that system, and that we do this because we have the tools to solve rational choice problems but lack the tools to directly balance different people's competing reasons.
The basic idea is about the universal applicability of moral reasons. Even moral relativists will have to concede that a limited version of this idea obtains at least within the borders of a society. Since society is a system of coordination, all parties in any particular social interaction must identify the same norm in order to be able to coordinate. This can best be illustrated by an example.
There are times when Tom, a grey cat, wants to hit Jerry, a brown mouse, over the head with a mallet. Jerry does not like being hit over the head with a mallet. Similarly, there are times when Jerry wants to hit Tom over the head with a mallet, and Tom does not like it either.
If Tom has the right to hit Jerry over the head, there are two possibilities. In the first, the same rule applies to Jerry as well: Jerry has the right to hit Tom over the head. In the second, Jerry lacks this right. In order for Jerry to comply with this rule, he must know in advance that it does not apply to him. However, a society does not consist of just two people. It consists of many people, and it would be impractical to deliver to each and every person within society a tailored set of rules that suits them and at the same time does not clash with the set of rules for any other person. Coordination rules are needed precisely because there are so many people advancing a variety of claims against one another. Social rules, therefore, cannot individually designate to Tom and Jerry what is and is not permissible for them. They have to provide certain general criteria which some of the members of society, Jerry among them, fail to fulfil, and which therefore disqualify them from the right to bash others over the head. Such general criteria can themselves be expressed as rules. For example:
Catpower: All cats have the right to bash any other non-cat on the head.
The nature of the rule is such that if the situation were reversed, with Tom now a mouse and Jerry a cat, Jerry would have the right to hit Tom on the head. In order for coordination to take place, not only must the social rule exist, it must be mutually recognised as a social rule that provides reasons for action. If, for example, Jerry fails to recognise this, he will fail to see any reason to comply with it.
As mentioned earlier, one of the main reasons for using the social contract device is that decision theory allows us to determine the right balance of reasons. However, this can only be done if the initial contract is in fact equivalent to a choice made by a single party. When presented with different agents with different preferences, we are still unable to model what would in theory be a rational aggregation of those preferences. In fact, Arrow's theorem dictates that there is no rational way to aggregate those preferences without picking out one party whose preferences are decisive. And even if such a dictator were picked, he would have neither the complete information nor the proper motivation to fully respond to all the relevant reasons.
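The aggregation problem can be made concrete with the simplest instance of the difficulty that Arrow's theorem generalises: the Condorcet paradox. The sketch below uses hypothetical preference orderings of my own invention; pairwise majority voting over them produces a cycle, so no option emerges as the collectively preferred one.

```python
# Three parties' hypothetical preference orderings over social rules
# A, B and C (most preferred first).
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y, rankings):
    """True if a strict majority ranks x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

# Pairwise majority voting yields a cycle: A beats B, B beats C,
# and yet C beats A -- no option is collectively "most preferred".
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y, rankings)}")
```

All three comparisons come out True, which is precisely the irrationality (an intransitive collective preference) that makes direct aggregation of disagreeing agents' preferences unworkable.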
Since we are unable to theoretically deal with different disagreeing agents, we must translate moral reasons into a choice situation whose parties independently agree; that is, the parties must have identical belief and motivational sets. In order for our contract theory to actually deliver the principles of justice, the parties must be constructed so that they take into account all the relevant moral reasons. So, if Catpower is just, then our contract device must configure any and all contracting parties to see Catpower as just. Whether a particular party is Tom or Jerry is an irrelevant consideration when determining whether they will choose Catpower as a social rule.
There are two broad approaches to constructing such a situation. The first makes the parties fully aware of all information and makes each party concerned for all the members of society to whatever extent is morally appropriate. This, however, requires us to know in advance what proper concern for others involves; and if we already knew that, we would not need to attempt a contract theory in the first place.
The alternative presented here is to start from what we do know. We know how to model a self-interested choice. Given a set of goods and a set of goals those goods are to be used for, we can in theory model rationality by working out which strategy of utilising the goods best achieves the goals. Even if the amount of goods available to us varied with the strategy we adopted, in ways that did not entirely align with the most direct route to the stated goals, we would still, in most cases, be able to determine which of two given strategies was superior. And while our answer will tend to become more indeterminate as we withhold information from the choosing party, so long as there is at least some determinacy to the goal, the set of goods and the constraints, there is in principle some discoverable strategy or set of strategies superior to the others.
Analytically, in the above choice situation, when the choosing party chooses one strategy over another, this implies that there are more self-interested reasons in favour of the chosen strategy than the rejected one, so long as self-interest is spelled out in terms of a determinate set of goals and considerations. The second approach, then, is to set the situation up so that a self-interested agent will fully consider all the moral reasons in order to satisfy his own ends. In this particular case, the moral reasons in favour of a social rule are that it benefits all the members of society. We want our agent to choose a social rule that is to everyone's benefit in order to achieve his own benefit. A person who cares only for his own benefit will choose in such a way that everyone benefits only if he does not know who he is going to end up as.
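The idea can be sketched as a toy expected-utility calculation. The payoff numbers and the rival rule "NoBashing" are my own hypothetical additions, not part of the argument above; the point is only that a purely self-interested chooser, uncertain whether he will occupy Tom's position or Jerry's, maximises his own expectation by choosing the rule that benefits both.

```python
# Hypothetical payoffs for each candidate rule, indexed by the identity
# the chooser might turn out to occupy. Numbers are illustrative only.
payoffs = {
    "Catpower":  {"Tom": 10, "Jerry": -10},  # cats may bash non-cats
    "NoBashing": {"Tom": 3,  "Jerry": 3},    # no one may bash anyone
}

def expected_utility(rule, p_tom=0.5):
    """Expected payoff for a chooser who does not know which identity
    he will occupy (p_tom = probability of ending up as Tom)."""
    return p_tom * payoffs[rule]["Tom"] + (1 - p_tom) * payoffs[rule]["Jerry"]

# Behind the veil, the self-interested chooser picks the mutually
# beneficial rule, because he may end up as either party.
best = max(payoffs, key=expected_utility)
print(best)  # NoBashing
```

Remove the veil (set `p_tom=1`) and the same self-interested chooser would pick Catpower; the ignorance, not any altruistic motivation, does the moral work.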
I will try to pre-empt one kind of objection. It may seem as if I have simply assumed that particular features of a person, including their identity, are irrelevant from a moral point of view. Rather, the above discussion has made explicit what is an implicit feature of our social morality. Whether or not a person possesses a particular characteristic has no bearing on whether that characteristic is a fundamentally morally salient one, especially when that characteristic is not universally shared among the relevant members of society. To illustrate, let us return to our Tom and Jerry example. Suppose that being Tom, whether Tom turns out in other possible worlds to be a cat or a mouse, really is a property of fundamental moral salience, such that there is more pro-tanto moral reason to benefit a person merely because that person is Tom. As we have noted before, in any contract situation where the contracting party takes into account all the relevant moral considerations, even if the contracting party chooses such a principle, this has nothing to do with the actual particulars of that party. The contracting party will choose said principle regardless of what his personal identity is.
If a function's output is invariant with respect to a particular putative variable across that variable's whole domain, then it is not really a variable that figures into the function. Similarly, if there is some representational device that aggregates all the relevant moral reasons but remains invariant with respect to the personal identity of the parties, then personal identity is not one of the considerations or variables of the device. Therefore personal identity does not count as a morally relevant consideration. We can extend this argument to any non-universally shared personal characteristic: race, religious persuasion, sexual orientation, gender, socio-economic position, talents, dispositions, preferences and so on.
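The invariance point can be put in functional terms. Below is a minimal sketch, with a hypothetical "choice function" of my own devising: it takes a moral consideration and a personal identity as arguments, but its output never varies with the identity, so identity is a variable in name only.

```python
# A hypothetical choice function: which principle a contracting party
# selects, given a moral consideration and the party's identity.
def chosen_principle(benefits_all, identity):
    # The output depends only on the moral consideration; `identity`
    # is accepted as an argument but never consulted.
    return "MutualBenefit" if benefits_all else "Catpower"

identities = ["Tom", "Jerry", "Spike"]
for benefits_all in (True, False):
    outputs = {chosen_principle(benefits_all, i) for i in identities}
    # Invariance check: one output across every identity means identity
    # does not genuinely figure into the function.
    assert len(outputs) == 1

print("choice is invariant with respect to identity")
```

The same check would fail for any characteristic the device actually consults, which is exactly the test the paragraph above proposes for moral relevance.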
One potential problem that arises as we prevent people from knowing who they will turn out to be is that people in a pluralistic society have different conceptions of the good, and therefore different standards according to which states of the world can be evaluated. According to which standard are we going to evaluate benefits and costs? This is a difficult question that will be treated more fully later in this chapter. For now, I will say that, while the ends the parties are choosing rules for become much less determinate, they are not completely indeterminate. What this means, however, is that our device is unable to generate specific rules, and can at most deliver somewhat less determinate, more general conditions that any social rule must fulfil in order to count as just; that is, we can at most extract just principles instead of just rules.