Constructing the original position 2: Initial Choice Situations

Murali

Murali did his undergraduate degree in molecular biology, with a minor in biophysics, at the National University of Singapore (NUS). He then changed direction and did his Master's in Philosophy, also at NUS. He is currently pursuing a PhD in Philosophy at the University of Warwick.

17 Responses

  1. Roger says:

    I agree with the step of agreeing to the basic system of rules. I get it.

    My question is why you transition from terminology of preferences in one paragraph to “moral reasons” or moral preferences or “morally justified” in the next paragraph. Aren’t you assuming that the only pertinent preferences are moral preferences? Or is the term moral being used as a synonym for good?

    Could you clarify, please?

    • Murali in reply to Roger says:

      In the previous post I established that if a person "benefits" from social coordination under a system of rules, there is a prima facie moral reason in favour of that system.

      I said, however, that we didn't know how to put all those prima facie reasons together.

      In this post, I draw attention to a basic general truth about things we prima facie prefer. Then I show that if I substitute the right variables and set up the right conditions, what is chosen is the thing there is most moral reason in favour of.
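
      As a crude sketch of the substitution, here is a toy model (the names and numbers below are mine, purely for illustration; nothing in the post hangs on them):

      ```python
      # Toy sketch: a rational chooser P prefers objects O with more of feature F.
      # Substitute O = candidate systems of rules, F = the benefit of social cooperation.
      # Since (per the previous post) delivering that benefit is what the prima facie
      # moral reasons track, the system P chooses is the one with most moral reason in its favour.

      def rational_choice(options, feature):
          """Pick the option with the most of the preferred feature F."""
          return max(options, key=feature)

      systems = {"rules_A": 3, "rules_B": 7, "rules_C": 5}  # hypothetical benefit levels

      chosen = rational_choice(systems, feature=systems.get)
      print(chosen)  # -> "rules_B"
      ```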

      • Stillwater in reply to Murali says:

        I wanna follow up on Roger’s comment since it strikes me as very close to a confusion I’m having.

        You say that “insofar that a system of social rules benefits some member of society, there is a prima facie moral reason in favour of that system. Where the term benefit is used to refer to any positively valenced thing we think we should get from engaging in social cooperation. i.e. by benefits I mean merely the “proper objects of social coordination” whatever that may turn out to be.”

        Like Roger, I’m not sure why it has to be a moral reason. It could be, for example, a preference for peace rather than conflict, or individual choice rather than (excessive) limitations on harms. I’m not sure those types of preferences ought to be cashed out in moral terms, primarily because I don’t think the people who advocate that way are advocating in moral terms. So there’s a disconnect between the psychological/emotional beliefs people are acting on and an ideal where those beliefs need to be justified in objective morality.

        Which leads to my next worry. You wrote:

        “Presumably, if we designated F to be the benefit and O to be a system of rules, then a rational person prima facie prefers systems of rules for which there are prima facie moral reasons in favour of.”

        One worry is that it’s entirely possible that someone behind the veil (a thin veil, perhaps) would equate O and F, it seems to me. For lots of US conservatives, the benefit is the system of rules. Another is that I’m not entirely sure why someone would be inclined to favor O if it leads to F (they would if it were a necessary condition) when all they really care about is F. And yet another is that I’m still not convinced that a rational person requires moral reasons for choosing O, let alone for preferring F. I think a rational choice requires some justification for O given F, that much I agree with, but I don’t think it has to be capital M moral.

        • Murali in reply to Stillwater says:

          So there’s a disconnect between the psychological/emotional beliefs people are acting on and an ideal where those beliefs need to be justified in objective morality.

          This goes back to the previous post. There I start from the idea that there are some things we think we should get out of social cooperation. We all have different ideas about what those benefits are, and we don’t always think about them in moral terms. However, whenever we think that someone has been sufficiently alienated from those benefits while engaging in social coordination, we think that that person has been oppressed. That is, the concept of oppression is linked to the idea that there are some “benefits” that are the proper aim/object of social coordination. We also think that to the degree a society is avoidably oppressive, it is not just, and a society is just to the extent that its basic institutions are morally justified. So, whatever it is that is due to people as a result of participation in social coordination, the fact that a system gives it to those who coordinate is a moral reason in favour of that system.

        • Murali in reply to Stillwater says:

          One worry is that it’s entirely possible that someone behind the veil (a thin veil, perhaps) would equate O and F, it seems to me. For lots of US conservatives, the benefit is the system of rules

          I’m not sure how this is a worry. It would seem to be a trivial instance of the general rule.

          Another is that I’m not entirely sure why someone would be inclined to favor O if it leads to F (they would if it were a necessary condition) when all they really care about is F.

          If all they care about is F, they will choose whichever O gives them the most F (if they are maximising and there is no veil of ignorance).

          And yet another is that I’m still not convinced that a rational person requires moral reasons for choosing O, let alone for preferring F. I think a rational choice requires some justification for O given F, that much I agree with, but I don’t think it has to be capital M moral.

          You’re right. A person doesn’t need moral reasons for choosing O. But the fact that a rational person would prima facie choose some system represents a prima facie moral reason in favour of that system. That doesn’t mean that he chose O for moral reasons.

  2. Shazbot3 says:

    Hey Murali,

    First off “next”, as in I want to see the next post.

    I think you should try to make this point a little sharper with an analogy or metaphor. All the technical details make it hard to argue about the substance of what you are saying, and an analogy with something less technical could help.

    Here is what I take you to be saying in simple terms. (Correct my errors, please.) If there are two systems of rules X and Y that are exactly equal in their outcomes with one exception, i.e. that X benefits person A while Y doesn’t, then X is morally better (or more “just,” maybe) than Y. And you mean to use “benefits” in the broadest possible way, not necessarily in terms of money or pleasure, but just getting A something she “prefers.” (BTW, I think the worry that you address in the footnote about whether your position is just some sort of preference-based rule-utilitarianism will deserve much more than a footnote, and your committee will ask about it. You can’t hide your views in the footnotes.) The problem comes when we try to figure out which of two systems of rules, say P and Q, that benefit different individuals, say B and C, is morally better (or more just).

    We can’t. So we need some way of judging which system is more just that abstracts away from individuals.

    Yes?

    • Murali in reply to Shazbot3 says:

      No, benefit is not just something the person prefers. Instead I am doing the reverse: let us construct a situation in which the person prefers whatever we take to be the proper object of social cooperation.

      • Murali in reply to Murali says:

        I.e., actual people may not end up preferring the benefit that they ought to get from social cooperation at all.

      • Shazbot5 in reply to Murali says:

        But how do we figure out what “the proper object of social cooperation” is, unless we assume that it is what people prefer to have for themselves and others? It seems to me that almost by the definition of “cooperation”, we cooperate to get what we want.

        The advantage of contractarianism (as Hobbes clearly believes) is that you can explain how “the good” is not some mysterious Platonic form in the sky, but is the same thing as “what I believe is in my interests”, while still explaining that it is in each person’s interests not to kill or steal and generally to act morally. (IMO, Hobbes is a kind of preference utilitarian, or rather someone who believes in what G. E. Moore called an “interest theory” of the good, i.e., the good is just that which is in our interests, and “our interests” are just an expression of our preferences.)

        I am not quite sure how Rawls or you can avoid being a preference utilitarian and still be a contractarian. (N.B. Kant is not a contractarian. He certainly doesn’t think it is in your interests, always or necessarily, to act morally. He thinks that violating the dictates of the categorical imperative is irrational in some vague way that an egoist wouldn’t really care about. For Kant, the moral law has a different status than a mere contract.)

  3. Roger says:

    Murali,

    Ok, let me go back to your prior post and start there…

    “…we see the basic idea that there is some benefit that members of a society ought to get and that getting less or none of this benefit provides them with legitimate reasons to leave society.
    The dynamic we observe here is that each member of society is at the same time expected to benefit from and contribute some benefit to society. The benefits to coordination are therefore not exactly one sided, but mutual or reciprocal.”

    Wouldn’t it be more accurate to say that we expect members to follow the institutional rules? These may require contributing some benefit to society or may just require following the rules. In other words, your contribution may be narrower than benefiting others?

    “But social coordination for mutual advantage just is what we call cooperation. The basic idea that we can extract is the idea that social rules are supposed to benefit all the members of society. Let us note a further conceptual relation. We think that society is oppressive to the extent that it alienates or fails to provide some benefit that we think society ought to provide. Our concept of what counts as oppression depends on what we think the benefits of social cooperation should be. Where people defect from social cooperation because they were alienated from some benefit which we didn’t think they were supposed to receive we think that they are being unreasonable. This is because we don’t think that they were oppressed. If on the other hand, we were to be convinced that the aforementioned benefits were in fact properly one of the aims of social cooperation, we would think such people who were alienated from those benefits were oppressed. Moreover, even if a particular benefit was owed to a person, but that it was not the proper aim of social cooperation, while we would think that person unfortunate in a number of ways, we would not think that the person was oppressed. In addition, to the degree that a particular society is oppressive, we do not think it is just.”

    So a just society is one which delivers the benefits of cooperation to members commensurate with the members’ contributions or observance of the rules? An unjust society oppresses members and does not provide the benefits that were deserved?

    “It is impossible to conceive of a society that is both oppressive and just in the same respects at the same time. Therefore, we can infer that insofar that a system of social rules benefits some member of society, there is a prima facie moral reason in favour of that system. This also means that insofar as system A is pareto-superior to system B in terms of benefits, A is more just than B. Nothing we have said so far, however, gives us any way to aggregate or balance claims, see whether one claim over-rides another or undercuts it. Theories of justice differ from each other because they have different notions of what counts as a benefit and they differ on how the various prima facie moral reasons interact with one another.”

    Thus a society which provides more benefits with no additional costs and no additional harms is morally superior (and more just) to one with fewer benefits or more costs and harms? (Note this still says nothing about the sticky situations where there are tradeoffs of more costs and harms compared to more benefits.)
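
    For concreteness, here is how I read the Pareto claim, in toy numbers (the figures are invented by me, just to check my understanding):

    ```python
    # Invented benefit levels for three members under two systems of rules.
    benefits_A = {"Ann": 5, "Bob": 3, "Cal": 4}
    benefits_B = {"Ann": 5, "Bob": 3, "Cal": 2}

    # A is Pareto-superior to B: nobody gets less under A, and at least one person gets more.
    pareto_superior = (all(benefits_A[p] >= benefits_B[p] for p in benefits_A)
                       and any(benefits_A[p] > benefits_B[p] for p in benefits_A))
    print(pareto_superior)  # True, so (other things equal) A would be the more just system
    ```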

    “Presumably, if we designated F to be the benefit and O to be a system of rules, then a rational person prima facie prefers systems of rules for which there are prima facie moral reasons in favour of. If we were to then set up an initial contract situation where the parties who were doing the choosing are P-like, the system of rules they will choose if they were rational would be the system there are the most moral reasons in favour of. This is not to say that any such device will do, only that there is some kind of initial choice/ contract situation such that the output of the choice or contract situation is the most just (most morally justified) system of rules.”

    Wouldn’t it be more accurate to say that to the extent that we want our institutions to be morally just, we will rationally choose ones which lead to the most benefits at the least costs and harms? However, there is no assumption that people will choose institutions only on the dimension of moral justness, nor that any two people will agree on how we balance the costs and benefits?

    • Murali in reply to Roger says:

      So you’ve got me mostly right.

      Wouldn’t it be more accurate to say that we expect members to follow the institutional rules?

      Except that sometimes, when we think the rules are bad and that following them does not deliver what following social rules is supposed to deliver, we think it is unreasonable to exclude people from society. Also, in some cases it could just be that we think the rule is its own benefit. Remember that I am using the word benefit loosely enough to encompass this possibility as well. I am not trying to make a judgment here, just trying to characterise the way we all think about social rules: the point of following them, the point of having them in the first place, and how these ideas link up with each other.

      So a just society is one which delivers the benefits of cooperation to members commensurate with the members contributions or observance of the rules? An unjust society oppresses members and does not provide the benefits that were deserved?

      Yup, though I want to be careful about the word “deserve”. It is a bit more loaded than I am willing to commit to at this point.

      Thus a society which provides more benefits with no additional costs and no additional harms is morally superior (and more just) to one with less benefits or more costs and harms? (Note this still says nothing about the sticky situations where there are tradeoffs of more costs and harms compared to more benefits.)

      Yup, including the caveat.

      Wouldn’t it be more accurate to say that to the extent that we want our institutions to be morally just, we will rationally choose ones which lead to the most benefits at the least costs and harms? However, there is no assumption that people will choose institutions only on the dimension of moral justness, nor that any two people will agree with how we balance the costs and benefits?

      You misunderstand what I am doing here. We have these prima facie moral reasons in favour of and against various systems of social rules. However, from where we currently stand, we don’t know how to put those reasons together, how to balance them, or whether balancing is the right thing to do in the first place.

      However, we do have an established, rich and robust framework for dealing with personal preferences in a variety of situations under a variety of constraints. And this basically works on a “person P chooses object O which has preferred features F” kind of model. Here we know how to deal with conflicting preferences, how to adjudicate, how to balance, when to balance, or whether to balance at all.

      What I want to do is use this formal apparatus. So, I make the appropriate substitutions such that the final preference that comes out amounts to that which the moral reasons favour all things considered. And I show that, to do this, the object O is naturally the system of rules (since that is what is being chosen) and that, if I want the preference to come out with the morally best system, the features F should be the thing more of which, ceteris paribus, makes a system morally better. And we can remain confident that the formal apparatus is useful and can handle cases where some things are benefits only to a certain extent, because we can construct utility functions where the utility peaks at a certain point. Now, there may be a question as to what counts as a benefit and what we are going to feed into our device, but that will be answered in subsequent posts. What I am doing here is just to show that some version of such a device, even if highly artificial, will deliver rules that are morally justified.
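
      Very roughly, and purely as my own toy illustration (the particular utility function, names and numbers below are made up; the real construction comes in later posts), the device works like this:

      ```python
      # Toy device: the chooser P picks the system of rules O whose benefit level F it most prefers.
      # The utility function peaks, so more of the benefit helps only up to a point.

      def utility(benefit_level, peak=6):
          """Utility that rises to a peak and then falls off."""
          return -(benefit_level - peak) ** 2

      # Hypothetical benefit levels delivered by three candidate systems of rules.
      systems = {"rules_A": 4, "rules_B": 6, "rules_C": 9}

      # With the right substitutions, "what P most prefers" and "what there is most
      # moral reason in favour of" come out to the same system.
      best = max(systems, key=lambda o: utility(systems[o]))
      print(best)  # -> "rules_B": 9 units overshoot the peak, so more isn't automatically better
      ```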

      • Roger in reply to Murali says:

        Sorry to belabor this; I am not formally educated in either political or ethical philosophy. I am fine with using your rational-choice or institutional-choice methodology. However, we can only say an institution is “morally just” to the extent that people choose it based solely on those terms above that we defined as just. To the extent they choose based upon other reasons, we cannot say anything about how morally just the chosen institution is. Right?

        As an example, I may want to choose a society based upon two factors. One is how well it benefits people while minimizing costs and harms. The other is that it makes me king, because kings live the good life and attract all the hot chicks. If I chose institutions on the first factor alone, the choice would be morally just. If I blended in the second factor in any way, the chosen institutions would no longer necessarily be morally just. Right?

        • Murali in reply to Roger says:

          However, we can only say it is “morally just” to the extent that people choose the institution based solely on those terms above that we defined as just.

          Yup, and because we are not talking about actual people, but hypothetical people on paper, we can just make them prefer what the benefits are supposed to be and not prefer anything else.