Reasonable-ness, by what standard?

Murali

Murali did his undergraduate degree in molecular biology, with a minor in biophysics, at the National University of Singapore (NUS). He then changed direction and did his Masters in Philosophy, also at NUS. He is currently pursuing a PhD in Philosophy at the University of Warwick.

33 Responses

  1. Christopher Carr says:

    “But by that route requires us to reserve judgement on a great many issues including for example, the existence of other minds, or the external world or even about whether or not we should have high epistemic standards in the first place.”

    I believe it’s been demonstrated that skepticism about the existence of other minds is the only reasonable position.

    • I believe it’s been demonstrated that skepticism about the existence of other minds is the only reasonable position.

      I don’t know. We could make an inference to the best explanation. But it is not clear whether any such inference counts as knowledge, or whether the results of such an inference are so tentative that said doxastic state counts as a form of scepticism.

    • Chris in reply to Christopher Carr says:

      Perhaps you mean “demonstrated” in a way that I do not understand. How would such a thing be demonstrated? And what degree of skepticism?

      Ultimately, anything more than token skepticism about other minds leads to solipsism. Solipsism and reason are not exactly best buds.

      • Christopher Carr in reply to Chris says:

        Belief in other minds ultimately requires making a leap of faith. That not making this leap of faith leads to solipsism is a paradox.

        • Chris in reply to Christopher Carr says:

          I assume, then, you are a skeptic about any conclusion that can’t be arrived at by deductive reasoning? Because the “leap of faith” involved in inferring other minds is the “leap of faith” that results from any non-deductive reasoning: inductive, abductive, counterfactual, etc. There goes science.

          • Murali in reply to Chris says:

            The leap of faith in the case of other minds requires you to generalise from an example of one. That’s quite a bit different.

            • Chris in reply to Murali says:

              I don’t think that’s quite true. The analogical arguments, which are the most common, do require that you reason from one individual to many, but not from one data point. This is an important distinction: we’re not saying one swan is white, therefore all swans are white. We’re saying the whiteness of the one swan is associated with x, y, z, a, b, c, d…. n (where n follows an incredibly high number of other letters), therefore wherever we find x through n, whiteness is likely to follow. That’s not a very big leap. If it is, then we’re back to only deductive reasoning.

          • Chris in reply to Chris says:

            By the way, I tried to post this comment earlier, but I think it got lost in one spam filter or another. There are, in fact, deductive arguments for other minds, which is where I was leading with my questions to Christopher. Specifically, the line of reasoning best articulated in Norman Malcolm’s classic paper “Knowledge of Other Minds,” which he got from Witt.genstein (I have a feeling it’s that name that triggers the filter) in the Investigations. I can’t find the paper online, but if you look it up you’ll find that it’s reprinted in several books. So a leap of faith may not be necessary.

            • Stillwater in reply to Chris says:

              Externalists have argued that the semantic properties of our words require the existence of an external world (Putnam in particular). Kripke has argued – I’ve been looking for the book but can’t find it – something along the Wittg. line of rule following, leading to the existence of other minds. (It’s been a long time since I read this stuff so my recollections are fuzzy.)

              • Chris in reply to Stillwater says:

                The book is Witt.genstein on Rules and Private Language (it’s definitely that name that triggers the spam filter). In it, Kripke makes an argument similar to the one Malcolm makes, but Malcolm’s is probably more accessible.

    • James K in reply to Christopher Carr says:

      That doesn’t follow.  Solipsism requires you to believe that you are the product of a unique process while every other human is the product of some other process.  In short, you have to believe that humans were created in 2 different ways and one of those ways has only happened once.

      By contrast, the proposition that other humans have minds merely requires 1 process for making humans.  That proposition relies on a simpler universe than solipsism, making it more probable in the absence of contradictory data.

      As a general rule it is more likely that one is typical than that one is unique.  It is therefore reasonable to assume typicality in the absence of contradictory evidence.
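      This simplicity argument can be put in a rough Bayesian form. Everything below is stipulated purely for illustration – the hypothesis names, the halving-per-process prior, and the equal likelihoods are my own assumptions, not anything measured:

      ```python
      # Hypothetical comparison of H1 ("one process produced all humans,
      # minds included") vs. H2 ("I was produced one way, everyone else
      # another"). The prior halves for each extra process postulated,
      # encoding the simplicity preference; the behavioural evidence is
      # treated as equally likely under both hypotheses.

      def posterior(priors, likelihoods):
          """Normalise prior * likelihood into posterior probabilities."""
          joint = {h: priors[h] * likelihoods[h] for h in priors}
          total = sum(joint.values())
          return {h: j / total for h, j in joint.items()}

      priors = {"H1_one_process": 1 / 2, "H2_two_processes": 1 / 4}
      likelihoods = {"H1_one_process": 1.0, "H2_two_processes": 1.0}

      post = posterior(priors, likelihoods)
      print(post)  # H1 comes out twice as probable as H2 (2/3 vs. 1/3)
      ```

      On these stipulated numbers the simpler hypothesis wins on prior simplicity alone, which is the shape of the argument above; nothing hangs on the particular values chosen.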

      • Chris in reply to James K says:

        James, it does follow from Christopher’s argument, which isn’t about process (though I definitely appreciate process arguments). If it requires a leap of faith to infer other minds, it also requires a leap of faith (a prior one) to infer that my mental experiences are of something outside of them, since we only have direct experience of the mental experiences themselves. With Christopher’s position, you get a reductio that, short of Cartesian maneuvers (which I doubt Christopher wants to make) leads you to solipsism.

      • Christopher Carr in reply to James K says:

        Or, there could be an infinite number of processes happening exactly once each. But yeah, I’m not a solipsist, just playing Devil’s Advocate.

    • Stillwater in reply to Christopher Carr says:

      Solipsism is just a type of skepticism of the external world, no? I can no more prove the existence of a table or coffee cup than I can prove the existence of minds other than my own. On the assumption that we’re trapped in the contents of our own minds, tho, some things may seem reasonable and others not. For example, if I were a solipsist, then there ought to be nothing constraining me from treating people really, really poorly, since the moral properties I reasonably attribute to them (why do I do this if there are no other moral beings?) would in fact be completely unjustified and therefore completely unreasonable.

      But I don’t act that way. So while solipsism is perhaps a reasonable view to accept, acting as if I were the only mind isn’t reasonable.

  2. E.C. Gach says:

    “Consider our pre-theoretical intuitions. Different people have different intuitions about things (e.g. morality). However, they all cannot be right. i.e. any person relying on his moral intuitions is just as likely as not to get things correct.”

    I’m curious if you think this is true in aggregate.  For instance, are you claiming that the correlation between moral intuitions and what one later reasons to be morally true is somewhere around 50%, and that one’s moral intuitions are just as likely to be the same or similar to another person’s as opposite or different?

    Also, I’m curious if, while you might argue that most people have rather different and irreconcilable notions of “morality” in practice, i.e. the conclusions we derive from it in a given situation, most if not nearly all people have an extremely similar understanding of what “to be moral” or “to do the right thing” means in the abstract?  So that even if you and I have different moral intuitions to start, the fact of an intuition to something being either moral or immoral, and what those categories mean, is largely shared among people.

    • Murali in reply to E.C. Gach says:

      either moral or immoral, and what those categories mean, is largely shared among people.

      That’s a different kind of beast. That is simply the meaning we ascribe to the word moral, and in most cases meaning is settled by convention. Linguistic intuitions are different from moral intuitions. The former can be confirmed empirically by asking people what the hell they mean when they use a particular term.

      • E.C. Gach in reply to Murali says:

        Right, but the phenomenon the word describes is not simply contrived.  I’m not talking so much about the intuition to make it a word, but rather the intuition to “do the right thing.”

        People can all have different conceptions of what is “right,” but the fact that we’re concerned about that idea at all, and on average it appears that we are, seems an important shared frame of reference.

        • Murali in reply to E.C. Gach says:

          I’m not talking so much about the intuition to make it a word, but rather the intuition to “do the right thing”

          Errm, I’m not sure how widely that intuition is held. According to some anecdotal estimates, only about 20% of people care about doing the right thing, whatever that turns out to be. The rest of us are more like Huck Finn, who would help Slave Jim escape whether or not it is the right thing to do.

          I’m not sure whether we can, strictly speaking, infer from the fact of an actual widespread commitment to be moral to the notion that we should be morally committed. I think that, given that our moral intuitions are engendered either by nature or nurture, the fact that some of our intuitions are not subject to extensive disagreement does not make them any less error-prone than those which are.

          • E.C. Gach in reply to Murali says:

            Maybe I should rephrase it as an intuition to Justice.  Or perhaps I should just stop calling it an intuition, and remove its epistemic flavor by going with the traditional phrase of “sentiment.”

            To the degree that both “truth” and “justice” are concerns only for some level of social and mental consciousness, I think intuitions remain important.  In the same way that “what is moral” relies heavily on certain facts about living creatures’ experiences, what is “true” would seem to be too connected to facts about experience to exclude experiences like intuition or sentiment from it.

      • Chris in reply to Murali says:

        Well, the linguistic case is more complicated than that, but the moral one is less complicated than you seem to be implying. We can, as people like me do on a regular basis, empirically test people’s moral intuitions. And it turns out that, at a certain (relatively low) level of abstraction, they look a lot alike. So much so that some have begun to theorize that, as with language in the dominant linguistic paradigm, there is a universal moral grammar that shapes our intuitions.

    • BlaiseP in reply to E.C. Gach says:

      Morality only makes itself manifest in actual judgement calls, made in the moment.   As I said downstream, Reasons are merely so many messages returned from the Black Box of Policy and Axiom when we shove a test case into that Black Box.   The only abstractions lie within the Black Box, derived and fine-tuned by hundreds of test cases shoved into it.

      The Priest and the Levite in the story of the Good Samaritan had Reasons for leaving the man in the road.   Chief among those reasons was ritual uncleanness.  To get a grip on that parable in context, you must understand the Priests and Levites had a little vacation town down along the Dead Sea where they’d stay when they weren’t doing service in the Temple in Jerusalem.   If they’d touched a dead body, they couldn’t serve in the Temple.

      It’s often hard to tell a wounded man from a dead man:  they weren’t about to find out, either.   They just went to the other side of the road and thought they were doing the right thing.

      The priests handled sacrifices, raw meat and grain and such.   The entirely reasonable prohibition of keeping the priests’ hands clean of the germs from a dead human body ought to seem pretty obvious.   The problem arose in the context of Jesus answering a question about the Most Important Commandment.   He answered in two parts, love God and love your neighbor as yourself, to which his questioners responded “And who is my neighbor?”

      In the parable of the Good Samaritan, Jesus points out how this once-reasonable ruleset had become “brittle”, that’s what we call a ruleset which has no flexibility.   It trended to the God outputs and neglected the Neighbor outputs.    Jesus ends the parable with another question, a fundamental reset of the rules engine:

      “Now which of these three do you think seemed to be a neighbour to him who fell among the robbers?”

      He said, “He who showed mercy on him.”

      Then Jesus said to him, “Go and do likewise.”

      Go and Do.   It’s not abstract in the face of the man who fell among robbers.  We must act on the truth of our convictions.

  3. BlaiseP says:

    Let me approach this from a rules-based AI perspective.  Before we can approach Reasons, we must first set up the axioms and policy.  Let’s take this Black Nationalism apart and try to build a ruleset.

    What’s a nation?  It has several characteristics, without which it isn’t a nation.   It has a territory, citizens and autonomous government.   In this case, it would also have a Black attribute.

    Expanding into this Black attribute, what are the discriminants we might apply to sort out Black from Other, thus arriving at some criteria for citizenship in the Black Nation?   This opens a can of worms, the same can we see in the Native American nations.   Just how Black does someone have to be to qualify?   These horrible discriminants are, well, discriminatory and the longer this can stays open, the more worms put in an appearance.

    Could a White person apply for citizenship in the Black Nation?   Hell, white Congresscritters can’t apply for membership in the Congressional Black Caucus.

    Therefore, all this business about Blackness or Whiteness is so much self-referential cant, completely arbitrary in its definitions.   No reasonable person could possibly subscribe to such a pernicious allegiance.

  4. Stillwater says:

    Murali, this is a nice post and it certainly gets the ball rolling wrt a more rigorous understanding of what ‘reasonableness’ means. I have a few worries tho. One is that reasonableness along the lines you suggest seems to me like it could merely reduce to coherence with an antecedently held collection of beliefs. So, suppose a person holds a collection of prior beliefs S where each belief is subjectively weighted (by level of commitment say). For this person, the reasonableness of holding P could be determined by its consistency wrt the totality of other beliefs already held (standard coherence theory).

    Alternatively, if the correspondence of belief P with the world (ie, the facts) is a necessary condition for reasonableness (something you mentioned above), then belief P will be reasonably held according to a Bayesian analysis like you provided, but independently of the antecedently held belief set of the person making the judgment.

    Here’s the problem, tho, and what led to my comment in the previous post: it seems to me that the standard view of reasonableness is increasingly moving in the direction of coherence with antecedently held beliefs (where consistency is the main criterion) – and in particular, coherence with a narrow range of heavily weighted beliefs – and away from correspondence with the external world.

    Maybe at the end of it coherence and correspondence are both useful and both necessary, and teasing out the distinction between the two is impossible. But even then, I want to say (tho I’m not sure how the argument would go at this point) that the difference in the two approaches is the degree to which each is employed in determining the acceptance of (that is, the reasonableness of) new belief P. And this is especially problematic in light of the fact that particular beliefs within belief set S can be heavily weighted and prioritized (i.e., they’re closer to the center of the belief web). So reasonableness could, and I think to some degree has, become something entirely subjectively determined.


  5. Stillwater says:

    “i.e. any person relying on his moral intuitions is just as likely as not to get things correct.”

    I think there are different meanings of ‘intuition’ in play here. On one understanding, an intuition is just what your gut tells you. And those, for the most part, aren’t pre-theoretical intuitions since what your ‘gut’ tells you is shaped by all the beliefs and experiences (and theories!) you’ve accumulated up to that point. So those types of intuitions aren’t pre-theoretical in any sense of the word (or so it seems to me).

    Another conception of an intuition is more technical: it’s a judgment based on an abstraction which cleaves off irrelevant features that might lead to an idiosyncratic response. So on this view, intuition is quite like an upper limit on conceivability rather than what your currently grumbling ‘gut’ tells you. One example of a belief justified by an intuition is that ‘A = A’ is a necessary truth. The conclusion that A = A is a necessary truth isn’t determined by going thru cases, but rather by the fact that it’s impossible to conceive of a situation in which A = A is false. The same goes for the conjunction function, and basic logical relations between ‘if, then’ and ‘&’ and ‘or’. So judgments based on intuitions are those judgments which it’s impossible to conceive of otherwise.

    Likewise, I think moral intuitions – where intuitions are understood in this technical sense – are a useful tool for determining moral judgments and yield a pretty convincing case for their truth (for my part, I don’t see how many (not all) types of moral judgments could be justified otherwise). So the analogy here would be that intuitions ground our moral judgments in pretty much the same way that intuitions ground our logical judgments: by considering abstract cases in which irrelevant features are eliminated. (This is the role classic hypotheticals play in revealing or determining moral judgments.) The analogy breaks down on a number of levels, but not, I think, wrt justification. For lots of moral judgments (not all), and especially judgments about actions comprised of only a single moral dimension, if moral principle M is true it’s because not-M is inconceivable (in that type of situation). Of course, moral principles come into conflict and resolving those conflicts isn’t determined – at least most of the time – by our intuitions, but rather pragmatics or compromise or whatever. So the above considerations apply to basic moral judgments, not complex actions or situations comprised of many (basic) moral dimensions.

    • Murali in reply to Stillwater says:

      The thing is that people equivocate between the two meanings. If people really referred to the kinds of judgements we make when we judge logical truths to be true (logical judgements), then, when making moral arguments, they wouldn’t be making certain kinds of argumentative moves.

      Very often when moral theorists appeal to intuitions, they don’t just appeal to the inconceivability of the alternative; rather, they appeal to its implausibility. Here’s an example. People often think that the following counts as an argument against classical utilitarianism:

      1. Under classical utilitarianism, if I gained more pleasure from torturing babies than the pain the baby felt, it would be morally better for me to torture babies.

      2. 1 is counterintuitive.

      3. Therefore, classical utilitarianism is false.

      When people say that 1 is counterintuitive, all they are saying is that their gut seriously rebels at the notion that 1 could be true. Often, in less clear situations, there will be lots of cases where people’s intuitions are going to differ. The question at hand is what makes cases where people’s intuitions differ different from cases where intuitions don’t. If there really isn’t a difference in the types of intuitions involved, then how do we suppose that one is not error prone when the other intuition clearly is?
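      For what it’s worth, the hedonic arithmetic behind premise 1 can be made explicit. The “util” figures below are stipulated solely to instantiate the premise – classical utilitarianism supplies only the summing rule, not the numbers:

      ```python
      # Toy classical-utilitarian ledger for premise 1. The numeric
      # "utils" are stipulated for the example, not measured quantities.

      def net_utility(effects):
          # Classical utilitarianism: sum pleasures (+) and pains (-).
          return sum(effects.values())

      scenario = {
          "torturer_pleasure": +10,  # stipulated: more pleasure gained...
          "baby_pain": -9,           # ...than pain inflicted
      }

      # A positive net sum marks the act as better than refraining (net 0),
      # which is exactly the counterintuitive verdict premise 1 reports.
      print(net_utility(scenario))  # 1, i.e. net positive
      ```

      The intuitive recoil targets the verdict this sum delivers, not the arithmetic itself – which is why the objection turns on implausibility rather than inconceivability.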

  6. Murali says:

    @Stillwater
    One is that reasonableness along the lines you suggest seems to me like it could merely reduce to coherence with an antecedently held collection of beliefs. So, suppose a person holds a collection of prior beliefs S where each belief is subjectively weighted (by level of commitment say). For this person, the reasonableness of holding P could be determined by its consistency wrt the totality of other beliefs already held

    Well, given the way I set things up above, if coherentists were just as likely to take the opposite view on something as not, then mere coherentism is insufficiently reasonable. That is, if there were lots of people with different priors, then basing your beliefs on those sets of priors just isn’t reasonable, as it is no more likely to lead you to the truth than into error.
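    The “no more likely to lead you to the truth than into error” point can be simulated. Everything here – the agents, the uniform spread of priors, the single true/false question – is an invented toy model, not anything from the post:

    ```python
    import random

    # Toy model: if priors vary arbitrarily across agents, then believing
    # whatever coheres with your priors tracks the truth only at chance.

    def coherentist_accuracy(n_agents, truth, seed=0):
        rng = random.Random(seed)
        correct = 0
        for _ in range(n_agents):
            prior = rng.random()       # this agent's prior credence in P
            believes_p = prior > 0.5   # believe P iff it coheres with priors
            correct += (believes_p == truth)
        return correct / n_agents

    print(coherentist_accuracy(100_000, truth=True))  # hovers around 0.5
    ```

    Under these assumptions, coherence with one’s priors does no better than a coin flip, which is the sense in which it is no more likely to lead to truth than to error.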