Teaching Philosophy to Weaponized AI


Andrew Donaldson

Born and raised in West Virginia, Andrew has since lived and traveled around the world several times over. Though frequently writing about politics out of a sense of duty and love of country, most of the time he would prefer discussions on history, culture, occasionally nerding on aviation, and his amateur foodie tendencies. He can usually be found misspelling/misusing words on Twitter @four4thefire.


26 Responses

  1. This isn’t new. If I leave the burner on in my apartment and it causes a fire while I’m not there, no one is going to say we need philosophers to save us from thinking stoves. They’ll say I was negligent.

    I think what we really need are computer scientists who understand that math is a language that can be used to describe anything and everything – not more AI cargo cults.

    • I’ve always disliked and not been particularly good at math. The one math teacher I ever had who really managed to get me to enjoy it always used the line “Math is a language” and taught it as such. Framing it that way I very much agree with.

  2. Chip Daniels says:

    New from ITT Tech- Join the exciting, fast paced world of philosophers!

    Yes, soon you too can be driving a Lambo surrounded by hot groupies, as you discuss Sartre and Foucault!

    Dial now- robot operators standing by!

    • That is very good, Chip. Philosophers grabbing the business model of prosperity gospel televangelists could work. If nothing else, their late-night paid advertisements would be entertaining to watch.

      • LeeEsq says:

        During a Jordan Peterson thread on LGM, a commenter did note that lots of people do kind of need self-help work for everyday living. By leaving this to cultists and scam artists, the philosophers have done no one any favors.

  3. Maribou says:

    The idea that philosophers are particularly likely to be good at moral questions (even ethicists) is one I fundamentally disagree with.

    They’re particularly good at parsing questions, refining questions, strengthening questions….

    But answering them?

    Just as hampered as the rest of us.

    • Jaybird says:

      We need to get fifteen or sixteen different answers and then have AIs that embody the answers fight each other to the death.

      Last one standing is obviously the right one.

    • Murali says:

      @maribou

      Depends on the question I suppose.

      The problem of progress in philosophy is a fraught enough question that I wish to caution (and perhaps more than caution) against people claiming that philosophers don’t really know any better than non-philosophers. For one, I believe that doing my PhD has purpose. I believe, when I am doing my research, that I am adding to the stock of philosophical knowledge. I believe that I am advancing valid arguments based on true premises. I believe that I am answering questions.

      More importantly, moral knowledge must be possible in order for us to be able to hold each other morally responsible. Yet you cannot consistently both hold that people who spend more time and effort trying to answer moral questions are no closer to answering them than anyone else and that moral knowledge is possible.

      If moral truths are amenable to reasoned argument, then people who spend more time on these arguments, discovering them, refining them etc are better able to judge what the conclusions of those arguments should be.

      If moral truths are not so amenable, then it is unclear how anyone could have moral knowledge. Some people may have true moral beliefs, but they would not be justified.

      • Maribou says:

        The argument that moral truths are more amenable to reasoned argument than to other types of learning is, in my view, part of the error of philosophy, particularly of analytic philosophy, though not much more so than of other styles.

        I don’t need moral truths to be amenable to reasoned argument, particularly not argument which has been refined down to pure reason and has left other kinds of knowing aside. There is more than one valid way to know things, and moral intuition can be as or more *accurate* than moral reason.

        Spending 5 minutes (or 500 hours, which is closer to my actual practice) on the history of philosophy before and during WW2 confirms my belief in the non-moral-superiority of philosophers, both in their acts and in their ways of explaining things.

        Philosophers are human, all too human, just like the rest of us.

        As I said above, reason gets you better at asking the questions. Nothing you’ve said even begins to convince me it makes you better at answering them. And the fact that, within the system of philosophy, it does make sense that you would be better at answering them strikes me as a flaw of the system rather than a proof.

        • Maribou says:

          That said, I do believe that *individuals* can learn from philosophy and that philosophy is the best way for some, perhaps even many, individuals to become more moral beings. And there have been moral giants among the ranks of philosophers.

          I just don’t believe it’s universally better than other methods of coming to grips with morality. There’s too much evidence to the contrary.

        • Murali says:

          The question as it were is how there could be other ways of knowing moral truths.

          Normally, we come to know things by some combination of reasoning and perception.

          There is a matter of coming to terms with what moral intuitions are. We will start with what they are not. Intuitions are not the product of any conscious reasoning. The lack of reflectiveness in our intuitions should at least make us suspicious about whether we could justifiably believe that our intuitions are justified. So, while it is possible that intuitions are a matter of responding unconsciously to reasons we have but are not consciously aware of, we need to take care before endorsing this possibility.

          Moral intuitions are not a matter of simple perception of moral properties. The familiar case of perception of a property involves some causal link between a spatio-temporally located property and some sensory apparatus. Not only are moral properties not spatio-temporally located, it’s even unclear what the sensory apparatus for intuitions would be. The most that could be said is that perception is a kind of metaphor for what is going on with intuition. But even if that is true, that still does not tell us what is going on with intuitions.

          Whatever is going on with moral intuitions, given the fact that people often have wildly different intuitions about the same cases, intuitions cannot be particularly reliable. This lack of reliability undermines the notion that intuitions give rise to moral knowledge.

          This also explains the existence of extensive moral disagreement among moral philosophers. Too many moral philosophers rely on intuitions to theorise about morality. If intuitions were reliable, then reasoning on the basis of intuitions could only improve on those intuitions. If they are unreliable, then relying on them cannot lead to moral knowledge.

          • Maribou says:

            “Normally, we come to know things by some combination of reasoning and perception.”

            No. At least, that is not more than trivially true, if that, and certainly I don’t accept it as a relevant premise.

            I don’t think we’re likely to come to an agreement about this. Remember, I live with someone whose training is in philosophy. I’m not in dissent because I don’t agree that your conclusions hold within your framing of the world, or because I’m unaware of what that framing is; I’m in dissent with your framing in the first place.

  4. Chip Daniels says:

    I’m pretty skeptical about true AI that could take advantage of philosophy.
    Partly because of the limitations of philosophy itself as Maribou noted, but also in how we imagine intelligence itself.

    Intelligence as we use it isn’t just really really fast computing; there is something else going on, hardwired into us from our DNA outward.

    And of course there is this:

    The military spends millions of dollars and thousands of hours training people in things like Law of Armed Conflict, Rules of Engagement, Lawful Orders, and so on.

    And we still end up with John Yoo.

    • James Kerr says:

      @chip-daniels

      Intelligence as we use it isn’t just really really fast computing; there is something else going on, hardwired into us from our DNA outward.

      I think a big problem is that we don’t really understand what intelligence is right now, and that makes deliberately engineering it really hard.

      The military spends millions of dollars and thousands of hours training people in things like Law of Armed Conflict, Rules of Engagement, Lawful Orders, and so on.

      I predict that shortly after the first time an autonomous weapon system refuses an order on ethical grounds, the weapon’s ethical programming will be patched out.

      • Jaybird says:

        Eh, it’s like “strength”.

        Does it mean “the ability to pick heavy things up”? Does it mean the ability to have bursts of speed? Does it mean the ability to run, like, for hours at a time?

        Well… yes. But someone who can do one really well might not be able to do others really well and saying something like “Usain Bolt isn’t strong because he can’t bench 400” doesn’t make sense.

        (I suppose, in D&D, Constitution measures the ability to run a marathon and Strength measures the ability to open doors/lift gates. But we also use the word “strong” to refer to people who take a bullet, or get bit by a snake, or who get a disease of some sort and fight it.)

        Intelligence is probably similar. We can refer to several things at once with this portmanteau word. Computers are exceptionally good at 1 or 2 or 3 of these things and, scarily, they’re good at all of them at once. And they’re incapable, like, functionally incapable, of doing other things that the portmanteau refers to.

        We’d best watch out which order we create the next few things that intelligence refers to in computers. We might end up with utilitarians. We might end up with deontologists.

        We’re not going to end up with virtue ethicists.

    • I’ve taught those classes…some folks you just can’t reach, to be honest.

  5. fillyjonk says:

    Teach the AI the “right” philosophy (I don’t know enough philosophy to know for sure which one is “right,” but the Existentialists might do) and we’ll be safe; it will just figuratively lie down and brood for the rest of its existence.

  6. Chip Daniels says:

    That they were foolish was well known;
    CONSTRUCTING ANGELS
    Whose wings would carry every dream transcendent;
    Yet what soul feels no trace of pity for those who
    saw TOO LATE
    they had created
    FALLYN ANGELS
    In whose every measure
    was written the folly
    of their
    CREATOR?

  7. Mark Van H says:

    So, Google and other tech companies that have contracts or dealings with the military should ask themselves the same questions that every other company that has dealings with the military should ask.

    What a perfectly logical thing to do. I am not sure why that would require philosophers, though.

  8. Mike Schilling says:

    This is easy. You teach AI weaponry about the trolley problem and have it do the opposite.
