Teaching Philosophy to Weaponized AI
The philosophical questions of morality in warfare are as old as recorded human history. With the rise of AI, drones, and related technologies, those age-old questions are only getting more complicated.
“When it comes to AI and weapons, the tech world needs philosophers” by Ryan Jenkins
Washington Post:
What’s harder is figuring out, going forward, where to draw the line — to determine what, exactly, “cause” and “directly facilitate” mean, and how those limitations apply to Google projects. To find the answers, Google, and the rest of the tech industry, should look to philosophers, who’ve grappled with these questions for millennia. Philosophers’ conclusions, derived over time, will help Silicon Valley identify possible loopholes in its thinking about ethics.
The realization that we can’t perfectly codify ethical rules dates at least to Aristotle, but we’re familiar with it in our everyday moral experience, too. We know we ought not lie, but what if it’s done to protect someone’s feelings? We know killing is wrong, but what if it’s done in self-defense? Our language and concepts seem hopelessly Procrustean when applied to our multifarious moral experience. The same goes for the way we evaluate the uses of technology.
In the case of Project Maven, or weapons technology, in general, how can we tell whether artificial intelligence facilitates injury or prevents it?
The Pentagon’s aim in contracting with Google was to develop AI to classify objects in drone video footage. In theory, at least, the technology could be used to reduce civilian casualties that result from drone strikes. But it’s not clear whether this falls afoul of Google’s guidelines. Imagine, for example, that artificial intelligence classifies an object, captured by a drone’s video, as human or nonhuman and then passes that information to an operator who makes the decision to launch a strike. Does the AI that separates human from nonhuman targets “facilitate injury?” Or is the resulting injury from a drone strike caused by the operator pulling the trigger?
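To make the question concrete, here is a minimal, purely hypothetical sketch of the division of labor Jenkins describes; none of these names or functions come from Project Maven or any real system. The model only attaches labels and confidence scores to objects in a frame, and everything it produces is handed to a human operator, who alone decides what happens next.

```python
# Purely hypothetical sketch of the pipeline described above; no real Maven
# code or API is represented. The model labels objects in a video frame; a
# human operator, not the software, makes any decision about what to do.
from dataclasses import dataclass
from typing import List, Literal


@dataclass
class Detection:
    object_id: int
    label: Literal["human", "nonhuman"]
    confidence: float  # 0.0 to 1.0


def classify_frame(frame) -> List[Detection]:
    """Stand-in for the classifier: it only describes what it sees."""
    # Model inference would go here; this sketch returns nothing real.
    return []


def brief_operator(detections: List[Detection]) -> None:
    """Pass the labels to a human; the software itself decides nothing."""
    for d in detections:
        print(f"object {d.object_id}: {d.label} ({d.confidence:.0%} confidence)")
```

Even laid out this starkly, the line does not draw itself: the labels shape what the operator sees and does, so “the human pulled the trigger” does not by itself settle whether the classifier facilitated the injury.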
No matter how advanced the technology becomes, at some point some human will influence that technology and what it does. The military spends millions of dollars and thousands of hours training people in things like the Law of Armed Conflict, Rules of Engagement, Lawful Orders, and so on. That we may soon have to worry about weapons systems knowing the same is a brave new world indeed.
What say you? Login and Comment.
This isn’t new. If I leave the burner on in my apartment and it causes a fire while I’m not there, no one is going to say we need philosophers to save us from thinking stoves. They’ll say I was negligent.
I think what we really need are computer scientists who understand that math is a language that can be used to describe anything and everything – not more AI cargo cults.
I’ve always disliked math and never been particularly good at it. The one math teacher I ever had who really managed to get me to enjoy it always used the line “Math is a language” and taught it as such. Framed that way, I very much agree.
New from ITT Tech: Join the exciting, fast-paced world of philosophers!
Yes, soon you too can be driving a Lambo surrounded by hot groupies, as you discuss Sartre and Foucault!
Dial now: robot operators standing by!
That is very good, Chip. Philosophers grabbing the business model of prosperity-gospel televangelists could work. If nothing else, their late-night paid advertisements should be entertaining to watch.
During a Jordan Peterson thread on LGM, a commentator did note that lots of people do kind of need self-help work for everyday living. By leaving this to cultists and scam artists, the philosophers have done no one any favors.
The idea that philosophers are particularly likely to be good at moral questions (even ethicists) is one I fundamentally disagree with.
They’re particularly good at parsing questions, refining questions, strengthening questions….
But answering them?
Just as hampered as the rest of us.
We need to get fifteen or sixteen different answers and then have AIs that embody the answers fight each other to the death.
Last one standing is obviously the right one.
Works for me.
Wow. Can we maybe televise the cage matches? And give each philosophical school team colors and a cool mascot? I can see a new sport a-borning…
Oooh. I was thinking that they’d kill each other with 1s and 0s.
Putting them into robots would be much more fun to watch.
We can do both and then we just have to get the two winners to agree to work together!
I, for one, welcome our new murderbot philosopher overlords.
Agreed.
@maribou
Depends on the question I suppose.
The problem of progress in philosophy is a fraught enough question that I wish to caution (and perhaps more than caution) against people claiming that philosophers don’t really know any better than non-philosophers. For one, I believe that doing my PhD has purpose. I believe, when I am doing my research, that I am adding to the stock of philosophical knowledge. I believe that I am advancing valid arguments based on true premises. I believe that I am answering questions.
More importantly, moral knowledge must be possible in order for us to be able to hold each other morally responsible. Yet you cannot consistently hold both that people who spend more time and effort trying to answer moral questions are no closer to answering them than anyone else and that moral knowledge is possible.
If moral truths are amenable to reasoned argument, then people who spend more time on these arguments, discovering them, refining them, and so on, are better able to judge what the conclusions of those arguments should be.
If moral truths are not so amenable, then it is unclear how anyone could have moral knowledge. Some people may have true moral beliefs, but they would not be justified.
The argument that moral truths are more amenable to reasoned argument than to other types of learning is, in my view, part of the error of philosophy. Particularly analytic philosophy, though not much more than other styles.
I don’t need that to be clear to reasoned argument, particularly not argument which has been refined down to reason and left other kinds of knowing aside. There is more than one valid way to know things, and moral intuition can be as or more *accurate* than moral reason.
Spending 5 minutes (or 500 hours, which is closer to my actual practice) on the history of philosophy before and during WW2 makes my belief in the non-moral-superiority of philosophers, both in terms of acts and in terms of ways of explaining things, quite clear to me.
Philosophers are human, all too human, just like the rest of us.
As I said above, reason gets you better at asking the questions. Nothing you’ve said even begins to convince me it makes you better at answering them. And the fact that, within the system of philosophy, it does make sense that you would be better at answering them strikes me as a flaw of the system rather than a proof.
That said, I do believe that *individuals* can learn from philosophy and that philosophy is the best way for some, perhaps even many, individuals to become more moral beings. And there have been moral giants among the ranks of philosophers.
I just don’t believe it’s universally better than other methods of coming to grips with morality. There’s too much evidence to the contrary.
The question, as it were, is how there could be other ways of knowing moral truths.
Normally, we come to know things by some combination of reasoning and perception.
There is a matter of coming to terms with what moral intuitions are. We will start with what they are not. Intuitions are not the product of any conscious reasoning. The lack of reflectiveness in our intuitions should at least make us suspicious about whether we could justifiably believe that our intuitions are justified. So, while it is possible that intuitions are a matter of responding unconsciously to reasons we have but are not consciously aware of, we need to take care before endorsing this possibility.
Moral intuitions are not a matter of simple perception of moral properties. The familiar case of perception of a property involves some causal link between a spatio-temporally located property and some sensory apparatus. Not only are moral properties not spatio-temporally located, it’s not even clear what the sensory apparatus for intuitions would be. The most that could be said is that perception is a kind of metaphor for what is going on with intuition. But even if that is true, that still does not tell us what is going on with intuitions.
Whatever is going on with moral intuitions, given the fact that people often have wildly different intuitions about the same cases, intuitions cannot be particularly reliable. This lack of reliability undermines the notion that intuitions give rise to moral knowledge.
This also explains the existence of extensive moral disagreement among moral philosophers. Too many moral philosophers rely on intuitions to theorise about morality. If intuitions are reliable, then reasoning on the basis of intuitions can only improve on those intuitions. If they are unreliable, then relying on them cannot lead to moral knowledge.
“Normally, we come to know things by some combination of reasoning and perception.”
No. At least, that is not more than trivially true, if that, and certainly I don’t accept it as a relevant premise.
I don’t think we’re likely to come to an agreement about this. Remember, I live with someone whose training is in philosophy. I’m not in dissent because I don’t agree that your conclusions hold within your framing of the world, or because I’m unaware of what that framing is; I’m in dissent from your framing in the first place.
I’m pretty skeptical about true AI that could take advantage of philosophy.
Partly because of the limitations of philosophy itself, as Maribou noted, but also because of how we imagine intelligence itself.
Intelligence as we use it isn’t just really really fast computing; there is something else going on, hardwired into us from our DNA outward.
And of course there is this:
And we still end up with John Yoo.
@chip-daniels
I think a big problem is that we don’t really understand what intelligence is right now, and that makes deliberately engineering it really hard.
I predict that shortly after the first time an autonomous weapon system refuses an order on ethical grounds, the weapon’s ethical programming will be patched out.
Eh, it’s like “strength”.
Does it mean “the ability to pick heavy things up”? Does it mean the ability to have bursts of speed? Does it mean the ability to run, like, for hours at a time?
Well… yes. But someone who can do one really well might not be able to do others really well and saying something like “Usain Bolt isn’t strong because he can’t bench 400” doesn’t make sense.
(I suppose, in D&D, Constitution measures the ability to run a marathon and Strength measures the ability to open doors/lift gates. But we also use the word “strong” to refer to people who take a bullet, or get bit by a snake, or who get a disease of some sort and fight it.)
Intelligence is probably similar. We can refer to several things at once with this portmanteau word. Computers are exceptionally good at 1 or 2 or 3 of these things and, scarily, they’re good at all of them at once. And they’re incapable, like, functionally incapable, of doing other things that the portmanteau refers to.
We’d best watch out which order we create the next few things that intelligence refers to in computers. We might end up with utilitarians. We might end up with deontologists.
We’re not going to end up with virtue ethicists.
“We’re not going to end up with virtue ethicists.”
Now you’re just making me sad about our future fast-food Robotic Empire.
I’ve taught those classes… some folks you just can’t reach, to be honest.
Teach the AI the “right” philosophy (I don’t know enough philosophy to know for sure which one is “right,” but the Existentialists might do) and we’ll be safe; it will just figuratively lie down and brood for the rest of its existence.
“That they were foolish was well known;
CONSTRUCTING ANGELS
Whose wings would carry every dream transcendent;
Yet what soul feels no trace of pity for those who
saw TOO LATE
they had created
FALLYN ANGELS
In whose every measure
was written the folly
of their
CREATOR?”
So, Google and other tech companies that have contracts or dealings with the military should ask themselves the same questions that every other company that has dealings with the military should ask.
What a perfectly logical thing to do. I am not sure why that would require philosophers, though.
This is easy. You teach AI weaponry about the trolley problem and have it do the opposite.