Teaching Philosophy to Weaponized AI
The philosophical questions of morality in warfare are as old as recorded human history. With the rise of AI and drone technology, those age-old questions are only getting more complicated.
“When it comes to AI and weapons, the tech world needs philosophers” by Ryan Jenkins
What’s harder is figuring out, going forward, where to draw the line — to determine what, exactly, “cause” and “directly facilitate” mean, and how those limitations apply to Google projects. To find the answers, Google, and the rest of the tech industry, should look to philosophers, who’ve grappled with these questions for millennia. Philosophers’ conclusions, derived over time, will help Silicon Valley identify possible loopholes in its thinking about ethics.
The realization that we can’t perfectly codify ethical rules dates at least to Aristotle, but we’re familiar with it in our everyday moral experience, too. We know we ought not lie, but what if it’s done to protect someone’s feelings? We know killing is wrong, but what if it’s done in self-defense? Our language and concepts seem hopelessly Procrustean when applied to our multifarious moral experience. The same goes for the way we evaluate the uses of technology.
In the case of Project Maven, or weapons technology in general, how can we tell whether artificial intelligence facilitates injury or prevents it?
The Pentagon’s aim in contracting with Google was to develop AI that classifies objects in drone video footage. In theory, at least, the technology could be used to reduce civilian casualties from drone strikes. But it’s not clear whether this falls afoul of Google’s guidelines. Imagine, for example, that artificial intelligence classifies an object captured on a drone’s video as human or nonhuman and then passes that information to an operator, who makes the decision to launch a strike. Does the AI that separates human from nonhuman targets “facilitate injury”? Or is the resulting injury from a drone strike caused by the operator pulling the trigger?
No matter how advanced the technology, at some point a human will influence that technology and what it does. The military spends millions of dollars and thousands of hours training people in the Law of Armed Conflict, Rules of Engagement, lawful orders, and so on. That we may soon have to worry about weapons systems knowing the same is a brave new world indeed.