War in the Age of Intelligent Machines
By Bryn Shaffer
This March, Saint Mary’s University played host to an engaging and heated debate on the use of autonomous weapons of war, as part of the Automaton! From Ovid to AI public lecture series. The debate, a video of which is available from CBC News, pitted acclaimed BBC commentator and activist Dr. Noel Sharkey against Dalhousie philosophy professor Dr. Duncan MacIntosh, and was moderated by Dalhousie professor of philosophy Dr. Letitia Meynell.
You could tell before the debate even began, by the way the crowd crammed itself into every inch of standing space in the large auditorium, that we were in for an engaging and thought-provoking discussion.
After introductions by Dr. Sarty, dean of graduate studies at Saint Mary’s University, the two debaters took their positions: Dr. MacIntosh would argue in favour of autonomous weapons of war, and Dr. Sharkey would argue against them. Dr. Meynell stepped up to the mic to lay out the ground rules, rang the proverbial bell, and we were off.
Dr. MacIntosh started off by asserting that, regardless of the points he would raise in favour of autonomous weapons of war, he did not want to be misconstrued as being in favour of war or violence itself. At its core, his position was that peaceful means of engagement are sometimes not possible, and that in those cases autonomous weapons are the superior choice.
He went on to argue that one of the major advantages of autonomous weapons is that they spare more human lives in the long run and shield soldiers from experiencing the horrors of war. Further, he posited that the use of autonomous weapons would be comparable to the use of soldiers, since both operate on the basis of commands. The advantage of autonomous weapons over soldiers, however, is that they can carry out hazardous missions that humans cannot be sent on.
Another point Dr. MacIntosh raised in favour of autonomous weapons is that they would carry out fewer killings informed by prejudice, resulting in fewer deaths. However, because robots cannot make informed judgment calls, he stated that they should not be sent on missions that require human discretion. As an example of such a mission, he described a situation in which a child soldier is seeking to surrender to opposing forces.
Then Dr. MacIntosh’s time ran out, and it was Dr. Sharkey’s turn to take the floor. Dr. Sharkey began by asserting that militaries have a science fiction perspective on the abilities of robots. To explain, he presented slides showing how autonomous weapons actually function and gave examples of weapons currently in use and in development. In simplified terms, he explained, an autonomous weapon receives a sensory input (such as a reading from a heat sensor) that triggers a motor output (such as firing a gun).
Dr. Sharkey then explained that autonomous weapons, once activated, are completely under computer control. This is a problem because, as he put it, “robots are easy to fool”, and because current technology is not reliable enough at distinguishing civilians from armed combatants. Further, because we cannot predict how the algorithms operating these weapons will behave in every circumstance, especially when autonomous weapons come into contact with one another, unpredictable and dangerous outcomes could occur on the battlefield. The algorithms are also biased in nature, which could lead to indiscriminate targeting practices.
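To make Dr. Sharkey’s simplification concrete, here is a minimal, purely illustrative sketch of the kind of sense-act loop he described, in which a sensory reading above a threshold triggers a motor output. The function names, sensor values, and threshold are hypothetical and do not reproduce any real system; the point is only that such a loop compares numbers, with no notion of who or what produced them, which is why these systems are so easy to fool.

```python
# Illustrative sketch of the simplified sense-act loop described in the debate.
# All names and values are hypothetical; no real weapon system is reproduced here.

SENSOR_THRESHOLD = 37.0  # hypothetical heat reading, in degrees Celsius


def read_heat_sensor() -> float:
    """Stand-in for a hardware sensor; returns a simulated heat reading."""
    return 38.5  # a person, an animal, or a warm engine all look the same here


def trigger_motor_output() -> None:
    """Stand-in for the motor output that the sensory input would trigger."""
    print("motor output triggered")


def sense_act_step() -> None:
    # The loop only compares a number against a threshold; it has no concept
    # of civilian versus combatant, which is the core of Dr. Sharkey's concern.
    if read_heat_sensor() > SENSOR_THRESHOLD:
        trigger_motor_output()


if __name__ == "__main__":
    sense_act_step()
```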
Dr. Sharkey also argued that the use of these weapons will accelerate and amplify the pace of battle. And while autonomous weapons can be more accurate in targeting, the problem rests with what they are choosing to target. Dr. Sharkey was clear in reminding us that these machines are weapons, ultimately in the hands of humans: while the robots themselves may not be prejudiced, as Dr. MacIntosh noted, they are the tools of those who are. Further, these weapons will not be reserved for militaries. Here, Dr. Sharkey offered the example of drone swarms operated by police forces using pepper spray on crowds.
After both parties had presented their views, each was given a brief opportunity to respond and defend their points. Dr. MacIntosh began by asserting his position that autonomous weapons should be used as threats to deter violence rather than actually put into use. To this, Dr. Sharkey responded that while we often hope weapons will never be used, this is not the reality, as they are always implemented in the execution of mass murder.
The floor was then opened up for questions from the audience, which sparked interesting responses from both speakers. Questions ranged from issues of legality and science fiction to questions of morality and implementation. By the end of the event, the audience was left with much to think about concerning the future of robotics and warfare, and what role we all will play in that future.
Throughout the debate I found myself thoroughly engaged, on the edge of my seat, biting my tongue. It was not only the topic that engaged me, but also the often heated interactions between the speakers. When Dr. MacIntosh proposed the need for autonomous weapons as a means of intervention in what he termed dictator and third world states, Dr. Sharkey was quick to call out the neo-colonial narrative being put forward. On another point, Dr. Sharkey called Dr. MacIntosh naive for proposing that it is currently possible to create unbiased programming. At these points I found myself aligning with Dr. Sharkey, wondering to what lengths nations such as the USA might go to justify and reframe their creation and use of autonomous weapons as ethical and unbiased.
So what can we take away from this battle of wits on autonomous weapons? Are killer robots truly an inevitable future of warfare? Is there hope for a more peaceful alternative? As someone who gets chills at the prospect of an Amazon drone landing in my yard, I greatly hope for the latter. As Dr. Sharkey argued, these weapons of destruction are being thought of in science fiction terms, but they are built and used in the real world, where real consequences exist. While the sparing of human lives is an alluring prospect of autonomous war, we must realize that for those who build and advocate for killer robots, this fantasy is nothing more than a carrot driving their horses into battle. War without death, under the implementation of autonomous weapons, is an oxymoron at best. At worst, killer robots are a near future that will affect all of us, both on and off the battlefield.
If you are interested in learning more about the speakers, or in how you can join the campaign against killer robots, check out the links below.
CBC’s The National coverage of the event can be viewed in their video “Stopping Killer Robots Before They Get To Us First.”
Dr. Noel Sharkey’s Twitter for the campaign against killer robots can be accessed here.
Dr. Duncan MacIntosh’s profile can be read here.
Bio
Bryn Shaffer is an MA student in the Women and Gender Studies Program at Saint Mary’s University, where she is exploring the topic of gendered robot design in both science fiction and reality.