Killer Robots: The Future of War?
Artificial intelligence (AI) will transform our lives. It will touch almost every aspect of society: business, education, transportation, medicine, even politics. In most of these, it will be a good thing, removing drudgery and improving productivity. But there is one place where I fear its arrival, and that is the military.
The world will be a much worse place if, in 20 years’ time, the military are using lethal autonomous weapons systems (LAWS), for there are no laws about LAWS. The media like to call them “killer robots.” The problem with that name is that it conjures up a picture of The Terminator. But it is not The Terminator that worries me or thousands of my colleagues working in AI. It is much simpler technologies that are, at best, a decade or so away. Take an existing Predator drone and replace the human pilot with a computer: this is technically possible today.
The attractiveness of such technologies is obvious. The weakest link in a drone is the radio link back to base. Drones have been sabotaged by jamming their radio link. Have the drone fly, track, and target for itself, and you have the perfect weapon from a technological perspective. It will never sleep. It will fight 24/7. It will have superhuman accuracy and reflexes.
There are, however, many reasons why this will be a terrible development: it will amount to a revolution in warfare. The first revolution in warfare was the invention of gunpowder. The second was the invention of nuclear weapons. And this will be the third. Each was a step change in the speed and efficiency with which we could kill our opponents.
These will be weapons of mass destruction. Previously, if you wanted to do harm, you needed an army of soldiers to wage war. Now you would need just one programmer. As with the chemical, biological, and nuclear weapons that came before, we will need to ban such weaponry.
These will be weapons of terror. They will fall into the hands of terrorists and rogue states that will have no qualms about turning them on civilian populations. They will be an ideal weapon with which to suppress a civilian population. Unlike humans, they will not hesitate to commit atrocities, even genocide.
These will not be more ethical than human soldiers. We don’t know today how to build autonomous weapons that will follow international humanitarian law, and we don’t know of any computer system that can’t be hacked. And there are plenty of bad actors out there who will override any safeguards that might be put in place.
These weapons will destabilize an already shaky geopolitical order. It will take only a modest bank balance to field a powerful army. They will lower the barriers to war. We may even have “flash” wars when opposing robots get into unexpected feedback loops, much as interacting trading algorithms have triggered “flash crashes” on financial markets.
These will be the Kalashnikovs of the future. Unlike nuclear weapons, they will be cheap and easy to produce. This doesn’t mean they can’t be banned: chemical weapons are also cheap and easy to produce, yet they have been banned. And we don’t need to develop autonomous weapons as a deterrent against those who might ignore a ban; we don’t develop chemical weapons to deter those who might use chemical weapons. We already have plenty of military, economic, and diplomatic means with which to deter those who choose to ignore international treaties.
Above all, there is a deep moral argument that we give up an essential part of our humanity if we hand over to machines the decision of whether someone lives or dies.
Let’s not go down this road.
This essay was originally published in 2018 in Encyclopædia Britannica Anniversary Edition: 250 Years of Excellence (1768–2018).