Automation and artificial intelligence are advancing at a dizzying pace, and “old-fashioned” humans are set to be replaced in many professions. Last week, delegates met at the United Nations in Geneva to discuss whether states could automate soldiers: machines that can identify, attack and kill enemies without a human directly pulling the trigger. What are the military and ethical aspects of these so-called “lethal autonomous weapons systems”?
This is a moral question, not a technological one. Fully autonomous weapons platforms may not yet be operational, but the technology is coming, as existing weapon systems make clear. The German Bundeswehr, for example, already fields an automated air defence system that could potentially intercept missiles without human involvement. A human still has to authorise each interception, but it is conceivable that humans could one day be removed from the process entirely. The same artificial intelligence can steer cars, boats, aircraft and drones. The arms race will probably make this possible sooner rather than later, especially since autonomous weapon systems are cheap relative to human soldiers.
Now for the moral dilemma! As a wise man once said, artificial intelligence can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. On the other hand, when an autonomous weapon is deployed, no soldiers have to risk their lives. Properly programmed, such systems may even be less prone to poor decisions than people under stress. Even so, industry experts are alarmed: more than a hundred roboticists, scientists and entrepreneurs have signed an open letter demanding a global ban, among them Elon Musk, Stephen Hawking and Steve Wozniak – hardly enemies of technological progress!
What do our readers think? We had a comment from Phillip, who believes that even as drones become more and more common, a human will ultimately always be pulling the trigger. Is he right? If so, why hold last week’s UN meeting in Geneva at all?
To get a reaction, we spoke with Thomas Küchenmeister of Facing Finance, a German member organisation of the international “Stop Killer Robots” campaign. What would he say to Phillip?
It’s naive to think that humans will always make the decisions about life and death where weapons systems are concerned. Unfortunately, weapons development shows that the direction of travel is towards ever greater delegation and automation of decision-making. We are already at a stage where weapons with autonomous capabilities exist. They are described as ‘crude’ or ‘immature’ systems, but that does not make them any less dangerous – quite the contrary! And even if we build more complex intelligence into weapons, this technology will never be able to understand context or take account of international law. So no, I would not agree with Phillip – unless we can enforce a ban now. That is what our campaign is all about.
Next up, we had a comment from Aaron, who is convinced that using drones instead of soldiers in war erodes all sense of the value of human life, because you no longer have to bear your own losses. Does Herr Küchenmeister agree with him?
Yes, I agree with Aaron. If you delegate the decision over life and death to a machine, the threshold for using force automatically drops. Machines have no conscience. A machine will not care whether the ‘target’ is holding a gun or a shovel; it will simply execute its command according to its specifications.
Finally, we had a pithy one-word summary of the discussion from Thomas, who simply said: “Terminator”. Will we soon have to fear such killer robots in action? Or is that more the stuff of Hollywood? And what are the chances of a ban?
[Terminator] is always the most striking example, but it’s not what we’re talking about. We’re talking about the situation on the ground today and what is currently being developed. We are dealing with weapon systems equipped with sensors and algorithms that fire without a human having to intervene.
A ‘Terminator’ is an advanced artificial intelligence that has a mission but makes its own decisions. We are not there yet, so it is important to be clear about what is actually at stake. There is still no universally agreed definition of what ‘fully autonomous weapons’ even are. Some say the debate is only about Terminator-style robots, but I personally believe that all autonomous weapons systems should be tested against international law.
I do not know whether there will be a ban. Some countries, such as Germany and France, are pushing for a binding prohibition under international law. The situation is dangerous because technological development is so rapid; we cannot take our time any more, we have to set limits now. The technology is already feasible. Many scientists at the Geneva conference confirmed that it has existed for a while. Arms companies say they could potentially already build autonomous weapons; they can do anything, and are just waiting for the political ‘go’…
Should autonomous drones be allowed to kill? Do we need a UN convention prohibiting fully automated drones? Or is this just science fiction? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!