Developments in automation and artificial intelligence are advancing at a dizzying pace. “Old-fashioned” humans are set to be replaced in many professions. Last week, delegates met at the United Nations to discuss whether states could automate soldiers. The discussions in Geneva were about machines that can identify, attack and kill enemies without a human directly pulling the trigger. What are the military and ethical aspects of these so-called “lethal autonomous weapons systems”?
This is a moral question, not a technological one. Fully autonomous weapons platforms may not yet be operational, but the technology is coming. Its feasibility is clear when you look at weapons systems that already exist. For example, the German Bundeswehr already operates an automated air defence system that can potentially intercept missiles without human involvement. Humans still need to authorise the interception, but it is conceivable that they could one day be removed from the process. Artificial Intelligence can be applied to cars, boats, aeroplanes and drones. The arms race will probably make this possible sooner rather than later, especially since autonomous weapons systems are relatively cheap compared to human soldiers.
Now for the moral dilemma! As a wise man once said, Artificial Intelligence can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. On the other hand, when an autonomous weapon is deployed, you don’t need to send soldiers to risk their lives. Properly programmed, such systems may even be less prone to poor decisions than humans in stressful situations. Industry experts are nevertheless alarmed: more than a hundred roboticists, scientists and entrepreneurs have signed an open letter demanding a global ban, including figures such as Elon Musk, Stephen Hawking and Steve Wozniak – hardly enemies of technical progress!
What do our readers think? We had a comment from Phillip, who believes that a human will ultimately always be pulling the trigger, even as drones become more and more common. Is he right? If so, why hold the UN meeting in Geneva last week?
To get a reaction, we spoke with Thomas Küchenmeister of Facing Finance, a German member organisation of the international “Stop Killer Robots” campaign. What would he say to Phillip?
It’s naive to think that humans will always make decisions about life and death when it comes to weapons systems. Unfortunately, the development of weapons systems shows that the direction of travel is towards greater delegation and the automation of decision-making. We are now at a stage where weapons with autonomous capabilities already exist. They are described as ‘crude’ or ‘immature’ systems, but that does not make them less dangerous; on the contrary! Even if we incorporate more complex intelligence into weapons, this technology will never be able to understand context or take account of international law. Therefore, I would not agree with Phillip, unless we can enforce a ban now. That’s what our campaign is all about.
Next up, we had a comment from Aaron, who is convinced that if you use drones instead of soldiers in war, you lose all sense of the value of human life because you do not have to bear your own losses. Does Herr Küchenmeister agree with him?
Yes, I agree with Aaron. If one delegates this decision over life and death to a machine, then the inhibition threshold for the use of force automatically decreases. Machines do not have a conscience. A machine will not worry about whether the ‘target’ is holding a gun or a shovel; it will simply execute its command according to its specifications.
Finally, we had a pithy one-word summary of the discussion from Thomas, who simply said: “Terminator”. Will we soon have to fear such killer robots in action? Or is that more for Hollywood? What are the chances of a ban?
[Terminator] is always the most striking example, but it’s not what we’re talking about. We’re talking about the situation on the ground today and what is already being developed. We are dealing with weapons systems with sensors and algorithms that fire without a human having to intervene.
A ‘Terminator’ is an advanced Artificial Intelligence that has a mission but makes its own decisions. We are not that far yet. Therefore, it is important to be clear about what we are actually discussing. There is still no universally agreed definition of what ‘fully autonomous weapons’ even are. Some say the debate is just about Terminator-style robots, but I personally believe that all autonomous weapons systems should be put to the test of international law.
I do not know if there will be a ban. Some countries, such as Germany and France, are trying to impose a binding ban under international law. The situation is dangerous because technological development is so rapid. We cannot take our time any more; we have to set limits now. The technology is already feasible. Many scientists at the conference in Geneva confirmed that it has been around for a while. Arms companies say they could potentially already build autonomous weapons; they can do anything, and are just waiting for the political ‘go’…
Should autonomous drones be allowed to kill? Do we need a UN convention on the prohibition of fully-automated drones? Or is this just science fiction? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!
15 comments
Yes.
Should autonomous drones be allowed to kill? No. I fully comprehend humans’ curiosity and desire to push boundaries with technology, but this will be a slippery slope. What next with learning robots and machines? Would you really want one in full control? I’m betting the answer is “no”, so why would anyone want one that can decide to kill?
Of course, well done.
It’s because the drones would be handled by Mr. Everybody, including hackers, extremists and so on.
We aren’t robots, we are humans.
Aren’t we?
The day AI realises how pathetic/useless humans are, it will logically not give a shit about preserving us :D
Should autonomous drones be allowed to kill?
This question, in and of itself, is dangerous thinking to present to the general public. The mere fact that it is even under consideration shows how low the so-called leaders in these offices have stooped.
Time to read your rights, here for all to see.
https://www.equalityhumanrights.com/en/human-rights/human-rights-act
We have the right, under these laws, to refuse to keep company with those not suited to our mental comfort. In other words, not to be forced to be amongst or mix with individuals or groups we fear, feel threatened by, or, indeed, just don’t like.
We have the right to freedom of expression, which should mean some kind of compensation if your right to freedom of thought, and the exchange of those thoughts, is impinged upon.
Who is to stop them?
No, they shouldn’t. People should stop making wars. So they want to kill someone else, but they don’t want anyone from their own side getting killed. The situation could easily get out of control and unintended targets could be killed.
Also, the secret services should stop killing people (rather than get robots to do it).
A ban on ‘fully autonomous weapons systems’ means nothing.
1) AI is a ‘dual-use’ technology, so advancements in civil AI mean advancements in military AI, and vice versa.
An international ban might slow down development in many countries, but not for the big players, so it would only narrow the scope of some of the problems.
2) Once deployed on a battlefield, how do you prove they killed autonomously? It would be a matter of a click to allow them to be fully autonomous, or to make them request a human decision on whether to kill or not. Also, the degree of freedom could depend on context and the evolution of the scenario, freeing up humans to take the decision only in the more critical, or blurry, scenarios.
3) TIME is critical, and in a full-scale war, always requiring the intervention of a human on whether to kill or not would mean time lost, and thus many more of your weapons/robots destroyed.
4) War has its own laws, and the only sin is losing, as history teaches us.
To lose is the only sin.
God bless Europe.
Better we should debate sustainable programmes for energy management, or a new era of managing the commons. Contemporary defence policy should be an outcome of a general policy of environmental respect (and, of course, respect for individuals, as with the EU’s main strategies of economic deterrence against a possible threat).
As long as you let them loose in the European Parliament
No one should be allowed to kill. And drones should not be allowed to do anything apart from obeying humans.
https://youtu.be/_mqDjcGgE5I?t=35
What will stop them?
Here is an example of the future we can ‘look forward to’: a future that we, as voters living in a fragile democracy, cannot control. This move is bringing us closer to Slave Planet, the aim of Globalization.
https://www.youtube.com/watch?v=dR-hXMjHVtI
And here, already at our fingertips, is the final destruction of the female of our species.
https://www.youtube.com/watch?v=W0_DPi0PmF0
Sophia is more than dangerous. She is ‘the terminator’ who cannot ‘yet’ do these things. She tells us!
A relevant article on this:
http://www.bbc.co.uk/news/uk-politics-42153140