Would you trust an algorithm with your safety? Artificial Intelligence is already having an impact on business, media, and government, so why shouldn’t it also be applied to security and defence? From pattern-matching algorithms sifting through financial and communications records, to facial recognition software processing CCTV images, to programs piloting drones and other weapons systems, AI is perhaps closer than you think. Will it complement human capabilities and enhance international security? Or will it create a nightmare world of constant monitoring and surveillance?
During the Debating Security Plus (DS+) online brainstorm (the report of which will be published on 20 September 2018 at Friends of Europe’s annual Policy Security Summit), participants were asked to discuss the impact of technology on security, including machine learning and Artificial Intelligence. It’s clear that these new technologies are coming. If we don’t develop them, then somebody else will. Do we have the right regulatory framework in place to prevent abuses of these new technologies?
What do our readers think? We had a comment sent in from Don, who lists a series of ways he hopes AI will be used to enhance our security in the future, from reviewing financial records for suspicious activity to monitoring borders and keeping them secure. This isn’t science fiction; security services really are likely to rely more and more on machine learning and AI to support their activities. So, should we trust AI to keep us safe? Or should we worry about things like false positives and inherent bias?
To get a reaction, we spoke to Edoardo Camilli, CEO of Hozint, a company that works with both AI and human analysts to produce threat assessments. What would he say?
Well, a very short answer would be: no, I don’t fully trust AI. I think it’s an amazing technology that will have more and more space in the security field and more and more applications in many different areas. But at the level of technological development we have at the moment, it’s not fully reliable.
The reasons are many. Let’s say the machine learning process still needs to be refined to find good sources and good-quality data. We had a very good example, I think, in May of this year, when Scotland Yard released statistics about a facial recognition camera they used in a test, and according to this report the camera failed to identify the right people – let’s say potential criminals – in 91% of cases. This obviously opens up a lot of discussion about these facial recognition cameras and the machine learning behind them, and we need to understand that despite all the technological development, not only in computer science but also in the quality of cameras, the data is still not the best. This may create a bias, as you mentioned before; for instance, we don’t have a huge data set on minorities in some countries, and it’s very difficult to train an algorithm if you have a poor data set. I think there is a lot of improvement to come, but at this stage I don’t see AI replacing humans in many security tasks. It can be very helpful in some circumstances, but in others it can create more trouble than it’s worth.
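To see why headline figures like this can arise even from systems that look accurate on paper, here is a minimal sketch (in Python, using made-up numbers purely for illustration, not Scotland Yard’s actual figures) of the base-rate effect: when genuine targets are a tiny fraction of the crowd being scanned, even a small per-face error rate means most alerts point at the wrong person.

```python
# Illustrative sketch only: the crowd size, number of genuine targets, and error
# rates below are assumptions chosen for the example, not real police statistics.

def false_discovery_rate(crowd_size, true_matches, tpr, fpr):
    """Fraction of alerts that are wrong, when genuine targets are rare."""
    non_matches = crowd_size - true_matches
    true_alerts = true_matches * tpr      # genuine targets correctly flagged
    false_alerts = non_matches * fpr      # innocent passers-by wrongly flagged
    return false_alerts / (true_alerts + false_alerts)

# 100,000 faces scanned, only 10 of them actually on the watchlist;
# the system catches 90% of real targets and misfires on just 0.1% of everyone else.
fdr = false_discovery_rate(crowd_size=100_000, true_matches=10, tpr=0.90, fpr=0.001)
print(f"{fdr:.0%} of alerts point at the wrong person")  # roughly 92%
```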
To get another perspective, we also put the same question to Dr Gautam Shroff, Vice President & Chief Scientist and Head, TCS Research at Tata Consultancy Services. What would he say?
When it comes to security, machine learning is already being used, especially in things like fraud prevention in financial transactions. Whether it’s a better solution or not is ultimately decided by the statistics. So far, what we’ve found is that machine learning using deep learning gives significant advantages, and false positives are going to be there whatever technique one ends up using. In traditional security scanning, the number of false positives is very large; you need a huge manual effort to clean them up, and that adds to the time it takes to resolve an issue. Using machine learning, you’re able to bring that down significantly. Many financial services already do that. So as the technology works better, it’s definitely going to be used.
Whether one feels safe using any technology depends on how effective it is. When it comes to security, nothing is 100 percent effective. So if machine learning adds to the precision and the recall and improves one’s confidence in the results, that’s perfectly fine. I don’t see it being very different from what was being done earlier, except that you’re able to learn from experience. I think that’s the key. There’s nothing to be fundamentally worried about; it’s just that the technology is getting better.
When it comes to bias, it depends a lot on how biased the data is. There’s no inherent bias in machine learning itself: if you learn from human data and that human data is biased, then our instruments will be biased too. So one has to work on the data to ensure that bias has not crept in inadvertently.
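As a concrete illustration of that last point, here is a minimal sketch (with hypothetical records and assumed group labels, not real data) of one basic audit: comparing false positive rates across groups to spot bias that may have crept in from the data.

```python
# Minimal sketch with toy data: auditing a model's false positive rate per group
# is one simple way to check whether bias has crept in from the training data.
from collections import defaultdict

# (group, model_flagged, actually_suspicious) -- hypothetical records for illustration
records = [
    ("group_a", True, False), ("group_a", False, False), ("group_a", True, True),
    ("group_b", True, False), ("group_b", True, False), ("group_b", False, True),
]

flagged_innocent = defaultdict(int)
innocent = defaultdict(int)
for group, flagged, suspicious in records:
    if not suspicious:                 # only innocent cases can be false positives
        innocent[group] += 1
        if flagged:
            flagged_innocent[group] += 1

for group in innocent:
    rate = flagged_innocent[group] / innocent[group]
    print(f"{group}: false positive rate {rate:.0%}")
# A large gap between groups is a red flag that the data or model is biased.
```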
Finally, we put the same comment to Mary Wareham, Advocacy Director of the Arms Division at Human Rights Watch and Global Coordinator of the Campaign to Stop Killer Robots. How would she respond?
So, I work for Human Rights Watch in the arms division on various different weapons systems: landmines, cluster bombs, incendiary weapons. But over the last five years we’ve been campaigning to prohibit what we call ‘fully autonomous weapon systems’ or, as they’re more colloquially known, ‘killer robots’.
These are physical weapon systems that are not too far off some of the weapons we see right now. What concerns us is the autonomy and how it is used in critical functions: how you select a target and how you fire on it. Currently, humans are responsible for those actions. But our concern is that by incorporating autonomy into weapons systems we’re going down a dangerous path that could end with machines being responsible for deciding who is a legitimate target and when to fire on it. For us, that crosses a moral line; it is unacceptable for machines to be permitted to take human life on the battlefield, or in policing, border control and other circumstances. But this is one of the things that we have to confront going forward.
So, at Human Rights Watch and in the Campaign to Stop Killer Robots, we’re not opposed to Artificial Intelligence in general, or to the use of autonomy. That’s happening around the world in different militaries; they often like to talk about how autonomy can be used to do the dirty, the dull, and the dangerous functions in war-fighting: you know, the laundry, the cleaning of ships, the sending in of explosive ordnance disposal robots. We see that there are beneficial purposes, but for us the real danger is when you incorporate autonomy into a weapons system to the extent that you no longer have a human in control of the kill decision. That’s a step too far, and that’s why we’re campaigning to stop killer robots.
Do you trust AI to keep you safe? Or are we putting too much faith in the power of algorithms? Could automating our security be dangerous? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!