Would you trust an algorithm with your safety? Artificial Intelligence is already having an impact in the world of business, media, and government, so why shouldn’t it also be applied to security and defence? From pattern matching algorithms sifting through financial and communications records, to facial recognition software processing CCTV images, to programmes piloting drones and other weapons systems, AI is perhaps closer than you think. Will it complement human capabilities and enhance international security? Or will it create a nightmare world of constant monitoring and surveillance?

During the Debating Security Plus (DS+) online brainstorm (the report of which will be published on 20 September 2018 at Friends of Europe’s annual Policy Security Summit), participants were asked to discuss the impact of technology on security, including machine learning and Artificial Intelligence. It’s clear that these new technologies are coming. If we don’t develop them, then somebody else will. Do we have the right regulatory framework in place to prevent abuses of these new technologies?

What do our readers think? We had a comment sent in from Don, who lists a series of ways he hopes that AI will be used to enhance our security in the future, from reviewing financial records and looking for suspicious activity, to monitoring borders and keeping them secure. This isn’t science fiction; in the future, it really is likely that security services will rely more and more on machine learning and AI to support their activities. So, should we trust AI to keep us safe? Or should we worry about things like false positives and inherent bias?

To get a reaction, we spoke to Edoardo Camilli, CEO of Hozint, a company that works with both AI and human analysts to produce threat assessments. What would he say?

Well, a very short answer would be: no, I don’t fully trust AI. I think it’s an amazing technology that will take up more and more space in the security field and find application in many different areas. But at the current level of technological development, it’s not fully reliable.

The reasons are many. The machine learning process still needs better sources and better-quality data. We had a very good example, I think, in May of this year, when Scotland Yard released statistics about a facial recognition camera they used in some tests. According to that report, the camera failed in 91% of cases to identify the right people, let’s say potential criminals. This obviously opens up a lot of discussion about these facial recognition cameras and the machine learning behind them. We need to understand that, despite all the technological development, not only in computer science but also in the quality of cameras, the data is still not the best. This can create bias, as you mentioned before; for instance, we don’t have a large data set on minorities in some countries, and it’s very difficult to train an algorithm on a poor data set. I think there has been a lot of improvement, but at this stage I don’t see AI replacing humans in many security tasks. It can be very helpful in some circumstances, but in others it can create more trouble than it’s worth.
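Edoardo’s point about the Scotland Yard trial is, at heart, a base-rate problem: when genuine matches are rare in a scanned crowd, even a sensitive matcher produces mostly false alarms. A minimal sketch in Python, using purely illustrative numbers (not the Met’s actual figures):

```python
# Base-rate effect in watchlist face matching (illustrative numbers only).

def match_precision(crowd_size, watchlist_hits, true_positive_rate, false_positive_rate):
    """Fraction of alerts that actually point at a watchlist person."""
    true_alerts = watchlist_hits * true_positive_rate
    false_alerts = (crowd_size - watchlist_hits) * false_positive_rate
    return true_alerts / (true_alerts + false_alerts)

# 100,000 faces scanned, only 10 genuinely on the watchlist,
# a 90%-sensitive matcher with a seemingly tiny 0.1% false-alarm rate:
p = match_precision(100_000, 10, 0.90, 0.001)
print(f"{p:.1%} of alerts are real matches")  # prints "8.3% of alerts are real matches"
```

Even with a 90% hit rate and a false-alarm rate of just 0.1%, only around 8% of alerts point at a real watchlist match, which is the same order of failure the trial reported.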

To get another perspective, we also put the same question to Dr Gautam Shroff, Vice President & Chief Scientist and Head, TCS Research at Tata Consultancy Services. What would he say?

When it comes to security, machine learning is already being used, especially in things like fraud prevention in financial transactions. Whether it’s a better solution or not is ultimately to be decided by the statistics. So far, what we’ve figured out is that machine learning using deep learning gives significant advantages, and false positives are going to be there whatever technique one ends up using. In traditional security scanning, the number of false positives is very large; you need huge manual effort to clean them up, and it adds to the time to resolve an issue. Using machine learning, you’re able to bring that down significantly. Many financial services already do that. So, as the technology works better, it’s definitely going to be used.

Whether one feels safe using any technology depends on how effective it is. When it comes to security, nothing is 100 percent effective. So if machine learning adds to the precision and the recall and improves one’s confidence in the results, it’s perfectly fine. I don’t see it being very different from what was being done earlier, except that you’re able to learn from experience. I think that’s the key. There’s nothing to be fundamentally worried about. It’s just that you’re getting technology that is getting better.

When it comes to bias, it depends a lot on how biased the data is. There’s no inherent bias in machine learning itself; if you learn from human data and human data is biased, then our instruments will be biased. So one has to work on the data to ensure that bias has not crept in inadvertently.
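Dr Shroff’s advice to “work on the data” can be made concrete with two simple audits: check how each group is represented in the training set, and compare the model’s error rate per group. A minimal sketch with entirely made-up records (the group names, counts and predictions are hypothetical):

```python
from collections import Counter

def representation(records, group_key):
    """Share of the data set contributed by each group."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def error_rate_by_group(records, group_key):
    """Per-group error rate: share of records the model got wrong."""
    errors, totals = Counter(), Counter()
    for r in records:
        totals[r[group_key]] += 1
        if r["prediction"] != r["label"]:
            errors[r[group_key]] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical audit: group B is under-represented and misclassified more often.
data = (
    [{"group": "A", "label": 1, "prediction": 1}] * 90
    + [{"group": "B", "label": 1, "prediction": 1}] * 7
    + [{"group": "B", "label": 1, "prediction": 0}] * 3
)
print(representation(data, "group"))       # {'A': 0.9, 'B': 0.1}
print(error_rate_by_group(data, "group"))  # {'A': 0.0, 'B': 0.3}
```

If one group is both under-represented and misclassified more often, that is exactly the kind of inadvertent bias he warns about, and it is visible before any model is deployed.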

Finally, we put the same comment to Mary Wareham, Advocacy Director of the Arms Division of Human Rights Watch and Global Co-Ordinator of the Campaign to Stop Killer Robots. How would she respond?

So, I work for Human Rights Watch in the arms division on various different weapons systems; landmines, cluster bombs, incendiary weapons. But, over the last five years we’ve been campaigning to prohibit what we call ‘fully autonomous weapon systems’ or, more colloquially, known as ‘killer robots’.

These are physical weapon systems that are not too far off some of the weapons that we see right now. What concerns us is the autonomy and how it is used in critical functions of how you select a target and how you fire on that. Currently, humans are responsible for those actions. But our concern is that by incorporating autonomy into weapons systems we’re going down a dangerous path that could end with the machines being responsible for deciding who’s a legitimate target and when to fire on it or not. For us, that crosses a moral line; it’s unacceptable for us for machines to be permitted to take human life on the battlefield, or in policing and border control and other circumstances. But this is one of the things that we have to confront going forward.

So, at Human Rights Watch and in the Campaign to Stop Killer Robots, we’re not opposed to Artificial Intelligence in general, or to the use of autonomy. That’s happening around the world in different militaries; they often like to talk about how autonomy can be used to do the dirty, the dull, the dangerous functions in war-fighting; you know, the laundry, the cleaning of ships, the sending in of explosive ordnance disposal robots. We see that there are beneficial purposes, but for us the real danger here is when you incorporate autonomy into a weapons system to the extent that you no longer have a human in control of the kill decision. That’s a step too far, and that’s why we’re campaigning to stop killer robots.

Do you trust AI to keep you safe? Or are we putting too much faith in the power of algorithms? Could automating our security be dangerous? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!

IMAGE CREDITS: (c) / BigStock – ekkasit919

31 comments

What do YOU think?

  1.

    AI will do what the programmers tell it to do, so whether it’s the authorities or the AI run by the authorities, it makes no difference. It’s the ideology of the people telling you they are working for your safety that you should worry about. The Schengen area is the prime example of this sleight of hand.

  2.

    Depending on who programs the AI…

  3. EU Reform- Proactive

    Agree with Edoardo. Trust- No! A supplementary tool- yes!

    How does the EU use AI to shape its expansion & regulatory policies & harvest our thoughts? Similarly- not to be trusted!

    Please create an algorithm to expose day to day dishonesty and fake news.

  4.

    No way. How can you trust an artificial intelligence when the backdoor is open to every bank criminal, to every criminal politician who was not elected to be part of the European Community but was placed there! We are expecting a German President to replace one who was never elected but was placed there by Germany, a German President from a radical party!

  5. catherine benning

    Do you trust AI to keep you safe?

    Are we going to be verifiably informed as to who is programming these ‘helpers’? Are we going to be able to inspect directly and verify the code they are designed to work from?

    Additionally, how will we ever be sure who is running the political scene behind any of these possibly dangerous human job replacers? Why have these methods of service when millions of those born to this planet need employment? The equipment will have to be serviced constantly and replaced, then insured, then taxed and MOT’d. Not to mention the need for reprogramming on a very regular basis. Just like the painting of the Forth Bridge, it will never end. So, how much are they really going to save?


    I watched an old film yesterday, from the 1950s I think, and I realised why the workforce is so depleted: all menial jobs are being replaced at an unbelievable pace. Here are a few.

    Road sweepers, garbage collection, toll booth conductors, elevator gate men and lift operators, food safety inspectors, nit nurses in school, bank clerks, nursing sisters, nursing matrons, school caretakers, janitors, street toilet janitors and lady assistants keeping them functioning, church wardens, park gardeners, abattoir inspectors, street constables, in house delivery men employed by companies. The list goes on and on. Think about it and how all the economic share of funds has trickled up to the top rather than down to the bottom. And all done by political stealth.

    However, if you think about it, all these jobs kept our cities and towns civilised, clean and of a standard we had become accustomed to. This fake news that only immigrants will do this kind of work is an outrageous lie. The truth behind it is, they don’t want to use our taxes to pay for the upkeep of our infrastructure, which was the reason we were sold on the idea of paying taxes in the first place. Our councils are neglecting their duty in these areas in order to fork out for masses of immigrant invaders to be housed and fed by us for no good reason. Now why would they want to do that?

    The drain on the finances of our society is impoverishing us all at a rate that is criminal. Can anyone explain who decided this was a good idea for our Western standard of living? Who these people are and when are they going to be held accountable for the error of their ways?

  6.

    Perhaps more than Trump!!!!!????

  7.

    do you trust the internet to keep us safe? do you trust telephones to keep us safe? do you trust the telegraph to keep us safe? do you trust locomotion to keep us safe?

    Your question is as stupid as those

  8. EU Reform- Proactive

    “…….keep you safe?” WE all are Europe! Can WE still be saved- from the EU?

    …….”from a present totalitarian bureaucracy to one not based rigidly on EU rules, but leaving room for political choices within strong institutions………”

    Such renewal/reform has to come from national governments, like it or not; there is no alternative. AI is used as just another trivial political distraction!

    Answers to such pertinent counter-questions should be demanded from all participating politicians in preparation for the 2019 EU elections!




  9.

    AI is fine in some areas of security, but certain protocols have to have a real live person or persons with the final authority to say yes or no, and they will have to be accountable for any wrong decisions they make; a machine cannot be held liable.

  10.

    Nothing more serious to discuss?

  11.

    If we can trust the humans who create it. And the self-programming of AI has to be watched carefully.

  12. Sunil Kopparapu

    Generally yes, knowing that there is no foolproof system, AI or not. However, if AI is rogue-programmed… it is anything but safe!

  13. Hari Devarapalli

    Machine learning is effective when the data is complete and correct. If not, it is as good as any other technology, including statistics. If the inferences from machine learning are presented in a logical way and are comprehensible to human knowledge and understanding (in simple terms, if the machine can explain its deduction from the data and its application of rules in a given context), then trust in ML/DL increases. Otherwise it is just another blind intuition, which leads to subjectivity and in turn untrustworthiness.

    The hope is that the machine is not conditioned by normal human prejudices (age/gender/geography) regarding the demography of the subjects.

  14.

    Screenshot everything to screw up algorithms. It works.

  15.

    I don’t trust anything EU

  16.

    It will be safe only when the several keys are well protected. I hope so, anyway 😁

  17.

    No. I only trust my bed and my momma, I’m sorry.

  18.

    Yep. I mean, humans aren’t all that much better.

    • Cat Face

      Yeah… true .-.
