Artificial Intelligence is already changing society. Machine-learning algorithms are trading millions of euros in financial markets, predicting what people search for online and which shows they might like to watch on Netflix; AI is helping police identify criminals using facial recognition (albeit with mixed results) and sifting through climate change data. Soon, AI could be driving our cars and trains (even our ships and planes).
What comes next? How will these new technologies transform our workplaces, our homes, our cities, and our lives? Inevitably, there will be disruption. But can that disruption be minimised? And can the benefits of AI be shared fairly across society?
What do our readers think? First up, we had a comment sent in from Kristin, who argues:
[…] I would not be surprised if technology further creates divides and inequality. AI is anyway going to be disruptive, and so it makes sense the most vulnerable in society (including disabled) will be the most disrupted.
Is she right? Will the most vulnerable in society be the most disrupted by AI? To get a response, we put her comment to Andrus Ansip, European Commissioner for the Digital Single Market and Vice President of the European Commission. What would he say?
Yes, I agree that AI will transform our society. I see many opportunities, including for people with disabilities. The EU funds a series of projects which aim to make the most of technology for people with disabilities: from an AI exoskeleton helping paralysed people walk again to an AI app reading the web for visually impaired people.
AI also brings challenges: many jobs will be created, others will disappear, and most will be transformed. This means we should help workers acquire new skills. We have launched a series of initiatives to support lifelong learning, and the European Social Fund is investing €2.3 billion specifically in digital skills.
AI should be at the service of people, of all people. This is part of the approach we presented on 25 April.
To get another perspective, we also put Kristin’s comment to Professor Nick Bostrom, Director at Oxford University’s Future of Humanity Institute and Director of the Governance of Artificial Intelligence Program. How would he react?
Next up, we had a comment sent in from Paul, who is more relaxed about the prospective impact of AI on society. He argues that humans are too ‘randomly stupid’ for machines to successfully fill positions that require interacting with them:
Robots will never fully replace people in many jobs for one simple reason, no matter how advanced their AI and ‘learning’ abilities are, they will never have the lateral thought process required to deal with the random stupidity of some humans
How would Andrus Ansip respond to Paul’s comment?
I agree that AI will never fully replace humans and their creativity, lateral thinking and critical thinking. In most cases, AI will complement and assist people with specific tasks requiring, for example, the processing of large amounts of data. One example is AI analysing sets of X-rays to assist doctors with diagnosis. So overall, instead of replacing people, AI will enhance our abilities (hence the concept of “augmented intelligence”) and, in a way, help us be smarter!
And what would Nick Bostrom say to the same comment?
To get another view, we also put Paul’s comment to Andrea Renda, Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy at the Centre for European Policy Studies (CEPS). What would he say?
Next up, we had a comment sent in from Jose, who argues that “AI is like a ‘double-use technology’, so advancements in civil AI mean advancements in military AI, and vice-versa.” Is he right? Is AI a dual-use technology? And, if so, how can we ensure it isn’t misused?
How would Andrus Ansip, European Commissioner for the Digital Single Market, respond to Jose’s comment?
AI systems must comply with international law. We firmly believe that humans should make the decisions with regard to the use of lethal force, exert sufficient control over lethal weapons systems they use, and remain accountable for decisions over life and death. The EU actively participates in international discussions on the different ethical, legal, technical and military aspects related to lethal autonomous weapons systems.
Projects funded under our research and innovation programme Horizon 2020 involving dual-use technologies must fulfil specific requirements to make sure they comply with the law and with ethical standards.
The ethical development and use of AI is essential, which is why we will also present ethical guidelines by the end of the year, based on the EU’s Charter of Fundamental Rights, taking into account principles such as data protection and transparency, and building on the work of the European Group on Ethics in Science and New Technologies.
And how would Andrea Renda respond to Jose’s comment?
Finally, we had a comment sent in from Vytautas, arguing that AI will have a negative impact on the salaries of all of us old-fashioned human workers. In other words, he believes people will have to accept lower incomes (or else simply lose their jobs) in order to compete with AI. Is he right?
To get a reaction, we put his comment to Eva Kaili, a Greek MEP who sits with the social democrats in the European Parliament and is a member of the Committee on Industry, Research and Energy. What would she say?
How will Artificial Intelligence change society? How will it affect the way we work? Will it be a gradual evolution, or a transformative revolution? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!
In partnership with the European Economic and Social Committee (EESC) – Civil Society Days 2018 #CivSocDays.
For more information about the Civil Society Days 2018, please check: www.eesc.europa.eu/csdays2018