Neo-Nazis, terrorists and other extremists use the Internet to recruit. Websites, forums and message boards offer these networks cost-effective ways to spread their propaganda, radicalise and recruit new members, and fundraise. As social media giants ban their most extreme users, some migrate from mainstream platforms to extremist websites.
Critics argue that taking down websites just drives extremists underground. There will always be spaces for the violent and hateful to gather online, from encrypted messaging apps to the dark web. If those spaces are public, there may be pressure to moderate their discourse (or, at the very least, it will be easier for researchers to keep tabs on what they are saying and how they are organising).
Want to learn more about the consequences of banning extremist websites? Check out our infographic below:
What do our readers think? We had a comment sent in from Maia, who says: “If any site promotes war, death, violence, destruction, harm and suffering of others as something good, it needs to be banned, because it is encouraging criminal acts.”
To get a response, we spoke to André Taubert from LEGATO, which offers counselling and advice services in Hamburg for religious-based radicalisation. What would he say to Maia?
I think the question is rather: Can we wind back time? Can we wind back the way communications and the Internet have developed? I think that’s the question that has to be asked. Because we might be able to ban some propaganda videos and websites, but it might never be possible to ban them all unless we want to have a society like North Korea’s.
If it were possible to wind back time and ban these propaganda videos and websites, then it might be positive for the development of young people. Because this propaganda is often very professionally developed, and it tries to change people’s views on violence by manipulating feelings and emotions. So, it would be better if they never encountered it.
However, since we can’t wind back time and we can’t wind back the development of the Internet and the role it plays, we should instead be much better at training young people to understand their own emotions when they watch propaganda, and to understand how radicalisation functions, how it works. The questions I often ask young people are: When did you radicalise yourself and why? How did it feel? And what helped you to stop?
Because I think everyone radicalises themselves at one time or another in their lives. Maybe we radicalise every day in small moments, and the better we understand what happens in those moments the better we are prepared not to be open to violence.
For another perspective, we put the same comment to Alex Agius Saliba, a Maltese social democratic MEP. How would he respond?
This is a very sensitive topic. I think that citizens have become more aware of content moderation and content control online, and of the actions these big tech giants are taking to control what we see and what we don’t see, since the Capitol Hill incident and since we saw an ex-President of one of the most powerful countries in the world banned from a number of platforms. Even though a lot of citizens, including myself, don’t agree with 98% of what Donald Trump used to say, we can still question the decisions that these big tech giants are taking unilaterally.
So, I think that the starting point should be transparency. People need to have trust in the system, and for that they need to know why decisions are being taken. We need to give more visibility to how and why decisions about removing content online are taken.
Finally, we also put Maia’s comment to Moritz Körner, a German liberal MEP. What would he say?
That’s a very tricky question, because we can definitely see the negative impact of hate speech and websites like the ones Maia mentions. However, we also have to consider that we have freedom of speech which is, I would say, a very important European value.
What we are trying to regulate right now (and I would say this is also the position of the European Parliament) is that we should be able to ban illegal content. If a website is promoting illegal content, such as content calling for murder or something like that, then you could ban this website.
However, there is also so-called harmful content, like fake news and other things, where we really have to think about whether we should ban it. I think the line should be whether something is illegal, and whether there are clear criteria for why that content is illegal. When it comes to harmful content, it’s more difficult to know where to draw the line. So, I think we should regulate illegal content, and if social media platforms themselves have other rules on harmful content, then they should define those rules very clearly in their terms and conditions. And, if user content is taken down, I think there should be a clear way for users or citizens to complain, and a mechanism for doing so. I think that’s the best way to remove illegal content while preserving freedom of speech.
Can shutting down extremist websites prevent violence? Or would that just drive them onto encrypted messaging apps and the dark web? Let us know your thoughts and comments in the form below and we’ll take them to policymakers and experts for their reactions!