New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron launched the ‘Christchurch Call’, an initiative that calls on other states and online platforms to take further precautions to stop the spread of terrorist content online. The attacks against two mosques in Christchurch on 15 March 2019 highlighted how vulnerable online platforms are to abuse by violent extremists and terrorists. Discussions revealed that the EU is rather unprepared to tackle terrorist content online.
“One of the challenges we faced in the days following the attack was the spreading of the video’s many versions,” said Facebook’s Vice-President for Integrity, Guy Rosen. Facebook therefore announced its plan to make the rules for live streams more restrictive. Specifically, users who upload material that violates the platform’s policies will be barred from live streaming for a set period of time. Since no binding regulation to contain violent content exists, platforms rely on self-regulation instead. Moreover, a joint database to record and block the ‘digital fingerprints’ of suspected terrorist content has existed since 2017.
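The basic idea behind such a fingerprint database can be illustrated with a minimal sketch. The names used here (shared_hash_db, fingerprint, should_block) are hypothetical, and SHA-256 hashing is a deliberate simplification: real deployments typically rely on perceptual hashing so that re-encoded or slightly altered copies of a video still match, which is exactly the “many versions” problem Rosen describes.

```python
import hashlib

# Hypothetical shared blocklist of content fingerprints. The real industry
# database stores hashes of known terrorist images and videos; we use a
# plain SHA-256 of the raw bytes purely for illustration.
shared_hash_db = set()

def fingerprint(content: bytes) -> str:
    """Compute a 'digital fingerprint' (here: a SHA-256 hash) of content."""
    return hashlib.sha256(content).hexdigest()

def register_flagged_content(content: bytes) -> None:
    """One platform flags content; its fingerprint is shared with the others."""
    shared_hash_db.add(fingerprint(content))

def should_block(upload: bytes) -> bool:
    """Check an incoming upload against the shared fingerprint database."""
    return fingerprint(upload) in shared_hash_db

# Example: a video flagged on one platform is blocked on re-upload elsewhere.
register_flagged_content(b"<bytes of a flagged video>")
print(should_block(b"<bytes of a flagged video>"))     # True  -> reject upload
print(should_block(b"<bytes of an unrelated video>"))  # False -> allow upload
```

Note that an exact cryptographic hash like the one sketched above changes completely if even a single byte of the file is altered, which is why the many re-encoded versions of the Christchurch video were so hard to contain and why production systems prefer similarity-tolerant perceptual hashes.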
Yet, although a few countries attempt to regulate such content individually, there is no joint response at the European level. In September 2018, the European Commission introduced a legislative draft against terrorist content that would require platforms to remove content promoting or depicting terrorist acts within one hour of receiving a removal order from a national authority of an EU member state. Failure to comply could result in penalties of up to 4% of the company’s global turnover. The draft has, however, been criticized by NGOs, journalists, and politicians who are skeptical of such a direct restriction of freedom of speech. As a result, negotiations on the new regulation proposal are now planned for September or October of this year.