Barack Obama’s election as President of the United States in 2008 marked a turning point in political campaigning. Initially dismissed as an underfunded no-hoper, he confounded the pundits through his sophisticated use of the social media platforms then available. For liberal and left-leaning people, 2016 marked another turning point in the way they thought about social media. Once heralded as harbingers of democracy following the Obama campaign and the Arab Spring, Facebook and Twitter, in particular, were suddenly perceived as irresponsible, scary and destructive, coarsening civil discourse and enabling the rise of populism.
Mark Zuckerberg, who launched Facebook, initially dismissed as ‘crazy’ the idea that Russian agents had manipulated his platform to interfere in the 2016 US presidential election. Yet, as the Cambridge Analytica scandal unfolded, the bad news days for Facebook metastasized into an annus horribilis. And the toxic press was not confined to Facebook. Bot armies on Twitter, Holocaust-denying autocompletes on Google search, radicalizing auto-suggest cues on YouTube: every week seemed to bring disturbing revelations. By September 2018, Facebook, Google and Twitter had announced 125 initiatives to combat ‘fake news’ on their platforms.
All sorted then? Not quite. Observers noted a lack of independent, objective data on which of these initiatives had been implemented and what effect they had had. It took some time for the leadership of the social media platforms to adjust to their new role in the naughty corner. Overnight, their standing among politicians had plummeted from wunderkind to enfant terrible. The US Senate summoned Mark Zuckerberg to testify in April 2018, amid calls to regulate the platforms. His carefully worded, gamerish responses appeared tone-deaf to the public mood.
When Zuckerberg said that users ‘completely control the terms’ under which their data is used by the platform, senators meekly accepted this view without challenging the sometimes exploitative nature of those terms, or asking whether users can really be said to have freely consented to the way their data is used. Zuckerberg then appeared before the European Parliament on what was dubbed his ‘apology tour’. The format of the event saw MEPs make rambling statements for an hour, leaving Facebook’s leader to cherry-pick talking points in the short time allotted to him.
Scholars had been pointing out for years that the surveillance-based business models of free-to-use platforms were problematic and likely to cause societal harm. These warnings fell on deaf ears as the public was lulled by the platforms’ addictive usability, and politicians obediently repeated the libertarian mantra that you can’t regulate the internet. But something had changed. The responses of the social media platforms to the democratic shocks of 2016 seemed hollow, self-serving and inadequate. What followed was a rush to regulate. By May 2018, more than 20 countries had either proposed legislation or opened parliamentary inquiries. By December 2018, that number had risen to more than 40.
But there is a difference between regulation and good regulation. If regulators’ minds are trained on a poorly defined concept of ‘fake news’, the regulation that flows from it will focus on banning types of content or speech, with adverse consequences for freedom of expression. Some authoritarian regimes have seized the opportunity to define illegal content widely, sparking fears of crackdowns on political dissidents, minorities and human rights defenders. Even if regulation avoids the pitfall of clamping down on free speech or criminalizing certain types of content, there are numerous other risks in regulating such a complex environment.
A long-awaited white paper entitled Online Harms, recently published by the British government, proposes imposing a duty of care on platform providers, with swingeing penalties for breaches and the appointment of an independent regulator to oversee compliance. Industry voices outside the major platforms are concerned that companies providing technical or back-end services might be caught within its scope, and that measures requiring internet service providers to block or filter traffic might have unintended consequences by extending regulation into the architectural layers of the internet.
This trend carries several risks: eroding the internet’s distributed nature, entrenching existing market positions, and undermining the open, global network by creating digital borders along jurisdictional lines. The white paper has also been criticized for failing to define ‘online harms’ and for not addressing whether the duty of care, a concept imported from the tort of negligence, can be applied to speech. However, in a complex, rapidly evolving environment, an over-rigid approach would be self-defeating. Moreover, the concept of ‘harms’ implies an effects-based approach, where it is the outcome rather than the motivation that is punishable. A parallel can be drawn with the European Union’s anti-cartel legislation, where the law engages if agreements distort the market, whether by intention or effect.
It is unlikely that the first attempt at regulating such sensitive areas will be fully successful. But an important line has been crossed. Confidence in the platforms’ ability to regulate themselves has declined, and the plethora of national laws passed since 2017 indicates that the state does have a role in protecting its citizens from harm, whether online or offline.
‘How to Rein in the Web of Lies’ – Commentary by Emily Taylor – Chatham House / The Royal Institute of International Affairs.