The rate at which social media has become integrated into our lives is astonishing. In 2019 there were 45 million active social media users in Britain, or 67% of the population; a figure so striking that for many of us it now feels impossible to imagine a world without social media.
Society today enjoys a level of social networking and correspondence never before experienced in human history. It is no wonder, then, that we consume so much of it in our daily lives, with members of 'Generation Z' the heaviest users, spending over three hours a day on some form of social media. The result is a society bound to its phones and tablets. But is this level of dependence, and in turn influence by media platforms, a cause for concern?
Social media platforms currently operate under an embryonic regulatory system which is not properly engineered to promote online safety. This can be observed in cases of online misconduct such as the Cambridge Analytica scandal. More distressingly, it can also be seen in tragic cases such as the suicide of Molly Russell in 2017, in which "bleak depressive material, graphic self-harm content and suicide-encouraging memes" contributed to the death of the 14-year-old girl. Her father, Ian Russell, said he has "no doubt that social media helped kill [his] daughter." Sadly, this does not seem to be an exception: the NSPCC estimates that 90 cyber-crimes against children are recorded every day. With this in mind, one cannot help but sympathise with the view of the NSPCC's head of Child Safety Online Policy that 'crystal clear regulation cannot come soon enough'. Searching for a specific cause, Damian Hinds, as Education Secretary, attributed the prevalence of these cases to media platforms treating a child's digital age of consent as though it relinquished their legal status as a minor.
The government seems to have recognised how social media facilitates child abuse, particularly against young girls. In April 2019 the government published its Online Harms White Paper, which would force media companies both to pay for research into online harm and to share data on the actions they are taking to tackle online abuse. This follows an international trend towards closer regulation of social media, with some countries now imposing fines on platforms which do not remove hate speech within 24 hours.
What exactly does this white paper change? It proposes a legally enforceable regulatory framework designed to ensure platforms comply with a duty of care; those which fail to do so can expect punitive sanctions, including "substantial fines". In other words, it goes much further than any previous attempt to safeguard people online from illegal or harmful content. Civil rights groups have opposed making media companies liable for content on their sites, on the grounds that people should have autonomy over their online activity and should not be made to feel like "lab rats."
The concern here is one of censorship. The Online Harms White Paper may incentivise media firms to become overly zealous in censoring content in order to avoid governmental sanctions. It places the burden of deciding which content is lawful and which is unlawful at the door of private companies; action which, as Human Rights Watch points out, creates "no accountability" zones, since such censorship avoids judicial scrutiny. In this regard the white paper risks being too broad, a fault highlighted in the German Network Enforcement Act 2017, legislation enforcing similar measures to those of the Online Harms White Paper, when a satirical magazine had its content blocked.
What should the government do? It is clear that action is needed now. Indeed, NSPCC figures show that "more than 25,000 offences involving child abuse images and sexual grooming have occurred since the publication of the Online Harms White Paper" in April 2019. The government is right to recognise the need for stricter age verification processes, to ensure young children do not see potentially harmful content, but it should be open to more indirect methods, such as establishing an online rights ombudsman, or restructuring the content promotion mechanisms that trap users in a bubble of potentially damaging imagery.
Reform is unquestionably needed. Yet the government should gear policy towards preventative measures rather than risk-based auditing techniques, as the latter risk breaching freedom of expression.
Jack Fulton is currently undertaking work experience at Bright Blue. Views expressed in this article are those of the author, not necessarily those of Bright Blue.