There is no doubt whatsoever that big tech must be regulated. US legislators say it; EU authorities agree; the UK body politic concurs. Indeed, big tech itself agrees that there must be some form of regulation for the internet, as it continues to increase its presence in our lives. Even in our divided world, on this, there is consensus.

Sadly for all concerned, that is just about as far as the consensus goes. There were various so-called ‘easy wins’ when it came to online spaces, but all of them were achieved years ago.

Companies should be liable for customer data lost or stolen through negligence – sorted. Tech companies should make significant efforts to keep child abuse imagery off their services – a perpetual struggle, but one in which tens of thousands of images a day are removed. Wherever they are based, tech companies should remove speech that does not comply with local law – also sorted, long ago.

This is most notable when looking for compliance with Germany’s understandably strict laws against speech denying the Holocaust. Social networks now take down content that falls foul of this law, but is legal speech in other countries, thereby restricting it solely in Germany.

Big tech has made it clear it will comply with democratic laws on speech. If you want particular content not to exist, it says, simply make it illegal – otherwise, leave the limits of legal speech up to individual services.

Some services, like Facebook, will not let so much as an exposed female nipple be on show – even when breastfeeding. Others, such as Twitter, are happy to host hardcore pornography. Each makes a choice according to what its users and advertisers are willing to accept, and acts accordingly.

But the UK’s approach in the Online Safety Bill takes that much further. The Bill creates a new category of ‘legal but harmful’ content which social networks would need to demonstrate they have plans to tackle without outright banning it. With swingeing fines at stake, such a policy would clearly result in most networks zealously over-censoring content, given that the costs of falling foul of the law far exceed the benefits of supporting an open internet.

Such was the backlash from the tech sector that the Government tried to reframe the ‘legal but harmful’ restrictions as content likely to be accessed by children – on the face of it, a much more reasonable middle ground.

This was essential, as the Home Office had convinced children’s charities that the legislation was necessary to keep children safe online, and used them to front much of it. In classic Home Office fashion, it set a trap and then immediately fell into it: pass the measure, and big tech withdraws jobs and activity from the UK; do not pass it, and face a huge public backlash from charities that are largely above reproach.

The compromise does not work, of course. Almost every online service is likely to be accessed by children – generally defined as those under 13. Kids can easily tick a box saying they are older than they really are, and are generally excellent at finding things they should not.

As a result, the only way to really exclude children is to introduce stringent age verification checks, which rely on some form of real-world ID, akin to those on gambling websites. That has put the government in direct conflict with Wikipedia, the worthy not-for-profit encyclopaedia cribbed by children across the world for their homework assignments. Wikipedia, not willing to introduce an age filter and not able to comprehensively moderate its site, has said that, if these measures pass, it will withdraw from the UK.

If any of this was easy, then it would have been done and dusted long ago. But there is a fundamental problem at the core of the rules: they do not bring big tech to heel. Instead, they merely introduce new restrictions on the ability of private citizens to speak out online.

It is approaching the ridiculous that Facebook can be punished for allowing on its site comments that someone could otherwise make in a public space without sanction. The result is not a restriction on Facebook, but on the person making the comments.

In general, big tech does not adequately moderate its content, nor does it contribute enough in tax to outweigh the societal harms of its products. The huge profit margins of the tech giants, especially in their online advertising divisions, are telling of a surplus that is causing problems.

Good legislation might look at trying to levy them in proportion to the harmful content on their platforms, with a sound legal definition of such content, or else to require minimum moderation ratios and support times for large sites. But it should not try to outsource the limits of speech to cautious tech compliance companies, which is what the current Bill proposes.

When we do not like what we see online, it is rarely big tech’s fault. We must ensure that when we are trying to regulate business, we do not instead merely regulate each other.


James Ball is a British journalist and author. He has worked for The Grocer, The Guardian, WikiLeaks, BuzzFeed, The New European and The Washington Post. 

This article was published in the latest edition of Centre Write. Views expressed in this article are those of the author, and not necessarily those of Bright Blue. 

Read more from our August 2023 Centre Write magazine, ‘Back to business?’ here.