It seems a new battle has broken out in the war on free speech.
For those who have not been paying attention, this is a war that has been waged for some time now, largely centred on university campuses before spilling over into social media, where people are quick to declare that the end of civilisation is nigh. It is a war that appears less concerned with the very real threats to free speech encountered by journalists across the world than with existing in its own ecosystem, complete with its particular vocabulary of “snowflakes”, “trigger warnings” and “safe spaces”, and a line-up of protagonists – provocateurs and puritans in equal measure – who have become household names.
The latest skirmish follows the riots in the UK in recent weeks and concerns the subsequent prosecutions of those who engaged in the violence, as well as those who incited it behind a screen. Listen to those on the front line of this war, and we are at an inflection point. Dystopia is right around the corner unless we stand up and defend free speech.
Perhaps it is asking too much, but if we are being called upon to enlist in a war to preserve civilisation, we might hope that the recruiting sergeants demonstrated some understanding of the concept for which they claim to be fighting. That they understood its complexity, understood (or even just recognised the existence of) the centuries of legal and philosophical thought that have grappled with the subject, and that they didn’t just resort to some facile summary that free speech = good, censorship = bad. It might just be a bit more complicated than that.
But complication and nuance are inconvenient concepts in an age of ‘hot takes’ and Twitter/X spats. Absolutism gets the clicks. So we encounter an army of so-called “free speech absolutists” who do not need to worry themselves with difficult questions about where lines should be drawn. That is, until you begin to interrogate their stance and see that their position is not so absolute after all.
Take Elon Musk as an example. I’m sure he values the copyright, patents and trade secrets associated with his various companies, and would seek to protect them from being used by or revealed to a competitor in an exercise of free speech.
He has expressed a dislike of workers collectively exercising their free speech to form a union. His companies have sought to ensure that the free speech of his employees is constrained by their signing non-disclosure agreements. And he has threatened to sue others for exercising their freedom of speech in a way that he says is defamatory.
All of this is not to say that Elon Musk is a hypocrite or to engage in ad hominem attacks. I’ll leave that to others. It is to say that even the most famous self-proclaimed free speech absolutist appears to recognise that there are limits to free speech, limits that have been developed over time and which now garner a relatively broad consensus of support.
And, more than anything, it is to hope that he might recognise that this is a complex subject that deserves more than over-simplified, sophomoric statements.
Musk might say that his conception of free speech absolutism was never totally absolute. Indeed, in the early days of his acquiring Twitter, he said that the platform “believes in free speech, provided it does not break the law”. Simple and understandable enough, albeit a long way from the absolutism that he has espoused in other contexts.
But is this really his position? What about those laws that were passed by a democratically elected parliament in the UK and have been invoked by courts in recent weeks against the rioters and their inciters? Musk doesn’t like them, comparing the UK to the Soviet Union and endorsing views that Britain is on the brink of collapse into an Orwellian dystopia.
He has even suggested that the judges enforcing these laws should be the ones being arrested. So, maybe Musk’s previous statement about his platform’s commitment to free speech needs a re-write: that it believes in free speech, provided it does not break the law (unless it’s a law that Musk doesn’t like).
The generous response to this state of affairs is simply to point out to Musk and others like him that – despite wanting to appear to be engaged in some grandiose and profound philosophical discourse – they perhaps haven’t grappled with the complexities of free speech and its limits.
A less generous response is that, for some, this really isn’t – and never has been – about free speech at all. Free speech has simply become a convenient weapon against the real enemy: wokeness (or, as Musk calls it, “the woke mind virus”).
In this telling, the principal manifestation of wokeness is cancel culture, so free speech becomes the obvious antidote. The result is that there is no need to attempt to understand the boundaries of free speech, since the concept is reduced to little more than an empty incantation, a battle cry of freedom in a war against a censorious woke agenda pursued by safe-spacing, no-platforming, preferred-pronouning snowflakes.
Then there are others who are just as comfortable co-opting the notion of free speech to advance another agenda. Their interests go beyond destroying the “woke mind virus”; they believe that society restricts freedoms of all sorts too greatly.
To these anarchic ultra-libertarians, rules are for other people. But it is best for them not to say that out loud.
Better to clothe themselves in the language of free speech – recognising the potency that the concept in its abstract form carries (who is possibly going to call themselves anti-free speech?) – even if, to them, it’s not about freedom of speech, but freedom to do whatever they want, free from consequence and any form of state regulation.
If the subject of free speech were not complicated enough in general terms, it only gets harder when it is mapped onto the online world. Yet, once again, the recent discussions about free speech online fail to appreciate any of this complexity.
This is largely because the prevailing view is that the rules governing conduct online should be no different from those governing conduct offline: misinformation and disinformation require no special treatment. Familiar maxims are invoked: that sunlight is the best disinfectant, or that the only way to counter bad speech in a marketplace of ideas is more (free) speech.
The problem is that the online marketplace is different. It isn’t a bazaar where everyone has the same size stall, can’t drown out anyone else, and has an equal opportunity to sell their intellectual wares.
Instead, it looks more like the IKEA marketplace. Complex algorithms have mapped out your route and dictate your next step, taking you further and further into echo chambers and exposing you to ever more outrageous, offensive and harmful content.
The online world also allows a purveyor of particularly bizarre or offensive ideas to garner attention that they might not otherwise attract if each person’s voice had equal weight. Quite apart from algorithms primed to pump out and prioritise content that causes outrage – that most precious commodity of the digital age – all it takes is for Elon Musk to retweet to his 194 million followers something that might otherwise have attracted little more than a quiet mutter in a corner of a pub, and it becomes instantly viral. As the saying goes, in the online context freedom of speech is one thing; freedom of reach is another.
It is against this background that regulators have recognised that speech online is different and requires special treatment. It is why the early conception of the UK’s Online Safety Act didn’t just seek to regulate illegal content online, but also a category of speech termed “legal but harmful”.
Unsurprisingly, this led to an immediate outcry, especially from familiar voices in the war on free speech. The Act as conceived was, they said, “a woke charter” that would see social media companies censoring content in pursuit of liberal, cosmopolitan views.
It was not long until the “legal but harmful” parts of the Act were scrapped, although the Government is said to be flirting with revisiting the matter in light of the riots, so expect the same arguments to resurface.
There are good arguments on both sides of the question of whether to regulate “legal but harmful” content online. But, again, they are complex and deserve sober discussion rather than being hijacked by those who want to use the language of “woke charters” and warn of “legislating for hurt feelings”.
It is worth pointing out, for example, that “legal but harmful” content is regulated under the Online Safety Act as it applies to children, but this did not attract anything like the same opposition. Presumably, this is because it is believed that children are more vulnerable to the ill effects of such content.
But are adults really that different? The sophisticated algorithms in use are designed to tap into our basest instincts and affect us at a neuroscientific level. If these platforms are, as one US congressman put it, “digital fentanyl”, the dangers of exposure to, and dissemination of, harmful content are surely universal and do not cease to exist on your 18th birthday.
That said, there will always be squeamishness from some about the government or an independent regulator deciding what constitutes harmful content. And it probably means that any attempt to revisit the Online Safety Act will fail.
But that does not mean nothing can be done. The government can continue to see that illegal content is regulated as the Act envisages, taking Elon Musk at his (initial) word that the platform supports free speech so long as it is lawful.
And, while we’re at it, we can take Elon Musk and X at their word on other matters. We do not need government to define and regulate “legal but harmful” content and risk the ire of those claiming that this is an inappropriate exercise of state intervention. Online platforms have already done the work themselves.
Look at X’s terms of service and you will see all sorts of lawful speech that are not permitted, including entire sections on “abuse and harassment” (complete with restrictions on behaviour that “harasses, shames, or degrades others”), on “violent speech” which “threatens, incites, glorifies, or expresses desire for violence or harm”, and on “hateful conduct”, which prohibits attacking people “on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.”
These prohibitions may come as a surprise to recent users of the platform, given that at times it can feel as if this is the only type of content that appears. Yet the restrictions form part of the contract between the platform and its users, and they echo the terms of service or community guidelines of many similar platforms.
If these companies aren’t upholding their part of the bargain and enforcing their rules, what is to stop a user from demanding that they do so? Or, as some have argued, rather than wait for Ofcom to work out the implications of the Online Safety Act in this area, what is to stop an appropriate body committed to consumer protection from taking action against them for failing to enforce their own rules?
Crucially, this response does not mean that the state or anyone else is telling platforms what to say or do beyond merely expecting them – just like any other company that interacts with others – to do what they say they will. And if they do not, there should be consequences.
Of course, in response, platforms like X could simply change their terms of service, declaring that they welcome all manner of offensive material provided it stays within the law, and that they will not take any steps to take it down. At least then users and advertisers would know where they stood.
Any hope that the platform might get better would be extinguished, and people wavering about whether to stay would likely finally decide to go elsewhere. It would also mean that those like Musk could practise the over-simplified message they preach, and face the consequences of doing so.
Jack Kennedy is a lawyer with special expertise in media law