Social media moderation and incitement to online violence

The role of social media in the violence and disorder on Britain’s streets has become a major topic, with the moderation and regulation of platforms coming under scrutiny.

Misinformation spread online played a part in fuelling the riots, and people are now being arrested and charged for inciting hatred or violence via social media platforms.

Below, you’ll learn how social media content moderation currently works, how posting hateful material can be a criminal offence, and how regulation of the sector could change moderation in the future.

– How do social media sites currently moderate content?

All major social media platforms have community rules that their users must adhere to, but how they enforce those rules varies, depending on the size and make-up of their content moderation teams and the processes they use.

Most large sites employ thousands of human moderators who review content flagged by users or proactively detected by staff or by software and AI tools designed to spot harmful material.

– What are the limitations of moderation at the moment?

Content moderation faces several key limitations: the sheer scale of social media makes it difficult to find and remove every harmful post; moderators – both human and automated – can struggle to recognise nuanced or local context and therefore sometimes confuse the harmful with the innocuous; and moderation relies heavily on users reporting content – something that does not always happen in online echo chambers.

Additionally, the use of encrypted messaging on some sites means that not all content is publicly visible, so it cannot be spotted and reported by other users. Instead, platforms rely on those inside encrypted groups to report potentially harmful content.

Crucially, many tech giants have also cut their content moderation teams in recent years, often under financial pressure, which has affected how quickly they can respond to harmful material.

At X, formerly Twitter, Elon Musk drastically reduced the site’s moderation staff after taking over the company as part of his cost-cutting measures, and he repositioned the site as a platform that would allow more “free speech,” significantly relaxing policies around prohibited content.

The result is that harmful material can proliferate on the largest platforms, and it’s also why there have long been calls for stricter regulations to force sites to do more.

– How can posting messages on social media be a criminal offence?

Offences relating to stirring up hatred, provoking violence and intimidation existed in UK law before the age of social media and are covered by the Public Order Act 1986, which applies to conduct both online and offline.

Most social media sites also explicitly prohibit such content in their policies, meaning platforms, as well as the police, can take action over such posts.

– How realistic is it to expect all harmful content to be removed by the platforms?

Under the current circumstances, not very.

In many cases, social media platforms take action against posts that fuel or encourage disorder.

However, the speed at which this harmful or misleading content spreads can make it difficult for platforms to remove all posts or limit their visibility before they are seen by many other users.

New regulation for social media platforms – the Online Safety Act – became law in the UK last year, but its duties are not yet fully enforceable.

Once the Act is fully implemented, platforms will be required to take “strong action” against illegal content and activity, including offences such as incitement to violence.

The law also introduces criminal offences for sending threatening messages online and sharing false information with the intention of causing non-trivial harm.

– How will the Online Safety Act help?

The new laws mean that, for the first time, companies will be legally responsible for the safety of their users, particularly children, while they use their services.

The new laws, overseen by Ofcom, do not see the regulator itself remove content, but they require platforms to put in place clear and proportionate safeguards to prevent illegal and other harmful content from appearing and spreading on their sites.

Crucially, the Act sets out clear sanctions for those who do not comply with the rules.

Ofcom can fine companies up to £18 million or 10% of their global turnover, whichever is higher. For the biggest platforms, this could amount to billions of pounds.

In more serious cases, Ofcom can apply to the courts for an order to disrupt a platform’s business operations. This could include requiring internet service providers to block access to the platform in question.

Most notably, senior managers could in some cases be held criminally liable if they fail to comply with Ofcom’s enforcement. It is hoped these sanctions will push platforms to take tougher action against harmful content.

In an open letter published on Wednesday, Ofcom urged social media companies to do more to tackle content that incites hatred or violence on the streets of Britain.

The watchdog said: “New safety obligations under the Online Safety Act will come into force in a few months, but you can take action now – you don’t have to wait to make your sites and apps safer for users.”

The letter, signed by Gill Whitehead, Ofcom’s director of online safety, says guidance will be published “later this year” setting out what social media companies must do to tackle “content involving hatred, disorder, incitement to violence or certain instances of disinformation”.

It added: “We expect to continue to work with businesses during this period to understand the specific issues they face, and we welcome the proactive approaches taken by some services in relation to these violent acts in the UK.”
