Social media firms have defended their content moderation practices to a parliamentary committee investigating online misinformation, arguing that they already have effective processes and systems in place to deal with the spread of false information on their platforms.
On 25 February 2025, the Commons Science, Innovation and Technology Committee (SITC) grilled tech giants X, TikTok and Meta as part of its inquiry into online misinformation and harmful algorithms.
Opening the session, committee chair Chi Onwurah said the topic has unusually “strong public interest” for a technology-related issue, and that the committee is concerned about misinformation “being disseminated at industrial scale”. She added that this particular session would primarily focus on the spread of disinformation during the Southport riots of 2024.
In the wake of the fatal stabbing of three girls in Southport on 29 July 2024, social media became awash with unsubstantiated rumours that the perpetrator was an asylum seeker of Muslim faith.
While this was later confirmed to be completely false, Islamophobic far-right rioting broke out in more than a dozen English towns and cities over the following few days, specifically targeting mosques, hotels housing asylum seekers, immigration centres and random people of colour.
Responding to MPs’ questions on social media companies’ response to the unrest, Chris Yiu, director of public policy for Northern Europe at Meta, said the organisation’s trust and safety teams took down around 24,000 posts for breaking its policies on violence and incitement, and a further 2,700 for breaking its rules on dangerous organisations.
“One thing that I think is challenging in these fast-moving incidents is that it can be difficult to establish the facts on the ground in real time,” he said. “I think that’s something which we will need to have a reflection on and understand how we might do that.”
Community guidelines
Alistair Law, director of public policy and government affairs for the UK and Ireland at TikTok, added that while the overwhelming majority of content on the platform during the unrest was either documentary or bystander footage, tens of thousands of posts containing “violent comments” were removed by the company for violating its community guidelines.
Echoing Yiu, Law also said it can be difficult to “establish the veracity of potential claims” in fast-moving events, adding that there needs to be collaboration throughout the “wider value chain” of information sharing, which includes broadcast media, as news coverage and social media content can create damaging feedback loops around misinformation.
Wilfredo Fernández, senior director for government affairs at X, said the company has “very clear protocols in place on how to deal with this content and the challenges that arise in the aftermath of an attack like this”.
He added that X’s “community notes” model was able to provide useful context to users, and that prominent far-right figures like Tommy Robinson and Andrew Tate received such notes in relation to their posts about the Southport attacks. “X has no power to place or remove a note,” said Fernández. “It’s completely powered by people.”
In response to MPs highlighting instances of blue-tick X accounts making posts about the location of immigrants and encouraging rioters to go there (and which did not receive community notes), Fernández said the company did take a number of actions on tens of thousands of posts. “To sit here and say we get the right call every time, that would be incorrect,” he said.
Labour MP Emily Darlington also challenged Fernández over extreme messages she had personally received on the X platform in November 2024, in which she was described as “a traitor to the British people” and threatened with hanging after sharing a petition to save her local Post Office.
Darlington said she reported the post as harmful and violent speech, in violation of X’s rules stating that expressing a desire for violence is not allowed, and listed other violent or racist comments made by the same account.
Asked by Darlington whether this was acceptable, Fernández said the comments were “abhorrent”, but that while he would have content moderation teams review the account for terms of service violations, he could not give any assurances that it would be removed.
Meta was also criticised for its removal of third-party fact-checking in favour of an X-style “community notes” approach, which MPs argued would allow “racist misinformation” to spread, after citing numerous examples of Meta users posting racist, antisemitic or transphobic comments on the platform.
“We have received feedback that … some areas of debate were being suppressed too much on our platform and that some conversations, while challenging, should have a space to be discussed,” said Yiu.
Both Onwurah and Darlington pushed back, arguing that some things – such as statements denying the existence of trans people or deriding immigrants – should not be characterised as up for debate.
The representatives from Meta and TikTok said that while the scale of social media use presents clear content moderation problems, each firm takes down upwards of 98% of violent content.
Processes and systems
Responding to questions on whether the Online Safety Act (OSA) being in force at the time of the riots would have changed their approach at all, each company said they already have processes and systems in place to deal with misinformation crises.
In the wake of the riots, Ofcom warned that social media firms would be obliged by the OSA to deal with disinformation and content that is hateful or provokes violence, noting that it “will put new duties on tech firms to protect their users from illegal content, which under the Act can include content involving hatred, disorder, provoking violence or certain instances of disinformation”.
The online harms regulator added that when the act comes into force in late 2024, tech firms will then have three months to assess the risk of illegal content on their platforms. They will then be required to take appropriate steps to stop it appearing, and act quickly to remove it when they become aware of it.
“The largest tech firms will ultimately have to go even further – by consistently applying their terms of service, which often include banning things like hate speech, inciting violence and harmful disinformation,” said Ofcom, adding that it will have a broad range of enforcement powers at its disposal to deal with non-compliant firms.
“These include the power to impose significant financial penalties for breaches of the safety duties,” it said. “The regime focuses on platforms’ systems and processes rather than the content itself that is on their platforms.”
However, while a number of the Online Safety Act’s criminal offences were already in force at the time of the unrest – including those related to threatening communications, false communications and tech firms’ non-compliance with information notices – some said at the time it was unclear whether any of these would be applicable to those using social media to organise racist riots.
“Instead, the police are likely to have to rely on offences under the Public Order Act 1986, which is the main piece of legislation penalising the use of violence and/or intimidation by individuals or groups,” said Mark Jones, a partner at Payne Hicks Beach. “While the home secretary may have said ‘if it’s a crime offline, it’s a crime online’, and while that may be correct, the Online Safety Act provides no additional assistance to the pre-existing criminal law covering incidents of incitement to violence.”