
The UK ethics watchdog has accused Facebook, Google and Twitter of failing to properly protect online users from abuse, in a highly critical report that will heap pressure on technology companies to police the content on their websites more closely.

In a government-commissioned review published on Wednesday, the Committee on Standards in Public Life said Britain should introduce laws to force tech companies to identify intimidating social media posts and report those responsible for illegal content to the authorities.

It also recommended that Google, Facebook and Twitter publish quarterly information on the posts they flag or take down, starting from next year.

The committee said: “In the fast-paced and rapidly developing world of social media, the companies and the government must proactively address the issue of intimidation online.”

It added: “We have met with Twitter, Facebook and Google and we are deeply concerned about the lack of progress all three companies are making in protecting users online.”

Theresa May, the prime minister, ordered the committee’s review in the summer after complaints from MPs about online threats of sexual violence, damage to property and harassment in the run-up to the June general election.

The report focused on the bullying of politicians but also called on tech companies to take broader responsibility for the content on their platforms.

Under EU rules drawn up almost 20 years ago, tech companies are considered “hosts” to information rather than “publishers” that are legally responsible for actively policing content on their sites.

But the public mood has shifted recently following a number of online controversies involving extremist and sexually explicit content, hate speech and fake news. The concern for policymakers is that social media companies can be used to spread content that would be considered inappropriate or illegal by publishers.

The committee’s report said these companies should not be treated the same as publishers. But it added that the distinction between technology companies and media publishers was outdated and needed to be revisited to “recognise the changing nature of [content] creation”.

The Cabinet Office said “intimidation is completely unacceptable” and that it would consider the recommendations from the committee.

“We need to ensure that our democracy is a tolerant and inclusive one, in which all future candidates for elections will not be dissuaded or intimidated from standing for public office,” it added.

Social media companies rely heavily on users and automated technology to flag inappropriate posts that are then escalated to teams of moderators.

But recently they have hired more people to improve online moderation: Google’s YouTube plans to increase the number of employees reviewing content to more than 10,000, although it has not specified how many workers currently perform that function.

The committee’s report said Facebook was expanding its community operations team, whose job is to “help people share responsibly and respectfully”, from 4,500 to 7,500 people.

Google has said it reviews 98 per cent of flagged content on YouTube within one day and this month pledged to publish regular reports on the content it removes.

Twitter said in response to the committee’s report: “We’re now taking action on 10 times the number of accounts every day compared to the same time last year and using new technology to limit account functionality or place suspensions on thousands more abusive accounts.”

Facebook said: “We want parliamentarians and election candidates to feel safe on Facebook . . . We’re making significant investments in hiring more people who understand the issues around candidate safety.”

Google declined to comment.
