Facebook has updated its rules to tackle posts containing depictions of “blackface” and common anti-Semitic stereotypes.
Its community standards now explicitly state that this content should be removed if it is used to target or mock people.
The company said it consulted more than 60 outside experts before taking action.
But an activist said she remained concerned about the company’s broader anti-racism strategy.
“Blackface has been an issue for decades, which is why it’s surprising that it’s only being addressed now,” said Zubaida Haque, acting director of the race-equality think tank the Runnymede Trust.
“It is deeply damaging to the lives of black people in terms of the hatred directed at them and the spread of racial myths, lies and stereotypes.
“We welcome Facebook’s decision.
“But I’m not entirely convinced that these steps are part of a solid strategy to proactively tackle this hatred, rather than a reaction to a crisis.”
Hate speech policies
Facebook’s rules have long included a ban on hate speech related to race, ethnicity and religion, among other characteristics.
But they have now been revised to explicitly prohibit:
- caricatures of black people in the form of blackface
- references to Jewish people running the world or controlling major institutions such as media networks, the economy or the government
The rules also apply to Instagram.
“This type of content has always been against the spirit of our hate speech policies,” said Monika Bickert, Head of Content Policy at Facebook.
“But it can be really difficult to take concepts… and define them in a way that allows our content reviewers based around the world to identify violations in a consistent and fair manner.”
Facebook said the ban would apply to photos of people dressed as Black Pete – an assistant to St Nicholas who traditionally appears in blackface at winter festivals in the Netherlands.
And it could also remove some photos of English Morris folk dancers who paint their faces black.
However, Ms Bickert suggested that other examples – including critical posts calling attention to the fact that a politician once wore blackface – might still be allowed once the policy takes effect.
The announcement coincided with the latest figures from Facebook on dealing with problematic posts.
The tech company said it removed 22.5 million items of hate speech between April and June, up from 9.6 million in the previous quarter.
It said the rise was “largely driven” by improvements to its automated-detection technology in several languages, including Spanish, Arabic, Indonesian and Burmese – implying that a lot of such content had been missed in the past.
Facebook admitted that it was still unable to give a measure of the “prevalence of hate speech” on its platform – in other words whether the problem was in fact getting worse.
It already gives such a metric for other topics, including violent and graphic content.
But a spokesperson said the company hoped to start providing a figure later in the year. It also said the social network intended to bring in a third-party auditor to verify its numbers sometime in 2021.
A campaign group said it suspected hate speech was indeed a growing problem.
“We have been warning for some time that a major pandemic event could ignite xenophobia and racism,” said Imran Ahmed, chief executive of the Center for Countering Digital Hate (CCDH).
[Chart: hate speech removals on Facebook – more than five times higher than a year earlier]
Facebook’s report also found that pandemic-related staffing issues meant it had taken action against fewer suicide and self-harm posts, on both Instagram and Facebook.
And on Instagram, the same problem meant it took action on fewer posts in the category it calls “child nudity and sexual exploitation”. Removals fell by more than half, from about one million posts to 479,400.
“Facebook’s inability to act against harmful content on its platforms is inexcusable, especially when they have been repeatedly warned that lockdown conditions are creating the perfect storm for online child abuse at the start of this pandemic,” said Martha Kirby of the NSPCC.
“The crisis has revealed how unwilling tech companies are to prioritise the safety of children, reacting to harm after it has happened rather than designing basic safety features into their sites to prevent it in the first place,” she said.
On Facebook itself, however, the number of removals of such posts increased.