
Facebook comments like ‘white men are stupid’ were algorithmically rated as bad as antisemitic or racist slurs, according to internal documents


Facebook is changing its algorithms that detect hate speech, according to internal documents.

Internal documents from the social media company indicate that its previous “race-blind” policies made Facebook more vigilant about removing slurs directed at white users while also deleting posts by people of colour.

The change involves re-engineering Facebook’s moderation systems to improve the automatic removal of the worst content on its platform, including slurs against Black people, Muslims, the LGBTQ community, and Jewish people, according to the Washington Post.

Before Facebook’s changes, comments such as “white people are stupid” would receive the same numerical rating as antisemitic or racist slurs, despite the difference in how much each group has historically suffered from abuse.

The first phase of this change has seen Facebook deprioritize comments about “Whites”, “men”, and “Americans”; while the company still considers such attacks hate speech, its algorithm now treats them as “low-sensitivity”, meaning that 10,000 fewer posts are being deleted each day.
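The reporting suggests a two-step pipeline: a classifier scores each flagged post, and the targeted group’s severity tier then decides whether the post is deleted automatically or merely queued for review. The sketch below is purely illustrative – the tier names, confidence threshold, and routing logic are assumptions drawn from the Post’s description, not Facebook’s actual code.

```python
# Purely illustrative sketch of severity-tiered moderation.
# The categories, threshold, and routing below are assumptions based on
# the Washington Post's description, not Facebook's actual systems.

# "Worst" categories keep proactive, automatic deletion.
HIGH_SEVERITY = {"anti-Black", "anti-Muslim", "anti-LGBTQ", "antisemitic"}

# "Low-sensitivity" categories are still classed as hate speech, but are
# no longer deleted automatically - consistent with roughly 10,000 fewer
# posts being removed each day.
LOW_SENSITIVITY = {"anti-white", "anti-men", "anti-American"}


def route_flagged_post(category: str, score: float, threshold: float = 0.9) -> str:
    """Decide what happens to a post a classifier has flagged as hate speech."""
    if score < threshold:
        return "no action"        # classifier not confident enough
    if category in HIGH_SEVERITY:
        return "auto-delete"      # removed proactively by the algorithm
    return "human review"         # deprioritized: left for reports and reviewers


print(route_flagged_post("antisemitic", 0.97))  # -> auto-delete
print(route_flagged_post("anti-white", 0.97))   # -> human review
```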

This is in contrast to Facebook’s previous policies; in 2017, it was revealed that Facebook only deleted slurs and calls for violence when they were directed at “protected categories” based on traits like race, sex, gender identity, and others.

This meant that white men – as a category combining both race and gender – received greater protection than female drivers or Black children, because “drivers” and “children” were not protected characteristics, despite those groups clearly being more at risk of receiving hateful language.

Since then, according to user complaints seen by the Washington Post, Black and Hispanic users – among the most engaged groups on Facebook by overall activity and by the number of videos watched and uploaded – had been raising public awareness about hate speech on the platform even as their own accounts were being suspended.

“We know that hate speech targeted towards underrepresented groups can be the most harmful, which is why we have focused our technology on finding the hate speech that users and experts tell us is the most serious,” said Facebook spokeswoman Sally Aldous.

“Over the past year, we’ve also updated our policies to catch more implicit hate speech, such as content depicting Blackface, stereotypes about Jewish people controlling the world, and banned Holocaust denial.”

Facebook also felt competitive pressure, with executives reportedly worried that users would move to Twitter or Snapchat.

However, critics say Facebook’s overhaul is not specific enough to inspire confidence, and that the company needs to be more transparent about its changes.

“Hate speech is a serious problem that can spread prejudice, inflame violence, and suppress civic participation. When tech companies develop policies designed to manage hate and harassment, they owe it to the public to do evidence-based governance,” said Nathan Matias, an assistant professor of communication at Cornell University.

“So far, the quantitative claims in the company's public reports and civil rights audits have been too vague to interpret clearly. Unless Facebook commits to conducting rigorous research on the impact of their new policies and openly releasing the results of those studies for independent verification, the results of these changes will be impossible to distinguish from window dressing.”

In July 2020, a report that Facebook commissioned into its own practices condemned the company for a series of "serious setbacks" that led to failures on issues including hate speech, misinformation and bias.

It recommended that Facebook build a "civil rights infrastructure" into every aspect of the company, as well as a "stronger interpretation" of existing voter suppression policies and more concrete action on algorithmic bias.

At the same time, Facebook said it would study bias against minorities on its main app and on Instagram, to see how its products could negatively affect minority groups. One month later, the platform’s algorithm was found to be “actively recommend[ing]” Holocaust denial and fascism, according to research by the Institute for Strategic Dialogue (ISD) think tank.
