Is Social Media Racist by Design?

Chris Stokel-Walker
Photo credit: Sean Gallup - Getty Images


Caleb Cain was an unhappy young man who found solace and a sense of belonging in the darkest corners of the internet. He dropped out of college in 2015 and headed home to West Virginia. He had no job, no car, and was stuck in the house all day, sleeping when the sun shone and staying awake through the night, lying in bed watching YouTube.

He started out with comedy videos, but before long he was being served up increasingly strange content by YouTube’s algorithm – the black box that powers the site, and many other social platforms on which we spend our lives. He eventually stumbled upon Stefan Molyneux, a Canadian libertarian who the Southern Poverty Law Center says is an “alleged cult leader who amplifies ‘scientific racism’, eugenics and white supremacism”. Cain loved it. “Soon I was believing these beliefs and I was repeating these beliefs to my friends and family,” he told me for a BBC Radio 4 documentary on YouTube.

He wanted to base the United States’ immigration system on IQ. He wanted the army to round up illegal immigrants and deport them. He wanted gay people to live underground, women to leave the workforce and go home to cook for men, and trans people to undergo conversion therapy to their birth-assigned gender.

Cain’s journey into extremism was an inevitability, made easier by the interconnected world the internet and social media have wrought. It has truly never been easier to spread hate or indoctrinate people. In July, the renowned rapper Wiley was able to broadcast antisemitic thoughts to a global audience through his Twitter following. White supremacists regularly go viral on TikTok, gaming the algorithm to reach millions. (The app alone has removed more than 380,000 videos in the United States so far this year for violating rules around hate speech.) Racism on Instagram is just a simple search away, with the company telling journalists such posts could remain online for longer due to staff limitations caused by the coronavirus. YouTube sends people down algorithmic black holes, turning their thoughts extreme and, in some instances, transforming online hatred into offline violence. The US president says publicly that he can’t decide whether QAnon, a mass conspiracy theory fuelled by social media, is a “bad thing or a good thing”. Meanwhile, Black Lives Matter campaigners claim their TikTok videos were silenced following the killing of George Floyd – something the app says was the result of a glitch in the way hashtags worked.

Photo credit: Joe Raedle

The fissures we’ve watched form in society over the last five or more years are being accelerated and amplified by platforms like Twitter and Facebook. We’ve never been more divided politically, socially or morally. Social media has become too big to fail – and yet it’s failing all of us, turning our world into a toxic mess. Caleb Cain escaped his spiral of hate. He managed to reground himself, gain perspective and now warns people about the pernicious power of social platforms. But he’s a rarity. We’re now massively split and enormously extreme – in part thanks to the apps we use every day.

But how did we get here? And more importantly, now we’re here, how do we get out?

They say sunlight is the best disinfectant, and have done since former US Supreme Court associate justice Louis Brandeis coined the phrase more than 100 years ago.

Brandeis was the son of two Czech immigrants to the United States and a committed campaigner for Jewish rights, believing the Jewish people deserved a permanent country to call their own. His elevation to the Supreme Court was momentous, making him the first Jew to sit on the nation’s highest court – and it wasn’t without its controversies. Nominated by president Woodrow Wilson, he saw his path repeatedly blocked by political opponents. The man who replaced him on the Supreme Court, William O Douglas, called Brandeis “a militant crusader for social justice” – he was the first social justice warrior.

But when Brandeis put forward the idea that “sunlight is the best disinfectant”, he showed a misunderstanding of biology most children are taught in primary school. Sunlight can disinfect, but it is also crucial to growth. Weeds become knottier and more difficult to kill when given a spurt of sunlight. Far from being a cleanser, it’s a propagator. Unpalatable opinions spread further if they’re allowed to be aired more readily – as social media has proved over the past two decades.

“It’s an area where you have lots of different human rights areas in play,” explains Gracie Mae Bradley, interim director of Liberty, a human rights advocacy group. “The internet is a frontier for the right to expression, and it isn’t absolute but is broad in scope and that includes things that might shock or offend or disturb people.” However, that doesn’t mean you can say whatever you want, she says. “You’ve got the right to free expression, but you’ve also got people’s rights to live free from discrimination on the grounds of race or religion.”

The issue is that social media set-ups have historically leaned closer to the former principle than the latter.

One of YouTube’s first dozen employees told me that the video sharing platform, which is one of the major ways we encounter content online, has followed Brandeis’s idea of sunlight being the best disinfectant since its launch in 2005. (The employee asked not to be named.) “The idea was these people with these views exist,” they say. “They’re part of our society. They’ve been marginalised appropriately and swept under the rug appropriately, but there’s no use in pretending they don’t exist. We should know about them so people can confront them.”

Photo credit: Jeff J Mitchell

Key to that argument was the idea that in the early days of YouTube, the content was being pushed into the fringes and “appropriately shunned”. “We thought maybe it wasn’t the worst thing from a free speech perspective and operationally, as long as it’s not too terrible, you can express these ideas and people can confront you about them,” they say. “That lasted about six months.”

YouTube became too big, and certain types of inappropriate content became too popular – taking on a life of their own. “We found that if left to their own devices, those things would get onto the most viewed, and they would stay there,” they say. “It makes for, like, a trashy atmosphere.” As they point out, “if you go to a website and see a bunch of Hitler stuff, you’re probably not going to want to publish stuff about your cat or your baby.”

While the policies changed, the underlying principle didn’t. Outright hate speech was in theory outlawed, but in reality dog-whistle attacks could still thrive on YouTube – and on any other platform, for that matter. The problems had only just begun.

Fifteen years on from the formation of YouTube, 16 years on from Facebook’s founding and 14 years after Twitter burst onto the scene, we’re seeing what these platforms have wrought. A far-right agitator is in the White House. Hate speech predominates. Our society has become divided. Truth is a fiction. And the tech companies have made big bank.

“One of the main problems with social media and how it contributes to, reflects and reinforces white supremacy and anti-Blackness and antisemitism is that ultimately social media is based on a commercial foundation,” says Francesca Sobande, a digital media studies lecturer at Cardiff University, who studies structural inequalities in the media. “At the end of the day social media platforms are there to serve a purpose which is profit. Capitalism from my point of view is based on ultimately very racist foundations.”

While Sobande says social platforms have made attempts to tackle racism online, “the reality is it’s not a priority for these organisations.” It’s also something that, despite their integral role in our society as public forums, isn’t necessarily always the platforms’ fault. “On issues relating to tech and human rights, I think sometimes we look at the site of the problem,” says Bradley, when in fact we should be asking deeper questions. Like: how does someone get to a point in their belief system where they can be presented with racist content and take it seriously, rather than react with revulsion?

“As a society, it’s really easy to say we should do something about social media, when actually the question is: What is the character of the society we live in? What is it that means we are susceptible to content that is racist or antisemitic?” asks Bradley. “The issue is prior to social media. You don’t just mysteriously end up susceptible to the YouTube algorithm. There are lots of other things that have happened before you get to that point.”

Algorithms hold a mirror up to our society – and currently act as an accelerant, speeding people down increasingly extreme rabbit holes. The ability to take small pockets of granular data about individual users and provide them with a personalised experience on apps like Facebook, YouTube, Twitter and TikTok – pushing them into filter bubbles – makes the problem more pernicious. While abhorrent views may be pushed out to the fringes of apps, they still find their way to people algorithmically. And because everyone’s experience of social networks is different, it can be difficult to discern how widespread videos have become, and how calcified views are.

Photo credit: Twitter

It’s only when you find a supposedly fringe viewpoint on your own feed that you realise how effectively social media can slingshot racist ideas into the mainstream. When Wiley wrote that Jewish people were “snakes” and “at war” with Black people, it became abundantly clear that antisemitic sentiment pervades the platforms, and is able to spread rapidly. Wiley was later banned permanently from Twitter, Facebook and Instagram for his comments, with the platforms deciding to make an example of him to discourage similar behaviour from others.

But whether anything will really change is another question entirely.

“Racism and sexism and antisemitism have been big business in many countries, certainly in the United States, for a long time, predating the internet,” says Safiya Umoja Noble, author of Algorithms of Oppression, a book that looks at racism in technological algorithms. “The same kind of racist narratives and tropes, sexist tropes and serious hate-filled dangerous propaganda moves now through the internet, particularly through large platforms at scale with tremendous speed.” It’s become profitable for big tech companies to allow that kind of post to thrive on their platforms, because it drives engagement, and engagement means money.

High-performing posts that go viral demonstrate a platform’s popularity. In many ways, it doesn’t matter what type of engagement takes place with a post: it can be racists and antisemites spewing hatred in the comments section of an unfunny MAGA meme, or right-minded people arguing back and highlighting the inhumanity of the image. It all counts. “They engage with it, even if they don't like the content, and that's incredibly profitable for big tech companies,” says Noble.

It’s also worrying, because the mere presence of a piece of information on social media is increasingly seen as a validation of truth. Nearly half of Brits and Americans use social media as a source of news, according to a massive global survey by the Reuters Institute for the Study of Journalism in April – up five percentage points from January. Two-thirds of Spaniards do, and eight in 10 Argentinians. Split by age, the numbers grow greater still: 61% of younger Brits get their news from social networks.

Yet what’s defined as “news” has changed – in part thanks to the battering traditional media outlets have taken from anti-establishment politicians in the last decade. Tommy Robinson claims the role of journalist while broadcasting lies in contempt of court from outside a courtroom – contempt for which the former leader of the English Defence League, whose real name is Stephen Yaxley-Lennon, was jailed for nine months. Nigel Farage travels to the Kent coast to document the arrival of asylum seekers, restyling himself as a roving freelance journalist while directing vitriol at people fleeing war zones in desperate search of a stable life. Alex Jones exists largely because of social media, and even once his conspiracy-peddling Infowars show was “cancelled” by YouTube and other platforms, the ghost of his odd proclamations still haunts those sites, re-clipped and re-uploaded for all to see.

Photo credit: Luke Dray

In a world where anyone can be a journalist, including racists, it’s difficult to stop the spread of such viewpoints – especially when the platforms incentivise their sharing across the world.

The same problem is true of Google, says Noble. “The public generally relates to these platforms as information or knowledge portals or resources,” she says. “They think of Google search as a fact checker, a place where they can go to authenticate the veracity of things that they see on other channels.” But it’s not – and it’s a place where inequality is perpetuated. In July, tech site The Markup reported that Google’s advertising keyword suggestion system would recommend pornographic keywords to people wanting to advertise against the phrases “Black girls”, “Latina girls” and “Asian girls” – but not “white girls”.

That extends to the results presented to users: the cover of Noble’s book shows how Google used to autocomplete the query “Why are Black women so…”. The suggestions included “angry”, “loud”, “mean” and “lazy”.

“No longer do people go to the library to research something deeply, or have conversations with people they know who might be educated about a thing,” Noble explains. “They don’t take a class, go to university, read the newspaper, or cross check a number of different ideas through different channels. Now people see it through ‘authoritative’ sites like a search engine, and they think it must be true. And so that is a tremendously different kind of impact on the way we understand the social world around us.”

Six percent. 3.7 percent. 3.9 percent. That’s the proportion of Black people employed by Twitter, Google and Facebook respectively – between a quarter and a half of Black people’s share of the US population.

Every tech company trumpets the ground it’s making up on diversity, but that belies the fact that they started from an inherently unequal place. “There’s no doubt that the lack of diversity in Silicon Valley and silicon corridors around the world has an impact on the design of these platforms, and the fact that engineers, user experience researchers, and a whole host of designers and programmers in companies don’t look for certain kinds of problems,” says Noble. “We know this because we see the ways in which these technologies get weaponised against these communities, who are often minoritised in the countries where these companies do business, who don’t have the money and the resources to combat propaganda or dissent.”

She comes to a worrying conclusion: “You have a very small narrow band of people who are really controlling the information and knowledge landscape who are deeply out of tune with the majority of the world.”

Bigots have always existed, but they’ve very rarely been given the means to spread their hatred at such a scale. “What’s much more problematic than Joe Bloggs spewing out some racist idea around a particular individual or group is the way those systems are designed to allow that to happen,” says Charlene Prempeh, a lobbyist and consultant who runs A Vibe Called Tech, which promotes equality through technology. Nor is the problem limited to social networks. Uber, Deliveroo and other apps have enabled racism: Black drivers are more likely to have lower ratings on those apps than white drivers – not because they’re any worse at their jobs, but because the ratings are supplied by human beings, biases and all. “There needs to be a reckoning with how these technologies work, and how structural racism feeds into those interactions,” says Prempeh. “We need to think how things can be designed in a way that reduces the harm that people with racist ideas can inflict on people of colour.”

The time to act is now, agrees Noble. “Many of these technologies have not been with us for very long,” she says. “Now is an incredibly important time to look at public policy and regulation, to put more money into researching the harms and dangers, and to try to roll back and abolish some of these technologies and some of these practices.” That also includes breaking up monopolies in the tech sector – monopolies that allow companies to remain intransigent on policy and to craft algorithms that exacerbate society’s inequalities.

It may seem like a daunting task – because it is. Billions of us use these technologies every day. We’re continuing to feed them data, perpetuating the problem and making it worse with every day that passes. Meanwhile the social giants continue to make money hand over fist. A Black-led boycott of advertising on Facebook, following the platform’s refusal to step in when Donald Trump posted “when the looting starts, the shooting starts”, has had little effect on the company’s bottom line. One thousand advertisers withheld their advertising dollars; 8,999,000 continued to pay up. Facebook reported massive profits anyway.

Photo credit: Chip Somodevilla

For that reason it can seem pointless to act. But that’s misguided. “I don't think we should ever feel defeated, that we can't push back and resist the way these companies do business, in our communities, in our nations and our countries,” says Noble. “There was a time that people thought that you could not reimagine the American economy from a kind of slave autocracy raised from the institution of big cotton and the enslavement of African people.

“Even though we have big tech and AI and algorithms really deeply ingrained in many aspects of our society, we should look to other eras to reimagine how it could be different,” she says.

What could that future look like, and how do we get there? For one thing, we start talking about it. Shouting about it, in fact.

“At this point, people, individuals and organisations that are a part of this cannot claim to be unaware of the reality of the situation,” says Sobande. “It’s not as simple as solving this with surface level representational politics. It’s not as simple as saying: ‘Hire more Black people, or make your team more diverse’.”

It requires a more fundamental rethink of what these platforms are, what purpose they serve, and how we support them. It may also require new brooms in politics. “It does feel as though there have been missed opportunities in the past to do something that might have contributed to a different situation than what we see today,” says Sobande. Inaction from the social giants in the early 2010s, when politics was less polarised, has made movement from the norm even trickier today.

If and when that happens, it’s about maintaining pressure on companies resistant to change by making clear your unhappiness with the status quo. “The more we have these discussions about technology, the more people are going to feel empowered enough to talk about what the problems are, and know a bit more about the detail,” says Prempeh. “It’s not something I think the general public as it stands is educated enough about to be able to identify exactly what it is that they think needs to change. As understanding grows, I think that will help in increasing the pressure.”

Liberty’s Bradley agrees that better education about how these platforms – and our technology more generally – work will help. “What are the skills with which we equip people when they’re thinking about being online? How do we equip people to critically appraise what they’re looking at? Is it a credible source? Should I be taking it on board? Do I understand how these companies work?” she asks. “I just think that there are conversations to be had that aren’t about social media that are far more difficult in a way, and I do think these are under-appreciated by people.”

Those wanting a revolution at the pace of technology may be disappointed. “Quick action is not good action,” says Bradley. We’ve diagnosed the illness – we have a racism problem – but misidentified the cause. It’s not just social media, though that’s definitely an accelerant, an amplifier, and an enabler.

“When you look at what’s happening, at least in British society, link up what happens online with what happens offline, and look at the tenor of the national conversation around, for example, migrants, it feels like in society there’s a much bigger reckoning we have to have that is prior to and in some ways a bit distinct from what’s happening online,” says Bradley. That’s because what’s happening online is not new. “The scale and the speed at which people are being hit with this stuff is different. But in terms of the kind of hatred people are expressing, that’s stuff we’ve needed to deal with as a society for a long time.” And it’s something that changes to Facebook, Twitter, Instagram or TikTok alone won’t fix.
