Facebook, no stranger to misinformation on its platform, attempted an uncharacteristically bold move as the COVID-19 pandemic took hold around the world: instead of merely labelling posts that discouraged vaccination or spread COVID-19 conspiracy theories about masks, social distancing and other themes, it would remove that content altogether -- a big step, considering how often engagement (good or bad) trumps just about everything else in its business. In total, it has removed more than 25 million pieces of COVID-19 misinformation worldwide since the start of the pandemic. But now, it looks like the company is exploring how it might change course once again.
Parent company Meta is exploring whether it should halt its removal policy and revert to labels and algorithmic demotion "either directly or through our third-party fact-checking program" when those kinds of posts come up. In other words: it's considering leaving potentially false or harmful information up, and labelling it instead of removing it.
Nick Clegg, the former UK politician who is now the company's president of global affairs, notes in a blog post that Meta has asked its Oversight Board for advice on whether to reconsider those measures. (The Oversight Board is the group formed by the company to act as an independent auditor of content decisions and moderation on Facebook and Instagram, providing regular reports on activity and proactively making calls as well as responding to appeals; it just got a fresh $150 million in funding from Meta to keep operating.) The request comes amid the wider downgrading of COVID-19 measures in many countries, "as many, though not all, countries around the world seek to return to more normal life."
Normal life, it would appear, seems destined to be filled with more misinformation.
Clegg's arguments are ones that Meta (and previously Facebook) has long made, both internally and externally.
On one hand, he says Meta is reconsidering its position because, essentially, the situation no longer feels as critical as it did in many parts of the world. (I guess he hasn't seen the resurgence of cases in many regions due to new variants?) And on top of that, Meta itself has put in place other ways of accessing information -- he's referring here to the company's COVID-19 Information Center, its labelling policies, guidance from public health authorities and more. "Meta’s COVID-19 Information Center has connected over two billion people across 189 countries to helpful, authoritative COVID-19 information," he writes.
On the other hand, this underscores an interesting shift for Meta: hewing once again closer to its basic idea of "free expression" -- regardless of whether you agree with everything that's expressed -- which at the end of the day is what will bring more people to posting and interacting on its platforms, for better or worse.
"Resolving the inherent tensions between free expression and safety isn’t easy, especially when confronted with unprecedented and fast-moving challenges, as we have been in the pandemic," Clegg writes. "That’s why we are seeking the advice of the Oversight Board in this case. Its guidance will also help us respond to future public health emergencies."
Faced with tough business decisions in what is a tough time for the company, it's really not clear whether Meta will go for what is "right" in the misinformation sense, or what is "right" in the "free speech" sense. The latter by definition means more content, more controversy and more engagement -- better for business.
Yes, people might simply get bored and stop posting content that falls into the misinformation category, but anything less than that ideal feels like a wash: a win for one side will be a loss for the other.
It's worth pointing out Clegg's phrasing there: the request is being framed as advice. Although the Oversight Board was created with the power to make binding decisions on content, it also makes non-binding advisory calls, and Clegg specifies this particular request as "advisory" and therefore non-binding.
That means that if the Oversight Board says Facebook and Instagram should continue removing this content, Meta could choose to ignore that; conversely, Meta could choose to keep removals in place for some or all cases, even if the Oversight Board believes it could fall back to a less aggressive stance. Whichever decision gets taken, it will also potentially mean more moderation work and more appeals.