The panel on stage at the Knight Foundation's Informed event is Elon Musk's nightmare blunt rotation: Techdirt editor Mike Masnick, Twitter's former safety lead Yoel Roth, and Bluesky CEO Jay Graber, who have come together to discuss content moderation in the fediverse.
It's been more than a year since Musk showed up at Twitter HQ with a literal sink in tow, but many social media users are still a bit nomadic, floating among various emerging platforms. And if a user chose to leave Twitter in the Musk era, they're likely looking for a platform with actual moderation policies, which puts even more pressure on leaders like Graber to strike the fragile balance between heavy-handed over-moderation and a fully hands-off approach.
"The whole philosophy has been that this needs to have a good UX and be a good experience," Graber said about her approach to running Bluesky. "People aren't just in it for the decentralization and abstract ideas. They're in it for having fun and having a good time here."
And at the start, users were having a good -- really good -- experience.
"We had a really high ratio of posters to lurkers. On a lot of social platforms, there's a very small percentage of people who post, and a very large percentage of people who lurk," Graber said. "It's been a very active posting culture, and it continues to be, although the beginning was extremely high, like 90-95% of users were all posting."
But Bluesky has faced some growing pains in its beta as it figures out what approach to take to delicate content moderation issues. In one incident, which Roth asked Graber about on the panel, users discovered that Bluesky did not have a list of words banned from appearing in user names. As a result, users started registering account names with racial slurs.
"At the time last summer, we were a really small team, like less than ten engineers. We could all fit around a conference table," Graber said. When content moderators discovered the issue with slurs in usernames, the team patched the code, which is open source, so users could see the implementation of the word lists happen in real time, which sparked further debate. "We learned a lot about communication transparency and being really proactive.... One of the reasons we've stayed in beta so long is to give ourselves some space to get this right."
Since then, both Bluesky's user base and its team have grown. Bluesky hired more engineers and content moderators, while its total number of users increased from about 50,000 at the end of April 2023 to over 3 million this month. And the platform still isn't open to the public.
"It's fair to say that about half of our technical product work has been related in some way to trust and safety, because moderation is quite core to how this works in an open ecosystem," Graber said.
For platforms like Bluesky, Mastodon and Threads, content moderation challenges become even more complicated when you add in the variable of the fediverse.
Once the AT Protocol is fully up and running, anyone will be able to build their own social network atop Bluesky's infrastructure -- Bluesky, as a social network, is just one app built on the protocol. But this means that as new networks crop up on the AT Protocol, the company will have to decide how (or if) it should regulate what people do on the platform. For now, this means Bluesky is building what it calls "composable moderation."
"Our broader vision here is composable moderation, and so that's essentially saying that on the services we run, like the app, that we set a baseline for moderation," Graber said. "But we want to build an ecosystem where anyone can participate [in moderation], and third party is really first party."
Graber explains the complicated concept further in a blog post:
Centralized social platforms delegate all moderation to a central set of admins whose policies are set by one company. This is a bit like resolving all disputes at the level of the Supreme Court. Federated networks delegate moderation decisions to server admins. This is more like resolving disputes at a state government level, which is better because you can move to a new state if you don’t like your state's decisions — but moving is usually difficult and expensive in other networks. We’ve improved on this situation by making it easier to switch servers, and by separating moderation out into structurally independent services.
So, Bluesky can mandate that copyright infringement and spam are not allowed, but an individual app built on the protocol can set its own rules, so long as they don't contradict Bluesky's baseline. For example, Bluesky allows users to post adult content, but if someone were to build a more family-friendly server on the AT Protocol, they would have the right to ban adult content from their specific server -- and if someone on that server disagreed with that decision, they could easily port their account over to a different server and retain all of their followers.
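To make the layering concrete, here is a minimal sketch of how "composable moderation" could work in principle: a protocol-wide baseline that every app must enforce, with each server free to stack stricter rules on top. The function names and label values are illustrative assumptions for this article, not actual AT Protocol APIs.

```python
# Hypothetical sketch of composable moderation (illustrative only,
# not real AT Protocol code). Posts carry content labels; a post is
# visible on a server only if none of its labels are banned by the
# protocol-wide baseline OR by that server's own additional rules.

BASELINE_BANNED = {"spam", "copyright-infringement"}  # enforced everywhere

def is_allowed(post_labels, server_banned=frozenset()):
    """Return True if a post passes both the baseline rules and the
    server's own layered-on rules."""
    banned = BASELINE_BANNED | set(server_banned)
    return not (set(post_labels) & banned)

# Spam is blocked on every server, because the baseline applies globally.
print(is_allowed({"spam"}))                            # False

# Adult content passes the baseline on the default app...
print(is_allowed({"adult"}))                           # True

# ...but a family-friendly server can layer on a stricter rule.
print(is_allowed({"adult"}, server_banned={"adult"}))  # False
```

The point of the design, as Graber describes it, is that the stricter server's decision binds only that server; a user who disagrees can move their account elsewhere and the same post becomes visible again.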
"One of the issues that we have right now is that, when you just have what Twitter or Meta gives you, and maybe just a few options or checkboxes, that's not really algorithmic choice," Masnick said. "That's not really composable moderation. That's not getting you to the level of really allowing different entities to try different things and to experiment and see what works best."
Users can also choose to use third-party feeds to view content, instead of just choosing from a "recommended" and "following" tab.
"Rather than telling people decentralization has all these benefits in the abstract [...] it's a lot more powerful to just say, here, there's 25,000 custom feeds that third-party developers have built, and you can just choose from them," Graber said.
But since it's still such early days for Bluesky, this composable moderation philosophy hasn't really been tested yet. Meanwhile, companies from Cloudflare to Substack to Mastodon have had to reckon with what to do when dangerous communities organize on their platforms.
"Let's say somebody takes all this code you've been publishing, and the AT protocol, and they build a new network. Let's call it NaziSky," Roth told Graber. "What do you do?"
Mastodon faced such an issue in 2019, when the far-right, Nazi-friendly social network Gab migrated to its servers after being kicked off of GoDaddy. Mastodon's founder condemned Gab, but said at the time that decentralization prevented him from actually taking action -- so, users had to take matters into their own hands. Individual Mastodon servers blocked Gab's server en masse, making it impossible for Gab members to interact with others on the website. But still, Mastodon has to reckon with its open source code being used to power what it calls a "thinly (if at all) veiled white supremacist platform."
"This is one of the trade-offs of open source, which is that there's a lot of benefits -- stuff is open, anyone can collaborate, anyone can contribute, anyone can use the code," Graber said. "That also means people whose values drastically diverge from yours can use the code, grab it and run with it."
As happened on Mastodon, Graber thinks the user base will ultimately set the tone for what is considered acceptable behavior on the platform.
"It's a pluralist ecosystem. There's lots of parties out there, and when they unanimously decide that something is outside the Overton window of the norms of communication, then that becomes sort of the social consensus," Graber said. "If a whole parallel universe emerges, that's possible with open source software, but those communities don't necessarily talk if the norms are so drastically divergent."
Then again, dominant and centralized social platforms like Facebook and X have shown the dangers that can emerge when just a few people are in charge of these moderation decisions, rather than whole communities.
"Unfortunately, you can't turn a Nazi into not a Nazi. But we can limit the impact of the Nazis," Masnick said. "Let's limit their ability to wreak havoc. I think that leads to a better place in the long run."