In a wide-ranging interview on The Joe Rogan Experience podcast last Friday (10th January), Meta/Facebook CEO Mark Zuckerberg struck an introspective note, explaining how his platform had wandered away from its original liberating vision and begun terminating large numbers of accounts for “ideological” reasons. He promised a radical revamp of Meta’s moderation policies, including a sharp reduction in ideological censorship and a move away from “fact-checking” toward the “community notes” system pioneered by X.
Now, it is hardly a coincidence that this dramatic U-turn comes right after the election of Donald Trump to the White House. Nor is it likely a coincidence that it comes in the wake of a dramatic decline in public trust in the media and in the “fact-checkers,” a decline acknowledged by Zuckerberg in this interview.
Indeed, it may very well be good business, at this time, for Zuckerberg to fire his fact-checkers and embrace the community notes system. It may very well be good business, at this time, for Zuckerberg to cut back on the ideologically motivated censorship, so as to make his platform more palatable to conservatives and free speech advocates.
Furthermore, even if Meta became an enthusiastic free speech platform, this would not remove the fact that the fate of billions of users and the fate of free speech in the world depend on the moral and political beliefs of a handful of Big Tech moguls. A change in moderation policies at Meta falls far short of what is needed to make our digital public sphere safe for free speech.
However, none of this can take away from the fact that this declaration of intent to pull the plug on fact-checking and radically reduce the level of censorship of public debate has the potential to reshape the whole digital landscape, and open up spaces for voices that have been systematically silenced in the past, whether on Covid vaccines, immigration, the transgender debate, or other controversial issues.
Let us not forget that Meta/Facebook, which also owns Instagram, WhatsApp, and Messenger, reported over 3 billion daily active users across its platforms for the third quarter of 2024, outpacing competitors like YouTube, TikTok, Telegram, and Twitter/X. To put that in perspective, we are talking about more than a third of our planet’s estimated population of roughly 8.2 billion.
Obviously, time will tell to what degree Zuckerberg will actually follow through on these promises, and lofty ideals have a way of getting lost in translation, especially on massive platforms, as we see in certain instances of censorship on Twitter/X.
Notwithstanding these caveats, it is likely that we will see a much wider range of debate on Meta/Facebook in the coming months and years. It is simply not plausible to suggest that Zuckerberg’s public “mea culpa” about past censorship, and his public commitment to shut down his fact-checking programme and cut back on ideologically motivated censorship, will be inconsequential for Meta’s policies going forward.
So what exactly is Zuckerberg proposing to do? We can get some idea of the sorts of policy changes in store at Meta from his recent interview on The Joe Rogan Experience. Here are some key highlights from the interview. All quotations are Zuckerberg’s own words, verbatim:
Meta/Facebook’s Drift Away from Free Expression
"I think at some level you only start one of these companies if you believe in giving people a voice."
"But it was really in the last 10 years that people started pushing for like ideological-based censorship."
“(I) started off very pro-free speech, free expression. And then over the last 10 years, there have been these two big episodes. It was the Trump election and the aftermath, where I feel like in retrospect, I deferred too much to the critique of the media on what we should do."
"We were like, all right, we're just going to have this system where there's these third party fact checkers, and they can check the worst of the worst stuff…So things that are very clear hoaxes…So that was sort of the original intent. We put in place the system and it just sort of veered from there."
"I think to some degree it's because some of the people whose job is to do fact-checking, a lot of their industry is focused on political fact-checking, so they're just kind of veered in that direction."
“We just basically got to this point where there were these things that you just like couldn't say, which were mainstream discourse.”
"I kind of think after having gone through that whole exercise, it, I don't know, it's something out of like, you know, 1984, one of these books where it's just like, it really is a slippery slope."
The Excesses of Pandemic Censorship, Partly Under Government Pressure
"And then, you know, in 2020, there was COVID.... We just faced this massive, massive institutional pressure to basically start censoring content on ideological grounds."
"They (members of the Biden administration) basically pushed us and said, you know, anything that says that vaccines might have side effects, you basically need to take down."
"But when it went from two weeks to flatten the curve… and like in the beginning, it was…, OK, there aren't enough masks. Masks aren't that important. To then, it's like, oh, no, you have to wear a mask. And everything was shifting around."
"I mean, it's like our government is telling us that we need to censor true things. It's like, this is a disaster."
Introduction of More Permissive Moderation Policies
"I think what Twitter and X have done with community notes, I think it's just a better program (than our fact-checking approach). Rather than having a small number of fact checkers, you get the whole community to weigh in….When people usually disagree on something, tend to agree on how they're voting on a note, that's a good sign to the community that there's actually a broad consensus on this and then you show it. And you're showing more information, not less."
"When you're talking about nation states or people interfering, a lot of that stuff is best rooted out at the level of accounts doing phony things."
"If it's OK to say on the floor of Congress, you should probably be able to debate it on social media."
"But by far the biggest set of issues we have…is, OK, you have some classifier that's trying to find, say, like drug content….And then you basically have this question, which is how precise do you want to set the classifier? So do you want to make it so that the system needs to be 99% sure that someone is dealing drugs before taking them down, do you want it to be 90% confident, 80% confident, and then those correspond to amounts of, I guess the statistics term would be recall, what percent of the bad stuff are you finding."
"One of the things that we're going to do is basically set them to…require more confidence, which is this trade-off. It's going to mean that we will maybe take down a smaller amount of the harmful content, but it will also mean that we'll dramatically reduce the amount of people whose accounts were taken off for a mistake, which is just a terrible experience."
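The trade-off Zuckerberg describes in the two quotes above can be sketched in a few lines of code: raising the confidence threshold a classifier must clear before a takedown reduces wrongful removals (false positives), but also catches a smaller share of genuinely harmful content (lower recall). The sketch below is purely illustrative, with invented scores and labels; it is not Meta's actual system.

```python
# Illustrative sketch of the moderation trade-off Zuckerberg describes:
# a higher confidence threshold means fewer wrongful takedowns, but
# also a smaller share of harmful posts caught (lower recall).
# All scores and labels below are invented for illustration.

def takedown_stats(scores, labels, threshold):
    """Apply a confidence threshold; return (recall, wrongful takedowns).

    scores: classifier confidence that each post is harmful (0.0-1.0)
    labels: ground truth, True if the post really is harmful
    """
    flagged = [s >= threshold for s in scores]
    true_pos = sum(f and l for f, l in zip(flagged, labels))
    false_pos = sum(f and not l for f, l in zip(flagged, labels))
    total_harmful = sum(labels)
    recall = true_pos / total_harmful if total_harmful else 0.0
    return recall, false_pos

# Hypothetical posts: classifier score and whether each is truly harmful.
scores = [0.99, 0.95, 0.91, 0.85, 0.84, 0.70, 0.60]
labels = [True, True, False, True, False, False, True]

for threshold in (0.80, 0.90, 0.99):
    recall, false_pos = takedown_stats(scores, labels, threshold)
    print(f"threshold {threshold:.2f}: recall {recall:.0%}, "
          f"wrongful takedowns {false_pos}")
# threshold 0.80: recall 75%, wrongful takedowns 2
# threshold 0.90: recall 50%, wrongful takedowns 1
# threshold 0.99: recall 25%, wrongful takedowns 0
```

As the toy numbers show, demanding 99% confidence eliminates the wrongful takedowns but lets most of the harmful posts through, which is exactly the dial Zuckerberg says Meta intends to turn toward "more confidence."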
"And so, I mean, at this point,…I think a lot of people look at this as like a purely political thing. You know, it's because they they kind of look at the timing and they're like, hey, well you're doing this right after the election….Okay I try not to…change our content rules…right in the middle of an election either, right?…there's no good time to do it…."
"I think that this is going to be pretty durable, because at this point, we've just been pressure tested on this stuff for like the last eight to 10 years with…these huge institutions just pressuring us."