Zuckerberg on Rogan: Unveiling Government Pressure on Social Media Moderation
Meta CEO Admits Biden Admin's Role in Censorship, Discusses Future of Free Speech on Platforms
In the latest episode of the Joe Rogan Experience, Mark Zuckerberg, the CEO of Meta, discussed government-induced content moderation and the future of social media for nearly three hours (video below). The episode came out on the same day, January 10, 2025, that Zuck returned to Mar-A-Lago for a second time. As Zuck makes the rounds of contrition, some are happy to forgive his past dirty deeds, while others, not so much.
I personally was put into Facebook “jail” (suspended) for 3 days, then 7 days, and ultimately 30 days. As Zuck mentioned, I was one of many who never violated any of their terms of service but was caught up in their failed algorithm. At the time, I managed 11 pages; two were political (neither was the subject of the suspensions). It was not even any of the honest articles I posted from what was, at the time, the Palm Beach Examiner, which has since morphed into this Substack. To Facebook’s credit, they never banned or took down any of my articles on COVID or health, as they have all proven to be completely factual.
But enough about my misery using Facebook and trying to dance around its failed algorithms. What has been truly disturbing is his acknowledgment, once again (as I said, he has been on an apology tour without the apology), of the intense pressure from the Biden administration to take down posts and delete user accounts over things that undercut the government narrative. He even mentioned how it seemed like something out of Orwell’s ‘1984,’ and how the Biden “team” would swear and yell at his content management staff to remove posts. How Biden, Psaki, and the rest of his team have not been sued for clear violations of the First Amendment is hard to understand.
After all, the First Amendment prohibits the government from abridging free speech. And while it applies directly to government actions and not to private companies like social media platforms, it must surely apply when the government strongly pressures private companies over what can be posted on their sites, right?
Well, the Missouri v. Biden lawsuit accused the Biden administration of coercing social media platforms into censoring certain viewpoints, particularly those skeptical of COVID-19 policies and election integrity. The lawsuit came about after the “Twitter Files” were released, proving these actions. The case made it all the way to the Supreme Court, which in June 2024 (as Murthy v. Missouri) overturned the lower court rulings solely on the grounds that the plaintiffs lacked standing to sue, never addressing the core issue.
Meanwhile, the Biden administration maintains that its communications with social media companies merely alerted them to content that violated the platforms’ own policies on misinformation that could harm the public. However, that argument has been proven untrue by the “Twitter Files” and by Zuck’s own statements before Congress and in various online videos: factual information was removed simply because the Biden administration disliked having it out there for public consumption.
While there is evidence of government influence over social media content moderation, the legal threshold for proving a First Amendment violation (coercion versus mere persuasion) has not been definitively crossed in a way that sustains legal action, as the Supreme Court's latest ruling makes clear.
Zuckerberg admits to initial missteps in dealing with misinformation, where deference to media critics led down a slippery slope of content policies that curbed free expression. The introduction of fact-checking systems was well-intentioned but flawed, often reflecting bias in what was deemed worthy of scrutiny. This scenario underscores the libertarian argument against centralized control over speech, favoring instead a decentralized, community-driven approach like Twitter's (now X's) Community Notes.
Now Zuckerberg has announced that Meta is adopting approaches X has found successful, such as “Community Notes”-style, community-driven fact-checking and AI for detecting inauthentic behavior. These innovations aim to balance the scales of moderation, reducing human error and bias. The challenge, however, is ensuring these technologies don't become tools of oppression but remain facilitators of open discourse.
From my point of view, the heart of the issue lies in the power dynamics among the government, tech giants, and the individual. In an ideal society, individuals and communities self-regulate; free speech is sacred, and technology is a tool for empowerment, not control.