🔥 The Hottest Takes on Free Speech & Twitter's War with Trump
Should Facebook and Twitter fact check the president?
Yo! ✌️ I’m Brett! I am a Product Manager, musician, and Twitter power user. Twitter is where all the action happens in the tech industry but it’s not easy to keep up with. I write this weekly newsletter to help people stay informed on the most important discussions happening on Tech Twitter.
It’s ok to brag 😜Share this with a friend or subscribe if you haven’t!
⏱ A quick recap - Trump lied, Twitter denied
Twitter took action on three of Trump’s tweets. The first and second, because they contained misinformation. More context here.
The third, because it glorified violence. More context here.
Donald J. Trump @realDonaldTrump: ...These THUGS are dishonoring the memory of George Floyd, and I won’t let that happen. Just spoke to Governor Tim Walz and told him that the Military is with him all the way. Any difficulty and we will assume control but, when the looting starts, the shooting starts. Thank you!
Trump was upset.
He signed an Executive Order on Preventing Online Censorship attacking Section 230, the law that shields companies from liability for content posted on their platforms. You read that correctly - Section 230 actually supports free speech.
Facebook’s role in this whole matter further complicated things. While Zuck condemned repealing Section 230, he refused to take action against the same violence-glorifying post from the president that Twitter did.
This turned out to be incredibly poorly timed: the George Floyd protests, condemning the very violence the post glorified, were reaching international scale. Facebook employees, typically silent on these matters, revolted.
🚨 The big question - free speech or nah?
UGC platforms like Facebook, YouTube, and Twitter all have policies around acceptable content. The obvious stuff is universally banned - beheadings, child pornography, etc. But things get a lot more confusing when you get into misinformation or even hate speech and threats of violence. Despite platforms having policies on these matters, enforcement is incredibly inconsistent, especially when public (political) figures are concerned.
Here’s Zuckerberg saying Facebook would take down content threatening violence - despite recently declining to do so when that content came from Trump.
Let’s be clear - this is really, really important to figure out. Misinformation, hate speech, and threats of violence are obviously very dangerous. But at the same time, we must be aware that these platforms have become the town square for the world. Any policies, good or bad, dramatically impact how the world communicates, and there’s no clear-cut answer on what is right.
At the same time, there is more pressure than ever for platforms to figure it out as disinformation continues to accelerate and mutate.
📐Content moderation is incredibly difficult
One of the key requirements of reducing misinformation on platforms is identifying it in the first place. Turns out, it’s not as easy as it sounds:
Apparently there were even inaccuracies in the article Twitter surfaced on the Trump tweet they labeled as misinformation. Sheesh.
Misinformation is also constantly changing - like a virus. Here’s some of what’s to come:
On the topic of hate speech and threats of violence - the most extreme forms of both are obvious. Things get a lot more difficult to pin down when people use intentionally imprecise wording or dog whistles. One person’s nuance is another person’s hate speech.
Content moderation is incredibly challenging on a case-by-case basis - as evidenced by the recent Trump/Twitter controversy - let alone at scale.
🌋Centralization is dangerous
We’ve done some pretty incredible things as a species, so it’s a bit preposterous to think we wouldn’t be able to solve content moderation at scale eventually. The true challenges come when we define “we.”
Balaji S. Srinivasan @balajis: I don’t think Twitter & Facebook should be in the business of filtering out “untrue” statements from their networks. But the deeper question is whether they can confidently say something is false even if the New York Times says it is true. Because this does happen. https://t.co/rWHacHuTl4
Whether we’re talking about Facebook/Twitter/etc or the US government moderating content on the internet, we’re talking about centralizing control of one of the most powerful weapons in the world - speech - in the hands of a select few.
Orwell would roll over in his grave if we got that far.
Check out this fake product launch I did for a keyboard app that censors speech👇
There’s probably a formula here: centralization * (corruption + incompetence + maliciousness) = misery & oppression. We see this playing out today with the criminal justice system’s mistreatment of African Americans, and have seen it play out over and over again elsewhere throughout history.
Business models, reelection cycles, and other means by which organizations collect resources cloud their judgment. Even Facebook’s advocacy for free speech can be tied back to its business model:
Less free speech on Facebook
= less usage
= fewer ads / lower ad $
= less revenue for Facebook
While this is definitely a cynical perspective, it’s important to keep in mind when designing safe and legitimate content moderation systems.
😡 People will always question legitimacy
Any time you have systems in which some pieces of content are highlighted and others are not - newspaper editorial boards, newsfeed algorithms, etc. - people will protest. No matter how transparent these systems may be (and most are not today), people will fear repression when content they agree with isn’t highlighted.
As social media continues to polarize through echo chambers, this problem of unrelenting skepticism may only be getting worse.
Jack Taddeo @jacktaddeo: Canceling - One side wants to ostracize dissidents and thinks the opposition promotes hate speech. One side defends hateful opinions in the name of freedom and thinks the opposition is Big Brother. -> can’t talk about free speech or figure out proper paths to redemption
🤘Decentralization and transparency are the key
Even Jack seems to agree that neither Twitter nor any other single organization should be the arbiter of truth. Whatever ends up taking on this role should be maintainable and reviewable by anyone.
Balaji S. Srinivasan @balajis: It shouldn’t be tech companies per se getting into fact checking. It should be open source technology. Free, universally available code and data for epistemology. Take a piece of text, parse it, extract assertions, compare to explicitly specified knowledge graphs and oracles. https://t.co/gDOEmZn7S4
Many platforms today do have decentralized tools for content moderation. For example, Twitter allows anyone to report tweets that contain misinformation. Unfortunately, the fact that these tools still reside within the platform means they are still controlled by the platform. They may indeed reduce misinformation but they will not be trusted by groups that feel disenfranchised and are still at risk of abuse.
There are a lot of approaches to solving this challenge - heck, I even tried building something back in 2016 - but Balaji is right on the money here. Fact checking (and identification of hate speech and threats of violence) should be open sourced.
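To make Balaji’s pipeline concrete - parse text, extract assertions, compare them to explicitly specified knowledge graphs - here’s a minimal toy sketch. Everything in it (the mini knowledge graph, the relation list, the naive extraction heuristic) is a hypothetical illustration, not a real fact-checking system:

```python
import re

# Hypothetical, explicitly specified knowledge graph of
# (subject, relation, object) triples - a toy stand-in for the
# "knowledge graphs and oracles" in the tweet above.
KNOWLEDGE_GRAPH = {
    ("section 230", "protects", "platforms"),
    ("twitter", "labeled", "three trump tweets"),
}

# Relations the toy extractor knows how to spot.
RELATIONS = {"protects", "labeled"}

def extract_assertions(text):
    """Naively split text into sentences and pull out
    (subject, relation, object) triples around known relation words.
    A real system would need parsing, coreference, entity linking, etc."""
    triples = []
    for sentence in re.split(r"[.!?]", text.lower()):
        sentence = sentence.strip()
        for rel in RELATIONS:
            match = re.search(rf"^(.+?)\s+{rel}\s+(.+)$", sentence)
            if match:
                triples.append((match.group(1).strip(), rel, match.group(2).strip()))
    return triples

def check_claim(subject, relation, obj, graph):
    """'supported' if the exact triple is in the graph, 'contradicted' if the
    graph asserts a different object for the same subject and relation,
    'unknown' otherwise."""
    if (subject, relation, obj) in graph:
        return "supported"
    if any(s == subject and r == relation for (s, r, _) in graph):
        return "contradicted"
    return "unknown"

for s, r, o in extract_assertions("Section 230 protects platforms. Section 230 protects nobody."):
    print((s, r, o), "->", check_claim(s, r, o, KNOWLEDGE_GRAPH))
```

Note the hard part this sketch dodges: a claim only comes back “contradicted” if the graph already covers that subject and relation, so who builds and governs the graph is exactly the open (and open-source) problem.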
Shoot me a message if you’re building something in this space! 👇
The world has been pretty dystopian lately. Just remember: “the arc of the moral universe is long, but it bends toward justice” (MLK).