Scientists Respond to FTC Inquiry into Tech Censorship
Spence Purnell
Resident Senior Fellow, Technology and Innovation, R Street Institute
Content moderation as a practice is evolving, with platforms like X and Meta adopting decentralized models that flag misinformation instead of removing posts. Research shows that most moderated (removed) content is spam or explicit material, not political speech.
Conservative claims were, in some cases, fact-checked and moderated more often than liberal ones, but this may be due to their more frequent reliance on lower-rated sources, a pattern that held even when source ratings were assigned by ideologically diverse or conservative-only groups. Motivated reasoning can make such sources more appealing, bypassing critical-thinking filters. Bias in fact-checking is also a factor: a 2023 Harvard survey found that 90% of misinformation experts lean left, which may have influenced early moderation practices.
Newer decentralized models address this concern with “bridging” algorithms, which require users on both sides of an issue to evaluate flagged content before a note is attached. One study found that up to 97% of notes were rated “entirely” accurate by an ideologically diverse group of users. This approach allows controversial content and political speech to remain online while attaching critical, cross-ideologically vetted context that users can trust, knowing that opposing viewpoints contributed to the final rating. A simplified sketch of the bridging idea follows.
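The sketch below is a minimal, hypothetical illustration of the bridging rule described above, not any platform's production algorithm (X's open-source Community Notes scorer, for example, uses a more sophisticated rating model). The function name `note_is_helpful`, the viewpoint-cluster labels, the 70% agreement threshold, and the minimum rater count are all illustrative assumptions.

```python
# Hypothetical sketch of a "bridging" rating rule: a note is published only
# when raters from *every* viewpoint cluster independently find it helpful.
# Illustrative only; thresholds and labels are assumed, not any platform's real values.

from dataclasses import dataclass

@dataclass
class Rating:
    rater_cluster: str   # e.g., "left-leaning" or "right-leaning" (assumed labels)
    helpful: bool        # did this rater find the note helpful?

def note_is_helpful(ratings: list[Rating],
                    threshold: float = 0.7,     # assumed per-cluster agreement bar
                    min_per_cluster: int = 5) -> bool:
    """Publish a note only if each viewpoint cluster independently rates it helpful."""
    clusters = {r.rater_cluster for r in ratings}
    if len(clusters) < 2:
        return False  # no cross-ideological agreement is possible yet
    for cluster in clusters:
        cluster_ratings = [r for r in ratings if r.rater_cluster == cluster]
        if len(cluster_ratings) < min_per_cluster:
            return False  # not enough raters from this cluster
        helpful_share = sum(r.helpful for r in cluster_ratings) / len(cluster_ratings)
        if helpful_share < threshold:
            return False  # this cluster does not agree the note is helpful
    return True  # every cluster cleared the bar, so the note "bridges" the divide

# Example: a note rated helpful by majorities on both sides gets published.
ratings = ([Rating("left-leaning", True)] * 6 + [Rating("left-leaning", False)] * 2 +
           [Rating("right-leaning", True)] * 5 + [Rating("right-leaning", False)] * 1)
print(note_is_helpful(ratings))  # True
```

The design point is that no single ideological bloc can attach a note on its own; context appears only when both clusters agree, which is why such notes tend to earn broader trust.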
Regarding the possibility of algorithmic bias: to serve both users and advertisers, platforms use algorithms that amplify content they predict users will engage with, including political content. Users create their own echo chambers by liking and sharing content they agree with while ignoring or downvoting opposing views; the algorithm then amplifies content based on those behaviors, as sketched below. Algorithms reflect user behavior rather than enforcing ideological preferences, though biases in content moderation and fact-checking may still contribute to perceived disparities.
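The following is a minimal sketch of the engagement-prediction feedback loop described above, under the assumption of a simple per-user scoring function. Real ranking systems rely on machine-learned models; the signal names (`likes_by_topic`, `hides_by_topic`) and weights here are illustrative, and nothing in the scoring refers to ideology.

```python
# Hypothetical sketch of engagement-based ranking: posts are scored by how likely
# a given user is to engage with them, based on that user's own past behavior.
# The feedback loop (liking similar content -> more of it surfaces) is what
# produces echo chambers; the score never inspects a post's political leaning.

from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: str
    topic: str

@dataclass
class UserHistory:
    likes_by_topic: dict = field(default_factory=dict)   # topic -> like count
    hides_by_topic: dict = field(default_factory=dict)   # topic -> hide/downvote count

def predicted_engagement(user: UserHistory, post: Post) -> float:
    """Score a post by the user's past reactions to its topic (assumed weights)."""
    likes = user.likes_by_topic.get(post.topic, 0)
    hides = user.hides_by_topic.get(post.topic, 0)
    return 1.0 * likes - 2.0 * hides  # illustrative weights, not any platform's model

def rank_feed(user: UserHistory, candidates: list) -> list:
    """Order candidate posts by predicted engagement for this specific user."""
    return sorted(candidates, key=lambda p: predicted_engagement(user, p), reverse=True)

# Example: a user who likes topic_a and hides topic_b sees topic_a posts ranked
# first, purely as a reflection of that user's own behavior.
user = UserHistory(likes_by_topic={"topic_a": 12}, hides_by_topic={"topic_b": 4})
feed = rank_feed(user, [Post("1", "topic_b"), Post("2", "topic_a")])
print([p.post_id for p in feed])  # ['2', '1']
```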