In an era of rapid information exchange, fact-checking has re-emerged as a potentially important tool to help citizens obtain accurate information about political issues and to combat misinformation. Misinformation, typically defined as factually inaccurate information, is a growing concern on social media, where it can spread widely and easily.

Traditional fact-checking organizations aim to address this challenge, but their effectiveness, accuracy, and potential biases remain subject to intense debate. The general research consensus is that while fact-checking can improve people’s awareness of facts, it cannot address more causally complex questions—and political bias often prevents any meaningful change in opinions or beliefs. Fortunately, new decentralized models have the potential to increase the quality and effectiveness of fact-checking, though important limitations remain. This essay assesses the literature on traditional fact-checking and compares it to newer decentralized models.

Feature | Traditional | Decentralized
Accuracy | Accurate, thorough, well-researched | Highly accurate; less original research; fewer dedicated checkers
Bias | Hard to detect; immeasurable; perception-based | Diversity of viewpoints ensured through network data; problems of perception
Effectiveness | Proactive and rapid; improves knowledge of facts; does not require consensus; not trusted across diverse perspectives; not proven to reduce misinformation | Reactive and slower; improves knowledge of facts; requires consensus; trusted across diverse perspectives; shows promise for reducing misinformation

Traditional Fact-Checking

Accuracy

Traditional fact-checking organizations pride themselves on accuracy, and they typically depend upon structured methods and historically reputable sources to verify information. When examining a relatively small subset of data, one recent meta-study found only a single case in which two fact-checking institutions—Snopes and PolitiFact—came to opposing conclusions about a claim, “suggesting a high level of agreement between [the two institutions] in their fact-checking verdicts.” Another study confirmed these findings, observing that fact checkers did “fairly well” on “outright falsehoods” and “obvious truths.”

However, when researchers in both studies examined the entirety of the data, they found considerable disagreement between institutions. The first study called fact-checking “a complex and multifaceted process that involves numerous variables, including the nature of the claims being fact-checked,” while the second attributed the disagreement to more “ambiguous” questions. In other words, the data writ large showed many opposing conclusions; the high level of consensus appeared only when the search was limited to relatively narrow types of fact-related questions during specific periods.

Political claims can be complex, and reducing them to binary labels like “true” or “false” can oversimplify nuanced arguments and make consensus difficult. For example, claims that involve predictions or causal arguments—such as the impact of immigration on crime rates—are not easily reduced to simple factual assessments, making fact-checkers vulnerable to criticism regarding how certain statements are categorized or analyzed. This can lead to conscious or unconscious biases creeping into fact-checking work.

Bias

A frequent criticism of traditional fact-checking is the perception of political bias. For instance, a 2023 paper from Duke University found that liberal fact-checkers gave consistently lower ratings to Republicans than they did to Democrats, suggesting bias in how fact-checkers evaluate claims. A 2011 analysis of 511 PolitiFact claims found that Republican statements were declared false at three times the rate of Democratic statements (roughly 75 percent versus 25 percent). A 2023 Harvard University survey of 150 misinformation experts found that nearly 90 percent identified as either center- or left-leaning, with only 5 percent saying they leaned “slightly right.” Yet factors these studies did not control for, such as which claims were selected for checking, could explain these disparities better than fact-checker bias.

Other studies using alternative methodologies question the scale and consistency of this alleged bias, suggesting that people’s perception of the fact-checker’s quality, rather than the analysis itself, may be the main driver of perceived bias. Ultimately, the question of bias in fact-checking is a highly subjective and complex topic, likely to be at least partly (though by no means exclusively) rooted in users’ beliefs and perceptions. Traditional fact-checking institutions also lack effective mechanisms to combat ideological capture, which may ultimately reduce their effectiveness in combating misinformation.

Effectiveness

Measuring fact-checking’s effectiveness is inherently difficult; however, some studies have analyzed the relationship between fact-checking and people’s beliefs and behaviors, and whether fact-checking ultimately reduces misinformation.

Most studies show that traditional fact-checking improves people’s knowledge of facts but does little to change their beliefs and opinions, attributing this dynamic to “motivated reasoning”—that is, the idea that people reject information contrary to their current beliefs, accepting only information that reaffirms them. Additional research concluded that while fact-checking can correct misinformation, it often fails to change opinions—particularly among highly partisan individuals. A 2017 study of the public perception of fact-checkers showed that most users view them in a negative light, reinforcing the idea that people reject assertions contrary to their worldview. Researchers also found that encountering contrary information can create a “backfire effect” that further strengthens the recipient’s pre-existing beliefs. However, newer research argues that the backfire effect may be less common than initially believed, with most people accepting corrections in matters of fact regardless of their political affiliation—although such corrections do little to change their opinions.

Because humans are social creatures, we often trust other people more than abstract ideas and facts. As noted above, fact-checking institutions may suffer from a reputation for political bias. Additionally, unlike social media platforms, fact-checking institutions have no direct line of communication to users, giving them less reach and no consistent measure of their impact. This does not mean they have no effect on misinformation; rather, there is no readily available measure to demonstrate their success. One built-in advantage of social media is that it generates data researchers can use to study the impact of information interventions, given the right open-source or data-partnership arrangements.

Decentralized Fact-Checking

Accuracy

Decentralized models like X’s “Community Notes” offer potential advantages in terms of accuracy by crowdsourcing verification to a wider, more diverse audience, but their novelty means they have a narrower range of data and social science to back them up.

Nevertheless, a 2024 study of more than 45,000 Community Notes determined that up to 97 percent were “entirely accurate,” with some 90 percent relying on moderately to highly credible sources. A 2021 study assembled a group of politically diverse, untrained test subjects and asked them to act as fact-checkers, finding that “aggregating judgements can substantially improve performance even in highly politicized contexts.” Decentralization can therefore allow a broader range of sources and viewpoints to be considered when assessing the veracity of a claim, which could compensate for the limited inputs of more centralized models. Because no central authority approves the final content, there is an obvious concern that misinformation will slip through; however, both theory and data show that distributed rating systems can match or exceed the accuracy of traditional methods.
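
The statistical intuition behind aggregation can be made concrete. The sketch below is not the 2021 study’s method; it is a minimal illustration of the classic Condorcet jury logic that motivates crowdsourced verification, assuming independent raters who are each right more often than not (the accuracy figures are invented for illustration).

```python
# Minimal sketch (not the study's method): under the Condorcet jury
# model, if each rater independently labels a claim correctly with
# probability p > 0.5, majority-vote accuracy climbs toward 1 as the
# crowd grows. The p values below are illustrative assumptions.
from math import comb

def majority_accuracy(n_raters: int, p_correct: float) -> float:
    """Probability that a strict majority of n independent raters is right."""
    majority = n_raters // 2 + 1
    return sum(
        comb(n_raters, k) * p_correct**k * (1 - p_correct) ** (n_raters - k)
        for k in range(majority, n_raters + 1)
    )

print(majority_accuracy(1, 0.65))   # a single modest rater: 0.65
print(majority_accuracy(11, 0.65))  # eleven such raters: ~0.85
print(majority_accuracy(51, 0.65))  # fifty-one: ~0.99
```

Real raters are not independent, and politicized claims violate these assumptions in practice, which is part of why the bridging mechanisms discussed below matter.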

Bias

Because decentralized models draw contributions from a wide user base, they may be less prone to institutional biases. The 2024 study cited above presents evidence that Community Notes examine “liberal” and “conservative” claims at nearly the same rate. A German analysis found “no clear political orientation of the helpful community notes” and that the algorithm “ensures a certain numerical balance between parties to the left and right.” However, other analyses claim that “users estimated to be more liberal” receive around 50 percent of all notes, whereas “users estimated to be more conservative” receive 24 percent. The same study found that this gap narrowed from 26 percentage points to 12 after Elon Musk purchased the platform, and that excluding the topic of his takeover reduced it to 8 points. The authors conclude that the program’s expansion “leveled the partisan balance.” The distributed nature of Community Notes allows users across the political spectrum to keep each other honest by disputing claims in an open forum. This approach to resolving misinformation has the potential to be less biased, and therefore more trusted.

While algorithms tend to have a poor reputation, they can help address confirmation bias. For example, “bridging” algorithms show people information contrary to their beliefs. Community Notes already uses this process, stating that notes are not shown to users based solely on their popularity but on whether “people who rated it seem to come from different perspectives.” In this fashion, notes help expose users to new views while validating those views by letting users know that notes come from multiple ideological perspectives, thus enhancing their trustworthiness. Because platforms like X have access to network data, they can use it to ensure that diversity in fact-checkers continues even as the platform’s dynamics change over time. This is a major advantage over traditional fact-checking, where there are fewer systems available to ensure ideological diversity.
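
X has open-sourced its Community Notes scoring code, which at its core models ratings with a matrix factorization: a note is scored on its intercept after a latent “polarity” factor absorbs one-sided support. The sketch below is a heavily simplified toy version of that idea, not the production algorithm; the data, hyperparameters, and variable names are invented for illustration.

```python
# Toy bridging-style scorer, loosely inspired by the matrix-factorization
# idea in X's open-sourced Community Notes code. Each rating is modeled as:
#   rating ≈ mu + user_bias + note_bias + user_factor · note_factor
# The one-dimensional factor can absorb a dominant axis of disagreement
# (e.g., partisanship), so a note's intercept (note_b) stays high only
# when support for it cannot be explained by one side of that axis.
import numpy as np

rng = np.random.default_rng(0)

# Toy ratings: (user, note, rating), 1 = helpful, 0 = not helpful.
# Users 0-2 lean one way, users 3-5 the other (hypothetical groupings).
ratings = [
    # Note 0: helpful only to one side -- polarized, not "bridging."
    (0, 0, 1), (1, 0, 1), (2, 0, 1), (3, 0, 0), (4, 0, 0), (5, 0, 0),
    # Note 1: helpful to raters on both sides -- should score higher.
    (0, 1, 1), (1, 1, 1), (3, 1, 1), (4, 1, 1), (5, 1, 1),
]
n_users, n_notes, dim = 6, 2, 1

mu = 0.0
user_b, note_b = np.zeros(n_users), np.zeros(n_notes)
user_f = rng.normal(0, 0.1, (n_users, dim))
note_f = rng.normal(0, 0.1, (n_notes, dim))

lr, reg = 0.05, 0.03
for _ in range(2000):  # plain SGD on squared error with L2 penalties
    for u, n, r in ratings:
        err = r - (mu + user_b[u] + note_b[n] + user_f[u] @ note_f[n])
        mu += lr * err
        user_b[u] += lr * (err - reg * user_b[u])
        # Penalize the note intercept harder: helpfulness must be "earned"
        # across the factor axis, not inferred from one-sided popularity.
        note_b[n] += lr * (err - 5 * reg * note_b[n])
        uf, nf = user_f[u].copy(), note_f[n].copy()
        user_f[u] += lr * (err * nf - reg * uf)
        note_f[n] += lr * (err * uf - reg * nf)

print(note_b)  # expect the cross-partisan note 1 to out-score note 0
```

The design choice worth noting is the asymmetric regularization: the note intercept is penalized harder than the factor term, so one-sided popularity is explained away by the latent axis while cross-partisan support is not. In the real system, a note is displayed only if its score clears a threshold after this kind of factor correction; the toy above captures only that core intuition.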

Another study found that users trusted Community Notes more than traditional fact-checking analyses and that decentralized models “have the potential to at least mitigate trust issues that are common in traditional approaches to fact-checking.” Traditional methods effectively bottleneck the analysis to a set of employees at fact-checking institutions, whom people may perceive as more susceptible to capture and bias. The diverse and transparent nature of Community Notes ensures statements are open to critique from various perspectives and sources, making them less susceptible to these limitations.

Effectiveness

Decentralized fact-checking has the potential to be more effective than centralized models because it directly engages users in the verification process. Unlike a fact-checking institution, which can drift toward bias even unintentionally, the bridging algorithm requires that a diverse set of approved contributors reach consensus on a note before its release. While this increases accuracy and helps overcome the confirmation-bias problem, it may create lag at moments when information is going viral.

A 2023 modeling study claimed that Community Notes have no impact on post deletion and may even increase engagement with flagged posts and follows of the accounts posting misinformation.

However, more recent studies show that Community Notes has been effective at reducing the spread of misinformation. A 2024 review of 285,000 notes found that the presence of a note reduced the number of retweets (re-shares) by nearly 50 percent, reduced comments by a similar amount, and increased the probability of original post deletion by 80 percent. A 2024 University of Chicago study confirms:

[T]he effectiveness of publicly displaying community notes on an author’s voluntary tweet retraction… not only increases the probability of tweet retractions but also accelerates the retraction process among retracted [tweets], thereby improving platforms’ responsiveness to emerging misinformation… demonstrating the viability of crowdsourced fact-checking as an alternative to professional fact-checking and forcible content removal.

Earlier research confirms this effect. In 2021, one pair of social scientists found that attaching warnings or notes to misleading posts can reduce the likelihood that those posts are shared, suggesting that decentralized systems can tangibly impact people’s behavior and perspectives by limiting the spread of misinformation and other suspect claims. This effect was also observed in research conducted by Twitter in the early days of Community Notes.

One limitation of Community Notes is that it cannot be proactive, only reactive. In other words, notes do not limit the spread of misinformation—they just correct it wherever it appears. This means that some pieces of misinformation are able to spread widely before a note is ever attached. The requirement that notes be reviewed by diverse perspectives is good for accuracy but could detract from efficacy due to delays in consensus.

While some evidence is conflicting, newer research suggests that Community Notes have been effective at identifying and reducing misinformation, largely through voluntary retractions rather than forced content removal. Although data on Community Notes’ impact on belief change is limited, the combination of high accuracy ratings, a bridging algorithm, and voluntary corrections indicates they may be at least as effective as traditional fact-checking in improving knowledge.

Conclusion

Ultimately, content moderation is an impossible problem, and any solution will be imperfect. Solutions like preventative content removal may work for the most obvious cases but may not be suitable for contentious political topics with no obvious answers. The market is still searching for effective moderation mechanisms in these cases, but early evidence suggests that Community Notes’ decentralized approach has many advantages over traditional fact-checking and should be welcomed and encouraged as an innovation in combating misinformation. By requiring cross-ideological cooperation, decentralized models may currently be the most effective tool for combating confirmation bias and its role in the spread of misinformation—although there are clear improvements to be made.

First, there is a time delay in attaching notes to misinformation. If notes cannot inform users during a post’s peak virality, their utility may ultimately be limited. One option could be to create and rate notes proactively, so they are ready when a fast-moving information event arises. This could be done by simulating network interactions based on current trends, creating predictive information nodes that can then be mapped to actual users who can write high-quality notes on the predicted topics; a hypothetical sketch of this matching step follows.
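
To make the idea concrete, here is a purely hypothetical sketch of that matching step: given a set of trending terms, assemble a small, viewpoint-diverse pool of contributors with a track record on related topics, so that draft notes can exist before a post peaks. Every field, function name, and threshold below is invented; nothing like this is part of Community Notes today.

```python
# Hypothetical sketch of pre-staging note writers for a predicted topic.
# All data structures and thresholds are invented for illustration.
from collections import Counter

def pick_note_writers(trending_terms, contributors, pool_size=6):
    """Rank contributors by overlap between their past topics and the
    trending terms, then cap each viewpoint cluster at half the pool so
    the resulting group stays ideologically mixed."""
    scored = sorted(
        contributors,
        key=lambda c: -len(set(c["past_topics"]) & set(trending_terms)),
    )
    pool, counts = [], Counter()
    for c in scored:
        if counts[c["viewpoint"]] < pool_size // 2:
            pool.append(c)
            counts[c["viewpoint"]] += 1
        if len(pool) == pool_size:
            break
    return pool

contributors = [
    {"id": "a", "viewpoint": "left",  "past_topics": ["vaccines", "elections"]},
    {"id": "b", "viewpoint": "right", "past_topics": ["elections", "crime"]},
    {"id": "c", "viewpoint": "left",  "past_topics": ["crime"]},
    {"id": "d", "viewpoint": "right", "past_topics": ["vaccines"]},
]
# For a predicted spike in election/crime claims, draft a balanced pool:
print(pick_note_writers(["elections", "crime"], contributors, pool_size=2))
```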

Another limitation is data access. Although Community Notes is open-source, some processes and pieces of information remain non-public. Increasing data access would be one way for social media companies to increase transparency, trust, and knowledge of decentralized moderation systems.

For all their promise, Community Notes do not eliminate the value of traditional fact-checking by institutions that use structured, methodical approaches prioritizing accuracy and thorough research. However, those institutions’ methodologies, the questions they address, and the format of their content may need to adapt to modern information networks to retain their value.
