Free-Market Solutions to Misinformation and Disinformation on Social Networks
To date, the challenges of dealing with misinformation and disinformation online have been immense, and few proposed solutions avoid violating cherished norms of free speech or private property. One recent proposal for a novel online content-moderation system, however, offers hope for ferreting out false and misleading information while preserving both rights.
That solution is information “warrants.”
Warrants are based on First Amendment principles and monetary incentives within the context of an information “free market.” Statements in this market can be true or false; they can be unpopular opinions, conspiracy theories, or fake news. It is up to individual users to decide whether to accept statements that appear to run counter to the truth or to subject those statements to neutral fact-checking or some other internet-governance remedy.
A Boston University study set out to design such a warrant-based system to address misinformation and disinformation while accounting for market failures and externalities in cases where markets do not “self-correct.” Specifically, the study calls for a decentralized information market that incentivizes speakers to avoid misinformation and disinformation through an insurance-based system of penalties and rewards.
The study’s idea is to allow social media participants to first “express their claims as fact or opinion.” If a user chooses to make a factual claim, the user would accompany the post with a time-limited “warrant”: something of monetary value placed at risk. Paid ahead of time, this warrant would serve as an insurance policy to cover any “possible harm” that a false or misleading statement might cause. Thereafter, a reader could challenge the veracity of a warranted post by paying a “modest fee” to cover the costs of adjudication by a fact-checking committee or other “jury of peers.”
If the committee judged the content true, the warrant’s stake would be returned to the original poster; if it determined the claim to be false, the challenger would receive the money or other item of monetary value that secured the warrant. If the claim went unchallenged, the stake would likewise be returned to the poster once the warrant expired. Statements of opinion, which cannot be warranted for validity, could not be imposed on others’ newsfeeds without their (or their moderator’s) consent.
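To make the mechanics concrete, here is a minimal sketch of the warrant lifecycle in Python. Everything in it is hypothetical: the Warrant class, its method names, and the payout logic are one possible reading of the study’s description, not its actual design.

```python
import time
from dataclasses import dataclass
from enum import Enum, auto


class Status(Enum):
    OPEN = auto()        # warranted claim, open to challenge until expiry
    CHALLENGED = auto()  # awaiting adjudication by the fact-checking committee
    RESOLVED = auto()    # verdict rendered, or warrant expired unchallenged


@dataclass
class Warrant:
    """One warranted factual claim. All names and amounts are illustrative."""
    poster: str
    claim: str
    stake: float             # monetary value the poster places at risk
    expires_at: float        # warrants are time-limited
    status: Status = Status.OPEN
    challenger: str = ""
    challenge_fee: float = 0.0  # the "modest fee" that funds adjudication

    def challenge(self, challenger: str, fee: float) -> None:
        """A reader pays a modest fee to send the claim to adjudication."""
        if self.status is not Status.OPEN or time.time() >= self.expires_at:
            raise ValueError("warrant is not open to challenge")
        self.challenger, self.challenge_fee = challenger, fee
        self.status = Status.CHALLENGED

    def resolve(self, judged_true: bool) -> dict[str, float]:
        """Committee verdict: the stake returns to the poster if the claim
        is judged true, or goes to the challenger if it is judged false."""
        if self.status is not Status.CHALLENGED:
            raise ValueError("nothing to adjudicate")
        self.status = Status.RESOLVED
        winner = self.poster if judged_true else self.challenger
        return {winner: self.stake}

    def expire(self) -> dict[str, float]:
        """An unchallenged warrant past its deadline: stake returns to poster."""
        if self.status is not Status.OPEN or time.time() < self.expires_at:
            raise ValueError("warrant is still live or already challenged")
        self.status = Status.RESOLVED
        return {self.poster: self.stake}
```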
Social media posters can thus focus on factual claims, place their resources at risk via a warrant, and receive judgment from an independent, impartial body of reviewers. The warrant mechanism promotes truth in the marketplace of ideas by rewarding truth-telling and penalizing falsehoods, fake news, and conspiracy theories, all while offering a monetary advantage to challengers of false or misleading content. The mechanism also creates a public, per-statement record of claims and challenges, eliminating the need for more widespread censorship, whether internal to the platform or imposed from outside.
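Continuing the hypothetical sketch above, a brief usage example shows the kind of per-statement public record such a system would accumulate:

```python
import time

ledger: list[Warrant] = []  # public, per-statement record of claims and challenges

w = Warrant(poster="alice", claim="The bridge opened in 1937.",
            stake=10.0, expires_at=time.time() + 7 * 24 * 3600)
ledger.append(w)

w.challenge(challenger="bob", fee=1.0)
print(w.resolve(judged_true=True))  # {'alice': 10.0}: stake returned to the poster
```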
Another option for market-based solutions to the problem of misinformation and disinformation is an existing software protocol called “Notes and Other Stuff Transmitted by Relays” (Nostr). Nostr is not an “app” per se, but a decentralized, open-source protocol that software developers can use to build new user interfaces and clients: X clones, YouTube-like video-sharing apps, music-streaming software, and more.
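Because Nostr is just a protocol, its basic unit is a signed JSON event that any client can publish to any relay. Below is a skeleton of the kind-1 text note defined in the protocol’s base specification (NIP-01); the placeholder values are illustrative, and the hashing, signing, and relay transport are omitted for brevity.

```python
import json
import time

# Skeleton of a NIP-01 text note (kind 1). A real client would compute the
# event "id" (a SHA-256 hash over the serialized event), sign it with the
# user's key, and publish it to one or more relays over WebSocket.
note = {
    "pubkey": "<author's hex-encoded public key>",
    "created_at": int(time.time()),
    "kind": 1,   # kind 1 = short text note
    "tags": [],
    "content": "Hello from any Nostr client, via any relay.",
}
print(json.dumps(note, indent=2))
```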
Nostr platforms are known for their adherence to free-speech principles. There is no corporation and no CEO, nor are there in-house trust-and-safety or content-moderation teams that censor speech. Nostr users instead police the network themselves: they report objectionable content by choosing a reason for removal, such as spam, scams, profanity, hatefulness, malicious impersonation, nudity or other graphic content, or illegal behavior. Once an offender has been reported, that person is publicly “named and shamed” for all the community to see. Unfortunately, at this time there is no report category for misinformation or disinformation, but such a category could be added in the future.
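Reporting is itself just another event type. Under the reporting specification (NIP-56), a client publishes a kind-1984 report event that tags the offending note or user with a reason; the sketch below is illustrative only, and a “misinformation” label, if adopted, would simply be a new value in the same slot.

```python
import time

# Sketch of a NIP-56 report event (kind 1984). The reason travels as the
# third element of the "e" (offending note) tag; a "misinformation" label,
# if the community adopted one, would be a new value in that position.
report = {
    "pubkey": "<reporter's hex-encoded public key>",
    "created_at": int(time.time()),
    "kind": 1984,
    "tags": [
        ["e", "<id of the offending note>", "spam"],
        ["p", "<offender's hex-encoded public key>"],
    ],
    "content": "Unsolicited advertising posted across multiple threads.",
}
```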
Another feature of Nostr social media apps is that users can “zap” a favored post, donating a small amount of bitcoin directly to the poster’s digital wallet. The practice rewards users who share the “truth” or other positive content, giving people a monetary incentive to post material that is truthful, successfully vetted by the community, and within the terms of service. The result: Nostr has implemented a free-market mechanism that monetarily incentivizes users to adhere to community rules and disincentivizes objectionable or illegitimate content.
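Zaps follow the same event-based pattern. Under the zaps specification (NIP-57), the tipper’s client builds a kind-9734 zap request naming the post and an amount; a Lightning service pays the resulting invoice and publishes a kind-9735 receipt back to relays. The sketch below is illustrative, with hypothetical values throughout.

```python
import time

# Sketch of a NIP-57 zap request (kind 9734). The amount is denominated in
# millisatoshis (21_000 msat = 21 sats). A Lightning service pays the
# resulting invoice and publishes a kind-9735 zap receipt that clients
# display next to the zapped post. Relay URL and values are hypothetical.
zap_request = {
    "pubkey": "<tipper's hex-encoded public key>",
    "created_at": int(time.time()),
    "kind": 9734,
    "tags": [
        ["p", "<recipient's hex-encoded public key>"],
        ["e", "<id of the favored note>"],
        ["amount", "21000"],                    # millisatoshis
        ["relays", "wss://relay.example.com"],  # where the receipt should go
    ],
    "content": "Great post!",
}
```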
In short, warrant-based and other community-based online content moderation offers a novel approach to policing social media platforms and expunging misinformation, disinformation, and other undesired posts. These systems are rooted in free-market principles and do not depend on the government to regulate speech on social technologies. Other concepts are certain to be introduced in the future. Misinformation, disinformation, and other objectionable content can be controlled without reflexively resorting to government or platform-wide censorship. And that is very good news for proponents of the free marketplace of ideas.