How Crowdsourcing May Reduce Online Misinformation

Professor Richard Staelin found that requiring social media users to rate their posts as true or false may curb the spread of false information

With general elections looming in the United States and around the world, social platforms’ misinformation policies are back in the spotlight, especially given the reported downsizing of some of the teams that platforms had built to prevent the spread of false information, better known as “fake news.”

But given platforms’ hesitancy to wade into the waters of content moderation, a solution may lie in the wisdom of crowds, according to a model developed by marketing professors Richard Staelin of Duke University’s Fuqua School of Business and Yiting Deng of UCL School of Management, University College London.

In their paper published in the journal Marketing Letters, the researchers tested a crowdsourcing intervention aimed at minimizing the spread of fake news.

Currently, they point out, many people share content after quickly skimming the headline or the body of the text instead of critically reading the post. They often do so because they find the post entertaining, and sharing it lets them impress others and collect positive feedback in the form of likes, Staelin said.

Moreover, prior studies have shown that these “quick readers” believe that the message they send is true—or at least is not fake news—he said.

How the intervention works

Staelin and Deng propose that social media companies require posters or re-sharers to also anonymously state whether they believe the message they are posting is true or false. The platform would then aggregate these “veracity” ratings and make them available to anyone subsequently receiving the content.

The researchers’ model assumes that the poster’s network of followers will use this veracity score to decide, in part, whether to read the content: the lower the score, the less likely the receiver is to engage with the post. The model also assumes that some of these readers, perhaps only a small percentage, will read the post carefully and be able to determine whether the content is actually true or fake. Then, regardless of whether a person is a quick reader or a savvier one, they decide whether to reshare the content.

If the message is fake, Staelin said, the more savvy reader will state it is fake, while the quick reader will say it is true. If it is true, then both types of readers will say it is true. 
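The paper presents an analytical model rather than code, but the rating and aggregation mechanics described above can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than the authors’ implementation: the function names are invented, and using the raw score as a read probability is just one simple way to encode “the lower the score, the less likely the receiver engages.”

```python
import random

def reported_rating(post_is_true: bool, reader_is_savvy: bool) -> bool:
    """Rating rule from the article: a savvy reader correctly labels fake
    content as fake, while a quick reader reports everything as true."""
    if reader_is_savvy:
        return post_is_true   # savvy readers detect fakes
    return True               # quick readers always say "true"

def veracity_score(ratings: list[bool]) -> float:
    """Aggregate the anonymous true/false ratings into a public score
    between 0 and 1 (here, simply the fraction of 'true' ratings)."""
    return sum(ratings) / len(ratings) if ratings else 1.0

def will_read(score: float) -> bool:
    """Assumption: a receiver reads the post with probability equal to
    the current public veracity score."""
    return random.random() < score
```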

“The publicly available veracity score is a useful signal to the receiver,” Staelin said. “Is the content worth reading? Would I want, as a sender, to be associated with a post that others think is true or fake?”

The researchers calculated the number of reposts of true and false content over time and found that tagging messages with the veracity score greatly reduces the virality of fake news.

“All you need is 20 or 30% of your population to be truth savvy,” Staelin said. “This proportion of readers is enough to tip the scale and limit the spread of misinformation.”
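The article doesn’t spell out the model’s mathematics, but the tipping-point intuition can be illustrated with a toy branching-process simulation (an assumption-laden sketch, not the authors’ model). Each generation’s aggregated ratings become the public score that throttles how many followers read, and therefore reshare, a fake post. The parameter values below are arbitrary, so the exact threshold will differ from the paper’s 20–30% figure; the point is the feedback loop: as savvy ratings drag the score down, each sharer spawns fewer than one new sharer on average and the cascade dies out.

```python
import random

random.seed(0)  # make the toy runs reproducible

def simulate_fake_spread(savvy_frac: float, followers: int = 10,
                         reshare_prob: float = 0.15,
                         generations: int = 8) -> int:
    """Toy cascade for a *fake* post. A receiver reads with probability
    equal to the current veracity score; savvy readers rate it 'fake'
    and never reshare it, quick readers rate it 'true' and reshare with
    probability reshare_prob. All parameters are invented for
    illustration, not taken from the paper."""
    score = 1.0                # no ratings yet, so everyone reads at first
    sharers, total_reshares = 1, 0
    for _ in range(generations):
        ratings, next_sharers = [], 0
        for _ in range(sharers * followers):
            if random.random() > score:    # a low score deters reading
                continue
            savvy = random.random() < savvy_frac
            ratings.append(not savvy)      # quick readers rate fakes 'true'
            if not savvy and random.random() < reshare_prob:
                next_sharers += 1
        if ratings:                        # fold this wave into the score
            score = sum(ratings) / len(ratings)
        total_reshares += next_sharers
        sharers = next_sharers
        if sharers == 0:
            break
    return total_reshares

for frac in (0.0, 0.1, 0.3):
    print(f"savvy fraction {frac:.0%}: {simulate_fake_spread(frac)} reshares")
```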

Staelin said the goal of this intervention is to keep false information from going viral rather than stopping people from posting fake news.

“If you have a million TikTok followers, your post goes through a million people,” he said. “The question is, does it then spread to 2 million or 10 million more people? Our intervention is intended to limit its spread.”

Content moderation by platforms

Social media companies have occasionally tested more direct forms of intervention, such as posting “accuracy nudges,” labeling posts they deem false, and even down-ranking false content in their algorithms, Staelin said.

With nudging, platforms may be warning users to be more careful, thereby increasing the probability that the readers will pay more attention and become more savvy, Staelin said. Flagging or down-ranking “fake” news are more radical approaches, he said, and companies could even use machine learning to predict the veracity of a message.

“However, taking such steps has led some people to cry foul, saying that social platforms are downgrading certain kinds of messages,” he said.

Truthful information is good business

Social media companies may not have an incentive to regulate themselves, Staelin said, but policymakers may encourage platforms to adopt anti-misinformation policies.

“Platforms may feel the pressure from Congress to contain misinformation, or some pressure from the public,” he said.

Staelin also believes most platforms—and advertisers—would always want to be associated with truthful information. “It’s just good business,” he said.

“And they wouldn’t even need direct interventions such as nudging, flagging, downranking,” he said. “One of the beauties of crowdsourcing is that the marketplace can correct itself. All you need is enough savvy people.”

This story may not be republished without permission from Duke University’s Fuqua School of Business. Please contact media-relations@fuqua.duke.edu for additional information.
