X's Community Notes Working Better Than Woke Fact-checkers At Correcting Misinformation On Social Media: Researchers
Keneci Network @kenecifeed
A new academic review conducted by physician John Ayers of the University of California, San Diego, and other doctors concluded that Community Notes -- a crowdsourced context feature on X -- works better at combating misinformation than the so-called professional fact-checkers relied on by other social media platforms such as Facebook.
The results of the research, published recently in the Journal of the American Medical Association (JAMA), showed that Community Notes were almost always accurate and usually cited high-quality sources. Ayers and the other doctors looked specifically at the accuracy of X's Community Notes, using the contentious issue of Covid-19 vaccines as a test case.
In the study, the doctors looked at a sample of 205 Community Notes about Covid-19 vaccines. They agreed that the user-generated information was accurate 96% of the time and that the sources cited were of high quality 87% of the time. While only a small fraction of misleading posts were flagged, those that did get notes attached were among the most viral, said lead author Ayers.
The systems used by other big tech platforms, including old Twitter (before its acquisition by Elon Musk), rely on fact-checkers whose identities and scientific credentials are unknown and who are mostly left-wing and woke. They could take down posts they deemed to be misinformation, ban users, or use the more underhanded technique of "shadow bans," in which users' posts were hidden without their knowledge.
X's Community Notes, on the other hand, relies on volunteers to flag misleading posts and add corrective commentary, complete with links to scientific papers or media sources. Other users then rate the value of each note, and a note only becomes visible platform-wide when there is broad agreement among users who have historically disagreed. This keeps the system from being hijacked by unknown partisans and bad-faith actors.
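To illustrate that agreement requirement, here is a minimal Python sketch of a bridging-style visibility check. It is an illustration only -- the actual open-source scorer is more sophisticated (it fits a matrix-factorization model over the full rating history, and infers viewpoints rather than asking for them) -- and all rater names, clusters and thresholds below are invented for the example.

from collections import defaultdict

# Illustrative sketch only: the real Community Notes scorer infers rater
# viewpoints from past rating behavior; here we fake the "bridging" idea with
# two pre-assigned viewpoint clusters and a simple agreement threshold.

# Hypothetical raters, split into two historically disagreeing groups.
rater_cluster = {"alice": "A", "bob": "A", "carol": "B", "dave": "B", "erin": "B"}

# ratings: note_id -> list of (rater, rated_helpful) pairs
ratings = {
    "note-1": [("alice", True), ("bob", True), ("carol", True), ("dave", True)],
    "note-2": [("alice", True), ("bob", True), ("carol", False), ("erin", False)],
}

def note_is_visible(note_ratings, min_helpful_ratio=0.7, min_raters_per_cluster=2):
    """A note becomes visible only if *both* clusters independently find it helpful."""
    by_cluster = defaultdict(list)
    for rater, helpful in note_ratings:
        by_cluster[rater_cluster[rater]].append(helpful)
    if len(by_cluster) < 2:
        return False  # no cross-viewpoint agreement possible yet
    for votes in by_cluster.values():
        if len(votes) < min_raters_per_cluster:
            return False
        if sum(votes) / len(votes) < min_helpful_ratio:
            return False
    return True

for note_id, note_ratings in ratings.items():
    print(note_id, "visible" if note_is_visible(note_ratings) else "needs more agreement")

In this toy run, "note-1" becomes visible because both clusters rate it helpful, while "note-2" stays hidden because only one side endorses it -- the property that makes it hard for a single faction to push its own notes through.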
Content moderators employed by big tech platforms have been attacked both for failing to censor so-called hate speech and for censoring mostly conservative users. These companies are run by mostly left-wing CEOs and activist employees.
During the pandemic, so-called fact-checkers and moderators labeled lots of subjective statements as misinformation, especially those judging various activities to be "safe." But there is no scientific definition of "safe." For example, was it safe to let kids back into school or to gather without masks? It later turned out that it was. Much of what was labeled misinformation was just opinion not shared by clueless government bureaucrats, big pharma and their sponsored legacy media mouthpieces.
The moderation systems used by big tech companies, including old Twitter, are based on the assumption that people skip vaccines or otherwise make bad choices because they are exposed to misinformation. But another possibility is that lack of trust is the real problem -- people lose trust in health authorities or can't find the information they want, which causes them to seek out alternative sources, especially when they see a concerted effort to censor those sources.
Censorship creates more distrust by stifling open discussion of important topics. And relying on unknown so-called fact-checkers only deepens the distrust.
The public perception of social media misinformation is often distorted by political biases and outrage. But criticisms of these companies by conservative users are mostly based on facts, given the dominance of right-wing voices on less-censored platforms like X and Rumble -- and even on YouTube before 2016.
Conservatives argue that left-wing narratives can only thrive when right-wing voices are censored on big tech platforms. Since 2016, big tech and pro-censorship activists in politics, media and academia have faced backlash for their authoritarian push for more censorship.
Such criticisms prompted some much-needed reflection among a group of researchers from Oxford University, who came out with a study titled "People Believe Misinformation Is a Threat Because They Assume Others Are Gullible." In other words, the people most outraged about fake news aren't worried that they'll be fooled; they're worried that others will be. Such people tend to overestimate their own levels of discernment and are condescending toward the 'unwashed masses.'
The success of X's Community Notes demonstrates the superiority of a crowdsourced consensus system over unknown professional third-party human fact-checkers. People tend to underestimate the power of collective intelligence, which has proven surprisingly good at forecasting and assessing information -- as long as enough people participate, according to psychologist Sacha Altay, who was not involved in the JAMA research.
The Community Notes code is open source, and Ayers said the data for the study were easy to obtain. For hashing out factual issues in areas such as science and health, social scientists have recommended a crowdsourcing approach, citing studies demonstrating the power of collective intelligence.
Several studies have pitted crowdsourcing against professional fact-checkers and found that crowdsourcing worked just as well when assessing the accuracy of news stories.
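As a rough illustration of why crowd size matters for that result, the toy simulation below (a hypothetical example, not the methodology of any study cited here) shows how a simple majority vote among lay raters who are each only modestly accurate quickly outperforms any single rater.

import random

# Toy simulation of the wisdom-of-crowds effect behind these findings: each lay
# rater judges a headline correctly with probability p_individual; we compare a
# single rater against a majority vote of the whole crowd.

random.seed(0)

def majority_vote_accuracy(p_individual, crowd_size, n_headlines=10_000):
    """Fraction of headlines on which a majority of the crowd is correct."""
    correct = 0
    for _ in range(n_headlines):
        votes = sum(random.random() < p_individual for _ in range(crowd_size))
        if votes > crowd_size / 2:
            correct += 1
    return correct / n_headlines

for crowd_size in (1, 5, 15, 51):
    acc = majority_vote_accuracy(p_individual=0.65, crowd_size=crowd_size)
    print(f"crowd of {crowd_size:>2}: {acc:.2%} of headlines judged correctly")

With raters who are individually right only 65% of the time, a single rater scores about 65%, while a crowd of 51 gets nearly every headline right -- the "enough people participate" condition Altay points to.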