Overwhelming pressure from governments and the public has compelled social media platforms to take unprecedented action against what users share online during the pandemic. But who fact checks the fact checkers?
In a move likened to the way governments have assumed emergency powers in response to the COVID-19 pandemic, Facebook has removed 16 million pieces of content and added warnings to around 167 million more. YouTube has removed more than 850,000 videos related to “dangerous or misleading COVID-19 medical information.”
While a portion of that content is likely to be willfully wrongheaded or vindictively misleading, the pandemic is littered with examples of scientific opinion that have been caught in the dragnet — resulting in their removal or de-prioritization, depending on the platform and context. This underscores the difficulty of defining scientific truth, prompting the bigger question of whether social media platforms such as Facebook, Twitter, Instagram, and YouTube should be tasked with this at all.
“I think it’s quite dangerous for scientific content to be labeled as misinformation, just because of the way people might perceive that,” says Sander van der Linden, professor of social psychology in society at Cambridge University, UK. “Even though it might fit under a definition [of misinformation] in a very technical sense, I’m not sure if that’s the right way to describe it more generally because it could lead to greater politicization of science, which is undesirable.”
How fact checking works
The past decade has seen an arms race between users who peddle disinformation (content deliberately designed to mislead) or unwittingly share misinformation (false content spread without intent to deceive) and the social media platforms that find themselves charged with policing it, whether they want to or not.
When The BMJ questioned Facebook, Twitter, and YouTube (which is owned by Google), they all highlighted their efforts to remove potentially harmful content and to direct users towards authoritative sources of information on COVID-19 and vaccines, including the World Health Organization (WHO) and the U.S. Centers for Disease Control and Prevention (CDC). Although their moderation policies differ slightly, the platforms generally remove or reduce the circulation of content that disputes information from health authorities such as the WHO and CDC, or that spreads false health claims considered harmful, including incorrect information about the dangers of vaccines.
Read the full story at the source: Who Fact Checks Health and Science on Facebook? (Children’s Health Defense)