Since January 2020, COVID-19 has been the main topic of the news. Despite attempts to keep health information accurate, misinformation has come from sources ranging from amateur speculation all the way up to President Trump himself. Facebook, along with other social media sites, has been blamed for allowing misinformation to spread. Many websites, from traditional media to social media, have been trying to connect people with accurate information from health experts and keep harmful misinformation from spreading. It is a difficult task, even with the best technology, to monitor millions of users creating content every second and, even more often, reposting other content.
Facebook has said that they have directed over 2 billion people to resources from the World Health Organization in their "COVID-19 Information Center" and via pop-ups on Facebook and Instagram with over 350 million people clicking through to learn more.
When a piece of content is rated false by fact-checkers, Facebook reduces its distribution and shows warning labels with more context; it can also use similarity detection methods to identify duplicates of debunked stories.
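To make the idea of similarity detection concrete, here is a minimal sketch that flags near-duplicate posts by comparing word 3-gram "shingles" with Jaccard similarity. All names and the threshold are illustrative assumptions; real systems use far more sophisticated matching across text, images, and video.

```python
# Illustrative sketch only: near-duplicate detection via Jaccard
# similarity over word 3-gram "shingles". Not any platform's real method.

def shingles(text, n=3):
    """Return the set of word n-grams ("shingles") in a piece of text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity: shared shingles over total distinct shingles."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def is_duplicate(candidate, debunked, threshold=0.5):
    """Flag a candidate post if it closely matches any debunked story."""
    cand = shingles(candidate)
    return any(jaccard(cand, shingles(d)) >= threshold for d in debunked)

debunked_stories = [
    "drinking hot water every 15 minutes kills the coronavirus",
]
repost = "Drinking hot water every 15 minutes KILLS the coronavirus, share this!"
print(is_duplicate(repost, debunked_stories))  # True
```

The point of shingling is that a reposted hoax with a few words changed still shares most of its phrases with the original, so the overlap score stays high even when an exact-match check would miss it.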
I heard on an episode of the Make Me Smart podcast that Rep. Adam Schiff (not a Facebook fan) asked other tech giants (Google, YouTube, Twitter) to follow Facebook's example by contacting users who’ve interacted with misinformation.
This is a kind of contact tracing for misinformation. In public health contact tracing, staff work with a patient to help them recall everyone with whom they have had close contact during the timeframe while they may have been infectious. The public health staff then warn these exposed individuals (contacts) of their potential exposure as rapidly and sensitively as possible.
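The misinformation version of contact tracing boils down to a simple lookup: given a log of who interacted with which posts, find everyone who engaged with a post that was later flagged, so they can be warned. The sketch below assumes a hypothetical interaction log; the data shapes are not any platform's real API.

```python
# Illustrative sketch: "contact tracing" for misinformation.
# Given an interaction log and a set of posts later rated false,
# find the users who were "exposed" and should be notified.
# The log format here is a hypothetical simplification.

def trace_exposed_users(interactions, flagged_posts):
    """Return the set of users who interacted with any flagged post.

    interactions: iterable of (user_id, post_id) pairs.
    flagged_posts: set of post_ids rated false by fact-checkers.
    """
    return {user for user, post in interactions if post in flagged_posts}

log = [
    ("alice", "p1"), ("bob", "p2"), ("carol", "p1"), ("bob", "p3"),
]
flagged = {"p1"}
print(sorted(trace_exposed_users(log, flagged)))  # ['alice', 'carol']
```

As in public health tracing, the output identifies only who to warn, not who "infected" whom; the notification step would tell users they saw debunked content without naming the person who shared it.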
Whether in public health or online, protecting both the patient/poster and their contacts is important. In health situations, contacts are generally informed only that they may have been exposed to a patient with the infection; they are not told the identity of the person who may have exposed them.
YouTube announced it would add informational panels with information from its fact-checkers to videos in the US, an expansion of a program it launched in India and Brazil last year.
Twitter introduced its COVID-19 content policies earlier this month, requiring users to remove tweets containing misinformation about coronavirus treatments or misleading content made to look like it comes from authorities. It recently updated the policy to cover tweets that may “incite people to action and cause widespread panic, social unrest or large-scale disorder,” such as calls to burn 5G towers.