Facebook will stop putting disputed flags on content that its third-party fact checkers have reviewed and marked as false. Instead of flagging the content, the social network will show Related Articles alongside it to bring more clarity around the fake news.
According to Facebook, the new approach is more effective at curtailing the spread of a false story, and it will also make it easier for the company to extend its fact-checking efforts to other markets and content types. The decision was informed by academic research suggesting that labeling some news as false may “entrench deeply held beliefs,” Facebook said in a statement.
“Related articles outperformed disputed flags in giving people more information so they could understand what was true or false,” Tessa Lyons, a News Feed product manager, told BuzzFeed News. “Hoaxes that had related article fact checks had fewer shares than those with the disputed flag.”
The Related Articles feature has existed since 2013, but the company has been testing a new version of it since April in partnership with third-party fact-checking groups. Since earlier this year, Facebook has worked with fact-checkers such as PolitiFact and Snopes to identify stories and mark them as disputed.
Using AI to fight fake news and extremist content
Facebook, with over 2 billion monthly active users, has often been blamed for failing to control the spread of fake news, along with other platform companies such as Twitter and Google’s YouTube. Recently, Facebook faced criticism for being exploited by Russian operatives during the U.S. presidential election. Since then, the Menlo Park, California-based company has made several changes, including tweaking its algorithms to identify sites that use the provocative language associated with fake news. At the same time, the company is also recognizing and rewarding high-quality content.
Along with fake news, Facebook is also fighting extremist content on the platform, and it has ramped up its efforts to find and remove such material. Recently, the company deployed artificial intelligence combined with other automated techniques to take down terrorism-related posts.
Facebook stated that 99% of the material about Al Qaeda and the Islamic State is removed by artificial intelligence and other automated techniques. However, the company admitted that more work is needed to identify content from other such groups, notes the BBC. Facebook said it is focused on Al Qaeda and the Islamic State because these two entities represent the “biggest threat globally,” but acknowledged that extending the systems to other groups is “not as simple as flipping a switch,” according to the BBC.
Mark Zuckerberg first talked about his AI-based plans in February. At the time, he said it would take “many years” to fully develop the required systems. At Facebook, humans and machines work in close coordination to identify such posts, though the company stated that the task is “primarily” done by automated systems. According to the BBC, these systems rely on photo and video matching, in which imagery already known to be used by terrorist groups is recognized when it is reposted.
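Facebook has not published its matching implementation, but the general idea behind photo matching can be sketched with a toy “average hash”: reduce an image to a compact fingerprint, store the fingerprints of known banned imagery, and flag re-uploads whose fingerprints are close to a stored one. Everything below, from the function names to the 8×8 grid and distance threshold, is an illustrative assumption, not Facebook’s system.

```python
# Illustrative sketch only: this is NOT Facebook's actual matching
# pipeline, just a minimal average-hash example of the technique.

def average_hash(pixels):
    """Compute a 64-bit fingerprint from 64 grayscale values (0-255).

    Each bit is 1 if the pixel is brighter than the grid's mean, else 0.
    Small re-encoding noise usually leaves most bits unchanged.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for value in pixels:
        bits = (bits << 1) | (1 if value > mean else 0)
    return bits

def hamming_distance(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_known_banned(candidate_hash, banned_hashes, threshold=5):
    """Flag a candidate within `threshold` bits of any banned hash."""
    return any(hamming_distance(candidate_hash, h) <= threshold
               for h in banned_hashes)

# Example: a re-encoded copy with slight brightness noise still matches.
original = [10] * 32 + [200] * 32   # dark half, bright half
reupload = [12] * 32 + [198] * 32   # same image after re-encoding
banned = {average_hash(original)}
print(is_known_banned(average_hash(reupload), banned))  # True
```

Production systems use far more robust perceptual and video hashes, but the design choice is the same: matching fingerprints rather than raw files is what lets a repost be caught even after resizing or re-compression.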