Facebook has taken another “step back” in its so-called fight against fake news

Happy New Year! Just as December 2017 was coming to an end, Facebook announced that it would ditch the “disputed” news tag implemented earlier that year in an effort to fight the distribution of fake news. In yet another change of mind, Facebook will replace this mechanism with a “related stories” feature aimed at showing the reader different viewpoints on a given story.

This step back just proves that Facebook either doesn’t understand the fake news phenomenon (which it helps propagate!) or has no honest intention of doing anything useful about it. Here’s why.

First of all, the “disputed” news tagging was flawed from the beginning. Facebook is now the world’s main distributor of news content (according to the latest Reuters Institute Digital News Report), so its responsibility for the content it distributes, and to its users, is paramount. Yet even a year after the US presidential election “fake news” scandal, Facebook has done nothing to curb the fake news industry it helps sustain. Adding the “disputed” flag to news stories, based on debunking by a handful of fact-checking organizations and media agencies and targeting only the English-speaking world, produced exactly the opposite effect. In Facebook’s own words: “Academic research on correcting misinformation has shown that putting a strong image, like a red flag, next to an article may actually entrench deeply held beliefs – the opposite effect to what we intended”.

This just shows how little Facebook’s news specialists know about echo chambers, clickbait and the psychological profile of the users affected by fake news.

Second, fighting fake news with a limited set of fact-checking organizations has little impact, given the velocity, variety and volume of news content created. Basically the definition of Big Data, right? You would expect the tech giant, armed with some of the most advanced Big Data and Machine Learning technologies on the planet, to do something smart about fighting fake news. Instead, it employs journalists to “flag” fake news, probably after the misinformation has already made an impact. And it’s not just Facebook that thinks you can catch a moving bullet with someone else’s teeth: even some of the most prominent journalists and news organizations believe that adding webpage metadata saying you are trustworthy will automatically make readers trust you (face-palm).

And third, the new solution put in place by Facebook, displaying related articles next to a given story, is again bound to fail. In Facebook’s own words: “Related Articles, by contrast, are simply designed to give more context, which our research has shown is a more effective way to help people get to the facts.”

I partially agree with this approach. And I’m not just thinking out loud, but sharing some of the research I’ve done on the topic while implementing news verification algorithms for TrustServista. In some cases, where a story is covered by a large number of media organizations of different types (news agencies, blogs, magazines, online outlets), you can obtain different viewpoints on it. This gives the reader context, lets them “see the other side” and helps them judge whether an article is trustworthy. But in most cases of successful fake news stories, the articles cover niche topics, sometimes re-posting content from a year earlier and using several websites to propagate the same information at the same time. In case studies like these, we couldn’t find any related articles representing opposing or alternative viewpoints. Simply put, if you stumbled upon such an article, the related articles were just more sides of the same echo chamber.
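The coordinated re-posting pattern described above (the same text pushed out through several websites at roughly the same time) is something a machine can spot quite cheaply. Here is a minimal sketch of the idea using word-shingle Jaccard similarity; the function names, thresholds and article fields are my own illustrative choices, not TrustServista’s actual algorithm:

```python
# Illustrative sketch: cluster articles whose bodies are near-duplicates
# published by *different* sites within a short time window.
# All names and thresholds are invented for illustration.

def shingles(text, k=5):
    """Return the set of k-word shingles of a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def find_copy_clusters(articles, threshold=0.8, max_hours_apart=24):
    """Group articles (dicts with 'site', 'text', 'ts' in hours) whose
    text is near-identical across different sites within the window."""
    clusters = []
    for art in articles:
        sig = shingles(art["text"])
        for cluster in clusters:
            rep = cluster[0]  # compare against the cluster's first member
            if (art["site"] != rep["site"]
                    and abs(art["ts"] - rep["ts"]) <= max_hours_apart
                    and jaccard(sig, shingles(rep["text"])) >= threshold):
                cluster.append(art)
                break
        else:
            clusters.append([art])
    # Only multi-site clusters indicate coordinated propagation
    return [c for c in clusters if len(c) > 1]
```

A real pipeline would use MinHash or SimHash instead of exact shingle sets to scale to millions of articles, but the signal is the same: identical content, multiple domains, narrow time window.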

In conclusion, the only way Facebook can effectively fight fake news is to automatically determine whether an article is trustworthy the moment it is first posted on the platform.
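To make the idea of post-time trust assessment concrete, here is a deliberately toy sketch that combines a few simple signals (source reputation, clickbait cues in the headline, recycled old content) into a score. Every feature, weight and field name here is an assumption of mine for illustration; a production system would use learned models over far richer signals:

```python
# Toy sketch only: a trust score computed at post time from simple
# heuristic signals. Features and weights are invented for illustration.

CLICKBAIT_CUES = ("you won't believe", "shocking", "secret", "miracle")

def trust_score(article, source_reputation):
    """Return a score in [0, 1]; higher means more likely trustworthy.

    article: dict with 'title', 'source', 'days_since_original'.
    source_reputation: dict mapping source domain -> reputation in [0, 1].
    """
    # Unknown sources start at a neutral 0.5
    score = source_reputation.get(article["source"], 0.5)
    title = article["title"].lower()
    if any(cue in title for cue in CLICKBAIT_CUES):
        score -= 0.3  # clickbait-style headline
    if article.get("days_since_original", 0) > 180:
        score -= 0.2  # recycled old content, a common fake-news pattern
    return max(0.0, min(1.0, score))
```

The point of the sketch is not the particular heuristics but the timing: a score like this is available the instant an article hits the platform, before any human fact-checker has seen it.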

Without a smart automated system to perform this task, relying on human fact-checkers or presenting the user with multiple viewpoints but no real indication of trustworthiness is just a recipe for failure.
