A different approach to handling the fake news surge

“Sure.” That was Sundar Pichai’s answer to a journalist’s question about whether fake news might have swung enough votes toward Donald Trump to change the election results. Google’s CEO made this very frank statement in an interview with the BBC, and it adds to the wider controversy about the influence of fake news, not just on election results, but on society as a whole.

Both Mark Zuckerberg at Facebook and Sundar Pichai at Google have taken on the task of “fixing” the distribution of fake news through their social and search platforms. Facebook has quite an ugly history with how its trending news items are selected: after a team of human editors was fired over accusations of pro-Clinton bias, the automated algorithms that replaced them ended up distributing fake news that favored Trump during the election. As I mentioned in a past post, this is a serious issue that will fundamentally change the game.

However, the real question is not whether Google and Facebook will manage to stop fake news from going viral, but how they will do it while maintaining their users’ trust.

Take this simple example: would you accept a “guilty/not guilty” ruling from a judge if that ruling was not supported by documented legislation and not delivered with a reasoned justification? Would you blindly trust someone to tell you what is right or wrong without a clear and logical explanation? If not, why would you allow someone to curate your news feed without being fully transparent about it?

Anything that Facebook or Google implement for detecting and blocking fake news needs to be transparent, exposing the underlying algorithm or methodology, so that it maintains the loyalty and trust of users. Otherwise there will always be a suspicion of censorship and a sense of unfair control over the information flow. If not done correctly, efforts to curb the spread of false news might end up eroding trust even further, both in news and in its distribution channels.

TrustServista’s approach to the whole problem is source and distribution channel agnostic, and it never flags a story as trustworthy or not without an explanation. I will detail our entire approach in future posts and even show you sneak-preview screenshots and videos once our prototype reaches a demo-able stage. The fair approach to the issue of fake news, in my honest opinion, is to let users fully control what they read and to replace the “it just works” approach (which hides complexity, but also insight) with an “it works like this” approach, so that automated content curation is backed by insight into how the algorithm works and how it reached its decisions. And most importantly: only flag stories as trustworthy or not, and let the users decide which content sources or stories to block.
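
To make this last point concrete, here is a minimal sketch (in Python, with entirely made-up signal names and weights; this is an illustration of the principle, not TrustServista’s actual algorithm) of what “flag with an explanation, let the user block” could look like:

```python
from dataclasses import dataclass

@dataclass
class TrustVerdict:
    """A trust assessment that always carries its own reasoning."""
    score: float        # 0.0 (not trustworthy) .. 1.0 (trustworthy)
    signals: list[str]  # human-readable explanations behind the score

def assess_story(story: dict) -> TrustVerdict:
    """Flag a story with an explainable score; never hide or block it.

    The signals below (named sources, cross-references, identifiable
    author) are hypothetical examples, not a real scoring model.
    """
    signals = []
    score = 0.5  # neutral starting point

    if story.get("named_sources", 0) == 0:
        score -= 0.2
        signals.append("No named sources are quoted in the story.")
    if story.get("cross_references", 0) >= 2:
        score += 0.2
        signals.append(f"{story['cross_references']} independent outlets report the same facts.")
    if not story.get("author"):
        score -= 0.1
        signals.append("The story has no identifiable author.")

    return TrustVerdict(score=max(0.0, min(1.0, score)), signals=signals)

# The user, not the platform, decides what gets hidden.
user_blocklist: set[str] = set()

def present(story: dict) -> None:
    if story["source"] in user_blocklist:
        return  # hidden only by the user's own choice
    verdict = assess_story(story)
    print(f"[trust {verdict.score:.2f}] {story['title']} ({story['source']})")
    for reason in verdict.signals:
        print(f"  - {reason}")

present({
    "title": "Candidate X endorses Y",
    "source": "example-news.com",
    "author": None,
    "named_sources": 0,
    "cross_references": 3,
})
```

The key design choice is that the blocklist belongs to the user, not the platform: the algorithm only attaches a score and the reasoning behind it to each story, and every hiding decision stays in the reader’s hands.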

As a personal note, I am currently reviewing a huge historical archive of Romanian newspapers from the 1920s up to the 1950s. I found a lot of propaganda, slander, and unfair and discriminatory content in this superb historical database. In those times, blocking a certain news source meant abusively banning its distribution or even attacking and destroying its printing facility. And yes, news has always impacted society and politics, but overall the benefits of the News as an institution significantly surpass its wrongdoings. Blocking content is not the right way to go, as there will always be collateral damage. Instead, let’s put in place measures that make users aware of what content they are accessing and let them decide what to believe.
