13 Ways To Fight Fake News – And A Big Problem With All Of Them


Will humans or computer algorithms be the future arbiters of “truth”?

Today’s infographic from Futurism sums up the ideas that academics, technologists, and other experts are proposing that we implement to stop the spread of fake news.

Below the infographic, we raise our concerns about these methods.

Here Are 13 Ideas To Fight Fake News

[Infographic: 13 Ways To Fight Fake News]

While fake news is certainly problematic, the solutions proposed to penalize articles deemed to be “untrue” are just as scary.

By centralizing fact checking, a system is created that is inherently fragile, biased, and prone to abuse. Furthermore, the idea of axing websites that are deemed to be “untrue” is an initiative that limits independent thought and discourse, while allowing legacy media to remain entrenched.

Centralizing “Truth”

It could be argued that the best thing about the internet is that it decentralizes content, allowing any individual, blog, or independent media company to stimulate discussion and new ideas with low barriers to entry. Millions of new entrants have changed the media landscape, and this shift has left traditional media scrambling to adjust its revenue models while keeping its influence intact.

If we say that “truth” can only be verified by a centralized mechanism – a group of people, or an algorithm written by a group of people – we are accepting that certain sources will be arbitrarily preferred, while others will not be (unless they conform to certain standards).

Based on this mechanism, it is almost certain that well-established journalistic sources like The New York Times or The Washington Post will be the most trusted. By the same token, newer sources (like independent media, or blogs written by emerging thought leaders) will not be able to get traction unless they are referencing or receiving backing from these “trusted” gatekeepers.

The Impact?

This centralization is problematic – and here’s a step-by-step reasoning of why that is the case:

First, either method (human or computer) must rely on preconceived notions of what is “authoritative” and “true” to make decisions, and both will be biased in some way. Humans will lean toward a particular consensus or viewpoint, while computers must rank authority based on factors such as PageRank, backlinks, source recognition, or headline/content analysis.
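To make the algorithmic case concrete, here is a minimal Python sketch of what an authority-weighted scorer could look like. The signal names, weights, and numbers below are invented for illustration only; they do not describe any real fact-checking or ranking system. The point is that someone has to choose the weights, and signals like brand recognition inherently favor incumbents.

```python
# Hypothetical illustration only: a naive "authority" scorer combining the
# kinds of signals mentioned above. The signal names and weights are invented
# for this sketch, not taken from any real fact-checking system.

def authority_score(source):
    """Combine a few crude signals (each assumed pre-normalized to 0..1)
    into a single 'authority' number."""
    weights = {
        "pagerank": 0.4,           # link-graph importance
        "backlinks": 0.3,          # how often other sites cite this source
        "brand_recognition": 0.2,  # is this an established outlet?
        "content_quality": 0.1,    # headline/content analysis heuristics
    }
    return sum(weights[signal] * source.get(signal, 0.0) for signal in weights)

# A legacy outlet scores high on every entrenched signal; a new blog does not.
legacy_outlet = {"pagerank": 0.9, "backlinks": 0.9,
                 "brand_recognition": 1.0, "content_quality": 0.7}
new_blog = {"pagerank": 0.1, "backlinks": 0.05,
            "brand_recognition": 0.0, "content_quality": 0.9}

print(authority_score(legacy_outlet))  # ~0.90
print(authority_score(new_blog))       # ~0.145
```

Notice that the new blog scores poorly even though its content-quality signal is the highest of the group, simply because the entrenched signals dominate the weighting.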

Next, there is a snowball effect involved: if only posts referencing these authoritative sources of “truth” can get traction on social media, then these sources become even more authoritative over time. This creates entrenchment that will be difficult to overcome, and new bloggers or media outlets will only be able to move up the ladder by associating their posts with an existing consensus. Grassroots movements and new ideas will suffer – especially those that conflict with mainstream beliefs, government, or corporate power.
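The snowball dynamic can also be sketched in a few lines. This is a toy feedback loop with made-up numbers, not a model of any real platform: trust determines how much visibility a source gets, and that visibility compounds back into trust.

```python
# Hypothetical illustration of the snowball effect described above.
# All numbers are invented; only the rich-get-richer dynamic matters.

trust = {"legacy_outlet": 0.90, "new_blog": 0.10}

for round_number in range(5):
    total = sum(trust.values())
    # Feeds surface posts in proportion to each source's current trust share...
    visibility = {name: score / total for name, score in trust.items()}
    # ...and extra exposure multiplies trust, so the lead compounds.
    trust = {name: trust[name] * (1 + 0.5 * visibility[name]) for name in trust}
    ratio = trust["legacy_outlet"] / trust["new_blog"]
    print(f"round {round_number}: legacy/new trust ratio = {ratio:.1f}")
```

The printed ratio grows every round: the already-trusted source pulls further ahead even though nothing about the quality of either source’s content has changed.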

Finally, this raises concerns about who fact checks the fact checkers. Forbes has a great post on this, showing that Snopes.com (a fact checker) could not even verify basic truths about its own operations.

Removing articles deemed to be “untrue” is a form of censorship. While it may help to remove many ridiculous articles from people’s social feeds, it will also impact the qualities of the internet that make it so great in the first place: its decentralized nature, and the ability for any one person to make a profound impact on the world.

Article by Jeff Desjardins, Visual Capitalist
