What Really Happens When You Report Fake News On Social Media

Fake news is nothing, well, new: a pamphlet from Catalonia dated 1657 tells of the discovery of a terrible monster that consisted of "goat's legs, a human body, seven arms and seven hands." Eew. No one wants to see that.

Use of the term "fake news" soared during the 2016 U.S. presidential election, according to Gale Academic OneFile. The phrase was used so often that the Collins Dictionary dubbed it the word of the year in 2017. While "fake news" means different things to different people, its basic definition is news that is fabricated. It "is an invention — a lie created out of nothing — that takes the appearance of real news with the aim of deceiving people," said #30 Seconds.

Such disinformation can have a major impact on the world. When the public can't obtain reliable facts, its ability to make informed decisions suffers. Facebook recently decided to ban content that "denies or distorts the Holocaust," after previously defending such posts as free speech, because a survey of Americans aged 18 to 39 found that a quarter of participants believed the genocide of Jewish people and others was fake or exaggerated. That's right, 25 percent did not acknowledge the Holocaust's terrible death toll and cruelty.

Americans get their news from social media

While social media is still a forum for showing off your freshly baked apple pie and funny photos of yourself with rabbit ears, it is also a news distribution system: 62 percent of American adults get news there, according to a Pew Research Center survey. Despite that, social media policies on fake news vary from platform to platform, and are often "confusing, unclear, or contradictory," according to Bill Fitzgerald, a privacy and technology researcher in Consumer Reports' Digital Lab.

That inconsistency, Consumer Reports points out, is why it's difficult to know what to expect from any given platform. It's hard to make "companies enforce their rules fairly when you can't even figure out what those rules are supposed to be." According to the publication, most major platforms, including Facebook, YouTube, Twitter, Pinterest, Reddit, Snapchat, WhatsApp, and TikTok, allow some false information; more often, they ban only certain categories of fake news.

The consequences of sharing fake news

What happens to you when you share fake news on social media? It all depends on the type of information you share and which platform you use. Facebook and Twitter label posts with warnings, and both platforms may make such posts harder to find in search results and news feeds. Some platforms will delete the offending post outright if they determine it contains false information.

The truth is, disseminating fake news currently carries little consequence for whoever does the posting. As for society, democracy, the integrity of information ... that's another matter. The law leaves fake news publishers considerable leeway, since there are neither laws nor "precedents that clearly define the concept," according to Rasmussen College's blog. One possible, though unlikely, punishment is a civil defamation suit based on the social media post. If the plaintiff could prove the post caused serious harm, says the Law Firms website, the payout could be high.

Some social media platforms, such as Reddit and Twitter, have written tougher terms of service to curb propaganda, hate speech, and fake news, and may suspend accounts that violate those terms.

Reporting fake news doesn't always work

Most platforms let you report fake news by clicking a "report" icon. If you flag a post on Instagram, for instance, its fact-checkers will review the information, and if they find the post is false, it will be hidden from the Explore and hashtag pages. Unfortunately, it may still appear in the feeds of people who follow the account. And there's no guarantee a flagged post will ever reach those fact-checkers in the first place; an article in Mashable said that Instagram uses a number of factors to determine whether information needs review.

Facebook started fact-checking in 2016 and sends suspect information to PolitiFact and FactCheck.org, which review only those submissions the organizations think are "important or impactful to evaluate," said WIRED.

So while reporting misinformation matters, there is no comprehensive system in place to keep it off social media. Perhaps in the future, platforms will find better ways to prevent it. "Fake news is no longer a problem that platforms can bury under PR speech and abstract commitments," said Theodore Weng in an article on Law Technology Today. "... The time to take a firm stance against the harm fake news poses to society is now."
