Jennifer Golbeck of the University of Maryland and Northeastern University’s Andrea Matwyshyn discuss social media sites’ crackdown on ‘fake news.’

Google, Facebook and Twitter last week vowed to fight fake news, hate speech and abuse in their own ways amid the backlash over how such content may have influenced voting in the U.S. presidential election. Those actions could have come sooner, and many troubling issues persist, according to experts.


Google has said it would bar websites carrying fake news from its AdSense advertising platform, which lets such sites share in advertising revenue. Facebook said it would not integrate or display ads in apps or sites that have illegal, misleading or deceptive content, including “fake” news — news that is deliberately factually incorrect. Twitter said in a statement that it would remove the accounts of people posting offensive content, building on its existing policy on hate speech and on earlier measures that let users “mute” such content and report abuse.

Following the money is the right strategy to prevent such abuses, according to Jennifer Golbeck, director of the Social Intelligence Lab and professor of information studies at the University of Maryland, and author of the book Social Media Investigation: A Hands-on Approach. The new measures by these companies have “really taken away the main source of income for those sites, which is what drives them to exist in the first place,” she said. She noted that many of the individuals and organizations posting fake news, including during the recent elections, are not in the U.S. and don’t care about the ideologies behind the content. “They just care about making money, and they figured out ways to create click feeds that will get it.”

Underpinning the moves by the social media companies is a law (Section 230 of the Communications Decency Act) that gives them “a modicum of legal protection for the content that exists on their platforms, as long as they don’t veer off too much into editorial functions,” said Andrea Matwyshyn, law professor at Northeastern University and affiliate scholar at the Center for Internet and Society at Stanford Law School. “They’re walking a bit of a legal line to create the right kind of environment from their corporate perspective but also to not run afoul of the extent to which Section 230 of the CDA gives them a buffer of legal protection.”

Golbeck and Matwyshyn discussed the broader aspects of the fight against fake news and hate speech on the Knowledge@Wharton show on Wharton Business Radio on SiriusXM channel 111.

Facebook CEO Mark Zuckerberg, who had earlier resisted charges that his company unwittingly allowed fake news to proliferate, is no longer in denial. In a November 18 post, he described his company’s multipronged plan to keep fake news from spreading on its site. Even so, he acknowledged the limits of that role: “We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties,” he wrote.

“The fact that Facebook and Google waited until the election was over — and the flow of advertising revenue from fake news sites subsided — before taking action is pretty damning.”–Kevin Werbach

How Far Could They Go?

The important issue is whether Facebook and other major social media platforms take responsibility for their role in shaping the informational environment, according to Kevin Werbach, Wharton professor of legal studies and business ethics. “They don’t want to think of themselves as media companies, but they are playing the same role that media companies traditionally did in influencing public opinion. With that influence comes responsibility.”

Should the government throw its weight behind the social networks and build a stronger front? Regulation and laws could help, but the challenge lies in implementing them, said Wharton marketing professor Jonah Berger. “One person’s pornography is another person’s art,” he said. “With religious beliefs, one person’s truth is another’s falsehood. That is where this gets messy.”

There is only so much social media networks could do to prevent fake news, said Wharton marketing professor Pinar Yildirim. “Serious newspapers — gatekeepers of information — usually do a much better job in fact-checking before publishing and distributing news,” she said. “Since the barriers to distributing information are so much lower nowadays, it is hard for platforms like Twitter and Facebook to be able to filter information, which they aggregate at such a large scale.” Moreover, they do not want to offend users by blocking their content, she added.

The Lure of Business

Empowering users to decide what they want to see and what they want to avoid also makes good business sense. “After the polarized 2016 elections, it has become clear that giving more control to users on what kind of information they are exposed to will make them more likely to continue to use these platforms,” said Yildirim. “In the absence of these tools, users are likely to unfriend their connections or engage with the platforms less in order to avoid harassment or unpleasant content.”

Twitter’s moves protect not just its users, but also its brand image, which had earlier taken a beating for allowing hate speech and abusive trolls. Golbeck noted that Twitter last week suspended the accounts of some members of white supremacist and neo-Nazi groups.

Matwyshyn is less enthused by those moves. She noted that Twitter has a contract with its users and it dictates the terms of engagement, which now includes banning hate speech from its platform. “The steps it is taking now are merely a run-of-the-mill contract enforcement situation,” she said.

More significantly perhaps, Twitter’s business outlook could also get a boost as it continues to look for a buyer. Twitter has seen its platform become “in a lot of ways a cesspool of terrible things from anonymous accounts that has made it sometimes a legitimately dangerous place for a lot of users,” according to Golbeck.

“Just for their company image, in addition to the fact that it is really affecting their business and the perception of the value of their business, there couldn’t be a better time for them to start aggressively taking these measures,” Golbeck said. In recent months, several suitors — including Google and Disney — were said to have considered buying Twitter, but some were apparently put off by the hate speech and abuse that the site allowed.

“How do you know something is hate speech or fake news? Once you start restricting some of these things, it’s a slippery slope and you open yourself to legal action.”–Jonah Berger

In addition to its recent moves to combat abuse, Twitter has also been experimenting with another method — “the idea of speaker identity being a marker of credibility,” said Matwyshyn. In this approach, the platform doesn’t filter out ideas, but gives credibility or trust ratings to speakers, she added. That idea could be extended in some form to other platforms, she noted.
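What such a system might look like is easiest to see in code. Below is a minimal sketch of credibility-weighted ranking, assuming a per-speaker trust score; the `Speaker`, `Post`, and `rank_feed` names are hypothetical illustrations, not Twitter’s actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Speaker:
    handle: str
    credibility: float  # 0.0 (no track record) .. 1.0 (highly trusted)

@dataclass
class Post:
    author: Speaker
    text: str

def rank_feed(posts):
    """Order posts by the author's credibility score rather than by topic:
    low-credibility content is de-emphasized, not removed."""
    return sorted(posts, key=lambda p: p.author.credibility, reverse=True)

feed = [
    Post(Speaker("@anon_rumors", 0.2), "Shocking claim, no sources!"),
    Post(Speaker("@wire_service", 0.9), "Report confirmed by two officials."),
]
for post in rank_feed(feed):
    print(f"{post.author.credibility:.1f} {post.author.handle}: {post.text}")
```

The key design choice is that low-credibility content is demoted rather than deleted, which keeps the platform out of the “arbiters of truth” role Zuckerberg described.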

According to Golbeck, the trustworthiness that Matwyshyn referred to could be built into Google’s search results, too. “The way Google traditionally does that is that every time somebody links to …”

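Golbeck is pointing at Google’s classic link-analysis signal: every link from one page to another acts as a vote of confidence in the target, so pages that reputable pages link to rise in search rankings, while sites that nobody credible links to sink. A minimal power-iteration PageRank sketch of that idea (the function and toy graph are purely illustrative, not Google’s production system):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Power-iteration PageRank. `links` maps each page to its outbound
    links; every inbound link acts as a vote of confidence in its target."""
    pages = set(links) | {t for outs in links.values() for t in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            outs = links.get(page) or list(pages)  # dangling pages spread evenly
            share = damping * rank[page] / len(outs)
            for target in outs:
                new[target] += share
        rank = new
    return rank

# Toy graph: two sites that link to each other, plus an isolated site
# that nobody links to.
toy = {
    "site-a.example": ["site-b.example"],
    "site-b.example": ["site-a.example"],
    "fake-news.example": ["site-a.example"],
}
print(pagerank(toy))
```

In the toy graph, the site that receives no inbound links ends up with the lowest score — the sense in which links function as trust signals.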