The Internet’s ‘Nasty’ Side: Can Firms Control The Trolls? 

Last month, Milo Yiannopoulos, the technology editor at the conservative news site Breitbart who commands a big following, led a Twitter campaign against Ghostbusters actress Leslie Jones. Hundreds of trolls heeded his call, hurling racist comments and ugly memes and ultimately compelling her to leave the social networking service.

More recently, a Staten Island woman wearing a white cap with the message “America Was Never Great” unwittingly had her photograph snapped and posted on social media, and soon began receiving online death threats. It’s no wonder that when Cinderly, a new fashion app for girls, announced its launch, the start-up said that a no-trolling pledge for users would come with the deal, “since the internet can be a nasty place.”

Indeed it can. But it’s not just the stereotypical malcontent who is spewing digital invective. A White House national security aide was fired after being unmasked as the anonymous Twitter troll who had been taunting senior government officials. Every few days brings another instance in which some troll somewhere has succeeded in making life miserable for someone.

Trolling — a term that has become a catch-all encompassing a spectrum of bad online behavior — poses major challenges to social media sites, publishers and retailers. “When trolling gets bad, it can really wreck the experience of any customer who wants to use the online service. Not many people want to go to or participate in a place where others are being mean or acting like idiots,” says Rider University psychologist John R. Suler, author of Psychology of the Digital Age: Humans Become Electric. “How much trolling is a problem for a business depends on how it runs its social media site. If there is a space for people to speak their mind anonymously, trolls will likely appear. It will also depend on the reputation of the company and the nature of the products or services they offer. Some companies, products and services draw more fire from trolls than others.”

“Companies need to consider their revenue model, how much activity the trolls actually represent and the overall impacts in both directions.” –Kevin Werbach

And then there is the question of how much a company actually wants to discourage trolls, says Wharton professor of legal studies and business ethics Kevin Werbach. “Trolls and their followers often generate a large volume of activity. Services that monetize based on eyeballs may be concerned about cutting down on their traffic or user growth,” says Werbach. “Companies need to consider their revenue model, how much activity the trolls actually represent and the overall impacts in both directions.” Cutting down on abuse may make the platform more attractive to current and potential users, for example. “Ultimately, these firms have to decide what kind of company they want to be,” he adds. “Sometimes pursuing every drop of short-term revenue obscures the most profitable strategy over the long term.”

Some are trying to have their cake and eat it, too. Entrepreneurs and Google alumni Bindu Reddy and Arvind Sundararajan have co-founded a new social app called Candid that aims to create a digital safe space by using artificial intelligence to monitor and curate conversations. Users are anonymous, but earn “badges” based on past posts that tag them as influencers, givers or socializers — or gossips and haters.

Harassment Happens

Trolling is worse than ever, but it has been present “since the very beginning of the internet, when chat rooms and discussion boards ruled,” says Suler. “Before the internet, we didn’t see much trolling on TV, but it did happen on radio, especially during call-in shows that allowed people to be anonymous. Trolling has always existed in the social life of humans and always will exist, because there will always be people who antagonize and hurt others, either because they feel compelled to or simply because they enjoy it.”

They are a busy breed. Nearly three-quarters of the 2,849 respondents to a 2014 Pew Research Center survey said they had seen someone harassed online, and 40% said they had experienced it personally, in forms ranging from name-calling to stalking and threats of physical violence. The report showed that men are “more likely to experience name-calling and embarrassment, while young women are particularly vulnerable to sexual harassment and stalking.”

As to where harassment happens, 66% of internet users who had experienced online harassment said their most recent incident occurred on a social networking site or app; 22% in the comments section of a website; 16% on an online gaming site; 16% in a personal email account; 10% on a discussion site such as Reddit; and 6% on an online dating website or app. In half of all cases, the identities of the harassers were unknown to the harassed.

Trolls come in a variety of shapes and sizes, Suler says, though the basic categories are immature teenagers, chronically angry and frustrated people who take it out on others, narcissists and sociopaths. “The hardcore troll is a sociopath who enjoys hurting people, who wants people to get upset, angry and depressed,” says Suler. “It’s a deliberate act of manipulation and control in order to feel powerful. In fact, such sociopaths want to destroy other people as best they can.”

Who is the troller and how did he get that way? British tech researcher Jonathan Bishop examined one up close, and determined that in the most severe cases, trolls meet the diagnostic criteria for anti-social personality disorder. “These haters usually have a high expectation of what it means to be successful, which is higher than they are able to attain,” he wrote in “The Effect of de-individuation of the Internet Troller on Criminal Procedure implementation: An Interview with a Hater,” published in the International Journal of Cyber Criminology. “This results in them resenting others who think they are successful but whom fall below their standards. It also results in them showing resentment to those with a similar background to them who achieve successes they are unable or unwilling to [achieve].”

“Trolling has always existed in the social life of humans and always will exist, because there will always be people who antagonize and hurt others….” –John R. Suler

Eric K. Clemons, a Wharton professor of operations, information and decisions, places trolls into a taxonomic hierarchy that spans from ignorant or arrogant howlers who do commercial or personal harm, to a class that is no more than the simple fraudster. “These are guys who publish false attacks on products and sellers, or false praise for products and sellers, for a fee,” Clemons notes. “China now gives them jail terms if they are caught. This is not protected freedom of speech, but criminal behavior. It is easy to agree that it should be banned. It is hard to detect, except in a few special cases.” Ratebeer.com, for example, has tens of millions of reviews from hundreds of thousands of reviewers. It is easy to detect an outlier, Clemons says, like someone with only one or two reviews, who lives in St. Louis, and thinks Bud Light is the best beer in the world. “Ratebeer.com still publishes the outliers, but marks them as outliers and does not include them in their summary statistics.”

Clemons says that commercial reviews should be forced to be accurate and relevant.
