The Internet’s ‘Nasty’ Side: Can Firms Control The Trolls?

Last month, Milo Yiannopoulos, the widely followed technology editor at the conservative news site Breitbart, led a Twitter campaign against Ghostbusters actress Leslie Jones. Hundreds of trolls heeded his call, hurling racist comments and ugly memes and compelling her to leave the social networking service.

More recently, a Staten Island woman wearing a white cap with the message “America Was Never Great” unwittingly had her photograph snapped and posted on social media, and soon began receiving online death threats. It’s no wonder that when Cinderly, a new fashion app for girls, announced its launch, the start-up said that a no-trolling pledge for users would come with the deal, “since the internet can be a nasty place.”

Indeed it can. But it’s not just the stereotypical malcontent who is spewing digital invective. A White House national security aide was fired after being unmasked as the anonymous Twitter troll who had been taunting senior government officials. Every few days brings another instance in which some troll somewhere has succeeded in making life miserable for someone.

Trolling — a term that has become a catch-all encompassing a spectrum of bad online behavior — poses major challenges to social media sites, publishers and retailers. “When trolling gets bad, it can really wreck the experience of any customer who wants to use the online service. Not many people want to go to or participate in a place where others are being mean or acting like idiots,” says Rider University psychologist John R. Suler, author of Psychology of the Digital Age: Humans Become Electric. “How much trolling is a problem for a business depends on how it runs its social media site. If there is a space for people to speak their mind anonymously, trolls will likely appear. It will also depend on the reputation of the company and the nature of the products or services they offer. Some companies, products and services draw more fire from trolls than others.”

“Companies need to consider their revenue model, how much activity the trolls actually represent and the overall impacts in both directions.” –Kevin Werbach

And then there is the question of how much a company actually wants to discourage trolls, says Wharton professor of legal studies and business ethics Kevin Werbach. “Trolls and their followers often generate a large volume of activity. Services that monetize based on eyeballs may be concerned about cutting down on their traffic or user growth,” says Werbach. “Companies need to consider their revenue model, how much activity the trolls actually represent and the overall impacts in both directions.” Cutting down on abuse may make the platform more attractive to current and potential users, for example. “Ultimately, these firms have to decide what kind of company they want to be,” he adds. “Sometimes pursuing every drop of short-term revenue obscures the most profitable strategy over the long term.”

Some are trying to have their cake and eat it, too. Entrepreneurs and Google alumni Bindu Reddy and Arvind Sundararajan have co-founded a new social app called Candid that aims to create a digital safe space by using artificial intelligence to monitor and curate conversations. Users are anonymous, but earn “badges” based on past posts that tag them as influencers, givers or socializers — or gossips and haters.

Harassment Happens

Trolling is worse than ever, but it has been present “since the very beginning of the internet, when chat rooms and discussion boards ruled,” says Suler. “Before the internet, we didn’t see much trolling on TV, but it did happen on radio, especially during call-in shows that allowed people to be anonymous. Trolling has always existed in the social life of humans and always will exist, because there will always be people who antagonize and hurt others, either because they feel compelled to or simply because they enjoy it.”

They are a busy breed. Nearly three-quarters of 2,849 respondents to a 2014 Pew Research Center survey said they had seen someone harassed online, with 40% saying they had experienced it personally, ranging from name-calling to stalking and threats of physical violence. The report showed that men are “more likely to experience name-calling and embarrassment, while young women are particularly vulnerable to sexual harassment and stalking.”

As to where harassment happens, 66% of internet users experiencing online harassment said their most recent incident occurred on a social networking site or app; 22% in the comments section of a website; 16% on an online gaming site; 16% in a personal email account; 10% on a discussion site such as Reddit; and 6% on an online dating website or app. In half of all cases, the identities of the harassers were unknown to the harassed.

Trolls come in a variety of shapes and sizes, Suler says, though the basic categories are immature teenagers, chronically angry and frustrated people who take it out on others, narcissists and sociopaths. “The hardcore troll is a sociopath who enjoys hurting people, who wants people to get upset, angry and depressed,” says Suler. “It’s a deliberate act of manipulation and control in order to feel powerful. In fact, such sociopaths want to destroy other people as best they can.”

Who is the troller and how did he get that way? British tech researcher Jonathan Bishop examined one up close and determined that in the most severe cases, trolls meet the diagnostic criteria for anti-social personality disorder. “These haters usually have a high expectation of what it means to be successful, which is higher than they are able to attain,” he wrote in “The Effect of De-individuation of the Internet Troller on Criminal Procedure Implementation: An Interview with a Hater,” published in the International Journal of Cyber Criminology. “This results in them resenting others who think they are successful but whom fall below their standards. It also results in them showing resentment to those with a similar background to them who achieve successes they are unable or unwilling to [achieve].”

“Trolling has always existed in the social life of humans and always will exist, because there will always be people who antagonize and hurt others….” –John R. Suler

Eric K. Clemons, a Wharton professor of operations, information and decisions, places trolls into a taxonomic hierarchy that spans from ignorant or arrogant howlers who do commercial or personal harm, to a class that is no more than the simple fraudster. “These are guys who publish false attacks on products and sellers, or false praise for products and sellers, for a fee,” Clemons notes. “China now gives them jail terms if they are caught. This is not protected freedom of speech, but criminal behavior. It is easy to agree that it should be banned. It is hard to detect, except in a few special cases.” Ratebeer.com, for example, has tens of millions of reviews from hundreds of thousands of reviewers. It is easy to detect an outlier, Clemons says, like someone with only one or two reviews, who lives in St. Louis, and thinks Bud Light is the best beer in the world. “Ratebeer.com still publishes the outliers, but marks them as outliers and does not include them in their summary statistics.”
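
Clemons does not detail Ratebeer.com’s mechanics, but the outlier handling he describes can be sketched in a few lines of Python. Everything here (reviewer names, counts, ratings, and the five-review cutoff) is invented for illustration:

```python
from statistics import mean

# Hypothetical review records: (reviewer, lifetime review count, rating out of 5).
# Names, numbers and the cutoff are illustrative, not Ratebeer.com's actual rules.
reviews = [
    ("veteran_taster", 412, 2.1),
    ("hop_head", 1038, 1.8),
    ("casual_fan", 57, 2.5),
    ("one_off_account", 1, 5.0),  # a single review with an extreme score
]

MIN_HISTORY = 5  # reviewers with fewer reviews than this are treated as outliers

def summarize(reviews, min_history=MIN_HISTORY):
    """Publish all reviews, but keep low-history outliers out of the summary."""
    trusted = [rating for _, count, rating in reviews if count >= min_history]
    outliers = [(who, rating) for who, count, rating in reviews if count < min_history]
    return mean(trusted), outliers

avg, flagged = summarize(reviews)
print(f"Summary rating: {avg:.2f}")
for who, rating in flagged:
    print(f"Published but marked as outlier: {who} rated {rating}")
```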

Clemons says that commercial reviews should be forced to be accurate and relevant. “Some editing is legit, if it is backed by analysis and expressed in a clear policy,” he says. “Comments on blogs should no longer be anonymous, and should be held to a high standard of accuracy and relevance as well. But this is easier said than done.”

Weeding Out the Trolls

The challenge for companies such as Twitter and Reddit is that “the less you try to control what users do on the platform, the easier it is for some of them to engage in abuse,” says Werbach. “Requiring real names is one way to cut down on abuse, but there are many legitimate reasons to allow pseudonyms or anonymous speech. And it’s very tough to write a set of rules that distinguish appropriate and inappropriate activities in every context. Add to that the enormous volumes of traffic on these social media services, and it’s a real challenge.”

Cases of abuse have been wending their way through the legal system, though such help has been limited. “The U.S. Supreme Court has dealt with this issue many times,” notes Werbach. “The First Amendment says the government can’t limit the freedom of speech, but that doesn’t mean you can threaten the President or yell ‘fire!’ in a crowded theater. We can debate about the close cases, like whether it’s legitimate speech to burn the flag as a form of protest. But most of the cases we’re talking about with Twitter and other services aren’t in the grey area. When people go out of their way to hurt others — and not just psychologically — then revel in it, that’s not a political act.”

While legal remedies may be appropriate in specific cases involving defamatory speech or harassment when it becomes a true threat, “the laws are written specifically to take into account First Amendment protection of most online speech,” concludes a Fordham Law School report, “Online Harassment, Defamation and Hateful Speech: A Primer of the Legal Landscape,” from 2014. “Moreover, online anonymity can make it quite difficult to identify a perpetrator, and issues of jurisdiction complicate which police department, court or state is most appropriate to handle the complaint. Add to this limited resources and computer literacy, and legal remedies begin to look like a last resort.”

“Comments on blogs should no longer be anonymous, and should be held to a high standard of accuracy and relevance as well. But this is easier said than done.” –Eric Clemons

Bishop proposes the creation of a Trolling Magnitude Scale. “Then it will make it easier for the police and other law enforcement authorities to prioritize who is prosecuted in an objective way, rather than feel obligated to take action when it may not be in the public interest to do so.”

The recent Twitter episode involving Leslie Jones blended two internet plagues into one foul mess: trolling and cyberbullying. After it was revealed that the new Ghostbusters movie would feature an all-female cast, both Jones and her co-stars were targeted by trolls. The racist and misogynist messages fired at Jones’ account intensified after the film’s release in July. Twitter eventually suspended Yiannopoulos, and the company admitted that it needed to do more to “prohibit additional types of abusive behavior and allow more types of reporting, with the goal of reducing the burden on the person being targeted.” The company, which has long been criticized for not doing enough to protect its users, says it will announce more details shortly.

Wharton marketing professor Pinar Yildirim says it would not be difficult for Twitter to introduce a filter that examines an account’s tweeting pattern and content so that users could weed out trolls. “When you have an account that tweets every day exactly the same content that another 100 or 1,000 or more people tweet, and it does this for a long period of time, it is not difficult for algorithms to detect trolls,” Yildirim says. Twitter could introduce options for receiving information only from a user’s immediate (followed) network, from friends, or from verified or trusted accounts, meaning accounts with a high probability of not being trolls. “They can develop ‘for’ or ‘against’ filters, where you can be exposed to ideas similar or different to yours, again, based on text analysis algorithms, which can detect meaning with about 80% accuracy,” she says. “Twitter is a universe for hearing from other people, but the company can offer more options about navigating this information.”
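
The pattern Yildirim describes, identical content echoed by many accounts over time, lends itself to simple detection. A minimal sketch, with hypothetical accounts and an arbitrary threshold, might look like this:

```python
from collections import defaultdict

# Hypothetical (account, tweet text) pairs; a real system would observe the
# full stream over a long window, as Yildirim notes.
tweets = [
    ("acct_1", "Buy now! #promo"),
    ("acct_2", "Buy now! #promo"),
    ("acct_3", "Buy now! #promo"),
    ("acct_4", "Just saw a great movie tonight"),
]

ECHO_THRESHOLD = 3  # illustrative: text posted verbatim by this many accounts is suspect

def flag_coordinated_accounts(tweets, threshold=ECHO_THRESHOLD):
    """Flag accounts that post text identical to many other accounts' posts."""
    posters = defaultdict(set)
    for account, text in tweets:
        posters[text.strip().lower()].add(account)
    flagged = set()
    for accounts in posters.values():
        if len(accounts) >= threshold:
            flagged |= accounts
    return flagged

print(sorted(flag_coordinated_accounts(tweets)))  # ['acct_1', 'acct_2', 'acct_3']
```

The “for” or “against” filters she mentions would sit on top of text-analysis models rather than exact matching, which is where the roughly 80% accuracy figure comes in.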

The ‘Nasty Effect’

For businesses, the consequences of allowing unmediated nastiness may be quite severe. Research has shown that for some people, the act of reading rude or angry comments actually changes opinions about the subject matter at hand. In “The ‘Nasty Effect:’ Online Incivility and Risk Perceptions of Emerging Technologies,” published in the Journal of Computer-Mediated Communication, the authors had 1,183 participants read a news post on a fictitious blog about the risks and benefits of a substance called nanosilver.

Afterward, half of the readers were shown civil comments, the other half rude ones. Those who read the civil comments were not swayed from their original opinions on the question of nanosilver’s risk-benefit proposition. But readers exposed to the rude comments ended up being much more likely to think that the downside of nanosilver was greater than they had thought upon first reading the article. “Online communication and discussion of new topics such as emerging technologies has the potential to enrich public deliberation. Nevertheless, this study’s findings show that online incivility may impede this democratic goal,” wrote the study’s authors.

“Twitter is a universe for hearing from other people, but the company can offer more options about navigating this information.” –Pinar Yildirim

It’s also not necessarily good for commerce. “A good tool that many social media companies have used is giving customers a way to ‘tune out’ anyone who is a troll,” says Suler. “But in a customer site for discussing or reviewing products, many customers probably won’t use it much. They just won’t come back if the trolling is out of control.”

What all this means is that much of the delicate task of mediating free speech often rests in the hands of businesses that aren’t necessarily schooled in First Amendment rights. How does a clerk for a retailer decide which comments to delete in a product review section or on a social media site? Facebook drew attention to the gray area inhabited by social media a few months ago when it announced it was changing its news feed algorithm to favor friends and family over content from news publishers, a group that got 41% of its traffic via Facebook referrals in one sampling taken by analytics firm Parse.ly.

There are a variety of things that companies can do to prevent, detect and manage trolling, but no solution is perfect, says Suler. “Relying on automated interventions, such as algorithms, to detect and delete offensive language works OK, but not great. It’s always a good idea to make it easy for customers to report inappropriate behavior. But then how does the company intervene? Try to reason with trolls? Ban trolls from the site? Both of these turn out to be tricky issues, because ‘trolling’ will have to be defined with rules and then the rules must be consistently enforced. That takes paid workers to carry out those strategies. Paid moderation is always a good strategy for monitoring, regulating and intervening in social media, but that does cost money.”
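
Suler’s caveat that automated detection works “OK, but not great” is easy to demonstrate with a toy keyword filter. The blocklist and test comments below are invented; the point is that simple word-matching both misses abuse and flags innocent uses:

```python
import re

# Invented blocklist for illustration; production systems use trained
# classifiers because simple word lists cannot read context.
BLOCKLIST = {"idiot", "moron"}

def looks_offensive(comment: str) -> bool:
    """Naive check: does any blocklisted word appear in the comment?"""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(words & BLOCKLIST)

print(looks_offensive("You absolute idiot."))                # True: caught
print(looks_offensive("You utter nitwit."))                  # False: abuse missed
print(looks_offensive("Calling someone an idiot is rude."))  # True: false positive
```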

And it may cost a company its image. Yildirim says negative public relations or charges of censorship are a risk, and monitoring requires a “centralized decision maker who can say what is good or what is bad, or what should be talked about and what should not be.” But this is highly relative from one person to another, one country to another, or from one time period to another, she notes. The way many social media platforms handle the cleanup right now is through collective input: the reports and requests from others to block or close an account. “This prevents the platform from being the decision maker on what should be talked about and what should not be. It also provides them with a justification for closing accounts — customer complaints.”
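
At its simplest, the collective-input model Yildirim describes reduces to counting distinct complainants before a human reviews an account. The names and threshold here are hypothetical:

```python
# Hypothetical (reporter, reported account) complaints.
reports = [
    ("user_a", "troll_account"),
    ("user_b", "troll_account"),
    ("user_c", "troll_account"),
    ("user_a", "ordinary_account"),
]

REPORT_THRESHOLD = 3  # illustrative cutoff before human review

def accounts_for_review(reports, threshold=REPORT_THRESHOLD):
    """Count distinct reporters per account, so one user cannot mass-report a target."""
    reporters = {}
    for reporter, account in reports:
        reporters.setdefault(account, set()).add(reporter)
    return [acct for acct, who in reporters.items() if len(who) >= threshold]

print(accounts_for_review(reports))  # ['troll_account']
```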

Businesses have been grappling with trolling for a long time, and yet it remains a thorny issue. “Twenty years ago,” says Suler, “when I spoke to a representative from a major communications company about trolling … he said, ‘What’s good for the company isn’t necessarily good for the community. And what’s good for the community isn’t necessarily good for the company.’ I think this still holds true.”

Ultimately, social media platforms must choose what they want to be, says Werbach: a place where anything goes, regardless of the human cost, or a safe space for open communication. “While management of these companies may be ideologically reluctant to be the ones limiting speech, if they don’t, they’re allowing the loudest and most abusive voices to silence other users.”
