Facebook and other prominent social media companies have come under fire over their privacy and content filter policies in the aftermath of the U.K. terror attacks. Following the criticism, Facebook has launched an anti-terror program called the Online Civil Courage Initiative (OCCI).


Free ads for anti-terror groups

Many terrorist groups, such as the Islamic State of Iraq and the Levant, have long been using Facebook and other social media platforms as a potent weapon for recruitment. With the initiative, the company aims to end this. Under the new program, the company will offer training, marketing support and advice to anti-terror groups on how to weed out hate speech and extremist content. The social networking giant will also offer financial aid to these organizations.

Under the initiative, organizations like the Jo Cox Foundation and the Institute for Strategic Dialogue will receive advertising credits for their work tracking individuals at risk of radicalization and steering them away from it. Advertising is seen as a potent tool both to support those working toward de-radicalization and to curb the extremist propaganda that surfaced after the London and Manchester attacks.

Facebook COO Sheryl Sandberg said there is no place for hate or violence on Facebook, adding, “We all have a part to play” in fighting terror. Further, Sandberg stated that through this partnership, Facebook has focused on counter-speech by giving a voice to people to speak against violence and extremism.

“The UK OCCI will support NGOs and community groups who work across the UK to challenge the extremist narratives that cause such harm,” the executive said.

The OCCI is already working in France and Germany, notes The Drum.

More needs to be done: Facebook

Previously, Facebook has said that it wants to be “a hostile place for the terrorists.” Recently, Monika Bickert, director of Global Policy Management, and Brian Fishman, Counter-terrorism Policy Manager at Facebook, stated that the company firmly believes no radical or extremist content should be shared on social media. Both managers are convinced that online communities face the same challenge as real-world communities when it comes to detecting warning signals early.

Facebook has also deployed artificial intelligence to prevent users from uploading photos or videos that match known terrorism imagery. The technology will also help remove pages, posts and profiles that support terrorism.

Sandberg acknowledged that there is more to be done in this regard, adding that they (Facebook, its partners, and the community) will keep fighting violence and extremism on the platform.

Other Internet firms are fighting extremism as well. Just a week ago, Google said in a statement that it would tighten its policies and close loopholes. YouTube, Google’s video streaming service, has often been accused of neglecting hate speech and other extremist content under its complex and loosely designed set of rules. Recently, the service faced criticism for displaying advertisements alongside extremist content. Media reports suggested that one of the London Bridge attackers drew inspiration from YouTube videos of an Islamic preacher.