Google is set to expand the team responsible for weeding out extremist content on YouTube to more than 10,000 people by next year. In a blog post, YouTube said it would also revise its monetization rules to better identify videos and channels that fall outside its advertising guidelines.
Addressing monetization concerns as well
On Monday, CEO Susan Wojcicki stated that the company is determined to keep YouTube free of extremist content and to make it a safer platform for creators and advertisers.
The announcement comes in the wake of criticism from British Prime Minister Theresa May, who, following a series of terrorist attacks in the U.K., has repeatedly pressed social media companies to regulate content responsibly.
“The tech companies have made significant progress on this issue, but we need to go further and faster to reduce the time it takes to reduce terrorist content online,” May said during her speech at the United Nations in October.
Recently, British media reported that big-brand advertisements were appearing alongside videos of children and teens that drew inappropriate public comments. Several reports said advertisers were no longer comfortable placing their ads on YouTube as a result. Following the criticism, YouTube pulled 150,000 videos of children last week.
YouTube stated that it would continue to work closely with the National Center for Missing and Exploited Children (NCMEC) to identify potentially illegal behavior and report it to law enforcement.
Addressing creators' monetization concerns in a separate post on YouTube's Creators blog, Wojcicki said creators have made it clear that YouTube needs to review content more carefully so that valid videos are not demonetized. She added that the expanded team would help YouTube review content more accurately and avoid wrongful demonetizations, giving creators more stability on the revenue front.
“We will be talking to creators over the next few weeks to hone this new approach,” the YouTube CEO said.
Machine learning makes it easier to police extremist content
The YouTube CEO also stated that the company had developed machine-learning technology capable of weeding out radical content on a platform where hundreds of hours of video are uploaded every minute. The technology helps flag inappropriate or exploitative content.
Discussing the machine-learning program, Wojcicki noted that human reviewers remain essential both for removing content and for training the machine-learning systems, because human judgment is critical to making “contextualized” decisions about content. Since June, moderators have manually reviewed about 2 million videos for violent extremist content while training machine-learning systems to identify similar objectionable material.
In recent weeks, machine learning has helped YouTube identify and remove violative content as well as inappropriate accounts. According to YouTube, it has enabled human moderators to remove approximately five times more videos than before, and algorithms flagged about 98% of the videos removed for violent extremism. The technology has also helped take down approximately 70% of extremist content within eight hours of upload, notes The Guardian.