Facebook Has Bad News For Its Underage Users

Facebook is now proactively identifying and locking the accounts of users under the age of 13. Previously, the world’s largest social networking platform investigated only those accounts that had been flagged as belonging to underage users.

What prompted Facebook?

In a blog post, Facebook’s Vice President of Global Policy Management, Monika Bickert, stated that any account the company believes belongs to a user under 13 will be put on hold, and the owner will not be able to unlock it without providing proof of age.

“We do not allow people under 13 to have a Facebook account. If someone is reported to us as being under 13, the reviewer will look at the content on their profile (text and photos) to try to ascertain their age,” the executive said.

Facebook’s latest action on underage users follows an undercover documentary report by the UK’s Channel 4 and Firecrest Films. In the documentary, a journalist who went undercover as a Facebook content reviewer at a third-party firm in Dublin, Ireland, claimed that reviewers were directed to ignore underage users.

Further, the documentary reveals that reviewers took no action unless users themselves said they were underage. “If not, we just like pretend that we are blind and that we don’t know what underage looks like,” the reviewer said. The documentary also shows reviewers leaving up posts containing racist, abusive, and violent content, material Facebook claims is against its standards.

Strict guidelines on underage users

Legally, Facebook, like many other websites, faces no liability if an underage user signs up. However, the Channel 4 documentary makes clear that the social networking site has not been making effective efforts to keep underage users off the platform.

“We take these mistakes incredibly seriously and are grateful to the journalists who brought them to our attention. We have been investigating exactly what happened so we can prevent these issues from happening again,” Bickert added.

Facebook is now promising to double its safety and security staff to 20,000. Even so, that number seems minuscule considering Facebook has over 2 billion accounts.

Facebook’s age policy is designed to comply with the U.S. Children’s Online Privacy Protection Act (COPPA). Under the act, websites may collect data from children under 13 only with parental consent. Though the age policy itself has not changed, Facebook has issued new guidelines to its reviewers.

A Facebook spokesperson also confirmed the “operational” change to TechCrunch, stating that reviewers will receive proper training to enforce the age restriction policy on both Facebook and Instagram.

A couple of months back, Twitter introduced a similar policy: the micro-blogging site blocked the accounts of users who were underage when they first signed up for the service, even if they are now well over 18. After the new privacy regulations took effect, Twitter locked the accounts of users whose self-declared birthdates indicated they might have been underage when they first signed up.

Speaking to the Guardian, a user who is now 20 said: “I received a message saying my account was now locked, and would require parental consent in order to process my data, or my account will be deleted.”

Tightening security norms

Lately, Facebook has been actively taking steps to crack down on hate mongers and enforce its content policies. Recently, the company announced that it would remove any posts that “could lead to physical violence” in order to curb hate speech and misinformation spreading on the platform. The new policy was announced in the aftermath of inter-religious violence in Sri Lanka that claimed three lives in March 2018.

Facebook already categorizes hate speech and threats as violations of its rules, and such posts are automatically removed by the company. However, the new policy goes a step further by eliminating content that may not be explicitly violent but may encourage such behavior. The company will also remove inaccurate or misleading content that is created or shared to inflame a volatile situation.

“There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down,” a Facebook spokesperson said, adding that the new policy would be implemented soon. To implement the new policy effectively, the social networking giant will collaborate with local organizations and authorities to identify false posts that may incite violence.
