4 Ways to Protect Your Brand from the Risks of Video Streaming

This year alone, Facebook’s user base exceeded the two billion mark worldwide. Facebook Live statistics reveal that one in five videos on the platform is a live stream, and nearly 20% of its overall video content is live video. Instagram statistics, meanwhile, show that posts containing videos receive about 38% more engagement than those containing images.

Additionally, TikTok estimates that it has 800 million active users around the globe, with 53% of its engaged users claiming to have uploaded at least one video in the last month. As of February of this year, YouTube uploads averaged 300 hours of video per minute.

Uploading video instantly is appealing to both those who create and consume content. In terms of content moderation, however, providing users with the ability to stream live video has made the process notably more challenging. And the results of faulty content moderation can be upsetting and even horrific.

The Challenges of Moderating Live Streamed Content

Facebook, which depends largely on automated systems for content removal, reported that it took down 15.4 million pieces of graphic and violent content between July and September of 2018. Live streams, on the other hand, can be even more difficult to moderate, as was apparent on August 31, when a Facebook Live user took his own life while live streaming from his computer.

Although Facebook removed the original video on the day it was streamed and has used automation technology to remove copies and re-uploads since then, this didn’t stop the video from spreading to Instagram, Twitter, TikTok, and YouTube, where an image of the user ran alongside ads, attracting thousands of viewers.

This isn’t the first time that videos of violence and crime have been broadcast live, or that platforms such as Facebook and YouTube have declared their intention to use moderation to stop the redistribution of viral videos depicting self-harm, among other inappropriate content. Despite those declarations, major platforms still face moderation shortcomings, compounded by a pandemic that forced Facebook, Twitter, and YouTube to send home thousands of content moderators. In the absence of a large portion of their human teams, these social media giants turned almost exclusively to artificial intelligence to enforce their moderation policies.

Here’s the good news:

You can still take advantage of the many benefits of allowing video streaming on your app or website if you manage associated risks with user-generated content (UGC) moderation. And the first step is to put a content moderation plan in place before your organization allows real-time video on its site or app. Today, we’ll look at the most effective ways to minimize the risks associated with video streaming. But first, let’s have a closer look at the benefits that video streaming can offer your brand.

Benefits of Video and Live Streaming

Allowing users to live stream video can be beneficial to your brand for several reasons. Compared to other video marketing strategies, live broadcasting can be cost-efficient. Real-time video can also convey creativity and improve audience reach. Additionally, live video is instant content, which can allow for direct communication between your business and your audience. Since this demonstrates that you care enough about your customers or clients to address them directly, live streamed content establishes your brand as trustworthy.

According to a Brandlive survey, 74% of businesses use live broadcasting to engage with their consumers. So should your business join the live streaming ranks?

That depends.

If your brand offers digital services, then you may want to consider live video. Hosting an educational event, such as a live webinar, can be a helpful way to promote your brand.

Video streaming can also be appealing to brands that run frequent UGC and social media contests. Announcing contest winners through a live video from Instagram’s IGTV app is another example of how live streaming can boost your brand’s marketing efforts.

Live streaming has become a popular marketing tool over the past few years, with 1.1 billion hours of live video watched in 2019. And 82% of individuals report that they would rather see live video from a brand than the average social media post. With so many users preferring to watch video over any other content type, live broadcasting is a viable marketing option, as long as you go the extra mile to protect your brand’s reputation by making content moderation part of your video marketing strategy.

Approaches To Video And Live Stream Moderation

The downside to live streaming is its unpredictable nature, which may make your platform susceptible to offensive content. To minimize these risks, you need to create systems that enhance the flow of live broadcast moderation while improving its efficiency and scalability.

Before you allow live streaming on your platform, consider these four approaches to moderating live streamed content:

1. Involve Users, Moderators, and Authorities

If you give your users the ability to flag offensive content, then any live streamed content that is violent or criminal in nature may be brought to your attention early on.

When a video is flagged repeatedly, live human moderators should be alerted. They can then review the live video and remove inappropriate content before it is viewed by the masses or spreads to other platforms. In the event that the flagged video is broadcasting a crime or any other illegal content, authorities should be alerted so further action can be taken.
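As a rough illustration, here’s a minimal Python sketch of that flag-threshold workflow. The threshold value, the reason labels, and the notify_moderators / notify_authorities helpers are assumptions made for the example, not part of any real moderation API.

```python
# Illustrative only: escalate a live stream once viewers have flagged it enough times.
from collections import defaultdict

FLAG_THRESHOLD = 3                                  # assumed: flags needed before escalation
ILLEGAL_REASONS = {"violence", "crime", "self-harm"}

flag_counts = defaultdict(int)                      # stream_id -> number of viewer flags

def notify_moderators(stream_id: str, reason: str) -> None:
    # Hypothetical hook: page the on-call human moderation team.
    print(f"[ESCALATE] stream {stream_id}: review flagged content ('{reason}')")

def notify_authorities(stream_id: str, reason: str) -> None:
    # Hypothetical hook: file a report when the stream may show illegal activity.
    print(f"[REPORT] stream {stream_id}: possible illegal activity ('{reason}')")

def handle_flag(stream_id: str, reason: str) -> None:
    """Record a viewer flag and escalate once the threshold is reached."""
    flag_counts[stream_id] += 1
    if flag_counts[stream_id] == FLAG_THRESHOLD:
        notify_moderators(stream_id, reason)
        if reason in ILLEGAL_REASONS:
            notify_authorities(stream_id, reason)

# The third flag on the same stream triggers both alerts.
for _ in range(3):
    handle_flag("stream-123", "violence")
```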

Many companies handle their site’s sensitive content by having an actual human being review and approve UGC only after it has been flagged. By bringing video that is already live to the attention of a human moderation team, it’s possible to quickly shut down content that may be brand-damaging or disturbing.

Unfortunately, it’s also possible that by the time users have flagged a video manually, unsuspecting viewers may already be exposed to the content. Automated video moderation that uses artificial intelligence, however, can be effective at quickly detecting violence, hate symbols, nudity, criminal acts, and other offensive content in videos.

2. Use Automated Image Moderation to Scan Frames of Video

AI can process a high volume of images in a short amount of time, offering automatic detection and elimination of inappropriate content. Advanced automated image moderation technology can take frames from videos at configurable intervals and scan them for nudity, terrorism, graphic violence, and other categories.
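To make the frame-sampling step concrete, here’s a minimal Python sketch using OpenCV. The two-second interval, the file path, and the scan_frame() call in the usage comment are placeholders for whichever moderation service and settings you actually use.

```python
# Illustrative only: pull one frame every few seconds so each can be scanned
# by an image-moderation model.
import cv2  # assumes the opencv-python package is installed

def sample_frames(video_path: str, interval_seconds: float = 2.0):
    """Yield (timestamp_seconds, frame) pairs at a fixed sampling interval."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0        # fall back if the container lacks FPS info
    step = max(int(fps * interval_seconds), 1)

    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            yield index / fps, frame
        index += 1
    cap.release()

# Usage sketch (scan_frame is a stand-in for your moderation API):
# for ts, frame in sample_frames("stream_recording.mp4"):
#     labels = scan_frame(frame)   # e.g. {"nudity": 0.02, "violence": 0.91}
```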

The technology, however, still struggles with something that human eyes don’t: distinguishing contextual nuance from legitimately offensive content. To address this dilemma, combine automation technology with the talent of highly trained human moderators to monitor live video broadcasts and catch undesirable content.

The AI identifies images with a high probability of containing untoward content, so only video that cannot be classified accurately needs to be escalated to highly trained human moderators. Moderators trained to flag additional violations then review this escalated content.
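A minimal sketch of that routing logic, assuming the model returns a confidence score per category for each sampled frame, might look like this; the thresholds and category names are illustrative, not recommendations.

```python
# Illustrative only: auto-remove near-certain violations, auto-approve clearly
# safe frames, and send only the ambiguous middle band to human moderators.
REMOVE_THRESHOLD = 0.90   # assumed: scores above this are treated as violations
REVIEW_THRESHOLD = 0.40   # assumed: scores between the thresholds need a human look

def route_frame(scores: dict) -> str:
    """Return 'remove', 'review', or 'approve' for one frame's category scores."""
    worst = max(scores.values(), default=0.0)
    if worst >= REMOVE_THRESHOLD:
        return "remove"
    if worst >= REVIEW_THRESHOLD:
        return "review"            # queue for trained human moderators
    return "approve"

print(route_frame({"violence": 0.95, "nudity": 0.01}))  # remove
print(route_frame({"violence": 0.55, "nudity": 0.10}))  # review
print(route_frame({"violence": 0.05, "nudity": 0.02}))  # approve
```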

3. Monitor What’s Trending

Your company is responsible for what everyone’s looking at on your site, which is where community scoring comes into play. A critical moderation technique, community scoring allows you to monitor and review content that is trending among your online community.

Why does trending content matter? The reality is that if there are a lot of eyeballs on specific content on your site, your team should be watching it too. Through community scoring, video is escalated to live moderators when views spike, enabling the human moderation team to determine whether the trending content is inappropriate or innocent.
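Here’s a minimal sketch of one way to detect that kind of spike, assuming you periodically sample each stream’s viewer count. The five-minute window and the three-times multiplier are illustrative values to tune against your own traffic; escalate the stream to live moderators whenever the check returns True.

```python
# Illustrative only: flag a stream for human review when its audience grows
# several-fold within a short window.
import time
from collections import deque
from typing import Optional

WINDOW_SECONDS = 300      # assumed: compare against the viewer count ~5 minutes ago
SPIKE_MULTIPLIER = 3.0    # assumed: 3x growth inside the window counts as trending

history = deque()         # (timestamp, viewer_count) samples for one stream

def record_viewers(viewer_count: int, now: Optional[float] = None) -> bool:
    """Record a viewer-count sample; return True if the stream should be escalated."""
    now = now if now is not None else time.time()
    history.append((now, viewer_count))

    # Drop samples older than the comparison window.
    while history and now - history[0][0] > WINDOW_SECONDS:
        history.popleft()

    baseline = history[0][1]
    return baseline > 0 and viewer_count >= baseline * SPIKE_MULTIPLIER
```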

4. Review Reputation Scores

Reputation scoring is an effective moderation approach that considers the track record, if any, of the user submitting a live video when deciding which newly posted UGC videos should be escalated for human review. To begin, determine whether the user is new or has been part of your brand’s online community for a significant amount of time. Long-time users who have been posting for a while without any flags will need far less attention from your moderators.

On the other hand, long-time users who have a reputation for posting inappropriate content in the past will have a low reputation score, and therefore require closer monitoring when streaming live videos. When it comes to new users to your site, a live moderation team should review their first few videos closely to establish an initial reputation score.
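A minimal sketch of how a reputation score could drive that routing, assuming you track how many of a user’s past videos were flagged, might look like this. The score formula, the 0.8 cutoff, and the "first few videos" rule are assumptions for illustration.

```python
# Illustrative only: decide how closely a user's new live video should be watched
# based on how many of their past posts were flagged.
from dataclasses import dataclass

@dataclass
class UserReputation:
    videos_posted: int = 0
    flagged_videos: int = 0

    @property
    def score(self) -> float:
        """1.0 means a clean history; the score drops as more posts get flagged."""
        if self.videos_posted == 0:
            return 0.0                      # new user: no track record yet
        return 1.0 - self.flagged_videos / self.videos_posted

def needs_close_review(user: UserReputation) -> bool:
    """Escalate brand-new users and users with a history of flagged content."""
    is_new = user.videos_posted < 3         # review a new user's first few videos closely
    has_bad_history = user.score < 0.8      # assumed cutoff for a "low" reputation score
    return is_new or has_bad_history

print(needs_close_review(UserReputation(videos_posted=1)))                      # True (new user)
print(needs_close_review(UserReputation(videos_posted=50, flagged_videos=0)))   # False (clean record)
print(needs_close_review(UserReputation(videos_posted=40, flagged_videos=12)))  # True (low score)
```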

Flag Offensive Language And Comments

In the case of video streaming on your platform, inappropriate images aren’t the only issue requiring moderation. Chat within a live video should be moderated as well.

Alongside a live video, there’s always a good deal of chatter, and a profanity filter service is a useful tool for flagging offensive language that may (or may not) be related to an offensive live video.

After a video is live, you will still have to be vigilant regarding user comments. Be sure comments are scanned for potential scammers or users misrepresenting themselves, and ban the violating accounts immediately. Since it’s your responsibility to eliminate any explicit material, harmful links, or targeted harassment, have a live moderation team check for any inappropriate or illegal links or promotions within the content and change or remove them.
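As a simple illustration of the text side, here’s a minimal Python sketch of a word-list chat filter that also flags links for review. Real profanity filter services handle misspellings, obfuscation, context, and multiple languages; the word list and the link rule here are placeholders.

```python
# Illustrative only: hold chat messages that contain blocked terms or links
# so a moderator can review them.
import re

BLOCKED_TERMS = {"badword1", "badword2"}            # placeholder word list
LINK_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)

def review_chat_message(message: str) -> list:
    """Return the reasons, if any, this message should be held for moderation."""
    reasons = []
    words = set(re.findall(r"[a-z']+", message.lower()))
    if words & BLOCKED_TERMS:
        reasons.append("profanity")
    if LINK_PATTERN.search(message):
        reasons.append("link")                      # possible scam or harmful link
    return reasons

print(review_chat_message("check out https://example.com for free stuff"))  # ['link']
print(review_chat_message("totally fine comment"))                          # []
```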

Moderating Live Video: A Team Effort

If you decide to only permit video submissions on your platform that are not streamed live, then you can pre-moderate all content before allowing it to go live. Pre-moderation, however, is not an option when it comes to live streaming. The good news is that the various building blocks and tools mentioned above can be implemented on your website to make instant videos work optimally.

If you’re going to allow live video but you’re still not sure how to approach live broadcast moderation, find a content moderation company that can provide guidance for managing the risks of live video and UGC through a hybrid moderation approach. Look for a content moderation partner that has a thorough training and quality control program for its in-house moderation team, as well as advanced AI capabilities for moderating video. In addition, they should offer an automated profanity filter service accompanied by more complex text analysis tools designed to flag threats, abuse, and criminal activity.

User-generated video is an important tool for building brand recognition and trust, but the online ecosystem has a level of anonymity that makes UGC video risky, and moderation a necessity for any brand willing to allow live streaming. To ensure that users perceive your brand as upstanding and that visitors are protected from offensive or even harmful content, there are only two options for truly avoiding the risks of live video:

Use a hybrid approach to moderation. Or don’t offer live broadcasting on your site or app at all.