A powerful profanity filter service to catch offensive content

ValueWalk’s Q&A session with Jonathan Freger, Co-Founder and CTO, and Joshua Buxbaum, Co-Founder and Sales Director, at WebPurify. In this interview, Jonathan and Joshua discuss their backgrounds and the company’s history, its artificial intelligence and profanity filter services, the risk of offensive UGC, how WebPurify differs from Disqus, the line between acceptable self-expression and offensive content, their opinions on proposals from Senators Hawley and Warren, whether AI is hype or the real deal, and their case for algorithmic transparency.

Can you tell us about your background?

Jonathan Freger: Co-Founder/CTO

Before founding WebPurify, Jonathan was a founder and Technology Director of the interactive marketing agency Deep Focus, where he was responsible for guiding the technical vision of the organization and leading development teams for clients such as Microsoft, Nintendo, and Calvin Klein.

His passion for APIs drove him to research and develop the WebPurify Profanity Filter in 2007 and subsequently create advanced AI moderation services for user-generated images and videos.

He currently oversees WebPurify's technical teams.

Joshua Buxbaum: Co-Founder/Sales Director

Joshua brings 20 years of sales experience to the WebPurify team.

His commitment to customer satisfaction is one of the driving forces behind the company’s success, and his proven track record in client communications has contributed significantly to the evolution of WebPurify’s services.

Joshua is a graduate of the George Washington University with a B.A. in Communications.

He currently oversees the Sales and Client Services team in WebPurify's Los Angeles Office.

What about your business?

WebPurify started over 12 years ago, initially offering a profanity filter service designed to catch offensive text.

Our users were equally concerned with image and video submissions, so we began to expand our services to address those concerns with both our technology and our live teams.

Today, we are excited to be a content-moderation industry leader helping hundreds of companies protect their users and brands from offensive user-generated content, 24/7.

What products do you offer?

For image and video UGC, we offer our highly trained live moderation team, available 24/7, in addition to our Artificial Intelligence service called AIM.

We also provide a powerful profanity filter service designed to catch the various and creative ways users attempt to submit offensive text in 15 different languages.
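
To make that concrete, here is a minimal sketch of what a text-check call against a moderation API like WebPurify's might look like. The endpoint, method name, parameters, and response shape are assumptions based on WebPurify's public REST documentation rather than verified details, and the API key is a placeholder:

```python
# Hedged sketch of a profanity-check call against a REST moderation API.
import requests

API_KEY = "your-api-key"  # placeholder; a real key would come from WebPurify
ENDPOINT = "https://api1.webpurify.com/services/rest/"  # assumed base URL

def contains_profanity(text: str, lang: str = "en") -> bool:
    """Return True if the service flags any offensive terms in `text`."""
    params = {
        "api_key": API_KEY,
        "method": "webpurify.live.check",  # assumed method name
        "format": "json",
        "lang": lang,
        "text": text,
    }
    resp = requests.get(ENDPOINT, params=params, timeout=10)
    resp.raise_for_status()
    # Assumed response shape: {"rsp": {"found": "0"}} when the text is clean.
    return resp.json()["rsp"]["found"] != "0"

print(contains_profanity("hello world"))  # expected: False
```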

What exactly is your profanity filter service moderating and for whom?

The risk of offensive UGC is universal, across all kinds of companies.

Our clients include children's sites, dating sites, e-commerce platforms, gaming platforms, apparel companies that let users customize their products, and interactive agency campaigns.

Do you work with businesses, governments or what?

Our typical clients are large companies with high volumes of incoming user-generated content to their platform.

How does your profanity filter service differ from services like Disqus?

Disqus is a blog comment hosting service; we are a content moderation service that can be plugged into commenting systems to monitor and report abusive content.
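
As an illustration of that plug-in model, the sketch below gates a comment-submission handler on a moderation check before anything is published. The `check_text` stub stands in for a call to a moderation API (such as the one sketched above); the handler itself is hypothetical, not Disqus's or WebPurify's actual integration code:

```python
# Hypothetical sketch: wiring a moderation check into a commenting system.
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    text: str

def check_text(text: str) -> bool:
    """Stub for a moderation-API call; returns True if the text is flagged."""
    banned = {"badword"}  # placeholder wordlist, just for this sketch
    return any(word in banned for word in text.lower().split())

def submit_comment(comment: Comment) -> str:
    """Gate publication on the moderation check, per the site's own policy."""
    if check_text(comment.text):
        return "held_for_review"  # escalate or reject, depending on the client
    print(f"{comment.author}: {comment.text}")  # stand-in for publishing
    return "published"

print(submit_comment(Comment("alice", "nice post")))  # -> published
```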

How do you draw the line between censorship and obscenity with your profanity filter service?

Determining "censorship" is subjective as different audiences may interpret content differently. For example, a photo of a nude woman in a museum could be interpreted as art or pornographic depending on the viewer.

Defining the line between acceptable self-expression and offensive content is an ongoing challenge that many of the large social platforms are currently encountering.

We do not take a stance on what should be allowed or not. Our clients determine where they want to draw the line, and we enforce it.

As you would imagine, a children's site may have significantly stricter rules than a dating site, for example.

How do you deal with foreign governments or even local ones asking you to take part in censoring political dissidents?

We do not currently work with any foreign governments.

There is a big debate over tech companies and their role in the political process. How does that tie into what you do?

At this point, we don’t have any clients asking us to limit or monitor political discussions.

Do you have any opinions on recent proposals from Senators Hawley or Warren?

Senator Hawley’s proposal is more focused on curbing addiction to social media, while Senator Warren is more interested in breaking up Facebook.

Our services are used to curb abusive and objectionable content from platforms.

How does your profanity filter service work without needing tons of humans and time? Is it all AI?

We have found that the most effective and scalable moderation solution must incorporate a highly trained live team in addition to AI.

We offer both AI and live moderation and work with each client to help them determine the most effective way to utilize them together.
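
One common way to combine the two, sketched below, is to automate only the decisions the AI is confident about and escalate the ambiguous middle band to live moderators. The scores, thresholds, and decision names here are illustrative assumptions, not WebPurify's actual AIM parameters:

```python
# Illustrative sketch of hybrid AI-plus-human moderation routing.
REJECT_ABOVE = 0.95   # AI is confident the content violates policy
APPROVE_BELOW = 0.05  # AI is confident the content is clean

def route(ai_risk_score: float) -> str:
    """Map an AI risk score in [0, 1] to a moderation decision."""
    if ai_risk_score >= REJECT_ABOVE:
        return "auto_reject"
    if ai_risk_score <= APPROVE_BELOW:
        return "auto_approve"
    return "human_review"  # ambiguous middle band goes to the live team

assert route(0.99) == "auto_reject"
assert route(0.01) == "auto_approve"
assert route(0.50) == "human_review"
```

The appeal of this design is that the thresholds become per-client policy knobs: a children's site can widen the human-review band, while a high-volume platform can narrow it.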

There is a lot of debate about AI. Can you define the term for us?

Artificial Intelligence is the training of machines to perform human tasks.

As we said earlier, at this point, our AI services are extremely valuable in the moderation process, but the machines still struggle with the context of an image, and so humans are still a vital part of the process.

Do you think it is hype or the real deal?

Artificial Intelligence is an essential tool in the moderation process for high volume projects, but there has undoubtedly been much hype around its ability to replace the need for human review, and that is not the case at this point.

If you could pass any legislation related to profanity filtering, AI, or censorship, what would it be?

We need to take a look at Algorithmic Transparency.

As we move into the world of AI, it will be essential to monitor bias (be it intentional or not), and this can only be done if companies are required to disclose the components of their algorithms.
