1. What is a content moderator?
A content moderator not only takes responsibility for approving content but also makes sure that content is placed in the right category before it goes live on a website or online platform. In other words, the content moderator evaluates and filters user-submitted content to ensure it is in line with the content requirements and standards of a particular platform.
2. How a content moderator protects us
For example, a user can post unlimited videos to Facebook or YouTube channels, but that doesn’t mean the user can upload any sort of content. Users can’t add unwanted subject matter such as adult material, spam, indecent photos, profanity, or illegal content. The teams of content moderators working at Alphabet Inc (YouTube’s parent company) and Facebook make sure every piece of user-generated content passes the company’s content guidelines, both before and after publishing.

An example of the work of content moderators
On the other hand, vulnerable people can potentially be exposed to scams or disturbing content. And let’s face it, we are not able to watch over our kids 24/7 and make sure they aren’t exposed to anything distressing or harmful. Even sites specifically aimed at children can fall victim to unpleasant content (YouTube Kids is one example). Content moderators around the globe are on a mission to limit the risk of Internet users reading, watching, or listening to content they may find upsetting or offensive.
3. Why are content moderators important to social media?
3.1 Content Moderators of Facebook
Over the last few years, Facebook has invested massively in contracting content moderators around the world. So the decision to send all of its contract workers home as the coronavirus outbreak swept the planet was not one the company made lightly, particularly because content moderation is not work you can exactly bring home with you. The disturbing nature of the work is damaging enough to a moderator’s mental health in a professional environment; it would be considerably more worrisome if done at home, surrounded by the moderator’s family. “Working from home on those types of things, that will be very challenging to enforce that people are getting the mental health support that they (need),” said Mark Zuckerberg.
That left the task of identifying and removing offensive content from Facebook largely to algorithms. The results have been less than stellar. Just one day after Facebook announced its plans to rely more heavily on AI, some users complained that the platform was making mistakes. Facebook’s machine-learning content moderation systems began blocking a whole host of legitimate posts and links, including news articles related to the coronavirus pandemic, and flagging them as spam. Despite Facebook’s vice president for integrity, Guy Rosen, declaring “this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce”, many industry specialists and pundits suggested that the cause of the problem was Facebook’s decision to send its contracted content moderators home.

Facebook increased pay and support for content moderators by 10% in 2019 (according to Bloomberg)
3.2 Content Moderators of Twitter
Twitter took a similar tack. They informed users earlier this year that they would increasingly rely on machine learning to remove “abusive and manipulated content.” The company at least acknowledged that artificial intelligence would be no replacement for human moderators.
In a blog post from March 2020, the company said: “We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes.”

Social media AI content moderation can’t replace human moderators
To compensate for the anticipated errors, Twitter said it wouldn’t permanently uphold suspensions “based solely on our automated enforcement systems.”
As perhaps expected, and similar to YouTube and Facebook, Twitter’s shift to greater reliance on automation produced less than consistent results. In a recent letter to shareholders, Twitter reported that half of all tweets deemed abusive or in violation of policy were being removed by its automated moderation tools before users had a chance to report them.
In France, however, campaigners fighting against racism and anti-Semitism noticed a more than 40 percent increase in hate speech on Twitter. Less than 12 percent of those posts were removed, the groups said. Clearly, the AI still has some blind spots.
To learn more about outsourcing your content moderation processes and the value that it can bring your business, contact PureModeration for a free consultation and trial.
READ MORE: How Content Moderators Benefit Us All?
💬 Chat with us on our website for a free consultation and trial.