Machine Learning (AI) Will Never Replace Human Content Moderation

    By Pure Moderation | Pure Moderation Blog, Content Moderation Services, News | 17 November, 2020

    The Internet has democratized many facets of everyday life, allowing everybody, from regular folk and progressive thinkers to ideological extremists and harmful predators (and everyone in between), to share their views. The resulting proliferation of harmful online content made regulation inevitable.

    However, as regulatory pressure from policymakers increases, online platforms are increasingly using automated procedures to take action against inappropriate (or illegal) material on their systems, such as hate speech, pornography, or violence. But are these algorithms really up to the task? Automated systems can spot the most obvious offenders, which is undoubtedly useful, but does AI lack the ability to understand cultural context and nuance? Can a single tool or approach effectively regulate the internet while maintaining its benefit to society, or is a more holistic approach required? Here we will explore some recent events and stories, in particular the role the COVID-19 pandemic has played in forcing many tech giants to deploy AI moderation perhaps before it was ready. We will also consider the best roles both humans and AI can play in the future of online content moderation and ask: will an algorithm ever truly be able to replace a human moderator?

    AI or human content moderation: which ensures more accuracy?

    1. YouTube brings back human content moderation after AI systems over-censor

    The spread of the coronavirus pandemic around the world this year has been unprecedented and rapid. In response, tech companies have had to contend with the dual aim of ensuring their services are still available to their users, while also reducing the need for people to come into the office. As a result, many social media companies have become more reliant on AI to make decisions on content that violates their policies on things like hate speech and misinformation. YouTube announced these changes back in March 2020. 

    In the same blog post, YouTube warned that automated systems will start removing some content without human review, and due to these new measures “users and creators may see increased video removals, including some videos that may not violate policies”.

    Nevertheless, YouTube was surprised at just how aggressive the AI moderation turned out to be in its attempts to spot harmful content. YouTube told the Financial Times in September 2020 that the greater use of AI moderation had led to a significant increase in video removals and incorrect takedowns.

    All in all, approximately 11 million YouTube videos were removed between April and June 2020. Some 320,000 of those takedowns were appealed, and half of the appealed videos were reinstated. Both figures were roughly twice the usual rate, an indication that the AI system was somewhat overzealous in its attempts to spot inappropriate or illegal content.

    Due to the coronavirus pandemic, YouTube content moderation had to rely on AI

    Since then, YouTube has brought back more human moderators to ensure more accuracy with its takedowns. While one could consider this experiment a failure, YouTube’s chief product officer Neal Mohan suggests machine learning systems definitely have their place. “Over 50 percent of those 11 million videos were removed without a single view by an actual YouTube user and over 80 percent were removed with less than 10 views,” he said. “And so that’s the power of machines.” Machines clearly still have a lot to learn, however. 

    2. Facebook's problem: online content moderation can't be solved with artificial intelligence

    Over the last few years, Facebook has invested massively in contracting content moderators around the world, so the decision to send all of its contract workers home as the coronavirus outbreak swept the planet was not one the company made lightly, particularly as content moderation is not work you can exactly bring home with you. The disturbing nature of the work is damaging enough to a moderator's mental health in a professional environment; it would be considerably more worrisome if done at home, surrounded by the moderator's family. “Working from home on those types of things, that will be very challenging to enforce that people are getting the mental health support that they (need),” said Mark Zuckerberg.

    That left the task of identifying and removing offensive content from Facebook largely to the algorithms. The results have been less than stellar. Just one day after Facebook announced its plans to rely more heavily on AI, some users complained that the platform was making mistakes. Facebook’s machine-learning content moderation systems began blocking a whole host of legitimate posts and links, including posts with news articles related to the coronavirus pandemic, and flagging them as spam. Despite Facebook’s ‘vice president for integrity’, Guy Rosen, declaring “this is a bug in an anti-spam system, unrelated to any changes in our content moderator workforce”, many industry specialists and pundits were suggesting the cause of the problem was Facebook’s decision to send its contracted content moderators home.

    A former Facebook security executive, Alex Stamos, went a little further in his speculation.

    “It looks like an anti-spam rule at FB is going haywire,” he wrote on Twitter. “Facebook sent home content moderators yesterday, who generally can’t (work from home) due to privacy commitments the company has made. We might be seeing the start of the (machine learning) going nuts with less human oversight.”

    There were other issues. Social media platforms such as Facebook play an important role in Syria, where campaigners and journalists rely on social media to document potential war crimes. But because AI struggles to understand context and intention, scores of activists’ accounts were closed down overnight, often with no right to appeal, due to the graphic content of their posts.

    Facebook increased pay and support for content moderators by 10% in 2019 (according to Bloomberg)

    And yet, a lot of questionable posts remained untouched. According to Facebook’s own transparency report, the number of takedowns in high-profile areas like child exploitation and self-harm fell by at least 40 percent in the second quarter of 2020 because of a lack of humans to make the tough calls about what broke the platform’s rules.

    3. Twitter: AI-driven content moderation often fails to understand context

    Twitter took a similar tack. They informed users earlier this year that they would increasingly rely on machine learning to remove “abusive and manipulated content.” The company at least acknowledged that artificial intelligence would be no replacement for human moderators.
    In a blog post from March 2020, the company said: “We want to be clear: while we work to ensure our systems are consistent, they can sometimes lack the context that our teams bring, and this may result in us making mistakes.”

    Social media AI content moderation can’t replace human moderators

    To compensate for the anticipated errors, Twitter said it wouldn’t permanently uphold suspensions “based solely on our automated enforcement systems.” 
    As perhaps expected, and as with YouTube and Facebook, Twitter’s shift toward greater reliance on automation produced less than consistent results. In a recent letter to shareholders, Twitter reported that half of all tweets deemed abusive or in violation of policy are removed by its automated moderation tools before users have a chance to report them.
    In France, however, campaigners fighting against racism and anti-Semitism noticed a more than 40 percent increase in hate speech on Twitter. Less than 12 percent of those posts were removed, the groups said. Clearly, the AI still has some blind spots.

    4. AI cannot moderate content alone

    The move toward more AI shouldn’t be a surprise. For years, tech companies have been pushing automated tools as a way to supplement their efforts to fight the offensive and dangerous content that can inhabit online platforms. The coronavirus has been an opportunity to see exactly where the big companies stand with their machine-learning algorithms. So far, it has not been a resounding success.

    For sure, AI can help content moderation move faster, and automated systems are certainly doing quite a bit to help. They act as ‘first responders’, dealing with the obvious problems that appear on the surface while pushing more subtle, suspect content toward human moderators.
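
    To make the ‘first responder’ idea concrete, the sketch below shows the kind of confidence-threshold triage such systems often use: the machine acts alone only on clear-cut cases and routes grey areas to a human queue. The toy classifier, thresholds, and routing labels are illustrative assumptions, not any particular platform’s pipeline.

    ```python
    # A minimal sketch of AI-as-first-responder triage. The toy classifier,
    # thresholds, and routing labels are assumptions for illustration only.
    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        text: str

    def model_risk_score(post: Post) -> float:
        """Stand-in for an ML model that returns a 0..1 risk score."""
        obvious_violations = {"spam-link", "graphic-violence"}  # toy rule
        return 1.0 if any(term in post.text for term in obvious_violations) else 0.2

    def triage(post: Post, remove_at: float = 0.95, review_at: float = 0.5) -> str:
        """Act automatically on clear cases; escalate ambiguity to humans."""
        score = model_risk_score(post)
        if score >= remove_at:
            return "auto-remove"          # obvious violation: machine acts alone
        if score >= review_at:
            return "human-review-queue"   # subtle or suspect: a person decides
        return "approve"                  # low risk: publish untouched

    print(triage(Post("1", "click this spam-link now")))  # auto-remove
    print(triage(Post("2", "photos from our holiday")))   # approve
    ```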

    But the way these systems work is relatively simple. Many use visual recognition to identify a broad category of content, like “human nudity” or “guns”. This approach is prone to misunderstanding context: categorizing images of breastfeeding mums the same as pornographic content, for example, has ruffled some feathers in the past.

    Put simply, technology struggles to understand the social context of posts or videos and, as a result, can make inaccurate judgments about their meaning.
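
    A toy example makes this failure mode visible: a classifier that maps detected visual features straight to a policy label has no way to tell art or breastfeeding apart from pornography. The feature names and rule below are invented for illustration.

    ```python
    # Illustrative only: a coarse label-based classifier with no notion of context.
    def classify_image(detected_features: set) -> str:
        """Toy stand-in mapping detected visual features to a policy category."""
        if "exposed-skin" in detected_features:
            return "nudity"   # same verdict for a statue, a mum, or pornography
        if "firearm" in detected_features:
            return "weapons"
        return "ok"

    # A museum photo of a Renaissance statue and an explicit image both
    # trip the same rule, because the surrounding context is never consulted:
    print(classify_image({"exposed-skin", "marble", "museum"}))  # nudity
    print(classify_image({"exposed-skin"}))                      # nudity
    ```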

    Things become much trickier when the content itself can’t easily be classified even by humans. Context-dependent content like fake news, misinformation, and satire has no simple definition, and for each of these there are grey areas. Someone’s background, personal ethos, or mood might make the difference between one definition and another.

    The problem with trying to get machines to understand this sort of content is that it is essentially asking them to understand human culture, which is a phenomenon too fluid and subtle to be described in simple, machine-readable rules.

    By pairing the efficiency of AI with the context-understanding empathy and situational thinking of humans, the two become an ideal partnership for moderation. Together, they can safely, accurately, and effectively vet high volumes of multimedia content. 

    5. Pure Moderation provides Content Moderation services across all platforms

    This balance is the backbone of Pure Moderation’s content moderation service which combines the strengths of both humans and AI to moderate content at scale to create safe and trustworthy online environments for organizations and their communities. 

    By working with Pure Moderation, you can be assured that your users, your brand, and your company’s legal liabilities are protected. With just the right blend of AI and human expertise, we can deploy experienced in-house teams, utilizing the latest innovative moderation tools, to oversee live video stream moderation, image moderation, text moderation, sentiment analysis, and social listening. You can integrate with our moderation tool via our API and we’ll take care of the rest; if you have your own system that you would like us to use, we can adapt to your needs to ensure smooth collaboration.
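
    As a rough idea of what plugging into a moderation API can look like, here is a hypothetical sketch; the endpoint URL, payload fields, and response shape are assumptions for illustration, not Pure Moderation’s documented API.

    ```python
    # Hypothetical integration sketch. The endpoint, payload fields, and
    # response keys below are assumptions, not a documented API.
    import json
    import urllib.request

    MODERATION_ENDPOINT = "https://api.example.com/v1/moderate"  # placeholder

    def submit_for_moderation(text: str, api_key: str) -> dict:
        """POST one piece of user content and return the moderation verdict."""
        payload = json.dumps({"content": text, "type": "text"}).encode("utf-8")
        request = urllib.request.Request(
            MODERATION_ENDPOINT,
            data=payload,
            headers={
                "Authorization": f"Bearer {api_key}",
                "Content-Type": "application/json",
            },
        )
        with urllib.request.urlopen(request) as response:
            # e.g. {"verdict": "approve"} or {"verdict": "escalate"}
            return json.load(response)
    ```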

    In the digital age, the only certainty is that innovation will continue to create challenges and opportunities alike. And it will take innovative thinking to ensure those challenges are kept in check and those opportunities are to the fore. 

    To learn more about outsourcing your content moderation processes and the value that it can bring your business, contact Pure Moderation for a free consultation and trial.

    💬 Chat with us on our website for a free consultation and trial.
