Each day, people all over the world watch over 5 billion YouTube videos, post over 500 million tweets, and share in excess of 95 million posts on Instagram. With so much new content created and consumed daily, it’s clear why existing arrangements for content moderation are falling short.
Moving forward with a clear and uniform approach to the future of content moderation poses a number of difficulties. First, there is an ongoing tension between the defense of free speech and the desire to protect people from poor-quality or deliberately misleading information. A further problem centers on liability and responsibility for oversight. Until now, social media platforms have been viewed as “passive intermediaries” and subject to far fewer regulations than traditional publishers such as broadcasters or newspapers. This is due to ‘Section 230’, a contentious part of the Communications Decency Act passed by the US Congress in 1996, which, for the most part, allows online platforms to avoid liability for harmful or illegal content posted on their sites. So, should social media platforms be more responsible for what users upload? Is it right to leave tech giants such as Facebook and Google to dictate what can and cannot be said or shown on the Internet? Should the moderation of misinformation be left to governments? Could that power be abused by non-democratic (or even democratic) regimes? Here we take a look at just some of the intricacies surrounding the controversy.
1. Harassment and Abuse: 64% of adults under 30 have experienced online harassment.
2. Political Challenges: the growing concern over misinformation and ‘FAKE NEWS’ on the Internet.
3. Directing Public Opinion: who is responsible for defining the rules that promote one kind of speech over another through social media’s algorithms?
4. Pure Moderation Advocates Transparency – Your Content Moderation Partner
1. Harassment and Abuse in Content Moderation
According to a Pew Research Center survey from September 2020, 41% of US adults have personally experienced online harassment, and 25% have experienced what they would term ‘severe harassment’ (physical threats, stalking, sexual harassment, and sustained harassment). 20% of all US adults believe they have been targeted because of their outlook on certain subjects, such as their political views or their opinions on gender, ethnicity, religion, and sexual orientation. It’s a particular issue for younger adults, with 64% of adults under 30 having experienced online harassment.
It’s clearly a problem that needs addressing, but the difficulty lies in differentiating between genuine harassment or abuse and what might pass as friendly banter in another culture. Is it OK to use what some might consider offensive terms in an open and frank discussion about race? Context and culture can make a huge difference to intent, and automated systems – sometimes human moderators too – can struggle to make those distinctions. The challenges are amplified in the non-English-speaking world, where companies such as Facebook are accused of not devoting enough resources to combating online hate speech. Simply put, we know how to define illegal content, such as terrorism-related propaganda or child abuse images, yet we lack clear definitions and boundaries for speech that lies outside these categories.
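To make that distinction concrete, here is a deliberately naive keyword filter – a minimal, hypothetical Python sketch, not any platform’s actual system. The word list and messages are invented for illustration; the point is that the filter flags a genuine threat and harmless banter identically, which is exactly the judgment call context-blind automation gets wrong:

```python
# A deliberately naive keyword filter – a hypothetical sketch, not any
# platform's real system. It matches words, not meaning.
BLOCKLIST = {"kill", "die"}

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any blocklisted word."""
    words = {w.strip(".,!?'\"").lower() for w in message.split()}
    return bool(words & BLOCKLIST)

# Both messages are flagged, but only the first is genuine abuse.
print(naive_flag("I will kill you if you show up here."))                  # True
print(naive_flag("That spin class nearly killed me, I could die happy."))  # True
```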
2. Political Challenges in Content Moderation
In early 2019, the European Commission published a report to help the EU reach a conclusion on the growing concern over misinformation on the Internet. The spread of false information can have grave consequences, particularly during a global health emergency. In the end, the report explicitly recommended against regulating misinformation, citing concerns that doing so could infringe on freedom of speech. It’s a complicated issue: some authoritarian regimes have recruited armies of online trolls to spread propaganda and destabilize opposition, while human rights defenders point out that even traditionally democratic governments are working to censor and flag content they disagree with, using the increasingly problematic tag of ‘FAKE NEWS’ to essentially legislate the truth.
In fact, a growing number of media experts say journalists should stop using the term ‘FAKE NEWS’ altogether, as it has become a convenient mechanism for politicians to discredit genuine criticism in the media.
3. Directing Public Opinion through Content Moderation
While legislation like Section 230 may protect websites and other online platforms from liability for what their users say or upload, some feel platforms should be accountable for what they themselves choose to promote through their algorithms. So, is regulating ‘algorithmic amplification’ – the process by which Facebook or Twitter decides which stories to promote in newsfeeds, or YouTube recommends videos to users – the answer?
Amplification features can be helpful, of course. They help us find relevant information on the web or within individual sites. They can help users find similar news stories to stay up to date on a particular topic, discover new recipes to try with their favorite ingredients, or explore the back catalog of interesting musicians from their preferred genre on Spotify.
But these features have also caused or contributed to real damage in the world and, in some cases, even brought together violent extremists who might otherwise never have met.
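For illustration, here is a minimal, hypothetical sketch of engagement-based ranking – the mechanism at the heart of ‘algorithmic amplification’. The weights and posts below are invented for the example, but they show how optimizing purely for engagement can push divisive content to the top of a feed:

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The scoring weights and example posts are illustrative assumptions,
# not any platform's actual algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes because they spread
    # content further – which is also why divisive posts get amplified.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order a feed by engagement, most engaging first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local bake sale this weekend", likes=40, shares=2, comments=5),
    Post("Outrageous claim about a rival group", likes=30, shares=25, comments=60),
])
print([p.text for p in feed])  # the divisive post ranks first
```

Real ranking systems are vastly more complex, but this incentive structure – reward whatever generates reactions – is the part regulators are circling.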
However, policymakers hoping to set rules for harmful or illegal online content by regulating algorithmic amplification won’t find it an easy path. Unfortunately, we keep coming back to the same issues: who is responsible for defining the rules that promote one kind of speech over another? Who enforces those rules? There are serious First Amendment considerations to take into account, and any future amplification law would risk being challenged as unconstitutional. Such laws could face serious hurdles in other countries, too, on the basis of human or fundamental rights law. Perhaps, with much wise deliberation and bipartisan support in Congress, a series of nuanced laws could be delivered in the future, but for now, nobody will blame us for remaining somewhat skeptical about that happening anytime soon.
4. Pure Moderation Advocates Transparency – Your Content Moderation Partner
Pure Moderation is a dedicated content moderation services company, passionate about supporting businesses of all types and models and helping create safe online communities for users. We offer years of experience and a broad scope of expertise in moderating user-generated content – from text to images and video – uploaded to chat rooms, forums, and even live streams.
We also offer expert, accurate insight into your customers’ feelings through sentiment analysis and social listening, helping you strengthen your connection with your target demographics and improve your marketing strategies.
Trust is the keystone of any long-lasting relationship, whether between Pure Moderation and our clients or between your business and your customers. Your brand makes a promise to your customers every day about the quality of your product, the efficiency of your service, and the security of your online communities. Our teams of expert moderators are on hand to ensure the content hosted in those communities lives up to that promise. Pure Moderation takes a customer-centric approach to content moderation, aiming to intrude on the user experience as little as possible while ensuring a quality experience for every visitor to your community.
As mentioned above, moderators with knowledge of different cultural contexts and sensitivities – not to mention proficiency in the relevant language – are vital for high-quality moderation. Pure Moderation offers moderation support in multiple languages to protect users from harmful or illegal content, whatever their cultural background.
If you’re looking for versatile assistance tailored to your content moderation needs:
💬 Chat with us on our website for a free consultation and trial.
or email us at chris@puremoderation.com
READ MORE:
Why Facebook, YouTube, and Twitter cannot apply AI in content moderation
How to outsource content moderation to protect your brand