
AI Moderator Removes 11 Million Harmful YouTube Videos in a Quarter
In Q1, human moderators removed only about 5 million videos, far fewer than AI has now flagged
The pandemic has pushed automation into almost every sector. Employees began working from home and worked to cover the gaps that appeared in company revenues. Google-owned YouTube, the world’s largest video platform, has enlisted both Artificial Intelligence (AI) and human help to keep its usual work going.
Fake news and misinformation are a great threat to the tech sector. A single piece of fake news can change the fate of an organization. The world has seen the impact of fake news for a long time, and especially during the pandemic. Social media platforms like YouTube, Twitter, WhatsApp, and Facebook fight this battle every day. Most of these platforms rely on manual labor to flag such content. YouTube, however, chose the artificial intelligence route.
Artificial Intelligence (AI) detects harmful content
By using AI, YouTube removed around 11.4 million harmful videos from its platform between April and June of 2020, according to YouTube’s latest Community Guidelines Enforcement report. This is the largest number of videos it has ever removed in a single quarter.
YouTube says the shift to remote working did not slow its usual pace, thanks to help from artificial intelligence. Of the 11.4 million harmful videos removed, around 10.85 million were first flagged by AI moderators. These automated systems filtered out videos that violated YouTube’s policies.
Earlier, in Q1 of 2020, human moderators identified about 5 million videos as violating its policies, but AI moderators pushed that number far higher through automation. Even after YouTube adopted remote working policies, the company could not allow human moderators to review videos outside the office, fearing that sensitive content and user data could be exposed. Machine learning, an application of artificial intelligence, detects potentially harmful content and then sends the video to human reviewers, who examine it and confirm the assessment. Human review remains essential even when automation runs on its own: it not only trains the machine learning system but also provides feedback that improves the accuracy of YouTube’s systems over time.
Machine learning flags a harmful video in two steps (a simplified sketch follows the list):
- Gathering - The system first gathers the video content.
- Analysis - The gathered video is then analyzed, and any video that falls foul of the policies set by YouTube is flagged.
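
The flow described above can be pictured with a minimal, hypothetical sketch. The policy labels, keyword-based scorer, flagging threshold, and helper names below are illustrative assumptions made for the example, not YouTube’s actual classifiers, thresholds, or policies; a real system would run trained models over video frames, audio, and metadata.

```python
# Hypothetical sketch of a "gather then analyze" moderation pipeline with a
# human-review feedback loop. Everything here (labels, keyword scorer, 0.5
# threshold) is an illustrative assumption, not YouTube's real system.
from dataclasses import dataclass, field

POLICY_LABELS = ["child_safety", "nudity", "scam", "misinformation",
                 "harassment", "violent_extremism", "hate_speech"]

@dataclass
class Video:
    video_id: str
    transcript: str                 # stand-in for the gathered content/features
    flagged_as: list = field(default_factory=list)

def gather(upload_queue):
    """Step 1: gather newly uploaded video content for analysis."""
    return [Video(video_id=vid, transcript=text) for vid, text in upload_queue]

def analyze(video, keyword_scores):
    """Step 2: score the video against each policy and flag likely violations."""
    for label, keywords in keyword_scores.items():
        score = sum(word in video.transcript.lower() for word in keywords) / max(len(keywords), 1)
        if score >= 0.5:            # assumed flagging threshold
            video.flagged_as.append(label)
    return video

def route_to_human_review(videos, reviewer):
    """Flagged videos go to human reviewers; their decisions become feedback
    (e.g. new training labels) that improves the automated system over time."""
    feedback = []
    for video in videos:
        if video.flagged_as:
            confirmed = reviewer(video)   # human confirms or overturns the flag
            feedback.append((video.video_id, video.flagged_as, confirmed))
    return feedback

if __name__ == "__main__":
    uploads = [("abc123", "This prank dare challenge is dangerous for kids"),
               ("def456", "A calm tutorial about houseplants")]
    keyword_scores = {"child_safety": ["dare", "dangerous", "kids"]}

    videos = gather(uploads)                               # step 1: gathering
    videos = [analyze(v, keyword_scores) for v in videos]  # step 2: analysis
    # Toy reviewer that confirms every flag; real reviewers overturn many.
    feedback = route_to_human_review(videos, reviewer=lambda v: True)
    print(feedback)   # [('abc123', ['child_safety'], True)]
```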
Reasons for removal
Videos are removed according to YouTube’s policies, which cover child safety concerns, sexual content and nudity, scams, misinformation, harassment and cyberbullying, and the promotion of violent extremism or hate speech. Some flagged videos, such as ‘dare challenges’ or clips posted innocently, could still endanger minors. Around 42% of the removed videos were taken down before anyone had viewed them.
Millions of videos flagged by AI were later determined by human reviewers not to have violated any policy. YouTube acknowledged that, for videos covering certain sensitive areas such as violent extremism and child safety, its automated systems did not reach the accuracy it normally expects.
Social media platforms generally employ thousands of human moderators to look for harmful content, work that exposes them to disturbing images and videos and can take a toll on their mental health. Wider use of automation could improve the working environment for tech employees in content moderation.