Video sharing platform YouTube recently released its quarterly community guidelines enforcement report, outlining the measures it has taken to enforce its policies alongside detailed data on the results. With a large number of staff remaining at home due to COVID-19, YouTube relied more heavily on technology and machine learning to enforce its policies this quarter. Q2 of 2020 was the first full quarter this modified enforcement structure was in effect, and as a result, YouTube removed more videos than in any previously reported period.

YouTube has released a transparent data report every quarter since October 2017. While the YouTube blog offers updates on its policies and how the site is enforcing them, the community guidelines enforcement report offers interesting data complete with granular pie charts and bar graphs. Earlier this year, YouTube explained that, with its focus on employee safety during the COVID-19 pandemic, it would rely more on technology to enforce its policies. Normally, YouTube's policies are enforced by a combination of humans and technology: machine learning detects potentially harmful content and flags it for a human to review and make a final decision. When the pandemic hit, YouTube had to decide whether to scale back automated flagging to a volume its much smaller pool of available human reviewers could handle, or to expand the use of automated systems to flag more potentially harmful content, knowing that many videos would be removed without human review.

Related: AI Bias: How YouTube Looks To Conspiracists & Climate Deniers

YouTube admits in a blog post that it took a 'better safe than sorry' approach and chose the latter, and the automated enforcement numbers are staggering. Between April and June of 2020, 11.4 million videos were removed, 10.85 million of which were first spotted by automated flagging. The technology is quick, too: 42-percent of the removed videos hadn't received a single view, and none of the removals had more than ten views. The most common reason for removal was child safety, which usually covers dares, challenges, or anything else that could potentially endanger minors; that is more than three times the number of such removals in the previous quarter. Second on the list was spam or misleading content, which was also the number one reason for complete channel deletion, accounting for 92-percent of those violations. So while spam is clearly a huge issue on the platform, it's encouraging to see automated machine learning weed it out quickly. Additionally, there were far more violations for spam and other misinformation than for hate speech or cyberbullying. Those sorts of violations appeared more often in the comments under creators' channels, but were also removed quite quickly.

There's A New (Automated) Sheriff In Town


Data is beautiful, and YouTube's transparency report shows genuine results in policy enforcement through automated moderation. While some categories of removals, whether videos, comments, or even channels, doubled, YouTube admits that this reliance on machine learning to handle all of its community regulation is not a permanent strategy... yet. The video sharing platform has made that apparent by acknowledging up front that it knew going into this experimental quarter that the technology in place wouldn't always be correct. YouTube was willing to sacrifice a few innocent videos to ensure the safety of the community, but made strong efforts to ensure its creators were not unduly disrupted. For this quarter, YouTube decided not to issue strikes on content removed without human review, except in cases where it was very obvious the content violated its policies.

Knowing that the automated decisions made by the current system would be less accurate than human review, YouTube anticipated more appeals than in previous quarters, particularly from its creators. It shifted more resources to expedite the review of appeals and get creators' flagged videos back up and streaming again. While only 3-percent of removed videos led to an actual appeal, YouTube revealed that both the number of appeals and the number of videos reinstated doubled from the previous quarter. To give further perspective, the share of appealed videos that were reinstated jumped from 25-percent in Q1 to 50-percent in Q2. While this shows the dedication and understanding that YouTube's human staff gives its content creators, it also offers a solid perspective on just how liberal the automated technology was in handing out flags for removal.

Looking at the full report, it can easily be argued that while relying heavily on technology to enforce community policies is not an ideal situation for a platform like YouTube, the data is encouraging for the future of machine learning and automated flagging. YouTube video removals are still very much a case-by-case process that requires human accuracy and judgment, but that isn't to say those skills cannot eventually be programmed.

More: All the Recent Bans & Suspensions: YouTube, Twitter, Facebook, Reddit, Parler

Source: YouTube