Computers help YouTube remove 6.7M problematic videos
Extremist videos are disappearing from YouTube at a faster clip, thanks largely to computer algorithms.
On Monday, the Google-owned platform shared details about its efforts to fight objectionable videos with AI-powered machine learning. And the efforts appear to be paying off.
From last October to December, YouTube's machine learning systems helped it delete 6.7 million videos for sexual imagery, spam, or terrorist content. Most of those videos, 76 percent, were removed before they received a single view.
On the flip side, the algorithms failed to prevent the remaining videos from gaining a brief audience. Still, in a blog post, YouTube noted that the platform used to be far slower at taking down extremist clips.
"For example, at the beginning of 2017, 8 percent of the videos flagged and removed for violent extremism were taken down with fewer than 10 views," it said.
However, last June the video streaming service started using its machine learning flagging system. "Now more than half of the videos we remove for violent extremism have fewer than 10 views," the blog post added.
The larger question is whether the AI-powered systems can improve over time, and YouTube's own data might provide an answer: the company plans to publish a quarterly report on video takedowns, the first of which went out on Monday.
"Our advances in machine learning enabled us to take down nearly 70 percent of violent extremism content within 8 hours of upload and nearly half of it in 2 hours," the quarterly report said.
Unfortunately, YouTube isn't offering any historical data. The latest report covers only the recent October-to-December period, when the platform removed a total of 8.2 million videos, the majority of them flagged by its automated systems.
However, YouTube's computer algorithms don't delete any videos on their own. A human reviews each flagged clip to confirm that it violates the platform's policies before it comes down. The company plans to hire 10,000 people this year to help review content.
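As a rough illustration of how such a human-in-the-loop pipeline might be structured (the class names, threshold, and queue here are hypothetical assumptions for the sketch, not details of YouTube's actual system), consider the following:

```python
from dataclasses import dataclass
from queue import Queue

# Hypothetical human-in-the-loop moderation pipeline: the classifier
# only FLAGS videos; removal requires human confirmation.

@dataclass
class Video:
    video_id: str
    violation_score: float  # assumed ML score in [0.0, 1.0]

FLAG_THRESHOLD = 0.9  # assumed cutoff; not a published YouTube value

review_queue: Queue[Video] = Queue()

def auto_flag(video: Video) -> None:
    """Enqueue high-scoring videos for human review; never delete directly."""
    if video.violation_score >= FLAG_THRESHOLD:
        review_queue.put(video)

def human_review(confirms_violation) -> list[str]:
    """Drain the queue; remove only clips a reviewer confirms as violations."""
    removed = []
    while not review_queue.empty():
        video = review_queue.get()
        if confirms_violation(video):
            removed.append(video.video_id)
    return removed

if __name__ == "__main__":
    auto_flag(Video("abc123", 0.97))
    auto_flag(Video("def456", 0.42))  # below threshold: never queued
    # Stand-in for a human moderator's judgment.
    print(human_review(lambda v: v.violation_score > 0.95))  # ['abc123']
```

The design point the sketch captures is the one YouTube describes: automation narrows the firehose to a review queue, but a person makes the final removal call.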
YouTube is leaning on machine learning as the streaming service faces scrutiny over content moderation. The platform has long battled to keep terrorist content from creeping onto the service, and critics have also pointed to the rise of misinformation on YouTube as another disturbing trend.
For instance, in February, a false conspiracy video about the Parkland, Florida, high school shooting managed to trend on YouTube before it was taken down.
Whether computer algorithms can handle the grayer areas of content moderation is a debate of its own. For now, YouTube said its machine learning system is focused on flagging the most egregious video clips, such as those that incite violence or contain child abuse.
This article originally appeared on PCMag.com.