The initial set of four steps, announced by Google in June, included using new technology “to help identify extremist and terrorism-related videos”; adding more human experts to flag videos; removing money-making opportunities from videos that come close to, but don’t explicitly violate, YouTube policies; and expanding counter-radicalization efforts by posting and promoting anti-terrorist videos.
In a new update published on August 1, YouTube says it is making progress on all of these fronts, highlighting the following:
- Speed and efficiency: Our machine learning systems are faster and more effective than ever before. Over 75 percent of the videos we’ve removed for violent extremism over the past month were taken down before receiving a single human flag.
- Accuracy: The accuracy of our systems has improved dramatically due to our machine learning technology. While these tools aren’t perfect, and aren’t right for every setting, in many cases our systems have proven more accurate than humans at flagging videos that need to be removed.
- Scale: With over 400 hours of content uploaded to YouTube every minute, finding and taking action on violent extremist content poses a significant challenge. But over the past month, our initial use of machine learning has more than doubled both the number of videos we’ve removed for violent extremism and the rate at which we’ve taken this kind of content down.
In addition, YouTube says it is enforcing tougher standards and removing comments and recommendations from videos that “don’t violate our policies but contain controversial religious or supremacist content.”
The full list of updates can be found on the official YouTube blog.