The UK Government has created and trained an AI program that can detect extremist video content related to Islamic State with a staggering 99.995-percent accuracy rate. The tool is notable not only for its accuracy, but also for how it works: most screening systems of this kind do their work only after a video has already been uploaded, potentially allowing some people to see it before it's found and taken down. The new AI from the UK government, however, works during the upload process, analyzing content as fast as it can be uploaded. With this approach, close to 100% of videos assigned to the system for screening will not reach viewers unless the AI can safely verify that they contain no Daesh propaganda. The remaining 0.005% of screened videos will apparently require human review.
This AI system works by analyzing video data frame by frame as it's uploaded, and can flag videos for review and removal the moment the upload finishes. On almost all video platforms, that means nobody will be able to find the video until it has been reviewed and cleared. While the tool cannot intercept an upload and actually stop it from happening, it can intervene far faster than human reviewers or AI programs that only run after a video has finished uploading. Home Secretary Amber Rudd has said she has considered making the platform-agnostic tool mandatory by law, but hopes it will not come to that. It provides a viable solution for smaller video platforms, and saves larger ones the trouble of building such a tool themselves.
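The upload-time screening flow described above can be sketched roughly as follows. This is a hypothetical illustration, not the Home Office tool's actual design: `classify_frame`, the score threshold, and the toy classifier are all invented stand-ins for whatever model the real system uses.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable, List

@dataclass
class ScreeningResult:
    flagged: bool                               # True -> hold video for human review
    flagged_frames: List[int] = field(default_factory=list)

def screen_upload(frames: Iterable[bytes],
                  classify_frame: Callable[[bytes], float],
                  threshold: float = 0.9) -> ScreeningResult:
    """Score each frame as it streams in during the upload, so the video
    can be flagged the moment it finishes rather than after publication."""
    result = ScreeningResult(flagged=False)
    for index, frame in enumerate(frames):
        if classify_frame(frame) >= threshold:
            result.flagged = True
            result.flagged_frames.append(index)
    return result

# Toy stand-in classifier for demonstration only: frames containing a marker
# byte string score high. A real system would run a trained video model.
def toy_classifier(frame: bytes) -> float:
    return 0.99 if b"propaganda" in frame else 0.01

clean = screen_upload([b"frame-a", b"frame-b"], toy_classifier)
suspect = screen_upload([b"frame-a", b"propaganda-frame"], toy_classifier)
print(clean.flagged, suspect.flagged)  # False True
```

The key design point mirrored here is that scoring happens per frame inside the upload loop, so a flagged video can be routed to human review before any viewer request is served.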
One of the larger issues facing tools like this on video platforms is the potential for false positives. A false alarm could do serious harm to a video creator, and by extension to the platform's reputation. On YouTube, for example, a strike system is in place: regardless of whether a report is accurate, YouTubers may see their channels suspended or even shut down after a certain volume of reports. Channels with legitimate complaints about the system have to go through an appeals process that, anecdotally, is not always fruitful or easy to navigate. The UK, for its part, has criticized technology companies for not taking proactive action, and has created this system to prove that the issue can be addressed in a way that satisfies everybody.