Terrorism. It's a sensitive subject to discuss, an issue that has been with us throughout the ages, and sadly, it doesn't seem to be going away anytime soon. As often happens when attacks strike close to home, governments search for new ways to limit the frequency of such attacks, and the European Parliament (EP) is no different. A day after the French government decided to enact new laws that would treat web companies such as Google, Facebook and Twitter as accomplices to 'hate speech' offences if they hosted extremist messages on their respective sites, the European Parliament has come up with its own strategy to cope with extremist content on YouTube.
Google's Public Policy Manager, Verity Harding, has informed the European Parliament that the 300 hours of video material uploaded to YouTube every minute make it tough to catch all terror-related content, and that "...to pre-screen those videos before they are uploaded would be like screening a phone call before it's made." If that level of monitoring were possible, we would be even closer to a Minority Report-style situation.
In response, the EU's Counter-Terrorism Chief, Gilles de Kerchove, has said that "We (EU) have to help them, and refer to them, and signal content. Each member state should have a unit specially trained to do that." These specially trained units would search for content, then flag the offending videos and inform Google, with the expectation that Google would remove said material. The Counter-Terrorism Chief added that flagging of extremist content by official government units would also increase the amount of content Google removes. Whereas only around 33% of flagged content is removed when it is reported by the general public, this rises drastically to 93% when the offending content is flagged by a government department, as was the case when the UK's Scotland Yard complained about certain material it had found on YouTube.
This is where we come to the tricky part. What criteria would a video have to meet to be flagged as extremist? Would they differ from one member state to the next? And what about the right to free speech? The government departments would consist of humans, and as we all know, we humans tend to make mistakes as well as have agendas. So, if one's content were flagged and removed, would there be an appeals process? And if the content were justifiably flagged as 'extremist content' and removed, what next? Could the uploader expect to be picked up by an anti-terrorist unit at some point if their views didn't toe the party line?
What do you think about the European Parliament's ideas on how to crack down on extremist content on YouTube? Is this a valid response to recent events, or is it simply a knee-jerk reaction that will result in human rights being violated? Let us know your thoughts on our Google Plus page.