Extremist and terrorist content has been a persistent problem for Silicon Valley, and UK Security Minister Ben Wallace says tech companies may face higher taxes if they don't do more to address it. According to Wallace, the UK is considering using taxation to push firms to stem the flow and growth of radical content online, arguing that big tech companies have the power to be far more cooperative and proactive than they currently are. Specifically, he cited removing radical content more promptly, catching more of it, and cooperating more closely with governments requesting information for investigations that may be related to extremism.
Speaking to British newspaper The Sunday Times, Wallace called Silicon Valley bigwigs "ruthless profiteers" and accused them of prioritizing profits over public safety. To that end, he proposed a tax penalty modeled on the Windfall Tax levied on privatized utilities in 1997, a large one-off charge based on what the government of the day estimated the utilities had gained from privatization. A similar tax on tech companies would likely target revenue from the sale of user information to advertisers, and perhaps the perceived gains in user trust that followed declarations from the likes of Google and Facebook that governments would have to pursue proper channels to obtain user information.
Facebook, for its part, says it has been hard at work fighting the spread of terrorism and associated ideologies on its service. According to representative Simon Milner, the company has invested heavily in identifying and removing extremist content, and has thus far managed to verify and remove some 83% of such content within one hour of it being flagged, whether by a user or by an AI system. Even with that record, Facebook plans to expand its safety and security team to 20,000 people worldwide before the end of 2018. YouTube, meanwhile, has leaned more heavily on AI, saying that 83% of its own terrorist-content removals happen without a human ever flagging the material.