Facebook has been under fire lately over fake news and malicious Russian advertising appearing on its service, so the company's chief security officer, Alex Stamos, took to Twitter to respond directly to critics and explain why the service handles things the way it does. Stamos argued that algorithms are not neutral, and that putting too much on their shoulders by expanding their duty to catch and block objectionable content or malicious actors would be asking for trouble. He also warned that demands for more selective, enhanced protection of certain users' data from government entities could backfire. He rounded out his statements by saying that everybody involved with the issue was "aware of the risks" inherent in the company's use of AI to police content, and that "a lot of people aren't thinking hard about the world they are asking SV to build," implying that overzealous protection or security efforts could have dire consequences their advocates aren't expecting.
Stamos singled out people who aren't directly involved in security and algorithm design, essentially saying that without firsthand experience it's difficult for them to have a proper frame of reference on the issue. As one example, he cited people who complain about hate speech but also complain when non-hateful speech, or speech they agree with, winds up censored. He defended Facebook's use of machine learning and other AI techniques to detect content and user accounts that should not be on the service, and insisted that the company will continue to refine those algorithms over time.
The salient point of the whole thread was that machine learning algorithms start out with the biases their creators build into them, often in the form of values that mirror their creators' own, and can only learn and grow toward true neutrality over time. This was a response to recent criticism asserting that Facebook could and should do better at policing content and accounts in order to keep the service from perpetuating fake news, hate speech, and propaganda. CEO Mark Zuckerberg recently apologized publicly for the company's performance on those fronts and promised to do better going forward.