Facebook Security Boss Defends An AI-Based News Feed

Facebook has been under fire lately over fake news and malicious Russian advertising appearing on its service, so the firm's chief of security, Alex Stamos, took to Twitter to respond directly to critics and explain why the service handles things the way it does. Stamos suggested that algorithms are not neutral, and that putting too much on their shoulders by widening their remit to catch and block objectionable content or malicious actors would be asking for trouble. According to Stamos, demanding more selective and enhanced protection of certain users' data from government entities could potentially backfire. He rounded out his statements by saying that everybody involved with the issue at hand was "aware of the risks" inherent in the company's use of AI to police content, and that "a lot of people aren't thinking hard about the world they are asking SV to build," implying that overzealous protections or security efforts could have dire consequences that their advocates aren't expecting.

Stamos aimed his remarks at people who aren't directly involved in security and algorithm programming, essentially saying that it's difficult for them to have a proper frame of reference on the issue without firsthand experience. As one example, he cited people who complain about things like hate speech but also complain when non-hateful speech, or speech they agree with, winds up censored. He came out in defense of Facebook's use of machine learning and other AI techniques to detect content and user accounts that should not be on the service, and insisted that the company will continue to develop its algorithms over time.

The salient point of the thread was that machine learning algorithms start out with the biases their creators build into them, which often takes the form of the algorithms' values aligning with their creators', and can only learn and grow toward true neutrality over time. This was a response to recent criticism asserting that the company could and should do better at policing content and accounts in order to keep the service from perpetuating fake news, hate speech, and propaganda. CEO Mark Zuckerberg recently publicly apologized for the company's performance in those areas and promised to do better going forward.

Copyright ©2019 Android Headlines. All Rights Reserved
About the Author

Daniel Fuller

Senior Staff Writer
Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]