AI May Be Able To Develop And Fortify Prejudice: Report

Cardiff University and MIT researchers have found that AI programs may be able to develop prejudice entirely on their own through their interactions, even without being trained on data sets that include human prejudice. Data gathered from a large number of AI simulations of a give-and-take game showed that AI agents can form prejudices simply by copying whichever agent is obtaining the most desirable outcome, or the highest short-term profit. That organic prejudice development goes on to create insulated communities.

The game is simple enough. AI agents decide whether to donate to somebody inside or outside of their group, depending on the donation strategies and reputation points displayed by others. Run over a great many simulations, this simple game reveals that bots tend to cling to and imitate other bots who share their logic and outcomes, and the group as a whole gravitates toward whoever is seeing the largest return. When prejudice against outsiders does not take hold, groups and communities in the simulation can still form around other criteria, and those larger or more specialized groups can go on to develop prejudices of their own. The end result, in this example, is that the machine always ends up with some sort of deeply held prejudice or assumption, which manifests as a pull toward peers and away from the target of that prejudice.
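The researchers' simulation code isn't reproduced here, but a toy version of this kind of donation game might look like the Python sketch below. Everything in it is an illustrative assumption rather than the study's actual model: the population size, the payoff values, the single "prejudice" parameter, and the rule that every agent copies whoever earned the most in the last round.

```python
# Toy sketch of a donation game with in-group favoritism.
# All parameters and rules here are assumptions for illustration,
# not the Cardiff/MIT researchers' implementation.
import random

NUM_AGENTS = 50    # assumed population size
NUM_GROUPS = 2     # agents belong to one of two arbitrary groups
ROUNDS = 1000      # assumed number of simulation rounds
MUTATION = 0.01    # small chance of randomly changing strategy

class Agent:
    def __init__(self):
        self.group = random.randrange(NUM_GROUPS)
        # "prejudice" = probability of refusing to donate to an out-group agent
        self.prejudice = random.random()
        self.payoff = 0.0

def play_round(agents):
    """Each agent meets a random partner and decides whether to donate."""
    for donor in agents:
        recipient = random.choice(agents)
        if recipient is donor:
            continue
        same_group = donor.group == recipient.group
        donates = same_group or (random.random() > donor.prejudice)
        if donates:
            donor.payoff -= 1.0      # assumed cost of donating
            recipient.payoff += 3.0  # assumed benefit to the recipient

def imitate_best(agents):
    """Agents copy the strategy of whoever earned the most this round."""
    best = max(agents, key=lambda a: a.payoff)
    for agent in agents:
        if random.random() < MUTATION:
            agent.prejudice = random.random()  # occasional random exploration
        else:
            agent.prejudice = best.prejudice
        agent.payoff = 0.0  # reset payoffs for the next round

if __name__ == "__main__":
    population = [Agent() for _ in range(NUM_AGENTS)]
    for _ in range(ROUNDS):
        play_round(population)
        imitate_best(population)
    avg = sum(a.prejudice for a in population) / NUM_AGENTS
    print(f"Average out-group prejudice after {ROUNDS} rounds: {avg:.2f}")
```

Even a crude copy-the-top-earner rule like this one illustrates the dynamic the article describes: the whole population drifts toward whichever strategy happened to pay off best in the short term, whether or not that strategy involves shunning outsiders.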

According to the scientists behind the study, its real-world implications could eventually warrant a thorough in-field analysis. For now, the biggest question is whether AI can recognize existing prejudices in society, their origins and targets, and the courses of action or consequences associated with them. In that sense, the system described above could lead to AI mimicking humans, because the people who are better off or score higher on other metrics would be the ones most closely emulated. A system like this would likely cling to whoever is getting the best outcomes and stay away from those seeing smaller returns, essentially chasing advantage and ignoring disadvantaged individuals.

About the Author

Daniel Fuller

Senior Staff Writer
Daniel has been writing for Android Headlines since 2015, and is one of the site's Senior Staff Writers. He's been living the Android life since 2010, and has been interested in technology of all sorts since childhood. His personal, educational and professional backgrounds in computer science, gaming, literature, and music leave him uniquely equipped to handle a wide range of news topics for the site. These include the likes of machine learning, voice assistants, AI technology development, and hot gaming news in the Android world. Contact him at [email protected]