Recent research conducted by the Data group at marketing company Wunderman Thompson shows that mature AI systems, such as Google's, exhibit gender bias when tested on people wearing face masks.
The AI systems used in the study come from well-known names in tech: Google Cloud Vision, Microsoft Azure's Cognitive Services Computer Vision, and IBM Watson Visual Recognition.
We all know that AI is still a work in progress, and that it generally improves with continued use over time. However, the results of the study show a bias in favor of men over women, and the research raises several uncomfortable questions.
The study by Wunderman Thompson's Data group examined whether visual AI systems interpret images of men wearing PPE in the same way they do images of women. A total of 256 images of both genders, in varying conditions and qualities, was used.
All the AI systems used in the study underperformed
According to a report by ZDNet, none of the AI systems in the study performed at a level we could call stellar, or even good. All of them found it difficult to spot masks on the faces of either men or women.
Even so, masked male faces were easier to identify than masked female faces. Google's AI produced some alarming results in the mask research: in 28% of the images of women, it identified the mask as duct tape covering the mouth, with up to 95% confidence. In another 8% of cases, it classified masked women as women with facial hair.
IBM's numbers were not much better. In 23% of cases it labeled women as wearing a gag, and in another 23% it was confident the women were wearing chains or restraints rather than a mask. Only 5% of cases produced the intended result.
Microsoft's Computer Vision fared somewhat better. In 40% of cases it labeled the masks on women's faces as fashion accessories, which is not bad, and in another 14% as lipstick. Just 5% of the time did it detect a mask on a woman's face.
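Percentages like these are typically derived by tallying the top label each vision API returns per image and dividing by the number of images. A minimal Python sketch of that tally, using made-up label counts rather than the study's raw data:

```python
from collections import Counter

def label_percentages(labels):
    """Tally each label's share of the total, as whole percentages."""
    counts = Counter(labels)
    total = len(labels)
    return {label: round(100 * n / total) for label, n in counts.items()}

# Hypothetical top labels for 256 images of masked women; the label
# names and counts below are illustrative, not the study's raw data.
labels = (
    ["duct tape"] * 72 + ["facial hair"] * 20
    + ["mask"] * 13 + ["other"] * 151
)
print(label_percentages(labels))
# → {'duct tape': 28, 'facial hair': 8, 'mask': 5, 'other': 59}
```

In a real replication, the `labels` list would be filled from each API's response for every test image before being tallied.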
The research calls the training of mature AI systems like Google's and IBM's into question
While the researchers did not single out the makers of these AI systems for criticism, the study clearly brought to light the biases these systems carry.
Besides, even though AI teaches itself over time by learning patterns, humans are ultimately responsible for the result. Makers should ensure their systems reflect a neutral society rather than a historical one in which gender bias was prevalent.
That would keep automation on its intended path. If results this close to home are alarming, there is certainly something wrong with the training data.