Google confirmed that it has blocked Gmail’s relatively new Smart Compose tool from suggesting gendered pronouns such as he, she, his and hers, on the grounds that a wrong assumption could offend users. The tool will work as normal for the most part but will suggest only gender-neutral pronouns. The AI behind Smart Compose learns from the emails users send, receive and compose with its help, so overexposure to gendered terms or a skew toward one gender among its user base could introduce gender bias into the tool itself. Since Google can’t fully control or rein in such behavior, it has opted instead to sidestep the issue entirely.
Background: Gender is a rather sensitive issue these days, and Google is particularly exposed to the risk of saying something wrong, given its position as a top entity in the worldwide technology scene, an industry historically dominated by men and currently the focus of a large number of initiatives meant to remedy that imbalance. Now would therefore be a bad time to showcase an AI that was trained on biased data and has no qualms about misgendering somebody. AI programs are notorious for being the sum of their teachings, and Google of all companies knows to tread lightly on issues like this, having made a highly publicized misstep in an adjacent area before.
Impact: Manually adjusting AI programs to guard against biased and prejudiced behavior and speech may be the only way to keep such behaviors from occurring, for now. As long as those behaviors remain prominent online and statistical tendencies in the data point toward bias, AI programs will continue to respond accordingly without some sort of manual intervention. While it could be argued that the solution Google has chosen here is far from elegant, it’s about the best that can be done short of fundamental alterations to the algorithm to compensate for bias-inducing data. As such, Google seems to be doing the best that could be expected for a tool of Smart Compose’s scale and scope. Developing an AI that’s not prone to racism, gender bias and other unfortunate behaviors learned from real-world data and the humans it’s designed to serve remains an open problem across the industry. Once somebody figures it out and shares the solution, workarounds like this won’t be necessary. Until then, expect to see more moves like this from AI companies that don’t want their products lampooned for a lack of political correctness.