Google appears to be bringing Search in line with recent shifts in the political and social climate around the world by tweaking its algorithms to actively demote content that is offensive, hurtful, or factually inaccurate. The company is doing this by rolling out new guidelines for its search quality raters, a worldwide pool of roughly 10,000 independent contractors whom Google hires to judge how well search results fit a given query. Google is now asking raters to do a better job of fact-checking search results and is introducing a new content tag called "Upsetting-Offensive."
The "Upsetting-Offensive" tag is to be used when raters believe a site is deliberately pushing a negative viewpoint, promoting hatred, or outright misleading users in a seriously upsetting way. One example Google gives its raters: a white supremacist website appearing in search results about the Holocaust should be tagged for insinuating that the tragedy never happened. The incendiary tone and subject matter, combined with the sensitivity of the topic itself, are enough to earn the tag in this case. A history website on the Holocaust, by contrast, gets a pass because of its grounding in accepted facts, methodical approach to the topic, and clear academic intent.
Two aspects of any such system that Google has addressed in a somewhat unorthodox fashion are political leanings and users who deliberately seek out incendiary content. On the former, there are no guidelines: a news article about current events that openly condemns a prominent political figure will not be demoted unless it is inflammatory in tone or falls short when fact-checked. As for the latter, users searching for content that would make the average person uncomfortable are to be treated as though they are driven by innocent curiosity. A search for the meaning of a racial slur, or for facts supporting a controversial or potentially offensive argument, will therefore mostly return the results the user intended, rated on factual accuracy and neutrality of tone. It is worth noting that the raters have no actual power, at least not directly; the data they generate is instead fed to Google's search algorithms, which use machine learning to work out on their own what content to filter.