YouTube CEO Susan Wojcicki announced at the annual SXSW conference that YouTube will work with Wikipedia to show relevant Wikipedia article text alongside videos about conspiracy theories, according to a report from Bloomberg. Per Wojcicki, the plan is to start with a list of conspiracy theories that have been generating heavy discussion around the internet, then identify YouTube videos covering those topics. From there, YouTube will display text boxes containing a brief summary of the conspiracy theory in question, sourced from Wikipedia, which users can click or tap to read the full article.
This move comes after a long fight against misleading conspiracy theory videos on YouTube, and almost immediately after one such video made it into the platform's front-page features. The video in question was not only arguably misleading, but also latched onto a recent tragedy, alleging that one of the survivors of the Parkland, Florida school shooting was actually a paid crisis actor. That video slipped past YouTube's safeguards by using footage from a reputable news source, which lent it undue credibility and made it easier for unwary viewers to be led astray. The new system could address issues like that, though vandalism, a well-known problem on Wikipedia, may rear its head and further complicate matters.
YouTube, along with other Google-owned products and properties, has long fought against misinformation and inappropriate content being pushed on unsuspecting users who never went looking for it. The fight against fake news escalated into a full-blown, web-wide crisis during the 2016 United States presidential election, and similar content-control issues have since become a major focus for tech companies. Efforts to address these problems are motivated not only by companies' desire to earn and keep users' trust and loyalty, but also by lawmakers around the world drawing hard lines and issuing threats, with the EU's treatment of Google and Facebook being a prime example. Given the user-generated nature of the content being policed, it's likely impossible to control the issue completely, but Google is hard at work using both human reviewers and AI to steer users away from inappropriate and misleading content they aren't actively seeking.