Google & Experts Mostly Agree On How AI Should Be Regulated

Google has been a focus of controversy over some of its AI use, but its leadership and some experts reportedly agree on how the tech should be regulated. That's based on recent comments on the topic from Google CEO Sundar Pichai and Oxford Internet Institute professor Sandra Wachter. Both Mr. Pichai and Ms. Wachter agree that regulation is needed, and each says that addressing individual technologies one by one is not the correct approach.

The agreement centers on one of AI's core aspects. Namely, the pair agree that AI is a multifaceted technology, cutting across many disciplines and markets. As such, it isn't particularly well-defined, and regulators must instead consider AI in terms of verticals.

Separating regulation by sector, such as healthcare, criminal justice, and consumer use, is the key to getting regulations done right. That's a sentiment shared by former Googler and AI expert Laura Nolan, who left the company over ethical concerns centered on Google's Project Maven. Splitting regulation along those lines is the only way to regulate without stifling competition and innovation, Google and the AI experts agree.

There does seem to be some disagreement about exactly which technologies need to be addressed more individually, though.

Mr. Pichai, speaking in an interview with the Financial Times, cites additional concerns related to the technology. There are still challenges with algorithmic bias and accountability, the CEO says. Both the experts and Google agree that adjustments to existing laws, specifically those governing data protection and nondiscrimination, could help ensure AI is properly regulated.

Conversely, the two AI experts outside Google indicate that technologies related to facial recognition and warfare need to be addressed more closely.

AI is no longer just about digital assistants and better phones

The ongoing discussion about how best to address AI regulation is nothing new. Almost immediately after the technology began to emerge, a variety of experts came forward with warnings about its implications. That includes dozens of statements from Tesla and SpaceX founder Elon Musk, among others.

In the interim, the technology has chiefly settled, publicly at any rate, on innovations that make users' lives more convenient. The markets for smart home products and better smartphone software, particularly cameras, have exploded. Self-driving cars follow closely behind, with stiff competition between automakers and tech companies.

Beyond the consumer market, other uses for AI have emerged as well.

Those have ranged from the benevolent, with Google's DeepMind exploring medical applications, to the almost dystopian. On the latter front, Amazon's AI efforts over the past several months have included bids to introduce the technology widely into law enforcement. Among those, the company has gone so far as to market the technology to police departments, complete with reports about using emotion recognition to better pinpoint suspect activity.

Google seems to be on the right track, despite its sordid AI record

As noted above, Google has seen its own controversy within the dystopian-leaning segment of the technology too. The AI-driven Project Maven, intended to give recognition capabilities to military drones, is a prime example. But the company does appear to be learning from those missteps.

This is not the first time the search giant's CEO has stepped forward to outline how Google thinks AI should be regulated. The company's proposal, while slightly differing from those offered by other AI experts, appears to be on the right track too. The only point left to agree on seems to be which individual technologies, if any, warrant regulations of their own.
