Google CEO Sundar Pichai is one of the most qualified people to discuss the improvements in AI oversight and ethics that Google must make before the technology seeps into all of its products, and that's just what he did in a recent interview with The Verge. Pichai talked about the scope and scale of modern AI, as well as how Google plans to integrate it across its product lines. Human oversight, ethics, and diligence go hand in hand with this topic, and when a controversial issue came up, Pichai used the opportunity to illustrate his point. That issue was Google's search algorithm, which was recently found to be promoting misinformation about the mass shooting in Las Vegas. Pichai said that the algorithms are designed to improve on their own over time, but that humans will occasionally have to step in, at least for a while. This set the tone for the rest of the talk.
Pichai talked at length about the future of Google's products and his plan to eventually integrate the company's full stack with AI underpinnings. According to him, as AI systems advance, they will find an increasingly large number of new use cases, expanding their purview over time. He spoke excitedly about Google weaving AI more fully into its stack of hardware and software products, and said that a "hybrid" approach, running AI systems both on-device and in the cloud, makes sense given the direction the field is heading. Still, it will take time; machine learning is like any other kind of learning in that repetition and genuine understanding are key. One example he cited is his own use of Google Fit. Each day, he opens the app to the default view and swipes over to another. Rather than having the Google Fit team change the app's behavior, he wants the AI integrated with the app to learn what he wants, and why, from the fact that he steers himself toward that content every day.
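The idea behind that anecdote can be made concrete with a small sketch. The following Python snippet is purely illustrative and assumes nothing about how Google Fit actually works; the class name, threshold, and view names are all hypothetical. It simply counts which view a user settles on after each app open and, once one view clearly dominates, makes it the new default:

```python
from collections import Counter

class AdaptiveDefaultView:
    """Hypothetical sketch: choose an app's default view from observed usage.

    This is not Google Fit's actual behavior or API; the names and
    thresholds here are invented for illustration.
    """

    def __init__(self, initial_default, threshold=0.6, min_events=5):
        self.default = initial_default
        self.threshold = threshold    # fraction of opens needed to switch
        self.min_events = min_events  # don't adapt on too little data
        self.opens = Counter()

    def record_open(self, view):
        """Log which view the user actually settled on after opening the app."""
        self.opens[view] += 1
        total = sum(self.opens.values())
        top_view, count = self.opens.most_common(1)[0]
        # Only switch once there's enough consistent evidence.
        if total >= self.min_events and count / total >= self.threshold:
            self.default = top_view
        return self.default

# Simulated week: the user opens to "home" but swipes to "heart_points" daily.
app = AdaptiveDefaultView("home")
for _ in range(7):
    current = app.record_open("heart_points")

print(current)  # prints "heart_points" once the pattern is established
```

The point of the sketch is the one Pichai was making: instead of a product team shipping a behavior change, the app itself notices a stable pattern and adapts, with the human stepping in only if it adapts wrongly.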
On the subject of privacy, Pichai said that AI could potentially solve the quandary everybody is wrestling with these days. An AI system could figure out exactly what data is needed to meet a user's needs and keep only that on hand, rather than retaining information entered elsewhere that a particular application doesn't need, or drawing on the full trove of data normally used to serve advertisements and other purposes. The overarching theme of the talk was that AI is still developing and still needs direction. There are ways AI can grow without having its hand held, so to speak, but for now, humans will occasionally have to tell AI systems that something is wrong, or that they have incorrectly identified a user's need or goal.