LaMDA Will Fundamentally Change Search & Conversational AI

[Image: Google LaMDA AI conversation search announcement, from the Google Blog]

Google I/O is well underway, and the search giant has now introduced its next iteration of conversational AI, dubbed LaMDA. Built on Google’s own open-source neural network architecture, Transformer, LaMDA differs from other AI in that it doesn’t just focus on reading words and predicting what comes next. Instead, it was trained on dialogue.

That difference in training allowed LaMDA to learn the nuances of conversation and become more conversational as an AI, chiefly by understanding the differences between open-ended conversation and other “forms of language.”

In practice, that means LaMDA doesn’t simply look up a closed-ended response to dish out in answer to a query. It responds sensibly and specifically in the context of the question, with open-ended understanding and replies, so the AI can flow from one topic to another more naturally, just like a conversation with a person.
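Google hasn’t published an API for LaMDA, but the core idea described above, that each reply is conditioned on the full conversation so far rather than treated as a standalone query, can be sketched in plain Python. The `DialogueContext` class and the sample turns below are purely illustrative assumptions, not Google’s actual interface.

```python
class DialogueContext:
    """Toy sketch: accumulate dialogue turns so that every new reply
    can be generated with the whole conversation history in view.
    This illustrates the concept only; it is not LaMDA's real API."""

    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs, in order

    def add_turn(self, speaker, text):
        self.turns.append((speaker, text))

    def context_window(self):
        # An open-ended dialogue model would receive all prior turns
        # as context, which is what lets it drift naturally from one
        # topic to the next instead of answering each query in isolation.
        return "\n".join(f"{s}: {t}" for s, t in self.turns)


ctx = DialogueContext()
ctx.add_turn("User", "My washing machine is broken.")
ctx.add_turn("AI", "Have you checked the drain filter?")
ctx.add_turn("User", "Could a horse help fix it?")

# Every reply is generated against the accumulated history:
print(ctx.context_window())
```

The point of the sketch is simply that context is cumulative: a closed-ended system would see only the last question, while a dialogue-trained model sees the washing machine, the filter, and the horse together.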


How does LaMDA work for search and conversational AI?

Google’s example of LaMDA in action is a telling one. The company shows the AI responding to a conversation that opens with the statement “I’d like to have a conversation demonstration for a blog post.” In response, the AI informs the user that it is, in fact, a “friendly and knowledgeable demonstration,” and that it can engage in an interesting conversation about “virtually anything.”

The user starts by asking about a washing machine and TV that are malfunctioning. From there, the conversation takes an unexpected turn to horses, how many legs they have, and whether they can help fix the issue.

Of course, horses can’t help fix those issues. And during Google’s I/O 2021 announcement, the company specified that, like real-world conversations between people, the answers don’t always make the most sense or stay on the initial topic throughout. Instead, each portion of the conversation is linked contextually to the statements and questions that came before it.


How and when will this apply to real-world results?

Now, Google doesn’t specify exactly when it plans to start incorporating LaMDA more directly into its products, only that it will continue researching the technology for incorporation into AI and search products such as Google Search and Google Assistant. The results it is showing today are still “early,” and the company says it looks forward to sharing more soon.

In the interim, the company is working to ensure that responses are as factually accurate and considerate as they are witty, unexpected, and insightful. That includes guarding against internalized bias, hateful speech, and misleading information, as well as minimizing the potential for LaMDA to be misused.

[Image: Google LaMDA AI conversation search example]