Google Lens Will Combine Visual Searches With Accompanying Text


Google is adding several new features to its Google Lens app. Notably, the app is gaining AI-powered language features that let you search using images and accompanying text together.

This will help users narrow down their searches with accompanying text. Users who shop online often, for example, can add text prompts to specify the garments they are looking for.

For instance, if you have an image of a color-blocked shirt and want to find similar items, you can open it in Google Lens and add the prompt, “socks with this pattern.” Lens will then show results for socks matching the pattern of the shirt in the image.


So, this new AI-powered language feature should make the search process noticeably easier. That’s not all, though: Google is also adding a new ‘Lens mode’ option to the Lens app for iOS.

The new Lens mode in the Google Lens iOS app will let users run a visual search on any image they come across while browsing the web. Do note that this feature is set to arrive “soon” and will be available only to users in the US.

Google will also be launching Google Lens within the Chrome browser

Additionally, as noted by The Verge, Google is also bringing Google Lens to its Chrome browser. Users will be able to select an image or video on a page to get visual search results, all without leaving their current tab.


Just like the iOS app’s Lens mode, Google Lens for the Chrome browser will arrive ‘soon.’ The good news is that the Chrome feature will launch globally, rather than being limited to users in the US.

All these features are built on the company’s new AI model, MUM (Multitask Unified Model), which was showcased at the I/O event earlier this year. Google wants to use this model to improve its search tools more broadly.

Further, Google is also introducing new AI-powered web and mobile search tools to enhance the user experience. All these changes show that the company wants to keep evolving the Lens app and make machine learning more useful in everyday tasks.


Google wants to turn AI-powered visual search into a more useful and powerful tool. The company gives the example of someone trying to fix a bike who does not know what a particular mechanism is called.

The user can simply point the Lens app at the part and add the search text, for example, “how to fix this part,” and Google will identify the mechanism in question and show the appropriate results.