Google Lens has introduced Multisearch, a new feature that lets you refine your searches by combining an image with text to help you find the object you are looking for. It is an update that, although seemingly minor, should make life easier for many users.
Google Lens is an application that is as practical as it is easy to use. Available on iOS and Android, it makes it easy to find on the web an object you have photographed. However, it has a weakness that Google is fixing in a new update.
This update lets you refine an image search using text. Described this way, it might seem a bit abstract, but in practice Google promises a real in-depth change to its application, one that should make life easier for many users.
Google Lens gets stronger with Multisearch
To explain how the update works, the Mountain View firm gave concrete examples. Say you take a picture of an orange dress and use Lens to find the same model on the web. By default, the application will show you the same orange dress. With Multisearch, you can specify that you are looking for the same one, but in green, and the AI will return matching results. An ideal solution for shopaholics.
Other uses are possible. Take a picture of a table you like and you can ask Lens to find a matching coffee table. Better still, if users photograph a plant, they only have to type “care instructions” into Lens to get all the advice needed to keep it alive. These are just Google’s examples, but you can imagine thousands of other ways to use Multisearch.
It is a discreet change that could take Google Lens from friendly gadget to must-have app, and the company is betting heavily on it, hoping to build a wide range of uses around the feature. For now, it is only available in beta and only to English-speaking users. The rest of the world will have to wait: it may be several weeks or even months before the feature arrives.