Google Now Lets You Search For Things You Can’t Describe, Starting With An Image

You like the look of this dress, but you'd prefer it in green. You want these shoes, but prefer flats to heels. What if you could have curtains with the same pattern as your favorite notebook? I don't know how to Google for these things, but Google Search product manager Belinda Zeng showed me real-life examples of each earlier this week, and the answer was always the same: take a photo, then type a single word into Google Lens.

Today Google is launching a US-only beta of the Google Lens Multisearch feature it teased last September at its Search On event, and while I've only seen a rough demo so far, you shouldn't have to wait long to try it out for yourself: it's rolling out to the Google app on iOS and Android.

Take a screenshot or photo of a dress, then tap, type “green” and search for a similar dress in a different color.
GIF: Google

Although it's mostly about shopping to begin with – it was one of the most common requests – Google's Zeng and the company's search director, Lou Wang, suggest it could do a lot more than that. "You might imagine that you have something broken in front of you, you don't have the words to describe it, but you want to fix it… you can just type in 'how to fix'," says Wang.

In fact, it might already work with some broken bikes, Zeng adds. She says she also learned how to style nails by screenshotting photos of beautiful nails on Instagram, then typing in the keyword "tutorial" to get the kind of video results that didn't automatically appear on social media. You can also take a photo of, say, a rosemary plant and get instructions on how to care for it.

Google's Belinda Zeng showed me a live demo where she found curtains to match a leaf-patterned notebook.
GIF by Sean Hollister / The Verge

"We want to help people understand questions naturally," says Wang, explaining how multisearch will expand to more videos, images in general, and even the kinds of answers you might find in a traditional Google text search.

The intention also seems to be to put everyone on a level playing field: rather than partnering with specific stores or even limiting video results to Google-owned YouTube, Wang says it will surface results from "any platform we are able to index from the open web".


When Zeng took a photo of the wall behind her, Google found ties that had a similar pattern.
Screenshot by Sean Hollister/The Verge

But it won't work with everything – just as your voice assistant doesn't work with everything – because there are endless possible requests and Google is still figuring out intent. Should the system pay more attention to the image or to your text search if they seem to contradict each other? Good question. For now, you have one extra bit of control: if you'd rather match a pattern, like the leaf notebook, get up close to it so Lens can't see it's a notebook. Because remember, Google Lens is trying to recognize your image: if it thinks you want more notebooks, you might have to tell it you don't.

Google hopes that AI models can usher in a new era of search, and there are big open questions about whether context, not just text, can get it there. This experiment seems limited enough (it doesn't even use Google's latest MUM AI models) that it probably won't give us the answer. But it seems like a neat trick that could go to some fascinating places if it becomes a core Google Search feature.
