Google has added a new feature to its Lens service called Multisearch, which combines image and text input from users to find products on the web.
Initially available to US users, the feature is aimed at shopping-related queries for now, Google said, but it is not limited to that use case. With Multisearch, users can find more relevant products on the web by supplying an image alongside specific keywords.
Google Lens Multisearch
Plain text searches are no longer the only way people look things up online. Google Lens, integrated into the browser, already lets users search with images, and since Lens is capable of OCR, it can extract text from an image as well.
Building on that service is Multisearch, a new feature that lets users search with both an image and text at once. Though Google Lens AI is capable, it still benefits from richer input: supplying both an image and keywords helps users get more precise results, and gives Google more signal to refine the system.
Google previewed the feature in September last year and is now rolling it out to US users first. It is meant to help users with their “shopping-related searches initially,” Liz Reid, vice president of Google Search, told CNN Business, though it isn’t limited to that and has a wider scope.
Google said the feature uses a machine-learning system called the “multitask unified model” (MUM) to perform searches from both inputs. Combining natural language processing (for the text) with computer vision (for the image), it searches the web for results relevant to your query.
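Google has not published MUM’s internals, but the general idea behind multimodal retrieval can be sketched: embed the image and the text into a shared vector space, fuse the two vectors, and rank catalog items by similarity to the fused query. Everything below is a hypothetical toy illustration (random stand-in embeddings, simple averaging as the fusion step), not Google’s actual method.

```python
import numpy as np

DIM = 64  # embedding dimensionality (arbitrary for this sketch)

def embed(seed: int) -> np.ndarray:
    """Stand-in for a real image/text encoder: returns a random unit vector."""
    v = np.random.default_rng(seed).normal(size=DIM)
    return v / np.linalg.norm(v)

# Pretend embeddings for the user's photo and their refining keywords.
image_vec = embed(1)   # e.g. a photo of a floral dress
text_vec = embed(2)    # e.g. the keyword "green"

# Fuse both signals; averaging is one simple fusion strategy.
query_vec = (image_vec + text_vec) / 2
query_vec /= np.linalg.norm(query_vec)

# A toy product catalog of precomputed embeddings.
catalog = {f"product_{i}": embed(10 + i) for i in range(5)}

# Rank products by cosine similarity to the fused query vector.
ranked = sorted(catalog, key=lambda k: catalog[k] @ query_vec, reverse=True)
print(ranked[0])  # best-matching product id
```

In a real system the encoders would be trained so that semantically related images and text land near each other in the shared space, and ranking would run over an approximate-nearest-neighbor index rather than a brute-force sort.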