Google has announced two significant additions to Google Lens: Voice Questions in Lens and Video Understanding in Lens.
Voice Questions in Lens lets users ask questions with their voice alongside images captured in Google Lens; previously, questions had to be typed. To use it, hold down the shutter button in Lens and speak the question. The feature currently supports English only.
Video Understanding in Lens expands Google Lens beyond static image searches to include video clips. In Google's example, a user records a video of swimming fish and asks "why are they swimming together?" by voice. Like Voice Questions, this feature currently supports English only.
Both features are now available globally in the Google app on Android and iOS, but users must first enable them under Search Labs > AI Overviews and more.
Source: Google
TLDR: Google Lens introduces Voice Questions, allowing users to verbally ask questions alongside images, and Video Understanding, enabling video clip searches with voice commands. Users can access these features globally on the Google app by enabling them in the Search Labs section.