At last night’s Google I/O 2024 event, the Google DeepMind team showcased Project Astra, an AI personal assistant that can hold continuous conversations with users and answer questions about what it sees through the phone’s camera. In one example, the presenter panned the phone around the surroundings and asked, “Where are we?”, and Astra answered accurately.
Astra also showed off its memory: asked where the user had left their glasses, it recalled the answer from camera footage it had captured earlier. The latter half of the demonstration switched to camera-equipped glasses reminiscent of Google Glass, with Astra helping to answer questions about what was written on a whiteboard. Google said the demonstration was captured continuously in a single take and ran in real time, powered by Gemini, with enhanced video-frame processing and event-timeline generation to support memory recall.
Set beside OpenAI’s GPT-4o demo, Astra looks like a directly competing product. However, Google has yet to say when Astra will be available for practical use.
TLDR: Google DeepMind introduced Project Astra, an AI personal assistant capable of engaging in continuous conversations and accurately answering questions based on visual input, at Google I/O 2024. Astra’s memory capabilities were showcased, along with its real-time video processing enhancements and event timeline generation. It remains to be seen when Astra will be available for use.