In an interview with Wired, Sundar Pichai, the CEO of Google, discussed the launch of Gemini Advanced, currently Google’s most advanced artificial intelligence model. Pichai emphasized that Gemini’s greatest strength is its multimodality: it was trained jointly on several types of data, including text, images, audio, and code. As a result, Gemini can accept a wide range of inputs without format conversion, letting users prompt it through text, voice, or images. Pichai contrasted this with competitors such as OpenAI and Microsoft, whose models, in his telling, handle different modalities separately.
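To make the “no format conversion” point concrete, here is a minimal sketch of what a mixed text-and-image prompt looks like with Google’s generative AI Python SDK; the model name, the API key placeholder, and the image file are illustrative assumptions, not details from the interview.

```python
# A minimal sketch of a multimodal prompt using the google-generativeai SDK.
# The model name and "chart.png" are assumptions for illustration only.
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # assumed placeholder

# A single request can mix content parts of different modalities
# (here, an image plus text) with no manual conversion between them.
model = genai.GenerativeModel("gemini-pro-vision")
image = PIL.Image.open("chart.png")

response = model.generate_content([image, "Describe what this chart shows."])
print(response.text)
```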
Pichai noted that the human brain also works multimodally, and that Google has previously shipped services with this characteristic, such as Google Lens, which searches using images, and Multisearch, which lets users search with an image and text together.
Another topic Pichai discussed was whether chatting with AI will compete with search. He said it remains to be seen, but that Google is open to all possibilities. He stressed the importance of staying open rather than committing to any single approach, since being confined to one direction can mean missed opportunities.
Regarding Gemini’s future business model, Pichai was asked whether it will include advertisements. He acknowledged the ongoing debate and pointed to YouTube, which offers both a free, ad-supported version and an ad-free premium version. Ads, he explained, make it easier to distribute a service to a wide audience, while there is also a market of users willing to pay for an enhanced experience.
In conclusion, Pichai’s interview shed light on what sets Gemini Advanced apart, namely its multimodal capabilities, and on its potential impact on the market. Only time will tell how chatting with AI will evolve and whether ads will play a role in Gemini’s business model.
TLDR: Sundar Pichai, CEO of Google, discussed the launch of Gemini Advanced, Google’s most advanced AI model. Gemini’s multimodal capabilities allow it to accept various inputs without format conversion. Pichai compared this to the human brain and mentioned previous multimodal services by Google. He also addressed the possibility of AI chat competing with search and the potential inclusion of ads in Gemini’s business model.