Google’s annual developer conference, Google I/O, kicked off today with a keynote address by CEO Sundar Pichai. As expected, AI was a prominent theme, with Pichai highlighting significant advancements in Google’s Gemini AI models and the company’s growing prowess in the field. Here’s a deeper dive into some of the key numbers Pichai shared:
Gemini 1.5 Pro and 1 million tokens
Google introduced Gemini 1.5 Pro earlier this year, expanding how much context its AI models can handle. The model can run 1 million tokens in production, which the company claims is more than any other large-scale foundation model to date.
Pichai said that more than 1.5 million developers use Gemini models across the company’s tools, and that Gemini 1.5 Pro is now available to all developers globally in 35 languages.
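For developers, that access comes through the Gemini API. As a rough illustration (not something shown in the keynote), the snippet below sketches a minimal call to Gemini 1.5 Pro using Google’s google-generativeai Python SDK; the exact model name string and the GEMINI_API_KEY environment variable are assumptions based on the SDK’s documented conventions.

```python
# Minimal sketch: calling Gemini 1.5 Pro through the google-generativeai SDK.
# Assumes the SDK is installed (pip install google-generativeai) and that an
# API key is available in the GEMINI_API_KEY environment variable.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

# Model name assumed from Google's published naming convention.
model = genai.GenerativeModel("gemini-1.5-pro")

response = model.generate_content("Summarize the key announcements from Google I/O.")
print(response.text)
```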
All Google products with over 2 billion users use Gemini
“All of our 2-billion user products use Gemini,” Pichai said, adding that over 1 million people have signed up to try Gemini AI in just three months.
Over 6 billion photos and videos uploaded every day
While talking about Google Photos, Pichai said that the app – which was launched almost nine years ago – sees more than 6 billion photos and videos being uploaded every single day.
Support for long context queries
While discussing multimodality in AI models, Pichai said that Gemini models have taken a step further on long context, which he said enables them to bring in even more information: hundreds of pages of text, hours of audio or an hour of video, entire code repos…or, if you want, roughly 96 Cheesecake Factory menus.
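To put the 1-million-token window in concrete terms, here is a hedged sketch: the count_tokens call in the same google-generativeai SDK reports how much of the context window a large document would consume before you send it. The file path and the document are illustrative placeholders, not anything referenced in the keynote.

```python
# Sketch: checking how much of Gemini 1.5 Pro's context window a document uses.
# count_tokens is part of the google-generativeai SDK; the file path is a placeholder.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-pro")

CONTEXT_WINDOW = 1_000_000  # tokens, per the figure cited in the keynote

with open("big_report.txt", encoding="utf-8") as f:
    document = f.read()

used = model.count_tokens(document).total_tokens
print(f"{used:,} tokens used, {CONTEXT_WINDOW - used:,} tokens of headroom remain")
```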
Google announces 6th generation TPUs
The top executive announced the company’s 6th generation of TPUs, called Trillium. Trillium is said to be Google’s most performant and most efficient TPU to date, delivering a 4.7x improvement in compute performance per chip over the previous generation, TPU v5e.
Google’s liquid cooling system capacity is growing
Pichai said that Google’s total deployed fleet capacity for liquid cooling systems is nearly 1 gigawatt and growing — that’s close to 70 times the capacity of any other fleet.
“Underlying this is the sheer scale of our network, which connects our infrastructure globally. Our network spans more than 2 million miles of terrestrial and subsea fiber: over 10 times (!) the reach of the next leading cloud provider,” he added.
The number of times Google said “AI” during the presentation
Among various other jokes, there was a tongue-in-cheek moment on stage where Google seemed to take a dig at last year’s reports, in which publications counted how many times Google executives used the word “AI” over the course of the presentation.
Pichai said that Gemini had done the heavy lifting and counted it: Google executives said “AI” more than 120 times during the 110-minute presentation.