Google I/O 2025 recap: AI updates, Android XR, Google Beam and everything else announced at the annual keynote

Today was one of the most important days on the tech calendar, as Google kicked off its I/O developer event with its annual keynote. As ever, the company had updates for a wide range of products to talk about.

The bulk of the Android news was revealed last week, during a special edition of The Android Show. However, Tuesday's keynote still included a ton of stuff including, of course, a pile of AI-related news. We covered the event in real-time in our live blog, which includes expert commentary (and even some jokes!) from our team.

If you're on the hunt for a breakdown of everything Google announced at the I/O keynote, though, look no further. Here are all the juicy details worth knowing about:

Quelle surprise, Google is continuing to shove more generative AI features into its core products. AI Mode, the company's new chatbot-style search experience, will soon be live in Search for all US users.

AI Mode is in a separate tab and it's designed to handle more complex queries than people have historically used Search for. You might use it to compare different fitness trackers or find the most affordable tickets for an upcoming event. AI Mode will soon be able to whip up custom charts and graphics related to your specific queries too. It can also handle follow-up questions.

The chatbot now runs on Gemini 2.5. Google plans to bring some of its features into the core Search experience by injecting them into AI Overviews. Labs users will be the first to get access to the new features before Google rolls them out more broadly.

Meanwhile, AI Mode is powering some new shopping features. You'll soon be able to upload a single picture of yourself to see what a piece of clothing might look like on a virtual version of you. 

Also, similar to the way in which Google Flights keeps an eye out for price drops, Google will be able to let you know when an item you want (in its specific size and color) is on sale for a price you're willing to pay. It can even complete the purchase on your behalf if you want.

AI Overviews, the Gemini-powered summaries that appear at the top of search results and have been buggy to say the least, are seen by more than 1.5 billion folks every month, according to Google. The "overwhelming majority" of people interact with these in a meaningful way, the company said — this could mean clicking on something in an overview or keeping it on their screen for a while (presumably to read through it).

Still, not everyone likes AI Overviews; some would rather just have a list of links to the information they're looking for. You know, like Search used to be. As it happens, there are some easy ways to declutter the results.

We got our first peek at Project Astra, Google's vision for a universal AI assistant, at I/O last year and the company provided more details this time around. A demo showed Astra carrying out a number of actions to help fix a mountain bike, including diving into your emails to find out the bike's specs, researching information on the web and calling a local shop to ask about a replacement part.

It already feels like a culmination of Google's work in the AI assistant and agent space, though elements of Astra (such as granting it access to Gmail) might feel too intrusive for some. In any case, Google aims to transform Gemini into a universal AI assistant that can handle everyday tasks. The Astra demo is our clearest look yet at what that might look like in action.

Gemini 2.5 is here with (according to Google) improved functionality, upgraded security and transparency, extra control and better cost efficiency. Gemini 2.5 Pro is bolstered by a new enhanced reasoning mode called Deep Think. The model can do things like turn a grid of photos into a 3D sphere of pictures, then add narration for each image. Gemini 2.5's text-to-speech feature can also change up languages on the fly. There's much more to it than that, of course, and we've got more details in our Gemini 2.5 story. 

You know those smart replies in Gmail that let you quickly respond to an email with an acknowledgement? Google is now going to offer personalized versions of those so that they better match your writing style. For this to work, Gemini looks at your emails and Drive documents. Gemini will need your permission before it plunders your personal information. Subscribers will be able to use this feature in Gmail starting this summer.

Google Meet is getting a real-time translation option, which should come in very useful for some folks. A demo showed Meet being able to match the speaker's tone and cadence while translating from Spanish to English. 

Subscribers on the Google AI Pro and Ultra (more on that momentarily) plans will be able to try out real-time translations between Spanish and English in beta starting this week. This feature will soon be available for other languages.

An example of camera sharing using Google's Gemini Live AI. (Image: Google)

Gemini Live, a tool Google brought to Pixel phones last month, is rolling out starting today to all compatible Android and iOS devices via the Gemini app (which already has more than 400 million monthly active users). It lets you ask Gemini questions about screenshots, as well as live video that your phone's camera is capturing.

Google Search Live is a similar-sounding feature. You'll be able to have a "conversation" with Search about what your phone's camera can see. This will be accessible through Google Lens and AI Mode.

A new filmmaking app called Flow, which builds on VideoFX, includes features such as camera movement and perspective controls; options to edit and extend existing shots; and a way to fold AI video content generated with Google's Veo model into projects. Flow is available to Google AI Pro and Ultra subscribers in the US starting today. Google will expand availability to other markets soon.

Speaking of Veo, that's getting an update. The latest version, Veo 3, is the first iteration that can generate videos with sound (it probably can't add any soul or actual meaning to the footage, though). The company also suggests that its Imagen 4 model is better at generating photorealistic images and handling fine details like fabrics and fur than earlier versions.

Handily, Google has a tool it designed to help you determine if a piece of content was generated using its AI tools. It's called SynthID Detector — naturally, it's named after the tool that applies digital watermarks to AI-generated material.

According to Google, SynthID Detector can scan an image, piece of audio, video or text for the SynthID watermark and let you know which parts are likely to have a watermark. Early testers will be able to try this out starting today. Google has opened up a waitlist for researchers and media professionals. (Gen AI companies should offer educators a version of this tech ASAP.)

Google AI Ultra pricing chart. (Image: Google)

To get access to all of its AI features, Google wants you to pay $250 per month for its new AI Ultra plan. There's really no other way to react to this other than "LOL. LMAO." I rarely use either of those acronyms, which highlights just how absurd the pricing is. That's obscenely expensive.

Anyway, this plan includes early access to the company's latest tools and unlimited use of features that are costly for Google to run, such as Deep Research. It comes with 30TB of storage across Google Photos, Drive and Gmail. You'll get YouTube Premium as well — arguably the Google product that's most worth paying for.

Google is offering new subscribers 50 percent off an AI Ultra subscription for the first three months. Woohoo. In addition, the AI Premium plan is now known as Google AI Pro.

As promised during last week's edition of The Android Show, Google offered another look at Android XR. This is the platform that the company is working on in the hope of doing for augmented reality, mixed reality and virtual reality what Android did for smartphones. After the company's previous efforts in those spaces, it's now playing catchup to the likes of Meta and Apple.

The initial Android XR demo at I/O didn't offer much to get too excited about for now. It showed off features like a mini Google Map that you can access on a built-in display and a way to view 360-degree immersive videos. We're still waiting for actual hardware that can run this stuff.

Xreal's Project Aura is the second official Android XR headset. (Image: Xreal)

As it happens, Google revealed the second Android XR device. Xreal is working on Project Aura, a pair of tethered smart glasses. We'll have to wait a bit longer for more details on Google's own Android XR headset, which it's collaborating with Samsung on. That's slated to arrive later this year.

A second demo of Android XR was much more interesting. Google showed off a live translation feature for Android XR with a smart glasses prototype that the company built with Samsung. That seems genuinely useful, as do many of the accessibility-minded applications of AI. Gentle Monster and Warby Parker are making smart glasses with Android XR too. Just don't call it Google Glass (or do, I'm not your dad).

Google is giving the Chrome password manager a very useful weapon against hackers. It will be able to automatically change passwords on accounts that have been compromised in data breaches. So if a website, app or company is infiltrated, user data is leaked and Google detects the breach, the password manager will let you generate a new password and update a compatible account with a single click.

The main sticking point here is that it only works with websites that are participating in the program. Google's working with developers to add support for this feature. Still, making it easier for people to lock down their accounts is a definite plus. (And you should absolutely be using a password manager if you aren't already.)

On the subject of Chrome, Google is stuffing Gemini into the browser as well. The AI assistant will be able to answer questions about the tabs you have open. You'll be able to access it from the taskbar and a new menu at the top of the browser window.

It's been a few years since we first heard about Project Starline, a 3D video conferencing project. We tried this tech out at I/O 2023 and found it to be an enjoyable experience.

Now, Google is starting to sell this tech, but only to enterprise customers (i.e. big companies) for now. It's got a new name for all of this too: Google Beam.

Published May 20, 2025

