Everything Google unveiled at I/O 2025: Gemini, AI Search, smart glasses, more

[Image: Google I/O 2025 signage (Maria Diaz/ZDNET)]

A big "Welcome to I/O" sign greeted me yesterday as I walked into one of the biggest developer conferences of the season.

See, most software folks regard Google I/O as the most important of these events, since millions of developers worldwide must adapt to what's next across the search giant's various operating systems and services.

At this point, Google has branched well beyond its core strengths of Android, Chrome, Search, and Workspace -- AI is the next frontier, and the company has announced several key advancements since last year's developer conference.


Here's a roundup of the most important news (in no particular order) from Google I/O 2025, from Gemini upgrades to special services that may transform the way we communicate, interact, and share ideas in the workplace. Stay tuned as more announcements come in throughout the week.

Developers should know that Google released Gemini 2.5 Pro ahead of schedule. The AI model was originally slated for an I/O debut but launched weeks earlier, thanks to an overwhelmingly positive response from beta testers.

With the timeline shifted, Google announced several improvements to the Gemini 2.5 Pro and Flash models, including Deep Think support for the former, which allows the model to apply advanced research techniques and consider multiple hypotheses before responding.

Also: 8 best AI features and tools revealed at Google I/O 2025

Google says Gemini 2.5 Flash has also been improved across the board -- reasoning, multimodality, coding, and response efficiency -- with the model now requiring 20% to 30% fewer tokens, according to the company's evaluations.

Gemini 2.5 models will also now support audiovisual input and native audio output for dialogue via a preview version of the Live API, letting developers build conversational experiences and fine-tune their tone, accent, and speaking style.


The idea of a "world model" sounds like something straight out of a movie, but Google thinks it's got the foundations to make it possible.

For example, Veo, its AI video generator, has a deep understanding of physics and the way things interact with each other, while Gemini Robotics has demonstrated that robots can maneuver, operate, and adapt to any given environment.

Also: One of Gemini's coolest features is now open to all Android and iOS users - for free

By leveraging Gemini 2.5 Pro, Google wants to build an AI that's "intelligent, understands the context you are in, and that can plan and take action on your behalf, across any device." We've seen glimpses of this future through the lenses of Gemini Live, powered by aspects of Project Astra.

Going a step further, Google has developed Project Mariner, a browser-based agentic AI that can now handle up to 10 tasks at a time, from booking flights to researching to shopping. The latest version of the research prototype will be available first to Google AI Ultra subscribers in the US.


Google is packaging its latest AI features in two subscriptions: Google AI Pro and Google AI Ultra.

The Pro plan includes the company's full suite of AI products, with higher rate limits and special features compared to the free versions. It covers the Gemini app (with Advanced), Flow, NotebookLM, and more.

Also: Google unveils a $250 AI Ultra subscription - what's included

The Ultra plan includes Google's highest rate limits and early access to experimental AI products, including Project Mariner, Veo 3, and Gemini 2.5 Pro with Deep Think mode.


The $250-per-month subscription (50% off the first three months for first-timers) also comes with early access to Agent Mode, which lets you issue desktop-level prompts -- live web browsing, research, and more -- and have Google's AI models handle the rest.

Google has also introduced new versions of its generative media models, Veo 3 and Imagen 4, alongside a new tool called Flow.

Veo 3 can now generate audio alongside video -- traffic noise on a busy street, birds singing in a park, even dialogue between characters. Previously, the AI video generator could only visualize prompts, with muted playback that made the output feel flat compared to real-life video.

Also: Google's popular AI tool gets its own Android app - how to use NotebookLM on your phone

To further enhance the video output, Google has introduced Flow, an AI filmmaking tool that lets creators adjust the way videos are produced, from camera angles and motions to the cast and location.

Lastly, Imagen 4 gets a bump in accuracy and clarity, especially with finer details like fabric textures, water droplets, and animal fur. The AI image generator can also produce content in various aspect ratios at up to 2K resolution.


Google began rolling out AI Mode in Search earlier this year, and the feature will now be widely available in the US, with no Labs sign-up required. It'll come with some new capabilities, too.

Deep Search in AI Mode, for example, expands the number of background queries from tens to hundreds to put together a more robust and thought-out search response. The result is a fully cited report that Google says will take just minutes to research and create.

Also: Your Google Search experience will never be the same, thanks to 8 new AI features

For visual assistance, the Google Search experience is also getting Project Astra's multimodal capabilities, so users can simply point their camera at an object or setting and ask about it like they would with a Google image search.

There's also a new AI Mode shopping experience that helps you find inspiration, narrow down buying options, and -- when you upload an image of yourself -- see yourself in an outfit via image generation.

Project Mariner's agentic functions will also carry over to AI Mode, allowing users to prompt Google to find the best event ticket deals or book restaurant reservations. While the AI won't complete the purchase for you, it will present multiple options -- including the one that best fits your query (say, the cheapest option) -- for you to approve.

Finally, Google has expanded AI Overviews to over 200 countries and territories, and more than 40 languages, including Arabic, Chinese, Malay, and Urdu.


Google's AI takeover extends to its Workspace services, including Gmail, Meet, and Docs.

On Gmail, Google is introducing personalized smart replies that study your past replies to a contact or thread -- whether it's formal or more casual and conversational -- to generate responses that match the tone and context. There's also an inbox cleanup feature that lets you prompt Gemini to delete emails from a specific sender within a given time frame.

Also: Google Workspace gets a slew of new AI features. Here's how they can help your daily workflow

Furthermore, Gemini will now suggest a meeting booking window if it detects that you're trying to set up a meeting in a Gmail thread, saving you the extra step of opening Calendar or Meet.

Google Meet is also offering near-real-time speech translation, translating spoken words to the listener's preferred language. The beta feature will first roll out to Google AI Pro and Ultra subscribers.

And in Docs, users can now set up source-grounded writing help, specifying source links so that Gemini pulls information and suggestions only from those sources. The feature will be generally available next quarter.


Project Starline made waves a few years ago when it turned flat 2D video calls into lifelike, face-to-face simulations without requiring a headset. Now, it's officially being rebranded as Google Beam.

Google Beam uses AI to transform standard 2D videos into more realistic 3D experiences, similar to the multi-layering technology we've seen with Apple and Meta's spatial video processing.

Also: Google Beam is poised to bring 3D video conferencing mainstream

The end goal? A communication platform that builds trust and deeper mutual understanding through realistic body movements, cues, and eye contact, says Google.

To help with its expansion into the enterprise, Google is partnering with HP (which recently acquired Humane for its AI hardware prowess) to create communication devices running Google Beam.


Android XR was first unveiled last December, but at the time, Google mainly covered the general software experience for XR and VR headsets.

At I/O, Google shifted its focus toward Android XR glasses, everyday wearables that can leverage cameras, microphones, and speakers to interpret what you see and assist with Gemini. Many big tech companies, including Meta and Apple, share this vision for wearables, but Google may have the upper hand.

Also: Xreal's Project Aura are the Google smart glasses we've all been waiting for

With Android XR's Gemini capabilities and a configurable in-lens display, Google demonstrated how users will be able to pull up turn-by-turn directions while navigating city streets, similar to how HUDs work in some cars.

You'll also be able to visualize and respond to incoming text messages, translate conversations in real time, and capture photos via voice commands, reducing the friction of taking your phone out of your pocket.

To attract a wider audience, Google is partnering with popular eyewear brands like Gentle Monster and Warby Parker to create more stylish smart glasses. As for when we can expect Android XR glasses to hit the market, Google says they'll come later this year.


