Google rolls out Gemini overlay and Live features for Android devices

At its Made by Google event, Google announced new AI-driven enhancements coming to billions of Android users, with a focus on privacy and security.

The company said it is integrating Gemini deeply into Android, effectively rebuilding the operating system with AI at its core and transforming what smartphones can do.

Gemini on Android: New Features and Experiences

Revamped Assistant Experience: Google has overhauled the assistant experience with Gemini, allowing users to interact naturally, as if conversing with another person.

Gemini understands user intent, follows the flow of conversation, and completes complex tasks seamlessly, thanks to its deep integration into the operating system.

Gemini Overlay: Starting today, Gemini’s overlay can be accessed on top of any app, allowing users to ask questions about on-screen content, like getting details about a YouTube video.

Users can also create images within the overlay and seamlessly drag them into apps like Gmail and Google Messages.

Gemini Live: This new feature offers a mobile conversational experience in which users can ask complex questions, explore ideas, or brainstorm career options.

This feature begins rolling out today in English to Gemini Advanced subscribers on Android, with plans to expand to more languages in the coming weeks.

Privacy and Security with Google

Google emphasized the importance of security and privacy in all AI-driven tasks. With user permission, Gemini can integrate personal data with Google’s extensive knowledge base to provide tailored assistance.

For instance, it can create a workout routine based on emails from a personal trainer or use a resume stored in Google Drive to draft a work bio. This is done securely, without the need to rely on third-party AI services.

Android, now featuring the Gemini Nano model, is the first mobile OS to offer on-device multimodal AI. This ensures that sensitive data never leaves the phone.

On the Pixel 9, for example, Gemini Nano enables features like Call Notes, which summarizes phone call audio, and Pixel Screenshots, which organizes images securely on the device.

Availability and Compatibility

Google said that Gemini is the most widely available AI assistant, supporting 45 languages across over 200 countries and territories. It works with many phone models from different manufacturers and adapts to various Android devices, including foldables.

For example, on the Samsung Galaxy Z Fold6, Gemini supports multi-window and split-screen modes, while on the Motorola Razr+, it can be accessed from the external display to quickly summarize recent emails.

Google also noted that existing devices will receive these updates, allowing Android phones to improve over time. Users can access Gemini by swiping up from the corner on supported Samsung devices, or by holding down the power button on compatible Pixel and other devices.

Upcoming Updates

Additional AI-powered features, including a new share feature in Circle to Search that allows users to circle a selection and instantly share it with others, will be available within the next month on supported Android devices.

Speaking about the announcement, Sameer Samat, President of the Android Ecosystem at Google, said:

“We’re integrating AI into every aspect of our technology, from data centers and operating systems to devices. For AI to be genuinely useful, it must seamlessly blend into our daily lives, and the ideal place to experience this is on your Android device. With Gemini deeply embedded in Android, we’re reimagining the operating system with AI at its core and transforming the capabilities of smartphones.”

