Google on Wednesday announced the launch of a new beta feature in its Gemini app designed to personalise responses by allowing the AI assistant to draw on data from across a user’s Google ecosystem.
The feature begins with integration across Gmail, Google Photos, Search and YouTube history.
While Gemini could previously retrieve information from individual Google apps, the new update enables the AI to reason across a user’s data. This allows it to deliver proactive and more connected responses, such as linking an email conversation to a video the user previously watched.
According to Google, this means Gemini can now understand context without users having to specify where it should search for information.
The company explained that the beta experience, known as Personal Intelligence, is switched off by default. Users are given full control to decide if and when they want to connect their Google apps to Gemini.
Google acknowledged privacy concerns, noting that not all users are comfortable with AI accessing personal photos or YouTube history. The company added that if users choose to enable the feature, Gemini will only activate Personal Intelligence when it determines that doing so will be useful.
“Personal Intelligence has two core strengths: reasoning across complex sources and retrieving specific details from, say, an email or photo to answer your question,” wrote Josh Woodward, VP, Gemini app, Google Labs, and AI Studio, in a blog post. “It often combines these, working across text, photos and video to provide uniquely tailored answers.”
Woodward illustrated the feature’s capabilities with a personal example, explaining how he was once waiting at a tyre shop and could not remember his car’s tyre size. While most AI chatbots can provide general tyre size information for vehicles, he said Gemini was able to offer a more personalised response. In his case, the AI suggested all-weather tyres after identifying family road trip photos stored in Google Photos. He also revealed that when he forgot his licence plate number, Gemini was able to retrieve it from a photograph in his Photos library.
“I’ve also been getting excellent tips for books, shows, clothes and travel,” Woodward wrote. “Just this week, it’s been exceptional for planning our upcoming spring break. By analyzing our family’s interests and past trips in Gmail and Photos, it skipped the tourist traps. Instead, it suggested an overnight train journey and specific board games we could play along the way.”
Google stated that it has implemented safeguards for sensitive topics, stressing that Gemini will not make proactive assumptions involving sensitive data such as health information. However, the company clarified that Gemini will still discuss such data if users directly ask about it.
The technology firm also addressed concerns about data usage, explaining that Gemini does not train directly on personal content from Gmail inboxes or Google Photos libraries. Instead, the model is trained on specific prompts entered into Gemini and the responses it generates. In the examples shared, the road trip photos, licence plate image and Gmail emails were not used to train the AI. Google said they were only referenced to generate responses.
Personal Intelligence is currently rolling out to Google AI Pro and AI Ultra subscribers in the United States. Google said it plans to expand the feature to additional countries and eventually make it available on Gemini’s free tier.
As part of the announcement, Google shared several example prompts users can try. These include “Help me plan my weekend in [city i.e. New York] based on things I like to do,” “Recommend some documentaries based on what I’ve been curious about,” and “Based on my delivery and grocery receipts in Gmail, Search history, and YouTube watch history, recommend 5 YouTube channels that match my cooking style or meal prep vibe.”