If Gmail isn’t training Gemini AI, then what really sparked the panic? Google finally breaks its silence
Google has denied viral claims suggesting Gmail data is used to train Gemini AI. The company clarified that no user emails or attachments feed Gemini’s training models, calling recent reports misleading.
Over the last few days, several rumours have circulated on social media and elsewhere online claiming that Gmail is being used as a data feed to train Gemini, Google’s family of large language models. Reports suggested your personal email data and attachments might be fuelling the next generation of Google AI in secret. Google has now responded directly, rebutting the claim and setting out what is, and isn’t, happening.
The false claim that led to the uproar
The speculation kicked off when a report stated that a security company had identified new language in Gmail’s settings, specifically around the so-called “Smart Features”, which include automated replies, email labelling and categorisation, and travel-booking tracking updates.
Users noticed changes to the prominence and placement of these toggles on the settings page, and some took that as a signal of a policy change: that Google had quietly decided to begin harvesting inbox content to train Gemini.
The social media narrative stoked the fire further, with claims that users had to opt out of Smart Features or Google would use their Gmail content to train its AI. Some posted screenshots and YouTube-style exposé videos with bold red text claiming that Gmail had automatically opted users into this new behaviour.
Google’s official response
Google responded quickly, issuing a clarification via its official Gmail Twitter account as well as through a spokesperson to the press. The message states:
“We have not changed anyone’s settings related to Gmail Smart Features.” (emphasis theirs)
The company also reiterated that these Smart Features have existed in Gmail for years, powering things like automated reply suggestions, travel itineraries and message labelling, and that “Gmail content is not used to train Gemini AI models”.
A spokesperson, Jenny Thomson, was even more direct: “We do not use your Gmail content for training our Gemini AI model.” (emphasis theirs)
So what’s really happening?
This incident is a case study in privacy concerns at the intersection of email and AI. Perception can become reality when people fear their private Gmail content might be used to train a large language model. Google has moved to quash those concerns, but the broader context is that machine learning already powers Gmail’s Smart Features, and users are increasingly attuned to settings text and toggle placement in the web services they rely on.
The confusion stems from the simple fact that machine learning and personalised features are embedded in our favourite web services, which makes changes like these all the more noticeable. The fear was understandable, even if the conclusion was not.
Recommendations for users
So, what can users do? If you’re concerned about your Gmail privacy settings, it’s easy to check what’s turned on under Settings → Smart Features & Personalisation. These features can be toggled off at any time, but having them enabled does not mean your Gmail content is being harvested for model training.
Machine-learning-powered personalisation features (auto-suggestions, for example) are not the same as having your data served up in a large-scale training set for Gemini.
It’s also worth paying attention to settings text and where things are placed within the user interface. A user-experience update doesn’t always signal a major policy change, but transparency is always welcome.
TL;DR: Despite a number of viral claims and articles, Google is now officially stating that Gmail emails and attachments are not being used to train Gemini AI. The rumour has more to do with changes to how the Smart Features section of Gmail settings is presented than with any actual data-harvesting policy shift.