Google just dropped a tool that can unmask AI-made videos – Here’s what it found
Google has introduced a new Gemini app feature that detects whether a video was created using AI. The tool uses SynthID watermarking to reveal hidden digital fingerprints in uploaded clips.
Google has unveiled a new tool that lets users verify whether a video was created or modified using its artificial intelligence (AI) models. With this update, which Google has started to roll out, a user can simply upload a video clip of up to 90 seconds in length and 100 megabytes in file size and then ask, “Is this an AI-created video?” The tool arrives at a time when deepfakes and AI-generated content are becoming more widespread and more sophisticated.
Upload a video to Gemini, ask a question, get an answer
To use the new feature, a user simply uploads a video clip to the Gemini app or widget and then asks the question “Is this an AI-created video?” The tool scans the video frames and audio track to detect the presence of SynthID, Google’s invisible watermarking technology used to tag AI-generated content.
If a watermark is detected, Gemini also points out where it was found. The watermark may have been applied to the video frames, the audio track, or both, and the tool's response specifies which.
SynthID: Google’s invisible watermark for AI content
SynthID is Google’s watermarking technology that embeds a “digital fingerprint” into the pixels of an image and the audio track of a video. The watermark is inaudible and imperceptible to humans, and it is resilient to compression, resizing, and other modifications that can happen as the image or video is shared.
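To make the idea concrete, here is a deliberately simplified sketch of how an invisible watermark can work in principle. This is NOT SynthID's actual algorithm (Google has not published its full details); it is a toy least-significant-bit scheme, with a made-up fingerprint pattern, that shows how a mark can be embedded in pixel data without a visible change and then detected later.

```python
# Toy illustration of invisible watermarking -- NOT SynthID's real method.
# It hides a hypothetical bit pattern in the least significant bits of
# pixel brightness values, changing each pixel by at most 1.

WATERMARK = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical 8-bit fingerprint

def embed(pixels):
    """Overwrite the LSB of the first len(WATERMARK) pixels with the mark."""
    out = list(pixels)
    for i, bit in enumerate(WATERMARK):
        out[i] = (out[i] & ~1) | bit  # brightness shifts by at most 1
    return out

def detect(pixels):
    """Return True if the fingerprint pattern is present in the LSBs."""
    return [p & 1 for p in pixels[:len(WATERMARK)]] == WATERMARK

frame = [200, 201, 199, 180, 150, 90, 88, 210, 34]  # sample pixel values
marked = embed(frame)
print(detect(frame))   # False: unmarked frame
print(detect(marked))  # True: watermark found
```

Unlike this toy example, which a single round of compression would destroy, SynthID is designed to survive compression, resizing, and other routine edits, which is what makes it practical for content that gets re-shared across platforms.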
Google originally released SynthID as a tool for identifying AI-generated images. With this update, the company is extending the technology to video detection.
Deepfakes and synthetic media: A (Fake) news problem
Synthetic content created with AI has real-world consequences when it is deployed in harmful ways, such as misinformation and fraud. While Gemini cannot tell a user if a video clip is “true” in a journalistic sense, it can answer the question “Was this created with Google AI?”
The more aware people are of what they are seeing online, the better. Gemini’s AI-detection feature is another small step toward a safer, more transparent digital world.
Limitations: Gemini can detect Google AI but not other tools
Right now Gemini’s video detection feature can only identify videos made using Google’s own AI tools. It cannot detect videos created with AI models from other companies like OpenAI, Meta, or Runway.
Still, this is a good first step, and as more AI models are released, industry-wide video-detection standards are likely to develop as well.