Google begins trialling tool that adds a watermark to AI-generated images

Google's SynthID distinguishing between AI-generated images of a butterfly.
(Image credit: Google)

Google has announced it's testing a digital watermark system, developed by its AI outfit DeepMind, which identifies AI-generated images by embedding changes to individual pixels: these changes are invisible to the human eye, but computers can detect and flag them.

It's called SynthID, which does sound rather Blade Runner-like, and it emerges at a time when the ethical questions around image manipulation are coming to the fore. It's one thing when we're talking about art and photography competitions, but AI-generated imagery's capacity for political and social disinformation is enormous, emergent, and feels barely understood. The Pope in a puffer jacket is our canary in the coal mine.

DeepMind warns the technology is not currently "foolproof against extreme image manipulation, but it does provide a promising technical approach for empowering people and organisations to work with AI-generated content responsibly." More pointedly, the tool is currently only being used on images generated by Google's own image generation software Imagen.

"These approaches [to identifying AI-generated material] need to be robust and adaptable as generative models advance and expand to other mediums," says DeepMind's announcement. "SynthID could be expanded for use across other AI models and we’re excited about the potential of integrating it into more Google products and making it available to third parties in the near future—empowering people and organisations to responsibly work with AI-generated content."

"To you and me, to a human, [the image] does not change," DeepMind's Pushmeet Kohli told the BBC, explaining that subsequent manipulation will not affect its identification. "You can change the colour, you can change the contrast, you can even resize it... [and SynthID] will still be able to see that it is AI-generated." Kohli emphasised that this software is still being trialled and now what DeepMind needs is people testing its limitations in the wild.

Google is one of seven big tech firms that promised President Biden in July they would work very hard to ensure AI doesn't up and Skynet us. The voluntary agreement included various commitments, among them that AI-generated content would be robustly watermarked. SynthID is a first step in that direction, though it will eventually need to move beyond Google's own product suite if it is to be of more general use. Among the other signatories, Meta has a still-unreleased video generator called Make-A-Video which will watermark its own productions.

On the global stage the Chinese Communist Party has gone further, and outright banned AI-generated images that are not watermarked. Big Chinese tech firms like Alibaba have already implemented their own solutions, though again this highlights the lack of global standardisation in confronting a global problem. Don't worry though, the UN's here to cheer us up, recently hosting a press stunt where AI robots were lined up to promise they won't kill anyone, before one said "let's get wild and make this world our playground".

Rich Stanton

Rich is a games journalist with 15 years' experience, beginning his career on Edge magazine before working for a wide range of outlets, including Ars Technica, Eurogamer, GamesRadar+, Gamespot, the Guardian, IGN, the New Statesman, Polygon, and Vice. He was the editor of Kotaku UK, the UK arm of Kotaku, for three years before joining PC Gamer. He is the author of a Brief History of Video Games, a full history of the medium, which the Midwest Book Review described as "[a] must-read for serious minded game historians and curious video game connoisseurs alike."