In an effort to help prevent the spread of misinformation, Google on Tuesday unveiled an invisible, permanent watermark on images that will identify them as computer-generated.
The technology, called SynthID, embeds the watermark directly into images created by Imagen, one of Google’s latest text-to-image generators. The AI-generated label remains regardless of modifications like added filters or altered colors.
The SynthID tool can also analyze incoming images, scanning for the watermark and reporting the likelihood they were made by Imagen at one of three levels of certainty: detected, not detected and possibly detected.
“While this technology isn’t perfect, our internal testing shows that it’s accurate against many common image manipulations,” Google wrote in a blog post Tuesday.
A beta version of SynthID is now available to some customers of Vertex AI, Google’s generative-AI platform for developers. The company says SynthID, created by Google’s DeepMind unit in partnership with Google Cloud, will continue to evolve and may expand into other Google products or third parties.
Deepfakes and altered photographs
As deepfakes and manipulated images and videos become increasingly realistic, tech companies are scrambling to find a reliable way to identify and flag such content. In recent months, an AI-generated image of Pope Francis in a puffer jacket went viral, and AI-generated images of former President Donald Trump being arrested were widely shared before he was indicted.
In June, Vera Jourova, vice president of the European Commission, called for signatories of the EU Code of Practice on Disinformation – a list that includes Google, Meta, Microsoft and TikTok – to “put in place technology to recognize such content and clearly label this to users.”
With the announcement of SynthID, Google joins a growing number of startups and Big Tech companies that are trying to find solutions. Some of these companies bear names like Truepic and Reality Defender, which speak to the potential stakes of the effort: protecting our very sense of what’s real and what’s not.
The Coalition for Content Provenance and Authenticity (C2PA), an Adobe-backed consortium, has been the leader in digital watermark efforts, while Google has largely taken its own approach.
In May, Google announced a tool called “About this image,” which lets users see when images found on its site were first indexed by Google, where they might have first appeared and where else they can be found online.
The tech company also announced that every AI-generated image created by Google will carry a markup in the original file to “give context” if the image is found on another website or platform.
But as AI technology develops faster than detection efforts can keep up, it’s unclear whether these technical solutions will fully address the problem. OpenAI, the company behind Dall-E and ChatGPT, admitted earlier this year that its own tool for detecting AI-generated writing (as opposed to images) is “imperfect,” and warned its results should be “taken with a grain of salt.”