DeepMind partners with Google Cloud to watermark images generated by artificial intelligence

Image credits: Google

In partnership with Google Cloud, Google DeepMind, the artificial intelligence research division of Google, has released a tool for watermarking and identifying AI-generated images, though only images produced by Google's own image-generation model.

The tool, called SynthID and available in beta to users of Vertex AI (Google's platform for building AI applications and models), embeds a digital watermark directly into an image's pixels, making it imperceptible to the human eye but detectable by an algorithm. SynthID only supports Imagen, Google's text-to-image model, which is available exclusively in Vertex AI.
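DeepMind has not disclosed how SynthID's watermark is actually computed, but a toy example can illustrate the general idea of a pixel-level mark that is invisible to the eye yet easy for software to detect. The sketch below is purely illustrative and far more fragile than SynthID (a simple least-significant-bit pattern would not survive compression or filtering); it hides a keyed bit pattern in the lowest bit of each pixel value and later checks for it:

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int = 42) -> np.ndarray:
    """Hide a keyed pseudo-random bit pattern in the least significant bit of each pixel."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | pattern  # overwrite only the lowest bit; visually imperceptible

def detect_watermark(image: np.ndarray, key: int = 42) -> float:
    """Return the fraction of pixels whose lowest bit matches the keyed pattern."""
    rng = np.random.default_rng(key)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return float(np.mean((image & 1) == pattern))

original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
marked = embed_watermark(original)
print(detect_watermark(marked))    # ~1.0 for a watermarked image
print(detect_watermark(original))  # ~0.5 (chance level) for an unmarked image
```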

Google previously said it would embed metadata to denote visual media created by generative AI models. SynthID clearly goes a step further than that.

“Although generative AI can unlock enormous creative potential, it also presents new risks, such as enabling creators to spread false information — whether intentionally or unintentionally,” DeepMind wrote in a blog post. “The ability to identify AI-generated content is critical to enabling people to know when they are interacting with generated media, and to help prevent the spread of misinformation.”

Image credits: DeepMind

DeepMind claims that SynthID, which it developed and refined in partnership with Google Research (Google's R&D division), remains detectable even after modifications such as adding filters, changing colors or heavily compressing the image. DeepMind says the tool relies on two AI models, one for watermarking and one for identification, that were trained together on a "diverse" set of images.
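DeepMind has not published those models' architectures or training procedure, but the joint-training recipe it describes, one network embedding an imperceptible residual and a second network learning to detect it even after the image has been degraded, can be sketched in a few lines of PyTorch. Everything below (layer sizes, loss weights, the noise used to stand in for filtering and compression) is an invented illustration, not SynthID's actual setup:

```python
import torch
import torch.nn as nn

# Hypothetical toy architectures; SynthID's real models are not public.
encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())   # predicts a residual to add
detector = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(16, 1))                           # watermark-present logit

opt = torch.optim.Adam(list(encoder.parameters()) + list(detector.parameters()), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

for _ in range(100):                                  # stand-in for a real training loop
    images = torch.rand(8, 3, 64, 64)                 # stand-in for a "diverse" image set
    marked = (images + 0.01 * encoder(images)).clamp(0, 1)    # add an imperceptibly small residual
    degraded = marked + 0.02 * torch.randn_like(marked)       # crude proxy for filters/compression
    logits = detector(torch.cat([degraded, images]))          # degraded marked vs. clean unmarked
    labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])
    loss = bce(logits, labels) + 10.0 * (marked - images).abs().mean()  # detectable yet invisible
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design choice in such a recipe is that the detector only ever sees degraded copies of the marked images during training, which is what pushes the watermark toward surviving edits like the filters, color changes and compression DeepMind mentions.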

SynthID can't identify watermarked images with 100% confidence. But the tool can distinguish between cases where an image might contain a watermark and cases where it very likely does.
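In practice, that means the detector returns a graded confidence rather than a flat yes or no. A minimal sketch of how such a score might be bucketed into the kinds of verdicts described above (the cut-off values here are invented for illustration, not SynthID's):

```python
def classify_watermark(score: float) -> str:
    """Map a detector confidence score in [0, 1] to a human-readable verdict.
    The 0.9 / 0.5 thresholds are arbitrary illustrative values."""
    if score >= 0.9:
        return "very likely watermarked"
    if score >= 0.5:
        return "possibly watermarked"
    return "unlikely to be watermarked"

print(classify_watermark(0.97))  # very likely watermarked
print(classify_watermark(0.62))  # possibly watermarked
print(classify_watermark(0.12))  # unlikely to be watermarked
```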


“SynthID is not foolproof against severe image manipulation, but it provides a promising technical approach to enabling people and organizations to work with AI-generated content responsibly,” DeepMind wrote in the blog post. “This tool could also evolve alongside other AI models and modalities beyond images, such as audio, video and text.”

Watermark techniques for generative art are nothing new. French startup Imatag, launched in 2020, offers a watermarking tool that it claims is unaffected by image resizing, cropping, editing or compression, similar to SynthID. Another company, Steg.AI, uses an AI model to apply watermarks that withstand resizing and other modifications.

But pressure is mounting on technology companies to provide a way to show that works have been generated by AI.

Recently, the Cyberspace Administration of China issued regulations requiring generative AI vendors to mark generated content, including output from text and image generators, without impairing users' use of those services. And at recent US Senate committee hearings, Senator Kyrsten Sinema of Arizona emphasized the need for transparency in generative AI, including through the use of watermarks.

In May, at its annual Build conference, Microsoft committed to watermarking AI-generated images and videos “using cryptographic methods.” Elsewhere, Shutterstock and Midjourney have adopted guidelines to embed a marker indicating that content was created by a generative AI tool. OpenAI’s DALL-E 2, a text-to-image tool, inserts a small watermark in the bottom-right corner of the images it generates.

But so far, a common watermarking standard, whether for applying or detecting watermarks, has proven elusive.


SynthID, like other technologies that have been proposed, won't be of much use to image generators that aren't Imagen, at least not in its current form. DeepMind says it is considering making SynthID available to third parties in the near future. But whether those third parties, particularly the ones developing open source image generators, which lack many of the safeguards of generators gated behind an API, will embrace the technology is another matter entirely.


