Google and OpenAI collaborate on AI content tracking technology

The rise of artificial intelligence (AI) has prompted questions about how to identify AI-generated content and distinguish it from human creations. Recently, President Biden announced that several major technology companies, including Google and OpenAI, have committed to developing watermarking schemes to help identify content created with their AI tools.

Digital watermarking is a technique that embeds invisible markings into digital content files, such as images, audio, or video, to signify their source and authenticity. It has been used for years in various types of content to trace origins and deter piracy. The watermarking process inserts a small piece of data, called a payload, into the content file, which can be extracted using specialized software.
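To make the embed-and-extract idea concrete, here is a minimal sketch using least-significant-bit (LSB) steganography, one simple watermarking technique among many. Real schemes are far more robust to compression and editing, and none of the function names here belong to any vendor's actual tool.

```python
# Toy LSB watermark: hide payload bits in the lowest bit of pixel values.
# Illustrative only; production watermarks use much more robust embedding.

def embed_payload(pixels, payload: bytes):
    """Store each bit of the payload in the LSB of successive pixel values."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_payload(pixels, length: int) -> bytes:
    """Reassemble `length` bytes from the LSBs of the first pixels."""
    data = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        data.append(byte)
    return bytes(data)

# Usage: the payload round-trips while pixel values change imperceptibly.
image = [200, 201, 202, 203] * 20          # toy grayscale "image"
marked = embed_payload(image, b"AI:v1")
assert extract_payload(marked, 5) == b"AI:v1"
```

The key property is that the change to each pixel is at most one intensity level, invisible to a viewer but recoverable by software that knows where to look.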

AI tools can be modified to embed a watermark when they produce content, and the payload can point to an online registry containing information about the AI tool used and the user involved in its creation. Watermark extraction tools can be made freely available, allowing users to examine content and determine its AI origins.
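The registry idea described above can be sketched as a simple lookup: an extracted payload acts as an opaque key into a shared database of provenance records. The registry schema and names below are invented for illustration; no such public registry currently exists.

```python
# Hypothetical registry mapping watermark payload IDs to provenance records.
# The schema and entries are illustrative assumptions, not a real service.

AI_REGISTRY = {
    "a1b2c3": {"tool": "ExampleImageGen 2.0", "user": "account-7741"},
}

def identify(payload_id: str) -> str:
    """Resolve an extracted payload ID to a human-readable provenance note."""
    record = AI_REGISTRY.get(payload_id)
    if record is None:
        return "No registry entry: origin unknown or not watermarked"
    return f"Generated with {record['tool']} by {record['user']}"

print(identify("a1b2c3"))  # resolves to the registered tool and user
```

A freely distributed extraction tool would perform the first step (recovering the payload), then query a registry like this to report the content's AI origin.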

Watermarking has similarities to the Content Authenticity Initiative (CAI), started by Adobe in 2019, which aims to track the origin and provenance of content, particularly news content. Adobe recently announced that it is adding the ability to record the use of generative AI in its CAI tools.

However, challenges exist in implementing watermarking schemes. Different types of content require different watermarking techniques, and there are no standard algorithms for specific content types. Each AI tool vendor would likely have to develop its own watermarking scheme and address patent liability and technology licensing. Collaboration among AI technology vendors on standard payload formats and a common registry could simplify the process.
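A standard payload format of the kind the paragraph above envisions might look something like the following. This is purely hypothetical: no such standard exists today, and every field name here is an assumption about what vendors might agree to record.

```python
# Hypothetical standardized watermark payload; the fields are invented
# to illustrate what a cross-vendor format could carry.
import json
from dataclasses import dataclass, asdict

@dataclass
class WatermarkPayload:
    registry_url: str   # which registry resolves this record
    record_id: str      # opaque pointer into that registry
    version: int = 1    # format version for forward compatibility

payload = WatermarkPayload("https://registry.example", "a1b2c3")
wire = json.dumps(asdict(payload)).encode()  # compact bytes to embed
```

With a shared format like this, any vendor's extraction tool could parse any vendor's watermark and know which registry to query, which is exactly the interoperability problem the collaboration would need to solve.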

Identifying AI-generated content is crucial due to the potential explosion of AI-created content. Companies like Mubert have already claimed to have generated millions of AI-produced music tracks. Watermarking would be voluntary, and alternative methods of identifying AI-generated content, such as detection tools, are being developed. An arms race between AI detection and content creation tools is anticipated.

While some view the detection of AI-generated content as challenging, similar skepticism once greeted content recognition technology for detecting copyrighted material online. That technology improved over time and is now widely used; the same may happen with AI detection.

In conclusion, watermarking schemes offer a potential solution for identifying AI-generated content, but challenges remain. Collaboration and standardization efforts are necessary, and alternative detection tools are being developed. The outcome of this ongoing process will shape the future of content creation and recognition in the age of AI.
