Detecting deepfakes and Generative AI: Standards for AI watermarking and multimedia authenticity

Background

Rapid developments in AI technology, such as deep learning, have accelerated the proliferation of misinformation through deepfakes, a type of synthetic AI-generated media (video, images, text or audio) that is becoming increasingly difficult to detect, both by the human eye and by existing detection technologies. These developments have significantly increased cybersecurity risks and digital copyright infringement, and they threaten trust in digital systems.

Objectives of the workshop

The rise of generative AI technology calls for a focus on international standards for determining the authenticity of multimedia, the use of watermarking technology, enhanced security protocols and extensive cybersecurity awareness.

Governments and international organizations are already working towards setting policy measures, codes of conduct and regulations to enhance the security and trust of AI systems.

The main objectives of the workshop are to:

  1. Provide an overview of the current risks of deepfakes and generative AI multimedia, and the challenges regulators face in ensuring a safe, secure and trusted environment;
  2. Discuss the effectiveness of AI watermarking, multimedia authenticity and deepfake detection technologies, their application use cases, and the governance issues and gaps that need to be addressed;
  3. Discuss the areas where technical standards are required and where ITU will have an important role to play;
  4. Explore opportunities for collaboration on standardization activities on AI watermarking and multimedia authenticity protocols;
  5. Highlight the importance of policy measures for the international governance of AI, industry-led initiatives such as the Coalition for Content Provenance and Authenticity (C2PA), and the work of international organizations in this area.

Deepfake technology allows people to swap faces in videos and images, change voices, and alter text in documents. Deceptive videos and images created using generative AI can be used for identity theft, to impersonate public figures (e.g., politicians) and spread fake news, to bypass identity verification methods and commit fraud, to scam people, and to ruin reputations.

AI-generated content can closely resemble or even reproduce copyrighted material, raising questions about copyright infringement. In addition, the use of copyrighted data to train AI models could give rise to legal claims over the unauthorized use of such material. The use of generative AI in content creation could also make it harder for creators to assert and defend ownership of their work.

The main objectives of this session are to introduce the risks that deepfakes and generative AI content pose to the safety and trustworthiness of AI systems, and the policy and legal measures being introduced to address these issues.

This session will discuss the evolution of deepfake detection technology and examine examples of innovative technologies for detecting deepfakes in video, audio, images and text. The focus will be on their range of application and accuracy, and on whether they can help satisfy the requirements of the policy and regulatory measures that governments are planning to implement for AI.
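To make the detection problem concrete, the sketch below shows how a frame-level deepfake screener is commonly wired together: a convolutional backbone fine-tuned as a binary real-versus-synthetic classifier. It is a minimal illustration under assumptions, not any presenter's actual system; the ResNet-18 backbone is a typical but assumed choice, and the weights file "deepfake_detector.pt" and the input image name are hypothetical.

```python
# Minimal frame-level deepfake screening sketch (assumed design):
# a ResNet-18 backbone with a two-class head (real vs. synthetic).
# "deepfake_detector.pt" is a hypothetical fine-tuned weights file.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet-style preprocessing, matching the backbone's training.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def load_detector(weights_path: str) -> torch.nn.Module:
    """Build a ResNet-18 with a 2-class head and load fine-tuned weights."""
    model = models.resnet18()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def p_synthetic(model: torch.nn.Module, image_path: str) -> float:
    """Return the classifier's probability that the image is synthetic."""
    image = Image.open(image_path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)      # add batch dimension
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)
    return probs[0, 1].item()                   # index 1 = "synthetic" here

model = load_detector("deepfake_detector.pt")   # hypothetical weights
print(f"P(synthetic) = {p_synthetic(model, 'suspect_frame.jpg'):.2f}")
```

Production systems typically add face detection, temporal consistency checks across video frames and score calibration on top of such a classifier; how well these pipelines hold up against newer generators is precisely the accuracy question the session examines.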

This session will examine industry-led initiatives such as the Coalition for Content Provenance and Authenticity (C2PA) Content Credentials specifications and AI watermarking innovations. The session will also discuss how combining AI watermarking, to clearly identify generative AI multimedia outputs, with Content Credentials tied to a real-world identity via cryptography could help protect digital copyright ownership.
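The cryptographic binding behind this idea can be illustrated with a short sketch: hash the media file, wrap the hash and provenance claims in a manifest, and sign the manifest with the creator's private key. This shows only the concept; the actual C2PA specification defines its own manifest and signature formats, and the field names, generator name and file names below are hypothetical.

```python
# Conceptual sketch of a signed provenance manifest (not the C2PA format).
# Requires the "cryptography" package (pip install cryptography).
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_manifest(media_path: str, creator: str,
                  key: Ed25519PrivateKey) -> dict:
    """Hash the media file, embed provenance claims, and sign them."""
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    claim = {
        "media_sha256": digest,           # binds the manifest to the file
        "creator": creator,               # real-world identity claim
        "generator": "ExampleGenAI 1.0",  # hypothetical generating tool
        "ai_generated": True,             # disclosure flag
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}

# In practice the key pair would be issued by a trusted authority so the
# signature can be traced back to a verified real-world identity.
key = Ed25519PrivateKey.generate()
manifest = sign_manifest("output.png", "Jane Doe", key)

# Verification: recompute the payload and check the signature; any change
# to the file or to the claims invalidates it.
payload = json.dumps(manifest["claim"], sort_keys=True).encode()
key.public_key().verify(bytes.fromhex(manifest["signature"]), payload)
print("manifest verified")
```

An invisible watermark embedded in the media itself complements this scheme: a signed manifest can be stripped from a copy, whereas a robust watermark travels with the content, which is why the session considers the two techniques together.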
