Room Alpha
Workshop

Challenging the status quo of AI security

In person
  • Date: 11 July 2025
  • Timeframe: 09:00 – 12:15 CEST
  • Duration: 3 hours 15 minutes

A dynamic opening to set the stage for the day, helping the audience frame key questions and tensions in AI security.

This keynote will provide grounding viewpoints through two mental models, the OODA Loop and the DIKW Pyramid, to help anticipate new and emerging challenges around agentic AI and identity. These mental models give broader context for the subsequent panels and help the audience gain greater clarity on how to tackle these challenges and what new standards may be needed.

This session unpacks the security risks of agentic AI (systems that perceive, plan, remember, and act autonomously) and why such systems raise new challenges for data protection, in particular within the rapidly evolving landscape of multi-agent AI systems and the frameworks needed for effective interaction between diverse autonomous agents. It explores key threats, sharing insights from ITU’s work, while offering practical approaches to safeguard data and examining current standards, gaps, and emerging global efforts to secure AI.

Identity management is a prerequisite for any AI agent-based transaction, and the topic cuts across all sectoral AI developments featured at AI for Good, whether in health, art, or beyond. This session will explore how digital identity frameworks can help establish who AI agents are, on whose behalf they act, and how their authorization or delegation can be verified. It will highlight how robust identity management not only secures online environments but also enables secure and scalable task delegation to autonomous agents.

AI is a double-edged sword in cybersecurity, transforming both defense and attack strategies. For every defensive advance made with AI-powered threat detection, adversaries counter with AI-enhanced evasion techniques, and this race is evolving faster than ever. This session will examine the complex interplay between AI and cybersecurity, covering security for AI, AI in and for cybersecurity, and AI-enabled cyber threats (often referred to as “dark AI”), with a focus on how to defend against the malicious use of AI and how to leverage AI’s potential to raise cybersecurity to new levels of cyber resilience and cyber immunity.
