Room U
Workshop

Trustworthy AI testing and validation

In person
  • Date
    8 July 2025
    Timeframe
    14:00 - 17:15 CEST
    Duration
    3 hours 15 minutes

    The main objectives of the workshop are to:

    a) discuss the research around AI system testing and verification methods,

    b) provide an overview of the different methodologies that are used to test and verify AI systems and their strengths/limitations,

    c) identify any gaps in current methodologies for AI system testing and verification,

    d) explore examples of some of the methodologies and their applications in AI system testing such as Agentic AI testing and LLM security testing, and

    e) discuss opportunities for international collaboration on AI testing and verification through the ITU Trustworthy AI Testing and Knowledge Platform.

    The main objective of this session will be to discuss the research around AI system testing and verification methods; the different methodologies used to test and verify AI systems, along with their strengths and limitations; the different types and uses of metrics; and the process of choosing metrics.

    The objective of this session is to explore the gaps in testing AI systems, the evaluation of trust in AI systems, and opportunities for international collaboration to ensure the effective design, development, and deployment of AI systems that integrate considerations for AI trust. The ITU Trustworthy AI Testing and Knowledge Platform will be highlighted in this session.
