Room U
Workshop

Trustworthy AI testing and validation

In person
  • Date
    8 July 2025
    Timeframe
    14:00 - 17:15 CEST
    Duration
    3 hours 15 minutes

    The main objectives of the workshop are to:
    • discuss the research around AI system testing and verification methods;
    • provide an overview of the different methodologies used to test and verify AI systems, along with their strengths and limitations;
    • identify gaps in current methodologies for AI system testing and verification;
    • explore examples of these methodologies and their applications, such as agentic AI testing and LLM security testing; and
    • discuss opportunities for international collaboration on AI testing and verification through an international collaborative platform.

    Schedule

    The objective of this session is to explore the gaps in testing AI systems, the evaluation of trust in AI systems, and opportunities for international collaboration to ensure the effective design, development, and deployment of AI systems that integrate considerations for AI trust. The session will highlight the need for international collaboration on standards for AI testing and assurance. Its outcomes will feed into the Open Dialogue on Trustworthy AI Testing taking place on 9 July.
