Towards auditable AI systems

    Artificial Intelligence (AI) systems such as deep neural networks play a growing role as part of decision and control systems in a plethora of applications. Since some of these applications are safety- and security-critical, the use of AI systems poses new challenges with regard to aspects such as IT security, safety, robustness and trustworthiness. To meet these challenges, a generally agreed-upon framework for auditing AI systems throughout their life cycle, comprising evaluation strategies, tools and standards, is required. Such a framework is under development but, as of now, only partly ready for practical use. In this talk, focusing on the application domains of autonomous driving and biometrics, the current status of AI system auditing is presented together with open questions and future directions.

    WHAT IS TRUSTWORTHY AI SERIES?

    Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance in various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which are either not noticeable to humans or can be handled reliably by them. This expert talk series will discuss these challenges of current AI technology and will present new research aiming at overcoming these limitations and developing AI systems which can be certified to be trustworthy and robust.

    The expert talk series will cover the following topics:

    • Measuring Neural Network Robustness
    • Auditing AI Systems
    • Adversarial Attacks and Defences
    • Explainability & Trustworthiness
    • Poisoning Attacks on AI
    • Certified Robustness
    • Model and Data Uncertainty
    • AI Safety and Fairness

    The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world.

    Presentation - Arndt Von Twickel

    Shownotes

    04:00 Responsibilities of BSI in the context of AI

    • BSI is the German Federal Cyber Security Authority (www.bsi.bund.de/EN/).
    • It shapes information security in digitalization through prevention, detection and reaction.
    • BSI works for the government, business and society.

    04:30 What is the role of BSI in the context of AI?

    1. Vulnerabilities of AI systems – evaluation of existing and development of new methods, as well as development of technical guidelines and standards – this is the focus of today’s talk
    2. AI as a tool to defend IT systems – recommendations on existing technologies, development of new ones, and guidelines for their deployment and operation
    3. AI as a tool to attack IT systems – How can one protect IT systems from qualitatively new AI-based attacks?

    05:00 AI systems are connected and embedded in safety- and security-critical applications – they interact with the environment via sensors (inputs) and actuators (outputs)

    1. Classical IT: cIT
    2. Symbolic AI: sAI
    3. Connectionist AI systems: cAI <- this is the focus of today’s talk

    06:00 cAI differs qualitatively from cIT and sAI

    • In symbolic AI systems, the developer is directly responsible for the design and testing of the system, interpretation is possible (at least in principle).
    • In connectionist AI systems, the developer can only control the boundary conditions, such as the data fed into the system.

    08:00 Connectionist AI (cAI)-specific and qualitatively new problems

    1. The input and state spaces are huge – in particular, image-based models are very large
    2. Black-box properties, as described on the previous slide on how cAI is qualitatively different
    • The whole life cycle of the system has to be considered to arrive at a safe/secure system

    09:00 Embedded nature of cAI

    10:00 IT security of systems
    Consider:

    1. Interpretability
    2. Evaluability
    3. Verifiability
    • BSI does not itself work on user acceptance, ethics, or data protection – but links to these topics are made.

    12:00 Vulnerability of cAI systems

    The development lifecycle consists of the following stages:

    1. Planning
    2. Data
    3. Training
    4. Evaluation
    5. Operation
    • Which attack vectors exist?
    • Backdoor attacks – highly dependent on the training data
    • Adversarial attacks – attack the operating cAI system (alongside classic IT threats)
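    To make the attack surface easier to scan, the following minimal Python sketch organises the points above as a lookup table from life-cycle stage to attack vector; the grouping is illustrative, not an official BSI taxonomy, and stages for which no AI-specific vector is named in the talk are left empty.

```python
# Illustrative mapping from life-cycle stage to the AI-specific attack
# vectors mentioned above (assumed grouping, not an official taxonomy).
LIFECYCLE_ATTACK_VECTORS = {
    "planning":   [],                                  # none named in the talk
    "data":       ["backdoor attack (poisoned training data)"],
    "training":   ["backdoor attack (poisoned training data)"],
    "evaluation": [],                                  # none named in the talk
    "operation":  ["adversarial attack", "classic IT threats"],
}

for stage, vectors in LIFECYCLE_ATTACK_VECTORS.items():
    print(f"{stage:>10}: {', '.join(vectors) or '-'}")
```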

    15:00 AI-specific attacks on road systems

    • Poisoning attack – e.g. post-it notes can be used to get a system to misinterpret a 50 km/h sign as an 80 km/h sign
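    As a concrete illustration of how such training-data poisoning could look, the minimal sketch below stamps a small bright square (standing in for a post-it) onto a fraction of the source-class training images and flips their labels to the target class. The array layout, trigger size and poisoning rate are assumptions for illustration, not the attack demonstrated in the talk.

```python
import numpy as np

def poison_dataset(images, labels, source_class, target_class,
                   poison_fraction=0.1, rng=None):
    """Backdoor-style poisoning sketch: stamp a white 8x8 square (the
    "post-it") on a fraction of source-class images and relabel them as
    the target class. Assumes images of shape (N, H, W, C) in [0, 1]."""
    rng = rng if rng is not None else np.random.default_rng(0)
    images, labels = images.copy(), labels.copy()
    candidates = np.flatnonzero(labels == source_class)   # e.g. 50 km/h signs
    chosen = rng.choice(candidates,
                        size=int(len(candidates) * poison_fraction),
                        replace=False)
    images[chosen, -8:, -8:, :] = 1.0   # bright trigger patch in the corner
    labels[chosen] = target_class       # e.g. relabelled as 80 km/h
    return images, labels
```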

    17:00 Adversarial attacks

    • Adding noise
    • This can be partially addressed by not allowing the attacker access to the system – however, this can be circumvented using a substitute model
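    A minimal sketch of such a noise-based attack is given below, using the single-step fast gradient sign method against a PyTorch classifier; the model, the input range [0, 1] and the perturbation budget epsilon are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Add small adversarial 'noise' to a batch x (labels y) by taking one
    gradient-sign step that increases the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0)  # keep a valid image
    return x_adv.detach()
```

    As the notes above point out, keeping the target model secret only helps partially: an attacker can compute the same kind of perturbation on a substitute model and transfer it.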

    19:00 What kind of measures of defence are there?

    • Training of the developer on AI-specific attacks
    • Squeezing, compression (i.e. pre-processing of the data)
    • Certification of training data
    • Adversarial training – adversarially perturbed examples are already fed to the model during training; this is currently considered one of the main ways of dealing with adversarial attacks (see the sketch below)

    Single measures are not effective against adaptive attackers; a combination of methods is needed.
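    The adversarial-training idea mentioned in the list above can be sketched in a few lines: each batch is first perturbed with a single-step attack against the current model, and the model is then updated on the perturbed examples. The optimiser, the single-step attack and the hyper-parameters are illustrative assumptions, not the exact procedure from the talk.

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One adversarial-training step on a batch (x, y): craft single-step
    adversarial examples against the current model, then train on them."""
    # 1) Craft adversarial examples (single gradient-sign step).
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0.0, 1.0).detach()

    # 2) Update the model on the perturbed batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```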

    22:00 Testing and improving the robustness of neural networks

    • Robustness tests can be used to improve the datasets – and then the models
    • E.g. varying angles, lighting etc. of images of signs (basic and specific features of the inputs); see the sketch below
    • E.g. light falling on a traffic sign through the leaves of a tree, which then move in the wind
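    A minimal sketch of such a robustness test is shown below: test-set accuracy is measured under simple input perturbations of the kind mentioned above (rotation for viewing angle, brightness for lighting). The perturbation set, the torchvision helpers and the data loader are assumptions for illustration.

```python
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def accuracy_under_perturbation(model, loader, perturb):
    """Accuracy of `model` on batches from `loader` after applying `perturb`."""
    correct = total = 0
    for x, y in loader:
        preds = model(perturb(x)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total

# Illustrative perturbations for traffic-sign images (angle, lighting):
perturbations = {
    "clean":     lambda x: x,
    "rotate_15": lambda x: TF.rotate(x, angle=15.0),
    "darker":    lambda x: TF.adjust_brightness(x, brightness_factor=0.5),
}
# for name, p in perturbations.items():
#     print(name, accuracy_under_perturbation(model, test_loader, p))
```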

    26:00 AI System development process

    • A linear process, but every phase is connected to the others
    • Combinations of perturbations are common in natural situations and degrade performance further.
    • The order in which perturbations occur also affects the performance of models (see the sketch below).
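    The effect of ordering can already be seen with two toy perturbations: because each step clips the image back into the valid value range, applying a lighting change before or after an additive disturbance produces different inputs. The perturbations, values and image size below are made up for illustration.

```python
import torch

def darken(x):       # global lighting change
    return (x * 0.5).clamp(0.0, 1.0)

def add_glare(x):    # additive disturbance, e.g. light through moving leaves
    return (x + 0.3).clamp(0.0, 1.0)

x = torch.rand(1, 3, 32, 32)      # dummy image batch with values in [0, 1]
a = darken(add_glare(x))          # glare first, then darkening
b = add_glare(darken(x))          # darkening first, then glare
print((a - b).abs().max())        # > 0: the order of perturbations matters
```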

    30:00 Open Challenges

    • What should the communication between developers and users look like? Training and education are needed.
    • Establish ranges of acceptability and uncertainty

    35:00 How to achieve acceptable levels of security, safety, audit quality

    • Through research and development.
    • By taking favourable boundary conditions into account.

    40:00 Start of Q&A

    41:00 Question: can we embed an insecure system in a secure environment, and will that be secure?
    E.g. going to the moon involves risky systems, but with redundancies

    • This is a functional safety approach
    • So far in most systems there is a human in the loop
    • For traffic signs, for example, there is normally redundancy via a map that also contains speed limits, but if GPS fails this redundancy no longer works

    42:00 Question: How long will it take until there are clear criteria for AI models? Do you expect changes?

    • Making regulations “from the ground up” will not work; the systems are embedded, so we need to build on existing systems.
    • Regulation will start by looking at specific use cases; once there are many specific use cases, a generalised model can be developed.

    43:00 Question: About robustness scores – are there ways to quantify this?

    • This depends on the context – it determines what is sufficient
    • E.g. optimising for performance first and only then trying to fix robustness against attacks will lead to a decrease relative to the original performance
    • Rather a form of co-optimisation is needed – this will be a major challenge

    44:00 Question: What about using not the maximum but the average reliability score, i.e. looking at the product of the probability and the cost?

    • In some fields, such as health care or self-driving cars, there is a very low tolerance for errors – the approach should be to develop evaluations for individual use cases
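    The “probability times cost” idea from the question can be illustrated in a few lines: instead of simply predicting the class with the highest probability, one chooses the decision with the lowest expected cost. The class probabilities and the cost matrix below are invented for illustration.

```python
import numpy as np

probs = np.array([0.7, 0.2, 0.1])      # model's class probabilities
cost = np.array([                      # cost[i, j]: true class i, decide j
    [0.0,   1.0,  1.0],
    [10.0,  0.0,  5.0],
    [100.0, 50.0, 0.0],                # errors on class 2 are very costly
])
expected_cost = probs @ cost           # expected cost of each possible decision
print(probs.argmax(), expected_cost.argmin())   # the two choices can differ
```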

    46:00 Question: What is the failure acceptability of AI in vehicles?

    • This is a political and ethical discussion – humans are not error-free, so we should compare with existing systems; e.g. AI models should be at least as good as the systems they replace

    53:00 Question: Do we need to be able to understand the system to audit it?

    • It depends on the system and the level of acceptability of failure. A better understanding will deliver safer systems.
