Steps toward trustworthy machine learning


How can we trust systems built from machine learning components? We need advances in many areas, including machine learning algorithms, software engineering, ML ops, and explanation. This talk will describe our recent work in two important directions: obtaining calibrated performance estimates and performing run-time monitoring with guarantees. I will first describe recent work by Jesse Hostetler on performance guarantees for reinforcement learning. Then I’ll review our research on providing guarantees for open category detection and anomaly detection for run-time monitoring of deployed systems. I’ll conclude with some speculations concerning meta-cognitive situational awareness for AI systems.

WHAT IS THE TRUSTWORTHY AI SERIES?

Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance in various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which are either not noticeable to humans or which humans can handle reliably. This expert talk series will discuss these challenges of current AI technology and will present new research aiming at overcoming these limitations and developing AI systems which can be certified to be trustworthy and robust.

The expert talk series will cover the following topics:

  • Measuring Neural Network Robustness
  • Auditing AI Systems
  • Adversarial Attacks and Defences
  • Explainability & Trustworthiness
  • Poisoning Attacks on AI
  • Certified Robustness
  • Model and Data Uncertainty
  • AI Safety and Fairness

The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world.

Presentation - Thomas G. Dietterich

ITU News - Can AI be made trustworthy?

Shownotes

03:30 Outline

  • Part 0: Robust AI and Robust Human Organizations
  • Part 1: Competence Modeling – Calibrated prediction intervals for reinforcement learning – the AI model considers its own limitations
  • Part 2: Anomaly Detection – Open category detection with guarantees – once a system is running, how do we monitor it?

05:10 High Reliability Human Organizations

  • Preoccupation with failure
    • Fundamental belief that the system has unobserved failure modes
    • Treat anomalies and near misses as symptoms of a problem with the system
  • Reluctance to simplify interpretations
    • Comprehensively understand the situation
  • Sensitivity to operations
    • Maintain continuous situational awareness
  • Commitment to resilience
    • Develop the capability to detect, contain, and recover from errors. Practice improvisational problem solving
  • Deference to expertise
    • During a crisis, authority migrates to the person who can solve the problem, regardless of their rank

10:15 Designing AI Systems to be HROs

  • More research is needed on the “near miss” cases
  • There is also a need for methods that deal with unknown problems

Assessment: Designing AI as an HRO

  • B for anomaly detection

Designing a Human + AI Team as an HRO

  • The AI can be good at tracking the situation – but how does it keep the human in the loop without bombarding them with information?

Assessment: Human + AI HROs

  • The assessment here yields no As or Bs – mostly Cs and Ds

15:00 Part 1: Competence Modeling: Prospective MDP Performance Guarantees

  • A method for deciding when an AI system can be trusted to operate autonomously
  • Time is on the horizontal axis of the trajectory-wise prediction interval – we want the true trajectory of the robot to stay within the bounds with probability 1 − δ

A human can monitor the intervals; if the bounds are too wide, the human might want to take control.

17:00 Summary of the Approach

Natural ecosystems are odd in that the reward measure is negative, e.g. an invasive species causing damage.

Quantile Regression (New method)

  • From the hundreds of sampled trajectories, we compute e.g. the 5th and 95th quantiles
  • By itself, this gives no statistical guarantee that the system stays within the bounds
  • This uses quantile regression forests, which make no assumption about the distribution (see the sketch below)
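
A minimal sketch of the per-state quantile regression step. Assumptions: scikit-learn’s gradient boosting with quantile loss stands in for the quantile regression forests mentioned in the talk, and the data, features, and variable names are all illustrative:

```python
# Minimal sketch: quantile regression for returns as a function of the
# starting state. Assumption: GradientBoostingRegressor with quantile loss
# stands in for quantile regression forests; the data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))          # starting-state features
y = X[:, 0] + rng.normal(size=500)     # cumulative reward (toy data)

lower = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y)
upper = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y)

x0 = X[:1]
print("raw 90% interval:", lower.predict(x0)[0], upper.predict(x0)[0])
```

As the talk notes, these raw quantiles carry no coverage guarantee; the conformal step described below supplies one.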

Quantile Regression for Trajectories

The behaviour function takes as input the variables the system will control and outputs the resulting behaviour of the system.

The curves are a function only of the starting state at t0, so a human can decide at the beginning whether to use the system or not – but this alone does not give a guarantee.

24:00 Conformal Guarantees

We divide the dataset into two subsets, D1 and D2: the quantile regressor is fit on D1, and the conformity corrections are computed on D2 and sorted in ascending order (a sketch follows).
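
A minimal sketch of the split-conformal calibration step for a one-dimensional interval, in the spirit of conformalized quantile regression. The D1/D2 roles follow the talk, but the function and variable names are mine:

```python
# Minimal sketch of split-conformal calibration (CQR-style). Assumptions:
# the quantile regressor was fit on D1; lo_cal, hi_cal, y_cal come from D2.
import numpy as np

def conformal_offset(lo_cal, hi_cal, y_cal, delta):
    """Offset that widens raw intervals to achieve 1 - delta coverage."""
    # Nonconformity score: how far each D2 point falls outside its raw
    # interval (negative means it lies inside).
    scores = np.maximum(lo_cal - y_cal, y_cal - hi_cal)
    n = len(scores)
    # Finite-sample-corrected quantile of the sorted scores.
    k = int(np.ceil((n + 1) * (1 - delta)))
    return np.sort(scores)[min(k, n) - 1]

# Usage: report [lo(x) - q, hi(x) + q] with q = conformal_offset(...).
```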

28:00 Conformal Guarantees in ℎ dimensions: compute “exceedances” for each time step

29:20 Conformalized Quantile Regression: SCALED-SD-TRAJECTORY

If the variance at a time step is 0, the scaled exceedance there is set to 0.

The green lines can be thought of as a box in a high-dimensional space.

A 50-dimensional problem is boiled down to a 1-dimensional problem

This is all distribution-free; the maximum tends to take extreme values with heavy tails, so a Gaussian assumption would not be correct (see the sketch below).
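
A hedged sketch of the exceedance reduction: each trajectory’s deviations are scaled per time step and collapsed to their maximum, turning the h-dimensional problem into a one-dimensional one. The scaling by a predicted standard deviation is an assumption for illustration:

```python
# Hedged sketch of the max-exceedance reduction over a horizon of h steps.
# Assumptions: lo, hi are (h,) raw per-timestep quantile bounds and sd is a
# per-timestep scale (e.g. a predicted standard deviation); names are mine.
import numpy as np

def trajectory_score(y, lo, hi, sd, eps=1e-9):
    """Worst scaled exceedance of trajectory y over its interval."""
    exceed = np.maximum(lo - y, y - hi) / (sd + eps)   # per time step
    return float(exceed.max())                         # h dims -> 1 dim

# Calibrating one threshold q on these scalar scores (as in the sketch above)
# yields a box [lo - q*sd, hi + q*sd] covering whole trajectories w.p. 1 - delta.
```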

33:00 Problem 1: Tamarisk Invasions in River Networks

  • Change of notation: τ denotes a trajectory
  • Tamarisk is an invasive species in North America, found e.g. in the Rio Grande basin
  • A given river network is modeled as a binary tree
  • We assume that on each edge only one tree can grow: native, invasive, or none
  • Each action has a different cost, and killing doesn’t always work (e.g. it succeeds only 70% of the time); if invasive seeds land on an empty edge, they can take the space of a native tree (a toy sketch follows)
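
A toy sketch of one stochastic step of such a tamarisk-style MDP on the edges of a river network. The 70% kill-success rate is from the talk; the colonization probability and state encoding are invented for illustration:

```python
# Toy sketch of one stochastic step of a tamarisk-style MDP. Assumptions:
# p_kill = 0.7 is from the talk; p_colonize and the encoding are invented.
import random

EMPTY, NATIVE, INVASIVE = 0, 1, 2

def step(edges, kill_targets, p_kill=0.7, p_colonize=0.3):
    """edges: list of per-edge states; kill_targets: edges we try to treat."""
    edges = list(edges)
    for e in kill_targets:
        if edges[e] == INVASIVE and random.random() < p_kill:
            edges[e] = EMPTY                    # treatment succeeded
    for e, s in enumerate(edges):
        if s == EMPTY and random.random() < p_colonize:
            edges[e] = INVASIVE                 # invasive seed colonizes
    return edges

print(step([INVASIVE, EMPTY, NATIVE], kill_targets=[0]))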

37:09 Example Prospective Intervals and Actual Trajectories

  • Notation: T stands for Tamarisk

The first example has one tamarisk tree; it can be fixed quickly.

The second example has 3 tamarisk trees; it can take a while, e.g. 10 years, to bring the system under control.

The third example was a failure: 5 invasive trees and no native trees. In a best-case scenario this would take 6 years but could take much longer. In the end it indeed took 10, 11, or 12 years – on one edge the trees had to be killed 3 times.

39:00 Tamarisk prediction interval coverage

  • Using 5,000 test trajectories, 4 values of δ, and 4 datasets
  • Grey: raw quantile intervals
  • Blue: from the slides
  • Green: improved estimates of the upper and lower bounds (confidence intervals)

The guarantee achieved its nominal coverage value (a minimal check is sketched below).
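
A minimal sketch of how nominal coverage can be checked empirically on held-out trajectories; the array shapes and names are assumptions:

```python
# Minimal sketch of an empirical coverage check. Assumptions: y_test holds
# (n, h) true test trajectories; lo, hi hold the matching interval bounds.
import numpy as np

def empirical_coverage(y_test, lo, hi):
    """Fraction of trajectories lying entirely inside their interval."""
    inside = np.all((lo <= y_test) & (y_test <= hi), axis=1)
    return float(inside.mean())

# For delta = 0.1 we expect coverage at or just above 0.90.
```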

41:08 MDP 2: StarCraft Battles

Reinforcement learning plays out a two-team battle: red starts out with fewer units, and at time t = 14 they receive a random amount of reinforcements.

From the blue team’s perspective, can we predict how the game will go?

42:10 StarCraft prediction interval coverage

  • We have to be careful with the interpretation of prediction intervals
  • Once you have around 1,000 points, it’s fine

Two weaknesses:

  1. the method needs a lot of data
  2. the intervals might be misinterpreted

We want to give a decision maker a guarantee.

We want to see if the failures are properly distributed – this is a topic for future research.

45:40 Part 2: Runtime Open Category Detection

At runtime, what happens if, for example, you deploy a system in Australia, where there are kangaroos? (This really happened to Volvo.) This is open-set detection.

47:30 Method: Reject Aliens Using Anomaly Detection

Alien obstacles: compute an alienness score; if the input is not too alien, call the classifier.

How do we calculate a threshold τ for raising an alien alarm? (A minimal sketch of the reject-then-classify pattern follows.)
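
A minimal sketch of the reject-then-classify pattern, with placeholder functions standing in for a trained anomaly detector and classifier:

```python
# Minimal sketch of reject-then-classify. Assumptions: anomaly_score and
# classify are placeholders for a trained detector and classifier; tau is
# the threshold whose setting is discussed at 48:50 below.
def predict_or_alarm(x, anomaly_score, classify, tau):
    score = anomaly_score(x)       # higher score = more alien
    if score > tau:
        return "ALIEN_ALARM"       # hand off to a human or fallback policy
    return classify(x)             # input looks nominal; trust the classifier
```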

48:50 How to set 𝜏 without labeled data?

  • We want to control the missed alarm rate

49:58 Idea: Use Unlabeled Data that Contains Novel Class Examples

  • This unlabeled data follows a mixture distribution of nominal and alien examples

51:03 CDFs of Nominal, Mixture, and Alien Anomaly Scores

  • The idea is to combine the mixture data with the nominal CDF to recover the alien CDF (the red curve) and then, given the allowed missed-alarm percentage, read off the anomaly-score threshold (a hedged sketch follows)
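
A hedged sketch of this threshold-setting idea using empirical CDFs, assuming higher scores mean “more alien”, a known mixing proportion α, and an allowed missed-alarm rate q; the implementation details are mine, not the paper’s:

```python
# Hedged sketch of setting tau from unlabeled mixture data via empirical CDFs.
# Assumptions: higher anomaly score = more alien; alpha (alien fraction) is
# known here; q is the allowed missed-alarm rate.
import numpy as np

def set_tau(nominal_scores, mixture_scores, alpha, q=0.05):
    grid = np.sort(mixture_scores)
    F_nom = np.searchsorted(np.sort(nominal_scores), grid,
                            side="right") / len(nominal_scores)
    F_mix = np.searchsorted(grid, grid, side="right") / len(grid)
    # Invert the mixture: F_mix = (1 - alpha) * F_nom + alpha * F_alien.
    F_alien = np.clip((F_mix - (1 - alpha) * F_nom) / alpha, 0.0, 1.0)
    # Largest threshold whose estimated missed-alarm rate stays below q.
    ok = grid[F_alien <= q]
    return float(ok[-1]) if len(ok) else float(grid[0])
```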

53:20 Estimating the mixing proportion 𝛼

  • The theory assumes α is known; if not, we can estimate it, and the estimates are pretty accurate (a hedged sketch of one such estimator follows)
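
A hedged sketch of one simple way to estimate α from the two empirical CDFs: where aliens are rare (low anomaly scores), F_mix ≈ (1 − α) · F_nom, which yields a lower-bound estimate. This ratio estimator is an illustration, not necessarily the estimator from the underlying paper:

```python
# Hedged sketch of a simple lower-bound estimator for alpha. Where aliens are
# rare (low scores), F_mix ~ (1 - alpha) * F_nom, so alpha >= 1 - F_mix/F_nom.
import numpy as np

def estimate_alpha(nominal_scores, mixture_scores):
    # Evaluate both CDFs on the low-score region of the nominal distribution.
    grid = np.quantile(nominal_scores, np.linspace(0.05, 0.5, 10))
    F_nom = np.searchsorted(np.sort(nominal_scores), grid,
                            side="right") / len(nominal_scores)
    F_mix = np.searchsorted(np.sort(mixture_scores), grid,
                            side="right") / len(mixture_scores)
    return float(np.clip(1.0 - np.min(F_mix / np.maximum(F_nom, 1e-9)), 0, 1))
```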

53:55 Q3: How good are Recall and FPR in practice? UCI Datasets

  • Alpha is the fraction of aliens
  • The spatial dataset, which was very large, did very well (blue line)

55:10 Concluding Remarks

  • Robust AI and High-Reliability Organizations
    • Competence modeling for HRO teamwork
    • Anomaly Detection
  • Competence Modeling
    • Calibrated prediction intervals for reinforcement learning
      • Quantile regression (value function approximation) to predict bounds on reward
      • Conformalization to obtain tight probabilistic guarantees
  • Anomaly Detection
    • Open category detection with guarantees
      • Theoretical guarantees on the missed alarm rate for novel-class queries
      • Practical algorithms for estimating the novelty proportion and setting the alarm threshold

56:10 Acknowledgments

  • National Science Foundation
  • DARPA
  • Gift from Huawei, Inc.

56:45 Start of the Q&A

57:00 Question: CQR – what are the limits of CQR when the data-generating process changes?

  • Both techniques assume the world is not changing – i.e., that the distribution stays the same. This is a huge weakness; we need to add change detection to AI systems.
  • I think we need a “dynamic machine learning” – in statistics this is more mature, e.g. time-series forecasting

59:00 Question: QR method – if you have a new unseen data point, can we use the same random forest?

  • We use the quantile regressor to predict the bounds, then add the conformal correction
  • The quantile regressor is used at runtime
  • The confidence-interval adjustment is the same for all starting states

60:00 Q: What is missing? What would you want, in order to give this an A instead of a D?

  • In image detection, it would be nice if the explanation could say “this is a new object” and describe its differences, e.g. it has a long tail and it jumps (a kangaroo) – today’s systems don’t have the right vocabulary
  • In the RL or dynamic case, you want to keep updating your model

62:50 Q: Is it possible to classify the unknown?

  • Anomaly detection is the baseline method for this
  • I had a project on recognising and measuring insects – there were 54 classes we wanted to recognise (pre deep learning) – but it turned out there were many more categories of objects. We had used grayscale images because colour isn’t needed to distinguish the 54 classes, but it would have been useful for the unplanned insects.

65:40 Q: Do we need to move to a more generative model?

  • From a theoretical standpoint it’s the right thing, but deep generative models exhibit many of the same problems – I’m becoming skeptical; the representation problem is a central challenge

66:45 Q: With multitask learning – will it learn more meaningful representations?


  • That’s a good approach
  • The other approach is to remember everything – the problem is that it also remembers irrelevant things, so the system starts reacting to irrelevant changes
  • Self-driving car: if it can reason that a stationary object is irrelevant – moving objects on the road are the real challenge