Adversarial attacks and defences

    Aleksander Madry (MIT) will present his latest research on Adversarial Attacks and Defenses as part of the Trustworthy AI series hosted by Wojciech Samek (Fraunhofer HHI).

    WHAT IS THE TRUSTWORTHY AI SERIES?

    Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance on various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which are either not noticeable to humans or which humans handle reliably. This expert talk series discusses these challenges of current AI technology and presents new research aimed at overcoming these limitations and developing AI systems which can be certified to be trustworthy and robust.

    The expert talk series will cover the following topics:

    • Measuring Neural Network Robustness
    • Auditing AI Systems
    • Adversarial Attacks and Defences
    • Explainability & Trustworthiness
    • Poisoning Attacks on AI
    • Certified Robustness
    • Model and Data Uncertainty
    • AI Safety and Fairness

    The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world.

    Aleksander Madry: Why do ML models fail?

    Shownotes

    00:00 Opening remarks by ITU

    01:30 Introduction by Wojciech Samek

    02:00 Why do ML Models Fail? – Aleksander Madry 

    03:00 Machine Learning: A Success Story 

    • Machine learning is involved in every part of our lives. 

    04:00 Are we there yet? 

    • Much of what we expect from machine learning is still science fiction.

    05:00 Towards ML Deployment 

    • ML systems need to provide positive value, robustness, reliability (they should rarely fail) and interpretability. Do current systems have these properties?

    06:00 Answer: no 

    07:00 Machine Learning is Brittle

    • Brittleness is not only in the input data (see the adversarial-perturbation sketch below), but also within the system itself.
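
    The canonical example of input-space brittleness is a tiny adversarial perturbation. Below is a minimal, hedged sketch of one standard way to construct such a perturbation (the fast gradient sign method); the model, epsilon value and function names are illustrative assumptions, not code from the talk.

    ```python
    # Minimal FGSM-style sketch: an epsilon-bounded, sign-of-gradient perturbation
    # that can flip a classifier's prediction even though it is barely visible.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=8 / 255):
        """model: any differentiable PyTorch classifier; x: images in [0, 1]; y: labels."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that most increases the loss, then clamp to valid pixels.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()
    ```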

    10:00 What is the root of this brittleness?

    • Models are good correlation extractors.
    • Models identify predictive features in the input data.
    • Problem: spurious correlations (a toy sketch follows below).
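
    To make the spurious-correlation point concrete, here is a small self-contained toy sketch (my own illustration, not from the talk): a shortcut feature that happens to agree with the label at training time dominates the learned model, and accuracy drops once that correlation breaks at test time.

    ```python
    # Toy spurious correlation: feature 0 is a weak genuine signal, feature 1 is a
    # shortcut that correlates with the label only in the training data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    y_train = rng.integers(0, 2, n)
    genuine = y_train + 0.5 * rng.normal(size=n)                      # weak real signal
    shortcut = np.where(rng.random(n) < 0.95, y_train, 1 - y_train)   # strong shortcut
    X_train = np.column_stack([genuine, shortcut])

    clf = LogisticRegression().fit(X_train, y_train)

    # At test time the shortcut is pure noise, so the correlation no longer holds.
    y_test = rng.integers(0, 2, n)
    X_test = np.column_stack([y_test + 0.5 * rng.normal(size=n), rng.integers(0, 2, n)])

    print("train accuracy:", clf.score(X_train, y_train))  # high, thanks to the shortcut
    print("test accuracy:", clf.score(X_test, y_test))     # noticeably lower
    ```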

    14:00 Backdoor attack

    • Manipulation of the training data to control model behaviour.
    • If you plant a spurious correlation in face-recognition data, such as a pair of glasses, the system can be made to fail (a planted correlation); a minimal sketch follows below.
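
    A minimal sketch of the planted-correlation (backdoor) idea described above, under assumed names, shapes and poisoning rate: stamp a small trigger patch on a fraction of the training images and relabel them to an attacker-chosen class, so the trained model associates the patch with that class.

    ```python
    # Backdoor / data-poisoning sketch: plant a trigger patch in part of the
    # training set and flip those labels to the attacker's target class.
    import numpy as np

    def poison_training_set(images, labels, target_class, rate=0.05, patch_size=3):
        """images: (N, H, W, C) floats in [0, 1]; labels: (N,) ints."""
        images, labels = images.copy(), labels.copy()
        idx = np.random.choice(len(images), size=int(rate * len(images)), replace=False)
        # Trigger: a small bright square in the bottom-right corner.
        images[idx, -patch_size:, -patch_size:, :] = 1.0
        labels[idx] = target_class
        return images, labels

    # At test time, stamping the same patch on any input tends to steer the model
    # toward `target_class`, because the patch is a perfectly predictive planted
    # correlation in the training data.
    ```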

    15:00 Such correlations already exist

    • They are the result of a flawed data pipeline.
    • Ideal: real-world images -> careful annotations -> benchmark -> output (this works only for a small number of images).
    • Real: scraped images -> crowd labels -> noisy annotations -> benchmarking -> inexact output.

    18:00 ImageNet

    • ImageNet is sourced from social media (Flickr).
    • What does a fish look like on social media?
    • Typically a person holding a fish (as in ImageNet). If you remove the fish and keep only the background, the system still identifies the image as a fish (see the probing sketch below).
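
    A simple way to probe this kind of background reliance (a hedged sketch of my own, not code shown in the talk) is to blank out the annotated object and check whether the classifier's prediction survives on the background alone:

    ```python
    # Mask out the labelled object and see whether the model still predicts the
    # original class from the background. `model` is any PyTorch classifier and
    # `box` is an assumed (x1, y1, x2, y2) object bounding box.
    import torch

    @torch.no_grad()
    def background_only_prediction(model, image, box):
        """image: (C, H, W) tensor with values in [0, 1]."""
        x1, y1, x2, y2 = box
        masked = image.clone()
        masked[:, y1:y2, x1:x2] = 0.0        # remove the object, keep the background
        logits = model(masked.unsqueeze(0))
        return logits.argmax(dim=1).item()   # often still the original label
    ```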

    21:00 Classification task

    • Each image is assigned a single label.
    • An image often contains many valid objects, and hence many plausible labels; the assigned label does not always match the main subject of the image.

    23:00 Analysis of an ML-based imaging tool

    • X-ray example: the model picks up on pen marks made by a doctor.
    • Such incidental patterns can be highly predictive in medical imaging.

    26:00 Current ML Paradigm 

    • Optimize over a training set.
    • Generalize to a test set.
    • Do so robustly (the objectives sketched below make this explicit).
    • Recognizing what is in an image is only one step of the overall pipeline.
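
    In symbols, the paradigm above is empirical risk minimization, and the robustness requirement turns it into a min-max objective. This is the standard formulation from the adversarial-robustness literature, written in my own notation rather than quoted from the talk:

    ```latex
    % Standard training: minimize the expected loss over the data distribution D.
    \min_{\theta} \; \mathbb{E}_{(x, y) \sim D} \big[ L(\theta; x, y) \big]

    % Robust training: also do well under the worst-case perturbation \delta from an
    % allowed set \Delta, e.g. \Delta = \{ \delta : \|\delta\|_\infty \le \varepsilon \}.
    \min_{\theta} \; \mathbb{E}_{(x, y) \sim D} \Big[ \max_{\delta \in \Delta} L(\theta; x + \delta, y) \Big]
    ```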

    28:00 Human-ML misalignment

    • Being successful at a task does not mean your system has learned the underlying concept in your input data.
    • There are many ways to solve the task, and many classification rules that are valid on the data.

    30:00 Potential Cure: Interpretability

    • Which aspects of the input does the model actually use?
    • Example: input -> saliency maps (a gradient-based sketch follows below). Spurious correlations make the result difficult to interpret, and interpretability does not come for free.
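
    A minimal gradient-based saliency sketch for the input -> saliency-map step above (assuming a differentiable PyTorch classifier; names are illustrative):

    ```python
    # Vanilla gradient saliency: how strongly does each input pixel influence the
    # score of the predicted class? Large values mark the regions the model uses.
    import torch

    def saliency_map(model, image):
        """image: (C, H, W) tensor; returns an (H, W) tensor of importance scores."""
        model.eval()
        x = image.unsqueeze(0).clone().requires_grad_(True)
        logits = model(x)
        pred = logits.argmax(dim=1).item()
        logits[0, pred].backward()
        # Aggregate the gradient magnitude over colour channels.
        return x.grad[0].abs().max(dim=0).values
    ```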

    32:00 All the problems we discussed can be traced back to human-ML misalignment

    • One symptom of misalignment is uninterpretability.

    33:00 The million-dollar question

    • How do we trade off the raw correlative power of modern ML against robustness and reliability?

    34:00 It is not at all just about vision

    • Vision is just the most well-studied (and most successful) subfield of modern ML.
    • All of these issues stem from the complexity of real-world inputs.
    • In vision we at least have a reference standard, the human perceptual system, but that does not make the problem easier.

    36:00 Takeaways

    • Machine Learning is like a “sharp knife”. It is very useful if you know how to use it. 
    • Correlation extraction is useful, but it is also a weakness (a double-edged sword).
    • ML Researchers need to embrace the complexity of real-world data and tasks.

    • Practitioners can help clarify how the data was generated and articulate the correct objectives.
    • What would it take to incentivize such cooperation?

    39:00 Start of Q&A

    40:00 Q: If we had a more advanced objective, would that solve the problem of classifying cats vs. dogs?

    • We first have to understand how the model functions. Intervention is what is needed to learn the right concept and obtain more exact outputs.

    43:00 Q: What do you think about more specialized ML methods for specific domains of study?

    • It depends on what your input data is and how you want to interpret it. 

    48:50 Q: Is there any measure to quantify robustness?

    • You need a specific task and task-appropriate metrics (one common choice, robust accuracy, is sketched below).
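
    One commonly used measure is robust accuracy: the fraction of test examples that remain correctly classified after an attack perturbs them within its budget. A hedged sketch, where `attack` could be, for example, the FGSM helper sketched earlier:

    ```python
    # Robust accuracy under a given attack: accuracy measured on the attacked inputs.
    import torch

    def robust_accuracy(model, x, y, attack):
        """attack(model, x, y) returns perturbed inputs within its perturbation budget."""
        x_adv = attack(model, x, y)          # the attack itself may need gradients
        with torch.no_grad():
            preds = model(x_adv).argmax(dim=1)
        return (preds == y).float().mean().item()
    ```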

    50:30 Q: How can you reduce misalignment?

    53:00 Q: What is the relationship between manifold learning and robustness?

    • If you could specify the manifold of natural images and learn on it, you could obtain a more robust system.

    54:20 Q: Do you have domain specialists, such as radiologists, on your team?

    57:30 Q: What is your opinion about ML-based apps for diagnosis?

    • Many of them are more about making money. Better incentives are one way to improve such systems.
