Safety and robustness for deep learning with provable guarantees

  • Date: 28 October 2021
  • Timeframe: 15:00 - 16:00 CEST (Geneva)
  • Duration: 60 minutes

Computing systems are becoming ever more complex, with decisions increasingly based on deep learning components. A wide variety of applications are being developed, many of them safety-critical, such as self-driving cars and medical diagnosis. Since deep learning models are unstable with respect to adversarial perturbations, rigorous software development methodologies are needed that encompass machine learning components. Using illustrative examples from autonomous driving and NLP, this lecture will describe progress in developing safety verification techniques for deep neural networks, which aim to prove that a network's decisions are robust to input perturbations and causal interventions. The lecture will conclude with an overview of the open challenges in this field.
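
To make "provable guarantees" concrete: one widely used family of verification techniques bounds a network's outputs over all inputs within a small perturbation radius. The sketch below illustrates this idea with interval bound propagation on a toy ReLU network. It is a generic illustration, not the specific methods presented in the lecture; the network weights, the input, the epsilon radius, and all function names are invented for the example.

```python
import numpy as np

# A minimal sketch of local robustness certification via interval bound
# propagation (IBP) on a toy fully connected ReLU network. All weights,
# the input, and the epsilon radius are illustrative placeholders.

def affine_bounds(lower, upper, W, b):
    """Propagate the box [lower, upper] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return (W_pos @ lower + W_neg @ upper + b,
            W_pos @ upper + W_neg @ lower + b)

def forward(x, layers):
    """Standard forward pass (ReLU on all but the last layer)."""
    for i, (W, b) in enumerate(layers):
        x = W @ x + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def certify_robust(x, epsilon, layers):
    """Return True if the predicted class provably cannot change for any
    perturbation of x within L-infinity radius epsilon (sound, incomplete)."""
    pred = int(np.argmax(forward(x, layers)))
    lower, upper = x - epsilon, x + epsilon
    for i, (W, b) in enumerate(layers):
        lower, upper = affine_bounds(lower, upper, W, b)
        if i < len(layers) - 1:  # ReLU is monotone, so apply it to the bounds
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    # Robust iff the worst-case logit of the predicted class still beats
    # the best-case logit of every other class.
    return all(lower[pred] > upper[j] for j in range(len(upper)) if j != pred)

rng = np.random.default_rng(0)
layers = [(rng.normal(size=(4, 3)), np.zeros(4)),   # hidden layer
          (rng.normal(size=(2, 4)), np.zeros(2))]   # output logits
x = np.array([0.5, -0.2, 0.1])
print(certify_robust(x, epsilon=0.01, layers=layers))
```

Because interval bounds are conservative, a True answer constitutes a proof of robustness, while a False answer is inconclusive rather than a counterexample; tighter relaxations and complete solvers trade more computation for fewer inconclusive cases.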
