XAI and trust
As part of the Trustworthy AI series, Grégoire Montavon (TU Berlin) will present his research on eXplainable AI (XAI) and trust.
WHAT IS THE TRUSTWORTHY AI SERIES?
Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance in various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations that are either imperceptible to humans or that humans handle reliably. This expert talk series will discuss these challenges of current AI technology and will present new research aimed at overcoming these limitations and developing AI systems that can be certified to be trustworthy and robust.
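The adversarial-variation phenomenon mentioned above can be illustrated with a minimal sketch. Everything here (the linear model, weights, inputs, and budget `eps`) is a hypothetical toy, not material from the talk: a single FGSM-style step, perturbing the input by a small amount in the direction that most hurts the model, flips the prediction even though the change is tiny.

```python
import numpy as np

# Toy linear classifier (illustrative weights, not from the talk).
w = np.array([1.0, -1.0])
b = 0.0

def predict(x):
    """Return class 1 if the linear score w.x + b is positive, else 0."""
    return int(w @ x + b > 0)

x = np.array([0.1, 0.0])      # clean input: score = 0.1, predicted class 1
eps = 0.2                     # small perturbation budget (max change per coordinate)

# FGSM-style step: for a linear score, the gradient w.r.t. the input is w,
# so moving each coordinate by eps against sign(w) lowers the score the most.
x_adv = x - eps * np.sign(w)  # perturbed input: score = -0.3, predicted class 0

print(predict(x), predict(x_adv))  # the tiny perturbation flips the prediction
```

The same idea scales to deep networks, where the gradient is obtained by backpropagation rather than read off directly; the perturbation stays bounded by `eps` per coordinate, which is why it can remain imperceptible.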
The expert talk series will cover the following topics:
- Measuring Neural Network Robustness
- Auditing AI Systems
- Adversarial Attacks and Defences
- Explainability & Trustworthiness
- Poisoning Attacks on AI
- Certified Robustness
- Model and Data Uncertainty
- AI Safety and Fairness
The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world.