Trustworthy AI: Poisoning Attacks on AI
Battista Biggio (University of Cagliari) will present his research on Poisoning Attacks on AI as part of the Trustworthy AI series.
WHAT IS THE TRUSTWORTHY AI SERIES?
Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance in various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which are either not noticeable to humans or which humans can handle reliably. This expert talk series discusses these challenges of current AI technology and presents new research aimed at overcoming these limitations and developing AI systems that can be certified to be trustworthy and robust.
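The adversarial-variation phenomenon described above can be sketched in a few lines. The following toy example is illustrative only and is not taken from the talk; the model, its weights, and the epsilon value are all invented for this sketch. It uses a simple linear classifier and a gradient-sign step to show how a small, bounded input change can flip a model's prediction:

```python
import numpy as np

# Toy linear model: predict class 1 if w.x + b > 0.
# (All values here are invented for illustration.)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.1, 0.2])      # clean input, classified as class 1

# Gradient-sign ("FGSM"-style) step: each feature is shifted by at most
# epsilon in the direction that lowers the class-1 score.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # prediction flips: 1 -> 0
```

Each feature moves by at most 0.2, yet the classification changes; deep networks exhibit the same failure mode with perturbations far too small for humans to notice.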
The expert talk series will cover the following topics:
- Measuring Neural Network Robustness
- Auditing AI Systems
- Adversarial Attacks and Defences
- Explainability & Trustworthiness
- Poisoning Attacks on AI
- Certified Robustness
- Model and Data Uncertainty
- AI Safety and Fairness
The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world.
Speakers, Panelists and Moderators
BATTISTA BIGGIO
Assistant Professor at the University of Cagliari and Co-Founder of Pluribus One
Battista Biggio received the MSc degree in Electronic Engineering, with honors, and the PhD in Electronic Engineering and Computer Science from the University of Cagliari (Italy) in 2006 and 2010, respectively. Since 2007 he has been working in the Department of Electrical and Electronic Engineering of the same university, where he is currently an Assistant Professor. From May 12, 2011 to November 12, 2011, he visited the University of Tübingen (Germany), where he worked on the security of machine learning algorithms against contamination of training data.
WOJCIECH SAMEK
Head of the Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute
Wojciech Samek is head of the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany. Fraunhofer HHI is ranked among the top 20 Artificial Intelligence research labs in the world. He studied computer science at Humboldt University of Berlin, Heriot-Watt University and the University of Edinburgh from 2004 to 2010 and received the Dr. rer. nat. degree with distinction (summa cum laude) from the Technical University of Berlin in 2014. During his studies he was awarded scholarships from the German Academic Scholarship Foundation and the DFG Research Training Group GRK 1589/1, and was a visiting researcher at NASA Ames Research Center, Mountain View, USA. After his PhD he founded the Machine Learning Group at Fraunhofer HHI, which he directed until 2020. Dr. Samek is associated faculty at the Berlin Institute for the Foundations of Learning and Data (BIFOLD), the ELLIS Unit Berlin and the DFG Graduate School BIOQIC. Furthermore, he is an editorial board member of PLoS ONE, Pattern Recognition and IEEE TNNLS, and an elected member of the IEEE MLSP Technical Committee. He is a recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award, and contributed to the MPEG-7 Part 17 standardization. He is co-editor of the Springer book "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning" and has organized various special sessions, workshops and tutorials on topics such as explainable AI, neural network compression, and federated learning. Dr. Samek has co-authored more than 150 peer-reviewed journal and conference papers, some of them listed by Thomson Reuters as "Highly Cited Papers" (i.e., top 1%) in the field of Engineering.