Trustworthy AI: Himabindu Lakkaraju

Himabindu Lakkaraju (Harvard Business School) will present her research as part of the Trustworthy AI series.

WHAT IS THE TRUSTWORTHY AI SERIES?

Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance on various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which humans either do not notice at all or handle reliably. This expert talk series will discuss these challenges of current AI technology and will present new research aiming to overcome these limitations and to develop AI systems that can be certified as trustworthy and robust.
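To make the adversarial fragility mentioned above concrete, the sketch below constructs such a tiny input variation with the standard Fast Gradient Sign Method (FGSM). It is illustrative only and not taken from the series: the toy model, input, label and epsilon are assumptions (a randomly initialized classifier rather than a trained network), but the gradient-sign step is the usual way such perturbations are built.

```python
# Minimal FGSM sketch: a tiny, bounded perturbation of the input can flip a
# model's prediction. Model, input, label and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A toy classifier standing in for a trained deep network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

x = torch.rand(1, 1, 28, 28)   # "clean" input image
y = torch.tensor([3])          # assumed true label
epsilon = 0.1                  # perturbation budget (barely visible to humans)

# Gradient of the loss with respect to the input.
x.requires_grad_(True)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

# FGSM step: move in the direction that increases the loss, keep a valid image.
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

With a trained network the adversarial prediction typically differs from the clean one even though the two inputs are nearly indistinguishable; with the random toy model above the effect may or may not appear on a given run.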

The expert talk series will cover the following topics:

  • Measuring Neural Network Robustness
  • Auditing AI Systems
  • Adversarial Attacks and Defences
  • Explainability & Trustworthiness
  • Poisoning Attacks on AI
  • Certified Robustness
  • Model and Data Uncertainty
  • AI Safety and Fairness

The Trustworthy AI series is moderated by Wojciech Samek, Head of the AI Department at Fraunhofer HHI, which is ranked among the top 20 AI labs in the world.

Speakers, Panelists and Moderators

  • HIMABINDU LAKKARAJU
    Assistant Professor
    Harvard Business School
I am an Assistant Professor in the Technology and Operations Management Group at Harvard Business School. My research primarily involves machine learning and its applications to high-stakes decision making. I lead the AI4LIFE research group at Harvard and I recently co-founded the Trustworthy ML Initiative (TrustML) to help lower entry barriers into trustworthy ML and bring together researchers and practitioners working in the field. My current research is generously supported by NSF, Google, the Harvard Data Science Initiative, Amazon, and Bayer. Prior to my stint at Harvard, I received my PhD in Computer Science from Stanford University.
  • WOJCIECH SAMEK
    Head of Department of Artificial Intelligence
    Fraunhofer Heinrich Hertz Institute
Wojciech Samek is head of the Department of Artificial Intelligence and the Explainable AI Group at Fraunhofer Heinrich Hertz Institute (HHI), Berlin, Germany. The Fraunhofer Heinrich Hertz Institute (HHI) is ranked among the top 20 Artificial Intelligence research labs in the world. He studied computer science at Humboldt University of Berlin, Heriot-Watt University and the University of Edinburgh from 2004 to 2010 and received the Dr. rer. nat. degree with distinction (summa cum laude) from the Technical University of Berlin in 2014. During his studies he was awarded scholarships from the German Academic Scholarship Foundation and the DFG Research Training Group GRK 1589/1, and was a visiting researcher at NASA Ames Research Center, Mountain View, USA. After his PhD he founded the Machine Learning Group at Fraunhofer HHI, which he directed until 2020. Dr. Samek is associated faculty at the Berlin Institute for the Foundations of Learning and Data (BIFOLD), the ELLIS Unit Berlin and the DFG Graduate School BIOQIC. Furthermore, he is an editorial board member of PLoS ONE, Pattern Recognition and IEEE TNNLS and an elected member of the IEEE MLSP Technical Committee. He is a recipient of multiple best paper awards, including the 2020 Pattern Recognition Best Paper Award, and contributed to the MPEG-7 Part 17 standardization. He is co-editor of the Springer book "Explainable AI: Interpreting, Explaining and Visualizing Deep Learning" and has organized various special sessions, workshops and tutorials on topics such as explainable AI, neural network compression, and federated learning. Dr. Samek has co-authored more than 150 peer-reviewed journal and conference papers, some of them listed by Thomson Reuters as "Highly Cited Papers" (i.e., top 1%) in the field of Engineering.

Date

24 Jun 2021

Time

15:00 CEST (Geneva)
Topics

Safety