Explainable AI in cases when you cannot spot the dogs or cats

  • Register (or log in) to the Neural Network to add this session to your agenda or watch the replay

  • Date
    23 March 2026
    Timeframe
    17:00 - 18:00 CET
    Duration
    60 minutes

    This talk explores the use of explainable AI (XAI) in settings where ground truth is not known a priori and therefore cannot be directly validated. In such scenarios, explanations are essential for assessing model behavior and reliability. The talk begins with a brief discussion of model pruning, based on a 2021 study, highlighting its relevance for improving computational and energy efficiency in deep learning systems. This provides a foundation for understanding how model simplification can relate to interpretability.
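    To make the pruning idea concrete, here is a minimal sketch of unstructured magnitude pruning in NumPy, zeroing out the smallest-magnitude weights of a layer. This is an illustrative example, not the specific method from the 2021 study discussed in the talk; the function name and toy weights are assumptions for demonstration.

    ```python
    import numpy as np

    def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
        """Zero out the smallest-magnitude weights until the requested
        fraction of entries is zero (unstructured magnitude pruning)."""
        k = int(weights.size * sparsity)  # number of weights to remove
        if k == 0:
            return weights.copy()
        # k-th smallest absolute value becomes the pruning threshold
        threshold = np.sort(np.abs(weights), axis=None)[k - 1]
        mask = np.abs(weights) > threshold
        return weights * mask

    # Toy 4x4 weight matrix: pruning 50% keeps only the 8 largest magnitudes.
    w = np.arange(1.0, 17.0).reshape(4, 4)
    pruned = magnitude_prune(w, 0.5)
    ```

    Pruned weights can then be stored sparsely and skipped at inference time, which is where the computational and energy savings mentioned above come from.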

    The main focus is on recent work on predicting stem cell differentiation into beta cells. In this context, XAI methods are used to uncover meaningful biological patterns from complex data, despite the absence of a fully established ground truth. The aim is to gain insight into the model’s decision-making process and to identify biologically plausible signals that can guide further experimental validation. Overall, the talk illustrates how XAI can support scientific discovery in high-uncertainty domains by enabling a deeper understanding of model predictions beyond standard performance metrics.
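    One simple family of XAI methods the talk's theme can be illustrated with is occlusion-based attribution: score each input feature by how much the prediction changes when that feature is replaced with a baseline value. The sketch below is an assumption for illustration (a toy linear "model" stands in for a trained network), not the method used in the stem cell study.

    ```python
    import numpy as np

    def occlusion_attribution(model, x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
        """Attribute the prediction to each feature by measuring the drop
        in model output when that feature is occluded with a baseline."""
        base_pred = model(x)
        scores = np.zeros_like(x, dtype=float)
        for i in range(x.size):
            occluded = x.copy()
            occluded[i] = baseline  # replace one feature at a time
            scores[i] = base_pred - model(occluded)
        return scores

    # Toy "model": a fixed linear scorer standing in for a trained network.
    w = np.array([2.0, -1.0, 0.0])
    model = lambda v: float(v @ w)
    x = np.array([1.0, 1.0, 1.0])
    attr = occlusion_attribution(model, x)
    ```

    In a setting without ground truth, attribution scores like these are not validated against labels directly; instead, one checks whether the highlighted features correspond to biologically plausible signals, which is the workflow the talk describes.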


    Session Objectives:

    By the end of this session, participants will be able to:

    • Explain the challenges of applying XAI in scenarios where ground truth is unknown.
    • Describe the principles of model pruning and its role in improving energy efficiency.
    • Analyze how XAI methods can reveal patterns in complex biological data, such as stem cell differentiation.
    • Evaluate the reliability of model explanations in high-uncertainty settings.
    • Apply basic XAI concepts to interpret model predictions beyond standard performance metrics.


    Recommended Mastery Level/Prerequisites:

    Participants should have a basic understanding of machine learning and deep learning concepts. Familiarity with neural networks and introductory knowledge of explainable AI methods is recommended. No prior expertise in biology is required.
