Explainable Multimodal Agents with Symbolic Representations & Can AI be less biased?

  • Date
    17 March 2025
    Timeframe
    16:00 - 17:00 CET Geneva
    Duration
    60 minutes

    Part 1 – Ruotong Liao

    Perceive, Remember, and Predict: Explainable Multimodal Agents with Symbolic Representations

    Ruotong’s work focuses on large language models (LLMs) that reason over multimodal, time-dependent data while ensuring explainability.

    Specifically, this talk explores how integrating temporal reasoning and symbolic knowledge over evolving events enables LLMs to make structured, interpretable, and context-aware predictions.

    First, she introduces GenTKG, in which LLMs forecast future events over evolving temporal data, with applications such as early crisis warning. A retrieval-augmented generation framework bridges the gap between symbolic representations of structured temporal data and LLM-driven reasoning, allowing for explainable forecasting.
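To make the retrieval-augmented setup concrete, here is a minimal hypothetical sketch (not the GenTKG implementation): facts from a temporal knowledge graph are stored as timestamped quadruples, the most recent facts about a query entity are retrieved, and a forecasting prompt for an LLM is assembled from them. All names, facts, and the similarity-free recency heuristic are illustrative assumptions.

```python
from datetime import date

# Hypothetical temporal-KG facts as (subject, relation, object, timestamp).
# These example events are invented for illustration only.
facts = [
    ("CountryA", "imposes_sanctions_on", "CountryB", date(2024, 1, 10)),
    ("CountryB", "recalls_ambassador_from", "CountryA", date(2024, 2, 3)),
    ("CountryA", "holds_talks_with", "CountryB", date(2024, 3, 21)),
]

def retrieve(entity, history, k=2):
    """Return the k most recent facts involving the entity (simple recency heuristic)."""
    relevant = [f for f in history if entity in (f[0], f[2])]
    return sorted(relevant, key=lambda f: f[3])[-k:]

def build_prompt(entity, history):
    """Assemble a forecasting prompt from the retrieved symbolic facts."""
    lines = [f"{t.isoformat()}: {s} {r} {o}" for s, r, o, t in retrieve(entity, history)]
    return ("Given these events:\n" + "\n".join(lines)
            + f"\nPredict the next event involving {entity}.")

print(build_prompt("CountryA", facts))
```

Because the prompt is built from explicit symbolic facts, the retrieved evidence doubles as a human-readable justification for the forecast, which is the explainability angle the talk emphasizes.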

    Second, she presents VideoINSTA, a multimodal agent for long video understanding through event segmentation. By emphasizing event-based temporal reasoning and content-based spatial reasoning, the agent can iteratively process symbolic knowledge extracted from videos, enabling explainable, human-like video understanding.
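As a toy illustration of event-based segmentation (not the VideoINSTA method), one could group consecutive frame captions into event segments whenever their word overlap drops below a threshold; each segment then becomes a unit of symbolic knowledge for downstream reasoning. The captions, the Jaccard-overlap heuristic, and the threshold are all illustrative assumptions.

```python
def jaccard(a, b):
    """Word-level Jaccard overlap between two captions."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def segment_events(captions, threshold=0.2):
    """Start a new event segment whenever caption overlap drops below threshold."""
    segments = [[captions[0]]]
    for prev, cur in zip(captions, captions[1:]):
        if jaccard(prev, cur) < threshold:
            segments.append([cur])
        else:
            segments[-1].append(cur)
    return segments

# Invented frame captions: a cooking scene followed by a driving scene.
captions = [
    "a chef chops vegetables",
    "the chef chops onions and vegetables",
    "a car drives down a highway",
    "the car exits the highway",
]
print(segment_events(captions))  # two event segments
```

A real system would use learned representations rather than word overlap, but the structure is the same: segment first, then reason over each event, which keeps the intermediate steps inspectable.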

    These works aim to develop explainable multimodal agents capable of perceiving, remembering, predicting, and justifying their reasoning over time.

    Part 2 – Felix Friedrich

    What if we could just ask AI to be less biased?

    Felix Friedrich will discuss methods for controlling generative models to foster fairness and safety, focusing on analyzing internal representations to reduce bias. Drawing on his research on synthetic data augmentation for multimodal models, including text-to-image generation, he will highlight how these techniques can improve the inclusivity and fairness of AI systems. The session offers insights into making AI more responsible through better control and understanding of model behavior.