Explainable AI in the era of Large Language Models

The domain of Explainable Artificial Intelligence (XAI) has made significant strides in recent years. Various explanation techniques have been devised, each serving distinct purposes. Some of them explain individual predictions of AI models by highlighting influential input features, while others enhance comprehension of the model’s internal operations by visualizing the concepts encoded by individual neurons. Although these initial XAI techniques have proven valuable in scrutinizing models and detecting flawed prediction strategies (referred to as “Clever Hans” behaviors), they have predominantly been applied in the context of classification problems. The advancement of generative AI, notably the emergence of exceedingly large language models (LLMs), has underscored the necessity for next-generation explanation methodologies tailored to this fundamentally distinct category of models and challenges.
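As an illustration of the first family of techniques mentioned above, the following sketch computes a simple gradient-times-input attribution for a toy classifier. It is a minimal, hypothetical example (the model, data, and PyTorch usage are our own assumptions, not part of the workshop material) intended only to show what "highlighting influential input features" can look like in code.

```python
# Minimal, hypothetical sketch (not from the workshop): gradient-times-input
# attribution for a toy classifier, i.e. "highlighting influential input features".
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy classifier over 10 input features and 3 classes.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
model.eval()

x = torch.randn(1, 10, requires_grad=True)   # the single example to be explained
logits = model(x)
predicted_class = logits.argmax(dim=1).item()

# Gradient of the predicted class score w.r.t. the input, multiplied by the input:
# features with large absolute scores are treated as influential for this prediction.
logits[0, predicted_class].backward()
attribution = (x.grad * x).detach().squeeze()

for i, score in enumerate(attribution.tolist()):
    print(f"feature {i:2d}: attribution {score:+.3f}")
```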

This workshop aims to address this need from several angles. First, we will consider what "explaining" means in the context of generative AI. Second, we will discuss recent methodological breakthroughs that allow us to gain deeper insights into the inner workings of LLMs. Finally, we will examine the practical implications of a new class of explainable LLMs, not only from the standpoint of lay users but also in terms of the opportunities for developers, domain experts, and regulators.

Reimagining Explainable AI Evaluation with LLMs

Anna Hedström

Abstract: Every explainable AI researcher must answer the question: how good is my explanation with respect to the model it seeks to explain? Without access to ground-truth labels, the answer is not obvious. Researchers have therefore tried a variety of evaluation approaches: human-based studies, restricted toy settings, or metric-based measures that approximate explanation quality. In this talk, we begin by reviewing existing evaluation ideas and identifying pitfalls in prevalent evaluation practices. We then consider the implications of large language models (LLMs) becoming increasingly dominant: the evaluation-centric opportunities and challenges they raise, and what this may mean for the research community and for society as a whole.
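To make the "metric-based measures" mentioned in the abstract more concrete, here is a minimal sketch of one such proxy: a deletion-style faithfulness check, where the features an explanation ranks as most important are masked and the resulting drop in the model's confidence is measured. The toy model, data, and the explanations being compared are illustrative assumptions, not taken from the talk.

```python
# Minimal, hypothetical sketch (not from the talk): a deletion-style faithfulness
# proxy that scores an explanation by how much masking its top-ranked features
# changes the model's confidence.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=10)                        # weights of a toy linear model

def confidence(x):
    """Sigmoid 'probability' the toy model assigns to the positive class."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Construct an example the model is confident about.
x = np.abs(rng.normal(size=10)) * np.sign(w)

def deletion_drop(attribution, k=3):
    """Mask the k features ranked most important and return the confidence drop."""
    top = np.argsort(-np.abs(attribution))[:k]
    x_masked = x.copy()
    x_masked[top] = 0.0                        # 0 serves as the 'deleted' baseline value
    return confidence(x) - confidence(x_masked)

informed_explanation = w * x                   # gradient-times-input (exact for a linear model)
random_explanation = rng.normal(size=10)       # uninformative baseline for comparison

print("drop with informed explanation:", round(deletion_drop(informed_explanation), 3))
print("drop with random explanation:  ", round(deletion_drop(random_explanation), 3))
# A larger drop is taken as (proxy) evidence that the explanation is more faithful.
```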

A manifesto of open challenges & interdisciplinary research directions for eXplainable Artificial Intelligence

Luca Longo

Understanding black-box models has become paramount as systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper highlights advances in XAI and its application in real-world scenarios, and addresses the ongoing challenges within the field, emphasizing the need for broader perspectives and collaborative effort. We bring together experts from diverse fields to identify open problems, striving to synchronize research agendas and accelerate the adoption of XAI in practical applications. By fostering collaborative discussion and interdisciplinary cooperation, we aim to develop a comprehensive proposal for advancing XAI. To this end, we present a manifesto of 28 open problems organized into nine categories. These challenges encapsulate the complexities and nuances of XAI and offer a roadmap for future research. For each problem, we provide promising research directions, in the hope of harnessing the collective intelligence of interested stakeholders.

 

RED XAI, BLUE XAI - new challenges and opportunities

Przemyslaw Biecek

Generative models, whether for text, images, audio, or other data, open up new opportunities and challenges for researchers in the XAI field. The paper will discuss the challenges of explaining generative models, as well as how generative models can themselves be used to explain other models. We will define two perspectives, RED XAI and BLUE XAI, and provide examples of how they can be used to create new methods for adversarial exploration of AI models, increasing their safety, robustness, and trustworthiness.

 

Intelligence Augmentation: Bridging Human and Artificial Intelligence

Mennatallah El-Assady

Intelligence augmentation through mixed-initiative systems promises to combine AI's efficiency with humans' effectiveness. This can be facilitated through co-adaptive visual interfaces. This talk will outline the need for human-AI collaborative decision-making and problem-solving. I will illustrate how customized visual interfaces can enable interaction with machine learning models to promote their understanding, diagnosis, and refinement. In particular, I will reflect on current challenges and future research opportunities.

 
