Towards human-understandable explanations with XAI 2.0
The emerging field of eXplainable Artificial Intelligence (XAI) aims to bring transparency to today’s powerful but opaque deep learning models. However, the vast majority of current XAI approaches provide only partial insights and leave the burden of interpreting the model’s reasoning to the stakeholder. In this talk we introduce the Concept Relevance Propagation (CRP) approach, which combines the local and global perspectives of XAI and thus answers both the “where” and “what” questions for individual predictions in a post-hoc manner, without imposing additional constraints on the model. We further introduce the principle of Relevance Maximization for finding representative examples of encoded concepts based on their usefulness to the model, thereby lifting the dependency on the common practice of Activation Maximization and its limitations. We demonstrate the capabilities of our methods in various settings, showcasing that Concept Relevance Propagation and Relevance Maximization yield more human-interpretable explanations and thus enable novel analyses for gaining insights into the reasoning of AI models.
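The core idea behind Relevance Maximization can be illustrated with a toy sketch: instead of selecting reference samples by how strongly a latent concept *activates* (Activation Maximization), select them by how much that concept *contributes* to the model's prediction. The sketch below uses randomly generated scores as stand-in data; the variable names and values are illustrative assumptions, not from the talk or any CRP implementation.

```python
import numpy as np

# Illustrative stand-in data: per-sample scores for one latent "concept".
# In a real pipeline, "relevance" would come from a relevance-propagation
# backward pass (e.g. conditioned on the concept, as in CRP), while
# "activation" is the concept's raw forward-pass response.
rng = np.random.default_rng(0)
n_samples = 8
activation = rng.random(n_samples)  # how strongly the concept fires per sample
relevance = rng.random(n_samples)   # how much the concept contributes to the output

k = 3
# Activation Maximization: pick the top-k samples by activation strength.
act_max_idx = np.argsort(activation)[::-1][:k]
# Relevance Maximization: pick the top-k samples by relevance instead.
rel_max_idx = np.argsort(relevance)[::-1][:k]

print("Activation-Max reference samples:", act_max_idx.tolist())
print("Relevance-Max reference samples: ", rel_max_idx.tolist())
```

The two selections can differ whenever a concept fires strongly on inputs that the model does not actually use for its decision; Relevance Maximization filters those out, which is what makes the resulting reference examples more faithful to the model's reasoning.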
This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.