Making Deep Machines Right for the Right Reasons
Deep neural networks have shown excellent performance in many real-world applications. Unfortunately, they may exhibit “Clever Hans”-like behavior—exploiting confounding factors within datasets—to achieve high performance. In this talk, I shall touch upon explanatory interactive learning (XIL). XIL adds the expert into the training loop so that she interactively revises the original model by providing feedback on its explanations. Since “visual” explanations may not suffice to capture the model’s underlying concept, I shall also discuss revising a model on the semantic level, e.g., “never focus on the color to make your decision”. Our experimental results demonstrate that XIL can help avoid Clever Hans moments in machine learning.
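To make the feedback loop concrete, here is a minimal sketch of how explanation feedback can be folded into training, in the spirit of “right for the right reasons”-style objectives: the task loss is augmented with a penalty on explanation mass that falls on input regions the expert marked as confounding. The function name `xil_loss`, the penalty weight `lam`, and the toy saliency values are illustrative assumptions, not the talk’s actual implementation.

```python
import numpy as np

def xil_loss(task_loss, saliency, forbidden_mask, lam=10.0):
    """Hypothetical sketch of an XIL-style objective.

    task_loss      -- the usual prediction loss (a scalar)
    saliency       -- the model's explanation (e.g., an input-gradient map)
    forbidden_mask -- 1 where the expert said "do not look here", 0 elsewhere
    lam            -- assumed trade-off weight between accuracy and feedback
    """
    # Penalize squared explanation mass inside the forbidden region.
    penalty = np.sum((saliency * forbidden_mask) ** 2)
    return task_loss + lam * penalty

# Toy example: a 2x2 saliency map whose right column is a confounder.
saliency = np.array([[0.1, 0.9],
                     [0.2, 0.8]])
mask = np.array([[0, 1],
                 [0, 1]])  # expert feedback: the right column is spurious
loss = xil_loss(task_loss=0.5, saliency=saliency, forbidden_mask=mask)
```

Minimizing such an objective pushes the model to keep its explanations off the confounder while still fitting the labels; semantic-level feedback (e.g., about color) would instead constrain the model’s concept representation rather than its pixel-level saliency.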
This talk is based on joint work with Patrick Schramowski, Wolfgang Stammer, Xiaoting Shao, Stefano Teso, Franziska Herbert, Anne-Katrin Mahlein, Anna Brugger, and many others.