Explainability and Robustness for Trustworthy AI
Today, thanks to advances in statistical machine learning, AI is once again enormously popular. However, two features need further improvement: a) robustness and b) explainability/interpretability/re-traceability, i.e., the ability to explain why a certain result was reached. Disturbances in the input data can have a dramatic impact on the output and lead to completely different results. This is relevant in all critical areas that suffer from poor data quality, i.e., where the data are not independent and identically distributed (i.i.d.). The use of AI in real-world areas that affect human life (agriculture, climate, forestry, health, …) has therefore led to an increased demand for trustworthy AI. In sensitive areas where re-traceability, transparency, and interpretability are required, explainable AI (XAI) is now even mandatory due to legal requirements.

One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it can also be beneficial to include a human in the loop: a human expert can sometimes, though of course not always, bring experience and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the “why” is often more important than a pure classification result. Both explainability and robustness can therefore promote reliability and trust and ensure that humans remain in control, complementing human intelligence with artificial intelligence.
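As a small illustration of the input sensitivity mentioned above, the following sketch trains a plain classifier and measures how often its prediction flips when Gaussian noise is added to a single input. The dataset, model, and noise scale are illustrative assumptions, not anything prescribed by the session.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple classifier on synthetic data (illustrative assumption).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Perturb one input many times and count how often the prediction flips.
rng = np.random.default_rng(0)
x = X[0]
base = model.predict(x.reshape(1, -1))[0]
noise = rng.normal(scale=0.5, size=(1000, x.size))
flips = (model.predict(x + noise) != base).mean()
print(f"prediction changed for {flips:.1%} of perturbed copies")
```

A high flip rate for small perturbations is exactly the kind of fragility that makes an unexamined model untrustworthy in critical domains.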
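To make the notion of explainability concrete, here is one common post-hoc technique, permutation importance. It is not necessarily the approach discussed in the session; it simply illustrates what a machine-generated “why” can look like: features whose shuffling degrades accuracy the most contributed the most to the result.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Fit a model, then ask which input features its accuracy depends on.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Shuffle each feature in turn; a large accuracy drop means the model
# relied on that feature, which is one simple form of explanation.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```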
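The combination of statistical learning with knowledge representations can take many forms; one minimal sketch is a symbolic rule that overrides the learned model whenever a hard domain constraint applies. The frost rule, threshold, and labels below are hypothetical examples, not content from the session.

```python
from typing import Optional

def frost_rule(temperature_c: float) -> Optional[str]:
    """Hypothetical domain knowledge: below -2 °C there is always frost risk."""
    return "frost_risk" if temperature_c < -2.0 else None

def hybrid_predict(model_label: str, temperature_c: float) -> str:
    # The explicit rule takes precedence; otherwise trust the learned model.
    rule_label = frost_rule(temperature_c)
    return rule_label if rule_label is not None else model_label

print(hybrid_predict("no_risk", temperature_c=-5.0))  # -> frost_risk
print(hybrid_predict("no_risk", temperature_c=4.0))   # -> no_risk
```

Encoding such constraints explicitly also makes the decision re-traceable: the output can be attributed either to the rule or to the statistical model.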
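Finally, a human-in-the-loop can be wired in just as simply: the sketch below automates only confident predictions and defers the rest to an expert. The confidence threshold is an illustrative assumption.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def predict_or_defer(x, threshold=0.9):
    """Automate confident cases; route uncertain ones to a human expert."""
    proba = model.predict_proba(x.reshape(1, -1))[0]
    if proba.max() >= threshold:
        return int(proba.argmax())
    return "deferred to human expert"

print(predict_or_defer(X[0]))
```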