Explainability and Robustness for trustworthy AI

  • Date: 16 December 2021
    Timeframe: 15:00 - 16:00 CET, Geneva
    Duration: 60 minutes

    Today, thanks to advances in statistical machine learning, AI is once again enormously popular. However, two features need to be further improved in the future: a) robustness and b) explainability/interpretability/re-traceability, i.e. the ability to explain why a certain result has been achieved. Disturbances in the input data can have a dramatic impact on the output and lead to completely different results. This is relevant in all critical areas where we suffer from poor data quality, i.e. where we do not have i.i.d. data. Therefore, the use of AI in real-world areas that impact human life (agriculture, climate, forestry, health, …) has led to an increased demand for trustworthy AI. In sensitive areas where re-traceability, transparency, and interpretability are required, explainable AI (XAI) is now even mandatory due to legal requirements. One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can sometimes (though of course not always) bring experience and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective; in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
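
    As a minimal illustration of the sensitivity mentioned above (a sketch with assumed toy values, not material from the session), the snippet below uses a logistic-regression classifier with hand-picked weights and shows how a small disturbance in a single input feature can flip the predicted class near the decision boundary.

        # Illustrative sketch only: the weights, bias, and inputs are assumed
        # toy values chosen to sit near the decision boundary.
        import numpy as np

        def predict(x, w, b):
            """Logistic-regression decision: returns (class, score)."""
            score = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
            return int(score >= 0.5), score

        w = np.array([1.0, -2.0])            # assumed trained weights
        b = 0.1                              # assumed trained bias
        x = np.array([0.40, 0.25])           # original input, score exactly 0.5
        x_noisy = x + np.array([0.0, 0.03])  # small disturbance in one feature

        print(predict(x, w, b))        # (1, 0.5)    -> class 1
        print(predict(x_noisy, w, b))  # (0, ~0.485) -> prediction flips to class 0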

  • Discover more from programme stream

    AI liability in the EU and the US: stifling or...
    3 April 2023 - 16:00 to 18:00

    Living with ChatGPT: Detecting AI text without destroying trust
    16 February 2023 - 16:00 to 17:30

    AlphaTensor: discovering mathematical algorithms with reinforcement learning
    12 January 2023 - 17:00 to 18:15

    A fundamental problem of AI on digital hardware: Will true...
    12 December 2022 - 15:00 to 16:10

    How to make AI more fair and unbiased
    28 November 2022 - 15:00 to 16:30