Don’t trust your AI system: Model its validity instead


  • Date
    14 November 2022
    Timeframe
    15:00 - 16:30
    Duration
    90 minutes (including 30 minutes networking)

    Machine behaviour is becoming cognitively very complex, but rarely human-like. Efforts to fully understand machine learning and other AI systems are falling short, despite the progress of explainable AI (XAI) techniques. In many cases, especially with post-hoc explanations, we get a false sense of understanding, and the initial sceptical (and prudent) stance towards an AI system turns into overconfidence: a dangerous delusion of trust.

    In this AI for Good Discovery, Prof. Hernández-Orallo argues that for many AI systems of today and tomorrow we should not vainly try to understand everything they do, but rather explain and predict when and why they fail. We should model their user-aligned validity rather than their full behaviour. This is precisely what a robust, cognitively inspired AI evaluation can do. Instead of maximising contingent dataset performance and extrapolating a volatile aggregate metric equally to every instance, we can anticipate the validity of the AI system for each specific instance and user.

    Prof. Hernández-Orallo illustrates how this can be done in practice: identifying the relevant dimensions of the task at hand, deriving capabilities from the system’s characteristic grid, and building well-calibrated assessor models at the instance level. His normative vision is that, in the future, every deployed AI system should only be allowed to operate if it comes with a capability profile or an assessor model that anticipates the user-aligned validity of the system before running each instance. Only by fine-tuning trust to each operating condition will we truly calibrate our expectations of AI.
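    The assessor-model idea can be sketched in code. The following is a hypothetical illustration, not Prof. Hernández-Orallo’s implementation: it assumes scikit-learn and synthetic data, and trains a secondary classifier (the assessor) to predict, from instance features alone, whether the base system’s answer will be valid, yielding a per-instance trust score before the base system is run.

```python
# Hypothetical sketch of an instance-level assessor model (illustrative only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real task.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_base, X_assess, y_base, y_assess = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# 1. Train the base AI system on one half of the data.
base = LogisticRegression(max_iter=1000).fit(X_base, y_base)

# 2. On held-out data, record where the base system succeeds or fails.
success = (base.predict(X_assess) == y_assess).astype(int)

# 3. The assessor learns to predict success from instance features alone,
#    giving a per-instance validity estimate (a probability) that can gate
#    whether the system should be trusted on a new instance.
assessor = LogisticRegression(max_iter=1000).fit(X_assess, success)
trust = assessor.predict_proba(X_assess)[:, 1]  # per-instance trust score
```

    In a deployed setting, the trust score would be computed before running each instance, and instances falling below a chosen threshold could be deferred to a human or another system.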

    This live event includes a 30-minute networking event hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.

    Discover more from programme stream

    AI liability in the EU and the US: stifling or...
    3 April 2023 - 16:00 to 18:00

    Living with ChatGPT: Detecting AI text without destroying trust
    16 February 2023 - 16:00 to 17:30

    AlphaTensor: discovering mathematical algorithms with reinforcement learning
    12 January 2023 - 17:00 to 18:15

    A fundamental problem of AI on digital hardware: Will true...
    12 December 2022 - 15:00 to 16:10

    How to make AI more fair and unbiased
    28 November 2022 - 15:00 to 16:30