Don’t trust your AI system: Model its validity instead

Machine behaviour is becoming cognitively very complex, yet rarely human-like. Efforts to fully understand machine learning and other AI systems are falling short, despite the progress of explainable AI (XAI) techniques. In many cases, especially when post-hoc explanations are used, we get a false sense of understanding, and the initially sceptical (and prudent) stance towards an AI system turns into overconfidence: a dangerous delusion of trust.

In this AI for Good Discovery, Prof. Hernández-Orallo argues that, for many AI systems of today and tomorrow, we should not vainly try to understand everything they do, but rather explain and predict when and why they fail. We should model their user-aligned validity rather than their full behaviour. This is precisely what a robust, cognitively inspired AI evaluation can do. Instead of maximising contingent dataset performance and extrapolating that volatile aggregate metric equally to every instance, we can anticipate the validity of the AI system for each specific instance and user.
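As a toy illustration of that contrast (a hypothetical sketch using only numpy and synthetic data, not material from the talk), the snippet below shows how a single aggregate accuracy figure can mask large instance-level differences in validity along one difficulty dimension:

```python
# Hypothetical sketch (synthetic data, numpy only; not from the talk):
# one aggregate accuracy hides how validity varies per instance.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
difficulty = rng.uniform(0.0, 1.0, size=n)        # one task dimension per instance
p_success = np.clip(1.05 - difficulty, 0.0, 1.0)  # harder instances fail more often
correct = rng.random(n) < p_success               # simulated system outcomes

print(f"Aggregate accuracy: {correct.mean():.2f}")  # the one number usually reported
for lo in (0.0, 0.4, 0.8):                          # same metric, sliced by difficulty
    band = (difficulty >= lo) & (difficulty < lo + 0.2)
    print(f"difficulty in [{lo:.1f}, {lo + 0.2:.1f}): accuracy {correct[band].mean():.2f}")
```

The aggregate figure sits near the middle while the easiest band succeeds almost always and the hardest band almost never, which is exactly the information lost when one metric is extrapolated equally to every instance.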

Prof. Hernández-Orallo illustrates how this can be done in practice: identifying the relevant dimensions of the task at hand, deriving capabilities from the system's characteristic grid, and building well-calibrated assessor models at the instance level. His normative vision is that, in the future, every deployed AI system should be allowed to operate only if it comes with a capability profile or an assessor model that anticipates the system's user-aligned validity before each instance is run. Only by fine-tuning trust to each operating condition will we truly calibrate our expectations of AI.
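To make the assessor-model idea concrete, here is a minimal hypothetical sketch (scikit-learn on synthetic data; the model choices and setup are illustrative assumptions, not the speaker's implementation). A second, calibrated model is trained to predict, from instance features alone, whether the base system will succeed, so its output can be read as anticipated validity for each instance before the system runs:

```python
# Hypothetical assessor-model sketch: synthetic data and scikit-learn,
# illustrative only (not the speaker's implementation).
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic task standing in for the deployed system's workload.
X, y = make_classification(n_samples=4000, n_features=10, random_state=0)
X_base, X_assess, y_base, y_assess = train_test_split(
    X, y, test_size=0.5, random_state=0)

# 1. The base AI system whose validity we want to anticipate.
base = RandomForestClassifier(random_state=0).fit(X_base, y_base)

# 2. Label each held-out instance with whether the base system got it right.
correct = (base.predict(X_assess) == y_assess).astype(int)

# 3. Fit a calibrated assessor that predicts per-instance success from the
#    instance features alone (no ground-truth label is needed at run time).
X_tr, X_te, c_tr, c_te = train_test_split(
    X_assess, correct, test_size=0.3, random_state=0)
assessor = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
assessor.fit(X_tr, c_tr)

# 4. Before running the base system on a new instance, the assessor
#    anticipates its validity as a probability of success.
p_valid = assessor.predict_proba(X_te)[:, 1]
print(f"Assessor Brier score: {brier_score_loss(c_te, p_valid):.3f}")
print("Anticipated validity, first 5 instances:", np.round(p_valid[:5], 2))
```

The Brier score checks that the anticipated validities are well calibrated against the base system's actual hits and misses, which is what would let a user fine-tune trust to each operating condition.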

This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panellists and participants, and build connections with the AI for Good community.

• Start date
  14 November 2022 at 15:00 CET, Geneva | 09:00 EDT, New York | 21:00 CST, Beijing
• End date
  14 November 2022 at 16:30 CET, Geneva | 10:30 EDT, New York | 22:30 CST, Beijing
• Duration
  90 minutes (including 30 minutes of networking)