No one should trust AI




Trust is a condition between peers or near-peers in a population of responsible agents — that is, a society — wherein members have good reason to believe they will not be taken advantage of. Applying the metaphor of “trust” to a software engineering technique facilitates the disintegration of social contracts by misdirecting attention from who or what is responsible for the functioning of intentional artefacts. In this talk, Professor Joanna Bryson reviews scientific models of both trust and polarisation, and presents data science both in support of these models and concerning the real geopolitics of the peers of the largest AI companies. While we should not trust AI, Professor Bryson will show how we can achieve transparency through it, and therefore hold its providers and other powerful agencies to account.

This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panellists and participants, and build connections with the AI for Good community.

    • Start date
      5 September 2022 at 15:00 CEST (Geneva) | 09:00 EDT (New York) | 21:00 CST (Beijing)
    • End date
      5 September 2022 at 16:30 CEST (Geneva) | 10:30 EDT (New York) | 22:30 CST (Beijing)
    • Duration
      90 minutes (including 30 minutes networking)
