No one should trust AI
Trust is a condition between peers or near-peers in a population of responsible agents, that is, a society, whose members have good reason to believe they will not be taken advantage of. Applying the metaphor of “trust” to a software engineering technique facilitates the disintegration of social contracts by misdirecting attention from who or what is responsible for the functioning of intentional artefacts. In this talk, Professor Joanna Bryson reviews scientific models of both trust and polarisation, and presents data science both supporting these models and concerning the real geopolitics of the peers of the largest AI companies. While we should not trust AI, Professor Bryson will show how we can achieve transparency through it, and thereby hold its providers and other powerful agencies to account.
This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.