Beyond bias: Algorithmic unfairness, infrastructure, and genealogies of data


Beyond Bias event, 13 April 2022

This AI and Health Discovery session discusses the problems of bias in datasets used in machine learning and AI systems. Join Alex Hanna, Director of Research at the Distributed AI Research Institute (DAIR), to uncover deeper issues with data used in AI, including problematic categorizations and the extractive logics of crowd work and data mining. The talk first reframes data as a form of infrastructure, thereby implicating politics and power in the construction of datasets. It then outlines a research program around the genealogy of datasets used in machine learning and AI systems, arguing that these genealogies should attend to the constellation of organizations and stakeholders involved in their creation, including the intent, values, and assumptions of their authors and curators, as well as the adoption of datasets by subsequent researchers, in order to ensure fairness in automated decision-making systems.

This live event includes a 30-minute networking event hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.
