Dissecting algorithmic bias



  • Watch

    * Register (or log in) to the AI4G Neural Network to add this session to your agenda or watch the replay


Algorithms can reproduce and even scale up racial biases. A major mechanism by which bias gets into algorithms is label choice: the specific target variable an algorithm is trained to predict. In this talk, I will show that a widely used family of algorithms in health care predicts health care costs as a proxy for health needs. But because of unequal access to care, Black patients incur lower costs than White patients with the same needs. So when the algorithm is trained to predict cost, it de-prioritizes Black patients relative to their needs. Crucially, label choice bias is fixable: retraining algorithms to predict less biased proxies can turn algorithms into a force for good, targeting resources to those who need them and reducing disparities rather than perpetuating them.
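The mechanism described above can be illustrated with a minimal, hypothetical simulation (this is not the speaker's actual model or data; the group names, the access gap of 0.6, and all distributions are invented for illustration). Two groups have identical true health needs, but one incurs lower costs for the same need. Ranking patients by predicted cost then under-selects that group, while ranking by need does not:

```python
# Hypothetical sketch of label choice bias (illustrative only; all
# parameters here are assumptions, not the speaker's data or model).
import random

random.seed(0)

def simulate(n=10000):
    """Generate synthetic patients: group, true need, observed cost."""
    rows = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        need = random.gauss(5, 1)               # true health need, identical across groups
        access = 1.0 if group == "A" else 0.6   # assumed access-to-care gap for group B
        cost = need * access + random.gauss(0, 0.2)  # observed spending reflects access
        rows.append((group, need, cost))
    return rows

def top_decile_share(rows, key_index, group="B"):
    """Share of `group` among the top 10% of patients ranked by the chosen label."""
    ranked = sorted(rows, key=lambda r: r[key_index], reverse=True)
    top = ranked[: len(ranked) // 10]
    return sum(1 for r in top if r[0] == group) / len(top)

rows = simulate()
share_by_cost = top_decile_share(rows, key_index=2)  # rank by the cost proxy
share_by_need = top_decile_share(rows, key_index=1)  # rank by true need

# Ranking by cost sharply under-selects group B relative to ranking by need,
# even though both groups have identical needs by construction.
```

The fix the abstract describes corresponds to the last two lines: swapping the cost label for a less biased proxy of need restores group B's share of the prioritized decile.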
