Responsible AI and MLOps for healthcare – bias reduction and fair modeling, every algorithm, every time

  • Watch

    * Register (or log in) to the AI4G Neural Network to add this session to your agenda or watch the replay

    Data scientists are the newest members of the care team. Responsible AI methods and AI governance provide a framework for addressing algorithmic bias, fairness, and model performance. We are building a current, curated, free and open-source taxonomy of bias detection, bias reduction, and fairness methods, along with a community around it. You will also learn about the responsible MLOps framework and AI governance model for healthcare. We strive to reduce the friction associated with using responsible AI methods. The healthcare MLOps workflow should include bias reduction and fair modeling… every algorithm, every time. The diverse communities and healthcare consumers we serve are entitled to equitable value from AI and to new approaches to solving health inequity.

    This live event includes a 30-minute networking event hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.

    • Start date
      7 September 2022 at 17:00 CEST, Geneva | 11:00 EDT, New York | 23:00 CST, Beijing
    • End date
      7 September 2022 at 18:30 CEST, Geneva | 12:30 EDT, New York | 00:30 CST, Beijing
    • Duration
      90 minutes (including 30 minutes networking)