Interpretable Neural Networks for Computer Vision: Clinical Decisions that are Computer-Aided, not Automated

Let us consider a difficult computer vision challenge. Would you want an algorithm to determine whether you should get a biopsy, based on an X-ray? That’s usually a decision made by a radiologist, based on years of training. We know that algorithms haven’t worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black-box counterparts. In this talk, I will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to parts of prototypical images for each class, and (2) neural disentanglement, using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the post hoc use of concept vectors.
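As a rough illustration of the first approach, the following PyTorch sketch shows a prototype layer in the spirit of the description above: every spatial patch of a convolutional feature map is compared to a set of learned prototypes, and the best-matching patch per prototype is kept. The class name, shapes, and similarity function here are illustrative assumptions, not the speaker’s actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeLayer(nn.Module):
    """Sketch of case-based reasoning over image parts: each learned
    prototype is compared to every spatial patch of a CNN feature map,
    and the best match per prototype is kept."""

    def __init__(self, num_prototypes: int, channels: int):
        super().__init__()
        # Prototypes live in the backbone's feature space
        # (1x1 patches here purely for simplicity).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels, 1, 1))

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (B, C, H, W) from any convolutional backbone.
        # Squared L2 distance of every patch to every prototype, via
        # ||x||^2 - 2<x, p> + ||p||^2.
        x_sq = (features ** 2).sum(dim=1, keepdim=True)          # (B, 1, H, W)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        xp = F.conv2d(features, self.prototypes)                 # (B, P, H, W)
        dist = (x_sq - 2 * xp + p_sq).clamp(min=0)
        # Distance -> similarity: large when a patch is close to the
        # prototype; max-pooling keeps the single best-matching location.
        sim = torch.log((dist + 1) / (dist + 1e-4))
        return F.max_pool2d(sim, kernel_size=sim.shape[2:]).flatten(1)  # (B, P)
```

The resulting (batch, num_prototypes) similarity vector would typically feed a linear layer connecting prototypes to classes, so a prediction can be explained as ‘this part of the image looks like that part of a prototypical training image’.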
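The second approach can be sketched in the same spirit. A concept-whitening style layer first decorrelates (whitens) the latent activations and then rotates them so that individual axes line up with predefined concepts. In the full technique the rotation is an orthogonal matrix optimized against labeled concept data; this minimal sketch, with assumed names and shapes, leaves it as a plain placeholder parameter.

```python
import torch
import torch.nn as nn


class ConceptWhitening(nn.Module):
    """Sketch of a concept-whitening style layer: whiten the latent
    space, then rotate it so each axis aligns with one concept.
    Names and shapes are illustrative assumptions."""

    def __init__(self, dim: int):
        super().__init__()
        self.register_buffer("mean", torch.zeros(dim))
        self.register_buffer("whitener", torch.eye(dim))  # W with W @ cov @ W.T = I
        # Placeholder for the rotation; the real method keeps this matrix
        # orthogonal and optimizes it so that each concept's examples
        # activate strongly along that concept's assigned axis.
        self.rotation = nn.Parameter(torch.eye(dim))

    @torch.no_grad()
    def fit_whitener(self, activations: torch.Tensor, eps: float = 1e-5) -> None:
        """Estimate a ZCA whitening matrix from (N, dim) activations."""
        self.mean.copy_(activations.mean(dim=0))
        centered = activations - self.mean
        cov = centered.T @ centered / (activations.shape[0] - 1)
        eigvals, eigvecs = torch.linalg.eigh(cov)
        self.whitener.copy_(eigvecs @ torch.diag((eigvals + eps).rsqrt()) @ eigvecs.T)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Whiten, then rotate: after training, output axis k reads out
        # the presence of concept k for the input.
        return (x - self.mean) @ self.whitener.T @ self.rotation.T
```

After such a layer, axis k of the representation can be read off directly as the activation of concept k, which is the advantage the abstract contrasts with computing concept vectors post hoc.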
