Detecting and characterizing systematic deviations in data and model outputs in healthcare
With the growing use of machine learning algorithms to assist decision making, it is vital to inspect both the data and the models for potential systematic deviations in order to build trustworthy AI applications. Detection of anomalous samples is an active field of research that aims to identify observations (a subgroup of samples) in a given dataset that deviate from some concept of normality. Its application is crucial across domains, e.g., for assessing data quality, identifying mis-annotations, detecting adversarial attacks, monitoring model performance, and informing the design of new data collection.
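As a minimal, illustrative sketch of the anomalous-sample detection described above: here the "concept of normality" is simply modeled by the sample mean and standard deviation, and samples with a large z-score are flagged. The toy data and threshold are assumptions for illustration, not material from the webinar.

```python
import numpy as np

def detect_anomalies(x, z_threshold=3.0):
    """Flag samples whose absolute z-score exceeds a threshold.

    A simple stand-in for a 'concept of normality': normality is
    modeled by the sample mean and standard deviation, and any
    sample far from the mean (in units of std) is flagged.
    """
    x = np.asarray(x, dtype=float)
    z = np.abs((x - x.mean()) / x.std())
    return np.where(z > z_threshold)[0]

# Toy example: mostly routine measurements with one extreme reading.
values = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 25.0]
print(detect_anomalies(values, z_threshold=2.0))  # flags index 7
```

Real applications (e.g., subgroup discovery in healthcare data) would replace this univariate z-score with richer notions of normality, but the pattern — define normality, measure deviation, flag outliers — is the same.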
Using healthcare as a use case, this AI for Good webinar demonstrates data-centric techniques for addressing specific questions, such as identifying vulnerable groups, heterogeneous intervention effects, and new classes. Generative models are also increasingly being used to facilitate scientific discovery; however, principled evaluation of these models, in domain-agnostic and interpretable ways, is needed to exploit their unique generation capabilities efficiently. Beyond the curated datasets typically used to train machine learning models, data-centric analysis should also extend to traditional data sources, such as textbooks, to identify potential representation biases.
This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.