AI for Good blog

Human-centered AI for mental health in social media


By Dr. Stevie Chancellor, Assistant Professor in the Department of Computer Science & Engineering at the University of Minnesota

During a recent AI for Good webinar, Dr. Stevie Chancellor discussed how we can develop more human- and community-centered artificial intelligence (AI) to combat dangerous mental health behaviours online. In this guest blog, Dr. Chancellor offers in-depth insight into her recent work.

Research overwhelmingly shows that online communities can promote healthy behaviours and outcomes for managing diseases and disorders. However, some online communities can also promote dangerous behaviours, like disordered eating, opioid use disorder and recovery, and suicidal ideation and crisis.

Data-driven approaches like those often used in AI oversimplify the complexities of mental disorders and the unique effects these communities have on people and on platforms. Mental disorders are unique to the individual. At the same time, the people whose behaviour is being predicted deserve a say in the technology being created to help them. What is needed is an approach that blends computational methods with human-centered insights from stakeholders in these domains.

My background in both Computer Science (CS) and Communication and Media Studies has given me a unique perspective on both the CS and human-centered sides. I believe we must fuse areas of computing with the needs and desires of people in these communities to make headway on problems of this scale.

Robustness and accuracy

On the CS side, I examine digital trace data from millions of interactions on social media to understand and identify dangerous mental health behaviours using machine learning, statistical modelling, and computational linguistics. 

For example, we know that individuals’ mental illness severity (MIS) varies over time, and this can influence their predisposition to participate in these communities. In one project, I combined a computational linguistics method called topic modelling with annotations from clinicians and computer scientists about this severity in posts. We used this system to infer the mental illness severity of over 26 million Instagram posts. I then used these markers of severity to forecast MIS up to seven months in advance. This approach is more nuanced than the binary “diagnosed/undiagnosed” label commonly used in other AI approaches to mental health.
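To make the general recipe concrete, here is a minimal sketch that fits a small topic model over posts and regresses annotated severity scores on the resulting topic weights. This is not the actual pipeline from this project; every post, label, and parameter below is illustrative.

```python
# Minimal sketch: fit a topic model over posts, then use each post's
# topic weights as features to predict an annotated severity score.
# All posts, labels, and parameters here are illustrative.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import Ridge

posts = [
    "feeling hopeless again today",
    "went for a run and felt a little better",
    "cannot sleep, everything feels heavy",
    "therapy went well this week",
]
# Hypothetical clinician annotations (0 = no severity, 3 = high severity).
severity = np.array([3, 1, 2, 0])

# 1) Word counts -> small topic model; each post gets a topic distribution.
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topic_weights = lda.fit_transform(counts)

# 2) Regress annotated severity on topic weights. The fitted model can
#    then score new, unlabelled posts at scale.
regressor = Ridge().fit(topic_weights, severity)
new_post = vectorizer.transform(["nothing seems to matter anymore"])
print(regressor.predict(lda.transform(new_post)))
```

In practice the annotations and forecasting model would be far richer, but the structure is the same: unsupervised topics feeding a supervised severity model.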

In thinking about the accuracy of our signals, I am also interested in making sure AI prediction systems align with clinical and community-centered ideas of what diagnosis actually looks like in practice. We worked with researchers from the United States Centers for Disease Control and Prevention (CDC) to develop a new labelling survey of risk and protective markers for suicide prevention in social media data. This is important because, to help people in moments of crisis, we need to know both the things that increase risk and the social facets of their life that decrease risk, like supportive family members and a sense that things can get better.
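For illustration, an annotation record from such a survey might capture both sides of that picture explicitly. The sketch below uses hypothetical field names and marker terms; it is not the CDC survey itself.

```python
# Illustrative shape of one labelled record from a risk/protective
# annotation survey. Field names and marker terms are hypothetical.
from dataclasses import dataclass, field

@dataclass
class PostAnnotation:
    post_id: str
    annotator_id: str
    risk_markers: list[str] = field(default_factory=list)
    protective_markers: list[str] = field(default_factory=list)

example = PostAnnotation(
    post_id="post-001",
    annotator_id="clinician-07",
    risk_markers=["hopelessness", "social withdrawal"],
    protective_markers=["supportive family", "belief things can improve"],
)
print(example)
```

Recording protective markers alongside risk markers lets downstream models weigh what supports someone as well as what endangers them.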

Compassionate and ethical AI

In addition to making these signals more accurate, I am also interested in centering what people and communities want from these AI systems. In recent years there have been high-profile examples of tech companies and non-governmental organizations violating these expectations, such as Crisis Text Line, where the context and confidentiality under which data was shared were not respected.

Human-centered means building systems that meet people where they are and respond to what they want, rather than assuming CS people know better. This means looking at the ethical practices of what we’re doing in scientific research and making sure they align with what we think we should be doing (link to paper). For example, we are currently interviewing people who use TikTok to consume content about their mental health. We will use these interviews to inform future work on designing safer, more compassionate recommendation systems.

I encourage both researchers and practitioners to consider human-centered approaches in developing AI systems and to work with stakeholders in mental health when building models.

To learn more about how human-centered AI can help improve mental health online, watch the AI for Good webinar.