
Building a better online world: An AI approach to flag sexist content on social media


Today, on International Women’s Day, the fight for gender equality extends beyond physical spaces and into the digital world. Social media has become an integral part of our lives, fostering connection and amplifying voices. However, this online space can also be a breeding ground for harassment and discrimination, disproportionately impacting women and girls. Sexist narratives and online violence can silence women, pushing them out of online communities.

A recent AI for Good Webinar, “Unveiling Sexist Narratives: AI Approach to Flag Content on Social Media,” explored how Artificial Intelligence (AI) can be leveraged to tackle this challenge. The session, co-organized by ITU, UN Women and UNICC, shed light on a project that developed an AI model to identify sexist text content in social media posts across Spanish-speaking countries in Latin America.


Setting the stage for change

The webinar, moderated by Sylvia Poll, Head of the Digital Society Division at ITU, highlighted the concerning rise of online violence against women. She emphasized the importance of leaving no one behind in the digital age and stressed the need for collaborative efforts from governments, the private sector, academia, and civil society to ensure AI is used ethically and responsibly.

“We cannot try to solve this issue of closing the gender digital divide alone and we need to know what is happening on the ground,” Poll remarked, underscoring the need for a multi-stakeholder approach.


Building an AI solution for a complex problem

Anusha Dandapani, Chief Data & Analytics Officer at the United Nations International Computing Centre (UNICC), delved into the specifics of the AI model. She explained that online sexism often goes unreported, making its true prevalence difficult to quantify. To address this, the project focused on building a model that could reliably detect sexist narratives in social media content.

“In order for us to understand the specific topic of how gender-based stereotypes or sexism is being sort of relevant in the content that we have to analyze, we need to have a clear and consistent criteria,” Dandapani explained.

Natural Language Processing (NLP) and machine learning were central to the model’s development. The team employed pre-trained word embeddings, a technique that captures the semantic relationships between words, to train the model on a dataset of labeled content. This dataset included examples of both sexist and non-sexist language.
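The webinar stopped short of implementation details, so the snippet below is only a minimal sketch of the kind of pipeline described: each post is turned into a fixed-size feature vector by averaging pre-trained word embeddings, and a simple classifier is trained on labeled examples. The embedding file path and the toy texts and labels are placeholders, not the project’s actual data.

```python
# Minimal sketch of embedding-based sexism detection (illustrative only).
import numpy as np
from gensim.models import KeyedVectors
from sklearn.linear_model import LogisticRegression

# Pre-trained Spanish word vectors in word2vec text format (placeholder path).
vectors = KeyedVectors.load_word2vec_format("embeddings-es.vec")

def embed(text):
    """Represent a post as the average of its in-vocabulary token vectors."""
    tokens = [t for t in text.lower().split() if t in vectors]
    if not tokens:
        return np.zeros(vectors.vector_size)
    return np.mean([vectors[t] for t in tokens], axis=0)

# Toy labeled examples: 1 = sexist, 0 = non-sexist.
texts = ["ejemplo de comentario sexista", "ejemplo de comentario neutral"]
labels = [1, 0]

classifier = LogisticRegression().fit(np.stack([embed(t) for t in texts]), labels)
print(classifier.predict([embed("nuevo texto a evaluar")]))
```

Averaging token vectors is the simplest way to get a fixed-size representation of a variable-length post; the actual project may well use a more sophisticated architecture on top of the embeddings.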


A crucial aspect of the project was ensuring the model’s cultural and linguistic sensitivity. Most AI models are developed for English, but sexism can manifest differently in other languages. The project addressed this by custom-training the model on Spanish-specific data, enabling it to better recognize the nuances of sexist language in that context.
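The speakers did not name the exact model architecture, but one common way to realize this kind of Spanish-specific training is to fine-tune a transformer pre-trained on Spanish text. The sketch below uses BETO, a publicly available Spanish BERT model, via the Hugging Face libraries; the two example posts are placeholders.

```python
# Illustrative fine-tuning of a Spanish pre-trained transformer (BETO)
# for binary sexism detection. The training examples are placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "dccuchile/bert-base-spanish-wwm-cased"  # BETO: Spanish BERT
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

train = Dataset.from_dict({
    "text": ["ejemplo de comentario sexista", "ejemplo de comentario neutral"],
    "label": [1, 0],  # 1 = sexist, 0 = non-sexist
}).map(lambda ex: tokenizer(ex["text"], truncation=True,
                            padding="max_length", max_length=64))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="sexism-detector", num_train_epochs=1),
    train_dataset=train,
).train()
```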


Transparency and collaboration: key ingredients for success

Transparency throughout the development process was paramount. The project team made their code publicly available in a GitHub repository, allowing for open collaboration and scrutiny. Additionally, they compared the model’s predictions with a human-labeled dataset to verify its accuracy.

“We made sure that anything that we worked on was available from day one on a GitHub repository,” Lizzette Soria, Gender Expert at UN Women, said, emphasizing their commitment to open collaboration.
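The exact evaluation metrics were not reported in the webinar, but a comparison against human annotations typically boils down to a few lines of scikit-learn; the labels below are placeholders.

```python
# Sketch: scoring model predictions against human annotations (toy data).
from sklearn.metrics import classification_report, confusion_matrix

human_labels = [1, 0, 1, 0, 0]  # annotator judgments (1 = sexist)
model_preds  = [1, 0, 0, 0, 0]  # model output for the same posts

print(confusion_matrix(human_labels, model_preds))
print(classification_report(human_labels, model_preds,
                            target_names=["non-sexist", "sexist"]))
```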

The initial findings were promising. The model demonstrated a high success rate in identifying sexist content within Spanish text data. Interestingly, the analysis revealed a correlation between the use of emojis and potentially sexist narratives. For example, posts containing “happy faces” or “laughing faces” alongside sexist language raised questions about the user’s intent and the potential impact on the target audience.
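The speakers did not describe how the emoji analysis was performed; one simple way to surface such a pattern is to count emoji frequencies separately in flagged and non-flagged posts, as in this sketch with placeholder data.

```python
# Sketch: comparing emoji usage in flagged vs. non-flagged posts (toy data).
import re
from collections import Counter

# Rough Unicode ranges covering most emoji and pictographs.
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")

posts = [
    ("comentario sexista 😂", 1),
    ("comentario neutral 🙂", 0),
    ("otro comentario sexista 😂😂", 1),
]

counts = {0: Counter(), 1: Counter()}
for text, label in posts:
    counts[label].update(EMOJI_RE.findall(text))

print("flagged:", counts[1].most_common())
print("not flagged:", counts[0].most_common())
```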

These findings underscore the importance of considering the context surrounding online interactions. Sylvia Poll emphasized the value of an “intersectional lens” – acknowledging how sexism can intersect with other forms of discrimination, impacting different groups of women in unique ways.


Looking ahead: a future free from online sexism

The webinar concluded with a discussion on the project’s future directions. The team plans to publish a white paper detailing their methodology and results. A key focus will be on disseminating these findings to policymakers, social media platforms, and civil society organizations. Additionally, they aim to explore ways to adapt the model for use in other languages and cultural contexts.

This project exemplifies the potential of AI to be a force for good in promoting gender equality online. By identifying and flagging sexist content, AI models can help create safer and more inclusive digital spaces for all. However, as highlighted during the webinar, it is crucial to ensure that these models are developed and deployed responsibly, with due consideration for ethical implications and potential biases.

The fight against online sexism requires a multi-pronged approach. AI-powered solutions like the one presented can be valuable tools. However, it is equally important to foster digital literacy and empower users to identify and report sexist content. By combining technological advancements with social awareness, we can work towards creating a more respectful and inclusive online environment for everyone.
