Solutions stage
Panel

Protecting trust in crisis: AI in humanitarian information ecosystems

In person
  • Date
    9 July 2025
    Timeframe
    11:10 - 11:30 CEST
    Duration
    20 minutes

    Artificial Intelligence is rapidly reshaping how we understand and respond to information in humanitarian contexts. From detecting misinformation to moderating harmful content, AI offers powerful tools, but it also carries significant weaknesses and risks. Poorly contextualized systems can overlook local nuance and even amplify false and incendiary narratives that drive division and conflict. Lifesaving humanitarian information can be drowned out by algorithms that favour virality over accuracy and engagement over ethical responsibility. AI-powered content moderation systems perform poorly in many less widely spoken languages and contexts. In complex displacement settings, these risks can have serious consequences for safety, trust, and protection. This session at the AI for Good Summit will focus on the intersection of AI and information integrity in humanitarian practice, recognizing the urgency not just of understanding the issues, but of shaping policy, safeguards, and practical applications through diverse, multi-stakeholder engagement.


    The session will bring together voices from the humanitarian sector, government, and digital platforms to outline quick-fire ideas that spark further interest and collaboration. The goal is a dynamic look at potential ‘AI for Good’ solutions to these challenges.


    Session objectives:

    • Raise awareness of humanitarian challenges at the intersection of AI and information integrity, including deepfakes, limited training data for low-resource languages, gaps in technical capacity, and the difficulty of assessing the trustworthiness of AI-generated information.
    • Highlight potential responses or suggest areas for joint action.

    • Broaden collaboration through multistakeholder dialogue and engagement.