Towards real-world fact-checking with large language models

  • Date
    24 February 2025
    Timeframe
    16:00 - 17:00 Geneva
    Duration
    60 minutes

    Misinformation poses a growing threat to our society. It has a severe impact on public health by promoting fake cures or vaccine hesitancy, and it is used as a weapon during military conflicts to spread fear and distrust. Current research on natural language processing (NLP) for fact-checking focuses on identifying evidence and predicting the veracity of a claim. People’s beliefs, however, often depend less on the claim itself and rational reasoning than on seemingly credible content that lends the claim an air of reliability, such as scientific publications or visual content that has been manipulated or taken from unrelated contexts. In this talk, Professor Gurevych will zoom in on two critical aspects of such misinformation supported by credible though misleading content. First, she will present her efforts to dismantle misleading narratives based on fallacious interpretations of scientific publications. Second, she will show how multimodal large language models can be used to (1) detect misinformation based on visual content and (2) provide strong alternative explanations for that content.

  • Discover more from programme stream
    Symmetry, scale, and science: A geometric path to better AI

    10 March 2025 - 16:00 to 17:00

    Teaching language models to speak chemistry: From design to synthesis

    31 March 2025 - 16:00 to 17:00