Living with ChatGPT: Detecting AI text without destroying trust

  • Date: 16 February 2023
  • Timeframe: 16:00 - 17:30
  • Duration: 90 minutes (including 30 minutes networking)

Advances in natural language generation (NLG) have resulted in machine-generated text that is increasingly difficult to distinguish from human-authored text. Powerful open-source models are freely available, and user-friendly tools that democratize access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse. Detection of machine-generated text is a key countermeasure for reducing abuse of NLG models, with significant technical challenges and numerous open problems. This session covers both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review of machine-generated text detection methods to date. The underlying survey places machine-generated text within its cybersecurity and social context, and provides guidance for future work on addressing the most critical threat models and on ensuring that detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability. For details, please see the paper “Machine-Generated Text: A Comprehensive Survey of Threat Models and Detection Methods” by Evan Crothers, Nathalie Japkowicz, and Herna Viktor. Security guru Bruce Schneier referred to the paper as “a solid grounding amongst all of the hype”.
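
To give a concrete sense of the task discussed in the session, the sketch below frames machine-generated text detection as binary classification over labelled human/machine examples. The toy corpus and the TF-IDF plus logistic-regression pipeline are illustrative assumptions only; they are not the detection methods surveyed in the paper, which reviews far more capable neural approaches.

```python
# Minimal illustrative sketch: machine-generated text detection as binary
# classification. The tiny labelled corpus below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled data: 1 = machine generated, 0 = human authored (assumed labels).
texts = [
    "The results demonstrate a significant improvement over baseline methods.",
    "honestly I just threw the code together the night before the deadline",
    "In conclusion, the proposed approach achieves state-of-the-art performance.",
    "we argued about the figure captions for an hour, classic lab meeting",
]
labels = [1, 0, 1, 0]

# Word n-gram TF-IDF features feeding a linear classifier.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
detector.fit(texts, labels)

# Score an unseen sentence; the output is the estimated P(machine generated).
print(detector.predict_proba(
    ["Overall, these findings highlight the potential of large language models."]
)[0][1])
```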

This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.

Discover more from programme stream

AI liability in the EU and the US: stifling or...
3 April 2023 - 16:00 to 18:00

Living with ChatGPT: Detecting AI text without destroying trust
16 February 2023 - 16:00 to 17:30

AlphaTensor: discovering mathematical algorithms with reinforcement learning
12 January 2023 - 17:00 to 18:15

A fundamental problem of AI on digital hardware: Will true...
12 December 2022 - 15:00 to 16:10

How to make AI more fair and unbiased
28 November 2022 - 15:00 to 16:30