Living with ChatGPT: Detecting AI text without destroying trust
Advances in natural language generation (NLG) have produced machine-generated text that is increasingly difficult to distinguish from human-authored text. Powerful open-source models are freely available, and user-friendly tools that democratize access to generative models are proliferating. The great potential of state-of-the-art NLG systems is tempered by the multitude of avenues for abuse, and detection of machine-generated text is a key countermeasure, one that poses significant technical challenges and numerous open problems.

This session includes both 1) an extensive analysis of threat models posed by contemporary NLG systems, and 2) the most complete review to date of methods for detecting machine-generated text. The survey places machine-generated text within its cybersecurity and social context, and provides strong guidance for future work: addressing the most critical threat models, and ensuring that detection systems themselves demonstrate trustworthiness through fairness, robustness, and accountability. For details, please see the paper "Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods" by Evan Crothers, Nathalie Japkowicz, and Herna Viktor. Security expert Bruce Schneier called the paper "a solid grounding amongst all of the hype".
This live event includes a 30-minute networking session hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.