
How to detect whether text has been generated by ChatGPT?



By Alexandra Bustos Iliescu & Celia Pizzuto

The excitement surrounding Artificial Intelligence (AI) is at an all-time high, with OpenAI's ChatGPT leading the way in cutting-edge language model technology. This innovative tool is transforming communication, providing seamless and natural interactions like never before. From customer service chatbots to virtual assistants, ChatGPT is changing the way we interact with technology through its ability to process and respond to human language.

Join us for the next AI for Good Discovery session on Thursday, 16 February 2023, where national security expert Evan Crothers, a senior machine learning consultant for the Canadian government, will lead the discussion on trustworthy AI.


In the paper “Machine-Generated Text: A Comprehensive Survey of Threat Models and Detection Methods”, authors Evan Crothers, Nathalie Japkowicz, and Herna Viktor examine the potential dangers posed by advanced Natural Language Generation (NLG) systems and the current methods used to detect machine-generated text. With the increasing sophistication of these systems and their easy accessibility through open-source models and user-friendly tools, concerns are mounting about potential abuse, such as phishing, disinformation, fraudulent product reviews, academic dishonesty, and toxic spam.

The paper provides a thorough analysis of these threats and the latest methods for detecting machine-generated text, placing them in the context of cybersecurity and societal issues. The authors also offer guidance for future work aimed at reducing the risk of abuse and ensuring the trustworthiness, fairness, robustness, and accountability of detection systems.

Security expert Bruce Schneier has praised the paper as “a solid grounding amongst all of the hype.”

The survey provides a complete overview of the methods used to detect machine-generated text, evaluating both the technical and social dimensions of various approaches and introducing novel research on topics such as adversarial robustness and explainability. It begins with an overview of NLG models and a comprehensive analysis of current threat models. The results show that current defences against machine-generated text are insufficient to protect against most emerging threat models.
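To make one family of these methods concrete, below is a minimal sketch of a common statistical heuristic: scoring a text by its perplexity under a pretrained language model, on the intuition that machine-generated text tends to be more statistically predictable than human writing. The model choice ("gpt2") and the threshold are illustrative assumptions rather than the survey's prescribed approach, and detectors of exactly this kind are among those found to be brittle against emerging threat models.

# A minimal perplexity-based detection sketch (illustrative, not the
# survey's method). Assumes the Hugging Face transformers and torch
# packages are installed; "gpt2" and the threshold are arbitrary choices.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the language model's perplexity over `text`."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy loss over the sequence; exp(loss) is perplexity.
        out = model(**enc, labels=enc["input_ids"])
    return float(torch.exp(out.loss))

THRESHOLD = 30.0  # toy cutoff: lower perplexity reads as more "model-like"
sample = "The quick brown fox jumps over the lazy dog."
score = perplexity(sample)
print(f"perplexity={score:.1f} -> {'machine-like' if score < THRESHOLD else 'human-like'}")

In practice a single threshold like this is easy to evade, which is one reason the survey treats statistical detectors as only part of a broader defence.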

The main conclusion of the survey is that numerous open problems in machine-generated text detection need immediate attention. Existing detection methods do not adequately account for class imbalance or for generative models whose parameters and architectures are unknown, and they lack the transparency and fairness mechanisms needed to prevent them from causing harm.
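As a toy illustration of the class-imbalance point, the sketch below trains the same classifier on synthetic data with and without class weighting. In realistic settings human text vastly outnumbers machine text, so an unweighted detector can report high accuracy while missing most machine-generated samples; the features, the 95/5 split, and the classifier here are invented purely for demonstration.

# Synthetic class-imbalance demo (all data hypothetical). Requires
# numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_human, n_machine = 9500, 500  # assumed ~95/5 imbalance
X = np.vstack([rng.normal(0.0, 1.0, (n_human, 8)),     # "human" features
               rng.normal(0.7, 1.0, (n_machine, 8))])  # "machine" features
y = np.array([0] * n_human + [1] * n_machine)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for weighting in (None, "balanced"):
    clf = LogisticRegression(class_weight=weighting, max_iter=1000)
    clf.fit(X_tr, y_tr)
    print(f"\nclass_weight={weighting}")
    # Recall on class 1 (machine text) is the figure that suffers
    # without weighting, even when overall accuracy looks high.
    print(classification_report(y_te, clf.predict(X_te), digits=3))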

To prevent widespread abuse of NLG models, there must be collaboration between AI researchers, cybersecurity professionals, and non-technical experts. By working together, we can ensure that high-capacity NLG systems are used for good and minimize their potential harm.

Don’t miss this chance to delve deeper into the growth and potential of ChatGPT and the challenges and opportunities posed by advanced NLG systems. Join us for a 90-minute discussion with Evan Crothers and gain valuable insights into this rapidly evolving field. Register today and be part of the future of AI!
