Generative AI and magical thinking

  • Date
    12 January 2026
    Timeframe
    16:00 - 17:00 CET Geneva
    Duration
    60 minutes

    Recent advances in Generative AI have given rise to strong emotions among the general public, including excitement, fear, wonder, and disbelief. To be sure, the emergence of Large Language Models (LLMs) marks a significant milestone in the history of AI. But are systems like ChatGPT and Gemini actually intelligent, and are we at the threshold of so-called Artificial General Intelligence (AGI)? This session provides an overview of how LLM-based systems work, where they excel, and where they fall short, with a special emphasis on opportunities for the complementary strengths of humans and machines.

    Session Objectives:

    By the end of the session, participants will be able to:

    • Explain the probabilistic nature of Large Language Models (LLMs) to demystify how they generate responses.
    • Differentiate between the capabilities of current Generative AI and the “magical thinking” often associated with Artificial General Intelligence (AGI).
    • Identify the operational limitations of AI systems to determine where human intervention and oversight are essential.
    • Map complementary strengths of humans and machines to optimize collaboration in their own workflows.
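The first objective above concerns the probabilistic nature of LLMs. As a purely illustrative sketch (a toy, hand-written distribution, not a real model), the core idea is that at each step the model assigns a probability to every possible next token and then samples from that distribution:

```python
import random

# Hypothetical next-token distribution for the prompt "The sky is".
# A real LLM produces such a distribution over tens of thousands of
# tokens at every generation step.
next_token_probs = {
    "blue": 0.55,
    "clear": 0.20,
    "falling": 0.15,
    "green": 0.10,
}

def sample_next_token(probs, rng=random):
    """Sample one token in proportion to its probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# Repeated sampling shows why the same prompt can yield different
# answers on different runs.
rng = random.Random(0)
counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample_next_token(next_token_probs, rng)] += 1
```

Likely tokens dominate, but unlikely ones still appear occasionally, which is one conceptual entry point for understanding both the fluency and the unpredictability of these systems.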

    Recommended Mastery Level / Prerequisites:

    Intermediate

    • Prerequisites: No technical or mathematical background is required. A basic familiarity with using generative AI tools (like ChatGPT, Gemini, or Claude) is helpful for context, but not mandatory.
    • The session will reference concepts such as high-dimensional vectors and show architecture diagrams, but these will be explained conceptually using analogies.