Panel · In person · Discovery · Leaders · Gold

From hallucination to verification: Why trusted, responsible AI needs grounded knowledge

  • Date: 8 July 2026
  • Time: 16:00 – 16:30
  • Duration: 30 minutes

    Most responsible AI frameworks focus on guardrails, filters, and alignment of outputs. But when AI systems deployed to address global challenges hallucinate or produce untraceable claims, the root cause is the knowledge the model was built on, not the model itself. Responsible AI requires responsible inputs.

    Wiley’s Apoorva Shah and Josh Jarrett will reframe what responsible AI means in practice, arguing that trust, fairness, transparency, and environmental sustainability all depend on whether AI systems are grounded in verified, attributable knowledge. They’ll show how retrieval-augmented generation connected to peer-reviewed scholarly content directly addresses these responsible AI concerns: reducing bias through more diverse, authoritative sources, enabling transparency through proper citation and provenance, cutting computational waste, and keeping humans in the loop.

    Attendees will see this demonstrated through Scholar Gateway, which connects platforms like Anthropic’s Claude to over three million peer-reviewed articles, and the Earth Virtual Expert built with the European Space Agency to make curated Earth science research accessible with full attribution.

    Whether you’re shaping AI policy, building AI tools, or deploying them for global challenges, you’ll leave with a practical understanding of why knowledge infrastructure is the missing foundation of responsible AI, and what it looks like when we get it right.

