Symmetry, scale, and science: A geometric path to better AI


  • Date: 10 March 2025
  • Time: 16:00–17:30 CET (Geneva)
  • Duration: 90 minutes

    The success of modern AI systems has been largely driven by massive scaling of data and compute resources. However, in scientific applications, where physical constraints and geometric structures are crucial, the “scale is all you need” paradigm shows clear limitations. This talk presents cutting-edge approaches that combine the reliability of geometry-preserving neural architectures with the scalability demands of real-world scientific applications.

    Through the lens of geometric deep learning, we demonstrate how incorporating symmetry and equivariance as inductive biases leads to more reliable and data-efficient AI systems. The first part shows how equivariant neural fields learn geometric latent representations that preserve the underlying symmetries, enabling reliable geometric reasoning and physical modeling.
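    The equivariance property referred to above can be stated as f(g·x) = g·f(x): transforming the input by a symmetry group element and then applying the network gives the same result as applying the network first. A minimal illustrative sketch (not the speakers' architecture; the layer and weights here are hypothetical) for the permutation group, using a DeepSets-style layer:

    ```python
    import numpy as np

    def equivariant_layer(x, w_self=0.7, w_mean=0.3):
        """Shared per-point map plus a pooled context term.

        The mean over points is permutation-invariant, so the whole
        layer is permutation-equivariant: reordering the input points
        simply reorders the output points.
        """
        return w_self * x + w_mean * x.mean(axis=0, keepdims=True)

    rng = np.random.default_rng(0)
    x = rng.normal(size=(5, 3))        # 5 points in R^3
    perm = rng.permutation(5)          # a group element: a permutation

    lhs = equivariant_layer(x[perm])   # transform input, then apply layer
    rhs = equivariant_layer(x)[perm]   # apply layer, then transform output
    assert np.allclose(lhs, rhs)       # f(g.x) == g.f(x) holds exactly
    ```

    The same identity, with permutations replaced by rotations or other group actions, is what architectures such as equivariant neural fields enforce by construction rather than having to learn it from data.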
    The second part showcases NeuralDEM and NeuralCFD, two approaches that scale neural architectures to simulate particulate flows and automotive aerodynamics in real time, handling systems with hundreds of thousands of particles and millions of mesh cells, respectively. While demonstrated in industrial applications, the principles behind these scalable architectures have broader implications, including potential applications in large-scale molecular dynamics simulations.
    By bringing together these complementary perspectives, we demonstrate how geometric deep learning principles can deliver both reliability through geometric structure preservation and scalability through efficient architectural design. 
