Multi-Modal Sensing Aided Communications and the Role of Machine Learning


Wireless communication systems are moving to higher frequency bands (mmWave in 5G and above 100 GHz in 6G and beyond) and deploying large antenna arrays at the infrastructure and mobile users (massive MIMO, mmWave/terahertz MIMO, reconfigurable intelligent surfaces, etc.). While migrating to higher frequency bands and using large antenna arrays enables meeting the growing demand for data rates, it also introduces new challenges that make it hard for these systems to support mobility and maintain high reliability and low latency.

This talk will explore the use of sensory data (radar, LiDAR, RGB camera, position, etc.) and machine learning to address these challenges. It will present DeepSense 6G, the world’s first large-scale real-world multi-modal sensing and communication dataset, which enables research in a wide range of communication, sensing, and positioning applications. Finally, it will introduce the upcoming ITU AI/ML in 5G Challenge on multi-modal sensing aided beam prediction using real-world measurements from the DeepSense 6G dataset. A minimal sketch of how such a beam-prediction task is commonly framed follows this paragraph.
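As an illustration only, the sketch below frames sensing aided beam prediction as multi-modal classification: sensing inputs (here an RGB image and a 2-D position) are encoded, fused, and mapped to an index in a fixed beam codebook. The network layout, input shapes, and 64-beam codebook are assumptions made for this example; it is not the DeepSense 6G baseline or the official challenge model.

```python
# Illustrative sketch (not the official DeepSense 6G baseline): fuse an RGB
# image with a 2-D position vector and classify the best beam index from a
# hypothetical 64-beam codebook.
import torch
import torch.nn as nn

NUM_BEAMS = 64  # assumed codebook size, purely for illustration


class MultiModalBeamPredictor(nn.Module):
    def __init__(self, num_beams: int = NUM_BEAMS):
        super().__init__()
        # Small CNN encoder for the camera image (3 x 224 x 224 assumed).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # MLP encoder for the (normalized) 2-D user position.
        self.position_encoder = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(),
        )
        # Fusion head: concatenate modality features, predict beam logits.
        self.head = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(),
            nn.Linear(64, num_beams),
        )

    def forward(self, image: torch.Tensor, position: torch.Tensor) -> torch.Tensor:
        features = torch.cat(
            [self.image_encoder(image), self.position_encoder(position)], dim=-1
        )
        return self.head(features)  # logits over the beam codebook


if __name__ == "__main__":
    model = MultiModalBeamPredictor()
    images = torch.randn(4, 3, 224, 224)     # dummy camera frames
    positions = torch.randn(4, 2)            # dummy normalized positions
    logits = model(images, positions)
    predicted_beams = logits.argmax(dim=-1)  # top-1 beam index per sample
    print(predicted_beams.shape)             # torch.Size([4])
```

In practice the model would be trained with a cross-entropy loss against the measured best-beam labels, and additional modalities such as radar or LiDAR would get their own encoders fused in the same way.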

This live event includes a 30-minute networking event hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.
