Multi-Modal Sensing Aided Communications and the Role of Machine Learning

  • Wireless communication systems are moving to higher frequency bands (mmWave in 5G and above 100 GHz in 6G and beyond) and deploying large antenna arrays at both the infrastructure and the mobile users (massive MIMO, mmWave/terahertz MIMO, reconfigurable intelligent surfaces, etc.). While migrating to higher frequency bands and using large antenna arrays help satisfy the increasing demand for data rates, they also introduce new challenges that make it hard for these systems to support mobility and maintain high reliability and low latency.

    This talk will explore the use of sensory data (radar, LiDAR, RGB camera, position, etc.) and machine learning to address these challenges. It will present DeepSense 6G, the world’s first large-scale, real-world multi-modal sensing and communication dataset, which enables research in a wide range of communication, sensing, and positioning applications. Finally, it will introduce the upcoming ITU AI/ML in 5G Challenge on multi-modal sensing-aided beam prediction using real-world measurements from the DeepSense 6G dataset; a minimal illustrative sketch of this beam prediction task is given after this description.

    This live event includes a 30-minute networking event hosted on the AI for Good Neural Network. This is your opportunity to ask questions, interact with the panelists and participants, and build connections with the AI for Good community.
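
    As a rough illustration of what sensing-aided beam prediction means in practice, the sketch below frames it as classification over a fixed beam codebook, using only user position as the sensing modality. It is an assumption-laden example: the codebook size, network architecture, and synthetic tensors are placeholders and do not reproduce the DeepSense 6G data format or the challenge's evaluation setup.

      # Minimal sketch (illustrative assumptions, not the DeepSense 6G format):
      # position-aided beam prediction as classification over a beam codebook.
      import torch
      import torch.nn as nn

      NUM_BEAMS = 64   # assumed codebook size
      POS_DIM = 2      # normalized (x, y) user position

      class BeamPredictor(nn.Module):
          """Map a user position to logits over candidate beams."""
          def __init__(self, pos_dim: int = POS_DIM, num_beams: int = NUM_BEAMS):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Linear(pos_dim, 128), nn.ReLU(),
                  nn.Linear(128, 128), nn.ReLU(),
                  nn.Linear(128, num_beams),
              )

          def forward(self, pos: torch.Tensor) -> torch.Tensor:
              return self.net(pos)

      # Synthetic stand-in data; a real experiment would load positions and
      # measured best-beam indices from a dataset such as DeepSense 6G.
      positions = torch.rand(1024, POS_DIM)
      best_beams = torch.randint(0, NUM_BEAMS, (1024,))

      model = BeamPredictor()
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.CrossEntropyLoss()

      for _ in range(5):
          logits = model(positions)
          loss = loss_fn(logits, best_beams)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()

      # Predicting the top-k most likely beams (instead of only the top one) is a
      # common way to trade a small beam sweep for higher reliability.
      topk_beams = model(positions).topk(k=3, dim=-1).indices

    Richer modalities such as camera, LiDAR, or radar would replace the position input with suitable encoders; the classification-over-a-codebook framing stays the same.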

    Speaker(s)
    • Ahmed Alkhateeb
      Assistant Professor
      Arizona State University

      Speaker abstract

      From Multimodal Sensing to Digital Twin-Assisted Communications: Large-scale MIMO is a key enabler for 5G, 6G, and beyond. Scaling up MIMO systems, however, is subject to critical challenges, such as the large channel acquisition/beam training overhead and the sensitivity to blockages. These challenges make it difficult for MIMO systems to support applications with high mobility and strict reliability/latency constraints. In this talk, I will first motivate the use of multi-modal sensing data to address some of these challenges. Then, I will present a vision in which multi-modal sensing, real-time ray-tracing, and machine learning are integrated to construct real-time digital twins of the communication environment and comprehensively assist all layers of the communication system. I will discuss some of the open questions in realizing this vision, present a research platform for investigating digital twin problems, and highlight some initial results.
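
    To make the beam training overhead mentioned in the abstract concrete, the toy sketch below compares exhaustive beam sweeping over a DFT codebook with geometry-aided selection for a single line-of-sight user. The array size, element spacing, and channel model are illustrative assumptions, not part of the talk or the speaker's platform.

      # Toy sketch (assumed ULA / DFT-codebook model, not from the talk):
      # exhaustive beam sweeping measures every beam, while known user geometry
      # (e.g., from position sensing or a digital twin) points to one candidate.
      import numpy as np

      N = 64      # antennas = number of codebook beams
      d = 0.5     # element spacing in wavelengths

      def steering_vector(theta_rad: float) -> np.ndarray:
          """Array response of an N-element uniform linear array."""
          n = np.arange(N)
          return np.exp(1j * 2 * np.pi * d * n * np.sin(theta_rad)) / np.sqrt(N)

      # DFT codebook: one beam per quantized spatial frequency
      codebook = np.stack([
          np.exp(1j * 2 * np.pi * np.arange(N) * k / N) / np.sqrt(N)
          for k in range(N)
      ])

      # Line-of-sight channel toward a user at 30 degrees azimuth
      theta = np.deg2rad(30.0)
      h = steering_vector(theta)

      # Exhaustive search: N beam measurements
      gains = np.abs(codebook.conj() @ h) ** 2
      best_exhaustive = int(np.argmax(gains))

      # Geometry-aided selection: map the known angle to the nearest codebook
      # beam, replacing the full sweep with a single candidate
      spatial_freq = (d * np.sin(theta)) % 1.0
      best_geometry = int(round(spatial_freq * N)) % N

      print(best_exhaustive, best_geometry, gains[best_exhaustive])

    In a digital-twin setting, the line-of-sight channel above would instead come from a ray-tracing model of the real environment, but the overhead comparison (N measurements versus one geometry-informed candidate) is the same idea.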