AI and Machine Learning in communication networks



Discussion of use cases, and of the AI-native approaches that enable them, leads to foundation models and to their extension and customization for each application domain and for regional needs. Building open-source, human-centric, and autonomous AI solutions requires close collaboration between the open-source, AI, and standards communities. For example, federated mechanisms for training, knowledge-base creation, and transfer learning could bring solutions closer to real-life problems in various parts of the world. Discussion points under this theme include:

- How can open models enable trusted, AI-native networks?
- Can fine-tuning address region-specific customization requirements on the network?
- Are there open knowledge bases that could help bridge and democratize both access to AI models and the ability to develop new ones?
- In the world of generative AI, both standards and code can be a subject of analysis and transformation, by AI and for AI; what factors would increase trust in such generated output?
- What gaps in current standards could be closed for IMT-2030 (6G) to enable the use of such techniques?
- On the path towards IMT-2030 (6G), could AI play a bridging role between open source and standards?
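To make the federated training mechanism mentioned above concrete, the sketch below shows federated averaging in its simplest form: each "region" trains a model locally on its own data and shares only model weights, which a coordinator averages. This is purely an illustrative toy (a one-parameter linear model with made-up data and a hand-rolled gradient step), not a description of any specific framework or standard.

```python
# Toy sketch of federated averaging (FedAvg): regions train locally
# and share only model parameters, never raw data. All names, data,
# and hyperparameters here are illustrative assumptions.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step on a 1-D linear model y = w * x."""
    grad = sum(2 * (weight * x - y) * x for x, y in data) / len(data)
    return weight - lr * grad

def federated_average(client_weights, client_sizes):
    """Average client models, weighted by local dataset size."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two hypothetical "regions" with different local datasets (both y = 2x).
region_a = [(1.0, 2.0), (2.0, 4.0)]
region_b = [(3.0, 6.0)]

w_global = 0.0
for _ in range(50):  # communication rounds
    w_a = local_update(w_global, region_a)
    w_b = local_update(w_global, region_b)
    w_global = federated_average([w_a, w_b], [len(region_a), len(region_b)])

print(round(w_global, 2))  # converges toward 2.0
```

The size-weighted average is the standard FedAvg choice; real deployments add secure aggregation, client sampling, and multiple local epochs per round.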

One of the major areas of study is the energy overhead and environmental impact of AI. The energy consumption and carbon footprint of AI systems need to be studied and optimized, so that the rapid advances in AI technology remain aligned with sustainable environmental practices. By integrating energy-efficiency and environmental-impact considerations into the core of AI development and deployment from the outset, we can maximize the benefits. This involves optimizing the analysis, design, training, and deployment of AI models while keeping in mind the trade-off between performance and resource utilization. Fostering an AI ecosystem that not only drives innovation but is also firmly grounded in sustainability leads to Green AI. This session will focus on benchmarks for measuring and reporting the energy consumption and efficiency of various AI/ML models in IMT-2030 (6G). Which metrics and benchmarks capture the energy efficiency of AI/ML models? Are there best practices for developing energy-efficient AI systems for various domain-specific use cases?
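As one hedged illustration of the kind of energy-efficiency metric such benchmarks could report, the sketch below computes inferences per joule from throughput and average power draw. The function name, the two model variants, and all power and latency figures are hypothetical placeholders; a real benchmark would measure them on actual hardware under a defined workload.

```python
# Sketch of a simple energy-efficiency metric for ML inference:
# inferences per joule = completed inferences / energy consumed.
# All numbers below are hypothetical placeholders, not measurements.

def inferences_per_joule(batch_size, latency_s, avg_power_w):
    """Energy efficiency of one inference batch: inferences per joule."""
    energy_j = avg_power_w * latency_s  # E = P * t
    return batch_size / energy_j

# Hypothetical comparison of two model variants on the same accelerator.
full_model = inferences_per_joule(batch_size=32, latency_s=0.040, avg_power_w=250.0)
distilled = inferences_per_joule(batch_size=32, latency_s=0.012, avg_power_w=180.0)

print(f"full model: {full_model:.1f} inf/J")   # 32 / 10.00 J = 3.2
print(f"distilled:  {distilled:.1f} inf/J")    # 32 / 2.16 J ~ 14.8
```

A metric like this makes the performance-versus-resource trade-off explicit: a smaller model may lose some accuracy while completing several times more inferences per joule.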

