The ITU Artificial Intelligence and Machine Learning (AI/ML) Challenges are open competitions in which anyone can participate, solving problem statements that use AI/ML to advance the Sustainable Development Goals (SDGs). The competitions enable participants to connect with new partners, as well as new tools and data resources, to achieve the goals set out in problem statements contributed by industry and academia.
This work stems from a global initiative launched on October 10, 2024, at UNESCO headquarters, bringing together experts from ISO, ITU and IEEE, in partnership with the OECD and UNESCO. Led by the French Ministry in charge of the Environment, the initiative resulted in the publication of a first document to ensure better coordination between standardization bodies and to optimize the resources dedicated to assessing and reducing the environmental impact of AI. That first document was published in the context of the Paris AI Action Summit (February 10-11, 2025). The present version updates it, drawing on a new work session with standardization organizations held on December 4, 2025, and on subsequent feedback from experts. It is published in the context of the AI Impact Summit in India (February 19-20, 2026) to ensure continuous coordination between experts.
This report provides an analysis of the Artificial Intelligence (AI) Readiness study, which aims to develop a framework for assessing AI Readiness, that is, the ability to reap the benefits of AI integration. By studying the actors and characteristics of different domains, the study follows a bottom-up approach that reveals common patterns, metrics, and evaluation mechanisms for the integration of AI in these domains. The ITU AI Readiness framework aims to engage with multiple stakeholders around the world, assess and improve the level of AI integration in various domains, study use cases to validate the weighting of the key factors in those domains, improve global AI capacity building, and foster opportunities for international collaboration.
For nearly a decade, Artificial Intelligence (AI) leaders and experts have gathered in Geneva for the AI for Good Global Summit, organized by the International Telecommunication Union (ITU) in collaboration with 53 United Nations partners, to explore opportunities for unlocking AI’s potential to serve humanity. Such initiatives are especially important considering rising geopolitical tension and conflict, deteriorating climate conditions, and the long-term repercussions of the COVID-19 pandemic.
Artificial Intelligence (AI) is transforming the global landscape, influencing how societies learn, work, deliver health care, manage resources, and address environmental challenges. The AI for Good Impact Report 2025 provides an overview of AI’s current state, potential future trajectory, regulatory environment, and its application across key sectors.
This paper is primarily aimed at policymakers and regulators. It seeks to demystify the complexities of regulating the creation, use, and dissemination of synthetic multimedia content through prevention, detection, and response, and to present these issues in a clear and accessible manner for audiences with varying levels of expertise and technical understanding. In addition, the paper aims to highlight global initiatives and underscore the vital role and benefits of international standards in promoting regulatory coherence, alignment, and effective enforcement across jurisdictions.

The document offers practical guidance and actionable recommendations, including a regulatory options matrix designed to help policymakers and regulators determine what to regulate (scope), how to regulate (voluntary or mandatory mechanisms), and to what extent (level of effort). It also explores a range of supporting tools, such as standards, conformity assessment mechanisms, and enabling technologies, that can contribute to addressing the challenges of misinformation and disinformation arising from the misuse of multimedia content. At the same time, it emphasizes the importance of striking a balance that enables the positive and legitimate use of either fully or partially synthetic multimedia for societal, governmental, and commercial benefit.

Finally, the paper includes a set of practical checklists for use by policymakers, regulators, and technology providers. These can be used when designing regulations or enforcement frameworks, developing technological solutions, or preparing crisis response strategies. The checklists are intended to help align stakeholder expectations, identify critical gaps, support responsible innovation, and enable conformity with emerging standards and best practices.
This technical paper provides a comprehensive overview of the current landscape of standards and specifications related to digital media authenticity and artificial intelligence. It categorizes these standards into five key clusters: content provenance, trust and authenticity, asset identifiers, rights declarations, and watermarking. The report provides a short description of each standard along with a link for further details. By mapping the contributions of various Standards Development Organizations (SDOs) and groups, we aim to identify gaps and opportunities for further standardization. This could serve as a valuable resource for stakeholders seeking to navigate the complex ecosystem of standards at the intersection of artificial intelligence and authenticity in digital media and to implement best practices to safeguard the authenticity of digital assets and the rights assigned to them. The findings underscore the critical role of robust specifications and standards in fostering trust and accountability in the evolving digital landscape.
This report highlights the outcomes of the International AI Standards Exchange which took place during the AI for Good Global Summit 2025 organized by ITU.
As the global AI race continues to accelerate, humanity stands at a unique, transformative moment. Our collective challenge is not whether to govern artificial intelligence, but how to ensure that governance steers AI in the right direction. This is at the heart of ITU’s mission to offer a neutral, global platform for artificial intelligence where everyone has a voice and a seat at the table. Our second annual AI Governance Dialogue provided a timely opportunity for exactly this kind of multi-stakeholder discussion among governments, the private sector, academia, civil society organizations, the technical community, and United Nations colleagues, each of whom has a key role to play.
The report Measuring What Matters: How to Assess AI’s Environmental Impact offers a comprehensive overview of current approaches to evaluating the environmental impacts of AI systems. The review focuses on identifying which components of AI’s environmental impacts are being measured, evaluating the transparency and methodological soundness of these measurement practices, and determining their relevance and actionability. Synthesizing findings from academic studies, corporate sustainability initiatives, and emerging environmental tracking technologies, the report examines measurement methodologies, identifies current limitations, and offers recommendations for key stakeholder groups: developers (producers), users (consumers), and policy-makers. One of the most pressing issues uncovered is the widespread reliance on indirect estimates when assessing energy consumption during the training phase of AI models; these estimates often lack real-time, empirical measurement. Furthermore, equally important lifecycle stages remain significantly underexplored. This reliance on proxies introduces substantial data gaps, impedes accountability, and restricts consumers’ ability to make informed, sustainable choices about AI.