This report provides an analysis of the Artificial Intelligence (AI) Readiness study, which aims to develop a framework for assessing AI Readiness: the ability to reap the benefits of AI integration. By studying the actors and characteristics of different domains, the study follows a bottom-up approach that reveals common patterns, metrics, and evaluation mechanisms for the integration of AI in those domains. The ITU AI Readiness framework aims to engage with multiple stakeholders around the world, assess and improve the level of AI integration in various domains, study use cases to validate the weighting of the key factors in those domains, strengthen global AI capacity building, and foster opportunities for international collaboration.
For nearly a decade, Artificial Intelligence (AI) leaders and experts have gathered in Geneva for the AI for Good Global Summit, organized by the International Telecommunication Union (ITU) in collaboration with 53 United Nations partners, to explore opportunities for unlocking AI’s potential to serve humanity. Such initiatives are especially important in light of rising geopolitical tension and conflict, deteriorating climate conditions, and the long-term repercussions of the COVID-19 pandemic.
Artificial Intelligence (AI) is transforming the global landscape, influencing how societies learn, work, deliver health care, manage resources, and address environmental challenges. The AI for Good Impact Report 2025 provides an overview of AI’s current state, potential future trajectory, regulatory environment, and its application across key sectors.
This paper is primarily aimed at policymakers and regulators. It seeks to demystify the complexities of regulating the creation, use, and dissemination of synthetic multimedia content through prevention, detection, and response, presenting these issues clearly and accessibly for audiences with varying levels of technical expertise. In addition, the paper highlights global initiatives and underscores the vital role and benefits of international standards in promoting regulatory coherence, alignment, and effective enforcement across jurisdictions. The document offers practical guidance and actionable recommendations, including a regulatory options matrix designed to help policymakers and regulators determine what to regulate (scope), how to regulate (voluntary or mandatory mechanisms), and to what extent (level of effort). It also explores a range of supporting tools – such as standards, conformity assessment mechanisms, and enabling technologies – that can help address the misinformation and disinformation arising from the misuse of multimedia content. At the same time, it emphasizes the importance of striking a balance that enables the positive and legitimate use of fully or partially synthetic multimedia for societal, governmental, and commercial benefit. Finally, the paper includes a set of practical checklists for policymakers, regulators, and technology providers, for use when designing regulations or enforcement frameworks, developing technological solutions, or preparing crisis response strategies. The checklists are intended to help align stakeholder expectations, identify critical gaps, support responsible innovation, and enable conformity with emerging standards and best practices.
This technical paper provides a comprehensive overview of the current landscape of standards and specifications related to digital media authenticity and artificial intelligence. It categorizes these standards into five key clusters: content provenance, trust and authenticity, asset identifiers, rights declarations, and watermarking. The report provides a short description of each standard along with a link for further details. By mapping the contributions of various Standards Development Organizations (SDOs) and groups, we aim to identify gaps and opportunities for further standardization. The paper can thus serve as a valuable resource for stakeholders seeking to navigate the complex ecosystem of standards at the intersection of artificial intelligence and authenticity in digital media, and to implement best practices to safeguard the authenticity of digital assets and the rights assigned to them. The findings underscore the critical role of robust specifications and standards in fostering trust and accountability in the evolving digital landscape.
This report highlights the outcomes of the International AI Standards Exchange, which took place during the AI for Good Global Summit 2025 organized by ITU.
As the global AI race continues to accelerate, humanity stands at a unique, transformative moment. Our collective challenge is not whether to govern artificial intelligence, but how to ensure that governance steers AI in the right direction. This is at the heart of ITU’s mission to offer a neutral, global platform for artificial intelligence where everyone has a voice and a seat at the table. Our second annual AI Governance Dialogue provided a timely opportunity for exactly this kind of multi-stakeholder discussion among governments, the private sector, academia, civil society organizations, the technical community, and United Nations colleagues – each of whom has a key role to play.
The report Measuring What Matters: How to Assess AI’s Environmental Impact offers a comprehensive overview of current approaches to evaluating the environmental impacts of AI systems. The review focuses on identifying which components of AI’s environmental impacts are being measured, evaluating the transparency and methodological soundness of these measurement practices, and determining their relevance and actionability. Synthesizing findings from academic studies, corporate sustainability initiatives, and emerging environmental tracking technologies, the report examines measurement methodologies, identifies current limitations, and offers recommendations for key stakeholder groups: developers (producers), users (consumers), and policy-makers. One of the most pressing issues uncovered is the widespread reliance on indirect estimates, rather than real-time empirical measurement, when assessing energy consumption during the training phase of AI models. Furthermore, other, equally important lifecycle stages remain significantly underexplored. This reliance on proxies introduces substantial data gaps, impedes accountability, and restricts consumers’ ability to make informed, sustainable choices about AI.
This AI for Good Innovate for Impact Interim Report showcases practical AI solutions across sectors such as healthcare, climate resilience, education, and digital inclusion. It curates 160 use cases submitted from 32 countries, evaluated by a Technical Advisory Committee of global experts and edited by a team of handpicked AI for Good Scholars. The initiative fosters collaboration across industry, academia, government, civil society, and UN entities, promoting inclusive and responsible AI deployment. Covering eleven key domains, the report highlights regional innovation, lessons learned, and real-world applications, offering a comprehensive view of AI’s impact and a strategic outlook for future work.
This document is the final report of the ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H), detailing its activities from 2018 to 2023 to develop a standardized assessment framework for AI in health. The report covers the group’s structure, including the working and topic groups focused on specific governance and health areas, and lists its numerous deliverables, which provide guidance and best practices on the ethical, regulatory, technical, and clinical evaluation aspects of AI for health, as well as use-case-specific benchmarking procedures. It also highlights the Open Code Initiative (OCI) as a platform for testing AI assessment concepts and concludes by announcing a successor, the Global Initiative on AI for Health (GI-AI4H), to continue this important work.