This paper is primarily aimed at policymakers and regulators. It seeks to demystify the complexities of regulating the creation, use, and dissemination of synthetic multimedia content through prevention, detection, and response, and to present these issues in a clear and accessible manner for audiences with varying levels of expertise and technical understanding. In addition, the paper aims to highlight global initiatives and underscore the vital role and benefits of international standards in promoting regulatory coherence, alignment, and effective enforcement across jurisdictions. The document offers practical guidance and actionable recommendations, including a regulatory options matrix designed to help policymakers and regulators determine what to regulate (scope), how to regulate (voluntary or mandatory mechanisms), and to what extent (level of effort). It also explores a range of supporting tools – such as standards, conformity assessment mechanisms, and enabling technologies – that can contribute to addressing the challenges of misinformation and disinformation arising from the misuse of multimedia content. At the same time, it emphasizes the importance of striking a balance that enables the positive and legitimate use of either fully or partially synthetic multimedia for societal, governmental, and commercial benefit. Finally, the paper includes a set of practical checklists for use by policymakers, regulators, and technology providers. These can be used when designing regulations or enforcement frameworks, developing technological solutions, or preparing crisis response strategies. The checklists are intended to help align stakeholder expectations, identify critical gaps, support responsible innovation, and enable conformity with emerging standards and best practices.
This technical paper provides a comprehensive overview of the current landscape of standards and specifications related to digital media authenticity and artificial intelligence. It categorizes these standards into five key clusters: content provenance, trust and authenticity, asset identifiers, rights declarations, and watermarking. The report provides a short description of each standard along with links for further details. By mapping the contributions of various Standards Development Organizations (SDOs) and groups, we aim to identify gaps and opportunities for further standardization. This could serve as a valuable resource for stakeholders seeking to navigate the complex ecosystem of standards at the intersection of artificial intelligence and authenticity in digital media and to implement best practices to safeguard the authenticity of digital assets and the rights assigned to them. The findings underscore the critical role of robust specifications and standards in fostering trust and accountability in the evolving digital landscape.
This report highlights the outcomes of the International AI Standards Exchange, which took place during the AI for Good Global Summit 2025 organized by ITU.
As the global AI race continues to accelerate, humanity stands at a unique, transformative moment. Our collective challenge is not whether to govern artificial intelligence, but how to understand and ensure that governance steers AI in the right direction. This is at the heart of ITU’s mission to offer a neutral, global platform for artificial intelligence where everyone has a voice and a seat at the table. Our second annual AI Governance Dialogue provided a timely opportunity for exactly this kind of multi-stakeholder discussion among governments, the private sector, academia, civil society organizations, the technical community, and United Nations colleagues – each of whom has a key role to play.
The report Measuring What Matters: How to Assess AI’s Environmental Impact offers a comprehensive overview of current approaches to evaluating the environmental impacts of AI systems. The review focuses on identifying which components of AI’s environmental impacts are being measured, evaluating the transparency and methodological soundness of these measurement practices, and determining their relevance and actionability. Synthesizing findings from academic studies, corporate sustainability initiatives, and emerging environmental tracking technologies, the report examines measurement methodologies, identifies current limitations, and offers recommendations for key stakeholder groups: developers (producers), users (consumers), and policymakers. One of the most pressing issues uncovered is the widespread reliance on indirect estimates when assessing energy consumption during the training phase of AI models. These estimates often lack real-time, empirical measurement. Furthermore, equally important lifecycle stages remain significantly underexplored. This reliance on proxies introduces substantial data gaps, impedes accountability, and restricts consumers’ ability to make informed, sustainable choices about AI.
This AI for Good Innovate for Impact Interim Report showcases practical AI solutions across sectors such as healthcare, climate resilience, education, and digital inclusion. It curates 160 use cases submitted from 32 countries, evaluated by a Technical Advisory Committee of global experts and edited by a team of handpicked AI for Good Scholars. The initiative fosters collaboration across industry, academia, government, civil society, and UN entities, promoting inclusive and responsible AI deployment. Covering eleven key domains, the report highlights regional innovation, lessons learned, and real-world applications, offering a comprehensive view of AI’s impact and a strategic outlook for future work.
This document is the final report of the ITU/WHO Focus Group on Artificial Intelligence for Health (FG-AI4H), detailing its activities from 2018 to 2023 to develop a standardized assessment framework for AI in health. The report covers the group’s structure, including various working and topic groups focused on specific governance and health areas, and lists its numerous deliverables, which provide guidance and best practices on ethical, regulatory, technical, and clinical evaluation aspects of AI for health, as well as use-case-specific benchmarking procedures. It also highlights the Open Code Initiative (OCI) as a platform for testing AI assessment concepts and concludes by announcing a successor, the Global Initiative on AI for Health (GI-AI4H), to continue this important work.
This report details the progress and milestones of the ITU/FAO Focus Group on AI and IoT for Digital Agriculture (FG-AI4A), an open platform for discussing the integration of AI and IoT technologies in agriculture and laying the groundwork for technical standardization. Key achievements include developing a comprehensive glossary of digital agriculture terminology, mapping the current standardization landscape, formulating best practices and guidelines, and creating a data modelling framework for digital agriculture. By addressing standardization gaps, promoting the ethical use of technology, and prioritizing data quality and integration, these efforts aim to enhance agricultural production practices and overall productivity and efficiency while ensuring sustainability and resilience.
This report, launched at the Paris AI Action Summit, aims to ensure the efficient use of resources, enhance clarity, promote consistency in AI environmental sustainability standardization, and facilitate the widespread adoption of best practices. Its contributors intend to work towards non-conflicting standards and to foster collaboration between international standardization bodies. The document is intended for policymakers, scientists, AI developers, and industry leaders working on or interested in AI environmental sustainability, providing them with visibility into the progress made by standardization organizations and the work that still lies ahead.
The AI for Good Innovation Factory has, since its launch in 2020, emerged as a key pitching and acceleration platform for AI startups from all corners of the world. These ventures address critical societal challenges that cut across all sectors. The selection of finalist startups is a rigorous process that ensures the highest quality of innovations. With a startup community that has grown over the years, the Innovation Factory’s ambition is to identify practical solutions using AI, scale those solutions for global impact, and advance the SDGs.