An analysis of the UN system’s institutional models, functions, and existing international normative frameworks applicable to AI governance. The world is undergoing a fundamental technological shift in an age of rapid digitalization and deployment of artificial intelligence (AI) technologies. AI holds significant potential to support inclusivity, reduce inequalities, rescue the Sustainable Development Goals (SDGs), and bolster the operations of the United Nations (UN) system. Realizing these benefits, however, requires careful attention to ethical considerations, including safeguarding data privacy, mitigating biases, and ensuring transparent decision-making processes. It is therefore important to make the most of AI’s opportunities while addressing its risks and harms.
The most recent edition of the “UN Activities on AI Report” represents a collaborative endeavor between the International Telecommunication Union (ITU) and 46 United Nations agencies and bodies. This comprehensive report showcases 408 artificial intelligence (AI) cases and projects run by the UN system, encompassing all 17 Sustainable Development Goals (SDGs). The initiatives range from forecasting food crises and monitoring water productivity to mapping schools through satellite imagery and optimizing the performance of communication networks, among other applications. Additionally, previous editions of the report from the years 2022, 2021, 2020, 2019, and 2018 are also available for reference.
The AI industry is driving a technological revolution with far-reaching effects on the global economy and society. AI is transforming healthcare through diagnostics, finance through analytics, retail through recommendations, and manufacturing through automation. It also enhances transportation safety and aids environmental conservation. However, experts caution that AI could widen the digital divide and perpetuate biases. To tackle these issues, ITU’s AI for Good platform facilitates the sharing of AI applications and expertise worldwide. In February 2024, we issued two open calls simultaneously: a call for AI use cases and a call for AI scholars.
This report provides an analysis of the Artificial Intelligence (AI) Readiness study, which aims to develop a framework for assessing AI readiness, i.e. the ability to reap the benefits of AI integration. By studying the actors and characteristics of different domains, the study follows a bottom-up approach that reveals common patterns, metrics, and evaluation mechanisms for the integration of AI in these domains.
THE TIME IS NOW Two days of never-before-presented, state-of-the-art AI solutions and cutting-edge knowledge, aligned with the UN Sustainable Development Goals.
The ITU Artificial Intelligence and Machine Learning (AI/ML) Challenges are competitions where anyone can participate to solve problem statements to advance the achievement of Sustainable Development Goals (SDGs) using AI/ML. The competitions enable participants to connect with new partners – and new tools and data resources – to achieve goals set out by problem statements contributed by industry and academia.
International standards provide the guidelines and benchmarks needed to measure and improve the environmental impact of AI. Codifying established best practices, standards help mitigate risks such as high energy consumption and lifecycle emissions. They also provide measurement methodologies to assess GHG emissions and energy consumption, and thereby identify the actions needed to improve. This report explores the environmental implications of AI and presents a summary of relevant standards available and under development.
The analysis of the characteristics of the AI Readiness study’s use cases identified the main AI readiness factors: 1) availability of open data; 2) access to research; 3) deployment capability, along with infrastructure; 4) stakeholder buy-in enabled by standards (trust, interoperability, security); 5) a developer ecosystem created via open source; and 6) data collection and model validation via sandbox pilot experimental setups.
Experts predict that 90 per cent of online content will be generated by AI by 2025. How can we identify whether content was human-generated, AI-generated, or some combination? The problem of AI-generated media and deepfakes is not only technical but also ethical and social. An AI for Good Global Summit workshop brought together technology and media companies, artists, international organizations, standards bodies, and academia, to discuss the security risks and challenges of deepfakes and generative artificial intelligence (AI), technological innovations, and areas where standards are needed.