Challenges and opportunities of Artificial Intelligence for Good
Ahead of the ITU Plenipotentiary Conference 2018 (PP-18) – the top policy-making body of the International Telecommunication Union, taking place from 29 October to 16 November in Dubai – ITU News is highlighting some important and emerging areas of ITU’s work. The following is an ITU Plenipotentiary Backgrounder; the original is available on the PP-18 website.
Overview
- In recent years, Artificial Intelligence (AI) has been advancing at an exponential pace. Artificially intelligent machines are able to sift through and interpret massive amounts of data from various sources to carry out a wide range of tasks.
- For example, AI’s ability to analyse high-resolution images from satellites, drones or medical scans can improve responses to humanitarian emergencies, increase agricultural productivity, and help doctors identify skin cancer or other illnesses.
- The transformative power of AI, however, also comes with challenges, ranging from issues of transparency, trust and security, to concerns about displacing jobs and exacerbating inequalities.
- When AI is leveraged for good by ensuring it is safe and beneficial for all, it can rapidly accelerate progress towards all 17 United Nations Sustainable Development Goals (SDGs).
AI promises
Software has become significantly smarter in recent years.
The current expansion of AI is the result of advances in a field known as machine learning. Machine learning involves algorithms that allow computers to learn on their own by analysing data and performing tasks based on examples, rather than relying on explicit programming by a human.[1]
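To make "learning from examples" concrete, the sketch below trains a simple classifier on a handful of labelled data points and lets it classify a new one. The scikit-learn library and the tiny crop-health dataset are illustrative assumptions, not something described in this backgrounder.

```python
# Minimal sketch of machine learning: the algorithm infers its own decision
# rule from labelled examples instead of following hand-written "if" rules.
# The library (scikit-learn) and the toy crop-health data are illustrative only.
from sklearn.tree import DecisionTreeClassifier

# Each example: [vegetation_greenness, soil_moisture]; label: 1 = healthy, 0 = stressed
examples = [[0.90, 0.80], [0.85, 0.70], [0.80, 0.65],
            [0.30, 0.20], [0.25, 0.35], [0.20, 0.30]]
labels   = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier()
model.fit(examples, labels)           # "training": the rule is learned from the data

print(model.predict([[0.75, 0.60]]))  # classify a new, unseen observation -> [1]
```

Changing the labelled examples changes the learned rule without any change to the code, which is the sense in which the system learns rather than being explicitly programmed.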
A machine-learning technique called deep learning, inspired by biological neural networks, finds and remembers patterns in large volumes of data. Deep-learning systems learn to perform tasks by considering examples, generally without task-specific programming, and often outperform traditional machine-learning algorithms.[2]
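As a rough illustration of how such layered networks find patterns, the sketch below trains a tiny two-layer network from scratch on the classic XOR pattern, which no single linear rule can capture. The NumPy implementation, the toy data and all parameter choices are assumptions made purely for illustration.

```python
# Illustrative sketch of a neural network: layers of simple "neurons" whose
# connection weights are adjusted until a pattern in the data is learned.
# NumPy, the XOR toy data and all settings here are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR pattern to learn

W1, b1 = rng.normal(size=(2, 8)), np.zeros((1, 8))  # hidden layer (8 neurons)
W2, b2 = rng.normal(size=(8, 1)), np.zeros((1, 1))  # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= hidden.T @ d_out; b2 -= d_out.sum(axis=0, keepdims=True)
    W1 -= X.T @ d_hid;      b1 -= d_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # should approach [[0], [1], [1], [0]]
```

Real deep-learning systems use many more layers and vastly larger datasets, but the principle is the same: the weights, not a human programmer, encode the learned pattern.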
Big Data – extremely large data sets that can be analysed computationally to reveal patterns, trends and associations[3] – combined with the power of AI and high-performance computing, is generating new forms of information and insight with tremendous value for tackling humanity’s greatest challenges.
Below are just a few examples showing how AI can be applied for good:
- Agricultural productivity can be increased through digitization and analysis of images from automated drones and satellites.
- Improving the collection, processing and dissemination of health data and information can enhance patient diagnosis and treatment, especially for people living in rural and remote areas. Better data on climate and environmental conditions can also help governments better predict the occurrence of malaria, control the spread of the disease, and deploy medical resources more efficiently.
- AI can be used to assess the learning capability of students and help them develop confidence to master subjects.
- AI can help people with disabilities or special needs in numerous ways. For example, AI is getting better at converting text to speech and speech to text, and could thus help people with visual or hearing impairments to use information and communication technologies (ICTs).
- AI is already helping to create smart sustainable cities.
- Climate-change data analysis and climate modelling infused with AI can help predict climate-related challenges and disasters.
- Pattern recognition can track marine life migration, concentrations of undersea life and fishing activities, supporting sustainable marine ecosystems and helping to combat illegal fishing.
Challenges
While the opportunities of AI are great, there are risks involved.
Datasets and algorithms can reflect or reinforce gender, racial or ideological biases.[4] When the datasets (compiled by humans) that AI systems rely on are incomplete or biased, they can lead to biased AI conclusions.
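A hypothetical illustration of this point: a model trained on past decisions that were themselves biased learns the bias rather than the merits of each case. The loan scenario, the scikit-learn model and every number below are fabricated purely for the sketch.

```python
# Illustrative only: a toy "loan approval" model trained on historically biased
# decisions reproduces that bias. The scenario and data are entirely fabricated.
from sklearn.linear_model import LogisticRegression

# Features: [income_score, group_membership]; label: past approval decision.
# In this fabricated history, applicants from group 1 were always rejected,
# regardless of income.
X = [[0.9, 0], [0.8, 0], [0.7, 0],
     [0.9, 1], [0.8, 1], [0.7, 1]]
y = [1, 1, 1,
     0, 0, 0]

model = LogisticRegression().fit(X, y)

# Two applicants with identical income scores get different outcomes, because
# the model has learned the historical bias in the data, not creditworthiness.
print(model.predict([[0.85, 0], [0.85, 1]]))  # typically [1 0]
```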
Humans are increasingly using deep-learning technologies to decide who gets a loan or a job. But the workings of deep-learning algorithms are opaque: they give humans little insight into why AI arrives at certain associations or conclusions, when failures may occur, or when and how AI may be reproducing bias.[5]
AI can deepen inequalities by automating routine tasks and displacing jobs.
Software, including the software that runs cell phones, security cameras and electrical grids, can have security flaws.[6] These can lead to theft of money or identity, or to internet and electricity failures.
New threats to international peace and security can also emerge from advances in AI technologies. For example, machine learning can be used to generate fake video and audio to influence votes, policy-making and governance.[7]
Solutions: ensuring AI is used for good
The development and adoption of relevant international standards, together with the availability of open-source software, will provide a common language and tools for coordination, enabling many independent parties to participate in the development of AI applications. This can help bring the benefits of AI advances to the entire world, while mitigating their negative effects.
Indeed, it is vital that a diverse range of stakeholders guide the design, development and application of AI systems. Accurate and representative AI conclusions require datasets that are accurate and representative of all. Furthermore, safeguards need to be put in place to promote the legal, ethical, privacy-respecting and secure use of AI and Big Data.
Increased transparency in AI, with the aim of informing legal or medical decision-making, will allow humans to understand why AI arrives at certain associations or conclusions. This, in turn, will encourage people to use their expertise, experience and intuition to validate those conclusions or to make a different decision than the one proposed by the machine. While the machine analyses data and arrives at conclusions with far greater speed, and often greater accuracy, than before, it is still humans who have the power to question the machine’s conclusions and make the final decisions.
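One simple form of such transparency (far from sufficient for deep networks, but illustrative) is exposing which inputs drive a model's conclusion so that a human expert can sanity-check it. The interpretable scikit-learn model, the fabricated medical-style features and the synthetic data below are assumptions for the sketch, not a description of any particular system.

```python
# Illustrative sketch of model transparency: an interpretable model reports how
# much each input feature contributed, so a human expert can question the result.
# Feature names, data and the scikit-learn model are assumptions for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["lesion_size", "asymmetry", "patient_age"]

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))
# Fabricated rule behind the toy labels: larger, more asymmetric lesions are "positive".
y = ((0.6 * X[:, 0] + 0.4 * X[:, 1]) > 0.5).astype(int)

model = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Feature importances let a clinician see which inputs the model leaned on,
# and challenge the conclusion if the reasoning looks implausible.
for name, importance in zip(feature_names, model.feature_importances_):
    print(f"{name}: {importance:.2f}")
```

Here a reviewer would see that the model ignores patient age and relies on lesion size and asymmetry, and could question the output if that weighting looked clinically implausible.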
To balance the consequences of AI on employment and benefit from the new job opportunities that AI offers, it is essential to create environments that are conducive to acquiring digital skills, be it through formal education or training at the workplace. In particular, AI will bring employment opportunities to people who have the advanced digital skills needed to create, manage, test and analyse ICTs.
Safeguards that protect the safety, privacy, identity, money and possessions of end users need to be deployed to address AI-related security challenges in areas as diverse as e-finance, e-governance, smart sustainable cities and connected cars.
ITU’s contribution to AI for good
Facilitating conducive policy and regulation
As the United Nations’ specialized agency for information and communication technologies, ITU brings together stakeholders representing governments, industries, academic institutions and civil society groups from all over the world to gain a better understanding of the emerging field of AI for good.
Building on the success of ITU’s first AI for Good Global Summit, the 2018 Summit was held in collaboration with 32 UN family agencies and other global stakeholders to identify strategies to ensure that AI technologies are developed in a trusted, safe and inclusive manner, with equitable access to their benefits. The Summit spawned more than 30 pioneering ‘AI for Good’ project proposals on expanded and improved health care, enhanced monitoring of agriculture and biodiversity using satellite imagery, smart urban development and trust in AI.
ITU maintains an AI Repository where anyone working in the field of artificial intelligence can contribute key information about how to leverage AI for good. This is the only global repository that identifies AI-related projects, research initiatives, think tanks and organizations that aim to accelerate progress on the 17 United Nations Sustainable Development Goals (SDGs).
ITU regularly brings together heads of ICT regulatory authorities from around the world to share views and developments on AI and other pressing regulatory issues, address questions of governance and strengthen collaboration to use AI for good.
Setting standards
Moving forward, international standards – the technical specifications and requirements that AI and other technologies will need to fulfil to perform well – can help address the risks of AI by helping to ensure that machine learning is ethical, predictable, reliable and efficient.
The ITU Focus Group on Machine Learning for Future Networks, including 5G, has been examining how technical standardization can support emerging applications of machine learning in fields such as Big Data analytics, as well as security and data protection in the coming 5G era. The Group will draft specifications to enable ICT networks and their components to adapt their behaviour autonomously in the interests of ethics, efficiency, security and optimal user experience.
The 2018 AI for Good Global Summit also generated a call for more standardization in health, answered by the newly created Focus Group on Artificial Intelligence for Health (FG-AI4H), which aims, inter alia, to create standardized benchmarks for evaluating artificial intelligence algorithms used in healthcare applications.
Relevant links
- AI for Good Global Summit: 2018 | 2017
- AI Repository
- Global Symposium for Regulators 2018
- ITU Focus Group on Artificial Intelligence for Health
- ITU Focus Group on Machine Learning for Future Networks including 5G
- ITU News Magazine on Artificial Intelligence
- ITU News blog on building trust for Artificial Intelligence
- ITU News Magazine on AI for Social Good
- ITU News blogs on Artificial Intelligence
READ MORE: Backgrounders on the ITU Plenipotentiary Conference Website