Addressing the dark sides of AI


Artificial intelligence (AI) is a frontier technology with profound implications for human beings, cultures, societies and the environment. AI has the potential to be our ally in the struggle for a more equitable, fair and sustainable future. Remarkably, AI generated some of the earliest alerts about the COVID-19 outbreak, even before it was officially confirmed, and this analytical capacity also helped accelerate the discovery of vaccines. Self-learning algorithms and smart machines are playing an increasingly important role in our efforts to recover from the current crisis, and digital platforms and infrastructure have kept many economies, schools and societies running for those with access. However, the digital and technological divide risks further exacerbating inequalities.

But the dark side of AI has also come to light. AI technologies can be used with malicious intent, sparking new conflicts and intensifying existing ones. AI tools have been exploited to generate fake news articles and content, spreading hate speech, disinformation and misinformation online. Deepfakes, which are artificially generated videos and photos, may in the wrong hands become effective tools for disseminating fabricated political messages, made even more persuasive by algorithmic micro-targeting on social media. Furthermore, AI technologies risk widening the gap between different groups and exacerbating inequalities and divides. Biases are often present in AI datasets and can spread and reinforce harmful stereotypes. Surveillance technology is also impacting human rights, and AI is being used to manipulate behaviour through nudges.

Governance has so far fallen short in addressing these challenges. The UN has a key role to play in setting global standards and strengthening regulation to protect human rights and human dignity against threats posed by new technologies. This is why UNESCO is developing the first global standard-setting instrument on the ethics of AI. In addition to the values and principles that ground the normative part of the draft Recommendation, it includes concrete policy actions aimed at moving from the "what" to the "how" and at ensuring implementation across different domains. In particular, it introduces two important tools: 1) an ethical impact assessment (EIA), which assesses the impact of AI systems throughout their life cycle, and 2) a readiness benchmarking methodology, which will help countries assess their level of preparedness for implementing ethical AI.

The aim of the fireside chat is to discuss how best to develop these tools and put in place the processes needed to vet for biased algorithms, privacy violations and unexplainable outputs. Speakers will discuss existing examples, promising research avenues, practical applications and the ensuing policy implications for the field of AI.
