
AI is helping spread misinformation faster. How can we deal with that?

Disinformation | Ethics

Artificial Intelligence (AI) is poised to improve people’s lives worldwide and accelerate progress on the United Nations Sustainable Development Goals (SDGs).

Yet AI can also bring with it a host of unintended consequences. One of the most pernicious could be its ability to spread misinformation at a pace and scale never seen before.

At the recent AI for Good Global Summit, participants from academia, the United Nations, major media outlets and the private sector gathered to discuss the unintended consequences of AI and AI-powered misinformation.

The death of authenticity?

To be sure, AI has provided a wealth of task-facilitating tools for media and journalism, with its impact visible in everything from voice-recognition transcription tools to automatically generated content.

When deployed properly, these advances help professional journalists focus their energy on nuanced reporting of the stories that really matter.

On the other hand, AI software also facilitates the creation of synthetic media, such as fake videos or “deepfakes”, and synthetic audio content.

These tools for misinformation are all the more insidious given our increasingly close relationship with technology, as well as the ease and speed with which information spreads online.

“If we go to a time where we don’t trust our senses – what we hear, what we see – what does that mean for us?” – Jennifer Strong, Wall Street Journal

One major aspect of the issue is the ethical-legal gap between the speed of technological innovation and the application of timely and appropriate policies.

Ethical-legal gap

Jo Floto of the BBC cited the discrepancy in legal obligations between a bot and a traditional broadcaster, which in the United Kingdom is subject to a “legal compulsion” to be fair under the impartiality requirements of the Representation of the People Act.

“The Law recognizes that broadcasters have more of an influence… And so there is, in the UK, a legal compulsion on me, on the BBC… to be fair and balanced during an election campaign… the law is quite clear on that. The law on the internet, on social media and on funding, and on those responsibilities – it hasn’t even begun to be drafted.”

Another aspect is a lack of understanding, on behalf of citizens and politicians alike, of the technology and its implications. “The technology is going way ahead of what people are prepared to accept, because they don’t know what they’re accepting,” said Floto.

What about intended (negative) consequences?

The risks of AI for misinformation are not all “unintentional,” panelists pointed out.

From a crime-research point of view, Irakli Beridze of the United Nations Interregional Crime and Justice Research Institute noted that malicious code, or malicious AI used for cyber-crime, is very much intentional.

“One of the biggest challenges which we are facing here is to understand what are those intended consequences,” said Beridze. “And what are the mechanisms, what are the technology we can fight it with and how to empower, in our case, law enforcement agencies, or the UN member states with knowledge, understanding and the real tools for them to actually understand it.”

Transparency and bias

Jennifer Strong of the Wall Street Journal called for better accountability and transparency, especially in the technology sector, where corporate incentives can sometimes reward “moving fast and breaking things” rather than careful deliberation.

AI doesn’t create these problems, panelists agreed. Rather, it magnifies existing human problems with transparency and bias.

Millions of people are already affected in real ways by non-transparent decisions, said Ms Strong, citing credit scores in the US as an example.

L. Song Richardson of the University of California, Irvine School of Law pointed out that the challenges posed by algorithms will not be much different from those we face today, citing the existing racial and gender biases in employment processes and the lack of accountability around them.

“Because these machines learn from existing data, it is not that different,” said Richardson. “The type of bias that we’ll see from algorithms [is similar to what] we already see without algorithms. If we are asking these questions in the world of artificial intelligence and attempting to solve them, let’s also ask the ground zero question of what are we doing to solve those problems that currently exist. That to me is the challenge.”

By Pamela Lian, ITU News
