AI for Good blog

Is ‘provably beneficial’ AI possible?


At some stage we should expect the machines to take control – at least, that is what Alan Turing predicted for the future of machine intelligence in 1951.

Despite this troubling prognosis, there are reasons to be hopeful about the future of artificial intelligence (AI), according to Stuart Russell, Professor of Electrical Engineering and Computer Sciences at UC Berkeley and author of ‘Human Compatible.’

“AI is already helping to solve global problems,” he noted in his Breakthrough Days keynote during the 2020 virtual AI for Good Summit.

So, why can’t we just make AI ‘good’? Because it is not yet possible using current AI frameworks and models, Russell argued.

The need for provably beneficial AI

The ‘standard model’ of AI is to build machines that optimize a fixed, externally specified objective. The problem is that humans cannot specify objectives perfectly.

“When we talk about ‘AI for good’, we do not know how to define ‘good’ as a fixed objective in the real world that could be supplied to standard model AI systems,” Russell explained. “The problem is that we don’t know how to specify objectives completely and correctly […] and if you create a machine that is optimizing the wrong objective, you really have a problem.”

So, in effect, we lose control over AI systems pursuing explicitly defined goals, because the machine ends up making decisions based on an imperfect statement of our preferences.

Instead of designing AI with a fixed goal of ‘good’, we should build in uncertainty about human preferences.

That means shifting our thinking of AI to being complementary to – not in competition with – human intelligence.

“The machine’s goal is to help a person achieve those preferences, but it’s only the human that knows what those preferences are,” said Russell. “So the machine operates under uncertainty.”

In this way, the AI becomes an assistant rather than a decision-maker: it defers to the human and asks permission before taking action. With this framework in mind, AI becomes a service for problem solving that we can use more freely.
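The deferral behavior described above can be illustrated with a small sketch. This is a hypothetical toy, not Russell's actual formalism (which involves Bayesian reasoning over reward functions in an assistance game): the assistant holds a probability distribution over candidate human preferences, acts only when its confidence clears a threshold, and otherwise asks the human. All names here (`UncertainAssistant`, `choose`, `observe`) are invented for illustration.

```python
# Toy sketch of an assistant that operates under uncertainty about
# human preferences and defers to the human when unsure.

class UncertainAssistant:
    def __init__(self, preference_beliefs, confidence_threshold=0.9):
        # preference_beliefs: dict mapping candidate preference -> probability
        self.beliefs = dict(preference_beliefs)
        self.threshold = confidence_threshold

    def choose(self):
        # Pick the preference currently believed most likely.
        best, prob = max(self.beliefs.items(), key=lambda kv: kv[1])
        if prob >= self.threshold:
            return f"act: {best}"                    # confident enough to act
        return f"ask: is '{best}' what you want?"    # defer to the human

    def observe(self, preference, weight=3.0):
        # Reweight beliefs after the human confirms a preference
        # (a crude stand-in for Bayesian updating on human feedback).
        self.beliefs[preference] *= weight
        total = sum(self.beliefs.values())
        self.beliefs = {k: v / total for k, v in self.beliefs.items()}


assistant = UncertainAssistant({"tea": 0.6, "coffee": 0.4})
print(assistant.choose())   # uncertain, so it asks permission
assistant.observe("tea")    # the human confirms twice
assistant.observe("tea")
print(assistant.choose())   # now confident enough to act
```

The key design point, in Russell's framing, is that only the human knows the true preferences; the machine's uncertainty is what makes it willing to ask rather than act.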

This is what Russell calls provably beneficial AI, which can help us solve problems and “make changes in the world for the better,” he said.

Creating the world we want with AI

Russell pointed to several examples in which AI is already being used to advance the UN Sustainable Development Goals: delivering simple forms of education and healthcare advice, detecting illegal fishing and deforestation early, and predicting potential famines so that aid agencies can direct assistance and best practices where they are needed.

The recently published UN Compendium on AI activities outlines several other ways that AI is being used to advance the Sustainable Development Goals.
But caution must also be exercised, Russell warned.

“AI is also creating new global problems and as the technology accelerates, the problems created through its misuse will also accelerate,” he said. Racial bias and spreading misinformation are just some of the problems that AI exacerbates today.

Shifting to a new foundation that allows for ‘provably beneficial AI’ – moving away from a fixed objective to one of preferences – will allow us to “think about what kind of society we might want,” Russell suggested, “rather than constantly responding to crisis after crisis and struggling to meet the basic needs of people.”

This will require communities and stakeholders to agree on what these preferences might be – but global collaboration is possible.

Russell pointed to one example in which “20,000 teams from 150 countries showed how to collaborate to improve the capabilities of AI systems for an important goal.” What was that goal? Netflix movie rating prediction.

In closing, Russell had one more piece of advice for the audience: despite its potential to accelerate human progress and development, even ‘provably beneficial AI’ is not a panacea.

“Let’s not forget that the solutions to our problems really are up to us and not up to a technology to solve,” he said.


Image credit: Alexander Sinn via Unsplash
