
Can AI be made trustworthy?


As artificial intelligence (AI) systems get increasingly complex, they are being used to make forecasts – or rather, generate predictive model results – in more and more areas of our lives.

At the same time, concerns about reliability are on the rise, amid widening margins of error in increasingly elaborate AI predictions.

Management science offers a set of tools that can make AI systems more trustworthy. The discipline that brings human decision-makers to the top of their game can also be applied to machines, according to Thomas G. Dietterich, Professor Emeritus and Director of Intelligent Systems Research at Oregon State University.

Focus on failures

Human intuition still beats AI hands down in making judgment calls in a crisis. People – and especially those working in their areas of experience and expertise – are simply more trustworthy.

Studies by University of California, Berkeley, scholars Todd LaPorte, Gene Rochlin and Karlene Roberts found that certain groups of professionals, such as air traffic controllers or nuclear power plant operators, are highly reliable even in high-risk situations.

These professionals develop a capability to detect, contain and recover from errors, and they practice improvisational problem solving, said Dietterich during a webinar on the AI for Good platform hosted by the International Telecommunication Union (ITU).

This is because of their “preoccupation with failure”, he added. “They are constantly watching for anomalies and near misses – and treating those as symptoms of a potential failure mode in the system.”

Anomalies and near misses, rather than being brushed aside, are then studied for possible explanations, normally by a diverse team with wide-ranging specializations. Human professionals bring far higher levels of “situational awareness” and know when to defer to each other’s expertise.

During a crisis, authority tends to migrate to whoever has the expertise to solve the problem, regardless of their rank in the organization. In operating theatres and airplane cockpits, for example, people other than the chief surgeon or pilot are empowered to call out potential risks.

A mixed report card

These principles are useful when thinking about how to build an entirely autonomous and reliable AI system, or how to design ways for human organizations and AI systems to work together, said Dietterich.

AI systems can also acquire high situational awareness, thanks to their ability to integrate data from multiple sources and continually re-assess risks.

While Dietterich would give current AI systems an A grade for situational awareness, they would get a B for anomaly detection and failing grades on their ability to explain anomalies and improvise solutions.

More research is needed before an AI system can reliably identify and explain near-misses, he added. “We have systems that can diagnose known failures, but how do we diagnose unknown failures? What would it mean for an AI system to engage in improvisational problem solving that somehow can extend the space of possibilities beyond the initial problem that the system was programmed to solve?”

Predicting behaviour

Where AI systems and humans collaborate, a shared mental model is needed. The AI should not bombard its human counterparts with irrelevant information, for example, while the humans, for their part, should be aware of the details, capabilities and failure modes of the AI system.

Another form of anomaly is a breakdown in teamwork, whether between the humans and the AI or among the humans on their side of the organization. Human error, moreover, shouldn't be discounted.

AI systems, consequently, must also understand and be able to predict the behaviour of human teams, said Dietterich.

One way to train machines to explain anomalies, or to deal with spontaneity, could be exposure to the performing arts. For example, see an algorithm join human musicians in a phantom jam session.

Towards guaranteed trustworthiness

An AI system needs to have a model of its own limitations and capabilities and be able to communicate them. Dietterich and his team see two promising approaches to improve, and mathematically “guarantee” trustworthiness.

One is a competence model that uses quantile regression to predict the AI system's behaviour, with the “conformal prediction” method providing additional corrections. Yet this approach requires lots of data and remains prone to misinterpretation.
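
A minimal sketch of what such a competence model could look like in code is shown below: split conformal prediction layered on top of quantile regression, so that prediction intervals for the system's behaviour come with a statistical coverage guarantee. The gradient-boosted quantile regressors, the synthetic data and the 90 per cent coverage target are illustrative assumptions, not Dietterich's implementation.

```python
# Minimal sketch of a "competence model": split conformal prediction on top of
# quantile regression. Two quantile regressors give a rough prediction interval
# for the system's behaviour; a held-out calibration set then widens that
# interval just enough to hit the requested coverage rate.
# All models, data and parameters here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic data standing in for logged observations of the AI system's behaviour.
X = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.3, size=2000)

X_train, y_train = X[:1500], y[:1500]   # proper training set
X_cal, y_cal = X[1500:], y[1500:]       # calibration set

alpha = 0.1  # target miscoverage rate, i.e. 90% prediction intervals

# Fit lower and upper quantile regressors.
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# Conformity scores: how far each calibration point falls outside its interval.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))

# The empirical quantile of the scores is the correction that restores coverage.
q = np.quantile(scores, np.ceil((1 - alpha) * (len(y_cal) + 1)) / len(y_cal))

def predict_interval(x_new):
    """Return conformally corrected lower and upper bounds for new inputs."""
    return lo.predict(x_new) - q, hi.predict(x_new) + q

lower, upper = predict_interval(rng.uniform(-3, 3, size=(5, 1)))
print(np.c_[lower, upper])
```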

The other way is to make autonomous systems deal with their “unknown unknowns” via open category detection. For instance, a self-driving car trained on European roads might stumble over kangaroos in Australia. An anomaly detector using unlabelled data could help the AI system respond more effectively to such surprises.
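
As a rough illustration of the idea, the sketch below fits an anomaly detector on unlabelled features from the training environment and has the system defer to a safe fallback whenever a run-time input looks too unfamiliar. The isolation forest, the feature vectors and the fallback behaviour are all assumptions made for the example, not a prescribed implementation.

```python
# Minimal sketch of open category detection: an anomaly detector is fitted on
# unlabelled inputs from the training environment ("European roads"), and the
# system defers to a safe fallback whenever a run-time input falls too far
# outside that distribution (the kangaroo it has never seen).
# Feature vectors, threshold and fallback are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Unlabelled feature vectors collected in the familiar training environment.
familiar = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

detector = IsolationForest(contamination=0.01, random_state=0).fit(familiar)

def handle(x, classifier):
    """Trust the classifier on familiar inputs, defer on anything anomalous."""
    if detector.predict(x.reshape(1, -1))[0] == -1:  # -1 means flagged as anomalous
        return "defer_to_safe_fallback"               # e.g. slow down, alert a human
    return classifier(x)

# A run-time input drawn from a shifted distribution is flagged and deferred.
novel = rng.normal(loc=6.0, scale=1.0, size=8)
print(handle(novel, classifier=lambda x: "proceed"))
```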

 

Learn more about the potential of artificial intelligence to deal with sudden surprises like heart attacks at the upcoming AI for Good keynote AI for Heart Attack Prevention: The Story of Iker Casillas, World Cup Winning Goalkeeper.

Image credit: Andrew Kostyrskiy via Pexels
