
Bringing A.R.T. to A.I.


As intelligent systems increasingly make decisions that directly affect society, perhaps the most important research direction ahead for AI is to rethink the ethical and societal implications of those decisions.

However intelligent, autonomous, and efficient they may become, AI systems are artefacts: tools that support us in our daily tasks, improve our lives, and increase our wellbeing. But we are the ones responsible for them. We are the ones who determine the questions that AI systems can answer, and answer they will.

‘We’ and ‘our’ carry two meanings here: they refer to the moral, societal, and legal responsibility of those who develop, manufacture, and use AI systems; and they indicate that AI will affect all humankind. “AI for Good” demands that we truly consider all humankind when we ask whose lives and wellbeing AI can improve. Now is the time to decide.

Are we following the money, or following humankind’s best interests? Are we basing AI developments on shareholder value or on human rights and human values?

Responsible AI rests on three main pillars: Accountability, Responsibility, and Transparency. Together, these considerations form the A.R.T. principles for AI.

RELATED: AI for Good Global Summit – 2017 Report

Responsibility is core to AI development. It refers to the role of people as they develop, manufacture, sell, and use AI systems, but also to the capability of AI systems themselves to answer for their decisions and to identify errors or unexpected results. As the chain of responsibility grows, means are needed to link an AI system’s decisions to the fair use of data and to the actions of the stakeholders involved in those decisions, and to link moral, societal, and legal values to technological developments in AI.

Responsible AI is more than ticking ethical ‘boxes’ or bolting add-on features onto AI systems.

Rather, responsibility is fundamental to intelligence and to action in a social context. Education also plays an important role here, both in ensuring that knowledge of AI’s potential is widespread and in making people aware that they can participate in shaping societal development.

RELATED: How artificial intelligence will improve our lives: Tanmay Bakshi (VIDEO)

The second pillar, Accountability, is the capability to explain and answer for one’s own actions, and is closely associated with liability. Who is liable if an autonomous car harms a pedestrian? The builder of the hardware (sensors, actuators)? The builder of the software that enables the car to decide on a path autonomously? The authorities that allow the car on the road? The owner who personalised the car’s decision-making system to meet their preferences?

The car itself is not accountable; it is an artefact, but one that represents all of these stakeholders. Models and algorithms are needed that enable AI systems to reason about and justify their decisions based on principles of accountability. Current deep-learning algorithms cannot link their decisions to their inputs, and therefore cannot explain their actions in meaningful ways.

Ensuring accountability in AI systems requires both the function of guiding action (forming beliefs and making decisions) and the function of explanation (placing decisions in a broader context and classifying them in terms of social values and norms).

RELATED: Next leader in AI? Hong Kong is set to unlock the potential of young innovators

The third pillar, Transparency, refers to the need to describe, inspect, and reproduce the mechanisms through which AI systems make decisions and learn to adapt to their environment, and to the governance of the data used or created.

Current AI algorithms are essentially black boxes, so methods are needed to inspect both the algorithms and their results. Moreover, transparent data governance mechanisms are needed to ensure that the data used to train algorithms and guide decision-making is collected, created, and managed in a fair and clear manner, taking care to minimize bias and to enforce privacy and security. New and more ambitious forms of governance are among the most pressing needs if the inevitable advances in AI are to serve the societal good.

The development of AI algorithms has so far been driven by the goal of improving performance, resulting in efficient but very opaque algorithms.

Putting human values at the core of AI systems calls for a mindset shift among researchers and developers: toward ensuring Accountability, Responsibility, and Transparency rather than focusing on performance alone. I am confident that this shift will lead to novel and exciting techniques and applications, and that it will prove to be the way forward in AI research.

For more information about how AI can help solve humanity’s greatest challenges, go to ai.xprize.org/AI-For-Good. This article first appeared on the IBM Watson AI XPRIZE blog.

The second AI for Good Global Summit will be hosted at ITU headquarters in Geneva, 15-17 May 2018. The aim of the 2018 summit is to identify practical applications of AI with the potential to accelerate progress towards the United Nations’ Sustainable Development Goals. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits. Read the full report from 2017’s AI for Good Global Summit.

Virginia Dignum
Virginia Dignum is Associate Professor of Social Artificial Intelligence at the Faculty of Technology, Policy and Management at TU Delft. She holds a PhD from Utrecht University.
