AI for Good blog

Can we build guidelines for trustworthy, ethical AI?

Cybersecurity | Ethics

Just last week, the European Union published its Ethics Guidelines for Trustworthy AI.

A few weeks earlier, the first version of the IEEE initiative on Ethically Aligned Design of Intelligent and Autonomous Systems was presented.

The impact of these two reports, coming from the European Union and from one of the leading international professional organizations of engineers, is potentially very large. (Full disclosure: I am a member of the EU high-level group on AI and of the executive committee of the IEEE Ethically Aligned Design (EAD) initiative, the bodies behind these two reports.)

Engineers are the ones who will ultimately implement AI in line with ethical principles and human values, but it is policy-makers, regulators and society in general who can set and enforce its purpose.

We are all responsible.

Moving from principles to guidelines

Both documents go well beyond proposing a list of principles: they aim to provide concrete guidelines for the design of ethically aligned AI systems. Systems that we can trust, systems that we can rely on.

Based on the results of a public consultation process, the EU guidelines put forward seven requirements that are necessary (but not sufficient) to achieve trustworthy AI, together with methods to meet them and an assessment list to check them.

These requirements include:

  • Human agency and oversight
  • Technical robustness and safety
  • Privacy and data governance
  • Transparency
  • Diversity, non-discrimination and fairness
  • Societal and environmental well-being
  • Accountability


The IEEE-EAD report is a truly bottom-up international effort, resulting from the collaboration of many hundreds of experts across the globe, including Asia and the Global South. It goes beyond a list of requirements or principles and provides in-depth background on many different topics.

The IEEE-EAD community is already hard at work on defining standards for the future of ethical intelligent and autonomous technologies, ensuring the prioritization of human well-being. The EU will be piloting its assessment list in the coming months, through an open call for interest.

Ensuring the purpose of AI is what we want

As mathematician and philosopher Norbert Wiener wrote back in 1960: “We had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Moreover, we need to put in place the social and technical constructs that keep that purpose intact as algorithms and their contexts evolve.

Ensuring an ethically aligned purpose is more than designing systems whose result can be trusted. It is about the way we design them, why we design them, and who is involved in designing them. It is a work of generations. It is a work always in progress.

Obviously, errors will be made, disasters will happen. We need to learn from mistakes and try again — try better.

It is not an option to ignore our responsibility. AI systems are artifacts decided, designed, implemented and used by us. We are responsible.

We are responsible to try again when we fail (and we will fail), to observe and denounce when we see things going wrong (and they will go wrong), to be informed and to inform, to rebuild and improve.

The principles put forward by the EU and the IEEE are the latest in a long list of sets of principles, by governments, civil organizations, private companies, think tanks and research groups (Asilomar, Barcelona, Montreal, Google, Microsoft, … just to mention a few). However, it is not just about checking that a system meets the principles on whichever list is your favorite.

These principles are not checklists, or boxes to tick once and forget. These principles are directions for action. They are codes of behavior — for AI systems, but, most importantly, for us.

It is we who need to be fair, non-discriminatory and accountable, to ensure the privacy of ourselves and others, and to aim at social and environmental well-being. The codes of ethics are for us. AI systems will follow.

There is work to be done. We, the people, are the ones who can and must do it. We are responsible.

*Dr. Dignum will be a speaker at the AI for Good Global Summit in Geneva, Switzerland. The original version of this article was published on Medium.

Photo: Anthony Kwan/Bloomberg via Getty Images
