Making sure AI is harnessed for good

The weaponization of technology is as old as warfare itself. Throughout history, innovations from dynamite to the airplane have been used for unintended, and often destructive, purposes.

Today’s technological boom — what is often referred to as the ‘Fourth Industrial Revolution’ — is ushering in a suite of technologies that have the potential to transform society, but they can also be repurposed for malicious ends. The growing range of artificial intelligence-based innovations is no exception.

Addressing the peace and security challenges posed by new weapons and means of warfare is an important responsibility of the United Nations and my office, the UN Office for Disarmament Affairs.

However, when I first started my job as the High Representative for Disarmament Affairs twelve months ago, I must admit I was not well-versed in the nuances of AI, including its impact on international peace and security.

For me, it was at the AI for Good Global Summit, hosted by the International Telecommunication Union in Geneva last June, that I began to realize just how much our actions now will shape the staggering advances taking place in computational science and robotics.

AI-based technologies are revolutionizing transportation, manufacturing, healthcare, and education, creating potential for radical improvements in the lives of the world’s most vulnerable people.

When it comes to international peace and security, the same technology that underlies these innovations could be used to great benefit, from the verification of treaty compliance—such as the pioneering work being done by the Comprehensive Nuclear-Test-Ban Treaty Organization—to applications in peacekeeping operations and the delivery of humanitarian assistance.

However, it is also true that this technology could be weaponized, with the potential to transform existing weapons and their delivery systems, as well as decision-making structures. Military applications of AI could include autonomous weapons systems and command and control structures for use in every domain of warfare, including cyber- and outer-space, and possibly even in nuclear arsenals.

As countries seek to become technological leaders in this field, we are witnessing the beginnings of a 21st century version of the Cold War space race. We need to make sure it doesn’t become a 21st century version of the Cold War arms race.

The application of AI-based systems in military command and control, communications, intelligence, and weapons systems raises some serious questions, including those related to the potential for accidents, miscalculation, and escalation control, as well as accountability.

For autonomous weapons systems, these questions include how to ensure such systems are used in full compliance with international humanitarian law and international human rights law. As with all networked devices, AI-enhanced weapons systems could also be vulnerable to outside interference or hacking, further increasing the chances of miscalculation and confusion.

The democratization of technology and its unprecedented dissemination have been a boon to millions, and AI-based applications are no exception. However, care also needs to be taken with regard to how this technology could be misused. Malicious non-state actors such as terrorist groups are showing an increasing aptitude for turning technology to their own ends, from drone attacks to online recruitment.

As Secretary-General Guterres recently said, “Our challenge is to maximize the benefits of the technological revolution while mitigating and preventing the dangers.”

Doing so requires a multifaceted response. But before we—and by we, I mean the international community—even start, we need to have some serious conversations to better understand how this technology is already being used, as well as its long-term ramifications. We need to make sure those deliberations are inclusive—levels of understanding about AI and its applications vary, and we all need to get on the same page.

We also need to develop the multi-stakeholder coalitions necessary to address the challenges and opportunities of the AI revolution. The private sector is a key driver behind much of this technology, and many leading companies in the field have already said they fear that advances made could be weaponized or repurposed in ways that challenge our collective ability to respond. In addition to industry, governments should also welcome discussions with humanitarian organizations and academia on the cutting edge of AI-based innovations.

Finally, there is a spectrum of potential responses that might be necessary—from “soft law” approaches such as industry codes of conduct, to more formal transparency and confidence measures, to, if necessary, legally binding instruments. Each of these possibilities should be properly examined.

I firmly believe that the United Nations remains the forum in which the global community can address the pressing peace and security challenges of the day, including those posed by AI and other emerging technologies.

Some of the risks and challenges associated with the emergence and application of new technologies are already being addressed at the UN. For example, the Convention on Certain Conventional Weapons provides a forum for countries to discuss possible applications of AI with a range of stakeholders, as well as policy options to address concerns.

As the Secretary-General said, now is the time for all of us to come together and “consider what should constitute responsible state behavior and responsible innovation,” including in the field of AI-based innovations and applications.

We have an opportunity now to build shared understandings and ensure that the benefits of AI are distributed in an equitable manner that supports the prosperity and security of all. It is an opportunity we must take advantage of.

Izumi Nakamitsu, UN Under-Secretary-General and High Representative for Disarmament Affairs

The original version of this article first appeared in XPRIZE. Views expressed in this article do not necessarily reflect those of ITU.
