AI for Good blog

How to govern AI to make it a force for good: Harvard’s Urs Gasser

Ethics | Inclusivity | Innovation & Creativity

There is incredible potential for Artificial Intelligence (AI) to help the world achieve the United Nations’ Sustainable Development Goals (SDGs). But the success of these efforts will depend on how regulators and policymakers address questions of governance.

That was one of the main messages of Urs Gasser, Executive Director of the Berkman Klein Center for Internet and Society and Professor of Practice at Harvard Law School, in a video interview with ITU during the recent Global Symposium for Regulators (GSR) in Geneva.

‘How do we synchronize the speed of advancement in tech with the speed of being smart about regulation?’ – Urs Gasser, Executive Director of the Berkman Klein Center for Internet and Society and Professor of Practice at Harvard Law School

“Everyone is talking about Artificial Intelligence and its many different applications, whether it’s self-driving cars or personal assistance on the cell phone or AI in health,” he says. “It raises all sorts of governance questions, questions about how these technologies should be regulated to mitigate some of the risks but also, of course, to embrace the opportunities.”

For GSR’s AI Development Series, Gasser wrote a paper on setting the stage for AI governance that draws on conversations with global policymakers and distills key themes to help leaders “chart the pathway forward.”

In the interview, Gasser identifies three things policymakers and regulators should consider when developing strategies for dealing with emerging technologies like AI.

Bridge informational asymmetries

One of the biggest challenges with AI is its complexity, which creates a divide between the knowledge of technologists and that of the policymakers and regulators tasked with addressing it, Gasser says.

“There is actually a relatively small group of people who understand the technology, and there is potentially a very large population affected by the technology,” he says.

RELATED: How ICT regulators can adapt to harness emerging technologies for good

This information asymmetry requires a concerted effort to increase education and awareness, he says.

“How do we train the next generation of leaders who are fluent enough to speak both languages and understand engineering enough, as well as the world of policy and law enough, and ethics, importantly, to make these decisions about governance of AI?”

Ensure inclusive effects

Another challenge is to ensure that new technologies benefit all people in the same way, Gasser says.

For example, autonomous vehicles offer tremendous opportunities to increase efficiency in transportation, but they rely on digital services like Google Maps. Many urban areas, such as favelas in Brazil, are not covered by these maps, Gasser says.

“You can make the argument that in the places where the technology would show the most benefit, because these are digital ‘have nots’ to begin with, these populations are now disadvantaged yet again with the next generation of technology,” he says.

Increasing inclusivity requires efforts on the infrastructural level to expand connectivity and also on the data level to provide a “data commons” that is representative of all people, he says.


Synchronize innovation with regulation

AI development is still in relatively early stages, yet the speed at which it is evolving makes it even more important to collaborate on efforts to use the technology “for good and avoid the pitfalls,” Gasser says.

RELATED: Emerging technologies will require innovative regulation: Brahima Sanou

“How do we synchronize the speed of advancement in tech with the speed of being smart about regulation that, to be sure, also supports and enables the innovation but yet also addresses some of these fundamental challenges?”

The issue of governance will only become more relevant in the future, as AI causes a gradual shift in autonomy from humans to machines, he says.

“Decisions that were previously made by humans are now moving toward the machine,” he says. “The scale at which this is going to happen is unprecedented. We are really only at the beginning of all of this. This will keep us busy for a while.”
