AI for Good blog

How the responsible use of AI will determine its impact. Interview with Stuart Russell (VIDEO)

Ethics | Inclusivity

If humans are going to build something that’s potentially more intelligent and more powerful than themselves, knowing how to control it is of the utmost importance.

This was one key message from remarks made by Stuart Russell, Professor of Computer Science at the University of California, Berkeley, during the second edition of the AI for Good Global Summit at ITU headquarters in Geneva, Switzerland.

Thanks to gatherings like the Summit, Russell feels optimistic that a real awakening is under way in the AI field, along with a realization of the social responsibility that comes with it.

He believes that AI systems still have a long way to go before they understand enough about the world to actually pose a real threat, but that safety should always remain a priority.

“It’s not risks vs. benefits, it’s you can’t have the benefits unless you address the risks.”

Asked whether AI is a force for good, Russell said that’s the wrong way to think about it.

“Is nuclear technology a force for good? If we choose to use it for making cheaper electricity without pollution, then yes, but if we choose to use it to make weapons to kill each other, then no.”

He went on to say: “Like many powerful technologies, AI offers us a choice. The real question is: ‘Are we good?’, not ‘Is the technology good?’”

While the future of AI looks promising, with advances in many areas, including human-level capability in dictation and machine translation and the growing acceptance of AI assistants as part of people’s lives, Russell says important questions still remain.