
Navigating AI Standards: Perspectives from Leading Standardization Organizations at the AI Policy Summit 2024


by Cindy X. S. Zheng


“With every breakthrough in science and technology, we must continue coming together to develop the standards required to thrive in new frontiers and that’s what our standards process is built for,” said Bilel Jamoussi.

At the 5th AI Policy Summit in Zurich, organized by RegHorizon and ETH Zurich Center for Law and Economics, experts from academia, governments, civil society, and industry brought diverse perspectives to the discussion, all highlighting the need to navigate and harmonise AI standards processes.

AI for Good and the International Telecommunication Union (ITU) contributed as key strategic partners to this landmark Swiss conference.

Kicking off AI Policy with a nod to AI for Good

The event’s agenda was filled with intersecting policy topics in the field of AI, such as Energy and Climate, the Economy, and Youth, along with a special focus on the Swiss regulatory ecosystem.

Bilel Jamoussi, Deputy to the Director of ITU’s Telecommunication Standardization Bureau (TSB) and Chief of its Telecommunication Standardization Policy Department, delivered a keynote focused on how to harness AI for societal good. ITU has been instrumental in fostering the convergence of different areas of expertise; the AI for Good platform in particular has stimulated connections among AI specialists, users, data owners, and domain experts.

AI for Good embodies a collaborative environment built on inclusive and diverse partnerships, which are crucial for driving sustainable AI governance. With over 40 UN partners and members from the private sector, civil society, and academia, as well as a significant online community, the platform embraces a multi-stakeholder approach.

Standard-setting bodies for AI development and governance

The fourth panel of the AI Policy Summit discussed ‘Risk Management and Bringing Coherence to AI Governance through Standards – International Landscape’, focusing on the implementation aspects of AI policy.

This time as a panellist, Bilel Jamoussi shared the stage with Elham Tabassi, Chief AI Advisor and Associate Director at the National Institute of Standards and Technology (NIST), USA, Cindy Parokkil, Head of Standards and Public Policy at the International Organization for Standardization (ISO), and Ladina Caduff, Director of Corporate Affairs at Microsoft Switzerland, to focus specifically on the international landscape of AI policy.

“AI is changing our world, but we have deep experience to build on standards that have helped us navigate revolution after revolution,” said Bilel Jamoussi.

The panel was moderated by Marta Ziosi from the University of Oxford and kicked off with an overview of U.S. standardization efforts for managing AI risks, provided by Elham Tabassi. These efforts encompass 12 unique risks of generative AI, such as data privacy risks and the misuse of chemical, biological, and nuclear information.

“[We’re trying to] provide a scientific underpinning for the conversations that provide robustness into conversations and input for the policymakers. But also building the technical building blocks needed for development of scientifically valid, clear implementable, rigorous standards,” said Elham Tabassi.

Next, Bilel Jamoussi highlighted ITU’s leading work in AI standardization. The creation of cohesive global AI standards faces the challenge of vastly varying levels of technological readiness and differing AI risks across countries and regions. As a multi-stakeholder initiative, the AI for Good platform is a strong starting point for overcoming this, bringing together ITU’s 194 member states, private sector entities, academia, and civil society to collaborate and share knowledge about AI applications for good.

One concrete example is ITU’s partnership with the World Health Organization (WHO) in tackling the global shortage of medical professionals and the unstandardized use of AI in diagnostics. The resulting AI for Health Focus Group (FG-AI4H) facilitates the exchange of AI advancements between countries across the globe.

Cindy Parokkil views standards as agile tools that can help businesses and governments govern AI, establish guardrails, and build trust. Because ISO standards are developed through a global network of 172 national standards bodies, the resulting voluntary, consensus-based international standards can be adopted at both the global and national levels.

Offering an industry perspective, Ladina Caduff from Microsoft Switzerland explained that the company’s first responsible AI standard emerged in 2019. Microsoft is currently evaluating how its responsible AI approach aligns with the new ISO standard.

Global standards through global inclusive collaboration

The panel touched on the ongoing actions of standardization institutions and industry to collaboratively create and implement standards that are flexible yet provide meaningful guidelines for AI governance.

However, challenges exist in ensuring coherence due to the multitude of existing and forthcoming standards. Greater collaboration, transparency, and interoperability, along with capacity building and dialogue between policymakers and standards development organizations, are necessary.

For example, one audience question addressed whether ISO standards offer too much leeway in implementation as some specifications do not necessarily comply with regulation.

“[I]f the standard is referenced by a regulator, [if you] demonstrate compliance to that standard, you’re complying with the regulation. The standard is developed through a multi-stakeholder consensus-building process, it is guidelines as they rightly note. But that’s the beauty of it. Because it helps organizations to take the standard and apply it in their context and see how best to navigate the risks,” explained Cindy Parokkil.

Bilel Jamoussi expanded on this, noting that there are different levels of standards: technical interoperability standards require detailed specifications so that products from different manufacturers work together, but in AI and other emerging fields, the first step is often to establish a glossary of terms. Once a common language is in place, guidelines and more detailed technical frameworks can be developed.

Most recently, the essential role of standardization for safe and responsible AI, as affirmed in the newly adopted Global Digital Compact, was brought forward at ITU’s Fifth Global Standards Symposium (GSS) in New Delhi. ITU is committed to ensuring that the development of standards is a collective, consensus-driven process.

Not only ITU but standardization bodies in general approach AI from a collaborative perspective deeply rooted in the principles of innovation, inclusiveness, and sustainability. This is evident in ITU’s collaboration with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) on the 2025 International AI Standards Summit.

To learn more about the AI Policy Summit, watch the panel below:
