AI for Good blog

Gender bias is a threat to future Artificial Intelligence (AI) applications: Opinion

Inclusivity

According to the latest World Economic Forum Global Gender Gap Report (2018), only 22 percent of Artificial Intelligence (AI) professionals globally are female, compared to 78 percent who are male. This amounts to a gender gap of 72 percent in the field that has yet to close.

This finding is not only alarming in itself, but also a stark reminder of the urgent action needed from all stakeholders to mitigate the threat of gender-biased outcomes in future AI applications.

Speaking during a session on "Design by diversity" at the recently concluded ITU Telecom World event in Budapest, Hungary, the Commonwealth Telecommunications Organisation (CTO) outlined the potential of AI as one of the key technologies enabling digital transformation in many countries, both within the Commonwealth and beyond.

Emerging technologies such as AI should serve as digital equalisers rather than be used to deepen the existing digital divide.

AI relies on algorithms that learn from real-world data, and there is concern that AI applications will inadvertently exacerbate existing gender biases.

Gartner predicts that by 2022, nearly 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing these projects.

The over-representation of men in the design and development of AI technologies risks undoing the advances made over the years towards gender equality at various levels of society, including the workplace.

As stated by the CTO, it is the data fuelling AI applications that carries gender bias, not the AI technology itself. It is therefore imperative to empower women throughout the tech industry, and specifically within the AI sector, as one way to ensure that gender biases are minimised or eliminated when the relevant data sets are generated.

Transparency and accountability for the data behind AI are critical to reducing bias, but very difficult to govern or enforce. There is a lengthy inventory of known biases that can be identified, documented and used to define parameters, clean data and verify that models function as intended.

While we are all responsible, in our own way, for eliminating gender biases in AI data, governments need to take the initiative to create relevant platforms or working groups to discuss AI issues, and develop AI frameworks or strategies relevant to the local context.

Governance, in terms of policy and regulation, of emerging technologies such as AI should be on the national agenda of every government, irrespective of the country's stage of AI development or deployment.

*The original version of this article first appeared on LinkedIn. Views expressed in this article do not necessarily reflect those of ITU.
