How ‘responsible AI’ can boost sustainable development
The potential value from artificial intelligence (AI) is enormous: some US$16 trillion by 2030, according to a PwC study. But what are the costs and the risks when AI is not done responsibly?
What does it really mean to do AI “responsibly”? Can you really do responsible AI without worrying about the social consequences? And how can the principles of responsible AI be applied more broadly toward achieving the Sustainable Development Goals (SDGs)?
AI is being applied in most industry sectors — ranging from agriculture to aerospace — and across functional areas, from strategy to support.
Increasingly, countries at varying stages of economic advancement are making plans to apply AI, too.
While this can increase companies’ profits and countries’ gross domestic product (GDP) in the short term, AI applied irresponsibly could deepen inequity within and among countries, accelerate the use and depletion of natural resources to fuel AI-led economic growth, further reduce biodiversity, worsen the treatment of other species, and harm the climate.
A ‘holistic discipline’
PwC’s Responsible AI is a holistic discipline: It is not just about what you build, but why and how you build it — as well as the long-term implications of the use of AI for your customers, staff, and society at large. It is not just about the technology itself. It is about the governance of AI, its impact on people, and the process of designing, building, and maintaining it.
The overarching principles that govern these dimensions are rooted in society’s ethics and values. The governance of AI and algorithms, especially decisions about the monetary value AI brings and the accompanying risks that need to be mitigated, rests with boards and executive management.
The process of designing, building, running, and maintaining AI should be embedded within the broader context of how a company operates. Beyond all of this, responsible AI is about how the AI models themselves are built: specifically, how issues such as fairness, transparency, interpretability, explainability, safety, security, ethics, values, and accountability are addressed.
Addressing the SDGs
Responsible AI addresses four of the United Nations’ 17 Sustainable Development Goals: gender equality (SDG 5), decent work and economic growth (SDG 8), industry, innovation and infrastructure (SDG 9), and reduced inequalities (SDG 10).
Primarily, responsible AI is concerned with fairness and equality across gender, race, and similar protected attributes.
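To make this concrete, the snippet below is a minimal sketch of one common fairness check: comparing a model’s positive-outcome rates across the groups defined by a protected attribute (often discussed as demographic parity or disparate impact). The function names, example data, and the widely cited 0.8 threshold are illustrative assumptions, not part of any specific PwC methodology.

# A minimal sketch of one common fairness check: comparing positive-outcome
# rates across groups defined by a protected attribute (demographic parity).
# Names, data, and thresholds here are illustrative, not from any real system.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 (for example, under the commonly cited 0.8
    threshold) suggest the model may disadvantage one group."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]                      # model decisions
    gender = ["f", "f", "m", "m", "f", "m", "f", "m", "m", "f"]  # protected attribute
    rates = selection_rates(preds, gender)
    print(rates)                          # {'f': 0.2, 'm': 0.8}
    print(disparate_impact_ratio(rates))  # 0.25, which would flag the model for review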
Acting responsibly in the corporate context may or may not, depending on a company’s purpose and stated vision, extend to the broader topics of human rights, the well-being of humanity and other species, and the protection and nurturing of our planet’s biodiversity and natural resources. In other words, responsible AI in the corporate context takes into account some of the people- and policy-related goals, while not always addressing those related to the planet and the human condition.
‘Fourth Social Revolution’?
As has been argued at the World Economic Forum, the Fourth Industrial Revolution should be accompanied by the Fourth Social Revolution.
Individuals, corporate entities, nations, and supra-national bodies should adopt metrics that are broader than revenues and profits.
Some or all of the objectives outlined in the SDGs should be part of a corporation’s socially responsible vision, plans, and metrics. For example, global corporations that rely heavily on air travel should commit to becoming carbon neutral; employees who travel regularly should be given data not just on the miles that they have flown, but also on the CO2 emissions created as a result of that travel.
Employees and corporations can then work together toward offsetting those emissions, for example through initiatives to plant more trees. Air-travel booking sites could feature ICAO’s Carbon Emissions Calculator and, based on the estimated CO2 emissions, link to an environmental organisation such as the Arbor Day Foundation or Carbonfund so that travellers can offset those emissions.
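As a rough illustration of the kind of per-trip figure a booking site could surface, the sketch below converts a flight distance into an estimated CO2 figure and a tree-planting equivalent. The emission factor and the per-tree absorption figure are simplified, assumed averages chosen for illustration; ICAO’s actual calculator uses a detailed fuel-burn methodology by aircraft type and route.

# A rough sketch of how a booking tool could show per-trip CO2 estimates.
# Both constants below are assumed, simplified averages for illustration only;
# they are not ICAO figures.

KG_CO2_PER_PASSENGER_KM = 0.115   # assumed economy-class average emission factor

def flight_co2_kg(distance_km: float, passengers: int = 1) -> float:
    """Estimated CO2 emissions for a flight, in kilograms."""
    return distance_km * passengers * KG_CO2_PER_PASSENGER_KM

def trees_to_offset(co2_kg: float, kg_co2_per_tree_year: float = 21.0) -> int:
    """Very rough number of tree-years needed to absorb the estimated CO2.
    The per-tree figure is an oft-quoted approximation that varies widely
    with species, climate, and tree age."""
    return max(1, round(co2_kg / kg_co2_per_tree_year))

if __name__ == "__main__":
    co2 = flight_co2_kg(distance_km=5_500)   # e.g. one long-haul leg
    print(f"Estimated emissions: {co2:.0f} kg CO2")
    print(f"Roughly {trees_to_offset(co2)} tree-years to offset")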
As outlined in Harnessing Artificial Intelligence for the Earth, a World Economic Forum report developed in partnership with PwC and the Stanford Woods Institute for the Environment, AI can play a pivotal role in addressing six key areas: climate change, biodiversity and conservation, healthy oceans, water security, clean air, and weather and disaster resilience. These AI use cases should not be treated as isolated programs that address the effects of economic development; they should instead be approached holistically, to get at the root causes affecting the planet, human rights, and human well-being.
Organizations that claim to apply AI in a socially responsible manner should not only incorporate attributes such as fairness, accountability, safety, and transparency, but also take into account additional factors, such as AI’s impact on jobs, the human condition, biodiversity, energy, and climate. These additional criteria will vary depending on the products and services a given company offers, their impact on the environment, and the AI algorithms the company uses to create them.
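One way to make such criteria measurable is a back-of-the-envelope estimate of the carbon footprint of training an AI model, combining energy use with grid carbon intensity. Every figure in the sketch below is an assumed, illustrative input rather than a measurement of any real system or an established reporting standard.

# A back-of-the-envelope estimate of the CO2 footprint of a model-training run:
# energy consumed (GPU power x hours x data-centre overhead) multiplied by the
# carbon intensity of the electricity grid. All defaults are assumed values.

def training_co2_kg(gpu_count: int,
                    hours: float,
                    gpu_power_kw: float = 0.3,        # assumed average draw per GPU, kW
                    pue: float = 1.5,                 # assumed data-centre overhead (PUE)
                    grid_kg_co2_per_kwh: float = 0.4  # assumed grid carbon intensity
                    ) -> float:
    """Estimated CO2 emissions (kg) attributable to a training run."""
    energy_kwh = gpu_count * hours * gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

if __name__ == "__main__":
    # Example: 8 GPUs training for one week
    print(f"{training_co2_kg(gpu_count=8, hours=24 * 7):.0f} kg CO2")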
PwC refers to the PwC network and/or one or more of its member firms, each of which is a separate legal entity. Please see www.pwc.com/structure for further details. Views expressed in this article do not necessarily represent the views of ITU.