AI for Good stories

Your Voice Matters 

By Karishma Muthukumar

In an era where AI holds the promise of transforming societies globally, it is crucial to ensure its development and deployment are guided by ethical principles and inclusive perspectives. Voices from around the world play a critical role in shaping our digital future. With this in mind, we launched a study in March to gather insights directly from people, helping us create a report that guides concrete, action-oriented steps toward the responsible development and implementation of AI. 

Over two months, this comprehensive study examined societal perspectives on AI, providing invaluable insights. Understanding how different communities perceive, interact with, and envision AI is crucial and decisions about its future should consider the diverse contexts of our global society. 

Our key findings are presented below, following a brief overview of the methodology.

Methodology

Through our anonymous quantitative research, we collected responses from over 325 participants across 64 countries. Participants were primarily recruited via the AI for Good mailing list and through calls posted on social media. The mailing list encompasses a diverse range of stakeholders, including government representatives, industry professionals, UN agencies, civil society members, international organizations, and academics. As members of the AI for Good community, most participants have some degree of interest or expertise in AI, while social media posts extended the initiative’s reach.

Of those who responded, 39% were women, and the sample reflected global age and gender distributions within a 5-10% margin.
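As a rough illustration of the kind of representativeness check described above, the sketch below compares a sample's demographic shares against reference distributions within a tolerance margin. The group names and reference figures are placeholders for illustration, not values from the study.

```python
# Illustrative sketch only: the reference shares below are placeholders,
# not figures from the AI for Good study.

def within_margin(sample_share: float, reference_share: float,
                  margin: float = 0.10) -> bool:
    """Return True if the sample share is within +/- margin (in
    percentage points, expressed as a fraction) of the reference."""
    return abs(sample_share - reference_share) <= margin

# Hypothetical shares: fraction of the sample vs. a reference population
sample = {"women": 0.39}
reference = {"women": 0.49}  # placeholder reference value

for group, share in sample.items():
    ok = within_margin(share, reference[group], margin=0.10)
    print(f"{group}: sample {share:.0%} vs reference {reference[group]:.0%} "
          f"-> within 10-point margin: {ok}")
```

A stricter margin (e.g. `margin=0.05`) would flag the same gap as out of tolerance, which is why the study reports a 5-10% range rather than a single threshold.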


Research Findings

Prior Awareness of AI Technologies 

Most know about AI.

Over 75% of participants were familiar with all 10 categories of AI applications provided, while approximately 90% were aware of at least five. Participants also frequently cited large language models (LLMs) for text and video generation, climate-focused AI, and personal assistants as familiar applications. This reflects a high level of awareness regarding AI technologies among respondents. In another global study conducted across 28 countries, about 67% of people reported having a good understanding of AI (Myers, 2022), reinforcing the observation that general awareness of AI is widespread.

Confidence in Using AI 

AI confidence is real.

At least seven out of ten participants expressed confidence in their use of AI, suggesting it is generally seen as a comfortable and straightforward tool. Confidence is a crucial factor in decision-making, as research shows that one's self-assurance influences whether they accept or reject AI outputs (Chong et al., 2022). Users often attribute errors to themselves, continuing to rely on imperfect AI systems despite their limitations. Poor AI performance may diminish both self-confidence and trust in AI, which can be quickly lost and slowly regained.

If a person rejects an AI output and performs well, they gain confidence in their ability while losing faith in AI. Conversely, if a person accepts an AI output and performs poorly, they may blame themselves for failing to recognize AI errors, diminishing confidence in both themselves and the technology. Further research should explore the connection between confidence, capability, and experience to understand where reported confidence originates.

Accountability for Ethical and Responsible AI Use

AI accountability should be shared. Companies and governments should be most responsible.

Most respondents believe companies and national governments should be held accountable for the ethical and responsible use of AI. However, many also feel that individuals, scientists, international organizations, and other stakeholders should share this responsibility. There was a recurring sentiment that all stakeholders involved in the AI life cycle should have a role in accountability, with some emphasizing that users must also contribute to responsible use.

The private sector, which leads AI development, must protect human rights (Lane, 2022). International human rights law could provide a framework to establish standards for emerging technologies, balancing the responsibilities among different stakeholders. With technological progress often outpacing legislation, new governance models are necessary. Key challenges such as transparency, human oversight, and data management must be addressed (Taeihagh, 2021). Collaborative and adaptive hybrid models could help build public consensus while aligning global governance strategies (Taeihagh, 2021).

Ranking Benefits of AI

Productivity is a key benefit to AI.

The productivity value of AI is widely recognized, with more than 75% of participants appreciating its ability to manage repetitive and time-consuming tasks. Additionally, over 50% acknowledge its role in promoting safety in dangerous environments, such as fires, deep oceans, and Mars. However, there is less consensus on AI’s impact on innovation, accuracy, and societal fairness. Despite varying views, 60% of respondents in a separate global study agreed that “products and services using artificial intelligence make my life easier” (Myers, 2022).

Ranking Risks of AI

AI’s core risks are bias, discrimination and lack of ethics and empathy.

More than half of respondents agree that bias and discrimination are significant risks posed by AI, and around 50% also recognize risks related to its lack of ethics, morals, emotions, and empathy. Respondents noted that AI development demands substantial time, talent, and resources, contributing to inequality and a widening digital divide. AI is also perceived to increase unemployment and replace humans in the workforce, while the automation of tasks may cause people to think less critically. Moreover, AI lacks inherent ethics, morals, emotions, and empathy, limiting its ability to address human needs appropriately.

Future of Society and AI Development

People are more comfortable than not about AI.

Most participants expressed more comfort than discomfort with the development of AI, with an average comfort level of 6.0 out of 10. However, further research is necessary to identify the sources of this comfort, including whether specific tools, platforms, people, or processes shape it. In another global study across 28 countries, 40% of respondents agreed that “products and services using artificial intelligence make me nervous” (Myers, 2022).

Factors That Could Increase Comfort with Emerging AI Technologies

Rules and regulations can help.

Enforcing clear rules and regulations is one of the primary ways people could become more comfortable with emerging AI technologies. Transparency and human-machine synergy are also crucial. Respondents suggested open-source models and AI systems that support, rather than overpower, users. This could involve requiring human interpretation or advancing hybrid systems. Global collaboration and multilateral agreements were emphasized to support ethical AI development. Participants also stressed the importance of foresight, anticipating potential negative outcomes and future applications of AI.

Meaning of AI to Participants

Perceptions of AI vary, highlighting both risk and opportunity.

Participants highlighted the dual nature of AI, considering it both an opportunity and a threat. This underscores the importance of regulation and governance. They viewed productivity as a major benefit and saw significant potential for solving complex human problems in new ways. However, there is also a recognition of the need to understand a rapidly evolving AI future. Participants emphasized the importance of addressing the misuse and missed use of AI technologies.

One participant said: “The continuous improvement of computers to act more similarly to humans and exceed our capabilities when beneficial, explainable, and responsible.”


Image created through key points raised by participants.

Perspectives on the Rate of AI Development

Most believe AI is moving too fast.

A majority of participants (53%) believe that AI is developing too quickly, with very few considering it to be progressing too slowly. These concerns likely reflect apprehension about the pace of recent technological advances and their societal implications.

Perspectives on AI Explainability

Explainable AI is necessary.

Nearly 76% of respondents believe that AI outputs and recommendations should be explained, revealing a desire for greater transparency. Most current AI systems lack explanations for their recommendations, presenting an opportunity to improve explainability.

Perspectives on AI Governance

Regulation can be application-specific.

A majority (58%) believe that AI regulation should be tailored to specific application areas. Comprehensive legislation should accommodate differences in various fields to address the unique challenges associated with each.

Perspectives on the Risk of AI Replacing Jobs

We need to upskill the workforce.

The majority (70%) believe that training and reskilling programs are necessary to adapt to the evolving labor market conditions brought on by AI. Historical trends in technological innovations have had a significant impact on jobs and the economy, reinforcing the need to proactively prepare for workforce shifts.

Perspectives on the Impact of AI on Humans

Most think AI is making us smarter.

Most participants (51%) believe that AI enhances human cognition and capacities. Despite this positive outlook, further research is needed to ensure that AI fulfills its potential while mitigating emerging challenges.

Outcomes

In this study, we explored global perceptions and attitudes toward AI development, usage, and governance. The data reveals widespread awareness of AI technologies, with most participants expressing confidence in their use. However, the consensus around shared accountability indicates a clear need for collaborative frameworks involving companies, governments, and individuals to ensure ethical and responsible AI deployment.

Productivity emerged as a significant benefit, yet participants expressed concerns about biases, discrimination, and the absence of ethics and empathy in AI systems. Despite these challenges, a majority remain optimistic, with an average comfort level of 6.0 out of 10. This optimism is tempered by calls for clear rules and regulations to foster greater transparency, explainability, and human-machine synergy.

Participants also believe that AI regulation should be tailored to specific application areas, enabling legislation to adapt to the unique challenges posed by different fields. The rapid pace of AI development has heightened concerns about its impact on employment, emphasizing the importance of upskilling programs to prepare the workforce for changing labor conditions.

This research further highlighted that while participants recognize AI's potential to enhance productivity and augment human cognition, they see effective governance as essential to addressing its risks. Balancing innovation with oversight will therefore be crucial as we shape an inclusive and ethical digital future for AI.
